Sample records for canonical ensemble method

  1. Grand canonical ensemble Monte Carlo simulation of the dCpG/proflavine crystal hydrate.

    PubMed

    Resat, H; Mezei, M

    1996-09-01

    The grand canonical ensemble Monte Carlo molecular simulation method is used to investigate hydration patterns in the crystal hydrate structure of the dCpG/proflavine intercalated complex. The objective of this study is to show by example that the recently advocated grand canonical ensemble simulation is a computationally efficient method for determining the positions of the hydrating water molecules in protein and nucleic acid structures. A detailed molecular simulation convergence analysis and an analogous comparison of the theoretical results with experiments clearly show that the grand ensemble simulations can be far more advantageous than the comparable canonical ensemble simulations.
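
The insertion/deletion move set at the heart of such grand canonical Monte Carlo simulations can be sketched for the simplest possible case, an ideal gas, where the Metropolis acceptance ratios reduce to a/(N+1) and N/a for a single activity-volume parameter a. This toy sketch is illustrative only (not the authors' code; no water model or interactions), but it has a known exact answer, ⟨N⟩ = a:

```python
import random

def gcmc_ideal_gas(activity, steps, seed=1):
    """Grand canonical MC for an ideal gas.  With no interactions the
    Metropolis acceptance ratios reduce to a/(N+1) for insertion and N/a
    for deletion, where a = exp(beta*mu) * V / Lambda^3 lumps the activity
    and volume into one number; the exact answer is <N> = a (Poisson)."""
    rng = random.Random(seed)
    N, total = 0, 0
    for _ in range(steps):
        if rng.random() < 0.5:                       # attempt insertion
            if rng.random() < min(1.0, activity / (N + 1)):
                N += 1
        elif N > 0:                                  # attempt deletion
            if rng.random() < min(1.0, N / activity):
                N -= 1
        total += N
    return total / steps
```

In a real simulation the acceptance ratios carry an extra Boltzmann factor exp(-β ΔU) for the interaction energy of the inserted or deleted molecule; the move structure is unchanged.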

  2. Grand canonical ensemble Monte Carlo simulation of the dCpG/proflavine crystal hydrate.

    PubMed Central

    Resat, H; Mezei, M

    1996-01-01

    The grand canonical ensemble Monte Carlo molecular simulation method is used to investigate hydration patterns in the crystal hydrate structure of the dCpG/proflavine intercalated complex. The objective of this study is to show by example that the recently advocated grand canonical ensemble simulation is a computationally efficient method for determining the positions of the hydrating water molecules in protein and nucleic acid structures. A detailed molecular simulation convergence analysis and an analogous comparison of the theoretical results with experiments clearly show that the grand ensemble simulations can be far more advantageous than the comparable canonical ensemble simulations. Images FIGURE 5 FIGURE 7 PMID:8873992

  3. Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Gilbreth, C. N.; Alhassid, Y.

    2015-03-01

    Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.

  4. Force-momentum-based self-guided Langevin dynamics: A rapid sampling method that approaches the canonical ensemble

    NASA Astrophysics Data System (ADS)

    Wu, Xiongwu; Brooks, Bernard R.

    2011-11-01

Self-guided Langevin dynamics (SGLD) is a method for accelerating conformational searching. This method is unique in that it selectively enhances and suppresses molecular motions based on their frequency to accelerate conformational searching without modifying energy surfaces or raising temperatures. It has been applied to studies of many long time scale events, such as protein folding. Recent progress in the understanding of the conformational distribution in SGLD simulations also makes SGLD an accurate method for quantitative studies. The SGLD partition function provides a way to convert the SGLD conformational distribution to the canonical ensemble distribution and to calculate ensemble average properties through reweighting. Based on the SGLD partition function, this work presents a force-momentum-based self-guided Langevin dynamics (SGLDfp) simulation method to directly sample the canonical ensemble. This method includes interaction forces in its guiding force to compensate for the perturbation caused by the momentum-based guiding force so that it can approximately sample the canonical ensemble. Using several example systems, we demonstrate that SGLDfp simulations can approximately maintain the canonical ensemble distribution and significantly accelerate conformational searching. With optimal parameters, SGLDfp and SGLD simulations can cross energy barriers of more than 15 kT and 20 kT, respectively, at rates similar to those at which LD simulations cross energy barriers of 10 kT. The SGLDfp method is size extensive and works well for large systems. For studies where preserving accessible conformational space is critical, such as free energy calculations and protein folding studies, SGLDfp is an efficient approach to search and sample the conformational space.
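
The momentum-averaging idea behind SGLD can be sketched in one dimension. The guiding force λγp̄ below, with p̄ an exponential moving average of the momentum over a time scale t_avg, follows the general SGLD construction; the force-based compensation term that distinguishes SGLDfp is omitted, and all parameter values are illustrative:

```python
import math, random

def run_sgld(nsteps, lam=0.3, dt=0.01, gamma=1.0, kT=1.0, tavg=0.2,
             seed=0, force=lambda x: -x):
    """1D self-guided Langevin dynamics (mass = 1).  The guiding force
    lam*gamma*pavg boosts motion along pavg, an exponential moving average
    of the momentum over time scale tavg; lam = 0 recovers plain Langevin
    dynamics.  Parameter values here are illustrative only."""
    rng = random.Random(seed)
    x, p, pavg = 0.0, 0.0, 0.0
    c = math.exp(-gamma * dt)
    sigma = math.sqrt((1.0 - c * c) * kT)
    xs = []
    for _ in range(nsteps):
        p += 0.5 * dt * (force(x) + lam * gamma * pavg)   # guided force
        x += 0.5 * dt * p
        p = c * p + sigma * rng.gauss(0.0, 1.0)           # Ornstein-Uhlenbeck substep
        x += 0.5 * dt * p
        p += 0.5 * dt * (force(x) + lam * gamma * pavg)
        pavg += (dt / tavg) * (p - pavg)                  # update momentum average
        xs.append(x)
    return xs
```

With lam = 0 the integrator samples the canonical distribution of the harmonic well (⟨x²⟩ = kT/k); with lam > 0 the guiding force perturbs the ensemble, which is precisely what the SGLDfp compensation is designed to remove.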

  5. Petit and grand ensemble Monte Carlo calculations of the thermodynamics of the lattice gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murch, G.E.; Thorn, R.J.

    1978-11-01

A direct Monte Carlo method for estimating the chemical potential in the petit canonical ensemble was applied to the simple cubic Ising-like lattice gas. The method is based on a simple relationship between the chemical potential and the potential energy distribution in a lattice gas at equilibrium as derived independently by Widom, and Jackson and Klein. Results are presented here for the chemical potential at various compositions and temperatures above and below the zero field ferromagnetic and antiferromagnetic critical points. The same lattice gas model was reconstructed in the form of a restricted grand canonical ensemble and results at several temperatures were compared with those from the petit canonical ensemble. The agreement was excellent in these cases.
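
The Widom-type relation being used, exp(-βμ) = Z_(N+1)/Z_N estimated from test insertions at equilibrium, can be sketched for a small 2D lattice gas (the paper's system is 3D; the lattice size, density, and interaction here are illustrative). For ε = 0 the estimate can be checked against the exact βμ = ln((N+1)/(M-N)):

```python
import math, random

def widom_beta_mu(L=10, N=30, eps=0.0, beta=1.0, sweeps=200, seed=2):
    """Estimate beta*mu for a lattice gas with N particles on an L x L
    periodic lattice with nearest-neighbour pair energy eps, via Widom
    test insertions: Z_{N+1}/Z_N = (M/(N+1)) * <(1 - n_i) exp(-beta dU_i)>."""
    M = L * L
    rng = random.Random(seed)
    occ = [1] * N + [0] * (M - N)
    rng.shuffle(occ)
    def nb(i):
        x, y = i % L, i // L
        return (((x + 1) % L) + y * L, ((x - 1) % L) + y * L,
                x + ((y + 1) % L) * L, x + ((y - 1) % L) * L)
    wsum, nsamp = 0.0, 0
    for _ in range(sweeps):
        for _ in range(M):                       # canonical particle-hop sweep
            a, b = rng.randrange(M), rng.randrange(M)
            if occ[a] == occ[b]:
                continue
            if occ[b]:                           # make a the occupied site
                a, b = b, a
            na = sum(occ[k] for k in nb(a))
            nbb = sum(occ[k] for k in nb(b)) - (1 if a in nb(b) else 0)
            dU = eps * (nbb - na)
            if dU <= 0 or rng.random() < math.exp(-beta * dU):
                occ[a], occ[b] = 0, 1
        for _ in range(M):                       # Widom test insertions
            i = rng.randrange(M)
            if occ[i] == 0:
                wsum += math.exp(-beta * eps * sum(occ[k] for k in nb(i)))
            nsamp += 1
    return -math.log((M / (N + 1)) * wsum / nsamp)
```

For the non-interacting case this reproduces the lattice-gas ideal term ln(ρ/(1-ρ)) up to sampling noise; with eps nonzero the same estimator yields the full chemical potential.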

  6. Generalized canonical ensembles and ensemble equivalence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costeniuc, M.; Ellis, R.S.; Turkington, B.

    2006-02-15

This paper is a companion piece to our previous work [J. Stat. Phys. 119, 1283 (2005)], which introduced a generalized canonical ensemble obtained by multiplying the usual Boltzmann weight factor e^(-βH) of the canonical ensemble with an exponential factor involving a continuous function g of the Hamiltonian H. We provide here a simplified introduction to our previous work, focusing now on a number of physical rather than mathematical aspects of the generalized canonical ensemble. The main result discussed is that, for suitable choices of g, the generalized canonical ensemble reproduces, in the thermodynamic limit, all the microcanonical equilibrium properties of the many-body system represented by H even if this system has a nonconcave microcanonical entropy function. This is something that in general the standard (g=0) canonical ensemble cannot achieve. Thus a virtue of the generalized canonical ensemble is that it can often be made equivalent to the microcanonical ensemble in cases in which the canonical ensemble cannot. The case of quadratic g functions is discussed in detail; it leads to the so-called Gaussian ensemble.
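
The main claim can be checked numerically for a toy density of states exp(n·s(E)) with a deliberately nonconcave s(E): the canonical average energy (g = 0) skips the nonconcave energy range as β is swept, while a Gaussian-ensemble weight with a quadratic term γE² large enough to make s(E) - γE² concave reaches every mean energy. The entropy function and all parameters below are invented for illustration:

```python
import numpy as np

def mean_energy(beta, gamma=0.0, n=400):
    """<E> under the generalized weight exp(-n*(beta*E + gamma*E**2)) for a
    toy density of states exp(n*s(E)); s is deliberately nonconcave."""
    E = np.linspace(0.0, 1.0, 2001)
    s = E * (1.0 - E) + 0.1 * np.cos(4.0 * np.pi * E)   # nonconcave toy entropy
    expo = n * (s - beta * E - gamma * E ** 2)
    w = np.exp(expo - expo.max())            # shift exponent to avoid overflow
    return float((E * w).sum() / w.sum())

def largest_gap(values):
    v = sorted(values)
    return max(b - a for a, b in zip(v, v[1:]))

# Canonical ensemble (gamma = 0): <E> jumps over the nonconcave range of s.
canonical = [mean_energy(b) for b in np.arange(-1.0, 3.0, 0.02)]
# Gaussian ensemble (gamma = 10 makes s(E) - gamma*E**2 concave): no jump.
gaussian = [mean_energy(b, gamma=10.0) for b in np.arange(-21.0, 1.0, 0.1)]
```

The largest gap between attained canonical mean energies stays finite as the β grid is refined, while the Gaussian-ensemble gap shrinks with the grid spacing.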

  7. Simulation of gas adsorption on a surface and in slit pores with grand canonical and canonical kinetic Monte Carlo methods.

    PubMed

    Ustinov, E A; Do, D D

    2012-08-21

We present for the first time in the literature a new kinetic Monte Carlo scheme applied in the grand canonical ensemble, which we call hereafter GC-kMC. It was shown recently that the kinetic Monte Carlo (kMC) scheme is a very effective tool for the analysis of equilibrium systems. It has previously been applied in the canonical ensemble to describe vapor-liquid equilibrium of argon over a wide range of temperatures, and gas adsorption on a graphite open surface and in graphitic slit pores. However, despite the equivalence of the canonical and grand canonical ensembles, the latter is more appropriate for the correct description of open systems; for example, the hysteresis loop observed in adsorption of gases in pores under sub-critical conditions can only be described with a grand canonical ensemble. Therefore, the present paper is aimed at an extension of the kMC to open systems. The developed GC-kMC is shown to be consistent with the results obtained with the canonical kMC (C-kMC) for argon adsorption on a graphite surface at 77 K and in graphitic slit pores at 87.3 K. We showed that in slit micropores the hexagonal packing in the layers adjacent to the pore walls is observed at high loadings even at temperatures above the triple point of the bulk phase. The potential and applicability of the GC-kMC are further demonstrated by the correct description of the heat of adsorption and the pressure tensor of the adsorbed phase.

  8. Efficient and Unbiased Sampling of Biomolecular Systems in the Canonical Ensemble: A Review of Self-Guided Langevin Dynamics

    PubMed Central

    Wu, Xiongwu; Damjanovic, Ana; Brooks, Bernard R.

    2013-01-01

    This review provides a comprehensive description of the self-guided Langevin dynamics (SGLD) and the self-guided molecular dynamics (SGMD) methods and their applications. Example systems are included to provide guidance on optimal application of these methods in simulation studies. SGMD/SGLD has enhanced ability to overcome energy barriers and accelerate rare events to affordable time scales. It has been demonstrated that with moderate parameters, SGLD can routinely cross energy barriers of 20 kT at a rate that molecular dynamics (MD) or Langevin dynamics (LD) crosses 10 kT barriers. The core of these methods is the use of local averages of forces and momenta in a direct manner that can preserve the canonical ensemble. The use of such local averages results in methods where low frequency motion “borrows” energy from high frequency degrees of freedom when a barrier is approached and then returns that excess energy after a barrier is crossed. This self-guiding effect also results in an accelerated diffusion to enhance conformational sampling efficiency. The resulting ensemble with SGLD deviates in a small way from the canonical ensemble, and that deviation can be corrected with either an on-the-fly or a post processing reweighting procedure that provides an excellent canonical ensemble for systems with a limited number of accelerated degrees of freedom. Since reweighting procedures are generally not size extensive, a newer method, SGLDfp, uses local averages of both momenta and forces to preserve the ensemble without reweighting. The SGLDfp approach is size extensive and can be used to accelerate low frequency motion in large systems, or in systems with explicit solvent where solvent diffusion is also to be enhanced. Since these methods are direct and straightforward, they can be used in conjunction with many other sampling methods or free energy methods by simply replacing the integration of degrees of freedom that are normally sampled by MD or LD. PMID:23913991

  9. Canonical-ensemble state-averaged complete active space self-consistent field (SA-CASSCF) strategy for problems with more diabatic than adiabatic states: Charge-bond resonance in monomethine cyanines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, Seth, E-mail: seth.olsen@uq.edu.au

    2015-01-28

This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed (“microcanonical”) SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with “more diabatic than adiabatic” states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse “temperature,” unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint.
The method is illustrated with a complete active space valence-bond (CASVB) analysis of the charge/bond resonance electronic structure of a monomethine cyanine: Michler’s hydrol blue. The diabatic CASVB representation is shown to vary weakly for “temperatures” corresponding to visible photon energies. Canonical-ensemble SA-CASSCF enables the resolution of energies and couplings for all covalent and ionic CASVB structures contributing to the SA-CASSCF ensemble. The CASVB solution describes resonance of charge- and bond-localized electronic structures interacting via bridge resonance superexchange. The resonance couplings can be separated into channels associated with either covalent charge delocalization or chemical bonding interactions, with the latter significantly stronger than the former.

  10. Canonical-ensemble state-averaged complete active space self-consistent field (SA-CASSCF) strategy for problems with more diabatic than adiabatic states: charge-bond resonance in monomethine cyanines.

    PubMed

    Olsen, Seth

    2015-01-28

    This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed ("microcanonical") SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with "more diabatic than adiabatic" states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse "temperature," unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint. 
The method is illustrated with a complete active space valence-bond (CASVB) analysis of the charge/bond resonance electronic structure of a monomethine cyanine: Michler's hydrol blue. The diabatic CASVB representation is shown to vary weakly for "temperatures" corresponding to visible photon energies. Canonical-ensemble SA-CASSCF enables the resolution of energies and couplings for all covalent and ionic CASVB structures contributing to the SA-CASSCF ensemble. The CASVB solution describes resonance of charge- and bond-localized electronic structures interacting via bridge resonance superexchange. The resonance couplings can be separated into channels associated with either covalent charge delocalization or chemical bonding interactions, with the latter significantly stronger than the former.

  11. Reciprocity in directed networks

    NASA Astrophysics Data System (ADS)

    Yin, Mei; Zhu, Lingjiong

    2016-04-01

Reciprocity is an important characteristic of directed networks and has been widely used in the modeling of the World Wide Web, email, social, and other complex networks. In this paper, we take a statistical physics point of view and study the limiting entropy and free energy densities from the microcanonical ensemble, the canonical ensemble, and the grand canonical ensemble whose sufficient statistics are given by edge and reciprocal densities. The sparse case is also studied for the grand canonical ensemble. Extensions to more general reciprocal models including reciprocal triangle and star densities will likewise be discussed.
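
The edge and reciprocal densities that serve as sufficient statistics are straightforward to compute for a concrete digraph; a minimal illustrative helper:

```python
def reciprocity(edges):
    """Fraction of directed edges (i, j), i != j, whose reverse (j, i) is
    also present -- the reciprocal density relative to the edge density."""
    es = {(i, j) for i, j in edges if i != j}
    if not es:
        return 0.0
    return sum((j, i) in es for i, j in es) / len(es)
```

For the three-edge digraph {1→2, 2→1, 2→3}, two of the three edges are reciprocated, so the reciprocity is 2/3.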

  12. A New Ensemble Canonical Correlation Prediction Scheme for Seasonal Precipitation

    NASA Technical Reports Server (NTRS)

    Kim, Kyu-Myong; Lau, William K. M.; Li, Guilong; Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)

    2001-01-01

This paper describes the fundamental theory of the ensemble canonical correlation (ECC) algorithm for seasonal climate forecasting. The algorithm is a statistical regression scheme based on maximal correlation between the predictor and predictand. The prediction error is estimated by a spectral method using the basis of empirical orthogonal functions. The ECC algorithm treats the predictors and predictands as continuous fields and is an improvement on traditional canonical correlation prediction. The improvements include the use of an area factor, estimation of the prediction error, and the optimal ensemble of multiple forecasts. The ECC is applied to seasonal forecasting over various parts of the world. The example presented here is for North American precipitation. The predictor is the sea surface temperature (SST) from different ocean basins. The Climate Prediction Center's reconstructed SST (1951-1999) is used as the predictor's historical data. The optimally interpolated global monthly precipitation is used as the predictand's historical data. Our forecast experiments show that the ECC algorithm yields high skill, and that the optimal ensemble of multiple forecasts is essential to achieving it.

  13. Geometric integrator for simulations in the canonical ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tapias, Diego, E-mail: diego.tapias@nucleares.unam.mx; Sanders, David P., E-mail: dpsanders@ciencias.unam.mx; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139

    2016-08-28

We introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble that preserves the invariant distribution in equations arising from the density dynamics algorithm, with any possible type of thermostat. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of a system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results. Moreover, to show the advantage of our geometric integrator over a non-geometric one, we compare the results with those obtained by using the non-geometric Gear integrator, which is frequently used to perform simulations in the canonical ensemble. The non-geometric integrator induces a drift in the invariant quantity, while our integrator has no such drift, thus ensuring that the system is effectively sampling the correct ensemble.
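
As a concrete point of comparison, a standard time-reversible, second-order Nosé-Hoover splitting (one common thermostat of the kind such unified frameworks cover; this is not the authors' density-dynamics integrator) conserves its extended invariant with bounded fluctuations and no drift, which is exactly the property contrasted above with the Gear integrator:

```python
import math

def nose_hoover_drift(nsteps, dt=0.005, kT=1.0, Q=1.0):
    """Time-reversible Nose-Hoover integrator for a 1D harmonic oscillator
    (m = k = 1).  Returns the maximum deviation of the extended invariant
    E = p**2/2 + q**2/2 + Q*xi**2/2 + kT*eta, conserved by the exact flow."""
    q, p, xi, eta = 1.0, 0.0, 0.0, 0.0
    def ext_energy():
        return 0.5 * p * p + 0.5 * q * q + 0.5 * Q * xi * xi + kT * eta
    e0, maxdev = ext_energy(), 0.0
    for _ in range(nsteps):
        # half thermostat step (symmetric sub-splitting)
        xi += 0.25 * dt * (p * p - kT) / Q
        p *= math.exp(-0.5 * dt * xi)
        eta += 0.5 * dt * xi
        xi += 0.25 * dt * (p * p - kT) / Q
        # velocity Verlet for the oscillator
        p -= 0.5 * dt * q
        q += dt * p
        p -= 0.5 * dt * q
        # half thermostat step (mirror-image ordering, for reversibility)
        xi += 0.25 * dt * (p * p - kT) / Q
        eta += 0.5 * dt * xi
        p *= math.exp(-0.5 * dt * xi)
        xi += 0.25 * dt * (p * p - kT) / Q
        maxdev = max(maxdev, abs(ext_energy() - e0))
    return maxdev
```

Over tens of thousands of steps the invariant deviation stays at the O(dt²) level; a drifting integrator would instead show a deviation growing linearly with the number of steps.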

  14. Critical behaviors and phase transitions of black holes in higher order gravities and extended phase spaces

    NASA Astrophysics Data System (ADS)

    Sherkatghanad, Zeinab; Mirza, Behrouz; Mirzaiyan, Zahra; Mansoori, Seyed Ali Hosseini

We consider the critical behaviors and phase transitions of Gauss-Bonnet-Born-Infeld-AdS black holes (GB-BI-AdS) for d = 5, 6 and the extended phase space. We assume the cosmological constant, Λ, the coupling coefficient α, and the BI parameter β to be thermodynamic pressures of the system. With these assumptions, the critical behaviors are then studied in both the canonical and grand canonical ensembles. We find “reentrant and triple point phase transitions” (RPT-TP) and “multiple reentrant phase transitions” (multiple RPT) with increasing pressure of the system for specific values of the coupling coefficient α in the canonical ensemble. Also, we observe a reentrant phase transition (RPT) of GB-BI-AdS black holes in the grand canonical ensemble and for d = 6. These calculations are then expanded to the critical behavior of Born-Infeld-AdS (BI-AdS) black holes in the third order of Lovelock gravity and in the grand canonical ensemble to find a van der Waals (vdW) behavior for d = 7 and a RPT for d = 8 for specific values of the potential ϕ in the grand canonical ensemble. Furthermore, we obtain a similar behavior for the limit of β →∞, i.e. charged-AdS black holes in the third order of Lovelock gravity. Thus, it is shown that the critical behaviors of these black holes are independent of the parameter β in the grand canonical ensemble.

15. Momentum distribution functions in ensembles: the inequivalence of microcanonical and canonical ensembles in a finite ultracold system.

    PubMed

    Wang, Pei; Xianlong, Gao; Li, Haibin

    2013-08-01

It is demonstrated in many thermodynamic textbooks that the equivalence of the different ensembles is achieved in the thermodynamic limit. In the present work we discuss the inequivalence of microcanonical and canonical ensembles in a finite ultracold system at low energies. We calculate the microcanonical momentum distribution function (MDF) in a system of identical fermions (bosons). We find that the microcanonical MDF deviates from the canonical one, which is the Fermi-Dirac (Bose-Einstein) function, in a finite system at low energies where the single-particle density of states and its inverse are finite.
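
For a handful of fermions, the microcanonical occupation distribution can be obtained by brute-force enumeration of Fock states at fixed total energy, which makes the deviation from the smooth Fermi-Dirac form easy to see. A minimal sketch with equally spaced levels ε_k = k (illustrative, not the paper's system):

```python
from itertools import combinations

def microcanonical_mdf(M, N, E):
    """Average occupation numbers for N identical fermions on single-particle
    levels eps_k = k (k = 0..M-1), averaged uniformly over all Fock states
    whose total energy is exactly E (the microcanonical ensemble)."""
    occ = [0] * M
    count = 0
    for levels in combinations(range(M), N):
        if sum(levels) == E:
            count += 1
            for k in levels:
                occ[k] += 1
    return [o / count for o in occ], count
```

For M = 10, N = 3, E = 6 only three Fock states qualify, namely {0,1,5}, {0,2,4}, and {1,2,3}, and the resulting step-like occupations (2/3 on the three lowest levels, 1/3 on the next three, 0 above) differ visibly from any smooth Fermi-Dirac curve.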

  16. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundararaman, Ravishankar; Goddard, III, William A.; Arias, Tomas A.

First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Lastly, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

  17. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry.

    PubMed

    Sundararaman, Ravishankar; Goddard, William A; Arias, Tomas A

    2017-03-21

    First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Finally, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

  18. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry

    DOE PAGES

    Sundararaman, Ravishankar; Goddard, III, William A.; Arias, Tomas A.

    2017-03-16

First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Lastly, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

  19. A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong

    2001-01-01

    This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is made also using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.
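
The CCA core of such a scheme reduces to the singular values of the whitened cross-covariance matrix; a minimal numpy sketch (the area-factor weighting, EOF truncation, and optimal ensemble weighting described above are omitted):

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-8):
    """Canonical correlations between data sets X (n x p) and Y (n x q):
    the singular values of Sxx^(-1/2) Sxy Syy^(-1/2), computed from the
    sample covariances of the column-centred data.  A small ridge term
    keeps the covariance inverses well defined."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T
    return np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy),
                         compute_uv=False)
```

When one column of X and one column of Y share a common signal, the leading canonical correlation is close to 1 while the remaining correlations stay near the noise level.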

  20. Nonuniform fluids in the grand canonical ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percus, J.K.

    1982-01-01

Nonuniform simple classical fluids are considered quite generally. The grand canonical ensemble is particularly suitable, conceptually, in the leading approximation of local thermodynamics, which figuratively divides the system into approximately uniform spatial subsystems. The procedure is reviewed by which this approach is systematically corrected for slowly varying density profiles, and a model is suggested that carries the correction into the domain of local fluctuations. The latter is assessed for substrate bounded fluids, as well as for two-phase interfaces. The peculiarities of the grand ensemble in a two-phase region stem from the inherent very large number fluctuations. A primitive model shows how these are quenched in the canonical ensemble. This is taken advantage of by applying the Kac-Siegert representation of the van der Waals decomposition with petit canonical corrections, to the two-phase regime.

  1. Thermodynamic-ensemble independence of solvation free energy.

    PubMed

    Chong, Song-Ho; Ham, Sihyun

    2015-02-10

    Solvation free energy is the fundamental thermodynamic quantity in solution chemistry. Recently, it has been suggested that the partial molar volume correction is necessary to convert the solvation free energy determined in different thermodynamic ensembles. Here, we demonstrate ensemble-independence of the solvation free energy on general thermodynamic grounds. Theoretical estimates of the solvation free energy based on the canonical or grand-canonical ensemble are pertinent to experiments carried out under constant pressure without any conversion.

  2. Canonical ensemble ground state and correlation entropy of Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Svidzinsky, Anatoly; Kim, Moochan; Agarwal, Girish; Scully, Marlan O.

    2018-01-01

    Constraint of a fixed total number of particles yields a correlation between the fluctuation of particles in different states in the canonical ensemble. Here we show that, below the temperature of Bose-Einstein condensation (BEC), the correlation part of the entropy of an ideal Bose gas is cancelled by the ground-state contribution. Thus, in the BEC region, the thermodynamic properties of the gas in the canonical ensemble can be described accurately in a simplified model which excludes the ground state and assumes no correlation between excited levels.
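
Canonical-ensemble properties of an ideal Bose gas at fixed N, including the ground-state occupation discussed here, can be computed exactly with the standard recursion Z_N = (1/N) Σ_k z₁(kβ) Z_(N-k); the harmonic-trap spectrum ε_j = j below is an illustrative choice, not necessarily the authors' system:

```python
import math

def bose_ground_occupation(N, beta, jmax=400):
    """Exact canonical <n_0> for N ideal bosons on levels eps_j = j
    (1D harmonic trap, hbar*omega = k_B = 1), using the recursion
    Z_N = (1/N) * sum_k z1(k*beta) * Z_{N-k} together with
    P(n_0 >= k) = exp(-k*beta*eps_0) * Z_{N-k} / Z_N and eps_0 = 0."""
    z1 = [0.0] + [sum(math.exp(-k * beta * j) for j in range(jmax + 1))
                  for k in range(1, N + 1)]          # z1(k*beta), k = 1..N
    Z = [1.0]                                        # Z_0 = 1
    for n in range(1, N + 1):
        Z.append(sum(z1[k] * Z[n - k] for k in range(1, n + 1)) / n)
    return sum(Z[N - k] / Z[N] for k in range(1, N + 1))
```

At low temperature (large β) nearly all N particles sit in the ground state; as T rises the ground-state occupation falls, and fixed-N correlations between level occupations are built in exactly.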

  3. Generalized ensemble theory with non-extensive statistics

    NASA Astrophysics Data System (ADS)

    Shen, Ke-Ming; Zhang, Ben-Wei; Wang, En-Ke

    2017-12-01

    The non-extensive canonical ensemble theory is reconsidered with the method of Lagrange multipliers by maximizing the Tsallis entropy, with the constraint that the normalization term of Tsallis' q-average of physical quantities, the sum ∑_j p_j^q, is independent of the probability p_i for Tsallis parameter q. The self-referential problem in the deduced probability and thermal quantities in non-extensive statistics is thus avoided, and thermodynamical relationships are obtained in a consistent and natural way. We also extend the study to the non-extensive grand canonical ensemble theory and obtain the q-deformed Bose-Einstein distribution as well as the q-deformed Fermi-Dirac distribution. The theory is further applied to the generalized Planck law to demonstrate the distinct behaviors of the various generalized q-distribution functions discussed in the literature.
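
    The q-deformed distributions above can be sketched numerically. A minimal illustration, assuming the common convention exp_q(x) = [1 + (1-q)x]^{1/(1-q)} for the Tsallis q-exponential and the replacement of exp by exp_q in the Bose-Einstein occupation (conventions for the q-deformed distributions vary in the literature):

    ```python
    import math

    def exp_q(x, q):
        """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
        if abs(q - 1.0) < 1e-12:
            return math.exp(x)
        base = 1.0 + (1.0 - q) * x
        return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

    def bose_einstein_q(eps, beta=1.0, mu=0.0, q=1.0):
        """q-deformed Bose-Einstein occupation (one convention from the
        literature): the exponential is replaced by the q-exponential."""
        return 1.0 / (exp_q(beta * (eps - mu), q) - 1.0)

    n_std = 1.0 / (math.exp(1.5) - 1.0)    # standard Bose-Einstein
    n_q = bose_einstein_q(1.5, q=1.0001)   # q close to 1
    print(n_std, n_q)                      # nearly identical
    ```

    As q → 1 the q-exponential reduces to the ordinary exponential, so the standard Bose-Einstein distribution is recovered.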

  4. Statistical mechanics of few-particle systems: exact results for two useful models

    NASA Astrophysics Data System (ADS)

    Miranda, Enrique N.

    2017-11-01

    The statistical mechanics of small clusters (n ≈ 10-50 elements) of harmonic oscillators and two-level systems is studied exactly, following the microcanonical, canonical, and grand canonical formalisms. For clusters with several hundred particles, the results from the three formalisms coincide with those found in the thermodynamic limit. However, for clusters formed by a few tens of elements, the three ensembles yield different results. For a cluster with a few tens of harmonic oscillators, when the heat capacity per oscillator is evaluated within the canonical formalism, it reaches a limit value equal to k_B, as in the thermodynamic case, while within the microcanonical formalism the limit value is k_B(1 - 1/n). This difference could be measured experimentally. For a cluster with a few tens of two-level systems, the heat capacity evaluated within the canonical and microcanonical ensembles also presents differences that could be detected experimentally. Both the microcanonical and grand canonical formalisms show that the entropy is non-additive for systems this small, while the canonical ensemble reaches the opposite conclusion. These results suggest that the microcanonical ensemble is the most appropriate for dealing with systems with tens of particles.
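
    The canonical side of the harmonic-oscillator comparison is easy to reproduce. A minimal sketch (not the paper's code) of the canonical heat capacity per quantum oscillator, which approaches k_B at high temperature, contrasted with the microcanonical finite-n value k_B(1 - 1/n) quoted in the abstract:

    ```python
    import math

    def heat_capacity_per_oscillator(x):
        """Canonical heat capacity (in units of k_B) of one quantum harmonic
        oscillator, with x = hbar*omega/(k_B*T)."""
        ex = math.exp(x)
        return x * x * ex / (ex - 1.0) ** 2

    # Canonical high-temperature limit (x -> 0) approaches k_B ...
    for x in (1.0, 0.1, 0.01):
        print(x, heat_capacity_per_oscillator(x))

    # ... whereas the abstract's microcanonical limit for n oscillators is
    # k_B * (1 - 1/n), a measurably smaller value for small clusters:
    n = 30
    print("microcanonical limit for n=30:", 1.0 - 1.0 / n)
    ```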

  5. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    NASA Astrophysics Data System (ADS)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
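
    The occupation-angle idea can be illustrated in a few lines. A sketch assuming the parameterization n_i = 2 sin²(θ_i) (an illustrative choice; the paper's exact trigonometric form may differ), which keeps every occupation number in [0, 2] for any unconstrained angle:

    ```python
    import math
    import random

    def occupations(angles):
        """Occupation numbers from unconstrained occupation angles.
        n = 2*sin(theta)**2 is an illustrative choice that always lies in
        [0, 2]; the paper's exact parameterization may differ."""
        return [2.0 * math.sin(t) ** 2 for t in angles]

    random.seed(0)
    angles = [random.uniform(-10.0, 10.0) for _ in range(8)]
    ns = occupations(angles)
    print(all(0.0 <= n <= 2.0 for n in ns))  # bounds hold for any angles
    print(sum(ns))  # total electron count, constrained separately by SOEO
    ```

    The point of such a parameterization is that the optimizer can take unconstrained Newton-Raphson steps in the angles without ever violating the 0 ≤ n ≤ 2 bounds.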

  6. Statistical hadronization and microcanonical ensemble

    DOE PAGES

    Becattini, F.; Ferroni, L.

    2004-01-01

    We present a Monte Carlo calculation of the microcanonical ensemble of the ideal hadron-resonance gas including all known states up to a mass of 1.8 GeV, taking into account quantum statistics. The computing method is a development of a previous one based on a Metropolis Monte Carlo algorithm, with the grand-canonical limit of the multi-species multiplicity distribution as the proposal matrix. The microcanonical average multiplicities of the various hadron species are found to converge to the canonical ones for moderately low values of the total energy. This algorithm opens the way for event generators based on the statistical hadronization model.

  7. Phase transition and thermodynamic geometry of f (R ) AdS black holes in the grand canonical ensemble

    NASA Astrophysics Data System (ADS)

    Li, Gu-Qiang; Mo, Jie-Xiong

    2016-06-01

    The phase transition of a four-dimensional charged AdS black hole solution in R + f(R) gravity with constant curvature is investigated in the grand canonical ensemble, where we find novel characteristics quite different from those in the canonical ensemble. There exists no critical point for the T-S curve, while in former research critical points were found for both the T-S curve and the T-r+ curve when the electric charge of f(R) black holes is kept fixed. Moreover, we derive explicit expressions for the specific heat, the analog of the volume expansion coefficient, and the isothermal compressibility coefficient when the electric potential of the f(R) AdS black hole is fixed. The specific heat CΦ encounters a divergence when 0 < Φ < Φb. This finding also differs from the result in the canonical ensemble, where there may be two, one, or no divergence points for the specific heat CQ. To examine the phase structure newly found in the grand canonical ensemble, we appeal to the well-known tools of thermodynamic geometry and derive analytic expressions for both the Weinhold scalar curvature and the Ruppeiner scalar curvature. It is shown that they diverge exactly where the specific heat CΦ diverges.

  8. Quantum canonical ensemble: A projection operator approach

    NASA Astrophysics Data System (ADS)

    Magnus, Wim; Lemmens, Lucien; Brosens, Fons

    2017-09-01

    Knowing the exact number of particles N, and taking this knowledge into account, the quantum canonical ensemble imposes a constraint on the occupation number operators. The constraint particularly hampers the systematic calculation of the partition function and any relevant thermodynamic expectation value for arbitrary but fixed N. On the other hand, fixing only the average number of particles, one may remove the above constraint and simply factorize the traces in Fock space into traces over single-particle states. As is well known, that would be the strategy of the grand-canonical ensemble which, however, comes with an additional Lagrange multiplier to impose the average number of particles. The appearance of this multiplier can be avoided by invoking a projection operator that enables a constraint-free computation of the partition function and its derived quantities in the canonical ensemble, at the price of an angular or contour integration. Introduced in the recent past to handle various issues related to particle-number projected statistics, the projection operator approach proves beneficial to a wide variety of problems in condensed matter physics for which the canonical ensemble offers a natural and appropriate environment. In this light, we present a systematic treatment of the canonical ensemble that embeds the projection operator into the formalism of second quantization while explicitly fixing N, the very number of particles rather than the average. Being applicable to both bosonic and fermionic systems in arbitrary dimensions, transparent integral representations are provided for the partition function ZN and the Helmholtz free energy FN as well as for two- and four-point correlation functions. The chemical potential is not a Lagrange multiplier regulating the average particle number but can be extracted from FN+1 -FN, as illustrated for a two-dimensional fermion gas.
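
    For noninteracting fermions the projection reduces to a discrete Fourier sum, which can be sketched directly. The level energies below are arbitrary assumptions; the point is that the angular integration reproduces the brute-force canonical sum exactly once the number of quadrature points exceeds the number of single-particle levels:

    ```python
    import cmath
    import math
    from itertools import combinations

    def z_canonical_projection(energies, n, beta):
        """Canonical Z_N of noninteracting fermions by particle-number projection:
        Z_N = (1/M) sum_m e^{-i phi_m N} prod_k (1 + e^{i phi_m} e^{-beta e_k}),
        phi_m = 2*pi*m/M.  M > len(energies) makes the discrete sum exact."""
        m_points = len(energies) + 1
        total = 0.0 + 0.0j
        for m in range(m_points):
            phi = 2.0 * math.pi * m / m_points
            prod = 1.0 + 0.0j
            for e in energies:
                prod *= 1.0 + cmath.exp(1j * phi) * math.exp(-beta * e)
            total += cmath.exp(-1j * phi * n) * prod
        return (total / m_points).real

    def z_canonical_brute(energies, n, beta):
        """Direct sum over all N-fermion configurations (occupied-level subsets)."""
        return sum(math.exp(-beta * sum(c)) for c in combinations(energies, n))

    eps = [0.0, 0.3, 0.7, 1.2]   # assumed single-particle spectrum
    print(z_canonical_projection(eps, 2, 1.0))
    print(z_canonical_brute(eps, 2, 1.0))   # agrees with the projected value
    ```

    The grand-canonical-style product is a polynomial in e^{iφ}, so the discrete Fourier transform extracts its N-th coefficient, i.e. Z_N, without ever introducing a chemical potential.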

  9. Canonical partition functions: ideal quantum gases, interacting classical gases, and interacting quantum gases

    NASA Astrophysics Data System (ADS)

    Zhou, Chi-Chun; Dai, Wu-Sheng

    2018-02-01

    In statistical mechanics, for a system with a fixed number of particles, e.g. a finite-size system, strictly speaking, the thermodynamic quantity needs to be calculated in the canonical ensemble. Nevertheless, the calculation of the canonical partition function is difficult. In this paper, based on the mathematical theory of the symmetric function, we suggest a method for the calculation of the canonical partition function of ideal quantum gases, including ideal Bose, Fermi, and Gentile gases. Moreover, we express the canonical partition functions of interacting classical and quantum gases given by the classical and quantum cluster expansion methods in terms of the Bell polynomial in mathematics. The virial coefficients of ideal Bose, Fermi, and Gentile gases are calculated from the exact canonical partition function. The virial coefficients of interacting classical and quantum gases are calculated from the canonical partition function by using the expansion of the Bell polynomial, rather than calculated from the grand canonical potential.
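
    For ideal quantum gases, the symmetric-function machinery is closely related to a standard recursion through power sums; the sketch below uses that well-known recursion (not necessarily the paper's exact algorithm). For bosons, Z_N = (1/N) Σ_{k=1}^{N} S_k Z_{N-k} with S_k = Σ_j e^{-kβε_j}:

    ```python
    import math
    from itertools import combinations_with_replacement

    def z_bose_recursive(energies, n_max, beta):
        """Canonical partition functions Z_0..Z_{n_max} of an ideal Bose gas via
        Z_N = (1/N) * sum_{k=1}^{N} S_k * Z_{N-k}, S_k = sum_j exp(-k*beta*e_j)."""
        s = [0.0] + [sum(math.exp(-k * beta * e) for e in energies)
                     for k in range(1, n_max + 1)]
        z = [1.0]
        for n in range(1, n_max + 1):
            z.append(sum(s[k] * z[n - k] for k in range(1, n + 1)) / n)
        return z

    def z_bose_brute(energies, n, beta):
        """Direct sum over all N-boson states (multisets of occupied levels)."""
        return sum(math.exp(-beta * sum(c))
                   for c in combinations_with_replacement(energies, n))

    eps = [0.0, 0.5, 1.0]   # assumed single-particle spectrum
    print(z_bose_recursive(eps, 3, 1.0)[3])
    print(z_bose_brute(eps, 3, 1.0))   # agrees with the recursive value
    ```

    For fermions the same recursion holds with an alternating sign (-1)^{k+1} in front of S_k.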

  10. Enabling grand-canonical Monte Carlo: extending the flexibility of GROMACS through the GromPy python interface module.

    PubMed

    Pool, René; Heringa, Jaap; Hoefling, Martin; Schulz, Roland; Smith, Jeremy C; Feenstra, K Anton

    2012-05-05

    We report on a python interface to the GROMACS molecular simulation package, GromPy (available at https://github.com/GromPy). This application programming interface (API) uses the ctypes python module that allows function calls to shared libraries, for example, written in C. To the best of our knowledge, this is the first reported interface to the GROMACS library that uses direct library calls. GromPy can be used for extending the current GROMACS simulation and analysis modes. In this work, we demonstrate that the interface enables hybrid Monte-Carlo/molecular dynamics (MD) simulations in the grand-canonical ensemble, a simulation mode that is currently not implemented in GROMACS. For this application, the interplay between GromPy and GROMACS requires only minor modifications of the GROMACS source code, not affecting the operation, efficiency, and performance of the GROMACS applications. We validate the grand-canonical application against MD in the canonical ensemble by comparison of equations of state. The results of the grand-canonical simulations are in complete agreement with MD in the canonical ensemble. The python overhead of the grand-canonical scheme is only minimal. Copyright © 2012 Wiley Periodicals, Inc.
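
    The ctypes mechanism GromPy relies on can be illustrated with a standard C library (libm serves as a stand-in here; this is not the GROMACS API, and library name resolution is platform-dependent):

    ```python
    import ctypes
    import ctypes.util

    # Load a shared C library through ctypes -- the same mechanism GromPy uses
    # for libgromacs.  libm is a stand-in; find_library is platform-dependent.
    libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

    # Declare the C signature before calling, as one must for any C function
    # reached through ctypes.
    libm.cos.argtypes = [ctypes.c_double]
    libm.cos.restype = ctypes.c_double

    print(libm.cos(0.0))   # 1.0
    ```

    Setting `argtypes`/`restype` explicitly is essential: without them ctypes defaults to int conversions and silently corrupts floating-point arguments and return values.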

  11. Canonical-ensemble extended Lagrangian Born-Oppenheimer molecular dynamics for the linear scaling density functional theory.

    PubMed

    Hirakawa, Teruo; Suzuki, Teppei; Bowler, David R; Miyazaki, Tsuyoshi

    2017-10-11

    We discuss the development and implementation of a constant temperature (NVT) molecular dynamics scheme that combines the Nosé-Hoover chain thermostat with the extended Lagrangian Born-Oppenheimer molecular dynamics (BOMD) scheme, using a linear scaling density functional theory (DFT) approach. An integration scheme for this canonical-ensemble extended Lagrangian BOMD is developed and discussed in the context of the Liouville operator formulation. Linear scaling DFT canonical-ensemble extended Lagrangian BOMD simulations are tested on bulk silicon and silicon carbide systems to evaluate our integration scheme. The results show that the conserved quantity remains stable with no systematic drift even in the presence of the thermostat.

  12. Toward canonical ensemble distribution from self-guided Langevin dynamics simulation

    NASA Astrophysics Data System (ADS)

    Wu, Xiongwu; Brooks, Bernard R.

    2011-04-01

    This work derives a quantitative description of the conformational distribution in self-guided Langevin dynamics (SGLD) simulations. SGLD simulations employ guiding forces calculated from local average momenta to enhance low-frequency motion. This enhancement in low-frequency motion dramatically accelerates conformational search efficiency, but also induces certain perturbations in the conformational distribution. Through the local averaging, we separate properties of molecular systems into low-frequency and high-frequency portions. The guiding-force effect on the conformational distribution is quantitatively described using these low-frequency and high-frequency properties. This quantitative relation provides a way to convert between a canonical ensemble and a self-guided ensemble. Using example systems, we demonstrate how to utilize the relation to obtain canonical ensemble properties and conformational distributions from SGLD simulations. This development makes SGLD not only an efficient approach for conformational searching, but also an accurate means for conformational sampling.

  13. On the Local Equivalence Between the Canonical and the Microcanonical Ensembles for Quantum Spin Systems

    NASA Astrophysics Data System (ADS)

    Tasaki, Hal

    2018-06-01

    We study a quantum spin system on the d-dimensional hypercubic lattice Λ with N = L^d sites and periodic boundary conditions. We take an arbitrary translation-invariant short-ranged Hamiltonian. For this system, we consider both the canonical ensemble with inverse temperature β0 and the microcanonical ensemble with the corresponding energy U_N(β0). For an arbitrary self-adjoint operator Â whose support is contained in a hypercubic block B inside Λ, we prove that the expectation values of Â with respect to these two ensembles are close to each other for large N, provided that β0 is sufficiently small and the number of sites in B is o(N^{1/2}). This establishes the equivalence of ensembles at the level of local states in a large but finite system. The result is essentially that of Brandao and Cramer (here restricted to the case of the canonical and the microcanonical ensembles), but we prove improved estimates in an elementary manner. We also review and prove standard results on the thermodynamic limits of thermodynamic functions and the equivalence of ensembles in terms of thermodynamic functions. The present paper assumes only elementary knowledge of quantum statistical mechanics and quantum spin systems.

  14. On the relativistic micro-canonical ensemble and relativistic kinetic theory for N relativistic particles in inertial and non-inertial rest frames

    NASA Astrophysics Data System (ADS)

    Alba, David; Crater, Horace W.; Lusanna, Luca

    2015-03-01

    A new formulation of relativistic classical mechanics allows a reconsideration of old unsolved problems in relativistic kinetic theory and in relativistic statistical mechanics. In particular a definition of the relativistic micro-canonical partition function is given strictly in terms of the Poincaré generators of an interacting N-particle system both in the inertial and non-inertial rest frames. The non-relativistic limit allows a definition of both the inertial and non-inertial micro-canonical ensemble in terms of the Galilei generators.

  15. Microcanonical-ensemble computer simulation of the high-temperature expansion coefficients of the Helmholtz free energy of a square-well fluid

    NASA Astrophysics Data System (ADS)

    Sastre, Francisco; Moreno-Hilario, Elizabeth; Sotelo-Serna, Maria Guadalupe; Gil-Villegas, Alejandro

    2018-02-01

    The microcanonical-ensemble computer simulation method (MCE) is used to evaluate the perturbation terms Ai of the Helmholtz free energy of a square-well (SW) fluid. The MCE method offers a very efficient and accurate procedure for the determination of perturbation terms of discrete-potential systems such as the SW fluid and surpasses the standard NVT canonical-ensemble Monte Carlo method, allowing the calculation of the first six expansion terms. Results are presented for the case of a SW potential with attractive ranges 1.1 ≤ λ ≤ 1.8. Using a semi-empirical representation of the MCE values for Ai, we also discuss the accuracy of the determination of the phase diagram of this system.

  16. Path planning in uncertain flow fields using ensemble method

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.

    2016-10-01

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.

  17. Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

    2001-01-01

    This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly with the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area factor is automatically included. Thus our model is an improvement on the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme yields reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlation between the observed and predicted fields is positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
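
    The dependence of the optimal ensemble weights on each member's mean square error can be sketched with standard inverse-MSE (inverse-variance) weighting for unbiased, independent forecasts; the synthetic data and error levels below are illustrative assumptions, not the memorandum's actual scheme:

    ```python
    import random

    def inverse_mse_weights(mses):
        """Weights proportional to 1/MSE, normalized to sum to one."""
        inv = [1.0 / m for m in mses]
        total = sum(inv)
        return [w / total for w in inv]

    random.seed(42)
    truth, n = 0.0, 100_000
    # Two unbiased synthetic forecasters with error std 1.0 and 2.0 (MSE 1 and 4).
    f1 = [truth + random.gauss(0.0, 1.0) for _ in range(n)]
    f2 = [truth + random.gauss(0.0, 2.0) for _ in range(n)]
    w1, w2 = inverse_mse_weights([1.0, 4.0])
    combined = [w1 * a + w2 * b for a, b in zip(f1, f2)]

    def mse(xs):
        return sum((x - truth) ** 2 for x in xs) / len(xs)

    print(mse(f1), mse(f2), mse(combined))   # combined MSE is the smallest
    ```

    For independent unbiased members the combined MSE is 1/(1/1 + 1/4) = 0.8, below either member alone, which is why the weights must track each forecast's error estimate.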

  18. Finite temperature grand canonical ensemble study of the minimum electrophilicity principle.

    PubMed

    Miranda-Quintana, Ramón Alain; Chattaraj, Pratim K; Ayers, Paul W

    2017-09-28

    We analyze the minimum electrophilicity principle of conceptual density functional theory using the framework of the finite temperature grand canonical ensemble. We provide support for this principle, both for the cases of systems evolving from a non-equilibrium to an equilibrium state and for the change from one equilibrium state to another. In doing so, we clearly delineate the cases where this principle can, or cannot, be used.

  19. Relaxation in a two-body Fermi-Pasta-Ulam system in the canonical ensemble

    NASA Astrophysics Data System (ADS)

    Sen, Surajit; Barrett, Tyler

    The study of the dynamics of the Fermi-Pasta-Ulam (FPU) chain remains a challenging problem. Inspired by the recent work of Onorato et al. on thermalization in the FPU system, we report a study of relaxation processes in a two-body FPU system in the canonical ensemble. The studies have been carried out using the Recurrence Relations Method introduced by Zwanzig, Mori, Lee and others. We have obtained exact analytical expressions for the first thirteen levels of the continued fraction representation of the Laplace transformed velocity autocorrelation function of the system. Using simple and reasonable extrapolation schemes and known limits we are able to estimate the relaxation behavior of the oscillators in the two-body FPU system and recover the expected behavior in the harmonic limit. Generalizations of the calculations to larger systems will be discussed.

  20. Maxwell's Demon at work: Two types of Bose condensate fluctuations in power-law traps.

    PubMed

    Grossmann, S; Holthaus, M

    1997-11-10

    After discussing the idea underlying the Maxwell's Demon ensemble, we employ this ensemble for calculating fluctuations of ideal Bose gas condensates in traps with power-law single-particle energy spectra. Two essentially different cases have to be distinguished. If the heat capacity is continuous at the condensation point, the fluctuations of the number of condensate particles vanish linearly with temperature, independent of the trap characteristics. In this case, microcanonical and canonical fluctuations are practically indistinguishable. If the heat capacity is discontinuous, the fluctuations vanish algebraically with temperature, with an exponent determined by the trap, and the microcanonical fluctuations are lower than their canonical counterparts.

  1. Breaking of Ensemble Equivalence in Networks

    NASA Astrophysics Data System (ADS)

    Squartini, Tiziano; de Mol, Joey; den Hollander, Frank; Garlaschelli, Diego

    2015-12-01

    It is generally believed that, in the thermodynamic limit, the microcanonical description as a function of energy coincides with the canonical description as a function of temperature. However, various examples of systems for which the microcanonical and canonical ensembles are not equivalent have been identified. A complete theory of this intriguing phenomenon is still missing. Here we show that ensemble nonequivalence can manifest itself also in random graphs with topological constraints. We find that, while graphs with a given number of links are ensemble equivalent, graphs with a given degree sequence are not. This result holds irrespective of whether the energy is nonadditive (as in unipartite graphs) or additive (as in bipartite graphs). In contrast with previous expectations, our results show that (1) physically, nonequivalence can be induced by an extensive number of local constraints, and not necessarily by long-range interactions or nonadditivity, (2) mathematically, nonequivalence is determined by a different large-deviation behavior of microcanonical and canonical probabilities for a single microstate, and not necessarily for almost all microstates. The latter criterion, which is entirely local, is not restricted to networks and holds in general.

  2. Long-range interacting systems in the unconstrained ensemble.

    PubMed

    Latella, Ivan; Pérez-Madrid, Agustín; Campa, Alessandro; Casetti, Lapo; Ruffo, Stefano

    2017-01-01

    Completely open systems can exchange heat, work, and matter with the environment. While energy, volume, and number of particles fluctuate under completely open conditions, the equilibrium states of the system, if they exist, can be specified using the temperature, pressure, and chemical potential as control parameters. The unconstrained ensemble is the statistical ensemble describing completely open systems and the replica energy is the appropriate free energy for these control parameters from which the thermodynamics must be derived. It turns out that macroscopic systems with short-range interactions cannot attain equilibrium configurations in the unconstrained ensemble, since temperature, pressure, and chemical potential cannot be taken as a set of independent variables in this case. In contrast, we show that systems with long-range interactions can reach states of thermodynamic equilibrium in the unconstrained ensemble. To illustrate this fact, we consider a modification of the Thirring model and compare the unconstrained ensemble with the canonical and grand-canonical ones: The more the ensemble is constrained by fixing the volume or number of particles, the larger the space of parameters defining the equilibrium configurations.

  3. Maxwell's equal area law for black holes in power Maxwell invariant

    NASA Astrophysics Data System (ADS)

    Li, Huai-Fan; Guo, Xiong-ying; Zhao, Hui-Hua; Zhao, Ren

    2017-08-01

    In this paper, we consider the phase transition of black holes in power Maxwell invariant theory by means of Maxwell's equal area law. First, we review and study the analogy of nonlinear charged black hole solutions with the van der Waals gas-liquid system in the extended phase space, and obtain the isothermal P-v diagram. Then, using Maxwell's equal area law we study the phase transition of the AdS black hole at different temperatures. Finally, we extend the method to the black hole in the canonical (grand canonical) ensemble, in which the charge (potential) is fixed at infinity. Interestingly, we find that the phase transition occurs in both ensembles. We also study the effect of the parameters of the black hole on the two-phase coexistence. The results show that the black hole may go through a small-large phase transition similar to those of usual non-gravity thermodynamic systems.

  4. Relation Between Pore Size and the Compressibility of a Confined Fluid

    PubMed Central

    Gor, Gennady Y.; Siderius, Daniel W.; Rasmussen, Christopher J.; Krekelberg, William P.; Shen, Vincent K.; Bernstein, Noam

    2015-01-01

    When a fluid is confined to a nanopore, its thermodynamic properties differ from the properties of a bulk fluid, so measuring such properties of the confined fluid can provide information about the pore sizes. Here we report a simple relation between the pore size and isothermal compressibility of argon confined in these pores. Compressibility is calculated from the fluctuations of the number of particles in the grand canonical ensemble using two different simulation techniques: conventional grand-canonical Monte Carlo and grand-canonical ensemble transition-matrix Monte Carlo. Our results provide a theoretical framework for extracting the information on the pore sizes of fluid-saturated samples by measuring the compressibility from ultrasonic experiments. PMID:26590541
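
    The fluctuation route to compressibility, κ_T = V⟨δN²⟩/(k_B T⟨N⟩²), can be sketched with a toy grand-canonical Monte Carlo of an ideal gas (insertion/deletion moves only; an illustrative stand-in, not the paper's argon simulations). The stationary N-distribution is Poisson, so ⟨δN²⟩/⟨N⟩ ≈ 1, recovering the ideal-gas result κ_T = V/(⟨N⟩ k_B T):

    ```python
    import random

    def gcmc_ideal_gas(zv, steps, seed=1):
        """Minimal grand-canonical MC for an ideal gas.  Insertion is accepted
        with min(1, zV/(N+1)), deletion with min(1, N/zV); the stationary
        distribution of N is Poisson(zV)."""
        rng = random.Random(seed)
        n, samples = 0, []
        for step in range(steps):
            if rng.random() < 0.5:                           # attempt insertion
                if rng.random() < min(1.0, zv / (n + 1)):
                    n += 1
            elif n > 0 and rng.random() < min(1.0, n / zv):  # attempt deletion
                n -= 1
            if step > steps // 5:                            # discard burn-in
                samples.append(n)
        return samples

    ns = gcmc_ideal_gas(zv=50.0, steps=400_000)
    mean = sum(ns) / len(ns)
    var = sum((x - mean) ** 2 for x in ns) / len(ns)
    # For an ideal gas kappa_T = V <dN^2> / (k_B T <N>^2) equals V/(<N> k_B T),
    # i.e. the fluctuation ratio <dN^2>/<N> is 1.
    print(mean, var / mean)
    ```

    For a confined interacting fluid the ratio ⟨δN²⟩/⟨N⟩ deviates from 1, and it is exactly this deviation that encodes the pore-size-dependent compressibility measured in the paper.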

  5. Calculating phase equilibrium properties of plasma pseudopotential model using hybrid Gibbs statistical ensemble Monte-Carlo technique

    NASA Astrophysics Data System (ADS)

    Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.

    2015-11-01

    Earlier, a two-component pseudopotential plasma model, which we call the “shelf Coulomb” model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte Carlo technique to this model. First simulation results show qualitatively similar results in the critical point region for both methods. The Gibbs ensemble technique lets us estimate the position of the melting curve and a triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10-4.

  6. Enhanced Sampling in the Well-Tempered Ensemble

    NASA Astrophysics Data System (ADS)

    Bonomi, M.; Parrinello, M.

    2010-05-01

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi et al., J. Comput. Chem. 30, 1615 (2009), DOI: 10.1002/jcc.21305]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.

  7. Enhanced sampling in the well-tempered ensemble.

    PubMed

    Bonomi, M; Parrinello, M

    2010-05-14

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009)]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.

  8. Thermodynamics of pairing in mesoscopic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sumaryada, Tony; Volya, Alexander

    Using numerical and analytical methods implemented for different models, we conduct a systematic study of the thermodynamic properties of pairing correlations in mesoscopic nuclear systems. Various quantities are calculated and analyzed using the exact solution of pairing. An in-depth comparison of the canonical, grand canonical, and microcanonical ensembles is conducted. The nature of the pairing phase transition in a small system is of particular interest. We discuss the onset of discontinuity in the thermodynamic variables, fluctuations, and the evolution of zeros of the canonical and grand canonical partition functions in the complex plane. The behavior of the invariant correlational entropy is also studied in the transitional region of interest. The change in the character of the phase transition due to the presence of a magnetic field is discussed along with studies of superconducting thermodynamics.

  9. Molecular dynamics coupled with a virtual system for effective conformational sampling.

    PubMed

    Hayami, Tomonori; Kasahara, Kota; Nakamura, Haruki; Higo, Junichi

    2018-07-15

An enhanced conformational sampling method is proposed: virtual-system coupled canonical molecular dynamics (VcMD). Although VcMD enhances sampling along a reaction coordinate, this method is free from estimation of a canonical distribution function along the reaction coordinate. This method introduces a virtual system that does not necessarily obey a physical law. To enhance sampling, the virtual system is coupled to the molecular system under study. The resultant snapshots produce a canonical ensemble. This method was applied to a system consisting of two short peptides in an explicit solvent. A conventional molecular dynamics simulation, ten times longer than the VcMD run, was performed along with adaptive umbrella sampling. Free-energy landscapes computed from the three simulations mutually converged well. The VcMD provided quicker association/dissociation motions of peptides than the conventional molecular dynamics did. The VcMD method is applicable to various complicated systems because of its methodological simplicity. © 2018 Wiley Periodicals, Inc.

  10. Biased Metropolis Sampling for Rugged Free Energy Landscapes

    NASA Astrophysics Data System (ADS)

    Berg, Bernd A.

    2003-11-01

Metropolis simulations of all-atom models of peptides (i.e. small proteins) are considered. Inspired by the funnel picture of Bryngelson and Wolynes, a transformation of the updating probabilities of the dihedral angles is defined, which uses probability densities from a higher temperature to improve the algorithmic performance at a lower temperature. The method is suitable for canonical as well as for generalized ensemble simulations. A simple approximation to the full transformation is tested at room temperature for Met-Enkephalin in vacuum. Integrated autocorrelation times are found to be reduced by factors close to two, and a similar improvement due to generalized ensemble methods enters multiplicatively.

  11. Generalized ensemble method applied to study systems with strong first order transitions

    DOE PAGES

    Malolepsza, E.; Kim, J.; Keyes, T.

    2015-09-28

At strong first-order phase transitions, the entropy versus energy or, at constant pressure, enthalpy, exhibits convex behavior, and the statistical temperature curve correspondingly exhibits an S-loop or back-bending. In the canonical and isothermal-isobaric ensembles, with temperature as the control variable, the probability density functions become bimodal with peaks localized outside of the S-loop region. Inside, states are unstable, and as a result simulation of equilibrium phase coexistence becomes impossible. To overcome this problem, a method was proposed by Kim, Keyes and Straub, where optimally designed generalized ensemble sampling was combined with replica exchange, and denoted generalized replica exchange method (gREM). This new technique uses parametrized effective sampling weights that lead to a unimodal energy distribution, transforming unstable states into stable ones. In the present study, the gREM, originally developed as a Monte Carlo algorithm, was implemented to work with molecular dynamics in an isobaric ensemble and coded into LAMMPS, a highly optimized open source molecular simulation package. Lastly, the method is illustrated in a study of the very strong solid/liquid transition in water.

  12. Generalized ensemble method applied to study systems with strong first order transitions

    NASA Astrophysics Data System (ADS)

    Małolepsza, E.; Kim, J.; Keyes, T.

    2015-09-01

    At strong first-order phase transitions, the entropy versus energy or, at constant pressure, enthalpy, exhibits convex behavior, and the statistical temperature curve correspondingly exhibits an S-loop or back-bending. In the canonical and isothermal-isobaric ensembles, with temperature as the control variable, the probability density functions become bimodal with peaks localized outside of the S-loop region. Inside, states are unstable, and as a result simulation of equilibrium phase coexistence becomes impossible. To overcome this problem, a method was proposed by Kim, Keyes and Straub [1], where optimally designed generalized ensemble sampling was combined with replica exchange, and denoted generalized replica exchange method (gREM). This new technique uses parametrized effective sampling weights that lead to a unimodal energy distribution, transforming unstable states into stable ones. In the present study, the gREM, originally developed as a Monte Carlo algorithm, was implemented to work with molecular dynamics in an isobaric ensemble and coded into LAMMPS, a highly optimized open source molecular simulation package. The method is illustrated in a study of the very strong solid/liquid transition in water.

  13. Nonanalytic microscopic phase transitions and temperature oscillations in the microcanonical ensemble: An exactly solvable one-dimensional model for evaporation

    NASA Astrophysics Data System (ADS)

    Hilbert, Stefan; Dunkel, Jörn

    2006-07-01

    We calculate exactly both the microcanonical and canonical thermodynamic functions (TDFs) for a one-dimensional model system with piecewise constant Lennard-Jones type pair interactions. In the case of an isolated N -particle system, the microcanonical TDFs exhibit (N-1) singular (nonanalytic) microscopic phase transitions of the formal order N/2 , separating N energetically different evaporation (dissociation) states. In a suitably designed evaporation experiment, these types of phase transitions should manifest themselves in the form of pressure and temperature oscillations, indicating cooling by evaporation. In the presence of a heat bath (thermostat), such oscillations are absent, but the canonical heat capacity shows a characteristic peak, indicating the temperature-induced dissociation of the one-dimensional chain. The distribution of complex zeros of the canonical partition may be used to identify different degrees of dissociation in the canonical ensemble.

  14. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    NASA Astrophysics Data System (ADS)

    Qiao, Qin; Zhang, Hou-Dao; Huang, Xuhui

    2016-04-01

Simulated tempering (ST) is a widely used enhanced sampling method for molecular dynamics simulations. As an expanded ensemble method, ST is a combination of canonical ensembles at different temperatures, and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling among temperature space. However, this uniform distribution in temperature space may not be optimal, since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method, Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformation transitions when optimizing the weights of different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.
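The cross-temperature acceptance criterion that the weights enter can be written down directly. Below is a generic sketch of the standard ST Metropolis rule, not the EPSW optimization itself; the weights g would be set either to free-energy estimates (for uniform temperature sampling) or by a kinetics-aware scheme such as the one this abstract proposes.

```python
import math

def st_accept_prob(energy, beta_i, beta_j, g_i, g_j):
    """Metropolis acceptance probability for a simulated-tempering move
    of a configuration with potential energy `energy` from inverse
    temperature beta_i to beta_j, given per-temperature weights g_i, g_j."""
    return min(1.0, math.exp(-(beta_j - beta_i) * energy + (g_j - g_i)))
```

Note that with equal weights, moves to lower temperature (larger beta) are suppressed exponentially in the energy, which is exactly what a good weight set must compensate.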

  15. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiao, Qin, E-mail: qqiao@ust.hk; Zhang, Hou-Dao; Huang, Xuhui, E-mail: xuhuihuang@ust.hk

    2016-04-21

Simulated tempering (ST) is a widely used enhanced sampling method for molecular dynamics simulations. As an expanded ensemble method, ST is a combination of canonical ensembles at different temperatures, and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling among temperature space. However, this uniform distribution in temperature space may not be optimal, since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method, Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformation transitions when optimizing the weights of different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.

  16. Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio

We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003)] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here, we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013)] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. Using one-dimensional antiferromagnetic Heisenberg chains with S = 1/2, we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.

  17. Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures

    DOE PAGES

    Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio; ...

    2018-04-20

We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003)] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here, we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013)] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. Using one-dimensional antiferromagnetic Heisenberg chains with S = 1/2, we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.

  18. Improved Monte Carlo Scheme for Efficient Particle Transfer in Heterogeneous Systems in the Grand Canonical Ensemble: Application to Vapor-Liquid Nucleation.

    PubMed

    Loeffler, Troy D; Sepehri, Aliasghar; Chen, Bin

    2015-09-08

Reformulation of existing Monte Carlo algorithms used in the study of grand canonical systems has yielded massive improvements in efficiency. Here we present an energy biasing scheme designed to address targeting issues encountered in particle swap moves using sophisticated algorithms such as the Aggregation-Volume-Bias and Unbonding-Bonding methods. Specifically, this energy biasing scheme allows a particle to be inserted into (or removed from) a region where the move is more likely to be accepted. As a result, this new method showed a several-fold increase in insertion/removal efficiency in addition to an accelerated rate of convergence for the thermodynamic properties of the system.
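For context, the limiting acceptance rules that such biased moves must still satisfy are the textbook grand canonical ones. The sketch below shows standard unbiased GCMC insertion/deletion probabilities; the paper's energy-biased AVBMC/UB moves modify the proposal step, not these limiting forms, and the parameter names here are illustrative.

```python
import math

def gcmc_insert_prob(dU, n, volume, beta, mu, lam=1.0):
    """Acceptance probability for inserting one particle into a system of
    n particles in the grand canonical ensemble; dU is the energy change
    of the insertion and lam the thermal de Broglie wavelength."""
    return min(1.0, volume / (lam ** 3 * (n + 1)) * math.exp(beta * (mu - dU)))

def gcmc_delete_prob(dU, n, volume, beta, mu, lam=1.0):
    """Acceptance probability for deleting one of n particles; dU is the
    energy change of the deletion."""
    return min(1.0, lam ** 3 * n / volume * math.exp(-beta * (mu + dU)))
```

Any biased scheme must fold its modified proposal probabilities into these ratios so that detailed balance in the grand canonical distribution is preserved.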

  19. Ensemble Grouping Strategies for Embedded Stochastic Collocation Methods Applied to Anisotropic Diffusion Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Elia, M.; Edwards, H. C.; Hu, J.

Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162--C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen--Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.

  20. Ensemble Grouping Strategies for Embedded Stochastic Collocation Methods Applied to Anisotropic Diffusion Problems

    DOE PAGES

    D'Elia, M.; Edwards, H. C.; Hu, J.; ...

    2018-01-18

Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162--C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen--Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.

  1. The forces on a single interacting Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Thu, Nguyen Van

    2018-04-01

Using the double parabola approximation for a single Bose-Einstein condensate confined between double slabs, we prove that in the grand canonical ensemble (GCE) the ground state with the Robin boundary condition (BC) is favored, whereas in the canonical ensemble (CE) the system passes from the ground state with the Robin BC to the one with the Dirichlet BC in the small-L region, and vice versa in the large-L region; the phase transition in the space of ground states is first order. The surface tension force and the Casimir force are also considered in detail in both the CE and the GCE.

  2. Black hole evaporation in conformal gravity

    NASA Astrophysics Data System (ADS)

    Bambi, Cosimo; Modesto, Leonardo; Porey, Shiladitya; Rachwał, Lesław

    2017-09-01

    We study the formation and the evaporation of a spherically symmetric black hole in conformal gravity. From the collapse of a spherically symmetric thin shell of radiation, we find a singularity-free non-rotating black hole. This black hole has the same Hawking temperature as a Schwarzschild black hole with the same mass, and it completely evaporates either in a finite or in an infinite time, depending on the ensemble. We consider the analysis both in the canonical and in the micro-canonical statistical ensembles. Last, we discuss the corresponding Penrose diagram of this physical process.

  3. Black hole evaporation in conformal gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bambi, Cosimo; Rachwał, Lesław; Modesto, Leonardo

    We study the formation and the evaporation of a spherically symmetric black hole in conformal gravity. From the collapse of a spherically symmetric thin shell of radiation, we find a singularity-free non-rotating black hole. This black hole has the same Hawking temperature as a Schwarzschild black hole with the same mass, and it completely evaporates either in a finite or in an infinite time, depending on the ensemble. We consider the analysis both in the canonical and in the micro-canonical statistical ensembles. Last, we discuss the corresponding Penrose diagram of this physical process.

  4. Green-Kubo relations for the viscosity of biaxial nematic liquid crystals

    NASA Astrophysics Data System (ADS)

    Sarman, Sten

    1996-09-01

We derive Green-Kubo relations for the viscosities of a biaxial nematic liquid crystal. In this system there are seven shear viscosities, three twist viscosities, and three cross coupling coefficients between the antisymmetric strain rate and the symmetric traceless pressure tensor. According to the Onsager reciprocity relations these couplings are equal to the cross couplings between the symmetric traceless strain rate and the antisymmetric pressure. Our method is based on a comparison of the microscopic linear response generated by the SLLOD equations of motion for planar Couette flow (so named because of their close connection to the Doll's tensor Hamiltonian) and the macroscopic linear phenomenological relations between the pressure tensor and the strain rate. In order to obtain simple Green-Kubo relations we employ an equilibrium ensemble where the angular velocities of the directors are identically zero. This is achieved by adding constraint torques to the equations for the molecular angular accelerations. One finds that all the viscosity coefficients can be expressed as linear combinations of time correlation function integrals (TCFIs). This is much simpler than the expressions in the conventional canonical ensemble, where the viscosities are complicated rational functions of the TCFIs. The reason is that in the constrained angular velocity ensemble, the thermodynamic forces are given external parameters whereas the thermodynamic fluxes are ensemble averages of phase functions. This is not the case in the canonical ensemble. The simplest way of obtaining numerical estimates of the viscosity coefficients of a particular molecular model system is to evaluate these fluctuation relations by equilibrium molecular dynamics simulations.

  5. Liquid Water from First Principles: Validation of Different Sampling Approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mundy, C J; Kuo, W; Siepmann, J

    2004-05-20

A series of first principles molecular dynamics and Monte Carlo simulations were carried out for liquid water to assess the validity and reproducibility of different sampling approaches. These simulations include Car-Parrinello molecular dynamics simulations using the program CPMD with different values of the fictitious electron mass in the microcanonical and canonical ensembles, Born-Oppenheimer molecular dynamics using the programs CPMD and CP2K in the microcanonical ensemble, and Metropolis Monte Carlo using CP2K in the canonical ensemble. With the exception of one simulation for 128 water molecules, all other simulations were carried out for systems consisting of 64 molecules. It is found that the structural and thermodynamic properties of these simulations are in excellent agreement with each other as long as adiabatic sampling is maintained in the Car-Parrinello molecular dynamics simulations, either by choosing a sufficiently small fictitious mass in the microcanonical ensemble or by Nosé-Hoover thermostats in the canonical ensemble. Using the Becke-Lee-Yang-Parr exchange and correlation energy functionals and norm-conserving Troullier-Martins or Goedecker-Teter-Hutter pseudopotentials, simulations at a fixed density of 1.0 g/cm³ and a temperature close to 315 K yield a height of the first peak in the oxygen-oxygen radial distribution function of about 3.0, a classical constant-volume heat capacity of about 70 J K⁻¹ mol⁻¹, and a self-diffusion constant of about 0.1 Å²/ps.

  6. Structural, electronic, and dynamical properties of liquid water by ab initio molecular dynamics based on SCAN functional within the canonical ensemble

    NASA Astrophysics Data System (ADS)

    Zheng, Lixin; Chen, Mohan; Sun, Zhaoru; Ko, Hsin-Yu; Santra, Biswajit; Dhuvad, Pratikkumar; Wu, Xifan

    2018-04-01

We perform ab initio molecular dynamics (AIMD) simulation of liquid water in the canonical ensemble at ambient conditions using the strongly constrained and appropriately normed (SCAN) meta-generalized-gradient-approximation (meta-GGA) functional and carry out systematic comparisons with the results obtained from the GGA-level Perdew-Burke-Ernzerhof (PBE) functional and the Tkatchenko-Scheffler van der Waals (vdW) dispersion-correction-inclusive PBE functional. We analyze various properties of liquid water including radial distribution functions, oxygen-oxygen-oxygen triplet angular distribution, tetrahedrality, hydrogen bonds, diffusion coefficients, ring statistics, density of states, band gaps, and dipole moments. We find that the SCAN functional is generally more accurate than the other two functionals for liquid water by not only capturing the intermediate-range vdW interactions but also mitigating the overly strong hydrogen bonds prescribed in PBE simulations. We also compare the results of SCAN-based AIMD simulations in the canonical and isothermal-isobaric ensembles. Our results suggest that SCAN provides a reliable description for most structural, electronic, and dynamical properties in liquid water.

  7. Rényi entropy, abundance distribution, and the equivalence of ensembles.

    PubMed

    Mora, Thierry; Walczak, Aleksandra M

    2016-05-01

    Distributions of abundances or frequencies play an important role in many fields of science, from biology to sociology, as does the Rényi entropy, which measures the diversity of a statistical ensemble. We derive a mathematical relation between the abundance distribution and the Rényi entropy, by analogy with the equivalence of ensembles in thermodynamics. The abundance distribution is mapped onto the density of states, and the Rényi entropy to the free energy. The two quantities are related in the thermodynamic limit by a Legendre transform, by virtue of the equivalence between the micro-canonical and canonical ensembles. In this limit, we show how the Rényi entropy can be constructed geometrically from rank-frequency plots. This mapping predicts that non-concave regions of the rank-frequency curve should result in kinks in the Rényi entropy as a function of its order. We illustrate our results on simple examples, and emphasize the limitations of the equivalence of ensembles when a thermodynamic limit is not well defined. Our results help choose reliable diversity measures based on the experimental accuracy of the abundance distributions in particular frequency ranges.
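The diversity measure at the center of this abstract is easy to state concretely. Below is a minimal implementation of the Rényi entropy of a normalized abundance distribution, with the Shannon entropy as the α → 1 limit; it is a generic sketch, not the paper's geometric rank-frequency construction.

```python
import math

def renyi_entropy(freqs, alpha):
    """Renyi entropy S_alpha = ln(sum_i p_i**alpha) / (1 - alpha) of a
    normalized frequency distribution; alpha == 1 returns the Shannon
    entropy, which is the limiting value."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(p * math.log(p) for p in freqs if p > 0)
    return math.log(sum(p ** alpha for p in freqs if p > 0)) / (1.0 - alpha)
```

For a uniform distribution over K states, S_alpha = ln K for every order alpha, while for a skewed distribution S_alpha decreases with alpha; that dependence on the order is exactly the information the abstract relates to the shape of the rank-frequency curve.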

  8. Multivariable extrapolation of grand canonical free energy landscapes

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-12-01

    We derive an approach for extrapolating the free energy landscape of multicomponent systems in the grand canonical ensemble, obtained from flat-histogram Monte Carlo simulations, from one set of temperature and chemical potentials to another. This is accomplished by expanding the landscape in a Taylor series at each value of the order parameter which defines its macrostate phase space. The coefficients in each Taylor polynomial are known exactly from fluctuation formulas, which may be computed by measuring the appropriate moments of extensive variables that fluctuate in this ensemble. Here we derive the expressions necessary to define these coefficients up to arbitrary order. In principle, this enables a single flat-histogram simulation to provide complete thermodynamic information over a broad range of temperatures and chemical potentials. Using this, we also show how to combine a small number of simulations, each performed at different conditions, in a thermodynamically consistent fashion to accurately compute properties at arbitrary temperatures and chemical potentials. This method may significantly increase the computational efficiency of biased grand canonical Monte Carlo simulations, especially for multicomponent mixtures. Although approximate, this approach is amenable to high-throughput and data-intensive investigations where it is preferable to have a large quantity of reasonably accurate simulation data, rather than a smaller amount with a higher accuracy.
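The first-order version of the extrapolation described in this abstract can be sketched in a few lines. Under the standard fluctuation result for the grand canonical macrostate distribution, the first derivative of ln Π(N) with respect to βμ is N − ⟨N⟩; the code below applies only that first-order term (the paper derives coefficients to arbitrary order), and the function name is illustrative.

```python
import math

def extrapolate_lnpi(lnpi, n_values, d_betamu):
    """First-order Taylor extrapolation of a grand canonical macrostate
    distribution ln Pi(N) from reference conditions to beta*mu shifted
    by d_betamu, using d lnPi(N)/d(beta mu) = N - <N>."""
    # normalized weights give <N> at the reference conditions
    m = max(lnpi)
    w = [math.exp(x - m) for x in lnpi]
    n_avg = sum(n * wi for n, wi in zip(n_values, w)) / sum(w)
    return [x + (n - n_avg) * d_betamu for x, n in zip(lnpi, n_values)]
```

For an ideal gas, where ln Π(N) is linear in βμ, the first-order extrapolation reproduces the shape of the landscape exactly, which makes a convenient correctness check.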

  9. Ergodicity of a singly-thermostated harmonic oscillator

    NASA Astrophysics Data System (ADS)

    Hoover, William Graham; Sprott, Julien Clinton; Hoover, Carol Griswold

    2016-03-01

    Although Nosé's thermostated mechanics is formally consistent with Gibbs' canonical ensemble, the thermostated Nosé-Hoover (harmonic) oscillator, with its mean kinetic temperature controlled, is far from ergodic. Much of its phase space is occupied by regular conservative tori. Oscillator ergodicity has previously been achieved by controlling two oscillator moments with two thermostat variables. Here we use computerized searches in conjunction with visualization to find singly-thermostated motion equations for the oscillator which are consistent with Gibbs' canonical distribution. Such models are the simplest able to bridge the gap between Gibbs' statistical ensembles and Newtonian single-particle dynamics.
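The Nosé-Hoover oscillator discussed in this abstract is compact enough to integrate directly. The sketch below uses the standard equations q' = p, p' = −q − ζp, ζ' = (p² − T)/τ, extended with a variable s (with s' = ζ) so that the quantity (q² + p²)/2 + τζ²/2 + Ts is exactly conserved along trajectories; this is a generic illustration, not the authors' modified thermostat.

```python
def nose_hoover_rhs(state, temp=1.0, tau=1.0):
    """Right-hand side of the Nose-Hoover thermostated harmonic oscillator,
    extended with s (integral of the friction zeta) for a conserved check."""
    q, p, z, s = state
    return (p, -q - z * p, (p * p - temp) / tau, z)

def rk4_step(state, dt, temp=1.0, tau=1.0):
    """One classical fourth-order Runge-Kutta step for the 4-variable flow."""
    def add(a, k, h):
        return tuple(ai + h * ki for ai, ki in zip(a, k))
    k1 = nose_hoover_rhs(state, temp, tau)
    k2 = nose_hoover_rhs(add(state, k1, dt / 2), temp, tau)
    k3 = nose_hoover_rhs(add(state, k2, dt / 2), temp, tau)
    k4 = nose_hoover_rhs(add(state, k3, dt), temp, tau)
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(state, k1, k2, k3, k4))
```

Conservation of the extended quantity is a useful sanity check on the integrator, even though, as the abstract stresses, the plain Nosé-Hoover oscillator fails to sample the full canonical distribution ergodically.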

  10. The Ensemble Canon

    NASA Technical Reports Server (NTRS)

Mittman, David S

    2011-01-01

    Ensemble is an open architecture for the development, integration, and deployment of mission operations software. Fundamentally, it is an adaptation of the Eclipse Rich Client Platform (RCP), a widespread, stable, and supported framework for component-based application development. By capitalizing on the maturity and availability of the Eclipse RCP, Ensemble offers a low-risk, politically neutral path towards a tighter integration of operations tools. The Ensemble project is a highly successful, ongoing collaboration among NASA Centers. Since 2004, the Ensemble project has supported the development of mission operations software for NASA's Exploration Systems, Science, and Space Operations Directorates.

  11. Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures.

    PubMed

    Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio; Tohyama, Takami

    2018-04-01

We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003)] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here, we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013)] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. Using one-dimensional antiferromagnetic Heisenberg chains with S=1/2, we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.

  12. Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures

    NASA Astrophysics Data System (ADS)

    Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio; Tohyama, Takami

    2018-04-01

    We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003), 10.1103/PhysRevB.68.235106] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here, we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013), 10.1103/PhysRevLett.111.010401] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. Using one-dimensional antiferromagnetic Heisenberg chains with S =1 /2 , we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.

  13. Enhancing the understanding of entropy through computation

    NASA Astrophysics Data System (ADS)

    Salagaram, Trisha; Chetty, Nithaya

    2011-11-01

    We devise an algorithm to enumerate the microstates of a system comprising N independent, distinguishable particles. The algorithm is applicable to a wide class of systems such as harmonic oscillators, free particles, spins, and other models for which there are no analytical solutions, for example, a system with single particle energy spectrum given by ε(p,q) = ε₀(p² + q⁴), where p and q are non-negative integers. Our algorithm enables us to determine the approach to the limit N → ∞ within the microcanonical ensemble, and makes manifest the equivalence with the canonical ensemble. Various thermodynamic quantities as a function of N can be computed using our methods.
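    The enumeration described in this record can be sketched for small N by brute force; the code below is an illustrative reconstruction, not the authors' published algorithm, and the particular spectra used are assumptions for demonstration (k_B = 1 throughout).

    ```python
    from itertools import product
    from math import log

    def count_microstates(levels, N, E_total):
        """Brute-force enumeration: count configurations of N independent,
        distinguishable particles (one single-particle level each) whose
        energies sum to E_total."""
        return sum(1 for config in product(levels, repeat=N)
                   if sum(config) == E_total)

    # Integer spectrum of a harmonic-oscillator-like model: 0, 1, 2, ...
    levels = list(range(9))
    omega = count_microstates(levels, N=3, E_total=4)  # Omega(E)
    entropy = log(omega)                               # S = ln Omega

    # The record's example spectrum eps(p, q) = eps0*(p**2 + q**4), eps0 = 1:
    exotic_levels = sorted({p**2 + q**4 for p in range(4) for q in range(3)})
    ```

    The exponential cost in N is the point of the exercise: one can watch S/N converge as N grows and compare with the canonical-ensemble prediction.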

  14. Statistical mechanics of Fermi-Pasta-Ulam chains with the canonical ensemble

    NASA Astrophysics Data System (ADS)

    Demirel, Melik C.; Sayar, Mehmet; Atılgan, Ali R.

    1997-03-01

    Low-energy vibrations of a Fermi-Pasta-Ulam-β (FPU-β) chain with 16 repeat units are analyzed with the aid of numerical experiments and the statistical mechanics equations of the canonical ensemble. Constant temperature numerical integrations are performed by employing the cubic coupling scheme of Kusnezov et al. [Ann. Phys. 204, 155 (1990)]. Very good agreement is obtained between numerical results and theoretical predictions for the probability distributions of the generalized coordinates and momenta both of the chain and of the thermal bath. It is also shown that the average energy of the chain scales linearly with the bath temperature.

  15. Microcanonical fluctuations of the condensate in weakly interacting Bose gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idziaszek, Zbigniew

    2005-05-15

    We study fluctuations of the number of Bose condensed atoms in weakly interacting homogeneous and trapped gases. For a homogeneous system we apply the particle-number-conserving formulation of the Bogoliubov theory and calculate the condensate fluctuations within the canonical and the microcanonical ensembles. We demonstrate that, at least in the low-temperature regime, predictions of the particle-number-conserving and the traditional, nonconserving theory are identical, and lead to the anomalous scaling of fluctuations. Furthermore, the microcanonical fluctuations differ from the canonical ones by a quantity which scales normally in the number of particles; thus the predictions of both ensembles are equivalent in the thermodynamic limit. We observe a similar behavior for a weakly interacting gas in a harmonic trap. This is in contrast to the trapped, ideal gas, where microcanonical and canonical fluctuations are different in the thermodynamic limit.

  16. Determination of the vapor-liquid transition of square-well particles using a novel generalized-canonical-ensemble-based method

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Xu, Shun; Tu, Yu-Song; Zhou, Xin

    2017-06-01

    Abstract not available. Project supported by the National Natural Science Foundation for Outstanding Young Scholars, China (Grant No. 11422542), the National Natural Science Foundation of China (Grant Nos. 11605151 and 11675138), and the Shanghai Supercomputer Center of China and Special Program for Applied Research on Super Computation of the NSFC-Guangdong Joint Fund (the second phase).

  17. Hyper-Parallel Tempering Monte Carlo Method and Its Applications

    NASA Astrophysics Data System (ADS)

    Yan, Qiliang; de Pablo, Juan

    2000-03-01

    A new generalized hyper-parallel tempering Monte Carlo molecular simulation method is presented for study of complex fluids. The method is particularly useful for simulation of many-molecule complex systems, where rough energy landscapes and inherently long characteristic relaxation times can pose formidable obstacles to effective sampling of relevant regions of configuration space. The method combines several key elements from expanded ensemble formalisms, parallel-tempering, open ensemble simulations, configurational bias techniques, and histogram reweighting analysis of results. It is found to accelerate significantly the diffusion of a complex system through phase-space. In this presentation, we demonstrate the effectiveness of the new method by implementing it in grand canonical ensembles for a Lennard-Jones fluid, for the restricted primitive model of electrolyte solutions (RPM), and for polymer solutions and blends. Our results indicate that the new algorithm is capable of overcoming the large free energy barriers associated with phase transitions, thereby greatly facilitating the simulation of coexistence properties. It is also shown that the method can be orders of magnitude more efficient than previously available techniques. More importantly, the method is relatively simple and can be incorporated into existing simulation codes with minor efforts.
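    The core parallel-tempering ingredient mentioned in this record — exchanging configurations between replicas simulated at different thermodynamic conditions — can be sketched minimally. This toy version swaps only between inverse temperatures (the full hyper-parallel method also exchanges across chemical potentials and other ensemble parameters) and is an editorial illustration, not the authors' code.

    ```python
    import math
    import random

    def swap_probability(beta_i, beta_j, E_i, E_j):
        """Metropolis acceptance probability for exchanging the configurations
        of two replicas at inverse temperatures beta_i and beta_j with
        instantaneous energies E_i and E_j."""
        return min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))

    def attempt_swap(replicas, i, j, rng=random):
        """replicas: list of dicts with keys 'beta' and 'E'. Attempt to swap
        the configurations (here represented just by their energies) of
        replicas i and j; return True if the swap was accepted."""
        p = swap_probability(replicas[i]['beta'], replicas[j]['beta'],
                             replicas[i]['E'], replicas[j]['E'])
        if rng.random() < p:
            replicas[i]['E'], replicas[j]['E'] = replicas[j]['E'], replicas[i]['E']
            return True
        return False
    ```

    The swap move satisfies detailed balance in the extended ensemble, which is what lets cold replicas escape free-energy barriers by borrowing decorrelated configurations from hot ones.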

  18. Statistical field theory with constraints: Application to critical Casimir forces in the canonical ensemble.

    PubMed

    Gross, Markus; Gambassi, Andrea; Dietrich, S.

    2017-08-01

    The effect of imposing a constraint on a fluctuating scalar order parameter field in a system of finite volume is studied within statistical field theory. The canonical ensemble, corresponding to a fixed total integrated order parameter (e.g., the total number of particles), is obtained as a special case of the theory. A perturbative expansion is developed which allows one to systematically determine the constraint-induced finite-volume corrections to the free energy and to correlation functions. In particular, we focus on the Landau-Ginzburg model in a film geometry (i.e., in a rectangular parallelepiped with a small aspect ratio) with periodic, Dirichlet, or Neumann boundary conditions in the transverse direction and periodic boundary conditions in the remaining, lateral directions. Within the expansion in terms of ε = 4 - d, where d is the spatial dimension of the bulk, the finite-size contribution to the free energy of the confined system and the associated critical Casimir force are calculated to leading order in ε and are compared to the corresponding expressions for an unconstrained (grand canonical) system. The constraint restricts the fluctuations within the system and it accordingly modifies the residual finite-size free energy. The resulting critical Casimir force is shown to depend on whether it is defined by assuming a fixed transverse area or a fixed total volume. In the former case, the constraint is typically found to significantly enhance the attractive character of the force as compared to the grand canonical case. In contrast to the grand canonical Casimir force, which, for supercritical temperatures, vanishes in the limit of thick films, in the canonical case with fixed transverse area the critical Casimir force attains for thick films a negative value for all boundary conditions studied here. Typically, the dependence of the critical Casimir force both on the temperaturelike and on the fieldlike scaling variables is different in the two ensembles.

  19. Statistical field theory with constraints: Application to critical Casimir forces in the canonical ensemble

    NASA Astrophysics Data System (ADS)

    Gross, Markus; Gambassi, Andrea; Dietrich, S.

    2017-08-01

    The effect of imposing a constraint on a fluctuating scalar order parameter field in a system of finite volume is studied within statistical field theory. The canonical ensemble, corresponding to a fixed total integrated order parameter (e.g., the total number of particles), is obtained as a special case of the theory. A perturbative expansion is developed which allows one to systematically determine the constraint-induced finite-volume corrections to the free energy and to correlation functions. In particular, we focus on the Landau-Ginzburg model in a film geometry (i.e., in a rectangular parallelepiped with a small aspect ratio) with periodic, Dirichlet, or Neumann boundary conditions in the transverse direction and periodic boundary conditions in the remaining, lateral directions. Within the expansion in terms of ε = 4 - d, where d is the spatial dimension of the bulk, the finite-size contribution to the free energy of the confined system and the associated critical Casimir force are calculated to leading order in ε and are compared to the corresponding expressions for an unconstrained (grand canonical) system. The constraint restricts the fluctuations within the system and it accordingly modifies the residual finite-size free energy. The resulting critical Casimir force is shown to depend on whether it is defined by assuming a fixed transverse area or a fixed total volume. In the former case, the constraint is typically found to significantly enhance the attractive character of the force as compared to the grand canonical case. In contrast to the grand canonical Casimir force, which, for supercritical temperatures, vanishes in the limit of thick films, in the canonical case with fixed transverse area the critical Casimir force attains for thick films a negative value for all boundary conditions studied here. Typically, the dependence of the critical Casimir force both on the temperaturelike and on the fieldlike scaling variables is different in the two ensembles.

  20. Behavior of the Enthalpy of Adsorption in Nanoporous Materials Close to Saturation Conditions

    PubMed Central

    2017-01-01

    Many important industrial separation processes based on adsorption operate close to saturation. In this regime, the underlying adsorption processes are mostly driven by entropic forces. At equilibrium, the entropy of adsorption is closely related to the enthalpy of adsorption. Thus, studying the behavior of the enthalpy of adsorption as a function of loading is fundamental to understanding separation processes. Unfortunately, close to saturation, the enthalpy of adsorption is hard to measure experimentally and hard to compute in simulations. In simulations, the enthalpy of adsorption is usually obtained from energy/particle fluctuations in the grand-canonical ensemble, but this methodology is hampered by vanishing insertions/deletions at high loading. To investigate the fundamental behavior of the enthalpy and entropy of adsorption at high loading, we develop a simplistic model of adsorption in a channel and show that at saturation the enthalpy of adsorption diverges to large positive values due to repulsive intermolecular interactions. However, there are many systems that can avoid repulsive intermolecular interactions and hence do not show this drastic increase in enthalpy of adsorption close to saturation. We find that the conventional grand-canonical Monte Carlo method is incapable of determining the enthalpy of adsorption from energy/particle fluctuations at high loading. Here, we show that by using the continuous fractional component Monte Carlo, the enthalpy of adsorption close to saturation conditions can be reliably obtained from the energy/particle fluctuations in the grand-canonical ensemble. The best method to study properties at saturation is the NVT energy (local-) slope methodology. PMID:28521093
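    The energy/particle fluctuation route mentioned in this record is, in one common sign convention, the covariance formula ΔH = cov(U, N)/var(N) − k_B T evaluated over grand-canonical samples. The sketch below (with k_B = 1 and synthetic sample lists) illustrates that formula only; it is not the paper's continuous fractional component code, and conventions for the sign of the enthalpy of adsorption vary between references.

    ```python
    def enthalpy_of_adsorption(U_samples, N_samples, T):
        """Fluctuation estimate of the enthalpy of adsorption from
        grand-canonical samples of potential energy U and particle number N:
            dH = cov(U, N) / var(N) - T      (k_B = 1)
        The estimate degrades at high loading, where var(N) -> 0 because
        insertions/deletions become rare -- the problem the record discusses."""
        n = len(U_samples)
        mU = sum(U_samples) / n
        mN = sum(N_samples) / n
        cov_UN = sum(u * m for u, m in zip(U_samples, N_samples)) / n - mU * mN
        var_N = sum(m * m for m in N_samples) / n - mN * mN
        return cov_UN / var_N - T
    ```

    The vanishing of var(N) near saturation is exactly why the record argues for fractional-particle moves or the NVT energy-slope route instead.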

  1. A Formal Derivation of the Gibbs Entropy for Classical Systems Following the Schrodinger Quantum Mechanical Approach

    ERIC Educational Resources Information Center

    Santillan, M.; Zeron, E. S.; Del Rio-Correa, J. L.

    2008-01-01

    In the traditional statistical mechanics textbooks, the entropy concept is first introduced for the microcanonical ensemble and then extended to the canonical and grand-canonical cases. However, in the authors' experience, this procedure makes it difficult for the student to see the bigger picture and, although quite ingenious, the subtleness of…

  2. Complete analysis of ensemble inequivalence in the Blume-Emery-Griffiths model

    NASA Astrophysics Data System (ADS)

    Hovhannisyan, V. V.; Ananikian, N. S.; Campa, A.; Ruffo, S.

    2017-12-01

    We study inequivalence of canonical and microcanonical ensembles in the mean-field Blume-Emery-Griffiths model. This generalizes previous results obtained for the Blume-Capel model. The phase diagram strongly depends on the value of the biquadratic exchange interaction K, the additional feature present in the Blume-Emery-Griffiths model. At small values of K, as for the Blume-Capel model, lines of first- and second-order phase transitions between a ferromagnetic and a paramagnetic phase are present, separated by a tricritical point whose location is different in the two ensembles. At higher values of K the phase diagram changes substantially, with the appearance of a triple point in the canonical ensemble, which does not find any correspondence in the microcanonical ensemble. Moreover, one of the first-order lines that starts from the triple point ends in a critical point, whose position in the phase diagram is different in the two ensembles. This line separates two paramagnetic phases characterized by a different value of the quadrupole moment. These features were not previously studied for other models and substantially enrich the landscape of ensemble inequivalence, identifying new aspects that had been discussed in a classification of phase transitions based on singularity theory. Finally, we discuss ergodicity breaking, which is highlighted by the presence of gaps in the accessible values of magnetization at low energies: it also displays new interesting patterns that are not present in the Blume-Capel model.

  3. Dynamic principle for ensemble control tools.

    PubMed

    Samoletov, A; Vasiev, B

    2017-11-28

    Dynamical equations describing physical systems in contact with a thermal bath are commonly extended by mathematical tools called "thermostats." These tools are designed for sampling ensembles in statistical mechanics. Here we propose a dynamic principle underlying a range of thermostats which is derived using fundamental laws of statistical physics and ensures invariance of the canonical measure. The principle covers both stochastic and deterministic thermostat schemes. Our method has a clear advantage over a range of proposed and widely used thermostat schemes that are based on formal mathematical reasoning. Following the derivation of the proposed principle, we show its generality and illustrate its applications including design of temperature control tools that differ from the Nosé-Hoover-Langevin scheme.

  4. Fast adaptive flat-histogram ensemble to enhance the sampling in large systems

    NASA Astrophysics Data System (ADS)

    Xu, Shun; Zhou, Xin; Jiang, Yi; Wang, YanTing

    2015-09-01

    An efficient novel algorithm was developed to estimate the Density of States (DOS) for large systems by calculating the ensemble means of an extensive physical variable, such as the potential energy, U, in generalized canonical ensembles, to interpolate the interior reverse temperature curve β(U) = dS(U)/dU, where S(U) is the logarithm of the DOS. This curve is computed with different accuracies in different energy regions to capture the dependence of the reverse temperature on U without setting a prior grid in the U space. By combining with a U-compression transformation, we decrease the computational complexity from O(N^(3/2)) in the normal Wang-Landau-type method to O(N^(1/2)) in the current algorithm, where N is the number of degrees of freedom of the system. The efficiency of the algorithm is demonstrated by applying it to Lennard-Jones fluids with various N, along with its ability to find different macroscopic states, including metastable states.
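    The reverse temperature curve β(U) = dS(U)/dU that this record interpolates can be illustrated on a system whose DOS is known exactly. The sketch below uses N independent two-level spins (an editorial choice, not the paper's model) and a central finite difference for the derivative.

    ```python
    from math import lgamma

    def log_dos(N, U):
        """ln Omega(U) = ln C(N, U) for N independent two-level spins with
        per-spin energies 0 and 1, where U is the number of excited spins."""
        return lgamma(N + 1) - lgamma(U + 1) - lgamma(N - U + 1)

    def reverse_temperature(N, U):
        """beta(U) = dS/dU estimated by a central finite difference,
        with S(U) = ln Omega(U) and k_B = 1."""
        return (log_dos(N, U + 1) - log_dos(N, U - 1)) / 2.0
    ```

    For this system β(U) is positive below half filling, crosses zero at U = N/2 where the DOS peaks, and turns negative above it — the textbook negative-temperature regime of a bounded spectrum.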

  5. Thermodynamics of phase-separating nanoalloys: Single particles and particle assemblies

    NASA Astrophysics Data System (ADS)

    Fèvre, Mathieu; Le Bouar, Yann; Finel, Alphonse

    2018-05-01

    The aim of this paper is to investigate the consequences of finite-size effects on the thermodynamics of nanoparticle assemblies and isolated particles. We consider a binary phase-separating alloy with a negligible atomic size mismatch, and equilibrium states are computed using off-lattice Monte Carlo simulations in several thermodynamic ensembles. First, a semi-grand-canonical ensemble is used to describe infinite assemblies of particles with the same size. When decreasing the particle size, we obtain a significant decrease of the solid/liquid transition temperatures as well as a growing asymmetry of the solid-state miscibility gap related to surface segregation effects. Second, a canonical ensemble is used to analyze the thermodynamic equilibrium of finite monodisperse particle assemblies. Using a general thermodynamic formulation, we show that a particle assembly may split into two subassemblies of identical particles. Moreover, if the overall average canonical concentration belongs to a discrete spectrum, the subassembly concentrations are equal to the semi-grand-canonical equilibrium ones. We also show that the equilibrium of a particle assembly with a prescribed size distribution combines a size effect and the fact that a given particle size assembly can adopt two configurations. Finally, we have considered the thermodynamics of an isolated particle to analyze whether a phase separation can be defined within a particle. When studying rather large nanoparticles, we found that the region in which a two-phase domain can be identified inside a particle is well below the bulk phase diagram, but the concentration of the homogeneous core remains very close to the bulk solubility limit.

  6. Modified Nose-Hoover thermostat for solid state for constant temperature molecular dynamics simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Wen-Hwa, E-mail: whchen@pme.nthu.edu.tw; National Applied Research Laboratories, Taipei 10622, Taiwan, ROC; Wu, Chun-Hung

    2011-07-10

    Nose-Hoover (NH) thermostat methods incorporated with molecular dynamics (MD) simulation have been widely used to simulate the instantaneous system temperature and feedback energy in a canonical ensemble. The method simply relates the kinetic energy to the system temperature via the particles' momenta based on the ideal gas law. However, when used in a tightly bound system such as solids, the method may suffer from deriving a lower system temperature and potentially inducing early breaking of atomic bonds at relatively high temperature due to the neglect of the effect of the potential energy of atoms based on solid state physics. In this paper, a modified NH thermostat method is proposed for solid system. The method takes into account the contribution of phonons by virtue of the vibrational energy of lattice and the zero-point energy, derived based on the Debye theory. Proof of the equivalence of the method and the canonical ensemble is first made. The modified NH thermostat is tested on different gold nanocrystals to characterize their melting point and constant volume specific heat, and also their size and temperature dependence. Results show that the modified NH method can give much more comparable results to both the literature experimental and theoretical data than the standard NH. Most importantly, the present model is the only one, among the six thermostat algorithms under comparison, that can accurately reproduce the experimental data and also the T³-law at temperature below the Debye temperature, where the specific heat of a solid at constant volume is proportional to the cube of temperature.
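    The T³-law cited in this record follows from the Debye model, whose specific heat per particle (in units of k_B) is C_V = 9 (T/Θ_D)³ ∫₀^{Θ_D/T} x⁴eˣ/(eˣ−1)² dx. The sketch below evaluates that integral with a plain midpoint rule; it is a check of the limiting behavior only, not the authors' thermostat implementation, and the Debye temperature used is an arbitrary illustrative value.

    ```python
    import math

    def debye_cv(T, theta_D, n_steps=20000):
        """Debye specific heat per particle in units of k_B:
        C_V = 9 (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x/(e^x-1)^2 dx,
        evaluated with the midpoint rule."""
        x_max = theta_D / T
        h = x_max / n_steps
        integral = 0.0
        for i in range(n_steps):
            x = (i + 0.5) * h
            ex = math.exp(x)
            integral += x**4 * ex / (ex - 1.0)**2
        return 9.0 * (T / theta_D)**3 * integral * h

    # Low-T check: halving T should reduce C_V by ~2^3 = 8 (the T^3-law);
    # high-T check: C_V approaches the Dulong-Petit value 3 k_B.
    ratio = debye_cv(10.0, 400.0) / debye_cv(5.0, 400.0)
    ```

    For T ≪ Θ_D the integral saturates at 4π⁴/15, so C_V ∝ T³; for T ≫ Θ_D the integrand reduces to x² and C_V → 3, recovering both limits the record appeals to.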

  7. Exactly solvable random graph ensemble with extensively many short cycles

    NASA Astrophysics Data System (ADS)

    Aguirre López, Fabián; Barucca, Paolo; Fekom, Mathilde; Coolen, Anthony C. C.

    2018-02-01

    We introduce and analyse ensembles of 2-regular random graphs with a tuneable distribution of short cycles. The phenomenology of these graphs depends critically on the scaling of the ensembles’ control parameters relative to the number of nodes. A phase diagram is presented, showing a second order phase transition from a connected to a disconnected phase. We study both the canonical formulation, where the size is large but fixed, and the grand canonical formulation, where the size is sampled from a discrete distribution, and show their equivalence in the thermodynamical limit. We also compute analytically the spectral density, which consists of a discrete set of isolated eigenvalues, representing short cycles, and a continuous part, representing cycles of diverging size.

  8. Limit order book and its modeling in terms of Gibbs Grand-Canonical Ensemble

    NASA Astrophysics Data System (ADS)

    Bicci, Alberto

    2016-12-01

    In the domain of so-called Econophysics, some attempts have already been made to apply the theory of thermodynamics and statistical mechanics to economics and financial markets. In this paper a similar approach is taken from a different perspective, modeling the limit order book and price formation process of a given stock by the grand-canonical Gibbs ensemble for the bid and ask orders. The application of the Bose-Einstein statistics to this ensemble then allows one to derive the distribution of the sell and buy orders as a function of price. As a consequence we can define in a meaningful way expressions for the temperatures of the ensembles of bid orders and of ask orders, which are functions of the minimum bid, maximum ask, and closure prices of the stock as well as of the exchanged volume of shares. It is demonstrated that the difference between the ask and bid order temperatures can be related to the VAO (Volume Accumulation Oscillator), an indicator empirically defined in the technical analysis of stock markets. Furthermore, the derived distributions for aggregate bid and ask orders can be subjected to well-defined validations against real data, giving a falsifiable character to the model.

  9. Ensemble inequivalence and Maxwell construction in the self-gravitating ring model

    NASA Astrophysics Data System (ADS)

    Rocha Filho, T. M.; Silvestre, C. H.; Amato, M. A.

    2018-06-01

    The statement that Gibbs equilibrium ensembles are equivalent is a baseline in many approaches in the context of equilibrium statistical mechanics. However, as is well known, for some physical systems this equivalence may not hold. In this paper we illustrate from first principles the inequivalence between the canonical and microcanonical ensembles for a system with long-range interactions. We use molecular dynamics simulations and Monte Carlo simulations to explore the thermodynamic properties of the self-gravitating ring model and discuss under what conditions the Maxwell construction is applicable.

  10. Absence of high-temperature ballistic transport in the spin-1/2 XXX chain within the grand-canonical ensemble

    NASA Astrophysics Data System (ADS)

    Carmelo, J. M. P.; Prosen, T.

    2017-01-01

    Whether, in the thermodynamic limit, vanishing magnetic field h → 0, and nonzero temperature, the spin stiffness of the spin-1/2 XXX Heisenberg chain is finite or vanishes within the grand-canonical ensemble remains an unsolved and controversial issue, as different approaches yield contradictory results. Here we provide an upper bound on the stiffness and show that within that ensemble it vanishes for h → 0 in the thermodynamic limit of chain length L → ∞, at high temperatures T → ∞. Our approach uses a representation in terms of the L physical spins 1/2. In this representation, every configuration that generates an exact spin-S energy and momentum eigenstate involves a number 2S of unpaired spins 1/2 in multiplet configurations and L − 2S spins 1/2 that are paired within M_sp = L/2 − S spin-singlet pairs. The Bethe-ansatz strings of length n = 1 and n > 1 describe a single unbound spin-singlet pair and a configuration within which n pairs are bound, respectively. In the case of n > 1 pairs this holds both for ideal and deformed strings associated with n complex rapidities with the same real part. The use of such a spin-1/2 representation provides useful physical information on the problem under investigation, in contrast to often less controllable numerical studies. Our results provide strong evidence for the absence of ballistic transport in the spin-1/2 XXX Heisenberg chain in the thermodynamic limit, for high temperatures T → ∞, vanishing magnetic field h → 0, and within the grand-canonical ensemble.

  11. Thermal distributions of first, second and third quantization

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael

    1989-05-01

    We treat first quantized string theory as two-dimensional gravity plus matter. This allows us to compute the two-dimensional density of one-string states by the method of Darwin and Fowler. One can then use second quantized methods to form a grand microcanonical ensemble in which one can compute the density of multistring states of arbitrary momentum and mass. It is argued that modelling an elementary particle as a (d−1)-dimensional object whose internal degrees of freedom are described by a massless d-dimensional gas yields a density of internal states given by σ_d(m) ∼ m^(−a) exp((bm)^(2(d−1)/d)). This indicates that these objects cannot be in thermal equilibrium at any temperature unless d ≤ 2; that is, for a string or a particle. Finally, we discuss the application of the above ideas to four-dimensional gravity and introduce an ensemble of multiuniverse states parameterized by second quantized canonical momenta and particle number.

  12. Canonical phase diagrams of the 1D Falicov-Kimball model at T = 0

    NASA Astrophysics Data System (ADS)

    Gajek, Z.; Jȩdrzejewski, J.; Lemański, R.

    1996-02-01

    The Falicov-Kimball model of spinless quantum electrons hopping on a one-dimensional lattice and of immobile classical ions occupying some lattice sites, with only intrasite coupling between those particles, has been studied at zero temperature by means of well-controlled numerical procedures. For selected values of the unique coupling parameter U, the restricted phase diagrams (based on all the periodic configurations of localized particles (ions) with period not greater than 16 lattice constants, typically) have been constructed in the grand-canonical ensemble. Then these diagrams have been translated into the canonical ensemble. Compared to the diagrams obtained in other studies, ours contain more details; in particular they give better insight into the way the mixtures of periodic phases are formed. Our study has revealed several families of new characteristic phases, like the generalized most homogeneous and the generalized crenel phases, a first example of a structural phase transition, and a tendency to build up an additional symmetry - the hole-particle symmetry with respect to the ions (electrons) only - as U decreases.

  13. Kirkwood-Buff Approach Rescues Overcollapse of a Disordered Protein in Canonical Protein Force Fields.

    PubMed

    Mercadante, Davide; Milles, Sigrid; Fuertes, Gustavo; Svergun, Dmitri I; Lemke, Edward A; Gräter, Frauke

    2015-06-25

    Understanding the function of intrinsically disordered proteins is intimately related to our capacity to correctly sample their conformational dynamics. So far, a gap between experimentally and computationally derived ensembles exists, as simulations show overcompacted conformers. Increasing evidence suggests that the solvent plays a crucial role in shaping the ensembles of intrinsically disordered proteins and has led to several attempts to modify water parameters and thereby favor protein-water over protein-protein interactions. This study tackles the problem from a different perspective, which is the use of the Kirkwood-Buff theory of solutions to reproduce the correct conformational ensemble of intrinsically disordered proteins (IDPs). A protein force field recently developed on such a basis was found to be highly effective in reproducing ensembles for a fragment from the FG-rich nucleoporin 153, with dimensions matching experimental values obtained from small-angle X-ray scattering and single molecule FRET experiments. Kirkwood-Buff theory presents a complementary and fundamentally different approach to the recently developed four-site TIP4P-D water model, both of which can rescue the overcollapse observed in IDPs with canonical protein force fields. As such, our study provides a new route for tackling the deficiencies of current protein force fields in describing protein solvation.

  14. A brief history of the introduction of generalized ensembles to Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Berg, Bernd A.

    2017-03-01

    The most efficient weights for Markov chain Monte Carlo calculations of physical observables are not necessarily those of the canonical ensemble. Generalized ensembles, which do not exist in nature but can be simulated on computers, lead often to a much faster convergence. In particular, they have been used for simulations of first order phase transitions and for simulations of complex systems in which conflicting constraints lead to a rugged free energy landscape. Starting off with the Metropolis algorithm and Hastings' extension, I present a minireview which focuses on the explosive use of generalized ensembles in the early 1990s. Illustrations are given, which range from spin models to peptides.
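    Generalized ensembles replace the canonical weight exp(−βE) in the Metropolis acceptance rule by an arbitrary weight w(E). The minimal update below illustrates that substitution in log form; it is an editorial sketch (the system, proposal, and weight functions are placeholders the caller supplies), not Berg's production code.

    ```python
    import math
    import random

    def metropolis_step(state, energy_fn, log_weight, propose, rng=random):
        """One Metropolis update with a generalized ensemble weight
        w(E) = exp(log_weight(E)). The canonical ensemble is recovered with
        log_weight = lambda E: -beta * E; a multicanonical (flat-histogram)
        run would use log_weight = lambda E: -log_dos_estimate(E)."""
        new = propose(state, rng)
        dlogw = log_weight(energy_fn(new)) - log_weight(energy_fn(state))
        if dlogw >= 0 or rng.random() < math.exp(dlogw):
            return new       # accept the proposed state
        return state         # reject: keep the current state
    ```

    Because only the ratio w(E')/w(E) enters, the same driver samples canonical, multicanonical, or any other generalized ensemble by swapping the `log_weight` function, which is the flexibility the minireview highlights.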

  15. Computer simulation of the carbon activity in austenite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murch, G.E.; Thorn, R.J.

    1979-02-01

    Carbon activity in austenite is described in terms of an Ising-like f.c.c. lattice gas model in which carbon interstitials repel only at the distance of nearest neighbors. A Monte Carlo simulation method in the petit canonical ensemble is employed to calculate directly the carbon activity as a function of composition and temperature. The computed activities are in satisfactory agreement with the experimental data, as is the decomposition of the activity into the partial molar enthalpy and entropy.

  16. Microcanonical thermodynamics and statistical fragmentation of dissipative systems. The topological structure of the N-body phase space

    NASA Astrophysics Data System (ADS)

    Gross, D. H. E.

    1997-01-01

    This review is addressed to colleagues working in different fields of physics who are interested in the concepts of microcanonical thermodynamics, its relation and contrast to ordinary, canonical or grandcanonical thermodynamics, and to get a first taste of the wide area of new applications of thermodynamical concepts like hot nuclei, hot atomic clusters and gravitating systems. Microcanonical thermodynamics describes how the volume of the N-body phase space depends on the globally conserved quantities like energy, angular momentum, mass, charge, etc. Due to these constraints the microcanonical ensemble can behave quite differently from the conventional, canonical or grandcanonical ensemble in many important physical systems. Microcanonical systems become inhomogeneous at first-order phase transitions, or with rising energy, or with external or internal long-range forces like Coulomb, centrifugal or gravitational forces. Thus, fragmentation of the system into a spatially inhomogeneous distribution of various regions of different densities and/or of different phases is a genuine characteristic of the microcanonical ensemble. In these cases which are realized by the majority of realistic systems in nature, the microcanonical approach is the natural statistical description. We investigate this most fundamental form of thermodynamics in four different nontrivial physical cases: (I) Microcanonical phase transitions of first and second order are studied within the Potts model. The total energy per particle is a nonfluctuating order parameter which controls the phase which the system is in. In contrast to the canonical form the microcanonical ensemble allows to tune the system continuously from one phase to the other through the region of coexisting phases by changing the energy smoothly. The configurations of coexisting phases carry important informations about the nature of the phase transition. 
This is all the more remarkable since the canonical ensemble is blind to these configurations. It is shown that the three basic quantities which specify a first-order phase transition - the transition temperature, the latent heat, and the interphase surface entropy - can be well determined for finite systems from the caloric equation of state T(E) in the coexistence region. Already for a lattice of only ~30 × 30 spins, their values are close to those of the corresponding infinite system. The significance of the backbending of the caloric equation of state T(E) is clarified: it is the signal for a first-order phase transition in a finite isolated system. (II) Fragmentation is shown to be a specific and generic phase transition of finite systems. The caloric equation of state T(E) for hot nuclei is calculated. The phase transition towards fragmentation can be unambiguously identified by the anomalies in T(E). As microcanonical thermodynamics is a full N-body theory, it determines all many-body correlations as well. Consequently, various statistical multi-fragment correlations are investigated which give insight into the details of the equilibration mechanism. (III) Fragmentation of neutral and multiply charged atomic clusters is the next example of a realistic application of microcanonical thermodynamics. Our simulation method, microcanonical Metropolis Monte Carlo, combines the explicit microscopic treatment of the fragmentational degrees of freedom with the implicit treatment of the internal degrees of freedom of the fragments, described by the experimental bulk specific heat. This micro-macro approach allows us to study the fragmentation of larger fragments as well. Characteristic details of the fission of multiply charged metal clusters are explained by the different bulk properties. (IV) Finally, the fragmentation of strongly rotating nuclei is discussed as an example of a microcanonical ensemble under the action of a two-dimensional repulsive force.
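
    The microcanonical caloric curve T(E) discussed above can be illustrated at toy scale: for a lattice small enough to enumerate exactly, T(E) follows directly from the state counts Ω(E) via S(E) = ln Ω(E) and T = (dS/dE)^(-1). The sketch below uses a 3×3 periodic q=3 Potts model (the review works with ~30×30 lattices); it is an independent illustration, not the review's code.

```python
from itertools import product
import math

# Exact enumeration of the q=3 Potts model on a 3x3 periodic lattice:
# count the number of microstates Omega(E) at each energy, then estimate
# the microcanonical caloric curve T(E) = (dS/dE)^(-1) with S = ln Omega.
L, q = 3, 3
sites = [(i, j) for i in range(L) for j in range(L)]
# each bond counted once: right and down neighbours with periodic wrap
bonds = [((i, j), ((i + 1) % L, j)) for i, j in sites] + \
        [((i, j), (i, (j + 1) % L)) for i, j in sites]

omega = {}  # energy -> number of microstates
for spins in product(range(q), repeat=L * L):
    s = {site: spins[k] for k, site in enumerate(sites)}
    E = -sum(1 for a, b in bonds if s[a] == s[b])  # E = -#(satisfied bonds)
    omega[E] = omega.get(E, 0) + 1

energies = sorted(omega)
entropy = {E: math.log(omega[E]) for E in energies}
# finite-difference microcanonical temperature between adjacent energy levels
caloric = [((E1 + E2) / 2, (E2 - E1) / (entropy[E2] - entropy[E1]))
           for E1, E2 in zip(energies, energies[1:])
           if entropy[E2] != entropy[E1]]
```

    At this tiny size the backbending of T(E) is washed out, but the construction is exactly the one the review applies to larger lattices.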

  17. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    NASA Astrophysics Data System (ADS)

    Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad

    2014-08-01

    Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive compared to equation-of-state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation-of-state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second-derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility were extrapolated along isochores, isotherms and paths of changing temperature and density from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models is proposed for methane, nitrogen and carbon monoxide.
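
    The reweighting half of the idea is simple to demonstrate: configurations sampled at inverse temperature β₁ can be reused to estimate canonical averages at a neighboring β₂ by weighting each sample with exp(-(β₂-β₁)E). The sketch below applies this to a toy discrete system where the exact answer is available; the paper's full scheme additionally reconstructs the chains, which is not shown here.

```python
import random, math

# Temperature reweighting of canonical samples (minimal sketch).
random.seed(1)

levels = [0.0, 1.0, 2.0, 3.0]          # toy energy levels
beta1, beta2 = 1.0, 1.2

def direct_sample(beta, n):
    # draw exact canonical samples at inverse temperature beta
    w = [math.exp(-beta * e) for e in levels]
    return random.choices(levels, weights=w, k=n)

samples = direct_sample(beta1, 200_000)

# reweighted estimate of <E> at beta2 using beta1 samples
wts = [math.exp(-(beta2 - beta1) * e) for e in samples]
E_rw = sum(w * e for w, e in zip(wts, samples)) / sum(wts)

# exact canonical average at beta2 for comparison
Z2 = sum(math.exp(-beta2 * e) for e in levels)
E_exact = sum(e * math.exp(-beta2 * e) for e in levels) / Z2
```

    Reweighting is reliable only for β₂ near β₁, where the weights stay well conditioned — hence the paper's emphasis on *neighboring* thermodynamic conditions.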

  18. Equilibrium statistical mechanics of self-consistent wave-particle system

    NASA Astrophysics Data System (ADS)

    Elskens, Yves

    2005-10-01

    The equilibrium distribution of N particles and M waves (e.g. Langmuir waves) is analysed in the weak-coupling limit for the self-consistent hamiltonian model H = ∑_r p_r^2/(2m) + ∑_j ω_j I_j + ɛ ∑_{r,j} (β_j/k_j) √(2I_j) cos(k_j x_r − θ_j) [1]. In the canonical ensemble, with temperature T and reservoir velocity v < min_j ω_j/k_j, the wave intensities are almost independent and exponentially distributed, with expectation ⟨I_j⟩ = k_B T/(ω_j − k_j v). These equilibrium predictions are in agreement with Monte Carlo samplings [2] and with direct simulations of the dynamics, indicating equivalence between canonical and microcanonical ensembles. [1] Y. Elskens and D.F. Escande, Microscopic dynamics of plasmas and chaos (IoP Publishing, Bristol, 2003). [2] M.-C. Firpo and F. Leyvraz, 30th EPS Conf. Contr. Fusion and Plasma Phys., P-2.8 (2003).
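
    The exponential intensity statistics quoted above follow from a general fact: a Boltzmann weight linear in an action variable I makes I exponentially distributed, with mean set by the coefficient of I. The check below uses arbitrary parameter values (not the paper's) to verify that generic result numerically.

```python
import random

# Canonical statistics of a single wave intensity (action) I with
# effective energy (omega - k*v) * I: the Boltzmann weight makes I
# exponentially distributed with mean kB*T / (omega - k*v).
random.seed(7)

kB_T, omega, k, v = 1.5, 2.0, 1.0, 0.5     # requires omega - k*v > 0
rate = (omega - k * v) / kB_T              # exponential rate parameter
samples = [random.expovariate(rate) for _ in range(100_000)]
mean_I = sum(samples) / len(samples)
expected = kB_T / (omega - k * v)          # = 1.0 for these values
```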

  19. Higher moments of multiplicity fluctuations in a hadron-resonance gas with exact conservation laws

    NASA Astrophysics Data System (ADS)

    Fu, Jing-Hua

    2017-09-01

    Higher moments of multiplicity fluctuations of hadrons produced in central nucleus-nucleus collisions are studied within the hadron-resonance gas model in the canonical ensemble. Exact conservation of three charges - baryon number, electric charge, and strangeness - is enforced in the large-volume limit. Moments up to the fourth order of various particles are calculated at CERN Super Proton Synchrotron, BNL Relativistic Heavy Ion Collider (RHIC), and CERN Large Hadron Collider energies. The asymptotic fluctuations within a simplified model with only one conserved charge in the canonical ensemble are discussed, where simple analytical expressions for moments of multiplicity distributions can be obtained. Moment products of net-proton, net-kaon, and net-charge distributions in Au + Au collisions at RHIC energies are calculated. The pseudorapidity-coverage dependence of net-charge fluctuations is discussed.
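
    A useful baseline for such net-particle moments is the grand-canonical, independent-emission limit: if N⁺ and N⁻ are independent Poisson variables, the net number N⁺ − N⁻ follows a Skellam distribution with cumulants κ₁ = κ₃ = λ⁺ − λ⁻ and κ₂ = κ₄ = λ⁺ + λ⁻. The canonical-ensemble results of the paper are suppressed relative to this baseline; the sketch below (toy λ values) only verifies the baseline formulas by sampling.

```python
import math, random

# Skellam baseline for net-particle fluctuations from independent
# Poisson multiplicities; verified against the known cumulants.
random.seed(3)

lam_p, lam_m = 4.0, 2.5

def poisson(lam):
    # Knuth's multiplication algorithm, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

n = 200_000
net = [poisson(lam_p) - poisson(lam_m) for _ in range(n)]
m1 = sum(net) / n
m2 = sum((x - m1) ** 2 for x in net) / n   # ~ k2 = lam_p + lam_m
m3 = sum((x - m1) ** 3 for x in net) / n   # ~ k3 = lam_p - lam_m

k1_th, k2_th, k3_th = lam_p - lam_m, lam_p + lam_m, lam_p - lam_m
```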

  20. Generalized Green's function molecular dynamics for canonical ensemble simulations

    NASA Astrophysics Data System (ADS)

    Coluci, V. R.; Dantas, S. O.; Tewary, V. K.

    2018-05-01

    The need for small integration time steps (~1 fs) in conventional molecular dynamics simulations is an important issue that inhibits the study of physical, chemical, and biological systems on realistic timescales. Additionally, to simulate those systems in contact with a thermal bath, thermostatting techniques are usually applied. In this work, we generalize the Green's function molecular dynamics technique to allow simulations within the canonical ensemble. By applying this technique to one-dimensional systems, we were able to correctly describe important thermodynamic properties such as the temperature fluctuations, the temperature distribution, and the velocity autocorrelation function. We show that the proposed technique also allows the use of time steps one order of magnitude larger than those typically used in conventional molecular dynamics simulations. We expect that this technique can be used in long-timescale molecular dynamics simulations.
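
    One of the canonical-ensemble benchmarks mentioned above, the temperature fluctuations, has an exact reference value: for N classical one-dimensional degrees of freedom at temperature T, the instantaneous kinetic temperature fluctuates with relative variance Var(T_kin)/T² = 2/N. The sketch below checks this by direct Maxwell-Boltzmann sampling (units with m = k_B = 1; N and T are arbitrary choices, unrelated to the paper's systems).

```python
import random

# Kinetic-temperature fluctuations of N 1D Maxwell-Boltzmann velocities.
random.seed(11)

N, T, n_frames = 50, 0.8, 20_000
sigma = T ** 0.5                       # sqrt(kB*T/m) with m = kB = 1

t_kin = []
for _ in range(n_frames):
    vs = [random.gauss(0.0, sigma) for _ in range(N)]
    t_kin.append(sum(v * v for v in vs) / N)   # instantaneous temperature

mean_T = sum(t_kin) / n_frames
var_T = sum((t - mean_T) ** 2 for t in t_kin) / n_frames
rel_var = var_T / mean_T ** 2          # should approach 2/N = 0.04
```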

  1. Ensemble Canonical Correlation Prediction of Seasonal Precipitation Over the United States: Raising the Bar for Dynamical Model Forecasts

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.; Kim, Kyu-Myong; Shen, S. P.

    2001-01-01

    This paper presents preliminary results of an ensemble canonical correlation (ECC) prediction scheme developed at the Climate and Radiation Branch, NASA/Goddard Space Flight Center for determining the potential predictability of regional precipitation and for climate downscaling studies. The scheme is tested on seasonal hindcasts of anomalous precipitation over the continental United States using global sea surface temperature (SST) for 1951-2000. To maximize the forecast skill derived from SST, the world ocean is divided into non-overlapping sectors. The canonical SST modes for each sector are used as the predictors for the ensemble hindcasts. Results show that the ECC yields a substantial (10-25%) increase in prediction skill for all regions of the US in every season compared to traditional CCA prediction schemes. For the boreal winter, the tropical Pacific contributes the largest potential predictability to precipitation in the southwestern and southeastern regions, while the North Pacific and the North Atlantic are responsible for the enhanced forecast skill in the Pacific Northwest, the northern Great Plains and the Ohio Valley. Most importantly, the ECC increases skill for summertime precipitation prediction and substantially reduces the spring predictability barrier over all regions of the continental US. Besides SST, the ECC is designed with the flexibility to include any number of predictor fields, such as soil moisture, snow cover and additional local observations. The enhanced ECC forecast skill provides a new benchmark for evaluating dynamical model forecasts.

  2. Free energy and phase equilibria for the restricted primitive model of ionic fluids from Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.

    1994-07-01

    In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T*c=0.053, ρ*c=0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.
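
    The grand canonical insertion/deletion moves at the heart of such simulations are easiest to see for an ideal gas, where the exact answer ⟨N⟩ = zV is known (z the activity). The sketch below shows only the bare GCMC move; the charged-fluid simulations of the paper require pair insertions and the biased sampling the authors develop, neither of which is shown.

```python
import random

# Minimal grand canonical Monte Carlo for an ideal gas: accept an
# insertion with prob min(1, z*V/(N+1)) and a deletion with prob
# min(1, N/(z*V)); the stationary distribution is Poisson(z*V).
random.seed(5)

z, V = 0.7, 20.0          # activity and volume (arbitrary toy values)
N = 0
total, count = 0, 0
for step in range(500_000):
    if random.random() < 0.5:                     # attempt insertion
        if random.random() < min(1.0, z * V / (N + 1)):
            N += 1
    else:                                         # attempt deletion
        if N > 0 and random.random() < min(1.0, N / (z * V)):
            N -= 1
    if step >= 50_000:                            # discard equilibration
        total += N
        count += 1

N_avg = total / count    # should approach z*V = 14.0
```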

  3. Interpolation of property-values between electron numbers is inconsistent with ensemble averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miranda-Quintana, Ramón Alain; Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1; Ayers, Paul W.

    2016-06-28

    In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer numbers of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem's surroundings in determining its properties.
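
    The non-differentiability at issue is easy to make concrete: in the zero-temperature GC ensemble, a fractional average electron number N̄ corresponds to a mixture of the two adjacent integer states, so E(N̄) is piecewise linear and dE/dN jumps at each integer. A minimal numerical illustration, with hypothetical integer-N energies chosen only for the example:

```python
import math

# Zero-T grand-canonical average energy between integer electron numbers:
# linear interpolation between adjacent integer states, hence a derivative
# discontinuity at each integer N.
E_int = {4: -10.0, 5: -13.0, 6: -14.5}   # hypothetical E vs integer N

def E_ensemble(nbar):
    lo = math.floor(nbar)
    w = nbar - lo                         # mixture weight of the upper state
    if w == 0.0:
        return E_int[lo]
    return (1 - w) * E_int[lo] + w * E_int[lo + 1]

# one-sided slopes at N = 5 differ: -3.0 from below, -1.5 from above
left_slope = (E_ensemble(5.0) - E_ensemble(4.999)) / 0.001
right_slope = (E_ensemble(5.001) - E_ensemble(5.0)) / 0.001
```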

  4. A virtual-system coupled multicanonical molecular dynamics simulation: Principles and applications to free-energy landscape of protein-protein interaction with an all-atom model in explicit solvent

    NASA Astrophysics Data System (ADS)

    Higo, Junichi; Umezawa, Koji; Nakamura, Haruki

    2013-05-01

    We propose a novel generalized ensemble method, virtual-system coupled multicanonical molecular dynamics (V-McMD), to enhance conformational sampling of biomolecules expressed by an all-atom model in an explicit solvent. In this method, a virtual system, whose physical quantities can be set arbitrarily, is coupled with the biomolecular system to be studied. The method was applied to a system of an Endothelin-1 derivative, KR-CSH-ET1, known to form an antisymmetric homodimer at room temperature. V-McMD was performed starting from a configuration in which two KR-CSH-ET1 molecules were mutually distant in an explicit solvent. The lowest free-energy state (the most thermally stable state) at room temperature coincides with the experimentally determined native complex structure. This state is separated from other non-native minor clusters by a free-energy barrier, although the barrier disappears at elevated temperature. V-McMD produces a canonical ensemble faster than the conventional McMD method.

  5. Correlations of occupation numbers in the canonical ensemble and application to a Bose-Einstein condensate in a one-dimensional harmonic trap

    NASA Astrophysics Data System (ADS)

    Giraud, Olivier; Grabsch, Aurélien; Texier, Christophe

    2018-05-01

    We study statistical properties of N noninteracting identical bosons or fermions in the canonical ensemble. We derive several general representations for the p-point correlation function of occupation numbers, ⟨n_1 ⋯ n_p⟩. We demonstrate that it can be expressed as a ratio of two p×p determinants involving the (canonical) mean occupations ⟨n_1⟩, ..., ⟨n_p⟩, which can themselves be conveniently expressed in terms of the k-body partition functions (with k ≤ N). We draw some connection with the theory of symmetric functions and obtain an expression of the correlation function in terms of Schur functions. Our findings are illustrated by revisiting the problem of Bose-Einstein condensation in a one-dimensional harmonic trap, for which we get analytical results. We get the moments of the occupation numbers and the correlation between ground-state and excited-state occupancies. In the temperature regime dominated by quantum correlations, the distribution of the ground-state occupancy is shown to be a truncated Gumbel law. The Gumbel law, describing extreme-value statistics, is obtained when the temperature is much smaller than the Bose-Einstein temperature.

  6. Condensate statistics and thermodynamics of weakly interacting Bose gas: Recursion relation approach

    NASA Astrophysics Data System (ADS)

    Dorfman, K. E.; Kim, M.; Svidzinsky, A. A.

    2011-03-01

    We study the condensate statistics and thermodynamics of a weakly interacting Bose gas with a fixed total number N of particles in a cubic box. We find the exact recursion relation for the canonical ensemble partition function. Using this relation, we calculate the distribution function of condensate particles for N=200. We also calculate the distribution function based on a multinomial expansion of the characteristic function. As for the ideal gas, both approaches give exact statistical moments for all temperatures in the framework of the Bogoliubov model. We compare them with the results of the unconstrained canonical-ensemble quasiparticle formalism and the hybrid master equation approach. The present recursion relation can be used for any external potential and boundary conditions. We investigate the temperature dependence of the first few statistical moments of condensate fluctuations, as well as the thermodynamic potentials and heat capacity, analytically and numerically over the whole temperature range.
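
    The flavor of such recursion relations is captured by the well-known ideal-Bose-gas version (the paper derives the analogue for the interacting Bogoliubov gas, which is not reproduced here): Z_N = (1/N) Σ_{k=1..N} z(kβ) Z_{N-k} with Z_0 = 1 and z(β) = Σ_i exp(-βε_i) the one-body partition function.

```python
import math

# Canonical partition function of N ideal bosons by recursion.
def canonical_Z(levels, beta, N):
    z = lambda b: sum(math.exp(-b * e) for e in levels)
    Z = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        Z[n] = sum(z(k * beta) * Z[n - k] for k in range(1, n + 1)) / n
    return Z

levels = [0.0, 1.0, 2.0]   # toy single-particle spectrum
beta = 1.0
Z = canonical_Z(levels, beta, 3)
```

    For N = 2 the recursion reduces to the textbook identity Z_2 = (z(β)² + z(2β))/2, which equals the brute-force sum over symmetrized two-boson states.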

  7. Gaussian Elimination-Based Novel Canonical Correlation Analysis Method for EEG Motion Artifact Removal.

    PubMed

    Roy, Vandana; Shukla, Shailja; Shukla, Piyush Kumar; Rawat, Paresh

    2017-01-01

    Motion during the capture of the electroencephalography (EEG) signal produces artifacts that may reduce the quality of the obtained information. Existing artifact removal methods use canonical correlation analysis (CCA) along with ensemble empirical mode decomposition (EEMD) and the wavelet transform (WT). A new approach is proposed to further improve filtering performance and reduce computation time in highly noisy environments. This new CCA variant is based on the Gaussian elimination method, which calculates the correlation coefficients through a backslash (linear-solve) operation, and is designed for EEG motion-artifact removal. Gaussian elimination solves the linear equations used to calculate the eigenvalues, which reduces the computational cost of the CCA method. The proposed method is tested against currently available artifact removal techniques using EEMD-CCA and the wavelet transform, on both synthetic and real EEG signal data. The proposed artifact removal technique is evaluated using efficiency metrics such as the del signal-to-noise ratio (DSNR), lambda (λ), root mean square error (RMSE), elapsed time, and ROC parameters. The results indicate the suitability of the proposed algorithm for use as a supplement to algorithms currently in use.
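
    For readers unfamiliar with CCA itself: it finds linear combinations of two multivariate signals with maximal mutual correlation. The sketch below is the standard whitening-plus-SVD formulation on synthetic data with a built-in shared component, *not* the paper's Gaussian-elimination variant; the linear solves played here by `eigh`/`svd` are the role the paper assigns to Gaussian elimination and the backslash operator.

```python
import numpy as np

# Minimal CCA: the singular values of the whitened cross-covariance are
# the canonical correlations.
rng = np.random.default_rng(0)

n = 5000
shared = rng.standard_normal(n)                 # common latent component
X = np.column_stack([shared + 0.1 * rng.standard_normal(n),
                     rng.standard_normal(n)])
Y = np.column_stack([shared + 0.1 * rng.standard_normal(n),
                     rng.standard_normal(n)])

Xc, Yc = X - X.mean(0), Y - Y.mean(0)
Cxx, Cyy = Xc.T @ Xc / n, Yc.T @ Yc / n
Cxy = Xc.T @ Yc / n

def inv_sqrt(C):
    # symmetric inverse square root via eigendecomposition
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

rho = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy), compute_uv=False)
# rho[0] ~ 0.99 (shared component), rho[1] ~ 0 (independent noise)
```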

  8. Assessment of the APCC Coupled MME Suite in Predicting the Distinctive Climate Impacts of Two Flavors of ENSO during Boreal Winter

    NASA Technical Reports Server (NTRS)

    Jeong, Hye-In; Lee, Doo Young; Karumuri, Ashok; Ahn, Joong-Bae; Lee, June-Yi; Luo, Jing-Jia; Schemm, Jae-Kyung E.; Hendon, Harry H.; Braganza, Karl; Ham, Yoo-Geun

    2012-01-01

    Forecast skill of the APEC Climate Center (APCC) Multi-Model Ensemble (MME) seasonal forecast system in predicting the two main types of El Nino-Southern Oscillation (ENSO), namely canonical (or cold tongue) and Modoki ENSO, and their regional climate impacts is assessed for boreal winter. The APCC MME is constructed as a simple composite of ensemble forecasts from five independent coupled ocean-atmosphere climate models. Based on a hindcast set targeting boreal winter prediction for the period 1982-2004, we show that the MME can predict and discern the important differences in the patterns of tropical Pacific sea surface temperature anomalies between the canonical and Modoki ENSO one and four months ahead. Importantly, the four-month-lead MME beats the persistence forecast. The MME reasonably predicts the distinct impacts of the canonical ENSO, including the strong winter monsoon rainfall over East Asia, the below-normal rainfall and above-normal temperature over Australia, the anomalously wet conditions across the southern USA and cold conditions over the whole country, and the anomalously dry conditions over South America. However, there are some limitations in capturing its regional impacts, especially over Australasia and tropical South America, at lead times of one and four months. Nonetheless, forecast skills for rainfall and temperature over East Asia and North America during ENSO Modoki are comparable to or slightly higher than those during canonical ENSO events.

  9. Entropy of network ensembles

    NASA Astrophysics Data System (ADS)

    Bianconi, Ginestra

    2009-03-01

    In this paper we generalize the concept of random networks to describe network ensembles with nontrivial features by a statistical mechanics approach. This framework is able to describe undirected and directed network ensembles as well as weighted network ensembles. These networks might have nontrivial community structure or, in the case of networks embedded in a given space, they might have a link probability with a nontrivial dependence on the distance between the nodes. These ensembles are characterized by their entropy, which evaluates the cardinality of networks in the ensemble. In particular, in this paper we define and evaluate the structural entropy, i.e., the entropy of the ensembles of undirected uncorrelated simple networks with given degree sequence. We stress the apparent paradox that scale-free degree distributions are characterized by having small structural entropy while they are so widely encountered in natural, social, and technological complex systems. We propose a solution to the paradox by proving that scale-free degree distributions are the most likely degree distribution with the corresponding value of the structural entropy. Finally, the general framework we present in this paper is able to describe microcanonical ensembles of networks as well as canonical or hidden-variable network ensembles with significant implications for the formulation of network-constructing algorithms.

  10. Stresses and elastic constants of crystalline sodium, from molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiferl, S.K.

    1985-02-01

    The stresses and elastic constants of bcc sodium are calculated by molecular dynamics (MD) for temperatures up to T = 340 K. The total adiabatic potential of a system of sodium atoms is represented by a pseudopotential model. The resulting expression has two terms: a large, strictly volume-dependent potential, plus a sum over ion pairs of a small, volume-dependent two-body potential. The stresses and the elastic constants are given as strain derivatives of the Helmholtz free energy. The resulting expressions involve canonical ensemble averages (and fluctuation averages) of the position and volume derivatives of the potential. An ensemble correction relates the results to MD equilibrium averages. Evaluation of the potential and its derivatives requires the calculation of integrals with infinite upper limits of integration and integrand singularities. Methods for calculating these integrals and estimating the effects of integration errors are developed. A method is given for choosing initial conditions that relax quickly to a desired equilibrium state. Statistical methods developed earlier for MD data are extended to evaluate uncertainties in fluctuation averages and to test for symmetry. 45 refs., 10 figs., 4 tabs.
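
    The fluctuation averages mentioned above follow a general pattern: in the canonical ensemble, second derivatives of the free energy reduce to variances of microscopic quantities. The simplest instance is the heat capacity, C_v = (⟨E²⟩ − ⟨E⟩²)/(k_B T²), verified below against the exact temperature derivative for a two-level system (k_B = 1; the level spacing is an arbitrary toy choice, unrelated to the paper's sodium model).

```python
import math

# Fluctuation formula vs. direct derivative for a two-level system.
def two_level_stats(T, gap=1.0):
    w = math.exp(-gap / T)
    Z = 1.0 + w
    E = gap * w / Z                      # <E>
    E2 = gap ** 2 * w / Z                # <E^2>
    return E, E2 - E ** 2                # mean and variance of the energy

T = 0.7
E, varE = two_level_stats(T)
C_fluct = varE / T ** 2                  # fluctuation-average route

# numerical derivative C = dE/dT for comparison
h = 1e-6
E_hi, _ = two_level_stats(T + h)
E_lo, _ = two_level_stats(T - h)
C_deriv = (E_hi - E_lo) / (2 * h)
```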

  11. Fast Computation of Solvation Free Energies with Molecular Density Functional Theory: Thermodynamic-Ensemble Partial Molar Volume Corrections.

    PubMed

    Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel

    2014-06-05

    Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate molecular solvation free energies while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free energies computed for a data set of 500 organic compounds are of similar quality to those obtained from molecular dynamics free-energy perturbation simulations, at a computational cost reduced by 2-3 orders of magnitude. This requires introducing the proper partial molar volume correction to transform the results from the grand canonical ensemble to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to empirical partial molar volume corrections that have been proposed recently.

  12. Elucidating the effects of adsorbent flexibility on fluid adsorption using simple models and flat-histogram sampling methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Vincent K., E-mail: vincent.shen@nist.gov; Siderius, Daniel W.

    2014-06-28

    Using flat-histogram Monte Carlo methods, we investigate the adsorptive behavior of the square-well fluid in two simple slit-pore-like models intended to capture fundamental characteristics of flexible adsorbent materials. Both models require as input thermodynamic information about the flexible adsorbent material itself. An important component of this work involves formulating the flexible pore models in the appropriate thermodynamic (statistical mechanical) ensembles, namely, the osmotic ensemble and a variant of the grand-canonical ensemble. Two-dimensional probability distributions, which are calculated using flat-histogram methods, provide the information necessary to determine adsorption thermodynamics. For example, we are able to determine precisely adsorption isotherms, (equilibrium) phase transition conditions, limits of stability, and free energies for a number of different flexible adsorbent materials, distinguishable as different inputs into the models. While the models used in this work are relatively simple from a geometric perspective, they yield non-trivial adsorptive behavior, including adsorption-desorption hysteresis solely due to material flexibility and so-called “breathing” of the adsorbent. The observed effects can in turn be tied to the inherent properties of the bare adsorbent. Some of the effects are expected on physical grounds while others arise from a subtle balance of thermodynamic and mechanical driving forces. In addition, the computational strategy presented here can be easily applied to more complex models for flexible adsorbents.

  13. Elucidating the effects of adsorbent flexibility on fluid adsorption using simple models and flat-histogram sampling methods

    NASA Astrophysics Data System (ADS)

    Shen, Vincent K.; Siderius, Daniel W.

    2014-06-01

    Using flat-histogram Monte Carlo methods, we investigate the adsorptive behavior of the square-well fluid in two simple slit-pore-like models intended to capture fundamental characteristics of flexible adsorbent materials. Both models require as input thermodynamic information about the flexible adsorbent material itself. An important component of this work involves formulating the flexible pore models in the appropriate thermodynamic (statistical mechanical) ensembles, namely, the osmotic ensemble and a variant of the grand-canonical ensemble. Two-dimensional probability distributions, which are calculated using flat-histogram methods, provide the information necessary to determine adsorption thermodynamics. For example, we are able to determine precisely adsorption isotherms, (equilibrium) phase transition conditions, limits of stability, and free energies for a number of different flexible adsorbent materials, distinguishable as different inputs into the models. While the models used in this work are relatively simple from a geometric perspective, they yield non-trivial adsorptive behavior, including adsorption-desorption hysteresis solely due to material flexibility and so-called "breathing" of the adsorbent. The observed effects can in turn be tied to the inherent properties of the bare adsorbent. Some of the effects are expected on physical grounds while others arise from a subtle balance of thermodynamic and mechanical driving forces. In addition, the computational strategy presented here can be easily applied to more complex models for flexible adsorbents.

  14. Determination of the critical micelle concentration in simulations of surfactant systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, Andrew P.; Panagiotopoulos, Athanassios Z., E-mail: azp@princeton.edu

    Alternative methods for determining the critical micelle concentration (cmc) are investigated using canonical and grand canonical Monte Carlo simulations of a lattice surfactant model. A common measure of the cmc is the “free” (unassociated) surfactant concentration in the presence of micellar aggregates. Many prior simulations of micellizing systems have observed a decrease in the free surfactant concentration with overall surfactant loading for both ionic and nonionic surfactants, contrary to theoretical expectations from mass-action models of aggregation. In the present study, we investigate a simple lattice nonionic surfactant model in implicit solvent, for which highly reproducible simulations are possible in both the canonical (NVT) and grand canonical (μVT) ensembles. We confirm the previously observed decrease of free surfactant concentration at higher overall loadings and propose an algorithm for the precise calculation of the excluded volume and effective concentration of unassociated surfactant molecules in the accessible volume of the solution. We find that the cmc can be obtained by correcting the free surfactant concentration for volume exclusion effects resulting from the presence of micellar aggregates. We also develop an improved method for determination of the cmc based on the maximum in curvature of the osmotic pressure curve determined from μVT simulations. Excellent agreement in cmc and other micellar properties between NVT and μVT simulations of different system sizes is observed. The methodological developments in this work are broadly applicable to simulations of aggregating systems using any type of surfactant model (atomistic/coarse-grained) or solvent description (explicit/implicit).
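
    The maximum-in-curvature criterion mentioned above is easy to sketch numerically: take the osmotic-pressure curve Π(ρ), compute its second derivative on a grid, and read off the concentration where |Π''| peaks. Below, a smoothed piecewise-linear toy curve with a kink near ρ = 1.0 stands in for μVT simulation output (all parameter values are invented for the illustration).

```python
import math

# Locate the cmc as the maximum-curvature point of a toy Pi(rho) curve.
rhos = [0.02 * i for i in range(1, 100)]

def Pi(rho, cmc=1.0, width=0.05):
    # slope 1 below the cmc, slope 0.1 above, smoothed over 'width'
    return rho - 0.9 * width * math.log(1.0 + math.exp((rho - cmc) / width))

h = 0.02                              # grid spacing
d2 = []
for i in range(1, len(rhos) - 1):
    # central second difference, |Pi''|
    d2.append(abs(Pi(rhos[i + 1]) - 2 * Pi(rhos[i]) + Pi(rhos[i - 1])) / h ** 2)

cmc_est = rhos[1 + max(range(len(d2)), key=d2.__getitem__)]
```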

  15. Chemical Frustration in the Protein Folding Landscape: Grand Canonical Ensemble Simulations of Cytochrome c

    PubMed Central

    Weinkam, Patrick; Romesberg, Floyd E.; Wolynes, Peter G.

    2010-01-01

    A grand canonical formalism is developed to combine discrete simulations for chemically distinct species in equilibrium. Each simulation is based on a perturbed funneled landscape. The formalism is illustrated using the alkaline-induced transitions of cytochrome c as observed by FTIR spectroscopy and with various other experimental approaches. The grand canonical simulation method accounts for the acid/base chemistry of deprotonation, the inorganic chemistry of heme ligation and misligation, and the minimally frustrated folding energy landscape, thus elucidating the physics of protein folding involved with an acid/base titration of a protein. The formalism combines simulations for each of the relevant chemical species, varying by protonation and ligation states. In contrast to models based on perfectly funneled energy landscapes that contain only contacts found in the native structure, the current study introduces “chemical frustration” from deprotonation and misligation that gives rise to many intermediates at alkaline pH. While the nature of these intermediates cannot be easily inferred from available experimental data, the current study provides specific structural details of these intermediates thus extending our understanding of how cytochrome c changes with increasing pH. The results demonstrate the importance of chemical frustration for understanding biomolecular energy landscapes. PMID:19199810

  16. Finite-size anomalies of the Drude weight: Role of symmetries and ensembles

    NASA Astrophysics Data System (ADS)

    Sánchez, R. J.; Varma, V. K.

    2017-12-01

    We revisit the numerical problem of computing the high-temperature spin stiffness, or Drude weight, D of the spin-1/2 XXZ chain using exact diagonalization to systematically analyze its dependence on system symmetries and ensemble. Within the canonical ensemble and for states with zero total magnetization, we find D vanishes exactly due to spin-inversion symmetry for all but the anisotropies Δ̃_{M,N} = cos(πM/N), with N, M ∈ ℤ⁺ coprime and N > M, provided system sizes L ≥ 2N, for which states with different spin-inversion signature become degenerate due to the underlying sl₂ loop-algebra symmetry. All these loop-algebra degenerate states carry finite currents, which we conjecture [based on data from the system sizes and anisotropies Δ̃_{M,N} (with N

  17. The Brandeis Dice Problem and Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    van Enk, Steven J.

    2014-11-01

    Jaynes invented the Brandeis Dice Problem as a simple illustration of the MaxEnt (Maximum Entropy) procedure that he had demonstrated to work so well in Statistical Mechanics. I construct here two alternative solutions to his toy problem. One, like Jaynes' solution, uses MaxEnt and yields an analog of the canonical ensemble, but at a different level of description. The other uses Bayesian updating and yields an analog of the micro-canonical ensemble. Both, unlike Jaynes' solution, yield error bars, whose operational merits I discuss. These two alternative solutions are not equivalent for the original Brandeis Dice Problem, but become so in what must, therefore, count as the analog of the thermodynamic limit, M-sided dice with M → ∞. Whereas the mathematical analogies between the dice problem and Stat Mech are quite close, there are physical properties that the former lacks but that are crucial to the workings of the latter. Stat Mech is more than just MaxEnt.
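The MaxEnt solution referenced in this abstract has a compact numerical form: maximize entropy subject to a fixed mean face value, which yields exponential ("canonical") probabilities p_n ∝ exp(-λn), with λ fixed by the constraint. A minimal sketch, assuming the 4.5 mean of the original Brandeis problem; all function names are ours:

```python
import math

def maxent_dice(mean_target=4.5, faces=6, lo=-10.0, hi=10.0, tol=1e-12):
    """MaxEnt solution of the Brandeis dice problem: probabilities of the
    form p_n ~ exp(-lam * n), a discrete analog of the canonical ensemble,
    with lam chosen by bisection so the mean face value matches the datum."""
    def mean(lam):
        w = [math.exp(-lam * n) for n in range(1, faces + 1)]
        return sum(n * wn for n, wn in zip(range(1, faces + 1), w)) / sum(w)

    # mean(lam) is strictly decreasing in lam (its derivative is -Var(n))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) > mean_target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * n) for n in range(1, faces + 1)]
    Z = sum(w)
    return [wn / Z for wn in w]
```

For a target mean of 4.5 (above the uniform value 3.5) the multiplier λ comes out negative, tilting probability toward the high faces, the dice-problem analog of a negative temperature.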

  18. A mathematical theorem as the basis for the second law: Thomson's formulation applied to equilibrium

    NASA Astrophysics Data System (ADS)

    Allahverdyan, A. E.; Nieuwenhuizen, Th. M.

    2002-03-01

    There are several formulations of the second law, and they may, in principle, have different domains of validity. Here a simple mathematical theorem is proven which serves as the most general basis for the second law, namely the Thomson formulation (“cyclic changes cost energy”), applied to equilibrium. This formulation of the second law is a property akin to particle conservation (normalization of the wave function). It has been strictly proven for a canonical ensemble, and made plausible for a micro-canonical ensemble. As the derivation does not assume time-inversion invariance, it is applicable to situations where persistent currents occur. This clear-cut derivation allows to revive the “no perpetuum mobile in equilibrium” formulation of the second law and to criticize some assumptions which are widespread in literature. The result puts recent results devoted to foundations and limitations of the second law in proper perspective, and structurizes this relatively new field of research.

  19. Unimodular lattice triangulations as small-world and scale-free random graphs

    NASA Astrophysics Data System (ADS)

    Krüger, B.; Schmidt, E. M.; Mecke, K.

    2015-02-01

Real-world networks, e.g., social-relation or world-wide-web graphs, exhibit both small-world and scale-free behaviour. We interpret lattice triangulations as planar graphs by identifying triangulation vertices with graph nodes and one-dimensional simplices with edges. Since these triangulations are ergodic with respect to a certain Pachner flip, applying different Monte Carlo simulations enables us to calculate average properties of random triangulations, as well as canonical ensemble averages, using an energy functional that is approximately the variance of the degree distribution. All considered triangulations have clustering coefficients comparable with real-world graphs; for the canonical ensemble there are inverse temperatures with small shortest path length independent of system size. Tuning the inverse temperature to a quasi-critical value leads to an indication of scale-free behaviour for degrees k ≥ 5. Using triangulations as a random graph model can improve the understanding of real-world networks, especially if the actual distance of the embedded nodes becomes important.

  20. The melting point of lithium: an orbital-free first-principles molecular dynamics study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Mohan; Hung, Linda; Huang, Chen

    2013-08-25

The melting point of liquid lithium near zero pressure is studied with large-scale orbital-free first-principles molecular dynamics (OF-FPMD) in the isobaric-isothermal ensemble. Here, we adopt the Wang-Govind-Carter (WGC) functional as our kinetic energy density functional (KEDF) and construct a bulk-derived local pseudopotential (BLPS) for Li. Our simulations employ both the ‘heat-until-melts’ method and the coexistence method. We predict 465 K as an upper bound of the melting point of Li from the ‘heat-until-melts’ method, while we predict 434 K as the melting point of Li from the coexistence method. These values compare well with the experimental melting point of 453 K at zero pressure. Furthermore, we calculate a few important properties of liquid Li including the diffusion coefficients, pair distribution functions, static structure factors, and compressibilities of Li at 470 K and 725 K in the canonical ensemble. These theoretical results show good agreement with known experimental results, suggesting that OF-FPMD using a non-local KEDF and a BLPS is capable of accurately describing liquid metals.

  1. Formulation of state projected centroid molecular dynamics: Microcanonical ensemble and connection to the Wigner distribution.

    PubMed

    Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas

    2017-06-07

    A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.

  2. Formulation of state projected centroid molecular dynamics: Microcanonical ensemble and connection to the Wigner distribution

    NASA Astrophysics Data System (ADS)

    Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas

    2017-06-01

    A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.

  3. Exploring first-order phase transitions with population annealing

    NASA Astrophysics Data System (ADS)

    Barash, Lev Yu.; Weigel, Martin; Shchur, Lev N.; Janke, Wolfhard

    2017-03-01

    Population annealing is a hybrid of sequential and Markov chain Monte Carlo methods geared towards the efficient parallel simulation of systems with complex free-energy landscapes. Systems with first-order phase transitions are among the problems in computational physics that are difficult to tackle with standard methods such as local-update simulations in the canonical ensemble, for example with the Metropolis algorithm. It is hence interesting to see whether such transitions can be more easily studied using population annealing. We report here our preliminary observations from population annealing runs for the two-dimensional Potts model with q > 4, where it undergoes a first-order transition.
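The procedure described above, alternating an annealing step, reweighting/resampling of the population, and local Metropolis updates, can be sketched for a small 2D Potts model. This is an illustrative toy under our own parameter choices and multinomial resampling scheme, not the paper's implementation:

```python
import math
import random

def potts_energy(s, L):
    """E = -sum over nearest-neighbor pairs of delta(s_i, s_j), periodic."""
    E = 0
    for i in range(L):
        for j in range(L):
            E -= (s[i][j] == s[(i + 1) % L][j]) + (s[i][j] == s[i][(j + 1) % L])
    return E

def metropolis_sweep(s, L, q, beta, rng):
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        old, new = s[i][j], rng.randrange(q)
        dE = 0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = s[(i + di) % L][(j + dj) % L]
            dE += (old == nb) - (new == nb)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i][j] = new

def resample_index(weights, rng):
    r, acc = rng.random() * sum(weights), 0.0
    for k, w in enumerate(weights):
        acc += w
        if r <= acc:
            return k
    return len(weights) - 1

def population_annealing(L=4, q=5, R=40, d_beta=0.2, n_beta=8, sweeps=3, seed=1):
    rng = random.Random(seed)
    pop = [[[rng.randrange(q) for _ in range(L)] for _ in range(L)]
           for _ in range(R)]
    beta = 0.0
    for _ in range(n_beta):
        beta += d_beta
        # reweighting step: resample replicas with weight exp(-d_beta * E)
        E = [potts_energy(s, L) for s in pop]
        e_min = min(E)
        w = [math.exp(-d_beta * (e - e_min)) for e in E]
        pop = [[row[:] for row in pop[resample_index(w, rng)]] for _ in range(R)]
        # equilibration step: Metropolis sweeps at the new temperature
        for s in pop:
            for _ in range(sweeps):
                metropolis_sweep(s, L, q, beta, rng)
    return pop, beta
```

With q = 5 the model has a first-order transition near β ≈ 1.17, which is exactly the regime where resampling helps a local-update population cross the free-energy barrier.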

  4. Molecular dynamics of liquid crystals

    NASA Astrophysics Data System (ADS)

    Sarman, Sten

    1997-02-01

    We derive Green-Kubo relations for the viscosities of a nematic liquid crystal. The derivation is based on the application of a Gaussian constraint algorithm that makes the director angular velocity of a liquid crystal a constant of motion. Setting this velocity equal to zero means that a director-based coordinate system becomes an inertial frame and that the constraint torques do not do any work on the system. The system consequently remains in equilibrium. However, one generates a different equilibrium ensemble. The great advantage of this ensemble is that the Green-Kubo relations for the viscosities become linear combinations of time correlation function integrals, whereas they are complicated rational functions in the conventional canonical ensemble. This facilitates the numerical evaluation of the viscosities by molecular dynamics simulations.

  5. Correlated Hopping in the 1D Falicov--Kimball Model

    NASA Astrophysics Data System (ADS)

    Gajek, Z.; Lemanski, R.

    2001-10-01

Ground state phase diagrams in the canonical ensemble of the one-dimensional Falicov-Kimball model (FKM) with correlated hopping are presented for several values of the model parameters. Compared to the conventional FKM, the diagrams exhibit a loss of particle-hole symmetry.

  6. Computational scheme for pH-dependent binding free energy calculation with explicit solvent.

    PubMed

    Lee, Juyong; Miller, Benjamin T; Brooks, Bernard R

    2016-01-01

    We present a computational scheme to compute the pH-dependence of binding free energy with explicit solvent. Despite the importance of pH, the effect of pH has been generally neglected in binding free energy calculations because of a lack of accurate methods to model it. To address this limitation, we use a constant-pH methodology to obtain a true ensemble of multiple protonation states of a titratable system at a given pH and analyze the ensemble using the Bennett acceptance ratio (BAR) method. The constant pH method is based on the combination of enveloping distribution sampling (EDS) with the Hamiltonian replica exchange method (HREM), which yields an accurate semi-grand canonical ensemble of a titratable system. By considering the free energy change of constraining multiple protonation states to a single state or releasing a single protonation state to multiple states, the pH dependent binding free energy profile can be obtained. We perform benchmark simulations of a host-guest system: cucurbit[7]uril (CB[7]) and benzimidazole (BZ). BZ experiences a large pKa shift upon complex formation. The pH-dependent binding free energy profiles of the benchmark system are obtained with three different long-range interaction calculation schemes: a cutoff, the particle mesh Ewald (PME), and the isotropic periodic sum (IPS) method. Our scheme captures the pH-dependent behavior of binding free energy successfully. Absolute binding free energy values obtained with the PME and IPS methods are consistent, while cutoff method results are off by 2 kcal mol(-1) . We also discuss the characteristics of three long-range interaction calculation methods for constant-pH simulations. © 2015 The Protein Society.
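The Bennett acceptance ratio (BAR) step of this analysis can be illustrated in isolation. The sketch below solves the standard BAR self-consistency equation by bisection for β = 1 and equal sample sizes; it is a generic toy under our own naming, not the authors' implementation:

```python
import math

def bar_delta_f(w_forward, w_reverse, lo=-50.0, hi=50.0, tol=1e-9):
    """Bennett acceptance ratio (beta = 1, equal sample sizes): solve
        sum_F f(w_F - dF) = sum_R f(w_R + dF),  f(x) = 1/(1 + exp(x)),
    for the free energy difference dF by bisection. w_forward holds U1 - U0
    on state-0 samples; w_reverse holds U0 - U1 on state-1 samples."""
    def f(x):
        return 1.0 / (1.0 + math.exp(min(50.0, max(-50.0, x))))

    def g(dF):
        return (sum(f(w - dF) for w in w_forward)
                - sum(f(w + dF) for w in w_reverse))

    # g is increasing in dF, so the root can be bisected
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For two harmonic "states" U0 = x²/2 and U1 = 2x² at β = 1 the exact answer is ΔF = ln 2, which the estimator recovers from sampled energy differences.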

  7. Thermodynamic characterization of synchronization-optimized oscillator networks

    NASA Astrophysics Data System (ADS)

    Yanagita, Tatsuo; Ichinomiya, Takashi

    2014-12-01

We consider a canonical ensemble of synchronization-optimized networks of identical oscillators under external noise. By performing a Markov chain Monte Carlo simulation using the Kirchhoff index, i.e., the sum of the inverse eigenvalues of the Laplacian matrix (as a graph Hamiltonian of the network), we construct more than 1000 different synchronization-optimized networks. We then show that the transition from star to core-periphery structure depends on the connectivity of the network, and is characterized by the node degree variance of the synchronization-optimized ensemble. We find that thermodynamic properties such as heat capacity show anomalies for sparse networks.

  8. Critical point and phase behavior of the pure fluid and a Lennard-Jones mixture

    NASA Astrophysics Data System (ADS)

    Potoff, Jeffrey J.; Panagiotopoulos, Athanassios Z.

    1998-12-01

Monte Carlo simulations in the grand canonical ensemble were used to obtain liquid-vapor coexistence curves and critical points of the pure fluid and a binary mixture of Lennard-Jones particles. Critical parameters were obtained from mixed-field finite-size scaling analysis and subcritical coexistence data from histogram reweighting methods. The critical parameters of the untruncated Lennard-Jones potential were obtained as T_c* = 1.3120 ± 0.0007, ρ_c* = 0.316 ± 0.001, and p_c* = 0.1279 ± 0.0006. Our results for the critical temperature and pressure are not in agreement with the recent study of Caillol [J. Chem. Phys. 109, 4885 (1998)] on a four-dimensional hypersphere. Mixture parameters were ε₁ = 2ε₂ and σ₁ = σ₂, with Lorentz-Berthelot combining rules for the unlike-pair interactions. We determined the critical point at T* = 1.0 and pressure-composition diagrams at three temperatures. Our results have much smaller statistical uncertainties relative to comparable Gibbs ensemble simulations.
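The elementary moves behind such grand canonical simulations are particle insertions and deletions accepted with Metropolis probabilities. A minimal sketch for a non-interacting gas, where the acceptance rules reduce to a known closed form and ⟨N⟩ equals the activity times the volume (illustrative only; the Lennard-Jones case adds Boltzmann factors of the interaction energy):

```python
import random

def gcmc_ideal_gas(z_times_V=5.0, steps=50000, seed=7):
    """Grand canonical MC for a non-interacting gas: only insertions and
    deletions, with all configurational energies zero.

    Acceptance probabilities (zV = activity * volume):
      insert: min(1, zV / (N + 1))
      delete: min(1, N / zV)
    The stationary distribution is Poisson with mean zV, so the running
    average of N should converge to zV."""
    rng = random.Random(seed)
    N, total = 0, 0
    for _ in range(steps):
        if rng.random() < 0.5:            # attempt an insertion
            if rng.random() < min(1.0, z_times_V / (N + 1)):
                N += 1
        elif N > 0:                        # attempt a deletion
            if rng.random() < min(1.0, N / z_times_V):
                N -= 1
        total += N
    return total / steps
```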

  9. Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics.

    PubMed

    Martínez, Enrique; Cawkwell, Marc J; Voter, Arthur F; Niklasson, Anders M N

    2015-04-21

    Extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that the simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. The thermostated extended Lagrangian framework thus offers an accurate approach to sample processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.
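Of the three thermostats compared in this abstract, Langevin dynamics is the easiest to sketch in isolation. The toy below uses a BAOAB-style splitting for a unit-mass harmonic oscillator and checks that it samples the canonical distribution, ⟨x²⟩ = kT; the integrator and parameters are our choices, unrelated to the authors' code:

```python
import math
import random

def langevin_baoab(n_steps=200000, dt=0.1, gamma=1.0, kT=1.0, seed=5):
    """BAOAB Langevin integrator for a unit-mass harmonic oscillator,
    U(x) = x^2/2; a minimal stand-in for a canonical (NVT) thermostat."""
    rng = random.Random(seed)
    x, p = 0.0, 0.0
    c1 = math.exp(-gamma * dt)            # exact friction over one step
    c2 = math.sqrt(kT * (1.0 - c1 * c1))  # matching noise amplitude
    sum_x2 = 0.0
    for _ in range(n_steps):
        p -= 0.5 * dt * x                          # B: half kick, F = -x
        x += 0.5 * dt * p                          # A: half drift
        p = c1 * p + c2 * rng.gauss(0.0, 1.0)      # O: friction + noise
        x += 0.5 * dt * p                          # A: half drift
        p -= 0.5 * dt * x                          # B: half kick
        sum_x2 += x * x
    return sum_x2 / n_steps
```

The long-time average of x² should approach kT (here 1.0) up to a small O(dt²) discretization bias and statistical noise.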

  10. A first principles calculation and statistical mechanics modeling of defects in Al-H system

    NASA Astrophysics Data System (ADS)

    Ji, Min; Wang, Cai-Zhuang; Ho, Kai-Ming

    2007-03-01

The behavior of defects and hydrogen in Al was investigated by first-principles calculations and statistical mechanics modeling. The formation energies of different defects in the Al+H system, such as the Al vacancy, interstitial H, and multiple H atoms in an Al vacancy, were calculated by the first-principles method. The defect concentration in thermodynamical equilibrium was studied by total free energy calculation, including configurational entropy and defect-defect interaction, from the low-concentration limit to the hydride limit. In our grand canonical ensemble model, the hydrogen chemical potential under different environments plays an important role in determining the defect concentrations and properties in the Al-H system.
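In the dilute limit, the grand canonical picture described above reduces to a Boltzmann expression for the equilibrium defect concentration, with the hydrogen chemical potential lowering the effective formation energy of H-containing defects. A minimal sketch; the formation energy and temperature values here are invented for illustration:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def defect_concentration(e_form, n_H, mu_H, T):
    """Dilute-limit equilibrium concentration (per site) of a defect that
    binds n_H hydrogen atoms, exchanged with a reservoir at chemical
    potential mu_H (all energies in eV):

        c = exp(-(E_form - n_H * mu_H) / (kB * T))
    """
    return math.exp(-(e_form - n_H * mu_H) / (K_B * T))
```

Raising μ_H (e.g., a more H-rich environment) increases the concentration of every H-containing defect exponentially, which is the mechanism the abstract points to.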

  11. Physical Premium Principle: A New Way for Insurance Pricing

    NASA Astrophysics Data System (ADS)

    Darooneh, Amir H.

    2005-03-01

In our previous work we suggested a way of computing the non-life insurance premium. The probable surplus of the insurer company is assumed to be distributed according to the canonical ensemble theory. The Esscher premium principle appeared as a special case. The difference between our method and traditional principles for premium calculation was shown by simulation. Here we construct a theoretical foundation for the main assumption of our method; in this respect we present a new (physical) definition of economic equilibrium. This approach lets us apply the maximum entropy principle to economic systems. We also extend our method to deal with the problem of premium calculation for correlated risk categories. Like the Bühlmann economic premium principle, our method considers the effect of the market on the premium, but in a different way.

  12. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics — Monte Carlo Canonical Propagation Algorithm

    PubMed Central

    Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît

    2016-01-01

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
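The key idea above, propagating with a cheap Hamiltonian and enforcing the Boltzmann distribution of the expensive one through a Metropolis test, can be illustrated with a one-dimensional toy. Here U1 is the "expensive" target and U0 a deliberately mistuned surrogate; because velocity Verlet is reversible and volume-preserving, accepting with exp(-βΔH1) samples exp(-βU1) exactly. This is a sketch of the hybrid MD-MC principle, not the DHMTS algorithm itself:

```python
import math
import random

def surrogate_hmc(n_samples=20000, dt=0.2, n_steps=10, seed=3):
    """Hybrid MD-MC with a dual Hamiltonian (toy): trajectories are driven
    by a cheap surrogate potential U0, and a Metropolis test on the
    expensive potential U1 enforces the Boltzmann distribution of U1.

    U1(x) = 0.5 x^2  -- "expensive" reference; target density ~ exp(-U1)
    U0(x) = 0.6 x^2  -- cheap surrogate; only its force drives the dynamics
    """
    rng = random.Random(seed)
    U1 = lambda x: 0.5 * x * x
    F0 = lambda x: -1.2 * x
    x, n_acc, sum_x2 = 0.0, 0, 0.0
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)              # fresh momentum, beta = 1
        h_old = U1(x) + 0.5 * p * p
        xn, pn = x, p
        pn += 0.5 * dt * F0(xn)              # velocity Verlet under U0 only
        for k in range(n_steps):
            xn += dt * pn
            if k < n_steps - 1:
                pn += dt * F0(xn)
        pn += 0.5 * dt * F0(xn)
        h_new = U1(xn) + 0.5 * pn * pn
        # Metropolis test with the *expensive* Hamiltonian removes the bias
        # from integrating with the cheap one (the map is reversible and
        # volume-preserving, so detailed balance holds).
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            x = xn
            n_acc += 1
        sum_x2 += x * x
    return sum_x2 / n_samples, n_acc / n_samples
```

Since the target is the unit Gaussian, ⟨x²⟩ should converge to 1, and the acceptance rate stays high because U0 is close to U1.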

  13. Microcanonical and resource-theoretic derivations of the thermal state of a quantum system with noncommuting charges

    PubMed Central

    Yunger Halpern, Nicole; Faist, Philippe; Oppenheim, Jonathan; Winter, Andreas

    2016-01-01

The grand canonical ensemble lies at the core of quantum and classical statistical mechanics. A small system thermalizes to this ensemble while exchanging heat and particles with a bath. A quantum system may exchange quantities represented by operators that fail to commute. Whether such a system thermalizes and what form the thermal state has are questions about truly quantum thermodynamics. Here we investigate this thermal state from three perspectives. First, we introduce an approximate microcanonical ensemble. If this ensemble characterizes the system-and-bath composite, tracing out the bath yields the system's thermal state. Second, we argue that this state is the expected equilibrium point of typical dynamics. Finally, we define a resource-theory model for thermodynamic exchanges of noncommuting observables. Complete passivity—the inability to extract work from equilibrium states—implies the thermal state's form, too. Our work opens new avenues into equilibrium in the presence of quantum noncommutation. PMID:27384494

  14. Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez, Enrique; Cawkwell, Marc J.; Voter, Arthur F.

Here, extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that the simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. The thermostated extended Lagrangian framework thus offers an accurate approach to sample processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.

  15. Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics

    DOE PAGES

    Martínez, Enrique; Cawkwell, Marc J.; Voter, Arthur F.; ...

    2015-04-21

Here, extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that the simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. The thermostated extended Lagrangian framework thus offers an accurate approach to sample processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.

  16. Development of isothermal-isobaric replica-permutation method for molecular dynamics and Monte Carlo simulations and its application to reveal temperature and pressure dependence of folded, misfolded, and unfolded states of chignolin

    NASA Astrophysics Data System (ADS)

    Yamauchi, Masataka; Okumura, Hisashi

    2017-11-01

    We developed a two-dimensional replica-permutation molecular dynamics method in the isothermal-isobaric ensemble. The replica-permutation method is a better alternative to the replica-exchange method. It was originally developed in the canonical ensemble. This method employs the Suwa-Todo algorithm, instead of the Metropolis algorithm, to perform permutations of temperatures and pressures among more than two replicas so that the rejection ratio can be minimized. We showed that the isothermal-isobaric replica-permutation method performs better sampling efficiency than the isothermal-isobaric replica-exchange method and infinite swapping method. We applied this method to a β-hairpin mini protein, chignolin. In this simulation, we observed not only the folded state but also the misfolded state. We calculated the temperature and pressure dependence of the fractions on the folded, misfolded, and unfolded states. Differences in partial molar enthalpy, internal energy, entropy, partial molar volume, and heat capacity were also determined and agreed well with experimental data. We observed a new phenomenon that misfolded chignolin becomes more stable under high-pressure conditions. We also revealed this mechanism of the stability as follows: TYR2 and TRP9 side chains cover the hydrogen bonds that form a β-hairpin structure. The hydrogen bonds are protected from the water molecules that approach the protein as the pressure increases.
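Both replica exchange and replica permutation build on the same pairwise acceptance probability; in the isothermal-isobaric ensemble it involves both energies and volumes. A minimal sketch of that building block (the Suwa-Todo permutation machinery of the paper is not reproduced here):

```python
import math

def npt_swap_probability(beta1, p1, e1, v1, beta2, p2, e2, v2):
    """Metropolis probability for exchanging configurations between two
    replicas in the isothermal-isobaric (NPT) ensemble, where each replica
    carries weight exp(-beta * (E + P * V)):

        Delta = (beta1 - beta2) * (E2 - E1)
              + (beta1 * P1 - beta2 * P2) * (V2 - V1)
        P_accept = min(1, exp(-Delta))
    """
    delta = (beta1 - beta2) * (e2 - e1) + (beta1 * p1 - beta2 * p2) * (v2 - v1)
    return 1.0 if delta <= 0.0 else math.exp(-delta)
```

Setting both pressures to zero recovers the familiar canonical (temperature-only) replica-exchange criterion.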

  17. A new potential for the numerical simulations of electrolyte solutions on a hypersphere

    NASA Astrophysics Data System (ADS)

    Caillol, Jean-Michel

    1993-12-01

    We propose a new way of performing numerical simulations of the restricted primitive model of electrolytes—and related models—on a hypersphere. In this new approach, the system is viewed as a single component fluid of charged bihard spheres constrained to move at the surface of a four dimensional sphere. A charged bihard sphere is defined as the rigid association of two antipodal charged hard spheres of opposite signs. These objects interact via a simple analytical potential obtained by solving the Poisson-Laplace equation on the hypersphere. This new technique of simulation enables a precise determination of the chemical potential of the charged species in the canonical ensemble by a straightforward application of Widom's insertion method. Comparisons with previous simulations demonstrate the efficiency and the reliability of the method.
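Widom's insertion method mentioned at the end estimates the excess chemical potential from the average Boltzmann factor of a randomly placed test particle. A toy sketch with a single particle in an external field, where the answer is known in closed form; the field and parameters are arbitrary choices of ours:

```python
import math
import random

def widom_excess_mu(beta=1.0, eps=2.0, n_insert=200000, seed=11):
    """Widom test-particle estimate of the excess chemical potential for
    insertion into an external field u(x) = eps * x on the unit box [0, 1]:

        mu_ex = -(1/beta) * ln < exp(-beta * u(x_test)) >,
        x_test uniform in the box.

    Analytic value: -(1/beta) * ln[(1 - exp(-beta*eps)) / (beta*eps)].
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_insert):
        x = rng.random()                  # uniform trial insertion position
        acc += math.exp(-beta * eps * x)  # Boltzmann factor of the insertion
    mu_mc = -math.log(acc / n_insert) / beta
    mu_exact = -math.log((1.0 - math.exp(-beta * eps)) / (beta * eps)) / beta
    return mu_mc, mu_exact
```

In a real canonical-ensemble simulation the external field is replaced by the interaction energy of the test particle with all particles of the stored configurations.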

  18. Methods for Monte Carlo simulations of biomacromolecules

    PubMed Central

    Vitalis, Andreas; Pappu, Rohit V.

    2010-01-01

The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies. PMID:20428473

  19. Lattice black branes: sphere packing in general relativity

    NASA Astrophysics Data System (ADS)

    Dias, Óscar J. C.; Santos, Jorge E.; Way, Benson

    2018-05-01

    We perturbatively construct asymptotically R^{1,3}× T^2 black branes with multiple inhomogeneous directions and show that some of them are thermodynamically preferred over uniform branes in both the microcanonical and canonical ensembles. This demonstrates that, unlike five-dimensional black strings, the instability of some unstable black branes has a plausible endpoint that does not require a violation of cosmic censorship.

  20. An estimate of the bulk viscosity of the hadronic medium

    NASA Astrophysics Data System (ADS)

    Sarwar, Golam; Chatterjee, Sandeep; Alam, Jane

    2017-05-01

The bulk viscosity (ζ) of the hadronic medium has been estimated within the ambit of the Hadron Resonance Gas (HRG) model including the Hagedorn density of states. The HRG thermodynamics within a grand canonical ensemble provides the mean hadron number as well as its fluctuation. The fluctuation in the chemical composition of the hadronic medium in the grand canonical ensemble can result in a non-zero divergence of the hadronic fluid flow velocity, allowing us to estimate ζ of the hadronic matter up to a relaxation time. We study the influence of the hadronic spectrum on ζ and find its correlation with the conformal symmetry breaking measure, ε − 3P. We estimate ζ along contours of constant S/N_B (total entropy/net baryon number) in the T-μ plane (temperature-baryonic chemical potential) for S/N_B = 30, 45, and 300. We also assess the value of ζ on the chemical freeze-out curve for various center-of-mass energies (√s_NN) and find that the bulk viscosity to entropy density ratio, ζ/s, is larger in the energy range of the beam energy scan program of RHIC, the low energy SPS run, AGS, NICA, and FAIR than at LHC energies.

  1. Relaxation and thermalization in the one-dimensional Bose-Hubbard model: A case study for the interaction quantum quench from the atomic limit

    NASA Astrophysics Data System (ADS)

    Heidrich-Meisner, Fabian; Pollet, Lode; Sorg, Stefan; Vidmar, Lev

    2015-03-01

    We study the relaxation dynamics and thermalization in the one-dimensional Bose-Hubbard model induced by a global interaction quench. Specifically, we start from an initial state that has exactly one boson per site and is the ground state of a system with infinitely strong repulsive interactions at unit filling. The same interaction quench was realized in a recent experiment. Using exact diagonalization and the density-matrix renormalization-group method, we compute the time dependence of such observables as the multiple occupancy and the momentum distribution function. We discuss our numerical results in the framework of the eigenstate thermalization hypothesis and we observe that the microcanonical ensemble describes the time averages of many observables reasonably well for small and intermediate interaction strength. Moreover, the diagonal and the canonical ensembles are practically identical for our initial conditions already on the level of their respective energy distributions for small interaction strengths. Supported by the DFG through FOR 801 and the Alexander von Humboldt foundation.

  2. Condensate fluctuations of interacting Bose gases within a microcanonical ensemble.

    PubMed

    Wang, Jianhui; He, Jizhou; Ma, Yongli

    2011-05-01

    Based on counting statistics and Bogoliubov theory, we present a recurrence relation for the microcanonical partition function for a weakly interacting Bose gas with a finite number of particles in a cubic box. According to this microcanonical partition function, we calculate numerically the distribution function, condensate fraction, and condensate fluctuations for a finite and isolated Bose-Einstein condensate. For ideal and weakly interacting Bose gases, we compare the condensate fluctuations with those in the canonical ensemble. The present approach yields an accurate account of the condensate fluctuations for temperatures close to the critical region. We emphasize that the interactions between excited atoms turn out to be important for moderate temperatures.
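Recurrence relations for Bose partition functions of the kind used above are easiest to see in the canonical ensemble of the ideal gas, where Z_N follows from the single-particle partition function alone. A sketch for bosons in a 1D harmonic trap; this is the textbook canonical recursion, not the authors' microcanonical one:

```python
import math

def z1(beta):
    """Single-particle partition function, 1D harmonic trap with levels
    E_n = n (n = 0, 1, 2, ...), so z1 = 1 / (1 - exp(-beta))."""
    return 1.0 / (1.0 - math.exp(-beta))

def canonical_bose_Z(N, beta):
    """Standard recursion for the canonical partition function of N ideal
    bosons:  Z_N = (1/N) * sum_{k=1..N} z1(k*beta) * Z_{N-k},  Z_0 = 1."""
    Z = [1.0]
    for n in range(1, N + 1):
        Z.append(sum(z1(k * beta) * Z[n - k] for k in range(1, n + 1)) / n)
    return Z[N]
```

For N = 3 the recursion reproduces the closed-form symmetrization result Z_3 = [z1(β)³ + 3 z1(β) z1(2β) + 2 z1(3β)]/6, which makes a convenient correctness check.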

  3. First-Principles Prediction of Densities of Amorphous Materials: The Case of Amorphous Silicon

    NASA Astrophysics Data System (ADS)

    Furukawa, Yoritaka; Matsushita, Yu-ichiro

    2018-02-01

A novel approach to predicting the atomic densities of amorphous materials is explored on the basis of Car-Parrinello molecular dynamics (CPMD) in density functional theory. Although determining the atomic density of a material is crucial to understanding its physical properties, no first-principles method for amorphous materials has been proposed until now. We have extended the conventional method for crystalline materials in a natural manner and pointed out the importance of the canonical ensemble of the total energy in the determination of the atomic densities of amorphous materials. To take into account the canonical distribution of the total energy, we generate multiple amorphous structures with several different volumes by CPMD simulations and average the total energies at each volume. The density is then determined as the one that minimizes the averaged total energy. In this study, this approach is implemented for amorphous silicon (a-Si) to demonstrate its validity, and we have determined the density of a-Si to be 4.1% lower and its bulk modulus to be 28 GPa smaller than those of the crystal, in good agreement with experiments. We have also confirmed that generating samples through classical molecular dynamics simulations produces a comparable result. The findings suggest that the presented method is applicable to other amorphous systems, including those for which experimental knowledge is lacking.
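The final step of the procedure, locating the volume that minimizes the ensemble-averaged total energy, is in practice a fit of E(V) near its minimum. A minimal sketch using an exact parabola through three (V, E) points; real workflows would fit an equation of state over many volumes:

```python
def parabola_minimum(vols, energies):
    """Fit E(V) = a V^2 + b V + c exactly through three (V, E) points via
    Newton divided differences and return the minimizing volume
    V0 = -b / (2a); a toy version of the equilibrium-density step."""
    (v1, e1), (v2, e2), (v3, e3) = zip(vols, energies)
    d1 = (e2 - e1) / (v2 - v1)
    d2 = (e3 - e2) / (v3 - v2)
    a = (d2 - d1) / (v3 - v1)       # quadratic coefficient
    b = d1 - a * (v1 + v2)          # linear coefficient
    return -b / (2.0 * a)
```

Here each "energy" would be the canonical average of the total energy over several amorphous samples at that volume, as the abstract describes; the curvature 2a likewise gives a bulk-modulus estimate via B = V0 * d²E/dV².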

  4. Simulation of rare events in quantum error correction

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Vargo, Alexander

    2013-12-01

We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances, where logical errors are extremely unlikely, we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability P_L for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay P_L ~ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
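The quoted decay law lends itself to a simple fit: α(p) is the slope of -log P_L against d. A minimal sketch with synthetic data (the true surface-code probabilities are exactly what the splitting method is used to estimate):

```python
import numpy as np

# Hypothetical logical error probabilities P_L(d) at a fixed physical
# error rate p, following the expected decay P_L ~ exp(-alpha * d).
d = np.array([4, 8, 12, 16, 20])
alpha_true = 0.35
P_L = 1.2 * np.exp(-alpha_true * d)   # prefactor absorbs subleading terms

# The decay rate alpha(p) is the slope of -log(P_L) versus d.
slope, intercept = np.polyfit(d, -np.log(P_L), 1)
print(f"fitted alpha ~ {slope:.3f}")
```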

  5. Shear flow simulations of biaxial nematic liquid crystals

    NASA Astrophysics Data System (ADS)

    Sarman, Sten

    1997-08-01

    We have calculated the viscosities of a biaxial nematic liquid crystal phase of a variant of the Gay-Berne fluid [J. G. Gay and B. J. Berne, J. Chem. Phys. 74, 3316 (1981)] by performing molecular dynamics simulations. The equations of motion have been augmented by a director constraint torque that fixes the orientation of the directors. This makes it possible to fix them at different angles relative to the stream lines in shear flow simulations. In equilibrium simulations the constraints generate a new ensemble. One finds that the Green-Kubo relations for the viscosities become linear combinations of time correlation function integrals in this ensemble whereas they are complicated rational functions in the conventional canonical ensemble. We have evaluated these Green-Kubo relations for all the shear viscosities and all the twist viscosities. We have also calculated the alignment angles, which are functions of the viscosity coefficients. We find that there are three real alignment angles but a linear stability analysis shows that only one of them corresponds to a stable director orientation. The Green-Kubo results have been cross checked by nonequilibrium shear flow simulations. The results from the different methods agree very well. Finally, we have evaluated the Miesowicz viscosities [D. Baalss, Z. Naturforsch. Teil A 45, 7 (1990)]. They vary by more than 2 orders of magnitude. The viscosity is consequently highly orientation dependent.

  6. Troublesome aspects of the Renyi-MaxEnt treatment.

    PubMed

    Plastino, A; Rocca, M C; Pennini, F

    2016-07-01

We study in great detail the possible existence of a Renyi-associated thermodynamics, with negative results. In particular, we uncover a hidden relation in Renyi's variational problem (MaxEnt). This relation connects the two associated Lagrange multipliers (canonical ensemble) with the mean energy 〈U〉 and the Renyi parameter α. As a consequence of this relation, we obtain anomalous Renyi-MaxEnt thermodynamic results.

  7. Itinerant ferromagnetism in an interacting Fermi gas with mass imbalance

    NASA Astrophysics Data System (ADS)

    von Keyserlingk, C. W.; Conduit, G. J.

    2011-05-01

    We study the emergence of itinerant ferromagnetism in an ultracold atomic gas with a variable mass ratio between the up- and down-spin species. Mass imbalance breaks the SU(2) spin symmetry, leading to a modified Stoner criterion. We first elucidate the phase behavior in both the grand canonical and canonical ensembles. Second, we apply the formalism to a harmonic trap to demonstrate how a mass imbalance delivers unique experimental signatures of ferromagnetism. These could help future experiments to better identify the putative ferromagnetic state. Furthermore, we highlight how a mass imbalance suppresses the three-body loss processes that handicap the formation of a ferromagnetic state. Finally, we study the time-dependent formation of the ferromagnetic phase following a quench in the interaction strength.

  8. From grand-canonical density functional theory towards rational compound design

    NASA Astrophysics Data System (ADS)

    von Lilienfeld, Anatole

    2008-03-01

The fundamental challenge of rational compound design, i.e., the reverse engineering of chemical compounds with predefined specific properties, originates in the high-dimensional combinatorial nature of chemical space. Chemical space is the hyper-space of a given set of molecular observables that is spanned by the grand-canonical variables (particle densities of electrons and nuclei) which define chemical composition. A brief but rigorous description of chemical space within the molecular grand-canonical ensemble multi-component density functional theory framework will be given [1]. Numerical results will be presented for intermolecular energies as a continuous function of alchemical variations within a neutral and isoelectronic 10-proton system, including CH4, NH3, H2O, and HF, interacting with formic acid [2]. Furthermore, engineering the Fermi level through alchemical generation of boron-nitrogen doped mutants of benzene shall be discussed [3]. [1] von Lilienfeld and Tuckerman, JCP 125, 154104 (2006); [2] von Lilienfeld and Tuckerman, JCTC 3, 1083 (2007); [3] Marcon et al., JCP 127, 064305 (2007).

  9. Simulating adsorptive expansion of zeolites: application to biomass-derived solutions in contact with silicalite.

    PubMed

    Santander, Julian E; Tsapatsis, Michael; Auerbach, Scott M

    2013-04-16

    We have constructed and applied an algorithm to simulate the behavior of zeolite frameworks during liquid adsorption. We applied this approach to compute the adsorption isotherms of furfural-water and hydroxymethyl furfural (HMF)-water mixtures adsorbing in silicalite zeolite at 300 K for comparison with experimental data. We modeled these adsorption processes under two different statistical mechanical ensembles: the grand canonical (V-Nz-μg-T or GC) ensemble keeping volume fixed, and the P-Nz-μg-T (osmotic) ensemble allowing volume to fluctuate. To optimize accuracy and efficiency, we compared pure Monte Carlo (MC) sampling to hybrid MC-molecular dynamics (MD) simulations. For the external furfural-water and HMF-water phases, we assumed the ideal solution approximation and employed a combination of tabulated data and extended ensemble simulations for computing solvation free energies. We found that MC sampling in the V-Nz-μg-T ensemble (i.e., standard GCMC) does a poor job of reproducing both the Henry's law regime and the saturation loadings of these systems. Hybrid MC-MD sampling of the V-Nz-μg-T ensemble, which includes framework vibrations at fixed total volume, provides better results in the Henry's law region, but this approach still does not reproduce experimental saturation loadings. Pure MC sampling of the osmotic ensemble was found to approach experimental saturation loadings more closely, whereas hybrid MC-MD sampling of the osmotic ensemble quantitatively reproduces such loadings because the MC-MD approach naturally allows for locally anisotropic volume changes wherein some pores expand whereas others contract.

  10. Tricritical and Critical Exponents in Microcanonical Ensemble of Systems with Long-Range Interactions

    NASA Astrophysics Data System (ADS)

    Li, Liang-Sheng

    2016-12-01

We explore the tricritical points and the critical lines of both the Blume-Emery-Griffiths and the Ising model with long-range interactions in the microcanonical ensemble. For K = K_MTP, the tricritical exponents take the values β = 1/4, γ- = 1 ≠ γ+ = 1/2 and α- = 0 ≠ α+ = -1/2, which disagree with the classical (mean-field) values. When K > K_MTP, the phase transition becomes second order and the critical exponents take classical values, except close to the canonical tricritical parameters (K_CTP), where the critical exponents become β = 1/2, γ- = 1 ≠ γ+ = 2 and α- = 0 ≠ α+ = 1. Supported by the National Natural Science Foundation of China under Grant No. 11104032

  11. Continuous-variable controlled-Z gate using an atomic ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Mingfeng; Jiang Nianquan; Jin Qingli

    2011-06-15

The continuous-variable controlled-Z gate is a canonical two-mode gate for universal continuous-variable quantum computation. It is considered one of the most fundamental continuous-variable quantum gates. Here we present a scheme for realizing a continuous-variable controlled-Z gate between two optical beams using an atomic ensemble. The gate is performed by simply sending the two beams, propagating in two orthogonal directions, twice through a spin-squeezed atomic medium. Its fidelity can run up to one if the input atomic state is infinitely squeezed. Considering the noise effects due to atomic decoherence and light losses, we show that the observed fidelities of the scheme are still quite high within presently available techniques.

  12. The development of novel simulation methodologies and intermolecular potential models for real fluids

    NASA Astrophysics Data System (ADS)

    Errington, Jeffrey Richard

    This work focuses on the development of intermolecular potential models for real fluids. United-atom models have been developed for both non-polar and polar fluids. The models have been optimized to the vapor-liquid coexistence properties. Histogram reweighting techniques were used to calculate phase behavior. The Hamiltonian scaling grand canonical Monte Carlo method was developed to enable the determination of thermodynamic properties of several related Hamiltonians from a single simulation. With this method, the phase behavior of variations of the Buckingham exponential-6 potential was determined. Reservoir grand canonical Monte Carlo simulations were developed to simulate molecules with complex architectures and/or stiff intramolecular constraints. The scheme is based on the creation of a reservoir of ideal chains from which structures are selected for insertion during a simulation. New intermolecular potential models have been developed for water, the n-alkane homologous series, benzene, cyclohexane, carbon dioxide, ammonia and methanol. The models utilize the Buckingham exponential-6 potential to model non-polar interactions and point charges to describe polar interactions. With the exception of water, the new models reproduce experimental saturated densities, vapor pressures and critical parameters to within a few percent. In the case of water, we found a set of parameters that describes the phase behavior better than other available point charge models while giving a reasonable description of the liquid structure. The mixture behavior of water-hydrocarbon mixtures has also been examined. The Henry's law constants of methane, ethane, benzene and cyclohexane in water were determined using Widom insertion and expanded ensemble techniques. In addition the high-pressure phase behavior of water-methane and water-ethane systems was studied using the Gibbs ensemble method. 
The results from this study indicate that it is possible to obtain a good description of the phase behavior of pure components using united-atom models. The mixture behavior of non-polar systems, including highly asymmetric components, was in good agreement with experiment. The calculations for the highly non-ideal water-hydrocarbon mixtures reproduced experimental behavior with varying degrees of success. The results indicate that multibody effects, such as polarizability, must be taken into account when modeling mixtures of polar and non-polar components.
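The histogram-reweighting idea mentioned in this record can be sketched in its simplest single-histogram form: samples collected at one inverse temperature β₀ are reweighted by exp[-(β-β₀)E] to estimate averages at a nearby β without re-simulating. The Gaussian energy model here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical energy samples drawn at inverse temperature beta0;
# reweighting estimates <E> at a nearby beta without a new simulation.
beta0, beta = 1.0, 1.1
E = rng.normal(loc=10.0, scale=1.0, size=100_000)

# Each configuration is reweighted by exp(-(beta - beta0) * E);
# subtracting the mean energy first avoids overflow (it cancels in the ratio).
w = np.exp(-(beta - beta0) * (E - E.mean()))
E_reweighted = np.sum(w * E) / np.sum(w)
print(f"<E> at beta={beta}: {E_reweighted:.2f}")
```

For this Gaussian model the exact answer is 10 - (β-β₀)·σ² = 9.9, which the reweighted estimate reproduces; multiple-histogram methods combine runs at several temperatures in the same spirit.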

  13. Predicting structural properties of fluids by thermodynamic extrapolation

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Jiao, Sally; Hatch, Harold W.; Blanco, Marco A.; Shen, Vincent K.

    2018-05-01

    We describe a methodology for extrapolating the structural properties of multicomponent fluids from one thermodynamic state to another. These properties generally include features of a system that may be computed from an individual configuration such as radial distribution functions, cluster size distributions, or a polymer's radius of gyration. This approach is based on the principle of using fluctuations in a system's extensive thermodynamic variables, such as energy, to construct an appropriate Taylor series expansion for these structural properties in terms of intensive conjugate variables, such as temperature. Thus, one may extrapolate these properties from one state to another when the series is truncated to some finite order. We demonstrate this extrapolation for simple and coarse-grained fluids in both the canonical and grand canonical ensembles, in terms of both temperatures and the chemical potentials of different components. The results show that this method is able to reasonably approximate structural properties of such fluids over a broad range of conditions. Consequently, this methodology may be employed to increase the computational efficiency of molecular simulations used to measure the structural properties of certain fluid systems, especially those used in high-throughput or data-driven investigations.
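A first-order version of this fluctuation-based extrapolation can be sketched as follows: in the canonical ensemble, d⟨A⟩/dβ = -Cov(A, E) for a configurational observable A, so a one-term Taylor series extrapolates ⟨A⟩ from β₀ to a nearby β₁. The correlated sample data are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical canonical samples at beta0: energies E and a structural
# observable A measured per configuration (illustrative correlated data).
N = 200_000
E = rng.normal(5.0, 1.0, N)
A = 2.0 * E + rng.normal(0.0, 0.5, N)   # A correlates with E

# First-order extrapolation in beta using the fluctuation formula
# d<A>/dbeta = -(<A E> - <A><E>) = -Cov(A, E).
beta0, beta1 = 1.0, 1.05
dA_dbeta = -(np.mean(A * E) - A.mean() * E.mean())
A_extrap = A.mean() + dA_dbeta * (beta1 - beta0)
print(f"extrapolated <A> ~ {A_extrap:.3f}")
```

Higher orders use higher cumulants of the joint (A, E) fluctuations, and the same construction applies per component in the grand canonical ensemble with chemical potentials as the intensive variables.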

  14. THERMUS—A thermal model package for ROOT

    NASA Astrophysics Data System (ADS)

    Wheaton, S.; Cleymans, J.; Hauer, M.

    2009-01-01

THERMUS is a package of C++ classes and functions allowing statistical-thermal model analyses of particle production in relativistic heavy-ion collisions to be performed within the ROOT framework of analysis. Calculations are possible within three statistical ensembles: a grand-canonical treatment of the conserved charges B, S and Q, a fully canonical treatment of the conserved charges, and a mixed-canonical ensemble combining a canonical treatment of strangeness with a grand-canonical treatment of baryon number and electric charge. THERMUS allows for the assignment of decay chains and detector efficiencies specific to each particle yield, which enables sensible fitting of model parameters to experimental data. Program summary: Program title: THERMUS, version 2.1 Catalogue identifier: AEBW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEBW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 17 152 No. of bytes in distributed program, including test data, etc.: 93 581 Distribution format: tar.gz Programming language: C++ Computer: PC, Pentium 4, 1 GB RAM (not hardware dependent) Operating system: Linux: FEDORA, RedHat, etc. Classification: 17.7 External routines: Numerical Recipes in C [1], ROOT [2] Nature of problem: Statistical-thermal model analyses of heavy-ion collision data require the calculation of both primordial particle densities and contributions from resonance decay. A set of thermal parameters (the number depending on the particular model imposed) and a set of thermalized particles, with their decays specified, is required as input to these models. The output is then a complete set of primordial thermal quantities for each particle, together with the contributions to the final particle yields from resonance decay. 
In many applications of statistical-thermal models it is required to fit experimental particle multiplicities or particle ratios. In such analyses, the input is a set of experimental yields and ratios, a set of particles comprising the assumed hadron resonance gas formed in the collision and the constraints to be placed on the system. The thermal model parameters consistent with the specified constraints leading to the best-fit to the experimental data are then output. Solution method: THERMUS is a package designed for incorporation into the ROOT [2] framework, used extensively by the heavy-ion community. As such, it utilizes a great deal of ROOT's functionality in its operation. ROOT features used in THERMUS include its containers, the wrapper TMinuit implementing the MINUIT fitting package, and the TMath class of mathematical functions and routines. Arguably the most useful feature is the utilization of CINT as the control language, which allows interactive access to the THERMUS objects. Three distinct statistical ensembles are included in THERMUS, while additional options to include quantum statistics, resonance width and excluded volume corrections are also available. THERMUS provides a default particle list including all mesons (up to the K4∗ (2045)) and baryons (up to the Ω) listed in the July 2002 Particle Physics Booklet [3]. For each typically unstable particle in this list, THERMUS includes a text-file listing its decays. With thermal parameters specified, THERMUS calculates primordial thermal densities either by performing numerical integrations or else, in the case of the Boltzmann approximation without resonance width in the grand-canonical ensemble, by evaluating Bessel functions. Particle decay chains are then used to evaluate experimental observables (i.e. particle yields following resonance decay). Additional detector efficiency factors allow fine-tuning of the model predictions to a specific detector arrangement. 
When parameters are required to be constrained, use is made of the 'Numerical Recipes in C' [1] function which applies the Broyden globally convergent secant method of solving nonlinear systems of equations. Since the NRC software is not freely available, it has to be purchased by the user. THERMUS provides the means of imposing a large number of constraints on the chosen model (amongst others, THERMUS can fix the baryon-to-charge ratio of the system, the strangeness density of the system and the primordial energy per hadron). Fits to experimental data are accomplished in THERMUS by using the ROOT TMinuit class. In its default operation, the standard χ² function is minimized, yielding the set of best-fit thermal parameters. THERMUS allows the assignment of separate decay chains to each experimental input. In this way, the model is able to match the specific feed-down corrections of a particular data set. Running time: Depending on the analysis required, run-times vary from seconds (for the evaluation of particle multiplicities given a set of parameters) to several minutes (for fits to experimental data subject to constraints). References: W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, Cambridge, 2002. R. Brun, F. Rademakers, Nucl. Inst. Meth. Phys. Res. A 389 (1997) 81. See also http://root.cern.ch/. K. Hagiwara et al., Phys. Rev. D 66 (2002) 010001.

  15. Thermodynamics of third-order Lovelock-AdS black holes in the presence of Born-Infeld type nonlinear electrodynamics

    NASA Astrophysics Data System (ADS)

    Hendi, S. H.; Dehghani, A.

    2015-03-01

    In this paper, we obtain topological black hole solutions of third-order Lovelock gravity coupled with two classes of Born-Infeld-type nonlinear electrodynamics with anti-de Sitter asymptotic structure. We investigate geometric and thermodynamics properties of the solutions and obtain conserved quantities of the black holes. We examine the first law of thermodynamics and find that the conserved and thermodynamic quantities of the black hole solutions satisfy the first law of thermodynamics. Finally, we calculate the heat capacity and determinant of the Hessian matrix to evaluate thermal stability in both canonical and grand canonical ensembles. Moreover, we consider the extended phase space thermodynamics to obtain a generalized first law of thermodynamics as well as the extended Smarr formula.

  16. Itinerant ferromagnetism in an interacting Fermi gas with mass imbalance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyserlingk, C. W. von; Conduit, G. J.; Physics Department, Ben Gurion University, Beer Sheva 84105

    2011-05-15

We study the emergence of itinerant ferromagnetism in an ultracold atomic gas with a variable mass ratio between the up- and down-spin species. Mass imbalance breaks the SU(2) spin symmetry, leading to a modified Stoner criterion. We first elucidate the phase behavior in both the grand canonical and canonical ensembles. Second, we apply the formalism to a harmonic trap to demonstrate how a mass imbalance delivers unique experimental signatures of ferromagnetism. These could help future experiments to better identify the putative ferromagnetic state. Furthermore, we highlight how a mass imbalance suppresses the three-body loss processes that handicap the formation of a ferromagnetic state. Finally, we study the time-dependent formation of the ferromagnetic phase following a quench in the interaction strength.

17. Energy dependence of strangeness production and event-by-event fluctuations

    NASA Astrophysics Data System (ADS)

    Rustamov, Anar

    2018-02-01

    We review the energy dependence of strangeness production in nucleus-nucleus collisions and contrast it with the experimental observations in pp and p-A collisions at LHC energies as a function of the charged particle multiplicities. For the high multiplicity final states the results from pp and p-Pb reactions systematically approach the values obtained from Pb-Pb collisions. In statistical models this implies an approach to the thermodynamic limit, where differences of mean multiplicities between various formalisms, such as Canonical and Grand Canonical Ensembles, vanish. Furthermore, we report on event-by-event net-proton fluctuations as measured by STAR at RHIC/BNL and by ALICE at LHC/CERN and discuss various non-dynamical contributions to these measurements, which should be properly subtracted before comparison to theoretical calculations on dynamical net-baryon fluctuations.

  18. Products of random matrices from fixed trace and induced Ginibre ensembles

    NASA Astrophysics Data System (ADS)

    Akemann, Gernot; Cikovic, Milan

    2018-05-01

We investigate the microcanonical version of the complex induced Ginibre ensemble, by introducing a fixed trace constraint for its second moment. Like for the canonical Ginibre ensemble, its complex eigenvalues can be interpreted as a two-dimensional Coulomb gas, which are now subject to a constraint and a modified, collective confining potential. Despite the lack of determinantal structure in this fixed trace ensemble, we compute all its density correlation functions at finite matrix size and compare to a fixed trace ensemble of normal matrices, representing a different Coulomb gas. Our main tool of investigation is the Laplace transform, which maps the fixed trace ensemble back to the induced Ginibre ensemble. Products of random matrices have been used to study the Lyapunov and stability exponents for chaotic dynamical systems, where the latter are based on the complex eigenvalues of the product matrix. Because little is known about the universality of the eigenvalue distribution of such product matrices, we then study the product of m induced Ginibre matrices with a fixed trace constraint, which are clearly non-Gaussian, and M − m such Ginibre matrices without constraint. Using an m-fold inverse Laplace transform, we obtain a concise result for the spectral density of such a mixed product matrix at finite matrix size, for arbitrary fixed m and M. Very recently, local and global universality was proven by the authors and their coworker for a more general, single elliptic fixed trace ensemble in the bulk of the spectrum. Here, we argue that the spectral density of mixed products is in the same universality class as the product of M independent induced Ginibre ensembles.

  19. Design and Implementation of a Parallel Multivariate Ensemble Kalman Filter for the Poseidon Ocean General Circulation Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)

    2001-01-01

A multivariate ensemble Kalman filter (MvEnKF) running on a massively parallel computer architecture has been implemented for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism for the data assimilation step is achieved by regionalization of the background-error covariances that are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element-by-element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
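The compact-support step can be sketched as a Hadamard product of the raw ensemble covariance with a decaying correlation matrix; here a one-dimensional, hard-truncated Gaussian taper stands in for the three-dimensional canonical correlation function used in the MvEnKF, and all sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ensemble: 10 members, 50 grid points; at this small
# ensemble size the sample covariance carries spurious long-range terms.
n_state, n_ens = 50, 10
X = rng.normal(size=(n_state, n_ens))
anomalies = X - X.mean(axis=1, keepdims=True)
P = anomalies @ anomalies.T / (n_ens - 1)

# Compact-support localization: Hadamard (element-by-element) product
# with a correlation function that vanishes beyond a cutoff distance.
L = 5.0
i = np.arange(n_state)
dist = np.abs(i[:, None] - i[None, :])
rho = np.exp(-(dist / L) ** 2)
rho[dist > 3 * L] = 0.0            # enforce exact compact support

P_loc = P * rho                    # Hadamard product
print("max |cov| beyond cutoff:", np.abs(P_loc[dist > 3 * L]).max())
```

Zeroing the long-range entries both suppresses sampling noise and is what makes the regionalized, per-PE assimilation possible.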

  20. A statistical analysis of the dependency of closure assumptions in cumulus parameterization on the horizontal resolution

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    1994-01-01

    Simulated data from the UCLA cumulus ensemble model are used to investigate the quasi-universal validity of closure assumptions used in existing cumulus parameterizations. A closure assumption is quasi-universally valid if it is sensitive neither to convective cloud regimes nor to horizontal resolutions of large-scale/mesoscale models. The dependency of three types of closure assumptions, as classified by Arakawa and Chen, on the horizontal resolution is addressed in this study. Type I is the constraint on the coupling of the time tendencies of large-scale temperature and water vapor mixing ratio. Type II is the constraint on the coupling of cumulus heating and cumulus drying. Type III is a direct constraint on the intensity of a cumulus ensemble. The macroscopic behavior of simulated cumulus convection is first compared with the observed behavior in view of Type I and Type II closure assumptions using 'quick-look' and canonical correlation analyses. It is found that they are statistically similar to each other. The three types of closure assumptions are further examined with simulated data averaged over selected subdomain sizes ranging from 64 to 512 km. It is found that the dependency of Type I and Type II closure assumptions on the horizontal resolution is very weak and that Type III closure assumption is somewhat dependent upon the horizontal resolution. The influences of convective and mesoscale processes on the closure assumptions are also addressed by comparing the structures of canonical components with the corresponding vertical profiles in the convective and stratiform regions of cumulus ensembles analyzed directly from simulated data. The implication of these results for cumulus parameterization is discussed.

  1. Automated sampling assessment for molecular simulations using the effective sample size

    PubMed Central

    Zhang, Xin; Bhatt, Divesh; Zuckerman, Daniel M.

    2010-01-01

To quantify the progress in the development of algorithms and forcefields used in molecular simulations, a general method for the assessment of the sampling quality is needed. Statistical mechanics principles suggest the populations of physical states characterize equilibrium sampling in a fundamental way. We therefore develop an approach for analyzing the variances in state populations, which quantifies the degree of sampling in terms of the effective sample size (ESS). The ESS estimates the number of statistically independent configurations contained in a simulated ensemble. The method is applicable both to traditional dynamics simulations and to more modern (e.g., multi-canonical) approaches. Our procedure is tested in a variety of systems from toy models to atomistic protein simulations. We also introduce a simple automated procedure to obtain approximate physical states from dynamic trajectories: this allows sample-size estimation in systems for which physical states are not known in advance. PMID:21221418
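The population-variance route to an effective sample size can be sketched as follows: for n_eff truly independent draws, var(p̂) = p(1-p)/n_eff, so inverting the observed variance of a state's population across independent runs yields n_eff. The correlated two-state trajectories below are a toy stand-in for real simulation data, not the paper's procedure for identifying states.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy example: each "trajectory" yields correlated state labels (0/1);
# the correlation reduces the information content below the frame count.
n_traj, n_frames, persist = 200, 1000, 0.9
trajs = np.empty((n_traj, n_frames), dtype=int)
for t in range(n_traj):
    s = rng.integers(0, 2)
    for f in range(n_frames):
        if rng.random() > persist:      # occasionally resample the state
            s = rng.integers(0, 2)
        trajs[t, f] = s

# Effective sample size from state-population variance: for an ideal
# sample of size n_eff, var(p_hat) = p(1-p)/n_eff, hence
# n_eff = p(1-p) / observed variance of p_hat across trajectories.
p_hat = trajs.mean(axis=1)              # population of state 1 per trajectory
p = p_hat.mean()
ess = p * (1.0 - p) / p_hat.var()
print(f"ESS per trajectory ~ {ess:.1f} (of {n_frames} frames)")
```

Because the state persists for roughly ten frames at a time, the ESS comes out far below the raw frame count, which is the effect the method is designed to expose.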

  2. Hidden Structural Codes in Protein Intrinsic Disorder.

    PubMed

    Borkosky, Silvia S; Camporeale, Gabriela; Chemes, Lucía B; Risso, Marikena; Noval, María Gabriela; Sánchez, Ignacio E; Alonso, Leonardo G; de Prat Gay, Gonzalo

    2017-10-17

    Intrinsic disorder is a major structural category in biology, accounting for more than 30% of coding regions across the domains of life, yet consists of conformational ensembles in equilibrium, a major challenge in protein chemistry. Anciently evolved papillomavirus genomes constitute an unparalleled case for sequence to structure-function correlation in cases in which there are no folded structures. E7, the major transforming oncoprotein of human papillomaviruses, is a paradigmatic example among the intrinsically disordered proteins. Analysis of a large number of sequences of the same viral protein allowed for the identification of a handful of residues with absolute conservation, scattered along the sequence of its N-terminal intrinsically disordered domain, which intriguingly are mostly leucine residues. Mutation of these led to a pronounced increase in both α-helix and β-sheet structural content, reflected by drastic effects on equilibrium propensities and oligomerization kinetics, and uncovers the existence of local structural elements that oppose canonical folding. These folding relays suggest the existence of yet undefined hidden structural codes behind intrinsic disorder in this model protein. Thus, evolution pinpoints conformational hot spots that could have not been identified by direct experimental methods for analyzing or perturbing the equilibrium of an intrinsically disordered protein ensemble.

  3. Nonlife Insurance Pricing:

    NASA Astrophysics Data System (ADS)

    Darooneh, Amir H.

We consider the insurance company as a physical system immersed in its environment (the financial market). The insurer interacts with the market by exchanging money through payments for loss claims and receipt of premiums. Here, in the equilibrium state, we obtain the premium by using canonical ensemble theory, and compare it with the Esscher principle, a well-known premium-calculation formula in actuarial science. We simulate the case of car insurance for a quantitative comparison.
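For reference, the Esscher principle against which the ensemble-theory premium is compared tilts the claim distribution by exp(hX): premium = E[X e^{hX}] / E[e^{hX}]. A Monte Carlo sketch with an illustrative exponential claim model (the claim scale and tilt parameter h are assumptions, not figures from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical claim sizes; the Esscher principle weights each loss
# by exp(h * X) before averaging, loading the premium toward large claims.
X = rng.exponential(scale=100.0, size=1_000_000)
h = 0.002

w = np.exp(h * X)
premium = np.sum(w * X) / np.sum(w)
print(f"Esscher premium ~ {premium:.1f} (mean claim {X.mean():.1f})")
```

For an exponential claim with rate λ = 0.01, the tilted distribution is exponential with rate λ - h, so the exact Esscher premium is 1/0.008 = 125, above the mean claim of 100.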

  4. California Drought and the 2015-2016 El Niño

    NASA Astrophysics Data System (ADS)

    Cash, B.

    2017-12-01

    California winter rainfall is examined in observations and data from the North American Multi-Model Ensemble (NMME) and Project Metis, a new suite of seasonal integrations made using the operational European Centre for Medium-Range Weather Forecasts model. We focus on the 2015-2016 season, and the non-canonical response to the major El Niño event that occurred. We show that the Metis ensemble mean is capable of distinguishing between the response to the 1997/98 and 2015/16 events, while the two events are more similar in the NMME. We also show that unpredicted variations in the atmospheric circulation in the north Pacific significantly affect southern California rainfall totals. Improving prediction of these variations is thus a key target for improving seasonal rainfall predictions for this region.

  5. Hamiltonian mean-field model: effect of temporal perturbation in coupling matrix

    NASA Astrophysics Data System (ADS)

    Bhadra, Nivedita; Patra, Soumen K.

    2018-05-01

    The Hamiltonian mean-field (HMF) model is a system of fully coupled rotators which exhibits a second-order phase transition at some critical energy in its canonical ensemble. We investigate the case where the interaction between the rotors is governed by a time-dependent coupling matrix. Our numerical study reveals a shift in the critical point due to the temporal modulation. The shift in the critical point is shown to be independent of the modulation frequency above some threshold value, whereas the impact of the amplitude of modulation is dominant. In the microcanonical ensemble, the system with constant coupling reaches a quasi-stationary state (QSS) at an energy near the critical point. Our result indicates that the QSS subsists in presence of such temporal modulation of the coupling parameter.
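    The microcanonical HMF dynamics described above are compact enough to sketch. Below, the coupling is modulated as k(t) = 1 + A sin(ωt); this specific modulation form, and all parameter values, are illustrative assumptions rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, steps = 500, 0.05, 2000
A, omega = 0.0, 0.0                      # modulation amplitude/frequency (0 = constant coupling)

theta = rng.uniform(-np.pi, np.pi, N)    # rotor angles
p = rng.normal(0.0, 0.3, N)              # conjugate momenta
p -= p.mean()                            # zero total momentum

def order_parameter(theta):
    return np.cos(theta).mean(), np.sin(theta).mean()

def force(theta, t):
    # per-rotor mean-field force; coupling modulated as k(t) = 1 + A sin(omega t)
    mx, my = order_parameter(theta)
    return (1.0 + A * np.sin(omega * t)) * (my * np.cos(theta) - mx * np.sin(theta))

def energy(theta, p):
    # energy per rotor: kinetic part plus (1 - m^2)/2 mean-field potential
    mx, my = order_parameter(theta)
    return 0.5 * (p ** 2).mean() + 0.5 * (1.0 - mx ** 2 - my ** 2)

E0 = energy(theta, p)
f = force(theta, 0.0)
for n in range(steps):                   # velocity-Verlet (symplectic) integration
    p += 0.5 * dt * f
    theta += dt * p
    f = force(theta, (n + 1) * dt)
    p += 0.5 * dt * f
E1 = energy(theta, p)
M = np.hypot(*order_parameter(theta))    # magnetization order parameter |m|
```

With A = 0 this reduces to the standard constant-coupling HMF model and the symplectic integrator conserves the energy per rotor to good accuracy.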

  6. A multi-scale study of the adsorption of lanthanum on the (110) surface of tungsten

    NASA Astrophysics Data System (ADS)

    Samin, Adib J.; Zhang, Jinsuo

    2016-07-01

    In this study, we utilize a multi-scale approach to studying lanthanum adsorption on the (110) plane of tungsten. The energy of the system is described from density functional theory calculations within the framework of the cluster expansion method. It is found that including two-body figures up to the sixth nearest neighbor yielded a reasonable agreement with density functional theory calculations as evidenced by the reported cross validation score. The results indicate that the interaction between the adsorbate atoms in the adlayer is important and cannot be ignored. The parameterized cluster expansion expression is used in a lattice gas Monte Carlo simulation in the grand canonical ensemble at 773 K and the adsorption isotherm is recorded. Implications of the obtained results for the pyroprocessing application are discussed.
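    The grand canonical lattice-gas step is simple to sketch in toy form: single-site insertion/deletion moves accepted with probability min(1, exp(-β(ΔE - μΔN))). The pair energy and chemical potential below are invented illustrative values, not the DFT-fitted cluster-expansion coefficients of the study:

```python
import numpy as np

rng = np.random.default_rng(2)
L = 16                                # lattice side (illustrative, not the W(110) study)
beta = 1.0 / (8.617e-5 * 773.0)       # 1/kT in 1/eV at 773 K
mu = -0.10                            # chemical potential, eV (assumed value)
eps = -0.05                           # nearest-neighbour pair energy, eV (assumed value)

occ = np.zeros((L, L), dtype=int)     # occupation numbers, start from an empty surface

def local_energy(occ, i, j):
    # interaction of site (i, j) with its four periodic neighbours
    nn = occ[(i+1) % L, j] + occ[(i-1) % L, j] + occ[i, (j+1) % L] + occ[i, (j-1) % L]
    return eps * nn

for _ in range(200_000):
    i, j = rng.integers(0, L, 2)
    if occ[i, j] == 0:                # attempt insertion: dE = dU - mu*dN
        dE = local_energy(occ, i, j) - mu
        if rng.random() < np.exp(-beta * dE):
            occ[i, j] = 1
    else:                             # attempt deletion
        dE = -local_energy(occ, i, j) + mu
        if rng.random() < np.exp(-beta * dE):
            occ[i, j] = 0

coverage = occ.mean()                 # one adsorption-isotherm point at (T, mu)
```

Sweeping `mu` at fixed temperature and recording `coverage` traces out an adsorption isotherm, which is the quantity the study extracts from its (much more detailed) cluster-expansion Hamiltonian.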

  7. Brownian dynamics simulation of fission yeast mitotic spindle formation

    NASA Astrophysics Data System (ADS)

    Edelmaier, Christopher

    2014-03-01

    The mitotic spindle segregates chromosomes during mitosis. The dynamics that establish bipolar spindle formation are not well understood. We have developed a computational model of fission-yeast mitotic spindle formation using Brownian dynamics and kinetic Monte Carlo methods. Our model includes rigid, dynamic microtubules, a spherical nuclear envelope, spindle pole bodies anchored in the nuclear envelope, and crosslinkers and crosslinking motor proteins. Crosslinkers and crosslinking motor proteins attach and detach in a grand canonical ensemble, and exert forces and torques on the attached microtubules. We have modeled increased affinity for crosslinking motor attachment to antiparallel microtubule pairs, and stabilization of microtubules in the interpolar bundle. We study parameters controlling the stability of the interpolar bundle and assembly of a bipolar spindle from initially adjacent spindle-pole bodies.

  8. Calibration and validation of coarse-grained models of atomic systems: application to semiconductor manufacturing

    NASA Astrophysics Data System (ADS)

    Farrell, Kathryn; Oden, J. Tinsley

    2014-07-01

    Coarse-grained models of atomic systems, created by aggregating groups of atoms into molecules to reduce the number of degrees of freedom, have been used for decades in important scientific and technological applications. In recent years, interest in developing a more rigorous theory for coarse graining and in assessing the predictivity of coarse-grained models has arisen. In this work, Bayesian methods for the calibration and validation of coarse-grained models of atomistic systems in thermodynamic equilibrium are developed. For specificity, only configurational models of systems in canonical ensembles are considered. Among the major challenges in validating coarse-grained models are (1) the development of validation processes that lead to information essential in establishing confidence in the model's ability to predict key quantities of interest and (2), above all, the determination of the coarse-grained model itself, that is, the characterization of the molecular architecture and the choice of interaction potentials (and thus parameters) which best fit available data. The all-atom model is treated as the "ground truth," and it provides the basis with respect to which properties of the coarse-grained model are compared. This base all-atom model is characterized by an appropriate statistical mechanics framework, in this work canonical ensembles involving only configurational energies. The all-atom model thus supplies data for Bayesian calibration and validation methods for the molecular model. To address the first challenge, we develop priors based on the maximum entropy principle and likelihood functions based on Gaussian approximations of the uncertainties in the parameter-to-observation error. To address challenge (2), we introduce the notion of model plausibilities as a means for model selection. This methodology provides a powerful approach toward constructing coarse-grained models which are most plausible for given all-atom data.
We demonstrate the theory and methods through applications to representative atomic structures and we discuss extensions to the validation process for molecular models of polymer structures encountered in certain semiconductor nanomanufacturing processes. The powerful method of model plausibility as a means for selecting interaction potentials for coarse-grained models is discussed in connection with a coarse-grained hexane molecule. Discussions of how all-atom information is used to construct priors are contained in an appendix.
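    The model-plausibility calculation can be illustrated with a deliberately tiny stand-in: two candidate models scored by their Bayesian evidence against synthetic data, with a uniform (maximum-entropy) prior and a Gaussian likelihood. Both models, the data, and all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "all-atom" observations of one quantity of interest (assumed data).
y = rng.normal(2.0, 0.1, 50)
sigma = 0.1                                  # known observation noise

def gauss_loglike(pred):
    # Gaussian log-likelihood of the data given a model prediction `pred`
    return (-0.5 * np.sum((y - pred) ** 2) / sigma ** 2
            - y.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

def log_evidence(grid, prior, loglike):
    # p(y | model) = integral of p(y | theta) p(theta) d(theta), trapezoid rule
    ll = np.array([loglike(t) for t in grid])
    f = np.exp(ll - ll.max()) * prior
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))
    return ll.max() + np.log(integral)

grid = np.linspace(0.0, 4.0, 2001)
flat_prior = np.full(grid.shape, 1.0 / 4.0)  # maximum-entropy (uniform) prior on [0, 4]

# Model A: one calibration parameter theta predicting the observable directly.
logZ_A = log_evidence(grid, flat_prior, gauss_loglike)
# Model B: no free parameter, prediction fixed at an (incorrect) value.
logZ_B = gauss_loglike(1.0)

# Posterior plausibility of model A under equal prior model probabilities.
pA = 1.0 / (1.0 + np.exp(logZ_B - logZ_A))
```

The evidence integral automatically penalizes parameter freedom (the Occam factor), so the plausibility comparison is not simply a best-fit comparison; here model A nevertheless wins decisively because model B cannot fit the data at all.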

  9. Correlation between Gini index and mobility in a stochastic kinetic model of economic exchange

    NASA Astrophysics Data System (ADS)

    Bertotti, Maria Letizia; Chattopadhyay, Amit K.; Modanese, Giovanni

    Starting from a class of stochastically driven kinetic models of economic exchange, here we present results highlighting the correlation of the Gini inequality index with the social mobility rate, close to dynamical equilibrium. Except for the "canonical-additive case", our numerical results consistently indicate negative values of the correlation coefficient, in agreement with empirical evidence. This confirms that growing inequality is not conducive to social mobility which then requires an "external source" to sustain its dynamics. On the other hand, the sign of the correlation between inequality and total income in the canonical ensemble depends on the way wealth enters or leaves the system. At a technical level, the approach involves a generalization of a stochastic dynamical system formulation, that further paves the way for a probabilistic formulation of perturbed economic exchange models.

  10. Radiation from quantum weakly dynamical horizons in loop quantum gravity.

    PubMed

    Pranzetti, Daniele

    2012-07-06

    We provide a statistical mechanical analysis of quantum horizons near equilibrium in the grand canonical ensemble. By matching the description of the nonequilibrium phase in terms of weakly dynamical horizons with a local statistical framework, we implement loop quantum gravity dynamics near the boundary. The resulting radiation process provides a quantum gravity description of the horizon evaporation. For large black holes, the spectrum we derive presents a discrete structure which could be potentially observable.

  11. Is quantum theory a form of statistical mechanics?

    NASA Astrophysics Data System (ADS)

    Adler, S. L.

    2007-05-01

    We give a review of the basic themes of my recent book: Adler S L 2004 Quantum Theory as an Emergent Phenomenon (Cambridge: Cambridge University Press). We first give motivations for considering the possibility that quantum mechanics is not exact, but is instead an accurate asymptotic approximation to a deeper level theory. For this deeper level, we propose a non-commutative generalization of classical mechanics, that we call "trace dynamics", and we give a brief survey of how it works, considering for simplicity only the bosonic case. We then discuss the statistical mechanics of trace dynamics and give our argument that with suitable approximations, the Ward identities for trace dynamics imply that ensemble averages in the canonical ensemble correspond to Wightman functions in quantum field theory. Thus, quantum theory emerges as the statistical thermodynamics of trace dynamics. Finally, we argue that Brownian motion corrections to this thermodynamics lead to stochastic corrections to the Schrödinger equation, of the type that have been much studied in the "continuous spontaneous localization" model of objective state vector reduction. In appendices to the talk, we give details of the existence of a conserved operator in trace dynamics that encodes the structure of the canonical algebra, of the derivation of the Ward identities, and of the proof that the stochastically-modified Schrödinger equation leads to state vector reduction with Born rule probabilities.

  12. Genetic code mutations: the breaking of a three billion year invariance.

    PubMed

    Mat, Wai-Kin; Xue, Hong; Wong, J Tze-Fei

    2010-08-20

    The genetic code has been unchanging for some three billion years in its canonical ensemble of encoded amino acids, as indicated by the universal adoption of this ensemble by all known organisms. Code mutations beginning with the encoding of 4-fluoro-Trp by Bacillus subtilis, initially replacing and eventually displacing Trp from the ensemble, first revealed the intrinsic mutability of the code. This has since been confirmed by a spectrum of other experimental code alterations in both prokaryotes and eukaryotes. To shed light on the experimental conversion of a rigidly invariant code to a mutating code, the present study examined code mutations determining the propagation of Bacillus subtilis on Trp and 4-, 5- and 6-fluoro-tryptophans. The results obtained with the mutants with respect to cross-inhibitions between the different indole amino acids, and the growth effects of individual nutrient withdrawals rendering essential their biosynthetic pathways, suggested that oligogenic barriers comprising sensitive proteins which malfunction with amino acid analogues provide effective mechanisms for preserving the invariance of the code through immemorial time, and mutations of these barriers open up the code to continuous change.

  13. California Drought and the 2015-2016 El Niño: Implications for Seasonal Forecasts

    NASA Astrophysics Data System (ADS)

    Cash, B.

    2017-12-01

    California winter rainfall is examined in observations and data from the North American Multi-Model Ensemble (NMME) and Project Metis, a new suite of seasonal integrations made using the operational European Centre for Medium-Range Weather Forecasts model. We focus on the 2015-2016 season, and the non-canonical response to the major El Niño event that occurred. We show that the Metis ensemble mean is capable of distinguishing between the response to the 1997/98 and 2015/16 events, while the two events are more similar in the NMME. We also show that unpredicted variations in the atmospheric circulation in the north Pacific significantly affect southern California rainfall totals. Improving prediction of these variations is thus a key target for improving seasonal rainfall predictions for this region.

  14. Thermalization of oscillator chains with onsite anharmonicity and comparison with kinetic theory

    DOE PAGES

    Mendl, Christian B.; Lu, Jianfeng; Lukkarinen, Jani

    2016-12-02

    We perform microscopic molecular dynamics simulations of particle chains with an onsite anharmonicity to study relaxation of spatially homogeneous states to equilibrium, and directly compare the simulations with the corresponding Boltzmann-Peierls kinetic theory. The Wigner function serves as a common interface between the microscopic and kinetic levels. We demonstrate quantitative agreement after an initial transient time interval. In particular, besides energy conservation, we observe the additional quasiconservation of the phonon density, defined via an ensemble average of the related microscopic field variables and exactly conserved by the kinetic equations. On superkinetic time scales, density quasiconservation is lost while energy remains conserved, and we find evidence for eventual relaxation of the density to its canonical ensemble value. However, the precise mechanism remains unknown and is not captured by the Boltzmann-Peierls equations.

  15. Thermalization of oscillator chains with onsite anharmonicity and comparison with kinetic theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendl, Christian B.; Lu, Jianfeng; Lukkarinen, Jani

    We perform microscopic molecular dynamics simulations of particle chains with an onsite anharmonicity to study relaxation of spatially homogeneous states to equilibrium, and directly compare the simulations with the corresponding Boltzmann-Peierls kinetic theory. The Wigner function serves as a common interface between the microscopic and kinetic levels. We demonstrate quantitative agreement after an initial transient time interval. In particular, besides energy conservation, we observe the additional quasiconservation of the phonon density, defined via an ensemble average of the related microscopic field variables and exactly conserved by the kinetic equations. On superkinetic time scales, density quasiconservation is lost while energy remains conserved, and we find evidence for eventual relaxation of the density to its canonical ensemble value. However, the precise mechanism remains unknown and is not captured by the Boltzmann-Peierls equations.

  16. Recognition Using Hybrid Classifiers.

    PubMed

    Osadchy, Margarita; Keren, Daniel; Raviv, Dolev

    2016-04-01

    A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.

  17. A multi-scale study of the adsorption of lanthanum on the (110) surface of tungsten

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samin, Adib J.; Zhang, Jinsuo

    In this study, we utilize a multi-scale approach to studying lanthanum adsorption on the (110) plane of tungsten. The energy of the system is described from density functional theory calculations within the framework of the cluster expansion method. It is found that including two-body figures up to the sixth nearest neighbor yielded a reasonable agreement with density functional theory calculations as evidenced by the reported cross validation score. The results indicate that the interaction between the adsorbate atoms in the adlayer is important and cannot be ignored. The parameterized cluster expansion expression is used in a lattice gas Monte Carlo simulation in the grand canonical ensemble at 773 K and the adsorption isotherm is recorded. Implications of the obtained results for the pyroprocessing application are discussed.

  18. Quantitative study of fluctuation effects by fast lattice Monte Carlo simulations: Compression of grafted homopolymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Pengfei; Wang, Qiang, E-mail: q.wang@colostate.edu

    2014-01-28

    Using fast lattice Monte Carlo (FLMC) simulations [Q. Wang, Soft Matter 5, 4564 (2009)] and the corresponding lattice self-consistent field (LSCF) calculations, we studied a model system of grafted homopolymers, in both the brush and mushroom regimes, in an explicit solvent compressed by an impenetrable surface. Direct comparisons between FLMC and LSCF results, both of which are based on the same Hamiltonian (thus without any parameter-fitting between them), unambiguously and quantitatively reveal the fluctuations/correlations neglected by the latter. We studied both the structure (including the canonical-ensemble averages of the height and the mean-square end-to-end distances of grafted polymers) and thermodynamics (including the ensemble-averaged reduced energy density and the related internal energy per chain, the differences in the Helmholtz free energy and entropy per chain from the uncompressed state, and the pressure due to compression) of the system. In particular, we generalized the method for calculating pressure in lattice Monte Carlo simulations proposed by Dickman [J. Chem. Phys. 87, 2246 (1987)], and combined it with the Wang-Landau optimized-ensemble sampling [S. Trebst, D. A. Huse, and M. Troyer, Phys. Rev. E 70, 046701 (2004)] to efficiently and accurately calculate the free energy difference and the pressure due to compression. While we mainly examined the effects of the degree of compression, the distance between the nearest-neighbor grafting points, the reduced number of chains grafted at each grafting point, and the system fluctuations/correlations in an athermal solvent, the θ-solvent is also considered in some cases.

  19. Condensate statistics in interacting and ideal dilute bose gases

    PubMed

    Kocharovsky; Kocharovsky; Scully

    2000-03-13

    We obtain analytical formulas for the statistics, in particular, for the characteristic function and all cumulants, of the Bose-Einstein condensate in dilute weakly interacting and ideal equilibrium gases in the canonical ensemble via the particle-number-conserving operator formalism of Girardeau and Arnowitt. We prove that the ground-state occupation statistics is not Gaussian even in the thermodynamic limit. We calculate the effect of Bogoliubov coupling on suppression of ground-state occupation fluctuations and show that they are governed by a pair-correlation, squeezing mechanism.
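    For the ideal gas, the canonical-ensemble occupation statistics referred to above can be computed exactly with the standard partition-function recursion Z_N = (1/N) Σ_{k=1}^{N} Z_1(kβ) Z_{N-k}. A sketch for N bosons in a 1D harmonic trap, which is a simpler spectrum than the dilute interacting gas treated in the paper:

```python
import numpy as np

def Z1(beta):
    # single-particle partition function for a 1D harmonic spectrum eps_j = j
    # (units hbar*omega = 1; zero-point energy dropped, so eps_0 = 0)
    return 1.0 / (1.0 - np.exp(-beta))

def canonical_Z(N, beta):
    # bosonic recursion: Z_n = (1/n) * sum_{k=1}^{n} Z1(k*beta) * Z_{n-k}
    Z = np.empty(N + 1)
    Z[0] = 1.0
    for n in range(1, N + 1):
        Z[n] = sum(Z1(k * beta) * Z[n - k] for k in range(1, n + 1)) / n
    return Z

def mean_n0(N, beta):
    # exact canonical ground-state occupation (with eps_0 = 0):
    # <n_0> = sum_{k=1}^{N} Z_{N-k} / Z_N
    Z = canonical_Z(N, beta)
    return sum(Z[N - k] for k in range(1, N + 1)) / Z[N]

cold = mean_n0(40, beta=5.0)     # low temperature: nearly all 40 atoms condensed
hot = mean_n0(40, beta=0.05)     # high temperature: occupation spread over many levels
```

The recursion fixes the particle number exactly, which is the point of working in the canonical rather than grand canonical ensemble: the grand canonical treatment famously overestimates ground-state number fluctuations below the condensation temperature.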

  20. Thermostatted molecular dynamics: How to avoid the Toda demon hidden in Nose-Hoover dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holian, B.L.; Voter, A.F.; Ravelo, R.

    The Nose-Hoover thermostat, which is often used in the hope of modifying molecular dynamics trajectories in order to achieve canonical-ensemble averages, has hidden in it a Toda "demon," which can give rise to unwanted, noncanonical undulations in the instantaneous kinetic temperature. We show how these long-lived oscillations arise from insufficient coupling of the thermostat to the atoms, and give straightforward, practical procedures for avoiding this weak-coupling pathology in isothermal molecular dynamics simulations.
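    The weak-coupling pathology is easiest to see in the single-oscillator case, the standard worst case for Nose-Hoover dynamics. A sketch comparing a weakly coupled (large thermostat mass Q) and a more strongly coupled thermostat; all parameter values are illustrative:

```python
import numpy as np

# Nose-Hoover thermostatted harmonic oscillator:
#   dx/dt = p,  dp/dt = -x - xi*p,  dxi/dt = (p^2 - T)/Q
# Large Q (weak coupling) produces slow, long-lived undulations of the
# instantaneous kinetic temperature p^2 instead of prompt canonical control.
T_target = 1.0
dt, steps = 0.01, 200_000

def run(Q):
    x, p, xi = 1.0, 0.0, 0.0
    kinetic = np.empty(steps)
    for n in range(steps):
        p += dt * (-x - xi * p)      # semi-implicit update of the momentum
        x += dt * p                  # position update with the new momentum
        xi += dt * (p * p - T_target) / Q
        kinetic[n] = p * p           # instantaneous kinetic temperature
    return kinetic

weak = run(Q=100.0)    # weakly coupled thermostat: Toda-demon-like undulations
strong = run(Q=1.0)    # more strongly coupled thermostat
```

For both values of Q the long-time average of p² approaches the target temperature (the thermostat equation forces this whenever xi stays bounded), but for large Q the instantaneous kinetic temperature undulates slowly around it, which is the signature discussed in the abstract.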

  1. Thermodynamic instability of topological black holes in Gauss-Bonnet gravity with a generalized electrodynamics

    NASA Astrophysics Data System (ADS)

    Hendi, S. H.; Panahiyan, S.

    2014-12-01

    Motivated by string corrections on the gravity and electrodynamics sides, we consider a quadratic Maxwell invariant term as a correction to the Maxwell Lagrangian and obtain exact solutions for higher dimensional topological black holes in Gauss-Bonnet gravity. We first investigate the asymptotically flat solutions and obtain conserved and thermodynamic quantities which satisfy the first law of thermodynamics. We also analyze the thermodynamic stability of the solutions by calculating the heat capacity and the Hessian matrix. Then, we focus on horizon-flat solutions with an anti-de Sitter (AdS) asymptote and produce a rotating spacetime with a suitable transformation. In addition, we calculate the conserved and thermodynamic quantities for asymptotically AdS black branes which satisfy the first law of thermodynamics. Finally, we apply a thermodynamic instability criterion to investigate the effects of nonlinear electrodynamics in the canonical and grand canonical ensembles.

  2. Drude weight of the spin-(1)/(2) XXZ chain: Density matrix renormalization group versus exact diagonalization

    NASA Astrophysics Data System (ADS)

    Karrasch, C.; Hauschild, J.; Langer, S.; Heidrich-Meisner, F.

    2013-06-01

    We revisit the problem of the spin Drude weight D of the integrable spin-1/2 XXZ chain using two complementary approaches, exact diagonalization (ED) and the time-dependent density-matrix renormalization group (tDMRG). We pursue two main goals. First, we present extensive results for the temperature dependence of D. By exploiting time translation invariance within tDMRG, one can extract D for significantly lower temperatures than in previous tDMRG studies. Second, we discuss the numerical quality of the tDMRG data and elaborate on details of the finite-size scaling of the ED results, comparing calculations carried out in the canonical and grand-canonical ensembles. Furthermore, we analyze the behavior of the Drude weight as the point with SU(2)-symmetric exchange is approached and discuss the relative contribution of the Drude weight to the sum rule as a function of temperature.

  3. Fractal Structures and Scaling Laws in the Universe:. Statistical Mechanics of the Self-Gravitating Gas

    NASA Astrophysics Data System (ADS)

    de Vega, H. J.; Sánchez, N.; Combes, F.

    2000-09-01

    Fractal structures are observed in the universe in two very different ways. Firstly, in the gas forming the cold interstellar medium, on scales from 10^-4 pc up to 100 pc. Secondly, the galaxy distribution has been observed to be fractal on scales up to hundreds of Mpc. We give here a short review of the statistical mechanical (and field theoretical) approach developed by us for the cold interstellar medium (ISM) and the large-scale structure of the universe. We consider a non-relativistic self-gravitating gas in thermal equilibrium at temperature T inside a volume V. The statistical mechanics of such a system has special features and, as is known, the thermodynamical limit does not exist in its customary form. Moreover, the treatments through microcanonical, canonical and grand canonical ensembles yield different results. We present here for the first time the equation of state for the self-gravitating gas in the canonical ensemble. We find that it has the form p = [NT/V] f(η), where p is the pressure, N is the number of particles and η ≡ Gm²N/(V^{1/3}T). The N → ∞ and V → ∞ limit exists keeping η fixed. We compute the function f(η) using Monte Carlo simulations and, for small η, analytically. We compute the thermodynamic quantities of the system such as free energy, entropy, chemical potential, specific heat, compressibility and speed of sound. We reproduce the well-known gravitational phase transition associated with the Jeans instability: namely, a gaseous phase for η < η_c and a condensed phase for η > η_c. Moreover, we derive the precise behaviour of the physical quantities near the transition. In particular, the pressure vanishes as p ∝ (η_c - η)^B with B ≈ 0.2 and η_c ≈ 1.6, and the energy fluctuations diverge as (η_c - η)^(B-1). The speed of sound decreases monotonically with η and approaches the value √(T/6) at the transition.

  4. Direct calculation of liquid-vapor phase equilibria from transition matrix Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Errington, Jeffrey R.

    2003-06-01

    An approach for directly determining the liquid-vapor phase equilibrium of a model system at any temperature along the coexistence line is described. The method relies on transition matrix Monte Carlo ideas developed by Fitzgerald, Picard, and Silver [Europhys. Lett. 46, 282 (1999)]. During a Monte Carlo simulation attempted transitions between states along the Markov chain are monitored as opposed to tracking the number of times the chain visits a given state as is done in conventional simulations. Data collection is highly efficient and very precise results are obtained. The method is implemented in both the grand canonical and isothermal-isobaric ensemble. The main result from a simulation conducted at a given temperature is a density probability distribution for a range of densities that includes both liquid and vapor states. Vapor pressures and coexisting densities are calculated in a straightforward manner from the probability distribution. The approach is demonstrated with the Lennard-Jones fluid. Coexistence properties are directly calculated at temperatures spanning from the triple point to the critical point.
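    The transition-matrix bookkeeping at the heart of the method can be sketched on a toy macrostate coordinate: acceptance probabilities (not just realized moves) between neighbouring states are accumulated in a collection matrix, and the macrostate probability distribution is then reconstructed from detailed balance. The target distribution below is an assumed stand-in for the grand-canonical density distribution:

```python
import numpy as np

rng = np.random.default_rng(4)

# Macrostates N = 0..20 with a known target ln P(N) standing in for the
# grand-canonical density distribution (toy weights, assumed for illustration).
Nmax = 20
lnP_true = -((np.arange(Nmax + 1) - 10.0) ** 2) / 50.0

C = np.zeros((Nmax + 1, 3))      # collection matrix for moves -1, 0, +1
N = 10
for _ in range(200_000):
    step = rng.choice((-1, 1))
    Np = N + step
    a = np.exp(min(0.0, lnP_true[Np] - lnP_true[N])) if 0 <= Np <= Nmax else 0.0
    C[N, 1 + step] += a          # record the acceptance *probability* ...
    C[N, 1] += 1.0 - a           # ... not merely whether the move happened
    if rng.random() < a:         # still perform ordinary Metropolis sampling
        N = Np

# Reconstruct ln P from detailed balance:
#   ln P(N+1) - ln P(N) = ln T(N -> N+1) - ln T(N+1 -> N)
T = C / C.sum(axis=1, keepdims=True)
lnP_est = np.zeros(Nmax + 1)
for n in range(Nmax):
    lnP_est[n + 1] = lnP_est[n] + np.log(T[n, 2] / T[n + 1, 0])
lnP_est += lnP_true[10] - lnP_est[10]    # fix the arbitrary additive constant
```

Recording acceptance probabilities rather than visit counts is what makes the data collection so efficient: every attempted move contributes information, even when it is rejected.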

  5. Forces and stress in second order Møller-Plesset perturbation theory for condensed phase systems within the resolution-of-identity Gaussian and plane waves approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Ben, Mauro, E-mail: mauro.delben@chem.uzh.ch; Hutter, Jürg, E-mail: hutter@chem.uzh.ch; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch

    The forces acting on the atoms as well as the stress tensor are crucial ingredients for calculating the structural and dynamical properties of systems in the condensed phase. Here, these derivatives of the total energy are evaluated for the second-order Møller-Plesset perturbation energy (MP2) in the framework of the resolution of identity Gaussian and plane waves method, in a way that is fully consistent with how the total energy is computed. This consistency is non-trivial, given the different ways employed to compute Coulomb, exchange, and canonical four center integrals, and allows, for example, for energy conserving dynamics in various ensembles. Based on this formalism, a massively parallel algorithm has been developed for finite and extended systems. The designed parallel algorithm displays, with respect to the system size, cubic, quartic, and quintic requirements, respectively, for the memory, communication, and computation. All these requirements are reduced with an increasing number of processes, and the measured performance shows excellent parallel scalability and efficiency up to thousands of nodes. Additionally, the computationally more demanding quintic scaling steps can be accelerated by employing graphics processing units (GPUs), showing, for large systems, a gain of almost a factor of two compared to the standard central processing unit-only case. In this way, the evaluation of the derivatives of the RI-MP2 energy can be performed within a few minutes for systems containing hundreds of atoms and thousands of basis functions. With good time to solution, the implementation thus opens the possibility to perform molecular dynamics (MD) simulations in various ensembles (microcanonical and isobaric-isothermal) at the MP2 level of theory. Geometry optimization, full cell relaxation, and energy conserving MD simulations have been performed for a variety of molecular crystals including NH₃, CO₂, formic acid, and benzene.

  6. Reduction hybrid artifacts of EMG-EOG in electroencephalography evoked by prefrontal transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Wan, Xiaohong; Zeng, Ke; Ni, Yinmei; Qiu, Lirong; Li, Xiaoli

    2016-12-01

    Objective. When prefrontal transcranial magnetic stimulation (p-TMS) is performed, it may evoke hybrid artifacts mixing muscle and blink activity in EEG recordings. Reducing this kind of hybrid artifact challenges traditional preprocessing methods. We aim to develop a method for removing p-TMS-evoked hybrid artifacts. Approach. We propose a novel method, applied as a post-processing step after independent component analysis (ICA), to reduce the p-TMS-evoked hybrid artifact. Ensemble empirical mode decomposition (EEMD) was used to decompose the signal into multiple components; the components were then separated, and the artifact reduced, by blind source separation (BSS). Three standard BSS methods were tested: ICA, independent vector analysis, and canonical correlation analysis (CCA). Main results. Synthetic results showed that EEMD-CCA outperformed the others as an ICA post-processing step for hybrid artifact reduction, and its superiority was clearer at lower signal-to-noise ratio (SNR). In application to a real experiment, the SNR was significantly increased and the p-TMS evoked potential could be recovered from the artifact-contaminated signal. Our proposed method can effectively reduce p-TMS-evoked hybrid artifacts. Significance. Our proposed method may facilitate future prefrontal TMS-EEG research.
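    Of the three BSS methods compared, CCA is the simplest to sketch: canonical correlation analysis between the signal and a one-sample-delayed copy orders sources by lag-one autocorrelation, and low-autocorrelation (EMG-like) sources are discarded. A self-contained toy version on synthetic data; the mixing matrix, signals, and noise levels are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 3-channel "EEG": a smooth 10 Hz rhythm plus a broadband
# EMG-like artifact, with invented mixing weights and small sensor noise.
t = np.arange(2000) / 250.0
rhythm = np.sin(2 * np.pi * 10.0 * t)
artifact = rng.normal(0.0, 1.0, t.size)
A = np.array([[1.0, 0.5], [0.8, 1.0], [0.6, 0.9]])
X = A @ np.vstack([rhythm, artifact]) + 0.01 * rng.normal(size=(3, t.size))

def cca_denoise(X, keep):
    # BSS-CCA: canonical correlation analysis between X(t) and X(t-1)
    # orders sources by lag-1 autocorrelation; broadband muscle activity
    # lands in the low-autocorrelation components, which are zeroed out.
    mu = X.mean(axis=1, keepdims=True)
    Y, Z = X[:, :-1] - mu, X[:, 1:] - mu
    Cyy, Czz, Cyz = Y @ Y.T, Z @ Z.T, Y @ Z.T
    M = np.linalg.solve(Cyy, Cyz) @ np.linalg.solve(Czz, Cyz.T)
    vals, W = np.linalg.eig(M)
    W = W[:, np.argsort(-vals.real)].real     # descending autocorrelation
    S = W.T @ (X - mu)                        # estimated sources
    S[keep:, :] = 0.0                         # drop artifact-like sources
    return np.linalg.pinv(W.T) @ S + mu       # project back to channel space

X_clean = cca_denoise(X, keep=1)
```

Keeping only the top-autocorrelation source recovers the smooth rhythm's contribution to each channel while suppressing the broadband component, which is the mechanism exploited (in combination with EEMD) by the method above.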

  7. Gravitational instability of slowly rotating isothermal spheres

    NASA Astrophysics Data System (ADS)

    Chavanis, P. H.

    2002-12-01

    We discuss the statistical mechanics of rotating self-gravitating systems by allowing properly for the conservation of angular momentum. We study analytically the case of slowly rotating isothermal spheres by expanding the solutions of the Boltzmann-Poisson equation in a series of Legendre polynomials, adapting the procedure introduced by Chandrasekhar (1933) for distorted polytropes. We show how the classical spiral of Lynden-Bell & Wood (1967) in the temperature-energy plane is deformed by rotation. We find that gravitational instability occurs sooner in the microcanonical ensemble and later in the canonical ensemble. According to standard turning point arguments, the onset of the collapse coincides with the minimum energy or minimum temperature state in the series of equilibria. Interestingly, it happens to be close to the point of maximum flattening. We generalize the singular isothermal solution to the case of a slowly rotating configuration. We also consider slowly rotating configurations of the self-gravitating Fermi gas at non-zero temperature.

  8. Effective Debye length in closed nanoscopic systems: a competition between two length scales.

    PubMed

    Tessier, Frédéric; Slater, Gary W

    2006-02-01

    The Poisson-Boltzmann equation (PBE) is widely employed in fields where the thermal motion of free ions is relevant, in particular in situations involving electrolytes in the vicinity of charged surfaces. The applications of this non-linear differential equation usually concern open systems (in osmotic equilibrium with an electrolyte reservoir, a semi-grand canonical ensemble), while solutions for closed systems (where the number of ions is fixed, a canonical ensemble) are either not appropriately distinguished from the former or are dismissed as a numerical calculation exercise. We consider herein the PBE for a confined, symmetric, univalent electrolyte and quantify how, in addition to the Debye length, its solution also depends on a second length scale, which embodies the contribution of ions released by the surface (which may be significant in high surface-to-volume ratio micro- or nanofluidic capillaries). We thus establish that there are four distinct regimes for such systems, corresponding to the limits of the two parameters. We also show how the PBE in this case can be formulated in a familiar way by simply replacing the traditional Debye length by an effective Debye length, the value of which is obtained numerically from conservation conditions. But we also show that a simple expression for the value of the effective Debye length, obtained within a crude approximation, remains accurate even as the system size is reduced to nanoscopic dimensions, and well beyond the validity range typically associated with the solution of the PBE.
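    The crude approximation mentioned at the end has a one-line flavour: estimate an effective Debye length from the total ion content of the closed volume, counting both the added salt and the counterions released by the charged walls. A sketch with hypothetical helper names and illustrative slit parameters (1:1 electrolyte in water at room temperature; this is not the paper's exact expression):

```python
import numpy as np

# Physical constants (SI units)
e = 1.602e-19            # elementary charge, C
kT = 4.11e-21            # thermal energy at ~298 K, J
eps = 80.0 * 8.854e-12   # permittivity of water, F/m
NA = 6.022e23            # Avogadro's number

def debye_length(c_mol_per_L):
    # standard Debye length of a symmetric 1:1 electrolyte:
    # lambda_D = sqrt(eps*kT / (2 e^2 c)), c in ions per m^3 per species
    c = c_mol_per_L * 1e3 * NA
    return np.sqrt(eps * kT / (2.0 * e ** 2 * c))

def effective_debye_length(c_salt, sigma, h):
    # crude closed-system estimate: counterions released by two walls of
    # surface charge density sigma (C/m^2) into a slit of half-width h are
    # folded into an equivalent bulk concentration before applying the
    # usual Debye formula (factor 1/2 because they are a single species).
    c_counter = sigma / (e * h) / (1e3 * NA)     # mol/L
    return debye_length(c_salt + 0.5 * c_counter)
```

At 0.1 M the standard formula gives the familiar value of about 1 nm; as the slit shrinks or the surface charge grows, the counterion term dominates and the effective screening length falls below the reservoir value, which is the qualitative effect the abstract describes.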

  9. The interplay of intrinsic disorder and macromolecular crowding on α-synuclein fibril formation

    NASA Astrophysics Data System (ADS)

    Shirai, Nobu C.; Kikuchi, Macoto

    2016-02-01

    α-synuclein (α-syn) is an intrinsically disordered protein which is considered to be one of the causes of Parkinson's disease. This protein forms amyloid fibrils when in a highly concentrated solution. The fibril formation of α-syn is induced not only by increases in α-syn concentration but also by macromolecular crowding. In order to investigate the coupled effect of the intrinsic disorder of α-syn and macromolecular crowding, we construct a lattice gas model of α-syn in contact with a crowding agent reservoir, based on statistical mechanics. The main assumption is that α-syn can be expressed as coarse-grained particles with internal states coupled to an effective volume, where disordered states are modeled by larger particles with larger internal entropy than the other states. Thanks to the simplicity of the model, we can exactly calculate the number of conformations of crowding agents, and this enables us to prove that the original grand canonical ensemble with a crowding agent reservoir is mathematically equivalent to a canonical ensemble without crowding agents. In this formulation, the effect of macromolecular crowding is absorbed into the internal entropy of the disordered states, which clearly shows that crowding reduces this internal entropy. Based on Monte Carlo simulations, we provide scenarios of crowding-induced fibril formation. We also discuss the recent controversy over the existence of helically folded tetramers of α-syn, and suggest that macromolecular crowding is the key to resolving the controversy.

  10. Molecular Simulation of the Phase Diagram of Methane Hydrate: Free Energy Calculations, Direct Coexistence Method, and Hyperparallel Tempering.

    PubMed

    Jin, Dongliang; Coasne, Benoit

    2017-10-24

    Different molecular simulation strategies are used to assess the stability of methane hydrate under various temperature and pressure conditions. First, using two water molecular models, free energy calculations combining the Einstein molecule approach with semigrand Monte Carlo simulations are used to determine the pressure-temperature phase diagram of methane hydrate. With these calculations, we also estimate the chemical potentials of water and methane, and the methane occupancy, at coexistence. Second, we also consider two other advanced molecular simulation techniques that allow probing the phase diagram of methane hydrate: the direct coexistence method in the grand canonical ensemble and the hyperparallel tempering Monte Carlo method. These two direct techniques are found to provide stability conditions consistent with the pressure-temperature phase diagram obtained using rigorous free energy calculations. The phase diagram obtained in this work, which is consistent with previous simulation studies, is close to its experimental counterpart provided the TIP4P/Ice model is used to describe the water molecule.

  11. Superfluid-insulator transitions of two-species bosons in an optical lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isacsson, A.; Department of Physics, Yale University, P.O. Box 208120, New Haven, Connecticut 06520-8120; Cha, M.-C.

    2005-11-01

    We consider the two-species bosonic Hubbard model with variable interspecies interaction and hopping strength in the grand canonical ensemble with a common chemical potential. We analyze the superfluid-insulator (SI) transition for the relevant parameter regimes and compute the ground state phase diagram in the vicinity of odd-filling Mott states. We find that the superfluid-insulator transition occurs with (a) simultaneous onset of superfluidity of both species, (b) coexistence of a Mott insulating state of one species and superfluidity of the other, or, in the case of unit filling, (c) complete depopulation of one species. The superfluid-insulator transition can be first order in a large region of the phase diagram. We develop a variational mean-field method which takes into account the effect of second-order quantum fluctuations on the superfluid-insulator transition and corroborate the mean-field phase diagram using a quantum Monte Carlo study.

  12. Shallow cumuli ensemble statistics for development of a stochastic parameterization

    NASA Astrophysics Data System (ADS)

    Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs

    2014-05-01

    According to a conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (a Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud-tracking algorithm, followed by conditional sampling of clouds at the cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on these empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory.
    The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution, and how does this affect the results from the stochastic model? Is the memory provided by the finite lifetime of individual clouds of importance for the ensemble statistics? We also test for the minimal information, given as input to the stochastic model, that is able to reproduce the ensemble mean statistics and the variability in a convective ensemble. An important property of the resulting distribution of the sub-grid convective states is its scale-adaptivity: the smaller the grid size, the broader the compound distribution of the sub-grid states.
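
    A minimal version of the compound random process described above can be sketched as follows: the number of clouds in a grid box is Poisson-distributed, and each cloud contributes an exponentially distributed (Boltzmann) cloud-base mass flux. Parameter values and function names are illustrative, and the generalized exponential distribution found for shallow cumuli is replaced here by the simple exponential of the Craig and Cohen theory.

```python
import math
import random

def sample_poisson(lam, rng):
    """Poisson sample via Knuth's method (adequate for moderate lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_subgrid_mass_flux(mean_n_clouds, mean_flux_per_cloud, rng):
    """One realization of the total cloud-base mass flux in a grid box:
    N ~ Poisson clouds, each contributing an Exp-distributed mass flux."""
    n = sample_poisson(mean_n_clouds, rng)
    return sum(rng.expovariate(1.0 / mean_flux_per_cloud) for _ in range(n))

rng = random.Random(0)
draws = [sample_subgrid_mass_flux(50, 2.0e7, rng) for _ in range(20000)]
mean = sum(draws) / len(draws)        # ~ mean_n_clouds * mean_flux_per_cloud
var = sum((x - mean) ** 2 for x in draws) / len(draws)
print(mean, math.sqrt(var) / mean)    # relative spread ~ sqrt(2/<N>)
```

    For this compound Poisson process the relative spread of the total flux is sqrt(2/⟨N⟩), so shrinking the grid box (and hence ⟨N⟩) broadens the distribution of sub-grid states: exactly the scale-adaptivity noted above.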

  13. Phases of global AdS black holes

    NASA Astrophysics Data System (ADS)

    Basu, Pallab; Krishnan, Chethan; Subramanian, P. N. Bala

    2016-06-01

    We study the phases of gravity coupled to a charged scalar and gauge field in an asymptotically anti-de Sitter spacetime (AdS4) in the grand canonical ensemble. For the conformally coupled scalar, an intricate phase diagram is charted out between the four relevant solutions: global AdS, boson star, Reissner-Nordstrom black hole and the hairy black hole. The nature of the phase diagram undergoes qualitative changes as the charge of the scalar is changed, which we discuss. We also discuss the new features that arise in the extremal limit.

  14. Dilational symmetry-breaking in thermodynamics

    NASA Astrophysics Data System (ADS)

    Lin, Chris L.; Ordóñez, Carlos R.

    2017-04-01

    Using thermodynamic relations and dimensional analysis we derive a general formula for the thermodynamical trace 2ℰ − DP for nonrelativistic systems and ℰ − DP for relativistic systems, where D is the number of spatial dimensions, in terms of the microscopic scales of the system within the grand canonical ensemble. We demonstrate the formula for several cases, including anomalous systems which develop scales through dimensional transmutation. Using this relation, we make explicit the connection between dimensional analysis and the virial theorem. This paper is focused mainly on the non-relativistic aspects of this relation.

  15. First-Order Interfacial Transformations with a Critical Point: Breaking the Symmetry at a Symmetric Tilt Grain Boundary

    NASA Astrophysics Data System (ADS)

    Yang, Shengfeng; Zhou, Naixie; Zheng, Hui; Ong, Shyue Ping; Luo, Jian

    2018-02-01

    First-order interfacial phaselike transformations that break the mirror symmetry of the symmetric Σ5 (210) tilt grain boundary (GB) are discovered by combining a modified genetic algorithm with hybrid Monte Carlo and molecular dynamics simulations. Density functional theory calculations confirm this prediction. This first-order coupled structural and adsorption transformation, which produces two variants of asymmetric bilayers, vanishes at an interfacial critical point. A GB complexion (phase) diagram is constructed via semigrand canonical ensemble atomistic simulations for the first time.

  16. Gibbs measures based on 1d (an)harmonic oscillators as mean-field limits

    NASA Astrophysics Data System (ADS)

    Lewin, Mathieu; Nam, Phan Thành; Rougerie, Nicolas

    2018-04-01

    We prove that Gibbs measures based on 1D defocusing nonlinear Schrödinger functionals with sub-harmonic trapping can be obtained as the mean-field/large temperature limit of the corresponding grand-canonical ensemble for many bosons. The limit measure is supported on Sobolev spaces of negative regularity, and the corresponding density matrices are not trace-class. The general proof strategy is that of a previous paper of ours, but we have to complement it with Hilbert-Schmidt estimates on reduced density matrices.

  17. Structure and energy of non-canonical basepairs: comparison of various computational chemistry methods with crystallographic ensembles.

    PubMed

    Panigrahi, Swati; Pal, Rahul; Bhattacharyya, Dhananjay

    2011-12-01

    Different types of non-canonical basepairs, in addition to the Watson-Crick ones, are observed quite frequently in RNA. Their importance in the three-dimensional structure is not fully understood, but various roles have been proposed by different groups. We have analyzed the energetics and geometry of the 32 most frequently observed basepairs in functional RNA crystal structures using different popular empirical, semi-empirical and ab initio quantum chemical methods, and compared their optimized geometry with the crystal data. These basepairs are classified into three categories: polar, non-polar and sugar-mediated, depending on the types of atoms involved in hydrogen bonding. In the case of polar basepairs, most of the methods give rise to optimized structures close to their initial geometry. The interaction energies also follow similar trends, with the polar ones having more attractive interaction energies. Some of the C-H...O/N hydrogen bond mediated non-polar basepairs are also found to be significantly stable in terms of their interaction energy values. A few polar basepairs having amino or carboxyl groups not hydrogen bonded to anything, such as G:G H:W C, show large flexibility. Most of the non-polar basepairs, except A:G s:s T and A:G w:s C, are found to be stable, indicating that the C-H...O/N interaction also plays a prominent role in stabilizing the basepairs. The sugar-mediated basepairs show variability in their structures, due to the involvement of the flexible ribose sugar. These results presumably indicate that most of the polar basepairs, along with a few non-polar ones, act as seeds for RNA folding, while a few may act as conformational switches in RNA.

  18. Methane adsorption in nanoporous carbon: the numerical estimation of optimal storage conditions

    NASA Astrophysics Data System (ADS)

    Ortiz, L.; Kuchta, B.; Firlej, L.; Roth, M. W.; Wexler, C.

    2016-05-01

    The efficient storage and transportation of natural gas is one of the most important enabling technologies for use in energy applications. Adsorption in porous systems, which will allow the transportation of high-density fuel under low pressure, is one of the possible solutions. We present and discuss extensive grand canonical Monte Carlo (GCMC) simulation results of the adsorption of methane into slit-shaped graphitic pores of various widths (between 7 Å and 50 Å), and at pressures P between 0 bar and 360 bar. Our results shed light on the dependence of film structure on pore width and pressure. For large widths, we observe multi-layer adsorption at supercritical conditions, with excess amounts even at large distances from the pore walls originating from the attractive interaction exerted by a very high-density film in the first layer. We are also able to successfully model the experimental adsorption isotherms of heterogeneous activated carbon samples by means of an ensemble average of the pore widths, based exclusively on the pore-size distributions (PSD) calculated from subcritical nitrogen adsorption isotherms. Finally, we propose a new formula, based on the PSD ensemble averages, to calculate the isosteric heat of adsorption of heterogeneous systems from single-pore-width calculations. The methods proposed here will contribute to the rational design and optimization of future adsorption-based storage tanks.
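
    The grand canonical Monte Carlo machinery behind such adsorption studies can be illustrated with the textbook insertion/deletion acceptance rules (as in Frenkel and Smit); the toy below samples an ideal gas, for which ΔU = 0, so that ⟨N⟩ should converge to zV = exp(βμ)V/Λ³. This is a generic sketch under those simplifying assumptions, not the simulation code used in the study.

```python
import random

def run_ideal_gcmc(zV, n_steps=200000, seed=1):
    """Toy grand canonical MC for an ideal gas (Delta U = 0).
    Insertion:  acc = min(1, zV/(N+1)),  z = exp(beta*mu)/Lambda^3
    Deletion:   acc = min(1, N/zV)
    For the ideal gas <N> converges to z*V (Poisson statistics)."""
    rng = random.Random(seed)
    N, acc, count = 0, 0.0, 0
    for step in range(n_steps):
        if rng.random() < 0.5:                  # attempt insertion
            if rng.random() < min(1.0, zV / (N + 1)):
                N += 1
        elif N > 0:                             # attempt deletion
            if rng.random() < min(1.0, N / zV):
                N -= 1
        if step >= n_steps // 2:                # accumulate after burn-in
            acc += N
            count += 1
    return acc / count

print(run_ideal_gcmc(20.0))   # ~20
```

    In a real adsorption simulation the same moves carry the Boltzmann factor exp(−βΔU) for the fluid-wall and fluid-fluid interactions, and the excess amount adsorbed follows from ⟨N⟩ minus the bulk-gas contribution.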

  19. An Information Theory Approach to Nonlinear, Nonequilibrium Thermodynamics

    NASA Astrophysics Data System (ADS)

    Rogers, David M.; Beck, Thomas L.; Rempe, Susan B.

    2011-10-01

    Using the problem of ion channel thermodynamics as an example, we illustrate the idea of building up complex thermodynamic models by successively adding physical information. We present a new formulation of information algebra that generalizes methods of both information theory and statistical mechanics. From this foundation we derive a theory for ion channel kinetics, identifying a nonequilibrium `process' free energy functional in addition to the well-known integrated work functionals. The Gibbs-Maxwell relation for the free energy functional is a Green-Kubo relation, applicable arbitrarily far from equilibrium, that captures the effect of non-local and time-dependent behavior from transient thermal and mechanical driving forces. Comparing the physical significance of the Lagrange multipliers to the canonical ensemble suggests definitions of nonequilibrium ensembles at constant capacitance or inductance in addition to constant resistance. Our result is that statistical mechanical descriptions derived from a few primitive algebraic operations on information can be used to create experimentally-relevant and computable models. By construction, these models may use information from more detailed atomistic simulations. Two surprising consequences to be explored in further work are that (in)distinguishability factors are automatically predicted from the problem formulation and that a direct analogue of the second law for thermodynamic entropy production is found by considering information loss in stochastic processes. The information loss identifies a novel contribution from the instantaneous information entropy that ensures non-negative loss.

  20. A Canonical Response of Precipitation Characteristics to Global Warming from CMIP5 Models

    NASA Technical Reports Server (NTRS)

    Lau, William K.-M.; Wu, H.-T.; Kim, K.-M.

    2013-01-01

    In this study, we find from analyses of projections of 14 CMIP5 models a robust, canonical global response in rainfall characteristics to a warming climate. Under a scenario of a 1% per year increase in CO2 emission, the model ensemble projects globally more heavy precipitation (+7 ± 2.4%/K), less moderate precipitation (−2.5 ± 0.6%/K), more light precipitation (+1.8 ± 1.3%/K), and increased length of dry (no-rain) periods (+4.7 ± 2.1%/K). Regionally, a majority of the models project a consistent response with more heavy precipitation over climatologically wet regions of the deep tropics, especially the equatorial Pacific Ocean and the Asian monsoon regions, and more dry periods over the land areas of the subtropics and the tropical marginal convective zones. Our results suggest that increased CO2 emissions induce a global adjustment in circulation and moisture availability manifested in basic changes in global precipitation characteristics, including increasing risks of severe floods and droughts in preferred geographic locations worldwide.

  1. Defect topologies in chiral liquid crystals confined to mesoscopic channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlotthauer, Sergej, E-mail: s.schlotthauer@mailbox.tu-berlin.de; Skutnik, Robert A.; Stieger, Tillmann

    2015-05-21

    We present Monte Carlo simulations in the grand canonical and canonical ensembles of a chiral liquid crystal confined to mesochannels of variable sizes and geometries. The mesochannels are taken to be quasi-infinite in one dimension but finite in the two other directions. Under the thermodynamic conditions chosen, and for a selected value of the chirality coupling constant, the bulk liquid crystal exhibits structural characteristics of a blue phase II. This is established through the tetrahedral symmetry of disclination lines and the characteristic simple-cubic arrangement of double-twist helices formed by the liquid-crystal molecules along all three axes of a Cartesian coordinate system. If the blue phase II is then exposed to confinement, the interplay between its helical structure, various anchoring conditions at the walls of the mesochannels, and the shape of the mesochannels gives rise to a broad variety of novel, qualitative disclination-line structures that are reported here for the first time.

  2. Matrix quantum mechanics on S1 /Z2

    NASA Astrophysics Data System (ADS)

    Betzios, P.; Gürsoy, U.; Papadoulaki, O.

    2018-03-01

    We study Matrix Quantum Mechanics on the Euclidean time orbifold S1 /Z2. Upon Wick rotation to Lorentzian time and taking the double-scaling limit this theory provides a toy model for a big-bang/big crunch universe in two dimensional non-critical string theory where the orbifold fixed points become cosmological singularities. We derive the MQM partition function both in the canonical and grand canonical ensemble in two different formulations and demonstrate agreement between them. We pinpoint the contribution of twisted states in both of these formulations either in terms of bi-local operators acting at the end-points of time or branch-cuts on the complex plane. We calculate, in the matrix model, the contribution of the twisted states to the torus level partition function explicitly and show that it precisely matches the world-sheet result, providing a non-trivial test of the proposed duality. Finally we discuss some interesting features of the partition function and the possibility of realising it as a τ-function of an integrable hierarchy.

  3. Relevance of Bose-Einstein condensation to the interference of two independent Bose gases

    NASA Astrophysics Data System (ADS)

    Iazzi, Mauro; Yuasa, Kazuya

    2011-03-01

    Interference of two independently prepared ideal Bose gases is discussed, on the basis of the idea of measurement-induced interference. It is known that, even if the number of atoms in each gas is individually fixed and finite and the symmetry of the system is not broken, an interference pattern is observed on each single snapshot. The key role is played by the Hanbury Brown and Twiss effect, which leads to an oscillating pattern of the cloud of identical atoms. How essential, then, is Bose-Einstein condensation to the interference? In this work, we describe two ideal Bose gases trapped in two separate three-dimensional harmonic traps at a finite temperature T, using canonical ensembles (with fixed numbers of atoms). We compute the full statistics of the snapshot profiles of the expanding and overlapping gases released from the traps. We obtain a simple formula valid for finite T, which shows that the average fringe spectrum (average fringe contrast) is given by the purity of each gas. The purity is known to be a good measure of condensation, and the formula clarifies the relevance of the condensation to the interference. The results for T=0, previously known in the literature, can be recovered from our analysis. The fluctuation of the interference spectrum is also studied, and it is shown that the fluctuation is vanishingly small only below the critical temperature Tc, meaning that an interference pattern is certainly observed on every snapshot below Tc. The fact that the number of atoms is fixed in the canonical ensemble is crucial to this vanishing fluctuation.

  4. Measuring excess free energies of self-assembled membrane structures.

    PubMed

    Norizoe, Yuki; Daoulas, Kostas Ch; Müller, Marcus

    2010-01-01

    Using computer simulation of a solvent-free, coarse-grained model for amphiphilic membranes, we study the excess free energy of hourglass-shaped connections (i.e., stalks) between two apposed bilayer membranes. In order to calculate the free energy by simulation in the canonical ensemble, we reversibly transfer two apposed bilayers into a configuration with a stalk in three steps. First, we gradually replace the intermolecular interactions by an external, ordering field. The latter is chosen such that the structure of the non-interacting system in this field closely resembles the structure of the original, interacting system in the absence of the external field. The absence of structural changes along this path suggests that it is reversible, a fact which is confirmed by expanded-ensemble simulations. Second, the external, ordering field is changed so as to transform the non-interacting system from the apposed-bilayer structure to two bilayers connected by a stalk. The final external field is chosen such that the structure of the non-interacting system resembles the structure of the stalk in the interacting system without a field. On the third branch of the transformation path, we reversibly replace the external, ordering field by the non-bonded interactions. Using expanded-ensemble techniques, the free energy change along this reversible path can be obtained with an accuracy of 10^-3 kBT per molecule in the nVT ensemble. Calculating the chemical potential, we obtain the free energy of a stalk in the grand canonical ensemble, and employing semi-grand canonical techniques, we calculate the change of the excess free energy upon altering the molecular architecture. This computational strategy can be applied to compute the free energy of self-assembled phases in lipid and copolymer systems, and the excess free energy of defects or interfaces.
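
    Free-energy differences along such a reversible transformation path are typically accumulated by thermodynamic integration, ΔF = ∫₀¹ ⟨∂U/∂λ⟩ dλ. The sketch below applies this estimator to a toy one-dimensional harmonic system, where the exact answer (kT/2) ln(k1/k0) is known; it illustrates the estimator only, not the membrane model or the expanded-ensemble machinery of the paper.

```python
import math
import random

def ti_harmonic(k0, k1, kT=1.0, n_lambda=16, n_samples=40000, seed=0):
    """Thermodynamic integration for U(x; lam) = 0.5*k(lam)*x^2 with
    k(lam) = k0 + lam*(k1 - k0):
        dF = integral_0^1 <dU/dlam> dlam,   dU/dlam = 0.5*(k1-k0)*x^2.
    x is drawn exactly from the Gaussian equilibrium at each lambda."""
    rng = random.Random(seed)
    dk = k1 - k0
    dF = 0.0
    for i in range(n_lambda):                    # midpoint rule in lambda
        lam = (i + 0.5) / n_lambda
        sigma = math.sqrt(kT / (k0 + lam * dk))
        mean_x2 = sum(rng.gauss(0.0, sigma) ** 2
                      for _ in range(n_samples)) / n_samples
        dF += 0.5 * dk * mean_x2 / n_lambda
    return dF

print(ti_harmonic(1.0, 4.0))   # exact result: 0.5*ln(4) ~ 0.693
```

    In the paper's setting the coupling parameter interpolates between the intermolecular interactions and the external ordering field rather than between two spring constants, but the estimator has the same structure.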

  5. Molecular dynamic simulations of selective self-diffusion of CH4/CO2/H2O/N2 in coal

    NASA Astrophysics Data System (ADS)

    Song, Y.; Jiang, B.; Li, F. L.

    2017-06-01

    The self-diffusion coefficients (D) of CH4/CO2/H2O/N2 at a relatively broad range of temperatures (298.15∼458.15 K) and pressures (1∼6 MPa) under the NPT, NPH, NVE, and NVT ensembles were obtained after calculations of molecular mechanics (MM), annealing kinetics (AK), grand canonical Monte Carlo (GCMC), and molecular dynamics (MD) based on the Wiser bituminous coal model (WM). The Ds of the adsorbates at the saturated adsorption configurations are D CH4418K. The average swelling ratios manifest as H2O (14.7∼35.18%) > CO2 (13.38∼32.25%) > CH4 (15.35∼23.71%) > N2 (11.47∼22.14%) (NPH, 1∼6 MPa). There exist differences in D, swelling ratios and E among the various ensembles, indicating that the selection of ensemble has an important influence on MD calculations of self-diffusion coefficients.

  6. Efficient algorithms for probing the RNA mutation landscape.

    PubMed

    Waldispühl, Jérôme; Devadas, Srinivas; Berger, Bonnie; Clote, Peter

    2008-08-08

    The diversity and importance of the role played by RNAs in the regulation and development of the cell are now well-known and well-documented. This broad range of functions is achieved through specific structures that have been (presumably) optimized through evolution. State-of-the-art methods, such as McCaskill's algorithm, use a statistical mechanics framework based on the computation of the partition function over the canonical ensemble of all possible secondary structures on a given sequence. Although secondary structure predictions from thermodynamics-based algorithms are not as accurate as methods employing comparative genomics, the former methods are the only available tools to investigate novel RNAs, such as the many RNAs of unknown function recently reported by the ENCODE consortium. In this paper, we generalize the McCaskill partition function algorithm to sum over the grand canonical ensemble of all secondary structures of all mutants of the given sequence. Specifically, our new program, RNAmutants, simultaneously computes for each integer k the minimum free energy structure MFE(k) and the partition function Z(k) over all secondary structures of all k-point mutants, even allowing the user to specify certain positions required not to mutate and certain positions required to base-pair or remain unpaired. This technically important extension allows us to study the resilience of an RNA molecule to pointwise mutations. By computing the mutation profile of a sequence, a novel graphical representation of the mutational tendency of nucleotide positions, we analyze the deleterious nature of mutating specific nucleotide positions or groups of positions. We have successfully applied RNAmutants to investigate deleterious mutations (mutations that radically modify the secondary structure) in the Hepatitis C virus cis-acting replication element and to evaluate the evolutionary pressure applied on different regions of the HIV trans-activation response element. 
In particular, we show qualitative agreement between published Hepatitis C and HIV experimental mutagenesis studies and our analysis of deleterious mutations using RNAmutants. Our work also predicts other deleterious mutations, which could be verified experimentally. Finally, we provide evidence that the 3' UTR of the GB RNA virus C has been optimized to preserve evolutionarily conserved stem regions from a deleterious effect of pointwise mutations. We hope that there will be long-term potential applications of RNAmutants in de novo RNA design and drug design against RNA viruses. This work also suggests potential applications for large-scale exploration of the RNA sequence-structure network. Binary distributions are available at http://RNAmutants.csail.mit.edu/.
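
    The idea of summing a partition function over the ensemble of all k-point mutants can be shown on a deliberately tiny example. The sketch below brute-forces Z(k) and MFE(k) for all mutants of a 4-mer, but with a stand-in sequence "energy" (−1 per G or C) in place of the Turner-model sum over secondary structures that RNAmutants actually performs; it is illustrative only and does not scale like the dynamic-programming algorithm in the paper.

```python
import itertools
import math

BASES = "ACGU"

def toy_energy(seq):
    """Stand-in for a folding free energy: -1 per G or C.  (RNAmutants
    instead computes the Boltzmann sum over all secondary structures.)"""
    return -float(seq.count("G") + seq.count("C"))

def mutant_partition(seq, RT=0.6):
    """Brute-force partition function Z(k) and minimum energy MFE(k)
    over all k-point mutants of seq, for k = 0..len(seq)."""
    n = len(seq)
    Z = [0.0] * (n + 1)
    mfe = [math.inf] * (n + 1)
    for tup in itertools.product(BASES, repeat=n):
        m = "".join(tup)
        k = sum(a != b for a, b in zip(seq, m))   # Hamming distance to seq
        e = toy_energy(m)
        Z[k] += math.exp(-e / RT)
        mfe[k] = min(mfe[k], e)
    return Z, mfe

Z, mfe = mutant_partition("GCAU")
print(mfe)   # best reachable toy energy for each mutation count k
```

    Even this toy version captures the question RNAmutants answers at scale: how quickly the best (and the Boltzmann-weighted average) energy improves or degrades as mutations accumulate, which is the basis for identifying deleterious positions.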

  7. Ensemble training to improve recognition using 2D ear

    NASA Astrophysics Data System (ADS)

    Middendorff, Christopher; Bowyer, Kevin W.

    2009-05-01

    The ear has gained popularity as a biometric feature due to the robustness of the shape over time and across emotional expression. Popular methods of ear biometrics analyze the ear as a whole, leaving these methods vulnerable to error due to occlusion. Many researchers explore ear recognition using an ensemble, but none present a method for designing the individual parts that comprise the ensemble. In this work, we introduce a method of modifying the ensemble shapes to improve performance. We determine how different properties of an ensemble training system can affect overall performance. We show that ensembles built from small parts will outperform ensembles built with larger parts, and that incorporating a large number of parts improves the performance of the ensemble.

  8. Energetic investigation of the adsorption process of CH4, C2H6 and N2 on activated carbon: Numerical and statistical physics treatment

    NASA Astrophysics Data System (ADS)

    Ben Torkia, Yosra; Ben Yahia, Manel; Khalfaoui, Mohamed; Al-Muhtaseb, Shaheen A.; Ben Lamine, Abdelmottaleb

    2014-01-01

    The adsorption energy distribution (AED) function of a commercial activated carbon (BDH-activated carbon) was investigated. For this purpose, the integral equation is derived by using a purely analytical statistical physics treatment. The description of the heterogeneity of the adsorbent is significantly clarified by defining the parameter N(E). This parameter represents the energetic density of the spatial density of the effectively occupied sites. To solve the integral equation, a numerical method was used based on an adequate algorithm. The Langmuir model was adopted as a local adsorption isotherm. This model is developed by using the grand canonical ensemble, which allows defining the physico-chemical parameters involved in the adsorption process. The AED function is estimated by a normal Gaussian function. This method is applied to the adsorption isotherms of nitrogen, methane and ethane at different temperatures. The development of the AED using a statistical physics treatment provides an explanation of the gas molecules behaviour during the adsorption process and gives new physical interpretations at microscopic levels.
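
    The local Langmuir isotherm follows from the grand canonical partition function of a single adsorption site, Ξ = 1 + exp(β(μ + E)), which in terms of pressure gives θ = p/(p + p_half), with p_half encoding the site energy E. The sketch below averages this local isotherm over a Gaussian AED by simple quadrature; the reference pressure p0, the parameter values, and the quadrature scheme are illustrative assumptions, not the numerical algorithm of the paper.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def langmuir_theta(p, p_half):
    """Local Langmuir coverage from the one-site grand partition function
    Xi = 1 + exp(beta*(mu + E)); in pressure form, theta = p/(p + p_half)."""
    return p / (p + p_half)

def overall_isotherm(p, mean_E, sigma_E, T=298.15, p0=1.0e5, n=201):
    """Average the local isotherm over a Gaussian AED N(E), sampled on a
    +/- 4 sigma grid.  p_half = p0*exp(-E/(R*T)), so higher adsorption
    energies mean stronger binding.  (Illustrative parameters.)"""
    total = norm = 0.0
    for i in range(n):
        E = mean_E + sigma_E * (-4.0 + 8.0 * i / (n - 1))
        w = math.exp(-0.5 * ((E - mean_E) / sigma_E) ** 2)
        total += w * langmuir_theta(p, p0 * math.exp(-E / (R * T)))
        norm += w
    return total / norm

# coverage rises monotonically with pressure for any AED
print(overall_isotherm(1.0e3, 20.0e3, 3.0e3),
      overall_isotherm(1.0e4, 20.0e3, 3.0e3))
```

    The inverse problem solved in the paper runs in the opposite direction: given measured overall isotherms at several temperatures, recover the AED weight function from this integral equation numerically.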

  9. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE PAGES

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk; ...

    2017-11-07

    We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique, within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.
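
    The Monte Carlo half of such a scheme reduces to a Metropolis test on the energy change of the protonation switch plus the free energy of exchanging a proton with a bath at the prescribed pH, kT ln 10 (pKa − pH). The sketch below shows that acceptance test in isolation; it is a generic constant-pH MC criterion, not NAMD's implementation, in which the switch itself is generated by a short nonequilibrium MD trajectory and the accumulated work enters the test.

```python
import math
import random

LN10 = math.log(10.0)

def accept_protonation_switch(dU_kcal, pH, pKa, deprotonating,
                              T=300.0, rng=random):
    """Metropolis test for a protonation-state move in constant-pH MC.
    dU_kcal is the simulated energy change of the switch (kcal/mol); the
    second term is the free energy of exchanging the proton with a bath at
    the given pH (deprotonation is favorable when pH > pKa).  Schematic:
    in the neMD/MC scheme dU would be the nonequilibrium switching work."""
    kT = 0.0019872041 * T                       # kcal/mol
    sign = 1.0 if deprotonating else -1.0
    dG = dU_kcal + sign * kT * LN10 * (pKa - pH)
    return dG <= 0.0 or rng.random() < math.exp(-dG / kT)

# an acidic site (pKa 4) at pH 7 deprotonates deterministically here
print(accept_protonation_switch(0.0, 7.0, 4.0, True))
```

    Because the criterion is a standard Metropolis test on a well-defined free energy, the chain samples the semigrand canonical ensemble in which protons are exchanged with an implicit bath at fixed pH.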

  10. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk

    We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique, within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of the constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deffner, Sebastian; Zurek, Wojciech H.

    Envariance—entanglement assisted invariance—is a recently discovered symmetry of composite quantum systems. Here, we show that thermodynamic equilibrium states are fully characterized by their envariance. In particular, the microcanonical equilibrium of a system $\mathcal{S}$ with Hamiltonian $H_{\mathcal{S}}$ is a fully energetically degenerate quantum state envariant under every unitary transformation. A representation of the canonical equilibrium then follows from simply counting degenerate energy states. Finally, our conceptually novel approach is free of mathematically ambiguous notions such as ensemble, randomness, etc., and, while it does not even rely on probability, it helps to understand its role in the quantum world.

  12. Small black holes in global AdS spacetime

    NASA Astrophysics Data System (ADS)

    Jokela, Niko; Pönni, Arttu; Vuorinen, Aleksi

    2016-04-01

    We study the properties of two-point functions and quasinormal modes in a strongly coupled field theory holographically dual to a small black hole in global anti-de Sitter spacetime. Our results are seen to smoothly interpolate between known limits corresponding to large black holes and thermal AdS space, demonstrating that the Son-Starinets prescription works even when there is no black hole in the spacetime. Omitting issues related to the internal space, the results can be given a field theory interpretation in terms of the microcanonical ensemble, which provides access to energy densities forbidden in the canonical description.

  13. Radiation of quantum black holes and modified uncertainty relation

    NASA Astrophysics Data System (ADS)

    Kamali, A. D.; Pedram, P.

    In this paper, using a deformed algebra [X, P] = iℏ/(1 − λ²P²), which originates from various theories of gravity, we study thermodynamical properties of quantum black holes (BHs) in the canonical ensemble. We exactly calculate the modified internal energy, entropy, and heat capacity. Moreover, we investigate a tunneling mechanism for massless particles in phase space. In this regard, the tunneling radiation of the BH receives new corrections and the exact radiant spectrum is no longer precisely thermal. In addition, we show that our results are compatible with other quantum gravity (QG) approaches.

  14. Phase Transitions in a Model of Y-Molecules

    NASA Astrophysics Data System (ADS)

    Holz, Danielle; Ruth, Donovan; Toral, Raul; Gunton, James

    Immunoglobulin is a Y-shaped molecule that functions as an antibody to neutralize pathogens. In special cases where there is a high concentration of immunoglobulin molecules, self-aggregation can occur and the molecules undergo phase transitions, which prevents the molecules from completing their function. We used a simplified two-dimensional model of Y-molecules with three identical arms on a triangular lattice, simulated in the grand canonical ensemble. The molecules were permitted to be placed, removed, rotated, or moved on the lattice. Once phase coexistence was found, we used histogram reweighting and multicanonical sampling to calculate the phase diagram.
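
    The lattice moves described above (place, remove, rotate, move) follow the standard grand canonical Monte Carlo pattern. The sketch below is a minimal, hypothetical stand-in: it implements only the insertion/deletion pair for a plain nearest-neighbor lattice gas, omitting the Y-molecule geometry, rotations, and translations of the actual study; all names and parameter values are illustrative.

```python
import math, random

def gcmc_step(occupied, n_sites, mu, beta, eps_pair, neighbors):
    """One grand canonical Monte Carlo insertion/deletion attempt for a
    nearest-neighbor lattice gas (a simplified stand-in for the paper's
    Y-molecule move set)."""
    site = random.randrange(n_sites)  # symmetric proposal: pick any site
    # Interaction energy the particle at `site` has (or would have)
    # with its currently occupied neighbors.
    d_e = eps_pair * sum(1 for nb in neighbors[site] if nb in occupied)
    if site not in occupied:
        # Insertion: grand-potential change is d_e - mu.
        if random.random() < math.exp(-beta * (d_e - mu)):
            occupied.add(site)
    else:
        # Deletion: the reverse move, grand-potential change is mu - d_e.
        if random.random() < math.exp(-beta * (mu - d_e)):
            occupied.remove(site)
```

    Sweeping the chemical potential and histogramming the resulting density is the natural starting point for the histogram-reweighting analysis the abstract mentions.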

  15. Canonical methods in classical and quantum gravity: An invitation to canonical LQG

    NASA Astrophysics Data System (ADS)

    Reyes, Juan D.

    2018-04-01

    Loop Quantum Gravity (LQG) is a candidate quantum theory of gravity still under construction. LQG was originally conceived as a background independent canonical quantization of Einstein’s general relativity theory. This contribution provides some physical motivations and an overview of some mathematical tools employed in canonical Loop Quantum Gravity. First, Hamiltonian classical methods are reviewed from a geometric perspective. Canonical Dirac quantization of general gauge systems is sketched next. The Hamiltonian formulation of gravity in geometric ADM and connection-triad variables is then presented to finally lay down the canonical loop quantization program. The presentation is geared toward advanced undergraduate or graduate students in physics and/or non-specialists curious about LQG.

  16. An information-theoretical perspective on weighted ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Weijs, Steven V.; van de Giesen, Nick

    2013-08-01

    This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information into an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method that turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.
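
    For a single linear constraint, minimizing relative entropy against uniform prior weights has a closed form: the weights take the exponential (Gibbs) form w_i ∝ exp(λ x_i), with the multiplier λ chosen so the weighted mean matches the forecast. A minimal sketch under that assumption (the function and parameter names are illustrative, not from the paper):

```python
import math

def mre_weights(x, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Minimum relative entropy update for equally likely ensemble members
    x[i]: reweight so the weighted mean equals target_mean while adding as
    little information as possible. Solution: w_i proportional to
    exp(lam * x_i); lam is found by bisection (the mean is increasing in lam)."""
    def mean_for(lam):
        ws = [math.exp(lam * xi) for xi in x]
        z = sum(ws)
        return sum(w * xi for w, xi in zip(ws, x)) / z
    assert min(x) < target_mean < max(x), "target must lie inside ensemble range"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(lam * xi) for xi in x]
    z = sum(ws)
    return [w / z for w in ws]
```

    The member values themselves are never modified, matching the paper's premise: only the weights attached to the historical time series change.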

  17. Ensemble Methods for MiRNA Target Prediction from Expression Data.

    PubMed

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods from different approaches, including a correlation method, a causal inference method, and a regression method, is the best-performing ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched.
The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials.

  18. Ensemble Methods for MiRNA Target Prediction from Expression Data

    PubMed Central

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    Background microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. Results In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods from different approaches, including a correlation method, a causal inference method, and a regression method, is the best-performing ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched.
The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials. PMID:26114448

  19. Regularized Generalized Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  20. Compositions and methods for the expression of selenoproteins in eukaryotic cells

    DOEpatents

    Gladyshev, Vadim [Lincoln, NE; Novoselov, Sergey [Puschino, RU

    2012-09-25

    Recombinant nucleic acid constructs for the efficient expression of eukaryotic selenoproteins and related methods for production of recombinant selenoproteins are provided. The nucleic acid constructs comprise novel selenocysteine insertion sequence (SECIS) elements. Certain novel SECIS elements of the invention contain non-canonical quartet sequences. Other novel SECIS elements provided by the invention are chimeric SECIS elements comprising a canonical SECIS element that contains a non-canonical quartet sequence and chimeric SECIS elements comprising a non-canonical SECIS element that contains a canonical quartet sequence. The novel SECIS elements of the invention facilitate the insertion of selenocysteine residues into recombinant polypeptides.

  1. New approach to canonical partition functions computation in Nf=2 lattice QCD at finite baryon density

    NASA Astrophysics Data System (ADS)

    Bornyakov, V. G.; Boyda, D. L.; Goy, V. A.; Molochkov, A. V.; Nakamura, Atsushi; Nikolaev, A. A.; Zakharov, V. I.

    2017-05-01

    We propose and test a new approach to the computation of canonical partition functions in lattice QCD at finite density. The procedure has a few steps: we first compute numerically the quark number density for imaginary chemical potential iμ_q^I; we then restore the grand canonical partition function for imaginary chemical potential by fitting the quark number density; finally, we compute the canonical partition functions using a high-precision numerical Fourier transformation. Additionally, we compute the canonical partition functions using the known method of the hopping parameter expansion and compare the results obtained by the two methods in the deconfining as well as the confining phase. The agreement between the two methods indicates the validity of the new approach. Our numerical results are obtained in two-flavor lattice QCD with clover-improved Wilson fermions.
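
    The final Fourier step can be made concrete: once the grand canonical partition function is known on a grid of imaginary chemical potentials θ_k = 2πk/M, the canonical Z_n are its discrete Fourier coefficients. A toy round-trip sketch (it assumes Z_GC is already reconstructed; real lattice data would require the high-precision arithmetic the abstract mentions):

```python
import cmath, math

def canonical_from_grand(zgc_values):
    """Recover canonical partition functions Z_n from samples of the grand
    canonical partition function at imaginary chemical potential,
    Z_GC(i*theta_k) = sum_n Z_n * exp(i*n*theta_k), via a discrete Fourier
    transform over theta_k = 2*pi*k/M."""
    m = len(zgc_values)
    zs = []
    for n in range(m):
        acc = sum(zgc_values[k] * cmath.exp(-1j * n * 2 * math.pi * k / m)
                  for k in range(m))
        zs.append((acc / m).real)  # Z_n is real for a real spectrum
    return zs
```

    With M grid points this resolves particle-number sectors n = 0, …, M−1; sectors beyond that alias back, which is why the number of θ samples must exceed the range of quark numbers of interest.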

  2. Molecular activity prediction by means of supervised subspace projection based ensembles of classifiers.

    PubMed

    Cerruela García, G; García-Pedrajas, N; Luque Ruiz, I; Gómez-Nieto, M Á

    2018-03-01

    This paper proposes a method for molecular activity prediction in QSAR studies using ensembles of classifiers constructed by means of two supervised subspace projection methods, namely nonparametric discriminant analysis (NDA) and hybrid discriminant analysis (HDA). We studied the performance of the proposed ensembles compared to classical ensemble methods using four molecular datasets and eight different models for the representation of the molecular structure. Using several measures and statistical tests for classifier comparison, we observe that our proposal improves the classification results with respect to classical ensemble methods. Therefore, we show that ensembles constructed using supervised subspace projections offer an effective way of creating classifiers in cheminformatics.

  3. Functional Multiple-Set Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Jung, Kwanghee; Takane, Yoshio; Woodward, Todd S.

    2012-01-01

    We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, computationally, the…

  4. Ensemble Methods

    NASA Astrophysics Data System (ADS)

    Re, Matteo; Valentini, Giorgio

    2012-03-01

    Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall “ensemble” decision is rooted in our culture at least from the classical age of ancient Greece, and it has been formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is represented by the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of “votes” (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community on ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173]. 
    Several theories have been proposed to explain the characteristics and the successful application of ensembles to different application domains. For instance, Allwein, Schapire, and Singer interpreted the improved generalization capabilities of ensembles of learning machines in the framework of large margin classifiers [4,177], Kleinberg in the context of stochastic discrimination theory [112], and Breiman and Friedman in the light of the bias-variance analysis borrowed from classical statistics [21,70]. Empirical studies showed that both in classification and regression problems, ensembles improve on single learning machines, and moreover large experimental studies compared the effectiveness of different ensemble methods on benchmark data sets [10,11,49,188]. The interest in this research area is motivated also by the availability of very fast computers and networks of workstations at a relatively low cost that allow the implementation and the experimentation of complex ensemble methods using off-the-shelf computer platforms. However, as explained in Section 26.2 there are deeper reasons to use ensembles of learning machines, motivated by the intrinsic characteristics of the ensemble methods. The main aim of this chapter is to introduce ensemble methods and to provide an overview and a bibliography of the main areas of research, without pretending to be exhaustive or to explain the detailed characteristics of each ensemble method. The paper is organized as follows. In the next section, the main theoretical and practical reasons for combining multiple learners are introduced. Section 26.3 depicts the main taxonomies on ensemble methods proposed in the literature. In Sections 26.4 and 26.5, we present an overview of the main supervised ensemble methods reported in the literature, adopting a simple taxonomy, originally proposed in Ref. [201]. 
Applications of ensemble methods are only marginally considered, but a specific section on some relevant applications of ensemble methods in astronomy and astrophysics has been added (Section 26.6). The conclusion (Section 26.7) ends this paper and lists some issues not covered in this work.
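
    The majority vote ensemble used as the simple running example in this chapter takes only a few lines. A minimal sketch (the callable-classifier interface is hypothetical):

```python
from collections import Counter

def majority_vote(predictions):
    """Return the class label predicted by the most learning machines;
    ties are broken deterministically by label order."""
    counts = Counter(predictions)
    top = max(counts.values())
    return min(label for label, c in counts.items() if c == top)

def ensemble_predict(classifiers, x):
    # Each classifier is any callable mapping an input to a class label.
    return majority_vote([clf(x) for clf in classifiers])
```

    The same scaffold extends to the other combination rules mentioned in the chapter (weighting, fusion of posteriors) by replacing the counting step.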

  5. Absence of ballistic charge transport in the half-filled 1D Hubbard model

    NASA Astrophysics Data System (ADS)

    Carmelo, J. M. P.; Nemati, S.; Prosen, T.

    2018-05-01

    Whether in the thermodynamic limit of lattice length L → ∞, hole concentration m_η^z = −2S_η^z/L = 1 − n_e → 0, nonzero temperature T > 0, and U/t > 0 the charge stiffness of the 1D Hubbard model with first-neighbor transfer integral t and on-site repulsion U is finite or vanishes, and thus whether there is or there is no ballistic charge transport, respectively, remains an unsolved and controversial issue, as different approaches yield contradictory results. (Here S_η^z = −(L − N_e)/2 is the η-spin projection and n_e = N_e/L the electronic density.) In this paper we provide an upper bound on the charge stiffness and show that (similarly as at zero temperature), for T > 0 and U/t > 0 it vanishes for m_η^z → 0 within the canonical ensemble in the thermodynamic limit L → ∞. Moreover, we show that at high temperature T → ∞ the charge stiffness vanishes as well within the grand-canonical ensemble for L → ∞ and chemical potential μ → μ_u, where (μ − μ_u) ≥ 0 and 2μ_u is the Mott-Hubbard gap. The lack of ballistic charge transport indicates that charge transport at finite temperatures is dominated by a diffusive contribution. Our scheme uses a suitable exact representation of the electrons in terms of rotated electrons for which the numbers of singly occupied and doubly occupied lattice sites are good quantum numbers for U/t > 0. In contrast to often less controllable numerical studies, the use of such a representation reveals the carriers that couple to the charge probes and provides useful physical information on the microscopic processes behind the exotic charge transport properties of the 1D electronic correlated system under study.

  6. On extending Kohn-Sham density functionals to systems with fractional number of electrons.

    PubMed

    Li, Chen; Lu, Jianfeng; Yang, Weitao

    2017-06-07

    We analyze four ways of formulating the Kohn-Sham (KS) density functionals with a fractional number of electrons, through extending the constrained search space from the Kohn-Sham and the generalized Kohn-Sham (GKS) non-interacting v-representable density domain for integer systems to four different sets of densities for fractional systems. In particular, these density sets are (I) ensemble interacting N-representable densities, (II) ensemble non-interacting N-representable densities, (III) non-interacting densities by the Janak construction, and (IV) non-interacting densities whose composing orbitals satisfy the Aufbau occupation principle. By proving the equivalence of the underlying first order reduced density matrices associated with these densities, we show that sets (I), (II), and (III) are equivalent, and all reduce to the Janak construction. Moreover, for functionals with the ensemble v-representable assumption at the minimizer, (III) reduces to (IV) and thus justifies the previous use of the Aufbau protocol within the (G)KS framework in the study of the ground state of fractional electron systems, as defined in the grand canonical ensemble at zero temperature. By further analyzing the Aufbau solution for different density functional approximations (DFAs) in the (G)KS scheme, we rigorously prove that there can be one and only one fractional occupation for the Hartree-Fock functional, while there can be multiple fractional occupations for general DFAs in the presence of degeneracy. This has been confirmed by numerical calculations using the local density approximation as a representative of general DFAs. This work thus clarifies important issues on density functional theory calculations for fractional electron systems.

  7. Elastic constants from microscopic strain fluctuations

    PubMed

    Sengupta; Nielaba; Rao; Binder

    2000-02-01

    Fluctuations of the instantaneous local Lagrangian strain ε_ij(r, t), measured with respect to a static "reference" lattice, are used to obtain accurate estimates of the elastic constants of model solids from atomistic computer simulations. The measured strains are systematically coarse-grained by averaging them within subsystems (of size L_b) of a system (of total size L) in the canonical ensemble. Using a simple finite size scaling theory we predict the behavior of the fluctuations as a function of L_b/L and extract elastic constants of the system in the thermodynamic limit at nonzero temperature. Our method is simple to implement, efficient, and general enough to be able to handle a wide class of model systems, including those with singular potentials without any essential modification. We illustrate the technique by computing isothermal elastic constants of "hard" and "soft" disk triangular solids in two dimensions from Monte Carlo and molecular dynamics simulations. We compare our results with those from earlier simulations and theory.

  8. A maximum entropy thermodynamics of small systems.

    PubMed

    Dixit, Purushottam D

    2013-05-14

    We present a maximum entropy approach to analyze the state space of a small system in contact with a large bath, e.g., a solvated macromolecular system. For the solute, the fluctuations around the mean values of observables are not negligible and the probability distribution P(r) of the state space depends on the intricate details of the interaction of the solute with the solvent. Here, we employ a superstatistical approach: P(r) is expressed as a marginal distribution summed over the variation in β, the inverse temperature of the solute. The joint distribution P(β, r) is estimated by maximizing its entropy. We also calculate the first order system-size corrections to the canonical ensemble description of the state space. We test the development on a simple harmonic oscillator interacting with two baths with very different chemical identities, viz., (a) Lennard-Jones particles and (b) water molecules. In both cases, our method captures the state space of the oscillator sufficiently well. Future directions and connections with traditional statistical mechanics are discussed.
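
    The superstatistical marginalization described above, P(r) = Σ_β P(β) P(r|β), can be illustrated numerically for a harmonic oscillator in contact with a fluctuating-temperature bath: mix the canonical Gaussian position distributions over a discrete grid of inverse temperatures. The grid, weights, and spring constant below are made up for illustration, not taken from the paper:

```python
import math

def superstat_pdf(x, betas, p_beta, k=1.0):
    """Superstatistical position distribution of a harmonic oscillator
    (potential energy k*x^2/2): marginalize the canonical Gaussian
    p(x|beta), variance 1/(beta*k), over weights p_beta on the inverse
    temperature grid `betas`."""
    total = 0.0
    for b, pb in zip(betas, p_beta):
        sigma2 = 1.0 / (b * k)  # canonical variance at inverse temperature b
        total += pb * math.exp(-x * x / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)
    return total
```

    The resulting distribution has heavier tails than any single canonical Gaussian, which is exactly the system-size correction to the canonical description that the paper quantifies.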

  9. Spatial Analysis and Quantification of the Thermodynamic Driving Forces in Protein-Ligand Binding: Binding Site Variability

    PubMed Central

    Raman, E. Prabhu; MacKerell, Alexander D.

    2015-01-01

    The thermodynamic driving forces behind small molecule-protein binding are still not well understood, including the variability of those forces associated with different types of ligands in different binding pockets. To better understand these phenomena we calculate spatially resolved thermodynamic contributions of the different molecular degrees of freedom for the binding of propane and methanol to multiple pockets on the proteins Factor Xa and p38 MAP kinase. Binding thermodynamics are computed using a statistical thermodynamics based end-point method applied on a canonical ensemble comprising the protein-ligand complexes and the corresponding free states in an explicit solvent environment. Energetic and entropic contributions of water and ligand degrees of freedom computed from the configurational ensemble provides an unprecedented level of detail into the mechanisms of binding. Direct protein-ligand interaction energies play a significant role in both non-polar and polar binding, which is comparable to water reorganization energy. Loss of interactions with water upon binding strongly compensates these contributions leading to relatively small binding enthalpies. For both solutes, the entropy of water reorganization is found to favor binding in agreement with the classical view of the “hydrophobic effect”. Depending on the specifics of the binding pocket, both energy-entropy compensation and reinforcement mechanisms are observed. Notable is the ability to visualize the spatial distribution of the thermodynamic contributions to binding at atomic resolution showing significant differences in the thermodynamic contributions of water to the binding of propane versus methanol. PMID:25625202

  10. Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.

    PubMed

    Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G

    2017-09-01

    To investigate whether the use of ensemble learning algorithms improves physical activity recognition accuracy compared to the single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
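
    Weighted majority voting, the best-performing fusion rule in two of the three data sets above, is a small extension of plain voting: each classifier's label counts with a weight, for example its cross-validated F1 score. A minimal sketch (the weight source and names are illustrative assumptions):

```python
from collections import defaultdict

def weighted_majority_vote(labels, weights):
    """Decision fusion by weighted majority vote: sum each classifier's
    weight onto the label it voted for and return the heaviest label
    (ties broken deterministically by label order)."""
    score = defaultdict(float)
    for label, w in zip(labels, weights):
        score[label] += w
    return max(sorted(score), key=lambda lab: score[lab])
```

    With all weights equal this reduces to the plain majority vote, so the two conventional and custom fusion schemes can share one code path.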

  11. Local and linear chemical reactivity response functions at finite temperature in density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franco-Pérez, Marco; Ayers, Paul W.; Gázquez, José L.; Vela, Alberto

    2015-12-28

    We explore the local and nonlocal response functions of the grand canonical potential density functional at nonzero temperature. In analogy to the zero-temperature treatment, local (e.g., the average electron density and the local softness) and nonlocal (e.g., the softness kernel) intrinsic response functions are defined as partial derivatives of the grand canonical potential with respect to its thermodynamic variables (i.e., the chemical potential of the electron reservoir and the external potential generated by the atomic nuclei). To define the local and nonlocal response functions of the electron density (e.g., the Fukui function, the linear density response function, and the dual descriptor), we differentiate with respect to the average electron number and the external potential. The well-known mathematical relationships between the intrinsic response functions and the electron-density responses are generalized to nonzero temperature, and we prove that in the zero-temperature limit, our results recover well-known identities from the density functional theory of chemical reactivity. Specific working equations and numerical results are provided for the 3-state ensemble model.

  12. Local and linear chemical reactivity response functions at finite temperature in density functional theory.

    PubMed

    Franco-Pérez, Marco; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2015-12-28

    We explore the local and nonlocal response functions of the grand canonical potential density functional at nonzero temperature. In analogy to the zero-temperature treatment, local (e.g., the average electron density and the local softness) and nonlocal (e.g., the softness kernel) intrinsic response functions are defined as partial derivatives of the grand canonical potential with respect to its thermodynamic variables (i.e., the chemical potential of the electron reservoir and the external potential generated by the atomic nuclei). To define the local and nonlocal response functions of the electron density (e.g., the Fukui function, the linear density response function, and the dual descriptor), we differentiate with respect to the average electron number and the external potential. The well-known mathematical relationships between the intrinsic response functions and the electron-density responses are generalized to nonzero temperature, and we prove that in the zero-temperature limit, our results recover well-known identities from the density functional theory of chemical reactivity. Specific working equations and numerical results are provided for the 3-state ensemble model.

  13. Hairy black holes and the endpoint of AdS4 charged superradiance

    NASA Astrophysics Data System (ADS)

    Dias, Óscar J. C.; Masachs, Ramon

    2017-02-01

    We construct hairy black hole solutions that merge with the anti-de Sitter (AdS4) Reissner-Nordström black hole at the onset of superradiance. These hairy black holes have, for a given mass and charge, higher entropy than the corresponding AdS4-Reissner-Nordström black hole. Therefore, they are natural candidates for the endpoint of the charged superradiant instability. On the other hand, hairy black holes never dominate the canonical and grand-canonical ensembles. In the zero-horizon-radius limit, the hairy black hole reduces to a soliton (i.e. a boson star under a gauge transformation). We construct our solutions perturbatively, for small mass and charge, so that the properties of the hairy black holes can be used to test and compare with the endpoint of initial-value simulations. We further discuss the near-horizon scalar condensation instability, which is also present in global AdS4-Reissner-Nordström black holes. We highlight the different nature of the near-horizon and superradiant instabilities and that hairy black holes ultimately exist because of the non-linear instability of AdS.

  14. Multi-Model Ensemble Wake Vortex Prediction

    NASA Technical Reports Server (NTRS)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  15. In silico concurrent multisite pH titration in proteins.

    PubMed

    Hu, Hao; Shen, Lin

    2014-07-30

    The concurrent proton binding at multiple sites in macromolecules such as proteins and nucleic acids is an important yet challenging problem in biochemistry. We develop an efficient generalized Hamiltonian approach to attack this issue. Based on the previously developed generalized-ensemble methods, an effective potential energy is constructed which combines the contributions of all (relevant) protonation states of the molecule. The effective potential preserves important phase regions of all states and, thus, allows efficient sampling of these regions in one simulation. The need for intermediate states in alchemical free energy simulations is greatly reduced. Free energy differences between different protonation states can be determined accurately and enable one to construct the grand canonical partition function. Therefore, the complicated concurrent multisite proton titration process of protein molecules can be satisfactorily simulated. Application of this method to the simulation of the pKa of Glu49, Asp50, and C-terminus of bovine pancreatic trypsin inhibitor shows reasonably good agreement with published experimental work. This method provides an unprecedented vivid picture of how different protonation states change their relative population upon pH titration. We believe that the method will be very useful in deciphering the molecular mechanism of pH-dependent biomolecular processes in terms of a detailed atomistic description. Copyright © 2014 Wiley Periodicals, Inc.
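    The grand-canonical bookkeeping described above (free energy differences between protonation states combined into pH-dependent populations) can be illustrated with a minimal stdlib-Python sketch. This is not the authors' code; the kT value, function names, and energies are illustrative assumptions.

```python
import math

KT = 0.593          # kcal/mol at ~298 K (illustrative)
LN10 = math.log(10)

def populations(delta_g, n_bound, pH):
    """Boltzmann-weighted populations of protonation states.

    delta_g : free energy of each state (kcal/mol) at the pH-0 reference
    n_bound : number of protons bound in each state
    pH      : each bound proton is penalized by kT*ln(10)*pH
    """
    g_eff = [g + n * KT * LN10 * pH for g, n in zip(delta_g, n_bound)]
    g0 = min(g_eff)                      # shift for numerical stability
    w = [math.exp(-(g - g0) / KT) for g in g_eff]
    z = sum(w)                           # grand-canonical-style partition sum
    return [x / z for x in w]
```

    For a two-state model with one titratable proton, the crossover pH of this sketch recovers the familiar pKa = -ΔG/(kT ln 10).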

  16. Method and apparatus for quantum information processing using entangled neutral-atom qubits

    DOEpatents

    Jau, Yuan Yu; Biedermann, Grant; Deutsch, Ivan

    2018-04-03

    A method for preparing an entangled quantum state of an atomic ensemble is provided. The method includes loading each atom of the atomic ensemble into a respective optical trap; placing each atom of the atomic ensemble into a same first atomic quantum state by impingement of pump radiation; approaching the atoms of the atomic ensemble to within a dipole-dipole interaction length of each other; Rydberg-dressing the atomic ensemble; during the Rydberg-dressing operation, exciting the atomic ensemble with a Raman pulse tuned to stimulate a ground-state hyperfine transition from the first atomic quantum state to a second atomic quantum state; and separating the atoms of the atomic ensemble by more than a dipole-dipole interaction length.

  17. Statistical Mining of Predictability of Seasonal Precipitation over the United States

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.; Kim, Kyu-Myong; Shen, S. P.

    2001-01-01

    Results from a new ensemble canonical correlation (ECC) prediction model yield remarkable (10-20%) increases in baseline prediction skills for seasonal precipitation over the US for all seasons, compared to traditional statistical predictions. While the tropical Pacific, i.e., El Nino, contributes the largest share of potential predictability in the southern-tier states during boreal winter, the North Pacific and the North Atlantic are responsible for enhanced predictability in the northern Great Plains, Midwest, and southwest US during boreal summer. Most importantly, ECC significantly reduces the spring predictability barrier over the conterminous US, thereby raising the skill bar for dynamical predictions.

  18. Mixed-order phase transition in a minimal, diffusion-based spin model.

    PubMed

    Fronczak, Agata; Fronczak, Piotr

    2016-07-01

    In this paper we exactly solve, within the grand canonical ensemble, a minimal spin model with the hybrid phase transition. We call the model diffusion based because its Hamiltonian can be recovered from a simple dynamic procedure, which can be seen as an equilibrium statistical mechanics representation of a biased random walk. We outline the derivation of the phase diagram of the model, in which the triple point has the hallmarks of the hybrid transition: discontinuity in the average magnetization and algebraically diverging susceptibilities. At this point, two second-order transition curves meet in equilibrium with the first-order curve, resulting in a prototypical mixed-order behavior.

  19. Foundations of statistical mechanics from symmetries of entanglement

    DOE PAGES

    Deffner, Sebastian; Zurek, Wojciech H.

    2016-06-09

    Envariance (entanglement-assisted invariance) is a recently discovered symmetry of composite quantum systems. Here, we show that thermodynamic equilibrium states are fully characterized by their envariance. In particular, the microcanonical equilibrium of a system $\mathcal{S}$ with Hamiltonian $H_{\mathcal{S}}$ is a fully energetically degenerate quantum state envariant under every unitary transformation. A representation of the canonical equilibrium then follows from simply counting degenerate energy states. Finally, our conceptually novel approach is free of mathematically ambiguous notions such as ensemble, randomness, etc., and, while it does not even rely on probability, it helps to understand its role in the quantum world.

  20. Mixing properties of the one-atom maser

    NASA Astrophysics Data System (ADS)

    Bruneau, Laurent

    2014-06-01

    We study the relaxation properties of the quantized electromagnetic field in a cavity under repeated interactions with single two-level atoms, the so-called one-atom maser. We improve the ergodic results obtained in Bruneau and Pillet (J Stat Phys 134(5-6):1071-1095, 2009) and prove that, whenever the atoms are initially distributed according to the canonical ensemble at some temperature, all the invariant states are mixing. Under some non-resonance condition this invariant state is known to be a thermal equilibrium state at some renormalized temperature, and we prove that the mixing is then arbitrarily slow; in other words, there is no lower bound on the relaxation speed.

  1. Extended Hamiltonian approach to continuous tempering

    NASA Astrophysics Data System (ADS)

    Gobbo, Gianpaolo; Leimkuhler, Benedict J.

    2015-06-01

    We introduce an enhanced sampling simulation technique based on continuous tempering, i.e., on continuously varying the temperature of the system under investigation. Our approach is mathematically straightforward, being based on an extended Hamiltonian formulation in which an auxiliary degree of freedom, determining the effective temperature, is coupled to the physical system. The physical system and its temperature evolve continuously in time according to the equations of motion derived from the extended Hamiltonian. Due to the Hamiltonian structure, it is easy to show that a particular subset of the configurations of the extended system is distributed according to the canonical ensemble for the physical system at the correct physical temperature.

  2. Molecular dynamics study of nanodroplet diffusion on smooth solid surfaces

    NASA Astrophysics Data System (ADS)

    Niu, Zhao-Xia; Huang, Tao; Chen, Yong

    2018-10-01

    We perform molecular dynamics simulations of Lennard-Jones particles in a canonical ensemble to study the diffusion of nanodroplets on smooth solid surfaces. Using the droplet-surface interaction to realize a hydrophilic or hydrophobic surface and calculating the mean square displacement of the center of mass of the nanodroplets, the random motion of nanodroplets can be characterized by short-time subdiffusion, intermediate-time superdiffusion, and long-time normal diffusion. The short-time subdiffusive exponent increases and almost reaches unity (normal diffusion) with decreasing droplet size or enhancing hydrophobicity. The diffusion coefficient of the droplet on hydrophobic surfaces is larger than that on hydrophilic surfaces.
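    The diffusion-regime analysis above (subdiffusive, superdiffusive, and normal regimes distinguished by the mean-square-displacement exponent) can be sketched for a plain 1D random walk. This is a stdlib-Python illustration of the MSD diagnostic only, not the authors' simulation code.

```python
import math, random

def msd(trajs):
    """Ensemble-averaged mean square displacement at each time step."""
    T = len(trajs[0])
    return [sum((x[t] - x[0]) ** 2 for x in trajs) / len(trajs)
            for t in range(T)]

def scaling_exponent(m):
    """Least-squares slope of log MSD vs log t; 1 means normal diffusion."""
    ts = range(1, len(m))                      # skip t = 0 (log undefined)
    xs = [math.log(t) for t in ts]
    ys = [math.log(m[t]) for t in ts]
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    return (sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
            / sum((x - xb) ** 2 for x in xs))

random.seed(0)
walks = [[0] * 201 for _ in range(400)]        # 400 walkers, 200 steps each
for wlk in walks:
    for t in range(1, 201):
        wlk[t] = wlk[t - 1] + random.choice((-1, 1))
m = msd(walks)
alpha = scaling_exponent(m)                    # close to 1 for a simple walk
```

    A subdiffusive trajectory set would give a slope below 1, a superdiffusive one a slope above 1, mirroring the three regimes reported in the abstract.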

  3. Microscopic nonlinear relativistic quantum theory of absorption of powerful x-ray radiation in plasma.

    PubMed

    Avetissian, H K; Ghazaryan, A G; Matevosyan, H H; Mkrtchian, G F

    2015-10-01

    The microscopic quantum theory of nonlinear interaction of plasma with coherent shortwave electromagnetic radiation of arbitrary intensity is developed. The Liouville-von Neumann equation for the density matrix is solved analytically, treating the wave field exactly and the scattering potential of the plasma ions as a perturbation. With the help of this solution we calculate the nonlinear inverse-bremsstrahlung absorption rate for a grand canonical ensemble of electrons. The latter is studied in Maxwellian as well as in degenerate quantum plasmas for x-ray lasers at superhigh intensities, and it is shown that an efficient absorption coefficient can be achieved in these cases.

  4. A comparison of breeding and ensemble transform vectors for global ensemble generation

    NASA Astrophysics Data System (ADS)

    Deng, Guo; Tian, Hua; Li, Xiaoli; Chen, Jing; Gong, Jiandong; Jiao, Meiyan

    2012-02-01

    To compare the initial perturbation techniques using breeding vectors and ensemble transform vectors, three ensemble prediction systems using both initial perturbation methods but with different ensemble member sizes based on the spectral model T213/L31 are constructed at the National Meteorological Center, China Meteorological Administration (NMC/CMA). A series of ensemble verification scores such as forecast skill of the ensemble mean, ensemble resolution, and ensemble reliability are introduced to identify the most important attributes of ensemble forecast systems. The results indicate that the ensemble transform technique is superior to the breeding vector method in light of the evaluation of anomaly correlation coefficient (ACC), which is a deterministic character of the ensemble mean, the root-mean-square error (RMSE) and spread, which are of probabilistic attributes, and the continuous ranked probability score (CRPS) and its decomposition. The advantage of the ensemble transform approach is attributed to its orthogonality among ensemble perturbations as well as its consistence with the data assimilation system. Therefore, this study may serve as a reference for configuration of the best ensemble prediction system to be used in operation.
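    The verification measures named above (the RMSE of the ensemble mean, the ensemble spread, and the CRPS) can be sketched as follows. This is a stdlib-Python illustration using the standard sample-based CRPS estimator, not the verification package used at NMC/CMA.

```python
import math

def ensemble_scores(members, obs):
    """RMSE of the ensemble mean, mean spread, and a sample-based CRPS.

    members : list of member forecasts, each a list over verification points
    obs     : list of verifying observations, same length as each member
    """
    n, m = len(members), len(obs)
    mean = [sum(f[j] for f in members) / n for j in range(m)]
    rmse = math.sqrt(sum((mean[j] - obs[j]) ** 2 for j in range(m)) / m)
    spread = math.sqrt(sum(sum((f[j] - mean[j]) ** 2 for f in members)
                           / (n - 1) for j in range(m)) / m)
    # CRPS estimator: E|X - y| - 0.5 E|X - X'|, averaged over points
    crps = sum(sum(abs(f[j] - obs[j]) for f in members) / n
               - 0.5 * sum(abs(a[j] - b[j]) for a in members
                           for b in members) / n ** 2
               for j in range(m)) / m
    return rmse, spread, crps
```

    In a well-calibrated ensemble the spread tracks the RMSE of the ensemble mean; a persistent gap between the two is one of the reliability diagnostics the study compares across perturbation methods.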

  5. Argumentation Based Joint Learning: A Novel Ensemble Learning Approach

    PubMed Central

    Xu, Junyi; Yao, Li; Li, Le

    2015-01-01

    Recently, ensemble learning methods have been widely used to improve classification performance in machine learning. In this paper, we present a novel ensemble learning method: argumentation based multi-agent joint learning (AMAJL), which integrates ideas from multi-agent argumentation, ensemble learning, and association rule mining. In AMAJL, argumentation technology is introduced as an ensemble strategy to integrate multiple base classifiers and generate a high-performance ensemble classifier. We design an argumentation framework named Arena as a communication platform for knowledge integration. Through argumentation based joint learning, high quality individual knowledge can be extracted, and thus a refined global knowledge base can be generated and used independently for classification. We perform numerous experiments on multiple public datasets using AMAJL and other benchmark methods. The results demonstrate that our method can effectively extract high quality knowledge for the ensemble classifier and improve classification performance. PMID:25966359

  6. Ocean Predictability and Uncertainty Forecasts Using Local Ensemble Transform Kalman Filter (LETKF)

    NASA Astrophysics Data System (ADS)

    Wei, M.; Hogan, P. J.; Rowley, C. D.; Smedstad, O. M.; Wallcraft, A. J.; Penny, S. G.

    2017-12-01

    Ocean predictability and uncertainty are studied with an ensemble system that has been developed based on the US Navy's operational HYCOM using the Local Ensemble Transform Kalman Filter (LETKF) technology. One of the advantages of this method is that the best possible initial analysis states for the HYCOM forecasts are provided by the LETKF, which assimilates operational observations using an ensemble method. The background covariance during this assimilation process is implicitly supplied by the ensemble, avoiding the difficult task of developing tangent linear and adjoint models out of HYCOM, with its complicated hybrid isopycnal vertical coordinate, for 4D-Var. The flow-dependent background covariance from the ensemble will be an indispensable part of the next-generation hybrid 4D-Var/ensemble data assimilation system. The predictability and uncertainty of the ocean forecasts are studied initially for the Gulf of Mexico. The results are compared with another ensemble system using the Ensemble Transform (ET) method, which has been used in the Navy's operational center. The advantages and disadvantages are discussed.

  7. Application of a hierarchical enzyme classification method reveals the role of gut microbiome in human metabolism

    PubMed Central

    2015-01-01

    Background Enzymes are known as the molecular machines that drive the metabolism of an organism; hence identification of the full enzyme complement of an organism is essential to build the metabolic blueprint of that species as well as to understand the interplay of multiple species in an ecosystem. Experimental characterization of the enzymatic reactions of all enzymes in a genome is a tedious and expensive task. The problem is more pronounced in the metagenomic samples where even the species are not adequately cultured or characterized. Enzymes encoded by the gut microbiota play an essential role in the host metabolism; thus, warranting the need to accurately identify and annotate the full enzyme complements of species in the genomic and metagenomic projects. To fulfill this need, we develop and apply a method called ECemble, an ensemble approach to identify enzymes and enzyme classes and study the human gut metabolic pathways. Results ECemble method uses an ensemble of machine-learning methods to accurately model and predict enzymes from protein sequences and also identifies the enzyme classes and subclasses at the finest resolution. A tenfold cross-validation result shows accuracy between 97 and 99% at different levels in the hierarchy of enzyme classification, which is superior to comparable methods. We applied ECemble to predict the entire complements of enzymes from ten sequenced proteomes including the human proteome. We also applied this method to predict enzymes encoded by the human gut microbiome from gut metagenomic samples, and to study the role played by the microbe-derived enzymes in the human metabolism. After mapping the known and predicted enzymes to canonical human pathways, we identified 48 pathways that have at least one bacteria-encoded enzyme, which demonstrates the complementary role of gut microbiome in human gut metabolism. 
These pathways are primarily involved in metabolizing dietary nutrients such as carbohydrates, amino acids, lipids, cofactors and vitamins. Conclusions The ECemble method is able to hierarchically assign high quality enzyme annotations to genomic and metagenomic data. This study demonstrated the real application of ECemble to understand the indispensable role played by microbe-encoded enzymes in the healthy functioning of human metabolic systems. PMID:26099921

  8. Application of a hierarchical enzyme classification method reveals the role of gut microbiome in human metabolism.

    PubMed

    Mohammed, Akram; Guda, Chittibabu

    2015-01-01

    Enzymes are known as the molecular machines that drive the metabolism of an organism; hence identification of the full enzyme complement of an organism is essential to build the metabolic blueprint of that species as well as to understand the interplay of multiple species in an ecosystem. Experimental characterization of the enzymatic reactions of all enzymes in a genome is a tedious and expensive task. The problem is more pronounced in the metagenomic samples where even the species are not adequately cultured or characterized. Enzymes encoded by the gut microbiota play an essential role in the host metabolism; thus, warranting the need to accurately identify and annotate the full enzyme complements of species in the genomic and metagenomic projects. To fulfill this need, we develop and apply a method called ECemble, an ensemble approach to identify enzymes and enzyme classes and study the human gut metabolic pathways. ECemble method uses an ensemble of machine-learning methods to accurately model and predict enzymes from protein sequences and also identifies the enzyme classes and subclasses at the finest resolution. A tenfold cross-validation result shows accuracy between 97 and 99% at different levels in the hierarchy of enzyme classification, which is superior to comparable methods. We applied ECemble to predict the entire complements of enzymes from ten sequenced proteomes including the human proteome. We also applied this method to predict enzymes encoded by the human gut microbiome from gut metagenomic samples, and to study the role played by the microbe-derived enzymes in the human metabolism. After mapping the known and predicted enzymes to canonical human pathways, we identified 48 pathways that have at least one bacteria-encoded enzyme, which demonstrates the complementary role of gut microbiome in human gut metabolism. 
These pathways are primarily involved in metabolizing dietary nutrients such as carbohydrates, amino acids, lipids, cofactors and vitamins. The ECemble method is able to hierarchically assign high quality enzyme annotations to genomic and metagenomic data. This study demonstrated the real application of ECemble to understand the indispensable role played by microbe-encoded enzymes in the healthy functioning of human metabolic systems.

  9. Multimodel Ensemble Methods for Prediction of Wake-Vortex Transport and Decay

    NASA Technical Reports Server (NTRS)

    Korner, Stephan; Ahmad, Nashat N.; Holzapfel, Frank; VanValkenburg, Randal L.

    2017-01-01

    Several multimodel ensemble methods are selected and further developed to improve the deterministic and probabilistic prediction skills of individual wake-vortex transport and decay models. The different multimodel ensemble methods are introduced, and their suitability for wake applications is demonstrated. The selected methods include direct ensemble averaging, Bayesian model averaging, and Monte Carlo simulation. The different methodologies are evaluated employing data from wake-vortex field measurement campaigns conducted in the United States and Germany.
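    The simplest of the methods listed above, direct ensemble averaging with skill-based weights, can be sketched as follows. This is a stdlib-Python illustration in the spirit of inverse-error weighting; the function names and the weighting rule are illustrative assumptions, not the NASA/DLR implementation.

```python
def skill_weights(errors, eps=1e-12):
    """Normalized inverse-error weights: smaller past error, larger weight."""
    inv = [1.0 / (e + eps) for e in errors]
    total = sum(inv)
    return [v / total for v in inv]

def combine(forecasts, weights):
    """Weighted multimodel mean at each verification point."""
    return [sum(w * f[j] for w, f in zip(weights, forecasts))
            for j in range(len(forecasts[0]))]
```

    Bayesian model averaging replaces these fixed weights with posterior model probabilities, and the Monte Carlo variant samples member models instead of averaging them, but the combination step has the same shape.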

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezák, Viktor, E-mail: bezak@fmph.uniba.sk

    Quantum theory of the non-harmonic oscillator defined by the energy operator proposed by Yurke and Buks (2006) is presented. Although these authors considered a specific problem related to a model of transmission lines in a Kerr medium, our ambition is not to discuss the physical substantiation of their model. Instead, we consider the problem from an abstract, logically deductive, viewpoint. Using the Yurke–Buks energy operator, we focus attention on the imaginary-time propagator. We derive it as a functional of the Mehler kernel and, alternatively, as an exact series involving Hermite polynomials. For a statistical ensemble of identical oscillators defined by the Yurke–Buks energy operator, we calculate the partition function, average energy, free energy and entropy. Using the diagonal element of the canonical density matrix of this ensemble in the coordinate representation, we define a probability density, which appears to be a deformed Gaussian distribution. A peculiarity of this probability density is that it may reveal, when plotted as a function of the position variable, a shape with two peaks located symmetrically with respect to the central point.

  11. Application-Level Interoperability Across Grids and Clouds

    NASA Astrophysics Data System (ADS)

    Jha, Shantenu; Luckow, Andre; Merzky, Andre; Erdely, Miklos; Sehgal, Saurabh

    Application-level interoperability is defined as the ability of an application to utilize multiple distributed heterogeneous resources. Such interoperability is becoming increasingly important with increasing volumes of data, multiple sources of data, as well as resource types. The primary aim of this chapter is to understand different ways in which application-level interoperability can be provided across distributed infrastructure. We achieve this by (i) using the canonical wordcount application, based on an enhanced version of MapReduce that scales out across clusters, clouds, and HPC resources, (ii) establishing how SAGA enables the execution of the wordcount application using MapReduce and other programming models such as Sphere concurrently, and (iii) demonstrating the scale-out of ensemble-based biomolecular simulations across multiple resources. We show user-level control of the relative placement of compute and data, and also provide simple performance measures and analysis of SAGA-MapReduce when using multiple, different, heterogeneous infrastructures concurrently for the same problem instance. Finally, we discuss Azure and some of the system-level abstractions that it provides, and show how it is used to support ensemble-based biomolecular simulations.
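    The canonical wordcount application mentioned above reduces to a small MapReduce skeleton. The sketch below shows the programming model only, in stdlib Python, with no SAGA, Sphere, or distributed runtime involved; a real deployment would shard the chunks across resources.

```python
from collections import defaultdict

def map_phase(chunk):
    """Map: emit (word, 1) pairs for one input chunk."""
    return [(w.lower(), 1) for w in chunk.split()]

def reduce_phase(pairs):
    """Reduce: sum counts per word (dict grouping stands in for the shuffle)."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

chunks = ["to be or not to be", "to be is to do"]
pairs = [p for c in chunks for p in map_phase(c)]   # map over all chunks
counts = reduce_phase(pairs)
```

    Because each `map_phase` call is independent, the map step is what scales out across clusters, clouds, and HPC resources in the chapter's experiments.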

  12. Edwards statistical mechanics for jammed granular matter

    NASA Astrophysics Data System (ADS)

    Baule, Adrian; Morone, Flaviano; Herrmann, Hans J.; Makse, Hernán A.

    2018-01-01

    In 1989, Sir Sam Edwards made the visionary proposition to treat jammed granular materials using a volume ensemble of equiprobable jammed states in analogy to thermal equilibrium statistical mechanics, despite their inherent athermal features. Since then, the statistical mechanics approach for jammed matter—one of the very few generalizations of Gibbs-Boltzmann statistical mechanics to out-of-equilibrium matter—has garnered an extraordinary amount of attention by both theorists and experimentalists. Its importance stems from the fact that jammed states of matter are ubiquitous in nature appearing in a broad range of granular and soft materials such as colloids, emulsions, glasses, and biomatter. Indeed, despite being one of the simplest states of matter—primarily governed by the steric interactions between the constitutive particles—a theoretical understanding based on first principles has proved exceedingly challenging. Here a systematic approach to jammed matter based on the Edwards statistical mechanical ensemble is reviewed. The construction of microcanonical and canonical ensembles based on the volume function, which replaces the Hamiltonian in jammed systems, is discussed. The importance of approximation schemes at various levels is emphasized leading to quantitative predictions for ensemble averaged quantities such as packing fractions and contact force distributions. An overview of the phenomenology of jammed states and experiments, simulations, and theoretical models scrutinizing the strong assumptions underlying Edwards approach is given including recent results suggesting the validity of Edwards ergodic hypothesis for jammed states. 
A theoretical framework is discussed for packings whose constitutive particles range from spherical to nonspherical shapes (such as dimers, polymers, ellipsoids, spherocylinders, or tetrahedra), hard and soft, frictional, frictionless and adhesive, monodisperse and polydisperse, in any dimension, providing insight into a unifying phase diagram for all jammed matter. Furthermore, the connection between the Edwards ensemble of metastable jammed states and metastability in spin glasses is established. This highlights the fact that the packing problem can be understood as a constraint satisfaction problem for excluded volume and force and torque balance, leading to a unifying framework between the Edwards ensemble of equiprobable jammed states and out-of-equilibrium spin glasses.
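    The role of the volume function as the Hamiltonian substitute can be made concrete for a discrete toy set of jammed states, with the compactivity X playing the role of temperature. This is an illustrative stdlib-Python sketch of the canonical Edwards average, not part of the reviewed formalism.

```python
import math

def edwards_average_volume(volumes, X):
    """Canonical Edwards average <V> = sum_i V_i e^{-V_i/X} / Z
    over a discrete set of jammed states at compactivity X."""
    v0 = min(volumes)                     # shift for numerical stability
    w = [math.exp(-(v - v0) / X) for v in volumes]
    z = sum(w)                            # analogue of the partition function
    return sum(v * wi for v, wi in zip(volumes, w)) / z
```

    As X tends to zero the densest state dominates, while as X grows all jammed states become equiprobable, mirroring the two limits of the ensemble.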

  13. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy

    PubMed Central

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure, and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When selecting the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system. In the process of combining the RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFLs is obtained. Moreover, theoretical analysis and justification of how to prune the base classifiers on classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms those built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy as well as reduces the complexity of the ensemble system. PMID:27835638

  14. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.

    PubMed

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure, and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When selecting the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system. In the process of combining the RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFLs is obtained. Moreover, theoretical analysis and justification of how to prune the base classifiers on classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms those built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy as well as reduces the complexity of the ensemble system.
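    The least-squares initialization of the ensemble weights mentioned above can be sketched with a small ridge-regularized normal-equations solver. This is stdlib Python; the tiny ridge term stands in for the minimum-norm pseudoinverse solution and is an illustrative assumption, not the authors' implementation.

```python
def lstsq_weights(preds, target, ridge=1e-8):
    """Ensemble weights minimizing squared error on validation data.

    preds  : per-learner prediction lists over the validation points
    target : validation targets
    Solves (P P^T + ridge*I) w = P y by Gaussian elimination; the small
    ridge term selects a small-norm solution when the system is singular.
    """
    k, n = len(preds), len(preds[0])
    A = [[sum(preds[i][t] * preds[j][t] for t in range(n))
          + (ridge if i == j else 0.0) for j in range(k)]
         for i in range(k)]
    b = [sum(preds[i][t] * target[t] for t in range(n)) for i in range(k)]
    for c in range(k):                       # forward elimination, pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    w = [0.0] * k
    for c in reversed(range(k)):             # back substitution
        w[c] = (b[c] - sum(A[c][j] * w[j] for j in range(c + 1, k))) / A[c][c]
    return w
```

    In the paper these initial weights are only a starting point; ARPSO then refines them and prunes near-zero members.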

  15. An efficient ensemble learning method for gene microarray classification.

    PubMed

    Osareh, Alireza; Shadgar, Bita

    2013-01-01

    Gene microarray analysis and classification have been demonstrated to be an effective way to diagnose diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks that hinder accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which in turn preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, five different feature selection algorithms are considered. To assess the efficiency of RotBoost, other non-ensemble/ensemble techniques, including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging, are also deployed. Experimental results reveal that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used conventional ensemble learning methods, that is, Bagging and AdaBoost.
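    Of the two components combined in RotBoost, the AdaBoost half is easy to sketch with decision stumps as base learners. This is a stdlib-Python illustration of the generic AdaBoost reweighting scheme, not the paper's code; the stump learner and data are made up, and Rotation Forest's feature-rotation step is omitted.

```python
import math

def stump_predict(stump, x):
    feat, thr, sign = stump
    return sign if x[feat] > thr else -sign

def train_stump(X, y, w):
    """Exhaustively pick the weighted-error-minimizing threshold stump."""
    best, best_err = None, float("inf")
    for feat in range(len(X[0])):
        for thr in sorted({row[feat] for row in X}):
            for sign in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if stump_predict((feat, thr, sign), xi) != yi)
                if err < best_err:
                    best, best_err = (feat, thr, sign), err
    return best, best_err

def adaboost(X, y, rounds=10):
    """AdaBoost with stump base learners; labels must be +/-1."""
    n = len(X)
    w = [1.0 / n] * n
    model = []                          # list of (alpha, stump)
    for _ in range(rounds):
        stump, err = train_stump(X, y, w)
        err = max(err, 1e-10)           # guard against log(0)
        if err >= 0.5:                  # no better than chance: stop
            break
        alpha = 0.5 * math.log((1.0 - err) / err)
        model.append((alpha, stump))
        w = [wi * math.exp(-alpha * yi * stump_predict(stump, xi))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]        # renormalize sample weights
    return model

def predict(model, x):
    score = sum(a * stump_predict(s, x) for a, s in model)
    return 1 if score >= 0 else -1
```

    RotBoost wraps this loop inside Rotation Forest: each boosted ensemble is trained on a randomly rotated feature space, which supplies the diversity that boosting alone lacks.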

  16. EFS: an ensemble feature selection tool implemented as R-package and web-application.

    PubMed

    Neumann, Ursula; Genze, Nikita; Heider, Dominik

    2017-01-01

    Feature selection methods aim at identifying a subset of features that improve the prediction performance of subsequent classification models and thereby also simplify their interpretability. Preceding studies demonstrated that single feature selection methods can have specific biases, whereas an ensemble feature selection has the advantage of alleviating and compensating for these biases. The software EFS (Ensemble Feature Selection) makes use of multiple feature selection methods and combines their normalized outputs into a quantitative ensemble importance. Currently, eight different feature selection methods have been integrated in EFS, which can be used separately or combined in an ensemble. EFS identifies relevant features while compensating for the specific biases of single methods through its ensemble approach. Thereby, EFS can improve the prediction accuracy and interpretability in subsequent binary classification models. EFS can be downloaded as an R-package from CRAN or used via a web application at http://EFS.heiderlab.de.
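
    The core idea of combining normalized importances might be sketched as below: min-max normalize each method's scores and average them. This is a generic sketch; the EFS package's exact normalization may differ.

```python
import numpy as np

# Ensemble feature importance: min-max normalize each method's scores to
# [0, 1], then average across methods.
def ensemble_importance(score_lists):
    combined = np.zeros(len(score_lists[0]), dtype=float)
    for s in score_lists:
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        if span > 0:                     # constant scores contribute nothing
            combined += (s - s.min()) / span
    return combined / len(score_lists)

# three hypothetical selection methods scoring the same four features
imp = ensemble_importance([[0.9, 0.1, 0.5, 0.0],
                           [10, 2, 8, 1],
                           [3, 1, 2, 0]])
```

    Here the first feature is ranked highest by every method, so it receives the maximal ensemble importance.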

  17. Hamiltonian thermodynamics of three-dimensional dilatonic black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dias, Goncalo A. S.; Lemos, Jose P. S.

    2008-08-15

    The action for a class of three-dimensional dilaton-gravity theories with a negative cosmological constant can be recast in a Brans-Dicke type action, with its free {omega} parameter. These theories have static spherically symmetric black holes. Those with well-formulated asymptotics are studied through a Hamiltonian formalism, and their thermodynamic properties are found. The theories studied are general relativity ({omega}{yields}{infinity}), a dimensionally reduced cylindrical four-dimensional general relativity theory ({omega}=0), and a theory representing a class of theories ({omega}=-3). The Hamiltonian formalism is set up in three dimensions through foliations on the right region of the Carter-Penrose diagram, with the bifurcation 1-sphere as the left boundary and anti-de Sitter infinity as the right boundary. The metric functions on the foliated hypersurfaces are the canonical coordinates. The Hamiltonian action is written, the Hamiltonian being a sum of constraints. One finds a new action which yields an unconstrained theory with one pair of canonical coordinates (M,P{sub M}), M being the mass parameter and P{sub M} its conjugate momentum. The resulting Hamiltonian is a sum of boundary terms only. A quantization of the theory is performed. The Schroedinger evolution operator is constructed, the trace is taken, and the partition function of the canonical ensemble is obtained. The black hole entropies differ, in general, from the usual quarter of the horizon area due to the dilaton.

  18. A New Method for Determining Structure Ensemble: Application to a RNA Binding Di-Domain Protein.

    PubMed

    Liu, Wei; Zhang, Jingfeng; Fan, Jing-Song; Tria, Giancarlo; Grüber, Gerhard; Yang, Daiwen

    2016-05-10

    Structure ensemble determination is the basis of understanding the structure-function relationship of a multidomain protein with weak domain-domain interactions. Paramagnetic relaxation enhancement has been proven a powerful tool in the study of structure ensembles, but there exist a number of challenges such as spin-label flexibility, domain dynamics, and overfitting. Here we propose a new (to our knowledge) method to describe structure ensembles using a minimal number of conformers. In this method, individual domains are considered rigid; the position of each spin-label conformer and the structure of each protein conformer are defined by three and six orthogonal parameters, respectively. First, the spin-label ensemble is determined by optimizing the positions and populations of spin-label conformers against intradomain paramagnetic relaxation enhancements with a genetic algorithm. Subsequently, the protein structure ensemble is optimized using a more efficient genetic algorithm-based approach and an overfitting indicator, both of which were established in this work. The method was validated using a reference ensemble with a set of conformers whose populations and structures are known. This method was also applied to study the structure ensemble of the tandem di-domain of a poly(U)-binding protein. The determined ensemble was supported by small-angle x-ray scattering and nuclear magnetic resonance relaxation data. The ensemble obtained suggests an induced fit mechanism for recognition of target RNA by the protein. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  19. A New Approach to Monte Carlo Simulations in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.

    2002-08-01

    Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near second-order transitions and to metastability near first-order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).
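
    The random walk in energy space described above (the Wang-Landau algorithm of Ref. 2) can be sketched on a toy system whose density of states is known exactly: N non-interacting two-level units, where the energy E counts excited units and g(E) is the binomial coefficient C(N, E). The flat-histogram convergence check is omitted for brevity; parameters are illustrative.

```python
import math, random

# Wang-Landau sketch: random walk in energy space, accumulating an estimate
# of ln g(E) and accepting moves with probability min(1, g(E)/g(Enew)).
random.seed(1)
N = 10
lng = [0.0] * (N + 1)            # running estimate of ln g(E)
state = [0] * N
E = 0
f = 1.0                          # ln of the modification factor

while f > 1e-4:                  # flat-histogram check omitted for brevity
    for _ in range(20000):
        i = random.randrange(N)
        Enew = E + (1 if state[i] == 0 else -1)
        # accept with min(1, g(E)/g(Enew)) -> roughly flat visits in E
        if random.random() < math.exp(min(0.0, lng[E] - lng[Enew])):
            state[i] ^= 1
            E = Enew
        lng[E] += f              # penalize the current energy level
    f /= 2.0                     # refine the modification factor

# fix the arbitrary additive constant so that sum_E g(E) = 2^N
shift = max(lng)
Z = sum(math.exp(x - shift) for x in lng)
g = [math.exp(x - shift) / Z * 2**N for x in lng]   # estimated g(E)
```

    With the density of states in hand, any canonical average follows by reweighting with exp(-E/kT), which is what makes the approach so general.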

  20. Adaptive correction of ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane

    2017-04-01

    Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equations are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy.
We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias-correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
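
    A minimal sketch of sequential Kalman-filter bias correction for an ensemble-mean forecast is given below, assuming a random-walk model for the bias. It illustrates only the simple adaptive-bias idea; the paper's filter additionally exploits the ensemble's second-order statistics, and all parameters and data here are illustrative.

```python
import numpy as np

# The forecast bias b is a hidden state with random-walk dynamics (process
# variance q); each new forecast error updates the estimate.
def kalman_bias_correction(forecasts, observations, q=0.05, r=1.0):
    b, p = 0.0, 1.0                  # bias estimate and its variance
    corrected = []
    for f, y in zip(forecasts, observations):
        corrected.append(f - b)      # correct using the current estimate
        p = p + q                    # predict: bias may have drifted
        k = p / (p + r)              # Kalman gain
        b = b + k * ((f - y) - b)    # update with the newest forecast error
        p = (1.0 - k) * p
    return np.array(corrected)

rng = np.random.default_rng(3)
truth = 10.0 + np.sin(np.arange(200) / 10.0)
fcst = truth + 2.0 + rng.normal(0.0, 0.5, 200)   # forecasts with a +2 bias
corr = kalman_bias_correction(fcst, truth)
```

    After a short spin-up period the systematic +2 bias is removed, while no long training data set is needed, which is the main appeal of the adaptive approach.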

  1. Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM) III: Scenario analysis

    USGS Publications Warehouse

    Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.

    2009-01-01

    An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was obvious. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method based on a trimmed mean resulted in a single somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.
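
    The trimmed-mean deterministic combination mentioned above can be sketched as follows, with made-up member predictions; the trimming fraction is illustrative.

```python
import numpy as np

# Trimmed-mean combination of ensemble members: drop the most extreme
# predictions from each tail before averaging, which damps outlier models.
def trimmed_mean(x, cut=0.1):
    x = np.sort(np.asarray(x, dtype=float))
    k = int(len(x) * cut)            # number of members to drop per tail
    return x[k:len(x) - k].mean()

# ten hypothetical member predictions, one of them an outlier (3.5)
members = [0.8, 0.9, 1.0, 1.0, 1.1, 1.1, 1.2, 1.2, 1.3, 3.5]
robust = trimmed_mean(members)       # outlier-resistant combination
plain = float(np.mean(members))      # ordinary ensemble mean
```

    The single outlying model shifts the plain mean noticeably but leaves the trimmed mean essentially unaffected.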

  2. Free energy calculations along entropic pathways. I. Homogeneous vapor-liquid nucleation for atomic and molecular systems

    NASA Astrophysics Data System (ADS)

    Desgranges, Caroline; Delhommelle, Jerome

    2016-11-01

    Using the entropy S as a reaction coordinate, we determine the free energy barrier associated with the formation of a liquid droplet from a supersaturated vapor for atomic and molecular fluids. For this purpose, we develop the μVT-S simulation method that combines the advantages of the grand-canonical ensemble, which allows for a direct evaluation of the entropy, and of the umbrella sampling method, which is well suited to the study of an activated process like nucleation. Applying this approach to an atomic system such as Ar allows us to test the method. The results show that the μVT-S method gives the correct dependence on supersaturation of the height of the free energy barrier and of the size of the critical droplet, when compared to predictions from the classical nucleation theory and to previous simulation results. In addition, it provides insight into the relation between the entropy and droplet formation throughout this process. An additional advantage of the μVT-S approach is its direct transferability to molecular systems, since it uses the entropy of the system as the reaction coordinate. Applications of the μVT-S simulation method to N2 and CO2 are presented and discussed in this work, showing the versatility of the μVT-S approach.

  3. Exploring the calibration of a wind forecast ensemble for energy applications

    NASA Astrophysics Data System (ADS)

    Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne

    2015-04-01

    In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSO) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the considered raw ensemble wind forecasts suffer from underdispersion and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is captured by the ensemble spread from the first forecast hours onward. Additionally, the ensemble members after calibration should remain physically consistent scenarios. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which became operational in 2012 at DWD. The ensemble consists of 20 members driven by four different global models. The model area includes all of Germany and parts of Central Europe with a horizontal resolution of 2.8 km and a vertical resolution of 50 model levels. For verification we use wind mast measurements at around 100 m height, corresponding to the hub height of the wind turbines in wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. Applying these methods, we already show an improvement of ensemble wind forecasts from COSMO-DE-EPS for energy applications.
In addition, an ensemble copula coupling approach transfers the time-dependencies of the raw ensemble to the calibrated ensemble. The calibrated wind forecasts are evaluated first with univariate probabilistic scores and additionally with diagnostics of wind ramps in order to assess the time-consistency of the calibrated ensemble members.

  4. Simulation of weak polyelectrolytes: a comparison between the constant pH and the reaction ensemble method

    NASA Astrophysics Data System (ADS)

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-03-01

    The reaction ensemble and the constant pH method are well-known chemical equilibrium approaches to simulate protonation and deprotonation reactions in classical molecular dynamics and Monte Carlo simulations. In this article, we demonstrate the similarity between both methods under certain conditions. We perform molecular dynamics simulations of a weak polyelectrolyte in order to compare the titration curves obtained by both approaches. Our findings reveal a good agreement between the methods when the reaction ensemble is used to sweep the reaction constant. Pronounced differences between the reaction ensemble and the constant pH method can be observed for stronger acids and bases in terms of adaptive pH values. These deviations are due to the presence of explicit protons in the reaction ensemble method which induce a screening of electrostatic interactions between the charged titrable groups of the polyelectrolyte. The outcomes of our simulations hint at a better applicability of the reaction ensemble method for systems in confined geometries and titrable groups in polyelectrolytes with different pKa values.
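
    The constant pH method's Metropolis criterion, in which a (de)protonation move carries a free-energy term ln(10)(pH - pKa) in kT units and no explicit proton is inserted (unlike the reaction ensemble), can be sketched on a single titrable site. The toy model below sets the electrostatic energy change to zero; names and parameters are illustrative, and sign conventions should be checked against the literature.

```python
import math, random

# Metropolis acceptance: accept a move of free-energy cost dG (in kT) with
# probability min(1, exp(-dG)).
def accept(dG):
    return dG <= 0.0 or random.random() < math.exp(-dG)

def deprotonated_fraction(pH, pKa, steps=20000):
    random.seed(0)
    protonated, count = True, 0
    for _ in range(steps):
        dG = math.log(10.0) * (pH - pKa)     # pH bias term, kT units
        if protonated and accept(-dG):       # deprotonation attempt
            protonated = False
        elif not protonated and accept(dG):  # protonation attempt
            protonated = True
        count += not protonated
    return count / steps

f_mid = deprotonated_fraction(4.0, 4.0)    # ideal titration value: 0.5
f_high = deprotonated_fraction(5.0, 4.0)   # ideal value: 10/11, about 0.91
```

    The sampled fractions reproduce the ideal Henderson-Hasselbalch curve 1/(1 + 10^(pKa-pH)), which is the expected limit when titrable groups do not interact.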

  5. Ensemble Data Mining Methods

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2004-01-01

    Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve better prediction accuracy than any of the individual models could on their own. The basic goal when designing an ensemble is the same as when establishing a committee of people: each member of the committee should be as competent as possible, but the members should be complementary to one another. If the members are not complementary, i.e., if they always agree, then the committee is unnecessary---any one member is sufficient. If the members are complementary, then when one or a few members make an error, the probability is high that the remaining members can correct this error. Research in ensemble methods has largely revolved around designing ensembles consisting of competent yet complementary models.
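
    The committee intuition above can be quantified with a simple binomial calculation: if m members err independently with probability p, the committee (majority vote) errs only when more than half of its members do.

```python
from math import comb

# Probability that a majority of m independent members errs, given each
# errs with probability p_err (a Condorcet-style illustration of why
# complementary, better-than-chance members help an ensemble).
def majority_error(p_err, m):
    return sum(comb(m, k) * p_err**k * (1 - p_err)**(m - k)
               for k in range(m // 2 + 1, m + 1))

single = 0.3                             # each member is wrong 30% of the time
committee = majority_error(single, 11)   # an 11-member majority vote
```

    For 11 members each wrong 30% of the time, the majority vote errs only about 8% of the time; the calculation assumes independent errors, which is exactly the complementarity the abstract stresses.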

  6. Ensemble gene function prediction database reveals genes important for complex I formation in Arabidopsis thaliana.

    PubMed

    Hansen, Bjoern Oest; Meyer, Etienne H; Ferrari, Camilla; Vaid, Neha; Movahedi, Sara; Vandepoele, Klaas; Nikoloski, Zoran; Mutwil, Marek

    2018-03-01

    Recent advances in gene function prediction rely on ensemble approaches that integrate results from multiple inference methods to produce superior predictions. Yet, these developments remain largely unexplored in plants. We have explored and compared two methods to integrate 10 gene co-function networks for Arabidopsis thaliana and demonstrate how the integration of these networks produces more accurate gene function predictions for a larger fraction of genes with unknown function. These predictions were used to identify genes involved in mitochondrial complex I formation, and for five of them, we confirmed the predictions experimentally. The ensemble predictions are provided as a user-friendly online database, EnsembleNet. The methods presented here demonstrate that ensemble gene function prediction is a powerful method to boost prediction performance, whereas the EnsembleNet database provides a cutting-edge community tool to guide experimentalists. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  7. Computer simulation of liquid-vapor coexistence of confined quantum fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trejos, Víctor M.; Gil-Villegas, Alejandro, E-mail: gil@fisica.ugto.mx; Martinez, Alejandro

    2013-11-14

    The liquid-vapor coexistence (LV) of bulk and confined quantum fluids has been studied by Monte Carlo computer simulation for particles interacting via a semiclassical effective pair potential V{sub eff}(r) = V{sub LJ} + V{sub Q}, where V{sub LJ} is the Lennard-Jones 12-6 potential (LJ) and V{sub Q} is the first-order Wigner-Kirkwood (WK-1) quantum potential, which depends on β = 1/kT and de Boer's quantumness parameter Λ=h/σ√(mε), where k and h are the Boltzmann and Planck constants, respectively, m is the particle's mass, T is the temperature of the system, and σ and ε are the LJ potential parameters. The non-conformal properties of the system of particles interacting via the effective pair potential V{sub eff}(r) are due to Λ, since the LV phase diagram is modified by varying Λ. We found that the WK-1 system gives an accurate description of the LV coexistence for bulk phases of several quantum fluids, obtained by the Gibbs Ensemble Monte Carlo method (GEMC). Confinement effects were introduced using the Canonical Ensemble (NVT) to simulate quantum fluids contained within parallel hard walls separated by a distance L{sub p}, within the range 2σ ⩽ L{sub p} ⩽ 6σ. The critical temperature of the system is reduced by decreasing L{sub p} and increasing Λ, and the liquid-vapor transition is no longer observed for L{sub p}/σ < 2, in contrast to what has been observed for the classical system.
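
    In reduced LJ units (r in σ, energies in ε), the WK-1 correction of the abstract works out to V_eff = v_LJ + (Λ²β/96π²)(v″ + 2v′/r), derived here from the abstract's definitions of Λ and β; the prefactor should be checked against the original WK-1 literature before serious use.

```python
import math

# WK-1 effective Lennard-Jones potential in reduced units. For
# v_LJ = 4(r^-12 - r^-6), the radial Laplacian v'' + 2 v'/r evaluates to
# 528 r^-14 - 120 r^-8.
def v_eff(r, lam, beta):
    v_lj = 4.0 * (r**-12 - r**-6)
    lap = 528.0 * r**-14 - 120.0 * r**-8
    return v_lj + lam**2 * beta / (96.0 * math.pi**2) * lap

r_min = 2.0**(1.0 / 6.0)                    # classical LJ minimum, v_LJ = -1
classical = v_eff(r_min, lam=0.0, beta=1.0)
quantum = v_eff(r_min, lam=1.7, beta=1.0)   # illustrative quantumness value
```

    Since the Laplacian is positive at the classical minimum, the quantum term raises and shallows the well, consistent with the reduced critical temperatures reported above for increasing Λ.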

  8. Comparison of projection skills of deterministic ensemble methods using pseudo-simulation data generated from multivariate Gaussian distribution

    NASA Astrophysics Data System (ADS)

    Oh, Seok-Geun; Suh, Myoung-Seok

    2017-07-01

    The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member for each category. However, their projection skills were significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than non-weighted methods, in particular, for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular, for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both the accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
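
    One common flavour of skill-weighted ensemble averaging, with member weights proportional to inverse RMSE on a training period, can be sketched as follows. This illustrates the general idea only; WEA_RAC additionally folds in correlation, and WEA_Tay uses the Taylor score. Data are synthetic.

```python
import numpy as np

# Skill-weighted ensemble averaging: weights proportional to inverse RMSE
# over a training period, then applied to a test period.
def wea_inverse_rmse(train_preds, train_truth, test_preds):
    rmse = np.sqrt(np.mean((train_preds - train_truth) ** 2, axis=1))
    w = (1.0 / rmse) / np.sum(1.0 / rmse)    # normalized skill weights
    return w @ test_preds

rng = np.random.default_rng(7)
truth = rng.normal(size=100)
noise = np.array([0.2, 0.5, 2.0])            # three members, varying skill
preds = truth + noise[:, None] * rng.normal(size=(3, 100))

combined = wea_inverse_rmse(preds[:, :50], truth[:50], preds[:, 50:])
equal = preds[:, 50:].mean(axis=0)           # unweighted ensemble mean
```

    Down-weighting the poor member gives a smaller test-period RMSE than equal weighting, mirroring the advantage of the weighted methods reported above for members of uneven skill.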

  9. Absence of Quantum Time Crystals.

    PubMed

    Watanabe, Haruki; Oshikawa, Masaki

    2015-06-26

    In analogy with crystalline solids around us, Wilczek recently proposed the idea of "time crystals" as phases that spontaneously break the continuous time translation into a discrete subgroup. The proposal stimulated further studies and vigorous debates about whether it can be realized in a physical system. However, a precise definition of the time crystal is needed to resolve the issue. Here we first present a definition of time crystals based on the time-dependent correlation functions of the order parameter. We then prove a no-go theorem that rules out the possibility of time crystals defined as such, in the ground state or in the canonical ensemble of a general Hamiltonian, which consists of not-too-long-range interactions.

  10. A Novel Data-Driven Learning Method for Radar Target Detection in Nonstationary Environments

    DTIC Science & Technology

    2016-05-01

    "Classifier ensembles for changing environments," in Multiple Classifier Systems, vol. 3077, F. Roli, J. Kittler and T. Windeatt, Eds. New York, NY... Dec. 2006, pp. 1113-1118. [21] J. Z. Kolter and M. A. Maloof, "Dynamic weighted majority: An ensemble method for drifting concepts," J. Mach. Learn... Trans. Neural Netw., vol. 22, no. 10, pp. 1517-1531, Oct. 2011. [23] R. Polikar, "Ensemble learning," in Ensemble Machine Learning: Methods and...

  11. Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles

    PubMed Central

    2016-01-01

    Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource-intensive method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
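
    An O(N^2)-style approximation can be sketched as a greedy forward selection that adds, at each step, the conformation that most improves the ensemble AUC. The scores below are synthetic, and the paper's exact objective, scoring convention, and tie-handling may differ.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Greedy forward selection of receptor conformations: a molecule's ensemble
# score is its best score over the selected conformations.
def greedy_ensemble(scores, labels, max_size=3):
    chosen, best_auc = [], 0.0
    for _ in range(max_size):
        cand, cand_auc = None, best_auc
        for j in range(scores.shape[0]):
            if j in chosen:
                continue
            ens = scores[chosen + [j]].max(axis=0)
            auc = roc_auc_score(labels, ens)
            if auc > cand_auc:
                cand, cand_auc = j, auc
        if cand is None:                 # no candidate improves the AUC
            break
        chosen.append(cand)
        best_auc = cand_auc
    return chosen, best_auc

rng = np.random.default_rng(0)
labels = np.repeat([1, 0], 50)           # 50 actives, 50 decoys
scores = rng.normal(size=(6, 100))       # six hypothetical conformations
scores[:3, :50] += 1.0                   # only the first three are informative
members, auc = greedy_ensemble(scores, labels)
```

    Each of the up-to-max_size passes scans all N conformations, giving the quadratic cost the abstract describes, versus exhaustively scoring every subset.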

  12. Comparing Planning Hydrologic Ensembles associated with Paleoclimate, Projected Climate, and blended Climate Information Sets

    NASA Astrophysics Data System (ADS)

    Brekke, L. D.; Prairie, J.; Pruitt, T.; Rajagopalan, B.; Woodhouse, C.

    2008-12-01

    Water resources adaptation planning under climate change involves making assumptions about probabilistic water supply conditions, which are linked to a given climate context (e.g., instrument records, paleoclimate indicators, projected climate data, or a blend of these). Methods have been demonstrated to associate water supply assumptions with any of these climate information types. Additionally, demonstrations have been offered that represent these information types in a scenario-rich (ensemble) planning framework, either via ensembles (e.g., a survey of many climate projections) or stochastic modeling (e.g., based on instrument records or paleoclimate indicators). If the planning goal involves using a hydrologic ensemble that jointly reflects paleoclimate (e.g., lower-frequency variations) and projected climate information (e.g., monthly to annual trends), methods are required to guide how these information types might be translated into water supply assumptions. However, even if such a method exists, there is a lack of understanding of how such a hydrologic ensemble might differ from ensembles developed relative to paleoclimate or projected climate information alone. This research explores two questions: (1) how might paleoclimate and projected climate information be blended into a planning hydrologic ensemble, and (2) how does a planning hydrologic ensemble differ when associated with the individual climate information types (i.e., instrumental records, paleoclimate, projected climate, or a blend of the latter two). Case study basins include the Gunnison River Basin in Colorado and the Missouri River Basin above Toston in Montana. The presentation will highlight ensemble development methods by information type and a comparison of ensemble results.

  13. Spatio-Chromatic Adaptation via Higher-Order Canonical Correlation Analysis of Natural Images

    PubMed Central

    Gutmann, Michael U.; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús

    2014-01-01

    Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation. PMID:24533049

  14. Spatio-chromatic adaptation via higher-order canonical correlation analysis of natural images.

    PubMed

    Gutmann, Michael U; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús

    2014-01-01

    Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation.

  15. Thermodynamics of charged dilatonic BTZ black holes in rainbow gravity

    NASA Astrophysics Data System (ADS)

    Dehghani, M.

    2018-02-01

    In this paper, the charged three-dimensional Einstein theory coupled to a dilaton field has been considered in rainbow gravity. The dilaton potential has been written as a linear combination of two Liouville-type potentials. Four new classes of charged dilatonic rainbow black hole solutions, as exact solutions to the coupled field equations of the energy-dependent spacetime, have been obtained. Two of them correspond to the Coulomb electric field, and the others are consequences of a modified Coulomb's law. The total charge and mass as well as the entropy, temperature, and electric potential of the new charged black holes have been calculated in the presence of rainbow functions. Although the thermodynamic quantities are affected by the rainbow functions, it has been found that the first law of black hole thermodynamics is still valid for all of the new black hole solutions. Finally, making use of the canonical ensemble method and the black hole heat capacity, the thermal stability and phase transitions of the new rainbow black hole solutions have been analyzed.

  16. Monte Carlo simulations of dipolar and quadrupolar linear Kihara fluids. A test of thermodynamic perturbation theory

    NASA Astrophysics Data System (ADS)

    Garzon, B.

    Several simulations of dipolar and quadrupolar linear Kihara fluids have been performed using the Monte Carlo method in the canonical ensemble. Pressure and internal energy have been determined directly from the simulations, and the Helmholtz free energy by thermodynamic integration. Simulations were carried out for fluids of fixed elongation at two different densities, and for several values of temperature and dipolar or quadrupolar moment at each density. Results are compared with the perturbation theory developed by Boublik for this type of fluid; good agreement between simulated and theoretical values was obtained, especially for quadrupolar fluids. The simulations are also used to obtain the liquid structure, giving the first few coefficients of the expansion of the pair correlation functions in terms of spherical harmonics. Estimates of the ratio of the triple-point temperature to the critical temperature are given for some dipolar and quadrupolar linear fluids. The stability range of the liquid phase of these substances is briefly discussed, along with an analysis of the opposing roles of the dipole moment and the molecular elongation in this stability.
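
    The canonical-ensemble (NVT) Metropolis sampling underlying such simulations can be illustrated with a minimal toy system (a single particle in a harmonic well, not the Kihara potential of the study; all parameters here are arbitrary):

```python
import math
import random

def metropolis_nvt(beta=1.0, steps=200_000, step_size=1.0, seed=42):
    """Canonical-ensemble (NVT) Metropolis sampling of one particle in a
    harmonic well U(x) = x^2 / 2; returns the mean potential energy."""
    rng = random.Random(seed)
    x, energy_sum = 0.0, 0.0
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        dU = 0.5 * x_new**2 - 0.5 * x**2
        # Accept the move with Boltzmann probability min(1, exp(-beta*dU))
        if dU <= 0 or rng.random() < math.exp(-beta * dU):
            x = x_new
        energy_sum += 0.5 * x**2
    return energy_sum / steps

print(metropolis_nvt())  # equipartition predicts <U> = kT/2 = 0.5
```

    Fixed particle number, volume and temperature define the canonical ensemble; extending this loop to many interacting particles with a pair potential gives the scheme used in the abstract.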

  17. Comparison of Basic and Ensemble Data Mining Methods in Predicting 5-Year Survival of Colorectal Cancer Patients.

    PubMed

    Pourhoseingholi, Mohamad Amin; Kheirian, Sedigheh; Zali, Mohammad Reza

    2017-12-01

    Colorectal cancer (CRC) is one of the most common malignancies and causes of cancer mortality worldwide. Given the importance of predicting the survival of CRC patients and the growing use of data mining methods, this study aims to compare the performance of models for predicting the 5-year survival of CRC patients using a variety of basic and ensemble data mining methods. The CRC dataset from the Shahid Beheshti University of Medical Sciences Research Center for Gastroenterology and Liver Diseases was used for prediction and a comparative study of the basic and ensemble data mining techniques. Feature selection methods were used to select predictor attributes for classification. The WEKA toolkit and MedCalc software were used for creating and comparing the models, respectively. The results showed that the predictive performance of the developed models was altogether high (all greater than 90%). Overall, the performance of the ensemble models was higher than that of the basic classifiers, and the best result was achieved by the ensemble voting model in terms of area under the ROC curve (AUC = 0.96). Comparison of the models' AUCs showed that the ensemble voting method significantly outperformed all models except the Random Forest (RF) and Bayesian Network (BN) methods, considering their overlapping 95% confidence intervals. This result may indicate the high predictive power of these two methods, along with ensemble voting, for predicting the 5-year survival of CRC patients.
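
    The voting combination used above can be sketched as a plain hard-vote combiner (a generic illustration, not the WEKA implementation used in the study; the classifier outputs below are made up):

```python
from collections import Counter

def hard_vote(predictions):
    """Combine per-classifier label lists by majority vote.
    predictions: list of equal-length label sequences, one per classifier."""
    return [Counter(labels).most_common(1)[0][0]
            for labels in zip(*predictions)]

# Three hypothetical base classifiers predicting 5-year survival (1) or not (0)
clf_a = [1, 0, 1, 1, 0]
clf_b = [1, 1, 1, 0, 0]
clf_c = [0, 0, 1, 1, 1]
print(hard_vote([clf_a, clf_b, clf_c]))  # -> [1, 0, 1, 1, 0]
```

    Soft voting instead averages the classifiers' class probabilities before taking the argmax, which is usually preferred when calibrated probabilities are available.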

  18. Canonical decomposition of magnetotelluric responses: Experiment on 1D anisotropic structures

    NASA Astrophysics Data System (ADS)

    Guo, Ze-qiu; Wei, Wen-bo; Ye, Gao-feng; Jin, Sheng; Jing, Jian-en

    2015-08-01

    Horizontal electrical heterogeneity of the subsurface mostly originates from structural complexity and electrical anisotropy, and local near-surface electrical heterogeneity can severely distort regional electromagnetic responses. Conventional distortion analyses for magnetotelluric soundings are primarily physical decomposition methods with respect to isotropic models, which mostly presume that the geoelectric distribution of geological structures follows local and regional patterns represented by 3D/2D models. Given the widespread anisotropy of earth media, the confusion between 1D anisotropic responses and 2D isotropic responses, and the defects of physical decomposition methods, we propose modeling experiments with canonical decomposition in terms of 1D layered anisotropic models. This method is a mathematical decomposition based on eigenstate analyses, as distinguished from distortion analyses, and can be used to recover electrical information such as strike directions and maximum and minimum conductivity. We tested the method with numerical simulation experiments on several 1D synthetic models, which showed that canonical decomposition is quite effective at revealing geological anisotropic information. Finally, against the background of anisotropy indicated by previous geological and seismological studies, canonical decomposition is applied to real data acquired in the North China Craton for 1D anisotropy analyses. The result shows that, with effective modeling and cautious interpretation, canonical decomposition can be another good method for detecting the anisotropy of geological media.

  19. Evaluation of medium-range ensemble flood forecasting based on calibration strategies and ensemble methods in Lanjiang Basin, Southeast China

    NASA Astrophysics Data System (ADS)

    Liu, Li; Gao, Chao; Xuan, Weidong; Xu, Yue-Ping

    2017-11-01

    Ensemble flood forecasts by hydrological models using numerical weather prediction products as forcing data are becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system comprising an automatically calibrated Variable Infiltration Capacity model and quantitative precipitation forecasts from the TIGGE dataset is constructed for the Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by a parallel-programmed ε-NSGA II multi-objective algorithm. Based on the solutions found by ε-NSGA II, two differently parameterized models are determined to simulate daily flows and peak flows at each of the three hydrological stations. A simple yet effective modular approach is then proposed to combine the daily and peak flows at each station into one composite series. Five ensemble methods and various evaluation metrics are adopted. The results show that ε-NSGA II can provide an objective determination of the parameter estimates, and the parallel program permits a more efficient simulation. It is also demonstrated that the forecasts from ECMWF have more favorable skill scores than the other Ensemble Prediction Systems. The multimodel ensembles have advantages over all the single-model ensembles, and the multimodel methods weighted on members and skill scores outperform the other methods. Furthermore, the overall performance at the three stations can be satisfactory up to ten days; however, the hydrological errors can degrade the skill score by approximately 2 days, and their influence persists, with a weakening trend, until a lead time of 10 days. With respect to peak flows selected by the Peaks Over Threshold approach, the ensemble means from single models or multimodels are generally underestimated; while the ensemble mean can bring overall improvement to the forecasting of flows, for peak values it is more appropriate to take the flood forecasts from each individual member into account.

  20. Similarity Measures for Protein Ensembles

    PubMed Central

    Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper

    2009-01-01

    Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations. However, instead of examining individual conformations it is in many cases more relevant to analyse ensembles of conformations that have been obtained either through experiments or from methods such as molecular dynamics simulations. We here present three approaches that can be used to compare conformational ensembles in the same way as the root mean square deviation is used to compare individual pairs of structures. The methods are based on the estimation of the probability distributions underlying the ensembles and subsequent comparison of these distributions. We first validate the methods using a synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single-molecule refinement. PMID:19145244
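
    The idea of comparing ensembles by first estimating their underlying distributions can be sketched in one dimension with histogram densities and the Jensen-Shannon divergence (a generic illustration; the paper's actual density estimators and similarity scores differ):

```python
import math
import random

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2, so the value lies in [0, 1])
    between two discrete distributions given as equal-length lists."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def histogram(samples, lo, hi, bins):
    """Normalized histogram of samples over [lo, hi]."""
    counts = [0] * bins
    for s in samples:
        k = min(bins - 1, max(0, int((s - lo) / (hi - lo) * bins)))
        counts[k] += 1
    return [c / len(samples) for c in counts]

random.seed(0)
ens_a = [random.gauss(0.0, 1.0) for _ in range(5000)]   # "ensemble" 1
ens_b = [random.gauss(0.5, 1.0) for _ in range(5000)]   # shifted "ensemble" 2
p = histogram(ens_a, -4, 4, 40)
q = histogram(ens_b, -4, 4, 40)
print(js_divergence(p, q))   # small but nonzero: similar, not identical
```

    For real conformational ensembles the samples would be structural coordinates (e.g. a dihedral angle or a projected collective variable) rather than scalars.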

  1. A Statistical Description of Neural Ensemble Dynamics

    PubMed Central

    Long, John D.; Carmena, Jose M.

    2011-01-01

    The growing use of multi-channel neural recording techniques in behaving animals has produced rich datasets that hold immense potential for advancing our understanding of how the brain mediates behavior. One limitation of these techniques is that they do not provide important information about the underlying anatomical connections among the recorded neurons within an ensemble. Inferring these connections is often intractable because the set of possible interactions grows exponentially with ensemble size. This is a fundamental challenge one confronts when interpreting these data. Unfortunately, the combination of expert knowledge and ensemble data is often insufficient for selecting a unique model of these interactions. Our approach shifts away from modeling the network diagram of the ensemble toward analyzing changes in the dynamics of the ensemble as they relate to behavior. Our contribution consists of adapting techniques from signal processing and Bayesian statistics to track the dynamics of ensemble data on time-scales comparable with behavior. We employ a Bayesian estimator to weigh prior information against the available ensemble data, and use an adaptive quantization technique to aggregate poorly estimated regions of the ensemble data space. Importantly, our method is capable of detecting changes in both the magnitude and structure of correlations among neurons missed by firing rate metrics. We show that this method is scalable across a wide range of time-scales and ensemble sizes. Lastly, the performance of this method on both simulated and real ensemble data is used to demonstrate its utility. PMID:22319486

  2. The Upper and Lower Bounds of the Prediction Accuracies of Ensemble Methods for Binary Classification

    PubMed Central

    Wang, Xueyi; Davidson, Nicholas J.

    2011-01-01

    Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that have been missed or misinterpreted in previous literature. First we show the upper and lower bounds of the prediction accuracies (i.e. the best and worst possible prediction accuracies) of ensemble methods. Next we show that an ensemble method can achieve > 0.5 prediction accuracy even while the individual classifiers have < 0.5 prediction accuracies. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify these results and show that the upper- and lower-bound accuracies are hard to achieve with random individual classifiers, so better algorithms need to be developed. PMID:21853162
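
    For intuition on why an ensemble can beat its members, the classical independent-voters calculation is useful (a textbook special case with identical, independent classifiers; it is not the paper's construction, which covers dependent classifiers with differing accuracies):

```python
from math import comb

def majority_vote_accuracy(p, n):
    """Accuracy of a majority vote over n independent classifiers,
    each correct with probability p (n odd; Condorcet-style result)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Individual accuracy 0.6: the independent-vote ensemble steadily exceeds it
for n in (1, 11, 101):
    print(n, round(majority_vote_accuracy(0.6, n), 4))
```

    With p > 0.5 the majority-vote accuracy grows monotonically toward 1 as n increases; with p < 0.5 the same formula decays toward 0, which is why the paper's > 0.5 ensemble built from < 0.5 classifiers requires heterogeneous, carefully combined members rather than identical independent ones.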

  3. Ensemble framework based real-time respiratory motion prediction for adaptive radiotherapy applications.

    PubMed

    Tatinati, Sivanagaraja; Nazarpour, Kianoush; Tech Ang, Wei; Veluvolu, Kalyana C

    2016-08-01

    Successful treatment of tumors with motion-adaptive radiotherapy requires accurate prediction of respiratory motion, ideally with a prediction horizon larger than the latency of the radiotherapy system. Accurate prediction of respiratory motion is, however, a non-trivial task due to the presence of irregularities and intra-trace variabilities, such as baseline drift and temporal changes in the fundamental frequency pattern. In this paper, to enhance the accuracy of respiratory motion prediction, we propose a stacked regression ensemble framework that integrates heterogeneous respiratory motion prediction algorithms. We further address two crucial issues for developing a successful ensemble framework: (1) selection of appropriate prediction methods to ensemble (level-0 methods) from among the best existing prediction methods; and (2) finding a suitable generalization approach that can successfully exploit the relative advantages of the chosen level-0 methods. The efficacy of the developed ensemble framework is assessed with real respiratory motion traces acquired from 31 patients undergoing treatment. Results show that the developed ensemble framework improves the prediction performance significantly compared to the best existing methods. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
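
    Stacked regression in its generic (Breiman-style) form fits a level-1 combiner on the level-0 predictions; a toy sketch, not the paper's framework, with two made-up level-0 predictors:

```python
import numpy as np

# Toy target signal and two hypothetical level-0 predictors
rng = np.random.default_rng(5)
t = np.linspace(0.0, 20.0, 400)
target = np.sin(t)                                   # idealized motion trace
pred_a = 0.8 * np.sin(t)                             # underestimates amplitude
pred_b = np.sin(t) + 0.3 * rng.normal(size=t.size)   # unbiased but noisy

# Level-1 generalizer: least-squares weights on the level-0 outputs
X = np.column_stack([pred_a, pred_b])
w, *_ = np.linalg.lstsq(X, target, rcond=None)
stacked = X @ w

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print(rmse(pred_a, target), rmse(pred_b, target), rmse(stacked, target))
```

    In practice the combiner must be trained on out-of-sample level-0 predictions (e.g. via cross-validation) to avoid overfitting; here the in-sample fit only illustrates that the stacked output cannot be worse than either member under the least-squares criterion.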

  4. Active classifier selection for RGB-D object categorization using a Markov random field ensemble method

    NASA Astrophysics Data System (ADS)

    Durner, Maximilian; Márton, Zoltán; Hillenbrand, Ulrich; Ali, Haider; Kleinsteuber, Martin

    2017-03-01

    In this work, a new ensemble method for the task of category recognition in different environments is presented. The focus is on service robotic perception in an open environment, where the robot's task is to recognize previously unseen objects of predefined categories, based on training on a public dataset. We propose an ensemble learning approach to be able to flexibly combine complementary sources of information (different state-of-the-art descriptors computed on color and depth images), based on a Markov Random Field (MRF). By exploiting its specific characteristics, the MRF ensemble method can also be executed as a Dynamic Classifier Selection (DCS) system. In the experiments, the committee- and topology-dependent performance boost of our ensemble is shown. Despite reduced computational costs and using less information, our strategy performs on the same level as common ensemble approaches. Finally, the impact of large differences between datasets is analyzed.

  5. Concrete ensemble Kalman filters with rigorous catastrophic filter divergence

    PubMed Central

    Kelly, David; Majda, Andrew J.; Tong, Xin T.

    2015-01-01

    The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature. PMID:26261335

  6. Concrete ensemble Kalman filters with rigorous catastrophic filter divergence.

    PubMed

    Kelly, David; Majda, Andrew J; Tong, Xin T

    2015-08-25

    The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature.
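
    A minimal sketch of the perturbed-observation EnKF analysis step for a scalar state (illustrative only; operational systems are high-dimensional, use an observation operator, and apply localization and inflation):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, rng):
    """Perturbed-observation EnKF analysis step for a scalar state.
    ensemble: 1-D array of forecast states; obs: scalar observation."""
    prior_var = ensemble.var(ddof=1)
    gain = prior_var / (prior_var + obs_var)            # Kalman gain (H = 1)
    # Each member assimilates its own perturbed copy of the observation
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), ensemble.size)
    return ensemble + gain * (perturbed - ensemble)

rng = np.random.default_rng(1)
forecast = rng.normal(loc=5.0, scale=2.0, size=200)     # spread-out prior
analysis = enkf_update(forecast, obs=3.0, obs_var=0.5, rng=rng)
print(forecast.var(ddof=1), analysis.var(ddof=1))       # spread shrinks
```

    The catastrophic divergence discussed in the abstract arises when the nonlinear forecast model is composed with repeated analysis steps like this one, not from the single linear update itself.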

  7. Discriminant analysis in wildlife research: Theory and applications

    USGS Publications Warehouse

    Williams, B.K.; Capen, D.E.

    1981-01-01

    Discriminant analysis, a method of analyzing grouped multivariate data, is often used in ecological investigations. It has both a predictive and an explanatory function, the former aiming at the classification of individuals of unknown group membership. The goal of the latter function is to exhibit group separation by means of linear transforms, and the corresponding method is called canonical analysis. This discussion focuses on the application of canonical analysis in ecology. In order to clarify its meaning, a parametric approach is taken instead of the usual data-based formulation. Under certain assumptions the data-based canonical variates are shown to result from maximum likelihood estimation, thus ensuring consistency and asymptotic efficiency. The distorting effects of covariance heterogeneity are examined, as are certain difficulties which arise in interpreting the canonical functions. A 'distortion metric' is defined, by means of which distortions resulting from the canonical transformation can be assessed. Several sampling problems which arise in ecological applications are considered. It is concluded that the method may prove valuable for data exploration, but is of limited value as an inferential procedure.

  8. Confidence-based ensemble for GBM brain tumor segmentation

    NASA Astrophysics Data System (ADS)

    Huo, Jing; van Rikxoort, Eva M.; Okada, Kazunori; Kim, Hyun J.; Pope, Whitney; Goldin, Jonathan; Brown, Matthew

    2011-03-01

    It is a challenging task to automatically segment glioblastoma multiforme (GBM) brain tumors on T1w post-contrast isotropic MR images. A semi-automated system using fuzzy connectedness has recently been developed for computing the tumor volume, reducing the cost of manual annotation. In this study, we propose an ensemble method that combines multiple segmentation results into a final one. The method is evaluated on a dataset of 20 cases from a multi-center pharmaceutical drug trial and compared to the fuzzy connectedness method. Three individual methods were used in the framework: fuzzy connectedness, GrowCut, and voxel classification. The combination method is a confidence map averaging (CMA) method. The CMA method shows an improved ROC curve compared to the fuzzy connectedness method (p < 0.001), and the CMA ensemble result is more robust than the three individual methods.
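
    Confidence-map averaging in its generic form can be sketched in a few lines (a schematic illustration on a tiny made-up region; the study's maps come from the three named segmenters):

```python
import numpy as np

# Per-method tumor-confidence maps in [0, 1] for a tiny 3x3 region
fuzzy_conn = np.array([[0.9, 0.8, 0.1],
                       [0.7, 0.9, 0.2],
                       [0.1, 0.3, 0.0]])
growcut    = np.array([[0.8, 0.6, 0.3],
                       [0.9, 0.8, 0.1],
                       [0.2, 0.1, 0.1]])
voxel_clf  = np.array([[1.0, 0.7, 0.2],
                       [0.6, 0.9, 0.4],
                       [0.0, 0.2, 0.1]])

# Confidence map averaging (CMA): mean map, then threshold for the final mask
mean_map = (fuzzy_conn + growcut + voxel_clf) / 3.0
mask = mean_map >= 0.5
print(mask.astype(int))
```

    Averaging suppresses voxels where only one method is confident, which is the source of the robustness reported in the abstract; sweeping the threshold traces out the ROC curve used for evaluation.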

  9. NIMEFI: gene regulatory network inference using multiple ensemble feature importance algorithms.

    PubMed

    Ruyssinck, Joeri; Huynh-Thu, Vân Anh; Geurts, Pierre; Dhaene, Tom; Demeester, Piet; Saeys, Yvan

    2014-01-01

    One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network, in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene, and a high feature importance is considered as putative evidence of a regulatory link existing between both genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As a second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that this approach outperforms all individual methods in general, although on a specific network a single method can perform better. An implementation of NIMEFI has been made publicly available.
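
    The rankwise averaging at the heart of the NIMEFI idea can be sketched generically (hypothetical edge scores; the published implementation differs in scale and tie handling):

```python
def rank_average(score_lists):
    """Rankwise-average several edge-importance score lists (higher score =
    more important); returns one mean rank per edge (lower = stronger)."""
    n = len(score_lists[0])
    total = [0.0] * n
    for scores in score_lists:
        # rank 1 = most important edge under this method
        order = sorted(range(n), key=lambda i: -scores[i])
        for rank, i in enumerate(order, start=1):
            total[i] += rank
    return [t / len(score_lists) for t in total]

# Hypothetical importance scores for 4 candidate regulatory links,
# produced by two different ensemble feature importance methods
method_a = [0.9, 0.2, 0.5, 0.1]
method_b = [0.7, 0.3, 0.8, 0.0]
print(rank_average([method_a, method_b]))  # -> [1.5, 3.0, 1.5, 4.0]
```

    Working on ranks rather than raw scores sidesteps the fact that different importance methods produce scores on incomparable scales.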

  10. Improving the accuracy of flood forecasting with transpositions of ensemble NWP rainfall fields considering orographic effects

    NASA Astrophysics Data System (ADS)

    Yu, Wansik; Nakakita, Eiichi; Kim, Sunmin; Yamaguchi, Kosei

    2016-08-01

    The use of meteorological ensembles to produce sets of hydrological predictions has increased the capability to issue flood warnings. However, the spatial scale of the hydrological domain is still much finer than that of the meteorological model, and NWP models struggle with displacement errors. The main objective of this study is to enhance the transposition method proposed in Yu et al. (2014) and to propose a post-processing ensemble flood forecasting method for real-time updating and accuracy improvement of flood forecasts, which considers the separation of orographic rainfall and the correction of misplaced rain distributions using additional ensemble information obtained through the transposition of rain distributions. In the first step of the proposed method, ensemble forecast rainfalls from a numerical weather prediction (NWP) model are separated into orographic and non-orographic rainfall fields using atmospheric variables and the extraction of the topographic effect. The non-orographic rainfall fields are then passed through the transposition scheme to produce additional ensemble information, and new ensemble NWP rainfall fields are calculated by recombining the transposed non-orographic rain fields with the separated orographic rainfall fields, generating place-corrected ensemble information. This additional ensemble information is then fed into a hydrologic model for post-processed flood forecasting at a 6-h interval. The newly proposed method has a clear advantage in improving the accuracy of the mean of the ensemble flood forecasts. Our study is carried out and verified using the largest flood event, caused by typhoon 'Talas' of 2011, over two catchments, the Futatsuno (356.1 km2) and Nanairo (182.1 km2) dam catchments of the Shingu river basin (2360 km2), located in the Kii peninsula, Japan.

  11. Encoding of Spatial Attention by Primate Prefrontal Cortex Neuronal Ensembles

    PubMed Central

    Treue, Stefan

    2018-01-01

    Single neurons in the primate lateral prefrontal cortex (LPFC) encode information about the allocation of visual attention and the features of visual stimuli. However, how this compares to the performance of neuronal ensembles at encoding the same information is poorly understood. Here, we recorded the responses of neuronal ensembles in the LPFC of two macaque monkeys while they performed a task that required attending to one of two moving random dot patterns positioned in different hemifields and ignoring the other pattern. We found single units selective for the location of the attended stimulus as well as for its motion direction. To determine the coding of both variables in the population of recorded units, we used a linear classifier and progressively built neuronal ensembles by iteratively adding units according to their individual performance (best single units), or by iteratively adding units based on their contribution to the ensemble performance (best ensemble). For both methods, ensembles of relatively small sizes (n < 60) yielded substantially higher decoding performance relative to individual single units. However, the decoder reached similar performance using fewer neurons with the best ensemble building method compared with the best single units method. Our results indicate that neuronal ensembles within the LPFC encode more information about the attended spatial and nonspatial features of visual stimuli than individual neurons. They further suggest that efficient coding of attention can be achieved by relatively small neuronal ensembles characterized by a certain relationship between signal and noise correlation structures. PMID:29568798
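
    The "best ensemble" greedy construction can be illustrated with simulated units (everything below is hypothetical: the units, the noise model, and the sign-of-sum decoder standing in for the study's linear classifier; accuracy is evaluated on the same trials used for selection, so this is a training-set illustration only):

```python
import random

random.seed(7)
n_trials, n_units = 200, 30
stimulus = [random.choice([-1, 1]) for _ in range(n_trials)]
# Simulated unit responses: the stimulus signal plus unit-specific noise
noise = [random.uniform(0.5, 3.0) for _ in range(n_units)]
resp = [[s + random.gauss(0, noise[u]) for s in stimulus]
        for u in range(n_units)]

def accuracy(unit_ids):
    """Decode each trial by the sign of the summed responses of the units."""
    correct = 0
    for t in range(n_trials):
        vote = sum(resp[u][t] for u in unit_ids)
        correct += (vote > 0) == (stimulus[t] > 0)
    return correct / n_trials

# "Best ensemble" building: at each step add whichever remaining unit
# most improves the decoding accuracy of the ensemble as a whole
chosen, pool = [], set(range(n_units))
for _ in range(10):
    best = max(pool, key=lambda u: accuracy(chosen + [u]))
    chosen.append(best)
    pool.remove(best)
print(accuracy(chosen[:1]), accuracy(chosen))
```

    The contrasting "best single units" strategy would instead rank units once by individual accuracy and take the top k, ignoring how their noise correlates within the ensemble.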

  12. Comparison of initial perturbation methods for the mesoscale ensemble prediction system of the Meteorological Research Institute for the WWRP Beijing 2008 Olympics Research and Development Project (B08RDP)

    NASA Astrophysics Data System (ADS)

    Saito, Kazuo; Hara, Masahiro; Kunii, Masaru; Seko, Hiromu; Yamaguchi, Munehiko

    2011-05-01

    Different initial perturbation methods for the mesoscale ensemble prediction were compared by the Meteorological Research Institute (MRI) as a part of the intercomparison of mesoscale ensemble prediction systems (EPSs) of the World Weather Research Programme (WWRP) Beijing 2008 Olympics Research and Development Project (B08RDP). Five initial perturbation methods for mesoscale ensemble prediction were developed for B08RDP and compared at MRI: (1) a downscaling method of the Japan Meteorological Agency (JMA)'s operational one-week EPS (WEP), (2) a targeted global model singular vector (GSV) method, (3) a mesoscale model singular vector (MSV) method based on the adjoint model of the JMA non-hydrostatic model (NHM), (4) a mesoscale breeding growing mode (MBD) method based on the NHM forecast and (5) a local ensemble transform (LET) method based on the local ensemble transform Kalman filter (LETKF) using NHM. These perturbation methods were applied to the preliminary experiments of the B08RDP Tier-1 mesoscale ensemble prediction with a horizontal resolution of 15 km. To make the comparison easier, the same horizontal resolution (40 km) was employed for the three mesoscale model-based initial perturbation methods (MSV, MBD and LET). The GSV method completely outperformed the WEP method, confirming the advantage of targeting in mesoscale EPS. The GSV method generally performed well with regard to root mean square errors of the ensemble mean, large growth rates of ensemble spreads throughout the 36-h forecast period, and high detection rates and high Brier skill scores (BSSs) for weak rains. On the other hand, the mesoscale model-based initial perturbation methods showed good detection rates and BSSs for intense rains. The MSV method showed a rapid growth in the ensemble spread of precipitation up to a forecast time of 6 h, which suggests suitability of the mesoscale SV for short-range EPSs, but the initial large growth of the perturbation did not last long. 
The performance of the MBD method was good for ensemble prediction of intense rain with a relatively small computing cost. The LET method showed similar characteristics to the MBD method, but the spread and growth rate were slightly smaller and the relative operating characteristic area skill score and BSS did not surpass those of MBD. These characteristic features of the five methods were confirmed by checking the evolution of the total energy norms and their growth rates. Characteristics of the initial perturbations obtained by four methods (GSV, MSV, MBD and LET) were examined for the case of a synoptic low-pressure system passing over eastern China. With GSV and MSV, the regions of large spread were near the low-pressure system, but with MSV, the distribution was more concentrated on the mesoscale disturbance. On the other hand, large-spread areas were observed southwest of the disturbance in MBD and LET. The horizontal pattern of LET perturbation was similar to that of MBD, but the amplitude of the LET perturbation reflected the observation density.

  13. Ensemble Data Assimilation Without Ensembles: Methodology and Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume

    2013-01-01

    Two methods to estimate background error covariances for data assimilation are introduced. While both share properties with the ensemble Kalman filter (EnKF), they differ from it in that they do not require the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The first method is referred to as SAFE (Space Adaptive Forecast error Estimation) because it estimates error covariances from the spatial distribution of model variables within a single state vector. It can thus be thought of as sampling an ensemble in space. The second method, named FAST (Flow Adaptive error Statistics from a Time series), constructs an ensemble sampled from a moving window along a model trajectory. The underlying assumption in these methods is that forecast errors in data assimilation are primarily phase errors in space and/or time.
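
    The FAST idea of sampling an ensemble from a moving window along one trajectory reduces, in its simplest form, to a windowed sample covariance (a loose sketch of the concept, not the operational implementation):

```python
import numpy as np

def fast_covariance(trajectory, window):
    """Estimate a background-error covariance from a moving window of model
    states along a single trajectory (FAST-like idea, sketched).
    trajectory: (T, n) array of states; uses the last `window` snapshots."""
    states = trajectory[-window:]
    anomalies = states - states.mean(axis=0)
    return anomalies.T @ anomalies / (window - 1)

rng = np.random.default_rng(3)
# Toy 2-variable trajectory with correlated fluctuations
base = rng.normal(size=(100, 1))
traj = np.hstack([base, 0.8 * base + 0.2 * rng.normal(size=(100, 1))])
P = fast_covariance(traj, window=50)
print(P)   # positive off-diagonal: the cross-covariance is recovered
```

    SAFE is analogous but samples across space within one state vector instead of across time; both rest on the stated assumption that forecast errors are mainly phase errors.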

  14. Multiple-instance ensemble learning for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Ergul, Ugur; Bilgin, Gokhan

    2017-10-01

    An ensemble framework for multiple-instance (MI) learning (MIL) in hyperspectral images (HSIs) is introduced, inspired by the bagging (bootstrap aggregation) method in ensemble learning. Ensemble-based bagging is performed with a small percentage of the training samples, and MI bags are formed by a local windowing process with variable window sizes on the selected instances. In addition to bootstrap aggregation, the random subspace method is also used to diversify the base classifiers. The proposed method is implemented using four MIL classification algorithms. The classifier model learning phase is carried out with MI bags, and the estimation phase is performed over single test instances. In the experimental part of the study, two different HSIs with ground-truth information are used, and comparative results against state-of-the-art classification methods are presented. In general, the MI ensemble approach produces more compact results in terms of both diversity and error compared to equivalent non-MIL algorithms.
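
    Forming MI bags from local windows around selected pixels can be sketched as follows (a generic illustration, not the paper's exact procedure; the image, centers, and window size are made up):

```python
import numpy as np

def make_bags(image, centers, half_window):
    """Form one multiple-instance bag per selected pixel: each bag holds the
    spectral vectors of all pixels inside a square window around the center
    (clipped at the image border)."""
    h, w, _ = image.shape
    bags = []
    for r, c in centers:
        r0, r1 = max(0, r - half_window), min(h, r + half_window + 1)
        c0, c1 = max(0, c - half_window), min(w, c + half_window + 1)
        bags.append(image[r0:r1, c0:c1].reshape(-1, image.shape[2]))
    return bags

rng = np.random.default_rng(2)
hsi = rng.random((10, 10, 5))            # toy 10x10 "HSI" with 5 bands
bags = make_bags(hsi, centers=[(0, 0), (5, 5)], half_window=1)
print([b.shape for b in bags])           # corner bag smaller than interior bag
```

    Varying `half_window` per bag and drawing the centers by bootstrap sampling yields the diversified training bags that the ensemble's base classifiers are trained on.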

  15. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method

    PubMed Central

    Roux, Benoît; Weare, Jonathan

    2013-01-01

    An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
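    The maximum entropy biasing the abstract refers to can be illustrated in its simplest one-constraint form, where a single Lagrange multiplier reweights an unbiased ensemble so that one ensemble-averaged observable matches experiment (a hedged sketch of the multiplier determination; it does not reproduce the restrained-ensemble MD itself):

```python
import numpy as np

def maxent_reweight(obs, target, lam_lo=-10.0, lam_hi=10.0):
    """Maximum-entropy (Jaynes) reweighting: weights w_i proportional to
    exp(-lam * obs_i), with the single Lagrange multiplier lam found by
    bisection so that the weighted average of `obs` equals `target`."""
    def weights(lam):
        e = -lam * obs
        e -= e.max()              # guard against overflow
        w = np.exp(e)
        return w / w.sum()
    for _ in range(200):          # bisection on lam
        lam = 0.5 * (lam_lo + lam_hi)
        avg = (weights(lam) * obs).sum()
        if avg > target:          # <obs> decreases monotonically in lam
            lam_lo = lam
        else:
            lam_hi = lam
    return weights(0.5 * (lam_lo + lam_hi))

rng = np.random.default_rng(2)
obs = rng.normal(3.0, 1.0, size=5000)   # observable over ensemble members
w = maxent_reweight(obs, target=3.5)    # bias the mean from ~3.0 to 3.5
```

With several constraints the multipliers must be solved simultaneously, which is the iterative determination the abstract calls cumbersome.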

  16. Hubbard pair cluster in the external fields. Studies of the magnetic properties

    NASA Astrophysics Data System (ADS)

    Balcerzak, T.; Szałowski, K.

    2018-06-01

    The magnetic properties of the two-site Hubbard cluster (dimer or pair), embedded in external electric and magnetic fields and treated as an open system, are studied by means of exact diagonalization of the Hamiltonian. The formalism of the grand canonical ensemble is adopted. The phase diagrams, on-site magnetizations, spin-spin correlations, mean occupation numbers, and hopping energy are investigated and illustrated in figures. The influence of temperature, mean electron concentration, Coulomb U parameter, and external fields on the quantities of interest is presented and discussed. In particular, anomalous behaviour of the magnetization and correlation function vs. temperature near the critical magnetic field is found. The effect of magnetization switching by the external fields is also demonstrated.
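    Once the Hamiltonian is diagonalized exactly, grand canonical averages follow from Boltzmann weights exp(-(E_n - μN_n)/k_B T) over the many-body eigenstates. A minimal sketch (the three-level toy spectrum below is an assumption for illustration, not the actual Hubbard-pair spectrum from the paper):

```python
import numpy as np

def grand_canonical_averages(E, N, T, mu):
    """Grand-canonical thermal averages from exact many-body eigenvalues
    E_n with particle numbers N_n (k_B = 1):
        Z = sum_n exp(-(E_n - mu*N_n)/T)
    Returns (<N>, <E>)."""
    w = np.exp(-(E - mu * N) / T)
    p = w / w.sum()                 # occupation probability of each state
    return (p * N).sum(), (p * E).sum()

# Toy 3-level "cluster": empty, singly occupied, doubly occupied
E = np.array([0.0, -1.0, 0.5])
N = np.array([0, 1, 2])
n_avg, e_avg = grand_canonical_averages(E, N, T=0.5, mu=0.0)
```

Sweeping `mu` and `T` over the full spectrum is how occupation numbers and magnetizations such as those in the paper's figures would be generated.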

  17. Excellence of numerical differentiation method in calculating the coefficients of high temperature series expansion of the free energy and convergence problem of the expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, S., E-mail: chixiayzsq@yahoo.com; Solana, J. R.

    2014-12-28

    In this paper, it is shown that the numerical differentiation method for performing the coupling parameter series expansion [S. Zhou, J. Chem. Phys. 125, 144518 (2006); AIP Adv. 1, 040703 (2011)] excels at calculating the coefficients a_i of the hard sphere high temperature series expansion (HS-HTSE) of the free energy. Both canonical ensemble and isothermal-isobaric ensemble Monte Carlo simulations for a fluid interacting through a hard sphere attractive Yukawa (HSAY) potential with extremely short ranges and at very low temperatures are performed, and the resulting two sets of thermodynamic data are in excellent agreement with each other and well qualified for assessing convergence of the HS-HTSE for the HSAY fluid. The results of the evaluation are that (i) by referring to the results for a hard sphere square well fluid [S. Zhou, J. Chem. Phys. 139, 124111 (2013)], the existence of a partial sum limit of the high temperature series expansion and consistency between the limit value and the true solution depend on both the potential shape and the temperatures considered; and (ii) for the extremely short range HSAY potential, the HS-HTSE coefficients a_i fall rapidly with the order i, and the HS-HTSE converges from fourth order; however, it does not converge exactly to the true solution at reduced temperatures lower than 0.5, where the difference between the partial sum limit of the HS-HTSE series and the simulation result tends to become more evident. It is worth mentioning that before the convergence order is reached, each truncation is improved by the succeeding one, and the fourth- and higher-order truncations give the most dependable and qualitatively correct thermodynamic results for the HSAY fluid even at reduced temperatures as low as 0.25.

  18. 75 FR 27736 - Availability for Non-Exclusive, Exclusive, or Partially Exclusive Licensing of U.S. Provisional...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-18

    ... Cooling Method for Protective Clothing Ensembles AGENCY: Department of the Army, DoD. ACTION: Notice... Protective Clothing Ensembles,'' filed March 30, 2010. The United States Government, as represented by the... to a two- stage evaporative cooling method for use in protective clothing ensembles. Brenda S. Bowen...

  19. Optimizing inhomogeneous spin ensembles for quantum memory

    NASA Astrophysics Data System (ADS)

    Bensky, Guy; Petrosyan, David; Majer, Johannes; Schmiedmayer, Jörg; Kurizki, Gershon

    2012-07-01

    We propose a method to maximize the fidelity of quantum memory implemented by a spectrally inhomogeneous spin ensemble. The method is based on preselecting the optimal spectral portion of the ensemble by judiciously designed pulses. This leads to significant improvement of the transfer and storage of quantum information encoded in the microwave or optical field.

  20. Ab initio calculation of proton-coupled electron transfer rates using the external-potential representation: A ubiquinol complex in solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, Takeshi; Kato, Shigeki

    2007-06-14

    In quantum-mechanical/molecular-mechanical (QM/MM) treatment of chemical reactions in condensed phases, one solves the electronic Schroedinger equation for the solute (or an active site) under the electrostatic field from the environment. This Schroedinger equation depends parametrically on the solute nuclear coordinates R and the external electrostatic potential V. This fact suggests that one may use R and V as natural collective coordinates for describing the entire system, where V plays the role of collective solvent variables. In this paper such an (R,V) representation of the QM/MM canonical ensemble is described, with particular focus on how to treat charge transfer processes in this representation. As an example, the above method is applied to the proton-coupled electron transfer of a ubiquinol analog with phenoxyl radical in acetonitrile solvent. Ab initio free-energy surfaces are calculated as functions of R and V using the reference interaction site model self-consistent field method, the equilibrium points and the minimum free-energy crossing point are located in the (R,V) space, and then the kinetic isotope effects (KIEs) are evaluated approximately. The results suggest that a stiffer proton potential at the transition state may be responsible for unusual KIEs observed experimentally for related systems.

  1. The Wang-Landau Sampling Algorithm

    NASA Astrophysics Data System (ADS)

    Landau, David P.

    2003-03-01

    Over the past several decades Monte Carlo simulations [1] have evolved into a powerful tool for the study of wide-ranging problems in statistical and condensed matter physics. Standard methods sample the probability distribution for the states of the system, usually in the canonical ensemble, and enormous improvements have been made in performance through the implementation of novel algorithms. Nonetheless, difficulties arise near phase transitions, either due to critical slowing down near 2nd order transitions or to metastability near 1st order transitions, thus limiting the applicability of the method. We shall describe a new and different Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is estimated, all thermodynamic properties can be calculated at all temperatures. This approach can be extended to multi-dimensional parameter spaces and has already found use in classical models of interacting particles, including systems with complex energy landscapes, e.g., spin glasses and protein folding models, as well as for quantum models. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).
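    The core of the Wang-Landau scheme is a random walk in energy space, accepted with probability min(1, g(E_old)/g(E_new)), in which g(E) is multiplied by a modification factor f at every visit and f is reduced toward 1 whenever the visit histogram is flat. It can be sketched on a toy system whose density of states is known exactly (noninteracting spins in a field, an assumption made here so the estimate can be checked against the binomial coefficients):

```python
import numpy as np

def wang_landau(n_spins=10, flatness=0.8, ln_f_final=1e-6, seed=0):
    """Wang-Landau random walk for n_spins noninteracting Ising spins in
    a field (E = -sum_i s_i), where the energy level is indexed by
    k = number of up spins and the exact g(k) is binomial(n_spins, k)."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=n_spins)
    ln_g = np.zeros(n_spins + 1)       # running estimate of ln g(k)
    hist = np.zeros(n_spins + 1)       # visit histogram for flatness check
    ln_f = 1.0                         # ln of the modification factor f
    k = int((spins == 1).sum())
    while ln_f > ln_f_final:
        for _ in range(8000):
            i = rng.integers(n_spins)
            k_new = k - spins[i]       # flipping spin i changes the up-count
            # accept with probability min(1, g(k_old)/g(k_new))
            if np.log(rng.random()) < ln_g[k] - ln_g[k_new]:
                spins[i] = -spins[i]
                k = k_new
            ln_g[k] += ln_f            # update density of states ...
            hist[k] += 1               # ... and histogram at current level
        if hist.min() > flatness * hist.mean():
            hist[:] = 0                # histogram flat: refine f -> sqrt(f)
            ln_f *= 0.5
    # normalize so that the total number of states equals 2**n_spins
    m = ln_g.max()
    ln_g += n_spins * np.log(2.0) - (m + np.log(np.exp(ln_g - m).sum()))
    return ln_g

ln_g_est = wang_landau()
```

Given ln g(E), canonical averages at any temperature follow by reweighting with exp(-E/T), which is the "all thermodynamic properties at all temperatures" step the abstract describes.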

  2. myPresto/omegagene: a GPU-accelerated molecular dynamics simulator tailored for enhanced conformational sampling methods with a non-Ewald electrostatic scheme.

    PubMed

    Kasahara, Kota; Ma, Benson; Goto, Kota; Dasgupta, Bhaskar; Higo, Junichi; Fukuda, Ikuo; Mashimo, Tadaaki; Akiyama, Yutaka; Nakamura, Haruki

    2016-01-01

    Molecular dynamics (MD) is a promising computational approach to investigate the dynamical behavior of molecular systems at the atomic level. Here, we present a new MD simulation engine named "myPresto/omegagene" that is tailored for enhanced conformational sampling methods with a non-Ewald electrostatic potential scheme. Our enhanced conformational sampling methods, e.g., the virtual-system-coupled multi-canonical MD (V-McMD) method, replace a multi-process parallelized run with multiple independent runs to avoid inter-node communication overhead. In addition, adopting the non-Ewald-based zero-multipole summation method (ZMM) makes it possible to eliminate the Fourier-space calculations altogether. The combination of these state-of-the-art techniques enables efficient and accurate calculation of the conformational ensemble at an equilibrium state. By taking advantage of these features, myPresto/omegagene is specialized for single-process execution on a graphics processing unit (GPU). We performed benchmark simulations for the 20-mer peptide Trp-cage with explicit solvent. One of the most thermodynamically stable conformations generated by the V-McMD simulation is very similar to an experimentally solved native conformation. Furthermore, the computation speed is four times faster than that of our previous simulation engine, myPresto/psygene-G. The new simulator, myPresto/omegagene, is freely available at the following URLs: http://www.protein.osaka-u.ac.jp/rcsfp/pi/omegagene/ and http://presto.protein.osaka-u.ac.jp/myPresto4/.

  3. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE PAGES

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; ...

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
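    The lognormal fitting step mentioned for the aerosol size distributions can be sketched by matching moments in log-diameter space (a generic illustration under assumed synthetic data; the campaign's actual fitting procedure is not specified here):

```python
import numpy as np

def fit_lognormal_mode(d, dNdlogd):
    """Fit a single lognormal mode to a measured number size distribution
    by matching moments in log-diameter space. Returns the total number
    concentration N, geometric mean diameter d_g, and geometric standard
    deviation sigma_g."""
    x = np.log(d)
    w = dNdlogd / dNdlogd.sum()        # normalized weights on the log grid
    mu = (w * x).sum()                 # mean of log-diameter
    var = (w * (x - mu) ** 2).sum()    # variance of log-diameter
    N = dNdlogd.sum() * (x[1] - x[0])  # crude integral over log-diameter
    return N, np.exp(mu), np.exp(np.sqrt(var))

# Synthetic single-mode distribution: d_g = 0.1 um, sigma_g = 1.6
d = np.logspace(-2, 0, 60)             # diameters, um
dist = 500 * np.exp(-(np.log(d) - np.log(0.1)) ** 2 / (2 * np.log(1.6) ** 2))
N, dg, sg = fit_lognormal_mode(d, dist)
```

The three fitted numbers per mode are the "concise representation" that replaces the full measured spectrum in the models.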

  4. RACORO Continental Boundary Layer Cloud Investigations: 1. Case Study Development and Ensemble Large-Scale Forcings

    NASA Technical Reports Server (NTRS)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; hide

    2015-01-01

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, kappa, are derived from observations to be approximately 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. 
The cases developed are available to the general modeling community for studying continental boundary clouds.

  5. RACORO continental boundary layer cloud investigations: 1. Case study development and ensemble large-scale forcings

    NASA Astrophysics Data System (ADS)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-01

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  6. Ensemble Generation and the Influence of Protein Flexibility on Geometric Tunnel Prediction in Cytochrome P450 Enzymes

    PubMed Central

    Kingsley, Laura J.; Lill, Markus A.

    2014-01-01

    Computational prediction of ligand entry and egress paths in proteins has become an emerging topic in computational biology and has proven useful in fields such as protein engineering and drug design. Geometric tunnel prediction programs, such as Caver3.0 and MolAxis, are computationally efficient methods to identify potential ligand entry and egress routes in proteins. Although many geometric tunnel programs are designed to accommodate a single input structure, the increasingly recognized importance of protein flexibility in tunnel formation and behavior has led to the more widespread use of protein ensembles in tunnel prediction. However, there has not yet been an attempt to directly investigate the influence of ensemble size and composition on geometric tunnel prediction. In this study, we compared tunnels found in a single crystal structure to ensembles of various sizes generated using different methods on both the apo and holo forms of cytochrome P450 enzymes CYP119, CYP2C9, and CYP3A4. Several protein structure clustering methods were tested in an attempt to generate smaller ensembles that were capable of reproducing the data from larger ensembles. Ultimately, we found that by including members from both the apo and holo data sets, we could produce ensembles containing less than 15 members that were comparable to apo or holo ensembles containing over 100 members. Furthermore, we found that, in the absence of either apo or holo crystal structure data, pseudo-apo or –holo ensembles (e.g. adding ligand to apo protein throughout MD simulations) could be used to resemble the structural ensembles of the corresponding apo and holo ensembles, respectively. Our findings not only further highlight the importance of including protein flexibility in geometric tunnel prediction, but also suggest that smaller ensembles can be as capable as larger ensembles at capturing many of the protein motions important for tunnel prediction at a lower computational cost. 
PMID:24956479

  7. The total probabilities from high-resolution ensemble forecasting of floods

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2015-04-01

    Ensemble forecasting has long been used in meteorological modelling to give an indication of the uncertainty of the forecasts. As meteorological ensemble forecasts often show bias and dispersion errors, there is a need for calibration and post-processing of the ensembles. Typical methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). To make optimal predictions of floods along the stream network in hydrology, we can easily use the ensemble members as input to the hydrological models. However, some of the post-processing methods will need modifications when regionalizing the forecasts outside the calibration locations, as done by Hemri et al. (2013). We present a method for spatial regionalization of the post-processed forecasts based on EMOS and top-kriging (Skøien et al., 2006). We also look into different methods for handling the non-normality of runoff and the effect on forecast skills in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005. Skøien, J. O., Merz, R. and Blöschl, G.: Top-kriging - Geostatistics on stream networks, Hydrol. Earth Syst. Sci., 10(2), 277-287, 2006.
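    The EMOS step can be illustrated with a simplified Gaussian fit, N(a + b·m, s²) with m the ensemble mean. Note the hedge: Gneiting et al. estimate the coefficients, including a variance model, by minimum-CRPS estimation, whereas this sketch uses a plain moment-based fit:

```python
import numpy as np

def emos_fit(ens, obs):
    """Simplified EMOS-style post-processing: predictive distribution
    N(a + b * ensmean, s2), with a, b from least squares on training
    data and s2 from the training residuals (a moment-based stand-in
    for the minimum-CRPS estimation of Gneiting et al., 2005)."""
    m = ens.mean(axis=1)                           # ensemble means
    A = np.column_stack([np.ones_like(m), m])
    (a, b), *_ = np.linalg.lstsq(A, obs, rcond=None)
    resid = obs - (a + b * m)
    return a, b, resid.var()

# Synthetic training set: biased (+1.5), under-dispersive 8-member ensemble
rng = np.random.default_rng(3)
truth = rng.normal(10, 3, size=500)
ens = truth[:, None] + 1.5 + rng.normal(0, 0.5, size=(500, 8))
a, b, s2 = emos_fit(ens, truth)
```

The fitted intercept absorbs the ensemble bias, and the residual variance replaces the too-small raw ensemble spread; a full EMOS would additionally let s² depend on the ensemble variance.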

  8. A Hyper-Heuristic Ensemble Method for Static Job-Shop Scheduling.

    PubMed

    Hart, Emma; Sim, Kevin

    2016-01-01

    We describe a new hyper-heuristic method NELLI-GP for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.

  9. Different realizations of Cooper-Frye sampling with conservation laws

    NASA Astrophysics Data System (ADS)

    Schwarz, C.; Oliinychenko, D.; Pang, L.-G.; Ryu, S.; Petersen, H.

    2018-01-01

    Approaches based on viscous hydrodynamics for the hot and dense stage and hadronic transport for the final dilute rescattering stage are successfully applied to the dynamic description of heavy ion reactions at high beam energies. One crucial step in such hybrid approaches is the so-called particlization, which is the transition between the hydrodynamic description and the microscopic degrees of freedom. For this purpose, individual particles are sampled on the Cooper-Frye hypersurface. In this work, four different realizations of the sampling algorithms are compared, with three of them incorporating the global conservation laws of quantum numbers in each event. The algorithms are compared within two types of scenarios: a simple 'box' hypersurface consisting of only one static cell and a typical particlization hypersurface for Au+Au collisions at √(s_NN) = 200 GeV. For all algorithms the mean multiplicities (or particle spectra) remain unaffected by global conservation laws in the case of large volumes. In contrast, the fluctuations of the particle numbers are affected considerably. The fluctuations of the newly developed SPREW algorithm based on the exponential weight, and the recently suggested SER algorithm based on ensemble rejection, are smaller than those without conservation laws and agree with the expectation from the canonical ensemble. The previously applied mode sampling algorithm produces dramatically larger fluctuations than expected in the corresponding microcanonical ensemble, and therefore should be avoided in fluctuation studies. This study might be of interest for the investigation of particle fluctuations and correlations, e.g. the suggested signatures for a phase transition or a critical endpoint, in hybrid approaches that are affected by global conservation laws.
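    The effect of a global conservation law on multiplicity fluctuations can be demonstrated with a minimal rejection sampler: draw grand-canonical (Poisson) multiplicities and keep only events with the required net charge. This is a toy analogue of conservation-by-rejection, not any of the specific algorithms compared in the paper:

```python
import numpy as np

def sample_with_conservation(mean_pos, mean_neg, net_charge, n_events, seed=0):
    """Sample event-by-event multiplicities of charge +1 and -1 particles
    from independent Poisson distributions, keeping only events whose net
    charge matches exactly (global conservation enforced by rejection).
    Returns the accepted positive-particle multiplicities."""
    rng = np.random.default_rng(seed)
    accepted = []
    while len(accepted) < n_events:
        npos = rng.poisson(mean_pos)
        nneg = rng.poisson(mean_neg)
        if npos - nneg == net_charge:
            accepted.append(npos)
    return np.array(accepted)

unconstrained = np.random.default_rng(1).poisson(20, size=4000)
constrained = sample_with_conservation(20, 20, 0, 4000)
var_ratio = constrained.var() / unconstrained.var()
```

As in the abstract, the mean multiplicity is essentially unchanged by the constraint, while the variance is suppressed (here to roughly half the Poisson value, the canonical-ensemble expectation for this symmetric toy case).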

  10. Ensemble Deep Learning for Biomedical Time Series Classification

    PubMed Central

    2016-01-01

    Ensemble learning has been shown, in both theory and practice, to improve generalization ability effectively. In this paper, we first briefly outline the current status of research on the topic. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to well-known ensemble methods such as Bagging and AdaBoost. PMID:27725828

  11. Clinical Trials With Large Numbers of Variables: Important Advantages of Canonical Analysis.

    PubMed

    Cleophas, Ton J

    2016-01-01

    Canonical analysis assesses the combined effects of a set of predictor variables on a set of outcome variables, but it is little used in clinical trials despite the omnipresence of multiple variables. The aim of this study was to assess the performance of canonical analysis as compared with traditional multivariate methods using multivariate analysis of covariance (MANCOVA). As an example, a simulated data file with 12 gene expression levels and 4 drug efficacy scores was used. The correlation coefficient between the 12 predictor and 4 outcome variables was 0.87 (P = 0.0001), meaning that 76% of the variability in the outcome variables was explained by the 12 covariates. Repeated testing after the removal of 5 unimportant predictor variables and 1 outcome variable produced virtually the same overall result. The MANCOVA identified identical unimportant variables, but it was unable to provide overall statistics. (1) Canonical analysis is remarkable because it can handle many more variables than traditional multivariate methods such as MANCOVA can. (2) At the same time, it accounts for the relative importance of the separate variables, their interactions, and differences in units. (3) Canonical analysis provides overall statistics of the effects of sets of variables, whereas traditional multivariate methods only provide the statistics of the separate variables. (4) Unlike other methods for combining the effects of multiple variables, such as factor analysis/partial least squares, canonical analysis is scientifically entirely rigorous. (5) Limitations include that it is less flexible than factor analysis/partial least squares, because only 2 sets of variables are used and because multiple solutions instead of one are offered. We hope that this article will stimulate clinical investigators to start using this remarkable method.
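    A generic canonical correlation analysis of the kind described can be sketched in a few lines: whiten each variable set and take the singular values of the cross-covariance. The simulated "12 gene expression levels and 4 efficacy scores" below are stand-in data sharing one latent factor, not the study's actual file:

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-8):
    """Canonical correlations between predictor set X (n x p) and outcome
    set Y (n x q): the singular values of Sxx^{-1/2} Sxy Syy^{-1/2}.
    A small ridge `reg` keeps the whitening numerically stable."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n

    def inv_sqrt(S):                      # symmetric inverse square root
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)

# 12 "gene expression" predictors and 4 "efficacy" outcomes sharing
# one latent factor z, plus independent noise
rng = np.random.default_rng(4)
z = rng.normal(size=(300, 1))
X = z @ rng.normal(size=(1, 12)) + 0.5 * rng.normal(size=(300, 12))
Y = z @ rng.normal(size=(1, 4)) + 0.5 * rng.normal(size=(300, 4))
rho = canonical_correlations(X, Y)
```

The first canonical correlation captures the shared factor and is large; the remaining ones hover near chance level, which is the kind of "overall statistics" for sets of variables that the abstract contrasts with MANCOVA.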

  12. On the incidence of meteorological and hydrological processors: Effect of resolution, sharpness and reliability of hydrological ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Abaza, Mabrouk; Anctil, François; Fortin, Vincent; Perreault, Luc

    2017-12-01

    Meteorological and hydrological ensemble prediction systems are imperfect. Their outputs could often be improved through the use of a statistical processor, opening up the question of the necessity of using both processors (meteorological and hydrological), only one of them, or none. This experiment compares the predictive distributions from four hydrological ensemble prediction systems (H-EPS) utilising the Ensemble Kalman filter (EnKF) probabilistic sequential data assimilation scheme. They differ in the inclusion or not of the Distribution Based Scaling (DBS) method for post-processing meteorological forecasts and the ensemble Bayesian Model Averaging (ensemble BMA) method for hydrological forecast post-processing. The experiment is implemented on three large watersheds and relies on the combination of two meteorological reforecast products: the 4-member Canadian reforecasts from the Canadian Centre for Meteorological and Environmental Prediction (CCMEP) and the 10-member American reforecasts from the National Oceanic and Atmospheric Administration (NOAA), leading to 14 members at each time step. Results show that all four tested H-EPS lead to resolution and sharpness values that are quite similar, with an advantage to DBS + EnKF. The ensemble BMA is unable to compensate for any bias left in the precipitation ensemble forecasts. On the other hand, it succeeds in calibrating ensemble members that are otherwise under-dispersed. If reliability is preferred over resolution and sharpness, DBS + EnKF + ensemble BMA performs best, making use of both processors in the H-EPS system. Conversely, for enhanced resolution and sharpness, DBS is the preferred method.

  13. Bayesian ensemble refinement by replica simulations and reweighting.

    PubMed

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-28

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.

  14. Bayesian ensemble refinement by replica simulations and reweighting

    NASA Astrophysics Data System (ADS)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.

  15. From benchmarking HITS-CLIP peak detection programs to a new method for identification of miRNA-binding sites from Ago2-CLIP data.

    PubMed

    Bottini, Silvia; Hamouda-Tekaya, Nedra; Tanasa, Bogdan; Zaragosi, Laure-Emmanuelle; Grandjean, Valerie; Repetto, Emanuela; Trabucchi, Michele

    2017-05-19

    Experimental evidence indicates that about 60% of miRNA-binding activity does not follow the canonical rule of seed matching between miRNA and target mRNAs, but instead reflects non-canonical miRNA targeting activity outside the seed or with seed-like motifs. Here, we propose a new unbiased method to identify canonical and non-canonical miRNA-binding sites from peaks identified by Ago2 Cross-Linked ImmunoPrecipitation associated with high-throughput sequencing (CLIP-seq). Since the quality of peaks is of pivotal importance for the final output of the proposed method, we provide a comprehensive benchmarking of four peak detection programs, namely CIMS, PIPE-CLIP, Piranha and Pyicoclip, on four publicly available Ago2-HITS-CLIP datasets and one unpublished in-house Ago2 dataset in stem cells. We measured the sensitivity, the specificity and the positional accuracy of miRNA-binding-site identification, and the agreement with TargetScan. Secondly, we developed a new pipeline, called miRBShunter, to identify canonical and non-canonical miRNA-binding sites based on de novo motif identification from Ago2 peaks and prediction of miRNA::RNA heteroduplexes. miRBShunter was tested and experimentally validated on the in-house Ago2 dataset and on an Ago2-PAR-CLIP dataset in human stem cells. Overall, we provide guidelines for choosing a suitable peak detection program and a new method for miRNA-target identification. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. From benchmarking HITS-CLIP peak detection programs to a new method for identification of miRNA-binding sites from Ago2-CLIP data

    PubMed Central

    Bottini, Silvia; Hamouda-Tekaya, Nedra; Tanasa, Bogdan; Zaragosi, Laure-Emmanuelle; Grandjean, Valerie; Repetto, Emanuela

    2017-01-01

    Abstract Experimental evidence indicates that about 60% of miRNA-binding activity does not follow the canonical rule of seed matching between miRNA and target mRNAs, but instead reflects non-canonical miRNA targeting activity outside the seed or with seed-like motifs. Here, we propose a new unbiased method to identify canonical and non-canonical miRNA-binding sites from peaks identified by Ago2 Cross-Linked ImmunoPrecipitation associated with high-throughput sequencing (CLIP-seq). Since the quality of peaks is of pivotal importance for the final output of the proposed method, we provide a comprehensive benchmarking of four peak detection programs, namely CIMS, PIPE-CLIP, Piranha and Pyicoclip, on four publicly available Ago2-HITS-CLIP datasets and one unpublished in-house Ago2 dataset in stem cells. We measured the sensitivity, the specificity and the positional accuracy of miRNA-binding-site identification, and the agreement with TargetScan. Secondly, we developed a new pipeline, called miRBShunter, to identify canonical and non-canonical miRNA-binding sites based on de novo motif identification from Ago2 peaks and prediction of miRNA::RNA heteroduplexes. miRBShunter was tested and experimentally validated on the in-house Ago2 dataset and on an Ago2-PAR-CLIP dataset in human stem cells. Overall, we provide guidelines for choosing a suitable peak detection program and a new method for miRNA-target identification. PMID:28108660

  17. Impacts of calibration strategies and ensemble methods on ensemble flood forecasting over Lanjiang basin, Southeast China

    NASA Astrophysics Data System (ADS)

    Liu, Li; Xu, Yue-Ping

    2017-04-01

    Ensemble flood forecasting driven by numerical weather prediction products is becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system based on the Variable Infiltration Capacity (VIC) model and quantitative precipitation forecasts from the TIGGE dataset is constructed for the Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by a parallel-programmed ɛ-NSGAII multi-objective algorithm, and two separately parameterized models are determined to simulate daily flows and peak flows, coupled with a modular approach. The results indicate that the ɛ-NSGAII algorithm permits more efficient optimization and a rational determination of parameter settings. It is demonstrated that the multimodel ensemble streamflow mean has better skill than the best single-model ensemble mean (ECMWF), and that multimodel ensembles weighted on members and skill scores outperform the other multimodel ensembles. For a typical flood event, the flood can be predicted 3-4 days in advance, but the flows in the rising limb can be captured only 1-2 days ahead owing to their flashy nature. With respect to peak flows selected by the Peaks Over Threshold approach, the ensemble means from either a single model or multiple models are generally underestimated, as the extreme values are smoothed out by the ensemble process.
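    The skill-weighted multimodel combination mentioned above reduces, in its simplest form, to a weighted mean with weights proportional to each model's skill score. A minimal sketch (the forecast values and skill scores are invented for illustration):

```python
def weighted_ensemble_mean(forecasts, skills):
    """Combine model forecasts with weights proportional to skill scores."""
    total = sum(skills)
    weights = [s / total for s in skills]
    return [sum(w * f[t] for w, f in zip(weights, forecasts))
            for t in range(len(forecasts[0]))]

# Three models forecasting streamflow over four lead times (made-up numbers),
# with made-up normalized skill scores.
forecasts = [[100.0, 120.0, 150.0, 130.0],
             [ 90.0, 110.0, 160.0, 140.0],
             [110.0, 130.0, 140.0, 120.0]]
skills = [0.5, 0.3, 0.2]
blend = weighted_ensemble_mean(forecasts, skills)
```

    The study's weighting on both members and skill scores is richer than this, but the blended forecast is still a convex combination of the member forecasts.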

  18. Constructing better classifier ensemble based on weighted accuracy and diversity measure.

    PubMed

    Zeng, Xiaodong; Wong, Derek F; Chao, Lidia S

    2014-01-01

    A weighted accuracy and diversity (WAD) method is presented: a novel measure for evaluating the quality of a classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis, namely that a robust classifier ensemble should not only be accurate but also have members that differ from one another. In fact, accuracy and diversity are mutually constraining factors: an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble on unknown data. The quality assessment for an ensemble is performed such that the final score is obtained by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and to two threshold measures that consider only accuracy or diversity, using two heuristic search algorithms, a genetic algorithm and a forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases.
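    The final WAD score, a harmonic mean of accuracy and diversity balanced by two weight parameters, can be sketched as follows. This is the standard weighted harmonic mean form, analogous to the F-beta measure; the paper's exact parameterization may differ:

```python
def wad(accuracy, diversity, w_acc=0.5, w_div=0.5):
    """Weighted harmonic mean of ensemble accuracy and diversity,
    both assumed to lie in (0, 1]."""
    return (w_acc + w_div) / (w_acc / accuracy + w_div / diversity)

balanced = wad(0.8, 0.8)   # equal inputs give back the same value
lopsided = wad(0.9, 0.1)   # the harmonic mean punishes the weak factor
```

    The harmonic mean drops sharply when either factor is small, which is exactly the property that penalizes accurate-but-homogeneous ensembles during selection.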

  19. Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure

    PubMed Central

    Chao, Lidia S.

    2014-01-01

    A weighted accuracy and diversity (WAD) method is presented: a novel measure for evaluating the quality of a classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis, namely that a robust classifier ensemble should not only be accurate but also have members that differ from one another. In fact, accuracy and diversity are mutually constraining factors: an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble on unknown data. The quality assessment for an ensemble is performed such that the final score is obtained by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and to two threshold measures that consider only accuracy or diversity, using two heuristic search algorithms, a genetic algorithm and a forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases. PMID:24672402

  20. Late onset canonical babbling: a possible early marker of abnormal development.

    PubMed

    Oller, D K; Eilers, R E; Neal, A R; Cobo-Lewis, A B

    1998-11-01

    By their 10th month of life, typically developing infants produce canonical babbling, which includes the well-formed syllables required for meaningful speech. Research suggests that emerging speech- or language-related disorders might be associated with late onset of canonical babbling. Onset of canonical babbling was investigated for 1,536 high-risk infants at about 10 months corrected age. Parental report by open-ended questionnaire was found to be an efficient method for ascertaining babbling status. Although delays were infrequent, they were often associated with genetic, neurological, anatomical, and/or physiological abnormalities. Over half the cases of late canonical babbling were not, at the time they were discovered, associated with prior significant medical diagnoses. Late canonical-babbling onset may be a predictor of later developmental disabilities, including problems in speech, language, and reading.

  1. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called the grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, the proposed model improves the prediction accuracy and the hit rates of directional prediction. The proposed model involves three main steps: decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) to simplify the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; and integrating all predicted IMFs into the final ensemble prediction with another GWO-optimized SVR. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods, and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models in terms of prediction accuracy and hit rates of directional prediction.
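    The three-step "decomposition and ensemble" principle can be illustrated with deliberately simplified stand-ins: a moving-average trend/residual split in place of CEEMD, and a naive persistence forecast in place of the GWO-tuned SVR predictors:

```python
def decompose(series, window=3):
    """Split a series into a moving-average trend and a residual component
    (a stand-in for CEEMD's intrinsic mode functions)."""
    half = window // 2
    trend = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    residual = [x - t for x, t in zip(series, trend)]
    return [trend, residual]

def forecast_component(component):
    """Naive persistence forecast (a stand-in for a GWO-optimized SVR)."""
    return component[-1]

def decompose_and_ensemble(series):
    """Decompose, forecast each component, and sum the component forecasts."""
    components = decompose(series)
    return sum(forecast_component(c) for c in components)
```

    Because the components sum exactly to the original series and persistence forecasts each component's last value, the recombined forecast here simply equals the last observation; the real model replaces both stand-ins with CEEMD and trained SVRs, so the recombination step is where the predictive gain appears.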

  2. CarcinoPred-EL: Novel models for predicting the carcinogenicity of chemicals using molecular fingerprints and ensemble learning methods.

    PubMed

    Zhang, Li; Ai, Haixin; Chen, Wen; Yin, Zimo; Hu, Huan; Zhu, Junfeng; Zhao, Jian; Zhao, Qi; Liu, Hongsheng

    2017-05-18

    Carcinogenicity refers to a highly toxic end point of certain chemicals, and has become an important issue in the drug development process. In this study, three novel ensemble classification models, namely Ensemble SVM, Ensemble RF, and Ensemble XGBoost, were developed to predict carcinogenicity of chemicals using seven types of molecular fingerprints and three machine learning methods based on a dataset containing 1003 diverse compounds with rat carcinogenicity. Among these three models, Ensemble XGBoost is found to be the best, giving an average accuracy of 70.1 ± 2.9%, sensitivity of 67.0 ± 5.0%, and specificity of 73.1 ± 4.4% in five-fold cross-validation and an accuracy of 70.0%, sensitivity of 65.2%, and specificity of 76.5% in external validation. In comparison with some recent methods, the ensemble models outperform some machine learning-based approaches and yield equal accuracy and higher specificity but lower sensitivity than rule-based expert systems. It is also found that the ensemble models could be further improved if more data were available. As an application, the ensemble models are employed to discover potential carcinogens in the DrugBank database. The results indicate that the proposed models are helpful in predicting the carcinogenicity of chemicals. A web server called CarcinoPred-EL has been built for these models ( http://ccsipb.lnu.edu.cn/toxicity/CarcinoPred-EL/ ).

  3. Image Change Detection via Ensemble Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Benjamin W; Vatsavai, Raju

    2013-01-01

    The concept of geographic change detection is relevant in many areas. Changes in geography can reveal much information about a particular location. For example, analysis of changes in geography can identify regions of population growth, change in land use, and potential environmental disturbance. A common way to perform change detection is to use a simple method such as differencing to detect regions of change. Though these techniques are simple, their application is often very limited. Recently, the use of machine learning methods such as neural networks for change detection has been explored with great success. In this work, we explore the use of ensemble learning methodologies for detecting changes in bitemporal synthetic aperture radar (SAR) images. Ensemble learning uses a collection of weak machine learning classifiers to create a stronger classifier with higher accuracy than the individual classifiers in the ensemble. The strength of the ensemble lies in the fact that the individual classifiers form a mixture of experts in which the final classification made by the ensemble classifier is calculated from the outputs of the individual classifiers. Our methodology leverages this aspect of ensemble learning by training collections of weak decision-tree-based classifiers to identify regions of change in SAR images collected over a region in the Staten Island, New York area during Hurricane Sandy. Preliminary studies show that the ensemble method has approximately 11.5% higher change detection accuracy than an individual classifier.
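    The "mixture of experts" combination described above is, at its simplest, a majority vote over the individual classifiers' outputs. A minimal sketch with stub classifiers (real ones would be trained decision trees; the thresholds here are invented):

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Ensemble prediction: the label most weak classifiers agree on."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Stub "weak classifiers" labeling a bitemporal pixel pair as changed or
# unchanged by thresholding the absolute difference of the two intensities.
weak = [lambda x, t=t: "change" if abs(x[0] - x[1]) > t else "no_change"
        for t in (0.2, 0.3, 0.4)]

label = majority_vote(weak, (0.9, 0.55))
```

    Boosting-style ensembles weight the votes by each member's training performance instead of counting them equally, but the aggregation structure is the same.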

  4. Myotonic Dystrophy Type 1 RNA Crystal Structures Reveal Heterogeneous 1×1 Nucleotide UU Internal Loop Conformations⊥

    PubMed Central

    Kumar, Amit; Park, HaJeung; Fang, Pengfei; Parkesh, Raman; Guo, Min; Nettles, Kendall W.; Disney, Matthew D.

    2011-01-01

    RNA internal loops often display a variety of conformations in solution. Herein, we visualize conformational heterogeneity in the context of the 5′CUG/3′GUC repeat motif present in the RNA that causes myotonic dystrophy type 1 (DM1). Specifically, two crystal structures are disclosed of a model DM1 triplet repeating construct, 5′r(UUGGGC(CUG)3GUCC)2, refined to 2.20 Å and 1.52 Å resolution. Here, differences in the orientation of the 5′ dangling UU end between the two structures induce changes in the backbone groove width, which reveals that non-canonical 1×1 nucleotide UU internal loops can display an ensemble of pairing conformations. In the 2.20 Å structure, CUGa, the 5′UU forms a one-hydrogen-bonded pair with the 5′UU of a neighboring helix in the unit cell to form a pseudo-infinite helix. The central 1×1 nucleotide UU internal loop has no hydrogen bonds, while the terminal 1×1 nucleotide UU internal loops each form a one-hydrogen-bonded pair. In the 1.52 Å structure, CUGb, the 5′UU dangling end is tucked into the major groove of the duplex. While the canonically paired bases show no change in base pairing, in CUGb the terminal 1×1 nucleotide UU internal loops now form two-hydrogen-bonded pairs. Thus, the shift in the major groove induced by the 5′UU dangling end alters non-canonical base-pairing patterns. Collectively, these structures indicate that 1×1 nucleotide UU internal loops in DM1 may sample multiple conformations in vivo. This observation has implications for the recognition of this RNA, and other repeating transcripts, by protein and small molecule ligands. PMID:21988728

  5. Using Grand Canonical Monte Carlo Simulations to Understand the Role of Interfacial Fluctuations on Solvation at the Water-Vapor Interface.

    PubMed

    Rane, Kaustubh; van der Vegt, Nico F A

    2016-09-15

    The present work investigates the effect of interfacial fluctuations (predominantly capillary wave-like fluctuations) on the solvation free energy (Δμ) of a monatomic solute at the water-vapor interface. We introduce a grand-canonical-ensemble-based simulation approach that quantifies the contribution of interfacial fluctuations to Δμ. This approach is used to understand how the above contribution depends on the strength of dispersive and electrostatic solute-water interactions at the temperature of 400 K. At this temperature, we observe that interfacial fluctuations do play a role in the variation of Δμ with the strength of the electrostatic solute-water interaction. We also use grand canonical simulations to further investigate how interfacial fluctuations affect the propensity of the solute toward the water-vapor interface. To this end, we track a quantity called the interface potential (surface excess free energy) with the number of water molecules. With increasing number of water molecules, the liquid-vapor interface moves across a solute, which is kept at a fixed position in the simulation. Hence, the dependence of the interface potential on the number of waters models the process of moving the solute through the water-vapor interface. We analyze the change of the interface potential with the number of water molecules to explain that solute-induced changes in the interfacial fluctuations, like the pinning of capillary-wave-like undulations, do not play any role in the propensity of solutes toward water-vapor interfaces. The above analysis also shows that the dampening of interfacial fluctuations accompanies the adsorption of any solute at the liquid-vapor interface, irrespective of the chemical nature of the solute and solvent. However, such a correlation does not imply that dampening of fluctuations causes adsorption.
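    The grand-canonical machinery referred to above alternates random particle insertions and deletions, accepted with probabilities fixed by the activity z. For a non-interacting (ideal) gas the method is exactly checkable, since the particle number is Poisson-distributed with mean zV; a minimal sketch:

```python
import random

def gcmc_ideal_gas(zV, steps, seed=0):
    """Grand canonical Monte Carlo for an ideal gas.
    Insertion accepted with prob min(1, zV/(N+1)); deletion with
    min(1, N/zV). The running mean of N should converge to z*V."""
    rng = random.Random(seed)
    n = 0
    total = 0
    for _ in range(steps):
        if rng.random() < 0.5:                           # attempt insertion
            if rng.random() < min(1.0, zV / (n + 1)):
                n += 1
        elif n > 0 and rng.random() < min(1.0, n / zV):  # attempt deletion
            n -= 1
        total += n
    return total / steps

mean_n = gcmc_ideal_gas(zV=5.0, steps=200_000)
```

    A real water simulation multiplies both acceptance probabilities by the Boltzmann factor of the interaction-energy change; the structure of the move loop is unchanged.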

  6. EnsembleGraph: Interactive Visual Analysis of Spatial-Temporal Behavior for Ensemble Simulation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Qingya; Guo, Hanqi; Che, Limei

    We present a novel visualization framework, EnsembleGraph, for analyzing ensemble simulation data, in order to help scientists understand behavior similarities between ensemble members over space and time. A graph-based representation is used to visualize individual spatiotemporal regions with similar behaviors, which are extracted by hierarchical clustering algorithms. A user interface with multiple linked views is provided, which enables users to explore, locate, and compare regions that have similar behaviors between ensemble members, and then investigate and analyze the selected regions in detail. The driving application of this paper is the study of regional emission influences on tropospheric ozone, based on ensemble simulations conducted with different anthropogenic emission absences using the MOZART-4 (model of ozone and related tracers, version 4) model. We demonstrate the effectiveness of our method by visualizing the MOZART-4 ensemble simulation data and evaluating the relative regional emission influences on tropospheric ozone concentrations. Positive feedback from domain experts and two case studies demonstrate the efficiency of our method.

  7. NIMEFI: Gene Regulatory Network Inference using Multiple Ensemble Feature Importance Algorithms

    PubMed Central

    Ruyssinck, Joeri; Huynh-Thu, Vân Anh; Geurts, Pierre; Dhaene, Tom; Demeester, Piet; Saeys, Yvan

    2014-01-01

    One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene and a high feature importance is considered as putative evidence of a regulatory link existing between both genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that this approach outperforms all individual methods in general, although on a specific network a single method can perform better. An implementation of NIMEFI has been made publicly available. PMID:24667482
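    The rankwise averaging step of NIMEFI can be sketched directly: convert each ensemble algorithm's importance scores into ranks, then average the ranks per candidate regulator. The gene names and scores below are invented for illustration:

```python
def rank(scores):
    """Rank items by score, highest first (rank 1 = most important)."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {gene: i + 1 for i, gene in enumerate(order)}

def rankwise_average(score_sets):
    """Average the per-algorithm ranks for each candidate regulator."""
    ranked = [rank(s) for s in score_sets]
    genes = ranked[0].keys()
    return {g: sum(r[g] for r in ranked) / len(ranked) for g in genes}

# Importance scores for three candidate regulators from two ensemble methods.
svr_scores = {"geneA": 0.9, "geneB": 0.4, "geneC": 0.1}
rf_scores  = {"geneA": 0.7, "geneB": 0.8, "geneC": 0.2}
avg_ranks = rankwise_average([svr_scores, rf_scores])
```

    Working in rank space sidesteps the fact that different algorithms produce importance scores on incomparable scales, which is why averaging the ranks rather than the raw scores is the sensible combination rule here.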

  8. Developing an approach to effectively use super ensemble experiments for the projection of hydrological extremes under climate change

    NASA Astrophysics Data System (ADS)

    Watanabe, S.; Kim, H.; Utsumi, N.

    2017-12-01

    This study aims to develop a new approach to projecting hydrology under climate change using super ensemble experiments. The use of multiple ensembles is essential for the estimation of extremes, which is a major issue in the impact assessment of climate change. Hence, super ensemble experiments have recently been conducted by several research programs. While it is necessary to use multiple ensembles, running a hydrological simulation for each output of the ensemble simulations incurs considerable computational cost. To use the super ensemble experiments effectively, we adopt a strategy of using the runoff projected by climate models directly. The general approach to hydrological projection is to conduct hydrological model simulations, which include land-surface and river-routing processes, using atmospheric boundary conditions projected by climate models as inputs. This study, on the other hand, runs only a river routing model using runoff projected by climate models. In general, climate model output is systematically biased, so a preprocessing step which corrects such bias is necessary for impact assessments. Various bias correction methods have been proposed, but, to the best of our knowledge, no method has been proposed for variables other than surface meteorology. Here, we newly propose a method for utilizing the projected future runoff directly. The developed method estimates and corrects the bias based on a pseudo-observation, which is the result of a retrospective offline simulation. We show an application of this approach to the super ensemble experiments conducted under the program Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI). More than 400 ensemble experiments from multiple climate models are available. The results of the validation using historical simulations by HAPPI indicate that the output of this approach can effectively reproduce retrospective runoff variability. Likewise, the bias of runoff from super ensemble climate projections is corrected, and the impact of climate change on hydrologic extremes is assessed in a cost-efficient way.
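    One common way to correct runoff bias against a pseudo-observation is empirical quantile mapping: a simulated value is replaced by the pseudo-observed value at the same empirical quantile. Whether this specific estimator is the one used in the study above is an assumption; the abstract does not name it. A minimal sketch (the series are invented stand-ins for climate-model runoff and the retrospective offline simulation):

```python
def quantile_map(value, model_ref, pseudo_obs):
    """Map a model runoff value to the pseudo-observation value at the
    same empirical quantile (a simple empirical quantile-mapping sketch)."""
    sm = sorted(model_ref)
    so = sorted(pseudo_obs)
    # Empirical quantile of `value` within the model reference period.
    below = sum(1 for x in sm if x <= value)
    q = below / len(sm)
    idx = min(int(q * len(so)), len(so) - 1)
    return so[idx]

model_ref  = [2.0, 3.0, 4.0, 5.0]   # biased model runoff, reference period
pseudo_obs = [1.0, 1.5, 2.0, 2.5]   # retrospective offline simulation
corrected = quantile_map(4.0, model_ref, pseudo_obs)
```

    Operational versions interpolate between quantiles and handle values outside the reference range; the idea of anchoring the correction to the pseudo-observed distribution is the same.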

  9. Positive geometries and canonical forms

    NASA Astrophysics Data System (ADS)

    Arkani-Hamed, Nima; Bai, Yuntao; Lam, Thomas

    2017-11-01

    Recent years have seen a surprising connection between the physics of scattering amplitudes and a class of mathematical objects — the positive Grassmannian, positive loop Grassmannians, tree and loop Amplituhedra — which have been loosely referred to as "positive geometries". The connection between the geometry and physics is provided by a unique differential form canonically determined by the property of having logarithmic singularities (only) on all the boundaries of the space, with residues on each boundary given by the canonical form on that boundary. The structures seen in the physical setting of the Amplituhedron are both rigid and rich enough to motivate an investigation of the notions of "positive geometries" and their associated "canonical forms" as objects of study in their own right, in a more general mathematical setting. In this paper we take the first steps in this direction. We begin by giving a precise definition of positive geometries and canonical forms, and introduce two general methods for finding forms for more complicated positive geometries from simpler ones — via "triangulation" on the one hand, and "push-forward" maps between geometries on the other. We present numerous examples of positive geometries in projective spaces, Grassmannians, and toric, cluster and flag varieties, both for the simplest "simplex-like" geometries and the richer "polytope-like" ones. We also illustrate a number of strategies for computing canonical forms for large classes of positive geometries, ranging from a direct determination exploiting knowledge of zeros and poles, to the use of the general triangulation and push-forward methods, to the representation of the form as volume integrals over dual geometries and contour integrals over auxiliary spaces. These methods yield interesting representations for the canonical forms of wide classes of positive geometries, ranging from the simplest Amplituhedra to new expressions for the volume of arbitrary convex polytopes.
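    The simplest example makes the defining property concrete: for the segment [a, b] in P^1, the unique form with logarithmic singularities only at the two boundary points is

```latex
\Omega\bigl([a,b]\bigr)
  = \frac{\mathrm{d}x}{x-a} - \frac{\mathrm{d}x}{x-b}
  = \frac{(b-a)\,\mathrm{d}x}{(x-a)(b-x)},
```

    with residue +1 at x = a and -1 at x = b. Higher-dimensional simplex forms are built analogously, and the triangulation method assembles polytope canonical forms by summing such simplex forms, with the spurious interior poles canceling pairwise.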

  10. Impact of ensemble learning in the assessment of skeletal maturity.

    PubMed

    Cunha, Pedro; Moura, Daniel C; Guevara López, Miguel Angel; Guerra, Conceição; Pinto, Daniela; Ramos, Isabel

    2014-09-01

    The assessment of bone age, or skeletal maturity, is an important task in pediatrics that measures the degree of maturation of children's bones. Nowadays, there is no standard clinical procedure for assessing bone age, and the most widely used approaches are the Greulich and Pyle and the Tanner and Whitehouse methods. Computer methods have been proposed to automate the process; however, there is a lack of exploration of how to combine the features of the different parts of the hand, and how to take advantage of ensemble techniques for this purpose. This paper presents a study in which the use of ensemble techniques for improving bone age assessment is evaluated. A new computer method was developed that extracts descriptors for each joint of each finger, which are then combined using different ensemble schemes to obtain a final bone age value. Three popular ensemble schemes are explored in this study: bagging, stacking and voting. The best results were achieved by bagging with rule-based regression (M5P), scoring a mean absolute error of 10.16 months. Results show that ensemble techniques improve the prediction performance of most of the evaluated regression algorithms, always achieving the best or comparable-to-best results. Therefore, the success of the ensemble methods allows us to conclude that their use may improve computer-based bone age assessment, offering a scalable option for utilizing multiple regions of interest and combining their output.

  11. On the use and computation of the Jordan canonical form in system theory

    NASA Technical Reports Server (NTRS)

    Sridhar, B.; Jordan, D.

    1974-01-01

    This paper investigates various aspects of the application of the Jordan canonical form of a matrix in system theory and develops a computational approach to determining the Jordan form for a given matrix. Applications include pole placement, controllability and observability studies, serving as an intermediate step in yielding other canonical forms, and theorem proving. The computational method developed in this paper is both simple and efficient. The method is based on the definition of a generalized eigenvector and a natural extension of Gauss elimination techniques. Examples are included for demonstration purposes.

  12. Predicting drug side-effect profiles: a chemical fragment-based approach

    PubMed Central

    2011-01-01

    Background Drug side-effects, or adverse drug reactions, have become a major public health concern. They are one of the main causes of failure in the process of drug development, and of drug withdrawal once drugs have reached the market. Therefore, in silico prediction of potential side-effects early in the drug discovery process, before reaching the clinical stages, is of great interest to improve this long and expensive process and to provide new efficient and safe therapies for patients. Results In the present work, we propose a new method to predict potential side-effects of drug candidate molecules based on their chemical structures, applicable to large molecular databanks. A unique feature of the proposed method is its ability to extract correlated sets of chemical substructures (or chemical fragments) and side-effects. This is made possible using sparse canonical correlation analysis (SCCA). In the results, we show the usefulness of the proposed method by predicting 1385 side-effects in the SIDER database from the chemical structures of 888 approved drugs. These predictions are performed with simultaneous extraction of correlated ensembles formed by a set of chemical substructures shared by drugs that are likely to have a set of side-effects. We also conduct a comprehensive side-effect prediction for many uncharacterized drug molecules stored in DrugBank, and were able to confirm interesting predictions using an independent source of information. Conclusions The proposed method is expected to be useful in various stages of the drug development process. PMID:21586169

  13. Insights into the deterministic skill of air quality ensembles ...

    EPA Pesticide Factsheets

    Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or constraining the ensemble to those members that meet certain conditions in time or frequency domain. The two different datasets were created for the first and second phase of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate better O3 than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill compared to each stati

  14. Reliable probabilities through statistical post-processing of ensemble predictions

    NASA Astrophysics Data System (ADS)

    Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2013-04-01

    We develop post-processing or calibration approaches based on linear regression that make ensemble forecasts more reliable. First, we enforce climatological reliability in the sense that the total variability of the prediction equals the variability of the observations. Second, we impose ensemble reliability such that the spread of the observations around the ensemble mean coincides with that of the ensemble members. In general, the attractors of the model and of reality are inhomogeneous, so the ensemble spread displays a variability not taken into account in standard post-processing methods. We overcome this by weighting the ensemble by a variable error. The approaches are tested in the context of the Lorenz 96 model (Lorenz 1996). The forecasts become more reliable at short lead times, as reflected by a flatter rank histogram. Our best method turns out to be superior to well-established methods such as EVMOS (Van Schaeybroeck and Vannitsem, 2011) and Nonhomogeneous Gaussian Regression (Gneiting et al., 2005). References [1] Gneiting, T., Raftery, A. E., Westveld, A., Goldman, T., 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Weather Rev. 133, 1098-1118. [2] Lorenz, E. N., 1996: Predictability - a problem partly solved. Proceedings, Seminar on Predictability ECMWF. 1, 1-18. [3] Van Schaeybroeck, B., and S. Vannitsem, 2011: Post-processing through linear regression, Nonlin. Processes Geophys., 18, 147.
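    The climatological-reliability constraint described above can be sketched as a regression correction followed by a variance inflation; the synthetic data and the exact rescaling step are illustrative assumptions, not the authors' precise formulation.

```python
# Sketch of variance-preserving calibration: a linear correction is rescaled so
# the calibrated forecast has the same climatological variance as the
# observations (the first reliability constraint described above).
import numpy as np

rng = np.random.default_rng(2)
obs = rng.normal(0.0, 2.0, 1000)
fcst = 0.6 * obs + rng.normal(0.0, 1.0, 1000)   # biased, underdispersive forecast

# Ordinary regression obs ~ a + b * fcst ...
b = np.cov(fcst, obs)[0, 1] / np.var(fcst)
a = obs.mean() - b * fcst.mean()
reg = a + b * fcst

# ... then inflate around the mean so the variances match exactly.
cal = obs.mean() + (reg - reg.mean()) * obs.std() / reg.std()
print(np.isclose(cal.std(), obs.std()))
```

    Plain regression alone shrinks the forecast variance below that of the observations; the final rescaling restores climatological reliability by construction.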

  15. Lysine acetylation sites prediction using an ensemble of support vector machine classifiers.

    PubMed

    Xu, Yan; Wang, Xiao-Bo; Ding, Jun; Wu, Ling-Yun; Deng, Nai-Yang

    2010-05-07

    Lysine acetylation is an essentially reversible and highly regulated post-translational modification which regulates diverse protein properties. Experimental identification of acetylation sites is laborious and expensive. Hence, there is significant interest in the development of computational methods for reliable prediction of acetylation sites from amino acid sequences. In this paper we use an ensemble of support vector machine classifiers to perform this task. The experimentally determined lysine acetylation sites are extracted from the Swiss-Prot database and the scientific literature. Experimental results show that an ensemble of support vector machine classifiers outperforms a single support vector machine classifier and other computational methods such as PAIL and LysAcet on the problem of predicting lysine acetylation sites. The resulting method has been implemented in EnsemblePail, a web server for lysine acetylation site prediction available at http://www.aporc.org/EnsemblePail/. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
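    A generic sketch of an SVM voting ensemble of the kind described, using bootstrap resamples and synthetic data as stand-ins for EnsemblePail's actual training scheme and sequence-derived features:

```python
# Sketch: majority-vote ensemble of SVM classifiers, each trained on a
# bootstrap resample. Data are synthetic, not real acetylation-site features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
rng = np.random.default_rng(0)

members = []
for _ in range(5):
    idx = rng.integers(0, len(X), len(X))        # bootstrap resample
    members.append(SVC(kernel="rbf").fit(X[idx], y[idx]))

votes = np.stack([m.predict(X) for m in members])
pred = (votes.mean(axis=0) > 0.5).astype(int)    # majority vote
print((pred == y).mean())                        # training-set accuracy
```

    Averaging the votes of diverse members typically reduces the variance of a single SVM, which is the usual rationale for such ensembles.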

  16. A target recognition method for maritime surveillance radars based on hybrid ensemble selection

    NASA Astrophysics Data System (ADS)

    Fan, Xueman; Hu, Shengliang; He, Jingbo

    2017-11-01

    In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.

  17. High resolution statistical downscaling of the EUROSIP seasonal prediction. Application for southeastern Romania

    NASA Astrophysics Data System (ADS)

    Busuioc, Aristita; Dumitrescu, Alexandru; Dumitrache, Rodica; Iriza, Amalia

    2017-04-01

    Seasonal climate forecasts in Europe are currently issued at the European Centre for Medium-Range Weather Forecasts (ECMWF) in the form of multi-model ensemble predictions available within the "EUROSIP" system. Different statistical techniques to calibrate, downscale and combine the EUROSIP direct model output are used to optimize the quality of the final probabilistic forecasts. In this study, a statistical downscaling model (SDM) based on canonical correlation analysis (CCA) is used to downscale the EUROSIP seasonal forecast at a spatial resolution of 1 km x 1 km over the Movila farm located in southeastern Romania. This application is carried out in the framework of the H2020 MOSES project (http://www.moses-project.eu). Monthly standardized values of three climate variables (maximum/minimum temperature-Tmax/Tmin, total precipitation-Prec) are used together as the predictand, while combinations of various large-scale predictors are tested in terms of their availability as outputs of the seasonal EUROSIP probabilistic forecasting system (sea level pressure, temperature at 850 hPa and geopotential height at 500 hPa). The predictors are taken from the ECMWF system considering 15 members of the ensemble, for which hindcasts from 1991 to the present are available. The model was calibrated over the period 1991-2014 and predictions for the summers of 2015 and 2016 were produced. The calibration was made for the ensemble average as well as for each ensemble member. The model was developed for each lead time: one month anticipation for June, two months anticipation for July and three months anticipation for August.
The main conclusions from these preliminary results are: best predictions (in terms of the anomaly sign) for Tmax (July-2 months anticipation, August-3 months anticipation) for both years (2015, 2016); for Tmin, good predictions only for August (3 months anticipation) for both years; for precipitation, good predictions for July (2 months anticipation) in 2015 and August (3 months anticipation) in 2016; failed predictions for June (1 month anticipation) for all parameters. To check whether the results obtained for the 2015 and 2016 summers agree with the general ECMWF model performance in forecasting the three predictors used in the CCA SDM calibration, the mean bias and root mean square error (RMSE) were computed over the entire period in each grid point, for each ensemble member and for the ensemble average. The obtained results are confirmed, showing the highest ECMWF performance in forecasting the three predictors at 3 months anticipation (August) and the lowest performance at one month anticipation (June). The added value of the CCA SDM in forecasting local Tmax/Tmin and total precipitation was compared to the ECMWF performance using the nearest grid point method. Comparisons were performed for the 1991-2014 period, taking into account the forecast made in May for July. An important improvement was found for the CCA SDM predictions in terms of the RMSE value (computed against observations) for Tmax/Tmin, and less so for precipitation. Tests are in progress for the other summer months (June, July).

  18. Complete Hamiltonian analysis of cosmological perturbations at all orders II: non-canonical scalar field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in

    2016-10-01

    In this work, we present a consistent Hamiltonian analysis of cosmological perturbations for generalized non-canonical scalar fields. In order to do so, we introduce a new phase-space variable that is uniquely defined for different non-canonical scalar fields. We also show that this is the simplest and most efficient way of expressing the Hamiltonian. We extend the Hamiltonian approach of [1] to non-canonical scalar fields and obtain a unique expression for the speed of sound in terms of phase-space variables. In order to invert the generalized phase-space Hamilton's equations to Euler-Lagrange equations of motion, we prescribe a general inversion formula and show that our approach for non-canonical scalar fields is consistent. We also obtain the third- and fourth-order interaction Hamiltonians for generalized non-canonical scalar fields and briefly discuss the extension of our method to generalized Galilean scalar fields.

  19. A comparison of ensemble post-processing approaches that preserve correlation structures

    NASA Astrophysics Data System (ADS)

    Schefzik, Roman; Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2016-04-01

    Despite the fact that ensemble forecasts address the major sources of uncertainty, they exhibit biases and dispersion errors, and their skill is therefore known to improve with calibration or statistical post-processing. For instance, the ensemble model output statistics (EMOS) method, also known as the non-homogeneous regression approach (Gneiting et al., 2005), is known to strongly improve forecast skill. EMOS is based on fitting and adjusting a parametric probability density function (PDF). However, EMOS and other common post-processing approaches apply to a single weather quantity at a single location for a single look-ahead time. They are therefore unable to take into account spatial, inter-variable and temporal dependence structures. Recently, many research efforts have been invested both in designing post-processing methods that resolve this drawback and in verification methods that enable the detection of dependence structures. New verification methods are applied to two classes of post-processing methods, both generating physically coherent ensembles. A first class uses ensemble copula coupling (ECC), which starts from EMOS but adjusts the rank structure (Schefzik et al., 2013). The second class is a member-by-member post-processing (MBM) approach that maps each raw ensemble member to a corrected one (Van Schaeybroeck and Vannitsem, 2015). We compare variants of the EMOS-ECC and MBM classes and highlight a specific theoretical connection between them. All post-processing variants are applied in the context of the ensemble system of the European Centre for Medium-Range Weather Forecasts (ECMWF) and compared using multivariate verification tools including the energy score, the variogram score (Scheuerer and Hamill, 2015) and the band depth rank histogram (Thorarinsdottir et al., 2015). Gneiting, Raftery, Westveld, and Goldman, 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Wea. Rev., 133, 1098-1118.
Scheuerer and Hamill, 2015: Variogram-based proper scoring rules for probabilistic forecasts of multivariate quantities. Mon. Wea. Rev., 143, 1321-1334. Schefzik, Thorarinsdottir and Gneiting, 2013: Uncertainty quantification in complex simulation models using ensemble copula coupling. Statistical Science, 28, 616-640. Thorarinsdottir, Scheuerer and Heinz, 2015: Assessing the calibration of high-dimensional ensemble forecasts using rank histograms, arXiv:1310.0236. Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q.J.R. Meteorol. Soc., 141, 807-818.
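    The MBM idea of mapping each raw member to a corrected one can be sketched as follows; the regression correction and spread-inflation target used here are one simple variant, assumed for illustration rather than taken verbatim from the cited papers.

```python
# Sketch of member-by-member (MBM) post-processing: every raw member is mapped
# to a corrected one by recentering on a regression-corrected ensemble mean and
# rescaling the deviations, which preserves the ensemble's rank structure.
import numpy as np

rng = np.random.default_rng(3)
n_days, n_mem = 500, 10
truth = rng.normal(15.0, 4.0, n_days)
raw = truth[:, None] + 2.0 + 0.5 * rng.normal(0.0, 3.0, (n_days, n_mem))

mean = raw.mean(axis=1)
# Correct the ensemble mean by linear regression on the observations...
b = np.cov(mean, truth)[0, 1] / np.var(mean)
a = truth.mean() - b * mean.mean()
corr_mean = a + b * mean
# ...and inflate the member deviations so the average ensemble variance matches
# the mean squared error of the corrected mean (a spread-reliability target).
dev = raw - mean[:, None]
tau = np.sqrt(np.mean((truth - corr_mean) ** 2) / dev.var())
cal = corr_mean[:, None] + tau * dev
print(cal.shape)
```

    Because each member receives the same affine map per forecast, the rank order of members and their correlation structure are untouched, which is exactly the property that distinguishes MBM from PDF-based methods such as EMOS.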

  20. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    PubMed

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
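    The data-depth notion underlying the curve boxplot can be sketched with a modified band depth on 1D function graphs; the paper's contribution extends depth to 2D and 3D curves, so this simplified one-dimensional version is an illustrative assumption.

```python
# Sketch of modified band depth: a curve is deep if, on average over pairs of
# other curves, it lies inside the band they span. The deepest curve plays the
# role of the median in a curve boxplot.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 50)
curves = np.sin(2 * np.pi * t) + rng.normal(0.0, 0.3, (20, 50))  # 20 noisy curves

def modified_band_depth(curves):
    n = len(curves)
    depth = np.zeros(n)
    for i, j in combinations(range(n), 2):
        lo = np.minimum(curves[i], curves[j])
        hi = np.maximum(curves[i], curves[j])
        # fraction of the domain where each curve lies inside the band [lo, hi]
        inside = (curves >= lo) & (curves <= hi)
        depth += inside.mean(axis=1)
    return depth / (n * (n - 1) / 2)

d = modified_band_depth(curves)
median_curve = curves[np.argmax(d)]   # deepest curve = functional median
print(d.shape)
```

    Ranking the ensemble by depth then yields the 50% central region and outliers, the direct analogues of the box and whiskers in an ordinary boxplot.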

  1. Desynchronization in an ensemble of globally coupled chaotic bursting neuronal oscillators by dynamic delayed feedback control

    NASA Astrophysics Data System (ADS)

    Che, Yanqiu; Yang, Tingting; Li, Ruixue; Li, Huiyan; Han, Chunxiao; Wang, Jiang; Wei, Xile

    2015-09-01

    In this paper, we propose a dynamic delayed feedback control approach for desynchronization of chaotic-bursting synchronous activities in an ensemble of globally coupled neuronal oscillators. We demonstrate that the difference signal between an ensemble's mean field and its time-delayed state, filtered and fed back to the ensemble, can suppress self-synchronization in the ensemble. The individual units are decoupled and stabilized at the desired desynchronized states while the stimulation signal reduces to the noise level. The effectiveness of the method is illustrated by examples of two different populations of globally coupled chaotic-bursting neurons. The proposed method has potential for mild, effective and demand-controlled therapy of neurological diseases characterized by pathological synchronization.

  2. Effects of ensemble and summary displays on interpretations of geospatial uncertainty data.

    PubMed

    Padilla, Lace M; Ruginski, Ian T; Creem-Regehr, Sarah H

    2017-01-01

    Ensemble and summary displays are two widely used methods to represent visual-spatial uncertainty; however, there is disagreement about which is the most effective technique to communicate uncertainty to the general public. Visualization scientists create ensemble displays by plotting multiple data points on the same Cartesian coordinate plane. Despite their use in scientific practice, it is more common in public presentations to use visualizations of summary displays, which scientists create by plotting statistical parameters of the ensemble members. While prior work has demonstrated that viewers make different decisions when viewing summary and ensemble displays, it is unclear what components of the displays lead to diverging judgments. This study aims to compare the salience of visual features - or visual elements that attract bottom-up attention - as one possible source of diverging judgments made with ensemble and summary displays in the context of hurricane track forecasts. We report that salient visual features of both ensemble and summary displays influence participant judgment. Specifically, we find that salient features of summary displays of geospatial uncertainty can be misunderstood as displaying size information. Further, salient features of ensemble displays evoke judgments that are indicative of accurate interpretations of the underlying probability distribution of the ensemble data. However, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. We propose that ensemble displays are a promising alternative to summary displays in a geospatial context but that decisions about visualization methods should be informed by the viewer's task.

  3. Advanced ensemble modelling of flexible macromolecules using X-ray solution scattering.

    PubMed

    Tria, Giancarlo; Mertens, Haydyn D T; Kachala, Michael; Svergun, Dmitri I

    2015-03-01

    Dynamic ensembles of macromolecules mediate essential processes in biology. Understanding the mechanisms driving the function and molecular interactions of 'unstructured' and flexible molecules requires alternative approaches to those traditionally employed in structural biology. Small-angle X-ray scattering (SAXS) is an established method for structural characterization of biological macromolecules in solution, and is directly applicable to the study of flexible systems such as intrinsically disordered proteins and multi-domain proteins with unstructured regions. The Ensemble Optimization Method (EOM) [Bernadó et al. (2007 ▶). J. Am. Chem. Soc. 129, 5656-5664] was the first approach introducing the concept of ensemble fitting of the SAXS data from flexible systems. In this approach, a large pool of macromolecules covering the available conformational space is generated and a sub-ensemble of conformers coexisting in solution is selected guided by the fit to the experimental SAXS data. This paper presents a series of new developments and advancements to the method, including significantly enhanced functionality and also quantitative metrics for the characterization of the results. Building on the original concept of ensemble optimization, the algorithms for pool generation have been redesigned to allow for the construction of partially or completely symmetric oligomeric models, and the selection procedure was improved to refine the size of the ensemble. Quantitative measures of the flexibility of the system studied, based on the characteristic integral parameters of the selected ensemble, are introduced. These improvements are implemented in the new EOM version 2.0, and the capabilities as well as inherent limitations of the ensemble approach in SAXS, and of EOM 2.0 in particular, are discussed.

  4. An extensive study of Bose-Einstein condensation in liquid helium using Tsallis statistics

    NASA Astrophysics Data System (ADS)

    Guha, Atanu; Das, Prasanta Kumar

    2018-05-01

    A realistic scenario can be represented much better by the generalized canonical ensemble than by the ideal one, provided proper parameter sets are involved. We study the Bose-Einstein condensation phenomenon of liquid helium within the framework of Tsallis statistics. With a comparatively high value of the deformation parameter q (~1.4), the theoretically calculated value of the critical temperature (Tc) of the phase transition of liquid helium is found to agree with the experimentally determined value (Tc = 2.17 K), although the two differ from each other for q = 1 (the undeformed scenario). This sheds light on the understanding of the phenomenon and qualitatively connects temperature fluctuations (non-equilibrium conditions) with the interactions between atoms. More interaction between atoms gives rise to more non-equilibrium conditions, as expected.

  5. Quantum correlations in chiral graphene nanoribbons.

    PubMed

    Tan, Xiao-Dong; Koop, Cornelie; Liao, Xiao-Ping; Sun, Litao

    2016-11-02

    We compute the entanglement and the quantum discord (QD) between two edge spins in chiral graphene nanoribbons (CGNRs) thermalized with a reservoir at temperature T (canonical ensemble). We show that the entanglement only exists in inter-edge coupled spin pairs, and there is no entanglement between any two spins at the same ribbon edge. By contrast, almost all edge spin pairs can hold non-zero QD, which strongly depends on the ribbon width and the Coulomb repulsion among electrons. More intriguingly, the dominant entanglement always occurs in the pair of nearest abreast spins across the ribbon, and even at room temperature this type of entanglement is still very robust, especially for narrow CGNRs with the weak Coulomb repulsion. These remarkable properties make CGNRs very promising for possible applications in spin-quantum devices.

  6. Negative specific heat of black-holes from fluid-gravity correspondence

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Swastik; Shankaranarayanan, S.

    2017-04-01

    Black holes in asymptotically flat space-times have negative specific heat: they get hotter as they lose energy. A clear statistical mechanical understanding of this has remained a challenge. In this work, we address this issue using the fluid-gravity correspondence, which aims to associate fluid degrees of freedom with the horizon. Using linear response theory and the teleological nature of the event horizon, we show explicitly that the fluctuations of the horizon-fluid lead to negative specific heat for a Schwarzschild black hole. We also point out how the specific heat can be positive for Kerr-Newman or AdS black holes. Our approach constitutes an important advance, as it allows us to apply the canonical ensemble approach to study the thermodynamics of asymptotically flat black hole space-times.

  7. Rigorous proof for the nonlocal correlation function in the transverse Ising model with ring frustration.

    PubMed

    Dong, Jian-Jun; Zheng, Zhen-Yu; Li, Peng

    2018-01-01

    An unusual correlation function was conjectured by Campostrini et al. [Phys. Rev. E 91, 042123 (2015)PLEEE81539-375510.1103/PhysRevE.91.042123] for the ground state of a transverse Ising chain with geometrical frustration. Later, we provided a rigorous proof for it and demonstrated its nonlocal nature based on an evaluation of a Toeplitz determinant in the thermodynamic limit [J. Stat. Mech. (2016) 11310210.1088/1742-5468/2016/11/113102]. In this paper, we further prove that all the low excited energy states forming the gapless kink phase share the same asymptotic correlation function with the ground state. As a consequence, the thermal correlation function almost remains constant at low temperatures if one assumes a canonical ensemble.

  8. Study of the statistical physics bases on superstatistics from the β-fluctuated to the T-fluctuated form

    NASA Astrophysics Data System (ADS)

    Sargolzaeipor, S.; Hassanabadi, H.; Chung, W. S.

    2018-04-01

    In this paper, we study the T-fluctuated form of superstatistics. In this form, thermodynamic quantities such as the Helmholtz energy, the entropy and the internal energy are expressed in terms of the T-fluctuated form for a canonical ensemble. In addition, the partition functions in this formalism for 2-level and 3-level distributions are derived. We then apply T-fluctuated superstatistics to the quantum harmonic oscillator problem, and the thermal properties of the system are calculated for Bose-Einstein, Maxwell-Boltzmann and Fermi-Dirac statistics. The effect of the deformation parameter on these properties is examined. All the results recover the well-known results when the deformation parameter is removed.
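    For orientation, the standard superstatistical starting point replaces the Boltzmann factor by an average over the fluctuating intensive parameter; a schematic form (with the distributions f and g left unspecified, as an illustrative sketch rather than the paper's specific choices) is:

```latex
% Standard superstatistics: average the Boltzmann factor over a fluctuating
% inverse temperature beta with distribution f(beta):
B(E) \;=\; \int_0^{\infty} f(\beta)\, e^{-\beta E}\, \mathrm{d}\beta .
% The T-fluctuated form instead averages over the temperature itself,
% with some distribution g(T):
B(E) \;=\; \int_0^{\infty} g(T)\, e^{-E/(k_B T)}\, \mathrm{d}T ,
\qquad Z \;=\; \sum_i B(E_i).
```

    The generalized partition function Z then generates the Helmholtz energy, entropy and internal energy in the usual way, which is how the T-fluctuated thermodynamic quantities in the abstract arise.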

  9. Thermodynamics of the adsorption of flexible polymers on nanowires

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Gross, Jonathan; Bachmann, Michael

    2015-03-01

    Generalized-ensemble simulations enable the study of complex adsorption scenarios of a coarse-grained model polymer near an attractive nanostring, representing an ultrathin nanowire. We perform canonical and microcanonical statistical analyses to investigate structural transitions of the polymer and discuss their dependence on the temperature and on model parameters such as effective wire thickness and attraction strength. The result is a complete hyperphase diagram of the polymer phases, whose locations and stability are influenced by the effective material properties of the nanowire and the strength of the thermal fluctuations. Major structural polymer phases in the adsorbed state include compact droplets attached to or wrapping around the wire, and tubelike conformations with triangular pattern that resemble ideal boron nanotubes. The classification of the transitions is performed by microcanonical inflection-point analysis.

  10. Thermodynamics of the adsorption of flexible polymers on nanowires.

    PubMed

    Vogel, Thomas; Gross, Jonathan; Bachmann, Michael

    2015-03-14

    Generalized-ensemble simulations enable the study of complex adsorption scenarios of a coarse-grained model polymer near an attractive nanostring, representing an ultrathin nanowire. We perform canonical and microcanonical statistical analyses to investigate structural transitions of the polymer and discuss their dependence on the temperature and on model parameters such as effective wire thickness and attraction strength. The result is a complete hyperphase diagram of the polymer phases, whose locations and stability are influenced by the effective material properties of the nanowire and the strength of the thermal fluctuations. Major structural polymer phases in the adsorbed state include compact droplets attached to or wrapping around the wire, and tubelike conformations with triangular pattern that resemble ideal boron nanotubes. The classification of the transitions is performed by microcanonical inflection-point analysis.

  11. Virial coefficients and demixing in the Asakura-Oosawa model.

    PubMed

    López de Haro, Mariano; Tejero, Carlos F; Santos, Andrés; Yuste, Santos B; Fiumara, Giacomo; Saija, Franz

    2015-01-07

    The problem of demixing in the Asakura-Oosawa colloid-polymer model is considered. The critical constants are computed using truncated virial expansions up to fifth order. While the exact analytical results for the second and third virial coefficients are known for any size ratio, analytical results for the fourth virial coefficient are provided here, and fifth virial coefficients are obtained numerically for particular size ratios using standard Monte Carlo techniques. We have computed the critical constants by successively considering the truncated virial series up to the second, third, fourth, and fifth virial coefficients. The results for the critical colloid and (reservoir) polymer packing fractions are compared with those that follow from available Monte Carlo simulations in the grand canonical ensemble. Limitations and perspectives of this approach are pointed out.
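    The criticality conditions used with a truncated virial series can be sketched on a one-component example; the coefficients below are illustrative numbers, not the Asakura-Oosawa values, and the demixing calculation in the paper involves a two-component series.

```python
# Sketch: locating a critical point from a truncated virial equation of state
# beta*P = rho + B2*rho^2 + B3*rho^3. At the critical point the first and
# second density derivatives of the pressure vanish simultaneously.
import numpy as np

B3 = 1.0
# dP/drho = 1 + 2*B2*rho + 3*B3*rho^2 and d2P/drho2 = 2*B2 + 6*B3*rho
# vanish together when B2^2 = 3*B3, at rho_c = -B2 / (3*B3).
B2_c = -np.sqrt(3.0 * B3)
rho_c = -B2_c / (3.0 * B3)

dP = 1 + 2 * B2_c * rho_c + 3 * B3 * rho_c**2
d2P = 2 * B2_c + 6 * B3 * rho_c
print(np.isclose(dP, 0.0), np.isclose(d2P, 0.0))
```

    Adding higher virial coefficients shifts where these stationarity conditions are met, which is why the critical constants in the abstract are tracked order by order through the fifth coefficient.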

  12. Statistical Mechanical Derivation of Jarzynski's Identity for Thermostated Non-Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Cuendet, Michel A.

    2006-03-01

    The recent Jarzynski identity (JI) relates thermodynamic free energy differences to nonequilibrium work averages. Several proofs of the JI have been provided on the thermodynamic level. They rely on assumptions such as equivalence of ensembles in the thermodynamic limit or weakly coupled infinite heat baths. However, the JI is widely applied to NVT computer simulations involving finite numbers of particles, whose equations of motion are strongly coupled to a few extra degrees of freedom modeling a thermostat. In this case, the above assumptions are no longer valid. We propose a statistical mechanical approach to the JI solely based on the specific equations of motion, without any further assumption. We provide a detailed derivation for the non-Hamiltonian Nosé-Hoover dynamics, which is routinely used in computer simulations to produce canonical sampling.
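    The identity in question is commonly written as an equality between the exponential average of the nonequilibrium work and the equilibrium free energy difference:

```latex
\bigl\langle e^{-\beta W} \bigr\rangle \;=\; e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_B T},
```

    where the average is taken over repeated realizations of the switching process started from canonical initial conditions. The derivation discussed above establishes this relation directly from the Nosé-Hoover equations of motion rather than from thermodynamic-limit arguments.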

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donnelly, William; Wong, Gabriel

    What is the meaning of entanglement in a theory of extended objects such as strings? To address this question we consider the spatial entanglement between two intervals in the Gross-Taylor model, the string theory dual to two-dimensional Yang-Mills theory at large N. The string diagrams that contribute to the entanglement entropy describe open strings with endpoints anchored to the entangling surface, as first argued by Susskind. We develop a canonical theory of these open strings, and describe how closed strings are divided into open strings at the level of the Hilbert space. Here, we derive the modular Hamiltonian for the Hartle-Hawking state and show that the corresponding reduced density matrix describes a thermal ensemble of open strings ending on an object at the entangling surface that we call an entanglement brane, or E-brane.

  14. Tsallis thermostatistics for finite systems: a Hamiltonian approach

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.; Moreira, André A.; Andrade, José S., Jr.; Almeida, Murilo P.

    2003-05-01

    The derivation of the Tsallis generalized canonical distribution from the traditional approach of the Gibbs microcanonical ensemble is revisited (Phys. Lett. A 193 (1994) 140). We show that finite systems whose Hamiltonians obey a generalized homogeneity relation rigorously follow the nonextensive thermostatistics of Tsallis. In the thermodynamic limit, however, our results indicate that the Boltzmann-Gibbs statistics is always recovered, regardless of the type of potential among interacting particles. This approach provides, moreover, a one-to-one correspondence between the generalized entropy and the Hamiltonian structure of a wide class of systems, revealing a possible origin for the intrinsic nonlinear features present in the Tsallis formalism that lead naturally to power-law behavior. Finally, we confirm these exact results through extensive numerical simulations of the Fermi-Pasta-Ulam chain of anharmonic oscillators.

  15. Basic Brackets of a 2D Model for the Hodge Theory Without its Canonical Conjugate Momenta

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Gupta, S.; Malik, R. P.

    2016-06-01

    We deduce the canonical brackets for a two (1+1)-dimensional (2D) free Abelian 1-form gauge theory by exploiting the beauty and strength of the continuous symmetries of a Becchi-Rouet-Stora-Tyutin (BRST) invariant Lagrangian density that respects, in totality, six continuous symmetries. These symmetries make this model a field-theoretic example of Hodge theory. Taken together, they enforce the existence of exactly the same canonical brackets amongst the creation and annihilation operators as are found to exist within the standard canonical quantization scheme. These creation and annihilation operators appear in the normal mode expansion of the basic fields of this theory. In other words, we provide an alternative to the canonical method of quantization for our present model of Hodge theory, in which the continuous internal symmetries play a decisive role. We conjecture that our method of quantization is valid for the class of field theories that are tractable physical examples of Hodge theory. This statement is true in any arbitrary dimension of spacetime.

  16. Dynamics and Statistical Mechanics of Rotating and non-Rotating Vortical Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Chjan

    Three projects were analyzed with the overall aim of developing a computational/analytical model for estimating values of the energy, angular momentum, enstrophy and total variation of fluid height at phase transitions between disordered and self-organized flow states in planetary atmospheres. It is believed that these transitions in equilibrium statistical mechanics models play a role in the construction of large-scale, stable structures including super-rotation in the Venusian atmosphere and the formation of the Great Red Spot on Jupiter. Exact solutions of the spherical energy-enstrophy models for rotating planetary atmospheres by Kac's method of steepest descent predicted phase transitions to super-rotating solid-body flows at high energy to enstrophy ratio for all planetary spins and to sub-rotating modes if the planetary spin is large enough. These canonical statistical ensembles are well-defined for the long-range energy interactions that arise from 2D fluid flows on compact oriented manifolds such as the surface of the sphere and torus. This is because in Fourier space available through Hodge theory, the energy terms are exactly diagonalizable and hence have zero range, leading to well-defined heat baths.

  17. Computational study of configurational and vibrational contributions to the thermodynamics of substitutional alloys: The case of Ni3Al

    NASA Astrophysics Data System (ADS)

    Michelon, M. F.; Antonelli, A.

    2010-03-01

    We have developed a methodology to study the thermodynamics of order-disorder transformations in n-component substitutional alloys that combines nonequilibrium methods, which can efficiently compute free energies, with Monte Carlo simulations in which configurational and vibrational degrees of freedom are considered simultaneously on an equal footing. Furthermore, with this methodology one can easily perform simulations in the canonical and isobaric-isothermal ensembles, which allows investigation of the bulk volume effect. We have applied this methodology to calculate the configurational and vibrational contributions to the entropy of the Ni3Al alloy as functions of temperature. The simulations show that when the volume of the system is kept constant, the vibrational entropy does not change upon transition, while constant-pressure calculations indicate that the volume increase at the order-disorder transition causes a vibrational entropy increase of 0.08 kB/atom. This is significant when compared to the configurational entropy increase of 0.27 kB/atom. Our calculations also indicate that including vibrations reduces the order-disorder transition temperature by about 30% relative to that determined from the configurational degrees of freedom alone.

  18. Test of quantum thermalization in the two-dimensional transverse-field Ising model

    PubMed Central

    Blaß, Benjamin; Rieger, Heiko

    2016-01-01

    We study the quantum relaxation of the two-dimensional transverse-field Ising model after global quenches with a real-time variational Monte Carlo method and address the question of whether this non-integrable, two-dimensional system thermalizes or not. We consider both interaction quenches in the paramagnetic phase and field quenches in the ferromagnetic phase and compare the time-averaged probability distributions of non-conserved quantities like magnetization and correlation functions to the thermal distributions according to the canonical Gibbs ensemble obtained with quantum Monte Carlo simulations at temperatures defined by the excess energy in the system. We find that the occurrence of thermalization crucially depends on the quench parameters: While after the interaction quenches in the paramagnetic phase thermalization can be observed, our results for the field quenches in the ferromagnetic phase show clear deviations from the thermal system. These deviations increase with the quench strength and become especially clear when comparing the shapes of the thermal and time-averaged distributions, the latter indicating that the system does not completely lose the memory of its initial state even for strong quenches. We discuss our results with respect to a recently formulated theorem on generalized thermalization in quantum systems. PMID:27905523

  19. pE-DB: a database of structural ensembles of intrinsically disordered and of unfolded proteins.

    PubMed

    Varadi, Mihaly; Kosol, Simone; Lebrun, Pierre; Valentini, Erica; Blackledge, Martin; Dunker, A Keith; Felli, Isabella C; Forman-Kay, Julie D; Kriwacki, Richard W; Pierattelli, Roberta; Sussman, Joel; Svergun, Dmitri I; Uversky, Vladimir N; Vendruscolo, Michele; Wishart, David; Wright, Peter E; Tompa, Peter

    2014-01-01

    The goal of pE-DB (http://pedb.vib.be) is to serve as an openly accessible database for the deposition of structural ensembles of intrinsically disordered proteins (IDPs) and of denatured proteins based on nuclear magnetic resonance spectroscopy, small-angle X-ray scattering and other data measured in solution. Owing to the inherent flexibility of IDPs, solution techniques are particularly appropriate for characterizing their biophysical properties, and structural ensembles in agreement with these data provide a convenient tool for describing the underlying conformational sampling. Database entries consist of (i) primary experimental data with descriptions of the acquisition methods and algorithms used for the ensemble calculations, and (ii) the structural ensembles consistent with these data, provided as a set of models in Protein Data Bank format. pE-DB is open for submissions from the community, and is intended as a forum for disseminating the structural ensembles and the methodologies used to generate them. While the need to represent IDP structures is clear, methods for determining and evaluating the structural ensembles are still evolving. The availability of the pE-DB database is expected to promote the development of new modeling methods and to lead to a better understanding of how function arises from disordered states.

  20. Comparing ensemble learning methods based on decision tree classifiers for protein fold recognition.

    PubMed

    Bardsiri, Mahshid Khatibi; Eftekhari, Mahdi

    2014-01-01

    In this paper, methods for ensemble learning of protein fold recognition based on decision trees (DT) are compared over three datasets taken from the literature. Following previously reported studies, the features of the datasets are divided into groups. Then, for each of these groups, three ensemble classifiers, namely random forest, rotation forest and AdaBoost.M1, are employed. Also, some fusion methods are introduced for combining the ensemble classifiers obtained in the previous step. After this step, three classifiers are produced based on the combination of the random forest, rotation forest and AdaBoost.M1 classifiers. Finally, the three resulting classifiers are combined into an overall classifier. Experimental results show that the overall classifier obtained by the genetic algorithm (GA) weighting fusion method is the best in comparison with previously applied methods in terms of classification accuracy.

  1. Insights into Watson-Crick/Hoogsteen breathing dynamics and damage repair from the solution structure and dynamic ensemble of DNA duplexes containing m1A.

    PubMed

    Sathyamoorthy, Bharathwaj; Shi, Honglue; Zhou, Huiqing; Xue, Yi; Rangadurai, Atul; Merriman, Dawn K; Al-Hashimi, Hashim M

    2017-05-19

    In the canonical DNA double helix, Watson-Crick (WC) base pairs (bps) exist in dynamic equilibrium with sparsely populated (∼0.02-0.4%) and short-lived (lifetimes ∼0.2-2.5 ms) Hoogsteen (HG) bps. To gain insights into transient HG bps, we used solution-state nuclear magnetic resonance spectroscopy, including measurements of residual dipolar couplings and molecular dynamics simulations, to examine how a single HG bp trapped using the N1-methylated adenine (m1A) lesion affects the structural and dynamic properties of two duplexes. The solution structure and dynamic ensembles of the duplexes reveals that in both cases, m1A forms a m1A•T HG bp, which is accompanied by local and global structural and dynamic perturbations in the double helix. These include a bias toward the BI backbone conformation; sugar repuckering, major-groove directed kinking (∼9°); and local melting of neighboring WC bps. These results provide atomic insights into WC/HG breathing dynamics in unmodified DNA duplexes as well as identify structural and dynamic signatures that could play roles in m1A recognition and repair. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Insights into Watson–Crick/Hoogsteen breathing dynamics and damage repair from the solution structure and dynamic ensemble of DNA duplexes containing m1A

    PubMed Central

    Sathyamoorthy, Bharathwaj; Shi, Honglue; Zhou, Huiqing; Xue, Yi; Rangadurai, Atul; Merriman, Dawn K.

    2017-01-01

    Abstract In the canonical DNA double helix, Watson–Crick (WC) base pairs (bps) exist in dynamic equilibrium with sparsely populated (∼0.02–0.4%) and short-lived (lifetimes ∼0.2–2.5 ms) Hoogsteen (HG) bps. To gain insights into transient HG bps, we used solution-state nuclear magnetic resonance spectroscopy, including measurements of residual dipolar couplings and molecular dynamics simulations, to examine how a single HG bp trapped using the N1-methylated adenine (m1A) lesion affects the structural and dynamic properties of two duplexes. The solution structure and dynamic ensembles of the duplexes reveals that in both cases, m1A forms a m1A•T HG bp, which is accompanied by local and global structural and dynamic perturbations in the double helix. These include a bias toward the BI backbone conformation; sugar repuckering, major-groove directed kinking (∼9°); and local melting of neighboring WC bps. These results provide atomic insights into WC/HG breathing dynamics in unmodified DNA duplexes as well as identify structural and dynamic signatures that could play roles in m1A recognition and repair. PMID:28369571

  3. Differential Enzyme Flexibility Probed Using Solid-State Nanopores.

    PubMed

    Hu, Rui; Rodrigues, João V; Waduge, Pradeep; Yamazaki, Hirohito; Cressiot, Benjamin; Chishti, Yasmin; Makowski, Lee; Yu, Dapeng; Shakhnovich, Eugene; Zhao, Qing; Wanunu, Meni

    2018-05-22

    Enzymes and motor proteins are dynamic macromolecules that coexist in a number of conformations of similar energies. Protein function is usually accompanied by a change in structure and flexibility, often induced upon binding to ligands. However, while measuring protein flexibility changes between active and resting states is of therapeutic significance, it remains a challenge. Recently, our group demonstrated that the breadth of signal amplitudes in electrical signatures, measured as an ensemble of individual protein molecules is driven through solid-state nanopores, correlates with protein conformational dynamics. Here, we extend our study to resolve subtle flexibility variation in dihydrofolate reductase mutants from unlabeled single molecules in solution. We first demonstrate, using a canonical protein system, adenylate kinase, that both size and flexibility changes can be observed upon binding to a substrate that locks the protein in a closed conformation. Next, we investigate the influence of voltage bias and pore geometry on the measured electrical pulse statistics during protein transport. Finally, using the optimal experimental conditions, we systematically study a series of wild-type and mutant dihydrofolate reductase proteins, finding a good correlation between nanopore-measured protein conformational dynamics and equilibrium bulk fluorescence probe measurements. Our results unequivocally demonstrate that nanopore-based measurements reliably probe conformational diversity in native protein ensembles.

  4. Bose-Einstein condensation of triplons with a weakly broken U(1) symmetry

    NASA Astrophysics Data System (ADS)

    Khudoyberdiev, Asliddin; Rakhimov, Abdulla; Schilling, Andreas

    2017-11-01

    The low-temperature properties of certain quantum magnets can be described in terms of a Bose-Einstein condensation (BEC) of magnetic quasiparticles (triplons). Some mean-field approaches (MFA) to describe these systems, based on the standard grand canonical ensemble, do not take the anomalous density into account, leading to an internal inconsistency, as shown by Hohenberg and Martin, and may therefore produce unphysical results. Moreover, an explicit breaking of the U(1) symmetry as observed, for example, in TlCuCl3 makes the application of MFA more complicated. In the present work, we develop a self-consistent MFA, similar to the Hartree-Fock-Bogolyubov approximation in the notion of representative statistical ensembles, including the effect of a weakly broken U(1) symmetry. We apply our results to experimental data of the quantum magnet TlCuCl3 and show that the magnetization curves and the energy dispersion can be well described within this approximation, assuming that the BEC scenario is still valid. We predict that the shift of the critical temperature Tc due to a finite exchange anisotropy is rather substantial even when the anisotropy parameter γ is small, e.g., ΔTc ≈ 10% of Tc at H = 6 T for γ ≈ 4 μeV.

  5. Revealing Risks in Adaptation Planning: expanding Uncertainty Treatment and dealing with Large Projection Ensembles during Planning Scenario development

    NASA Astrophysics Data System (ADS)

    Brekke, L. D.; Clark, M. P.; Gutmann, E. D.; Wood, A.; Mizukami, N.; Mendoza, P. A.; Rasmussen, R.; Ikeda, K.; Pruitt, T.; Arnold, J. R.; Rajagopalan, B.

    2015-12-01

    Adaptation planning assessments often rely on single methods for climate projection downscaling and hydrologic analysis, do not reveal uncertainties from associated method choices, and thus likely produce overly confident decision-support information. Recent work by the authors has highlighted this issue by identifying strengths and weaknesses of widely applied methods for downscaling climate projections and assessing hydrologic impacts. This work has shown that many of the methodological choices made can alter the magnitude, and even the sign, of the climate change signal. Such results motivate consideration of both sources of method uncertainty within an impacts assessment. Consequently, the authors have pursued development of improved downscaling techniques spanning a range of method classes (quasi-dynamical and circulation-based statistical methods) and developed approaches to better account for hydrologic analysis uncertainty (multi-model; regional parameter estimation under forcing uncertainty). This presentation summarizes progress in the development of these methods, as well as implications of pursuing these developments. First, having access to these methods creates an opportunity to better reveal impacts uncertainty through multi-method ensembles, expanding on present-practice ensembles which are often based only on emissions scenarios and GCM choices. Second, such expansion of uncertainty treatment combined with an ever-expanding wealth of global climate projection information creates a challenge of how to use such a large ensemble for local adaptation planning. To address this challenge, the authors are evaluating methods for ensemble selection (considering the principles of fidelity, diversity and sensitivity) that are compatible with present-practice approaches for abstracting change scenarios from any "ensemble of opportunity". Early examples from this development will also be presented.

  6. Algorithms that Defy the Gravity of Learning Curve

    DTIC Science & Technology

    2017-04-28

    three nearest neighbour-based anomaly detectors, i.e., an ensemble of nearest neighbours, a recent nearest neighbour-based ensemble method called iNNE ... streams. Note that the change in sample size does not alter the geometrical data characteristics discussed here. 3.1 Experimental Methodology ... need to be answered. 3.6 Comparison with conventional ensemble methods Given the theoretical results, the third aim of this project (i.e., identify the

  7. Fire spread estimation on forest wildfire using ensemble kalman filter

    NASA Astrophysics Data System (ADS)

    Syarifah, Wardatus; Apriliani, Erna

    2018-04-01

    Wildfire is one of the most frequent disasters in the world; forest wildfires, for example, cause forest populations to decrease. Forest wildfires, whether naturally occurring or prescribed, are potential risks for ecosystems and human settlements. These risks can be managed by monitoring the weather, prescribing fires to limit available fuel, and creating firebreaks. With computer simulations we can predict and explore how fires may spread. A model of fire spread in forest wildfire was established to determine the fire properties; the model is based on a diffusion-reaction equation. There are many methods to estimate the spread of fire. The ensemble Kalman filter (EnKF) is a modification of the Kalman filter algorithm that can be used to estimate both linear and non-linear system models. In this research we apply the EnKF to estimate the spread of fire in a forest wildfire. Before applying the EnKF, the fire spread model is discretized using the finite difference method. Finally, the analysis is illustrated by numerical simulation. The simulation results show that the EnKF estimate is closer to the system model when the ensemble size is larger and when the covariances of the system model and the measurement are smaller.
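    The workflow described above can be sketched in a few lines. This is a minimal toy illustration, not the authors' code: a 1-D logistic reaction-diffusion "fire front" on a periodic grid (our assumed model and parameters), discretized by finite differences, with a perturbed-observations EnKF assimilating noisy point observations each step.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, dx, dt = 50, 1.0, 0.1
D, r = 1.0, 0.5                     # assumed diffusion and reaction rates

def step(u):
    """One explicit finite-difference step of u_t = D u_xx + r u (1 - u)."""
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    return u + dt * (D * lap + r * u * (1 - u))

# "truth" run and noisy observations at a few grid points
truth = np.exp(-0.5 * ((np.arange(nx) - 10) / 3.0)**2)
obs_idx = np.array([5, 15, 25, 35, 45])
obs_err = 0.05

# ensemble of perturbed initial states
n_ens = 40
ens = truth + 0.1 * rng.standard_normal((n_ens, nx))

for _ in range(20):
    truth = step(truth)
    ens = np.array([step(m) for m in ens])
    y = truth[obs_idx] + obs_err * rng.standard_normal(obs_idx.size)
    # EnKF analysis: Kalman gain built from ensemble covariances
    X = ens - ens.mean(axis=0)
    HX = X[:, obs_idx]
    P_hy = X.T @ HX / (n_ens - 1)                    # state/obs cross-covariance
    P_yy = HX.T @ HX / (n_ens - 1) + obs_err**2 * np.eye(obs_idx.size)
    K = P_hy @ np.linalg.inv(P_yy)
    # perturbed-observations update for each member
    for i in range(n_ens):
        y_i = y + obs_err * rng.standard_normal(obs_idx.size)
        ens[i] += K @ (y_i - ens[i, obs_idx])

rmse = np.sqrt(np.mean((ens.mean(axis=0) - truth)**2))
```

    With a larger ensemble and smaller model/measurement covariances the ensemble mean tracks the truth more closely, mirroring the conclusion of the abstract.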

  8. A Bayesian Ensemble Approach for Epidemiological Projections

    PubMed Central

    Lindström, Tom; Tildesley, Michael; Webb, Colleen

    2015-01-01

    Mathematical models are powerful tools for epidemiology and can be used to compare control actions. However, different models and model parameterizations may provide different predictions of outcomes. In other fields of research, ensemble modeling has been used to combine multiple projections. We explore the possibility of applying such methods to epidemiology by adapting Bayesian techniques developed for climate forecasting. We exemplify the implementation with single model ensembles based on different parameterizations of the Warwick model run for the 2001 United Kingdom foot and mouth disease outbreak and compare the efficacy of different control actions. This allows us to investigate the effect that discrepancy among projections based on different modeling assumptions has on the ensemble prediction. A sensitivity analysis showed that the choice of prior can have a pronounced effect on the posterior estimates of quantities of interest, in particular for ensembles with large discrepancy among projections. However, by using a hierarchical extension of the method we show that prior sensitivity can be circumvented. We further extend the method to include a priori beliefs about different modeling assumptions and demonstrate that the effect of this can have different consequences depending on the discrepancy among projections. We propose that the method is a promising analytical tool for ensemble modeling of disease outbreaks. PMID:25927892

  9. Training set extension for SVM ensemble in P300-speller with familiar face paradigm.

    PubMed

    Li, Qi; Shi, Kaiyang; Gao, Ning; Li, Jian; Bai, Ou

    2018-03-27

    P300-spellers are brain-computer interface (BCI)-based character input systems. Support vector machine (SVM) ensembles are trained with large-scale training sets and used as classifiers in these systems. However, the required large-scale training data necessitate a prolonged collection time for each subject, which results in data collected toward the end of the period being contaminated by the subject's fatigue. This study aimed to develop a method for acquiring more training data based on a small collected training set. A new method was developed in which two corresponding training datasets in two sequences are superposed and averaged to extend the training set. The proposed method was tested offline on a P300-speller with the familiar face paradigm. The SVM ensemble with extended training set achieved 85% classification accuracy for the averaged results of four sequences, and 100% for 11 sequences in the P300-speller. In contrast, the conventional SVM ensemble with non-extended training set achieved only 65% accuracy for four sequences, and 92% for 11 sequences. The SVM ensemble with extended training set achieves higher classification accuracies than the conventional SVM ensemble, which verifies that the proposed method effectively improves the classification performance of BCI P300-spellers, thus enhancing their practicality.
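    The extension idea described above can be sketched as follows. This is a hedged illustration with names and shapes of our own choosing, not the authors' implementation: epochs from two sequences that share the same labels are superposed and averaged to synthesize additional training examples.

```python
import numpy as np

def extend_training_set(X_seq1, X_seq2, y):
    """X_seq1, X_seq2: (n_trials, n_features) feature epochs from two
    sequences with identical labels y. Returns the original trials from
    both sequences plus their pairwise averages, with matching labels."""
    X_avg = 0.5 * (X_seq1 + X_seq2)              # superpose and average
    X_ext = np.vstack([X_seq1, X_seq2, X_avg])
    y_ext = np.concatenate([y, y, y])
    return X_ext, y_ext

# toy usage: 10 trials per sequence, 8 features per epoch
rng = np.random.default_rng(1)
X1 = rng.standard_normal((10, 8))
X2 = rng.standard_normal((10, 8))
y = np.array([0, 1] * 5)
X_ext, y_ext = extend_training_set(X1, X2, y)    # 30 trials instead of 20
```

    The extended set would then be fed to the SVM ensemble in place of the raw trials; averaging also attenuates uncorrelated noise, which is the usual rationale for superposition in P300 analysis.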

  10. Automatic Estimation of Osteoporotic Fracture Cases by Using Ensemble Learning Approaches.

    PubMed

    Kilic, Niyazi; Hosgormez, Erkan

    2016-03-01

    Ensemble learning methods are among the most powerful tools for pattern classification problems. In this paper, the effects of ensemble learning methods and some physical bone densitometry parameters on osteoporotic fracture detection were investigated. Six feature set models were constructed including different physical parameters, and they were fed into the ensemble classifiers as input features. As ensemble learning techniques, bagging, gradient boosting and random subspace (RSM) were used. Instance-based learning (IBk) and random forest (RF) classifiers were applied to the six feature set models. The patients were classified into three groups, osteoporosis, osteopenia and control (healthy), using ensemble classifiers. Total classification accuracy and f-measure were used to evaluate the diagnostic performance of the proposed ensemble classification system. The classification accuracy reached 98.85% with the combination of model 6 (five BMD + five T-score values) using the RSM-RF classifier. The findings of this paper suggest that patients can be warned before a bone fracture occurs, by examining only some physical parameters that can easily be measured without invasive operations.

  11. Restoring canonical partition functions from imaginary chemical potential

    NASA Astrophysics Data System (ADS)

    Bornyakov, V. G.; Boyda, D.; Goy, V.; Molochkov, A.; Nakamura, A.; Nikolaev, A.; Zakharov, V. I.

    2018-03-01

    Using GPGPU techniques and multi-precision calculation we developed code to study the QCD phase-transition line in the canonical approach. The canonical approach is a powerful tool for investigating the sign problem in lattice QCD. The central part of the canonical approach is the fugacity expansion of the grand canonical partition function; the canonical partition functions Zn(T) are the coefficients of this expansion. Using various methods we study the properties of Zn(T). At the last step we perform a cubic-spline interpolation of the temperature dependence of Zn(T) at fixed n and compute the baryon number susceptibility χB/T² as a function of temperature. After that we compute ∂χ/∂T numerically and restore the crossover line in the QCD phase diagram. We use improved Wilson fermions and the Iwasaki gauge action on a 16³ × 4 lattice with mπ/mρ = 0.8 as a sandbox to check the canonical approach. In this framework we obtain the coefficient in the parametrization of the crossover line Tc(μB²) = Tc(1 − κμB²/Tc²) with κ = −0.0453 ± 0.0099.
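    The fugacity expansion at the heart of the canonical approach, and its standard inversion via an imaginary chemical potential μ = iθT, can be written as follows (these are the textbook relations of the method, not notation specific to this paper):

```latex
% Fugacity expansion of the grand canonical partition function,
% with fugacity \xi = e^{\mu/T} and canonical coefficients Z_n(T):
Z_{GC}(\mu, T) \;=\; \sum_{n} Z_n(T)\,\xi^{\,n},
\qquad \xi = e^{\mu/T}.

% The canonical partition functions are recovered from simulations at
% imaginary chemical potential \mu = i\theta T by a Fourier transform:
Z_n(T) \;=\; \int_0^{2\pi} \frac{d\theta}{2\pi}\; e^{-i n \theta}\,
Z_{GC}(i\theta T,\, T).
```

    Because Z_GC is evaluated at imaginary μ, where the fermion determinant is real, the integrand is sign-problem free; the multi-precision arithmetic mentioned above is needed because the Zn(T) span many orders of magnitude.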

  12. Post-processing method for wind speed ensemble forecast using wind speed and direction

    NASA Astrophysics Data System (ADS)

    Sofie Eide, Siri; Bjørnar Bremnes, John; Steinsland, Ingelin

    2017-04-01

    Statistical methods are widely applied to enhance the quality of both deterministic and ensemble NWP forecasts. In many situations, like wind speed forecasting, most of the predictive information is contained in one variable of the NWP models. However, in statistical calibration of deterministic forecasts it is often seen that including more variables can further improve forecast skill. For ensembles this is rarely taken advantage of, mainly because it is generally not straightforward to include multiple variables. In this study, it is demonstrated how multiple variables can be included in Bayesian model averaging (BMA) by using a flexible regression method for estimating the conditional means. The method is applied to wind speed forecasting at 204 Norwegian stations based on wind speed and direction forecasts from the ECMWF ensemble system. At about 85% of the sites the ensemble forecasts were improved in terms of CRPS by adding wind direction as a predictor compared to using wind speed only. On average the improvements were about 5%, mainly for moderate to strong wind situations. For weak wind speeds adding wind direction had a more or less neutral impact.

  13. IDM-PhyChm-Ens: intelligent decision-making ensemble methodology for classification of human breast cancer using physicochemical properties of amino acids.

    PubMed

    Ali, Safdar; Majid, Abdul; Khan, Asifullah

    2014-04-01

    Development of an accurate and reliable intelligent decision-making method for the construction of cancer diagnosis system is one of the fast growing research areas of health sciences. Such decision-making system can provide adequate information for cancer diagnosis and drug discovery. Descriptors derived from physicochemical properties of protein sequences are very useful for classifying cancerous proteins. Recently, several interesting research studies have been reported on breast cancer classification. To this end, we propose the exploitation of the physicochemical properties of amino acids in protein primary sequences such as hydrophobicity (Hd) and hydrophilicity (Hb) for breast cancer classification. Hd and Hb properties of amino acids, in recent literature, are reported to be quite effective in characterizing the constituent amino acids and are used to study protein foldings, interactions, structures, and sequence-order effects. Especially, using these physicochemical properties, we observed that proline, serine, tyrosine, cysteine, arginine, and asparagine amino acids offer high discrimination between cancerous and healthy proteins. In addition, unlike traditional ensemble classification approaches, the proposed 'IDM-PhyChm-Ens' method was developed by combining the decision spaces of a specific classifier trained on different feature spaces. The different feature spaces used were amino acid composition, split amino acid composition, and pseudo amino acid composition. Consequently, we have exploited different feature spaces using Hd and Hb properties of amino acids to develop an accurate method for classification of cancerous protein sequences. We developed ensemble classifiers using diverse learning algorithms such as random forest (RF), support vector machines (SVM), and K-nearest neighbor (KNN) trained on different feature spaces. We observed that ensemble-RF, in the case of cancer classification, performed better than ensemble-SVM and ensemble-KNN.
Our analysis demonstrates that ensemble-RF, ensemble-SVM and ensemble-KNN are more effective than their individual counterparts. The proposed 'IDM-PhyChm-Ens' method has shown improved performance compared to existing techniques.
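    The simplest of the feature spaces named above, amino acid composition (AAC), is just the fraction of each of the 20 canonical residues in a sequence. A minimal sketch (our illustration, not the authors' implementation):

```python
# The 20 canonical amino acids in one-letter code.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def amino_acid_composition(seq):
    """Return the 20-dimensional composition vector of a protein sequence:
    the fraction of each canonical residue."""
    seq = seq.upper()
    n = len(seq)
    return [seq.count(aa) / n for aa in AMINO_ACIDS]

# toy usage on a short (hypothetical) peptide
feats = amino_acid_composition("MSPYLKR")
```

    Split amino acid composition applies the same computation to segments of the sequence, and pseudo amino acid composition augments it with sequence-order terms weighted by physicochemical indices such as Hd and Hb.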

  14. A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers

    PubMed Central

    2012-01-01

    Background Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? Results The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. 
One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Conclusion Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway. PMID:23216969
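    The two aggregation rules named above are straightforward to state in code. This is a minimal sketch; the cutoff and vote-count values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def average_probability(probs):
    """probs: (n_classifiers, n_samples) acute-rejection probabilities.
    The ensemble probability is the mean across classifiers."""
    return np.mean(probs, axis=0)

def vote_threshold(probs, prob_cut=0.5, votes_needed=3):
    """Each classifier votes 'rejection' if its probability exceeds
    prob_cut; the ensemble calls rejection if at least votes_needed
    classifiers agree."""
    votes = (probs > prob_cut).sum(axis=0)
    return votes >= votes_needed

# toy usage: 5 classifiers, 3 samples
probs = np.array([[0.9, 0.2, 0.6],
                  [0.8, 0.4, 0.4],
                  [0.7, 0.1, 0.8],
                  [0.6, 0.3, 0.2],
                  [0.9, 0.2, 0.7]])
p_avg = average_probability(probs)   # continuous score per sample
calls = vote_threshold(probs)        # binary rejection call per sample
```

    Averaging preserves a continuous score (suitable for AUC), whereas vote thresholding yields a binary call whose sensitivity/specificity trade-off is set by the two thresholds, consistent with the behavior reported above.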

  15. A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers.

    PubMed

    Günther, Oliver P; Chen, Virginia; Freue, Gabriela Cohen; Balshaw, Robert F; Tebbutt, Scott J; Hollander, Zsuzsanna; Takhar, Mandeep; McMaster, W Robert; McManus, Bruce M; Keown, Paul A; Ng, Raymond T

    2012-12-08

    Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. 
The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway.

  16. Avoiding the ensemble decorrelation problem using member-by-member post-processing

    NASA Astrophysics Data System (ADS)

    Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2014-05-01

Forecast calibration or post-processing has become a standard tool in atmospheric and climatological science due to the presence of systematic initial-condition and model errors. For ensemble forecasts the most competitive methods derive from the assumption of a fixed ensemble distribution. However, when such 'statistical' methods are applied independently at different locations, lead times or for multiple variables, the correlation structure of the individual ensemble members is destroyed. Rather than re-establishing the correlation structure as in Schefzik et al. (2013), we propose a calibration method that avoids the problem altogether by correcting each ensemble member individually. Moreover, we analyse the fundamental mechanisms by which the probabilistic ensemble skill can be enhanced. In terms of the continuous ranked probability score, our member-by-member approach yields a skill gain that extends to lead times far beyond the error doubling time and that matches the most competitive statistical approach, non-homogeneous Gaussian regression (Gneiting et al. 2005). Besides the conservation of the correlation structure, additional benefits arise, including the fact that higher-order ensemble moments like kurtosis and skewness are inherited from the uncorrected forecasts. Our detailed analysis is performed in the context of the Kuramoto-Sivashinsky equation and different simple models, but the results extend successfully to the ensemble forecasts of the European Centre for Medium-Range Weather Forecasts (Van Schaeybroeck and Vannitsem, 2013, 2014). References [1] Gneiting, T., Raftery, A. E., Westveld, A., Goldman, T., 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Weather Rev. 133, 1098-1118. [2] Schefzik, R., T.L. Thorarinsdottir, and T. Gneiting, 2013: Uncertainty Quantification in Complex Simulation Models Using Ensemble Copula Coupling. To appear in Statistical Science 28.
[3] Van Schaeybroeck, B., and S. Vannitsem, 2013: Reliable probabilities through statistical post-processing of ensemble forecasts. Proceedings of the European Conference on Complex Systems 2012, Springer proceedings on complexity, XVI, p. 347-352. [4] Van Schaeybroeck, B., and S. Vannitsem, 2014: Ensemble post-processing using member-by-member approaches: theoretical aspects, under review.
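A member-by-member correction of the kind advocated above can be sketched as follows: each member is shifted and scaled around the ensemble mean, so the rank and correlation structure across members survives the calibration. This is a minimal illustration under assumed conventions (a linear regression for the mean correction and a single spread-scaling factor), not the authors' implementation:

```python
import numpy as np

def mbm_calibrate(ens, obs):
    """Fit a simple member-by-member correction on a training set.
    ens: (n_times, n_members) raw ensemble; obs: (n_times,) observations.
    Each corrected member is alpha + beta*mean + tau*(member - mean),
    so the ordering of members within each forecast is preserved."""
    mean = ens.mean(axis=1)
    # regress obs on the ensemble mean for the shift/scale of the mean
    beta, alpha = np.polyfit(mean, obs, 1)
    # scale the member perturbations so spread matches residual variance
    resid_var = np.var(obs - (alpha + beta * mean))
    spread_var = np.var(ens - mean[:, None])
    tau = np.sqrt(resid_var / spread_var) if spread_var > 0 else 1.0
    return alpha, beta, tau

def mbm_apply(ens, alpha, beta, tau):
    """Apply the fitted correction to each ensemble member."""
    mean = ens.mean(axis=1, keepdims=True)
    return alpha + beta * mean + tau * (ens - mean)
```

Because the same affine map is applied to every member of a forecast, correlations between locations, lead times and variables carried by individual members are inherited from the raw ensemble.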

  17. The H2A-H2B dimeric kinetic intermediate is stabilized by widespread hydrophobic burial with few fully native interactions.

    PubMed

    Guyett, Paul J; Gloss, Lisa M

    2012-01-20

    The H2A-H2B histone heterodimer folds via monomeric and dimeric kinetic intermediates. Within ∼5 ms, the H2A and H2B polypeptides associate in a nearly diffusion limited reaction to form a dimeric ensemble, denoted I₂ and I₂*, the latter being a subpopulation characterized by a higher content of nonnative structure (NNS). The I₂ ensemble folds to the native heterodimer, N₂, through an observable, first-order kinetic phase. To determine the regions of structure in the I₂ ensemble, we characterized 26 Ala mutants of buried hydrophobic residues, spanning the three helices of the canonical histone folds of H2A and H2B and the H2B C-terminal helix. All but one targeted residue contributed significantly to the stability of I₂, the transition state and N₂; however, only residues in the hydrophobic core of the dimer interface perturbed the I₂* population. Destabilization of I₂* correlated with slower folding rates, implying that NNS is not a kinetic trap but rather accelerates folding. The pattern of Φ values indicated that residues forming intramolecular interactions in the peripheral helices contributed similar stability to I₂ and N₂, but residues involved in intermolecular interactions in the hydrophobic core are only partially folded in I₂. These findings suggest a dimerize-then-rearrange model. Residues throughout the histone fold contribute to the stability of I₂, but after the rapid dimerization reaction, the hydrophobic core of the dimer interface has few fully native interactions. In the transition state leading to N₂, more native-like interactions are developed and nonnative interactions are rearranged. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Conformational Ensembles of Calmodulin Revealed by Nonperturbing Site-Specific Vibrational Probe Groups.

    PubMed

    Kelly, Kristen L; Dalton, Shannon R; Wai, Rebecca B; Ramchandani, Kanika; Xu, Rosalind J; Linse, Sara; Londergan, Casey H

    2018-03-22

    Seven native residues on the regulatory protein calmodulin, including three key methionine residues, were replaced (one by one) by the vibrational probe amino acid cyanylated cysteine, which has a unique CN stretching vibration that reports on its local environment. Almost no perturbation was caused by this probe at any of the seven sites, as reported by CD spectra of calcium-bound and apo calmodulin and binding thermodynamics for the formation of a complex between calmodulin and a canonical target peptide from skeletal muscle myosin light chain kinase measured by isothermal titration. The surprising lack of perturbation suggests that this probe group could be applied directly in many protein-protein binding interfaces. The infrared absorption bands for the probe groups reported many dramatic changes in the probes' local environments as CaM went from apo- to calcium-saturated to target peptide-bound conditions, including large frequency shifts and a variety of line shapes from narrow (interpreted as a rigid and invariant local environment) to symmetric to broad and asymmetric (likely from multiple coexisting and dynamically exchanging structures). The fast intrinsic time scale of infrared spectroscopy means that the line shapes report directly on site-specific details of calmodulin's variable structural distribution. Though quantitative interpretation of the probe line shapes depends on a direct connection between simulated ensembles and experimental data that does not yet exist, formation of such a connection to data such as that reported here would provide a new way to evaluate conformational ensembles from data that directly contains the structural distribution. The calmodulin probe sites developed here will also be useful in evaluating the binding mode of calmodulin with many uncharacterized regulatory targets.

  19. Locally Weighted Ensemble Clustering.

    PubMed

    Huang, Dong; Wang, Chang-Dong; Lai, Jian-Huang

    2018-05-01

Due to its ability to combine multiple base clusterings into a probably better and more robust clustering, the ensemble clustering technique has been attracting increasing attention in recent years. Despite the significant success, one limitation of most existing ensemble clustering methods is that they generally treat all base clusterings equally regardless of their reliability, which makes them vulnerable to low-quality base clusterings. Although some efforts have been made to (globally) evaluate and weight the base clusterings, these methods tend to view each base clustering as an individual and neglect the local diversity of clusters inside the same base clustering. It remains an open problem how to evaluate the reliability of clusters and exploit the local diversity in the ensemble to enhance the consensus performance, especially when there is no access to data features or specific assumptions on data distribution. To address this, we propose a novel ensemble clustering approach based on ensemble-driven cluster uncertainty estimation and a local weighting strategy. In particular, the uncertainty of each cluster is estimated by considering the cluster labels in the entire ensemble via an entropic criterion. A novel ensemble-driven cluster validity measure is introduced, and a locally weighted co-association matrix is presented to serve as a summary for the ensemble of diverse clusters. With the local diversity in ensembles exploited, two novel consensus functions are further proposed. Extensive experiments on a variety of real-world datasets demonstrate the superiority of the proposed approach over the state-of-the-art.
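The entropic cluster-uncertainty criterion can be illustrated with a small sketch: a cluster is reliable when its points stay together in the other base clusterings and uncertain when they scatter. The function below is a hypothetical simplification of that idea, not the authors' code:

```python
import numpy as np

def cluster_uncertainty(cluster_points, other_labelings):
    """Entropy-style uncertainty of one cluster w.r.t. the rest of
    the ensemble: for each other base clustering, compute the entropy
    of the label distribution of this cluster's points, then average.
    cluster_points: index array of the cluster's members;
    other_labelings: list of label arrays from other base clusterings."""
    entropies = []
    for labels in other_labelings:
        sub = labels[cluster_points]
        _, counts = np.unique(sub, return_counts=True)
        p = counts / counts.sum()
        entropies.append(-(p * np.log2(p)).sum())
    return float(np.mean(entropies))

idx = np.arange(4)
# all four points share one label elsewhere -> fully reliable cluster
u_low = cluster_uncertainty(idx, [np.array([0, 0, 0, 0])])   # 0.0
# points split evenly between two labels -> 1 bit of uncertainty
u_high = cluster_uncertainty(idx, [np.array([0, 0, 1, 1])])  # 1.0
```

Down-weighting clusters with high uncertainty when building the co-association matrix is the "local weighting" the abstract refers to.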

  20. Parameterizations for ensemble Kalman inversion

    NASA Astrophysics Data System (ADS)

    Chada, Neil K.; Iglesias, Marco A.; Roininen, Lassi; Stuart, Andrew M.

    2018-05-01

    The use of ensemble methods to solve inverse problems is attractive because it is a derivative-free methodology which is also well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. Choice of the parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric ideas and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
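For background, the basic iterative form of the method, which keeps the ensemble in the linear span of the initial ensemble, can be sketched on a linear toy problem. This is a minimal illustration with assumed array shapes and a perturbed-observation update, not the parameterizations studied in the paper:

```python
import numpy as np

def eki_step(U, G, y, Gamma, rng):
    """One ensemble Kalman inversion update for data y = G(u) + noise.
    U: (J, d) ensemble of parameters; G: forward map R^d -> R^k;
    y: (k,) data; Gamma: (k, k) observational noise covariance."""
    J = U.shape[0]
    W = np.array([G(u) for u in U])           # (J, k) forward evaluations
    Um, Wm = U.mean(axis=0), W.mean(axis=0)
    Cuw = (U - Um).T @ (W - Wm) / J           # (d, k) cross-covariance
    Cww = (W - Wm).T @ (W - Wm) / J           # (k, k) prediction covariance
    K = Cuw @ np.linalg.inv(Cww + Gamma)      # Kalman-type gain
    # one perturbed observation per ensemble member
    Y = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    return U + (Y - W) @ K.T                  # derivative-free update

# linear toy problem: the ensemble mean should approach the solution
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5], [0.0, 2.0]])
u_true = np.array([1.0, -1.0])
y = A @ u_true
U = rng.normal(size=(200, 2))                 # initial ensemble spans R^2
for _ in range(20):
    U = eki_step(U, lambda u: A @ u, y, 1e-4 * np.eye(2), rng)
```

Note that no derivative of `G` appears anywhere, and each forward evaluation `G(u)` is independent, which is what makes the method well-adapted to parallelization.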

  1. Canons in Harmony, or Canons in Conflict: A Cultural Perspective on the Curriculum and Pedagogy of Jazz Improvization

    ERIC Educational Resources Information Center

    Prouty, Kenneth E.

    2004-01-01

    This essay examines how jazz educators construct methods for teaching the art of improvisation in institutionalized jazz studies programs. Unlike previous studies of the processes and philosophies of jazz instruction, I examine such processes from a cultural standpoint, to identify why certain methods might be favored over others. Specifically,…

  2. Integrating Multiple Criteria in Selection Procedures for Improving Student Quality and Reducing Cost Per Graduate. AIR Forum 1979 Paper.

    ERIC Educational Resources Information Center

    Jones, Gerald L.; Westen, Risdon J.

    The multivariate approach of canonical correlation was used to assess selection procedures of the Air Force Academy. It was felt that improved student selection methods might reduce the number of dropouts while maintaining or improving the quality of graduates. The method of canonical correlation was designed to maximize prediction of academic…

  3. New approach to information fusion for Lipschitz classifiers ensembles: Application in multi-channel C-OTDR-monitoring systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timofeev, Andrey V.; Egorov, Dmitry V.

This paper presents new results concerning the selection of an optimal information fusion formula for an ensemble of Lipschitz classifiers. The goal of information fusion is to create an integral classifier which provides better generalization ability for the ensemble while achieving a practically acceptable level of effectiveness. The problem of information fusion is highly relevant for data processing in multi-channel C-OTDR-monitoring systems, where targeted events appearing in the vicinity of the monitored object must be classified effectively. The solution of this problem is based on an ensemble of Lipschitz classifiers, each of which corresponds to a respective channel. We suggest a new method for information fusion for an ensemble of Lipschitz classifiers, called "Weighting Inversely as Lipschitz Constants" (WILC). Results of practical usage of the WILC method in multichannel C-OTDR monitoring systems are presented.

  4. Building Diversified Multiple Trees for classification in high dimensional noisy biomedical data.

    PubMed

    Li, Jiuyong; Liu, Lin; Liu, Jixue; Green, Ryan

    2017-12-01

It is common for a trained classification model to be applied to operating data that deviates from the training data because of noise. This paper tests an ensemble method, Diversified Multiple Tree (DMT), on its capability to classify instances in a new laboratory using a classifier built on instances from another laboratory. DMT is tested on three real-world biomedical data sets from different laboratories, in comparison with four benchmark ensemble methods: AdaBoost, Bagging, Random Forests, and Random Trees. Experiments have also been conducted to study the limitations of DMT and its possible variations. Experimental results show that DMT is significantly more accurate than the other benchmark ensemble classifiers in classifying new instances from a laboratory other than the one whose instances were used to build the classifier. This paper demonstrates that the ensemble classifier DMT is more robust in classifying noisy data than other widely used ensemble methods. DMT works on data sets that support building multiple simple trees.

  5. Probabilistic precipitation nowcasting based on an extrapolation of radar reflectivity and an ensemble approach

    NASA Astrophysics Data System (ADS)

    Sokol, Zbyněk; Mejsnar, Jan; Pop, Lukáš; Bližňák, Vojtěch

    2017-09-01

A new method for the probabilistic nowcasting of instantaneous rain rates (ENS), based on the ensemble technique and extrapolation along Lagrangian trajectories of current radar reflectivity, is presented. Assuming inaccurate forecasts of the trajectories, an ensemble of precipitation forecasts is calculated and used to estimate the probability that rain rates will exceed a given threshold at a given grid point. Although the extrapolation neglects the growth and decay of precipitation, their impact on the probability forecast is taken into account by calibrating the forecasts using the reliability component of the Brier score (BS). ENS forecasts the probability that rain rates will exceed thresholds of 0.1, 1.0 and 3.0 mm/h in squares of 3 km by 3 km. The lead times were up to 60 min, and the forecast accuracy was measured by the BS. The ENS forecasts were compared with two other methods: a combined method (COM) and a neighbourhood method (NEI). NEI considered the extrapolated values in the square neighbourhood of 5 by 5 grid points around the point of interest as ensemble members, and the COM ensemble comprised the united ensemble members of ENS and NEI. The results showed that the calibration technique significantly reduces the bias of the probability forecasts by including additional uncertainties that correspond to processes neglected during the extrapolation. In addition, the calibration can also be used to find the maximum lead times for which the forecasting method is useful. We found that ENS is useful for lead times up to 60 min for thresholds of 0.1 and 1 mm/h and approximately 30 to 40 min for a threshold of 3 mm/h. We also found that a reasonable ensemble size is 100 members, which provided better scores than ensembles with 10, 25 and 50 members. In terms of the BS, the best results were obtained by ENS and COM, which are comparable. However, ENS is better calibrated and thus preferable.
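The core probability estimate and its verification score are straightforward: the forecast probability at a grid point is the fraction of ensemble members exceeding the threshold, and the Brier score measures the squared error of those probabilities against the binary outcomes. A minimal sketch with illustrative values, not the study's data:

```python
import numpy as np

def exceedance_probability(ensemble_rates, threshold):
    """Probability forecast at one grid point: fraction of ensemble
    members whose extrapolated rain rate exceeds the threshold."""
    r = np.asarray(ensemble_rates, dtype=float)
    return float((r > threshold).mean())

def brier_score(p, o):
    """Brier score of probability forecasts p against binary outcomes o
    (lower is better; 0 is a perfect deterministic forecast)."""
    p, o = np.asarray(p, dtype=float), np.asarray(o, dtype=float)
    return float(np.mean((p - o) ** 2))

# five hypothetical members (mm/h) at one grid point, threshold 1.0 mm/h
p = exceedance_probability([0.0, 0.2, 1.5, 3.2, 0.8], threshold=1.0)  # 0.4
# verification over three forecast/outcome pairs
bs = brier_score([0.4, 0.9, 0.1], [0, 1, 0])                          # 0.06
```

Calibration as described above adjusts these raw probabilities so that the reliability component of the BS improves.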

  6. New technique for ensemble dressing combining Multimodel SuperEnsemble and precipitation PDF

    NASA Astrophysics Data System (ADS)

    Cane, D.; Milelli, M.

    2009-09-01

The Multimodel SuperEnsemble technique (Krishnamurti et al., Science 285, 1548-1550, 1999) is a post-processing method for the estimation of weather forecast parameters that reduces direct model output errors. It differs from other ensemble analysis techniques in its use of an adequate weighting of the input forecast models to obtain a combined estimation of meteorological parameters. Weights are calculated by least-square minimization of the difference between the model and the observed field during a so-called training period. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed and mean sea level pressure (Cane and Milelli, Meteorologische Zeitschrift, 15, 2, 2006), the Multimodel SuperEnsemble also gives good results when applied to precipitation, a parameter quite difficult to handle with standard post-processing methods. Here we present our methodology for Multimodel precipitation forecasts, applied to a wide spectrum of results over the very dense non-GTS weather station network of Piemonte. We will focus particularly on an accurate statistical method for bias correction and on ensemble dressing in agreement with the observed precipitation forecast-conditioned PDF. Acknowledgement: this work is supported by the Italian Civil Defence Department.
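The weighting step, least-squares minimization of the model-minus-observation difference over the training period, can be sketched as follows. This is a simplified illustration assuming forecasts and observations are already expressed as anomalies; it omits details of the full Krishnamurti et al. formulation:

```python
import numpy as np

def superensemble_weights(train_forecasts, train_obs):
    """Least-squares weights for a multimodel superensemble.
    train_forecasts: (n_times, n_models) forecast anomalies of each
    model during the training period; train_obs: (n_times,) observed
    anomalies. Returns one weight per model."""
    w, *_ = np.linalg.lstsq(train_forecasts, train_obs, rcond=None)
    return w

def superensemble_forecast(forecasts, w, climatology=0.0):
    """Combined forecast: climatology plus the weighted model anomalies."""
    return climatology + forecasts @ w
```

Unlike a plain ensemble mean, the weights can exceed 1 or go negative, so a model that is systematically biased but informative still contributes usefully.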

  7. Grand Canonical Investigation of the Quasi Liquid Layer of Ice: Is It Liquid?

    PubMed

    Pickering, Ignacio; Paleico, Martin; Sirkin, Yamila A Perez; Scherlis, Damian A; Factorovich, Matías H

    2018-05-10

In this study, the solid-vapor equilibrium and the quasi liquid layer (QLL) of ice Ih exposing the basal and primary prismatic faces were explored by means of grand canonical molecular dynamics simulations with the monatomic mW potential. For this model, the solid-vapor equilibrium was found to follow the Clausius-Clapeyron relation in the range examined, from 250 to 270 K, with a sublimation enthalpy ΔHsub of 50 kJ/mol, in excellent agreement with the experimental value. The phase diagram of the mW model was constructed for the low-pressure region around the triple point. The analysis of the crystallization dynamics during condensation and evaporation revealed that, for the basal face, both processes are highly activated; in particular, cubic ice is formed during condensation, producing stacking-disordered ice. The basal and primary prismatic surfaces of ice Ih were investigated at different temperatures and at their corresponding equilibrium vapor pressures. Our results show that the region known as the QLL can be interpreted as the outermost layers of the solid where partial melting takes place. Solid islands on the nanometer length scale are surrounded by interconnected liquid areas, generating a bidimensional nanophase segregation that spans the entire width of the outermost layer even at 250 K. Two approaches were adopted to quantify the QLL and are discussed in light of their ability to reflect this nanophase segregation phenomenon. Our results in the μVT ensemble were compared with NPT and NVT simulations for two system sizes. No significant differences were found between the results as a consequence of model system size or of the working ensemble. Nevertheless, certain advantages of performing μVT simulations in order to reproduce the experimental situation are highlighted. On the one hand, the QLL thickness measured out of equilibrium might be affected by crystallization being slower than condensation. On the other hand, preliminary simulations of AFM indentation experiments show that the tip can induce capillary condensation over the ice surface, enlarging the apparent QLL.

  8. Is Polar Amplification Deeper and Stronger than Dynamicists Assume?

    NASA Astrophysics Data System (ADS)

    Scheff, J.; Maroon, E.

    2017-12-01

In the CMIP multi-model mean under strong future warming, Arctic amplification is confined to the lower troposphere, so that the meridional gradient of warming reverses around 500 mb and the upper troposphere is characterized by strong "tropical amplification" in which warming weakens with increasing latitude. This model-derived pattern of warming maxima in the upper-level tropics and lower-level Arctic has become a canonical assumption driving theories of the large-scale circulation response to climate change. Yet, several lines of evidence and reasoning suggest that Arctic amplification may in fact extend through the entire depth of the troposphere, and/or may be stronger than commonly modeled. These include satellite Microwave Sounding Unit (MSU) temperature trends as a function of latitude and vertical level, the recent discovery that the extratropical negative cloud phase feedback in models is largely spurious, and the very strong polar amplification observed in past warm and lukewarm climates. Such a warming pattern, with deep, dominant Arctic amplification, would have very different implications for the circulation than a canonical CMIP-like warming: instead of slightly shifting poleward and strengthening, eddies, jets and cells might shift equatorward and considerably weaken. Indeed, surface winds have been mysteriously weakening ("stilling") at almost all stations over the last half-century or so, there has been no poleward shift in northern hemisphere circulation metrics, and past warm climates' subtropics were apparently quite wet (and their global ocean circulations were weak). To explore these possibilities more deeply, we examine the y-z structure of warming and circulation changes across a much broader range of models, scenarios and time periods than the CMIP future mean, and use an MSU simulator to compare them to the satellite warming record.
Specifically, we examine whether the use of historical (rather than future) forcing, AMIP (rather than CMIP) configuration, individual GCMs, and/or individual ensemble members can better reproduce the structure of the MSU and surface-wind observations. Figure 1 already shows that tropical amplification is absent in the CESM1 historical ensemble (1979-2012). The results of these analyses will guide our future modeling work on these topics.

  9. Ensemble variant interpretation methods to predict enzyme activity and assign pathogenicity in the CAGI4 NAGLU (Human N-acetyl-glucosaminidase) and UBE2I (Human SUMO-ligase) challenges.

    PubMed

    Yin, Yizhou; Kundu, Kunal; Pal, Lipika R; Moult, John

    2017-09-01

CAGI (Critical Assessment of Genome Interpretation) conducts community experiments to determine the state of the art in relating genotype to phenotype. Here, we report results obtained using newly developed ensemble methods to address two CAGI4 challenges: enzyme activity for population missense variants found in NAGLU (Human N-acetyl-glucosaminidase) and random missense mutations in UBE2I (Human SUMO E2 ligase), assayed in a high-throughput competitive yeast complementation procedure. The ensemble methods are effective, ranking second for SUMO-ligase and third for NAGLU according to the CAGI independent assessors. However, in common with other methods used in CAGI, there are large discrepancies between predicted and experimental activities for a subset of variants; analysis of the structural context provides some insight into these. Post-challenge analysis shows that the ensemble methods are also effective at assigning pathogenicity for the NAGLU variants. In the clinic, providing an estimate of the reliability of pathogenicity assignments is key. We have also used the NAGLU dataset to show that ensemble methods have considerable potential for this task and are already reliable enough for use with a subset of mutations. © 2017 Wiley Periodicals, Inc.

  10. Construction Method of Analytical Solutions to the Mathematical Physics Boundary Problems for Non-Canonical Domains

    NASA Astrophysics Data System (ADS)

    Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.

    2015-06-01

The majority of practical acoustics problems require solving boundary problems in non-canonical domains. The construction of analytical solutions of mathematical physics boundary problems for non-canonical domains is therefore both valuable from the academic viewpoint and instrumental for the elaboration of efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solution strategies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems whose domains can be constructed as the union of canonically shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. However, this approach entails some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined on the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods to overcome them.

  11. Instanton rate constant calculations close to and above the crossover temperature.

    PubMed

    McConnell, Sean; Kästner, Johannes

    2017-11-15

Canonical instanton theory is known to overestimate the rate constant close to a system-dependent crossover temperature and is inapplicable above that temperature. We compare the accuracy of reaction rate constants calculated using recent semi-classical rate expressions to those from canonical instanton theory. We show that rate constants calculated purely by solving the stability matrix for the action in degrees of freedom orthogonal to the instanton path are not applicable at arbitrarily low temperatures, and we use two methods to overcome this. Furthermore, as a by-product of the developed methods, we derive a simple correction to canonical instanton theory that can alleviate the known overestimation of rate constants close to the crossover temperature. The combined methods accurately reproduce the rate constants of the canonical theory along the whole temperature range without the spurious overestimation near the crossover temperature. We calculate and compare rate constants for three different reactions: H in the Müller-Brown potential, methylhydroxycarbene → acetaldehyde, and H2 + OH → H + H2O. © 2017 Wiley Periodicals, Inc.

  12. Metal Oxide Gas Sensor Drift Compensation Using a Two-Dimensional Classifier Ensemble

    PubMed Central

    Liu, Hang; Chu, Renzhi; Tang, Zhenan

    2015-01-01

    Sensor drift is the most challenging problem in gas sensing at present. We propose a novel two-dimensional classifier ensemble strategy to solve the gas discrimination problem, regardless of the gas concentration, with high accuracy over extended periods of time. This strategy is appropriate for multi-class classifiers that consist of combinations of pairwise classifiers, such as support vector machines. We compare the performance of the strategy with those of competing methods in an experiment based on a public dataset that was compiled over a period of three years. The experimental results demonstrate that the two-dimensional ensemble outperforms the other methods considered. Furthermore, we propose a pre-aging process inspired by that applied to the sensors to improve the stability of the classifier ensemble. The experimental results demonstrate that the weight of each multi-class classifier model in the ensemble remains fairly static before and after the addition of new classifier models to the ensemble, when a pre-aging procedure is applied. PMID:25942640

  13. Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics

    NASA Astrophysics Data System (ADS)

    Lazarus, S. M.; Holman, B. P.; Splitt, M. E.

    2017-12-01

    A computationally efficient method is developed that performs gridded post processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations are generated to provide physically consistent high resolution winds over a coastal domain characterized by an intricate land / water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal and 28 east central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both the deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicate the post processed forecasts are calibrated. Two downscaling case studies are presented, a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.

  14. Pairwise Classifier Ensemble with Adaptive Sub-Classifiers for fMRI Pattern Analysis.

    PubMed

    Kim, Eunwoo; Park, HyunWook

    2017-02-01

    The multi-voxel pattern analysis technique is applied to fMRI data for classification of high-level brain functions using pattern information distributed over multiple voxels. In this paper, we propose a classifier ensemble for multiclass classification in fMRI analysis, exploiting the fact that specific neighboring voxels can contain spatial pattern information. The proposed method converts the multiclass classification to a pairwise classifier ensemble, and each pairwise classifier consists of multiple sub-classifiers using an adaptive feature set for each class-pair. Simulated and real fMRI data were used to verify the proposed method. Intra- and inter-subject analyses were performed to compare the proposed method with several well-known classifiers, including single and ensemble classifiers. The comparison results showed that the proposed method can be generally applied to multiclass classification in both simulations and real fMRI analyses.

  15. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

Complementary ensemble empirical mode decomposition (CEEMD) has been developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need a sufficiently large ensemble number to reduce the residue noise, which incurs a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method. The results demonstrate that the modified CEEMD method can decompose the signal efficiently at a lower computational cost, and that the IMF evaluation index can select the meaningful IMFs automatically.

  16. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    PubMed Central

    DelSole, T.; Tippett, M.K.; Pegion, K.

    2018-01-01

The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles are found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities. PMID:29937973

  17. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    NASA Astrophysics Data System (ADS)

    Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.

    2018-04-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles is found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.
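
    The extrapolation idea behind this study can be sketched as follows: once the error covariance among lagged members is described by a parametric model, the MSE of the equally weighted lagged-ensemble mean of size m is the average of all m × m covariance entries, and can be evaluated for any m. The covariance model below (variance growing linearly with lag, constant inter-member correlation) is a hypothetical stand-in for the model actually fitted in the paper:

```python
import numpy as np

# Hypothetical parametric error-covariance model for lagged members:
# member i is initialized i days earlier, its error variance grows
# linearly with lag, and all member errors share a common correlation.
var0, growth, rho = 1.0, 0.15, 0.5
max_size = 20

def lagged_mse(m):
    """MSE of the equally weighted mean of the m most recent members."""
    v = var0 * (1.0 + growth * np.arange(m))   # per-member variances
    sd = np.sqrt(v)
    C = rho * np.outer(sd, sd)                 # off-diagonal covariances
    np.fill_diagonal(C, v)
    return C.sum() / m**2                      # w'Cw with w = 1/m

mse = np.array([lagged_mse(m) for m in range(1, max_size + 1)])
best_m = int(np.argmin(mse)) + 1
print(best_m)  # interior minimum: averaging helps until old lags dominate
```

    With these illustrative parameters the minimum falls at a small interior ensemble size, mirroring the one-to-eight-day optimum reported in the abstract.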

  18. Project FIRES. Volume 1: Program Overview and Summary, Phase 1B

    NASA Technical Reports Server (NTRS)

    Abeles, F. J.

    1980-01-01

    Overall performance requirements and evaluation methods for firefighters' protective equipment were established and published as the Protective Ensemble Performance Standards (PEPS). Current firefighters' protective equipment was tested and evaluated against the PEPS requirements, and the preliminary design of a prototype protective ensemble was performed. In Phase 1B, the design of the prototype ensemble was finalized. Prototype ensembles were fabricated and then subjected to a series of qualification tests based upon the PEPS requirements. Engineering drawings and purchase specifications were prepared for the new protective ensemble.

  19. Boronlectin/Polyelectrolyte Ensembles as Artificial Tongue: Design, Construction, and Application for Discriminative Sensing of Complex Glycoconjugates from Panax ginseng.

    PubMed

    Zhang, Xiao-Tai; Wang, Shu; Xing, Guo-Wen

    2017-02-01

    Ginsenosides are a large family of triterpenoid saponins from Panax ginseng that possess various important biological functions. Because the structures of these complex glycoconjugates are very similar, it is crucial to develop a powerful analytical method to identify ginsenosides qualitatively and quantitatively. We herein report an eight-channel fluorescent sensor array, acting as an artificial tongue, to achieve the discriminative sensing of ginsenosides. The fluorescent cross-responsive array was constructed from four boronlectins bearing flexible boronic acid moieties (FBAs) with multiple reactive sites and two linear poly(phenylene-ethynylene)s (PPEs). An "on-off-on" response pattern was afforded on the basis of superquenching of the fluorescent indicator PPEs and an analyte-induced allosteric indicator displacement (AID) process. Most importantly, it was found that the canonical distribution of ginsenoside data points analyzed by linear discriminant analysis (LDA) was highly correlated with the inherent molecular structures of the analytes, and the absence of overlaps among the five point groups reflected the effectiveness of the sensor array in the discrimination process. Almost all of the unknown ginsenoside samples at different concentrations were correctly identified on the basis of the established mathematical model. Our work provides a general and constructive method to improve the quality assessment and control of ginseng and its extracts, which is useful for further discriminating other complex glycoconjugate families.

  20. Thermodynamic DFT analysis of natural gas.

    PubMed

    Neto, Abel F G; Huda, Muhammad N; Marques, Francisco C; Borges, Rosivaldo S; Neto, Antonio M J C

    2017-08-01

    Density functional theory calculations were performed for thermodynamic predictions on natural gas, using the B3LYP/6-311++G(d,p), B3LYP/6-31+G(d), CBS-QB3, G3, and G4 methods. Additionally, we carried out thermodynamic predictions using the G3/G4 average. The calculations were performed for each major component of seven kinds of natural gas and for their respective air + natural gas mixtures at thermal equilibrium, between room temperature and the initial temperature of a combustion chamber during the injection stage. The following thermodynamic properties were obtained: internal energy, enthalpy, Gibbs free energy, and entropy, which enabled us to investigate the thermal resistance of the fuels. We also estimated an important parameter, the specific heat ratio of each natural gas; this allowed us to compare the results with empirical functions of these parameters, where the B3LYP/6-311++G(d,p) and G3/G4 methods showed better agreement. In addition, relevant information on the thermal and mechanical resistance of natural gases was obtained, as well as the standard thermodynamic properties for the combustion of natural gas. Thus, we show that density functional theory can be useful for predicting the thermodynamic properties of natural gas, enabling the production of more efficient compositions for the investigated fuels. Graphical abstract: Investigation of the thermodynamic properties of natural gas through the canonical ensemble model and density functional theory.
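
    The specific heat ratio mentioned above follows, in the ideal-gas limit, from γ = Cp/Cv with Cv = Cp − R for the mixture. A hedged sketch using illustrative room-temperature Cp values (the composition and the Cp figures below are assumptions for demonstration, not the paper's data):

```python
# Ideal-gas estimate of the specific heat ratio of a natural gas mixture.
R = 8.314  # J/(mol K)

# Illustrative molar heat capacities Cp at ~298 K, J/(mol K)
# (approximate literature values; not taken from the paper).
cp = {"CH4": 35.7, "C2H6": 52.5, "N2": 29.1}

# Hypothetical composition: a lean natural gas.
x = {"CH4": 0.90, "C2H6": 0.05, "N2": 0.05}

cp_mix = sum(x[s] * cp[s] for s in x)  # mole-fraction-weighted Cp
cv_mix = cp_mix - R                    # ideal-gas relation Cv = Cp - R
gamma = cp_mix / cv_mix
print(round(gamma, 3))
```

    For a methane-dominated mixture this lands near the familiar γ ≈ 1.3 of methane.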

  1. Thermodynamics of novel charged dilatonic BTZ black holes

    NASA Astrophysics Data System (ADS)

    Dehghani, M.

    2017-10-01

    In this paper, the three-dimensional Einstein-Maxwell theory in the presence of a dilatonic scalar field has been studied. It has been shown that the dilatonic potential must be considered as the linear combination of two Liouville-type potentials. Two new classes of charged dilatonic BTZ black holes, as the exact solutions to the coupled scalar, vector and tensor field equations, have been obtained and their properties have been studied. The conserved charge and mass of the new black holes have been calculated, making use of Gauss's law and the Abbott-Deser proposal, respectively. Through comparison of the thermodynamical extensive quantities (i.e. temperature and entropy) obtained from both the geometrical and the thermodynamical methods, the validity of the first law of black hole thermodynamics has been confirmed for both of the new black hole solutions. A thermal stability and phase transition analysis has been performed, making use of the canonical ensemble method. Regarding the black hole heat capacity, it has been found that for either of the new solutions there are specific ranges of the horizon radius within which the black holes are locally stable. The points of type-one and type-two phase transitions have been determined. Black holes with a horizon radius equal to the transition points are unstable; they undergo type-one or type-two phase transitions to be stabilized.
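
    The canonical-ensemble stability analysis described here rests on the sign of the heat capacity at fixed charge. As a hedged sketch of the standard criterion (the paper's exact notation may differ; here $Q$ is the black hole charge and $r_+$ the horizon radius):

```latex
C_Q \;=\; T \left( \frac{\partial S}{\partial T} \right)_{Q}
     \;=\; T \, \frac{\partial S / \partial r_+}{\partial T / \partial r_+}
```

    In the usual Davies-type classification, type-one transition points are roots of $T(r_+) = 0$, where $C_Q$ changes sign through zero, and type-two transition points are roots of $\partial T / \partial r_+ = 0$, where $C_Q$ diverges; the black hole is locally stable wherever $C_Q > 0$.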

  2. Multicollinearity in canonical correlation analysis in maize.

    PubMed

    Alves, B M; Cargnelutti Filho, A; Burin, C

    2017-03-30

    The objective of this study was to evaluate the effects of multicollinearity under two methods of canonical correlation analysis (with and without elimination of variables) in maize (Zea mays L.) crop. Seventy-six maize genotypes were evaluated in three experiments, conducted in a randomized block design with three replications, during the 2009/2010 crop season. Eleven agronomic variables (number of days from sowing until female flowering, number of days from sowing until male flowering, plant height, ear insertion height, ear placement, number of plants, number of ears, ear index, ear weight, grain yield, and one thousand grain weight), 12 protein-nutritional variables (crude protein, lysine, methionine, cysteine, threonine, tryptophan, valine, isoleucine, leucine, phenylalanine, histidine, and arginine), and 6 energetic-nutritional variables (apparent metabolizable energy, apparent metabolizable energy corrected for nitrogen, ether extract, crude fiber, starch, and amylose) were measured. A phenotypic correlation matrix was first generated among the 29 variables for each of the experiments. A multicollinearity diagnosis was later performed within each group of variables using methodologies such as variance inflation factor and condition number. Canonical correlation analysis was then performed, with and without the elimination of variables, among groups of agronomic and protein-nutritional, and agronomic and energetic-nutritional variables. The canonical correlation analysis in the presence of multicollinearity (without elimination of variables) overestimates the variability of canonical coefficients. The elimination of variables is an efficient method to circumvent multicollinearity in canonical correlation analysis.

  3. New technologies for examining the role of neuronal ensembles in drug addiction and fear.

    PubMed

    Cruz, Fabio C; Koya, Eisuke; Guez-Barber, Danielle H; Bossert, Jennifer M; Lupica, Carl R; Shaham, Yavin; Hope, Bruce T

    2013-11-01

    Correlational data suggest that learned associations are encoded within neuronal ensembles. However, it has been difficult to prove that neuronal ensembles mediate learned behaviours because traditional pharmacological and lesion methods, and even newer cell type-specific methods, affect both activated and non-activated neurons. In addition, previous studies on synaptic and molecular alterations induced by learning did not distinguish between behaviourally activated and non-activated neurons. Here, we describe three new approaches--Daun02 inactivation, FACS sorting of activated neurons and Fos-GFP transgenic rats--that have been used to selectively target and study activated neuronal ensembles in models of conditioned drug effects and relapse. We also describe two new tools--Fos-tTA transgenic mice and inactivation of CREB-overexpressing neurons--that have been used to study the role of neuronal ensembles in conditioned fear.

  4. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
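
    The equivalent-ensemble construction described above can be sketched in a few lines: segment a long time history into K statistically independent sample records and check that the equivalent-ensemble average is invariant in time. A minimal sketch with synthetic white noise (the record length, record count, and acceptance threshold are illustrative choices, not the authors' values):

```python
import numpy as np

rng = np.random.default_rng(42)

# Long time history of a (stationary) random process: white noise here.
record_len = 256   # length L of each sample record
n_records = 64     # number of statistically independent records K
x = rng.normal(size=record_len * n_records)

# Equivalent ensemble: segment the history into K equal records.
ensemble = x.reshape(n_records, record_len)

# Equivalent-ensemble average at each within-record time index.
ens_mean = ensemble.mean(axis=0)

# Weak stationarity check: the ensemble average should not drift with
# time, so its variance across time indices should be small compared
# with the variance of the process itself (about 1/K for independent
# records).
ratio = ens_mean.var() / x.var()
print(ratio)
```

    A drifting (nonstationary) process would inflate `ratio` well above the 1/K level expected for a stationary one.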

  5. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.

  6. Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry

    2014-07-01

    Ensemble Kalman filter (EnKF), as a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of a large ensemble size, EnKF is limited to small ensemble sets in practice. This results in the appearance of spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered for thresholding the forecast covariance and gain matrices: hard, soft, lasso and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding cases (in petroleum reservoirs) with different levels of heterogeneity/nonlinearity. Besides the adaptive thresholding, the standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is more robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and that it should be applied judiciously during the early assimilation cycles. The proposed scheme of adaptive thresholding outperforms the other methods for subsurface characterization of the underlying benchmarks.
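
    Three of the four thresholding functions named above admit compact closed forms. A sketch (hard, soft, and SCAD with its conventional a = 3.7) applied entrywise to a forecast covariance while preserving the variances on the diagonal; the lasso variant and the adaptive choice of the threshold λ are omitted:

```python
import numpy as np

def hard(x, lam):
    """Hard thresholding: keep entries with |x| > lam, zero the rest."""
    return np.where(np.abs(x) > lam, x, 0.0)

def soft(x, lam):
    """Soft thresholding: shrink entries toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def scad(x, lam, a=3.7):
    """Smoothly Clipped Absolute Deviation: soft near zero, linear
    interpolation in the mid range, identity for large |x|."""
    x = np.asarray(x, dtype=float)
    absx = np.abs(x)
    return np.where(absx <= 2 * lam, soft(x, lam),
           np.where(absx <= a * lam,
                    ((a - 1) * x - np.sign(x) * a * lam) / (a - 2),
                    x))

def threshold_covariance(C, lam, fn=soft):
    """Regularize a forecast covariance: threshold off-diagonal entries
    only, leaving the variances (diagonal) untouched."""
    C = np.asarray(C, dtype=float)
    out = fn(C, lam)
    np.fill_diagonal(out, np.diag(C))
    return out

C = np.array([[2.0, 0.3],
              [0.3, 1.0]])
print(threshold_covariance(C, 0.5))  # weak off-diagonal entry removed
```

    SCAD's appeal, reflected in the paper's results, is that it shrinks small (likely spurious) entries like soft thresholding while leaving large entries unbiased.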

  7. Fine-Tuning Your Ensemble's Jazz Style.

    ERIC Educational Resources Information Center

    Garcia, Antonio J.

    1991-01-01

    Proposes instructional strategies for directors of jazz groups, including guidelines for developing of skills necessary for good performance. Includes effective methods for positive changes in ensemble style. Addresses jazz group problems such as beat, tempo, staying in tune, wind power, and solo/ensemble lines. Discusses percussionists, bassists,…

  8. Heterogeneous Ensemble Combination Search Using Genetic Algorithm for Class Imbalanced Data Classification.

    PubMed

    Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo

    2016-01-01

    Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble depends on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, utilises 10-fold cross-validation on training data for evaluating the quality of each candidate ensemble. In order to combine the base classifiers' decisions into the ensemble's output, we used the simple and widely used majority voting approach. The proposed algorithm, along with a random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β) - k Feature Set method to select a better subset of features for classification. We have tested GA-EoC with three benchmark datasets from the UCI Machine Learning repository, one Alzheimer's disease dataset and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study, we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction, and we expect that the proposed GA-EoC would perform consistently in other cases.
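
    The search loop of a genetic algorithm over classifier subsets with majority-vote fitness can be sketched compactly. Everything below (the toy prediction matrix, population size, truncation selection, and mutation rate) is an illustrative stand-in for GA-EoC's actual configuration, which evaluates fitness by 10-fold cross-validation:

```python
import numpy as np

rng = np.random.default_rng(7)

# --- Toy base-classifier predictions (binary labels) -------------------
# Three complementary "good" classifiers (80% each, disjoint errors) and
# two weak ones. Majority voting over the good trio is perfect.
n = 90
y = rng.integers(0, 2, size=n)
preds = np.tile(y, (5, 1))
preds[0, :18] ^= 1       # clf 0 errs on samples 0..17
preds[1, 18:36] ^= 1     # clf 1 errs on samples 18..35
preds[2, 36:54] ^= 1     # clf 2 errs on samples 36..53
weak = rng.random((2, n)) < 0.45
preds[3:] ^= weak.astype(int)   # clf 3 and 4: ~45% of labels flipped

def fitness(mask):
    """Accuracy of the majority vote over the selected classifiers."""
    if not mask.any():
        return 0.0
    votes = preds[mask].mean(axis=0) > 0.5
    return float((votes == y).mean())

# --- Minimal genetic search over classifier subsets --------------------
pop = [rng.random(5) < 0.5 for _ in range(16)]
pop += [np.eye(5, dtype=bool)[i] for i in range(5)]  # seed singletons
for _ in range(30):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:8]                             # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = rng.choice(8, size=2, replace=False)
        cut = rng.integers(1, 5)                     # one-point crossover
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])
        child ^= rng.random(5) < 0.1                 # bit-flip mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(best.astype(int), fitness(best))
```

    Because the parents are carried over unchanged, the search is elitist: the best subset found never degrades, and on this toy problem it matches or beats the best single classifier.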

  9. Quantum Gibbs ensemble Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fantoni, Riccardo, E-mail: rfantoni@ts.infn.it; Moroni, Saverio, E-mail: moroni@democritos.it

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.

  10. Ensemble based adaptive over-sampling method for imbalanced data learning in computer aided detection of microaneurysm.

    PubMed

    Ren, Fulong; Cao, Peng; Li, Wei; Zhao, Dazhe; Zaiane, Osmar

    2017-01-01

    Diabetic retinopathy (DR) is a progressive disease, and its detection at an early stage is crucial for saving a patient's vision. An automated screening system for DR can help reduce the chance of complete blindness due to DR while lowering the workload on ophthalmologists. Among the earliest signs of DR are microaneurysms (MAs). However, current schemes for MA detection tend to report many false positives because the detection algorithms have high sensitivity: inevitably some non-MA structures are labeled as MAs in the initial MA identification step. This is a typical "class imbalance problem", and class-imbalanced data has detrimental effects on the performance of conventional classifiers. In this work, we propose an ensemble-based adaptive over-sampling algorithm for overcoming the class imbalance problem in false positive reduction, and we use Boosting, Bagging and Random Subspace as the ensemble frameworks to improve microaneurysm detection. The proposed ensemble-based over-sampling methods combine the strengths of adaptive over-sampling and ensembling. The objective of this amalgamation is to reduce the induction bias introduced by imbalanced data and to enhance the generalization performance of extreme learning machines (ELM). Experimental results show that our ASOBoost method achieves a higher area under the ROC curve (AUC) and higher G-mean values than many existing class imbalance learning methods.
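
    The balancing step the paper builds on can be illustrated with plain random over-sampling of the minority class; the paper's adaptive variant additionally weights candidates by how hard they are to classify, which is omitted here. A minimal sketch (the class sizes and feature dimension are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_oversample(X, y, rng):
    """Balance a binary dataset by re-sampling the minority class with
    replacement until both classes have equal counts."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    idx = np.flatnonzero(y == minority)
    extra = rng.choice(idx, size=deficit, replace=True)
    X_bal = np.vstack([X, X[extra]])
    y_bal = np.concatenate([y, y[extra]])
    return X_bal, y_bal

# 100 non-MA candidates vs 10 true MAs: a typical imbalance ratio.
X = rng.normal(size=(110, 3))
y = np.array([0] * 100 + [1] * 10)
X_bal, y_bal = random_oversample(X, y, rng)
print(np.bincount(y_bal))  # → [100 100]
```

    In the ensemble frameworks named above, a fresh over-sampled set would be drawn for each base learner, so the duplicated minority samples differ across the ensemble.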

  11. Comparison of surface freshwater fluxes from different climate forecasts produced through different ensemble generation schemes.

    NASA Astrophysics Data System (ADS)

    Romanova, Vanya; Hense, Andreas; Wahl, Sabrina; Brune, Sebastian; Baehr, Johanna

    2016-04-01

    The decadal variability and predictability of the surface net freshwater fluxes are compared in a set of retrospective predictions, all using the same model setup and differing only in the implemented ocean initialisation and ensemble generation methods. The basic aim is to deduce the differences between the initialisation/ensemble generation methods in view of the uncertainty of the verifying observational data sets. The analysis gives an approximation of the uncertainties of the net freshwater fluxes, which up to now appear to be one of the most uncertain products in observational data and model outputs. All ensemble generation methods are implemented in the MPI-ESM earth system model in the framework of the ongoing MiKlip project (www.fona-miklip.de). Hindcast experiments are initialised annually between 2000-2004, and from each start year 10 ensemble members are initialised for 5 years each. Four different ensemble generation methods are compared: (i) a method based on the Anomaly Transform method (Romanova and Hense, 2015), in which the initial oceanic perturbations represent orthogonal and balanced anomaly structures in space and time and between the variables, taken from a control run; (ii) one-day-lagged ocean states from the MPI-ESM-LR baseline system; (iii) one-day-lagged ocean and atmospheric states with preceding full-field nudging to re-analysis in both the atmospheric and the oceanic component of the system (the baseline MPI-ESM-LR system); (iv) an Ensemble Kalman Filter (EnKF) implemented in the oceanic part of MPI-ESM (Brune et al. 2015), assimilating monthly subsurface oceanic temperature and salinity (EN3) using the Parallel Data Assimilation Framework (PDAF). The hindcasts are evaluated probabilistically using freshwater flux data from four different reanalysis data sets: MERRA, NCEP-R1, the GFDL ocean reanalysis and GECCO2. The assessments show no clear differences in the evaluation scores on regional scales. However, on the global scale the physically motivated methods (i) and (iv) provide probabilistic hindcasts with a consistently higher reliability than the lagged initialisation methods (ii)/(iii), despite the large uncertainties in the verifying observations and in the simulations.

  12. Efficient free energy calculations by combining two complementary tempering sampling methods.

    PubMed

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-14

    Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from the difficulty of identifying the correct RCs or requires a high dimensionality of the defined RCs for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height may exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden-barrier situation, we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD, and the sampling of the remaining DOFs with lower but non-negligible barriers is enhanced by ITS. The performance of ITS-TAMD has been examined on three systems whose processes involve hidden barriers. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least fivefold, even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of the major RCs suggests that the dimensionality of the necessary RCs can be reduced. Our work shows the potential of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.

  13. Efficient free energy calculations by combining two complementary tempering sampling methods

    NASA Astrophysics Data System (ADS)

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-01

    Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from the difficulty of identifying the correct RCs or requires a high dimensionality of the defined RCs for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height may exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden-barrier situation, we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD, and the sampling of the remaining DOFs with lower but non-negligible barriers is enhanced by ITS. The performance of ITS-TAMD has been examined on three systems whose processes involve hidden barriers. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least fivefold, even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of the major RCs suggests that the dimensionality of the necessary RCs can be reduced. Our work shows the potential of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.

  14. Wang-Landau Reaction Ensemble Method: Simulation of Weak Polyelectrolytes and General Acid-Base Reactions.

    PubMed

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-02-14

    We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides sufficient statistical accuracy that meaningful estimates for the density of states and the partition sum can be obtained. From these estimates, several thermodynamic observables, such as the heat capacity or reaction free energies, can be calculated. We demonstrate that the computation times for the calculation of titration curves with high statistical accuracy can be significantly decreased compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
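
    The Wang-Landau half of the method estimates the density of states by biasing a random walk toward rarely visited energies until the visit histogram is flat. A toy sketch on a system of non-interacting two-state spins (not the paper's polyelectrolyte reaction ensemble), where the exact density of states is the binomial coefficient; for brevity the flatness check is replaced by a fixed number of sweeps per modification-factor level, which is an assumption of this sketch:

```python
import math
import random

random.seed(1)

N = 10                   # non-interacting two-state spins
# Energy label: number of "down" spins, 0..N. The exact density of
# states is the binomial coefficient C(N, k).
log_g = [0.0] * (N + 1)  # running estimate of ln g(E)
hist = [0] * (N + 1)
f = 1.0                  # ln of the Wang-Landau modification factor

state = [0] * N          # 0 = up, 1 = down
E = 0

while f > 1e-4:
    for _ in range(20000):
        i = random.randrange(N)
        E_new = E + (1 if state[i] == 0 else -1)
        # Wang-Landau acceptance: min(1, g(E)/g(E_new)) favours
        # rarely visited energies.
        if math.log(random.random()) < log_g[E] - log_g[E_new]:
            state[i] ^= 1
            E = E_new
        log_g[E] += f
        hist[E] += 1
    # Histogram assumed roughly flat -> halve the modification factor.
    hist = [0] * (N + 1)
    f /= 2.0

# Compare DOS ratios with binomial coefficients (up to normalization).
est = math.exp(log_g[5] - log_g[0])
exact = math.comb(10, 5)  # = 252
print(est, exact)
```

    Only differences of `log_g` are meaningful, so the comparison is made through the ratio g(5)/g(0); in the full method, such ratios feed directly into partition-sum and free-energy estimates.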

  15. Overlapped Partitioning for Ensemble Classifiers of P300-Based Brain-Computer Interfaces

    PubMed Central

    Onishi, Akinari; Natsume, Kiyohisa

    2014-01-01

    A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and their classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15,300 samples). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training samples. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning, and that the ensemble classifier with overlapped partitioning requires less training data than one with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance. PMID:24695550
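
    The contrast with naive (disjoint) partitioning can be sketched by building subsets as a circular sliding window over shuffled training indices, so that neighbouring subsets share samples. This construction is one plausible reading of "overlapped partitioning"; the paper's exact scheme may differ, and the sizes below are illustrative:

```python
import numpy as np

def overlapped_partitions(n, n_subsets, subset_size, rng):
    """Split n training samples into n_subsets subsets of subset_size,
    laid out as a circular sliding window so neighbouring subsets share
    samples. With subset_size * n_subsets > n every sample is reused,
    so each base classifier sees more data than naive disjoint
    partitioning would allow."""
    idx = rng.permutation(n)
    stride = n // n_subsets
    return [np.take(idx, np.arange(k * stride, k * stride + subset_size),
                    mode="wrap")
            for k in range(n_subsets)]

rng = np.random.default_rng(3)
parts = overlapped_partitions(n=900, n_subsets=9, subset_size=300, rng=rng)
print(len(parts), len(parts[0]))  # 9 subsets of 300 samples each
overlap = len(set(parts[0].tolist()) & set(parts[1].tolist()))
print(overlap)                    # adjacent subsets share 200 samples
```

    Each subset would then train one LDA in the ensemble; naive partitioning corresponds to `subset_size == stride`, i.e. zero overlap.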

  16. Overlapped partitioning for ensemble classifiers of P300-based brain-computer interfaces.

    PubMed

    Onishi, Akinari; Natsume, Kiyohisa

    2014-01-01

    A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and their classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15,300 samples). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training samples. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning, and that the ensemble classifier with overlapped partitioning requires less training data than one with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance.

  17. Evolutionary Ensemble for In Silico Prediction of Ames Test Mutagenicity

    NASA Astrophysics Data System (ADS)

    Chen, Huanhuan; Yao, Xin

    Driven by new regulations and animal welfare concerns, the need to develop in silico models as alternative approaches to the safety assessment of chemicals without animal testing has increased recently. This paper describes a novel machine learning ensemble approach to building an in silico model for the prediction of Ames test mutagenicity, one of a battery of the most commonly used experimental in vitro and in vivo genotoxicity tests for the safety evaluation of chemicals. Evolutionary random neural ensemble with negative correlation learning (ERNE) [1] was developed based on neural networks and evolutionary algorithms. ERNE combines bootstrap sampling on training data with random subspace feature selection to ensure diversity when creating individuals within an initial ensemble. Furthermore, while evolving individuals within the ensemble, it makes use of negative correlation learning, enabling individual NNs to be trained to be as accurate as possible while remaining as diverse as possible. Therefore, the resulting individuals in the final ensemble are capable of cooperating collectively to achieve better generalization in prediction. The empirical experiments suggest that ERNE is an effective ensemble approach for predicting the Ames test mutagenicity of chemicals.

  18. Analysis of ecosystem controls on soil carbon source-sink relationships in the northwest Great Plains

    USGS Publications Warehouse

    Tan, Z.; Liu, S.; Johnston, C.A.; Liu, J.; Tieszen, L.L.

    2006-01-01

    Our ability to forecast the role of ecosystem processes in mitigating global greenhouse effects relies on understanding the driving forces behind terrestrial C dynamics. This study evaluated the controls on soil organic C (SOC) changes from 1973 to 2000 in the northwest Great Plains. SOC source-sink relationships were quantified using the General Ensemble Biogeochemical Modeling System (GEMS) based on 40 randomly located 10 × 10 km sample blocks. These sample blocks were aggregated into cropland, grassland, and forestland groups based on the land cover composition within each sample block. Canonical correlation analysis indicated that the SOC source-sink relationship from 1973 to 2000 was significantly related to the land cover type, while the change rates depended mainly on the baseline SOC level and annual precipitation. Of all the selected driving factors, the baseline SOC and nitrogen levels controlled the SOC change rates for the forestland and cropland groups, while annual precipitation determined the C source-sink relationship for the grassland group, in which noticeable SOC sink strength was attributed to the conversion of cropped area to grass cover. Canonical correlation analysis also showed that grassland ecosystems are more complicated than the others in the ecoregion, which may be difficult to identify at the field scale. Current model simulations need further adjustments to the model input variables for the grass cover-dominated ecosystems in the ecoregion.
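
    The canonical correlation analysis used here can be sketched numerically: the canonical correlations between two centred data matrices are the singular values of the product of their orthonormalized bases. This is a generic QR/SVD sketch, not part of GEMS:

    ```python
    import numpy as np

    def canonical_correlations(X, Y):
        # Canonical correlations between column-centred data matrices X and Y,
        # computed as cosines of the principal angles between their column spaces.
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        qx, _ = np.linalg.qr(X)          # orthonormal basis of span(X)
        qy, _ = np.linalg.qr(Y)          # orthonormal basis of span(Y)
        s = np.linalg.svd(qx.T @ qy, compute_uv=False)
        return np.clip(s, 0.0, 1.0)     # guard against tiny numerical overshoot
    ```

    If Y is an invertible linear transform of X, both canonical correlations equal 1, reflecting a perfect linear relationship between the two variable sets.
    
    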

  19. HLPI-Ensemble: Prediction of human lncRNA-protein interactions based on ensemble strategy.

    PubMed

    Hu, Huan; Zhang, Li; Ai, Haixin; Zhang, Hui; Fan, Yetian; Zhao, Qi; Liu, Hongsheng

    2018-03-27

    LncRNA plays an important role in many biological processes and in disease progression by binding to related proteins. However, experimental methods for studying lncRNA-protein interactions are time-consuming and expensive. Although there are a few models designed to predict ncRNA-protein interactions, they all share drawbacks that limit their predictive performance. In this study, we present a model called HLPI-Ensemble designed specifically for human lncRNA-protein interactions. HLPI-Ensemble adopts an ensemble strategy based on three mainstream machine learning algorithms, Support Vector Machines (SVM), Random Forests (RF) and Extreme Gradient Boosting (XGB), to generate HLPI-SVM Ensemble, HLPI-RF Ensemble and HLPI-XGB Ensemble, respectively. The results of 10-fold cross-validation show that HLPI-SVM Ensemble, HLPI-RF Ensemble and HLPI-XGB Ensemble achieved AUCs of 0.95, 0.96 and 0.96, respectively, on the test dataset. Furthermore, we compared the performance of the HLPI-Ensemble models with previous models on an external validation dataset. The results show that the false positives (FPs) of the HLPI-Ensemble models are much lower than those of the previous models, and the other evaluation indicators of the HLPI-Ensemble models are also higher. This further shows that the HLPI-Ensemble models are superior in predicting human lncRNA-protein interactions. HLPI-Ensemble is publicly available at: http://ccsipb.lnu.edu.cn/hlpiensemble/ .

  20. Crossover ensembles of random matrices and skew-orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Santosh, E-mail: skumar.physics@gmail.com; Pandey, Akhilesh, E-mail: ap0700@mail.jnu.ac.in

    2011-08-15

    Highlights: > We study crossover ensembles of the Jacobi family of random matrices. > We consider correlations for orthogonal-unitary and symplectic-unitary crossovers. > We use the method of skew-orthogonal polynomials and quaternion determinants. > We prove universality of spectral correlations in crossover ensembles. > We discuss applications to quantum conductance and communication theory problems. - Abstract: In a recent paper (S. Kumar, A. Pandey, Phys. Rev. E, 79, 2009, p. 026211) we considered the Jacobi family (including Laguerre and Gaussian cases) of random matrix ensembles and reported exact solutions of crossover problems involving time-reversal symmetry breaking. In the present paper we give details of the work. We start with Dyson's Brownian motion description of random matrix ensembles and obtain universal hierarchic relations among the unfolded correlation functions. For arbitrary dimensions we derive the joint probability density (jpd) of eigenvalues for all transitions leading to unitary ensembles as equilibrium ensembles. We focus on the orthogonal-unitary and symplectic-unitary crossovers and give generic expressions for the jpd of eigenvalues, two-point kernels and n-level correlation functions. This involves a generalization of the theory of skew-orthogonal polynomials to crossover ensembles. We also consider crossovers in the circular ensembles to show the generality of our method. In the large dimensionality limit, correlations in spectra with arbitrary initial density are shown to be universal when expressed in terms of a rescaled symmetry breaking parameter. Applications of our crossover results to communication theory and quantum conductance problems are also briefly discussed.

  1. Relations among several nuclear and electronic density functional reactivity indexes

    NASA Astrophysics Data System (ADS)

    Torrent-Sucarrat, Miquel; Luis, Josep M.; Duran, Miquel; Toro-Labbé, Alejandro; Solà, Miquel

    2003-11-01

    An expansion of the energy functional in terms of the total number of electrons and the normal coordinates within the canonical ensemble is presented. A comparison of this expansion with the expansion of the energy in terms of the total number of electrons and the external potential leads to new relations among common density functional reactivity descriptors. The formulas obtained provide explicit links between important quantities related to the chemical reactivity of a system. In particular, the relation between the nuclear and the electronic Fukui functions is recovered. The connection between the derivatives of the electronic energy and the nuclear repulsion energy with respect to the external potential offers a proof for the "Quantum Chemical le Chatelier Principle." Finally, the nuclear linear response function is defined and the relation of this function with the electronic linear response function is given.

  2. Thermodynamics of the adsorption of flexible polymers on nanowires

    DOE PAGES

    Vogel, Thomas; Gross, Jonathan; Bachmann, Michael

    2015-03-09

    Generalized-ensemble simulations enable the study of complex adsorption scenarios of a coarse-grained model polymer near an attractive nanostring, representing an ultrathin nanowire. We perform canonical and microcanonical statistical analyses to investigate structural transitions of the polymer and discuss their dependence on the temperature and on model parameters such as the effective wire thickness and attraction strength. The result is a complete hyperphase diagram of the polymer phases, whose locations and stability are influenced by the effective material properties of the nanowire and the strength of the thermal fluctuations. Major structural polymer phases in the adsorbed state include compact droplets attached to or wrapping around the wire, and tubelike conformations with a triangular pattern that resemble ideal boron nanotubes. In conclusion, the classification of the transitions is performed by microcanonical inflection-point analysis.

  3. AdS charged black holes in Einstein-Yang-Mills gravity's rainbow: Thermal stability and P - V criticality

    NASA Astrophysics Data System (ADS)

    Hendi, Seyed Hossein; Momennia, Mehrab

    2018-02-01

    Motivated by the interesting non-abelian gauge field, in this paper, we look for the analytical solutions of Yang-Mills theory in the context of gravity's rainbow. Regarding the trace of quantum gravity in black hole thermodynamics, we examine the first law of thermodynamics and also thermal stability in the canonical ensemble. We show that although the rainbow functions and Yang-Mills charge modify the solutions, the first law of thermodynamics is still valid. Based on the phenomenological similarities between the adS black holes and van der Waals liquid/gas systems, we study the critical behavior of the Yang-Mills black holes in the extended phase space thermodynamics. We also investigate the effects of various parameters on thermal instability as well as critical properties by using appropriate figures.

  4. Black holes with halos

    NASA Astrophysics Data System (ADS)

    Monten, Ruben; Toldo, Chiara

    2018-02-01

    We present new AdS4 black hole solutions in N =2 gauged supergravity coupled to vector and hypermultiplets. We focus on a particular consistent truncation of M-theory on the homogeneous Sasaki–Einstein seven-manifold M 111, characterized by the presence of one Betti vector multiplet. We numerically construct static and spherically symmetric black holes with electric and magnetic charges, corresponding to M2 and M5 branes wrapping non-contractible cycles of the internal manifold. The novel feature characterizing these nonzero temperature configurations is the presence of a massive vector field halo. Moreover, we verify the first law of black hole mechanics and we study the thermodynamics in the canonical ensemble. We analyze the behavior of the massive vector field condensate across the small-large black hole phase transition and we interpret the process in the dual field theory.

  5. Entanglement branes in a two-dimensional string theory

    DOE PAGES

    Donnelly, William; Wong, Gabriel

    2017-09-20

    What is the meaning of entanglement in a theory of extended objects such as strings? To address this question we consider the spatial entanglement between two intervals in the Gross-Taylor model, the string theory dual to two-dimensional Yang-Mills theory at large N. The string diagrams that contribute to the entanglement entropy describe open strings with endpoints anchored to the entangling surface, as first argued by Susskind. We develop a canonical theory of these open strings, and describe how closed strings are divided into open strings at the level of the Hilbert space. We derive the modular Hamiltonian for the Hartle-Hawking state and show that the corresponding reduced density matrix describes a thermal ensemble of open strings ending on an object at the entangling surface that we call an entanglement brane, or E-brane.

  6. Black hole thermodynamics in Lovelock gravity's rainbow with (A)dS asymptote

    NASA Astrophysics Data System (ADS)

    Hendi, Seyed Hossein; Dehghani, Ali; Faizal, Mir

    2017-01-01

    In this paper, we combine Lovelock gravity with gravity's rainbow to construct Lovelock gravity's rainbow. Considering the Lovelock gravity's rainbow coupled to linear and also nonlinear electromagnetic gauge fields, we present two new classes of topological black hole solutions. We compute conserved and thermodynamic quantities of these black holes (such as temperature, entropy, electric potential, charge and mass) and show that these quantities satisfy the first law of thermodynamics. In order to study the thermal stability in canonical ensemble, we calculate the heat capacity and determinant of the Hessian matrix and show in what regions there are thermally stable phases for black holes. Also, we discuss the dependence of thermodynamic behavior and thermal stability of black holes on rainbow functions. Finally, we investigate the critical behavior of black holes in the extended phase space and study their interesting properties.

  7. An NOy* Algorithm for SOLVE

    NASA Technical Reports Server (NTRS)

    Loewenstein, M.; Greenblatt, B. J.; Jost, H.; Podolske, J. R.; Elkins, Jim; Hurst, Dale; Romanashkin, Pavel; Atlas, Elliott; Schauffler, Sue; Donnelly, Steve; et al.

    2000-01-01

    Denitrification and excess renitrification were widely observed by ER-2 instruments in the Arctic vortex during SOLVE in winter/spring 2000. Analysis of these events requires knowledge of the initial, pre-vortex state of the sampled air masses. The canonical relationship of NOy to the long-lived tracer N2O observed in the unperturbed stratosphere is generally used for this purpose. In this paper we attempt to establish the current unperturbed NOy:N2O relationship (the NOy* algorithm) using the ensemble of extra-vortex data from in situ instruments flying on the ER-2 and DC-8, and from the Mark IV remote measurements on the OMS balloon. Initial analysis indicates a change in the SOLVE NOy* from the values predicted by the 1994 Northern Hemisphere NOy* algorithm, which was derived from observations in the ASHOE/MAESA campaign.

  8. Structure and dynamics of complex liquid water: Molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    S, Indrajith V.; Natesan, Baskaran

    2015-06-01

    We have carried out detailed structural and dynamical studies of complex liquid water using molecular dynamics simulations. Three different model potentials, namely TIP3P, TIP4P and SPC/E, were used in the simulations in order to arrive at the potential function that best reproduces the structure of experimental bulk water. All the simulations were performed in the NVE microcanonical ensemble using LAMMPS. The radial distribution functions gOO, gOH and gHH and the self-diffusion coefficient, Ds, were calculated for all three models. We conclude from our results that the structural and dynamical parameters obtained for the SPC/E model match the experimental values well, suggesting that among the models studied here, the SPC/E model gives the best structure and dynamics of bulk water.
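
    For reference, a radial distribution function such as gOO can be estimated from a single configuration of particle positions using the minimum-image convention; this is a generic NumPy sketch, not the LAMMPS analysis used in the study:

    ```python
    import numpy as np

    def radial_distribution(positions, box, r_max, n_bins):
        # g(r) from one configuration under periodic boundary conditions.
        n = len(positions)
        dists = []
        for i in range(n - 1):
            d = positions[i + 1:] - positions[i]
            d -= box * np.round(d / box)          # minimum-image convention
            dists.append(np.sqrt((d ** 2).sum(axis=1)))
        dists = np.concatenate(dists)
        hist, edges = np.histogram(dists, bins=n_bins, range=(0.0, r_max))
        rho = n / box.prod()                      # number density
        shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
        # Each of the n(n-1)/2 pairs was counted once; normalise by the
        # ideal-gas expectation per shell.
        g = hist / (shell * rho * n / 2.0)
        return 0.5 * (edges[1:] + edges[:-1]), g
    ```

    For uniformly random points (an ideal gas), g(r) fluctuates around 1, which is a quick sanity check of the normalisation.
    
    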

  9. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    PubMed

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  10. New technologies for examining neuronal ensembles in drug addiction and fear

    PubMed Central

    Cruz, Fabio C.; Koya, Eisuke; Guez-Barber, Danielle H.; Bossert, Jennifer M.; Lupica, Carl R.; Shaham, Yavin; Hope, Bruce T.

    2015-01-01

    Correlational data suggest that learned associations are encoded within neuronal ensembles. However, it has been difficult to prove that neuronal ensembles mediate learned behaviours because traditional pharmacological and lesion methods, and even newer cell type-specific methods, affect both activated and non-activated neurons. Additionally, previous studies on synaptic and molecular alterations induced by learning did not distinguish between behaviourally activated and non-activated neurons. Here, we describe three new approaches—Daun02 inactivation, FACS sorting of activated neurons and c-fos-GFP transgenic rats — that have been used to selectively target and study activated neuronal ensembles in models of conditioned drug effects and relapse. We also describe two new tools — c-fos-tTA mice and inactivation of CREB-overexpressing neurons — that have been used to study the role of neuronal ensembles in conditioned fear. PMID:24088811

  11. Prediction of Weather Impacted Airport Capacity using Ensemble Learning

    NASA Technical Reports Server (NTRS)

    Wang, Yao Xun

    2011-01-01

    Ensemble learning with the Bagging Decision Tree (BDT) model was used to assess the impact of weather on airport capacities at selected high-demand airports in the United States. The ensemble bagging decision tree models were developed and validated using Federal Aviation Administration (FAA) Aviation System Performance Metrics (ASPM) data and weather forecasts at these airports. The study examines the performance of BDT, along with a traditional single Support Vector Machine (SVM), for airport runway configuration selection and airport arrival rate (AAR) prediction during weather impacts. Testing of these models was accomplished using observed weather, weather forecasts, and airport operation information at the chosen airports. The experimental results show that the ensemble method is more accurate than a single SVM classifier. The airport capacity ensemble method presented here can be used as a decision support model that helps air traffic flow management meet weather-impacted airport capacity in order to reduce costs and increase safety.
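
    Bagging's final step, aggregating the member trees' class votes into one prediction per sample, can be sketched as a per-sample majority vote (a generic sketch; the decision-tree training itself is omitted):

    ```python
    import numpy as np

    def majority_vote(predictions):
        # predictions: (n_classifiers, n_samples) array of class labels.
        # Returns the most frequent label in each column.
        preds = np.asarray(predictions)
        n_samples = preds.shape[1]
        out = np.empty(n_samples, dtype=preds.dtype)
        for j in range(n_samples):
            vals, counts = np.unique(preds[:, j], return_counts=True)
            out[j] = vals[np.argmax(counts)]
        return out
    ```
    
    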

  12. A deep learning-based multi-model ensemble method for cancer prediction.

    PubMed

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
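
    The ensembling step, in which a meta-model learns to combine the base classifiers' outputs, can be sketched with a least-squares combiner standing in for the paper's deep-learning meta-model (the binary setting and function names are illustrative assumptions):

    ```python
    import numpy as np

    def fit_meta_weights(base_probs, y):
        # base_probs: (n_classifiers, n_samples) predicted P(class=1) from each
        # base model; y: 0/1 labels. Fit linear combination weights by least
        # squares -- a simple stand-in for a learned meta-model.
        A = np.asarray(base_probs, dtype=float).T
        w, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
        return w

    def meta_predict(base_probs, w, threshold=0.5):
        # Combine base outputs with the learned weights, then threshold.
        return (np.asarray(base_probs, dtype=float).T @ w >= threshold).astype(int)
    ```

    A well-fit combiner learns to up-weight informative base classifiers and ignore uninformative ones, which is the advantage over plain majority voting noted in the abstract.
    
    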

  13. Application of Canonical Effective Methods to Background-Independent Theories

    NASA Astrophysics Data System (ADS)

    Buyukcam, Umut

    Effective formalisms play an important role in analyzing phenomena above some given length scale when complete theories are not accessible. In diverse exotic but physically important cases, the usual path-integral techniques of a standard Quantum Field Theory approach seldom serve as adequate tools. This thesis presents a new effective method for quantum systems, called the Canonical Effective Method, which has particularly wide applicability in background-independent theories, as in the case of gravitational phenomena. The central purpose of this work is to employ these techniques to obtain semi-classical dynamics from canonical quantum gravity theories. An application to non-associative quantum mechanics is developed and testable results are obtained. Types of non-associative algebras relevant for magnetic-monopole systems are discussed. Possible modifications of the hypersurface deformation algebra and the emergence of effective space-times are presented.

  14. Minimalist ensemble algorithms for genome-wide protein localization prediction.

    PubMed

    Lin, Jhih-Rong; Mondal, Ananda Mohan; Liu, Rong; Hu, Jianjun

    2012-07-03

    Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature selection based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual predictors. 
We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi.
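
    The contribution-score machinery is not reproduced here, but the minimalist-ensemble idea can be sketched as greedy forward selection of predictors under a majority vote. This simplified stand-in assumes a binary task, so a strict majority of correct members yields a correct ensemble vote; it is not the published algorithm:

    ```python
    import numpy as np

    def greedy_minimal_ensemble(correct, max_size):
        # correct: (n_predictors, n_samples) boolean matrix, True where a
        # predictor classifies a sample correctly. Greedily add the predictor
        # that most improves majority-vote accuracy; stop when nothing helps.
        correct = np.asarray(correct, dtype=bool)
        chosen, best_acc = [], -1.0
        while len(chosen) < max_size:
            scores = []
            for p in range(correct.shape[0]):
                if p in chosen:
                    continue
                votes = correct[chosen + [p]].sum(axis=0)
                acc = (2 * votes > len(chosen) + 1).mean()  # strict majority
                scores.append((acc, p))
            if not scores:
                break
            acc, p = max(scores)
            if acc <= best_acc:
                break
            chosen.append(p)
            best_acc = acc
        return chosen, best_acc
    ```

    Stopping as soon as no candidate improves accuracy is what keeps the selected ensemble small, mirroring the minimalist design goal.
    
    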

  15. Minimalist ensemble algorithms for genome-wide protein localization prediction

    PubMed Central

    2012-01-01

    Background Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. Results This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature selection based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual predictors. 
Conclusions We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi. PMID:22759391

  16. A new approach to human microRNA target prediction using ensemble pruning and rotation forest.

    PubMed

    Mousavi, Reza; Eftekhari, Mahdi; Haghighi, Mehdi Ghezelbash

    2015-12-01

    MicroRNAs (miRNAs) are small non-coding RNAs that have important functions in gene regulation. Since finding miRNA targets experimentally is costly and time-consuming, the use of machine learning methods for miRNA target prediction is a growing research area. In this paper, a new approach is proposed that uses two popular ensemble strategies, Ensemble Pruning and Rotation Forest (EP-RTF), to predict human miRNA targets. For EP, the approach utilizes a Genetic Algorithm (GA): a subset of classifiers from the heterogeneous ensemble is first selected by GA. Next, the selected classifiers are trained with the RTF method and then combined using weighted majority voting. In addition to seeking a better subset of classifiers, the parameters of RTF are also optimized by GA. Findings of the present study confirm that the newly developed EP-RTF outperforms (in terms of classification accuracy, sensitivity, and specificity) the previously applied methods on four datasets in the field of human miRNA targets. Diversity-error diagrams reveal that the proposed ensemble approach constructs individual classifiers that are more accurate, and usually more diverse, than those of the other ensemble approaches. Given these experimental results, we highly recommend EP-RTF for improving the performance of miRNA target prediction.

  17. Hierarchical Ensemble Methods for Protein Function Prediction

    PubMed Central

    2014-01-01

    Protein function prediction is a complex multiclass multilabel classification problem, characterized by multiple issues such as the incompleteness of the available annotations, the integration of multiple sources of high dimensional biomolecular data, the unbalance of several functional classes, and the difficulty of univocally determining negative examples. Moreover, the hierarchical relationships between functional classes that characterize both the Gene Ontology and FunCat taxonomies motivate the development of hierarchy-aware prediction methods that showed significantly better performances than hierarchical-unaware “flat” prediction methods. In this paper, we provide a comprehensive review of hierarchical methods for protein function prediction based on ensembles of learning machines. According to this general approach, a separate learning machine is trained to learn a specific functional term and then the resulting predictions are assembled in a “consensus” ensemble decision, taking into account the hierarchical relationships between classes. The main hierarchical ensemble methods proposed in the literature are discussed in the context of existing computational methods for protein function prediction, highlighting their characteristics, advantages, and limitations. Open problems of this exciting research area of computational biology are finally considered, outlining novel perspectives for future research. PMID:25937954

  18. Stereodirectional Origin of anti-Arrhenius Kinetics for a Tetraatomic Hydrogen Exchange Reaction: Born-Oppenheimer Molecular Dynamics for OH + HBr.

    PubMed

    Coutinho, Nayara D; Aquilanti, Vincenzo; Silva, Valter H C; Camargo, Ademir J; Mundim, Kleber C; de Oliveira, Heibbe C B

    2016-07-14

    Among four-atom processes, the reaction OH + HBr → H2O + Br is one of the most studied experimentally: its kinetics has manifested an unusual anti-Arrhenius behavior, namely, a marked decrease of the rate constant as the temperature increases, which has intrigued theoreticians for a long time. Recently, salient features of the potential energy surface have been characterized, and most kinetic aspects can be considered satisfactorily reproduced by classical trajectory simulations. The motivation of the work reported in this paper is the investigation of the stereodirectional dynamics of this reaction as the prominent reason for the peculiar kinetics: in a previous Letter ( J. Phys. Chem. Lett. 2015 , 6 , 1553 - 1558 ) we started a first-principles Born-Oppenheimer "canonical" molecular dynamics approach. Trajectories are generated step by step on a potential energy surface calculated quantum mechanically on the fly and are thermostatically equilibrated to correspond to a specific temperature. Here, refinements of the method permitted a major increase in the number of trajectories and the consideration of four temperatures, 50, 200, 350, and 500 K, for which the sampling of initial conditions allowed us to characterize the stereodynamical effect. The role of the adjustment of the reactants' mutual orientation in entering the "cone of acceptance" for reactivity is documented. The aperture angle of this cone is dictated by the range of directions of approach compatible with the formation of the specific HOH angle of the product water molecule; consistently, the adjustment is progressively less effective the higher the kinetic energy. Qualitatively, this emerging picture corroborates experiments on this reaction involving collisions of aligned and oriented molecular beams and covering a range of energies higher than thermal ones.
The extraction of thermal rate constants from this molecular dynamics approach is discussed and the systematic sampling of the canonical ensemble is indicated as needed for quantitative comparison with the kinetic experiments.

  19. Rain radar measurement error estimation using data assimilation in an advection-based nowcasting system

    NASA Astrophysics Data System (ADS)

    Merker, Claire; Ament, Felix; Clemens, Marco

    2017-04-01

    The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications in both meteorology and hydrology, for example for precipitation ensemble generation, rainfall-runoff simulations, or data assimilation for numerical weather prediction. A detailed description of the spatial and temporal structure of errors is especially beneficial in order to make the best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars are assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We present first case studies that illustrate the method using data from a high-resolution X-band radar network.
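    The transform step of an LETKF of the kind used here can be sketched in a few lines. The following is a generic single-point illustration of the standard ensemble-transform update (after Hunt et al., 2007), not the authors' implementation; the function name and toy dimensions are invented for the example.

```python
import numpy as np

def letkf_analysis(Xb, y, H, R):
    """One LETKF analysis step (standard ensemble-transform formulation).
    Xb: (n, k) background ensemble of k members, y: (p,) observations,
    H: (p, n) linear observation operator, R: (p, p) obs-error covariance."""
    n, k = Xb.shape
    xb = Xb.mean(axis=1)
    Xp = Xb - xb[:, None]                  # background perturbations
    Yb = H @ Xb
    yb = Yb.mean(axis=1)
    Yp = Yb - yb[:, None]                  # obs-space perturbations
    C = Yp.T @ np.linalg.inv(R)            # (k, p)
    Pa = np.linalg.inv((k - 1) * np.eye(k) + C @ Yp)
    wa = Pa @ C @ (y - yb)                 # mean-update weights
    # symmetric square root for the perturbation update
    evals, evecs = np.linalg.eigh((k - 1) * Pa)
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return xb[:, None] + Xp @ (wa[:, None] + Wa)   # analysis ensemble (n, k)
```

In the "local" scheme this update is repeated pixel by pixel using only nearby observations, which keeps the matrix dimensions small; the spread of the returned ensemble is the per-pixel error estimate discussed in the abstract.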

  20. Practical implementation of a particle filter data assimilation approach to estimate initial hydrologic conditions and initialize medium-range streamflow forecasts

    NASA Astrophysics Data System (ADS)

    Clark, Elizabeth; Wood, Andy; Nijssen, Bart; Mendoza, Pablo; Newman, Andy; Nowak, Kenneth; Arnold, Jeffrey

    2017-04-01

    In an automated forecast system, hydrologic data assimilation (DA) performs the valuable function of correcting raw simulated watershed model states to better represent external observations, including measurements of streamflow, snow, soil moisture, and the like. Yet the incorporation of automated DA into operational forecasting systems has been a long-standing challenge due to the complexities of the hydrologic system, which include numerous lags between state and output variations. To help demonstrate that such methods can succeed in operational automated implementations, we present results from the real-time application of an ensemble particle filter (PF) for short-range (7 day lead) ensemble flow forecasts in western US river basins. We use the System for Hydromet Applications, Research and Prediction (SHARP), developed by the National Center for Atmospheric Research (NCAR) in collaboration with the University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. SHARP is a fully automated platform for short-term to seasonal hydrologic forecasting applications, incorporating uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions through ensemble methods. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 temperature and precipitation time series through conceptual and physically-oriented models. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. The PF selects and/or weights and resamples the IHCs that are most consistent with external streamflow observations, and uses the particles to initialize a streamflow forecast ensemble driven by ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS). 
We apply this method in real time for several basins in the western US that are important for water resources management, and perform a hindcast experiment to evaluate the effect of PF-based data assimilation on streamflow forecast skill. This presentation describes our findings, including a comparison of sequential and non-sequential particle weighting methods.
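    The core of a particle-filter update of the kind described, weighting initial-condition particles by their agreement with observed streamflow and then resampling, can be sketched as follows. This is a generic illustration assuming a Gaussian observation error, with invented names; it is not the SHARP code.

```python
import numpy as np

def pf_update(states, sim_flows, obs_flow, obs_sigma, rng):
    """Weight initial-condition particles by how well their simulated
    streamflow matches the observation, then resample (systematic)."""
    # Gaussian likelihood of the observation given each particle's flow
    w = np.exp(-0.5 * ((sim_flows - obs_flow) / obs_sigma) ** 2)
    w /= w.sum()
    # systematic resampling: one uniform offset, stratified positions
    positions = (rng.random() + np.arange(len(w))) / len(w)
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), len(w) - 1)
    return states[idx], w
```

The resampled particles would then initialize the ensemble streamflow forecast; a non-sequential variant would compute the weights from a window of past observations instead of a single one.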

  1. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    NASA Astrophysics Data System (ADS)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for drawing conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This gives the new method a theoretical advantage in addition to reduced computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling

  2. Flood susceptibility mapping using a novel ensemble weights-of-evidence and support vector machine models in GIS

    NASA Astrophysics Data System (ADS)

    Tehrany, Mahyat Shafapour; Pradhan, Biswajeet; Jebur, Mustafa Neamah

    2014-05-01

    Floods are among the most devastating natural disasters and occur frequently in Terengganu, Malaysia. Recently, ensemble-based techniques have become increasingly popular in flood modeling. In this paper, the weights-of-evidence (WoE) model was utilized first to assess the impact of the classes of each conditioning factor on flooding through bivariate statistical analysis (BSA). These factors were then reclassified using the acquired weights and entered into a support vector machine (SVM) model to evaluate the correlation between flood occurrence and each conditioning factor. Through this integration, the weak points of WoE are mitigated and the performance of the SVM is enhanced. The spatial database included flood inventory, slope, stream power index (SPI), topographic wetness index (TWI), altitude, curvature, distance from the river, geology, rainfall, land use/cover (LULC), and soil type. Four SVM kernel types (linear (LN), polynomial (PL), radial basis function (RBF), and sigmoid (SIG)) were used to investigate the performance of each kernel type. The efficiency of the new ensemble WoE and SVM method was tested using the area under the curve (AUC), which measured the prediction and success rates. The validation results proved the strength and efficiency of the ensemble method over the individual methods. The best results were obtained with the RBF kernel. The success rate and prediction rate for the ensemble WoE and RBF-SVM method were 96.48% and 95.67%, respectively. The proposed ensemble flood susceptibility mapping method could assist researchers and local governments in flood mitigation strategies.
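    The WoE step can be illustrated with a generic sketch of the usual bivariate contrast statistic, where a positive contrast marks a factor class over-represented among flooded pixels. The smoothing constant and names here are ad hoc, not the paper's code; the resulting class weights are what would be fed to the SVM.

```python
import numpy as np

def woe_weights(factor_class, flooded):
    """Weights-of-evidence contrast for each class of one conditioning
    factor. factor_class: int class label per pixel; flooded: 0/1 per pixel."""
    weights = {}
    n_f, n_nf = flooded.sum(), (1 - flooded).sum()
    for c in np.unique(factor_class):
        in_c = factor_class == c
        # fractions of flooded / non-flooded pixels inside and outside class c
        # (a small constant avoids log of zero in sparse classes)
        p_f = (flooded[in_c].sum() + 0.5) / (n_f + 1.0)
        p_nf = ((1 - flooded)[in_c].sum() + 0.5) / (n_nf + 1.0)
        p_f_out = (flooded[~in_c].sum() + 0.5) / (n_f + 1.0)
        p_nf_out = ((1 - flooded)[~in_c].sum() + 0.5) / (n_nf + 1.0)
        w_plus = np.log(p_f / p_nf)
        w_minus = np.log(p_f_out / p_nf_out)
        weights[c] = w_plus - w_minus        # contrast: strength of evidence
    return weights
```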

  3. Residue-level global and local ensemble-ensemble comparisons of protein domains.

    PubMed

    Clark, Sarah A; Tronrud, Dale E; Karplus, P Andrew

    2015-09-01

    Many methods of protein structure generation such as NMR-based solution structure determination and template-based modeling do not produce a single model, but an ensemble of models consistent with the available information. Current strategies for comparing ensembles lose information because they use only a single representative structure. Here, we describe the ENSEMBLATOR and its novel strategy to directly compare two ensembles containing the same atoms to identify significant global and local backbone differences between them on per-atom and per-residue levels, respectively. The ENSEMBLATOR has four components: eePREP (ee for ensemble-ensemble), which selects atoms common to all models; eeCORE, which identifies atoms belonging to a cutoff-distance dependent common core; eeGLOBAL, which globally superimposes all models using the defined core atoms and calculates for each atom the two intraensemble variations, the interensemble variation, and the closest approach of members of the two ensembles; and eeLOCAL, which performs a local overlay of each dipeptide and, using a novel measure of local backbone similarity, reports the same four variations as eeGLOBAL. The combination of eeGLOBAL and eeLOCAL analyses identifies the most significant differences between ensembles. We illustrate the ENSEMBLATOR's capabilities by showing how using it to analyze NMR ensembles and to compare NMR ensembles with crystal structures provides novel insights compared to published studies. One of these studies leads us to suggest that a "consistency check" of NMR-derived ensembles may be a useful analysis step for NMR-based structure determinations in general. The ENSEMBLATOR 1.0 is available as a first generation tool to carry out ensemble-ensemble comparisons. © 2015 The Protein Society.

  4. Residue-level global and local ensemble-ensemble comparisons of protein domains

    PubMed Central

    Clark, Sarah A; Tronrud, Dale E; Andrew Karplus, P

    2015-01-01

    Many methods of protein structure generation such as NMR-based solution structure determination and template-based modeling do not produce a single model, but an ensemble of models consistent with the available information. Current strategies for comparing ensembles lose information because they use only a single representative structure. Here, we describe the ENSEMBLATOR and its novel strategy to directly compare two ensembles containing the same atoms to identify significant global and local backbone differences between them on per-atom and per-residue levels, respectively. The ENSEMBLATOR has four components: eePREP (ee for ensemble-ensemble), which selects atoms common to all models; eeCORE, which identifies atoms belonging to a cutoff-distance dependent common core; eeGLOBAL, which globally superimposes all models using the defined core atoms and calculates for each atom the two intraensemble variations, the interensemble variation, and the closest approach of members of the two ensembles; and eeLOCAL, which performs a local overlay of each dipeptide and, using a novel measure of local backbone similarity, reports the same four variations as eeGLOBAL. The combination of eeGLOBAL and eeLOCAL analyses identifies the most significant differences between ensembles. We illustrate the ENSEMBLATOR's capabilities by showing how using it to analyze NMR ensembles and to compare NMR ensembles with crystal structures provides novel insights compared to published studies. One of these studies leads us to suggest that a “consistency check” of NMR-derived ensembles may be a useful analysis step for NMR-based structure determinations in general. The ENSEMBLATOR 1.0 is available as a first generation tool to carry out ensemble-ensemble comparisons. PMID:26032515

  5. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    NASA Astrophysics Data System (ADS)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast-growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades, three methods (and variations thereof) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors, and breeding of growing modes (now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited-area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error, and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large-scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research, we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast-growing error modes in mesoscale limited-area models. The so-called self-breeding method is a development of the breeding-of-growing-modes technique. 
Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) for a given norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of error (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
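    The breeding cycle described above (integrate perturbed and control states, rescale the difference, orthogonalize, repeat) can be sketched on a toy propagator. A QR factorization stands in here for the ensemble-transform orthogonalization, and all names are illustrative, not the authors' implementation.

```python
import numpy as np

def self_breed(step, x0, P0, n_cycles, norm=1e-3):
    """Breeding-cycle sketch. step: model propagator x -> x_next;
    x0: control initial state; P0: (n, m) matrix of m initial perturbations.
    Each cycle grows the perturbations through the full model, orthogonalizes
    them, and rescales to a fixed norm."""
    P = P0.copy()
    for _ in range(n_cycles):
        x1 = step(x0)
        # grow each perturbation through the (possibly nonlinear) model
        P = np.stack([step(x0 + P[:, j]) - x1 for j in range(P.shape[1])], axis=1)
        # orthogonalize so members do not all collapse onto the leading vector
        Q, _ = np.linalg.qr(P)
        P = norm * Q          # rescale to the chosen perturbation norm
    return P
```

For a linear toy propagator the cycle reduces to orthogonal power iteration, so the leading perturbation aligns with the fastest-growing direction; choosing the rescaling norm differently targets different error sources, as the abstract notes.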

  6. Drug-target interaction prediction using ensemble learning and dimensionality reduction.

    PubMed

    Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong

    2017-10-01

    Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the number of their interactions, is increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve classification performance, it is also worthwhile to design an ensemble learning framework to enhance performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as the evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction. Copyright © 2017 Elsevier Inc. All rights reserved.
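    The three-step framework (subspacing, dimensionality reduction, aggregated base learners) can be sketched generically. Plain linear ridge regression and an SVD-based projection stand in for the paper's Kernel Ridge Regression and its three reduction methods; all names and parameters are illustrative.

```python
import numpy as np

def ensemble_ridge(X, y, Xte, n_learners=10, sub_frac=0.5, n_comp=5, lam=1.0, seed=0):
    """Ensemble sketch in the spirit of the framework: random feature
    subspacing -> SVD-based projection -> ridge-regression base learners
    -> averaged scores."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(Xte))
    for _ in range(n_learners):
        # 1) subspacing: each learner sees a random subset of features
        cols = rng.choice(X.shape[1], max(1, int(sub_frac * X.shape[1])), replace=False)
        Xs, Xts = X[:, cols], Xte[:, cols]
        # 2) dimensionality reduction: project onto top principal directions
        mu = Xs.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xs - mu, full_matrices=False)
        V = Vt[:min(n_comp, Vt.shape[0])].T
        Z, Zte = (Xs - mu) @ V, (Xts - mu) @ V
        # 3) ridge base learner, closed form
        w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
        scores += Zte @ w
    return scores / n_learners          # 4) aggregate by averaging
```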

  7. Total probabilities of ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2017-04-01

    Ensemble forecasting has a long history in meteorological modelling as an indication of forecast uncertainty. However, it is necessary to calibrate and post-process the ensembles, as they often exhibit both bias and dispersion errors. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters varying in space and time while giving spatially and temporally consistent output. However, their method is computationally too demanding for our larger number of stations, which makes it unsuitable for our purpose. Our post-processing method for the ensembles is developed in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu), in which we make forecasts for the whole of Europe based on observations from around 700 catchments. As the target is flood forecasting, we are more interested in improving the forecast skill for high flows than in a good prediction of the entire flow regime. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different meteorological forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and variability of each forecast ensemble individually, we post-process all model outputs to estimate the total probability, the post-processed mean, and the uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, and we add a spatial penalty in the calibration process to force a spatial correlation of the parameters. 
The penalty takes distance, stream-connectivity and size of the catchment areas into account. This can in some cases have a slight negative impact on the calibration error, but avoids large differences between parameters of nearby locations, whether stream connected or not. The spatial calibration also makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecasts skills in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.
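    An EMOS-style post-processing step of the kind cited (Gneiting et al., 2005) fits a Gaussian predictive distribution whose mean and variance are affine in the ensemble mean and variance. The sketch below uses a simple least-squares/moment fit in place of the minimum-CRPS estimation used in practice; the function name and data are illustrative.

```python
import numpy as np

def emos_fit(ens, obs):
    """Minimal EMOS-style sketch: predictive mean a + b*(ensemble mean),
    predictive variance c + d*(ensemble variance). ens: (t, k) training
    ensemble forecasts; obs: (t,) verifying observations."""
    m, v = ens.mean(axis=1), ens.var(axis=1)
    A = np.column_stack([np.ones_like(m), m])
    a, b = np.linalg.lstsq(A, obs, rcond=None)[0]
    resid2 = (obs - (a + b * m)) ** 2
    # regress squared residuals on ensemble variance to get c, d
    C = np.column_stack([np.ones_like(v), v])
    c, d = np.linalg.lstsq(C, resid2, rcond=None)[0]
    return a, b, c, d
```

In an operational setting the coefficients would be estimated per location over a training period, with a spatial penalty of the kind described above tying the parameters of nearby locations together.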

  8. Canonical Drude Weight for Non-integrable Quantum Spin Chains

    NASA Astrophysics Data System (ADS)

    Mastropietro, Vieri; Porta, Marcello

    2018-03-01

    The Drude weight is a central quantity for the transport properties of quantum spin chains. The canonical definition of the Drude weight is directly related to the Kubo formula for conductivity. However, the difficulty of evaluating this expression has led to several alternative formulations, accessible to different methods. In particular, the Euclidean, or imaginary-time, Drude weight can be studied via rigorous renormalization group. As a result, in past years several universality results have been proven for this quantity at zero temperature; remarkably, the proofs work for both integrable and non-integrable quantum spin chains. Here we establish the equivalence of the Euclidean and canonical Drude weights at zero temperature. Our proof is based on rigorous renormalization group methods, Ward identities, and complex analytic ideas.

  9. Design of an Evolutionary Approach for Intrusion Detection

    PubMed Central

    2013-01-01

    A novel evolutionary approach is proposed for effective intrusion detection based on benchmark datasets. The proposed approach can generate a pool of noninferior individual solutions and ensemble solutions thereof. The generated ensembles can be used to detect intrusions accurately. For the intrusion detection problem, the proposed approach considers conflicting objectives simultaneously, such as the detection rate of each attack class, error rate, accuracy, and diversity, and generates a pool of noninferior solutions and ensembles thereof with optimized trade-offs between these objectives. A three-phase approach is proposed. In the first phase, solutions based on a simple chromosome design are generated and a Pareto front of noninferior individual solutions is approximated. In the second phase, the entire solution set is further refined to determine effective ensemble solutions, taking solution interaction into account; in this phase, an improved Pareto front of ensemble solutions over that of individual solutions is approximated. The ensemble solutions in the improved Pareto front reported improved detection results on benchmark datasets for intrusion detection. In the third phase, a combination method such as majority voting is used to fuse the predictions of individual solutions into the prediction of an ensemble solution. Benchmark datasets, namely the KDD Cup 1999 and ISCX 2012 datasets, are used to demonstrate and validate the performance of the proposed approach for intrusion detection. The proposed approach can discover individual solutions and ensemble solutions thereof with good support and detection rates from benchmark datasets (in comparison with well-known ensemble methods like bagging and boosting). 
In addition, the proposed approach is a generalized classification approach applicable to problems in any field with multiple conflicting objectives whose datasets can be represented as labelled instances in terms of their features. PMID:24376390
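    The third-phase fusion step, majority voting over the predictions of the individual solutions, is straightforward to sketch (illustrative names, standard library only):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-instance labels from individual solutions into an
    ensemble prediction by majority vote. predictions: list of label
    lists, one inner list per classifier (same instance order)."""
    fused = []
    for labels in zip(*predictions):                 # one tuple per instance
        fused.append(Counter(labels).most_common(1)[0][0])
    return fused
```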

  10. Performance analysis of a Principal Component Analysis ensemble classifier for Emotiv headset P300 spellers.

    PubMed

    Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M

    2014-01-01

    The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method.

  11. Multiset canonical correlations analysis and multispectral, truly multitemporal remote sensing data.

    PubMed

    Nielsen, Allan Aasbjerg

    2002-01-01

    This paper describes two- and multiset canonical correlations analysis (CCA) for data fusion, multisource, multiset, or multitemporal exploratory data analysis. These techniques transform multivariate multiset data into new orthogonal variables called canonical variates (CVs) which, when applied in remote sensing, exhibit ever-decreasing similarity (as expressed by correlation measures) over sets consisting of 1) spectral variables at fixed points in time (R-mode analysis), or 2) temporal variables with fixed wavelengths (T-mode analysis). The CVs are invariant to linear and affine transformations of the original variables within sets, which means, for example, that the R-mode CVs are insensitive to changes over time in the offset and gain of a measuring device. In a case study, CVs are calculated from Landsat Thematic Mapper (TM) data with six spectral bands over six consecutive years. Both R- and T-mode CVs clearly exhibit the desired characteristic: they show maximum similarity for the low-order canonical variates and minimum similarity for the high-order canonical variates. These characteristics are seen both visually and in objective measures. The results from the multiset CCA R- and T-mode analyses are very different. This difference is ascribed to the noise structure in the data. The CCA methods are related to partial least squares (PLS) methods, and this paper very briefly describes multiset CCA-based multiset PLS. The CCA methods can also be applied as multivariate extensions to empirical orthogonal functions (EOF) techniques. Multiset CCA is well-suited for inclusion in geographical information systems (GIS).
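    Two-set CCA, the building block of the multiset analysis described here, can be sketched via whitening and an SVD of the cross-correlation. This is a generic textbook construction with illustrative names, not the paper's implementation.

```python
import numpy as np

def cca(X, Y):
    """Two-set canonical correlation analysis sketch. X: (n, p), Y: (n, q),
    rows are samples. Returns the canonical correlations and the canonical
    variates, computed by whitening each set and taking an SVD of the
    cross-correlation of the whitened coordinates."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    # whiten each set: decorrelate and scale to unit variance
    Ux, Sx, Vxt = np.linalg.svd(Xc, full_matrices=False)
    Uy, Sy, Vyt = np.linalg.svd(Yc, full_matrices=False)
    # SVD of the cross-correlation of the whitened sets
    U, rho, Vt = np.linalg.svd(Ux.T @ Uy, full_matrices=False)
    A = Vxt.T @ np.diag(1 / Sx) @ U      # canonical weights for X
    B = Vyt.T @ np.diag(1 / Sy) @ Vt.T   # canonical weights for Y
    return rho, Xc @ A, Yc @ B
```

The correlations in rho decrease monotonically, matching the "ever-decreasing similarity" of the low- to high-order canonical variates noted in the abstract.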

  12. Connecting the Brain to Itself through an Emulation

    PubMed Central

    Serruya, Mijail D.

    2017-01-01

    Pilot clinical trials of human patients implanted with devices that can chronically record and stimulate ensembles of hundreds to thousands of individual neurons offer the possibility of expanding the substrate of cognition. Parallel trains of firing rate activity can be delivered in real-time to an array of intermediate external modules that in turn can trigger parallel trains of stimulation back into the brain. These modules may be built in software, VLSI firmware, or biological tissue as in vitro culture preparations or in vivo ectopic construct organoids. Arrays of modules can be constructed as early stage whole brain emulators, following canonical intra- and inter-regional circuits. By using machine learning algorithms and classic tasks known to activate quasi-orthogonal functional connectivity patterns, bedside testing can rapidly identify ensemble tuning properties and in turn cycle through a sequence of external module architectures to explore which can causatively alter perception and behavior. Whole brain emulation both (1) serves to augment human neural function, compensating for disease and injury as an auxiliary parallel system, and (2) has its independent operation bootstrapped by a human-in-the-loop to identify optimal micro- and macro-architectures, update synaptic weights, and entrain behaviors. In this manner, closed-loop brain-computer interface pilot clinical trials can advance strong artificial intelligence development and forge new therapies to restore independence in children and adults with neurological conditions. PMID:28713235

  13. Predicting protein function and other biomedical characteristics with heterogeneous ensembles

    PubMed Central

    Whalen, Sean; Pandey, Om Prakash

    2015-01-01

    Prediction problems in biomedical sciences, including protein function prediction (PFP), are generally quite difficult. This is due in part to incomplete knowledge of the cellular phenomenon of interest, the appropriateness and data quality of the variables and measurements used for prediction, as well as a lack of consensus regarding the ideal predictor for specific problems. In such scenarios, a powerful approach to improving prediction performance is to construct heterogeneous ensemble predictors that combine the output of diverse individual predictors that capture complementary aspects of the problems and/or datasets. In this paper, we demonstrate the potential of such heterogeneous ensembles, derived from stacking and ensemble selection methods, for addressing PFP and other similar biomedical prediction problems. Deeper analysis of these results shows that the superior predictive ability of these methods, especially stacking, can be attributed to their attention to the following aspects of the ensemble learning process: (i) better balance of diversity and performance, (ii) more effective calibration of outputs and (iii) more robust incorporation of additional base predictors. Finally, to make the effective application of heterogeneous ensembles to large complex datasets (big data) feasible, we present DataSink, a distributed ensemble learning framework, and demonstrate its sound scalability using the examined datasets. DataSink is publicly available from https://github.com/shwhalen/datasink. PMID:26342255
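    The stacking idea, out-of-fold predictions of heterogeneous base learners used as features for a second-level learner, can be sketched generically. A least-squares meta-learner and toy base learners stand in for the heterogeneous ensembles examined in the paper; all names are illustrative.

```python
import numpy as np

def stack(base_fits, X, y, Xte, folds=5):
    """Stacking sketch: out-of-fold predictions of the base learners
    become features for a linear (least-squares) meta-learner.
    base_fits: list of callables fit(Xtr, ytr) -> predict(Z)."""
    n = len(X)
    idx = np.arange(n) % folds                      # simple fold assignment
    meta = np.zeros((n, len(base_fits)))
    for j, fit in enumerate(base_fits):
        for f in range(folds):
            tr, te = idx != f, idx == f
            meta[te, j] = fit(X[tr], y[tr])(X[te])  # out-of-fold predictions
    # meta-learner: least-squares blend of the base predictions
    w = np.linalg.lstsq(meta, y, rcond=None)[0]
    full = np.column_stack([fit(X, y)(Xte) for fit in base_fits])
    return full @ w
```

The out-of-fold construction is what lets the meta-learner calibrate and weight the base predictors without overfitting to their training-set performance.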

  14. Bayesian refinement of protein structures and ensembles against SAXS data using molecular dynamics

    PubMed Central

    Shevchuk, Roman; Hub, Jochen S.

    2017-01-01

    Small-angle X-ray scattering is an increasingly popular technique used to detect protein structures and ensembles in solution. However, the refinement of structures and ensembles against SAXS data is often ambiguous due to the low information content of SAXS data, unknown systematic errors, and unknown scattering contributions from the solvent. We offer a solution to such problems by combining Bayesian inference with all-atom molecular dynamics simulations and explicit-solvent SAXS calculations. The Bayesian formulation correctly weights the SAXS data against prior physical knowledge, quantifies the precision or ambiguity of fitted structures and ensembles, and accounts for unknown systematic errors due to poor buffer matching. The method further provides a probabilistic criterion for identifying the number of states required to explain the SAXS data. The method is validated by refining ensembles of a periplasmic binding protein against calculated SAXS curves. Subsequently, we derive the solution ensembles of the eukaryotic chaperone heat shock protein 90 (Hsp90) against experimental SAXS data. We find that the SAXS data of the apo state of Hsp90 are compatible with a single wide-open conformation, whereas the SAXS data of Hsp90 bound to ATP or to an ATP analogue strongly suggest heterogeneous ensembles of a closed and a wide-open state. PMID:29045407

  15. A fuzzy integral method based on the ensemble of neural networks to analyze fMRI data for cognitive state classification across multiple subjects.

    PubMed

    Cacha, L A; Parida, S; Dehuri, S; Cho, S-B; Poznanski, R R

    2016-12-01

    The huge number of voxels in fMRI over time poses a major challenge for effective analysis. Fast, accurate, and reliable classifiers are required for estimating the decoding accuracy of brain activities. Although machine-learning classifiers seem promising, individual classifiers have their own limitations. To address this limitation, the present paper proposes a method based on an ensemble of neural networks to analyze fMRI data for cognitive state classification across multiple subjects. The fuzzy integral (FI) approach has been employed as an efficient tool for combining different classifiers. The FI approach led to the development of a classifier ensemble technique that performs better than any single classifier by reducing misclassification, bias, and variance. The proposed method successfully classified the different cognitive states for multiple subjects with high classification accuracy. Comparison of the performance improvement of the ensemble of neural networks against that of the individual neural networks strongly points toward the usefulness of the proposed method.

  16. Ocean state and uncertainty forecasts using HYCOM with Local Ensemble Transform Kalman Filter (LETKF)

    NASA Astrophysics Data System (ADS)

    Wei, Mozheng; Hogan, Pat; Rowley, Clark; Smedstad, Ole-Martin; Wallcraft, Alan; Penny, Steve

    2017-04-01

    An ensemble forecast system based on the US Navy's operational HYCOM and Local Ensemble Transform Kalman Filter (LETKF) technology has been developed for ocean state and uncertainty forecasts. One advantage is that the best possible initial analysis states for the HYCOM forecasts are provided by the LETKF, which assimilates the operational observations using an ensemble method. The background covariance during this assimilation process is supplied by the ensemble, which avoids the difficulty of developing tangent linear and adjoint models for 4D-VAR from the complicated hybrid isopycnal vertical coordinate in HYCOM. Another advantage is that the ensemble system provides a valuable uncertainty estimate corresponding to every state forecast from HYCOM. Uncertainty forecasts have proven critical in the numerical prediction community for enabling downstream users and managers to make more scientifically sound decisions. In addition, the ensemble mean is generally more accurate and skilful than a single traditional deterministic forecast at the same resolution. We will introduce the ensemble system design and setup, present some results from a 30-member ensemble experiment, and discuss scientific, technical and computational issues and challenges, such as covariance localization, inflation, model-related uncertainties and sensitivity to the ensemble size.
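    A single (global, unlocalized) ensemble transform Kalman filter analysis step can be written compactly. The sketch below follows the standard ETKF algebra and is purely illustrative, with toy dimensions rather than anything resembling HYCOM's state; the matrices and observation values are made up.

```python
import numpy as np

def etkf_analysis(X, y_obs, H, R):
    """One global ensemble transform Kalman filter analysis step.
    X: n x m ensemble of model states; y_obs: p observations;
    H: p x n observation operator; R: p x p obs-error covariance."""
    n, m = X.shape
    x_mean = X.mean(axis=1, keepdims=True)
    Xp = X - x_mean                               # state perturbations
    Yp = H @ Xp                                   # obs-space perturbations
    Rinv = np.linalg.inv(R)
    # Analysis error covariance in the m-dimensional ensemble space
    Pa = np.linalg.inv((m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp)
    innov = (y_obs - (H @ x_mean).ravel()).reshape(-1, 1)
    w_mean = Pa @ Yp.T @ Rinv @ innov             # mean-update weights
    # Symmetric square root gives the deterministic perturbation update
    evals, evecs = np.linalg.eigh((m - 1) * Pa)
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return x_mean + Xp @ (w_mean + Wa)

rng = np.random.default_rng(0)
X = 1.0 + 0.5 * rng.standard_normal((3, 10))      # 10 members, 3 state vars
H = np.eye(3)[:2]                                 # observe first two components
Xa = etkf_analysis(X, np.array([1.2, 0.8]), H, 0.1 * np.eye(2))
print(Xa.mean(axis=1))
```

    The update is computed entirely in ensemble space, which is why no tangent linear or adjoint model is needed: the ensemble perturbations themselves supply the background covariance.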

  17. Reducing false-positive incidental findings with ensemble genotyping and logistic regression based variant filtering methods.

    PubMed

    Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won

    2014-08-01

    As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR or ensemble genotyping based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to the filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false-positive DNM candidates. © 2014 WILEY PERIODICALS, INC.
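    A logistic-regression variant filter of the kind described can be sketched as follows. Everything here is a stand-in for the study's actual setup: the per-variant features (read depth, call quality, allele balance), the synthetic labels, and the confidence threshold are all illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-variant features; a real pipeline would extract these
# from the VCF / alignment data.
depth = rng.poisson(30, n).astype(float)
qual = rng.normal(50, 15, n)
balance = rng.beta(5, 5, n)
# Synthetic true/false-positive labels correlated with the features.
label = (0.05 * depth + 0.05 * qual + 2 * balance + rng.normal(0, 1, n)) > 3.2
X = np.column_stack([depth, qual, balance])
Xtr, Xte, ytr, yte = train_test_split(X, label, random_state=0)

# Train the filter, then keep only high-confidence calls.
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
proba = clf.predict_proba(Xte)[:, 1]
keep = proba > 0.9
print(keep.sum(), len(proba))
```

    In practice the probability threshold is chosen to trade false-negative rate against false discovery rate, which is the comparison the abstract reports.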

  18. Reducing false positive incidental findings with ensemble genotyping and logistic regression-based variant filtering methods

    PubMed Central

    Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choi, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B.; Gupta, Neha; Kohane, Isaac S.; Green, Robert C.; Kong, Sek Won

    2014-01-01

    As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR or ensemble genotyping based filtering, false negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous SNVs; 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to the filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false positive DNM candidates. PMID:24829188

  19. Complete ensemble local mean decomposition with adaptive noise and its application to fault diagnosis for rolling bearings

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin

    2018-06-01

    Mode mixing resulting from intermittent signals is an annoying problem associated with the local mean decomposition (LMD) method. Based on a noise-assisted approach, the ensemble local mean decomposition (ELMD) method alleviates the mode mixing issue of LMD to some degree. However, the product functions (PFs) produced by ELMD often contain considerable residual noise, and thus a relatively large number of ensemble trials are required to eliminate it. Furthermore, since different realizations of Gaussian white noise are added to the original signal, different trials may generate different numbers of PFs, making it difficult to take the ensemble mean. In this paper, a novel method called complete ensemble local mean decomposition with adaptive noise (CELMDAN) is proposed to solve these two problems. The method adds a particular, adaptive noise at every decomposition stage of each trial. Moreover, a unique residue is obtained after separating each PF, and this residue is used as the input for the next stage. Two simulated signals are analyzed to illustrate the advantages of CELMDAN over ELMD and CEEMDAN. To further demonstrate its efficiency, CELMDAN is applied to diagnose faults in rolling bearings in an experimental case and an engineering case. The diagnosis results indicate that CELMDAN can extract more fault characteristic information with less interference than ELMD.

  20. Cosolvent-Based Molecular Dynamics for Ensemble Docking: Practical Method for Generating Druggable Protein Conformations.

    PubMed

    Uehara, Shota; Tanaka, Shigenori

    2017-04-24

    Protein flexibility is a major hurdle in current structure-based virtual screening (VS). In spite of the recent advances in high-performance computing, protein-ligand docking methods still demand tremendous computational cost to take into account the full degree of protein flexibility. In this context, ensemble docking has proven its utility and efficiency for VS studies, but it still needs a rational and efficient method to select and/or generate multiple protein conformations. Molecular dynamics (MD) simulations are useful to produce distinct protein conformations without abundant experimental structures. In this study, we present a novel strategy that makes use of cosolvent-based molecular dynamics (CMD) simulations for ensemble docking. By mixing small organic molecules into a solvent, CMD can stimulate dynamic protein motions and induce partial conformational changes of binding pocket residues appropriate for the binding of diverse ligands. The present method has been applied to six diverse target proteins and assessed by VS experiments using many actives and decoys of DEKOIS 2.0. The simulation results have revealed that the CMD is beneficial for ensemble docking. Utilizing cosolvent simulation allows the generation of druggable protein conformations, improving the VS performance compared with the use of a single experimental structure or ensemble docking by standard MD with pure water as the solvent.

  1. Spectral statistics of the uni-modular ensemble

    NASA Astrophysics Data System (ADS)

    Joyner, Christopher H.; Smilansky, Uzy; Weidenmüller, Hans A.

    2017-09-01

    We investigate the spectral statistics of Hermitian matrices whose elements are chosen uniformly from U(1), called the uni-modular ensemble (UME), in the limit of large matrix size. Using three complementary methods (a supersymmetric integration method, a combinatorial graph-theoretical analysis and a Brownian motion approach), we derive expressions for 1/N corrections to the mean spectral moments and also analyse the fluctuations about this mean. By addressing the same ensemble from three different points of view, we can critically compare their relative advantages and derive some new results.
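    Sampling this ensemble numerically is straightforward. The toy sketch below (an illustration, not the paper's analytic methods) draws Hermitian matrices with uni-modular entries and checks the second spectral moment, which is pinned exactly at tr H² = N² because every entry has unit modulus.

```python
import numpy as np

rng = np.random.default_rng(1)

def unimodular_hermitian(N, rng):
    """Draw an N x N Hermitian matrix whose entries all lie on the unit circle."""
    theta = rng.uniform(0, 2 * np.pi, (N, N))
    A = np.exp(1j * theta)
    H = np.tril(A, -1)
    H = H + H.conj().T
    # Diagonal entries must be real and uni-modular, i.e. +/-1.
    np.fill_diagonal(H, rng.choice([-1.0, 1.0], N))
    return H

# Mean eigenvalue-squared equals tr H^2 / N = N exactly, since
# tr H^2 = sum_ij |H_ij|^2 = N^2 for uni-modular entries.
N, trials = 50, 20
m2 = np.mean([np.mean(np.linalg.eigvalsh(unimodular_hermitian(N, rng)) ** 2)
              for _ in range(trials)])
print(round(m2 / N, 3))  # -> 1.0
```

    Higher moments, unlike the second, fluctuate from sample to sample, which is what the three analytic methods in the paper quantify.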

  2. Curcumin and Natural Derivatives Inhibit Ebola Viral Proteins: An In silico Approach

    PubMed Central

    Baikerikar, Shruti

    2017-01-01

    Background: Ebola viral disease is a severe and mostly fatal disease in humans caused by Ebola virus. This virus belongs to the family Filoviridae and is a single-stranded negative-sense virus. There is no specific treatment for this disease, which highlights the need to identify new therapies to control and treat this fatal condition. Curcumin, one of the bioactives of turmeric, has proven antiviral properties. Objective: The current study evaluates the inhibitory activity of curcumin, bisdemethoxycurcumin, demethoxycurcumin, and tetrahydrocurcumin against Zaire Ebola viral proteins (VPs). Materials and Methods: Molecular simulation of the Ebola VPs followed by docking studies with ligands comprising curcumin and related compounds was performed. Results: The highest binding activity for VP40 is −6.3 kcal/mol, VP35 is −8.3 kcal/mol, VP30 is −8.0 kcal/mol, VP24 is −7.7 kcal/mol, glycoprotein is −7.1 kcal/mol, and nucleoprotein is −6.8 kcal/mol. Conclusion: Bisdemethoxycurcumin shows better binding affinity than curcumin for most VPs. The metabolite tetrahydrocurcumin also shows binding affinity comparable to curcumin. These results indicate that curcumin, curcuminoids, and the metabolite tetrahydrocurcumin can be potential lead compounds for developing a new therapy for Ebola viral disease. SUMMARY: Curcumin, bisdemethoxycurcumin, and demethoxycurcumin are active constituents of turmeric; tetrahydrocurcumin is the major metabolite of curcumin formed in the body after consumption and absorption of curcuminoids. Curcuminoids have proven antiviral activity. Bisdemethoxycurcumin showed maximum inhibition of Ebola viral proteins (VPs) among the curcuminoids in the docking procedure, with a docking score as high as −8.3 kcal/mol. Tetrahydrocurcumin showed inhibitory activity against Ebola VPs close to that of curcumin.
Abbreviations Used: EBOV: Ebola virus; GP: Glycoprotein; NP: Nucleoprotein; NPT: isothermal-isobaric ensemble, with amount of substance (N), pressure (P) and temperature (T) conserved; NVT: canonical ensemble, with amount of substance (N), volume (V) and temperature (T) conserved; VP: Viral protein. PMID:29333037

  3. Hamiltonian thermodynamics of charged three-dimensional dilatonic black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dias, Goncalo A. S.; Lemos, Jose P. S.; Centro Multidisciplinar de Astrofisica-CENTRA, Departamento de Fisica, Instituto Superior Tecnico-IST, Universidade Tecnica de Lisboa-UTL, Avenida Rovisco Pais 1, 1049-001 Lisboa

    2008-10-15

    The action for a class of three-dimensional dilaton-gravity theories, with an electromagnetic Maxwell field and a cosmological constant, can be recast in a Brans-Dicke-Maxwell type action, with its free ω parameter. For a negative cosmological constant, these theories have static, electrically charged, and spherically symmetric black hole solutions. Those theories with well formulated asymptotics are studied through a Hamiltonian formalism, and their thermodynamical properties are determined. The theories studied are general relativity (ω → ±∞), a dimensionally reduced cylindrical four-dimensional general relativity theory (ω = 0), and a theory representing a class of theories (ω = −3), all with a Maxwell term. The Hamiltonian formalism is set up in three dimensions through foliations on the right region of the Carter-Penrose diagram, with the bifurcation 1-sphere as the left boundary, and anti-de Sitter infinity as the right boundary. The metric functions on the foliated hypersurfaces and the radial component of the vector potential one-form are the canonical coordinates. The Hamiltonian action is written, the Hamiltonian being a sum of constraints. One finds a new action which yields an unconstrained theory with two pairs of canonical coordinates (M, P_M; Q, P_Q), where M is the mass parameter, which for ω < −3/2 and for ω = ±∞ needs a careful renormalization, P_M is the conjugate momentum of M, Q is the charge parameter, and P_Q is its conjugate momentum. The resulting Hamiltonian is a sum of boundary terms only. A quantization of the theory is performed. The Schroedinger evolution operator is constructed, the trace is taken, and the partition function of the grand canonical ensemble is obtained, where the chemical potential is the scalar electric field φ.
Like the uncharged cases studied previously, the charged black hole entropies differ, in general, from the usual quarter of the horizon area due to the dilaton.

  4. Predictability of two types of El Niño and their climate impacts in boreal spring to summer in coupled models

    NASA Astrophysics Data System (ADS)

    Lee, Ray Wai-Ki; Tam, Chi-Yung; Sohn, Soo-Jin; Ahn, Joong-Bae

    2017-12-01

    The predictability of the two El Niño types and their different impacts on the East Asian climate from boreal spring to summer have been studied, based on coupled general circulation model (CGCM) simulations from the APEC Climate Center (APCC) multi-model ensemble (MME) hindcast experiments. It was found that both the spatial pattern and temporal persistence of canonical (eastern Pacific type) El Niño sea surface temperature (SST) are much better simulated than those for El Niño Modoki (central Pacific type). In particular, most models tend to have El Niño Modoki events that decay too quickly in comparison to those observed. The ability of these models to distinguish between the two types of ENSO has also been assessed. Based on the MME average, the two ENSO types become less and less differentiated in the model environment as the forecast lead time increases. Regarding the climate impact of ENSO, in spring during canonical El Niño, coupled models can reasonably capture the anomalous low-level anticyclone over the western north Pacific (WNP)/Philippine Sea area, as well as rainfall over coastal East Asia. However, most models have difficulties in predicting the springtime dry signal over Indochina to the South China Sea (SCS) when El Niño Modoki occurs. This is related to the location of the simulated anomalous anticyclone in this region, which is displaced eastward over the SCS relative to the observed one. In boreal summer, coupled models still exhibit some skill in predicting the East Asian rainfall during canonical El Niño, but not for El Niño Modoki. Overall, the models' performance in spring-to-summer precipitation forecasts is dictated by their ability to capture the low-level anticyclonic feature over the WNP/SCS area, which in turn is likely affected by the realism of the time-mean monsoon circulation in the models.

  5. Heterogeneous Ensemble Combination Search Using Genetic Algorithm for Class Imbalanced Data Classification

    PubMed Central

    Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo

    2016-01-01

    Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble depends on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, utilises 10-fold cross-validation on training data for evaluating the quality of each candidate ensemble. In order to combine the base classifiers' decisions into the ensemble's output, we used the simple and widely used majority voting approach. The proposed algorithm, along with a random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β)-k Feature Set method to select a better subset of features for classification. We have tested GA-EoC with three benchmarking datasets from the UCI Machine Learning repository, one Alzheimer's disease dataset and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction and we expect that the proposed GA-EoC would perform consistently in other cases. PMID:26764911
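    The search-plus-majority-vote core of such a method can be illustrated in a few lines. Here an exhaustive search over a tiny classifier pool stands in for the genetic search, and the models and imbalanced synthetic data are toy stand-ins for GA-EoC's actual setup.

```python
from itertools import combinations

from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Class-imbalanced toy data (80/20 split between the two classes).
X, y = make_classification(n_samples=300, n_features=10,
                           weights=[0.8, 0.2], random_state=0)

pool = {"lr": LogisticRegression(max_iter=500),
        "nb": GaussianNB(),
        "dt": DecisionTreeClassifier(random_state=0)}

# Score every candidate subset by cross-validated majority vote and keep
# the best; a genetic algorithm replaces this loop when the pool is large.
best = max(
    (combo for r in range(1, len(pool) + 1)
     for combo in combinations(pool, r)),
    key=lambda combo: cross_val_score(
        VotingClassifier([(k, pool[k]) for k in combo], voting="hard"),
        X, y, cv=5).mean(),
)
print(sorted(best))
```

    With three base classifiers there are only seven subsets, so exhaustive search is feasible; the genetic search in GA-EoC matters once the pool makes enumeration intractable.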

  6. Ensemble pharmacophore meets ensemble docking: a novel screening strategy for the identification of RIPK1 inhibitors

    NASA Astrophysics Data System (ADS)

    Fayaz, S. M.; Rajanikant, G. K.

    2014-07-01

    Programmed cell death has been a fascinating area of research, since it continues to raise new challenges and questions despite the tremendous ongoing research in this field. Recently, necroptosis, a programmed form of necrotic cell death, has been implicated in many diseases including neurological disorders. Receptor interacting serine/threonine protein kinase 1 (RIPK1) is an important regulatory protein involved in necroptosis, and inhibition of this protein is essential to stop the necroptotic process and, eventually, cell death. Current structure-based virtual screening methods involve a wide range of strategies, and recently, considering multiple protein structures for pharmacophore extraction has been emphasized as a way to improve the outcome. However, it is important to use the pharmacophoric information fully during docking. Further, in such methods, using the appropriate protein structures for docking is desirable. If not, potential compound hits obtained through pharmacophore-based screening may not have correct ranks and scores after docking. Therefore, a comprehensive integration of different ensemble methods is essential and may provide better virtual screening results. In this study, dual ensemble screening, a novel computational strategy, was used to identify diverse and potent inhibitors against RIPK1. All the pharmacophore features present in the binding site were captured using both the apo and holo protein structures, and an ensemble pharmacophore was built by combining these features. This ensemble pharmacophore was employed in pharmacophore-based screening of the ZINC database. The compound hits thus obtained were subjected to ensemble docking. The leads acquired through docking were further validated through feature evaluation and molecular dynamics simulation.

  7. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.

  8. Hierarchy of N-point functions in the ΛCDM and ReBEL cosmologies

    NASA Astrophysics Data System (ADS)

    Hellwing, Wojciech A.; Juszkiewicz, Roman; van de Weygaert, Rien

    2010-11-01

    In this work we investigate higher-order statistics for the ΛCDM and ReBEL scalar-interacting dark matter models by analyzing 180 h⁻¹ Mpc dark matter N-body simulation ensembles. The N-point correlation functions and the related hierarchical amplitudes, such as skewness and kurtosis, are computed using the counts-in-cells method. Our studies demonstrate that the hierarchical amplitudes Sn of the scalar-interacting dark matter model significantly deviate from the values in the ΛCDM cosmology on scales comparable to and smaller than the screening length rs of a given scalar-interacting model. The corresponding additional forces that enhance the total attractive force exerted on dark matter particles at galaxy scales lower the values of the hierarchical amplitudes Sn. We conclude that hypothetical additional exotic interactions in the dark matter sector should leave detectable markers in the higher-order correlation statistics of the density field. We focused in detail on the redshift evolution of the dark matter field's skewness and kurtosis. From this investigation we find that the deviations from the canonical ΛCDM model introduced by the presence of the "fifth" force attain a maximum value at redshifts 0.5

  9. Markov-chain model of classified atomistic transition states for discrete kinetic Monte Carlo simulations.

    PubMed

    Numazawa, Satoshi; Smith, Roger

    2011-10-01

    Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
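    The rejection-free event-selection step underlying lattice KMC can be sketched as follows. The Arrhenius barriers, attempt frequency and temperature are illustrative numbers, not those of the Au/Ag system studied.

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free KMC step: pick event i with probability r_i / R
    and advance time by an exponential waiting time with mean 1 / R."""
    total = sum(rates)
    x = rng.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if x < acc:
            break
    dt = -math.log(rng.random()) / total
    return i, dt

rng = random.Random(42)
# Arrhenius rates for three barrier classes at kT = 0.025 eV
# (prefactor and barriers are illustrative only).
barriers = [0.20, 0.30, 0.45]
rates = [1e13 * math.exp(-Eb / 0.025) for Eb in barriers]
events = [kmc_step(rates, rng)[0] for _ in range(1000)]
print(events.count(0) > events.count(2))  # -> True: low barriers dominate
```

    Classifying transitions into barrier levels, as the abstract describes, lets fast low-barrier fluctuations be accepted freely so that simulation steps are spent on the rarer, structure-changing events.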

  10. Thermodynamics of new black hole solutions in the Einstein-Maxwell-dilaton gravity

    NASA Astrophysics Data System (ADS)

    Dehghani, M.

    In the present work, the thermodynamics of new black hole solutions to the four-dimensional Einstein-Maxwell-dilaton gravity theory has been studied. The dilaton potential, as the solution to the scalar field equations, has been constructed from a linear combination of three Liouville-type potentials. Three new classes of charged dilatonic black hole solutions, as exact solutions to the coupled equations of the gravitational, electromagnetic and scalar fields, have been introduced. The conserved charge and mass of the new black holes have been calculated by utilizing Gauss's electric law and the Abbott-Deser mass proposal, respectively. Also, the temperature, entropy and electric potential of these new classes of charged dilatonic black holes have been calculated using geometrical approaches. Through a Smarr-type mass formula, the intensive parameters of the black holes have been calculated and the validity of the first law of black hole thermodynamics has been confirmed. A thermal stability and phase transition analysis has been performed using the canonical ensemble method. The heat capacity of the new black holes has been calculated, and the points of type-one and type-two phase transitions, as well as the ranges over which the new charged dilatonic black holes are locally stable, have been determined precisely.

  11. Free energy landscape of protein-like chains with discontinuous potentials

    NASA Astrophysics Data System (ADS)

    Movahed, Hanif Bayat; van Zon, Ramses; Schofield, Jeremy

    2012-06-01

    In this article the configurational space of two simple protein models, consisting of polymers composed of a periodic sequence of four different kinds of monomers, is studied as a function of temperature. In the protein models, hydrogen bond interactions, electrostatic repulsion, and covalent bond vibrations are modeled by discontinuous step, shoulder, and square-well potentials, respectively. The protein-like chains exhibit a secondary alpha helix structure in their folded states at low temperatures, and allow a natural definition of a configuration by considering which beads are bonded. Free energies and entropies of configurations are computed using the parallel tempering method in combination with hybrid Monte Carlo sampling of the canonical ensemble of the discontinuous potential system. The probability of observing the most common configuration is used to analyze the nature of the free energy landscape, and it is found that the model with the least number of possible bonds exhibits a funnel-like free energy landscape at low enough temperature for chains with fewer than 30 beads. For longer proteins, the free energy landscape consists of several minima, where the configuration with the lowest free energy changes significantly as the temperature is lowered, and the probability of observing the most common configuration never approaches one due to the degeneracy of the lowest accessible potential energy.
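    The canonical-ensemble Monte Carlo sampling underlying such calculations reduces, in its simplest form, to the Metropolis rule. The sketch below samples a single harmonic degree of freedom, a toy stand-in for the discontinuous-potential chains, and recovers the canonical variance kT = 1/β.

```python
import math
import random

def metropolis_canonical(energy, x0, beta, steps, step_size, rng):
    """Metropolis sampling of the canonical (NVT) ensemble: accept a trial
    move with probability min(1, exp(-beta * dE))."""
    x, E = x0, energy(x0)
    samples = []
    for _ in range(steps):
        xt = x + rng.uniform(-step_size, step_size)
        Et = energy(xt)
        if Et <= E or rng.random() < math.exp(-beta * (Et - E)):
            x, E = xt, Et
        samples.append(x)
    return samples

rng = random.Random(0)
beta = 2.0                                   # inverse temperature 1/kT
xs = metropolis_canonical(lambda x: 0.5 * x * x, 0.0, beta, 200000, 1.0, rng)
# For U = x^2 / 2 the canonical variance is kT = 1/beta = 0.5.
var = sum(x * x for x in xs) / len(xs)
print(round(var, 1))  # -> 0.5
```

    Parallel tempering, as used in the paper, runs several such chains at different β values and swaps configurations between them, so low-temperature replicas escape local free energy minima.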

  12. Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series

    PubMed Central

    Vicente, Raul; Díaz-Pernas, Francisco J.; Wibral, Michael

    2014-01-01

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of the processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the most computationally demanding aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data.
While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems. PMID:25068489
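    A minimal plug-in transfer entropy estimator conveys the quantity being computed. The sketch below uses a history length of one and discrete states on a single realization, far simpler than the nearest-neighbour estimator and GPU ensemble implementation described in the paper; with pooling over trials instead of time, the same counting logic yields the ensemble estimate.

```python
import numpy as np

def transfer_entropy(src, tgt, k=2):
    """Plug-in transfer entropy TE(src -> tgt) in bits, history length 1,
    for integer-valued series taking values in {0, ..., k-1}:
    TE = sum p(y+, y, x) log2[ p(y+, y, x) p(y) / (p(y+, y) p(y, x)) ]."""
    x, y, yf = src[:-1], tgt[:-1], tgt[1:]
    p = np.zeros((k, k, k))
    for a, b, c in zip(yf, y, x):      # joint counts over (y_next, y, x)
        p[a, b, c] += 1
    p /= p.sum()
    p_yf_y = p.sum(axis=2)             # p(y_next, y)
    p_y_x = p.sum(axis=0)              # p(y, x)
    p_y = p.sum(axis=(0, 2))           # p(y)
    nz = p > 0
    a, b, c = np.nonzero(nz)
    return float(np.sum(p[nz] * np.log2(p[nz] * p_y[b] /
                                        (p_yf_y[a, b] * p_y_x[b, c]))))

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0], y[1:] = 0, x[:-1]                # y copies x with a one-step lag
print(round(transfer_entropy(x, y), 2))  # X drives Y: ~1 bit
print(round(transfer_entropy(y, x), 2))  # no reverse coupling: ~0 bits
```

    The asymmetry of the two directions is the point of the measure: transfer entropy detects directed, time-lagged predictive influence rather than mere correlation.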

  13. Comprehensive modeling of microRNA targets predicts functional non-conserved and non-canonical sites.

    PubMed

    Betel, Doron; Koppal, Anjali; Agius, Phaedra; Sander, Chris; Leslie, Christina

    2010-01-01

    mirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites.

  14. Ensemble stacking mitigates biases in inference of synaptic connectivity.

    PubMed

    Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N

    2018-01-01

    A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: a signing procedure improves the power of unsigned mutual-information-based approaches, and a correction that accounts for differences in the mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network, and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
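    The "linear combination of multiple inference algorithms" can be illustrated with a least-squares stacking sketch for two score vectors. This is a hypothetical two-method example of mine, not the authors' trained ensemble:

    ```python
    def stack_two(scores_a, scores_b, labels):
        """Fit weights (w_a, w_b) by ordinary least squares so that
        w_a*a + w_b*b approximates 0/1 ground-truth connection labels.
        Solves the 2x2 normal equations in closed form."""
        saa = sum(a * a for a in scores_a)
        sbb = sum(b * b for b in scores_b)
        sab = sum(a * b for a, b in zip(scores_a, scores_b))
        say = sum(a * y for a, y in zip(scores_a, labels))
        sby = sum(b * y for b, y in zip(scores_b, labels))
        det = saa * sbb - sab * sab
        wa = (sbb * say - sab * sby) / det
        wb = (saa * sby - sab * say) / det
        return wa, wb
    ```

    If one method's scores already match the labels, the fit puts all the weight on that method; in general the weights trade off the two methods' complementary errors.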

  15. Study of high-performance canonical molecular orbitals calculation for proteins

    NASA Astrophysics Data System (ADS)

    Hirano, Toshiyuki; Sato, Fumitoshi

    2017-11-01

    The canonical molecular orbital (CMO) calculation can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform the CMO calculation of proteins because of its self-consistent field (SCF) convergence problem and expensive computational cost. To reliably obtain the CMO of proteins, we research and develop high-performance CMO applications and perform experimental studies. We have proposed the third-generation density-functional calculation method for solving the SCF problem, which is more advanced than the file and direct methods. Our method is based on Cholesky decomposition for the two-electron integrals and the modified grid-free method for the pure-XC term evaluation. By using the third-generation density-functional calculation method, the Coulomb, Fock-exchange, and pure-XC terms can be obtained by simple linear-algebraic procedures in the SCF loop. Therefore, we can expect good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of large molecules, not only must the expensive computational cost be overcome, but a good initial guess is also required for safe SCF convergence. In order to prepare a precise initial guess for the macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. The QCLO has the characteristics of both localized and canonical orbitals in a certain region of the molecule. We have succeeded in the CMO calculations of proteins by using the QCLO method. For simplified and semi-automated application of the QCLO method, we have also developed a Python-based program, QCLObot.

  16. Currency crisis indication by using ensembles of support vector machine classifiers

    NASA Astrophysics Data System (ADS)

    Ramli, Nor Azuana; Ismail, Mohd Tahir; Wooi, Hooy Chee

    2014-07-01

    There are many methods that have been tried in the analysis of currency crises. However, not all methods provide accurate indications. This paper introduces an ensemble of classifiers using Support Vector Machines, which has not previously been applied to analyses of currency crises, with the aim of increasing indication accuracy. The proposed ensemble classifiers' performance is measured using the percentage of accuracy, root mean squared error (RMSE), area under the Receiver Operating Characteristic (ROC) curve and Type II error. The performance of the ensemble of Support Vector Machine classifiers is compared with that of a single Support Vector Machine classifier, and both classifiers are tested on a data set from 27 countries with 12 macroeconomic indicators for each country. Our analyses show that the ensemble of Support Vector Machine classifiers outperforms the single Support Vector Machine classifier on the problem of indicating a currency crisis, in terms of a range of standard measures for comparing the performance of classifiers.
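    The evaluation measures listed in the abstract can be computed directly from binary crisis labels and predictions. A minimal sketch (my own helper, not the authors' code; Type II error is taken as the fraction of true crises that were missed):

    ```python
    import math

    def crisis_metrics(y_true, y_pred):
        """Accuracy, RMSE, and Type II error (predicting 'no crisis' when a
        crisis actually occurred) for 0/1 crisis indicators."""
        n = len(y_true)
        acc = sum(t == p for t, p in zip(y_true, y_pred)) / n
        rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
        n_crises = sum(y_true)
        type2 = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred)) / n_crises
        return acc, rmse, type2
    ```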

  17. A variational ensemble scheme for noisy image data assimilation

    NASA Astrophysics Data System (ADS)

    Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne

    2014-05-01

    Data assimilation techniques aim at recovering the trajectory of a system's state variables, denoted X, over time from partially observed noisy measurements of the system, denoted Y. These procedures, which couple the dynamics and noisy measurements of the system, fulfill a twofold objective: on one hand, they provide a denoising (or reconstruction) procedure for the data through a given model framework, and on the other hand, they provide estimation procedures for unknown parameters of the dynamics. A standard variational data assimilation problem can be formulated as the minimization of the following objective function with respect to the initial discrepancy, η, from the background initial guess: J(η(x)) = (1/2)||X_b(x) - X(t_0,x)||²_B + (1/2)∫_{t_0}^{t_f} ||H(X(t,x)) - Y(t,x)||²_R dt, (1) where the observation operator H links the state variable and the measurements. The cost function can be interpreted as the log-likelihood function associated with the a posteriori distribution of the state given the past history of measurements and the background. In this work, we aim at studying ensemble-based optimal control strategies for data assimilation. Such a formulation nicely combines the ingredients of ensemble Kalman filters and variational data assimilation (4DVar). It is also formulated as the minimization of the objective function (1), but, similarly to ensemble filters, it introduces in its objective function an empirical ensemble-based background-error covariance defined as: B ≡ <(X_b - <X_b>)(X_b - <X_b>)^T>. (2) Thus, it works in an off-line smoothing mode rather than on the fly like sequential filters. The resulting ensemble variational data assimilation technique corresponds to a relatively new family of methods [1,2,3].
It presents two main advantages: first, it no longer requires constructing the adjoint of the dynamics' tangent linear operator, which is a considerable advantage with respect to the method's implementation, and second, it enables the handling of a flow-dependent background-error covariance matrix that can be consistently adjusted to the background error. These advantages come, however, at the cost of a reduced-rank modeling of the solution space. The B matrix is at most of rank N - 1 (N is the size of the ensemble), which is considerably lower than the dimension of the state space. This rank deficiency may introduce spurious correlation errors, which particularly impact the quality of results on a high-resolution computing grid. The common strategy to suppress these distant correlations in ensemble Kalman techniques is through localization procedures. In this paper we present key theoretical properties associated with different choices of methods involved in this setup, and experimentally compare the performance of several variants of the ensemble technique of interest with an incremental 4DVar method. The comparisons have been carried out on the basis of a Shallow Water model, with both synthetic data and real observations. We particularly address the potential pitfalls and advantages of the different methods. The results indicate an advantage in favor of the ensemble technique, both in quality and computational cost, when dealing with incomplete observations. We highlight, as a premise of using ensemble variational assimilation, that the initial perturbation used to build the initial ensemble has to fit the physics of the observed phenomenon. We also apply the method to a stochastic shallow-water model which incorporates an uncertainty expression for the subgrid stress tensor related to the ensemble spread. References [1] A. C. Lorenc, The potential of the ensemble Kalman filter for NWP - a comparison with 4D-Var, Quart. J. Roy.
Meteor. Soc., Vol. 129, pp. 3183-3203, 2003. [2] C. Liu, Q. Xiao, and B. Wang, An Ensemble-Based Four-Dimensional Variational Data Assimilation Scheme. Part I: Technical Formulation and Preliminary Test, Mon. Wea. Rev., Vol. 136(9), pp. 3363-3373, 2008. [3] M. Buehner, Ensemble-derived stationary and flow-dependent background-error covariances: Evaluation in a quasi-operational NWP setting, Quart. J. Roy. Meteor. Soc., Vol. 131(607), pp. 1013-1043, April 2005.
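    The empirical ensemble-based background-error covariance of equation (2) amounts to an outer-product average over ensemble members. A minimal pure-Python sketch (the 1/(N-1) normalization is my assumption; the rank-deficiency discussed in the abstract is visible directly, since B built from N members has rank at most N-1):

    ```python
    def ensemble_background_cov(ensemble):
        """Empirical background-error covariance B from N ensemble members,
        each a list of d state components: B = <(Xb - <Xb>)(Xb - <Xb>)^T>,
        here with the unbiased 1/(N-1) normalization."""
        n, d = len(ensemble), len(ensemble[0])
        mean = [sum(m[i] for m in ensemble) / n for i in range(d)]
        return [[sum((m[i] - mean[i]) * (m[j] - mean[j]) for m in ensemble) / (n - 1)
                 for j in range(d)] for i in range(d)]
    ```

    With two members the result is a rank-one matrix, illustrating why localization is needed on high-dimensional grids.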

  18. Sampling-based ensemble segmentation against inter-operator variability

    NASA Astrophysics Data System (ADS)

    Huo, Jing; Okada, Kazunori; Pope, Whitney; Brown, Matthew

    2011-03-01

    Inconsistency and a lack of reproducibility are commonly associated with semi-automated segmentation methods. In this study, we developed an ensemble approach to improve reproducibility and applied it to glioblastoma multiforme (GBM) brain tumor segmentation on T1-weighted contrast-enhanced MR volumes. The proposed approach combines sampling-based simulations and ensemble segmentation into a single framework; it generates a set of segmentations by perturbing the user initialization and user-specified internal parameters, then fuses the set of segmentations into a single consensus result. Three combination algorithms were applied: majority voting, averaging and expectation-maximization (EM). The reproducibility of the proposed framework was evaluated in a controlled experiment on 16 tumor cases from a multicenter drug trial. The ensemble framework had significantly better reproducibility than the individual base Otsu thresholding method (p<.001).
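    Of the three fusion rules named above, majority voting is the simplest to state: a pixel is foreground in the consensus if it is foreground in more than half of the perturbed segmentations. A minimal sketch over flattened binary masks (illustrative only, not the study's implementation; ties go to background here by assumption):

    ```python
    def majority_vote(masks):
        """Fuse equal-length binary segmentation masks by per-pixel
        majority vote; a strict majority is required for foreground."""
        n = len(masks)
        return [1 if sum(pixel_votes) * 2 > n else 0
                for pixel_votes in zip(*masks)]
    ```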

  19. Ensemble approach for differentiation of malignant melanoma

    NASA Astrophysics Data System (ADS)

    Rastgoo, Mojdeh; Morel, Olivier; Marzani, Franck; Garcia, Rafael

    2015-04-01

    Melanoma is the deadliest type of skin cancer, yet it is the most treatable kind when diagnosed early. The early prognosis of melanoma is a challenging task for both clinicians and dermatologists. Given the importance of early diagnosis, and in order to assist dermatologists, we propose an automated framework based on ensemble learning methods and dermoscopy images to differentiate melanoma from dysplastic and benign lesions. The evaluation of our framework on a recent public dermoscopy benchmark (the PH2 dataset) indicates the potential of the proposed method. Our evaluation, using only global features, revealed that ensembles such as random forest perform better than a single learner. Using a random forest ensemble and a combination of color and texture features, our framework achieved its highest sensitivity of 94% and specificity of 92%.

  20. Total probabilities of ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2016-04-01

    Ensemble forecasting has long been used in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters that differ in space and time, but still give a spatially and temporally consistent output. However, their method is computationally complex for our large number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS; http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows than the forecast skill at lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we now post-process all model outputs to find a total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, while assuring that they have some spatial correlation by adding a spatial penalty in the calibration process.
This can in some cases have a slight negative impact on the calibration error, but makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecasts skills in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.
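    For orientation, the EMOS approach of Gneiting et al. (2005) cited above fits a single Gaussian predictive distribution whose mean and variance are affine in the raw ensemble's mean and variance. A minimal sketch (the coefficients a, b, c, d are normally fitted by minimum CRPS over a training set; here they are simply given as inputs):

    ```python
    import math
    import statistics

    def emos_predict(members, a, b, c, d):
        """EMOS-style predictive distribution N(a + b*mean, c + d*var)
        from one raw forecast ensemble: a, b correct bias, c, d correct
        dispersion. Returns (predictive mean, predictive std. dev.)."""
        m = statistics.mean(members)
        v = statistics.pvariance(members)
        return a + b * m, math.sqrt(c + d * v)
    ```

    With a = c = 0 and b = d = 1 this reduces to the raw ensemble's own mean and spread, i.e. no correction.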

  1. Sequence Based Prediction of Antioxidant Proteins Using a Classifier Selection Strategy

    PubMed Central

    Zhang, Lina; Zhang, Chengjin; Gao, Rui; Yang, Runtao; Song, Qing

    2016-01-01

    Antioxidant proteins perform significant functions in maintaining oxidation/antioxidation balance and have potential as therapies for some diseases. Accurate identification of antioxidant proteins could contribute to revealing physiological processes of oxidation/antioxidation balance and developing novel antioxidation-based drugs. In this study, an ensemble method is presented to predict antioxidant proteins with hybrid features, incorporating SSI (Secondary Structure Information), PSSM (Position Specific Scoring Matrix), RSA (Relative Solvent Accessibility), and CTD (Composition, Transition, Distribution). The prediction results of the ensemble predictor are determined by an average of the prediction results of multiple base classifiers. Based on a classifier selection strategy, we obtain an optimal ensemble classifier composed of RF (Random Forest), SMO (Sequential Minimal Optimization), NNA (Nearest Neighbor Algorithm), and J48, with an accuracy of 0.925. A Relief method combined with IFS (Incremental Feature Selection) is adopted to obtain optimal features from the hybrid features. With the optimal features, the ensemble method achieves improved performance with a sensitivity of 0.95, a specificity of 0.93, an accuracy of 0.94, and an MCC (Matthews Correlation Coefficient) of 0.880, far better than the existing method. To evaluate the prediction performance objectively, the proposed method is compared with existing methods on the same independent testing dataset. Encouragingly, our method performs better than previous studies. In addition, our method achieves more balanced performance with a sensitivity of 0.878 and a specificity of 0.860. These results suggest that the proposed ensemble method can be a potential candidate for antioxidant protein prediction. For public access, we develop a user-friendly web server for antioxidant protein identification that is freely accessible at http://antioxidant.weka.cc. PMID:27662651

  2. Observational constraints on Tachyon and DBI inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sheng; Liddle, Andrew R., E-mail: sl277@sussex.ac.uk, E-mail: arl@roe.ac.uk

    2014-03-01

    We present a systematic method for the evaluation of perturbation observables in non-canonical single-field inflation models within the slow-roll approximation, which, allied with field redefinitions, enables predictions to be established for a wide range of models. We use this to investigate various non-canonical inflation models, including Tachyon inflation and DBI inflation. The Lambert W function is used extensively in our method for the evaluation of observables. In the Tachyon case, in the slow-roll approximation the model can be approximated by a canonical field with a redefined potential, which yields predictions in better agreement with observations than the canonical equivalents. For DBI inflation models we consider contributions from both the scalar potential and the warp geometry. In the case of a quartic potential, we find a formula for the observables under both non-relativistic (sound speed c_s^2 ∼ 1) and relativistic (c_s^2 ≪ 1) behaviour of the scalar DBI inflaton. For a quadratic potential we find two branches in the non-relativistic c_s^2 ∼ 1 case, determined by the competition of model parameters, while for the relativistic case c_s^2 → 0 we find consistency with results already in the literature. We present a comparison to the latest Planck satellite observations. Most of the non-canonical models we investigate, including the Tachyon, are better fits to the data than canonical models with the same potential, but we find that DBI models in the slow-roll regime have difficulty in matching the data.

  3. Impact of state updating and multi-parametric ensemble for streamflow hindcasting in European river basins

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Rakovec, O.; Kumar, R.; Samaniego, L. E.

    2015-12-01

    Accurate and reliable streamflow prediction is essential to mitigate the social and economic damage caused by water-related disasters such as floods and droughts. Sequential data assimilation (DA) may facilitate improved streamflow prediction by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is often ignored, mainly due to practical limitations in specifying modeling uncertainty with limited ensemble members. However, if parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of the model ensemble may be insufficient to capture the dynamics of the observations, which may degrade predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we evaluate the impacts of streamflow data assimilation over European river basins. In particular, a multi-parametric ensemble approach is tested to account for the effects of parametric uncertainty in DA. Because augmentation of parameters is not required within an assimilation window, the approach can be more stable with limited ensemble members and has potential for operational use. To account for the response times and non-Gaussian characteristics of internal hydrologic processes, lagged particle filtering is utilized. The presentation will focus on the gains and limitations of streamflow data assimilation and the multi-parametric ensemble method over large-scale basins.
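    Particle filters like the one mentioned above rely on a resampling step: particles are redrawn with probability proportional to their importance weights, concentrating the ensemble on states consistent with the observations. A minimal multinomial resampling sketch (generic, not the lagged particle filter of the study):

    ```python
    import random

    def resample(particles, weights, seed=0):
        """Multinomial resampling: draw len(particles) new particles,
        each chosen with probability proportional to its weight."""
        rng = random.Random(seed)
        total = sum(weights)
        cum, s = [], 0.0
        for w in weights:
            s += w / total
            cum.append(s)
        out = []
        for _ in particles:
            u = rng.random()
            i = 0
            while i < len(cum) - 1 and cum[i] <= u:
                i += 1
            out.append(particles[i])
        return out
    ```

    Zero-weight particles are never selected, so after resampling the ensemble consists only of states supported by the data.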

  4. Uncertainty Quantification in Alchemical Free Energy Methods.

    PubMed

    Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V

    2018-06-12

    Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods, implementing different accelerated sampling techniques and free energy estimators, are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation (an ensemble of independent MD simulations), which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
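    The core of the ensemble-simulation idea is that independent replicas give both a central estimate and a direct measure of its uncertainty. A minimal sketch (mean and standard error over replica free-energy estimates; this is a generic statistical summary, not the authors' full protocol):

    ```python
    import statistics

    def ensemble_estimate(replica_values):
        """Central estimate and uncertainty from an ensemble of independent
        replica simulations: the mean of the per-replica free-energy values
        and its standard error, stdev / sqrt(N)."""
        n = len(replica_values)
        mean = statistics.mean(replica_values)
        sem = statistics.stdev(replica_values) / n ** 0.5
        return mean, sem
    ```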

  5. "Play It Again, Billy, but This Time with More Mistakes": Divergent Improvisation Activities for the Jazz Ensemble

    ERIC Educational Resources Information Center

    Healy, Daniel J.

    2014-01-01

    The jazz ensemble represents an important performance opportunity in many school music programs. Due to the cultural history of jazz as an improvisatory art form, school jazz ensemble directors must address methods of teaching improvisation concepts to young students. Progress has been made in the field of prescribed improvisation activities and…

  6. xEMD procedures as a data - Assisted filtering method

    NASA Astrophysics Data System (ADS)

    Machrowska, Anna; Jonak, Józef

    2018-01-01

    The article presents the possibility of using the Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD), Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Improved Complete Ensemble Empirical Mode Decomposition (ICEEMD) algorithms for mechanical system condition monitoring applications. Results are presented for the xEMD procedures applied to vibration signals of a system in different states of wear.

  7. Random discrete linear canonical transform.

    PubMed

    Wei, Deyun; Wang, Ruikui; Li, Yuan-Min

    2016-12-01

    Linear canonical transforms (LCTs) are a family of integral transforms with wide applications in optical, acoustical, electromagnetic, and other wave propagation problems. In this paper, we propose the random discrete linear canonical transform (RDLCT) by randomizing the kernel transform matrix of the discrete linear canonical transform (DLCT). The RDLCT inherits excellent mathematical properties from the DLCT along with some distinctive features of its own. It has a greater degree of randomness because of the randomization in terms of both eigenvectors and eigenvalues. Numerical simulations demonstrate that the RDLCT has the important feature that both the magnitude and phase of its output are random. As an important application, the RDLCT can be used for image encryption. The simulation results demonstrate that the proposed encryption method is a security-enhanced image encryption scheme.

  8. Continuous Easy-Plane Deconfined Phase Transition on the Kagome Lattice

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-Feng; He, Yin-Chen; Eggert, Sebastian; Moessner, Roderich; Pollmann, Frank

    2018-03-01

    We use large scale quantum Monte Carlo simulations to study an extended Hubbard model of hard core bosons on the kagome lattice. In the limit of strong nearest-neighbor interactions at 1/3 filling, the interplay between frustration and quantum fluctuations leads to a valence bond solid ground state. The system undergoes a quantum phase transition to a superfluid phase as the interaction strength is decreased. It is still under debate whether the transition is weakly first order or represents an unconventional continuous phase transition. We present a theory in terms of an easy-plane noncompact CP^1 gauge theory describing the phase transition at 1/3 filling. Utilizing large scale quantum Monte Carlo simulations with parallel tempering in the canonical ensemble up to 15552 spins, we provide evidence that the phase transition is continuous at exactly 1/3 filling. A careful finite size scaling analysis reveals an unconventional scaling behavior hinting at deconfined quantum criticality.

  9. Thermal stability of black holes with arbitrary hairs

    NASA Astrophysics Data System (ADS)

    Sinha, Aloke Kumar

    2018-02-01

    We have previously derived the criteria for thermal stability of charged rotating black holes, for horizon areas that are large relative to the Planck area (in these dimensions). In this paper, we generalize them to black holes with arbitrary hairs. The derivation uses results of loop quantum gravity and equilibrium statistical mechanics of the grand canonical ensemble; there is no explicit use of classical spacetime geometry at all in this analysis. The only assumption is that the mass of the black hole is a function of its horizon area and all the hairs. Our stability criteria are then tested in detail against some specific black holes, whose metrics provide us with explicit relations for the dependence of the mass on the area and other hairs of the black holes. This enables us to predict which of these black holes are expected to be thermally unstable under Hawking radiation.

  10. On the thermodynamics of the black hole and hairy black hole transitions in the asymptotically flat spacetime with a box

    NASA Astrophysics Data System (ADS)

    Peng, Yan; Wang, Bin; Liu, Yunqi

    2018-03-01

    We study the asymptotically flat quasi-local black hole/hairy black hole model with nonzero mass of the scalar field. We disclose the effects of the scalar mass on transitions in a grand canonical ensemble through the condensation behavior of the parameter ψ_2, similar to approaches in holographic theories. We find that a more negative scalar mass makes the phase transition easier. We also obtain the analytical relation ψ_2 ∝ (T_c - T)^{1/2} around the critical phase transition points, implying a second-order phase transition. Besides the parameter ψ_2, we show that metric solutions can be used to disclose properties of the transitions. In this work, we observe that phase transitions in a box are strikingly similar to holographic transitions in AdS gravity, and the similarity provides insights into holographic theories.

  11. Thermal stability of charged rotating quantum black holes

    NASA Astrophysics Data System (ADS)

    Sinha, Aloke Kumar; Majumdar, Parthasarathi

    2017-12-01

    Criteria for thermal stability of charged rotating black holes of any dimension are derived for horizon areas that are large relative to the Planck area (in these dimensions). The derivation is based on generic assumptions of quantum geometry, supported by some results of loop quantum gravity, and equilibrium statistical mechanics of the Grand Canonical ensemble. There is no explicit use of classical spacetime geometry in this analysis. The only assumption is that the mass of the black hole is a function of its horizon area, charge and angular momentum. Our stability criteria are then tested in detail against specific classical black holes in spacetime dimensions 4 and 5, whose metrics provide us with explicit relations for the dependence of the mass on the charge and angular momentum of the black holes. This enables us to predict which of these black holes are expected to be thermally unstable under Hawking radiation.

  12. First principles view on chemical compound space: Gaining rigorous atomistic control of molecular properties

    DOE PAGES

    von Lilienfeld, O. Anatole

    2013-02-26

    A well-defined notion of chemical compound space (CCS) is essential for gaining rigorous control of properties through variation of elemental composition and atomic configurations. Here, we give an introduction to an atomistic first principles perspective on CCS. First, CCS is discussed in terms of variational nuclear charges in the context of conceptual density functional and molecular grand-canonical ensemble theory. Thereafter, we revisit the notion of compound pairs, related to each other via “alchemical” interpolations involving fractional nuclear charges in the electronic Hamiltonian. We address Taylor expansions in CCS, property nonlinearity, improved predictions using reference compound pairs, and the ounce-of-gold prizemore » challenge to linearize CCS. Finally, we turn to machine learning of analytical structure property relationships in CCS. Here, these relationships correspond to inferred, rather than derived through variational principle, solutions of the electronic Schrödinger equation.« less

  13. Electrosorption of a modified electrode in the vicinity of phase transition: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Gavilán Arriazu, E. M.; Pinto, O. A.

    2018-03-01

    We present a Monte Carlo study of the electrosorption of an electroactive species on a modified electrode. The surface of the electrode is modified by the irreversible adsorption of a non-electroactive species, which is able to block a percentage of the adsorption sites. This generates an electrode with variable site connectivity. A second, electroactive species is adsorbed at surface vacancies and can interact repulsively with itself. In particular, we are interested in the effect of the non-electroactive species near the critical regime, where the c(2 × 2) structure is formed. Lattice-gas models and Monte Carlo simulations in the Grand Canonical Ensemble are used. The analysis is based on the study of voltammograms, order parameters, isotherms and the configurational entropy per site, at several values of the energies and coverage degrees of the non-electroactive species.
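    A grand canonical lattice-gas Monte Carlo simulation of the kind used above proposes single-site insertions and deletions, accepted with the Metropolis probability min(1, exp(-β(ΔE - μΔN))). The following is a minimal illustrative sketch for a clean periodic lattice, not the authors' model (which additionally includes irreversibly adsorbed blocking sites and voltammetric analysis):

    ```python
    import math
    import random

    def gc_lattice_gas(L, mu, eps, beta, sweeps, seed=0):
        """Metropolis Monte Carlo for a lattice gas on an L x L periodic
        square lattice in the grand canonical ensemble. Moves are single-site
        insertions/deletions accepted with min(1, exp(-beta*(dE - mu*dN))).
        eps > 0 is a nearest-neighbour repulsion. Returns the mean coverage."""
        rng = random.Random(seed)
        occ = [[0] * L for _ in range(L)]
        n_occ, cov_sum = 0, 0.0
        for _ in range(sweeps):
            for _ in range(L * L):
                i, j = rng.randrange(L), rng.randrange(L)
                nn = (occ[(i + 1) % L][j] + occ[(i - 1) % L][j]
                      + occ[i][(j + 1) % L] + occ[i][(j - 1) % L])
                if occ[i][j] == 0:   # propose insertion: dE = eps*nn, dN = +1
                    x = -beta * (eps * nn - mu)
                else:                # propose deletion: dE = -eps*nn, dN = -1
                    x = -beta * (-eps * nn + mu)
                if x >= 0 or rng.random() < math.exp(x):
                    occ[i][j] ^= 1
                    n_occ += 1 if occ[i][j] else -1
            cov_sum += n_occ / (L * L)
        return cov_sum / sweeps
    ```

    As a sanity check, with eps = 0 the sites decouple and the equilibrium coverage is 1/(1 + exp(-beta*mu)), i.e. 1/2 at mu = 0, rising toward 1 for positive mu and falling toward 0 for negative mu.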

  14. Fluctuation Pressure Assisted Ejection of DNA From Bacteriophage

    NASA Astrophysics Data System (ADS)

    Harrison, Michael J.

    2011-03-01

    The role of thermal pressure fluctuations excited within tightly packaged DNA while it is ejected from protein capsid shells is discussed in a model calculation. At equilibrium before ejection, we assume the DNA is folded many times into a bundle of parallel segments that forms an equilibrium conformation at minimum free energy, pressing tightly against the capsid walls. Using a canonical ensemble at temperature T, we calculate internal pressure fluctuations against a slowly moving or static capsid mantle for an elastic continuum model of the folded DNA bundle. It is found that fluctuating pressures on the capsid, arising from thermal excitation of longitudinal acoustic vibrations in the bundle with wavelengths shorter than the bend persistence length, may have root-mean-square values of several tens of atmospheres for typically small phage dimensions. Comparisons are given with measured data on three mutants of lambda phage with different base-pair lengths and total genome ejection pressures.

  15. Thermodynamics and glassy phase transition of regular black holes

    NASA Astrophysics Data System (ADS)

    Javed, Wajiha; Yousaf, Z.; Akhtar, Zunaira

    2018-05-01

    This paper studies the thermodynamic properties of the phase transition of regular charged black holes (BHs). In this context, we have considered two different forms of BH metrics, supplemented with exponential and logistic distribution functions, and investigated the phase transition in the grand canonical ensemble. After exploring the corresponding Ehrenfest equations, we found a second-order phase transition at the critical points. To check the critical behavior of regular BHs, we have evaluated explicit relations for the critical temperature, pressure and volume, and drawn graphs at constant values of the Smarr mass. We found that for the BH metric with the exponential configuration function, the phase-transition curves diverge near the critical points, while a glassy phase transition is observed for the Ayón-Beato-García-Bronnikov (ABGB) BH in n = 5 dimensions.

  16. Physisorption and desorption of H2, HD and D2 on amorphous solid water ice. Effect of isotopologue mixing on the statistical population of adsorption sites.

    PubMed

    Amiaud, Lionel; Fillion, Jean-Hugues; Dulieu, François; Momeni, Anouchah; Lemaire, Jean-Louis

    2015-11-28

    We study the adsorption and desorption of three isotopologues of molecular hydrogen mixed on 10 ML of porous amorphous solid water (ASW) ice deposited at 10 K. Temperature-programmed desorption (TPD) of H2, D2 and HD adsorbed at 10 K has been performed with different mixings. Various coverages of H2, HD and D2 have been explored, and a model taking into account all species adsorbed on the surface is presented in detail. The model we propose allows us to extract the parameters required to fully reproduce the desorption of H2, HD and D2 for various coverages and mixtures in the sub-monolayer regime. The model is based on a statistical description of the process in a grand-canonical ensemble, where the adsorbed molecules are described by a Fermi-Dirac distribution.
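    The statistical ingredient named above, singly occupied adsorption sites in equilibrium with a reservoir, leads to Fermi-Dirac occupation statistics. A minimal sketch, with site energies, chemical potential, and temperature as purely illustrative values rather than parameters fitted in the study:

    ```python
    import math

    def site_populations(energies, mu, T):
        """Mean occupation of single-occupancy adsorption sites in the
        grand canonical ensemble (Fermi-Dirac statistics).
        energies and mu in eV; T in K."""
        kB = 8.617e-5  # Boltzmann constant, eV/K
        return [1.0 / (math.exp((e - mu) / (kB * T)) + 1.0) for e in energies]

    # Illustrative: three site energies straddling the chemical potential at 10 K.
    # The deepest (most negative) sites fill first; at 10 K the distribution
    # is nearly a step function at e = mu.
    pops = site_populations([-0.05, -0.03, -0.01], mu=-0.02, T=10.0)
    ```

    In a TPD model of mixed isotopologues, the same distribution with species-dependent chemical potentials governs how H2, HD and D2 compete for the deepest binding sites.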

  17. Monte Carlo simulation of biomolecular systems with BIOMCSIM

    NASA Astrophysics Data System (ADS)

    Kamberaj, H.; Helms, V.

    2001-12-01

    A new Monte Carlo simulation program, BIOMCSIM, is presented that has been developed in particular to simulate the behaviour of biomolecular systems, leading to insights into and understanding of their functions. The computational complexity of Monte Carlo simulations of high-density systems is enormous, whether for large molecules like proteins immersed in a solvent medium or for the dynamics of water molecules in a protein cavity. The program presented in this paper addresses these challenges, putting special emphasis on simulations in grand canonical ensembles. It uses different biasing techniques to accelerate the convergence of simulations, and periodic load balancing in its parallel version to make maximal use of the available computer power. In periodic systems, the long-ranged electrostatic interactions can be treated by Ewald summation. The program is modularly organized and implemented in an ANSI C dialect so as to enhance its modifiability. Its performance is demonstrated in benchmark applications for the proteins BPTI and Cytochrome c Oxidase.

  18. Modeling the Thermoelectric Properties of Ti5O9 Magnéli Phase Ceramics

    NASA Astrophysics Data System (ADS)

    Pandey, Sudeep J.; Joshi, Giri; Wang, Shidong; Curtarolo, Stefano; Gaume, Romain M.

    2016-11-01

    Magnéli phase Ti5O9 ceramics with 200-nm grain size were fabricated by hot-pressing nanopowders of titanium and anatase TiO2 at 1223 K. The thermoelectric properties of these ceramics were investigated from room temperature to 1076 K. We show that the experimental variation of the electrical conductivity with temperature follows a non-adiabatic small-polaron model with an activation energy of 64 meV. In this paper, we propose a modified Heikes-Chaikin-Beni model, based on a canonical ensemble of closely spaced titanium t2g levels, to account for the temperature dependence of the Seebeck coefficient. Modeling of the thermal conductivity data reveals that the phonon contribution remains constant throughout the investigated temperature range. The thermoelectric figure of merit ZT of this nanoceramic material reaches 0.3 at 1076 K.
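    The two transport expressions named in the abstract can be sketched using their common textbook forms: a non-adiabatic small-polaron conductivity σ(T) ∝ (1/T)·exp(−Ea/kBT), using the quoted 64 meV activation energy, and the Heikes-Chaikin-Beni Seebeck coefficient S = −(kB/e)·ln[g(1−c)/c]. The prefactor, degeneracy factor, and carrier concentration below are illustrative assumptions, not the paper's modified model:

    ```python
    import math

    KB_EV = 8.617e-5       # Boltzmann constant, eV/K
    KB_OVER_E = 86.17e-6   # kB/e, V/K

    def sigma_small_polaron(T, A=1.0, Ea=0.064):
        """Non-adiabatic small-polaron conductivity, sigma = (A/T) exp(-Ea/kB T),
        with Ea = 64 meV as quoted.  A is an arbitrary prefactor."""
        return (A / T) * math.exp(-Ea / (KB_EV * T))

    def seebeck_heikes(c, g=2.0):
        """Heikes-Chaikin-Beni Seebeck coefficient (V/K) for carrier
        concentration c per site with degeneracy factor g."""
        return -KB_OVER_E * math.log(g * (1.0 - c) / c)
    ```

    Since Ea/kBT > 1 over the measured range, the thermally activated factor dominates the 1/T prefactor and the conductivity rises with temperature, as observed.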

  19. Entropy of the Bose-Einstein-condensate ground state: Correlation versus ground-state entropy

    NASA Astrophysics Data System (ADS)

    Kim, Moochan B.; Svidzinsky, Anatoly; Agarwal, Girish S.; Scully, Marlan O.

    2018-01-01

    Calculation of the entropy of an ideal Bose-Einstein condensate (BEC) in a three-dimensional trap reveals unusual, previously unrecognized features of the canonical ensemble. It is found that, at any temperature, the entropy of the Bose gas is equal to the entropy of the excited particles, although the entropy of the particles in the ground state is nonzero. We explain this by considering the correlations between the ground-state particles and particles in the excited states. These correlations lead to a correlation entropy which is exactly equal to the contribution from the ground state. The correlations themselves arise from the fact that we have a fixed number of particles obeying quantum statistics. We present results for correlation functions between the ground and excited states in a Bose gas, so as to clarify the role of fluctuations in the system. We also report the sub-Poissonian nature of the ground-state fluctuations.
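    The fixed-particle-number constraint responsible for these correlations can be made concrete with the standard recursion for the canonical partition functions of an ideal Bose gas, Z_M = (1/M) Σ_k C_k Z_{M−k} with C_k = Σ_j g_j e^{−kβε_j}, and the associated formula for the mean ground-state occupation. The harmonic-trap level scheme, particle number, and temperatures below are illustrative, not those of the paper:

    ```python
    import math

    def bose_canonical(N, beta, levels):
        """Canonical partition functions Z_0..Z_N for N ideal bosons via the
        recursion Z_M = (1/M) sum_k C_k Z_{M-k}, C_k = sum_j g_j exp(-k beta e_j).
        levels: list of (energy, degeneracy)."""
        C = [sum(g * math.exp(-k * beta * e) for e, g in levels)
             for k in range(1, N + 1)]
        Z = [1.0]
        for M in range(1, N + 1):
            Z.append(sum(C[k - 1] * Z[M - k] for k in range(1, M + 1)) / M)
        return Z

    def ground_state_occupation(N, beta, levels):
        """Mean number of particles in the (non-degenerate) lowest level,
        <n0> = sum_k exp(-k beta e0) Z_{N-k} / Z_N, in the canonical ensemble."""
        Z = bose_canonical(N, beta, levels)
        e0 = min(e for e, _ in levels)
        return sum(math.exp(-k * beta * e0) * Z[N - k]
                   for k in range(1, N + 1)) / Z[N]

    # Illustrative 3D isotropic harmonic trap: e_n = n (units of h_bar*omega,
    # zero-point energy dropped), degeneracy (n+1)(n+2)/2, truncated at n = 19.
    levels = [(float(n), (n + 1) * (n + 2) // 2) for n in range(20)]
    ```

    At low temperature (large β) essentially all N particles sit in the ground level, while at higher temperature the occupation drains into the excited spectrum; the same Z_M values also give the canonical entropy via S = kB(ln Z_N + β⟨E⟩).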

  20. From a structural average to the conformational ensemble of a DNA bulge

    PubMed Central

    Shi, Xuesong; Beauchamp, Kyle A.; Harbury, Pehr B.; Herschlag, Daniel

    2014-01-01

    Direct experimental measurements of conformational ensembles are critical for understanding macromolecular function, but traditional biophysical methods do not directly report the solution ensemble of a macromolecule. Small-angle X-ray scattering interferometry has the potential to overcome this limitation by providing the instantaneous distance distribution between pairs of gold-nanocrystal probes conjugated to a macromolecule in solution. Our X-ray interferometry experiments reveal an increasing bend angle of DNA duplexes with bulges of one, three, and five adenosine residues, consistent with previous FRET measurements, and further reveal an increasingly broad conformational ensemble with increasing bulge length. The distance distributions for the AAA bulge duplex (3A-DNA) with six different Au-Au pairs provide strong evidence against a simple elastic model in which fluctuations occur about a single conformational state. Instead, the measured distance distributions suggest a 3A-DNA ensemble with multiple conformational states predominantly across a region of conformational space with bend angles between 24 and 85 degrees and characteristic bend directions and helical twists and displacements. Additional X-ray interferometry experiments revealed perturbations to the ensemble from changes in ionic conditions and the bulge sequence, effects that can be understood in terms of electrostatic and stacking contributions to the ensemble and that demonstrate the sensitivity of X-ray interferometry. Combining X-ray interferometry ensemble data with molecular dynamics simulations gave atomic-level models of representative conformational states and of the molecular interactions that may shape the ensemble, and fluorescence measurements with 2-aminopurine-substituted 3A-DNA provided initial tests of these atomistic models. More generally, X-ray interferometry will provide powerful benchmarks for testing and developing computational methods. PMID:24706812
