info-gibbs: a motif discovery algorithm that directly optimizes information content during sampling.
Defrance, Matthieu; van Helden, Jacques
2009-10-15
Discovering cis-regulatory elements in genome sequence remains a challenging issue. Several methods rely on the optimization of some target scoring function. The information content (IC) or relative entropy of the motif has proven to be a good estimator of transcription factor DNA binding affinity. However, these information-based metrics are usually used as a posteriori statistics rather than during the motif search process itself. We introduce here info-gibbs, a Gibbs sampling algorithm that efficiently optimizes the IC or the log-likelihood ratio (LLR) of the motif while keeping computation time low. The method compares well with existing tools such as MEME, BioProspector, Gibbs, and GAME on both synthetic and biological datasets. Our study shows that motif discovery techniques can be enhanced by directly focusing the search on the motif IC or the motif LLR. info-gibbs is available at http://rsat.ulb.ac.be/rsat/info-gibbs
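The core move behind this family of samplers fits in a few lines: hold one sequence out, build a position weight matrix (PWM) from the sites currently assigned in the remaining sequences, and resample the held-out site with probability proportional to its PWM-versus-background likelihood ratio, i.e., exp(LLR). The Python sketch below is a generic site sampler in that spirit, not the authors' info-gibbs implementation; the function name, motif width, iteration count, and pseudocount are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    ALPH = "ACGT"

    def gibbs_site_sampler(seqs, w, iters=2000, pseudo=0.5):
        # one candidate site per sequence, initialized at random
        X = [np.array([ALPH.index(c) for c in s]) for s in seqs]
        pos = [int(rng.integers(len(x) - w + 1)) for x in X]
        bg = np.bincount(np.concatenate(X), minlength=4) / sum(len(x) for x in X)
        for _ in range(iters):
            i = int(rng.integers(len(X)))            # hold one sequence out
            counts = np.full((w, 4), pseudo)         # PWM from the other sites
            for j, x in enumerate(X):
                if j != i:
                    counts[np.arange(w), x[pos[j]:pos[j] + w]] += 1
            pwm = counts / counts.sum(axis=1, keepdims=True)
            x, n = X[i], len(X[i]) - w + 1
            llr = np.array([np.log(pwm[np.arange(w), x[s:s + w]] / bg[x[s:s + w]]).sum()
                            for s in range(n)])
            p = np.exp(llr - llr.max())              # exp(LLR) = likelihood ratio
            pos[i] = int(rng.choice(n, p=p / p.sum()))
        return pos

A direct-optimization variant in the spirit of the abstract would, for instance, accept a resampled site only when it increases the motif IC or LLR.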
PhyloGibbs-MP: Module Prediction and Discriminative Motif-Finding by Gibbs Sampling
Siddharthan, Rahul
2008-01-01
PhyloGibbs, our recent Gibbs-sampling motif-finder, takes phylogeny into account in detecting binding sites for transcription factors in DNA and assigns posterior probabilities to its predictions obtained by sampling the entire configuration space. Here, in an extension called PhyloGibbs-MP, we widen the scope of the program, addressing two major problems in computational regulatory genomics. First, PhyloGibbs-MP can localise predictions to small, undetermined regions of a large input sequence, thus effectively predicting cis-regulatory modules (CRMs) ab initio while simultaneously predicting binding sites in those modules, tasks that are usually done by two separate programs. PhyloGibbs-MP's performance at such ab initio CRM prediction is comparable with or superior to that of dedicated module-prediction software that uses prior knowledge of previously characterised transcription factors. Second, PhyloGibbs-MP can predict motifs that differentiate between two (or more) different groups of regulatory regions, that is, motifs that occur preferentially in one group over the others. While other "discriminative motif-finders" have been published in the literature, PhyloGibbs-MP's implementation has some unique features and flexibility. Benchmarks on synthetic and actual genomic data show that this algorithm is successful at enhancing predictions of differentiating sites and suppressing predictions of common sites, and that it matches or outperforms other discriminative motif-finders on actual genomic data. Additional enhancements include significant performance and speed improvements, the ability to use "informative priors" on known transcription factors, and the ability to output annotations in a format that can be visualised with the Generic Genome Browser. In stand-alone motif-finding, PhyloGibbs-MP remains competitive, outperforming PhyloGibbs-1.0 and other programs on benchmark data. PMID:18769735
Chodera, John D; Shirts, Michael R
2011-11-21
The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices, corresponding to thermodynamic parameters such as temperature or alchemical coupling variables, can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to suboptimal mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
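The key point of the abstract, updating the state index by an exact draw from its conditional rather than by swap proposals, fits in a short sketch. Everything below is a toy assumption: the double-well potential, the four-state inverse-temperature ladder, and the zero expanded-ensemble log-weights.

    import numpy as np

    rng = np.random.default_rng(1)

    def potential(x):
        return (x**2 - 1.0)**2                 # toy double-well, stands in for U(x)

    betas = np.array([1.0, 2.0, 4.0, 8.0])     # inverse-temperature ladder (assumed)
    g = np.zeros(len(betas))                   # expanded-ensemble log-weights (toy)

    x, k = 0.0, 0
    for step in range(20000):
        # conformational update at fixed state k (Metropolis random walk);
        # this part unavoidably produces correlated samples
        xp = x + rng.normal(scale=0.5)
        if np.log(rng.random()) < -betas[k] * (potential(xp) - potential(x)):
            x = xp
        # Gibbs update of the state index: sample k from p(k | x) exactly
        logp = g - betas * potential(x)
        p = np.exp(logp - logp.max())
        k = int(rng.choice(len(betas), p=p / p.sum()))

Because k is drawn from its full conditional, this independence update can jump across the whole ladder in a single move, which is the source of the improved mixing the abstract describes.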
Boosting association rule mining in large datasets via Gibbs sampling.
Qian, Guoqi; Rao, Calyampudi Radhakrishna; Sun, Xiaoying; Wu, Yuehua
2016-05-03
Current algorithms for association rule mining from transaction data are mostly deterministic and enumerative. They can be computationally intractable even when mining a dataset containing just a few hundred transaction items, if no action is taken to constrain the search space. In this paper, we develop a Gibbs-sampling-induced stochastic search procedure to randomly sample association rules from the itemset space, and perform rule mining from the reduced transaction dataset generated by the sample. We also propose a general rule-importance measure to direct the stochastic search; because the randomly generated association rules constitute an ergodic Markov chain, the overall most important rules in the itemset space can be uncovered from the reduced dataset with probability 1 in the limit. In a simulation study and a real genomic data example, we show how to boost association rule mining by an integrated use of the stochastic search and the Apriori algorithm.
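A minimal reading of the stochastic search is a Gibbs sampler over binary item-inclusion indicators whose stationary distribution favors important itemsets. The sketch below uses itemset support as a stand-in for the paper's rule-importance measure, with an invented temperature-like constant and toy transaction data.

    import numpy as np

    rng = np.random.default_rng(2)
    T = rng.random((500, 12)) < 0.3            # toy transactions: rows x items

    def importance(z):
        # stand-in importance: support of the itemset {j : z[j] is True}
        return T[:, z].all(axis=1).mean() if z.any() else 0.0

    z = np.zeros(T.shape[1], dtype=bool)
    samples = []
    for sweep in range(300):
        for j in range(len(z)):
            # full conditional of z[j] given the rest, targeting
            # p(z) proportional to exp(lambda * importance(z))
            z[j] = True
            s1 = np.exp(8.0 * importance(z))
            z[j] = False
            s0 = np.exp(8.0 * importance(z))
            z[j] = rng.random() < s1 / (s0 + s1)
        samples.append(z.copy())

The sampled itemsets would then define the reduced transaction dataset handed to Apriori, mirroring the integrated use described in the abstract.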
Gibbs Ensembles for Nearly Compatible and Incompatible Conditional Models
Chen, Shyh-Huei; Wang, Yuchung J.
2010-01-01
The Gibbs sampler has been used exclusively for compatible conditionals, which converge to a unique invariant joint distribution. However, conditional models are not always compatible. In this paper, a Gibbs sampling-based approach, the Gibbs ensemble, is proposed to search for a joint distribution that deviates least from a prescribed set of conditional distributions. The algorithm scales easily, so it can handle large data sets of high dimensionality. Using simulated data, we show that the proposed approach provides joint distributions that are less discrepant from the incompatible conditionals than those obtained by other methods discussed in the literature. The ensemble approach is also applied to a data set regarding geno-polymorphism and response to chemotherapy in patients with metastatic colorectal cancer. PMID:21286232
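With two binary variables the compatibility issue is easy to exhibit: when the prescribed conditionals P(X|Y) and P(Y|X) are incompatible, Gibbs scans in the two possible orders converge to different joint distributions. The sketch below only exposes that discrepancy and averages the two scan-order joints as a crude ensemble-style compromise; it is a simplified stand-in, not the paper's Gibbs-ensemble construction, and the conditional values are invented.

    import numpy as np

    rng = np.random.default_rng(3)
    Px = np.array([0.8, 0.3])       # P(X=1 | Y=y) for y = 0, 1 (illustrative)
    Py = np.array([0.6, 0.1])       # P(Y=1 | X=x) for x = 0, 1 (illustrative)

    def gibbs_joint(order, n=100000, burn=1000):
        x = y = 0
        counts = np.zeros((2, 2))
        for t in range(n):
            if order == "xy":
                x = int(rng.random() < Px[y])
                y = int(rng.random() < Py[x])
            else:
                y = int(rng.random() < Py[x])
                x = int(rng.random() < Px[y])
            if t >= burn:
                counts[x, y] += 1
        return counts / counts.sum()

    jxy, jyx = gibbs_joint("xy"), gibbs_joint("yx")
    print(np.abs(jxy - jyx).max())  # stays away from 0 iff the conditionals clash
    ensemble = (jxy + jyx) / 2      # naive compromise between the two joints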
Quantum algorithms for Gibbs sampling and hitting-time estimation
Chowdhury, Anirban Narayan; Somma, Rolando D.
2017-02-01
In this paper, we present quantum algorithms for solving two problems regarding stochastic processes. The first algorithm prepares the thermal Gibbs state of a quantum system and runs in time almost linear in √Nβ/Z and polynomial in log(1/ϵ), where N is the Hilbert space dimension, β is the inverse temperature, Z is the partition function, and ϵ is the desired precision of the output state. Our quantum algorithm exponentially improves the dependence on 1/ϵ and quadratically improves the dependence on β of known quantum algorithms for this problem. The second algorithm estimates the hitting time of a Markov chain. For a sparse stochastic matrix P, it runs in time almost linear in 1/(ϵΔ^{3/2}), where ϵ is the absolute precision in the estimation and Δ is a parameter determined by P, whose inverse is an upper bound on the hitting time. Our quantum algorithm quadratically improves the dependence on 1/ϵ and 1/Δ of the analogous classical algorithm for hitting-time estimation. Finally, both algorithms use tools recently developed in the context of Hamiltonian simulation, spectral gap amplification, and solving linear systems of equations.
Gibbs sampling on large lattice with GMRF
Marcotte, Denis; Allard, Denis
2018-02-01
Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not reproduce the desired covariance exactly. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without computing and inverting the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence, and the effects of the choice of boundary conditions, the correlation range, and the GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it practical to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
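The simultaneous coding-set update is simple to state for a first-order GMRF on a torus: sites of one chequerboard colour have no neighbours of the same colour, so their full conditionals are mutually independent and a whole colour class can be drawn at once. A minimal untruncated sketch (the truncated case would wrap each draw in the acceptance/rejection step the abstract mentions); the lattice size, CAR coefficient, and precision are toy values.

    import numpy as np

    rng = np.random.default_rng(4)
    n, a, tau = 64, 0.24, 1.0          # lattice, spatial coefficient (< 0.25), precision
    x = rng.normal(size=(n, n))
    i, j = np.indices((n, n))
    black = (i + j) % 2 == 0           # a coding set: no two sites are neighbours

    for sweep in range(200):
        for mask in (black, ~black):
            s = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                 np.roll(x, 1, 1) + np.roll(x, -1, 1))   # four torus neighbours
            # full conditional of each site is N(a * s, 1/tau); all sites of one
            # colour update simultaneously, which is what convolution exploits
            draw = a * s + rng.normal(size=(n, n)) / np.sqrt(tau)
            x[mask] = draw[mask]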
A Bayesian Nonparametric Approach to Image Super-Resolution.
Polatkan, Gungor; Zhou, Mingyuan; Carin, Lawrence; Blei, David; Daubechies, Ingrid
2015-02-01
Super-resolution methods form high-resolution images from low-resolution images. In this paper, we develop a new Bayesian nonparametric model for super-resolution. Our method uses a beta-Bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. Because it is nonparametric, the number of elements found is also determined from the data. We test the results on both benchmark and natural images, comparing with several other models from the research literature. We perform large-scale human evaluation experiments to assess the visual quality of the results. In a first implementation, we use Gibbs sampling to approximate the posterior. However, this algorithm is not feasible for large-scale data. To circumvent this, we then develop an online variational Bayes (VB) algorithm. This algorithm finds high quality dictionaries in a fraction of the time needed by the Gibbs sampler.
Building test data from real outbreaks for evaluating detection algorithms
Texier, Gaetan; Jackson, Michael L.; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve
2017-01-01
Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method—ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals. PMID:28863159
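The homothetic-plus-resampling construction has a compact core: stretch a historical daily-case curve to the target duration, renormalize it, and draw the day of each simulated case by inverse-transform sampling. The historical curve below is invented, and the sketch covers only the ITSM-style variant, not the Gibbs samplers with shrinkage that the study recommends.

    import numpy as np

    rng = np.random.default_rng(5)
    hist = np.array([1, 3, 8, 15, 22, 18, 10, 5, 2, 1], dtype=float)  # toy outbreak

    def simulate_outbreak(hist, days, cases):
        # homothetic transformation: rescale the historical shape to `days`
        shape = np.interp(np.linspace(0, 1, days),
                          np.linspace(0, 1, len(hist)), hist)
        p = shape / shape.sum()
        # inverse-transform sampling of each case's day from the rescaled shape
        day = rng.choice(days, size=cases, p=p)
        return np.bincount(day, minlength=days)

    print(simulate_outbreak(hist, days=14, cases=60))  # overall scale factor > 1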
Simultaneous alignment and clustering of peptide data using a Gibbs sampling approach.
Andreatta, Massimo; Lund, Ole; Nielsen, Morten
2013-01-01
Proteins recognizing short peptide fragments play a central role in cellular signaling. Thanks to high-throughput technologies, peptide-binding protein specificities can now be studied using large peptide libraries at dramatically lower cost and in far less time. Interpretation of such large peptide datasets, however, is a complex task, especially when the data contain multiple receptor binding motifs, and/or the motifs are found at different locations within distinct peptides. The algorithm presented in this article, based on Gibbs sampling, identifies multiple specificities in peptide data by performing two essential tasks simultaneously: alignment and clustering of peptide data. We apply the method to de-convolute binding motifs in a panel of peptide datasets with different degrees of complexity, spanning from the simplest case of pre-aligned fixed-length peptides to cases of unaligned peptide datasets of variable length. Example applications described in this article include mixtures of binders to different MHC class I and class II alleles, distinct classes of ligands for SH3 domains, and sub-specificities of the HLA-A*02:01 molecule. The Gibbs clustering method is available online as a web server at http://www.cbs.dtu.dk/services/GibbsCluster.
Monte Carlo Algorithms for a Bayesian Analysis of the Cosmic Microwave Background
Jewell, Jeffrey B.; Eriksen, H. K.; ODwyer, I. J.; Wandelt, B. D.; Gorski, K.; Knox, L.; Chu, M.
2006-01-01
A viewgraph presentation reviewing the Bayesian approach to Cosmic Microwave Background (CMB) analysis is given, covering the numerical implementation with Gibbs sampling, a summary of the application to WMAP I, and work in progress on generalizations to polarization, foregrounds, asymmetric beams, and 1/f noise.
Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models
Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael
2016-06-01
We address the sparse approximation problem in the case where the data are approximated by a linear combination of a small number of elementary signals, each of which depends nonlinearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov chain Monte Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase in computational cost per iteration, consequently reducing the global cost of the estimation procedure.
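The Hastings-within-Gibbs alternation can be illustrated on the smallest instance of such a model: a single sinusoid with a linear amplitude and a nonlinear frequency. The amplitude is drawn exactly from its conjugate Gaussian conditional and the frequency by random-walk Metropolis-Hastings. The data size, priors, and step size are toy assumptions, and the Bernoulli indicators of the full sparse model are omitted.

    import numpy as np

    rng = np.random.default_rng(6)
    t = np.arange(100)
    sigma = 0.5
    y = 1.5 * np.sin(2 * np.pi * 0.07 * t) + sigma * rng.normal(size=t.size)

    s2 = 10.0                          # prior variance of the amplitude (assumed)
    f, a = 0.1, 0.0                    # initial frequency and amplitude

    def loglik(fr, amp):
        r = y - amp * np.sin(2 * np.pi * fr * t)
        return -0.5 * (r @ r) / sigma**2

    for it in range(5000):
        # Gibbs step: amplitude | frequency is conjugate Gaussian
        phi = np.sin(2 * np.pi * f * t)
        v = 1.0 / (phi @ phi / sigma**2 + 1.0 / s2)
        a = v * (phi @ y) / sigma**2 + np.sqrt(v) * rng.normal()
        # Hastings step: frequency | amplitude via a Gaussian random walk
        fp = f + 0.002 * rng.normal()
        if 0.0 < fp < 0.5 and np.log(rng.random()) < loglik(fp, a) - loglik(f, a):
            f = fp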
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg
2016-02-01
We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
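The optimization problem underneath such solvers can be shown on a toy ideal-gas system: minimize the total Gibbs energy over species amounts subject to element-balance constraints. The standard potentials and initial amounts below are invented, and a general-purpose SLSQP solver stands in for the specialized algorithms used by Reaktoro and GEMS3K.

    import numpy as np
    from scipy.optimize import minimize

    # toy system N2, H2, NH3 at fixed T and P; mu0 = standard potentials / RT
    mu0 = np.array([0.0, 0.0, -5.0])           # illustrative values only
    A = np.array([[2.0, 0.0, 1.0],             # element-balance rows: N, H
                  [0.0, 2.0, 3.0]])
    b = A @ np.array([1.0, 3.0, 0.0])          # start from 1 N2 + 3 H2

    def gibbs_energy(n):
        # ideal mixture: G/RT = sum_i n_i * (mu0_i + ln(n_i / n_tot))
        return float(n @ (mu0 + np.log(n / n.sum())))

    res = minimize(gibbs_energy, x0=np.array([0.5, 1.5, 0.5]),
                   bounds=[(1e-10, None)] * 3,
                   constraints={"type": "eq", "fun": lambda n: A @ n - b},
                   method="SLSQP")
    print(res.x)        # equilibrium amounts at the Gibbs energy minimum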
Kim, Seohyun; Lu, Zhenqiu; Cohen, Allan S.
2018-01-01
Bayesian algorithms have been used successfully in the social and behavioral sciences to analyze dichotomous data particularly with complex structural equation models. In this study, we investigate the use of the Polya-Gamma data augmentation method with Gibbs sampling to improve estimation of structural equation models with dichotomous variables.…
Software for Data Analysis with Graphical Models
Buntine, Wray L.; Roy, H. Scott
1994-01-01
Probabilistic graphical models are being used widely in artificial intelligence and statistics, for instance, in diagnosis and expert systems, as a framework for representing and reasoning with probabilities and independencies. They come with corresponding algorithms for performing statistical inference. This offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper illustrates the framework with an example and then presents some basic techniques for the task: problem decomposition and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
Hybrid Gibbs Sampling and MCMC for CMB Analysis at Small Angular Scales
Jewell, Jeffrey B.; Eriksen, H. K.; Wandelt, B. D.; Gorski, K. M.; Huey, G.; O'Dwyer, I. J.; Dickinson, C.; Banday, A. J.; Lawrence, C. R.
2008-01-01
A) Gibbs Sampling has now been validated as an efficient, statistically exact, and practically useful method for "low-L" (as demonstrated on WMAP temperature polarization data). B) We are extending Gibbs sampling to directly propagate uncertainties in both foreground and instrument models to total uncertainty in cosmological parameters for the entire range of angular scales relevant for Planck. C) Made possible by inclusion of foreground model parameters in Gibbs sampling and hybrid MCMC and Gibbs sampling for the low signal to noise (high-L) regime. D) Future items to be included in the Bayesian framework include: 1) Integration with Hybrid Likelihood (or posterior) code for cosmological parameters; 2) Include other uncertainties in instrumental systematics? (I.e. beam uncertainties, noise estimation, calibration errors, other).
Semi-blind sparse image reconstruction with application to MRFM.
Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O
2012-09-01
We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.
Hierarchical Dirichlet process model for gene expression clustering
2013-01-01
Clustering is an important data processing tool for interpreting microarray data and genomic network inference. In this article, we propose a clustering algorithm based on the hierarchical Dirichlet processes (HDP). The HDP clustering introduces a hierarchical structure in the statistical model which captures the hierarchical features prevalent in biological data such as gene expression data. We develop a Gibbs sampling algorithm based on the Chinese restaurant metaphor for the HDP clustering. We apply the proposed HDP algorithm to both regulatory network segmentation and gene expression clustering. The HDP algorithm is shown to outperform several popular clustering algorithms by revealing the underlying hierarchical structure of the data. For the yeast cell cycle data, we compare the HDP result to the standard result and show that the HDP algorithm provides more information and reduces the unnecessary clustering fragments. PMID:23587447
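The Chinese-restaurant Gibbs step is compact for a one-level Dirichlet process mixture of Gaussians, the building block that the HDP sampler stacks hierarchically: unseat a point, then reseat it at an existing table with probability proportional to the table size times its posterior predictive, or at a new table with probability proportional to the concentration alpha. The data, noise scale, and hyperparameters below are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(7)

    def crp_gibbs(y, alpha=1.0, sigma=0.5, tau=2.0, sweeps=50):
        z = np.zeros(len(y), dtype=int)             # start with a single table
        for _ in range(sweeps):
            for i in range(len(y)):
                z[i] = -1                           # unseat point i
                labels, counts = np.unique(z[z >= 0], return_counts=True)
                logp = []
                for lab, c in zip(labels, counts):
                    m = y[z == lab]
                    v = 1.0 / (len(m) / sigma**2 + 1.0 / tau**2)
                    mu = v * m.sum() / sigma**2
                    pv = v + sigma**2               # posterior predictive variance
                    logp.append(np.log(c) - 0.5 * (y[i] - mu)**2 / pv
                                - 0.5 * np.log(pv))
                pv = tau**2 + sigma**2              # predictive for a new table
                logp.append(np.log(alpha) - 0.5 * y[i]**2 / pv - 0.5 * np.log(pv))
                p = np.exp(np.array(logp) - max(logp))
                k = int(rng.choice(len(p), p=p / p.sum()))
                z[i] = labels[k] if k < len(labels) else z.max() + 1
            _, z = np.unique(z, return_inverse=True)  # relabel compactly
        return z

    y = np.concatenate([rng.normal(-3, 0.5, 50), rng.normal(3, 0.5, 50)])
    print(np.bincount(crp_gibbs(y)))                # typically two large clusters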
Hierarchical Bayesian sparse image reconstruction with application to MRFM.
Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves
2009-09-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
Improved prediction of MHC class I and class II epitopes using a novel Gibbs sampling approach.
Nielsen, Morten; Lundegaard, Claus; Worning, Peder; Hvid, Christina Sylvester; Lamberth, Kasper; Buus, Søren; Brunak, Søren; Lund, Ole
2004-06-12
Prediction of which peptides will bind a specific major histocompatibility complex (MHC) constitutes an important step in identifying potential T-cell epitopes suitable as vaccine candidates. MHC class II binding peptides have a broad length distribution, complicating such predictions. Thus, identifying the correct alignment is a crucial part of identifying the core of an MHC class II binding motif. In this context, we describe a novel Gibbs motif sampler method ideally suited to recognizing such weak sequence motifs. The method is based on the Gibbs sampling method, and it incorporates novel features optimized for the task of recognizing the binding motif of MHC classes I and II. The method locates the binding motif in a set of sequences and characterizes the motif in terms of a weight-matrix. Subsequently, the weight-matrix can be applied to identify potential MHC binding peptides effectively and to guide the process of rational vaccine design. We apply the motif sampler method to the complex problem of MHC class II binding. The input to the method is amino acid peptide sequences extracted from the public databases SYFPEITHI and MHCPEP and known to bind to the MHC class II complex HLA-DR4(B1*0401). Prior identification of information-rich (anchor) positions in the binding motif is shown to improve the predictive performance of the Gibbs sampler. Similarly, a consensus solution obtained from an ensemble average over suboptimal solutions is shown to outperform the use of a single optimal solution. In a large-scale benchmark calculation, the performance is quantified using relative operating characteristic (ROC) curve plots, and we make a detailed comparison of the performance with that of both the TEPITOPE method and a weight-matrix derived using the conventional alignment algorithm of ClustalW. The calculation demonstrates that the predictive performance of the Gibbs sampler is higher than that of ClustalW and in most cases also higher than that of the TEPITOPE method.
Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2016-01-01
Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width—regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers. PMID:27279724
Bayesian inference based on stationary Fokker-Planck sampling.
Berrones, Arturo
2010-06-01
A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm for arbitrary and unknown conditional densities. By the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. By the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump large low-probability regions without the need for careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.
Data Analysis with Graphical Models: Software Tools
Buntine, Wray L.
1994-01-01
Probabilistic graphical models (directed and undirected Markov fields, and combined in chain graphs) are used widely in expert systems, image processing and other areas as a framework for representing and reasoning with probabilities. They come with corresponding algorithms for performing probabilistic inference. This paper discusses an extension to these models by Spiegelhalter and Gilks, plates, used to graphically model the notion of a sample. This offers a graphical specification language for representing data analysis problems. When combined with general methods for statistical inference, this also offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper outlines the framework and then presents some basic tools for the task: a graphical version of the Pitman-Koopman Theorem for the exponential family, problem decomposition, and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
Efficient algorithms for polyploid haplotype phasing.
He, Dan; Saha, Subrata; Finkers, Richard; Parida, Laxmi
2018-05-09
Inference of haplotypes, the sequence of alleles along the same chromosome, is a fundamental problem in genetics and a key component of many analyses, including admixture mapping, identification of regions of identity by descent, and imputation. Haplotype phasing based on sequencing reads has attracted a great deal of attention. Diploid haplotype phasing, where the two haplotypes are complementary, has been studied extensively. In this work, we focus on polyploid haplotype phasing, where we aim to phase more than two haplotypes at the same time from sequencing data. The problem is much more complicated, as the search space becomes much larger and the haplotypes no longer need to be complementary. We propose two algorithms: (1) Poly-Harsh, a Gibbs sampling based algorithm that alternately samples haplotypes and read assignments to minimize the mismatches between the reads and the phased haplotypes, and (2) an efficient algorithm to concatenate haplotype blocks into contiguous haplotypes. Our experiments show that our method improves the quality of the phased haplotypes over state-of-the-art methods. To our knowledge, our algorithm for haplotype block concatenation is the first to leverage shared information across multiple individuals to construct contiguous haplotypes. Our experiments show that it is both efficient and effective.
Bayesian Analysis of Nonlinear Structural Equation Models with Nonignorable Missing Data
Lee, Sik-Yum
2006-01-01
A Bayesian approach is developed for analyzing nonlinear structural equation models with nonignorable missing data. The nonignorable missingness mechanism is specified by a logistic regression model. A hybrid algorithm that combines the Gibbs sampler and the Metropolis-Hastings algorithm is used to produce the joint Bayesian estimates of…
Scanning sequences after Gibbs sampling to find multiple occurrences of functional elements
Tharakaraman, Kannan; Mariño-Ramírez, Leonardo; Sheetlin, Sergey L; Landsman, David; Spouge, John L
2006-01-01
Background: Many DNA regulatory elements occur as multiple instances within a target promoter. Gibbs sampling programs for finding DNA regulatory elements de novo can be prohibitively slow in locating all instances of such an element in a sequence set. Results: We describe an improvement to the A-GLAM computer program, which predicts regulatory elements within DNA sequences with Gibbs sampling. The improvement adds an optional "scanning step" after Gibbs sampling. Gibbs sampling produces a position specific scoring matrix (PSSM). The new scanning step resembles an iterative PSI-BLAST search based on the PSSM. First, it assigns an "individual score" to each subsequence of appropriate length within the input sequences using the initial PSSM. Second, it computes an E-value from each individual score, to assess the agreement between the corresponding subsequence and the PSSM. Third, it permits subsequences with E-values falling below a threshold to contribute to the underlying PSSM, which is then updated using the Bayesian calculus. A-GLAM iterates its scanning step to convergence, at which point no new subsequences contribute to the PSSM. After convergence, A-GLAM reports predicted regulatory elements within each sequence in order of increasing E-values, so users have a statistical evaluation of the predicted elements in a convenient presentation. Thus, although the Gibbs sampling step in A-GLAM finds at most one regulatory element per input sequence, the scanning step can now rapidly locate further instances of the element in each sequence. Conclusion: Datasets from experiments determining the binding sites of transcription factors were used to evaluate the improvement to A-GLAM. Typically, the datasets included several sequences containing multiple instances of a regulatory motif. The improvements to A-GLAM permitted it to predict the multiple instances. PMID:16961919
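The scanning step can be read as a fixed-point loop: score every window with the current PSSM, keep the windows passing a threshold, let them feed back into the counts, and stop when the hit set no longer changes. In the sketch below a plain log-odds cutoff stands in for A-GLAM's E-value test, and the function and its defaults are illustrative; prior_counts would come from the sites aligned by the preceding Gibbs step plus pseudocounts.

    import numpy as np

    ALPH = "ACGT"

    def scanning_step(seqs, prior_counts, bg, cutoff=3.0, max_iter=20):
        X = [np.array([ALPH.index(c) for c in s]) for s in seqs]
        w = prior_counts.shape[0]
        hits = set()
        for _ in range(max_iter):
            counts = prior_counts.copy()
            for i, o in hits:                    # current hits update the PSSM
                counts[np.arange(w), X[i][o:o + w]] += 1
            lods = np.log(counts / counts.sum(axis=1, keepdims=True) / bg)
            new = set()
            for i, x in enumerate(X):
                for o in range(len(x) - w + 1):
                    # "individual score" of the window under the current PSSM
                    if lods[np.arange(w), x[o:o + w]].sum() >= cutoff:
                        new.add((i, o))
            if new == hits:                      # convergence: stable hit set
                return sorted(new)
            hits = new
        return sorted(hits)

Unlike the sampling step, which keeps at most one site per sequence, the returned hit set may contain several windows from the same sequence.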
Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F
2011-03-03
The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
Rational approximations to rational models: alternative algorithms for category learning.
Sanborn, Adam N; Griffiths, Thomas L; Navarro, Daniel J
2010-10-01
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose 2 alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
Bayesian Estimation of the DINA Model with Gibbs Sampling
Culpepper, Steven Andrew
2015-01-01
A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
Representing and computing regular languages on massively parallel networks
Miller, M.I.; O'Sullivan, J.A.; Boysam, B.
1991-01-01
This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor, consisting of 1024 mesh-connected bit-serial processing elements, for performing automated segmentation of electron-micrograph images.
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
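The baseline this work improves on is plain gradient ascent of the log-likelihood, with the model moments re-estimated by Gibbs sampling at every step. The sketch below implements only that baseline for a small Ising model, with random stand-in data and a toy learning rate; the paper's contributions (rectifying the parameter space and sampling from the parameters' posterior) are deliberately left out.

    import numpy as np

    rng = np.random.default_rng(8)

    def gibbs_sample(h, J, sweeps):
        n = len(h)
        s = rng.choice(np.array([-1, 1]), n)
        out = []
        for _ in range(sweeps):
            for i in range(n):
                field = h[i] + J[i] @ s - J[i, i] * s[i]
                p = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(s_i = +1 | rest)
                s[i] = 1 if rng.random() < p else -1
            out.append(s.copy())
        return np.array(out)

    def fit_ising(data, steps=100, lr=0.05):
        n = data.shape[1]
        m_data, C_data = data.mean(0), data.T @ data / len(data)
        h, J = np.zeros(n), np.zeros((n, n))
        for _ in range(steps):
            S = gibbs_sample(h, J, sweeps=200)
            # steepest ascent: gradient = data moments - model moments
            h += lr * (m_data - S.mean(0))
            G = lr * (C_data - S.T @ S / len(S))
            np.fill_diagonal(G, 0.0)
            J += (G + G.T) / 2
        return h, J

    data = rng.choice(np.array([-1, 1]), size=(500, 5))  # stand-in for spike data
    h, J = fit_ising(data)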
Shan, Ying; Sawhney, Harpreet S; Kumar, Rakesh
2008-04-01
This paper proposes a novel unsupervised algorithm for learning discriminative features in the context of matching road vehicles between two non-overlapping cameras. The matching problem is formulated as a same-different classification problem, which aims to compute the probability of vehicle images from two distinct cameras being from the same vehicle or different vehicle(s). We employ a novel measurement vector that consists of three independent edge-based measures and their associated robust measures computed from a pair of aligned vehicle edge maps. The weight of each measure is determined by an unsupervised learning algorithm that optimally separates the same-different classes in the combined measurement space. This is achieved with a weak classification algorithm that automatically collects representative samples from same-different classes, followed by a more discriminative classifier based on Fisher's Linear Discriminants and Gibbs Sampling. The robustness of the match measures and the use of unsupervised discriminant analysis in the classification ensure that the proposed method performs consistently in the presence of missing/false features, temporally and spatially changing illumination conditions, and systematic misalignment caused by different camera configurations. Extensive experiments based on real data of over 200 vehicles at different times of day demonstrate promising results.
An Interactive Image Segmentation Method in Hand Gesture Recognition
Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai
2017-01-01
In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods such as Graph cut, Random walker, and interactive image segmentation using geodesic star convexity are studied in this article. The Gaussian Mixture Model is employed for image modelling, and iterations of the Expectation Maximization algorithm learn the parameters of the Gaussian Mixture Model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, proving that the segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818
DNA motif alignment by evolving a population of Markov chains.
Bi, Chengpeng
2009-01-30
Deciphering cis-regulatory elements or de novo motif-finding in genomes still remains elusive, although much algorithmic effort has been expended. Markov chain Monte Carlo (MCMC) methods such as Gibbs motif samplers have been widely employed to solve the de novo motif-finding problem through sequence local alignment. Nonetheless, the MCMC-based motif samplers still suffer from local maxima, as EM does. Therefore, as a prerequisite for finding good local alignments, these motif algorithms are often independently run a multitude of times, but without information exchange between different chains. Hence a new algorithm design enabling such information exchange would be worthwhile. This paper presents a novel motif-finding algorithm that evolves a population of Markov chains with information exchange (PMC), each of which is initialized as a random alignment and run by the Metropolis-Hastings sampler (MHS). It is progressively updated through a series of stochastically sampled local alignments. Explicitly, the PMC motif algorithm performs stochastic sampling as specified by a population-based proposal distribution rather than individual ones, and adaptively evolves the population as a whole towards a global maximum. The alignment information exchange is accomplished by taking advantage of the pooled motif site distributions. A distinct method running multiple independent Markov chains (IMC) without information exchange, dubbed the IMC motif algorithm, is also devised for comparison with its PMC counterpart. Experimental studies demonstrate that performance can be improved when pooled information is used to run a population of motif samplers. The new PMC algorithm was able to improve convergence and outperformed other popular algorithms tested using simulated and biological motif sequences.
An implicit flux-split algorithm to calculate hypersonic flowfields in chemical equilibrium
Palmer, Grant
1987-01-01
An implicit, finite-difference, shock-capturing algorithm that calculates inviscid, hypersonic flows in chemical equilibrium is presented. The flux vectors and flux Jacobians are differenced using a first-order, flux-split technique. The equilibrium composition of the gas is determined by minimizing the Gibbs free energy at every node point. The code is validated by comparing results over an axisymmetric hemisphere against previously published results. The algorithm is also applied to more practical configurations. The accuracy, stability, and versatility of the algorithm have been promising.
A Gibbs sampler for motif detection in phylogenetically close sequences
Siddharthan, Rahul; van Nimwegen, Erik; Siggia, Eric
2004-03-01
Genes are regulated by transcription factors that bind to DNA upstream of genes and recognize short conserved ``motifs'' in a random intergenic ``background''. Motif-finders such as the Gibbs sampler compare the probability of these short sequences being represented by ``weight matrices'' to the probability of their arising from the background ``null model'', and explore this space (analogous to a free-energy landscape). But closely related species may show conservation not because of functional sites but simply because they have not had sufficient time to diverge, so conventional methods will fail. We introduce a new Gibbs sampler algorithm that accounts for common ancestry when searching for motifs, while requiring minimal ``prior'' assumptions on the number and types of motifs, assessing the significance of detected motifs by ``tracking'' clusters that stay together. We apply this scheme to motif detection in sporulation-cycle genes in the yeast S. cerevisiae, using recent sequences of other closely-related Saccharomyces species.
Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just
2003-01-01
A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
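The augmented-data step is concrete for a single binary trait with unit residual variance (a probit threshold model): liabilities are drawn from normals truncated at the threshold according to the observed category, then the location parameters are drawn from their Gaussian full conditional. A minimal sketch with a toy design matrix and a ridge-style N(0, I) prior; the paper's multivariate version additionally handles censored and categorical traits and samples the residual covariance matrix from its conditional inverse Wishart distribution.

    import numpy as np
    from scipy.stats import truncnorm

    rng = np.random.default_rng(9)
    n, p = 200, 3
    X = rng.normal(size=(n, p))
    y = (X @ np.array([0.8, -0.5, 0.3]) + rng.normal(size=n) > 0).astype(int)

    V = np.linalg.inv(X.T @ X + np.eye(p))   # posterior covariance under N(0, I)
    C = np.linalg.cholesky(V)
    b = np.zeros(p)
    for it in range(2000):
        mu = X @ b
        # augmented data: liability l ~ N(mu, 1) truncated by the observed category
        lo = np.where(y == 1, -mu, -np.inf)  # standardized truncation bounds
        hi = np.where(y == 1, np.inf, -mu)
        l = mu + truncnorm.rvs(lo, hi, random_state=rng)
        # location step: b | l is multivariate Gaussian
        b = V @ (X.T @ l) + C @ rng.normal(size=p)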
Ferreira, D. J. S.; Bezerra, B. N.; Collyer, M. N.; Garcia, A.; Ferreira, I. L.
2018-04-01
The simulation of casting processes demands accurate information on the thermophysical properties of the alloy; however, such information is scarce in the literature for multicomponent alloys. Generally, metallic alloys applied in industry have more than three solute components. In the present study, a general solution of Butler's formulation for surface tension is presented for multicomponent alloys and is applied to quaternary Al-Cu-Si-Fe alloys, thus permitting the Gibbs-Thomson coefficient to be determined. This coefficient is a determining factor in the reliability of predictions furnished by microstructure growth models and by numerical computations of solidification thermal parameters, which depend on the thermophysical properties assumed in the calculations. The Gibbs-Thomson coefficient for ternary and quaternary alloys is seldom reported in the literature. A numerical model based on Powell's hybrid algorithm and a finite-difference Jacobian approximation has been coupled to a Thermo-Calc TCAPI interface to assess the excess Gibbs energy of the liquid phase, permitting the liquidus temperature, latent heat, alloy density, surface tension, and Gibbs-Thomson coefficient of Al-Cu-Si-Fe hypoeutectic alloys to be calculated, as an example of the calculation capabilities of the proposed method for multicomponent alloys. The computed results are compared with thermophysical properties of binary Al-Cu and ternary Al-Cu-Si alloys found in the literature and are presented as a function of the Cu solute composition.
Gibbs motif sampling: detection of bacterial outer membrane protein repeats.
Neuwald, A. F.; Liu, J. S.; Lawrence, C. E.
1995-01-01
The detection and alignment of locally conserved regions (motifs) in multiple sequences can provide insight into protein structure, function, and evolution. A new Gibbs sampling algorithm is described that detects motif-encoding regions in sequences and optimally partitions them into distinct motif models; this is illustrated using a set of immunoglobulin fold proteins. When applied to sequences sharing a single motif, the sampler can be used to classify motif regions into related submodels, as is illustrated using helix-turn-helix DNA-binding proteins. Other statistically based procedures are described for searching a database for sequences matching motifs found by the sampler. When applied to a set of 32 very distantly related bacterial integral outer membrane proteins, the sampler revealed that they share a subtle, repetitive motif. Although BLAST (Altschul SF et al., 1990, J Mol Biol 215:403-410) fails to detect significant pairwise similarity between any of the sequences, the repeats present in these outer membrane proteins, taken as a whole, are highly significant (based on a generally applicable statistical test for motifs described here). Analysis of bacterial porins with known trimeric beta-barrel structure and related proteins reveals a similar repetitive motif corresponding to alternating membrane-spanning beta-strands. These beta-strands occur on the membrane interface (as opposed to the trimeric interface) of the beta-barrel. The broad conservation and structural location of these repeats suggests that they play important functional roles. PMID:8520488
A priori motion models for four-dimensional reconstruction in gated cardiac SPECT
Lalush, D.S.; Tsui, B.M.W.; Cui, Lin
1996-12-31
We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these "most likely" motion vectors. To evaluate, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies.
Estimating a Noncompensatory IRT Model Using Metropolis within Gibbs Sampling
ERIC Educational Resources Information Center
Babcock, Ben
2011-01-01
Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
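The estimation method named above, Metropolis-Hastings within Gibbs, is easiest to see on a toy target. The sketch below is generic (a made-up two-variable density, not Babcock's MIRT model): one coordinate has a conjugate Gaussian conditional and is Gibbs-updated exactly, while the other, whose conditional has no standard form, gets a single Metropolis step per sweep.

```python
import numpy as np

# Toy Metropolis-within-Gibbs: target p(x, y) ∝ exp(-x^4/4 - (y - x)^2/2).
# y | x is exactly N(x, 1) (pure Gibbs step); x | y has no standard form,
# so x is updated with one random-walk Metropolis step per sweep.
rng = np.random.default_rng(0)

def log_cond_x(x, y):
    return -0.25 * x**4 - 0.5 * (y - x)**2   # unnormalized log p(x | y)

x, y, samples = 0.0, 0.0, []
for _ in range(10_000):
    y = rng.normal(loc=x, scale=1.0)          # exact Gibbs update of y
    x_prop = x + rng.normal(scale=0.8)        # Metropolis proposal for x
    if np.log(rng.uniform()) < log_cond_x(x_prop, y) - log_cond_x(x, y):
        x = x_prop
    samples.append((x, y))
```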
Chen, Ming-Hui; Zeng, Donglin; Hu, Kuolung; Jia, Catherine
2014-01-01
In many biomedical studies, patients may experience the same type of recurrent event repeatedly over time, such as bleeding, multiple infections, and disease. In this article, we propose a Bayesian design for a pivotal clinical trial in which lower-risk myelodysplastic syndromes (MDS) patients are treated with MDS disease-modifying therapies. One of the key study objectives is to demonstrate the investigational product (treatment) effect on reduction of platelet transfusion and bleeding events while receiving MDS therapies. In this context, we propose a new Bayesian approach for the design of superiority clinical trials using recurrent events frailty regression models. Historical recurrent events data from an already completed phase 2 trial are incorporated into the Bayesian design via the partial borrowing power prior of Ibrahim et al. (2012, Biometrics 68, 578–586). An efficient Gibbs sampling algorithm, a predictive data generation algorithm, and a simulation-based algorithm are developed for sampling from the fitting posterior distribution, generating the predictive recurrent events data, and computing various design quantities such as the type I error rate and power, respectively. An extensive simulation study is conducted to compare the proposed method to the existing frequentist methods and to investigate various operating characteristics of the proposed design. PMID:25041037
KIRMES: kernel-based identification of regulatory modules in euchromatic sequences.
Schultheiss, Sebastian J; Busch, Wolfgang; Lohmann, Jan U; Kohlbacher, Oliver; Rätsch, Gunnar
2009-08-15
Understanding transcriptional regulation is one of the main challenges in computational biology. An important problem is the identification of transcription factor (TF) binding sites in promoter regions of potential TF target genes. It is typically approached by position weight matrix-based motif identification algorithms using Gibbs sampling, or by heuristics that extend seed oligos. Such algorithms succeed in identifying single, relatively well-conserved binding sites, but tend to fail when it comes to the identification of combinations of several degenerate binding sites, such as those often found in cis-regulatory modules. We propose a new algorithm that combines the benefits of existing motif finders with those of support vector machines (SVMs) to find degenerate motifs in order to improve the modeling of regulatory modules. In experiments on microarray data from Arabidopsis thaliana, we were able to show that the newly developed strategy significantly improves the recognition of TF targets. The Python source code (open source, licensed under the GPL), the data for the experiments and a Galaxy-based web service are available at http://www.fml.mpg.de/raetsch/suppl/kirmes/.
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-01-01
Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by an iterative Gauss-Seidel algorithm. Another employs a Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively for each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third models the spatial correlations among image pixels in the image domain, also by a MRF Gibbs functional, and minimizes the PWLS by an iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in low-contrast environments. The KL-PWLS implementation may have a computational advantage for high-resolution dynamic low-dose CT imaging. PMID:17024831
Algorithm For Hypersonic Flow In Chemical Equilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1989-01-01
Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because it overcomes limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.
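The chemical half of the scheme, equilibrium by constrained Gibbs minimization, can be sketched in a few lines. The species set, the dimensionless standard potentials mu0, and the element matrix below are invented for illustration (the NTRS code itself is not reproduced here); the structure — minimize G subject to element conservation — is the point.

```python
import numpy as np
from scipy.optimize import minimize

# Toy Gibbs free energy minimization: H2/O2/H2O ideal-gas mixture.
species = ["H2", "O2", "H2O"]
mu0 = np.array([0.0, 0.0, -30.0])   # mu0_i / RT, placeholder values
# Element matrix A: rows = elements (H, O), columns = species.
A = np.array([[2, 0, 2],            # H atoms per molecule
              [0, 2, 1]])           # O atoms per molecule
b = A @ np.array([1.0, 0.5, 0.0])   # element totals: start from 1 H2 + 0.5 O2

def gibbs(n):
    n = np.clip(n, 1e-12, None)
    return np.sum(n * (mu0 + np.log(n / n.sum())))  # G/RT, ideal mixture

res = minimize(gibbs, x0=np.array([0.4, 0.2, 0.4]), method="SLSQP",
               bounds=[(1e-12, None)] * 3,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
print(dict(zip(species, res.x)))    # nearly all H2O at these mu0 values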
Learning topic models by belief propagation.
Zeng, Jia; Cheung, William K; Liu, Jiming
2013-05-01
Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model for probabilistic topic modeling, which attracts worldwide interest and touches on many important applications in text mining, computer vision and computational biology. This paper represents the collapsed LDA as a factor graph, which enables the classic loopy belief propagation (BP) algorithm for approximate inference and parameter estimation. Although the two commonly used approximate inference methods, variational Bayes (VB) and collapsed Gibbs sampling (GS), have gained great success in learning LDA, the proposed BP is competitive in both speed and accuracy, as validated by encouraging experimental results on four large-scale document datasets. Furthermore, the BP algorithm has the potential to become a generic scheme for learning variants of LDA-based topic models in the collapsed space. To this end, we show how to learn two typical variants of LDA-based topic models, the author-topic model (ATM) and the relational topic model (RTM), using BP based on the factor graph representations.
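For contrast with the BP approach, the GS baseline mentioned above, collapsed Gibbs sampling for LDA, fits in a short sketch; the toy corpus and hyperparameters here are placeholders, and the update is the standard count-based conditional.

```python
import numpy as np

rng = np.random.default_rng(1)
docs = [[0, 1, 2, 2], [2, 3, 3, 1]]        # toy corpus: word ids per document
K, V, alpha, beta = 2, 4, 0.1, 0.01        # topics, vocab size, Dirichlet priors

z = [[rng.integers(K) for _ in d] for d in docs]   # topic of each token
ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1

for sweep in range(200):                   # collapsed Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]                    # remove this token's assignment
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())
            z[d][i] = k                    # resample topic and restore counts
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
```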
Non-proportional odds multivariate logistic regression of ordinal family data.
Zaloumis, Sophie G; Scurrah, Katrina J; Harrap, Stephen B; Ellis, Justine A; Gurrin, Lyle C
2015-03-01
Methods to examine whether genetic and/or environmental sources can account for the residual variation in ordinal family data usually assume proportional odds. However, standard software to fit the non-proportional odds model to ordinal family data is limited because the correlation structure of family data is more complex than for other types of clustered data. To perform these analyses we propose the non-proportional odds multivariate logistic regression model and take a simulation-based approach to model fitting using Markov chain Monte Carlo methods, such as partially collapsed Gibbs sampling and the Metropolis algorithm. We applied the proposed methodology to male pattern baldness data from the Victorian Family Heart Study.
Markov chain sampling of the O(n) loop models on the infinite plane
NASA Astrophysics Data System (ADS)
Herdeiro, Victor
2017-07-01
A numerical method was recently proposed in Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] showing a precise sampling of the infinite plane two-dimensional critical Ising model for finite lattice subsections. The present note extends the method to a larger class of models, namely the O(n) loop gas models for n ∈ (1, 2]. We argue that even though the Gibbs measure is nonlocal, it is factorizable on finite subsections when sufficient information on the loops touching the boundaries is stored. Our results attempt to show that, provided an efficient Markov chain mixing algorithm and an improved discrete lattice dilation procedure, the planar limit of the O(n) models can be numerically studied with efficiency similar to the Ising case. This confirms that scale invariance is the only requirement for the present numerical method to work.
Bayesian Analysis of the Power Spectrum of the Cosmic Microwave Background
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Eriksen, H. K.; O'Dwyer, I. J.; Wandelt, B. D.
2005-01-01
There is a wealth of cosmological information encoded in the spatial power spectrum of temperature anisotropies of the cosmic microwave background. The sky, when viewed in the microwave, is very uniform, with a nearly perfect blackbody spectrum at 2.7 K. Very small amplitude brightness fluctuations (to one part in a million!) trace small density perturbations in the early universe (roughly 300,000 years after the Big Bang), which later grow through gravitational instability to the large-scale structure seen in redshift surveys... In this talk, I will discuss a Bayesian formulation of this problem, a Gibbs sampling approach to numerically sampling from the Bayesian posterior, and the application of this approach to the first-year data from the Wilkinson Microwave Anisotropy Probe. I will also comment on recent algorithmic developments for this approach to be tractable for the even more massive data set to be returned from the Planck satellite.
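The Gibbs scheme alluded to here alternates two conditional draws (this is the standard form used in the CMB literature, written schematically):

```latex
s^{(i+1)} \sim P\big(s \mid C_\ell^{(i)}, d\big),
\qquad
C_\ell^{(i+1)} \sim P\big(C_\ell \mid s^{(i+1)}\big),
```

where d is the data map, s the CMB sky signal, and C_ℓ the angular power spectrum; iterating the pair yields samples from the joint posterior P(s, C_ℓ | d), from which the marginal power-spectrum posterior follows directly.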
A comparison of algorithms for inference and learning in probabilistic graphical models.
Frey, Brendan J; Jojic, Nebojsa
2005-09-01
Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
Hemingway, B.S.; Robie, R.A.; Kittrick, J.A.
1978-01-01
Solution calorimetric measurements compared with solubility determinations from the literature for the same samples of gibbsite have provided a direct thermochemical cycle through which the Gibbs free energy of formation of Al(OH)4-(aq) can be determined. The Gibbs free energy of formation of Al(OH)4-(aq) at 298.15 K is -1305 ± 1 kJ/mol. These heat-of-solution results show no significant difference in the thermodynamic properties of gibbsite particles in the range from 50 to 0.05 μm. The Gibbs free energies of formation at 298.15 K and 1 bar pressure of diaspore, boehmite and bayerite are -921.0 ± 5.0, -918.4 ± 2.1 and -1153 ± 2 kJ/mol, based upon the Gibbs free energy of Al(OH)4-(aq) calculated in this paper and the acceptance of -1582.2 ± 1.3 and -1154.9 ± 1.2 kJ/mol for the Gibbs free energies of formation of corundum and gibbsite, respectively. Values for the Gibbs free energy of formation of Al(OH)2+(aq) and AlO2-(aq) were also calculated as -914.2 ± 2.1 and -830.9 ± 2.1 kJ/mol, respectively. The use of AlO2-(aq) as a chemical species is discouraged. A revised Gibbs free energy of formation for H4SiO4(aq) was recalculated from calorimetric data, yielding a value of -1307.5 ± 1.7 kJ/mol, which is in good agreement with the results obtained from several solubility studies. Smoothed values for the thermodynamic functions CP°, (HT° - H298°)/T, (GT° - H298°)/T, ST° - S0°, and ΔHf,298° of kaolinite are listed at integral temperatures between 298.15 and 800 K. The heat capacity of kaolinite at temperatures between 250 and 800 K may be calculated from the following equation: CP° = 1430.26 - 0.78850 T + 3.0340 × 10^-4 T^2 - 1.85158 × 10^4 T^(-1/2) + 8.3341 × 10^6 T^(-2). The thermodynamic properties of most of the geologically important Al-bearing phases have been referenced to the same reference state for Al, namely gibbsite.
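Because the heat-capacity polynomial above had to be reconstructed from garbled text, a quick numerical check is worth showing. The coefficients below are the reconstructed ones; if they are right, the value at 298.15 K should land near the tabulated heat capacity of kaolinite (about 245 J/(mol·K)).

```python
def cp_kaolinite(T):
    """Reconstructed kaolinite heat capacity, J/(mol*K), valid 250-800 K."""
    return (1430.26 - 0.78850 * T + 3.0340e-4 * T**2
            - 1.85158e4 * T**-0.5 + 8.3341e6 * T**-2)

print(round(cp_kaolinite(298.15), 1))  # ~243.6, close to tabulated ~245
```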
A Spectral Algorithm for Envelope Reduction of Sparse Matrices
NASA Technical Reports Server (NTRS)
Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.
1993-01-01
The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
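A minimal sketch of the core of such a spectral reordering (my own code, not the authors'): build the graph Laplacian of the matrix's symmetric sparsity pattern, take the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and sort vertices by its components.

```python
import numpy as np

def spectral_ordering(S):
    """Permutation from the Fiedler vector of a symmetric sparsity pattern S."""
    A = (S != 0).astype(float)
    np.fill_diagonal(A, 0.0)
    L = np.diag(A.sum(axis=1)) - A      # graph Laplacian of the pattern
    w, V = np.linalg.eigh(L)            # eigenvalues in ascending order
    return np.argsort(V[:, 1])          # sort by Fiedler vector components

# Demo: a path graph with shuffled labels; the ordering recovers the path
# (up to reversal), which minimizes the envelope for this pattern.
n = 8
rng = np.random.default_rng(0)
p = rng.permutation(n)
S = np.eye(n)
for i in range(n - 1):                  # path edges i -- i+1, relabeled by p
    S[p[i], p[i + 1]] = S[p[i + 1], p[i]] = 1
print(spectral_ordering(S))
```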
Fast self contained exponential random deviate algorithm
NASA Astrophysics Data System (ADS)
Fernández, Julio F.
1997-03-01
An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well-known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup that is often used in statistical physics in order to obtain the Gibbs distribution. N numbers, stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.
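The description (N registers evolving under fixed sum) suggests a dynamics of the following kind; this toy is my reading of the idea, not the published algorithm. Repeatedly pick two registers and redistribute their sum uniformly: the stationary measure is uniform on the constant-sum simplex, whose single-register marginal is nearly exponential for large N.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
x = np.ones(N)                       # N registers, sum held fixed at N
for _ in range(20 * N):              # pair-redistribution steps (burn-in)
    i, j = rng.integers(N, size=2)
    if i != j:
        s, u = x[i] + x[j], rng.uniform()
        x[i], x[j] = u * s, (1 - u) * s
# Marginals approach Exponential(1): mean ~ 1, variance ~ 1.
print(x.mean(), x.var())
```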
Bayesian Analysis of the Cosmic Microwave Background
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey
2007-01-01
There is a wealth of cosmological information encoded in the spatial power spectrum of temperature anisotropies of the cosmic microwave background! Experiments designed to map the microwave sky are returning a flood of data (time streams of instrument response as a beam is swept over the sky) at several different frequencies (from 30 to 900 GHz), all with different resolutions and noise properties. The resulting analysis challenge is to estimate, and quantify our uncertainty in, the spatial power spectrum of the cosmic microwave background given the complexities of "missing data", foreground emission, and complicated instrumental noise. Bayesian formulation of this problem allows consistent treatment of many complexities, including complicated instrumental noise and foregrounds, and can be numerically implemented with Gibbs sampling. Gibbs sampling has now been validated as an efficient, statistically exact, and practically useful method for low-resolution analysis (as demonstrated on WMAP 1- and 3-year temperature and polarization data). Development continues for Planck; the goal is to exploit the unique capabilities of Gibbs sampling to directly propagate uncertainties in both foreground and instrument models to total uncertainty in cosmological parameters.
A class-based link prediction using Distance Dependent Chinese Restaurant Process
NASA Astrophysics Data System (ADS)
Andalib, Azam; Babamir, Seyed Morteza
2016-08-01
One of the important tasks in relational data analysis is link prediction, which has been successfully applied in many applications such as bioinformatics and information retrieval. Link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on the Distance Dependent Chinese Restaurant Process (DDCRP) model, which enables us to utilize information about the topological structure of the network such as shortest paths and connectivity of the nodes. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for the link prediction problem.
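The DDCRP prior underlying the method assigns each node i a "link" c_i to another node (or to itself), with probability decaying in a pairwise distance d_ij; in this paper's setting d_ij would come from shortest paths or connectivity (the notation here is the standard one, shown schematically):

```latex
p(c_i = j) \propto
\begin{cases}
f(d_{ij}), & j \neq i,\\
\alpha, & j = i,
\end{cases}
```

where f is a non-increasing decay function and α a self-link concentration parameter; clusters are the connected components of the resulting link graph, and edges are predicted from cluster co-membership.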
Helmholtz and Gibbs ensembles, thermodynamic limit and bistability in polymer lattice models
NASA Astrophysics Data System (ADS)
Giordano, Stefano
2017-12-01
Representing polymers by random walks on a lattice is a fruitful approach largely exploited to study configurational statistics of polymer chains and to develop efficient Monte Carlo algorithms. Nevertheless, the stretching and the folding/unfolding of polymer chains within the Gibbs (isotensional) and the Helmholtz (isometric) ensembles of statistical mechanics have not yet been thoroughly analysed by means of the lattice methodology. This topic, motivated by the recent introduction of several single-molecule force spectroscopy techniques, is investigated in the present paper. In particular, we analyse the force-extension curves under the Gibbs and Helmholtz conditions and we give a proof of the equivalence of the ensembles in the thermodynamic limit for polymers represented by a standard random walk on a lattice. Then, we generalize these concepts for lattice polymers that can undergo conformational transitions or, equivalently, for chains composed of bistable or two-state elements (that can be either folded or unfolded). In this case, the isotensional condition leads to a plateau-like force-extension response, whereas the isometric condition causes a sawtooth-like force-extension curve, as predicted by numerous experiments. The equivalence of the ensembles is finally proved also for lattice polymer systems exhibiting conformational transitions.
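For the two-state (folded/unfolded) elements discussed above, the isotensional (Gibbs) calculation already exhibits the plateau. The following is a schematic sketch in my own notation, under freely-jointed-type simplifications, not an equation taken from the paper: each element contributes a single-element partition function and the extension follows by differentiation,

```latex
z(f) = e^{\beta f \ell_f} + e^{-\beta \Delta E}\, e^{\beta f \ell_u},
\qquad
\langle x \rangle = \frac{N}{\beta}\,\frac{\partial \ln z}{\partial f},
```

so the mean extension per element switches from the folded length ℓ_f to the unfolded length ℓ_u near the coexistence force f* ≈ ΔE/(ℓ_u − ℓ_f), producing the plateau; the sawtooth arises only in the Helmholtz (isometric) ensemble, where the extension rather than the force is prescribed.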
On-line Gibbs learning. II. Application to perceptron and multilayer networks
NASA Astrophysics Data System (ADS)
Kim, J. W.; Sompolinsky, H.
1998-08-01
In the preceding paper ("On-line Gibbs Learning. I. General Theory") we have presented the on-line Gibbs algorithm (OLGA) and studied analytically its asymptotic convergence. In this paper we apply OLGA to on-line supervised learning in several network architectures: a single-layer perceptron, two-layer committee machine, and a winner-takes-all (WTA) classifier. The behavior of OLGA for a single-layer perceptron is studied both analytically and numerically for a variety of rules: a realizable perceptron rule, a perceptron rule corrupted by output and input noise, and a rule generated by a committee machine. The two-layer committee machine is studied numerically for the cases of learning a realizable rule as well as a rule that is corrupted by output noise. The WTA network is studied numerically for the case of a realizable rule. The asymptotic results reported in this paper agree with the predictions of the general theory of OLGA presented in paper I. In all the studied cases, OLGA converges to a set of weights that minimizes the generalization error. When the learning rate is chosen as a power law with an optimal power, OLGA converges with a power law that is the same as that of batch learning.
NASA Astrophysics Data System (ADS)
Oware, E. K.
2017-12-01
Geophysical quantification of hydrogeological parameters typically involves limited noisy measurements coupled with inadequate understanding of the target phenomenon. Hence, a deterministic solution is unrealistic in light of the largely uncertain inputs. Stochastic imaging (SI), in contrast, provides multiple equiprobable realizations that enable probabilistic assessment of aquifer properties in a realistic manner. Generation of geologically realistic prior models is central to SI frameworks. Higher-order statistics for representing prior geological features in SI are, however, usually borrowed from training images (TIs), which may produce undesirable outcomes if the TIs are unrepresentative of the target structures. The Markov random field (MRF)-based SI strategy provides a data-driven alternative to TI-based SI algorithms. In the MRF-based method, the simulation of spatial features is guided by Gibbs energy (GE) minimization. Local configurations with smaller GEs have higher likelihood of occurrence and vice versa. The parameters of the Gibbs distribution for computing the GE are estimated from the hydrogeophysical data, thereby enabling the generation of site-specific structures in the absence of reliable TIs. In Metropolis-like SI methods, the variance of the transition probability controls the jump size. The procedure is a standard Markov chain Monte Carlo (McMC) method when a constant variance is assumed, and becomes simulated annealing (SA) when the variance (cooling temperature) is allowed to decrease gradually with time. We observe that in certain problems, the large variance typically employed at the beginning to hasten burn-in may be poorly suited to sampling at the equilibrium state. The power of SA stems from its flexibility to adaptively scale the variance at different stages of the sampling. Degeneration of results was reported in a previous implementation of the MRF-based SI strategy based on a constant variance. Here, we present an updated version of the algorithm based on SA that appears to resolve the degeneration problem, with seemingly improved results. We illustrate the performance of the SA version with a joint inversion of time-lapse concentration and electrical resistivity measurements in a hypothetical trinary hydrofacies aquifer characterization problem.
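A minimal sketch of the simulated-annealing flavor of Metropolis sampling described above, with a geometric cooling schedule standing in for the adaptive variance scaling (the energy function, dimensions, and schedule constants are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    return 0.5 * np.sum(x**2)            # placeholder Gibbs energy

x = rng.normal(size=20)
T, cooling = 5.0, 0.995                  # start hot, cool geometrically
for step in range(5000):
    prop = x + rng.normal(scale=np.sqrt(T), size=x.shape)  # jump size tracks T
    if np.log(rng.uniform()) < (energy(x) - energy(prop)) / T:
        x = prop                         # Metropolis accept at temperature T
    T = max(T * cooling, 0.05)           # never freeze completely
```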
NASA Astrophysics Data System (ADS)
Korayem, M. H.; Shafei, A. M.
2013-02-01
The goal of this paper is to describe the application of the Gibbs-Appell (G-A) formulation and the assumed modes method to the mathematical modeling of N-viscoelastic-link manipulators. The paper's focus is on obtaining accurate and complete equations of motion that encompass the most relevant structural properties of lightweight elastic manipulators. In this study, two important damping mechanisms, namely, the structural viscoelasticity (Kelvin-Voigt) effect (as internal damping) and the viscous air effect (as external damping), have been considered. To include the effects of shear and rotational inertia, the assumption of Timoshenko beam (TB) theory (TBT) has been applied. Gravity, torsion, and longitudinal elongation effects have also been included in the formulations. To systematically derive the equations of motion and improve the computational efficiency, a recursive algorithm has been used in the modeling of the system. In this algorithm, all the mathematical operations are carried out by only 3×3 and 3×1 matrices. Finally, a computational simulation for a manipulator with two elastic links is performed in order to verify the proposed method.
Joint Bayesian Component Separation and CMB Power Spectrum Estimation
NASA Technical Reports Server (NTRS)
Eriksen, H. K.; Jewell, J. B.; Dickinson, C.; Banday, A. J.; Gorski, K. M.; Lawrence, C. R.
2008-01-01
We describe and implement an exact, flexible, and computationally efficient algorithm for joint component separation and CMB power spectrum estimation, building on a Gibbs sampling framework. Two essential new features are (1) conditional sampling of foreground spectral parameters and (2) joint sampling of all amplitude-type degrees of freedom (e.g., CMB, foreground pixel amplitudes, and global template amplitudes) given spectral parameters. Given a parametric model of the foreground signals, we estimate efficiently and accurately the exact joint foreground-CMB posterior distribution and, therefore, all marginal distributions such as the CMB power spectrum or foreground spectral index posteriors. The main limitation of the current implementation is the requirement of identical beam responses at all frequencies, which restricts the analysis to the lowest resolution of a given experiment. We outline a future generalization to multiresolution observations. To verify the method, we analyze simple models and compare the results to analytical predictions. We then analyze a realistic simulation with properties similar to the 3 yr WMAP data, downgraded to a common resolution of 3 deg FWHM. The results from the actual 3 yr WMAP temperature analysis are presented in a companion Letter.
Sampling and counting genome rearrangement scenarios
2015-01-01
Background Even for moderate-size inputs, there are a tremendous number of optimal rearrangement scenarios, regardless of what the model is and which specific question is to be answered. Therefore giving one optimal solution might be misleading and cannot be used for statistical inference. Statistically well-founded methods are necessary to sample uniformly from the solution space; then a small number of samples is sufficient for statistical inference. Contribution In this paper, we give a mini-review of the state of the art in sampling and counting rearrangement scenarios, focusing on the reversal, DCJ and SCJ models. In addition, we give a Gibbs sampler for sampling most parsimonious labelings of evolutionary trees under the SCJ model. The method has been implemented and tested on real-life data. The software package together with example data can be downloaded from http://www.renyi.hu/~miklosi/SCJ-Gibbs/ PMID:26452124
Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much
He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher
2016-01-01
Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance. PMID:28344429
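The two scan orders compared in this paper differ only in how the coordinate to update is chosen; a sketch on a correlated bivariate Gaussian makes the distinction concrete (the correlation ρ and sweep count are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n_sweeps = 0.9, 10_000
cond_sd = np.sqrt(1 - rho**2)        # sd of x_i | x_j for a unit-variance pair

def gibbs(systematic):
    x, out = np.zeros(2), []
    for _ in range(n_sweeps):
        # systematic scan: fixed order (0, 1); random scan: two random picks
        order = (0, 1) if systematic else tuple(rng.integers(2, size=2))
        for i in order:              # x_i | x_(1-i) ~ N(rho*x_(1-i), 1-rho^2)
            x[i] = rng.normal(rho * x[1 - i], cond_sd)
        out.append(x.copy())
    return np.array(out)

sys_chain, rand_chain = gibbs(True), gibbs(False)
print(sys_chain.mean(axis=0), rand_chain.mean(axis=0))  # both near (0, 0)
```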
Reconstructing signals from noisy data with unknown signal and noise covariance.
Oppermann, Niels; Robbers, Georg; Ensslin, Torsten A
2011-10-01
We derive a method to reconstruct Gaussian signals from linear measurements with Gaussian noise. This new algorithm is intended for applications in astrophysics and other sciences. The starting point of our considerations is the principle of minimum Gibbs free energy, which was previously used to derive a signal reconstruction algorithm handling uncertainties in the signal covariance. We extend this algorithm to simultaneously uncertain noise and signal covariances using the same principles in the derivation. The resulting equations are general enough to be applied in many different contexts. We demonstrate the performance of the algorithm by applying it to specific example situations and compare it to algorithms not allowing for uncertainties in the noise covariance. The results show that the method we suggest performs very well under a variety of circumstances and is indeed qualitatively superior to the other methods in cases where uncertainty in the noise covariance is present.
Dasgupta, Nilanjan; Carin, Lawrence
2005-04-01
Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters are then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here by a normal-mode framework. In addition to imaging, extraction of features from the time-reversed images is explored, with these applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code, in conjunction with a channel propagation model, wherein the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI are presented along with the feature extraction and target classification via the RVM.
Constructing Topic Models of Internet of Things for Information Processing
Xin, Jie; Cui, Zhiming; Zhang, Shukui; He, Tianxu; Li, Chunhua; Huang, Haojing
2014-01-01
Internet of Things (IoT) is regarded as a remarkable development of the modern information technology. There is abundant digital products data on the IoT, linking with multiple types of objects/entities. Those associated entities carry rich information and usually in the form of query records. Therefore, constructing high quality topic hierarchies that can capture the term distribution of each product record enables us to better understand users' search intent and benefits tasks such as taxonomy construction, recommendation systems, and other communications solutions for the future IoT. In this paper, we propose a novel record entity topic model (RETM) for IoT environment that is associated with a set of entities and records and a Gibbs sampling-based algorithm is proposed to learn the model. We conduct extensive experiments on real-world datasets and compare our approach with existing methods to demonstrate the advantage of our approach. PMID:25110737
Hemingway, B.S.; Robie, R.A.; Apps, J.A.
1991-01-01
Heat capacity measurements are reported for a well-characterized boehmite that differ significantly from results reported earlier by Shomate and Cook (1946) for a monohydrate of alumina. It is suggested that the earlier measurements were made on a sample that was a mixture of phases and that use of that heat-capacity and derived thermodynamic data be discontinued. The entropy of boehmite derived in this study is 37.19 ± 0.10 J/(mol·K) at 298.15 K. Based on our value for the entropy and accepting the recommended Gibbs free energy for Al(OH)4-, the Gibbs free energy and enthalpy of formation of boehmite are calculated to be -918.4 ± 2.1 and -996.4 ± 2.2 kJ/mol, respectively, from solubility data for boehmite. The Gibbs energy for boehmite is unchanged from that given by Hemingway et al. (1978).
NASA Astrophysics Data System (ADS)
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg; Saar, Martin O.
2016-10-01
We present an extended law of mass-action (xLMA) method for multiphase equilibrium calculations and apply it in the context of reactive transport modeling. This extended LMA formulation differs from its conventional counterpart in that (i) it is directly derived from the Gibbs energy minimization (GEM) problem (i.e., the fundamental problem that describes the state of equilibrium of a chemical system under constant temperature and pressure); and (ii) it extends the conventional mass-action equations with Lagrange multipliers from the Gibbs energy minimization problem, which can be interpreted as stability indices of the chemical species. Accounting for these multipliers enables the method to determine all stable phases without presuming their types (e.g., aqueous, gaseous) or their presence in the equilibrium state. Therefore, the xLMA method proposed here inherits traits of Gibbs energy minimization algorithms that allow it to naturally detect the phases present in equilibrium, which can be single-component phases (e.g., pure solids or liquids) or non-ideal multi-component phases (e.g., aqueous, melts, gaseous, solid solutions, adsorption, or ion exchange). Moreover, our xLMA method requires no technique that tentatively adds or removes reactions based on phase stability indices (e.g., saturation indices for minerals), since the extended mass-action equations are valid even when their corresponding reactions involve unstable species. We successfully apply the proposed method to a reactive transport modeling problem in which we use PHREEQC and GEMS as alternative backends for the calculation of thermodynamic properties such as equilibrium constants of reactions, standard chemical potentials of species, and activity coefficients. Our tests show that our algorithm is efficient and robust for demanding applications, such as reactive transport modeling, where it converges within 1-3 iterations in most cases. The proposed xLMA method is implemented in Reaktoro, a unified open-source framework for modeling chemically reactive systems.
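Schematically, the GEM problem and the Lagrange multipliers that the xLMA formulation reuses look as follows; this is the standard GEM statement in generic notation, not copied from the paper:

```latex
\min_{n \ge 0} \; G(n) = \sum_i n_i \,\mu_i(n)
\quad \text{subject to} \quad A n = b,
```

with first-order (KKT) conditions μ_i − (Aᵀy)_i − z_i = 0, z_i ≥ 0, n_i z_i = 0, where A is the element (formula) matrix, b the element totals, y the element-potential multipliers, and the slacks z_i play the role of the species stability indices that extend the mass-action equations: z_i = 0 for species in stable phases and z_i > 0 for absent ones.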
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-01-01
In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods. PMID:27187405
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of the multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior is a non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior is combined with the sample information, resulting in the posterior distribution, which is used to estimate the parameters. The purpose of this research is to estimate the parameters of the multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained from the expected values of their marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate analytically. Therefore, an approach is needed that generates random samples according to the posterior distribution of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
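A compact sketch of the sampler this abstract describes, assuming the standard conditionals under a Jeffreys prior (matrix-normal for the coefficient matrix, inverse-Wishart for the error covariance); the dimensions and data are synthetic, and the exact degrees-of-freedom convention may differ from the paper's:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
n, p, q = 200, 3, 2                          # observations, predictors, responses
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, q))
Y = X @ B_true + rng.normal(size=(n, q))

XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y                    # OLS = conditional posterior mean
L_row = np.linalg.cholesky(XtX_inv)

Sigma, draws = np.eye(q), []
for it in range(2000):                       # Gibbs sweeps
    # B | Sigma, data ~ matrix-normal(B_hat, (X'X)^-1, Sigma)
    Z = rng.normal(size=(p, q))
    B = B_hat + L_row @ Z @ np.linalg.cholesky(Sigma).T
    # Sigma | B, data ~ inverse-Wishart(n, residual scatter) under Jeffreys prior
    R = Y - X @ B
    Sigma = invwishart(df=n, scale=R.T @ R).rvs()
    draws.append(B)
print(np.mean(draws[500:], axis=0))          # posterior mean, near B_true
```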
A bayesian hierarchical model for classification with selection of functional predictors.
Zhu, Hongxiao; Vannucci, Marina; Cox, Dennis D
2010-06-01
In functional data classification, functional observations are often contaminated by various systematic effects, such as random batch effects caused by device artifacts, or fixed effects caused by sample-related factors. These effects may lead to classification bias and thus should not be neglected. Another issue of concern is the selection of functions when predictors consist of multiple functions, some of which may be redundant. The above issues arise in a real data application where we use fluorescence spectroscopy to detect cervical precancer. In this article, we propose a Bayesian hierarchical model that takes into account random batch effects and selects effective functions among multiple functional predictors. Fixed effects or predictors in nonfunctional form are also included in the model. The dimension of the functional data is reduced through orthonormal basis expansion or functional principal components. For posterior sampling, we use a hybrid Metropolis-Hastings/Gibbs sampler, which suffers from slow mixing. An evolutionary Monte Carlo algorithm is applied to improve the mixing. Simulation and real data application show that the proposed model provides accurate selection of functional predictors as well as good classification.
An improved flux-split algorithm applied to hypersonic flows in chemical equilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1988-01-01
An explicit, finite-difference, shock-capturing numerical algorithm is presented and applied to hypersonic flows assumed to be in thermochemical equilibrium. Real-gas chemistry is either loosely coupled to the gasdynamics by way of a Gibbs free energy minimization package or fully coupled using species mass conservation equations with finite-rate chemical reactions. A scheme is developed that maintains stability in the explicit, finite-rate formulation while allowing relatively high time steps. The codes use flux vector splitting to difference the inviscid fluxes and employ real-gas corrections to viscosity and thermal conductivity. Numerical results are compared against existing ballistic range and flight data. Flows about complex geometries are also computed.
Quantum Gibbs Samplers: The Commuting Case
NASA Astrophysics Data System (ADS)
Kastoryano, Michael J.; Brandão, Fernando G. S. L.
2016-06-01
We analyze the problem of preparing quantum Gibbs states of lattice spin Hamiltonians with local and commuting terms on a quantum computer and in nature. Our central result is an equivalence between the behavior of correlations in the Gibbs state and the mixing time of the semigroup which drives the system to thermal equilibrium (the Gibbs sampler). We introduce a framework for analyzing the correlation and mixing properties of quantum Gibbs states and quantum Gibbs samplers, which is rooted in the theory of non-commutative $\mathbb{L}_p$ spaces. We consider two distinct classes of Gibbs samplers, one of them being the well-studied Davies generator modelling the dynamics of a system due to weak-coupling with a large Markovian environment. We show that their spectral gap is independent of system size if, and only if, a certain strong form of clustering of correlations holds in the Gibbs state. Therefore every Gibbs state of a commuting Hamiltonian that satisfies clustering of correlations in this strong sense can be prepared efficiently on a quantum computer. As concrete applications of our formalism, we show that for every one-dimensional lattice system, or for systems in lattices of any dimension at temperatures above a certain threshold, the Gibbs samplers of commuting Hamiltonians are always gapped, giving an efficient way of preparing the associated Gibbs states on a quantum computer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertholon, François; Harant, Olivier; Bourlon, Bertrand
This article introduces a joint Bayesian estimation of gas samples issuing from a gas chromatography (GC) column coupled with a NEMS sensor, based on the Giddings-Eyring microscopic molecular stochastic model. The posterior distribution is sampled using Markov chain Monte Carlo with Gibbs sampling. Parameters are estimated using the posterior mean. This estimation scheme is finally applied to simulated and real datasets using this molecular stochastic forward model.
The Gibbs Variational Method in Thermodynamics of Equilibrium Plasma: 1. General
US Army Research Laboratory, ARL-TR-8348
2018-04-01
[Fragment] Gibbs method in the integral form: as per the Gibbs general methodology, based on the concept of heterogeneous systems containing ionized gases.
Coarse-Grained Models for Automated Fragmentation and Parametrization of Molecular Databases.
Fraaije, Johannes G E M; van Male, Jan; Becherer, Paul; Serral Gracià, Rubèn
2016-12-27
We calibrate coarse-grained interaction potentials suitable for screening large data sets in top-down fashion. Three new algorithms are introduced: (i) automated decomposition of molecules into coarse-grained units (fragmentation); (ii) Coarse-Grained Reference Interaction Site Model-Hypernetted Chain (CG RISM-HNC) as an intermediate proxy for dissipative particle dynamics (DPD); and (iii) a simple top-down coarse-grained interaction potential/model based on activity coefficient theories from engineering (using COSMO-RS). We find that the fragment distribution follows Zipf and Heaps scaling laws. The accuracy in Gibbs energy of mixing calculations is a few tenths of a kilocalorie per mole. As a final proof of principle, we use full coarse-grained sampling through DPD thermodynamic integration to calculate log P_OW for 4627 compounds with an average error of 0.84 log unit. The computational speeds per calculation are a few seconds for CG RISM-HNC and a few minutes for DPD thermodynamic integration.
Fast Bayesian Inference of Copy Number Variants using Hidden Markov Models with Wavelet Compression
Wiedenhoeft, John; Brugel, Eric; Schliep, Alexander
2016-01-01
By integrating Haar wavelets with Hidden Markov Models, we achieve drastically reduced running times for Bayesian inference using Forward-Backward Gibbs sampling. We show that this improves detection of genomic copy number variants (CNV) in array CGH experiments compared to the state-of-the-art, including standard Gibbs sampling. The method concentrates computational effort on chromosomal segments which are difficult to call, by dynamically and adaptively recomputing consecutive blocks of observations likely to share a copy number. This makes routine diagnostic use and re-analysis of legacy data collections feasible; to this end, we also propose an effective automatic prior. An open source software implementation of our method is available at http://schlieplab.org/Software/HaMMLET/ (DOI: 10.5281/zenodo.46262). This paper was selected for oral presentation at RECOMB 2016, and an abstract is published in the conference proceedings. PMID:27177143
Reflections on Gibbs: From Statistical Physics to the Amistad V3.0
NASA Astrophysics Data System (ADS)
Kadanoff, Leo P.
2014-07-01
This note is based upon a talk given at an APS meeting in celebration of the achievements of J. Willard Gibbs. J. Willard Gibbs, the younger, was the first American physical sciences theorist. He was one of the inventors of statistical physics. He introduced and developed the concepts of phase space, phase transitions, and thermodynamic surfaces in a remarkably correct and elegant manner. These three concepts form the basis of different areas of physics. The connection among these areas has been a subject of deep reflection from Gibbs' time to our own. This talk therefore celebrated Gibbs by describing modern ideas about how different parts of physics fit together. I finished with a more personal note. Our own J. Willard Gibbs had all his many achievements concentrated in science. His father, also J. Willard Gibbs, also a Professor at Yale, had one great non-academic achievement that remains unmatched in our day. I describe it.
Unsupervised Bayesian linear unmixing of gene expression microarrays.
Bazot, Cécile; Dobigeon, Nicolas; Tourneret, Jean-Yves; Zaas, Aimee K; Ginsburg, Geoffrey S; Hero, Alfred O
2013-03-19
This paper introduces a new constrained model and the corresponding algorithm, called unsupervised Bayesian linear unmixing (uBLU), to identify biological signatures from high dimensional assays like gene expression microarrays. The basis for uBLU is a Bayesian model for the data samples which are represented as an additive mixture of random positive gene signatures, called factors, with random positive mixing coefficients, called factor scores, that specify the relative contribution of each signature to a specific sample. The particularity of the proposed method is that uBLU constrains the factor loadings to be non-negative and the factor scores to be probability distributions over the factors. Furthermore, it also provides estimates of the number of factors. A Gibbs sampling strategy is adopted here to generate random samples according to the posterior distribution of the factors, factor scores, and number of factors. These samples are then used to estimate all the unknown parameters. Firstly, the proposed uBLU method is applied to several simulated datasets with known ground truth and compared with previous factor decomposition methods, such as principal component analysis (PCA), non-negative matrix factorization (NMF), Bayesian factor regression modeling (BFRM), and the gradient-based algorithm for general matrix factorization (GB-GMF). Secondly, we illustrate the application of uBLU on a real time-evolving gene expression dataset from a recent viral challenge study in which individuals have been inoculated with influenza A/H3N2/Wisconsin. We show that the uBLU method significantly outperforms the other methods on the simulated and real data sets considered here. The results obtained on synthetic and real data illustrate the accuracy of the proposed uBLU method when compared to other factor decomposition methods from the literature (PCA, NMF, BFRM, and GB-GMF). The uBLU method identifies an inflammatory component closely associated with clinical symptom scores collected during the study. Using a constrained model allows recovery of all the inflammatory genes in a single factor.
Recursive multibody dynamics and discrete-time optimal control
NASA Technical Reports Server (NTRS)
Deleuterio, G. M. T.; Damaren, C. J.
1989-01-01
A recursive algorithm is developed for the solution of the simulation dynamics problem for a chain of rigid bodies. Arbitrary joint constraints are permitted, that is, joints may allow translational and/or rotational degrees of freedom. The recursive procedure is shown to be identical to that encountered in a discrete-time optimal control problem. For each relevant quantity in the multibody dynamics problem, there exists an analog in the context of optimal control. The performance index that is minimized in the control problem is identified as Gibbs' function for the chain of bodies.
Bayesian focalization: quantifying source localization with environmental uncertainty.
Dosso, Stan E; Wilmut, Michael J
2007-05-01
This paper applies a Bayesian formulation to study ocean acoustic source localization as a function of uncertainty in environmental properties (water column and seabed) and of data information content [signal-to-noise ratio (SNR) and number of frequencies]. The approach follows that of the optimum uncertain field processor [A. M. Richardson and L. W. Nolte, J. Acoust. Soc. Am. 89, 2280-2284 (1991)], in that localization uncertainty is quantified by joint marginal probability distributions for source range and depth integrated over uncertain environmental properties. The integration is carried out here using Metropolis Gibbs' sampling for environmental parameters and heat-bath Gibbs' sampling for source location to provide efficient sampling over complicated parameter spaces. The approach is applied to acoustic data from a shallow-water site in the Mediterranean Sea where previous geoacoustic studies have been carried out. It is found that reliable localization requires a sufficient combination of prior (environmental) information and data information. For example, sources can be localized reliably for single-frequency data at low SNR (-3 dB) only with small environmental uncertainties, whereas successful localization with large environmental uncertainties requires higher SNR and/or multifrequency data.
Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.
Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry
2016-09-01
Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and it is compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software
NASA Astrophysics Data System (ADS)
Tikhonov, D. A.; Sobolev, E. V.
2011-04-01
A method of integral equations of the theory of liquids in the reference interaction site model (RISM) approximation is used to estimate the Gibbs energy averaged over equilibrium trajectories computed by molecular mechanics. The peptide oxytocin is selected as the object of interest. The Gibbs energy is calculated using all chemical potential formulas introduced in the RISM approach for the excess chemical potential of solvation and is compared with estimates from the generalized Born model. Some formulas are shown to give the wrong sign of the Gibbs energy change when the peptide passes from the gas phase into an aqueous environment; the other formulas give overestimated Gibbs energy changes with the right sign. Note that allowance for the repulsive correction in the approximate analytical expressions for the Gibbs energy derived by thermodynamic perturbation theory is not a remedy.
Dynamical predictive power of the generalized Gibbs ensemble revealed in a second quench.
Zhang, J M; Cui, F C; Hu, Jiangping
2012-04-01
We show that a quenched and relaxed completely integrable system is hardly distinguishable from the corresponding generalized Gibbs ensemble in a dynamical sense. To be specific, the response of the quenched and relaxed system to a second quench can be accurately reproduced by using the generalized Gibbs ensemble as a substitute. Remarkably, as demonstrated with the transverse Ising model and hard-core bosons in one dimension, not only the steady values but even the transient relaxation dynamics of the physical variables can be accurately reproduced by using the generalized Gibbs ensemble as a pseudoinitial state. This result is an important complement to the previously established result that a quenched and relaxed system is hardly distinguishable from the generalized Gibbs ensemble in a static sense. The relevance of the generalized Gibbs ensemble in the nonequilibrium dynamics of completely integrable systems is thereby greatly strengthened.
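For reference, the generalized Gibbs ensemble referred to here is the maximum-entropy density matrix constrained by all conserved charges of the integrable model (a standard definition, quoted for context rather than from this paper):

```latex
\rho_{\mathrm{GGE}} = \frac{1}{Z}\exp\Big(-\sum_m \lambda_m \hat{I}_m\Big),
\qquad
Z = \mathrm{Tr}\,\exp\Big(-\sum_m \lambda_m \hat{I}_m\Big),
```

with the Lagrange multipliers λ_m fixed by matching the conserved quantities to their values in the initial state, Tr(ρ_GGE Î_m) = ⟨ψ₀|Î_m|ψ₀⟩.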
Quantum Enhanced Inference in Markov Logic Networks
NASA Astrophysics Data System (ADS)
Wittek, Peter; Gogolin, Christian
2017-04-01
Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
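As a concrete, hedged illustration of the classical baseline being sped up, here is a minimal Gibbs sampler for a small pairwise binary Markov network of the kind an MLN template generates; the network size and couplings are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy pairwise Markov network on binary variables x_i in {0, 1}:
# p(x) is proportional to exp( sum_i h[i]*x[i] + sum_{i<j} J[i,j]*x[i]*x[j] )
n = 6
h = rng.normal(0, 0.5, n)                   # unary potentials (invented)
J = np.triu(rng.normal(0, 0.5, (n, n)), 1)  # pairwise potentials (invented)
J = J + J.T                                 # symmetric, zero diagonal

def gibbs_marginals(n_sweeps=10000, burn_in=1000):
    x = rng.integers(0, 2, n)
    counts = np.zeros(n)
    for sweep in range(n_sweeps):
        for i in range(n):
            # Conditional log-odds of x_i = 1 given all other variables
            # (the diagonal of J is zero, so x_i itself drops out).
            logit = h[i] + J[i] @ x
            x[i] = rng.uniform() < 1.0 / (1.0 + np.exp(-logit))
        if sweep >= burn_in:
            counts += x
    return counts / (n_sweeps - burn_in)    # estimated marginals P(x_i = 1)

print(gibbs_marginals())
```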
NASA Astrophysics Data System (ADS)
Harvey, J.-P.; Gheribi, A. E.; Chartrand, P.
2012-12-01
In this work, an in silico procedure to generate a fully coherent set of thermodynamic properties obtained from classical molecular dynamics (MD) and Monte Carlo (MC) simulations is proposed. The procedure is applied to the Al-Zr system because of its importance in the development of high-strength Al-Li alloys and of bulk metallic glasses. Cohesive energies of the studied condensed phases of the Al-Zr system (the liquid phase, the fcc solid solution, and various orthorhombic stoichiometric compounds) are calculated using the modified embedded atom model (MEAM) in the second-nearest-neighbor formalism (2NN). The Al-Zr MEAM-2NN potential is parameterized in this work using ab initio and experimental data found in the literature for the AlZr3-L12 structure, while its predictive ability is confirmed for several other solid structures and for the liquid phase. The thermodynamic integration (TI) method is implemented in a general MC algorithm in order to evaluate the absolute Gibbs energy of the liquid and the fcc solutions. The entropy of mixing calculated from the TI method, combined with the enthalpy of mixing and the heat capacity data generated from MD/MC simulations performed in the isobaric-isothermal/canonical (NPT/NVT) ensembles, is used to parameterize the Gibbs energy function of all the condensed phases on the Al-rich side of the Al-Zr system in a CALculation of PHAse Diagrams (CALPHAD) approach. The modified quasichemical model in the pair approximation (MQMPA) and the cluster variation method (CVM) in the tetrahedron approximation are used to define the Gibbs energy of the liquid and the fcc solid solution, respectively, over their entire range of composition. Thermodynamic and structural data generated from our MD/MC simulations are used as input data to parameterize these thermodynamic models. A detailed analysis of the validity and transferability of the Al-Zr MEAM-2NN potential is presented throughout our work by comparing the predicted properties obtained from this formalism with available ab initio and experimental data for both liquid and solid phases.
Standard Gibbs energy of metabolic reactions: II. Glucose-6-phosphatase reaction and ATP hydrolysis.
Meurer, Florian; Do, Hoang Tam; Sadowski, Gabriele; Held, Christoph
2017-04-01
ATP (adenosine triphosphate) hydrolysis is a key reaction in metabolism. Tools from systems biology require standard reaction data in order to predict metabolic pathways accurately. However, literature values for the standard Gibbs energy of ATP hydrolysis are highly uncertain and differ strongly from each other. Further, such data usually neglect the activity coefficients of the reacting agents, so published values of this kind are apparent (condition-dependent) data rather than activity-based standard data. In this work a consistent value for the standard Gibbs energy of ATP hydrolysis was determined. The activity coefficients of the reacting agents were modeled with electrolyte Perturbed-Chain Statistical Associating Fluid Theory (ePC-SAFT). The Gibbs energy of ATP hydrolysis was calculated by combining the standard Gibbs energies of the hexokinase reaction and of glucose-6-phosphate hydrolysis. While the standard Gibbs energy of the hexokinase reaction was taken from previous work, the standard Gibbs energy of the glucose-6-phosphate hydrolysis reaction was determined in this work. For this purpose, reaction equilibrium molalities of the reacting agents were measured at pH 7 and pH 8 at 298.15 K at varying initial reacting agent molalities. The corresponding activity coefficients at experimental equilibrium molalities were predicted with ePC-SAFT, yielding a Gibbs energy of glucose-6-phosphate hydrolysis of −13.72 ± 0.75 kJ·mol⁻¹. Combined with the value for hexokinase, the standard Gibbs energy of ATP hydrolysis was finally found to be −31.55 ± 1.27 kJ·mol⁻¹. For both ATP hydrolysis and glucose-6-phosphate hydrolysis, good agreement with our own and literature values was obtained when the influences of pH, temperature, and activity coefficients were explicitly taken into account in order to calculate the standard Gibbs energy at pH 7, 298.15 K, and standard state.
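The thermodynamic-cycle arithmetic in this abstract can be written out explicitly (the hexokinase value below is simply back-calculated from the two quoted numbers, so treat it as inferred rather than quoted):

```latex
\Delta_r G^{\circ}_{\mathrm{ATP}} = \Delta_r G^{\circ}_{\mathrm{HK}} + \Delta_r G^{\circ}_{\mathrm{G6P}}
\;\;\Rightarrow\;\;
\Delta_r G^{\circ}_{\mathrm{HK}} \approx -31.55 - (-13.72) = -17.83\ \mathrm{kJ\,mol^{-1}},
```

where HK denotes the hexokinase reaction (glucose + ATP → G6P + ADP) and G6P the glucose-6-phosphate hydrolysis (G6P + H₂O → glucose + phosphate); adding the two reactions yields ATP hydrolysis (ATP + H₂O → ADP + phosphate).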
Bayesian methods for outliers detection in GNSS time series
NASA Astrophysics Data System (ADS)
Qianqian, Zhang; Qingming, Gui
2013-07-01
This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. First, a new model is proposed to simultaneously detect different types of outliers by introducing a classification variable for each type of outlier; the problem of outlier detection is converted into the computation of the corresponding posterior probabilities, and an algorithm for computing the posterior probabilities based on a standard Gibbs sampler is designed. Second, we analyze in detail the causes of masking and swamping when detecting patches of additive outliers, and we propose an unmasking Bayesian method for detecting additive outlier patches based on an adaptive Gibbs sampler. Third, the correctness of the proposed theory and methods is illustrated on simulated data and then on real GNSS observations, such as cycle slip detection in carrier phase data. The examples show that the proposed Bayesian methods can detect not only isolated outliers but also additive outlier patches. Furthermore, they can be successfully used to process cycle slips in phase data, thereby solving the problem of small cycle slips.
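A hedged sketch of the classification-variable idea: for each epoch, a binary indicator selects between a "clean" and an "additive outlier" explanation, and each Gibbs sweep samples the indicators from their conditional posteriors. The noise model, prior outlier probability, and outlier scale below are invented placeholders, not the paper's GNSS model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy series: constant level + noise, with a patch of additive outliers.
n = 200
y = rng.normal(0.0, 1.0, n)
y[80:85] += 8.0

sigma, sigma_out, p_out = 1.0, 8.0, 0.01   # assumed known for this sketch
mu, o = 0.0, np.zeros(n, dtype=int)
freq = np.zeros(n)
burn, sweeps = 500, 2000

def log_norm(x, m, s):
    return -0.5 * ((x - m) / s) ** 2 - np.log(s)

for sweep in range(sweeps):
    # Gibbs step 1: indicators o_t | mu from their Bernoulli conditionals
    # (outlier magnitude marginalized out under a N(0, sigma_out^2) prior).
    l1 = np.log(p_out) + log_norm(y, mu, np.hypot(sigma, sigma_out))
    l0 = np.log(1 - p_out) + log_norm(y, mu, sigma)
    prob = 1.0 / (1.0 + np.exp(l0 - l1))
    o = (rng.uniform(size=n) < prob).astype(int)
    # Gibbs step 2: level mu | indicators, using clean observations only.
    clean = y[o == 0]
    mu = rng.normal(clean.mean(), sigma / np.sqrt(len(clean)))
    if sweep >= burn:
        freq += o

print(np.where(freq / (sweeps - burn) > 0.5)[0])  # should flag epochs 80..84
```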
NASA Astrophysics Data System (ADS)
Harvey, Jean-Philippe
In this work, the possibility of calculating, with a high degree of precision, the Gibbs energy of complex multiphase equilibria in which chemical ordering is explicitly and simultaneously considered in the thermodynamic description of solid (short range order and long range order) and liquid (short range order) metallic phases is studied. The cluster site approximation (CSA) and the cluster variation method (CVM) are implemented in a new technique for minimizing the Gibbs energy of multicomponent and multiphase systems in order to describe the thermodynamic behaviour of metallic solid solutions showing strong chemical ordering. The modified quasichemical model in the pair approximation (MQMPA) is also implemented in the new minimization algorithm presented in this work to describe the thermodynamic behaviour of metallic liquid solutions. The constrained minimization technique implemented in this work consists of a sequential quadratic programming technique based on an exact Newton’s method (i.e. the use of exact second derivatives in the determination of the Hessian of the objective function) combined with a line search method to identify a direction of sufficient decrease of the merit function. The implementation of a new algorithm to perform the constrained minimization of the Gibbs energy is justified by the difficulty of identifying, in specific cases, the correct multiphase assemblage of a system where the thermodynamic behaviour of the equilibrium phases is described by one of the previously quoted models using the FactSage software (e.g., solid_CSA + liquid_MQMPA; solid1_CSA + solid2_CSA). After a rigorous validation of the constrained Gibbs energy minimization algorithm using several assessed binary and ternary systems found in the literature, the CVM and the CSA models used to describe the energetic behaviour of metallic solid solutions present in systems with key industrial applications, such as the Cu-Zr and the Al-Zr systems, are parameterized using fully consistent thermodynamic and structural data generated from a Monte Carlo (MC) simulator also implemented in the framework of this project. In this MC simulator, the modified embedded atom model in the second nearest neighbour formalism (MEAM-2NN) is used to describe the cohesive energy of each studied structure. A new Al-Zr MEAM-2NN interatomic potential needed to evaluate the cohesive energy of the condensed phases of this system is presented in this work. The thermodynamic integration (TI) method implemented in the MC simulator allows the evaluation of the absolute Gibbs energy of the considered solid or liquid structures. The original implementation of the TI method allowed us to evaluate theoretically, for the first time, all the thermodynamic mixing contributions (i.e., mixing enthalpy and mixing entropy contributions) of a metallic liquid (Cu-Zr and Al-Zr) and of a solid solution (face-centered cubic (FCC) Al-Zr solid solution) described by the MEAM-2NN. Thermodynamic and structural data obtained from MC and molecular dynamics simulations are then used to parameterize the CVM for the Al-Zr FCC solid solution and the MQMPA for the Al-Zr and the Cu-Zr liquid phases, respectively. The extended thermodynamic study of these systems allows the introduction of a new type of configuration-dependent excess parameters in the definition of the thermodynamic function of solid solutions described by the CVM or the CSA.
These parameters greatly improve the precision of these thermodynamic models, as judged against experimental evidence found in the literature. A new parameterization approach for the MQMPA model of metallic liquid solutions is presented throughout this work. In this new approach, calculated pair fractions obtained from MC/MD simulations are taken into account, as well as configuration-independent volumetric relaxation effects (regular-like excess parameters), in order to parameterize precisely the Gibbs energy function of metallic melts. The generation of a complete set of fully consistent thermodynamic, physical and structural data for solid, liquid, and stoichiometric compounds, and the subsequent parameterization of their respective thermodynamic models, lead to the first description of the complete Al-Zr phase diagram in the composition range [0 ≤ X_Zr ≤ 5/9] based on theoretical and fully consistent thermodynamic properties. MC and MD simulations are performed for the Al-Zr system to define for the first time the precise thermodynamic behaviour of the amorphous phase over its entire range of composition. Finally, all the thermodynamic models for the liquid phase, the FCC solid solution and the amorphous phase are used to define conditions, based on thermodynamic and volumetric considerations, that favor the amorphization of Al-Zr alloys.
On grey levels in random CAPTCHA generation
NASA Astrophysics Data System (ADS)
Newton, Fraser; Kouritzin, Michael A.
2011-06-01
A CAPTCHA is an automatically generated test designed to distinguish between humans and computer programs; specifically, they are designed to be easy for humans but difficult for computer programs to pass in order to prevent the abuse of resources by automated bots. They are commonly seen guarding webmail registration forms, online auction sites, and preventing brute force attacks on passwords. In the following, we address the question: How does adding a grey level to random CAPTCHA generation affect the utility of the CAPTCHA? We treat the problem of generating the random CAPTCHA as one of random field simulation: An initial state of background noise is evolved over time using Gibbs sampling and an efficient algorithm for generating correlated random variables. This approach has already been found to yield highly-readable yet difficult-to-crack CAPTCHAs. We detail how the requisite parameters for introducing grey levels are estimated and how we generate the random CAPTCHA. The resulting CAPTCHA will be evaluated in terms of human readability as well as its resistance to automated attacks in the forms of character segmentation and optical character recognition.
Genome-wide regression and prediction with the BGLR statistical package.
Pérez, Paulino; de los Campos, Gustavo
2014-10-01
Many modern genomic data analyses require implementing regressions where the number of parameters (p, e.g., the number of marker effects) exceeds sample size (n). Implementing these large-p-with-small-n regressions poses several statistical and computational challenges, some of which can be confronted using Bayesian methods. This approach allows integrating various parametric and nonparametric shrinkage and variable selection procedures in a unified and consistent manner. The BGLR R-package implements a large collection of Bayesian regression models, including parametric variable selection and shrinkage methods and semiparametric procedures (Bayesian reproducing kernel Hilbert spaces regressions, RKHS). The software was originally developed for genomic applications; however, the methods implemented are useful for many nongenomic applications as well. The response can be continuous (censored or not) or categorical (either binary or ordinal). The algorithm is based on a Gibbs sampler with scalar updates and the implementation takes advantage of efficient compiled C and Fortran routines. In this article we describe the methods implemented in BGLR, present examples of the use of the package, and discuss practical issues emerging in real-data analysis.
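The "Gibbs sampler with scalar updates" mentioned above can be sketched as follows for a Bayesian ridge regression with p ≫ n; this is a generic Python illustration with simplified priors, not BGLR's actual C/Fortran implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

n, p = 50, 500                               # large-p, small-n (toy sizes)
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:5] = 1.0
y = X @ beta_true + rng.normal(0, 0.5, n)

beta = np.zeros(p)
sigma2, tau2 = 1.0, 0.01                     # residual and prior variances
resid = y - X @ beta
xtx = (X ** 2).sum(axis=0)                   # precomputed X_j' X_j

for sweep in range(500):
    for j in range(p):                       # scalar (coordinate) updates
        resid += X[:, j] * beta[j]           # remove effect j from residual
        prec = xtx[j] / sigma2 + 1.0 / tau2  # conditional precision
        mean = (X[:, j] @ resid) / sigma2 / prec
        beta[j] = rng.normal(mean, 1.0 / np.sqrt(prec))
        resid -= X[:, j] * beta[j]           # put updated effect back
    # Residual variance from a scaled inverse-chi-square conditional.
    sigma2 = (resid @ resid + 1.0) / rng.chisquare(n + 1)

print(beta[:8].round(2))                     # first effects should be near 1
```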
A formula for the entropy of the convolution of Gibbs probabilities on the circle
NASA Astrophysics Data System (ADS)
Lopes, Artur O.
2018-07-01
Consider the transformation T : S¹ → S¹, T(x) = 2x (mod 1), where S¹ is the unit circle. Suppose J is Hölder continuous and positive and, moreover, that for any y we have Σ_{T(x)=y} J(x) = 1. We say that ρ is a Gibbs probability for the Hölder continuous potential log J if L*_{log J}(ρ) = ρ, where L_{log J} is the Ruelle operator for log J. We call J the Jacobian of ρ. Suppose ν = ρ₁ * ρ₂ is the convolution of two Gibbs probabilities ρ₁ and ρ₂ associated, respectively, to the Jacobians J₁ and J₂. We show that ν is also Gibbs, and its Jacobian is given by an explicit expression in terms of J₁ and J₂; from this an explicit formula for the entropy of ν follows. For a fixed ρ we consider differentiable variations ρ_t, t ∈ (−ε, ε), of ρ on the Banach manifold of Gibbs probabilities, where ρ₀ = ρ, and we estimate the derivative of the entropy at t = 0. We also present an expression for the Jacobian of the convolution of a Gibbs probability ρ with the invariant probability supported on a periodic orbit of period two. This expression is based on the Jacobian of ρ and two Radon–Nikodym derivatives.
A probabilistic model for detecting rigid domains in protein structures.
Nguyen, Thach; Habeck, Michael
2016-09-01
Large-scale conformational changes in proteins are implicated in many important biological functions. These structural transitions can often be rationalized in terms of relative movements of rigid domains. There is a need for objective and automated methods that identify rigid domains in sets of protein structures showing alternative conformational states. We present a probabilistic model for detecting rigid-body movements in protein structures. Our model aims to approximate alternative conformational states by a few structural parts that are rigidly transformed under the action of a rotation and a translation. By using Bayesian inference and Markov chain Monte Carlo sampling, we estimate all parameters of the model, including a segmentation of the protein into rigid domains, the structures of the domains themselves, and the rigid transformations that generate the observed structures. We find that our Gibbs sampling algorithm can also estimate the optimal number of rigid domains with high efficiency and accuracy. We assess the power of our method on several thousand entries of the DynDom database and discuss applications to various complex biomolecular systems. The Python source code for protein ensemble analysis is available at https://github.com/thachnguyen/motion_detection. Contact: mhabeck@gwdg.de.
Xue, Xiaonan; Shore, Roy E; Ye, Xiangyang; Kim, Mimi Y
2004-10-01
Occupational exposures are often recorded as zero when the exposure is below the minimum detection level (BMDL). This can lead to an underestimation of the doses received by individuals and can lead to biased estimates of risk in occupational epidemiologic studies. The extent of the exposure underestimation is increased with the magnitude of the minimum detection level (MDL) and the frequency of monitoring. This paper uses multiple imputation methods to impute values for the missing doses due to BMDL. A Gibbs sampling algorithm is developed to implement the method, which is applied to two distinct scenarios: when dose information is available for each measurement (but BMDL is recorded as zero or some other arbitrary value), or when the dose information available represents the summation of a series of measurements (e.g., only yearly cumulative exposure is available but based on, say, weekly measurements). Then the average of the multiple imputed exposure realizations for each individual is used to obtain an unbiased estimate of the relative risk associated with exposure. Simulation studies are used to evaluate the performance of the estimators. As an illustration, the method is applied to a sample of historical occupational radiation exposure data from the Oak Ridge National Laboratory.
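A minimal sketch of the imputation step for the first scenario (each measurement recorded, but values below the MDL set to zero), assuming a lognormal dose model with known parameters; in the actual method those parameters would themselves be updated inside the Gibbs sampler rather than fixed as here.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(5)

mdl = 0.1                                     # minimum detection level
mu, sigma = -2.0, 1.0                         # log-dose model (assumed known)

# Toy monitoring record: true doses, censored to zero below the MDL.
true_dose = rng.lognormal(mu, sigma, 1000)
observed = np.where(true_dose >= mdl, true_dose, 0.0)

def impute(obs, n_imputations=20):
    below = obs == 0.0
    a = (np.log(mdl) - mu) / sigma            # upper truncation (log scale)
    imputations = []
    for _ in range(n_imputations):
        # Draw from the conditional distribution dose | dose < MDL.
        z = truncnorm.rvs(-np.inf, a, size=below.sum(), random_state=rng)
        filled = obs.copy()
        filled[below] = np.exp(mu + sigma * z)
        imputations.append(filled)
    return np.mean(imputations, axis=0)       # average of multiple imputations

est = impute(observed)
print(observed.sum(), est.sum(), true_dose.sum())  # censored vs imputed vs true
```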
Order-Constrained Bayes Inference for Dichotomous Models of Unidimensional Nonparametric IRT
ERIC Educational Resources Information Center
Karabatsos, George; Sheu, Ching-Fan
2004-01-01
This study introduces an order-constrained Bayes inference framework useful for analyzing data containing dichotomous scored item responses, under the assumptions of either the monotone homogeneity model or the double monotonicity model of nonparametric item response theory (NIRT). The framework involves the implementation of Gibbs sampling to…
The Gibbs Phenomenon for Series of Orthogonal Polynomials
ERIC Educational Resources Information Center
Fay, T. H.; Kloppers, P. Hendrik
2006-01-01
This note considers the four classes of orthogonal polynomials--Chebyshev, Hermite, Laguerre, Legendre--and investigates the Gibbs phenomenon at a jump discontinuity for the corresponding orthogonal polynomial series expansions. The perhaps unexpected thing is that the Gibbs constant that arises for each class of polynomials appears to be the same…
Enzyme Catalysis and the Gibbs Energy
ERIC Educational Resources Information Center
Ault, Addison
2009-01-01
Gibbs-energy profiles are often introduced during the first semester of organic chemistry, but are less often presented in connection with enzyme-catalyzed reactions. In this article I show how the Gibbs-energy profile corresponds to the characteristic kinetics of a simple enzyme-catalyzed reaction. (Contains 1 figure and 1 note.)
Determination of Gibbs energies of formation in aqueous solution using chemical engineering tools.
Toure, Oumar; Dussap, Claude-Gilles
2016-08-01
Standard Gibbs energies of formation are of primary importance in the field of biothermodynamics. In the absence of any directly measured values, thermodynamic calculations are required to determine the missing data. For several biochemical species, this study shows that knowledge of the standard Gibbs energy of formation of the pure compounds (in the gaseous, solid or liquid states) enables determination of the corresponding standard Gibbs energies of formation in aqueous solutions. To do so, using chemical engineering tools (thermodynamic tables and a model able to predict activity coefficients, solvation Gibbs energies and pKa data), it becomes possible to determine the partial chemical potential of neutral and charged components in real metabolic conditions, even in concentrated mixtures.
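A compact way to state the bookkeeping involved (standard relations quoted for context; the standard-state conversion between gas-phase and aqueous conventions is left implicit and must be handled consistently):

```latex
\Delta_f G^{\circ}_{\mathrm{aq}} = \Delta_f G^{\circ}_{\mathrm{pure}} + \Delta_{\mathrm{solv}} G^{\circ},
\qquad
\mu_i = \mu_i^{\circ} + RT \ln a_i = \mu_i^{\circ} + RT \ln(\gamma_i x_i),
```

so that predicted activity coefficients γ_i and solvation Gibbs energies suffice to carry a tabulated pure-compound value into the aqueous standard state.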
Model-based Bayesian inference for ROC data analysis
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Bae, K. Ty
2013-03-01
This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a zero-one covariate variable to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov Chain Monte Carlo (MCMC) method carried out with Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
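For orientation, the binormal model that a probit link with a zero-one covariate d (d = 0 for noise cases, d = 1 for signal cases) compresses into a single formula can be written as follows; this is the standard binormal parameterization, offered as a plausible reading of the model rather than the paper's exact notation:

```latex
\Phi^{-1}\!\big(F(x \mid d)\big) = \frac{x - d\,\mu}{\sigma^{\,d}},
\qquad
\mathrm{TPF} = \Phi\!\big(a + b\,\Phi^{-1}(\mathrm{FPF})\big),
\quad a = \mu/\sigma,\; b = 1/\sigma,
```

where the noise scores are N(0, 1), the signal scores are N(μ, σ²), and σ plays the role of the scale parameter.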
Calculating the True and Observed Rates of Complex Heterogeneous Catalytic Reactions
NASA Astrophysics Data System (ADS)
Avetisov, A. K.; Zyskin, A. G.
2018-06-01
Equations of the theory of steady-state complex reactions are considered in matrix form. A set of stage stationarity equations is given, and an algorithm is described for deriving the canonical set of stationarity equations with appropriate corrections for the existence of fast stages in a mechanism. A formula for calculating the number of key compounds is presented. The applicability of the Gibbs rule to estimating the number of independent compounds in a complex reaction is analyzed. Some matrix equations relating the rates of dependent and key substances are derived. They are used as a basis to determine the general diffusion stoichiometry relationships between temperature, the concentrations of dependent reaction participants, and the concentrations of key reaction participants in a catalyst grain. An algorithm is described for calculating heat and mass transfer in a catalyst grain for arbitrary complex heterogeneous catalytic reactions.
A CONTINUATION OF REMEDIATION OF BRINE SPILLS WITH HAY
First order rate constants for salt removal are shown in Table 1. For Gibbs 7, tilling with hay and fertilizers proved to be the best treatment for salt removal (80% confidence level, CL). For Gibbs 9, which is rockier than Gibbs 7, tilling was the best treatment for salt remo...
Masuda, Yosuke; Yamaotsu, Noriyuki; Hirono, Shuichi
2017-01-01
In order to predict the potencies of mechanism-based reversible covalent inhibitors, the relationships between the calculated Gibbs free energy of the hydrolytic water molecule in acyl-trypsin intermediates and experimentally measured catalytic rate constants (k_cat) were investigated. After obtaining representative solution structures by molecular dynamics (MD) simulations, hydration thermodynamics analyses using WaterMap™ were conducted. Consequently, we found for the first time that when the Gibbs free energy of the hydrolytic water molecule was lower, the logarithm of k_cat was also lower. A hydrolytic water molecule with favorable Gibbs free energy may hydrolyze the acylated serine slowly. The Gibbs free energy of the hydrolytic water molecule might therefore be a useful descriptor for computer-aided discovery of mechanism-based reversible covalent inhibitors of hydrolytic enzymes.
ERIC Educational Resources Information Center
Vargas, Francisco M.
2014-01-01
The temperature dependence of the Gibbs energy and important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although this is a well-known approach, traditionally covered as part of any physical chemistry course, the required…
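For reference, the equation in question, together with the van 't Hoff form that follows from it (standard results, quoted for context):

```latex
\left(\frac{\partial (G/T)}{\partial T}\right)_{p} = -\frac{H}{T^{2}},
\qquad\text{hence}\qquad
\frac{\mathrm{d}\ln K}{\mathrm{d}T} = \frac{\Delta H^{\circ}}{R T^{2}}.
```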
Three-dimensional Probabilistic Earthquake Location Applied to 2002-2003 Mt. Etna Eruption
NASA Astrophysics Data System (ADS)
Mostaccio, A.; Tuve', T.; Zuccarello, L.; Patane', D.; Saccorotti, G.; D'Agostino, M.
2005-12-01
Recorded seismicity of the Mt. Etna volcano during the 2002-2003 eruption has been relocated using a probabilistic, non-linear earthquake location approach. We used the software package NonLinLoc (Lomax et al., 2000), adopting the 3D velocity model obtained by Cocina et al., 2005. We processed our data with different algorithms: (1) a grid search; (2) a Metropolis-Gibbs sampler; and (3) an Oct-tree. The Oct-tree algorithm gives efficient, fast and accurate mapping of the PDF (Probability Density Function) of the earthquake location problem. More than 300 seismic events were analyzed in order to compare the non-linear location results with the ones obtained using traditional linearized earthquake location algorithms such as Hypoellipse, and a 3D linearized inversion (Thurber, 1983). Moreover, we compare 38 focal mechanisms, chosen following strict selection criteria, with the ones obtained from the 3D and 1D results. Although the presented approach is more of a traditional relocation application, probabilistic earthquake location could be used in routine surveys.
NASA Astrophysics Data System (ADS)
Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.
1994-07-01
In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T_c* = 0.053, ρ_c* = 0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.
Reflections on Gibbs: From Critical Phenomena to the Amistad
NASA Astrophysics Data System (ADS)
Kadanoff, Leo P.
2003-03-01
J. Willard Gibbs, the younger, was the first American theorist. He was one of the inventors of statistical physics. His introduction and development of the concepts of phase space, phase transitions, and thermodynamic surfaces was remarkably correct and elegant. These three concepts form the basis of different but related areas of physics. The connection among these areas has been a subject of deep reflection from Gibbs' time to our own. I shall talk about these connections by using concepts suggested by the work of Michael Berry and explicitly put forward by the philosopher Robert Batterman. This viewpoint relates theory-connection to the applied mathematics concepts of asymptotic analysis and singular perturbations. J. Willard Gibbs, the younger, had all his achievements concentrated in science. His father, also J. Willard Gibbs and also a Professor at Yale, had one great achievement that remains unmatched in our day. I shall describe it.
Extension of Gibbs-Duhem equation including influences of external fields
NASA Astrophysics Data System (ADS)
Guangze, Han; Jianjia, Meng
2018-03-01
The Gibbs-Duhem equation is one of the fundamental equations of thermodynamics, describing the relation among changes in temperature, pressure and chemical potential. A thermodynamic system can be affected by external fields, and this effect should be reflected in the thermodynamic equations. Based on the energy postulate and the first law of thermodynamics, the differential equation for the internal energy is extended to include the properties of external fields. Then, with the homogeneous function theorem and a redefinition of the Gibbs energy, a generalized Gibbs-Duhem equation including the influences of external fields is derived. As a demonstration of the application of this generalized equation, the influences of temperature and external electric field on surface tension, surface adsorption controlled by an external electric field, and the derivation of a generalized chemical potential expression are discussed, showing that the extended Gibbs-Duhem equation developed in this paper is able to capture the influences of external fields on a thermodynamic system.
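For concreteness, here are the classical relation and the kind of conjugate-pair term such an extension adds, shown for an electric field acting on a polarizable system; the specific field term is an illustrative assumption, not the paper's exact result:

```latex
S\,\mathrm{d}T - V\,\mathrm{d}p + \sum_i N_i\,\mathrm{d}\mu_i = 0
\quad\longrightarrow\quad
S\,\mathrm{d}T - V\,\mathrm{d}p + \sum_i N_i\,\mathrm{d}\mu_i + \boldsymbol{P}\cdot\mathrm{d}\boldsymbol{E} = 0,
```

where P is the total polarization and E the external electric field; each additional external field would contribute an analogous term built from its conjugate pair.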
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berco, Dan, E-mail: danny.barkan@gmail.com; Tseng, Tseung-Yuen, E-mail: tseng@cc.nctu.edu.tw
This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on a time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single layer ZrO{sub 2} device with a double layer ZnO/ZrO{sub 2} one, and obtain results which are in good agreement with experimental data.
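For context, the Metropolis acceptance rule that such an evaluation iterates is (standard form; pairing it with a Gibbs-free-energy difference here is our reading of the abstract):

```latex
P_{\text{accept}} = \min\!\left(1,\; \exp\!\left(-\frac{\Delta G}{k_B T}\right)\right),
```

so that trial configurations lowering the Gibbs free energy are always accepted and unfavorable ones are accepted with Boltzmann-weighted probability.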
ERIC Educational Resources Information Center
Bozlee, Brian J.
2007-01-01
The impact of raising Gibbs energy of the enzyme-substrate complex (G_3) and the reformulation of the Michaelis-Menten equation are discussed. The maximum velocity of the reaction (v_m) and characteristic constant for the enzyme (K_M) will increase with increase in Gibbs energy, indicating that the rate of reaction…
Illustrating the Effect of pH on Enzyme Activity Using Gibbs Energy Profiles
ERIC Educational Resources Information Center
Bearne, Stephen L.
2014-01-01
Gibbs energy profiles provide students with a visual representation of the energy changes that occur during enzyme catalysis, making such profiles useful as teaching and learning tools. Traditional kinetic topics, such as the effect of pH on enzyme activity, are often not discussed in terms of Gibbs energy profiles. Herein, the symbolism of Gibbs…
Marangoni and Gibbs elasticity of flowing soap films
NASA Astrophysics Data System (ADS)
Kim, Ildoo; Sane, Aakash; Mandre, Shreyas
2017-11-01
A flowing soap film has two elasticities. Marangoni elasticity dynamically stabilizes the film against sudden disturbances, and Gibbs elasticity is an equilibrium property that influences the film's persistence over time. In our experimental investigation, we find that the Marangoni elasticity is 22 mN/m, independent of the film thickness. On the other hand, the Gibbs elasticity depends both on the film thickness and on the soap concentration. Interestingly, soap films made from dilute soap solutions have the greater Gibbs elasticity, which is not consistent with existing theory. This discrepancy originates from the flowing nature of our soap films, in which surfactants are continuously replenished.
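The film elasticity being measured is conventionally defined as follows (a textbook definition, included for context):

```latex
E = 2\,\frac{\mathrm{d}\gamma}{\mathrm{d}\ln A} = -2\,\frac{\mathrm{d}\gamma}{\mathrm{d}\ln h},
```

where γ is the surface tension, A the film area, and h the film thickness (for a film element of fixed volume, A ∝ 1/h); the factor 2 accounts for the film's two surfaces. The Marangoni value is the dynamic (high-frequency) limit, while the Gibbs value is the equilibrium limit.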
Diversity among elephant grass genotypes using Bayesian multi-trait model.
Rossi, D A; Daher, R F; Barbé, T C; Lima, R S N; Costa, A F; Ribeiro, L P; Teodoro, P E; Bhering, L L
2017-09-27
Elephant grass is a perennial tropical grass with great potential for energy generation from biomass. The objective of this study was to estimate the genetic diversity among elephant grass accessions based on morpho-agronomic and biomass quality traits and to identify promising genotypes for obtaining hybrids with high energetic biomass production capacity. The experiment was installed in the experimental area of the State Agricultural College Antônio Sarlo, in Campos dos Goytacazes. Fifty-two elephant grass genotypes were evaluated in a randomized block design with two replicates. Components of variance and the genotypic means were obtained using a Bayesian multi-trait model. We ran 350,000 iterations of the Gibbs sampler algorithm for each parameter adopted, with a warm-up period (burn-in) of 50,000 iterations. To obtain an uncorrelated sample, we kept every fifth iteration (thinning), which resulted in a final sample size of 60,000. Subsequently, the Mahalanobis distance between each pair of genotypes was estimated. Estimates of genotypic variance indicated a favorable condition for gains in all traits. Elephant grass accessions presented greater variability for biomass quality traits, for which three groups were formed, while for the agronomic traits two groups were formed. Crosses between Mercker Pinda México x Mercker 86-México, Mercker Pinda México x Turrialba, and Mercker 86-México x Taiwan A-25 can be carried out to obtain elephant grass hybrids for energy purposes.
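The chain bookkeeping quoted above is internally consistent:

```latex
\frac{350{,}000 - 50{,}000}{5} = 60{,}000
```

retained draws per parameter after burn-in and thinning.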
Disordered crystals from first principles I: Quantifying the configuration space
NASA Astrophysics Data System (ADS)
Kühne, Thomas D.; Prodan, Emil
2018-04-01
This work represents the first chapter of a project on the foundations of first-principle calculations of the electron transport in crystals at finite temperatures. We are interested in the range of temperatures, where most electronic components operate, that is, room temperature and above. The aim is a predictive first-principle formalism that combines ab-initio molecular dynamics and a finite-temperature Kubo-formula for homogeneous thermodynamic phases. The input for this formula is the ergodic dynamical system (Ω , G , dP) defining the thermodynamic crystalline phase, where Ω is the configuration space for the atomic degrees of freedom, G is the space group acting on Ω and dP is the ergodic Gibbs measure relative to the G-action. The present work develops an algorithmic method for quantifying (Ω , G , dP) from first principles. Using the silicon crystal as a working example, we find the Gibbs measure to be extremely well characterized by a multivariate normal distribution, which can be quantified using a small number of parameters. The latter are computed at various temperatures and communicated in the form of a table. Using this table, one can generate large and accurate thermally-disordered atomic configurations to serve, for example, as input for subsequent simulations of the electronic degrees of freedom.
Gu, Jinghua; Xuan, Jianhua; Riggins, Rebecca B; Chen, Li; Wang, Yue; Clarke, Robert
2012-08-01
Identification of transcriptional regulatory networks (TRNs) is of significant importance in computational biology for cancer research, providing a critical building block to unravel disease pathways. However, existing methods for TRN identification suffer from the inclusion of excessive 'noise' in microarray data and false-positives in binding data, especially when applied to human tumor-derived cell line studies. More robust methods that can counteract the imperfection of data sources are therefore needed for reliable identification of TRNs in this context. In this article, we propose to establish a link between the quality of one target gene to represent its regulator and the uncertainty of its expression to represent other target genes. Specifically, an outlier sum statistic was used to measure the aggregated evidence for regulation events between target genes and their corresponding transcription factors. A Gibbs sampling method was then developed to estimate the marginal distribution of the outlier sum statistic, hence, to uncover underlying regulatory relationships. To evaluate the effectiveness of our proposed method, we compared its performance with that of an existing sampling-based method using both simulation data and yeast cell cycle data. The experimental results show that our method consistently outperforms the competing method in different settings of signal-to-noise ratio and network topology, indicating its robustness for biological applications. Finally, we applied our method to breast cancer cell line data and demonstrated its ability to extract biologically meaningful regulatory modules related to estrogen signaling and action in breast cancer. The Gibbs sampler MATLAB package is freely available at http://www.cbil.ece.vt.edu/software.htm. Contact: xuan@vt.edu. Supplementary data are available at Bioinformatics online.
ERIC Educational Resources Information Center
Gary, Ronald K.
2004-01-01
The concentration dependence of the ΔS term in the Gibbs free energy function is described in relation to its application to reversible reactions in biochemistry. An intuitive and non-mathematical argument for the concentration dependence of the ΔS term in the Gibbs free energy equation is derived and the applicability of the equation to…
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Modeling adsorption of cationic surfactants at air/water interface without using the Gibbs equation.
Phan, Chi M; Le, Thu N; Nguyen, Cuong V; Yusa, Shin-ichi
2013-04-16
The Gibbs adsorption equation has been indispensable in predicting surfactant adsorption at interfaces, with many applications in industrial and natural processes. This study uses a new theoretical framework to model surfactant adsorption at the air/water interface without the Gibbs equation. The model was applied to two surfactants, C14TAB and C16TAB, to determine the maximum surface excesses. The obtained values demonstrated a fundamental change in the molecular arrangement at the interface, which was verified by simulations. The new insights, in combination with recent discoveries in the field, expose the limitations of applying the Gibbs adsorption equation to cationic surfactants at the air/water interface.
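For reference, the equation being avoided is the Gibbs adsorption isotherm, usually applied to ionic surfactants in the form (standard form, quoted for context):

```latex
\Gamma = -\frac{1}{n\,RT}\,\frac{\mathrm{d}\gamma}{\mathrm{d}\ln c},
```

where Γ is the surface excess, γ the surface tension, c the bulk surfactant concentration, and n the ion-dissociation prefactor (n = 2 for a fully dissociated 1:1 cationic surfactant such as C16TAB in the absence of added salt).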
Chemical potential, Gibbs-Duhem equation and quantum gases
NASA Astrophysics Data System (ADS)
Lee, M. Howard
2017-05-01
Thermodynamic relations like the Gibbs-Duhem are valid from the lowest to the highest temperatures. But they cannot by themselves provide any specific temperature behavior of thermodynamic functions like the chemical potential. In this work, we show that if some general conditions are attached to the Gibbs-Duhem equation, it is possible to obtain the low temperature form of the chemical potential for the ideal Fermi and Bose gases very directly.
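The low-temperature forms in question are standard textbook results (quoted here for context, not derived from the paper):

```latex
\mu_F(T) \approx \varepsilon_F\left[1 - \frac{\pi^2}{12}\left(\frac{k_B T}{\varepsilon_F}\right)^{2}\right]
\qquad\text{(ideal Fermi gas, 3D)},
```

while for the ideal Bose gas the chemical potential rises toward zero as T decreases and remains pinned at μ = 0 below the condensation temperature.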
Pan-STARRS 1 observations of the unusual active Centaur P/2011 S1(Gibbs)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, H. W.; Ip, W. H.; Chen, W. P.
2014-05-01
P/2011 S1 (Gibbs) is an outer solar system comet or active Centaur with a similar orbit to that of the famous 29P/Schwassmann-Wachmann 1. P/2011 S1 (Gibbs) has been observed by the Pan-STARRS 1 (PS1) sky survey from 2010 to 2012. The resulting data allow us to perform multi-color studies of the nucleus and coma of the comet. Analysis of PS1 images reveals that P/2011 S1 (Gibbs) has a small nucleus <4 km in radius, with colors g_P1 − r_P1 = 0.5 ± 0.02, r_P1 − i_P1 = 0.12 ± 0.02, and i_P1 − z_P1 = 0.46 ± 0.03. The comet remained active from 2010 to 2012, with a model-dependent mass-loss rate of ∼100 kg s⁻¹. The mass-loss rate per unit surface area of P/2011 S1 (Gibbs) is as high as that of 29P/Schwassmann-Wachmann 1, making it one of the most active Centaurs. The mass-loss rate also varies with time from ∼40 kg s⁻¹ to 150 kg s⁻¹. Due to its rather circular orbit, we propose that P/2011 S1 (Gibbs) has 29P/Schwassmann-Wachmann 1-like outbursts that control the outgassing rate. The results indicate that it may have a similar surface composition to that of 29P/Schwassmann-Wachmann 1. Our numerical simulations show that the future orbital evolution of P/2011 S1 (Gibbs) is more similar to that of the main population of Centaurs than to that of 29P/Schwassmann-Wachmann 1. The results also demonstrate that P/2011 S1 (Gibbs) is dynamically unstable and can only remain near its current orbit for roughly a thousand years.
A Bayesian state-space approach for damage detection and classification
NASA Astrophysics Data System (ADS)
Dzunic, Zoran; Chen, Justin G.; Mobahi, Hossein; Büyüköztürk, Oral; Fisher, John W.
2017-11-01
The problem of automatic damage detection in civil structures is complex and requires a system that can interpret collected sensor data into meaningful information. We apply our recently developed switching Bayesian model for dependency analysis to the problems of damage detection and classification. The model relies on a state-space approach that accounts for noisy measurement processes and missing data, which also infers the statistical temporal dependency between measurement locations signifying the potential flow of information within the structure. A Gibbs sampling algorithm is used to simultaneously infer the latent states, parameters of the state dynamics, the dependence graph, and any changes in behavior. By employing a fully Bayesian approach, we are able to characterize uncertainty in these variables via their posterior distribution and provide probabilistic estimates of the occurrence of damage or a specific damage scenario. We also implement a single-class classification method, which is more realistic for most real-world situations where training data for a damaged structure are not available. We demonstrate the methodology with experimental test data from a laboratory model structure and accelerometer data from a real-world structure during different environmental and excitation conditions.
NASA Astrophysics Data System (ADS)
Kuhn, J.; Kesler, O.
2015-03-01
For the second part of a two-part publication, coking thresholds with respect to molar steam:carbon ratio (SC) and current density in nickel-based solid oxide fuel cells were determined. Anode-supported button cell samples were exposed to 2-component and 5-component gas mixtures with 1 ≤ SC ≤ 2 and zero fuel utilization for 10 h, followed by measurement of the resulting carbon mass. The effect of current density was explored by measuring carbon mass under conditions known to be prone to coking while increasing the current density until the cell was carbon-free. The SC coking thresholds were measured to be ∼1.04 and ∼1.18 at 600 and 700 °C, respectively. Current density experiments validated the thresholds measured with respect to fuel utilization and steam:carbon ratio. Coking thresholds at 600 °C could be predicted with thermodynamic equilibrium calculations when the Gibbs free energy of carbon was appropriately modified. Here, the Gibbs free energy of carbon on nickel-based anode support cermets was measured to be −6.91 ± 0.08 kJ mol⁻¹. The results of this two-part publication show that thermodynamic equilibrium calculations with appropriate modification to the Gibbs free energy of solid-phase carbon can be used to predict coking thresholds on nickel-based anodes at 600-700 °C.
An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of the Markov Chain Monte Carlo (MCMC) procedure Gibbs sampling was considered for estimation of item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian…
2006-10-05
…the likely existence of a small foreshock. The most well-known examples of InSAR used as a geodetic tool involve…the event. We have used the seismic waveforms in the Sultan Dag event to identify a small foreshock preceding the main shock by about 3 seconds.
Supervised Gamma Process Poisson Factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dylan Zachary
This thesis develops the supervised gamma process Poisson factorization (S-GPPF) framework, a novel supervised topic model for joint modeling of count matrices and document labels. S-GPPF is fully generative and nonparametric: document labels and count matrices are modeled under a unified probabilistic framework and the number of latent topics is controlled automatically via a gamma process prior. The framework provides for multi-class classification of documents using a generative max-margin classifier. Several recent data augmentation techniques are leveraged to provide for exact inference using a Gibbs sampling scheme. The first portion of this thesis reviews supervised topic modeling and several key mathematical devices used in the formulation of S-GPPF. The thesis then introduces the S-GPPF generative model and derives the conditional posterior distributions of the latent variables for posterior inference via Gibbs sampling. The S-GPPF is shown to exhibit state-of-the-art performance for joint topic modeling and document classification on a dataset of conference abstracts, beating out competing supervised topic models. The unique properties of S-GPPF along with its competitive performance make it a novel contribution to supervised topic modeling.
Gas-liquid coexistence for the boson square-well fluid and the ⁴He binodal anomaly.
Fantoni, Riccardo
2014-08-01
The binodal of a boson square-well fluid is determined as a function of the particle mass through a quantum Gibbs ensemble Monte Carlo algorithm devised by R. Fantoni and S. Moroni [J. Chem. Phys. (to be published)]. In the infinite mass limit we recover the classical result. As the particle mass decreases, the gas-liquid critical point moves to lower temperatures. We explicitly study the case of a quantum delocalization de Boer parameter close to that of ⁴He. For comparison, we also determine the gas-liquid coexistence curve of ⁴He, for which we are able to observe the binodal anomaly below the λ-transition temperature.
Quantum Gibbs ensemble Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fantoni, Riccardo, E-mail: rfantoni@ts.infn.it; Moroni, Saverio, E-mail: moroni@democritos.it
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
Li, Rongjin; Zhang, Xiaotao; Dong, Huanli; Li, Qikai; Shuai, Zhigang; Hu, Wenping
2016-02-24
The equilibrium crystal shape and shape evolution of organic crystals are found to follow the Gibbs-Curie-Wulff theorem. Organic crystals are grown by the physical vapor transport technique and exhibit exactly the same shape as predicted by the Gibbs-Curie-Wulff theorem under optimal conditions. This accordance provides concrete proof for the theorem.
GLASS VISCOSITY AS A FUNCTION OF TEMPERATURE AND COMPOSITION: A MODEL BASED ON ADAM-GIBBS EQUATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hrma, Pavel R.
2008-07-01
Within the temperature range and composition region of processing and product forming, the viscosity of commercial and waste glasses spans over 12 orders of magnitude. This paper shows that a generalized Adam-Gibbs relationship reasonably approximates the real behavior of glasses with four temperature-independent parameters, of which two are linear functions of the composition vector. The equation is subjected to two constraints, one requiring that the viscosity-temperature relationship approaches the Arrhenius function at high temperatures with a composition-independent pre-exponential factor, and the other that the viscosity value is independent of composition at the glass-transition temperature. Several sets of constant coefficients were obtained by fitting the generalized Adam-Gibbs equation to data of two glass families: float glass and Hanford waste glass. Other equations (the Vogel-Fulcher-Tammann equation, original and modified, the Avramov equation, and the Douglass-Doremus equation) were fitted to the float glass data series and compared with the Adam-Gibbs equation, showing that the Adam-Gibbs form appears an excellent approximation of real glasses even as compared with other candidate constitutive relations.
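The underlying Adam-Gibbs relation ties the viscosity to the configurational entropy S_c(T); the base relation is standard and is quoted below for context, while the exact generalized four-parameter form with composition-dependent coefficients is the paper's own:

```latex
\log \eta(T) = A + \frac{B}{T\,S_c(T)},
```

where A and B are constants; making two of the model's parameters linear in the composition vector, subject to the high-temperature Arrhenius and glass-transition constraints described above, yields the fitted model.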
Bayesian inference in an item response theory model with a generalized student t link function
NASA Astrophysics Data System (ADS)
Azevedo, Caio L. N.; Migon, Helio S.
2012-10-01
In this paper we introduce a new item response theory (IRT) model with a generalized Student t-link function with unknown degrees of freedom (df), named the generalized t-link (GtL) IRT model. In this model we consider only the difficulty parameter in the item response function. GtL is an alternative to the two-parameter logit and probit models, since the degrees of freedom play a role similar to that of the discrimination parameter. However, the behavior of the GtL curves differs from that of the two-parameter models and the usual Student t link, since in GtL the curves obtained from different df's can cross the probit curves in more than one latent trait level. The GtL model has properties similar to those of generalized linear mixed models, such as the existence of sufficient statistics and easy parameter interpretation. Also, many techniques of parameter estimation, model fit assessment and residual analysis developed for those models can be used for the GtL model. We develop fully Bayesian estimation and model fit assessment tools through a Metropolis-Hastings step within a Gibbs sampling algorithm. We consider the sensitivity of the results to the prior choice for the degrees of freedom. The simulation study indicates that the algorithm recovers all parameters properly. In addition, some Bayesian model fit assessment tools are considered. Finally, a real data set is analyzed using our approach and other usual models. The results indicate that our model fits the data better than the two-parameter models.
Q-Space Truncation and Sampling in Diffusion Spectrum Imaging
Tian, Qiyuan; Rokem, Ariel; Folkerth, Rebecca D.; Nummenmaa, Aapo; Fan, Qiuyun; Edlow, Brian L.; McNab, Jennifer A.
2015-01-01
Purpose: To characterize the effects of q-space truncation and sampling on the spin-displacement probability density function (PDF) in diffusion spectrum imaging (DSI). Methods: DSI data were acquired using the MGH-USC connectome scanner (Gmax=300mT/m) with bmax=30,000s/mm2 and 17×17×17, 15×15×15 and 11×11×11 grids in ex vivo human brains, and with bmax=10,000s/mm2 and an 11×11×11 grid in vivo. An additional in vivo scan using bmax=7,000s/mm2 and an 11×11×11 grid was performed with a derated gradient strength of 40mT/m. PDFs and orientation distribution functions (ODFs) were reconstructed with different q-space filtering and PDF integration lengths, and from data down-sampled by factors of two and three. Results: Both ex vivo and in vivo data showed Gibbs ringing in PDFs, which becomes the main source of artifact in the subsequently reconstructed ODFs. For down-sampled data, PDFs interfere with the first replicas or their ringing, leading to obscured orientations in ODFs. Conclusion: The minimum required q-space sampling density corresponds to a field-of-view approximately equal to twice the mean displacement distance (MDD) of the tissue. The 11×11×11 grid is suitable for both ex vivo and in vivo DSI experiments. To minimize the effects of Gibbs ringing, ODFs should be reconstructed from unfiltered q-space data with the integration length over the PDF constrained to around the MDD. PMID:26762670
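The ringing mechanism is easy to reproduce in one dimension: truncating the q-space signal of a Gaussian displacement PDF and inverse Fourier transforming produces oscillatory side lobes, which a window filter damps at the cost of blurring. A minimal numpy sketch with arbitrary units and illustrative grid sizes:

```python
import numpy as np

n, q_max, sigma = 17, 1.0, 0.35                  # samples, truncation, displacement scale
q = np.linspace(-q_max, q_max, n)
E = np.exp(-2 * np.pi**2 * sigma**2 * q**2)      # signal of a Gaussian PDF

def reconstruct(signal, pad=256):
    """Zero-fill the truncated q-space data and inverse transform to the PDF."""
    grid = np.zeros(pad)
    grid[pad // 2 - n // 2 : pad // 2 + n // 2 + 1] = signal
    return np.abs(np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(grid))))

pdf_raw = reconstruct(E)                        # exhibits oscillatory Gibbs side lobes
pdf_filtered = reconstruct(E * np.hanning(n))   # lobes damped, main peak broadened
```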
NASA Astrophysics Data System (ADS)
Mandel, Kaisey; Kirshner, R. P.; Narayan, G.; Wood-Vasey, W. M.; Friedman, A. S.; Hicken, M.
2010-01-01
I have constructed a comprehensive statistical model for Type Ia supernova light curves spanning optical through near infrared data simultaneously. The near infrared light curves are found to be excellent standard candles (sigma(MH) = 0.11 +/- 0.03 mag) that are less vulnerable to systematic error from dust extinction, a major confounding factor for cosmological studies. A hierarchical statistical framework coherently incorporates multiple sources of randomness and uncertainty, including photometric error, intrinsic supernova light curve variations and correlations, dust extinction and reddening, and peculiar velocity dispersion and distances, for probabilistic inference with Type Ia SN light curves. Inferences are drawn from the full probability density over individual supernovae and the SN Ia and dust populations, conditioned on a dataset of SN Ia light curves and redshifts. To compute probabilistic inferences with hierarchical models, I have developed BayeSN, a Markov chain Monte Carlo algorithm based on Gibbs sampling. This code explores and samples the global probability density of parameters describing individual supernovae and the population. I have applied this hierarchical model to optical and near infrared data of over 100 nearby Type Ia SN from PAIRITEL, the CfA3 sample, and the literature. Using this statistical model, I find that SN with optical and NIR data have a smaller residual scatter in the Hubble diagram than SN with only optical data. The continued study of Type Ia SN in the near infrared will be important for improving their utility as precise and accurate cosmological distance indicators.
A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang
2009-11-01
Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and the spectral domain. However, most prevailing denoising techniques process the imagery in only one specific domain and thus do not exploit its multi-domain nature. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is used to adjust the noise level band by band. This algorithm uses BayesShrink for threshold estimation and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with the soft or hard threshold function, the improved one, which is first-order differentiable and has a smooth transitional region between noise and signal, preserves more image edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise and any artificial noise that may have been introduced during the spatial denoising. With the filter window width appropriately selected according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is tested on a set of Hyperion images acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides greater signal-to-noise-ratio improvement than traditional spatial or spectral methods, while better preserving local spectral absorption features.
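A compressed sketch of the two-stage idea using off-the-shelf PyWavelets and SciPy components is shown below. A plain soft universal threshold stands in for the paper's BayesShrink estimate with shape-tuned threshold function, and all parameter values are illustrative:

```python
import numpy as np
import pywt                                   # PyWavelets
from scipy.signal import savgol_filter

def denoise_cube(cube, wavelet="db4", level=2, window=9, polyorder=3):
    """Two-stage denoising: per-band 2-D wavelet shrinkage (spatial stage),
    then a Savitzky-Golay filter along the spectral axis (spectral stage).
    cube : (rows, cols, bands) hyperspectral array, bands >= window
    """
    out = np.empty(cube.shape, dtype=float)
    for b in range(cube.shape[2]):
        coeffs = pywt.wavedec2(cube[:, :, b], wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # MAD noise estimate
        thr = sigma * np.sqrt(2 * np.log(cube[:, :, b].size))  # universal threshold
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        rec = pywt.waverec2(coeffs, wavelet)
        out[:, :, b] = rec[: cube.shape[0], : cube.shape[1]]
    return savgol_filter(out, window, polyorder, axis=2)
```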
Genomic analysis of cow mortality and milk production using a threshold-linear model.
Tsuruta, S; Lourenco, D A L; Misztal, I; Lawlor, T J
2017-09-01
The objective of this study was to investigate the feasibility of genomic evaluation for cow mortality and milk production using a single-step methodology. Genomic relationships between cow mortality and milk production were also analyzed. Data included 883,887 (866,700) first-parity, 733,904 (711,211) second-parity, and 516,256 (492,026) third-parity records on cow mortality (305-d milk yields) of Holsteins from Northeast states in the United States. The pedigree consisted of up to 1,690,481 animals, including 34,481 bulls genotyped with 36,951 SNP markers. Analyses were conducted with a bivariate threshold-linear model for each parity separately. Genomic information was incorporated as a genomic relationship matrix in the single-step BLUP. Traditional and genomic estimated breeding values (GEBV) were obtained with Gibbs sampling using fixed variances, whereas reliabilities were calculated from variances of GEBV samples. Genomic EBV were then converted into single nucleotide polymorphism (SNP) marker effects. Those SNP effects were categorized according to values corresponding to 1 to 4 standard deviations. Moving averages and variances of SNP effects were calculated for windows of 30 adjacent SNP, and Manhattan plots were created for SNP variances with the same window size. Using Gibbs sampling, the reliability for genotyped bulls for cow mortality was 28 to 30% in EBV and 70 to 72% in GEBV. The reliability for genotyped bulls for 305-d milk yields was 53 to 65% in EBV and 81 to 85% in GEBV. Correlations of SNP effects between mortality and 305-d milk yields within categories were highest for the largest SNP effects and reached >0.7 at 4 standard deviations. All SNP regions explained less than 0.6% of the genetic variance for both traits, except regions close to the DGAT1 gene, which explained up to 2.5% for cow mortality and 4% for 305-d milk yields. Reliability for GEBV with a moderate number of genotyped animals can be calculated from Gibbs samples. Genomic information can greatly increase the reliability of predictions not only for milk but also for mortality. The existence of a common region on Bos taurus autosome 14 affecting both traits may indicate a major gene with a pleiotropic effect on milk and mortality. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Thermodynamic properties of adsorption and micellization of n-octyl-β-D-glucopyranoside.
Mańko, Diana; Zdziennicka, Anna; Jańczuk, Bronisław
2014-02-01
Measurements of the surface tension, density and viscosity of aqueous solutions of n-octyl-β-D-glucopyranoside (OGP) were made at 293 K. From the results, the Gibbs surface excess concentration of OGP at the water-air interface and its critical micelle concentration (CMC) were determined. The Gibbs surface excess concentration of OGP used in the Gu and Zhu isotherm equation allowed us to determine the Gibbs standard free energy of OGP adsorption at the water-air interface. The Gibbs standard free energy of OGP adsorption was also determined on the basis of the Langmuir, Szyszkowski, Gamboa and Olea equations, as well as from the surface tension of the "hydrophobic" part of OGP and the "hydrophobic" part-water interface tension. The values of the Gibbs standard free energy of OGP adsorption at the water-air interface determined by all the above mentioned methods were found to agree. It was also shown that the standard free energy of OGP micellization determined from the CMC is consistent with that obtained on the basis of the free energy of interactions between the "hydrophobic" parts of OGP through the water phase. Copyright © 2013 Elsevier B.V. All rights reserved.
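The central quantities above are linked by the Gibbs adsorption isotherm; schematically, for a nonionic surfactant of bulk concentration c, with K an adsorption equilibrium constant (e.g. from the Langmuir fit), one has:

```latex
% Gibbs adsorption isotherm (surface excess from the surface tension
% slope) and the standard Gibbs energy of adsorption, schematically.
\[
\Gamma = -\frac{1}{RT}\,\frac{d\gamma}{d\ln c},
\qquad
\Delta G^{\circ}_{\mathrm{ads}} = -RT \ln K
\]
```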
Mesohysteresis model for ferromagnetic materials by minimization of the micromagnetic free energy
NASA Astrophysics Data System (ADS)
van den Berg, A.; Dupré, L.; Van de Wiele, B.; Crevecoeur, G.
2009-04-01
To study the connection between macroscopic hysteretic behavior and the microstructural properties, this paper presents and validates a new material dependent three-dimensional mesoscopic magnetic hysteresis model. In the presented mesoscopic description, the different micromagnetic energy terms are reformulated on the space scale of the magnetic domains. The sample is discretized in cubic cells, each with a local stress state, local bcc crystallographic axes, etc. The magnetization is assumed to align with one of the three crystallographic axes, in positive or negative sense, defining six volume fractions within each cell. The micromagnetic Gibbs free energy is described in terms of these volume fractions. Hysteresis loops are computed by minimizing the mesoscopic Gibbs free energy using a modified gradient search for a sequence of external applied fields. To validate the mesohysteresis model, we studied the magnetic memory properties. Numerical experiments reveal that (1) minor hysteresis loops are indeed closed and (2) the closed minor loops are erased from the memory.
Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.
Dosso, Stan E; Nielsen, Peter L
2002-01-01
This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
Weakly Nonergodic Dynamics in the Gross-Pitaevskii Lattice
NASA Astrophysics Data System (ADS)
Mithun, Thudiyangal; Kati, Yagmur; Danieli, Carlo; Flach, Sergej
2018-05-01
The microcanonical Gross-Pitaevskii (also known as the semiclassical Bose-Hubbard) lattice model dynamics is characterized by a pair of energy and norm densities. The grand canonical Gibbs distribution fails to describe a part of the density space, due to the boundedness of its kinetic energy spectrum. We define Poincaré equilibrium manifolds and compute the statistics of microcanonical excursion times off them. The tails of the distribution functions quantify the proximity of the many-body dynamics to a weakly nonergodic phase, which occurs when the average excursion time is infinite. We find that a crossover to weakly nonergodic dynamics takes place inside the non-Gibbs phase, being unnoticed by the largest Lyapunov exponent. In the ergodic part of the non-Gibbs phase, the Gibbs distribution should be replaced by an unknown modified one. We relate our findings to the corresponding integrable limit, close to which the actions are interacting through a short range coupling network.
Time-dependent generalized Gibbs ensembles in open quantum systems
NASA Astrophysics Data System (ADS)
Lange, Florian; Lenarčič, Zala; Rosch, Achim
2018-04-01
Generalized Gibbs ensembles have been used as powerful tools to describe the steady state of integrable many-particle quantum systems after a sudden change of the Hamiltonian. Here, we demonstrate numerically that they can be used for a much broader class of problems. We consider integrable systems in the presence of weak perturbations which break both integrability and drive the system to a state far from equilibrium. Under these conditions, we show that the steady state and the time evolution on long timescales can be accurately described by a (truncated) generalized Gibbs ensemble with time-dependent Lagrange parameters, determined from simple rate equations. We compare the numerically exact time evolutions of density matrices for small systems with a theory based on block-diagonal density matrices (diagonal ensemble) and a time-dependent generalized Gibbs ensemble containing only a small number of approximately conserved quantities, using the one-dimensional Heisenberg model with perturbations described by Lindblad operators as an example.
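The ensemble referred to above has the standard exponential form, with all time dependence carried by the Lagrange parameters conjugate to the (approximately) conserved charges Q_i:

```latex
% Truncated generalized Gibbs ensemble with time-dependent
% Lagrange parameters \lambda_i(t).
\[
\rho_{\mathrm{GGE}}(t) = \frac{1}{Z(t)}\,
  \exp\!\Big(-\sum_i \lambda_i(t)\, Q_i\Big),
\qquad
Z(t) = \operatorname{Tr} \exp\!\Big(-\sum_i \lambda_i(t)\, Q_i\Big)
\]
```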
Gibbs Energy Modeling of Digenite and Adjacent Solid-State Phases
NASA Astrophysics Data System (ADS)
Waldner, Peter
2017-08-01
All sulfur potential and phase diagram data available in the literature for solid-state equilibria related to digenite have been assessed. A thorough thermodynamic analysis at 1 bar total pressure has been performed. A three-sublattice approach has been developed to model the Gibbs energy of digenite as a function of composition and temperature using the compound energy formalism. The Gibbs energies of the adjacent solid-state phases covellite and high-temperature chalcocite are also modeled, treating both sulfides as stoichiometric compounds. The novel model for digenite offers a new interpretation of experimental data, may contribute from a thermodynamic point of view to elucidating the role of copper species within the crystal structure, and allows extrapolation to composition regimes richer in copper than stoichiometric digenite Cu2S. Preliminary predictions into the ternary Cu-Fe-S system at 1273 K (1000 °C), using the Gibbs energy model of digenite to calculate its iron solubility, are promising.
Thermodynamics of Bioreactions.
Held, Christoph; Sadowski, Gabriele
2016-06-07
Thermodynamic principles have been applied to enzyme-catalyzed reactions since the beginning of the 1930s in an attempt to understand metabolic pathways. Currently, thermodynamics is also applied to the design and analysis of biotechnological processes. The key thermodynamic quantity is the Gibbs energy of reaction, which must be negative for a reaction to occur spontaneously. However, the application of thermodynamic feasibility studies sometimes yields positive Gibbs energies of reaction even for reactions that are known to occur spontaneously, such as glycolysis. This article reviews the application of thermodynamics in enzyme-catalyzed reactions. It summarizes the basic thermodynamic relationships used for describing the Gibbs energy of reaction and also refers to the nonuniform application of these relationships in the literature. The review summarizes state-of-the-art approaches that describe the influence of temperature, pH, electrolytes, solvents, and concentrations of reacting agents on the Gibbs energy of reaction and, therefore, on the feasibility and yield of biological reactions.
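The feasibility criterion reviewed above is the familiar relation between the standard Gibbs energy of reaction and the actual reaction quotient, written here with activities a_i and stoichiometric coefficients ν_i:

```latex
% Gibbs energy of reaction at actual (non-standard) conditions;
% a negative value indicates a spontaneous reaction.
\[
\Delta_r G = \Delta_r G^{\circ} + RT \ln \prod_i a_i^{\nu_i},
\qquad
\Delta_r G < 0 \;\Rightarrow\; \text{spontaneous}
\]
```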
Gibbs measures with memory of length 2 on an arbitrary-order Cayley tree
NASA Astrophysics Data System (ADS)
Akın, Hasan
In this paper, we consider the Ising-Vannimenus model on an arbitrary-order Cayley tree. We generalize the results conjectured by Akın [Chinese J. Phys. 54(4), 635-649 (2016) and Int. J. Mod. Phys. B 31(13), 1750093 (2017)] to an arbitrary-order Cayley tree. We establish the existence and give a full classification of translation-invariant Gibbs measures (TIGMs) with a memory of length 2 associated with the model on an arbitrary-order Cayley tree. We construct the recurrence equations corresponding to the generalized ANNNI model and verify the Kolmogorov consistency condition. We propose a rigorous measure-theoretical approach to investigate the Gibbs measures with a memory of length 2 for the model. We examine whether the number of branches of the tree changes the number of Gibbs measures, and we also attempt to determine when a phase transition occurs.
Generalized thermalization for integrable system under quantum quench.
Muralidharan, Sushruth; Lochan, Kinjalk; Shankaranarayanan, S
2018-01-01
We investigate equilibration and generalized thermalization of a quantum harmonic chain under a local quantum quench. The quench we consider connects two disjoint harmonic chains of different sizes, so the system jumps between two integrable settings. We verify the validity of the generalized Gibbs ensemble description for this infinite-dimensional Hilbert space system and also identify equilibration between the subsystems, as in classical systems. Using Bogoliubov transformations, we show that the eigenstates of the system prior to the quench evolve toward the generalized Gibbs ensemble description. Eigenstates that are more delocalized (in the sense of the inverse participation ratio) prior to the quench tend to equilibrate more rapidly. Further, through the phase space properties of a generalized Gibbs ensemble and the strength of stimulated emission, we identify the necessary criterion on the initial states for such relaxation at late times and also identify the states that would potentially not be described by the generalized Gibbs ensemble.
NASA Astrophysics Data System (ADS)
Ahmad, Mohd Ali Khameini; Liao, Lingmin; Saburov, Mansoor
2018-06-01
We study the set of p-adic Gibbs measures of the q-state Potts model on the Cayley tree of order three. We prove the vastness of the set of periodic p-adic Gibbs measures for this model by showing the chaotic behavior of the corresponding Potts-Bethe mapping over Q_p for the prime numbers p ≡ 1 (mod 3). In fact, for 0 < |θ - 1|_p < |q|_p^2 < 1, where θ = exp_p(J) and J is a coupling constant, there exists a subsystem that is isometrically conjugate to the full shift on three symbols. Meanwhile, for 0 < |q|_p^2 ≤ |θ - 1|_p < |q|_p < 1, there exists a subsystem that is isometrically conjugate to a subshift of finite type on r symbols, where r ≥ 4. However, these subshifts on r symbols are all topologically conjugate to the full shift on three symbols. The p-adic Gibbs measures of the same model for the prime numbers p = 2, 3 and the corresponding Potts-Bethe mapping are also discussed. On the other hand, for 0 < |θ - 1|_p < |q|_p < 1, we remark that the Potts-Bethe mapping is not chaotic when p = 3 and p ≡ 2 (mod 3), and we could not conclude the vastness of the set of periodic p-adic Gibbs measures. In a forthcoming paper with the same title, we will treat the case 0 < |q|_p ≤ |θ - 1|_p < 1 for all prime numbers p.
Ergodicity of a singly-thermostated harmonic oscillator
NASA Astrophysics Data System (ADS)
Hoover, William Graham; Sprott, Julien Clinton; Hoover, Carol Griswold
2016-03-01
Although Nosé's thermostated mechanics is formally consistent with Gibbs' canonical ensemble, the thermostated Nosé-Hoover (harmonic) oscillator, with its mean kinetic temperature controlled, is far from ergodic. Much of its phase space is occupied by regular conservative tori. Oscillator ergodicity has previously been achieved by controlling two oscillator moments with two thermostat variables. Here we use computerized searches in conjunction with visualization to find singly-thermostated motion equations for the oscillator which are consistent with Gibbs' canonical distribution. Such models are the simplest able to bridge the gap between Gibbs' statistical ensembles and Newtonian single-particle dynamics.
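For context, the classic singly-thermostated oscillator whose lack of ergodicity motivates this search can be integrated in a few lines; the specific ergodic motion equations found in the paper are not reproduced here. A sketch with illustrative parameter values:

```python
import numpy as np

# Nose-Hoover oscillator: dq/dt = p, dp/dt = -q - zeta*p,
# dzeta/dt = (p**2 - T)/tau**2. Different initial conditions land on
# regular tori or in the chaotic sea, which is what breaks ergodicity.
def rhs(s, T=1.0, tau=1.0):
    q, p, z = s
    return np.array([p, -q - z * p, (p * p - T) / tau**2])

def rk4_orbit(s, dt=0.005, steps=50_000):
    traj = np.empty((steps, 3))
    for i in range(steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

orbit = rk4_orbit(np.array([0.0, 1.55, 0.0]))   # one trajectory in (q, p, zeta)
```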
Accurate age estimation in small-scale societies
Diekmann, Yoan; Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Page, Abigail E.; Chaudhary, Nikhil; Migliano, Andrea Bamberg; Thomas, Mark G.
2017-01-01
Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire. PMID:28696282
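A toy version of the order-constrained sampler conveys the idea: with individuals indexed from youngest to oldest and an age range per individual, each full conditional is uniform on the individual's own range intersected with the interval between the neighbours' current ages. The data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_ages(lo, hi, n_iter=5000):
    """Gibbs sampler for ages under a known youngest-to-oldest ranking.
    lo, hi : per-individual age bounds (assumed mutually consistent,
    so every conditional interval is non-empty).
    Returns posterior draws of shape (n_iter, n)."""
    n = len(lo)
    ages = np.linspace(min(lo), max(hi), n)         # increasing starting point
    draws = np.empty((n_iter, n))
    for t in range(n_iter):
        for i in range(n):
            a = max(lo[i], ages[i - 1] if i > 0 else -np.inf)
            b = min(hi[i], ages[i + 1] if i < n - 1 else np.inf)
            ages[i] = rng.uniform(a, b)             # full conditional is uniform
        draws[t] = ages
    return draws

draws = gibbs_ages(lo=[10, 12, 15, 30], hi=[20, 25, 40, 60])
posterior_means = draws[1000:].mean(axis=0)         # discard burn-in
```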
Narinc, D; Aygun, A; Karaman, E; Aksoy, T
2015-07-01
The objective of the present study was to estimate heritabilities as well as genetic and phenotypic correlations for egg weight, specific gravity, shape index, shell ratio, egg shell strength, egg length, egg width and shell weight in Japanese quail eggs. External egg quality traits were measured on 5864 eggs of 934 female quails from a dam line selected for two generations. Within the Bayesian framework, using Gibbs Sampling algorithm, a multivariate animal model was applied to estimate heritabilities and genetic correlations for external egg quality traits. The heritability estimates for external egg quality traits were moderate to high and ranged from 0.29 to 0.81. The heritability estimates for egg and shell weight of 0.81 and 0.76 were fairly high. The genetic and phenotypic correlations between egg shell strength with specific gravity, shell ratio and shell weight ranging from 0.55 to 0.79 were relatively high. It can be concluded that it is possible to determine egg shell quality using the egg specific gravity values utilizing its high heritability and fairly high positive correlation with most of the egg shell quality traits. As a result, egg specific gravity may be the choice of selection criterion rather than other external egg traits for genetic improvement of egg shell quality in Japanese quails.
NASA Astrophysics Data System (ADS)
Soundararajan, Venky; Aravamudan, Murali
2014-12-01
The efficacy and mechanisms of therapeutic action are largely described by atomic bonds and interactions local to drug binding sites. Here we introduce global connectivity analysis as a high-throughput computational assay of therapeutic action, inspired by the Google PageRank algorithm that unearths the most "globally connected" websites from the information-dense world wide web (WWW). We execute short timescale (30 ps) molecular dynamics simulations with high sampling frequency (0.01 ps) to identify amino acid residue hubs whose global connectivity dynamics are characteristic of the ligand or mutation associated with the target protein. We find that unexpected allosteric hubs, up to 20 Å from the ATP binding site but within 5 Å of the phosphorylation site, encode the Gibbs free energy of inhibition (ΔG_inhibition) for select protein kinase-targeted cancer therapeutics. We further find that clinically relevant somatic cancer mutations implicated in both drug resistance and personalized drug sensitivity can be predicted in a high-throughput fashion. Our results establish global connectivity analysis as a potent assay of protein functional modulation. This sets the stage for unearthing disease-causal exome mutations and motivates forecasting of clinical drug response on a patient-by-patient basis. We suggest incorporation of structure-guided genetic inference assays into pharmaceutical and healthcare oncology workflows.
Discriminative Relational Topic Models.
Chen, Ning; Zhu, Jun; Xia, Fei; Zhang, Bo
2015-05-01
Relational topic models (RTMs) provide a probabilistic generative process to describe both the link structure and document contents for document networks, and they have shown promise on predicting network structures and discovering latent topic representations. However, existing RTMs have limitations in both the restricted model expressiveness and incapability of dealing with imbalanced network data. To expand the scope and improve the inference accuracy of RTMs, this paper presents three extensions: 1) unlike the common link likelihood with a diagonal weight matrix that allows the-same-topic interactions only, we generalize it to use a full weight matrix that captures all pairwise topic interactions and is applicable to asymmetric networks; 2) instead of doing standard Bayesian inference, we perform regularized Bayesian inference (RegBayes) with a regularization parameter to deal with the imbalanced link structure issue in real networks and improve the discriminative ability of learned latent representations; and 3) instead of doing variational approximation with strict mean-field assumptions, we present collapsed Gibbs sampling algorithms for the generalized relational topic models by exploring data augmentation without making restricting assumptions. Under the generic RegBayes framework, we carefully investigate two popular discriminative loss functions, namely, the logistic log-loss and the max-margin hinge loss. Experimental results on several real network datasets demonstrate the significance of these extensions on improving prediction performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelanti, Marica, E-mail: marica.pelanti@ensta-paristech.fr; Shyue, Keh-Ming, E-mail: shyue@ntu.edu.tw
2014-02-15
We model liquid–gas flows with cavitation by a variant of the six-equation single-velocity two-phase model with stiff mechanical relaxation of Saurel–Petitpas–Berry (Saurel et al., 2009) [9]. In our approach we employ phasic total energy equations instead of the phasic internal energy equations of the classical six-equation system. This alternative formulation allows us to easily design a simple numerical method that ensures consistency with mixture total energy conservation at the discrete level and agreement of the relaxed pressure at equilibrium with the correct mixture equation of state. Temperature and Gibbs free energy exchange terms are included in the equations as relaxation terms to model heat and mass transfer and hence liquid–vapor transition. The algorithm uses a high-resolution wave propagation method for the numerical approximation of the homogeneous hyperbolic portion of the model. In two dimensions a fully-discretized scheme based on a hybrid HLLC/Roe Riemann solver is employed. Thermo-chemical terms are handled numerically via a stiff relaxation solver that forces thermodynamic equilibrium at liquid–vapor interfaces under metastable conditions. We present numerical results of sample tests in one and two space dimensions that show the ability of the proposed model to describe cavitation mechanisms and evaporation wave dynamics.
Bayesian Hierarchical Random Intercept Model Based on Three Parameter Gamma Distribution
NASA Astrophysics Data System (ADS)
Wirawati, Ika; Iriawan, Nur; Irhamah
2017-06-01
Hierarchical data structures are common throughout many areas of research. Previously, the existence of this type of data was often overlooked in analyses. The appropriate statistical analysis for handling this type of data is the hierarchical linear model (HLM). This article focuses only on the random intercept model (RIM), a subclass of HLM. This model assumes that the intercepts of the models at the lowest level vary among those models, while their slopes are fixed. The differences among intercepts are suspected to be affected by some variables in the upper level. These intercepts, therefore, are regressed against those upper level variables as predictors. This paper demonstrates the proposed two-level RIM by modeling per capita household expenditure in Maluku Utara, with five characteristics in the first level and three characteristics of districts/cities in the second level. The per capita household expenditure data in the first level were captured by the three-parameter Gamma distribution. The model, therefore, is more complex due to the interaction of the many parameters representing the hierarchical structure and the distribution pattern of the data. To simplify the estimation of parameters, a computational Bayesian method coupled with a Markov Chain Monte Carlo (MCMC) algorithm and its Gibbs sampling is employed.
Aggression Replacement Training and Childhood Trauma
ERIC Educational Resources Information Center
Amendola, A. Mark; Oliver, Robert W.
2013-01-01
Aggression Replacement Training (ART) was developed by the late Arnold Goldstein of Syracuse University to teach positive alternatives to children and youth with emotional and behavioral problems (Glick & Gibbs, 2011; Goldstein, Glick, & Gibbs, 1998). ART provides cognitive, affective, and behavioral interventions to build competence in…
A novel procedure on next generation sequencing data analysis using text mining algorithm.
Zhao, Weizhong; Chen, James J; Perkins, Roger; Wang, Yuping; Liu, Zhichao; Hong, Huixiao; Tong, Weida; Zou, Wen
2016-05-13
Next-generation sequencing (NGS) technologies have provided researchers with vast possibilities in various biological and biomedical research areas. Efficient data mining strategies are in high demand for large scale comparative and evolutionary studies to be performed on the large amounts of data derived from NGS projects. Topic modeling is an active research field in machine learning and has been mainly used as an analytical tool to structure large textual corpora for data mining. We report a novel procedure to analyse NGS data using topic modeling. It consists of four major steps: NGS data retrieval, preprocessing, topic modeling, and data mining using Latent Dirichlet Allocation (LDA) topic outputs. An NGS data set of Salmonella enterica strains was used as a case study to show the workflow of this procedure. The perplexity measurement of the topic numbers and the convergence efficiencies of Gibbs sampling were calculated and discussed for achieving the best result from the proposed procedure. The output topics of the LDA algorithm could be treated as features of Salmonella strains to accurately describe the genetic diversity of the fliC gene in various serotypes. The results of a two-way hierarchical clustering and data matrix analysis on LDA-derived matrices successfully classified Salmonella serotypes based on the NGS data. The implementation of topic modeling in the NGS data analysis procedure provides a new way to elucidate genetic information from NGS data and to identify gene-phenotype relationships and biomarkers, especially in the era of biological and medical big data.
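For readers unfamiliar with the inference engine, a minimal collapsed Gibbs sampler for LDA looks as follows; this is a generic textbook sketch, not the pipeline's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def lda_collapsed_gibbs(docs, V, K=5, alpha=0.1, beta=0.01, n_iter=200):
    """docs: list of word-id lists; V: vocabulary size; K: number of topics."""
    z = [rng.integers(K, size=len(d)) for d in docs]        # topic assignments
    ndk = np.zeros((len(docs), K))                          # doc-topic counts
    nkv = np.zeros((K, V))                                  # topic-word counts
    nk = np.zeros(K)                                        # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1; nkv[z[d][i], w] += 1; nk[z[d][i]] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkv[k, w] -= 1; nk[k] -= 1  # remove current word
                p = (ndk[d] + alpha) * (nkv[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())            # resample its topic
                z[d][i] = k
                ndk[d, k] += 1; nkv[k, w] += 1; nk[k] += 1
    return ndk, nkv                                         # posterior count matrices

ndk, nkv = lda_collapsed_gibbs([[0, 1, 2, 1], [2, 3, 3, 0]], V=4, K=2)
```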
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the multimodal distribution of the well data and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
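For the single-Gaussian prior, the analytical posterior mentioned above is the standard linear-Gaussian update; with forward operator G, prior m ~ N(μ, Σ) and noise covariance Σ_e (notation ours):

```latex
% Posterior mean and covariance of the model vector m given data d
% under the linearized forward model d = G m + e.
\[
\tilde{\mu} = \mu + \Sigma G^{T}\left(G \Sigma G^{T} + \Sigma_e\right)^{-1}\!\left(d - G\mu\right),
\qquad
\tilde{\Sigma} = \Sigma - \Sigma G^{T}\left(G \Sigma G^{T} + \Sigma_e\right)^{-1} G \Sigma
\]
```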
McParland, D; Phillips, C M; Brennan, L; Roche, H M; Gormley, I C
2017-12-10
The LIPGENE-SU.VI.MAX study, like many others, recorded high-dimensional continuous phenotypic data and categorical genotypic data. LIPGENE-SU.VI.MAX focuses on the need to account for both phenotypic and genetic factors when studying the metabolic syndrome (MetS), a complex disorder that can lead to higher risk of type 2 diabetes and cardiovascular disease. Interest lies in clustering the LIPGENE-SU.VI.MAX participants into homogeneous groups or sub-phenotypes, by jointly considering their phenotypic and genotypic data, and in determining which variables are discriminatory. A novel latent variable model that elegantly accommodates high dimensional, mixed data is developed to cluster LIPGENE-SU.VI.MAX participants using a Bayesian finite mixture model. A computationally efficient variable selection algorithm is incorporated, estimation is via a Gibbs sampling algorithm and an approximate BIC-MCMC criterion is developed to select the optimal model. Two clusters or sub-phenotypes ('healthy' and 'at risk') are uncovered. A small subset of variables is deemed discriminatory, which notably includes phenotypic and genotypic variables, highlighting the need to jointly consider both factors. Further, 7 years after the LIPGENE-SU.VI.MAX data were collected, participants underwent further analysis to diagnose presence or absence of the MetS. The two uncovered sub-phenotypes strongly correspond to the 7-year follow-up disease classification, highlighting the role of phenotypic and genotypic factors in the MetS and emphasising the potential utility of the clustering approach in early screening. Additionally, the ability of the proposed approach to define the uncertainty in sub-phenotype membership at the participant level is synonymous with the concepts of precision medicine and nutrition. Copyright © 2017 John Wiley & Sons, Ltd.
Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm
NASA Astrophysics Data System (ADS)
Selig, Marco; Enßlin, Torsten A.
2015-02-01
The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin2 observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74
Feng, Dong-xia; Nguyen, Anh V
2016-03-01
Floating objects at air-water interfaces are central to a number of everyday activities, from insects walking on water to flotation separation of valuable minerals using air bubbles. The available theories show that a fine sphere can float if the surface tension force and the buoyancy can support the sphere at the interface, with the apical angle subtended by the circle of contact being larger than the contact angle. Here we show that the pinning of the contact line at a sharp edge, known as the Gibbs inequality condition, also plays a significant role in controlling the stability and detachment of floating spheres. Specifically, we truncated spheres at different angles and used a force sensor device to measure the force required to push the truncated spheres from the interface into water. We also developed a theoretical model to calculate the pushing force, which in combination with the experimental results shows the different effects of the Gibbs inequality condition on the stability and detachment of the spheres from the water surface. For small truncation angles, the Gibbs inequality condition does not affect the sphere detachment, and hence the classical theories on the floatability of spheres are valid. For large truncation angles, the Gibbs inequality condition determines the tenacity of the particle-meniscus contact and the stability and detachment of floating spheres. In this case, the classical theories on the floatability of spheres are no longer valid. A critical truncation angle for the transition from the classical to the Gibbs inequality regime of detachment was also established. The outcomes of this research advance our understanding of the behavior of floating objects, in particular the flotation separation of valuable minerals, which often involve various sharp edges of their crystal faces.
Minimal parameter solution of the orthogonal matrix differential equation
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Markley, F. Landis
1990-01-01
As demonstrated in this work, all orthogonal matrices solve a first order differential equation. The straightforward solution of this equation requires n^2 integrations to obtain the elements of the nth order matrix. There are, however, only n(n-1)/2 independent parameters which determine an orthogonal matrix. The questions of choosing them, finding their differential equation and expressing the orthogonal matrix in terms of these parameters are considered. Several possibilities which are based on attitude determination in three dimensions are examined. It is shown that not all 3-D methods have useful extensions to higher dimensions. It is also shown why the rates of change of the matrix elements, which are the elements of the angular rate vector in 3-D, are the elements of a tensor of the second rank (dyadic) in spaces other than three dimensional. It is proven that the 3-D Gibbs vector (or Cayley Parameters) are extendable to other dimensions. An algorithm is developed employing the resulting parameters, which are termed Extended Rodrigues Parameters, and numerical results are presented of the application of the algorithm to a fourth order matrix.
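A 3-D illustration of the idea: propagate the three Gibbs vector (classical Rodrigues parameter) components instead of the nine matrix elements, then rebuild the matrix via the Cayley transform. The n-dimensional Extended Rodrigues Parameters of the paper are not reproduced here; this is the familiar three-dimensional special case:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def gibbs_rate(g, w):
    """Kinematics of the Gibbs vector: dg/dt = 0.5*(w + g x w + g(g.w))."""
    return 0.5 * (w + np.cross(g, w) + g * np.dot(g, w))

def gibbs_to_dcm(g):
    """Orthogonal matrix from the Cayley transform R = (I - [g])(I + [g])^-1."""
    G = skew(g)
    return (np.eye(3) - G) @ np.linalg.inv(np.eye(3) + G)

g = np.zeros(3)                            # identity attitude
w = np.array([0.3, -0.2, 0.1])             # constant body rate (toy input, rad/s)
dt = 0.001
for _ in range(2000):
    g = g + dt * gibbs_rate(g, w)          # simple Euler step for illustration
R = gibbs_to_dcm(g)
assert np.allclose(R @ R.T, np.eye(3), atol=1e-9)
```

Because the Cayley transform of a skew-symmetric matrix is orthogonal by construction, the rebuilt matrix stays orthogonal regardless of integration error in g, which is the practical appeal of minimal-parameter solutions.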
ERIC Educational Resources Information Center
Hanson, Robert M.; Riley, Patrick; Schwinefus, Jeff; Fischer, Paul J.
2008-01-01
The use of qualitative graphs of Gibbs energy versus temperature is described in the context of chemical demonstrations involving phase changes and colligative properties at the general chemistry level. (Contains 5 figures and 1 note.)
Combinatorial Statistics on Trees and Networks
2010-09-29
interaction graph is drawn from the Erdos-Renyi model G(n, p), where each edge is present independently with probability p. For this model we establish a double... special interest is the behavior of Gibbs sampling on the Erdos-Renyi random graph G(n, d/n), where each edge is chosen independently with... which have no counterparts in the coloring setting. Our proof presented here exploits in novel ways the local treelike structure of Erdos-Renyi
NASA Astrophysics Data System (ADS)
Liu, Yue-Lin; Ding, Fang; Luo, G.-N.; Chen, Chang-An
2017-12-01
We have carried out systematic first-principles total energy and vibration spectrum calculations to investigate the finite-temperature H dissolution behavior in tungsten and molybdenum, which are considered promising candidates for the first wall in nuclear fusion reactors. The temperature effect is accounted for through lattice expansion and phonon vibration. We demonstrate that the H Gibbs energy of formation at both tetrahedral and octahedral interstitial positions depends strongly on the temperature. The H Gibbs energy of formation under one atmosphere of pressure increases significantly with increasing temperature, with the phonon vibration contribution playing the decisive role. Using the predicted H Gibbs energy of formation, our calculated H concentrations in both metals are about one or two orders of magnitude lower than the experimental data over the temperature range from 900 to 2400 K. This discrepancy can be reasonably explained by the defect-capturing effect.
NASA Astrophysics Data System (ADS)
Williams, Christopher J.; Moffitt, Christine M.
2003-03-01
An important emerging issue in fisheries biology is the health of free-ranging populations of fish, particularly with respect to the prevalence of certain pathogens. For many years, pathologists focused on captive populations and interest was in the presence or absence of certain pathogens, so it was economically attractive to test pooled samples of fish. Recently, investigators have begun to study individual fish prevalence from pooled samples. Estimation of disease prevalence from pooled samples is straightforward when assay sensitivity and specificity are perfect, but this assumption is unrealistic. Here we illustrate the use of a Bayesian approach for estimating disease prevalence from pooled samples when sensitivity and specificity are not perfect. We also focus on diagnostic plots to monitor the convergence of the Gibbs-sampling-based Bayesian analysis. The methods are illustrated with a sample data set.
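A simple stand-in for the pooled-testing calculation shows the structure of the likelihood when sensitivity (Se) and specificity (Sp) are imperfect; here a grid approximation under a flat prior replaces the Gibbs sampler, and all inputs are hypothetical:

```python
import numpy as np
from scipy.stats import binom

# Hypothetical survey: 40 pools of k = 5 fish each, 7 pools test positive,
# assay sensitivity 0.90 and specificity 0.95.
k, n_pools, y_pos, se, sp = 5, 40, 7, 0.90, 0.95

p = np.linspace(0.0, 1.0, 1001)                               # prevalence grid
pi_pool = se * (1 - (1 - p) ** k) + (1 - sp) * (1 - p) ** k   # P(pool positive)
post = binom.pmf(y_pos, n_pools, pi_pool)                     # flat prior -> likelihood
post /= post.sum()                                            # normalize on the grid
p_mean = (p * post).sum()                                     # posterior mean prevalence
```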
Measuring effective temperatures in a generalized Gibbs ensemble
NASA Astrophysics Data System (ADS)
Foini, Laura; Gambassi, Andrea; Konik, Robert; Cugliandolo, Leticia F.
2017-05-01
The local physical properties of an isolated quantum statistical system in the stationary state reached long after a quench are generically described by the Gibbs ensemble, which involves only its Hamiltonian and the temperature as a parameter. If the system is instead integrable, additional quantities conserved by the dynamics intervene in the description of the stationary state. The resulting generalized Gibbs ensemble involves a number of temperature-like parameters, the determination of which is practically difficult. Here we argue that in a number of simple models these parameters can be effectively determined by using fluctuation-dissipation relationships between response and correlation functions of natural observables, quantities which are accessible in experiments.
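The fluctuation-dissipation route mentioned above can be summarized by the classical relation from which an effective temperature is read off; schematically, for an observable with response function R and correlation function C:

```latex
% Classical fluctuation-dissipation form used to extract an effective
% temperature from response and correlation functions.
\[
R(t,t') = \frac{1}{k_B T_{\mathrm{eff}}}\,\frac{\partial C(t,t')}{\partial t'}
\]
```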
NASA Astrophysics Data System (ADS)
Hayata, Tomoya; Hidaka, Yoshimasa; Noumi, Toshifumi; Hongo, Masaru
2015-09-01
We derive relativistic hydrodynamics from quantum field theories by assuming that the density operator is given by a local Gibbs distribution at initial time. We decompose the energy-momentum tensor and particle current into nondissipative and dissipative parts, and analyze their time evolution in detail. Performing the path-integral formulation of the local Gibbs distribution, we microscopically derive the generating functional for the nondissipative hydrodynamics. We also construct a basis to study dissipative corrections. In particular, we derive the first-order dissipative hydrodynamic equations without a choice of frame such as the Landau-Lifshitz or Eckart frame.
Four competing interactions for models with an uncountable set of spin values on a Cayley tree
NASA Astrophysics Data System (ADS)
Rozikov, U. A.; Haydarov, F. H.
2017-06-01
We consider models with four competing interactions (external field, nearest neighbor, second neighbor, and three neighbors) and an uncountable set [0, 1] of spin values on the Cayley tree of order two. We reduce the problem of describing the splitting Gibbs measures of the model to the problem of analyzing solutions of a nonlinear integral equation and study some particular cases for the Ising and Potts models. We also show that periodic Gibbs measures for the given models either are translation invariant or have period two. We present examples where periodic Gibbs measures with period two are not unique.
Swimbladder Allometry of Selected Midwater Fish Species
1976-01-05
Gibbs, R. H., Jr., 1971. "Notes on Fishes of the Genus Eustomias (Stomiatoidei, Melanostomiatidae) in Bermuda Waters, With the Description of... N00140-70-C-0307, Smithsonian Institution. Goodyear, R. H. and R. H. Gibbs, Jr., 1970. "Systematics and Zoogeography of Stomiatoid Fishes of the
NASA Astrophysics Data System (ADS)
Shen, Zhengwei; Cheng, Lishuang
2017-09-01
Total variation (TV)-based image deblurring methods can produce staircase artifacts in the homogeneous regions of the latent images recovered from degraded images, while wavelet/frame-based image deblurring methods lead to spurious noise spikes and pseudo-Gibbs artifacts in the vicinity of discontinuities of the latent images. To suppress these artifacts efficiently, we propose a nonconvex composite wavelet/frame and TV-based image deblurring model. In this model, the wavelet/frame-based and TV-based methods complement each other, as verified by theoretical analysis and experimental results. To further improve the quality of the latent images, a nonconvex penalty function is used for the regularization terms of the model, which induces a stronger sparse solution and more accurately estimates the relatively large gradients or wavelet/frame coefficients of the latent images. In addition, by choosing a suitable parameter for the nonconvex penalty function, each subproblem that is split from the proposed model by the alternating direction method of multipliers algorithm can be guaranteed to be a convex optimization problem; hence, each subproblem converges to a global optimum. The mean doubly augmented Lagrangian and the isotropic split Bregman algorithms are used to solve these convex subproblems, where the designed proximal operator is used to reduce the computational complexity of the algorithms. Extensive numerical experiments indicate that the proposed model and algorithms are comparable to other state-of-the-art models and methods.
Illustrating Enzyme Inhibition Using Gibbs Energy Profiles
ERIC Educational Resources Information Center
Bearne, Stephen L.
2012-01-01
Gibbs energy profiles have great utility as teaching and learning tools because they present students with a visual representation of the energy changes that occur during enzyme catalysis. Unfortunately, most textbooks divorce discussions of traditional kinetic topics, such as enzyme inhibition, from discussions of these same topics in terms of…
SPERT-I Electric Control Building (PER-608). Plan, elevations, and details. Gibbs ...
SPERT-I Electric Control Building (PER-608). Plan, elevations, and details. Gibbs and Hill, Inc. 1087-PER-608-S5. Date: August 1956. INEEL index no. 760-0608-00-312-108328 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
NASA Astrophysics Data System (ADS)
Priede, Imants G.; Billett, David S. M.; Brierley, Andrew S.; Hoelzel, A. Rus; Inall, Mark; Miller, Peter I.; Cousins, Nicola J.; Shields, Mark A.; Fujii, Toyonobu
2013-12-01
The ECOMAR project investigated photosynthetically-supported life on the North Mid-Atlantic Ridge (MAR) between the Azores and Iceland focussing on the Charlie-Gibbs Fracture Zone area in the vicinity of the sub-polar front where the North Atlantic Current crosses the MAR. Repeat visits were made to four stations at 2500 m depth on the flanks of the MAR in the years 2007-2010; a pair of northern stations at 54°N in cold water north of the sub-polar front and southern stations at 49°N in warmer water influenced by eddies from the North Atlantic Current. At each station an instrumented mooring was deployed with current meters and sediment traps (100 and 1000 m above the sea floor) to sample downward flux of particulate matter. The patterns of water flow, fronts, primary production and export flux in the region were studied by a combination of remote sensing and in situ measurements. Sonar, tow nets and profilers sampled pelagic fauna over the MAR. Swath bathymetry surveys across the ridge revealed sediment-covered flat terraces parallel to the axis of the MAR with intervening steep rocky slopes. Otter trawls, megacores, baited traps and a suite of tools carried by the R.O.V. Isis including push cores, grabs and a suction device collected benthic fauna. Video and photo surveys were also conducted using the SHRIMP towed vehicle and the R.O.V. Isis. Additional surveying and sampling by landers and R.O.V. focussed on the summit of a seamount (48°44‧N, 28°10‧W) on the western crest of the MAR between the two southern stations.
Thermodynamic properties and crystallization kinetics at high liquid undercooling
NASA Technical Reports Server (NTRS)
Fecht, Hans J.
1990-01-01
The heat capacities of liquid and crystalline Au-Pb-Sb alloys in the glass-forming composition range were measured with droplet emulsion and bulk samples. Based on the measured C_p data, the entropy, enthalpy, and Gibbs free energy functions of the eutectic, solid mixture, and undercooled liquid were determined as a function of undercooling and compared with theoretical predictions. The results indicate an isentropic temperature at 313 ± 5 K, which agrees well with experimental data for the glass transition. A kinetic analysis of the nucleation undercooling response suggests that the proper choice for the Gibbs free energy change during crystallization is most important in analyzing the nucleation kinetics. By classical nucleation theory, the prefactors obtained, based on a variety of theoretical predictions for the driving force, can differ by six orders of magnitude. If the nucleation rates are extrapolated to high undercooling, the extrapolations based on measured heat capacity data show agreement, whereas the predicted nucleation rates are inconsistent with results from drop tower experiments. The implications for microgravity experiments are discussed.
Hierarchical Bayesian modeling of ionospheric TEC disturbances as non-stationary processes
NASA Astrophysics Data System (ADS)
Seid, Abdu Mohammed; Berhane, Tesfahun; Roininen, Lassi; Nigussie, Melessew
2018-03-01
We model regular and irregular variation of ionospheric total electron content as stationary and non-stationary processes, respectively. We apply the method developed to a SCINDA GPS data set observed at Bahir Dar, Ethiopia (11.6°N, 37.4°E). We use hierarchical Bayesian inversion with Gaussian Markov random process priors, and we model the prior parameters in the hyperprior. We use Matérn priors via stochastic partial differential equations, and scaled Inv-χ² hyperpriors for the hyperparameters. For drawing posterior estimates, we use Markov chain Monte Carlo methods: Gibbs sampling and Metropolis-within-Gibbs for parameter and hyperparameter estimation, respectively. This allows us to quantify model parameter estimation uncertainties as well. We demonstrate the applicability of the proposed method using a synthetic test case. Finally, we apply the method to a real GPS data set, which we decompose into regular and irregular variation components. The result shows that the approach can be used as an accurate ionospheric disturbance characterization technique that quantifies the total electron content variability with corresponding error uncertainties.
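To make the Gibbs-plus-Metropolis structure described above concrete, here is a minimal sketch of a Metropolis-within-Gibbs sampler for a toy conditionally Gaussian model with a variance hyperparameter (the data, priors, and step size are invented for illustration and are not those of the TEC model):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.5, size=200)     # synthetic observations (illustrative)
n = y.size

mu, tau2 = 0.0, 1.0                    # parameter and variance hyperparameter


def log_post_tau2(t2, mu):
    """Log conditional of tau2: Gaussian likelihood x scaled Inv-chi2(2, 1) prior."""
    ss = np.sum((y - mu) ** 2)
    return -0.5 * n * np.log(t2) - 0.5 * ss / t2 - 2.0 * np.log(t2) - 1.0 / t2


samples = []
for it in range(5000):
    # Gibbs step: mu | tau2, y is conjugate Gaussian (flat prior on mu)
    mu = rng.normal(y.mean(), np.sqrt(tau2 / n))
    # Metropolis step: tau2 | mu, y via a log-scale random-walk proposal
    prop = tau2 * np.exp(0.3 * rng.normal())
    log_ratio = (log_post_tau2(prop, mu) - log_post_tau2(tau2, mu)
                 + np.log(prop / tau2))   # Jacobian of the multiplicative proposal
    if np.log(rng.uniform()) < log_ratio:
        tau2 = prop
    samples.append((mu, tau2))
```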
NASA Astrophysics Data System (ADS)
Grazhdan, K. V.; Gamov, G. A.; Dushina, S. V.; Sharnin, V. A.
2012-11-01
Coefficients of the interphase distribution of nicotinic acid are determined in aqueous solution systems of ethanol-hexane and DMSO-hexane at 25.0 ± 0.1°C. They are used to calculate the Gibbs energy of the transfer of nicotinic acid from water into aqueous solutions of ethanol and dimethylsulfoxide. The Gibbs energy values for the transfer of the molecular and zwitterionic forms of nicotinic acid are obtained by means of UV spectroscopy. The diametrically opposite effect of the composition of binary solvents on the transfer of the molecular and zwitterionic forms of nicotinic acid is noted.
An inverse problem for Gibbs fields with hard core potential
NASA Astrophysics Data System (ADS)
Koralov, Leonid
2007-05-01
It is well known that for a regular stable potential of pair interaction and a small value of activity one can define the corresponding Gibbs field (a measure on the space of configurations of points in R^d). In this paper we consider a converse problem. Namely, we show that for a sufficiently small constant ρ̄1 and a sufficiently small function ρ̄2(x), x ∈ R^d, that is equal to zero in a neighborhood of the origin, there exist a hard core pair potential and a value of activity such that ρ̄1 is the density and ρ̄2 is the pair correlation function of the corresponding Gibbs field.
Measuring effective temperatures in a generalized Gibbs ensemble
Foini, Laura; Gambassi, Andrea; Konik, Robert; ...
2017-05-11
The local physical properties of an isolated quantum statistical system in the stationary state reached long after a quench are generically described by the Gibbs ensemble, which involves only its Hamiltonian and the temperature as a parameter. If the system is instead integrable, additional quantities conserved by the dynamics intervene in the description of the stationary state. The resulting generalized Gibbs ensemble involves a number of temperature-like parameters, the determination of which is practically difficult. We argue that in a number of simple models these parameters can be effectively determined by using fluctuation-dissipation relationships between response and correlation functions of natural observables, quantities which are accessible in experiments.
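In schematic form (our notation, with k_B = 1; the appropriate observables are model-specific), the effective temperature attached to an observable O is read off from the fluctuation-dissipation relation

$$\frac{1}{T_{\mathrm{eff}}} \;=\; \frac{R(t,t')}{\partial_{t'} C(t,t')},$$

where R(t,t') is the linear response of ⟨O(t)⟩ to a field conjugate to O applied at time t', and C(t,t') = ⟨O(t)O(t')⟩ is the corresponding correlation function. In a generalized Gibbs ensemble, suitably chosen observables can then report the different temperature-like parameters.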
First-Year University Chemistry Textbooks' Misrepresentation of Gibbs Energy
ERIC Educational Resources Information Center
Quilez, Juan
2012-01-01
This study analyzes the misrepresentation of Gibbs energy by college chemistry textbooks. The article reports the way first-year university chemistry textbooks handle the concepts of spontaneity and equilibrium. Problems with terminology are found; confusion arises in the meaning given to ΔG, ΔrG, ΔG°, and…
Exploring Fourier Series and Gibbs Phenomenon Using Mathematica
ERIC Educational Resources Information Center
Ghosh, Jonaki B.
2011-01-01
This article describes a laboratory module on Fourier series and Gibbs phenomenon which was undertaken by 32 Year 12 students. It shows how the use of CAS played the role of an "amplifier" by making higher level mathematical concepts accessible to students of year 12. Using Mathematica students were able to visualise Fourier series of…
The entropy and Gibbs free energy of formation of the aluminum ion
Hemingway, B.S.; Robie, R.A.
1977-01-01
A reevaluation of the entropy and Gibbs free energy of formation of Al3+(aq) yields −308 ± 15 J/(K·mol) and −489.4 ± 1.4 kJ/mol for S°298 and ΔG°f,298, respectively. The standard electrode potential for aluminum is 1.691 ± 0.005 volts. © 1977.
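As a consistency check (our arithmetic, using the Faraday constant F = 96485 C/mol and n = 3 electrons), the quoted potential follows directly from the quoted Gibbs energy of formation:

$$E^{\circ} = \frac{|\Delta G^{\circ}_{f,298}|}{nF} = \frac{489.4 \times 10^{3}\ \mathrm{J\,mol^{-1}}}{3 \times 96485\ \mathrm{C\,mol^{-1}}} \approx 1.691\ \mathrm{V}.$$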
Surfactant Adsorption: A Revised Physical Chemistry Lab
ERIC Educational Resources Information Center
Bresler, Marc R.; Hagen, John P.
2008-01-01
Many physical chemistry lab courses include an experiment in which students measure surface tension as a function of surfactant concentration. In the traditional experiment, the data are fit to the Gibbs isotherm to determine the molar area for the surfactant, and the critical micelle concentration is used to calculate the Gibbs energy of micelle…
Gibbs energies of transferring triglycine from water into H2O-DMSO solvent
NASA Astrophysics Data System (ADS)
Usacheva, T. R.; Kuz'mina, K. I.; Lan, Pham Thi; Kuz'mina, I. A.; Sharnin, V. A.
2014-08-01
The Gibbs energies of transferring triglycine (3Gly, glycyl-glycyl-glycine) from water into mixtures of water with dimethyl sulfoxide (χDMSO = 0.05, 0.10, and 0.15 mole fractions) at 298.15 K are determined from the interphase distribution. An increased dimethyl sulfoxide (DMSO) concentration in the solvent slightly raises the positive values of ΔtrG°(3Gly), possibly indicating that the 3Gly-H2O solvated complexes are more stable than the 3Gly-DMSO ones. It is shown that the change in the Gibbs energy of transfer of 3Gly is determined by the enthalpy component. The relative contributions of 3Gly and 18-crown-6 ether (18C6) solvation to the change in the Gibbs energy of formation of the [3Gly18C6] molecular complex in H2O-DMSO solvents are analyzed, revealing the key role of the 3Gly solvation contribution in the change in stability of [3Gly18C6] on moving from H2O to mixtures with DMSO.
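For orientation, transfer Gibbs energies of this kind are obtained from interphase distribution (partition) coefficients P of the solute measured against a common reference phase (hexane in the nicotinic acid study above); in sketch form, and up to the sign convention fixed by the direction in which P is defined,

$$\Delta_{tr}G^{\circ}(\mathrm{w} \to \mathrm{w{+}DMSO}) = RT \ln\frac{P_{\mathrm{w}}}{P_{\mathrm{w+DMSO}}}.$$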
On thermodynamical inconsistency of isotherm equations: Gibbs's thermodynamics.
Tóth, József
2003-06-01
It has been proven that all isotherm equations which include the expression 1 − Θ contradict exact Gibbs thermodynamics. These contradictions are discussed in detail in the case of the Langmuir (L) equation applied to gas/solid (G/S), solid/liquid (S/L), and gas/liquid (G/L) interfaces. In G/S adsorption the L equation can theoretically be applied only at low equilibrium pressures, on condition that v_g > v_s, where v_g is the molar volume of the adsorbed amount in the gas phase and v_s is the same in the Gibbs phase. In S/L and G/L adsorption the L equation is practically applicable only in the domain of very low concentrations. The cause of these contradictions (inconsistencies) is that Gibbs thermodynamics takes excess adsorbed amounts into account, whereas the L and other isotherm equations work with absolute adsorbed amounts. The two amounts may be practically equal to each other when the limiting conditions mentioned above are fulfilled. It is also discussed how these inconsistent isotherm equations can be transformed into consistent ones.
Ebenhöh, Oliver; Spelberg, Stephanie
2018-02-19
The photosynthetic carbon reduction cycle, or Calvin-Benson-Bassham (CBB) cycle, is now contained in every standard biochemistry textbook. Although the cycle was already proposed in 1954, it is still the subject of intense research, and even the structure of the cycle, i.e. the exact series of reactions, is still under debate. The controversy about the cycle's structure was fuelled by the findings of Gibbs and Kandler in 1956 and 1957, when they observed that radioactive ¹⁴CO₂ was dynamically incorporated in hexoses in a very atypical and asymmetrical way, a phenomenon later termed the 'photosynthetic Gibbs effect'. Now, it is widely accepted that the photosynthetic Gibbs effect is not in contradiction to the reaction scheme proposed by CBB, but the arguments given have been largely qualitative and hand-waving. To fully appreciate the controversy and to understand the difficulties in interpreting the Gibbs effect, it is illustrative to illuminate the history of the discovery of the CBB cycle. We here give an account of central scientific advances and discoveries, which were essential prerequisites for the elucidation of the cycle. Placing the historic discoveries in the context of the modern textbook pathway scheme illustrates the complexity of the cycle and demonstrates why especially dynamic labelling experiments are far from easy to interpret. We conclude by arguing that it requires sound theoretical approaches to resolve conflicting interpretations and to provide consistent quantitative explanations. © 2018 The Author(s).
Representation of complex probabilities and complex Gibbs sampling
NASA Astrophysics Data System (ADS)
Salcedo, Lorenzo Luis
2018-03-01
Complex weights appear in physics problems that lie beyond a straightforward importance-sampling treatment, as required in Monte Carlo calculations. This is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold reproducing the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat bath method, based on such representations, is analyzed.
The chemical (not mechanical) paradigm of thermodynamics of colloid and interface science.
Kaptay, George
2018-06-01
In the most influential monograph on colloid and interfacial science, by Adamson, three fundamental equations of the "physical chemistry of surfaces" are identified: the Laplace equation, the Kelvin equation and the Gibbs adsorption equation, with the mechanical definition of surface tension by Young as a starting point. Three of them (Young, Laplace and Kelvin) are called here the "mechanical paradigm". In contrast, it is shown here that there is only one fundamental equation of the thermodynamics of colloid and interface science, and all the above (and other) equations of this field follow as its derivatives. This equation is due to the chemical thermodynamics of Gibbs, called here the "chemical paradigm", leading to the definition of surface tension and to five rows of equations (see Graphical abstract). The first row is the general equation for interfacial forces, leading to the Young equation, to the Bakker equation and to the Laplace equation, etc. Although an in-principle incorrect extension of the Laplace equation formally leads to the Kelvin equation, the chemical paradigm makes clear that the Kelvin equation is generally incorrect, although it provides right results in special cases. The second row of equations provides equilibrium shapes and positions of phases, including sessile drops of Young, crystals of Wulff, liquids in capillaries, etc. The third row of equations leads to the size-dependent equations of molar Gibbs energies of nano-phases and chemical potentials of their components; from here the corrected versions of the Kelvin equation and its derivatives (the Gibbs-Thomson equation and the Freundlich-Ostwald equation) are derived, including equations for more complex problems. The fourth row of equations is the nucleation theory of Gibbs, also contradicting the Kelvin equation. The fifth row of equations is the adsorption equation of Gibbs, together with the definition of the partial surface tension, leading to the Butler equation and to its derivatives, including the Langmuir equation and the Szyszkowski equation. Positioning the single fundamental equation of Gibbs at the thermodynamic origin of colloid and interface science leads to a coherent set of correct equations for this field, and provides the chemical (not mechanical) foundation of the chemical (not mechanical) discipline of colloid and interface science. Copyright © 2018 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Hoijtink, Herbert; Molenaar, Ivo W.
1997-01-01
This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)
A Generalized Deduction of the Ideal-Solution Model
ERIC Educational Resources Information Center
Leo, Teresa J.; Perez-del-Notario, Pedro; Raso, Miguel A.
2006-01-01
A new general procedure for deriving the Gibbs energy of mixing is developed through general thermodynamic considerations, and the ideal-solution model is obtained as a special case of the general one. The deduction of the Gibbs energy of mixing for the ideal-solution model is a rational one and viewed as suitable for advanced students who…
Experimental Pragmatics and What Is Said: A Response to Gibbs and Moise.
ERIC Educational Resources Information Center
Nicolle, Steve; Clark, Billy
1999-01-01
Attempted replication of Gibbs and Moise (1997) experiments regarding the recognition of a distinction between what is said and what is implicated. Results showed that, under certain conditions, subjects selected implicatures when asked to select the paraphrase best reflecting what a speaker has said. Suggests that results can be explained with the…
Lindeberg theorem for Gibbs-Markov dynamics
NASA Astrophysics Data System (ADS)
Denker, Manfred; Senti, Samuel; Zhang, Xuan
2017-12-01
A dynamical array consists of a family of functions {f_{n,i} : 1 ≤ i ≤ k_n, n ≥ 1} and a family of initial times {τ_{n,i} : 1 ≤ i ≤ k_n, n ≥ 1}. For a dynamical system (X, T) we identify distributional limits for normalized sums over such arrays, for suitable (non-random) constants s_n > 0 and a_{n,i} ∈ R. We derive a Lindeberg-type central limit theorem for dynamical arrays. Applications include new central limit theorems for functions which are not locally Lipschitz continuous and central limit theorems for statistical functions of time series obtained from Gibbs-Markov systems. Our results, which hold for more general dynamics, are stated in the context of Gibbs-Markov dynamical systems for convenience.
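For orientation, the classical Lindeberg condition that such theorems adapt reads, for centered array entries (stated here in its independent-array form; the dynamical version replaces independence by mixing properties of the Gibbs-Markov system):

$$\frac{1}{s_n^2} \sum_{i=1}^{k_n} \mathbb{E}\!\left[ f_{n,i}^2 \, \mathbf{1}_{\{|f_{n,i}| > \varepsilon s_n\}} \right] \longrightarrow 0 \quad \text{as } n \to \infty, \text{ for every } \varepsilon > 0.$$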
Piro, M. H. A.; Simunovic, S.
2016-03-17
Several global optimization methods are reviewed that attempt to ensure that the integral Gibbs energy of a closed isothermal isobaric system is a global minimum, to satisfy the necessary and sufficient conditions for thermodynamic equilibrium. In particular, the integral Gibbs energy function of a multicomponent system containing non-ideal phases may be highly non-linear and non-convex, which makes finding a global minimum a challenge. Consequently, a poor numerical approach may lead one to the false belief of equilibrium. Furthermore, confirming that one reaches a global minimum, and that this is achieved with satisfactory computational performance, becomes increasingly more challenging in systems containing many chemical elements and a correspondingly large number of species and phases. Several numerical methods that have been used for this specific purpose are reviewed, with a benchmark study of three of the more promising methods using five case studies of varying complexity. A modification of the conventional Branch and Bound method is presented that is well suited to a wide array of thermodynamic applications, including complex phases with many constituents and sublattices, and ionic phases that must adhere to charge neutrality constraints. Also, a novel method is presented that efficiently solves the system of linear equations by exploiting the unique structure of the Hessian matrix, which reduces the calculation from an O(N³) operation to an O(N) operation. As a result, this combined approach demonstrates efficiency, reliability and capabilities that are favorable for integration of thermodynamic computations into multi-physics codes with inherent performance considerations.
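To illustrate what "minimizing the integral Gibbs energy" means in the simplest setting, here is a sketch for an ideal-gas mixture with element-balance constraints (species, standard Gibbs energies, and temperature are all invented, and a local SLSQP solver is used; the paper's point is precisely that non-ideal, multi-phase systems need global strategies instead):

```python
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 1000.0                       # J/(mol K), K
# Toy system: species CO, O2, CO2 built from elements C and O (illustrative g0)
g0 = np.array([-250e3, -220e3, -550e3])    # standard Gibbs energies, J/mol
A = np.array([[1, 0, 1],                   # moles of C in each species
              [1, 2, 2]])                  # moles of O in each species
b = A @ np.array([1.0, 0.5, 0.0])          # element totals: 1 mol CO + 0.5 mol O2


def gibbs(n):
    """Integral Gibbs energy of an ideal-gas mixture at 1 bar."""
    n = np.clip(n, 1e-12, None)            # keep the logarithms finite
    return np.sum(n * (g0 + R * T * np.log(n / n.sum())))


res = minimize(gibbs, x0=np.array([0.4, 0.3, 0.3]), method='SLSQP',
               bounds=[(1e-12, None)] * 3,
               constraints=[{'type': 'eq', 'fun': lambda n: A @ n - b}])
print(res.x, gibbs(res.x))                 # equilibrium amounts and G_min
```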
Soundararajan, Venky; Aravamudan, Murali
2014-01-01
The efficacy and mechanisms of therapeutic action are largely described by atomic bonds and interactions local to drug binding sites. Here we introduce global connectivity analysis as a high-throughput computational assay of therapeutic action, inspired by the Google PageRank algorithm that unearths the most "globally connected" websites from the information-dense world wide web (WWW). We execute short timescale (30 ps) molecular dynamics simulations with high sampling frequency (0.01 ps) to identify amino acid residue hubs whose global connectivity dynamics are characteristic of the ligand or mutation associated with the target protein. We find that unexpected allosteric hubs, up to 20 Å from the ATP binding site but within 5 Å of the phosphorylation site, encode the Gibbs free energy of inhibition (ΔGinhibition) for select protein kinase-targeted cancer therapeutics. We further find that clinically relevant somatic cancer mutations implicated in both drug resistance and personalized drug sensitivity can be predicted in a high-throughput fashion. Our results establish global connectivity analysis as a potent assay of protein functional modulation. This sets the stage for unearthing disease-causal exome mutations and motivates forecast of clinical drug response on a patient-by-patient basis. We suggest incorporation of structure-guided genetic inference assays into pharmaceutical and healthcare oncology workflows. PMID:25465236
Q-space truncation and sampling in diffusion spectrum imaging.
Tian, Qiyuan; Rokem, Ariel; Folkerth, Rebecca D; Nummenmaa, Aapo; Fan, Qiuyun; Edlow, Brian L; McNab, Jennifer A
2016-12-01
To characterize the effects of q-space truncation and sampling on the spin-displacement probability density function (PDF) in diffusion spectrum imaging (DSI). DSI data were acquired using the MGH-USC connectome scanner (Gmax = 300 mT/m) with bmax = 30,000 s/mm² and 17 × 17 × 17, 15 × 15 × 15 and 11 × 11 × 11 grids in ex vivo human brains, and with bmax = 10,000 s/mm² and an 11 × 11 × 11 grid in vivo. An additional in vivo scan using bmax = 7,000 s/mm² and an 11 × 11 × 11 grid was performed with a derated gradient strength of 40 mT/m. PDFs and orientation distribution functions (ODFs) were reconstructed with different q-space filtering and PDF integration lengths, and from down-sampled data by factors of two and three. Both ex vivo and in vivo data showed Gibbs ringing in PDFs, which becomes the main source of artifact in the subsequently reconstructed ODFs. For down-sampled data, PDFs interfere with the first replicas or their ringing, leading to obscured orientations in ODFs. The minimum required q-space sampling density corresponds to a field of view approximately equal to twice the mean displacement distance (MDD) of the tissue. The 11 × 11 × 11 grid is suitable for both ex vivo and in vivo DSI experiments. To minimize the effects of Gibbs ringing, ODFs should be reconstructed from unfiltered q-space data with the integration length over the PDF constrained to around the MDD. Magn Reson Med 76:1750-1763, 2016. © 2016 International Society for Magnetic Resonance in Medicine.
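A quick numerical reading of the stated sampling criterion (the PDF field of view is the reciprocal of the q-space step, so the grid must give a field of view of roughly twice the MDD; the MDD value below is invented for illustration):

```python
mdd = 12e-6                    # assumed mean displacement distance of tissue, m
fov = 2 * mdd                  # required PDF field of view: about 2 x MDD
dq_max = 1 / fov               # largest allowed q-space step (q in 1/m)
n = 11                         # points per axis, e.g. an 11 x 11 x 11 grid
q_max = (n - 1) / 2 * dq_max   # q-space radius reachable on that grid
print(f"dq <= {dq_max:.3e} 1/m, q_max = {q_max:.3e} 1/m")
```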
The Full Monte Carlo: A Live Performance with Stars
NASA Astrophysics Data System (ADS)
Meng, Xiao-Li
2014-06-01
Markov chain Monte Carlo (MCMC) is being applied increasingly often in modern Astrostatistics. It is indeed incredibly powerful, but also very dangerous. It is popular because of its apparent generality (from simple to highly complex problems) and simplicity (the availability of out-of-the-box recipes). It is dangerous because it always produces something, but there is no surefire way to verify or even diagnose that the "something" is remotely close to what the MCMC theory predicts or one hopes. Using very simple models (e.g., conditionally Gaussian), this talk starts with a tutorial of the two most popular MCMC algorithms, namely the Gibbs sampler and the Metropolis-Hastings algorithm, and illustrates their good, bad, and ugly implementations via live demonstration. The talk ends with a story of how a recent advance, the Ancillary-Sufficient Interweaving Strategy (ASIS) (Yu and Meng, 2011, http://www.stat.harvard.edu/Faculty_Content/meng/jcgs.2011-article.pdf), reduces the danger. It was discovered almost by accident during a Ph.D. student's (Yaming Yu) struggle with fitting a Cox process model for detecting changes in source intensity of photon counts observed by the Chandra X-ray telescope from a (candidate) neutron/quark star.
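A minimal version of the kind of conditionally Gaussian tutorial example mentioned above: a Gibbs sampler for a standard bivariate Gaussian with correlation rho, whose slow mixing at high rho is exactly the danger the talk dramatizes (the numbers are illustrative, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.95                          # strong correlation: slow Gibbs mixing
x = y = 0.0
chain = np.empty((10000, 2))
for t in range(len(chain)):
    # Full conditionals of a standard bivariate Gaussian with correlation rho
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    chain[t] = x, y

# Lag-1 autocorrelation of the x-chain (theoretically rho**2 here)
ac = np.corrcoef(chain[:-1, 0], chain[1:, 0])[0, 1]
print(f"lag-1 autocorrelation of x: {ac:.3f}")
```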
Joint seismic data denoising and interpolation with double-sparsity dictionary learning
NASA Astrophysics Data System (ADS)
Zhu, Lingchen; Liu, Entao; McClellan, James H.
2017-08-01
Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.
The Gibbs Energy Basis and Construction of Boiling Point Diagrams in Binary Systems
ERIC Educational Resources Information Center
Smith, Norman O.
2004-01-01
An illustration of how excess Gibbs energies of the components in binary systems can be used to construct boiling point diagrams is given. The underlying causes of the various types of behavior of the systems in terms of intermolecular forces and the method of calculating the coexisting liquid and vapor compositions in boiling point diagrams with…
Just Another Gibbs Sampler (JAGS): Flexible Software for MCMC Implementation
ERIC Educational Resources Information Center
Depaoli, Sarah; Clifton, James P.; Cobb, Patrice R.
2016-01-01
A review of the software Just Another Gibbs Sampler (JAGS) is provided. We cover aspects related to history and development and the elements a user needs to know to get started with the program, including (a) definition of the data, (b) definition of the model, (c) compilation of the model, and (d) initialization of the model. An example using a…
Gibbs Ensemble Simulations of the Solvent Swelling of Polymer Films
NASA Astrophysics Data System (ADS)
Gartner, Thomas; Epps, Thomas, III; Jayaraman, Arthi
Solvent vapor annealing (SVA) is a useful technique to tune the morphology of block polymer, polymer blend, and polymer nanocomposite films. Despite SVA's utility, standardized SVA protocols have not been established, partly due to a lack of fundamental knowledge regarding the interplay between the polymer(s), solvent, substrate, and free surface during solvent annealing and evaporation. An understanding of how to tune polymer film properties in a controllable manner through SVA processes is needed. Herein, the thermodynamic implications of the presence of solvent in the swollen polymer film are explored through two alternative Gibbs ensemble simulation methods that we have developed and extended: Gibbs ensemble molecular dynamics (GEMD) and hybrid Monte Carlo (MC)/molecular dynamics (MD). In this poster, we will describe these simulation methods and demonstrate their application to polystyrene films swollen by toluene and n-hexane. Polymer film swelling experiments, Gibbs ensemble molecular simulations, and polymer reference interaction site model (PRISM) theory are combined to calculate an effective Flory-Huggins χ (χeff) for polymer-solvent mixtures. The effects of solvent chemistry, solvent content, polymer molecular weight, and polymer architecture on χeff are examined, providing a platform to control and understand the thermodynamics of polymer film swelling.
NASA Astrophysics Data System (ADS)
Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.
2015-11-01
Earlier, a two-component pseudopotential plasma model, which we call the "shelf Coulomb" model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte Carlo technique to this model. First simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique allows us to estimate the position of the melting curve and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10⁻⁴.
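For reference, the textbook Gibbs-ensemble particle-transfer acceptance rule (stated generically for moving a particle from box 1 to box 2 at inverse temperature β; this is the standard form, not necessarily the exact expression used in the shelf Coulomb implementation):

$$P_{\mathrm{acc}}(1 \to 2) = \min\left\{ 1,\; \frac{N_1 V_2}{(N_2 + 1)\, V_1}\, e^{-\beta \Delta U} \right\},$$

where N_k and V_k are the particle number and volume of box k, and ΔU is the total potential-energy change caused by the transfer.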
Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models
NASA Astrophysics Data System (ADS)
Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro
2017-10-01
Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.
Using reweighting and free energy surface interpolation to predict solid-solid phase diagrams
NASA Astrophysics Data System (ADS)
Schieber, Natalie P.; Dybeck, Eric C.; Shirts, Michael R.
2018-04-01
Many physical properties of small organic molecules are dependent on the current crystal packing, or polymorph, of the material, including bioavailability of pharmaceuticals, optical properties of dyes, and charge transport properties of semiconductors. Predicting the most stable crystalline form at a given temperature and pressure requires determining the crystalline form with the lowest relative Gibbs free energy. Effective computational prediction of the most stable polymorph could save significant time and effort in the design of novel molecular crystalline solids or predict their behavior under new conditions. In this study, we introduce a new approach using multistate reweighting to address the problem of determining solid-solid phase diagrams and apply this approach to the phase diagram of solid benzene. For this approach, we perform sampling at a selection of temperature and pressure states in the region of interest. We use multistate reweighting methods to determine the reduced free energy differences between T and P states within a given polymorph and validate this phase diagram using several measures. The relative stability of the polymorphs at the sampled states can be successively interpolated from these points to create the phase diagram by combining these reduced free energy differences with a reference Gibbs free energy difference between polymorphs. The method also allows for straightforward estimation of uncertainties in the phase boundary. We also find that when properly implemented, multistate reweighting for phase diagram determination scales better with the size of the system than previously estimated.
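A toy illustration of the reweighting idea that underlies this approach: a one-sided (Zwanzig) estimate of the reduced free energy difference between two neighboring (T, P) states from samples drawn at one of them (the multistate estimator used in the paper pools all sampled states; the sample arrays here are synthetic stand-ins, not simulation output):

```python
import numpy as np

rng = np.random.default_rng(2)
kB = 1.0                                    # reduced units


def reduced_u(U, V, T, P):
    """Reduced potential u = (U + P V) / (kB T) of an isothermal-isobaric state."""
    return (U + P * V) / (kB * T)


# Stand-in (U, V) samples "drawn" at state i = (Ti, Pi)
U_i = rng.normal(-100.0, 5.0, size=5000)
V_i = rng.normal(50.0, 2.0, size=5000)
Ti, Pi = 1.00, 1.0
Tj, Pj = 1.05, 1.0                          # neighboring target state j

du = reduced_u(U_i, V_i, Tj, Pj) - reduced_u(U_i, V_i, Ti, Pi)
df = -np.log(np.mean(np.exp(-du)))          # estimate of f_j - f_i
print(f"reduced free energy difference f_j - f_i = {df:.3f}")
```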
Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach
NASA Astrophysics Data System (ADS)
Alba, Vincenzo
By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integral of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.
NASA Astrophysics Data System (ADS)
Evard, Margarita E.; Volkov, Aleksandr E.; Belyaev, Fedor S.; Ignatova, Anna D.
2018-05-01
The choice of the Gibbs potential for microstructural modeling of the FCC ↔ HCP martensitic transformation in FeMn-based shape memory alloys is discussed. The threefold symmetry of the HCP phase is taken into account when specifying the internal variables characterizing the volume fractions of martensite variants. Constraints imposed on the model constants by thermodynamic equilibrium conditions are formulated.
Discovery of a young asteroid cluster associated with P/2012 F5 (Gibbs)
NASA Astrophysics Data System (ADS)
Novaković, Bojan; Hsieh, Henry H.; Cellino, Alberto; Micheli, Marco; Pedani, Marco
2014-03-01
We present the results of our search for a dynamical family around the active Asteroid P/2012 F5 (Gibbs). By applying the hierarchical clustering method, we discover an extremely compact 9-body cluster associated with P/2012 F5. The statistical significance of this newly discovered Gibbs cluster is estimated to be >99.9%, strongly suggesting that its members share a common origin. The cluster is located in a dynamically cold region of the outer main-belt at a proper semi-major axis of ∼3.005 AU, and all members are found to be dynamically stable over very long timescales. Backward numerical orbital integrations show that the age of the cluster is only 1.5 ± 0.1 Myr. Taxonomic classifications are unavailable for most of the cluster members, but SDSS spectrophotometry available for two cluster members indicate that both appear to be Q-type objects. We also estimate a lower limit of the size of the parent body to be about 10 km, and find that the impact event which produced the Gibbs cluster is intermediate between a cratering and a catastrophic collision. In addition, we search for new main-belt comets in the region of the Gibbs cluster by observing seven asteroids either belonging to the cluster, or being very close in the space of orbital proper elements. However, we do not detect any convincing evidence of the presence of a tail or coma in any our targets. Finally, we obtain optical images of P/2012 F5, and find absolute R-band and V-band magnitudes of HR = 17.0 ± 0.1 mag and HV = 17.4 ± 0.1 mag, respectively, corresponding to an upper limit on the diameter of the P/2012 F5 nucleus of ∼2 km.
Investigating homology between proteins using energetic profiles.
Wrabl, James O; Hilser, Vincent J
2010-03-26
Accumulated experimental observations demonstrate that protein stability is often preserved upon conservative point mutation. In contrast, less is known about the effects of large sequence or structure changes on the stability of a particular fold. Almost completely unknown is the degree to which stability of different regions of a protein is generally preserved throughout evolution. In this work, these questions are addressed through thermodynamic analysis of a large representative sample of protein fold space based on remote, yet accepted, homology. More than 3,000 proteins were computationally analyzed using the structural-thermodynamic algorithm COREX/BEST. Estimated position-specific stability (i.e., local Gibbs free energy of folding) and its component enthalpy and entropy were quantitatively compared between all proteins in the sample according to all-vs.-all pairwise structural alignment. It was discovered that the local stabilities of homologous pairs were significantly more correlated than those of non-homologous pairs, indicating that local stability was indeed generally conserved throughout evolution. However, the position-specific enthalpy and entropy underlying stability were less correlated, suggesting that the overall regional stability of a protein was more important than the thermodynamic mechanism utilized to achieve that stability. Finally, two different types of statistically exceptional evolutionary structure-thermodynamic relationships were noted. First, many homologous proteins contained regions of similar thermodynamics despite localized structure change, suggesting a thermodynamic mechanism enabling evolutionary fold change. Second, some homologous proteins with extremely similar structures nonetheless exhibited different local stabilities, a phenomenon previously observed experimentally in this laboratory. These two observations, in conjunction with the principal conclusion that homologous proteins generally conserved local stability, may provide guidance for a future thermodynamically informed classification of protein homology.
NASA Astrophysics Data System (ADS)
Vanden-Eijnden, Eric; Venturoli, Maddalena
2009-05-01
An improved and simplified version of the finite temperature string (FTS) method [W. E, W. Ren, and E. Vanden-Eijnden, J. Phys. Chem. B 109, 6688 (2005)] is proposed. Like the original approach, the new method is a scheme to calculate the principal curves associated with the Boltzmann-Gibbs probability distribution of the system, i.e., the curves which are such that their intersection with the hyperplanes perpendicular to themselves coincides with the expected position of the system in these planes (where perpendicular is understood with respect to the appropriate metric). Unlike more standard paths such as the minimum energy path or the minimum free energy path, the location of the principal curve depends on global features of the energy or the free energy landscapes and thereby may remain appropriate in situations where the landscape is rough on the thermal energy scale and/or entropic effects related to the width of the reaction channels matter. Instead of using constrained sampling in hyperplanes as in the original FTS, the new method calculates the principal curve via sampling in the Voronoi tessellation whose generating points are the discretization points along this curve. As shown here, this modification results in greater algorithmic simplicity. As a by-product, it also gives the free energy associated with the Voronoi tessellation. The new method can be applied both in the original Cartesian space of the system or in a set of collective variables. We illustrate FTS on test-case examples and apply it to the study of conformational transitions of the nitrogen regulatory protein C receiver domain using an elastic network model and to the isomerization of solvated alanine dipeptide.
Chen, Bo; Chen, Minhua; Paisley, John; Zaas, Aimee; Woods, Christopher; Ginsburg, Geoffrey S; Hero, Alfred; Lucas, Joseph; Dunson, David; Carin, Lawrence
2010-11-09
Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), Rhino virus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of performing nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data.
NASA Technical Reports Server (NTRS)
Isham, M. A.
1992-01-01
Silicon carbide and silicon nitride are considered for application as structural materials and coatings in advanced propulsion systems, including nuclear thermal propulsion. Three-dimensional Gibbs free energy surfaces were constructed for reactions involving these materials in H2 and H2/H2O; the free energy plots are functions of temperature and pressure. Calculations used the definition of Gibbs free energy, where the spontaneity of reactions is calculated as a function of temperature and pressure. Silicon carbide decomposes to Si and CH4 in pure H2 and forms a SiO2 scale in a wet atmosphere. Silicon nitride remains stable under all conditions. There was no apparent difference in reaction thermodynamics between ideal and Van der Waals treatment of gaseous species.
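The kind of spontaneity map described above reduces to evaluating the sign of ΔG(T, P); a schematic of such a calculation for a generic gas-producing reaction (the enthalpy, entropy, and stoichiometry below are invented placeholders, not data for SiC or Si3N4):

```python
import numpy as np

R = 8.314              # J/(mol K)
dH, dS = 120e3, 40.0   # assumed reaction enthalpy (J/mol) and entropy (J/(mol K))
dn_gas = 1.0           # assumed net moles of gas produced by the reaction

for P in (0.1, 1.0, 10.0):                 # bar, relative to 1 bar standard state
    for T in (500.0, 1500.0, 2500.0):      # K
        dG = dH - T * dS + dn_gas * R * T * np.log(P)   # ideal-gas correction
        print(f"P = {P:5.1f} bar, T = {T:6.0f} K: "
              f"dG = {dG/1e3:8.1f} kJ/mol, spontaneous: {dG < 0}")
```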
Plastino, A; Rocca, M C
2017-06-01
Appealing to the 1902 Gibbs formalism for classical statistical mechanics (SM), the first axiomatic SM theory ever to successfully explain equilibrium thermodynamics, we show that already at the classical level there is a strong correlation between Rényi's exponent α and the number of particles for very simple systems. No reference to heat baths is needed for such a purpose.
GibbsCluster: unsupervised clustering and alignment of peptide sequences.
Andreatta, Massimo; Alvarez, Bruno; Nielsen, Morten
2017-07-03
Receptor interactions with short linear peptide fragments (ligands) are at the base of many biological signaling processes. Conserved and information-rich amino acid patterns, commonly called sequence motifs, shape and regulate these interactions. Because of the properties of a receptor-ligand system or of the assay used to interrogate it, experimental data often contain multiple sequence motifs. GibbsCluster is a powerful tool for unsupervised motif discovery because it can simultaneously cluster and align peptide data. The GibbsCluster 2.0 presented here is an improved version incorporating insertion and deletions accounting for variations in motif length in the peptide input. In basic terms, the program takes as input a set of peptide sequences and clusters them into meaningful groups. It returns the optimal number of clusters it identified, together with the sequence alignment and sequence motif characterizing each cluster. Several parameters are available to customize cluster analysis, including adjustable penalties for small clusters and overlapping groups and a trash cluster to remove outliers. As an example application, we used the server to deconvolute multiple specificities in large-scale peptidome data generated by mass spectrometry. The server is available at http://www.cbs.dtu.dk/services/GibbsCluster-2.0. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Consistent Estimation of Gibbs Energy Using Component Contributions
Milo, Ron; Fleming, Ronan M. T.
2013-01-01
Standard Gibbs energies of reactions are increasingly being used in metabolic modeling for applying thermodynamic constraints on reaction rates, metabolite concentrations and kinetic parameters. The increasing scope and diversity of metabolic models has led scientists to look for genome-scale solutions that can estimate the standard Gibbs energy of all the reactions in metabolism. Group contribution methods greatly increase coverage, albeit at the price of decreased precision. We present here a way to combine the estimations of group contribution with the more accurate reactant contributions by decomposing each reaction into two parts and applying one of the methods on each of them. This method gives priority to the reactant contributions over group contributions while guaranteeing that all estimations will be consistent, i.e. will not violate the first law of thermodynamics. We show that there is a significant increase in the accuracy of our estimations compared to standard group contribution. Specifically, our cross-validation results show an 80% reduction in the median absolute residual for reactions that can be derived by reactant contributions only. We provide the full framework and source code for deriving estimates of standard reaction Gibbs energy, as well as confidence intervals, and believe this will facilitate the wide use of thermodynamic data for a better understanding of metabolism. PMID:23874165
Density-functional theory computer simulations of CZTS0.25Se0.75 alloy phase diagrams
NASA Astrophysics Data System (ADS)
Chagarov, E.; Sardashti, K.; Haight, R.; Mitzi, D. B.; Kummel, A. C.
2016-08-01
Density-functional theory simulations of CZTS, CZTSe, and CZTS0.25Se0.75 photovoltaic compounds have been performed to investigate the stability of the CZTS0.25Se0.75 alloy vs. decomposition into CZTS, CZTSe, and other secondary compounds. The Gibbs energy for vibrational contributions was estimated by calculating phonon spectra and thermodynamic properties at finite temperatures. It was demonstrated that the CZTS0.25Se0.75 alloy is stabilized not by enthalpy of formation but primarily by the mixing contributions to the Gibbs energy. The Gibbs energy gains/losses for several decomposition reactions were calculated as a function of temperature with/without intermixing and vibration contributions to the Gibbs energy. A set of phase diagrams was built in the multidimensional space of chemical potentials at 300 K and 900 K to demonstrate alloy stability and boundary compounds at various chemical conditions. The diagrams demonstrate that for CZTS0.25Se0.75 the chemical potentials required for stability differ between the typical processing temperature (~900 K) and the operating temperature (300 K). This implies that as cooling progresses, the flux/concentration of S should be increased in MBE growth to maintain the CZTS0.25Se0.75 in a thermodynamically stable state and minimize phase decomposition.
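The entropic stabilization mechanism invoked here is easy to quantify for the ideal part of the mixing term on the anion sublattice (x is the S site fraction; treating the alloy as an ideal anion-site solution is our simplifying assumption, not the paper's full model):

```python
import numpy as np

R, x = 8.314, 0.25                 # J/(mol K); S fraction in CZTS(0.25)Se(0.75)
# Ideal configurational mixing entropy per mole of anion sites
S_mix = -R * (x * np.log(x) + (1 - x) * np.log(1 - x))
for T in (300.0, 900.0):
    print(f"T = {T:4.0f} K: -T*S_mix = {-T * S_mix / 1e3:6.2f} kJ/mol")
# The stabilizing -T*S_mix term is three times larger at the ~900 K
# processing temperature than at the 300 K operating temperature.
```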
A Gibbs sampler for Bayesian analysis of site-occupancy data
Dorazio, Robert M.; Rodriguez, Daniel Taylor
2012-01-01
1. A Bayesian analysis of site-occupancy data containing covariates of species occurrence and species detection probabilities is usually completed using Markov chain Monte Carlo methods in conjunction with software programs that can implement those methods for any statistical model, not just site-occupancy models. Although these software programs are quite flexible, considerable experience is often required to specify a model and to initialize the Markov chain so that summaries of the posterior distribution can be estimated efficiently and accurately. 2. As an alternative to these programs, we develop a Gibbs sampler for Bayesian analysis of site-occupancy data that include covariates of species occurrence and species detection probabilities. This Gibbs sampler is based on a class of site-occupancy models in which probabilities of species occurrence and detection are specified as probit-regression functions of site- and survey-specific covariate measurements. 3. To illustrate the Gibbs sampler, we analyse site-occupancy data of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly species in Switzerland. Our analysis includes a comparison of results based on Bayesian and classical (non-Bayesian) methods of inference. We also provide code (based on the R software program) for conducting Bayesian and classical analyses of site-occupancy data.
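The probit-regression structure is what makes a pure Gibbs sampler possible here: with Albert-Chib latent-variable augmentation, every full conditional is a standard distribution. A minimal sketch of that core step for the occurrence submodel alone, with the occupancy indicators z treated as known (the full site-occupancy sampler adds analogous updates for the detection coefficients and for the latent z at sites with no detections; data and prior are simulated placeholders):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)
n, p = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.3, 1.0])
z = (X @ beta_true + rng.normal(size=n)) > 0     # simulated occupancy states

prec0 = np.eye(p) / 10.0                         # N(0, 10 I) prior precision
V = np.linalg.inv(prec0 + X.T @ X)               # posterior covariance (fixed)
L = np.linalg.cholesky(V)
beta, draws = np.zeros(p), []
for it in range(2000):
    # 1) Latent normals y* | beta, z: N(X beta, 1) truncated by the sign of z
    m = X @ beta
    lo = np.where(z, -m, -np.inf)                # standardized truncation bounds
    hi = np.where(z, np.inf, -m)
    ystar = m + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # 2) beta | y*: ordinary Gaussian linear-model update
    beta = V @ (X.T @ ystar) + L @ rng.normal(size=p)
    draws.append(beta.copy())
print(np.mean(draws[500:], axis=0))              # posterior mean, near beta_true
```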
ACHCAR, J. A.; MARTINEZ, E. Z.; RUFFINO-NETTO, A.; PAULINO, C. D.; SOARES, P.
2008-01-01
SUMMARY We considered a Bayesian analysis for the prevalence of tuberculosis cases in New York City from 1970 to 2000. This counting dataset presented two change-points during this period. We modelled the counting dataset using non-homogeneous Poisson processes in the presence of the two change-points. A Bayesian analysis of the data is considered using Markov chain Monte Carlo methods. Simulated Gibbs samples for the parameters of interest were obtained using WinBUGS software. PMID:18346287
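The WinBUGS model itself is not given in the abstract; a simplified single-change-point Poisson analogue with conjugate Gamma priors shows the Gibbs structure (the two-change-point case adds one more discrete conditional; counts and priors below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
y = np.concatenate([rng.poisson(3.0, 30), rng.poisson(9.0, 20)])  # synthetic counts
n = y.size
a, b = 1.0, 1.0                        # Gamma(a, b) priors on both rates
tau = n // 2                           # initial change-point
for it in range(3000):
    # Rates | tau: conjugate Gamma updates on the two segments
    lam1 = rng.gamma(a + y[:tau].sum(), 1.0 / (b + tau))
    lam2 = rng.gamma(a + y[tau:].sum(), 1.0 / (b + n - tau))
    # tau | rates: discrete full conditional over all split points
    ks = np.arange(1, n)
    cum = np.cumsum(y)[:-1]            # total counts in the first k observations
    loglik = (cum * np.log(lam1) - ks * lam1
              + (y.sum() - cum) * np.log(lam2) - (n - ks) * lam2)
    w = np.exp(loglik - loglik.max())
    tau = rng.choice(ks, p=w / w.sum())
print(tau, lam1, lam2)
```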
Noise reduction for low-dose helical CT by 3D penalized weighted least-squares sinogram smoothing
NASA Astrophysics Data System (ADS)
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-03-01
Helical computed tomography (HCT) has several advantages over conventional step-and-shoot CT for imaging a relatively large object, especially for dynamic studies. However, HCT may increase X-ray exposure to the patient significantly. This work aims to reduce the radiation by lowering the X-ray tube current (mA) and filtering the low-mA (or dose) sinogram noise. Based on the noise properties of the HCT sinogram, a three-dimensional (3D) penalized weighted least-squares (PWLS) objective function was constructed and an optimal sinogram was estimated by minimizing the objective function. To account for the difference in signal correlation among the different directions of the HCT sinogram, an anisotropic Markov random field (MRF) Gibbs function was designed as the penalty. The minimization of the objective function was performed by an iterative Gauss-Seidel updating strategy. The effectiveness of the 3D-PWLS sinogram smoothing for low-dose HCT was demonstrated by a 3D Shepp-Logan head phantom study. Comparison studies with our previously developed KL-domain PWLS sinogram smoothing algorithm indicate that the KL+2D-PWLS algorithm shows better performance on the in-plane noise-resolution trade-off, while the 3D-PWLS shows better performance on the z-axis noise-resolution trade-off. Receiver operating characteristic (ROC) studies using a channelized Hotelling observer (CHO) show that the 3D-PWLS and KL+2D-PWLS algorithms have similar detectability performance in a low-contrast environment.
Discovering Sequence Motifs with Arbitrary Insertions and Deletions
Frith, Martin C.; Saunders, Neil F. W.; Kobe, Bostjan; Bailey, Timothy L.
2008-01-01
Biology is encoded in molecular sequences: deciphering this encoding remains a grand scientific challenge. Functional regions of DNA, RNA, and protein sequences often exhibit characteristic but subtle motifs; thus, computational discovery of motifs in sequences is a fundamental and much-studied problem. However, most current algorithms do not allow for insertions or deletions (indels) within motifs, and the few that do have other limitations. We present a method, GLAM2 (Gapped Local Alignment of Motifs), for discovering motifs allowing indels in a fully general manner, and a companion method GLAM2SCAN for searching sequence databases using such motifs. GLAM2 is a generalization of the gapless Gibbs sampling algorithm. It re-discovers variable-width protein motifs from the PROSITE database significantly more accurately than the alternative methods PRATT and SAM-T2K. Furthermore, it usefully refines protein motifs from the ELM database: in some cases, the refined motifs make orders of magnitude fewer overpredictions than the original ELM regular expressions. GLAM2 performs respectably on the BAliBASE multiple alignment benchmark, and may be superior to leading multiple alignment methods for “motif-like” alignments with N- and C-terminal extensions. Finally, we demonstrate the use of GLAM2 to discover protein kinase substrate motifs and a gapped DNA motif for the LIM-only transcriptional regulatory complex: using GLAM2SCAN, we identify promising targets for the latter. GLAM2 is especially promising for short protein motifs, and it should improve our ability to identify the protein cleavage sites, interaction sites, post-translational modification attachment sites, etc., that underlie much of biology. It may be equally useful for arbitrarily gapped motifs in DNA and RNA, although fewer examples of such motifs are known at present. GLAM2 is public domain software, available for download at http://bioinformatics.org.au/glam2. PMID:18437229
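For readers unfamiliar with the gapless Gibbs site sampler that GLAM2 generalizes, here is a compact illustrative version for DNA (one motif occurrence per sequence, pseudocounts of one, no background model or indels; the sequences and width are toy inputs, and this is not GLAM2 itself):

```python
import numpy as np

rng = np.random.default_rng(5)
ALPHA = "ACGT"


def site_sampler(seqs, w, iters=500):
    """Gapless Gibbs site sampler: returns one motif start position per sequence."""
    idx = [np.array([ALPHA.index(c) for c in s]) for s in seqs]
    pos = [int(rng.integers(0, len(s) - w + 1)) for s in seqs]
    for it in range(iters):
        for i in range(len(seqs)):
            # Position weight matrix from all sequences except i (+1 pseudocounts)
            counts = np.ones((w, 4))
            for j, s in enumerate(idx):
                if j != i:
                    counts[np.arange(w), s[pos[j]:pos[j] + w]] += 1
            pwm = counts / counts.sum(axis=1, keepdims=True)
            # Resample the site in sequence i proportionally to PWM likelihood
            s = idx[i]
            n_sites = len(s) - w + 1
            scores = np.array([pwm[np.arange(w), s[k:k + w]].prod()
                               for k in range(n_sites)])
            pos[i] = int(rng.choice(n_sites, p=scores / scores.sum()))
    return pos


seqs = ["ACGTACGTTTTGATTACAGG", "GATTACAGGCCGTACGTAAC", "TTGGATTACATTTACGCGCG"]
print(site_sampler(seqs, w=8))
```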
Armstrong, Ian S; Hoffmann, Sandra A
2016-11-01
Quantitative single photon emission computed tomography (SPECT) shows potential in a number of clinical applications, and several vendors now provide software and hardware solutions to allow 'SUV-SPECT' to mirror metrics used in PET imaging. This brief technical report assesses the accuracy of activity concentration measurements using a new algorithm, 'xSPECT', from Siemens Healthcare. SPECT/CT data were acquired from a uniform cylinder with 5, 10, 15 and 20 s/projection and a NEMA image quality phantom with 25 s/projection. The NEMA phantom had hot spheres filled with an 8:1 activity concentration relative to the background compartment. Reconstructions were performed using parameters defined by manufacturer presets available with the algorithm. The accuracy of activity concentration measurements was assessed, and a dose calibrator-camera cross-calibration factor (CCF) was derived from the uniform phantom data. In uniform phantom images, a positive bias was observed, ranging from ∼6% in the lower-count images to ∼4% in the higher-count images. On the basis of the higher-count data, a CCF of 0.96 was derived. As expected, considerable negative bias was measured in the NEMA spheres using region mean values, whereas positive bias was measured in the four largest NEMA spheres. Nonmonotonically increasing recovery curves for the hot spheres suggested the presence of Gibbs edge enhancement from resolution modelling. Sufficiently accurate activity concentration measurements can easily be made on images reconstructed with the xSPECT algorithm without a CCF. However, the use of a CCF is likely to improve accuracy further. A manual conversion of voxel values into SUV should be possible, provided that the patient weight, injected activity and time between injection and imaging are all known accurately.
BiomeNet: A Bayesian Model for Inference of Metabolic Divergence among Microbial Communities
Chipman, Hugh; Gu, Hong; Bielawski, Joseph P.
2014-01-01
Metagenomics yields enormous numbers of microbial sequences that can be assigned a metabolic function. Using such data to infer community-level metabolic divergence is hindered by the lack of a suitable statistical framework. Here, we describe a novel hierarchical Bayesian model, called BiomeNet (Bayesian inference of metabolic networks), for inferring differential prevalence of metabolic subnetworks among microbial communities. To infer the structure of community-level metabolic interactions, BiomeNet applies a mixed-membership modelling framework to enzyme abundance information. The basic idea is that the mixture components of the model (metabolic reactions, subnetworks, and networks) are shared across all groups (microbiome samples), but the mixture proportions vary from group to group. Through this framework, the model can capture nested structures within the data. BiomeNet is unique in modeling each metagenome sample as a mixture of complex metabolic systems (metabosystems). The metabosystems are composed of mixtures of tightly connected metabolic subnetworks. BiomeNet differs from other unsupervised methods by allowing researchers to discriminate groups of samples through the metabolic patterns it discovers in the data, and by providing a framework for interpreting them. We describe a collapsed Gibbs sampler for inference of the mixture weights under BiomeNet, and we use simulation to validate the inference algorithm. Application of BiomeNet to human gut metagenomes revealed a metabosystem with greater prevalence among inflammatory bowel disease (IBD) patients. Based on the discriminatory subnetworks for this metabosystem, we inferred that the community is likely to be closely associated with the human gut epithelium, to be resistant to dietary interventions, and to interfere with human uptake of an antioxidant connected to IBD. Because this metabosystem has a greater capacity to exploit host-associated glycans, we speculate that IBD-associated communities might arise from opportunistic growth of bacteria that can circumvent the host's nutrient-based mechanism for bacterial partner selection. PMID:25412107
NASA Astrophysics Data System (ADS)
Björnbom, Pehr
2016-03-01
In the first part of this work equilibrium temperature profiles in fluid columns with ideal gas or ideal liquid were obtained by numerically minimizing the column energy at constant entropy, equivalent to maximizing column entropy at constant energy. A minimum in internal plus potential energy for an isothermal temperature profile was obtained in line with Gibbs' classical equilibrium criterion. However, a minimum in internal energy alone for adiabatic temperature profiles was also obtained. This led to a hypothesis that the adiabatic lapse rate corresponds to a restricted equilibrium state, a type of state in fact discussed already by Gibbs. In this paper similar numerical results for a fluid column with saturated air suggest that also the saturated adiabatic lapse rate corresponds to a restricted equilibrium state. The proposed hypothesis is further discussed and amended based on the previous and the present numerical results and a theoretical analysis based on Gibbs' equilibrium theory.
A Gibbs point field model for the spatial pattern of coronary capillaries
NASA Astrophysics Data System (ADS)
Karch, R.; Neumann, M.; Neumann, F.; Ullrich, R.; Neumüller, J.; Schreiner, W.
2006-09-01
We propose a Gibbs point field model for the pattern of coronary capillaries in transverse histologic sections from human hearts, based on the physiology of oxygen supply from capillaries to tissue. To specify the potential energy function of the Gibbs point field, we draw on an analogy between the equation of steady-state oxygen diffusion from an array of parallel capillaries to the surrounding tissue and Poisson's equation for the electrostatic potential of a two-dimensional distribution of identical point charges. The influence of factors other than diffusion is treated as a thermal disturbance. On this basis, we arrive at the well-known two-dimensional one-component plasma, a system of identical point charges exhibiting a weak (logarithmic) repulsive interaction that is completely characterized by a single dimensionless parameter. By variation of this parameter, the model is able to reproduce many characteristics of real capillary patterns.
Heritability of hypothyroidism in the Finnish Hovawart population.
Åhlgren, Johanna; Uimari, Pekka
2016-06-07
The Hovawart is a working and companion dog breed of German origin. A few hundred Hovawart dogs are registered annually in Finland. The most common disease with a proposed genetic background in Hovawarts is hypothyroidism. The disease is usually caused by lymphocytic thyroiditis, an autoimmune disorder which destroys the thyroid gland. Hypothyroidism can be treated medically with hormone replacement. Its overall incidence could also be reduced through selection, provided that the trait shows an adequate genetic basis. The aim of this study was to estimate the heritability of hypothyroidism in the Finnish Hovawart population. The pedigree data for the study were provided by the Finnish Kennel Club and the hypothyroidism data by the Finnish Hovawart Club. The data included 4953 dogs born between 1990 and 2010, of which 107 had hypothyroidism and 4846 were unaffected. Prior to the estimation of heritability, we studied the effects of gender, birth year, birth month, and inbreeding on susceptibility to hypothyroidism. Heritability was estimated with the probit model both via restricted maximum likelihood (REML) and Gibbs sampling, using litter and sire of the dog as random effects. None of the studied systematic effects or level of inbreeding had a significant effect on susceptibility to hypothyroidism. The estimated heritability of hypothyroidism varied from 0.47 (SE = 0.18) using REML to 0.62 (SD = 0.21) using Gibbs sampling. Based on our analysis, the heritability of hypothyroidism is moderate to high, suggesting that its prevalence could be decreased through selection. Thus, breeders should notify the breed association of any affected dogs, and their use for breeding should be avoided.
NASA Astrophysics Data System (ADS)
Irawan, R.; Yong, B.; Kristiani, F.
2017-02-01
Bandung, one of the cities in Indonesia, is vulnerable to dengue disease in both its early stage (Dengue Fever) and its severe stage (Dengue Haemorrhagic Fever and Dengue Shock Syndrome). In 2013, there were 5,749 patients in Bandung, 2,032 of whom were hospitalized in Santo Borromeus Hospital. In this paper, two models, the Poisson-gamma and the Log-normal model, use Bayesian inference to estimate the relative risk. The calculation is done by the Markov Chain Monte Carlo method, that is, simulation using the Gibbs Sampling algorithm in the WinBUGS 1.4.3 software. The analysis of dengue disease in 30 sub-districts of Bandung in 2013, based on Santo Borromeus Hospital's data, shows that the Coblong and Bandung Wetan sub-districts had the highest relative risk under both models for the early stage, the severe stage, and all stages. Meanwhile, the Cinambo sub-district had the lowest relative risk under both models for the severe stage and all stages, and the Bojongloa Kaler sub-district had the lowest relative risk under both models for the early stage. In the model comparison using the DIC (Deviance Information Criterion), the Log-normal model fits the data better for the early stage and the severe stage, whereas the Poisson-gamma model fits better for all stages.
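Because the Poisson-gamma model above is conjugate, its Gibbs updates collapse to direct draws from gamma posteriors. The following minimal sketch (ours, not the paper's WinBUGS model; the case counts, expected counts and hyperparameters are invented) estimates sub-district relative risks this way:

    import numpy as np

    # Conjugate Poisson-gamma model for small-area relative risk:
    # O_i ~ Poisson(E_i * theta_i), theta_i ~ Gamma(a, b), so the posterior
    # of each relative risk theta_i is Gamma(a + O_i, b + E_i).
    rng = np.random.default_rng(0)
    observed = np.array([12, 3, 25, 7])          # hypothetical case counts
    expected = np.array([10.0, 5.0, 15.0, 8.0])  # expected counts from population
    a, b = 1.0, 1.0                              # assumed prior hyperparameters

    draws = rng.gamma(shape=a + observed,          # posterior shape
                      scale=1.0 / (b + expected),  # numpy takes scale = 1/rate
                      size=(5000, len(observed)))
    print("posterior mean RR:", draws.mean(axis=0))
    print("95% credible intervals:", np.percentile(draws, [2.5, 97.5], axis=0).T)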
Statistical physics of medical diagnostics: Study of a probabilistic model.
Mashaghi, Alireza; Ramezanpour, Abolfazl
2018-03-01
We study a diagnostic strategy which is based on the anticipation of the diagnostic process by simulation of the dynamical process starting from the initial findings. We show that such a strategy could result in more accurate diagnoses compared to a strategy that is solely based on the direct implications of the initial observations. We demonstrate this by employing the mean-field approximation of statistical physics to compute the posterior disease probabilities for a given subset of observed signs (symptoms) in a probabilistic model of signs and diseases. A Monte Carlo optimization algorithm is then used to maximize an objective function of the sequence of observations, which favors the more decisive observations resulting in more polarized disease probabilities. We see how the observed signs change the nature of the macroscopic (Gibbs) states of the sign and disease probability distributions. The structure of these macroscopic states in the configuration space of the variables affects the quality of any approximate inference algorithm (so the diagnostic performance) which tries to estimate the sign-disease marginal probabilities. In particular, we find that the simulation (or extrapolation) of the diagnostic process is helpful when the disease landscape is not trivial and the system undergoes a phase transition to an ordered phase.
Xu, Hui; Li, Pei Xun; Ma, Kun; Thomas, Robert K; Penfold, Jeffrey; Lu, Jian Ren
2013-07-30
This is a second paper responding to recent papers by Menger et al. and the ensuing discussion about the application of the Gibbs equation to surface tension (ST) data. Using new neutron reflection (NR) measurements on sodium dodecylsulfate (SDS) and sodium dodecylmonooxyethylene sulfate (SLES) above and below their CMCs and with and without added NaCl, in conjunction with the previous ST measurements on SDS by Elworthy and Mysels (EM), we conclude that (i) ST measurements are often seriously compromised by traces of divalent ions, (ii) adsorption does not generally reach saturation at the CMC, making it difficult to obtain the limiting Gibbs slope, and (iii) the significant width of micellization may make it impossible to apply the Gibbs equation in a significant range of concentration below the CMC. Menger et al. proposed (ii) as a reason for the difficulty of applying the Gibbs equation to ST data. Conclusions (i) and (iii) now further emphasize the failings of the ST-Gibbs analysis for determining the limiting coverage at the CMC, especially for SDS. For SDS, adsorption continues to increase above the CMC up to 10 × CMC, where it is about 25% greater than at the CMC and about the same as at the CMC in the presence of 0.1 M NaCl. In contrast, the adsorption of SLES reaches a limit at the CMC with no further increase up to 10 × CMC, but the addition of 0.1 M NaCl increases the surface excess by 20-25%. The results for SDS are combined with earlier NR results to generate an adsorption isotherm from 2 to 100 mM. The NR results for SDS are compared with the definitive ST measurements of EM, and the surface excesses agree over the range where they can safely be compared, from 2 to 6 mM. This confirms that the anomalous decrease in the slope of EM's σ - ln c curve between 6 mM and the CMC at 8.2 mM results from changes in activity associated with the significant width of micellization. This anomaly shows that it is impossible to apply the Gibbs equation usefully from 6 to 8.2 mM; the lack of knowledge of the activity in this range is the same as above the CMC. It was found that a mislabeling of the original data in EM may have prevented the use of this excellent ST dataset as a standard by other authors. Although the NR and ST results for SDS in the absence of added electrolyte can be reconciled, ST is generally shown to be less accurate and more vulnerable to impurities, especially divalent ions, than NR. The radiotracer technique is shown to be less accurate than ST-Gibbs in that the four radiotracer measurements of the surface excess are consistent neither with each other nor with ST and NR. It is also shown that radiotracer results on aerosol-OT are likely to be incorrect. Application of the mass action (MA) model of micellization to the ST curves of SDS and SLES through and above the CMC shows that they can be explained by this model and that they depend on the degree of dissociation of the micelle, which leads to a larger change in the mean activity, and hence the adsorption, for the more highly dissociated SDS micelles than for SLES. Previous measurements of the activity of SDS above the CMC were found to be semiquantitatively consistent with the change in mean activity predicted by the MA model but inconsistent with the combined ST, NR, and Gibbs equation results.
Direct measurements of the Gibbs free energy of OH using a CW tunable laser
NASA Technical Reports Server (NTRS)
Killinger, D. K.; Wang, C. C.
1979-01-01
The paper describes an absorption measurement for determining the Gibbs free energy of OH generated in a mixture of water and oxygen vapor. These measurements afford a direct verification of the accuracy of thermochemical data of H2O at high temperatures and pressures. The results indicate that values for the heat capacity of H2O obtained through numerical computations are correct within an experimental uncertainty of 0.15 cal/mole K.
Generating Natural Language Under Pragmatic Constraints.
1987-03-01
…central issue, Carter's loss, concentrating on more pleasant aspects. But what would happen in an extreme case? What if you, a Carter supporter, are… In [Cohen 78], Cohen studied the effect of the hearer's knowledge on the selection of the appropriate speech act (say, REQUEST vs INFORM OF WANT)… The comprehension of utterances is studied in [Clark & Carlson 81] and [Clark & Murphy 82]; [Gibbs 79] and [Gibbs 81] discuss the effects of context on the processing of indirect…
Air Force Logistics Command DCS/Materiel Management 1988-9 Master Plan
1988-10-01
…MMMAI, AUTOVON 787-2587. Member: Mr James Gibbs, HQ AFLC/MMMES, AUTOVON 787-3407. PROJECT SPONSOR: Mr Steve Stewart, HQ AFLC/MMME, AUTOVON 787-5280. HQ AFLC OPR: Mr James Gibbs, HQ AFLC/MMMES, AUTOVON 787-3407. PROBLEM STATEMENT: Item managers do not have a procedure to analyze the economic costs and/or… (513) 429-0055. Contractor: The Analytic Sciences Corporation (Contact: Mr Rich Mabe, (513) 426-1040). PROJECT SPONSOR: Lt Col Michael Williams, HQ USAF
1988-11-01
…rates. The Hammett equation, also called the Linear Free Energy Relationship (LFER) because of the relationship of the Gibbs free energy to the… equations for numerous biological and physicochemical properties. The Linear Solvation Energy Relationship (LSER), a subset of QSAR, has been used by… originates from thermodynamics, where Hammett recognized the relationship of structure to the Gibbs free energy, and ultimately to equilibria and reaction
Density-functional theory computer simulations of CZTS0.25Se0.75 alloy phase diagrams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chagarov, E.; Sardashti, K.; Kummel, A. C.
2016-08-14
Density-functional theory simulations of CZTS, CZTSe, and CZTS0.25Se0.75 photovoltaic compounds have been performed to investigate the stability of the CZTS0.25Se0.75 alloy vs. decomposition into CZTS, CZTSe, and other secondary compounds. The vibrational contribution to the Gibbs energy was estimated by calculating phonon spectra and thermodynamic properties at finite temperatures. It was demonstrated that the CZTS0.25Se0.75 alloy is stabilized not by the enthalpy of formation but primarily by the mixing contributions to the Gibbs energy. The Gibbs energy gains/losses for several decomposition reactions were calculated as a function of temperature with/without intermixing and vibrational contributions to the Gibbs energy. A set of phase diagrams was built in the multidimensional space of chemical potentials at 300 K and 900 K to demonstrate alloy stability and boundary compounds under various chemical conditions. It was demonstrated for CZTS0.25Se0.75 that the chemical potentials for stability differ between the typical processing temperature (∼900 K) and the operating temperature (300 K). This implies that, as cooling progresses, the flux/concentration of S should be increased in MBE growth to maintain CZTS0.25Se0.75 in a thermodynamically stable state and minimize phase decomposition.
Moriya, Yoshio; Hasegawa, Takeshi; Okada, Tetsuo; Ogawa, Nobuaki; Kawai, Erika; Abe, Kosuke; Ogasawara, Masataka; Kato, Sumio; Nakata, Shinichi
2006-11-15
Gibbs monolayers of lipophilic tetraphenylporphyrinatomanganese(III) and the hydrophilic diacid of meso-tetrakis(4-sulfonatophenyl)porphyrin adsorbed at the liquid-liquid interface have been analyzed by UV-visible external reflection (ER) and partial internal reflection (PIR) spectra measured at different angles of incidence. The angle-dependent ER and PIR spectra beyond the Brewster angles (thetaERB and thetaIRB) were readily measured at the toluene/water interface. As anticipated in our previous study, the present study proves for the first time that the reflection-absorbance of UV-visible PIR spectra quantitatively agrees with theoretical calculations for the Gibbs monolayer above thetaIRB. In addition, the absorbance of the PIR spectra is greatly enhanced in comparison to that of the ATR spectra. The enhancement is caused by an optical effect in the monolayer sandwiched between the toluene and water phases, which have different but closely matched refractive indices. This optical enhancement requires an optically perfect contact between the phases, which is difficult to achieve for a solid-solid contact. At the liquid/liquid interface, however, an ideal optical contact is easily realized, which makes the enhancement as large as theoretically expected. PIR spectrometry should thus be recognized as a new, highly sensitive analytical tool for studying Gibbs monolayers at the liquid/liquid interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Shu-Kun
1996-12-31
The Gibbs paradox statement of the entropy of mixing has been regarded as a theoretical foundation of statistical mechanics, quantum theory and biophysics. However, all the relevant chemical experimental observations and logical analyses indicate that the Gibbs paradox statement is false. I prove that this statement is wrong: the Gibbs paradox statement implies that entropy decreases with an increase in symmetry (as represented by a symmetry number σ; see any statistical mechanics textbook). From group theory, any system has at least the symmetry number σ = 1, corresponding to the identity operation for a strictly asymmetric system. It follows that the entropy of a system would be equal to, or less than, zero. However, from either the von Neumann-Shannon entropy formula (S(w) = -Σ w_i ln w_i) or the Boltzmann entropy formula (S = ln W) and the original definition, entropy is non-negative. Therefore, this statement is false. It should not be a surprise that, for the first time, many outstanding problems, such as the validity of Pauling's resonance theory, the explanation of second-order phase transition phenomena, the biophysical problem of protein folding and the related hydrophobic effect, etc., can be solved. Empirical principles such as the Pauli principle (and Hund's rule) and the HSAB principle can also be given a theoretical explanation.
NASA Astrophysics Data System (ADS)
Krishna kumar, S.; Logeshkumaran, A.; Magesh, N. S.; Godson, Prince S.; Chandrasekar, N.
2015-12-01
In the present study, the geochemical characteristics of groundwater and its drinking water quality have been studied. Twenty-four groundwater samples were collected and analyzed for pH, electrical conductivity, total dissolved solids, carbonate, bicarbonate, chloride, sulphate, nitrate, calcium, magnesium, sodium, potassium and total hardness. The results were evaluated and compared with the WHO and BIS water quality standards. The results reveal that the groundwater is fresh to brackish and moderately hard to hard in nature. Na and Cl are the dominant ions among cations and anions. Chloride, calcium and magnesium ions are within the allowable limits except in a few samples. According to the Gibbs diagram, the predominant samples fall in the rock-water interaction dominance and evaporation dominance fields. The Piper trilinear diagram shows that the groundwater samples are of Na-Cl and mixed Ca-Mg-Cl types. Based on the WQI results, the majority of the samples fall under the excellent to good category and are suitable for drinking purposes.
Biochemical thermodynamics: applications of Mathematica.
Alberty, Robert A
2006-01-01
The most efficient way to store thermodynamic data on enzyme-catalyzed reactions is to use matrices of species properties. Since equilibrium in enzyme-catalyzed reactions is reached at specified pH values, the thermodynamics of the reactions is discussed in terms of transformed thermodynamic properties. These transformed thermodynamic properties are complicated functions of temperature, pH, and ionic strength that can be calculated from the matrices of species values. The most important of these transformed thermodynamic properties is the standard transformed Gibbs energy of formation of a reactant (sum of species). It is the most important because when this function of temperature, pH, and ionic strength is known, all the other standard transformed properties can be calculated by taking partial derivatives. The species database in this package contains data matrices for 199 reactants. For 94 of these reactants, standard enthalpies of formation of species are known, and so standard transformed Gibbs energies, standard transformed enthalpies, standard transformed entropies, and average numbers of hydrogen atoms can be calculated as functions of temperature, pH, and ionic strength. For reactions between these 94 reactants, the changes in these properties can be calculated over a range of temperatures, pHs, and ionic strengths, and so can apparent equilibrium constants. For the other 105 reactants, only standard transformed Gibbs energies of formation and average numbers of hydrogen atoms at 298.15 K can be calculated. The loading of this package provides functions of pH and ionic strength at 298.15 K for standard transformed Gibbs energies of formation and average numbers of hydrogen atoms for 199 reactants. It also provides functions of temperature, pH, and ionic strength for the standard transformed Gibbs energies of formation, standard transformed enthalpies of formation, standard transformed entropies of formation, and average numbers of hydrogen atoms for 94 reactants. Thus loading this package makes available 774 mathematical functions for these properties. These functions can be added and subtracted to obtain changes in these properties in biochemical reactions and apparent equilibrium constants.
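As a concrete illustration of the Legendre transform behind these transformed properties, the sketch below evaluates a standard transformed Gibbs energy of formation at a specified pH, assuming the textbook relation ΔfG'° = ΔfG° + N_H·RT·ln(10)·pH at zero ionic strength; the species values are placeholders, not numbers from the package's database:

    import numpy as np

    R = 8.31451e-3  # gas constant, kJ mol^-1 K^-1

    def transformed_gibbs(dfG0, n_H, pH, T=298.15):
        # Standard transformed Gibbs energy of formation (kJ/mol) at the given
        # pH, ignoring ionic-strength corrections; n_H is the number of
        # hydrogen atoms in the species.
        return dfG0 + n_H * R * T * np.log(10.0) * pH

    # hypothetical species with 12 hydrogen atoms
    print(transformed_gibbs(dfG0=-2768.1, n_H=12, pH=7.0))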
Historical and Future Roles of the Tactical Signal Officer
1991-03-27
…LTC James, his signalmen and boat crew … accomplished both feats while under heavy artillery fire from the Spanish on… the capture of Fort Malate to Admiral Dewey's fleet in Manila Bay. Sergeant Gibbs later became Major General Gibbs, and Chief of Signal in 1928. …kept critical equipment out of operation required in command and control, and degraded the unit's ability to see the enemy at night. These officers
Solubility and dissolution thermodynamics of tetranitroglycoluril in organic solvents at 295-318 K
NASA Astrophysics Data System (ADS)
Zheng, Zhihua; Wang, Jianlong; Hu, Zhiyan; Du, Hongbin
2017-08-01
The solubility of tetranitroglycoluril in acetone, methanol, ethanol, ethyl acetate, nitromethane and chloroform at temperatures ranging from 295 to 318 K was measured by a gravimetric method. The solubility data of tetranitroglycoluril were fitted with the Apelblat semiempirical equation. The dissolution enthalpy, entropy and Gibbs energy of tetranitroglycoluril were calculated using the van't Hoff and Gibbs equations. The results showed that the Apelblat semiempirical equation correlated the solubility data well. The dissolution process was endothermic, entropy-driven, and nonspontaneous.
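A hedged sketch of the van't Hoff treatment mentioned above: fit ln x = -ΔH/(RT) + ΔS/R to mole-fraction solubility data, then take ΔG = ΔH - TΔS. The data points are invented for illustration, not the paper's measurements:

    import numpy as np

    R = 8.314  # J mol^-1 K^-1
    T = np.array([295.0, 300.0, 305.0, 310.0, 318.0])       # K
    x = np.array([1.2e-3, 1.6e-3, 2.1e-3, 2.8e-3, 4.0e-3])  # mole fraction (made up)

    slope, intercept = np.polyfit(1.0 / T, np.log(x), 1)
    dH = -R * slope        # dissolution enthalpy, J/mol (positive: endothermic)
    dS = R * intercept     # dissolution entropy, J/(mol K)
    dG = dH - 298.15 * dS  # Gibbs energy of dissolution at 298.15 K
    print(dH, dS, dG)      # for these toy data: dH > 0, dS > 0, dG > 0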
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang; Solomonoff, Alex; Vandeven, Herve
1992-01-01
It is well known that the Fourier series of an analytic and periodic function, truncated after 2N+1 terms, converges exponentially with N, even in the maximum norm. If the function is analytic but not periodic, however, the truncated series converges only slowly in the interior and fails to converge in the maximum norm, although the function is still analytic. This is known as the Gibbs phenomenon. Here, we show that the first 2N+1 Fourier coefficients contain enough information about the function that an exponentially convergent approximation (in the maximum norm) can be constructed.
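A small numerical illustration (not from the paper): the truncated Fourier series of a square wave overshoots the jump by about 9% of the jump size no matter how many terms are kept, which is the Gibbs phenomenon in its classical form:

    import numpy as np

    N = 200
    x = np.linspace(-np.pi, np.pi, 20001)
    # Fourier partial sum of sign(x): (4/pi) * sum over odd k of sin(kx)/k
    partial = sum(4.0 / (np.pi * k) * np.sin(k * x) for k in range(1, 2 * N, 2))
    # ~1.179: the overshoot above 1 is ~9% of the jump of size 2 (Gibbs constant)
    print("max of partial sum:", partial.max())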
A kinetic study of jack-bean urease denaturation by a new dithiocarbamate bismuth compound
NASA Astrophysics Data System (ADS)
Menezes, D. C.; Borges, E.; Torres, M. F.; Braga, J. P.
2012-10-01
A kinetic study of the enzymatic inhibitory effect of a new bismuth dithiocarbamate complex on jack-bean urease is reported. A neural network approach is used to solve the ill-posed inverse problem arising in the numerical treatment of the data. A reaction mechanism for the urease denaturation process is proposed, and the rate constants, relaxation time constants, equilibrium constants, activation Gibbs free energies for each reaction step and Gibbs free energies for the transition species are determined.
Bayesian transformation cure frailty models with multivariate failure time data.
Yin, Guosheng
2008-12-10
We propose a class of transformation cure frailty models to accommodate a survival fraction in multivariate failure time data. Established through a general power transformation, this family of cure frailty models includes the proportional hazards and the proportional odds modeling structures as two special cases. Within the Bayesian paradigm, we obtain the joint posterior distribution and the corresponding full conditional distributions of the model parameters for the implementation of Gibbs sampling. Model selection is based on the conditional predictive ordinate statistic and deviance information criterion. As an illustration, we apply the proposed method to a real data set from dentistry.
A semi-Lagrangian advection scheme for radioactive tracers in a regional spectral model
NASA Astrophysics Data System (ADS)
Chang, E.-C.; Yoshimura, K.
2015-06-01
In this study, the non-iteration dimensional-split semi-Lagrangian (NDSL) advection scheme is applied to the National Centers for Environmental Prediction (NCEP) regional spectral model (RSM) to alleviate the Gibbs phenomenon. The Gibbs phenomenon is a problem wherein negative values of positive-definite quantities (e.g., moisture and tracers) are generated by the spectral space transformation in a spectral model system. To solve this problem, the spectral prognostic specific humidity and radioactive tracer advection scheme is replaced by the NDSL advection scheme, which considers advection of tracers in a grid system without spectral space transformations. A regional version of the NDSL is developed in this study and is applied to the RSM. Idealized experiments show that the regional version of the NDSL is successful. The model runs for an actual case study suggest that the NDSL can successfully advect radioactive tracers (iodine-131 and cesium-137) without noise from the Gibbs phenomenon. The NDSL can also remove negative specific humidity values produced in spectral calculations without losing detailed features.
Computing the absolute Gibbs free energy in atomistic simulations: Applications to defects in solids
NASA Astrophysics Data System (ADS)
Cheng, Bingqing; Ceriotti, Michele
2018-02-01
The Gibbs free energy is the fundamental thermodynamic potential underlying the relative stability of different states of matter under constant-pressure conditions. However, computing this quantity from atomic-scale simulations is far from trivial, so the potential energy of a system is often used as a proxy. In this paper, we use a combination of thermodynamic integration methods to accurately evaluate the Gibbs free energies associated with defects in crystals, including the vacancy formation energy in bcc iron, and the stacking fault energy in fcc nickel, iron, and cobalt. We quantify the importance of entropic and anharmonic effects in determining the free energies of defects at high temperatures, and show that the potential energy approximation as well as the harmonic approximation may produce inaccurate or even qualitatively wrong results. Our calculations manifest the necessity to employ accurate free energy methods such as thermodynamic integration to estimate the stability of crystallographic defects at high temperatures.
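A schematic sketch of thermodynamic integration, the central technique named above: the free-energy difference is ΔF = ∫₀¹ ⟨dU/dλ⟩_λ dλ, estimated by averaging dU/dλ at a few fixed couplings and integrating numerically. The "ensemble average" below is a random stand-in for what would come from a molecular dynamics run:

    import numpy as np

    lambdas = np.linspace(0.0, 1.0, 11)
    rng = np.random.default_rng(1)

    def mean_dU_dlambda(lam, n_samples=10000):
        # placeholder ensemble average; in practice this is sampled from an
        # MD or Monte Carlo run at coupling parameter lam
        return rng.normal(loc=2.0 - lam, scale=0.5, size=n_samples).mean()

    dF = np.trapz([mean_dU_dlambda(l) for l in lambdas], lambdas)
    print("estimated free-energy difference:", dF)  # ~1.5 for this toy integrand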
Fractional Stochastic Differential Equations Satisfying Fluctuation-Dissipation Theorem
NASA Astrophysics Data System (ADS)
Li, Lei; Liu, Jian-Guo; Lu, Jianfeng
2017-10-01
We propose in this work a fractional stochastic differential equation (FSDE) model consistent with the over-damped limit of the generalized Langevin equation model. As a result of the `fluctuation-dissipation theorem', the differential equations driven by fractional Brownian noise to model memory effects should be paired with Caputo derivatives, and this FSDE model should be understood in an integral form. We establish the existence of strong solutions for such equations and discuss the ergodicity and convergence to Gibbs measure. In the linear forcing regime, we show rigorously the algebraic convergence to Gibbs measure when the `fluctuation-dissipation theorem' is satisfied, and this verifies that satisfying `fluctuation-dissipation theorem' indeed leads to the correct physical behavior. We further discuss possible approaches to analyze the ergodicity and convergence to Gibbs measure in the nonlinear forcing regime, while leave the rigorous analysis for future works. The FSDE model proposed is suitable for systems in contact with heat bath with power-law kernel and subdiffusion behaviors.
Systematic assignment of thermodynamic constraints in metabolic network models
Kümmel, Anne; Panke, Sven; Heinemann, Matthias
2006-01-01
Background The availability of genome sequences for many organisms enabled the reconstruction of several genome-scale metabolic network models. Currently, significant efforts are put into the automated reconstruction of such models. For this, several computational tools have been developed that particularly assist in identifying and compiling the organism-specific lists of metabolic reactions. In contrast, the last step of the model reconstruction process, which is the definition of the thermodynamic constraints in terms of reaction directionalities, still needs to be done manually. No computational method exists that allows for an automated and systematic assignment of reaction directions in genome-scale models. Results We present an algorithm that – based on thermodynamics, network topology and heuristic rules – automatically assigns reaction directions in metabolic models such that the reaction network is thermodynamically feasible with respect to the production of energy equivalents. It first exploits all available experimentally derived Gibbs energies of formation to identify irreversible reactions. As these thermodynamic data are not available for all metabolites, in a next step, further reaction directions are assigned on the basis of network topology considerations and thermodynamics-based heuristic rules. Briefly, the algorithm identifies reaction subsets from the metabolic network that are able to convert low-energy co-substrates into their high-energy counterparts and thus net produce energy. Our algorithm aims at disabling such thermodynamically infeasible cyclic operation of reaction subnetworks by assigning reaction directions based on a set of thermodynamics-derived heuristic rules. We demonstrate our algorithm on a genome-scale metabolic model of E. coli. The introduced systematic direction assignment yielded 130 irreversible reactions (out of 920 total reactions), which corresponds to about 70% of all irreversible reactions that are required to disable thermodynamically infeasible energy production. Conclusion Although not being fully comprehensive, our algorithm for systematic reaction direction assignment could define a significant number of irreversible reactions automatically with low computational effort. We envision that the presented algorithm is a valuable part of a computational framework that assists the automated reconstruction of genome-scale metabolic models. PMID:17123434
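The thermodynamic first step of the algorithm can be caricatured in a few lines: flag a reaction as irreversible whenever its estimated standard transformed reaction Gibbs energy is clearly negative (or clearly positive, after flipping its direction). The threshold and the reaction values below are invented and merely stand in for the paper's Gibbs energy estimates:

    # Toy direction assignment from Gibbs energy estimates (all values assumed).
    THRESHOLD = 30.0  # kJ/mol; |dG| below this leaves the reaction reversible

    reactions = {                 # hypothetical reaction -> estimated dG'0, kJ/mol
        "hexokinase": -17.0,
        "pyruvate_kinase": -31.4,
        "fumarase": -3.4,
    }

    for name, dG in reactions.items():
        if dG < -THRESHOLD:
            print(name, "-> irreversible (forward)")
        elif dG > THRESHOLD:
            print(name, "-> irreversible (backward)")
        else:
            print(name, "-> reversible")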
Garrido, Nuno M; Jorge, Miguel; Queimada, António J; Gomes, José R B; Economou, Ioannis G; Macedo, Eugénia A
2011-10-14
The Gibbs energy of hydration is an important quantity to understand the molecular behavior in aqueous systems at constant temperature and pressure. In this work we review the performance of some popular force fields, namely TraPPE, OPLS-AA and Gromos, in reproducing the experimental Gibbs energies of hydration of several alkyl-aromatic compounds--benzene, mono-, di- and tri-substituted alkylbenzenes--using molecular simulation techniques. In the second part of the paper, we report a new model that is able to improve such hydration energy predictions, based on Lennard Jones parameters from the recent TraPPE-EH force field and atomic partial charges obtained from natural population analysis of density functional theory calculations. We apply a scaling factor determined by fitting the experimental hydration energy of only two solutes, and then present a simple rule to generate atomic partial charges for different substituted alkyl-aromatics. This rule has the added advantages of eliminating the unnecessary assumption of fixed charge on every substituted carbon atom and providing a simple guideline for extrapolating the charge assignment to any multi-substituted alkyl-aromatic molecule. The point charges derived here yield excellent predictions of experimental Gibbs energies of hydration, with an overall absolute average deviation of less than 0.6 kJ mol(-1). This new parameter set can also give good predictive performance for other thermodynamic properties and liquid structural information.
Stationary wavelet transform for under-sampled MRI reconstruction.
Kayvanrad, Mohammad H; McLeod, A Jonathan; Baxter, John S H; McKenzie, Charles A; Peters, Terry M
2014-12-01
In addition to coil sensitivity data (parallel imaging), sparsity constraints are often used as an additional ℓp penalty for under-sampled MRI reconstruction (compressed sensing). Penalizing the traditional decimated wavelet transform (DWT) coefficients, however, results in visual pseudo-Gibbs artifacts, some of which are attributed to the lack of translation invariance of the wavelet basis. We show that these artifacts can be greatly reduced by penalizing the translation-invariant stationary wavelet transform (SWT) coefficients. This holds with various additional reconstruction constraints, including coil sensitivity profiles and total variation. Additionally, SWT reconstructions result in lower error values and faster convergence compared to DWT. These concepts are illustrated with extensive experiments on in vivo MRI data, with particular emphasis on multiple-channel acquisitions.
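A minimal 1D toy of why the translation-invariant transform helps, using the PyWavelets package (this denoising sketch is our own, not the paper's MRI reconstruction): soft-threshold the SWT coefficients of a noisy piecewise-constant signal and invert:

    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    n = 256  # length must be divisible by 2**level for the SWT
    signal = np.sign(np.sin(np.linspace(0, 4 * np.pi, n)))  # piecewise-constant
    noisy = signal + 0.2 * rng.standard_normal(n)

    coeffs = pywt.swt(noisy, "db4", level=4)
    denoised_coeffs = [(cA, pywt.threshold(cD, 0.3, mode="soft"))
                       for cA, cD in coeffs]
    denoised = pywt.iswt(denoised_coeffs, "db4")

    print("rmse noisy:   ", np.sqrt(np.mean((noisy - signal) ** 2)))
    print("rmse denoised:", np.sqrt(np.mean((denoised - signal) ** 2)))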
Stolyarova, V L; Lopatin, S I; Shilov, A L; Shugurov, S M
2013-07-15
The unique properties of the PbO-B2O3-SiO2 system, especially its extensive range of glass-forming compositions, make it valuable for various practical applications. The thermodynamic properties and vaporization behavior of PbO-B2O3-SiO2 melts have not been well established so far, and data on them will be useful for the optimization of technology and for the thermodynamic modeling of glasses. High-temperature Knudsen effusion mass spectrometry was used to study vaporization processes and to determine the partial pressures of the components of PbO-B2O3-SiO2 melts. Measurements were performed with an MS-1301 mass spectrometer. Vaporization was carried out using two quartz effusion cells containing the sample under study and pure PbO (reference substance). Ions were produced by electron ionization at an energy of 25 eV. To facilitate interpretation of the mass spectra, the appearance energies of the ions were also measured. Pb, PbO and O2 were found to be the main vapor species over the samples studied at 1100 K. The PbO activities as a function of the composition of the system were derived from the measured PbO partial pressures. The B2O3 and SiO2 activities, the Gibbs energy of formation, the excess Gibbs energy of formation and the mass losses of the samples studied were calculated. Partial pressures of the vapor species over PbO-B2O3-SiO2 melts were measured at 1100 K over a wide range of compositions using the Knudsen mass spectrometric method. The data enabled the PbO, B2O3, and SiO2 activities in these melts to be derived and provided evidence of their negative deviations from ideal behavior.
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
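A stripped-down sketch in the spirit of the model above, omitting the random litter and sire effects and assuming equal, known residual variances in both components; the data and priors are synthetic:

    import numpy as np

    rng = np.random.default_rng(42)
    # synthetic somatic-cell-score-like data: "healthy" and "diseased" components
    y = np.concatenate([rng.normal(3.0, 1.0, 900), rng.normal(6.0, 1.0, 100)])
    n = len(y)

    mu = np.array([2.0, 7.0])  # initial component means
    p = 0.5                    # mixing proportion: P(diseased)
    sigma2 = 1.0               # known residual variance, equal across components

    for it in range(2000):
        # 1) sample labels given parameters (posterior membership probabilities)
        w1 = (1 - p) * np.exp(-0.5 * (y - mu[0]) ** 2 / sigma2)
        w2 = p * np.exp(-0.5 * (y - mu[1]) ** 2 / sigma2)
        z = rng.random(n) < w2 / (w1 + w2)   # True -> "diseased" component
        # 2) sample mixing proportion | labels, with a Beta(1, 1) prior
        p = rng.beta(1 + z.sum(), 1 + n - z.sum())
        # 3) sample component means | labels, with flat priors
        for k, mask in enumerate([~z, z]):
            if mask.sum() > 0:
                mu[k] = rng.normal(y[mask].mean(), np.sqrt(sigma2 / mask.sum()))

    print("estimated means:", mu, " P(diseased):", p)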
Rotational KMS States and Type I Conformal Nets
NASA Astrophysics Data System (ADS)
Longo, Roberto; Tanimoto, Yoh
2018-01-01
We consider KMS states on a local conformal net on S¹ with respect to rotations. We prove that, if the conformal net is of type I, namely if it admits only type I DHR representations, then the extremal KMS states are the Gibbs states in an irreducible representation. Completely rational nets, the U(1)-current net, the Virasoro nets and their finite tensor products are shown to be of type I. In the completely rational case, we also give a direct proof that all factorial KMS states are Gibbs states.
Diffusive mixing and Tsallis entropy
O'Malley, Daniel; Vesselinov, Velimir V.; Cushman, John H.
2015-04-29
Brownian motion, the classical diffusive process, maximizes the Boltzmann-Gibbs entropy. The Tsallis q-entropy, which is non-additive, was developed as an alternative to the classical entropy for systems which are non-ergodic. A generalization of Brownian motion is provided that maximizes the Tsallis entropy rather than the Boltzmann-Gibbs entropy. This process is driven by a Brownian measure with a random diffusion coefficient. In addition, the distribution of this coefficient is derived as a function of q for 1 < q < 3. Applications to transport in porous media are considered.
NASA Astrophysics Data System (ADS)
Naumov, V. V.; Isaeva, V. A.; Kuzina, E. N.; Sharnin, V. A.
2012-12-01
Gibbs energies of transfer of glycylglycine and glycylglycinate ions from water to water-dimethylsulfoxide solvents are determined from the interphase distribution of the substances between immiscible phases over the composition range of 0.00 to 0.20 mole fractions of DMSO at 298.15 K. It is shown that, as the concentration of the nonaqueous component of the solution rises, the solvation of the dipeptide and its anion weakens, due mainly to the destabilization of the carboxyl group.
Gibbs measures based on 1d (an)harmonic oscillators as mean-field limits
NASA Astrophysics Data System (ADS)
Lewin, Mathieu; Nam, Phan Thành; Rougerie, Nicolas
2018-04-01
We prove that Gibbs measures based on 1D defocusing nonlinear Schrödinger functionals with sub-harmonic trapping can be obtained as the mean-field/large temperature limit of the corresponding grand-canonical ensemble for many bosons. The limit measure is supported on Sobolev spaces of negative regularity, and the corresponding density matrices are not trace-class. The general proof strategy is that of a previous paper of ours, but we have to complement it with Hilbert-Schmidt estimates on reduced density matrices.
Quantitative Boltzmann-Gibbs Principles via Orthogonal Polynomial Duality
NASA Astrophysics Data System (ADS)
Ayala, Mario; Carinci, Gioia; Redig, Frank
2018-06-01
We study fluctuation fields of orthogonal polynomials in the context of particle systems with duality. We thereby obtain a systematic orthogonal decomposition of the fluctuation fields of local functions, where the order of every term can be quantified. This implies a quantitative generalization of the Boltzmann-Gibbs principle. In the context of independent random walkers, we complete this program, including also fluctuation fields in a non-stationary context (local equilibrium). For other interacting particle systems with duality, such as the symmetric exclusion process, similar results can be obtained under precise conditions on the n-particle dynamics.
A numerical spectral approach to solve the dislocation density transport equation
NASA Astrophysics Data System (ADS)
Djaka, K. S.; Taupin, V.; Berbenni, S.; Fressengeas, C.
2015-09-01
A numerical spectral approach is developed to solve in a fast, stable and accurate fashion, the quasi-linear hyperbolic transport equation governing the spatio-temporal evolution of the dislocation density tensor in the mechanics of dislocation fields. The approach relies on using the Fast Fourier Transform algorithm. Low-pass spectral filters are employed to control both the high frequency Gibbs oscillations inherent to the Fourier method and the fast-growing numerical instabilities resulting from the hyperbolic nature of the transport equation. The numerical scheme is validated by comparison with an exact solution in the 1D case corresponding to dislocation dipole annihilation. The expansion and annihilation of dislocation loops in 2D and 3D settings are also produced and compared with finite element approximations. The spectral solutions are shown to be stable, more accurate for low Courant numbers and much less computation time-consuming than the finite element technique based on an explicit Galerkin-least squares scheme.
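A 1D toy version of the filtered spectral strategy (our sketch, assuming a periodic domain, exact spectral advection, and an exponential low-pass filter applied each step to damp the Gibbs oscillations produced by the sharp profile):

    import numpy as np

    n, L, c, dt, steps = 256, 2 * np.pi, 1.0, 1e-3, 200
    x = np.linspace(0, L, n, endpoint=False)
    k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi    # integer wavenumbers
    u = np.where((x > 2) & (x < 3), 1.0, 0.0)     # discontinuous initial profile

    p_ord, alpha = 16, 36.0                       # exponential filter parameters
    filt = np.exp(-alpha * (np.abs(k) / np.abs(k).max()) ** p_ord)

    u_hat = np.fft.fft(u)
    for _ in range(steps):
        u_hat *= np.exp(-1j * c * k * dt)  # exact advection step in Fourier space
        u_hat *= filt                      # low-pass filter controls oscillations
    u = np.real(np.fft.ifft(u_hat))
    print("min/max after transport:", u.min(), u.max())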
Personalized anticancer therapy selection using molecular landscape topology and thermodynamics.
Rietman, Edward A; Scott, Jacob G; Tuszynski, Jack A; Klement, Giannoula Lakka
2017-03-21
Personalized anticancer therapy requires continuous consolidation of emerging bioinformatics data into meaningful and accurate information streams. The use of novel mathematical and physical approaches, namely topology and thermodynamics can enable merging differing data types for improved accuracy in selecting therapeutic targets. We describe a method that uses chemical thermodynamics and two topology measures to link RNA-seq data from individual patients with academically curated protein-protein interaction networks to select clinically relevant targets for treatment of low-grade glioma (LGG). We show that while these three histologically distinct tumor types (astrocytoma, oligoastrocytoma, and oligodendroglioma) may share potential therapeutic targets, the majority of patients would benefit from more individualized therapies. The method involves computing Gibbs free energy of the protein-protein interaction network and applying a topological filtration on the energy landscape to produce a subnetwork known as persistent homology. We then determine the most likely best target for therapeutic intervention using a topological measure of the network known as Betti number. We describe the algorithm and discuss its application to several patients.
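The network energy can be caricatured with a toy score; the per-node form below (c_i times the log of c_i over the summed concentration of its closed neighborhood) is our reading of a Gibbs-like network energy and should be treated as an assumption, with the tiny network and expression levels invented:

    import numpy as np

    adjacency = {                 # hypothetical miniature PPI network
        "A": ["B", "C"],
        "B": ["A", "C"],
        "C": ["A", "B", "D"],
        "D": ["C"],
    }
    expr = {"A": 5.0, "B": 1.0, "C": 8.0, "D": 2.0}  # made-up RNA-seq levels
    total = sum(expr.values())
    c = {k: v / total for k, v in expr.items()}      # normalized concentrations

    G = 0.0
    for i, nbrs in adjacency.items():
        neighborhood = c[i] + sum(c[j] for j in nbrs)   # closed neighborhood mass
        G += c[i] * np.log(c[i] / neighborhood)
    print("Gibbs-like network energy:", G)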
NASA Astrophysics Data System (ADS)
Guest, Will; Cashman, Neil; Plotkin, Steven
2009-03-01
Protein misfolding is a necessary step in the pathogenesis of many diseases, including Creutzfeldt-Jakob disease (CJD) and familial amyotrophic lateral sclerosis (fALS). Identifying unstable structural elements in their causative proteins elucidates the early events of misfolding and presents targets for inhibition of the disease process. An algorithm was developed to calculate the Gibbs free energy of unfolding for all sequence-contiguous regions of a protein using three methods to parameterize energy changes: a modified Gō model, changes in solvent-accessible surface area, and solution of the Poisson-Boltzmann equation. The entropic effects of disulfide bonds and post-translational modifications are treated analytically. The algorithm incorporates a novel method for finding local dielectric constants inside a protein to accurately handle charge effects. We have predicted the unstable parts of prion protein and superoxide dismutase 1, the proteins involved in CJD and fALS respectively, and have used these regions as epitopes to prepare antibodies that are specific to the misfolded conformation and show promise as therapeutic agents.
Identifying Unstable Regions of Proteins Involved in Misfolding Diseases
NASA Astrophysics Data System (ADS)
Guest, Will; Cashman, Neil; Plotkin, Steven
2009-05-01
Protein misfolding is a necessary step in the pathogenesis of many diseases, including Creutzfeldt-Jakob disease (CJD) and familial amyotrophic lateral sclerosis (fALS). Identifying unstable structural elements in their causative proteins elucidates the early events of misfolding and presents targets for inhibition of the disease process. An algorithm was developed to calculate the Gibbs free energy of unfolding for all sequence-contiguous regions of a protein using three methods to parameterize energy changes: a modified Gō model, changes in solvent-accessible surface area, and all-atom molecular dynamics. The entropic effects of disulfide bonds and post-translational modifications are treated analytically. The algorithm incorporates a novel method for finding local dielectric constants inside a protein to accurately handle charge effects. We have predicted the unstable parts of prion protein and superoxide dismutase 1, the proteins involved in CJD and fALS respectively, and have used these regions as epitopes to prepare antibodies that are specific to the misfolded conformation and show promise as therapeutic agents.
Kouritzin, Michael A; Newton, Fraser; Wu, Biao
2013-04-01
Herein, we propose generating CAPTCHAs through random field simulation and give a novel, effective and efficient algorithm to do so. Indeed, we demonstrate that sufficient information about word tests for easy human recognition is contained in the site marginal probabilities and the site-to-nearby-site covariances and that these quantities can be embedded directly into certain conditional probabilities, designed for effective simulation. The CAPTCHAs are then partial random realizations of the random CAPTCHA word. We start with an initial random field (e.g., randomly scattered letter pieces) and use Gibbs resampling to re-simulate portions of the field repeatedly using these conditional probabilities until the word becomes human-readable. The residual randomness from the initial random field together with the random implementation of the CAPTCHA word provide significant resistance to attack. This results in a CAPTCHA, which is unrecognizable to modern optical character recognition but is recognized about 95% of the time in a human readability study.
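The core idea can be sketched with a toy Gibbs resampler that pulls a random binary field toward a word-shaped site marginal while rewarding nearest-neighbor agreement; the mask and the two weights below are invented, and the paper's conditionals embed richer covariance information:

    import numpy as np

    rng = np.random.default_rng(3)
    h, w = 12, 40
    mask = np.zeros((h, w)); mask[4:8, 5:35] = 1.0    # stand-in for a word shape
    field = (rng.random((h, w)) < 0.5).astype(float)  # initial random field

    beta, gamma = 1.5, 2.0  # neighbor-coupling and marginal weights (assumed)
    for sweep in range(30):
        for i in range(h):
            for j in range(w):
                nb = [field[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                      if 0 <= a < h and 0 <= b < w]
                # log-odds that site (i, j) is "on": neighbor votes + marginal pull
                logit = beta * (2 * sum(nb) - len(nb)) + gamma * (2 * mask[i, j] - 1)
                field[i, j] = float(rng.random() < 1.0 / (1.0 + np.exp(-logit)))

    print("".join("#" if v else "." for v in field[5]))  # one row of the field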
K-Nearest Neighbor Algorithm Optimization in Text Categorization
NASA Astrophysics Data System (ADS)
Chen, Shufeng
2018-01-01
The K-Nearest Neighbor (KNN) classification algorithm is one of the simplest methods in data mining. It has been widely used in classification, regression and pattern recognition. The traditional KNN method has some shortcomings, such as the large amount of similarity computation per query and a strong dependence on the sample library capacity. In this paper, a method of representative sample optimization based on the CURE clustering algorithm is proposed. On this basis, we present a quick algorithm, QKNN (quick k-nearest neighbor), to find the k nearest neighbor samples, which greatly reduces the similarity computation. The experimental results show that this algorithm can effectively reduce the number of samples and speed up the search for the k nearest neighbor samples, improving the performance of the algorithm.
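For reference, the baseline that the paper speeds up fits in a dozen lines of plain numpy; the CURE-based sample reduction and the QKNN search itself are not reproduced here:

    import numpy as np

    def knn_predict(X_train, y_train, X_test, k=3):
        preds = []
        for q in X_test:
            d = np.linalg.norm(X_train - q, axis=1)      # distances to all samples
            nearest = y_train[np.argsort(d)[:k]]         # labels of the k nearest
            preds.append(np.bincount(nearest).argmax())  # majority vote
        return np.array(preds)

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    print(knn_predict(X, y, np.array([[0.2, 0.1], [2.9, 3.2]])))  # -> [0 1]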
Dynamics of Contact Line Pinning and Depinning of Droplets Evaporating on Microribs.
Mazloomi Moqaddam, Ali; Derome, Dominique; Carmeliet, Jan
2018-05-15
The contact line dynamics of evaporating droplets deposited on a set of parallel microribs is analyzed with the use of a recently developed entropic lattice Boltzmann model for two-phase flow. Upon deposition, part of the droplet penetrates into the space between ribs because of capillary action, whereas the remaining liquid of the droplet remains pinned on top of the microribs. In the first stage, evaporation continues until the droplet undergoes a series of pinning-depinning events, showing alternately the constant contact radius and constant contact angle modes. While the droplet is pinned, evaporation results in a contact angle reduction, whereas the contact radius remains constant. At a critical contact angle, the contact line depins, the contact radius reduces, and the droplet rearranges to a larger apparent contact angle. This pinning-depinning behavior goes on until the liquid above the microribs has evaporated. By computing the Gibbs free energy, taking into account the interfacial energy, pressure terms, and viscous dissipation due to the drop's internal flow, we found that the unpinning of the contact line results from an excess in Gibbs free energy. The spacing distance and the rib height play an important role in controlling the pinning-depinning cycling, the critical contact angle, and the excess Gibbs free energy. However, we found that neither the critical contact angle nor the maximum excess Gibbs free energy depends on the rib width. We show that the terms contributing to the excess Gibbs free energy, that is, the pressure term, viscous dissipation, and interfacial energy, can be varied separately by varying different geometrical properties of the microribs. It is demonstrated that, by varying the spacing distance between the ribs, the energy barrier is controlled by the interfacial energy, while the contribution of viscous dissipation is dominant if either the rib height or the rib width is changed. The main finding of this study is that, for microrib-patterned surfaces, the energy barrier required for the contact line to depin can be enlarged by increasing the spacing or the rib height, which can be important for practical applications.
NASA Astrophysics Data System (ADS)
Hsu, Chih-Chieh; Sun, Jhen-Kai; Tsao, Che-Chang; Chuang, Po-Yang
2017-08-01
The effects of Al, Mo, and Pt bottom electrodes (BEs) on the resistive switching characteristics of sol-gel HfOx films were investigated in this work. To avoid influences of plasma or thermal energy on the HfOx resistive switching characteristics, the top electrodes were formed by pressing indium balls onto the HfOx surface rather than by using a sputter or an evaporator. When using Mo as the BE, the as-deposited HfOx film gives forming-free resistive switching behavior with low set/reset voltages of 0.28 V / -0.54 V. In contrast, non-switching characteristics of the HfOx films were observed when using Al and Pt as the BEs. The HfOx conduction current was found to be highly dependent on the BE. However, when the HfOx films on the different BEs were annealed at 350 °C in an oxygen ambient, the resistive switching behavior of the HfOx/Mo sample was absent, while it appeared in the HfOx/Al sample. The differences in the I-V characteristics of the HfOx films on different BEs were explained by considering the Gibbs free energies of the interfacial oxide layers. X-ray photoelectron spectroscopy (XPS) depth profiling was used to examine the interfacial oxide layer. The resistive switching mechanism was also studied.
Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations
Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth
2016-01-01
Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters because of its ability to provide full posterior estimates, take uncertainty into account and generalize to unseen data. Inference is performed with Markov Chain Monte Carlo, which uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be analytically computed, we use a Metropolis-Hastings-within-Gibbs framework, according to which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones are generated with Metropolis-Hastings steps from appropriate candidate-generating densities. We further show that the corresponding Markov chain is uniformly ergodic, ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to the previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation towards inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation and other applications. PMID:28649173
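A minimal Metropolis-Hastings-within-Gibbs loop on a toy normal model (ours, purely illustrative): the mean has a closed-form conditional and is drawn directly, while the log-scale parameter, whose conditional is not of standard form, gets a random-walk Metropolis update:

    import numpy as np

    rng = np.random.default_rng(7)
    y = rng.normal(2.0, 1.5, size=200)  # synthetic data
    n = len(y)

    mu, log_s = 0.0, 0.0
    for it in range(5000):
        s2 = np.exp(2 * log_s)
        # Gibbs step: mu | s2, y is Normal(mean(y), s2/n) under a flat prior
        mu = rng.normal(y.mean(), np.sqrt(s2 / n))

        # Metropolis step for log_s, with a standard normal prior on log_s
        def log_post(ls):
            return (-n * ls - 0.5 * np.sum((y - mu) ** 2) / np.exp(2 * ls)
                    - 0.5 * ls ** 2)
        prop = log_s + 0.1 * rng.standard_normal()
        if np.log(rng.random()) < log_post(prop) - log_post(log_s):
            log_s = prop

    print("mu ~", mu, " sigma ~", np.exp(log_s))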
Ji, Jiayuan; Zhao, Lingling; Tao, Lu; Lin, Shangchao
2017-06-29
In CO2 geological storage, the interfacial tension (IFT) between supercritical CO2 and brine is critical for the storage capacity design to prevent CO2 leakage. IFT depends not only on the interfacial molecular properties but also on the environmental conditions at different storage sites. In this paper, supercritical CO2-NaCl solution systems are modeled at 343-373 K and 6-35 MPa at a salinity of 1.89 mol/L using molecular dynamics simulations. After computing and comparing the molecular density profile across the interface, the atomic radial distribution function, the molecular orientation distribution, the molecular Gibbs surface excess (derived from the molecular density profile), and the CO2-hydrate number density under the above environmental conditions, we confirm that only the molecular Gibbs surface excess of CO2 molecules and the CO2-hydrate number density correlate strongly with the temperature- and pressure-dependent IFTs. We also compute the populations of two distinct CO2-hydrate structures (T-type and H-type) and attribute the observed dependence of IFTs to the dominance of the more stable, surfactant-like T-type CO2-hydrates at the interface. On the basis of these new molecular mechanisms behind IFT variations, this study could guide the rational design of suitable injection pressure and temperature conditions. We believe that the above two molecular-level metrics (Gibbs surface excess and hydrate number density) are of great fundamental importance for understanding the supercritical CO2-water interface and for engineering applications in geological CO2 storage.
A semi-Lagrangian advection scheme for radioactive tracers in the NCEP Regional Spectral Model (RSM)
NASA Astrophysics Data System (ADS)
Chang, E.-C.; Yoshimura, K.
2015-10-01
In this study, the non-iteration dimensional-split semi-Lagrangian (NDSL) advection scheme is applied to the National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM) to alleviate the Gibbs phenomenon. The Gibbs phenomenon is a problem wherein negative values of positive-definite quantities (e.g., moisture and tracers) are generated by the spectral space transformation in a spectral model system. To solve this problem, the spectral prognostic specific humidity and radioactive tracer advection scheme is replaced by the NDSL advection scheme, which considers advection of tracers in a grid system without spectral space transformations. A regional version of the NDSL is developed in this study and is applied to the RSM. Idealized experiments show that the regional version of the NDSL is successful. The model runs for an actual case study suggest that the NDSL can successfully advect radioactive tracers (iodine-131 and cesium-137) without noise from the Gibbs phenomenon. The NDSL can also remove negative specific humidity values produced in spectral calculations without losing detailed features.
Shielding property for thermal equilibrium states in the quantum Ising model
NASA Astrophysics Data System (ADS)
Móller, N. S.; de Paula, A. L.; Drumond, R. C.
2018-03-01
We show that Gibbs states of nonhomogeneous transverse Ising chains satisfy a shielding property. Namely, whatever the fields on each spin and the exchange couplings between neighboring spins are, if the field at one particular site is zero, then the reduced states of the subchains to the right and to the left of this site are exactly the Gibbs states of each subchain alone. Therefore, even if there is a strong exchange coupling between the extremal sites of the subchains, the Gibbs states of each subchain behave as if there were no interaction between them. In general, if a lattice can be divided into two disconnected regions separated by an interface of sites with zero applied field, we can guarantee a similar result only if the interface contains a single site. Already for an interface with two sites we show an example where the property does not hold. When it holds, however, we show that if a perturbation of the Hamiltonian parameters is made on one side of the lattice, the other side remains completely unchanged, with regard to both its equilibrium state and its dynamics.
Hemingway, B.S.
1990-01-01
Smoothed values of the heat capacities and derived thermodynamic functions are given for bunsenite, magnetite, and hematite for the temperature interval 298.15 to 1800 K. The Gibbs free energy for the reaction Ni + 0.5O2 = NiO is given by the equation ΔrG°(T) = −238.39 + 0.1146T − 3.72 × 10⁻³ T ln T (in kJ) and is valid from 298.15 to 1700 K. The Gibbs free energy (in kJ) of the reaction 2 magnetite + 3 quartz = 3 fayalite + O2 may be calculated from the equation ΔrG°(T) = 474.155 − 0.16120T, valid between 800 and 1400 K. The Gibbs free energy (in kJ) of the reaction 6 hematite = 4 magnetite + O2 may be calculated from the following equations: ΔrG°(T) = 496.215 − 0.27114T, ΔrG°(T) = 514.690 − 0.29753T, ΔrG°(T) = 501.348 − 0.2854T. -from Author
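The NiO equation above is easy to evaluate numerically. The short Python sketch below assumes the coefficients yield kJ per mole of NiO, consistent with the kJ units quoted for the other reactions; at 298.15 K it returns about −210 kJ/mol, in line with commonly tabulated values for the standard Gibbs energy of formation of NiO.

```python
import math

def delta_g_nio(T):
    """Delta_r G(T) in kJ/mol for Ni + 0.5 O2 = NiO, using the
    smoothed equation quoted above (valid 298.15 K to 1700 K)."""
    if not 298.15 <= T <= 1700.0:
        raise ValueError("equation is fitted only for 298.15-1700 K")
    return -238.39 + 0.1146 * T - 3.72e-3 * T * math.log(T)

for T in (298.15, 1000.0, 1700.0):
    print(f"T = {T:7.2f} K  ->  DrG = {delta_g_nio(T):8.2f} kJ/mol")
```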
Standard Gibbs energy of formation of Mo3Te4 by emf measurements
NASA Astrophysics Data System (ADS)
Mallika, C.; Sreedharan, O. M.
1990-03-01
The emf of the galvanic cells Pt, Mo, MoO2 ¦ 8 YSZ ¦ 'FeO', Fe, Pt (I) and Pt, Fe, 'FeO' ¦ 8 YSZ ¦ MoO2, Mo3Te4, MoTe2(α), C, Pt (II) was measured over the temperature ranges 837 to 1151 K and 775 to 1196 K, respectively, using 8 mass% yttria-stabilized zirconia (8 YSZ) as the solid electrolyte. From the emf values, the partial molar Gibbs energy of solution of molybdenum in Mo3Te4/MoTe2(α), ΔḠMo, was found to be ΔḠMo ± 1.19 (kJ/mol) = −25.08 + 0.00420T (K). Using the literature data for the Gibbs energy of formation of MoTe2(α), the expression ΔG°f(Mo3Te4, s) ± 5.97 (kJ/mol) = −253.58 + 0.09214T (K) was derived for the range 775 to 1196 K. A third-law analysis yielded a value of −209 ± 10 kJ/mol for ΔH°f,298 of Mo3Te4(s).
Interfacial interactions between plastic particles in plastics flotation.
Wang, Chong-qing; Wang, Hui; Gu, Guo-hua; Fu, Jian-gang; Lin, Qing-quan; Liu, You-nian
2015-12-01
Plastics flotation used for recycling of plastic wastes is receiving increasing attention for its industrial application. In order to study the mechanism of plastics flotation, the interfacial interactions between plastic particles in a flotation system were investigated through calculation of the Lifshitz-van der Waals (LW) function, the Lewis acid-base (AB) Gibbs function, and the extended Derjaguin-Landau-Verwey-Overbeek potential energy profiles. The results showed that the van der Waals force between plastic particles is an attractive force in the flotation system. The large hydrophobic attraction, caused by the AB Gibbs function, is the dominant interparticle force. Wetting agents exert significant effects on the interfacial interactions between plastic particles. It is found that adsorption of wetting agents promotes dispersion of plastic particles and decreases their floatability. Pneumatic flotation may improve the recovery and purity of separated plastics through selective adsorption of wetting agents on the plastic surface. The relationships between hydrophobic attraction and surface properties were also examined. It is revealed that there exists a third-order polynomial relationship between the AB Gibbs function and the Lewis base component. Our findings provide some insights into the mechanism of plastics flotation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Inference with minimal Gibbs free energy in information field theory.
Ensslin, Torsten A; Weig, Cornelius
2010-11-01
Non-linear and non-gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a gaussian signal with unknown spectrum, and (iii) inference of a poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-gaussian posterior.
Chen, Zhiru; Hong, Wenxue
2016-02-01
Considering the low prediction accuracy for positive samples and the poor overall classification caused by the unbalanced sample data of microRNA (miRNA) targets, we propose in this paper a support vector machine (SVM)-integration of under-sampling and weight (IUSM) algorithm, an under-sampling method based on ensemble learning. The algorithm adopts SVM as the learning algorithm and AdaBoost as the integration framework, and embeds clustering-based under-sampling into the iterative process, aiming at reducing the degree of unbalanced distribution of positive and negative samples. Meanwhile, in the process of adaptive weight adjustment of the samples, the SVM-IUSM algorithm eliminates abnormal negative samples with a robust sample-weight smoothing mechanism so as to avoid over-learning. Finally, the prediction of the miRNA target integrated classifier is achieved by combining multiple weak classifiers through a voting mechanism. The experiments revealed that the SVM-IUSM algorithm, compared with other algorithms on unbalanced dataset collections, could not only improve the accuracy on positive targets and the overall classification performance, but also enhance the generalization ability of the miRNA target classifier.
Observation uncertainty in reversible Markov chains.
Metzner, Philipp; Weber, Marcus; Schütte, Christof
2010-09-01
In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real life process. If the essential dynamics can be assumed to be (approximately) memoryless then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Markov chain Monte Carlo framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
Vogel, Thomas; Perez, Danny
2015-08-28
We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. Furthermore, the method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.
Gibbs free-energy difference between the glass and crystalline phases of a Ni-Zr alloy
NASA Technical Reports Server (NTRS)
Ohsaka, K.; Trinh, E. H.; Holzer, J. C.; Johnson, W. L.
1993-01-01
The heats of eutectic melting and devitrification, and the specific heats of the crystalline, glass, and liquid phases have been measured for a Ni24Zr76 alloy. The data are used to calculate the Gibbs free-energy difference, ΔG(AC), between the real glass and the crystal on the assumption that the liquid-glass transition is second order. The result shows that ΔG(AC) increases continuously as the temperature decreases, in contrast to the ideal glass case, where ΔG(AC) is assumed to be independent of temperature.
Solvation thermodynamics of L-cystine, L-tyrosine, and L-leucine in aqueous-electrolyte media
NASA Astrophysics Data System (ADS)
Roy, Sanjay; Guin, Partha Sarathi; Mahali, Kalachand; Dolui, Bijoy Krishna
2017-12-01
Solubilities of L-cystine, L-tyrosine, and L-leucine in aqueous NaCl media at 298.15 K have been studied. Essential related solvent parameters such as molar mass and molar volume were also determined. The results are used to evaluate the standard transfer Gibbs free energy, the cavity-forming enthalpy of transfer, the cavity-forming transfer Gibbs free energy, and dipole-dipole interaction effects during the course of solvation. Various weak interactions involving solute-solvent or solvent-solvent molecules were characterized in order to determine their role in the solvation of these amino acids.
Thermodynamics of BTZ black holes in gravity’s rainbow
NASA Astrophysics Data System (ADS)
Alsaleh, Salwa
2017-05-01
In this paper, we deform the thermodynamics of a BTZ black hole using rainbow functions in gravity's rainbow. The rainbow functions are motivated by results in loop quantum gravity and noncommutative geometry. It is observed that the thermodynamics gets deformed by these rainbow functions, indicating the existence of a remnant. However, the Gibbs free energy does not get deformed by these rainbow functions, and so the critical behavior obtained from the Gibbs free energy is unchanged by this deformation. This is because the deformation in the entropy cancels out the deformation in the temperature.
Hodge, Ian M
2006-08-01
The nonlinear, thermorheologically complex Adam-Gibbs (extended "Scherer-Hodge") model for the glass transition is applied to enthalpy relaxation data reported by Sartor, Mayer, and Johari for hydrated methemoglobin. A sensible range of values for the average localized activation energy is obtained (100-200 kJ mol(-1)). The standard deviation of the inferred Gaussian distribution of activation energies, computed from the reported KWW beta-parameter, is approximately 30% of the average, consistent with the suggestion that some relaxation processes in hydrated proteins have exceptionally low activation energies.
On the dispute between Boltzmann and Gibbs entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buonsante, Pierfrancesco; Franzosi, Roberto, E-mail: roberto.franzosi@ino.it; Smerzi, Augusto
2016-12-15
The validity of the concept of negative temperature has been recently challenged by arguing that the Boltzmann entropy (that allows negative temperatures) is inconsistent from a mathematical and statistical point of view, whereas the Gibbs entropy (that does not admit negative temperatures) provides the correct definition for the microcanonical entropy. Here we prove that the Boltzmann entropy is thermodynamically and mathematically consistent. Analytical results on two systems supporting negative temperatures illustrate the scenario we propose. In addition we numerically study a lattice system to show that negative temperature equilibrium states are accessible and obey standard statistical mechanics prediction.
2010-01-01
Background Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. Results Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), Rhino virus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of performing nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. Conclusions Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data. PMID:21062443
Zhang, He-Hua; Yang, Liuyang; Liu, Yuchuan; Wang, Pin; Yin, Jun; Li, Yongming; Qiu, Mingguo; Zhu, Xueru; Yan, Fang
2016-11-16
The use of speech-based data in the classification of Parkinson disease (PD) has been shown to provide an effective, non-invasive mode of classification in recent years. Thus, there has been increased interest in speech pattern analysis methods applicable to Parkinsonism for building predictive tele-diagnosis and tele-monitoring models. One of the obstacles in optimizing classifications is reducing noise within the collected speech samples, thus ensuring better classification accuracy and stability. While the currently used methods are effective, the ability to invoke instance selection has seldom been examined. In this study, a PD classification algorithm was proposed and examined that combines a multi-edit-nearest-neighbor (MENN) algorithm and an ensemble learning algorithm. First, the MENN algorithm is applied to select optimal training speech samples iteratively, thereby obtaining samples with high separability. Next, an ensemble learning algorithm, random forest (RF) or decorrelated neural network ensembles (DNNE), is used to generate trained models from the selected training samples. Lastly, the trained ensemble learning algorithms are applied to the test samples for PD classification. This proposed method was examined using a recently deposited public dataset and compared against other currently used algorithms for validation. Experimental results showed that the proposed algorithm obtained the largest improvement in classification accuracy (29.44%) compared with the other algorithms examined. Furthermore, the MENN algorithm alone was found to improve classification accuracy by as much as 45.72%. Moreover, the proposed algorithm was found to exhibit higher stability, particularly when combining the MENN and RF algorithms. This study showed that the proposed method could improve PD classification when using speech data and can be applied to future studies seeking to improve PD classification methods.
A sampling algorithm for segregation analysis
Tier, Bruce; Henshall, John
2001-01-01
Methods for detecting Quantitative Trait Loci (QTL) without markers have generally used iterative peeling algorithms for determining genotype probabilities. These algorithms have considerable shortcomings in complex pedigrees. A Markov chain Monte Carlo (MCMC) method which samples the pedigree of the whole population jointly is described. Simultaneous sampling of the pedigree was achieved by sampling descent graphs using the Metropolis-Hastings algorithm. A descent graph describes the inheritance state of each allele and provides pedigrees guaranteed to be consistent with Mendelian sampling. Sampling descent graphs overcomes most, if not all, of the limitations incurred by iterative peeling algorithms. The algorithm was able to find the QTL in most of the simulated populations. However, when the QTL was not modeled or not found, its effect was ascribed to the polygenic component. No QTL were detected when they were not simulated. PMID:11742631
Rogasch, Julian Mm; Hofheinz, Frank; Lougovski, Alexandr; Furth, Christian; Ruf, Juri; Großer, Oliver S; Mohnike, Konrad; Hass, Peter; Walke, Mathias; Amthauer, Holger; Steffen, Ingo G
2014-12-01
F18-fluorodeoxyglucose positron-emission tomography (FDG-PET) reconstruction algorithms can have substantial influence on quantitative image data used, e.g., for therapy planning or monitoring in oncology. We analyzed radial activity concentration profiles of differently reconstructed FDG-PET images to determine the influence of varying signal-to-background ratios (SBRs) on the respective spatial resolution, activity concentration distribution, and quantification (standardized uptake value [SUV], metabolic tumor volume [MTV]). Measurements were performed on a Siemens Biograph mCT 64 using a cylindrical phantom containing four spheres (diameter, 30 to 70 mm) filled with F18-FDG, applying three SBRs (SBR1, 16:1; SBR2, 6:1; SBR3, 2:1). Images were reconstructed employing six algorithms (filtered backprojection [FBP], FBP + time-of-flight analysis [FBP + TOF], 3D-ordered subset expectation maximization [3D-OSEM], 3D-OSEM + TOF, point spread function [PSF], PSF + TOF). Spatial resolution was determined by fitting the convolution of the object geometry with a Gaussian point spread function to radial activity concentration profiles. MTV delineation was performed using fixed thresholds and semiautomatic background-adapted thresholding (ROVER, ABX, Radeberg, Germany). The pairwise Wilcoxon test revealed significantly higher spatial resolutions for PSF + TOF (up to 4.0 mm) compared to PSF, FBP, FBP + TOF, 3D-OSEM, and 3D-OSEM + TOF at all SBRs (each P < 0.05), with the highest differences for SBR1 decreasing to the lowest for SBR3. Edge elevations in radial activity profiles (Gibbs artifacts) were highest for PSF and PSF + TOF, declining with decreasing SBR (PSF + TOF, largest sphere: SBR1, 6.3%; SBR3, 2.7%). These artifacts induce substantial SUVmax overestimation compared to the reference SUV for PSF algorithms at SBR1 and SBR2, leading to substantial MTV underestimation in threshold-based segmentation. In contrast, both PSF algorithms provided the lowest deviation of SUVmean from the reference SUV at SBR1 and SBR2. At high contrast, the PSF algorithms provided the highest spatial resolution and lowest SUVmean deviation from the reference SUV. In contrast, both algorithms showed the highest deviations in SUVmax and threshold-based MTV definition. At low contrast, all investigated reconstruction algorithms performed approximately equally. The use of PSF algorithms for quantitative PET data, e.g., for target volume definition or in serial PET studies, should be performed with caution, especially if comparing SUV of lesions with high and low contrasts.
Improved pulse laser ranging algorithm based on high speed sampling
NASA Astrophysics Data System (ADS)
Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang
2016-10-01
Narrow pulse laser ranging achieves long-range target detection using laser pulses with low divergence. Pulse laser ranging is widely used in the military, industrial, civil, engineering, and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high-speed sampling is studied. First, theoretical simulation models of the laser emission and of the pulse laser ranging algorithm were built and analyzed, and an improved pulse ranging algorithm was developed. The new algorithm combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system was set up to implement the improved algorithm. The hardware system includes a laser diode, a laser detector, and a high-sample-rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm, fusing the matched filter and CFD algorithms, was implemented in an FPGA chip. Finally, a laser ranging experiment was carried out on the hardware system, comparing the ranging performance of the improved algorithm to that of the matched filter algorithm and the CFD algorithm alone. The test results demonstrate that the hardware system realizes high-speed processing and high-speed sampling data transmission, and that the improved algorithm achieves 0.3 m ranging precision. The measured performance meets expectations and is consistent with the theoretical simulation.
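To make the fusion of the matched filter and CFD concrete, here is a sketch in Python; the Gaussian pulse shape, noise level, CFD delay, and fraction are illustrative assumptions rather than the authors' hardware parameters. The CFD zero crossing sits at a fixed, amplitude-independent offset from the pulse centre, which is calibrated out in a real rangefinder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy received echo: Gaussian pulse (known template) plus noise.
template = np.exp(-0.5 * (np.arange(-50, 51) / 10.0) ** 2)
signal = 0.1 * rng.normal(size=2000)
true_idx = 1200
signal[true_idx - 50:true_idx + 51] += template

# Matched filter: correlate with the pulse shape (maximizes SNR).
mf = np.correlate(signal, template, mode="same")
peak = int(np.argmax(mf))

# CFD: attenuated prompt copy minus delayed copy; the zero crossing
# near the peak is, to first order, amplitude-independent.
delay, frac = 8, 0.5
cfd = frac * mf - np.roll(mf, delay)
region = np.arange(peak - 30, peak + 1)
cross = np.where(np.diff(np.sign(cfd[region])) != 0)[0]
zc = int(region[cross[-1]]) if cross.size else peak
print("true:", true_idx, " matched-filter peak:", peak, " CFD crossing:", zc)
```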
An analysis of the massless planet approximation in transit light curve models
NASA Astrophysics Data System (ADS)
Millholland, Sarah; Ruch, Gerry
2015-08-01
Many extrasolar planet transit light curve models use the approximation of a massless planet. They approximate the planet as orbiting elliptically with the host star at the orbit’s focus instead of depicting the planet and star as both orbiting around a common center of mass. This approximation should generally be very good because the transit is a small fraction of the full-phase curve and the planet to stellar mass ratio is typically very small. However, to fully examine the legitimacy of this approximation, it is useful to perform a robust, all-parameter space-encompassing statistical comparison between the massless planet model and the more accurate model.Towards this goal, we establish two questions: (1) In what parameter domain is the approximation invalid? (2) If characterizing an exoplanetary system in this domain, what is the error of the parameter estimates when using the simplified model? We first address question (1). Given each parameter vector in a finite space, we can generate the simplified and more complete model curves. Associated with these model curves is a measure of the deviation between them, such as the root mean square (RMS). We use Gibbs sampling to generate a sample that is distributed according to the RMS surface. The high-density regions in the sample correspond to a large deviation between the models. To determine the domains of these high-density areas, we first employ the Ordering Points to Identify the Clustering Structure (OPTICS) algorithm. We then characterize the subclusters by performing the Patient Rule Induction Method (PRIM) on the transformed Principal Component spaces of each cluster. This process yields descriptors of the parameter domains with large discrepancies between the models.To consider question (2), we start by generating synthetic transit curve observations in the domains specified by the above analysis. We then derive the best-fit parameters of these synthetic light curves according to each model and examine the quality of agreement between the estimated parameters. Taken as a whole, these steps allow for a thorough analysis of the validity of the massless planet approximation.
Nangia, Shikha; Jasper, Ahren W; Miller, Thomas F; Truhlar, Donald G
2004-02-22
The most widely used algorithm for Monte Carlo sampling of electronic transitions in trajectory surface hopping (TSH) calculations is the so-called anteater algorithm, which is inefficient for sampling low-probability nonadiabatic events. We present a new sampling scheme (called the army ants algorithm) for carrying out TSH calculations that is applicable to systems with any strength of coupling. The army ants algorithm is a form of rare event sampling whose efficiency is controlled by an input parameter. By choosing a suitable value of the input parameter the army ants algorithm can be reduced to the anteater algorithm (which is efficient for strongly coupled cases), and by optimizing the parameter the army ants algorithm may be efficiently applied to systems with low-probability events. To demonstrate the efficiency of the army ants algorithm, we performed atom-diatom scattering calculations on a model system involving weakly coupled electronic states. Fully converged quantum mechanical calculations were performed, and the probabilities for nonadiabatic reaction and nonreactive deexcitation (quenching) were found to be on the order of 10⁻⁸. For such low-probability events the anteater sampling scheme requires a large number of trajectories (approximately 10¹⁰) to obtain good statistics and converged semiclassical results. In contrast, by using the new army ants algorithm converged results were obtained by running 10⁵ trajectories. Furthermore, the results were found to be in excellent agreement with the quantum mechanical results. Sampling errors were estimated using the bootstrap method, which is validated for use with the army ants algorithm. (c) 2004 American Institute of Physics.
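To make the anteater/army-ants contrast concrete, consider the toy Python sketch below (an illustration of weighted rare-event branching under assumed probabilities, not the actual trajectory-surface-hopping implementation). The branching probability plays the role of the input parameter: setting it equal to the physical hopping probability recovers the anteater scheme, while larger values sample the rare branch often and compensate with weights.

```python
import numpy as np

rng = np.random.default_rng(0)
P_HOP = 1e-4      # assumed per-step hopping probability (toy value)

def anteater(n_traj, n_steps=2):
    """Naive scheme: hop with the physical probability at each step;
    the double-hop event (prob. P_HOP**2) is almost never sampled."""
    hops = rng.random((n_traj, n_steps)) < P_HOP
    return hops.all(axis=1).mean()

def army_ants(n_traj, n_steps=2, p_branch=0.5):
    """Weighted branching: follow the hop branch with probability
    p_branch and carry the weight P_HOP/p_branch; p_branch = P_HOP
    reduces this to the anteater scheme."""
    total = 0.0
    for _ in range(n_traj):
        w = 1.0
        for _ in range(n_steps):
            if rng.random() < p_branch:
                w *= P_HOP / p_branch   # stayed on the rare branch
            else:
                w = 0.0                 # left the event of interest
                break
        total += w
    return total / n_traj

print("exact probability:", P_HOP ** 2)
print("anteater estimate:", anteater(100_000))    # usually exactly 0
print("army-ants estimate:", army_ants(100_000))  # ~1e-8, small variance
```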
Cost-effective analysis of different algorithms for the diagnosis of hepatitis C virus infection.
Barreto, A M E C; Takei, K; E C, Sabino; Bellesa, M A O; Salles, N A; Barreto, C C; Nishiya, A S; Chamone, D F
2008-02-01
We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with that of the conventional algorithm, the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of ELISA anti-HCV samples, using an s/co threshold that shows ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, and 283 were concordant with one another. Indeterminate results from algorithms A and C were resolved by PCR (expanded algorithm), which detected two more positive samples. The estimated cost of algorithms A and B was US$21,299.39 and US$32,397.40, respectively, which were 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one, since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.
Reig, L; Amigó, V; Busquets, D; Calero, J A; Ortiz, J L
2012-08-01
Porous Ti6Al4V samples were produced by microsphere sintering. The zero-order reaction rate model and transition state theory were used to model the sintering process and to estimate the bending strength of the porous samples developed. The evolution of the surface area during the sintering process was used to obtain the sintering parameters (sintering constant, activation energy, frequency factor, constant of activation, and Gibbs energy of activation). These were then correlated with the bending strength in order to obtain a simple model with which to estimate the evolution of the bending strength of the samples when the sintering temperature and time are modified: σ_Y = P + B·[ln(T·t) − ΔG_a/(R·T)]. Although the sintering parameters were obtained only for the microsphere sizes analysed here, the strength of intermediate sizes could easily be estimated with this model. Copyright © 2012 Elsevier B.V. All rights reserved.
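Reading the fitted model as σ_Y = P + B·[ln(T·t) − ΔG_a/(R·T)], which is the combination produced by integrating a zero-order rate law with a transition-state rate constant, it is straightforward to evaluate; the constants below are hypothetical placeholders chosen only to show the predicted trend with temperature and time, not values from the paper.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def bending_strength(T, t, P, B, dGa):
    """sigma_Y = P + B*[ln(T*t) - dGa/(R*T)] from the fitted model
    quoted above; P, B and dGa are material-specific fit constants."""
    return P + B * (math.log(T * t) - dGa / (R * T))

# Hypothetical placeholder constants, only to show the trend that
# strength grows with sintering temperature and time.
P, B, dGa = 50.0, 5.0, 2.0e5   # MPa, MPa, J/mol (assumed)
for T, t in [(1473.0, 2 * 3600), (1473.0, 4 * 3600), (1523.0, 2 * 3600)]:
    s = bending_strength(T, t, P, B, dGa)
    print(f"T = {T:.0f} K, t = {t:6d} s  ->  {s:5.1f} MPa")
```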
Stress versus temperature dependence of activation energies for creep
NASA Technical Reports Server (NTRS)
Freed, A. D.; Raj, S. V.; Walker, K. P.
1992-01-01
The activation energy for creep at low stresses and elevated temperatures is associated with lattice diffusion, where the rate controlling mechanism for deformation is dislocation climb. At higher stresses and intermediate temperatures, the rate controlling mechanism changes from dislocation climb to obstacle-controlled dislocation glide. Along with this change in deformation mechanism occurs a change in the activation energy. When the rate controlling mechanism for deformation is obstacle-controlled dislocation glide, it is shown that a temperature-dependent Gibbs free energy does better than a stress-dependent Gibbs free energy in correlating steady-state creep data for both copper and LiF-22mol percent CaF2 hypereutectic salt.
Impact of uncertainty in expected return estimation on stock price volatility
NASA Astrophysics Data System (ADS)
Kostanjcar, Zvonko; Jeren, Branko; Juretic, Zeljan
2012-11-01
We investigate the origin of volatility in financial markets by defining an analytical model for the time evolution of stock share prices. The defined model is similar to the GARCH class of models, but can additionally exhibit bimodal behaviour in the supply-demand structure of the market. Moreover, it differs from existing Ising-type models. It turns out that the constructed model arises as the thermodynamic limit of a Gibbs probability measure when the number of traders and the number of stock shares approach infinity. The energy functional of the Gibbs probability measure is derived from the Nash equilibrium of the underlying game.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1993-01-01
The investigation of overcoming the Gibbs phenomenon was continued, i.e., obtaining exponential accuracy at all points, including at the discontinuities themselves, from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. It was shown that if we are given the first N expansion coefficients of an L₂ function f(x) in terms of either the trigonometric polynomials or the Chebyshev or Legendre polynomials, an exponentially convergent approximation to the point values of f(x) in any sub-interval in which it is analytic can be constructed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picard, G.; Schneider-Henriquez, J.E.; Fendler, J.H.
Two-exposure interferometric holograms have been shown to sensitively report ultrasmall (10 natm) pressure-induced curvature changes in glyceryl monooleate (GMO) bilayer lipid membranes (BLMs). The number of concentric fringes observed, and hence the lateral distance between the plane of the Teflon and the BLM, increased linearly with increasing transmembrane pressure and led to a value of 1.1 ± 0.05 dyn/cm for the surface tension of the BLM. BLMs with appreciable Plateau-Gibbs borders have been shown to undergo nonuniform deformation; the bilayer portion is distorted less than the surrounding Plateau-Gibbs border upon the application of a transmembrane pressure gradient.
Gutman, E M
2010-10-27
In a recent publication, Olives (2010 J. Phys.: Condens. Matter 22 085005) studied 'the thermodynamics and mechanics of the surface of a deformable body, following and refining the general approach of Gibbs' and believed that 'a new definition of the surface stress is given'. However, in following the usual way of deriving equations of the Gibbs-Duhem type, the author has fallen into a mathematical discrepancy, because he has tried to unite different thermodynamic systems in one equation; moreover, the proposed 'new definition of the surface stress' turns out to be already known from the usual theory of elasticity.
Size Fluctuations of Near Critical Nuclei and Gibbs Free Energy for Nucleation of BDA on Cu(001)
NASA Astrophysics Data System (ADS)
Schwarz, Daniel; van Gastel, Raoul; Zandvliet, Harold J. W.; Poelsema, Bene
2012-07-01
We present a low-energy electron microscopy study of nucleation and growth of BDA on Cu(001) at low supersaturation. At sufficiently high coverage, a dilute BDA phase coexists with c(8×8) crystallites. The real-time microscopic information allows a direct visualization of near-critical nuclei, determination of the supersaturation and the line tension of the crystallites, and, thus, derivation of the Gibbs free energy for nucleation. The resulting critical nucleus size nicely agrees with the measured value. Nuclei up to 4-6 times larger still decay with finite probability, urging reconsideration of the classic perception of a critical nucleus.
Preferential Solvation of Silver (I) Bromate in Methanol-Dimethylsulfoxide Mixtures
NASA Astrophysics Data System (ADS)
Janardhanan, S.; Kalidas, C.
1984-06-01
The solubility of silver bromate, the Gibbs transfer energies of Ag+ and BrO3-, and the solvent transport number in methanol-dimethylsulfoxide mixtures are reported. The solubility of silver bromate increases with the addition of DMSO. The Gibbs energy of transfer of the silver ion (based on the ferrocene reference method) decreases, while that of the bromate ion becomes slightly negative with the addition of DMSO. The solvent transport number Δ passes through a maximum (Δ = 1.0 at X_DMSO = 0.65). From these results, it is concluded that the silver ion is preferentially solvated by DMSO, whereas the bromate ion shows no preferential solvation.
NASA Astrophysics Data System (ADS)
Duque, Michel; Andraca, Adriana; Goldstein, Patricia; del Castillo, Luis Felipe
2018-04-01
The Adam-Gibbs equation has been used for more than five decades, yet a question remains unanswered concerning the temperature dependence of the chemical potential it includes. It is now well established that the behavior of fragile glass formers depends on the temperature region in which they are studied: transport coefficients change due to the appearance of heterogeneity in the liquid as it is supercooled. Using the different forms of the logarithmic shift factor and the form of the configurational entropy, we evaluate this temperature dependence and discuss our results.
Gartner, Thomas E; Epps, Thomas H; Jayaraman, Arthi
2016-11-08
We describe an extension of the Gibbs ensemble molecular dynamics (GEMD) method for studying phase equilibria. Our modifications to GEMD allow for direct control over particle transfer between phases and improve the method's numerical stability. Additionally, we found that the modified GEMD approach had advantages in computational efficiency in comparison to a hybrid Monte Carlo (MC)/MD Gibbs ensemble scheme in the context of the single-component Lennard-Jones fluid. We note that this increase in computational efficiency does not compromise the close agreement of phase equilibrium results between the two methods. However, numerical instabilities in the GEMD scheme hamper its use near the critical point. We propose that the computationally efficient GEMD simulations can be used to map out the majority of the phase window, with hybrid MC/MD used as a follow-up for conditions under which GEMD may be unstable (e.g., near-critical behavior). In this manner, we can capitalize on the contrasting strengths of these two methods to enable the efficient study of phase equilibria for systems that present challenges for a purely stochastic Gibbs ensemble Monte Carlo (GEMC) method, such as dense or low-temperature systems and/or those with complex molecular topologies.
NASA Astrophysics Data System (ADS)
Gelb, Lev D.; Chakraborty, Somendra Nath
2011-12-01
The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton-Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. The results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but they substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase.
Gibbs-Thomson Effect in Planar Nanowires: Orientation and Doping Modulated Growth.
Shen, Youde; Chen, Renjie; Yu, Xuechao; Wang, Qijie; Jungjohann, Katherine L; Dayeh, Shadi A; Wu, Tom
2016-07-13
Epitaxy-enabled bottom-up synthesis of self-assembled planar nanowires via the vapor-liquid-solid mechanism is an emerging and promising approach toward large-scale direct integration of nanowire-based devices without postgrowth alignment. Here, by examining large assemblies of indium tin oxide nanowires on yttria-stabilized zirconia substrate, we demonstrate for the first time that the growth dynamics of planar nanowires follows a modified version of the Gibbs-Thomson mechanism, which has been known for the past decades to govern the correlations between thermodynamic supersaturation, growth speed, and nanowire morphology. Furthermore, the substrate orientation strongly influences the growth characteristics of epitaxial planar nanowires as opposed to impact at only the initial nucleation stage in the growth of vertical nanowires. The rich nanowire morphology can be described by a surface-energy-dependent growth model within the Gibbs-Thomson framework, which is further modulated by the tin doping concentration. Our experiments also reveal that the cutoff nanowire diameter depends on the substrate orientation and decreases with increasing tin doping concentration. These results enable a deeper understanding and control over the growth of planar nanowires, and the insights will help advance the fabrication of self-assembled nanowire devices.
Statistical mechanics of money and income
NASA Astrophysics Data System (ADS)
Dragulescu, Adrian; Yakovenko, Victor
2001-03-01
Money: In a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money will assume the exponential Boltzmann-Gibbs form characterized by an effective temperature. We demonstrate how the Boltzmann-Gibbs distribution emerges in computer simulations of economic models. We discuss thermal machines, the role of debt, and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold. Reference: A. Dragulescu and V. M. Yakovenko, "Statistical mechanics of money", Eur. Phys. J. B 17, 723-729 (2000), [cond-mat/0001432]. Income: Using tax and census data, we demonstrate that the distribution of individual income in the United States is exponential. Our calculated Lorenz curve without fitting parameters and Gini coefficient 1/2 agree well with the data. We derive the distribution function of income for families with two earners and show that it also agrees well with the data. The family data for the period 1947-1994 fit the Lorenz curve and Gini coefficient 3/8=0.375 calculated for two-earners families. Reference: A. Dragulescu and V. M. Yakovenko, "Evidence for the exponential distribution of income in the USA", cond-mat/0008305.
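The simulations mentioned in the abstract are easy to reproduce in spirit: the sketch below implements a random-reshuffling exchange model of the kind studied by the authors, with assumed agent count, initial money, and step count. For an exponential distribution the standard deviation equals the mean, which the printed statistics approximate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy closed economy: fixed number of agents and fixed total money.
N, steps = 1000, 200_000
money = np.full(N, 100.0)

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    # A random pair pools its money and splits it at a random share;
    # total money is conserved in every transaction.
    pot = money[i] + money[j]
    share = rng.random()
    money[i], money[j] = share * pot, (1.0 - share) * pot

# For the Boltzmann-Gibbs (exponential) distribution, std == mean,
# with "temperature" equal to the average money per agent.
print(f"mean = {money.mean():.1f}, std = {money.std():.1f}")
```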
Cerebella segmentation on MR images of pediatric patients with medulloblastoma
NASA Astrophysics Data System (ADS)
Shan, Zu Y.; Ji, Qing; Glass, John; Gajjar, Amar; Reddick, Wilburn E.
2005-04-01
In this study, an automated method has been developed to identify the cerebellum from T1-weighted MR brain images of patients with medulloblastoma. A new objective function, analogous to the Gibbs free energy in classical physics, was defined, and brain structure delineation was viewed as a process of minimizing this Gibbs free energy. We used rigid-body registration and an active contour (snake) method to minimize the Gibbs free energy in this study. The method was applied to 20 patient data sets to generate cerebellum images and volumetric results. The generated cerebellum images were compared with two manually drawn results. Strong correlations were found between the automatically and manually generated volumetric results; the correlation coefficients with each of the manual results were 0.971 and 0.974, respectively. The average Jaccard similarities with each of the two manual results were 0.89 and 0.88, respectively. The average Kappa indexes with each of the two manual results were 0.94 and 0.93, respectively. These results showed that this method is both robust and accurate for cerebellum segmentation. The method may be applied to various research and clinical investigations in which cerebellum segmentation and quantitative MR measurement of the cerebellum are needed.
Ab Initio Prediction of Adsorption Isotherms for Small Molecules in Metal-Organic Frameworks.
Kundu, Arpan; Piccini, GiovanniMaria; Sillar, Kaido; Sauer, Joachim
2016-10-26
For CO and N2 on the Mg2+ sites of the metal-organic framework CPO-27-Mg (Mg-MOF-74), ab initio calculations of Gibbs free energies of adsorption have been performed. Combined with the Bragg-Williams/Langmuir model and taking into account the experimental site availability (76.5%), we obtained adsorption isotherms in close agreement with those in experiment. The remaining deviations in the Gibbs free energy (about 1 kJ/mol) are significantly smaller than the "chemical accuracy" limit of about 4 kJ/mol. The presented approach uses (i) a DFT dispersion method (PBE+D2) to optimize the structure and to calculate anharmonic frequencies for vibrational partition functions and (ii) a "hybrid MP2:(PBE+D2)+ΔCCSD(T)" method to determine electronic energies. With the achieved accuracy (estimated uncertainty ±1.4 kJ/mol), the ab initio energies become useful benchmarks for assessing different DFT + dispersion methods (PBE+D2, B3LYP+D*, and vdW-D2), whereas the ab initio heats, entropies, and Gibbs free energies of adsorption are used to assess the reliability of experimental values derived from fitting isotherms or from variable-temperature IR studies.
Cui, Zaixu; Gong, Gaolang
2018-06-02
Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranged from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
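The qualitative algorithm/sample-size effect is easy to explore on synthetic data (the HCP data themselves are access-controlled). The scikit-learn sketch below subsamples a synthetic feature matrix at several sample sizes and tracks cross-validated R² for ridge regression, one of the six algorithms studied; all dimensions and noise levels are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for connectivity features -> behavioral score.
n_total, n_feat = 700, 300
X = rng.normal(size=(n_total, n_feat))
w = rng.normal(size=n_feat) * (rng.random(n_feat) < 0.1)  # sparse truth
y = X @ w + rng.normal(scale=2.0, size=n_total)

# Prediction accuracy rises (and stabilizes) with sample size.
for n in (20, 50, 100, 300, 700):
    idx = rng.choice(n_total, size=n, replace=False)
    r2 = cross_val_score(Ridge(alpha=1.0), X[idx], y[idx],
                         cv=5, scoring="r2").mean()
    print(f"n = {n:4d}   mean cross-validated R^2 = {r2:6.3f}")
```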
Scheel, Ida; Ferkingstad, Egil; Frigessi, Arnoldo; Haug, Ola; Hinnerichsen, Mikkel; Meze-Hausken, Elisabeth
2013-01-01
Climate change will affect the insurance industry. We develop a Bayesian hierarchical statistical approach to explain and predict insurance losses due to weather events at a local geographic scale. The number of weather-related insurance claims is modelled by combining generalized linear models with spatially smoothed variable selection. Using Gibbs sampling and reversible jump Markov chain Monte Carlo methods, this model is fitted on daily weather and insurance data from each of the 319 municipalities which constitute southern and central Norway for the period 1997–2006. Precise out-of-sample predictions validate the model. Our results show interesting regional patterns in the effect of different weather covariates. In addition to being useful for insurance pricing, our model can be used for short-term predictions based on weather forecasts and for long-term predictions based on downscaled climate models. PMID:23396890
Solid state amorphization of nanocrystalline nickel by cryogenic laser shock peening
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Chang, E-mail: cye@uakron.edu; Ren, Zhencheng; Zhao, Jingyi
2015-10-07
In this study, complete solid state amorphization of nanocrystalline nickel has been achieved through cryogenic laser shock peening (CLSP). High resolution transmission electron microscopy has revealed the completely amorphous structure of the sample after CLSP processing. A molecular dynamics model has been used to investigate material behavior during the shock loading and the effects of nanoscale grain boundaries on the amorphization process. It has been found that the initial nanoscale grain boundaries increase the initial Gibbs free energy before plastic deformation and also serve as dislocation emission sources during plastic deformation, contributing to the defect density increase and leading to the amorphization of pure nanocrystalline nickel.
Thermochemical investigations in the system Cd–Gd
Reichmann, Thomas L.; Ganesan, Rajesh; Ipser, Herbert
2014-01-01
Vapour pressure measurements were performed by a non-isothermal isopiestic method to determine the vapour pressures of Cd in the system Cd–Gd between 693 and 1045 K. From these results, thermodynamic activities of Cd were derived as a function of temperature for the composition range 52–86 at.% Cd. By employing an adapted Gibbs–Helmholtz equation, partial molar enthalpies of mixing of Cd were obtained for the corresponding composition range; these were used to convert the activity values of Cd to a common average sample temperature of 773 K. The relatively large variation of the activity across the homogeneity ranges of the phases Cd2Gd and Cd45Gd11 indicates that they probably belong to the most stable intermetallic compounds in this system. An activity value of Gd for the two-phase field Cd6Gd+L was available from the literature and served as an integration constant for a Gibbs–Duhem integration. Integral Gibbs energies are presented between 51 and 100 at.% Cd at 773 K, referred to Cd(l) and α-Gd(s) as standard states. Gibbs energies of formation for the exact stoichiometric compositions of the phases Cd58Gd13, Cd45Gd11, Cd3Gd, and Cd2Gd were obtained at 773 K as about −19.9, −21.1, −24.8, and −30.0 kJ (g-atom)⁻¹, respectively. PMID:25328283
Hydrogeochemical quality and suitability studies of groundwater in northern Bangladesh.
Islam, M J; Hakim, M A; Hanafi, M M; Juraimi, Abdul Shukor; Aktar, Sharmin; Siddiqa, Aysha; Rahman, A K M Shajedur; Islam, M Atikul; Halim, M A
2014-07-01
Agriculture, rapid urbanization, and geochemical processes have direct or indirect effects on the chemical composition of groundwater and aquifer geochemistry. Hydro-chemical investigations, which are significant for the assessment of water quality, were carried out to study the sources of dissolved ions in the groundwater of Dinajpur district, northern Bangladesh. The groundwater samples were analyzed for physico-chemical properties such as pH, electrical conductance, hardness, alkalinity, total dissolved solids, and the ions Ca2+, Mg2+, Na+, K+, CO3(2-), HCO3(-), SO4(2-), and Cl-. Based on the analyses, parameters such as the sodium adsorption ratio, soluble sodium percentage, potential salinity, residual sodium carbonate, Kelly's ratio, permeability index, and Gibbs ratio were also calculated. The results showed that the groundwater of the study area was fresh, slightly acidic (pH 5.3-6.4), and low in TDS (35-275 mg L⁻¹). The groundwater of the study area was found suitable for irrigation, drinking, and domestic purposes, since most of the parameters analyzed were within the WHO recommended values for drinking water. High concentrations of NO3- and Cl- were reported in areas with extensive agriculture and rapid urbanization. Ion exchange, weathering, oxidation, and dissolution of minerals were the major geochemical processes governing the groundwater evolution in the study area. The Gibbs diagram showed that all the samples fell in the rock dominance field. Based on this evaluation, it is clear that the groundwater quality of the study area was suitable for both domestic and irrigation purposes.
Vectorized Rebinning Algorithm for Fast Data Down-Sampling
NASA Technical Reports Server (NTRS)
Dean, Bruce; Aronstein, David; Smith, Jeffrey
2013-01-01
A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. For clarity, a two-dimensional version of the algorithm is discussed to illustrate some specific details of the algorithm content, and using the language of image processing, 2D data will be referred to as "images," and each value in an image as a "pixel." The new approach is fully vectorized, i.e., the down-sampling procedure is done as a single step over all image rows, and then as a single step over all image columns. Data rebinning (or down-sampling) is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data, but with fewer discrete samples. Such data down-sampling is fundamental to digital signal processing, e.g., for data compression applications.
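In NumPy, a fully vectorized 2D rebin can be written as a single reshape-and-mean over row blocks and column blocks at once; this is a sketch of the idea under the stated assumptions (integer down-sampling factor, divisible dimensions), not the authors' implementation.

```python
import numpy as np

def rebin(image, f):
    """Down-sample a 2D array by an integer factor f along each axis,
    averaging f x f blocks. Fully vectorized: one reshape plus one
    mean over all rows and columns at once, no per-pixel loops."""
    h, w = image.shape
    if h % f or w % f:
        raise ValueError("image dimensions must be multiples of f")
    return image.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
print(rebin(img, 2))
# [[ 2.5  4.5]
#  [10.5 12.5]]
```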
An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals
ERIC Educational Resources Information Center
Verhelst, Norman D.
2008-01-01
Uniform sampling of binary matrices with fixed margins is known to be a difficult problem. Two classes of algorithms to sample from a distribution not too different from the uniform are studied in the literature: importance sampling and Markov chain Monte Carlo (MCMC). Existing MCMC algorithms converge slowly, require a long burn-in period and yield…
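The basic MCMC chain in this literature is built from 2×2 "checkerboard" swaps, each of which preserves every row and column sum; the Python sketch below shows such a chain in its simplest form (a minimal illustration of the slow-converging baseline, not the refined algorithms the article studies).

```python
import numpy as np

rng = np.random.default_rng(0)

def checkerboard_step(A):
    """One MCMC move: pick two rows and two columns; if the 2x2
    submatrix is [[1,0],[0,1]] or [[0,1],[1,0]], flip it. Both
    patterns preserve all row and column sums."""
    m, n = A.shape
    r = rng.choice(m, size=2, replace=False)
    c = rng.choice(n, size=2, replace=False)
    sub = A[np.ix_(r, c)]
    if (sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0]
            and sub[0, 0] != sub[0, 1]):
        A[np.ix_(r, c)] = 1 - sub
    return A

A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])
rows, cols = A.sum(1).copy(), A.sum(0).copy()
for _ in range(1000):
    A = checkerboard_step(A)
# Margins are invariant under every accepted move.
assert (A.sum(1) == rows).all() and (A.sum(0) == cols).all()
print(A)
```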
Zhou, Zhengwei; Bi, Xiaoming; Wei, Janet; Yang, Hsin-Jung; Dharmakumar, Rohan; Arsanjani, Reza; Bairey Merz, C Noel; Li, Debiao; Sharif, Behzad
2017-02-01
The presence of subendocardial dark-rim artifact (DRA) remains an ongoing challenge in first-pass perfusion (FPP) cardiac magnetic resonance imaging (MRI). We propose a free-breathing FPP imaging scheme with Cartesian sampling that is optimized to minimize the DRA and readily enables near-instantaneous image reconstruction. The proposed FPP method suppresses Gibbs ringing effects, a major underlying factor for the DRA, by "shaping" the underlying point spread function through a two-step process: 1) an undersampled Cartesian sampling scheme that widens the k-space coverage compared to the conventional scheme; and 2) a modified parallel-imaging scheme that incorporates optimized apodization (k-space data filtering) to suppress Gibbs-ringing effects. Healthy volunteer studies (n = 10) were performed to compare the proposed method against the conventional Cartesian technique, both using a saturation-recovery gradient-echo sequence at 3T. Furthermore, FPP imaging studies using the proposed method were performed in infarcted canines (n = 3), and in two symptomatic patients with suspected coronary microvascular dysfunction for assessment of myocardial hypoperfusion. Width of the DRA and the number of DRA-affected myocardial segments were significantly reduced in the proposed method compared to the conventional approach (width: 1.3 vs. 2.9 mm, P < 0.001; number of segments: 2.6 vs. 8.7; P < 0.0001). The number of slices with severe DRA was markedly lower for the proposed method (by 10-fold). The reader-assigned image quality scores were similar (P = 0.2), although the quantified myocardial signal-to-noise ratio was lower for the proposed method (P < 0.05). Animal studies showed that the proposed method can detect subendocardial perfusion defects and patient results were consistent with the gold-standard invasive test. The proposed free-breathing Cartesian FPP imaging method significantly reduces the prevalence of severe DRAs compared to the conventional approach while maintaining similar resolution and image quality. J. Magn. Reson. Imaging 2017;45:542-555. © 2016 International Society for Magnetic Resonance in Medicine.
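A minimal sketch of the apodization step alone: k-space data are multiplied by a smooth window before the inverse FFT, damping the point spread function's sidelobes. This is illustrative only; the paper combines apodization with an optimized undersampled Cartesian acquisition and parallel imaging, which are not reproduced here:

```python
# k-space apodization to suppress Gibbs ringing in a toy reconstruction.
import numpy as np
from scipy.signal import windows

def apodize(kspace: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Apply a separable Tukey window across both k-space axes."""
    wy = windows.tukey(kspace.shape[0], alpha)
    wx = windows.tukey(kspace.shape[1], alpha)
    return kspace * np.outer(wy, wx)

# sharp disk phantom -> k-space -> filtered reconstruction
y, x = np.mgrid[-64:64, -64:64]
img = (x**2 + y**2 < 30**2).astype(float)
k = np.fft.fftshift(np.fft.fft2(img))
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(apodize(k))))
# 'recon' shows reduced ringing at the disk edge, at the cost of slight blur.
```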
Detecting chaos in irregularly sampled time series.
Kulp, C W
2013-09-01
Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum which is computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus, making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented, which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series. However, the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
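A small usage sketch of the key ingredient, the Lomb-Scargle periodogram of an irregularly sampled signal; signal parameters are arbitrary, and this is not the chaos-detection pipeline itself:

```python
# Lomb-Scargle periodogram of an irregularly sampled signal with SciPy.
# Note scipy.signal.lombscargle expects *angular* frequencies.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 400))        # irregular sample times
y = np.sin(2 * np.pi * 0.23 * t) + 0.3 * rng.normal(size=t.size)

freqs = np.linspace(0.01, 1.0, 2000)         # trial frequencies in Hz
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs, normalize=True)
print("peak at %.3f Hz" % freqs[np.argmax(power)])   # ~0.23 Hz
```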
NASA Astrophysics Data System (ADS)
Shock, Everett L.; Koretsky, Carla M.
1995-04-01
Regression of standard state equilibrium constants with the revised Helgeson-Kirkham-Flowers (HKF) equation of state allows evaluation of standard partial molal entropies (S̄°) of aqueous metal-organic complexes involving monovalent organic acid ligands. These values of S̄° provide the basis for correlations that can be used, together with correlation algorithms among standard partial molal properties of aqueous complexes and equation-of-state parameters, to estimate thermodynamic properties including equilibrium constants for complexes between aqueous metals and several monovalent organic acid ligands at the elevated pressures and temperatures of many geochemical processes which involve aqueous solutions. Data, parameters, and estimates are given for 270 formate, propanoate, n-butanoate, n-pentanoate, glycolate, lactate, glycinate, and alanate complexes, and a consistent algorithm is provided for making other estimates. Standard partial molal entropies of association (ΔS̄°r) for metal-monovalent organic acid ligand complexes fall into at least two groups dependent upon the type of functional groups present in the ligand. It is shown that isothermal correlations among equilibrium constants for complex formation are consistent with one another and with similar correlations for inorganic metal-ligand complexes. Additional correlations allow estimates of standard partial molal Gibbs free energies of association at 25°C and 1 bar which can be used in cases where no experimentally derived values are available.
The generalization ability of online SVM classification based on Markov sampling.
Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang
2015-03-01
In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of the learning ability of online SVM classification based on Markov sampling on benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of the training set grows.
New multirate sampled-data control law structure and synthesis algorithm
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.; Yang, Gen-Sheng
1992-01-01
A new multirate sampled-data control law structure is defined and a new parameter-optimization-based synthesis algorithm for that structure is introduced. The synthesis algorithm can be applied to multirate, multiple-input/multiple-output, sampled-data control laws having a prescribed dynamic order and structure, and a priori specified sampling/update rates for all sensors, processor states, and control inputs. The synthesis algorithm is applied to design two-input, two-output tip position controllers of various dynamic orders for a sixth-order, two-link robot arm model.
Peng, Bo; Wang, Yuqi; Hall, Timothy J; Jiang, Jingfeng
2017-01-01
The primary objective of this work was to extend a previously published 2D coupled sub-sample tracking algorithm to 3D speckle tracking in the framework of ultrasound breast strain elastography. To overcome the heavy computational cost, we investigated the use of a graphics processing unit (GPU) to accelerate the 3D coupled sub-sample speckle tracking method. The performance of the proposed GPU implementation was tested using a tissue-mimicking (TM) phantom and in vivo breast ultrasound data. The performance of this 3D sub-sample tracking algorithm was compared with the conventional 3D quadratic sub-sample estimation algorithm. On the basis of these evaluations, we concluded that the GPU implementation of this 3D sub-sample estimation algorithm can provide high-quality strain data (i.e. high correlation between the pre- and the motion-compensated post-deformation RF echo data and high contrast-to-noise-ratio strain images), as compared to the conventional 3D quadratic sub-sample algorithm. Using the GPU implementation of the 3D speckle tracking algorithm, volumetric strain data can be obtained relatively fast (approximately 20 seconds per volume [2.5 cm × 2.5 cm × 2.5 cm]). PMID:28166493
Bayesian state space models for dynamic genetic network construction across multiple tissues.
Liang, Yulan; Kelemen, Arpad
2016-08-01
Construction of gene-gene interaction networks and potential pathways is a challenging and important problem in genomic research for complex diseases, and estimating the dynamic changes of the temporal correlations and the non-stationarity is key in this process. In this paper, we develop dynamic state space models with hierarchical Bayesian settings to tackle this challenge and infer the dynamic profiles and genetic networks associated with disease treatments. We treat both the stochastic transition matrix and the observation matrix as time-variant and include temporal correlation structures in the covariance matrix estimations in the multivariate Bayesian state space models. The unevenly spaced short time courses with unseen time points are treated as hidden state variables. Hierarchical Bayesian approaches with various prior and hyper-prior models, with Markov chain Monte Carlo and Gibbs sampling algorithms, are used to estimate the model parameters and the hidden state variables. We apply the proposed hierarchical Bayesian state space models to multiple-tissue (liver, skeletal muscle, and kidney) Affymetrix time course data sets following corticosteroid (CS) drug administration. Both simulation and real data analysis results show that the genomic changes over time and the gene-gene interactions in response to CS treatment can be well captured by the proposed models. The proposed dynamic hierarchical Bayesian state space modeling approach could be expanded and applied to other large-scale genomic data, such as next-generation sequencing (NGS) data combined with real-time and time-varying electronic health records (EHR), for more comprehensive and robust systematic and network-based analysis, in order to transform big biomedical data into predictions and diagnostics for precision medicine and personalized healthcare with better decision making and patient outcomes.
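The state-space machinery above is elaborate; as a minimal illustration of the underlying Gibbs-sampling pattern (alternating draws from full conditionals), here is a two-parameter conjugate normal example. The model and hyperparameters are hypothetical and far simpler than the paper's:

```python
# Gibbs sampler for y_i | mu, tau ~ N(mu, 1/tau), mu ~ N(0, s0sq),
# tau ~ Gamma(a0, b0); both full conditionals are available in closed form.
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(1.5, 0.7, size=50)            # synthetic data
n, a0, b0, s0sq = y.size, 2.0, 1.0, 100.0

mu, tau = 0.0, 1.0
draws = []
for it in range(5000):
    # mu | tau, y : normal-normal conjugacy
    var = 1.0 / (n * tau + 1.0 / s0sq)
    mu = rng.normal(var * tau * y.sum(), np.sqrt(var))
    # tau | mu, y : gamma conjugacy (numpy's gamma takes a scale = 1/rate)
    tau = rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * ((y - mu) ** 2).sum()))
    if it >= 1000:                           # discard burn-in
        draws.append((mu, tau))

mu_hat = np.mean([d[0] for d in draws])
print(f"posterior mean of mu ~ {mu_hat:.2f}")
```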
Maintaining and Enhancing Diversity of Sampled Protein Conformations in Robotics-Inspired Methods.
Abella, Jayvee R; Moll, Mark; Kavraki, Lydia E
2018-01-01
The ability to efficiently sample structurally diverse protein conformations allows one to gain a high-level view of a protein's energy landscape. Algorithms from robot motion planning have been used for conformational sampling, and several of these algorithms promote diversity by keeping track of "coverage" in conformational space based on the local sampling density. However, large proteins present special challenges. In particular, larger systems require running many concurrent instances of these algorithms, but these algorithms can quickly become memory intensive because they typically keep previously sampled conformations in memory to maintain coverage estimates. In addition, robotics-inspired algorithms depend on defining useful perturbation strategies for exploring the conformational space, which is a difficult task for large proteins because such systems are typically more constrained and exhibit complex motions. In this article, we introduce two methodologies for maintaining and enhancing diversity in robotics-inspired conformational sampling. The first method addresses algorithms based on coverage estimates and leverages the use of a low-dimensional projection to define a global coverage grid that maintains coverage across concurrent runs of sampling. The second method is an automatic definition of a perturbation strategy through readily available flexibility information derived from B-factors, secondary structure, and rigidity analysis. Our results show a significant increase in the diversity of the conformations sampled for proteins consisting of up to 500 residues when applied to a specific robotics-inspired algorithm for conformational sampling. The methodologies presented in this article may be vital components for the scalability of robotics-inspired approaches.
Quantum chemical approach to estimating the thermodynamics of metabolic reactions.
Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán
2014-11-12
Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism.
Generalized Gibbs ensembles for quantum field theories
NASA Astrophysics Data System (ADS)
Essler, F. H. L.; Mussardo, G.; Panfil, M.
2015-05-01
We consider the nonequilibrium dynamics in quantum field theories (QFTs). After being prepared in a density matrix that is not an eigenstate of the Hamiltonian, such systems are expected to relax locally to a stationary state. In the presence of local conservation laws, these stationary states are believed to be described by appropriate generalized Gibbs ensembles. Here we demonstrate that in order to obtain a correct description of the stationary state, it is necessary to take into account conservation laws that are not (ultra)local in the usual sense of QFTs, but fulfill a significantly weaker form of locality. We discuss the implications of our results for integrable QFTs in one spatial dimension.
NASA Astrophysics Data System (ADS)
Kuz'mina, I. A.; Usacheva, T. R.; Kuz'mina, K. I.; Volkova, M. A.; Sharnin, V. A.
2015-01-01
The Gibbs energies of the transfer of 18-crown-6 ether from methanol to its mixtures with acetonitrile (χAN = 0.0-1.0 mole fraction) are determined by means of interphase distribution at 298 K. The effect the solvent composition has on the thermodynamic characteristics of the solvation of 18-crown-6 ether is analyzed. An increase in the content of acetonitrile in the mixed solvent enhances the solvation of crown ether due to changes in the energy of the solution. Resolvation of the macrocycle is assumed to be complete at acetonitrile concentrations higher than 0.6 mole fraction.
Marinsky, J.A.; Reddy, M.M.
1991-01-01
Earlier research has shown that the acid dissociation and metal ion complexation equilibria of linear, weak-acid polyelectrolytes and their cross-linked gel analogues are similarly sensitive to the counterion concentration levels of their solutions. Gibbs-Donnan-based concepts, applicable to the gel, are equally applicable to the linear polyelectrolyte for the accommodation of this sensitivity to ionic strength. This result is presumed to indicate that the linear polyelectrolyte in solution develops counterion-concentrating regions that closely resemble the gel phase of their analogues. Advantage has been taken of this description of linear polyelectrolytes to estimate the solvent uptake by these regions. © 1991 American Chemical Society.
Maxwell’s equal area law for Lovelock thermodynamics
NASA Astrophysics Data System (ADS)
Xu, Hao; Xu, Zhen-Ming
We present the construction of Maxwell's equal area law for the Gauss-Bonnet AdS black holes in d = 5, 6 and third-order Lovelock AdS black holes in d = 7, 8. The equal area law can be used to find the number and location of the points of intersection in the plots of Gibbs free energy, so that we can get the thermodynamically preferred solution which corresponds to the first-order phase transition. We obtain the radii of the small and large black holes in the phase transition which share the same Gibbs free energy. The case with two critical points is explored in much more detail. The latent heat is also studied.
Phase equilibrium of methane and nitrogen at low temperatures - Application to Titan
NASA Technical Reports Server (NTRS)
Kouvaris, Louis C.; Flasar, F. M.
1991-01-01
Since, by the Gibbs phase rule, the vapor-phase composition of Titan's methane-nitrogen lower atmosphere is uniquely determined, these data are computed here via integration of the Gibbs-Duhem equation. The thermodynamic consistency of published measurements and calculations of the vapor phase composition is then examined, and the saturated mole fraction of gaseous methane is computed as a function of altitude up to the 700-mbar level. The mole fraction is found to lie approximately halfway between that computed from Raoult's law, for a gas in equilibrium with an ideal solution of liquid nitrogen and methane, and that for a gas in equilibrium with pure liquid methane.
Stress versus temperature dependent activation energies in creep
NASA Technical Reports Server (NTRS)
Freed, A. D.; Raj, S. V.; Walker, K. P.
1990-01-01
The activation energy for creep at low stresses and elevated temperatures is that of lattice diffusion, since the rate-controlling mechanism for deformation there is dislocation climb. At higher stresses and intermediate temperatures, the rate-controlling mechanism changes from dislocation climb to obstacle-controlled dislocation glide. Along with this change, there occurs a change in the activation energy. It is shown that a temperature-dependent Gibbs free energy correlates steady-state creep data well, while a stress-dependent Gibbs free energy correlates the same data less satisfactorily. Applications are made to copper and a LiF-22 mol.% CaF2 hypereutectic salt.
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to... regime for the optimization algorithm. 1 Introduction. Sample average approximation (SAA) is a frequently used approach to solving stochastic programs... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample
Research on Abnormal Detection Based on Improved Combination of K - means and SVDD
NASA Astrophysics Data System (ADS)
Hao, Xiaohong; Zhang, Xiaofeng
2018-01-01
In order to improve the efficiency of network intrusion detection and reduce the false alarm rate, this paper proposes an anomaly detection algorithm based on improved K-means and SVDD. The algorithm first uses the improved K-means algorithm to cluster the training samples of each class, so that each cluster is compact and well separated from the others. Then, for each class, the SVDD algorithm is used to construct a minimum enclosing hypersphere from the training samples. Class membership of a test sample is determined from its distance to the center of each hypersphere constructed by SVDD: if that distance is less than the hypersphere's radius, the test sample belongs to the corresponding class; otherwise it does not. After several such comparisons, the classification of the test sample is finally determined (see the sketch below). In this paper, we use the KDD CUP99 data set to evaluate the proposed anomaly detection algorithm. The results show that the algorithm has a high detection rate and a low false alarm rate, making it an effective network security protection method.
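A sketch of the two-stage idea in scikit-learn. Since scikit-learn has no SVDD class, OneClassSVM with an RBF kernel is used as a stand-in (for RBF kernels the two formulations are equivalent); the data and parameters are illustrative, not the paper's KDD CUP99 setup:

```python
# Cluster normal training data with k-means, then fit one boundary per
# cluster; a point rejected by every boundary is flagged as an anomaly.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(6, 1, (200, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
models = [OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(X[km.labels_ == c])
          for c in range(2)]

def is_anomaly(x: np.ndarray) -> bool:
    # normal if any per-cluster boundary accepts the point
    return not any(m.predict(x.reshape(1, -1))[0] == 1 for m in models)

print(is_anomaly(np.array([0.2, -0.1])))   # False: inside a cluster
print(is_anomaly(np.array([20.0, 20.0])))  # True: far from both clusters
```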
Recursive algorithms for phylogenetic tree counting.
Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J
2013-10-28
In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree, or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm that is polynomial in the number of sampled individuals for counting resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
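For the easy case mentioned above, where all n individuals are sampled at the same time and no constraints apply, the count of fully ranked (rooted, binary, labelled) trees has a closed form, n!(n-1)!/2^(n-1); the serially sampled and constrained cases handled by the paper's recursions are not reproduced here:

```python
# Closed-form count of ranked labelled binary rooted trees on n
# contemporaneous tips.
from math import factorial

def ranked_tree_count(n: int) -> int:
    return factorial(n) * factorial(n - 1) // 2 ** (n - 1)

for n in range(2, 7):
    print(n, ranked_tree_count(n))   # 2:1, 3:3, 4:18, 5:180, 6:2700
```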
Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.
Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L
2017-06-13
λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
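A schematic of the Gibbs-sampling and Rao-Blackwell ideas on a toy system of two harmonic "alchemical" states, for which the exact answer is known; this illustrates the estimator's logic only, not the molecular GSLD machinery:

```python
# Gibbs sampler over (x, lambda) with U_l(x) = 0.5 * k[l] * x**2 and beta = 1.
# Since P(lambda) ~ Z_lambda, a Rao-Blackwell estimate of dF = F1 - F0 is
# -ln( <p(1|x)> / <p(0|x)> ); the exact value is 0.5 * ln(k1/k0).
import numpy as np

rng = np.random.default_rng(4)
k = np.array([1.0, 4.0])                       # force constants of states 0, 1
x, lam = 0.0, 0
p1_sum, p0_sum = 0.0, 0.0

for it in range(200000):
    # x | lambda : exact Gaussian conditional with variance 1/k[lam]
    x = rng.normal(0.0, 1.0 / np.sqrt(k[lam]))
    # lambda | x : Bernoulli from the two Boltzmann weights
    w = np.exp(-0.5 * k * x * x)
    p1 = w[1] / w.sum()
    lam = int(rng.random() < p1)
    if it >= 10000:                            # discard burn-in
        p1_sum += p1                           # Rao-Blackwell: average the
        p0_sum += 1.0 - p1                     # conditional, not the indicator

dF_rb = -np.log(p1_sum / p0_sum)
print(dF_rb, 0.5 * np.log(k[1] / k[0]))        # both ~ 0.693
```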
Gelb, Lev D; Chakraborty, Somendra Nath
2011-12-14
The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase. © 2011 American Institute of Physics
Langmuir-Gibbs Surface Phases and Transitions
NASA Astrophysics Data System (ADS)
Ocko, Benjamin; Sloutskin, Eli; Sapir, Zvi; Tamam, Lilach; Deutsch, Moshe; Bain, Colin
2007-03-01
Recent synchrotron x-ray measurements reveal surface ordering transitions in films of medium-length linear hydrocarbons (alkanes) spread on the water surface. Alkanes longer than hexane do not spread on the free surface of water. However, sub-mM concentrations of some ionic surfactants (e.g. CTAB) induce the formation of thermodynamically stable alkane monolayers through a ``pseudo-partial wetting'' phenomenon[1]. The monolayers, incorporating both water-insoluble alkanes (Langmuir) and water-soluble CTAB molecules (Gibbs), are called Langmuir-Gibbs (LG) films. The films formed by alkanes with n <= 17 exhibit an ordering transition upon cooling [2], below which the molecules are normal to the water surface and hexagonally packed, with CTAB molecules randomly mixed inside the quasi-2D crystal. Alkanes with n > 17 cannot form ordered LG monolayers, due to repulsion from the n = 16 tails of CTAB. This repulsion arises from the mismatch of the two chain lengths. A demixing transition occurs upon ordering, with a pure alkane quasi-2D crystal forming on top of the disordered alkyl tails of the CTAB molecules. [1] K.M. Wilkinson et al., Chem. Phys. Phys. Chem. 6, 547 (2005). [2] E. Sloutskin, Z. Sapir, L. Tamam, B.M. Ocko, C.D. Bain, and M. Deutsch, Thin Solid Films, in press; K.M. Wilkinson, L. Qunfang, and C.D. Bain, Soft Matter 2, 66 (2006).
NASA Astrophysics Data System (ADS)
Bagchi, Debarshee; Tsallis, Constantino
2017-04-01
The relaxation to equilibrium of two long-range-interacting Fermi-Pasta-Ulam-like models (β type) in thermal contact is numerically studied. These systems, with different sizes and energy densities, are coupled to each other by a few thermal contacts which are short-range harmonic springs. By using the kinetic definition of temperature, we compute the time evolution of temperature and energy density of the two systems. Eventually, for some time t > teq, the temperature and energy density of the coupled system equilibrate to values consistent with standard Boltzmann-Gibbs thermostatistics. The equilibration time teq depends on the system size N as teq ∼ N^γ where γ ≃ 1.8. We compute the velocity distribution P(v) of the oscillators of the two systems during the relaxation process. We find that P(v) is non-Gaussian and is remarkably close to a q-Gaussian distribution for all times before thermal equilibrium is reached. During the relaxation process we observe q > 1, while close to t = teq the value of q converges to unity and P(v) approaches a Gaussian. Thus the relaxation phenomenon in long-ranged systems connected by a thermal contact can be generically described as a crossover from q-statistics to Boltzmann-Gibbs statistics.
Occupation times and ergodicity breaking in biased continuous time random walks
NASA Astrophysics Data System (ADS)
Bel, Golan; Barkai, Eli
2005-12-01
Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
Equilibrium Sampling in Biomolecular Simulation
2015-01-01
Equilibrium sampling of biomolecules remains an unmet challenge after more than 30 years of atomistic simulation. Efforts to enhance sampling capability, which are reviewed here, range from the development of new algorithms to parallelization to novel uses of hardware. Special focus is placed on classifying algorithms — most of which are underpinned by a few key ideas — in order to understand their fundamental strengths and limitations. Although algorithms have proliferated, progress resulting from novel hardware use appears to be more clear-cut than from algorithms alone, partly due to the lack of widely used sampling measures. PMID:21370970
Re-evaluation of P-T paths across the Himalayan Main Central Thrust
NASA Astrophysics Data System (ADS)
Catlos, E. J.; Harrison, M.; Kelly, E. D.; Ashley, K.; Lovera, O. M.; Etzel, T.; Lizzadro-McPherson, D. J.
2016-12-01
The Main Central Thrust (MCT) is the dominant crustal thickening structure in the Himalayas, juxtaposing high-grade Greater Himalayan Crystalline rocks over the lower-grade Lesser Himalaya Formations. The fault is underlain by a 2- to 12-km-thick sequence of deformed rocks characterized by an apparent inverted metamorphic gradient, termed the MCT shear zone. Garnet-bearing rocks sampled from across the MCT along the Marysandi River in central Nepal contain monazite grains that decrease in age from Early Miocene (ca. 20 Ma) in the hanging wall to Late Miocene-Pliocene (ca. 7 Ma and 3 Ma) towards structurally lower levels in the shear zone. We obtained high-resolution garnet-zoning pressure-temperature (P-T) paths from 11 of the same rocks used for monazite geochronology using a recently developed semi-automated Gibbs-free-energy-minimization technique. Quartz-in-garnet Raman barometry refined the locations of the paths. Diffusional re-equilibration of garnet zoning in hanging-wall samples prevented accurate path determinations for most Greater Himalayan Crystalline samples, but one sample with a bell-shaped Mn zoning profile shows a slight decrease in P (from 8.2 to 7.6 kbar) with increasing T (from 590 to 640°C). Three MCT shear zone samples were modeled: one yields a simple path increasing in both P and T (6 to 7 kbar, 540 to 580°C); the others yield N-shaped paths that occupy similar P-T space (4 to 5.5 kbar, 500 to 560°C). Five lower Lesser Himalaya garnet-bearing rocks were modeled. One yields a path increasing in both P and T (6 to 7 kbar, 525 to 550°C), but the others show either sharp compression/decompression or N-shaped paths (within 4.5-6 kbar and 530-580°C). The lowermost sample decreases in P (5.5 to 5 kbar) over increasing T (540 to 580°C). No progressive change is seen from one type of path to another from the Lesser Himalayan Formations to the MCT zone. The modeling approach yields lower P-T conditions compared to the Gibbs method and lower core/rim P-T conditions compared to traditional thermometers and barometers. Inclusion barometry suggests that the baric estimates from the modeling may be underestimated by 2-4 kbar. Despite this uncertainty, the path shapes are consistent with a model in which the MCT shear zone experienced progressive accretion of footwall slivers.
Quantitative Characterization of Spurious Gibbs Waves in 45 CMIP5 Models
NASA Astrophysics Data System (ADS)
Geil, K. L.; Zeng, X.
2014-12-01
Gibbs oscillations appear in global climate models when representing fields, such as orography, that contain discontinuities or sharp gradients. It has been known for decades that the oscillations are associated with the transformation of the truncated spectral representation of a field to physical space and that the oscillations can also be present in global models that do not use spectral methods. The spurious oscillations are potentially detrimental to model simulations (e.g., over ocean) and this work provides a quantitative characterization of the Gibbs oscillations that appear across the Coupled Model Intercomparison Project Phase 5 (CMIP5) models. An ocean transect running through the South Pacific High toward the Andes is used to characterize the oscillations in ten different variables. These oscillations are found to be stationary and hence are not caused by (physical) waves in the atmosphere. We quantify the oscillation amplitude using the root mean square difference (RMSD) between the transect of a variable and its running mean (rather than the constant mean across the transect). We also compute the RMSD to interannual variability (IAV) ratio, which provides a relative measure of the oscillation amplitude. Of the variables examined, the largest RMSD values exist in the surface pressure field of spectral models, while the smallest RMSD values within the surface pressure field come from models that use finite difference (FD) techniques. Many spectral models have a surface pressure RMSD that is 2 to 15 times greater than IAV over the transect and an RMSD:IAV ratio greater than one for many other variables including surface temperature, incoming shortwave radiation at the surface, incoming longwave radiation at the surface, and total cloud fraction. In general, the FD models out-perform the spectral models, but not all the spectral models have large amplitude oscillations and there are a few FD models where the oscillations do appear. Finally, we present a brief comparison of the numerical methods of a select few models to better understand their Gibbs oscillations.
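A small sketch of the oscillation metric described above, RMSD of a transect about its running mean expressed relative to interannual variability (IAV); the arrays are synthetic stand-ins for a CMIP5 surface-pressure transect:

```python
# RMSD about a running mean as a measure of spurious Gibbs oscillations.
import numpy as np

def rmsd_about_running_mean(transect: np.ndarray, window: int = 9) -> float:
    kernel = np.ones(window) / window
    smooth = np.convolve(transect, kernel, mode="same")
    return float(np.sqrt(np.mean((transect - smooth) ** 2)))

lon = np.linspace(0, 40, 200)
# hypothetical transect: smooth trend plus a stationary ripple (hPa)
transect = 1015.0 + 0.05 * lon + 0.8 * np.sin(2 * np.pi * lon / 1.5)
iav = 0.4                               # hypothetical interannual std (hPa)

rmsd = rmsd_about_running_mean(transect)
print(f"RMSD = {rmsd:.2f} hPa, RMSD:IAV = {rmsd / iav:.1f}")
```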
How reliable are thermodynamic feasibility statements of biochemical pathways?
Maskow, Thomas; von Stockar, Urs
2005-10-20
The driving force for organo- or lithotrophic growth, as well as for each step in the metabolic network, is the Gibbs reaction energy; for each enzymatic step it must be negative. Thermodynamics therefore contributes to the in-silico description of living systems. It may be used for assessing the feasibility of a given pathway, because it provides a further constraint on those pathways which are feasible from the point of view of mass balance calculations (metabolic flux analysis) and the genetic potential of an organism. However, when this constraint was applied to lactic acid fermentation according to a method proposed by Mavrovouniotis (1993a, ISMB 93:273-283), it turned out that an unrealistically wide metabolite concentration range had to be assumed to make this well-known glycolytic pathway thermodynamically feasible. A search for the reasons for this surprising result identified insufficient consideration of the activity coefficients as the main cause. It is shown in the present contribution that the influence of the activity coefficients on the Gibbs reaction energy can easily be taken into account based on the intracellular ionic strength. The uncertainty of the tabulated equilibrium constants, and of the apparent standard Gibbs energies derived from them, was found to be the second most important reason for the erroneous result of the feasibility analysis. Deviations of the intracellular pH from the standard value and poor estimates of currency metabolites, e.g., NAD(+) and NADH, were found to be of lesser importance but not negligible. The pH dependence of the Gibbs reaction enthalpy proved to be easily taken into account. The application of thermodynamics for a better in-silico prediction of the behavior of living cell factories therefore calls predominantly for better equilibrium data determined under well-defined conditions, and for more detailed knowledge of the intracellular ionic strength and pH value. Copyright 2005 Wiley Periodicals, Inc.
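A sketch of the kind of ionic-strength correction the abstract identifies as decisive, using the Davies equation (an extended Debye-Hückel form); the reaction, charges, concentrations, and intracellular ionic strength below are placeholders, not the paper's data:

```python
# Activity-corrected reaction Gibbs energy:
#   dG = dG0 + R*T * sum_i nu_i * ln(gamma_i * c_i),
# with gamma from the Davies equation at 25 C.
import math

R, T, A = 8.314, 298.15, 0.509          # A: Debye-Hueckel constant at 25 C
I = 0.25                                # assumed intracellular ionic strength

def log10_gamma(z: int, ionic_strength: float) -> float:
    s = math.sqrt(ionic_strength)
    return -A * z * z * (s / (1 + s) - 0.3 * ionic_strength)

def delta_g_reaction(dg0: float, species: list) -> float:
    """species: list of (stoich coeff, charge, concentration mol/L)."""
    ln_q = sum(nu * (math.log(c) + math.log(10) * log10_gamma(z, I))
               for nu, z, c in species)
    return dg0 + R * T * ln_q            # J/mol

# hypothetical step A(-1) -> B(-2) + H(+), with dG0 = +5 kJ/mol
dg = delta_g_reaction(5000.0, [(-1, -1, 1e-3), (1, -2, 1e-4), (1, 1, 1e-7)])
print(dg / 1000, "kJ/mol")               # negative => feasible as written
```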
Pethica, Brian A
2007-12-21
As indicated by Gibbs and made explicit by Guggenheim, the electrical potential difference between two regions of different chemical composition cannot be measured. The Gibbs-Guggenheim Principle restricts the use of classical electrostatics in electrochemical theories as thermodynamically unsound, with a few approximate exceptions, notably for dilute electrolyte solutions and concomitant low potentials where the linear limit for the exponential of the relevant Boltzmann distribution applies. The Principle invalidates the widespread use of forms of the Poisson-Boltzmann equation which do not include the non-electrostatic components of the chemical potentials of the ions. From a thermodynamic analysis of the parallel plate electrical condenser, employing only measurable electrical quantities and taking into account the chemical potentials of the components of the dielectric and their adsorption at the surfaces of the condenser plates, an experimental procedure to provide exceptions to the Principle has been proposed. This procedure is now reconsidered and rejected. No other related experimental procedures circumvent the Principle. Widely used theoretical descriptions of electrolyte solutions, charged surfaces and colloid dispersions which neglect the Principle are briefly discussed. MD methods avoid the limitations of the Poisson-Boltzmann equation. Theoretical models which include the non-electrostatic components of the inter-ion and ion-surface interactions in solutions and colloid systems assume the additivity of dispersion and electrostatic forces. An experimental procedure to test this assumption is identified from the thermodynamics of condensers at microscopic plate separations. The available experimental data from Kelvin probe studies are preliminary, but tend against additivity. A corollary to the Gibbs-Guggenheim Principle is enunciated, and the Principle is restated: for any charged species, neither the difference in electrostatic potential nor the sum of the differences in the non-electrostatic components of the thermodynamic potential difference between regions of different chemical compositions can be measured.
Classical and quantum Reissner-Nordström black hole thermodynamics and first order phase transition
NASA Astrophysics Data System (ADS)
Ghaffarnejad, Hossein
2016-01-01
First we consider the classical Reissner-Nordström black hole (CRNBH) metric, obtained by solving the Einstein-Maxwell field equations for a point electric charge e inside a spherical static body with mass M. It has two horizons, interior and exterior. Using the Bekenstein-Hawking entropy theorem we calculate the interior and exterior entropy, temperature, Gibbs free energy and heat capacity at constant electric charge. We calculate the first derivative of the Gibbs free energy with respect to temperature, which becomes singular at the critical point M_c = 2|e|/√3 with corresponding temperature T_c = 1/(24π√3|e|). Hence we claim that a first-order phase transition happens there. The temperature, like the Gibbs free energy, takes absolutely positive (negative) values on the exterior (interior) horizon. The Gibbs free energy takes two different positive values simultaneously for 0 < T < T_c, but not for negative temperatures, which means the system is made of two subsystems. For negative temperatures the entropy approaches zero as T → -∞, corresponding to a Bose-Einstein-condensate-like single state. The entropy increases monotonically for 0 < T < T_c. Following the results of Wang and Huang (Phys. Rev. D 63:124014, 2001), we calculate the same thermodynamic variables for the remnant stable final state of an evaporating quantum Reissner-Nordström black hole (QRNBH) and obtain results analogous to those for the CRNBH. Finally, we solve the mass-loss equation of the QRNBH against the advanced Eddington-Finkelstein time coordinate and derive the luminosity function. We find that QRNBH evaporation switches off before the mass vanishes completely; the hole settles into a cold, lukewarm-type RN black hole whose final remnant mass is m_final = |e| in geometrical units. Its temperature and luminosity vanish, unlike in the Schwarzschild case of evaporation. Our calculations permit some acceptable statements about the information loss paradox (ILP).
A sample implementation for parallelizing Divide-and-Conquer algorithms on the GPU.
Mei, Gang; Zhang, Jiayin; Xu, Nengxiong; Zhao, Kunyang
2018-01-01
The strategy of Divide-and-Conquer (D&C) is one of the frequently used programming patterns for designing efficient algorithms in computer science, and it has been parallelized on both shared memory and distributed memory systems. Tzeng and Owens specifically developed a generic paradigm for parallelizing D&C algorithms on modern Graphics Processing Units (GPUs). In this paper, by following the generic paradigm proposed by Tzeng and Owens, we provide a new and publicly available GPU implementation of the famous D&C algorithm QuickHull, as a sample and guide for parallelizing D&C algorithms on the GPU (a serial reference sketch of the recursion follows below). The experimental results demonstrate the practicality of our sample GPU implementation. Our research objective in this paper is to present a sample GPU implementation of a classical D&C algorithm to help interested readers develop their own efficient GPU implementations with less effort.
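A plain serial sketch of QuickHull's divide-and-conquer structure in 2D, written to expose the recursion that the GPU paradigm parallelizes; it does not mirror the paper's GPU implementation:

```python
# QuickHull (2D): split points by the line through the extreme points, then
# recursively expand each side around its farthest point.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _hull_side(pts, a, b):
    # points strictly left of the directed segment a -> b
    left = [p for p in pts if cross(a, b, p) > 0]
    if not left:
        return []
    far = max(left, key=lambda p: cross(a, b, p))   # farthest from line a-b
    # conquer the two sub-problems outside the triangle (a, far, b)
    return _hull_side(left, a, far) + [far] + _hull_side(left, far, b)

def quickhull(pts):
    pts = sorted(set(map(tuple, pts)))
    if len(pts) < 3:
        return list(pts)
    a, b = pts[0], pts[-1]                          # extreme points in x
    return [a] + _hull_side(pts, a, b) + [b] + _hull_side(pts, b, a)

print(quickhull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]))
# -> [(0, 0), (0, 2), (1, 3), (2, 2), (2, 0)]; the interior point is dropped
```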
Asymptotic approximations to posterior distributions via conditional moment equations
Yee, J.L.; Johnson, W.O.; Samaniego, F.J.
2002-01-01
We consider asymptotic approximations to joint posterior distributions in situations where the full conditional distributions referred to in Gibbs sampling are asymptotically normal. Our development focuses on problems where data augmentation facilitates simpler calculations, but results hold more generally. Asymptotic mean vectors are obtained as simultaneous solutions to fixed point equations that arise naturally in the development. Asymptotic covariance matrices flow naturally from the work of Arnold & Press (1989) and involve the conditional asymptotic covariance matrices and first derivative matrices for conditional mean functions. When the fixed point equations admit an analytical solution, explicit formulae are subsequently obtained for the covariance structure of the joint limiting distribution, which may shed light on the use of the given statistical model. Two illustrations are given. © 2002 Biometrika Trust.
Improved argument-FFT frequency offset estimation for QPSK coherent optical Systems
NASA Astrophysics Data System (ADS)
Han, Jilong; Li, Wei; Yuan, Zhilin; Li, Haitao; Huang, Liyan; Hu, Qianggao
2016-02-01
A frequency offset estimation (FOE) algorithm based on the fast Fourier transform (FFT) of the signal's argument is investigated; it does not require removing the modulated data phase. In this paper, we analyze the flaw of the argument-FFT algorithm and propose a combined FOE algorithm, in which the absolute value of the frequency offset (FO) is accurately calculated by the argument-FFT algorithm with a relatively large number of samples and the sign of the FO is determined by an FFT-based interpolated discrete Fourier transform (DFT) algorithm with a relatively small number of samples. Compared with previous algorithms based on the argument-FFT, the proposed one has low complexity and can still work effectively with relatively few samples.
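For context, a sketch of the classical fourth-power FFT estimator for QPSK, the conventional modulation-removal baseline that the argument-FFT approach avoids (this is not the paper's algorithm; parameters are arbitrary):

```python
# Fourth-power FFT FOE for QPSK: raising the signal to the 4th power strips
# the data phase, leaving a spectral tone at 4x the frequency offset.
import numpy as np

rng = np.random.default_rng(6)
N, f_off = 4096, 0.013                            # samples; FO per symbol
bits = rng.integers(0, 4, N)
sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))        # QPSK symbols
rx = sym * np.exp(2j * np.pi * f_off * np.arange(N))     # apply the offset

spec = np.fft.fft(rx ** 4)                        # modulation removed
freqs = np.fft.fftfreq(N)
f_est = freqs[np.argmax(np.abs(spec))] / 4        # peak sits at 4 * f_off
print(f_est)                                      # ~0.013
```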
Topics in Bayesian Hierarchical Modeling and its Monte Carlo Computations
NASA Astrophysics Data System (ADS)
Tak, Hyung Suk
The first chapter addresses a Beta-Binomial-Logit model that is a Beta-Binomial conjugate hierarchical model with covariate information incorporated via a logistic regression. Various researchers in the literature have unknowingly used improper posterior distributions or have given incorrect statements about posterior propriety because checking posterior propriety can be challenging due to the complicated functional form of a Beta-Binomial-Logit model. We derive data-dependent necessary and sufficient conditions for posterior propriety within a class of hyper-prior distributions that encompass those used in previous studies. Frequency coverage properties of several hyper-prior distributions are also investigated to see when and whether Bayesian interval estimates of random effects meet their nominal confidence levels. The second chapter deals with a time delay estimation problem in astrophysics. When the gravitational field of an intervening galaxy between a quasar and the Earth is strong enough to split light into two or more images, the time delay is defined as the difference between their travel times. The time delay can be used to constrain cosmological parameters and can be inferred from the time series of brightness data of each image. To estimate the time delay, we construct a Gaussian hierarchical model based on a state-space representation for irregularly observed time series generated by a latent continuous-time Ornstein-Uhlenbeck process. Our Bayesian approach jointly infers model parameters via a Gibbs sampler. We also introduce a profile likelihood of the time delay as an approximation of its marginal posterior distribution. The last chapter specifies a repelling-attracting Metropolis algorithm, a new Markov chain Monte Carlo method to explore multi-modal distributions in a simple and fast manner. This algorithm is essentially a Metropolis-Hastings algorithm with a proposal that consists of a downhill move in density that aims to make local modes repelling, followed by an uphill move in density that aims to make local modes attracting. The downhill move is achieved via a reciprocal Metropolis ratio so that the algorithm prefers downward movement. The uphill move does the opposite using the standard Metropolis ratio which prefers upward movement. This down-up movement in density increases the probability of a proposed move to a different mode.
System and method for resolving gamma-ray spectra
Gentile, Charles A.; Perry, Jason; Langish, Stephen W.; Silber, Kenneth; Davis, William M.; Mastrovito, Dana
2010-05-04
A system for identifying radionuclide emissions is described. The system includes at least one processor for processing output signals from a radionuclide detecting device, at least one training algorithm run by the at least one processor for analyzing data derived from at least one set of known sample data from the output signals, at least one classification algorithm derived from the training algorithm for classifying unknown sample data, wherein the at least one training algorithm analyzes the at least one sample data set to derive at least one rule used by said classification algorithm for identifying at least one radionuclide emission detected by the detecting device.
Sivakumar, S; Venkatesan, A; Soundhirarajan, P; Khatiwada, Chandra Prasad
2015-12-05
In this research, a chemical precipitation method was used to synthesize undoped and doped cadmium oxide nanoparticles, which were characterized by TG-DTA, XRD, FT-IR, and SEM with EDX, and assessed for antibacterial activity. The melting points, thermal stability and kinetic parameters such as entropy (ΔS), enthalpy (ΔH), Gibbs energy (ΔG), activation energy (E) and frequency factor (A) were evaluated from the TG-DTA measurements. X-ray diffraction (XRD) analysis showed that the synthesized products are spherical in shape with a cubic structure. The functional groups and band areas of the samples were established by Fourier transform infrared (FT-IR) spectroscopy. The direct and indirect band gap energies of the pure and doped samples were determined by UV-Vis-DRS. The surface morphology, elemental composition and particle sizes were evaluated by scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS). Finally, antibacterial assays indicated that Gram-positive and Gram-negative bacteria are more active in transporter, dehydrogenase and periplasmic enzymatic activities toward the pure and doped samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Diaz-Rodriguez, Sebastian; Bozada, Samantha M; Phifer, Jeremy R; Paluch, Andrew S
2016-11-01
We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of [Formula: see text] log units (ranking 15 out of 62 entries), the correlation coefficient (R) was [Formula: see text] (ranking 35), and [Formula: see text] of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.
Adaptive Metropolis Sampling with Product Distributions
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lee, Chiu Fan
2005-01-01
The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution π(x). It works by repeatedly sampling a separate proposal distribution T(x, x′) to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t′) : t′ < t} to estimate the product distribution that has the least Kullback-Leibler distance to π. That estimate is the information-theoretically optimal mean-field approximation to π. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
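A simplified sketch of the product-proposal idea: fit per-coordinate Gaussians to the walk history, then run independence MH with that frozen product proposal. Freezing after a tuning phase keeps the final chain a valid MH sampler; the paper's scheme instead adapts continuously with information-theoretic updates, which is not reproduced here:

```python
# Independence MH with a product (mean-field) proposal fitted to history.
import numpy as np

rng = np.random.default_rng(7)
def log_pi(x):                         # toy correlated Gaussian target
    return -0.5 * (x[0] ** 2 + (x[1] - 0.5 * x[0]) ** 2)

# tuning phase: plain random-walk MH to gather history
x, hist = np.zeros(2), []
for _ in range(5000):
    prop = x + 0.5 * rng.normal(size=2)
    if np.log(rng.random()) < log_pi(prop) - log_pi(x):
        x = prop
    hist.append(x.copy())
mu, sig = np.mean(hist, 0), np.std(hist, 0) + 1e-6   # fitted product proposal

def log_T(z):                          # log-density of the product proposal
    return float(np.sum(-0.5 * ((z - mu) / sig) ** 2 - np.log(sig)))

# sampling phase: independence MH with the frozen product proposal
samples = []
for _ in range(20000):
    prop = mu + sig * rng.normal(size=2)
    # acceptance ratio pi(x')T(x) / (pi(x)T(x')) in log form
    if np.log(rng.random()) < (log_pi(prop) - log_pi(x)) + (log_T(x) - log_T(prop)):
        x = prop
    samples.append(x.copy())
print(np.mean(samples, axis=0))        # ~ [0, 0]
```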
Gibbs free energy difference between the undercooled liquid and the beta phase of a Ti-Cr alloy
NASA Technical Reports Server (NTRS)
Ohsaka, K.; Trinh, E. H.; Holzer, J. C.; Johnson, W. L.
1992-01-01
The heat of fusion and the specific heats of the solid and liquid have been experimentally determined for a Ti60Cr40 alloy. The data are used to evaluate the Gibbs free energy difference, ΔG, between the liquid and the beta phase as a function of temperature, to verify a reported spontaneous vitrification (SV) of the beta phase in Ti-Cr alloys. The results show that SV of an undistorted beta phase in the Ti60Cr40 alloy at 873 K is not feasible because ΔG is positive at that temperature. However, ΔG may become negative if additional excess free energy is stored in the beta phase in the form of defects.
Effect of Surface Excess Energy Transport on the Rupture of an Evaporating Film
NASA Astrophysics Data System (ADS)
Luo, Yan; Zhou, Jianqiu; Yang, Xia; Liu, Rong
2018-05-01
In most existing works on the instabilities of an evaporating film, the energy boundary condition takes into account only the contributions of the evaporation latent heat and heat conduction in the liquid. We use a new generalized energy boundary condition at the evaporating liquid-vapor interface in which the contribution of the transport of the Gibbs excess energy is included. We have derived long-wave equations in which the film thickness and the interfacial temperature are coupled, to describe the dynamics of an evaporating thin film. Our computations show that the transport of the Gibbs excess internal energy delays the rupture of thin films driven by the van der Waals force, evaporation and vapor recoil.
Xiong, Kan; Asher, Sanford A
2010-01-01
We used CD and UV resonance Raman spectroscopy to study the impact of alcohols on the conformational equilibria and relative Gibbs free energy landscapes along the Ramachandran Ψ-coordinate of a mainly poly-Ala peptide, AP, of sequence AAAAA(AAARA)3A. 2,2,2-Trifluoroethanol (TFE) most stabilizes the α-helix-like conformations, followed by ethanol, methanol and pure water. The π-bulge conformation is stabilized more than the α-helix, while the 310-helix is destabilized by the alcohol-increased hydrophobicity. Turns are also stabilized by alcohols. We also found that while TFE induces more α-helices, it favors multiple, shorter helix segments. PMID:20225890
Pozsgay, B; Mestyán, M; Werner, M A; Kormos, M; Zaránd, G; Takács, G
2014-09-12
We study the nonequilibrium time evolution of the spin-1/2 anisotropic Heisenberg (XXZ) spin chain, with a choice of dimer product and Néel states as initial states. We investigate numerically various short-ranged spin correlators in the long-time limit and find that they deviate significantly from predictions based on the generalized Gibbs ensemble (GGE) hypotheses. By computing the asymptotic spin correlators within the recently proposed quench-action formalism [Phys. Rev. Lett. 110, 257203 (2013)], however, we find excellent agreement with the numerical data. We, therefore, conclude that the GGE cannot give a complete description even of local observables, while the quench-action formalism correctly captures the steady state in this case.
Relations between dissipated work and Rényi divergences in the generalized Gibbs ensemble
NASA Astrophysics Data System (ADS)
Wei, Bo-Bo
2018-04-01
In this work, we show that the dissipation in a many-body system under an arbitrary nonequilibrium process is related to the Rényi divergences between two states along the forward and reversed dynamics under a very general family of initial conditions. This relation generalizes the links between dissipated work and Rényi divergences to quantum systems with conserved quantities whose equilibrium state is described by the generalized Gibbs ensemble. The relation is applicable for quantum systems with conserved quantities and can be applied to protocols driving the system between integrable and chaotic regimes. We demonstrate our ideas by considering the one-dimensional transverse quantum Ising model and the Jaynes-Cummings model which are driven out of equilibrium.
Vapor-liquid phase equilibria of water modelled by a Kim-Gordon potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maerzke, K A; McGrath, M J; Kuo, I W
2009-03-16
Gibbs ensemble Monte Carlo simulations were carried out to investigate the properties of a frozen-electron-density (or Kim-Gordon, KG) model of water along the vapor-liquid coexistence curve. Because of its theoretical basis, such a KG model provides for seamless coupling to Kohn-Sham density functional theory for use in mixed quantum mechanics/molecular mechanics (QM/MM) implementations. The Gibbs ensemble simulations indicate rather limited transferability of such a simple KG model to other state points. Specifically, a KG model that was parameterized by Barker and Sprik to the properties of liquid water at 300 K yields saturated vapor pressures and a critical temperature that are significantly under- and over-estimated, respectively.
Gibbs-Donnan ratio and channel conductance of Tetrahymena cilia in mixed solution of K+ and Ca2+.
Oosawa, Y; Kasai, M
1988-01-01
A single cation channel from Tetrahymena cilia was incorporated into planar lipid bilayers. This channel is voltage-independent and permeable to K+ and Ca2+. In experiments with mixed solutions in which the concentrations of K+ and Ca2+ were varied, the single-channel conductance was found to be influenced by the Gibbs-Donnan ratio. The data are explained by assuming that the binding sites of this channel were always occupied by two potassium ions or one calcium ion under the present experimental conditions (5 mM-90 mM K+ and 0.5 mM-35 mM Ca2+) and that these bound cations determined the channel conductivity. PMID:2462927
Generalized Gibbs distribution and energy localization in the semiclassical FPU problem
NASA Astrophysics Data System (ADS)
Hipolito, Rafael; Danshita, Ippei; Oganesyan, Vadim; Polkovnikov, Anatoli
2011-03-01
We investigate the dynamics of the weakly interacting quantum mechanical Fermi-Pasta-Ulam (qFPU) model in the semiclassical limit below the stochasticity threshold. Within this limit we find that initial quantum fluctuations lead to the damping of FPU oscillations and relaxation of the system to a slowly evolving steady state with energy localized within a few momentum modes. We find that in large systems this state can be described by the generalized Gibbs ensemble (GGE), with the Lagrange multipliers being very weak functions of time. This ensemble gives an accurate description of the instantaneous correlation functions, both quadratic and quartic. Based on these results we conjecture that the GGE generically appears as a prethermalized state in weakly non-integrable systems.
Quantum Chemical Approach to Estimating the Thermodynamics of Metabolic Reactions
Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán
2014-01-01
Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism. PMID:25387603
DFT Studies of SN2 Dechlorination of Polychlorinated Biphenyls.
Krzemińska, Agnieszka; Paneth, Piotr
2016-06-21
Nucleophilic dechlorination of all 209 PCB congeners by the ethylene glycol anion has been studied theoretically at the DFT level. The obtained Gibbs free energies of activation are in the range 7-22 kcal/mol. The reaction Gibbs free energies indicate that all reactions are virtually irreversible. Due to geometric constraints, these reactions proceed through a rather atypical attack, with the attacking oxygen atom nearly perpendicular to the attacked C-Cl bond. The chlorine atoms most prone to substitution are those occupying the ortho (2, 2', 6, 6') positions. These results provide extensive information on PEG/KOH-dependent PCB degradation. They can also be used in further developments of reaction class transition state theory (RC-TST) for the description of complex reactive systems encountered, for example, in combustion processes.
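To put the quoted barriers in perspective, transition state theory converts a Gibbs energy of activation into a rate constant via the Eyring relation (a standard formula; the numerical estimates below are rough room-temperature values, not taken from the paper):

```latex
k \;=\; \frac{k_B T}{h} \, e^{-\Delta G^{\ddagger} / RT}
```

At 298 K this maps ΔG‡ ≈ 7 kcal/mol to k on the order of 10^7 s^-1 and ΔG‡ ≈ 22 kcal/mol to roughly 10^-4 s^-1, i.e., from essentially instantaneous to timescales of minutes to hours.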
Annealed Importance Sampling Reversible Jump MCMC algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios; Andrieu, Christophe
2013-03-20
It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise of routinely tackling transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as an "exact approximation" of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
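The annealed importance sampling ingredient can be sketched generically as follows (textbook AIS in the sense of Neal 2001, bridging two fixed-dimension densities; the aisRJ method grafts this machinery onto transdimensional RJ-MCMC moves, which is not shown here, and all names are illustrative):

```python
import numpy as np

def mh_step(x, log_target, rng, scale=0.5):
    """One random-walk Metropolis step leaving exp(log_target) invariant."""
    prop = x + scale * rng.standard_normal(np.shape(x))
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        return prop
    return x

def ais_particle(x0, log_p0, log_p1, betas, rng):
    """Run one AIS particle along the geometric bridge between log_p0 and
    log_p1 and return its final state and log importance weight."""
    x, log_w = np.asarray(x0, dtype=float), 0.0
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # accumulate the incremental weight at the current state ...
        log_w += (b - b_prev) * (log_p1(x) - log_p0(x))
        # ... then move with a kernel invariant for the bridged density
        bridged = lambda y, b=b: (1.0 - b) * log_p0(y) + b * log_p1(y)
        x = mh_step(x, bridged, rng)
    return x, log_w
```

Averaging exp(log_w) over many particles gives an unbiased estimate of the ratio of normalizing constants, which is what allows the annealing to mitigate poorly designed jump proposals without introducing bias.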
Glinski, Donna A; Purucker, S Thomas; Van Meter, Robin J; Black, Marsha C; Henderson, W Matthew
2018-06-18
To study spray drift contributions to non-targeted habitats, pesticide concentrations in stemflow (water flowing down the trunk of a tree during a rain event), throughfall (water from the tree canopy only), and surface water in an agriculturally impacted wetland area near Tifton, Georgia, USA were measured (2015-2016). Agricultural fields and sampling locations were on the University of Georgia's Gibbs Research Farm, Tifton, GA. Samples were screened for more than 160 pesticides, and cumulatively, 32 different pesticides were detected across matrices. The data indicate that herbicides and fungicides were present in all types of environmental samples analyzed, while insecticides were detected only in surface water samples. The highest pesticide concentration observed was 10.50 μg/L of metolachlor in an August 2015 surface water sample. Metolachlor, tebuconazole, and fipronil were the most frequently detected herbicide, fungicide, and insecticide, respectively, regardless of sample origin. The most frequently detected pesticide in surface water and stemflow samples was metolachlor (0.09-10.5 μg/L); however, the most commonly detected pesticide in throughfall samples was biphenyl (0.02-0.07 μg/L). These data help determine the importance of indirect chemical exposures to non-targeted habitats by assessing inputs from stemflow and throughfall into surface waters.
Novel search algorithms for a mid-infrared spectral library of cotton contaminants.
Loudermilk, J Brian; Himmelsbach, David S; Barton, Franklin E; de Haseth, James A
2008-06-01
During harvest, a variety of plant based contaminants are collected along with cotton lint. The USDA previously created a mid-infrared, attenuated total reflection (ATR), Fourier transform infrared (FT-IR) spectral library of cotton contaminants for contaminant identification as the contaminants have negative impacts on yarn quality. This library has shown impressive identification rates for extremely similar cellulose based contaminants in cases where the library was representative of the samples searched. When spectra of contaminant samples from crops grown in different geographic locations, seasons, and conditions and measured with a different spectrometer and accessories were searched, identification rates for standard search algorithms decreased significantly. Six standard algorithms were examined: dot product, correlation, sum of absolute values of differences, sum of the square root of the absolute values of differences, sum of absolute values of differences of derivatives, and sum of squared differences of derivatives. Four categories of contaminants derived from cotton plants were considered: leaf, stem, seed coat, and hull. Experiments revealed that the performance of the standard search algorithms depended upon the category of sample being searched and that different algorithms provided complementary information about sample identity. These results indicated that choosing a single standard algorithm to search the library was not possible. Three voting scheme algorithms based on result frequency, result rank, category frequency, or a combination of these factors for the results returned by the standard algorithms were developed and tested for their capability to overcome the unpredictability of the standard algorithms' performances. The group voting scheme search was based on the number of spectra from each category of samples represented in the library returned in the top ten results of the standard algorithms. This group algorithm was able to identify correctly as many test spectra as the best standard algorithm without relying on human choice to select a standard algorithm to perform the searches.
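A frequency-based voting search of the kind described can be sketched as follows (an illustrative reconstruction, not the USDA code; representing the library as a list of (spectrum, category) pairs is an assumption, and the six metrics of the paper are reduced here to any higher-is-better similarity functions):

```python
import numpy as np

def cosine_score(query, ref):
    """Dot-product (cosine) similarity between two spectra."""
    return np.dot(query, ref) / (np.linalg.norm(query) * np.linalg.norm(ref))

def group_vote(query, library, scorers, top_n=10):
    """Return the sample category most frequently represented in the
    top-N hits across all scoring algorithms (group voting scheme)."""
    votes = {}
    for score in scorers:
        ranked = sorted(library, key=lambda entry: score(query, entry[0]),
                        reverse=True)
        for _, category in ranked[:top_n]:
            votes[category] = votes.get(category, 0) + 1
    return max(votes, key=votes.get)
```

Distance-type metrics (sums of absolute or squared differences) fit the same scheme after negation, so that larger always means more similar.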
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and of nonlinearity between the property and the spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and of the unknown samples; the Euclidean distance between the net analyte signal of an unknown sample and those of the calibration samples was then calculated and used as a similarity index. According to this similarity index, a local calibration set was selected individually for each unknown sample. Finally, a local PLS regression model was built on the local calibration set of each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
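The local-regression recipe can be sketched as follows (a minimal sketch: a plain Euclidean distance on the raw spectra stands in for the paper's net-analyte-signal similarity index, and k and n_components are illustrative choices):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def local_pls_predict(x_new, X_cal, y_cal, k=30, n_components=5):
    """Predict one unknown sample from a locally selected calibration set."""
    d = np.linalg.norm(X_cal - x_new, axis=1)   # similarity index (distance)
    idx = np.argsort(d)[:k]                     # local calibration set
    model = PLSRegression(n_components=n_components)
    model.fit(X_cal[idx], y_cal[idx])
    return model.predict(x_new.reshape(1, -1)).item()
```

Because each unknown sample gets its own small model, nonlinearity over the whole calibration range is handled piecewise while each local model stays low-rank.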
Hoppe, Andreas; Hoffmann, Sabrina; Holzhütter, Hermann-Georg
2007-01-01
Background In recent years, constrained optimization – usually referred to as flux balance analysis (FBA) – has become a widely applied method for the computation of stationary fluxes in large-scale metabolic networks. The striking advantage of FBA compared with kinetic modeling is that it basically requires only knowledge of the stoichiometry of the network. On the other hand, results of FBA are to a large degree hypothetical because the method relies on plausible but hardly provable optimality principles that are thought to govern metabolic flux distributions. Results To augment the reliability of FBA-based flux calculations we propose an additional side constraint which assures thermodynamic realizability, i.e. that the flux directions are consistent with the corresponding changes of Gibbs free energies. The latter depend on metabolite levels, for which plausible ranges can be inferred from experimental data. Computationally, our method results in the solution of a mixed integer linear optimization problem with a quadratic scoring function. An optimal flux distribution together with a metabolite profile is determined which assures thermodynamic realizability with minimal deviations of metabolite levels from their expected values. We applied our novel approach to two exemplary metabolic networks of different complexity, the metabolic core network of erythrocytes (30 reactions) and the metabolic network iJR904 of Escherichia coli (931 reactions). Our calculations show that increasing network complexity entails increasing sensitivity of the predicted flux distributions to variations of the standard Gibbs free energy changes and metabolite concentration ranges. We demonstrate the usefulness of our method for assessing critical concentrations of external metabolites preventing attainment of a metabolic steady state. Conclusion Our method incorporates the thermodynamic link between flux directions and metabolite concentrations into a practical computational algorithm. The weakness of conventional FBA of relying on intuitive assumptions about the reversibility of biochemical reactions is overcome. This enables the computation of reliable flux distributions even under extreme conditions of the network (e.g. enzyme inhibition, depletion of substrates or accumulation of end products) where metabolite concentrations may be drastically altered. PMID:17543097
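The thermodynamic side constraint has this generic shape (a schematic statement of realizability under the usual dilute-solution assumptions; the notation is mine, not the paper's):

```latex
\Delta_r G_j \;=\; \Delta_r G_j^{\circ} + RT \sum_i s_{ij} \ln c_i ,
\qquad
v_j \, \Delta_r G_j \;<\; 0 \quad \text{for every reaction with } v_j \neq 0 ,
```

where s_{ij} are stoichiometric coefficients, v_j the fluxes, and the metabolite concentrations c_i are restricted to their plausible experimental ranges.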
NASA Astrophysics Data System (ADS)
Nasution, A. B.; Efendi, S.; Suwilo, S.
2018-04-01
Embedding audio samples as 8-bit data with the LSB algorithm affects the PSNR value, degrading the quality (fidelity) of the cover image after insertion. In this research, audio samples are therefore embedded using 5 bits with the MLSB algorithm to reduce the amount of inserted data; beforehand, the audio samples are compressed with the Arithmetic Coding algorithm to reduce file size and encrypted with the Triple DES algorithm to better secure them. The resulting PSNR values exceed 50 dB, so it can be concluded that the image quality remains good, since PSNR values above 40 dB are generally considered acceptable.
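For reference, the fidelity metric quoted above is the standard PSNR of an m × n 8-bit stego image K against its cover I (textbook definition):

```latex
\mathrm{PSNR} \;=\; 10 \log_{10} \frac{255^2}{\mathrm{MSE}},
\qquad
\mathrm{MSE} \;=\; \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( I_{ij} - K_{ij} \right)^2 .
```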
NASA Astrophysics Data System (ADS)
Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw
2012-12-01
We explore the relation between the correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided, and the consequences for the interpretation of sample entropy, its relative consistency, and some of the algorithms for parameter selection for this quantity are discussed. To obtain an exact algorithmic relation between the three parameters we construct a very fast algorithm for their simultaneous calculation, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10^4 points within minutes on an average notebook computer.
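A plain reference implementation of sample entropy may make the template-matching structure concrete (an O(N²) sketch for clarity, using all points as templates as the paper advocates, but without the speed optimizations the paper is actually about):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (Chebyshev distance <= r * std)
    also match for m + 1 points. Assumes the series is long enough
    that some matches of length m + 1 exist."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def match_count(length):
        tpl = np.lib.stride_tricks.sliding_window_view(x, length)
        count = 0
        for i in range(len(tpl) - 1):   # pairs i < j, so no self-matches
            d = np.max(np.abs(tpl[i + 1:] - tpl[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    return -np.log(match_count(m + 1) / match_count(m))
```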
Zhang, Xiaoheng; Wang, Lirui; Cao, Yao; Wang, Pin; Zhang, Cheng; Yang, Liuyang; Li, Yongming; Zhang, Yanling; Cheng, Oumei
2018-02-01
Diagnosis of Parkinson's disease (PD) based on speech data has been proved to be an effective approach in recent years. However, current research focuses on feature extraction and classifier design and does not consider instance selection. Previous work by the authors showed that instance selection can improve classification accuracy; however, no attention has so far been paid to the relationship between speech samples and features. Therefore, a new PD diagnosis algorithm is proposed in this paper that simultaneously selects speech samples and features, based on a relevant feature weighting algorithm and a multiple kernel method, so as to exploit their synergy and thereby improve classification accuracy. Experimental results showed that the proposed algorithm yields a clear improvement in classification accuracy: it obtained a mean classification accuracy of 82.5%, which was 30.5% higher than that of the relevant algorithm. In addition, the proposed algorithm detected synergy effects between speech samples and features, which is valuable for speech marker extraction.
Structure-guided Protein Transition Modeling with a Probabilistic Roadmap Algorithm.
Maximova, Tatiana; Plaku, Erion; Shehu, Amarda
2016-07-07
Proteins are macromolecules in perpetual motion, switching between structural states to modulate their function. A detailed characterization of the precise yet complex relationship between protein structure, dynamics, and function requires elucidating transitions between functionally-relevant states. Doing so challenges both wet and dry laboratories, as protein dynamics involves disparate temporal scales. In this paper we present a novel, sampling-based algorithm to compute transition paths. The algorithm exploits two main ideas. First, it leverages known structures to initialize its search and to define a reduced conformation space for rapid sampling. This is key to addressing the insufficient sampling issue suffered by sampling-based algorithms. Second, the algorithm embeds samples in a nearest-neighbor graph in which transition paths can be efficiently computed via queries. The algorithm adapts the probabilistic roadmap framework that is popular in robot motion planning. In addition to efficiently computing lowest-cost paths between any given structures, the algorithm allows investigating hypotheses regarding the order of experimentally-known structures in a transition event. This novel contribution is likely to open up new avenues of research. Detailed analysis is presented on multiple-basin proteins of relevance to human disease. Multiscaling and the AMBER ff14SB force field are used to obtain energetically-credible paths at atomistic detail.
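The roadmap idea reduces, in its simplest form, to a nearest-neighbor graph plus a shortest-path query (a generic sketch of the PRM framework named above, not the paper's structure-guided implementation; the edge-cost function is an assumed user-supplied quantity, e.g., an energy-based distance):

```python
import numpy as np
import networkx as nx

def roadmap_path(samples, cost, start, goal, k=8):
    """Connect each sampled conformation to its k nearest neighbors and
    return the lowest-cost path between two sample indices."""
    pts = np.asarray(samples)
    G = nx.Graph()
    for i, s in enumerate(pts):
        d = np.linalg.norm(pts - s, axis=1)
        for j in np.argsort(d)[1:k + 1]:        # skip self at index 0
            G.add_edge(i, int(j), weight=cost(pts[i], pts[int(j)]))
    return nx.dijkstra_path(G, start, goal, weight="weight")
```

Once the graph is built, hypotheses about intermediate states reduce to cheap queries, e.g., asking whether the lowest-cost path from structure A to structure C passes near structure B.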
Remote sensing estimation of colored dissolved organic matter (CDOM) in optically shallow waters
NASA Astrophysics Data System (ADS)
Li, Jiwei; Yu, Qian; Tian, Yong Q.; Becker, Brian L.
2017-06-01
It is not well understood how the bottom reflectance of optically shallow waters affects the performance of colored dissolved organic matter (CDOM) retrieval algorithms. This study proposes a new algorithm that accounts for bottom reflectance when estimating CDOM absorption in optically shallow inland or coastal waters. Field sampling was conducted during four research cruises within the Saginaw River, Kawkawlin River and Saginaw Bay of Lake Huron. A stratified field sampling campaign collected water samples, determined the depth at each sampling location and measured optical properties. The sampled CDOM absorption at 440 nm ranged broadly from 0.12 to 8.46 m-1. Field sample analysis revealed that bottom reflectance does significantly change the water's apparent optical properties. We developed a CDOM retrieval algorithm (Shallow water Bio-Optical Properties algorithm, SBOP) that effectively reduces uncertainty by considering bottom reflectance in shallow waters. By incorporating the bottom contribution in upwelling radiances, the SBOP algorithm was able to explain 74% of the variance of CDOM values (RMSE = 0.22 and R2 = 0.74). A bottom effect index (BEI) was introduced to efficiently separate optically shallow and optically deep waters. Based on the BEI, an adaptive approach was proposed that uses the amount of bottom effect to identify the most suitable algorithm (the optically shallow water algorithm [SBOP] or the optically deep water algorithm [QAA-CDOM]), further improving CDOM estimation (RMSE = 0.22 and R2 = 0.81). Our results potentially help to advance the capability of remote sensing in monitoring carbon pools at the land-water interface.
2008-01-01
The kinetics and thermodynamics of binding of transportan 10 (tp10) and four of its variants to phospholipid vesicles, and the kinetics of peptide-induced dye efflux, were compared. Tp10 is a 21-residue, amphipathic, cationic, cell-penetrating peptide similar to helical antimicrobial peptides. The tp10 variants examined include amidated and free peptides, and replacements of tyrosine by tryptophan. Carboxy-terminal amidation or substitution of tryptophan for tyrosine enhance binding and activity. The Gibbs energies of peptide binding to membranes determined experimentally and calculated from the interfacial hydrophobicity scale are in good agreement. The Gibbs energy for insertion into the bilayer core was calculated using hydrophobicity scales of residue transfer from water to octanol and to the membrane/water interface. Peptide-induced efflux becomes faster as the Gibbs energies for binding and insertion of the tp10 variants decrease. If anionic lipids are included, binding and efflux rate increase, as expected because all tp10 variants are cationic and an electrostatic component is added. Whether the most important effect of peptide amidation is the change in charge or an enhancement of helical structure, however, still needs to be established. Nevertheless, it is clear that the changes in efflux rate reflect the differences in the thermodynamics of binding and insertion of the free and amidated peptide groups. PMID:18260641
A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation.
Mignotte, Max
2010-06-01
This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each of the segmentation results to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of an interesting penalized maximum probabilistic Rand estimator with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.
Seal, R.R.; Inan, E.E.; Hemingway, B.S.
2001-01-01
The Gibbs free energy of formation of nukundamite (Cu3.38Fe0.62S4) was calculated from published experimental studies of the reaction 3.25 Cu3.38Fe0.62S4 + S2 = 11 CuS + 2 FeS2 in order to correct an erroneous expression in the published record. The correct expression describing the Gibbs free energy of formation (kJ·mol-1) of nukundamite relative to the elements and ideal S2 gas is ΔfG°(nukundamite) = -549.75 + 0.23242 T + 3.1284 T^0.5 (T in K), with an uncertainty of 0.6%. An evaluation of the phase equilibria of nukundamite with associated phases in the system Cu-Fe-S as a function of temperature and sulfur fugacity indicates that nukundamite is stable from 224 to 501 °C at high sulfidation states. At its greatest extent, at 434 °C, the stability field of nukundamite is only 0.4 log f(S2) units wide, which explains its rarity. Equilibria between nukundamite and bornite, which limit the stability of both phases, involve bornite compositions that deviate significantly from stoichiometric Cu5FeS4. Under equilibrium conditions in the system Cu-Fe-S, nukundamite + chalcopyrite is not a stable assemblage at any temperature.
NASA Astrophysics Data System (ADS)
Das, Shreya; Nag, S. K.
2017-09-01
The present study has been carried out covering two blocks—Suri I and II—in Birbhum district, West Bengal, India. The evaluation focuses on the occurrence, distribution and geochemistry of fluoride in 26 water samples collected from borewells spread homogeneously across the entire study area. Quantitative chemical analysis of the groundwater samples has shown that samples from two locations—Gangta and Dhalla—contain fluoride above the permissible limit prescribed by the WHO during both post-monsoon and pre-monsoon sampling sessions. Based on the results of Gibbs diagrams, rock-water interaction was identified as the significant factor controlling groundwater geochemistry during both sampling sessions. Geochemical modeling studies have revealed that fluorite (CaF2) is indeed present as a significant fluoride-bearing mineral in the groundwaters of this study area. Calcite (CaCO3) is one of the most common minerals with which fluorite remains associated, and saturation index calculations have revealed that calcite-fluorite geochemistry is the dominant factor controlling the fluoride concentration in this area during both post- and pre-monsoon. High-fluoride waters have also been found to be of 'bicarbonate' type, showing an increase of sodium in water with a decrease of calcium.
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
NASA Astrophysics Data System (ADS)
Elahi, Sana; kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on the p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
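An ISTA-style iteration with a generalized shrinkage operator can be sketched as follows (a minimal sketch under strong simplifying assumptions: the image itself is treated as sparse, no sparsifying transform or step-size tuning is included, and the p-shrinkage form below is one common choice, not necessarily the paper's exact operator):

```python
import numpy as np

def p_threshold(x, lam, p=0.8):
    """Generalized p-shrinkage; reduces to soft thresholding at p = 1."""
    mag = np.abs(x)
    shrunk = np.maximum(mag - lam * (mag + 1e-12) ** (p - 1), 0.0)
    return shrunk * np.exp(1j * np.angle(x))

def ista_mri(y, mask, lam=0.01, p=0.8, iters=50):
    """Recover an image from undersampled k-space y (mask = sampling
    pattern) by alternating a data-consistency gradient step with
    p-thresholding."""
    x = np.zeros_like(y)
    for _ in range(iters):
        grad = np.fft.ifft2(mask * np.fft.fft2(x) - y)  # residual pulled back to image space
        x = p_threshold(x - grad, lam, p)
    return x
```

Taking 0 < p < 1 makes the penalty non-convex, which is the trade the abstract describes: stronger sparsity promotion at the price of convexity guarantees.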
NASA Technical Reports Server (NTRS)
Chang, H.
1976-01-01
A computer program using Lemke, Salkin and Spielberg's Set Covering Algorithm (SCA) to optimize a traffic model problem in the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) was documented. SCA forms a submodule of SAMPLE and provides for input and output, subroutines, and an interactive feature for performing the optimization and arranging the results in a readily understandable form for output.
Robust algorithm for aligning two-dimensional chromatograms.
Gros, Jonas; Nabi, Deedar; Dimitriou-Christidis, Petros; Rutler, Rebecca; Arey, J Samuel
2012-11-06
Comprehensive two-dimensional gas chromatography (GC × GC) chromatograms typically exhibit run-to-run retention time variability. Chromatogram alignment is often a desirable step prior to further analysis of the data, for example, in studies of environmental forensics or weathering of complex mixtures. We present a new algorithm for aligning whole GC × GC chromatograms. This technique is based on alignment points that have locations indicated by the user both in a target chromatogram and in a reference chromatogram. We applied the algorithm to two sets of samples. First, we aligned the chromatograms of twelve compositionally distinct oil spill samples, all analyzed using the same instrument parameters. Second, we applied the algorithm to two compositionally distinct wastewater extracts analyzed using two different instrument temperature programs, thus involving larger retention time shifts than the first sample set. For both sample sets, the new algorithm performed favorably compared to two other available alignment algorithms: that of Pierce, K. M.; Wood, Lianna F.; Wright, B. W.; Synovec, R. E. Anal. Chem. 2005, 77, 7735-7743, and 2-D COW from Zhang, D.; Huang, X.; Regnier, F. E.; Zhang, M. Anal. Chem. 2008, 80, 2664-2671. The new algorithm achieves the best matches of retention times for test analytes, avoids some artifacts which result from the other alignment algorithms, and incurs the least modification of quantitative signal information.
Phase Equilibria and Thermodynamic Descriptions of Ag-Ge and Ag-Ge-Ni Systems
NASA Astrophysics Data System (ADS)
Rajkumar, V. B.; Chen, Sinn-Wen
2018-07-01
Gibbs energy modeling of the Ag-Ge and Ag-Ge-Ni systems was done using the CALPHAD (calculation of phase diagrams) method with associated data from this work and relevant literature information. In the Ag-Ge system, the solidus temperatures of Ag-rich alloys are measured using differential thermal analysis, and the energy of mixing for the FCC_A1 phase is calculated using the special quasi-random structures technique. The isothermal sections of the Ag-Ge-Ni system at 1023 K and 673 K are also experimentally determined. These data and findings in the relevant literature are used to model the Gibbs energy of the Ag-Ge and Ag-Ge-Ni systems. A reaction scheme and a liquidus projection of the Ag-Ge-Ni system are determined.
Thermodynamic Study of the Nickel Addition in Zinc Hot-Dip Galvanizing Baths
NASA Astrophysics Data System (ADS)
Pistofidis, N.; Vourlias, G.
2010-01-01
A usual practice during zinc hot-dip galvanizing is the addition of nickel to the liquid zinc, which is used to inhibit the Sandelin effect. Its action is due to the fact that the ζ (zeta) phase of the Fe-Zn system is replaced by the Τ (tau) phase of the Fe-Zn-Ni system. In the present work an attempt is made to explain the formation of the Τ phase thermodynamically. For this reason the Gibbs free energy changes for the Τ and ζ phases were calculated. The excess free energy for the system was calculated with the Redlich-Kister polynomial. From this calculation it was deduced that the Gibbs energy change for the tau phase is negative; its formation is therefore spontaneous.
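The excess Gibbs energy referred to above has the standard Redlich-Kister form for a binary system (textbook expression; the interaction parameters are system-specific fitting quantities):

```latex
G^{\mathrm{xs}} \;=\; x_A x_B \sum_{k=0}^{n} {}^{k}L_{A,B} \, (x_A - x_B)^{k}
```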
NASA Astrophysics Data System (ADS)
Sudolská, Mária; Cantrel, Laurent; Budzák, Šimon; Černušák, Ivan
2014-03-01
Monohydrated complexes of iodine species (I, I2, HI, and HOI) have been studied by correlated ab initio calculations. The standard enthalpies of formation, Gibbs free energies, and the temperature dependence of the heat capacities at constant pressure were calculated. The values obtained have been implemented in the ASTEC nuclear accident simulation software to check the thermodynamic stability of hydrated iodine compounds in the reactor coolant system and in the nuclear containment building of a pressurised water reactor during a severe accident. It can be concluded that the iodine complexes are thermodynamically unstable, as indicated by positive Gibbs free energies, and would be present only at trace concentrations in severe accident conditions; it is thus well justified to consider only pure iodine species and not their hydrated forms.
Estimation hydrophilic-lipophilic balance number of surfactants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawignya, Harsa; Prasetyaningrum, Aji
Each type of surfactant has a different hydrophilic-lipophilic balance (HLB) number. There are several methods for determining the HLB number: from physical properties of the surfactant (solubility, cloud point, and interfacial tension), from the critical micelle concentration (CMC), and from thermodynamic properties (Gibbs free energy). This paper proposes to determine HLB numbers by interrelating these methods. The results of the study indicate that the CMC method described by Hair and Moulik is especially suited to nonionic surfactants. The application of excess Gibbs free energy, and by implication activity coefficients, provides the ability to predict the behavior of surfactants in multicomponent mixtures of different concentrations. Determination of the HLB number from solubility and cloud-point parameters is specific to anionic and nonionic surfactants, but these methods are not available for cationic surfactants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sobolev, S. L.
An analytical model has been developed to describe the influence of solute trapping during rapid alloy solidification on the components of the Gibbs free energy change at the phase interface, with emphasis on the solute drag energy. For relatively low interface velocity V < V_D, where V_D is the characteristic diffusion velocity, all the components, namely the mixing part, the local nonequilibrium part, and the solute drag, depend significantly on solute diffusion and partitioning. When V ≥ V_D, the local nonequilibrium effects lead to a sharp transition to diffusionless solidification. The transition is accompanied by complete solute trapping and vanishing solute drag energy, i.e. partitionless and “dragless” solidification.
Tsallis thermostatistics for finite systems: a Hamiltonian approach
NASA Astrophysics Data System (ADS)
Adib, Artur B.; Moreira, André A.; Andrade, José S., Jr.; Almeida, Murilo P.
2003-05-01
The derivation of the Tsallis generalized canonical distribution from the traditional approach of the Gibbs microcanonical ensemble is revisited (Phys. Lett. A 193 (1994) 140). We show that finite systems whose Hamiltonians obey a generalized homogeneity relation rigorously follow the nonextensive thermostatistics of Tsallis. In the thermodynamic limit, however, our results indicate that the Boltzmann-Gibbs statistics is always recovered, regardless of the type of potential among interacting particles. This approach provides, moreover, a one-to-one correspondence between the generalized entropy and the Hamiltonian structure of a wide class of systems, revealing a possible origin for the intrinsic nonlinear features present in the Tsallis formalism that lead naturally to power-law behavior. Finally, we confirm these exact results through extensive numerical simulations of the Fermi-Pasta-Ulam chain of anharmonic oscillators.
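For reference, the generalized entropy at the heart of this thermostatistics is the Tsallis form, which recovers the Boltzmann-Gibbs entropy in the q → 1 limit (standard definition):

```latex
S_q \;=\; k \, \frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad
\lim_{q \to 1} S_q \;=\; -k \sum_i p_i \ln p_i \;=\; S_{\mathrm{BG}} .
```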
NASA Astrophysics Data System (ADS)
Barber, Duncan Henry
During some postulated accidents at nuclear power stations, fuel cooling may be impaired. In such cases, the fuel heats up and the subsequent increased fission-gas release from the fuel to the gap may result in fuel sheath failure. After fuel sheath failure, the barrier between the coolant and the fuel pellets is lost or impaired, and gases and vapours from the fuel-to-sheath gap and other open voids in the fuel pellets can be vented. Gases and steam from the coolant can enter the broken fuel sheath and interact with the fuel pellet surfaces and the fission-product inclusions on the fuel surface (including material at the surface of the fuel matrix). The chemistry of this interaction is an important mechanism to model in order to assess fission-product releases from fuel. Starting in 1995, the computer program SOURCE 2.0 was developed by the Canadian nuclear industry to model fission-product release from fuel during such accidents. SOURCE 2.0 has employed an early thermochemical model of irradiated uranium dioxide fuel developed at the Royal Military College of Canada (RMC). To overcome the limitations of computers of that time, the implementation of the RMC model employed lookup tables of pre-calculated equilibrium conditions. In the intervening years, the RMC model has been improved, the power of computers has increased significantly, and thermodynamic subroutine libraries have become available. This thesis is the result of extensive work based on these three factors. A prototype computer program (referred to as SC11) has been developed that uses a thermodynamic subroutine library to calculate thermodynamic equilibria using Gibbs energy minimization. The Gibbs energy minimization requires the system temperature (T) and pressure (P), and the inventory of chemical elements (n) in the system. In order to calculate the inventory of chemical elements in the fuel, the list of nuclides and nuclear isomers modelled in SC11 had to be expanded from the list used by SOURCE 2.0. A benchmark calculation demonstrates improved agreement between the total inventory of the chemical elements included in the RMC fuel model and an ORIGEN-S calculation. ORIGEN-S is the Oak Ridge isotope generation and depletion computer program. The Gibbs energy minimizer requires a chemical database containing coefficients from which the Gibbs energy of pure compounds, gas and liquid mixtures, and solid solutions can be calculated. The RMC model of irradiated uranium dioxide fuel has been converted into the required format. The Gibbs energy minimizer has been incorporated into a new model of fission-product vaporization from the fuel surface. Calculated release fractions using the new code have been compared to results calculated with SOURCE IST 2.0P11 and to results of tests used in the validation of SOURCE 2.0. The new code shows improvements in agreement with experimental releases for a number of nuclides. Of particular significance is the better agreement between experimental and calculated release fractions for 140La. The improved agreement reflects the inclusion in the RMC model of the solubility of lanthanum (III) oxide (La2O3) in the fuel matrix. Calculated lanthanide release fractions from earlier computer programs were a challenge to environmental qualification analysis of equipment for some accident scenarios. The new prototype computer program would alleviate this concern.
Keywords: Nuclear Engineering; Material Science; Thermodynamics; Radioactive Material; Gibbs Energy Minimization; Actinide Generation and Depletion; Fission-Product Generation and Depletion.
Computation of thermodynamic equilibrium in systems under stress
NASA Astrophysics Data System (ADS)
Vrijmoed, Johannes C.; Podladchikov, Yuri Y.
2016-04-01
Metamorphic reactions may be partly controlled by the local stress distribution, as suggested by observations of phase assemblages around garnet inclusions related to an amphibolite shear zone in granulite of the Bergen Arcs in Norway. A particular example, presented in fig. 14 of Mukai et al. [1], is discussed here. A garnet crystal embedded in a plagioclase matrix is replaced on the left side by a high-pressure intergrowth of kyanite and quartz and on the right side by chlorite-amphibole. This texture apparently represents disequilibrium. In this case, the minerals adapt to the low-pressure ambient conditions only where fluids were present. Alternatively, here we compute that this particular low-pressure and high-pressure assemblage around a stressed rigid inclusion such as garnet can coexist in equilibrium. To do the computations we developed the Thermolab software package. The core of the software package consists of Matlab functions that generate Gibbs energies of minerals and melts from the Holland and Powell database [2] and of aqueous species from the SUPCRT92 database [3]. The most up-to-date solid solutions are included in a general formulation. The user provides a Matlab script to do the desired calculations using the core functions. The Gibbs energies of all minerals, solutions and species are benchmarked against THERMOCALC, Perple_X [4] and SUPCRT92 and are reproduced to within round-off error. Multi-component phase diagrams have been calculated using Gibbs minimization to benchmark against THERMOCALC and Perple_X. The Matlab script to compute equilibrium in a stressed system needs only two modifications of the standard phase diagram script. Firstly, the Gibbs energy of each phase considered in the calculation is generated for multiple values of thermodynamic pressure. Secondly, for the Gibbs minimization the proportion of the system at each particular thermodynamic pressure needs to be constrained. The user decides which part of the stress tensor is input as thermodynamic pressure. To compute a case of high and low pressure around a stressed inclusion we first did a Finite Element Method (FEM) calculation of a rigid inclusion in a viscous matrix under simple shear. From the computed stress distribution we took the local pressure (mean stress) at each grid point of the FEM calculation. This was used as the input thermodynamic pressure in the Gibbs minimization, and the result showed that it is possible to have an equilibrium situation in which chlorite-amphibole is stable in the low-pressure domain and kyanite in the high-pressure domain of the stress field around the inclusion. Interestingly, the calculation predicts the redistribution of fluid, starting from an average fluid content of the system: the fluid at equilibrium tends to accumulate in the low-pressure areas, whereas it leaves the high-pressure areas dry. Transport of fluid components does not necessarily occur by fluid flow, but may happen, for example, by diffusion. We conclude that an apparent disequilibrium texture may be explained by equilibrium under pressure variations, and apparent fluid addition by redistribution of fluid controlled by the local stress distribution. [1] Mukai et al. (2014), Journal of Petrology, 55 (8), p. 1457-1477. [2] Holland and Powell (1998), Journal of Metamorphic Geology, 16, p. 309-343. [3] Johnson et al. (1992), Computers & Geosciences, 18 (7), p. 899-947. [4] Connolly (2005), Earth and Planetary Science Letters, 236, p. 524-541.
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
NASA Astrophysics Data System (ADS)
Cook, April B.; Sutton, Tracey T.; Galbraith, John K.; Vecchione, Michael
2013-12-01
Only a minuscule fraction of the world’s largest volume of living space, the ocean’s midwater biome, has ever been sampled. As part of the International Census of Marine Life field project on Mid-Atlantic Ridge ecosystems (MAR-ECO), a discrete-depth trawling survey was conducted in 2009 aboard the NOAA FSV Henry B. Bigelow to examine the pelagic faunal assemblage structure and distribution over the Charlie-Gibbs Fracture Zone (CGFZ) of the northern Mid-Atlantic Ridge. Day/night sampling at closely spaced stations allowed the first characterization of diel vertical migration of pelagic nekton over the MAR-ECO study area. Discrete-depth sampling from 0-3000 m was conducted using a Norwegian “Krill” trawl with five codends that were opened and closed via a pre-programmed timer. Seventy-five species of fish were collected, with maximum diversity and biomass observed between depths of 700-1900 m. A gradient in sea-surface temperature and underlying watermasses, from northwest to southeast, was mirrored by a similar gradient in ichthyofaunal diversity. Using multivariate analyses, eight deep-pelagic fish assemblages were identified, with depth as the primary discriminatory variable. Strong diel vertical migration (DVM) of the mesopelagic fauna was a prevalent feature of the study area, though the numerically dominant fish, Cyclothone microdon (Gonostomatidae), exhibited a broad (0-3000 m) vertical distribution and did not appear to migrate on a diel basis. Three patterns of vertical distribution were observed in the study area: (a) DVM of mesopelagic, and possibly bathypelagic, taxa; (b) broad vertical distribution spanning meso- and bathypelagic depths; and (c) discrete vertical distribution within a limited depth range. Overall species composition and rank order of abundance of fish species agreed with two previous expeditions to the CGFZ (1982-1983 and 2004), suggesting some long-term consistency in the ichthyofaunal composition of the study area, at least in the summer. Frequent captures of putative bathypelagic fishes, shrimps, and cephalopods in the epipelagic zone (0-200 m) were confirmed. The results of this expedition reveal distributional patterns unlike those previously reported for open-ocean ecosystems, with the implication of increased transfer efficiency of surface production to great depths in the mid-North Atlantic.
Note on in situ (scanning) transmission electron microscopy study of liquid samples.
Jiang, Nan
2017-08-01
Liquid cell (scanning) transmission electron microscopy has developed rapidly, using amorphous SiNx membranes as electron-transparent windows. Current interpretations of electron beam effects are mainly based on radiolytic processes. In this note, additional effects of the electric field due to electron-beam irradiation are discussed. The electric field can be produced by the charge accumulation due to the emission of secondary and Auger electrons. Besides various beam-induced phenomena, such as nanoparticle precipitation and gas bubble formation and motion, two other effects need to be considered: one is the change of the Gibbs free energy of nucleation, and the other is the violation of Brownian motion due to ion drifting driven by the electric field.
Refined two-index entropy and multiscale analysis for complex system
NASA Astrophysics Data System (ADS)
Bian, Songhan; Shang, Pengjian
2016-10-01
As a fundamental concept in describing complex systems, entropy measures have been proposed in various forms, such as Boltzmann-Gibbs (BG) entropy, one-index entropy, two-index entropy, sample entropy, and permutation entropy. This paper proposes a new two-index entropy S_{q,δ}, which we find applicable to measuring the complexity of a wide range of systems in terms of randomness and fluctuation range. For more complex systems, the value of the two-index entropy is smaller and the correlation between the parameter δ and the entropy S_{q,δ} is weaker. By combining the refined two-index entropy S_{q,δ} with the scaling exponent h(δ), this paper analyzes the complexity of simulated series and effectively classifies several financial markets in various regions of the world.
Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matulef, Kevin Michael
The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail, which estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges, and graph structures that evolve over time. * An algorithm for the task of maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight, and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
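For the static-weight baseline of the second result, a standard weighted reservoir sampler fits in a few lines (the Efraimidis-Spirakis key method, shown as an illustrative baseline; the report's contribution, handling dynamically changing weights, is not reproduced here):

```python
import heapq
import random

def weighted_reservoir(stream, k, rng=random):
    """Keep a weight-proportional random sample of k items from a stream
    of (item, weight) pairs using the key u ** (1 / w); the k largest
    keys seen so far are retained in a min-heap."""
    heap = []
    for i, (item, w) in enumerate(stream):
        key = rng.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (key, i, item))   # index i breaks key ties
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, i, item))
    return [item for _, _, item in heap]
```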
Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT followed by fusing low- and high-frequency components. The phase congruency that can provide a contrast- and brightness-invariant representation is applied to fuse low-frequency coefficients, whereas the Log-Gabor energy that can efficiently determine the frequency coefficients from the clear and detailed parts is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual tree complex wavelet transform (DTCWT) based image fusion methods and other NSCT-based methods. Visually and quantitatively, the experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been demonstrated by a clinical example involving images of a woman affected by a recurrent tumor. PMID:25214889
Cure fraction model with random effects for regional variation in cancer survival.
Seppä, Karri; Hakulinen, Timo; Kim, Hyon-Jung; Läärä, Esa
2010-11-30
Assessing regional differences in the survival of cancer patients is important but difficult when separate regions are small or sparsely populated. In this paper, we apply a mixture cure fraction model with random effects to cause-specific survival data of female breast cancer patients collected by the population-based Finnish Cancer Registry. Two sets of random effects were used to capture the regional variation in the cure fraction and in the survival of the non-cured patients, respectively. This hierarchical model was implemented in a Bayesian framework using a Metropolis-within-Gibbs algorithm. To avoid poor mixing of the Markov chain, when the variance of either set of random effects was close to zero, posterior simulations were based on a parameter-expanded model with tailor-made proposal distributions in Metropolis steps. The random effects allowed the fitting of the cure fraction model to the sparse regional data and the estimation of the regional variation in 10-year cause-specific breast cancer survival with a parsimonious number of parameters. Before 1986, the capital of Finland clearly stood out from the rest, but since then all the 21 hospital districts have achieved approximately the same level of survival.
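The sampler class used here can be sketched generically (plain Metropolis-within-Gibbs with random-walk proposals; the paper's parameter expansion and tailor-made proposals for the near-zero-variance regime are not reproduced, and all names are illustrative):

```python
import numpy as np

def metropolis_within_gibbs(log_cond, x0, n_iter, scale=0.5, rng=None):
    """Sweep over coordinates, updating each with a Metropolis step
    targeting its full conditional.

    log_cond(j, x) : log full-conditional density of coordinate j,
                     evaluated at the full parameter vector x
    """
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    chain = np.empty((n_iter, x.size))
    for t in range(n_iter):
        for j in range(x.size):
            prop = x.copy()
            prop[j] += scale * rng.standard_normal()
            if np.log(rng.uniform()) < log_cond(j, prop) - log_cond(j, x):
                x = prop
        chain[t] = x
    return chain
```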
Time accurate application of the MacCormack 2-4 scheme on massively parallel computers
NASA Technical Reports Server (NTRS)
Hudson, Dale A.; Long, Lyle N.
1995-01-01
Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
Bernatowicz, Piotr; Nowakowski, Michał; Dodziuk, Helena; Ejchart, Andrzej
2006-08-01
Association constants of weak molecular complexes can be determined by analysis of the chemical shift variations resulting from changes in the guest-to-host concentration ratio. In the regime of very fast exchange, i.e., when the exchange rate is several orders of magnitude larger than the difference in Larmor angular frequency of the observed resonance between the free and complexed molecule, the apparent position of the averaged resonance is a population-weighted mean of the resonances of the particular forms involved in the equilibrium. The assumption of very fast exchange is, however, often tacitly made in the literature even in cases where the process of interest is much slower than required. We show that such an unjustified simplification may, under certain circumstances, lead to a significant underestimation of the association constant and, in consequence, to non-negligible errors in the Gibbs free energy determined from it. We present a general method, based on iterative numerical NMR line shape analysis, which allows one to compensate for chemical exchange effects and delivers both the correct association constants and the exchange rates; the latter are not provided by the former method. Practical application of our algorithm is illustrated by the case of camphor-alpha-cyclodextrin complexes.
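The fast-exchange limit referred to above corresponds to the standard population-weighted average for a 1:1 host-guest equilibrium (textbook relations, valid only when the exchange rate k_ex far exceeds the frequency difference Δω):

```latex
\delta_{\mathrm{obs}}
  \;=\; p_{\mathrm{free}} \, \delta_{\mathrm{free}}
      + p_{\mathrm{bound}} \, \delta_{\mathrm{bound}},
\qquad
K_a \;=\; \frac{[\mathrm{HG}]}{[\mathrm{H}][\mathrm{G}]},
\qquad
k_{\mathrm{ex}} \;\gg\; \Delta\omega .
```

Fitting titration shifts with the first relation when the last condition fails is exactly the simplification the authors warn against.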
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods.
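The simplest of these procedures, a percentile bootstrap confidence interval, fits in a few lines (an illustrative sketch of the general idea, not code from the review):

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=10_000, alpha=0.05, rng=None):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic, and take empirical quantiles of the replicates."""
    rng = rng or np.random.default_rng()
    data = np.asarray(data)
    reps = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])
```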
Small-Noise Analysis and Symmetrization of Implicit Monte Carlo Samplers
Goodman, Jonathan; Lin, Kevin K.; Morzfeld, Matthias
2015-07-06
Implicit samplers are algorithms for producing independent, weighted samples from multivariate probability distributions. These are often applied in Bayesian data assimilation algorithms. We use Laplace asymptotic expansions to analyze two implicit samplers in the small noise regime. Our analysis suggests a symmetrization of the algorithms that leads to improved implicit sampling schemes at a relatively small additional cost. Here, computational experiments confirm the theory and show that symmetrization is effective for small noise sampling problems.
Online clustering algorithms for radar emitter classification.
Liu, Jun; Lee, Jim P Y; Li, Lingjie; Luo, Zhi-Quan; Wong, K Max
2005-08-01
Radar emitter classification is a special application of data clustering for classifying unknown radar emitters from received radar pulse samples. The main challenges of this task are the high dimensionality of radar pulse samples, small sample group size, and closely located radar pulse clusters. In this paper, two new online clustering algorithms are developed for radar emitter classification: One is model-based using the Minimum Description Length (MDL) criterion and the other is based on competitive learning. Computational complexity is analyzed for each algorithm and then compared. Simulation results show the superior performance of the model-based algorithm over competitive learning in terms of better classification accuracy, flexibility, and stability.
Classical boson sampling algorithms with superior performance to near-term experiments
NASA Astrophysics Data System (ADS)
Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony
2017-12-01
It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
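The classical algorithm above rests on Metropolised independence sampling: proposals are drawn from a tractable distribution, independent of the current state, and accepted with a ratio that corrects toward the target. A generic sketch of that kernel (the actual boson-sampling target involves matrix permanents; here both distributions are left abstract):

```python
import numpy as np
rng = np.random.default_rng(7)

def metropolised_independence(log_target, propose, log_prop, n_steps):
    """Generic MIS chain: propose y ~ q independently of the current x,
    accept with probability min(1, [p(y) q(x)] / [p(x) q(y)])."""
    x = propose()
    chain = []
    for _ in range(n_steps):
        y = propose()
        log_a = (log_target(y) - log_target(x)) - (log_prop(y) - log_prop(x))
        if np.log(rng.random()) < log_a:
            x = y
        chain.append(x)
    return chain
```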
Phase relations in the system Cu-Gd-O and Gibbs energy of formation of CuGd2O4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, K.T.; Mathews, T.; Hajra, J.P.
1993-07-01
The phase relations in the system Cu-Gd-O have been determined at 1,273 K by X-ray diffraction, optical microscopy, and electron microprobe analysis of samples equilibrated in quartz ampules and in pure oxygen. Only one ternary compound, CuGd2O4, was found to be stable. The Gibbs free energy of formation of this compound has been measured using the solid-state cell Pt, Cu2O + CuGd2O4 + Gd2O3 // (Y2O3)ZrO2 // CuO + Cu2O, Pt in the temperature range of 900 to 1,350 K. For the formation of CuGd2O4 from its binary component oxides, CuO (s) + Gd2O3 (s) → CuGd2O4 (s), ΔG° = 8230 − 11.2T (±50) J/mol. Since the formation is endothermic, CuGd2O4 becomes thermodynamically unstable with respect to CuO and Gd2O3 below 735 K. When the oxygen partial pressure over CuGd2O4 is lowered, it decomposes according to the reaction 4CuGd2O4 (s) → 4Gd2O3 (s) + 2Cu2O (s) + O2 (g), for which the equilibrium oxygen potential is given by Δμ(O2) = −227,970 + 143.2T (±500) J/mol. An oxygen potential diagram for the system Cu-Gd-O at 1,273 K is presented.
Osburn, Magdalena R.; LaRowe, Douglas E.; Momper, Lily M.; Amend, Jan P.
2014-01-01
The deep subsurface is an enormous repository of microbial life. However, the metabolic capabilities of these microorganisms and the degree to which they are dependent on surface processes are largely unknown. Due to the logistical difficulty of sampling and inherent heterogeneity, the microbial populations of the terrestrial subsurface are poorly characterized. In an effort to better understand the biogeochemistry of deep terrestrial habitats, we evaluate the energetic yield of chemolithotrophic metabolisms and microbial diversity in the Sanford Underground Research Facility (SURF) in the former Homestake Gold Mine, SD, USA. Geochemical data, energetic modeling, and DNA sequencing were combined with principal component analysis to describe this deep (down to 8100 ft below surface), terrestrial environment. SURF provides access into an iron-rich Paleoproterozoic metasedimentary deposit that contains deeply circulating groundwater. Geochemical analyses of subsurface fluids reveal enormous geochemical diversity, ranging widely in salinity, oxidation state (ORP 330 to −328 mV), and concentrations of redox-sensitive species (e.g., Fe2+ from near 0 to 6.2 mg/L and ΣS2- from 7 to 2778 μg/L). As a direct result of this compositional buffet, Gibbs energy calculations reveal an abundance of energy for microorganisms from the oxidation of sulfur, iron, nitrogen, methane, and manganese. Pyrotag DNA sequencing reveals diverse communities of chemolithoautotrophs, thermophiles, aerobic and anaerobic heterotrophs, and numerous uncultivated clades. Extrapolated across the mine footprint, these data suggest a complex spatial mosaic of subsurface primary productivity that is in good agreement with predicted energy yields. Notably, we report Gibbs energy normalized both per mole of reaction and per kg fluid (energy density) and find the latter to be more consistent with observed physiologies and environmental conditions. Further application of this approach will significantly expand our understanding of the deep terrestrial biosphere. PMID:25429287
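A sketch of the two normalizations compared here: Gibbs energy per mole of reaction from the activity quotient, and an "energy density" per kilogram of fluid obtained by scaling with the limiting reactant's concentration. Function names and any numbers passed in are hypothetical:

```python
import numpy as np

R = 8.31446e-3  # kJ/(mol K)

def gibbs_of_reaction(dg0_kj, T, ln_q):
    """dG_r = dG0 + RT ln Q, in kJ per mole of reaction; Q is the activity
    quotient assembled from the measured fluid chemistry."""
    return dg0_kj + R * T * ln_q

def energy_density(dg_r, limiting_conc_mol_per_kg, stoich_coeff):
    """kJ per kg of fluid: the per-mole value scaled by how much of the
    scarcest reactant one kilogram of fluid actually supplies."""
    return dg_r * limiting_conc_mol_per_kg / stoich_coeff
```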
The Even-Rho and Even-Epsilon Algorithms for Accelerating Convergence of a Numerical Sequence
1981-12-01
…equal, leading to zero or very small divisors. One algorithm, the even-epsilon algorithm, accelerates calculation of the array of Shanks transforms or, equivalently, of the related Padé table; the other, the even-rho algorithm, is closely related. Computer programs implementing these algorithms are given along with sample output.
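For reference, a minimal sketch of the standard (non-"even") Wynn epsilon recursion, which computes the array of Shanks transforms without building the Padé table explicitly. The report's even variants are specifically designed around the near-equal entries that make the divisor below vanish, a guard this plain version omits:

```python
import numpy as np

def wynn_epsilon(seq):
    """Accelerate a sequence of partial sums via Wynn's recursion:
    eps[k+1](n) = eps[k-1](n+1) + 1/(eps[k](n+1) - eps[k](n)).
    Near-equal neighbours give tiny divisors -- the instability the
    even-rho/even-epsilon variants address."""
    n = len(seq)
    e = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        e[i][1] = float(seq[i])
    for k in range(2, n + 1):
        for i in range(n - k + 1):
            e[i][k] = e[i + 1][k - 2] + 1.0 / (e[i + 1][k - 1] - e[i][k - 1])
    return e[0][n if n % 2 else n - 1]  # last odd column holds the estimate

s = np.cumsum([4 * (-1) ** k / (2 * k + 1) for k in range(9)])
print(wynn_epsilon(s))  # ~3.14159..., far better than the raw partial sums
```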
Entropy Analyses of Four Familiar Processes.
ERIC Educational Resources Information Center
Craig, Norman C.
1988-01-01
Presents entropy analysis of four processes: a chemical reaction, a heat engine, the dissolution of a solid, and osmosis. Discusses entropy, the second law of thermodynamics, and the Gibbs free energy function. (MVL)
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNRs of the images reconstructed by the PGI and CGI algorithms decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising in the reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
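A toy 1D version of the reconstruction pipeline: illuminate with rows of a deterministic DFT measurement matrix, collect one bucket value per pattern, and recover the object with the pseudo-inverse. A complex DFT matrix stands in for the cosine light-field pairs used in practice; the sizes and the object are hypothetical:

```python
import numpy as np

n = 64
x = np.zeros(n)
x[20:30] = 1.0                                  # hypothetical 1D object

k = np.arange(n)
A = np.exp(-2j * np.pi * np.outer(k, k) / n)    # DFT measurement matrix
y = A @ x                                       # one bucket value per pattern

x_hat = (np.linalg.pinv(A) @ y).real            # pseudo-inverse reconstruction
print(np.max(np.abs(x_hat - x)))                # ~1e-13: exact to round-off
```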
Rosenholm, Jarl B
2018-03-01
The perfect gas law is used as a reference when selecting the state variables (P, V, T, n) needed to characterize ideal gases (vapors), liquids and solids. The van der Waals equation of state is used as a reference for models characterizing interactions in liquids, solids and their mixtures. The van der Waals loop introduces meta- and unstable states between the observed gas (vapor)-liquid P-V transitions at low T. These intermediate states are shown to appear also between liquid-liquid, liquid-solid and solid-solid phase transitions. First-order phase transitions are characterized by a sharp discontinuity of the first-order partial derivatives (P, S, V) of the Helmholtz and Gibbs free energies. Second-order partial derivatives (K_T, B, C_V, C_P, E) consist of a static contribution relating to second-order phase transitions and a relaxation contribution representing the degree of first-order phase transitions. Binodal (first-order) and spinodal (second-order) phase boundaries are used to separate stable phases from metastable and unstable phases. The boundaries are identified and quantified by partial derivatives of the molar Gibbs free energy or chemical potentials with respect to P, S, V and composition (mole fractions). Molecules confined to spread Langmuir monolayers or adsorbed Gibbs monolayers are characterized by an equation of state and adsorption isotherms relating to a two-dimensional van der Waals equation of state. The basic work of two-dimensional wetting (cohesion, adsorption, spreading, immersion) has to be adjusted by a horizontal surface pressure in the presence of adsorbed vapor layers. If the adsorption is extended to liquid films, a vertical surface pressure (Π) may be added to account for the lateral interaction, thus restoring the PV = ΠAh dependence of thin films. Van der Waals attraction, Coulomb repulsion and structural hydration forces contribute to the vertical surface pressure. A van der Waals type coexistence of ordered (dispersed) and disordered (aggregated) phases is shown to exist when liquid vapor is confined in capillaries (condensation-liquefaction-evaporation and flux). This phenomenon can be experimentally illustrated with suspended nano-sized particles (flocculation-coagulation-peptisation of colloidal sols) confined in sample holders of varying size. The self-assembled aggregates represent critical self-similar equilibrium structures corresponding to rate-determining complexes in kinetics. Overall, a self-consistent thermodynamic framework is established for the characterization of two- and three-dimensional phase separations in one-, two- and three-component systems. Copyright © 2018 Elsevier B.V. All rights reserved.
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; Wang, D.
2017-12-01
The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight that represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space gradually from the low-likelihood area to the high-likelihood area, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
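A bare-bones nested sampling estimator of the log marginal likelihood. The inner while-loop is the "local sampling procedure" discussed above; here it is naive rejection from the prior, exactly the bottleneck that M-H or DREAMzs updates are meant to replace. All names and the toy problem are illustrative:

```python
import numpy as np
rng = np.random.default_rng(1)

def nested_sampling(log_like, prior_sample, n_live=50, n_iter=400):
    live = [prior_sample() for _ in range(n_live)]
    ll = np.array([log_like(p) for p in live])
    log_z = -np.inf
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(ll))
        # prior-volume shell between X_{i-1}=e^{-(i-1)/N} and X_i=e^{-i/N}
        log_w = np.log(np.exp(-(i - 1) / n_live) - np.exp(-i / n_live))
        log_z = np.logaddexp(log_z, ll[worst] + log_w)
        while True:                     # local step: rejection from prior
            cand = prior_sample()
            if log_like(cand) > ll[worst]:
                live[worst], ll[worst] = cand, log_like(cand)
                break
    return log_z

# toy: standard normal likelihood, uniform prior on [-5, 5]
lz = nested_sampling(lambda t: -0.5 * t * t - 0.5 * np.log(2 * np.pi),
                     lambda: rng.uniform(-5.0, 5.0))
print(lz)  # ~ log(1/10) = -2.30, the analytic evidence
```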
Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de
2017-11-05
Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed to improve power efficiency while ensuring the accuracy of the sampled data. The developed algorithm is evaluated using two distinct key parameters, dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset, compared with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior in saving 5.31% more battery energy. PMID:29113087
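A minimal illustration of the data-driven idea: shrink the sampling interval when the measured parameter (DO or turbidity) is changing, stretch it when the signal is flat to save battery. The thresholds and the halving/doubling rule are assumptions, not the published DDASA rule:

```python
def next_interval(prev_value, value, interval_s,
                  min_s=60.0, max_s=3600.0, tol=0.2):
    """Return the next sampling interval in seconds (illustrative rule)."""
    if abs(value - prev_value) > tol:    # signal changing: sample faster
        return max(interval_s / 2.0, min_s)
    return min(interval_s * 2.0, max_s)  # signal stable: conserve power
```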
Kim, Hyoungrae; Jang, Cheongyun; Yadav, Dharmendra K; Kim, Mi-Hyun
2017-03-23
The accuracy of 3D-QSAR, pharmacophore, and 3D-similarity-based chemometric target-fishing models is highly dependent on a reasonable sample of active conformations. Although a number of diverse conformational sampling algorithms exist that exhaustively generate enough conformers, model-building methods rely on an explicit number of common conformers. In this work, we attempted to devise clustering algorithms that can automatically find a reasonable number of representative conformer ensembles from an asymmetric dissimilarity matrix generated with the OpenEye toolkit. RMSD was the key descriptor (variable): each column of the N × N matrix was considered as one of N variables describing the relationship (network) between the conformer in a row and the other N conformers. This approach was used to evaluate the performance of well-known clustering algorithms by comparing them in terms of generating representative conformer ensembles, and to test them over different matrix transformation functions with respect to stability. In the network, the representative conformer group could be resampled by four kinds of algorithms with implicit parameters. The directed dissimilarity matrix becomes the only input to the clustering algorithms. The Dunn index, Davies-Bouldin index, eta-squared values, and omega-squared values were used to evaluate the clustering algorithms with respect to compactness and explanatory power. The evaluation also covered the data reduction (abstraction) rate, the correlation between the population and sample sizes, the computational complexity, and the memory usage. Every algorithm could find representative conformers automatically without any user intervention, and they reduced the data to 14-19% of the original values within at most 1.13 s per sample. The clustering methods are simple and practical, as they are fast and do not require any explicit parameters. RCDTC presented the highest Dunn and omega-squared values of the four algorithms, in addition to a consistent reduction rate between the population size and the sample size. The performance of the clustering algorithms was consistent over different transformation functions. Moreover, the clustering method can also be applied to molecular dynamics sampling simulation results.
Efficient Classical Algorithm for Boson Sampling with Partially Distinguishable Photons
NASA Astrophysics Data System (ADS)
Renema, J. J.; Menssen, A.; Clements, W. R.; Triginer, G.; Kolthammer, W. S.; Walmsley, I. A.
2018-06-01
We demonstrate how boson sampling with photons of partial distinguishability can be expressed in terms of interference of fewer photons. We use this observation to propose a classical algorithm to simulate the output of a boson sampler fed with photons of partial distinguishability. We find conditions for which this algorithm is efficient, which gives a lower limit on the required indistinguishability to demonstrate a quantum advantage. Under these conditions, adding more photons only polynomially increases the computational cost to simulate a boson sampling experiment.
Jakeman, John D.; Narayan, Akil; Zhou, Tao
2017-06-22
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain, and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
Yu, Qiang; Wei, Dingbang; Huo, Hongwei
2018-06-18
Given a set of t n-length DNA sequences, q satisfying 0 < q ≤ 1, and l and d satisfying 0 ≤ d < l < n, the quorum planted motif search (qPMS) finds l-length strings that occur in at least qt input sequences with up to d mismatches; it is mainly used to locate transcription factor binding sites in DNA sequences. Existing qPMS algorithms can efficiently process small standard datasets (e.g., t = 20 and n = 600), but they are too time-consuming to process large DNA datasets, such as ChIP-seq datasets that contain thousands of sequences or more. We analyze the effects of t and q on the time performance of qPMS algorithms and find that a large t or a small q causes a longer computation time. Based on this observation, we improve the time performance of existing qPMS algorithms by selecting a sample sequence set D' with a small t and a large q from the large input dataset D and then executing qPMS algorithms on D'. A sample sequence selection algorithm named SamSelect is proposed. The experimental results on both simulated and real data show (1) that SamSelect can select D' efficiently and (2) that the qPMS algorithms executed on D' can find implanted or real motifs in a significantly shorter time than when executed on D. We improve the ability of existing qPMS algorithms to process large DNA datasets from the perspective of selecting high-quality sample sequence sets, so that the qPMS algorithms can find motifs in a short time in the selected sample sequence set D' rather than taking an infeasibly long time to search the original sequence set D. Our motif discovery method is an approximate algorithm.
Improving Communication Within a Managerial Workgroup
ERIC Educational Resources Information Center
Harvey, Jerry B.; Boettger, C. Russell
1971-01-01
This paper describes an experiment involving the use of laboratory education (Bradford, Gibb, & Benne, 1964; Bennis & Schein, 1965), designed on the assumption that improvement of communication in managerial workgroups enhances task effectiveness. (Author)
Cathedral house & crocker fence, Taylor Street east and north elevations, perspective view from the northeast - Grace Cathedral, George William Gibbs Memorial Hall, 1051 Taylor Street, San Francisco, San Francisco County, CA
Remediation System Evaluation, MacGillis and Gibbs Superfund Site
The site was a wood preserving facility that is no longer active. Key contaminants at the site include pentachlorophenol (PCP), chromium, and to a much lesser extent dioxin, arsenic, and polynuclear aromatic hydrocarbons (PAHs).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagan, D. N., E-mail: d.n.kagan@mtu-net.ru; Krechetova, G. A.; Shpil'rain, E. E.
A detailed procedural analysis is given and results of implementation of the new version of the effusion method for determining the Gibbs energy (thermodynamic activity) of binary and ternary systems of the alkali metals Cs-Na, K-Na, Cs-K, and Cs-K-Na are presented. The activity is determined using partial pressures of the components measured, according to the effusion method, by the intensity of their atomic beams. The pressure range used in the experiment is intermediate between the Knudsen and hydrodynamic effusion modes. A generalized version of the effusion method covers the pressure range beyond the limits of applicability of the Hertz-Knudsen equation. Employment of this method provides the differential equation of chemical thermodynamics; solution of this equation makes it possible to construct the Gibbs energy in the range of temperatures 400 ≤ T ≤ 1200 K and concentrations 0 ≤ x_i ≤ 1.
NASA Astrophysics Data System (ADS)
Suthar, Shyam Sunder; Purohit, Suresh
2018-05-01
Properties of diesel and of biodiesel produced from corn oil are used. Densities and viscosities of binary mixtures of diesel with this biodiesel have been computed using a liquid binary mixture law over the entire range of compositions at T = 298.15 K and atmospheric pressure. From the computed densities and viscosities, the viscosity deviation (Δη), the excess molar volume (VE) and the excess Gibbs energy of activation of viscous flow (ΔG#E) have been calculated. The results for excess volume, excess Gibbs energy of activation of viscous flow and viscosity deviation have been fitted to Redlich-Kister models to estimate the binary coefficients. The results are discussed in terms of molecular interactions, and the best-suited composition has been found.
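A sketch of a least-squares Redlich-Kister fit of an excess property, Y^E = x1·x2·Σ_k A_k (x1 − x2)^k; the polynomial order and the data passed in are placeholders:

```python
import numpy as np

def redlich_kister_fit(x1, y_excess, order=3):
    """Return coefficients A_k of Y^E = x1*x2 * sum_k A_k (x1 - x2)^k."""
    x1 = np.asarray(x1, dtype=float)
    d = 2.0 * x1 - 1.0                   # x1 - x2, with x2 = 1 - x1
    design = np.column_stack([x1 * (1.0 - x1) * d ** k
                              for k in range(order + 1)])
    coeffs, *_ = np.linalg.lstsq(design, np.asarray(y_excess), rcond=None)
    return coeffs
```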
Modeling Ignition of HMX with the Gibbs Formulation
NASA Astrophysics Data System (ADS)
Lee, Kibaek; Stewart, D. Scott
2017-06-01
We present an HMX model based on the Gibbs formulation, in which the stress tensor and temperature are assumed to be in local equilibrium, but phase/chemical changes are not assumed to be in equilibrium. We assume multiple components for HMX, including the beta and delta solid phases, the liquid phase, and the gas phase of HMX together with its gaseous products. An isotropic small-strain solid model, a modified Fried-Howard liquid EOS, and an ideal-gas EOS are used for the relevant components. Phase/chemical changes are characterized as reactions, each with its own reaction rate. The Maxwell-Stefan model is used for diffusion. Energetic gas products in the local domain drive the unreacted solid HMX to the ignition event. The mixture density, stress, strain, displacement, mass fractions, and temperature are computed in a 1D domain with time histories. Office of Naval Research and Air Force Office of Scientific Research.
NASA Astrophysics Data System (ADS)
Manikandan, P.; Trinadh, V. V.; Bera, Suranjan; Narasimhan, T. S. Lakshmi; Joseph, M.
2016-07-01
Vaporisation studies over the gallium-rich biphasic regions (U3Ga5 + UGa2) and (UGa2 + UGa3) in the U-Ga system were carried out by Knudsen effusion mass spectrometry in the temperature ranges 1208-1366 K and 1133-1338 K, respectively. Ga(g) was the species observed in the mass spectra of the equilibrium vapour over both phase regions. From temperature dependence measurements, pressure-temperature relations were deduced as log(pGa/Pa) = (−18216 ± 239)/(T/K) + (12.88 ± 0.18) over (U3Ga5 + UGa2) and log(pGa/Pa) = (−16225 ± 124)/(T/K) + (11.78 ± 0.10) over (UGa2 + UGa3). From these data, the Gibbs free energy changes for the reactions 3UGa2(s) = U3Ga5(s) + Ga(g) and UGa3(s) = UGa2(s) + Ga(g) were computed, and subsequently the Gibbs free energies of formation of U3Ga5(s) and UGa3(s) were deduced as ΔfG°(U3Ga5, s) (±5.5) = −352.4 + 0.133 T(K) kJ mol⁻¹ (1208-1366 K) and ΔfG°(UGa3, s) (±3.8) = −191.9 + 0.082 T(K) kJ mol⁻¹ (1133-1338 K). The Gibbs free energy of formation of U3Ga5(s) is reported for the first time.
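The Gibbs energy changes follow from the fitted vapour-pressure lines: for a reaction releasing one Ga(g), ΔG°_r = −RT ln(p_Ga/p°). A sketch of that conversion, assuming a 1 bar standard state (the abstract does not state which standard-state pressure was used):

```python
import numpy as np

R = 8.31446        # J/(mol K)
LOG10_P0 = 5.0     # log10 of the standard-state pressure in Pa (assumed 1 bar)

def dg_line(A, B):
    """From log10(p/Pa) = -A/(T/K) + B, return (a, b) such that the
    reaction Gibbs energy is dG(T) = a + b*T in J/mol, via
    dG = -RT ln(p/p0) = R ln10 * (A - (B - log10(p0/Pa)) * T)."""
    a = R * np.log(10.0) * A
    b = -R * np.log(10.0) * (B - LOG10_P0)
    return a, b

print(dg_line(16225.0, 11.78))  # UGa3(s) = UGa2(s) + Ga(g), from the fit
```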
NASA Astrophysics Data System (ADS)
Toher, Cormac; Oses, Corey; Plata, Jose J.; Hicks, David; Rose, Frisco; Levy, Ohad; de Jong, Maarten; Asta, Mark; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano
2017-06-01
Thorough characterization of the thermomechanical properties of materials requires difficult and time-consuming experiments. This severely limits the availability of data and is one of the main obstacles for the development of effective accelerated materials design strategies. The rapid screening of new potential materials requires highly integrated, sophisticated, and robust computational approaches. We tackled the challenge by developing an automated, integrated workflow with robust error-correction within the AFLOW framework which combines the newly developed "Automatic Elasticity Library" with the previously implemented GIBBS method. The first extracts the mechanical properties from automatic self-consistent stress-strain calculations, while the latter employs those mechanical properties to evaluate the thermodynamics within the Debye model. This new thermoelastic workflow is benchmarked against a set of 74 experimentally characterized systems to pinpoint a robust computational methodology for the evaluation of bulk and shear moduli, Poisson ratios, Debye temperatures, Grüneisen parameters, and thermal conductivities of a wide variety of materials. The effect of different choices of equations of state and exchange-correlation functionals is examined and the optimum combination of properties for the Leibfried-Schlömann prediction of thermal conductivity is identified, leading to improved agreement with experimental results than the GIBBS-only approach. The framework has been applied to the AFLOW.org data repositories to compute the thermoelastic properties of over 3500 unique materials. The results are now available online by using an expanded version of the REST-API described in the Appendix.
Thermodynamic properties of calcium-bismuth alloys determined by emf measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Boysen, DA; Bradwell, DJ
2012-01-15
The thermodynamic properties of Ca-Bi alloys were determined by electromotive force (emf) measurements to assess the suitability of Ca-Bi electrodes for electrochemical energy storage applications. Emf was measured at ambient pressure as a function of temperature between 723 K and 1173 K using a Ca(s) | CaF2(s) | Ca(in Bi) cell for twenty different Ca-Bi alloys spanning the entire range of composition from x(Ca) = 0 to 1. Reported are the temperature-independent partial molar entropy and enthalpy of calcium for each Ca-Bi alloy. Also given are the measured activities of calcium, the excess partial molar Gibbs energy of bismuth estimated from the Gibbs-Duhem equation, and the integral change in Gibbs energy for each Ca-Bi alloy at 873 K, 973 K, and 1073 K. Calcium activities at 973 K were found to be nearly constant at a value a(Ca) = 1 × 10⁻⁸ over the composition range x(Ca) = 0.32-0.56, yielding an emf of ∼0.77 V. Above x(Ca) = 0.62, and coincident with Ca5Bi3 formation, the calcium activity approached unity. The Ca-Bi system was also characterized by differential scanning calorimetry over the entire range of composition. Based upon these data along with the emf measurements, a revised Ca-Bi binary phase diagram is proposed. (C) 2011 Elsevier Ltd. All rights reserved.
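The reported quantities follow from standard emf thermodynamics: the partial molar Gibbs energy of Ca is −zFE, the entropy comes from the temperature slope of E, and the enthalpy by combination. A sketch assuming a two-electron cell reaction and a linear E(T), consistent with the temperature-independent values reported:

```python
import numpy as np

F, z = 96485.33, 2      # Faraday constant (C/mol); electrons for Ca (assumed)

def partial_molar_ca(T, E):
    """Fit E(T) linearly and return the partial molar Gibbs energy,
    entropy, and enthalpy of Ca in J/mol (entropy in J/(mol K))."""
    T, E = np.asarray(T, float), np.asarray(E, float)
    slope, _ = np.polyfit(T, E, 1)
    dG = -z * F * E          # partial molar Gibbs energy at each T
    dS = z * F * slope       # partial molar entropy (dS = zF dE/dT)
    dH = dG + T * dS         # partial molar enthalpy (H = G + TS)
    return dG, dS, dH
```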
Latella, Ivan; Pérez-Madrid, Agustín
2013-10-01
The local thermodynamics of a system with long-range interactions in d dimensions is studied using the mean-field approximation. Long-range interactions are introduced through pair interaction potentials that decay as a power law in the interparticle distance. We compute the local entropy, Helmholtz free energy, and grand potential per particle in the microcanonical, canonical, and grand canonical ensembles, respectively. From the local entropy per particle we obtain the local equation of state of the system by using the condition of local thermodynamic equilibrium. This local equation of state has the form of the ideal gas equation of state, but with the density depending on the potential characterizing long-range interactions. By volume integration of the relation between the different thermodynamic potentials at the local level, we find the corresponding equation satisfied by the potentials at the global level. It is shown that the potential energy enters as a thermodynamic variable that modifies the global thermodynamic potentials. As a result, we find a generalized Gibbs-Duhem equation that relates the potential energy to the temperature, pressure, and chemical potential. For the marginal case where the power of the decaying interaction potential is equal to the dimension of the space, the usual Gibbs-Duhem equation is recovered. As examples of the application of this equation, we consider spatially uniform interaction potentials and the self-gravitating gas. We also point out a close relationship with the thermodynamics of small systems.
New active asteroid 313P/Gibbs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jewitt, David; Hui, Man-To; Li, Jing
We present initial observations of the newly discovered active asteroid 313P/Gibbs (formerly P/2014 S4), taken to characterize its nucleus and comet-like activity. The central object has a radius ∼0.5 km (geometric albedo 0.05 assumed). We find no evidence for secondary nuclei and set (with qualifications) an upper limit to the radii of such objects near 20 m, assuming the same albedo. Both aperture photometry and a morphological analysis of the ejected dust show that mass-loss is continuous at rates ∼0.2-0.4 kg s⁻¹, inconsistent with an impact origin. Large dust particles, with radii ∼50-100 μm, dominate the optical appearance. At 2.4 AU from the Sun, the surface equilibrium temperatures are too low for thermal or desiccation stresses to be responsible for the ejection of dust. No gas is spectroscopically detected (limiting the gas mass-loss rate to <1.8 kg s⁻¹). However, the protracted emission of dust seen in our data and the detection of another episode of dust release near perihelion, in archival observations from 2003, are highly suggestive of an origin by the sublimation of ice. Coincidentally, the orbit of 313P/Gibbs is similar to those of several active asteroids independently suspected to be ice sublimators, including P/2012 T1, 238P/Read, and 133P/Elst-Pizarro, suggesting that ice is abundant in the outer asteroid belt.
Third Bose fugacity coefficient in one dimension, as a function of asymptotic quantities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amaya-Tapia, A., E-mail: jano@fis.unam.mx; Larsen, S.Y.; Lassaut, M.
2011-02-15
In one of the very few exact quantum mechanical calculations of fugacity coefficients, Dodd and Gibbs [L.R. Dodd, A.M. Gibbs, J. Math. Phys. 15 (1974) 41] obtained b2 and b3 for a one-dimensional Bose gas, subject to repulsive delta-function interactions, by direct integration of the wave functions. For b2, we have shown [A. Amaya-Tapia, S.Y. Larsen, M. Lassaut, Mol. Phys. 103 (2005) 1301-1306 (arXiv:physics/0405150)] that Dodd and Gibbs' result can be obtained from a phase shift formalism, if one also includes the contribution of oscillating terms, usually contributing only in one dimension. Now, we develop an exact expression for b3 − b3⁰ (where b3⁰ is the free-particle fugacity coefficient) in terms of sums and differences of three-body eigenphase shifts. Further, we show that if we obtain these eigenphase shifts in a Distorted-Born approximation, then, to first order, we reproduce the leading low-temperature behaviour obtained from an expansion of the twofold integral of Dodd and Gibbs. The contributions of the oscillating terms cancel. The formalism that we propose is not limited to one dimension, but seeks to provide a general method to obtain virial coefficients and fugacity coefficients in terms of asymptotic quantities. The exact one-dimensional results allow us to confirm the validity of our approach in this domain.
Simple-random-sampling-based multiclass text classification algorithm.
Liu, Wuying; Wang, Lin; Yi, Mianzhu
2014-01-01
Multiclass text classification (MTC) is a challenging issue, and the corresponding MTC algorithms can be used in many applications. The space-time overhead of these algorithms is a serious concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements.
NASA Astrophysics Data System (ADS)
Krishna Kumar, S.; Hari Babu, S.; Eswar Rao, P.; Selvakumar, S.; Thivya, C.; Muralidharan, S.; Jeyabal, G.
2017-09-01
Water quality in Tiruvallur Taluk of Tiruvallur district, Tamil Nadu, India has been analysed to assess its suitability for domestic and agricultural uses. Thirty water samples, including 8 surface water (S) and 22 groundwater samples [15 shallow groundwaters (SW) and 7 deep groundwaters (DW)], were collected to assess various physico-chemical parameters such as temperature, pH, electrical conductivity (EC), total dissolved solids (TDS), cations (Ca, Mg, Na, K), anions (CO3, HCO3, Cl, SO4, NO3, PO4) and trace elements (Fe, Mn, Zn). Various irrigation water quality diagrams and parameters, such as the United States Salinity Laboratory (USSL) and Wilcox diagrams, sodium adsorption ratio (SAR), sodium percentage (Na %), residual sodium carbonate (RSC), residual sodium bicarbonate (RSBC) and Kelley's ratio, revealed that most of the water samples are suitable for irrigation. Langelier Saturation Index (LSI) values suggest that the water is slightly corrosive and non-scale-forming in nature. The Gibbs plot suggests that the study area is dominated by evaporation and rock-water interaction processes. The Piper plot indicates that the chemical composition of the water is chiefly controlled by dissolution and mixing of irrigation return flow.
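The irrigation indices cited are simple ionic ratios. A sketch of three of them, with all concentrations assumed to be in meq/L:

```python
import numpy as np

def sar(na, ca, mg):
    """Sodium adsorption ratio."""
    return na / np.sqrt((ca + mg) / 2.0)

def sodium_percent(na, k, ca, mg):
    """Percent sodium (Na %)."""
    return 100.0 * (na + k) / (na + k + ca + mg)

def rsc(co3, hco3, ca, mg):
    """Residual sodium carbonate."""
    return (co3 + hco3) - (ca + mg)
```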
Development of uncertainty-based work injury model using Bayesian structural equation modelling.
Chatterjee, Snehamoy
2014-01-01
This paper proposes a Bayesian structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were taken as fixed distribution functions with specific parameter values, whereas in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied to sample from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant with the experts' opinion-based priors, whereas two coefficients are not statistically significant when the fixed priors are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit for work injury, with a high coefficient of determination (0.91) and a lower mean squared error compared to traditional SEM.
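A toy Gibbs sampler, alternately drawing each variable from its full conditional, the same mechanism used here to sample the SEM parameters' posterior; the bivariate normal target is purely illustrative:

```python
import numpy as np
rng = np.random.default_rng(3)

def gibbs_bivariate_normal(rho, n_samples=5000):
    """x, y ~ standard bivariate normal with correlation rho; each full
    conditional is normal, so Gibbs sampling alternates two 1D draws."""
    x = y = 0.0
    out = np.empty((n_samples, 2))
    sd = np.sqrt(1.0 - rho * rho)
    for i in range(n_samples):
        x = rng.normal(rho * y, sd)
        y = rng.normal(rho * x, sd)
        out[i] = x, y
    return out

print(np.corrcoef(gibbs_bivariate_normal(0.8).T)[0, 1])  # ~0.8
```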
Radiation effects on interface reactions of U/Fe, U/(Fe+Cr), and U/(Fe+Cr+Ni)
Shao, Lin; Chen, Di; Wei, Chaochen; ...
2014-10-01
We study the effects of radiation damage on interdiffusion and intermetallic phase formation at the interfaces of U/Fe, U/(Fe + Cr), and U/(Fe + Cr + Ni) diffusion couples. Magnetron sputtering is used to deposit thin films of Fe, Fe + Cr, or Fe + Cr + Ni on U substrates to form the diffusion couples. One set of samples is thermally annealed under high vacuum at 450 °C or 550 °C for one hour. A second set of samples is annealed identically but with concurrent 3.5 MeV Fe++ ion irradiation. The Fe++ ion penetration depth is sufficient to reach the original interfaces. Rutherford backscattering spectrometry analysis with high-fidelity spectral simulations is used to obtain interdiffusion profiles, which are used to examine differences in U diffusion and intermetallic phase formation at the buried interfaces. For all three diffusion systems, Fe++ ion irradiations enhance U diffusion. Furthermore, the irradiations accelerate the formation of intermetallic phases. In U/Fe couples, for example, the unirradiated samples show typical interdiffusion governed by Fick's laws, while the irradiated ones show step-like profiles influenced by Gibbs phase rules.
NASA Astrophysics Data System (ADS)
Diaz-Rodriguez, Sebastian; Bozada, Samantha M.; Phifer, Jeremy R.; Paluch, Andrew S.
2016-11-01
We present blind predictions using the solubility-parameter-based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED were generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of 2.2 ± 0.2 log units (ranking 15 out of 62 entries), the correlation coefficient (R) was 0.6 ± 0.1 (ranking 35), and 72 ± 6% of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature-dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition-dependent phase equilibrium.
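The neutral-species approximation reduces the distribution coefficient to a partition coefficient computed from two solvation free energies. A sketch of that conversion (sign convention: more negative ΔG_solv means better solvation; the example values are hypothetical):

```python
import numpy as np

R, T = 8.31446e-3, 298.15   # kJ/(mol K), K

def log_p(dg_water, dg_cyclohexane):
    """log10 cyclohexane/water partition coefficient of the neutral
    species from solvation free energies in kJ/mol."""
    return (dg_water - dg_cyclohexane) / (R * T * np.log(10.0))

print(log_p(-20.0, -25.0))  # ~0.88: this solute prefers cyclohexane
```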
Agricultural Spray Drift Concentrations in Rainwater, Stemflow ...
In order to study the contribution of spray drift to non-targeted habitats, pesticide concentrations were measured in stemflow (water flowing down the trunk of a tree during a rain event), rainfall, and amphibians in an agriculturally impacted wetland area near Tifton, Georgia, USA. Agricultural fields and sampling locations were located on the University of Georgia's Gibbs research farm. Samples were analyzed for >150 pesticides, and over 20 different pesticides were detected in these matrices. The data indicated that herbicides (metolachlor and atrazine) and fungicides (tebuconazole) were present at the highest concentrations in stemflow, followed by rainfall and amphibian tissue samples. Metolachlor had the highest frequency of detection and the highest concentration in rainfall and stemflow samples. Higher concentrations of pesticides were observed in stemflow for a longer period than in rainfall. Furthermore, rainfall and stemflow concentrations were compared against aquatic life benchmarks and environmental water screening values to determine whether adverse effects could potentially occur for non-targeted organisms. Of the pesticides detected, several had concentrations that exceeded the aquatic life benchmark value. Mixtures were present in the different matrices the majority of the time, making it difficult to determine the potential adverse effects of these compounds on non-target species due to unknown potentiating effects. These data help assess the
Oxidation and mobilization of selenium by nitrate in irrigation drainage
Wright, W.G.
1999-01-01
Selenium (Se) can be oxidized by nitrate (NO3-) from irrigation on Cretaceous marine shale in western Colorado. Dissolved Se concentrations are positively correlated with dissolved NO3- concentrations in surface water and ground water samples from irrigated areas. Redox conditions dominate in the mobilization of Se in marine shale hydrogeologic settings; dissolved Se concentrations increase with increasing platinum-electrode potentials. Theoretical calculations for the oxidation of Se by NO3- and oxygen show favorable Gibbs free energies for the oxidation of Se by NO3-, indicating NO3- can act as an electron acceptor for the oxidation of Se. Laboratory batch experiments were performed by adding Mancos Shale samples to zero-dissolved-oxygen water containing 0, 5, 50, and 100 mg/L NO3- as N (mg N/L). Samples were incubated in airtight bottles at 25 °C for 188 d; samples collected from the batch experiment bottles show increased Se concentrations over time with increased NO3- concentrations. Pseudo first-order rate constants for NO3- oxidation of Se ranged from 0.0007 to 0.0048/d for 0 to 100 mg N/L NO3- concentrations, respectively. Management of N fertilizer applications in Cretaceous shale settings might help to control the oxidation and mobilization of Se and other trace constituents into the environment.
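A sketch of extracting a pseudo-first-order rate constant from batch data, assuming release approaches a plateau C_max as C(t) = C_max(1 − e^(−kt)); the input arrays below are synthetic placeholders, not the study's measurements:

```python
import numpy as np

def pseudo_first_order_k(t_days, conc, c_max):
    """Slope of ln(1 - C/Cmax) versus t gives -k (per day)."""
    y = np.log(1.0 - np.asarray(conc, float) / c_max)
    k, _ = np.polyfit(np.asarray(t_days, float), y, 1)
    return -k

t = np.array([0.0, 30.0, 90.0, 188.0])
c = 50.0 * (1.0 - np.exp(-0.003 * t))     # synthetic data with k = 0.003/d
print(pseudo_first_order_k(t, c, 50.0))   # recovers ~0.003
```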
An improved target velocity sampling algorithm for free gas elastic scattering
Romano, Paul K.; Walsh, Jonathan A.
2018-02-03
We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
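For context, a sketch of the classic constant-cross-section free-gas sampling scheme, which already needs only one rejection step; the paper's contribution is to keep that property while accounting for an energy-dependent cross section by sampling the relative velocity directly. Variable names are in reduced (Maxwellian) units:

```python
import numpy as np
rng = np.random.default_rng(4)

def sample_target(y):
    """Sample the reduced target speed x and direction cosine mu given the
    reduced neutron speed y, for a constant cross section (classic scheme)."""
    sqpi = np.sqrt(np.pi)
    while True:
        if rng.random() < 2.0 / (2.0 + sqpi * y):
            # candidate pdf ~ x^3 exp(-x^2)
            x = np.sqrt(-np.log(rng.random() * rng.random()))
        else:
            # candidate pdf ~ x^2 exp(-x^2) (Maxwellian speed)
            r1, r2, r3 = rng.random(3)
            x = np.sqrt(-np.log(r1) - np.log(r2) * np.cos(0.5 * np.pi * r3) ** 2)
        mu = 2.0 * rng.random() - 1.0
        v_rel = np.sqrt(x * x + y * y - 2.0 * x * y * mu)
        if rng.random() < v_rel / (x + y):   # the single rejection step
            return x, mu
```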
Active Learning Using Hint Information.
Li, Chun-Liang; Ferng, Chun-Sung; Lin, Hsuan-Tien
2015-08-01
The abundance of real-world data and limited labeling budgets call for active learning, an important learning paradigm for reducing human labeling efforts. Many recently developed active learning algorithms consider both uncertainty and representativeness when making querying decisions. However, exploiting representativeness together with uncertainty usually requires tackling sophisticated and challenging learning tasks, such as clustering. In this letter, we propose a new active learning framework, called hinted sampling, which takes both uncertainty and representativeness into account in a simpler way. We design a novel active learning algorithm within the hinted sampling framework with an extended support vector machine. Experimental results validate that the novel active learning algorithm can achieve better and more stable performance than state-of-the-art algorithms. We also show that the hinted sampling framework allows improving another active learning algorithm designed from the transductive support vector machine.
New prior sampling methods for nested sampling - Development and testing
NASA Astrophysics Data System (ADS)
Stokes, Barrie; Tuyl, Frank; Hudson, Irene
2017-06-01
Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
Mori, Yoshiharu; Okumura, Hisashi
2015-12-05
Simulated tempering (ST) is a useful method to enhance the sampling of molecular simulations. When ST is used, the Metropolis algorithm, which satisfies the detailed balance condition, is usually applied to calculate the transition probability. Recently, an alternative method that satisfies the global balance condition instead of the detailed balance condition was proposed by Suwa and Todo. In this study, an ST method with the Suwa-Todo algorithm is proposed. Molecular dynamics simulations with ST are performed with three algorithms (the Metropolis, heat bath, and Suwa-Todo algorithms) used to calculate the transition probability. Among the three algorithms, the Suwa-Todo algorithm yields the highest acceptance ratio and the shortest autocorrelation time, suggesting that sampling by an ST simulation with the Suwa-Todo algorithm is the most efficient. In addition, because the acceptance ratio of the Suwa-Todo algorithm is higher than that of the Metropolis algorithm, the number of temperature states can be reduced by 25% for the Suwa-Todo algorithm when compared with the Metropolis algorithm. © 2015 Wiley Periodicals, Inc.
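A sketch of the baseline being compared against: a Metropolis update of the temperature index in simulated tempering, with stationary weights π(m, x) ∝ exp(−β_m E(x) + a_m). The neighbor-only proposal is an assumption; the Suwa-Todo scheme would replace the accept/reject rule below with a rejection-minimized flow among all temperature states:

```python
import numpy as np
rng = np.random.default_rng(5)

def st_temperature_move(energy, m, betas, log_weights):
    """Metropolis update of the temperature index m; betas are inverse
    temperatures and log_weights the ST weight factors a_m."""
    trial = m + rng.choice((-1, 1))
    if not 0 <= trial < len(betas):
        return m
    log_acc = -(betas[trial] - betas[m]) * energy \
              + (log_weights[trial] - log_weights[m])
    return trial if np.log(rng.random()) < log_acc else m
```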
A novel algorithm for Bluetooth ECG.
Pandya, Utpal T; Desai, Uday B
2012-11-01
In wireless transmission of ECG, data latency becomes significant when the battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noise. Here, a novel algorithm for wireless ECG, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm, is introduced. The algorithm first removes errors in the bit pattern of the received data, if any occurred during wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits the ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and S-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any signals where peaks are important for diagnostic purposes.
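An illustrative reading of the peak-rejection idea, not the authors' implementation: smooth everywhere except inside detected QRS regions, which pass through unchanged so their amplitudes are preserved. The window length and mask are assumptions:

```python
import numpy as np

def masked_moving_average(ecg, qrs_mask, window=5):
    """Apply a centered moving average outside QRS regions; samples where
    qrs_mask is True are left untouched."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(ecg, kernel, mode="same")
    return np.where(qrs_mask, ecg, smoothed)
```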
NASA Astrophysics Data System (ADS)
Jiang, Kaili; Zhu, Jun; Tang, Bin
2017-12-01
Periodic nonuniform sampling occurs in many applications, and the Nyquist folding receiver (NYFR) is an efficient, low-complexity, broadband spectrum sensing architecture. In this paper, we first show that the radio frequency (RF) sample clock function of the NYFR is periodically nonuniform. Then, the classical results of periodic nonuniform sampling are applied to the NYFR. We extend the spectral reconstruction algorithm of the time-series decomposition model to the subsampling case, which is common in broadband spectrum surveillance, by using the spectrum characteristics of the NYFR. Finally, we take the example of a large-bandwidth LFM signal to verify the proposed algorithm and compare the spectral reconstruction algorithm with the orthogonal matching pursuit (OMP) algorithm.
Symmetry compression method for discovering network motifs.
Wang, Jianxin; Huang, Yuannan; Wu, Fang-Xiang; Pan, Yi
2012-01-01
Discovering network motifs could provide significant insight into systems biology. Interestingly, many biological networks have been found to have a high degree of symmetry (automorphism), which is inherent in biological network topologies. The symmetry due to the large number of basic symmetric subgraphs (BSSs) causes a certain amount of redundant calculation in discovering network motifs. Therefore, we compress all basic symmetric subgraphs before extracting compressed subgraphs and propose an efficient decompression algorithm to decompress all compressed subgraphs without loss of any information. In contrast to previous approaches, the novel Symmetry Compression method for Motif Detection, named SCMD, eliminates most redundant calculations caused by the widespread symmetry of biological networks. We use SCMD to improve three notable exact algorithms and two efficient sampling algorithms. The results of all exact algorithms with SCMD are the same as those of the original algorithms, since SCMD is a lossless method. The sampling results show that the use of SCMD barely affects the quality of the sampling results. For highly symmetric networks, we find that SCMD used in both exact and sampling algorithms can yield a remarkable speedup. Furthermore, SCMD enables us to find larger motifs in biological networks with notable symmetry than previously possible.
An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems
Dawson, Kevin J.; Belkhir, Khalid
2009-01-01
Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals - the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree - or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
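A sketch of the same kind of visualisation built from co-assignment probabilities with off-the-shelf tools; complete linkage is a stand-in here, not the authors' exact linkage algorithm, and the probability matrix is a toy:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# toy posterior co-assignment probabilities P[i, j] for 4 individuals
P = np.array([[1.0, 0.9, 0.2, 0.1],
              [0.9, 1.0, 0.1, 0.2],
              [0.2, 0.1, 1.0, 0.8],
              [0.1, 0.2, 0.8, 1.0]])

d = squareform(1.0 - P)              # distance = 1 - co-assignment probability
tree = linkage(d, method="complete") # merge heights in tree[:, 2] are then
                                     # 1 minus a set's co-assignment level
```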
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a time constraint. Several simulation methods are currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from the previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
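Of the reviewed baselines, likelihood weighting is the simplest to sketch: evidence nodes are clamped and each sample is weighted by the likelihood of the evidence given its sampled parents. A self-contained toy on a hypothetical two-node network A -> B, with all probabilities invented:

```python
import random

def likelihood_weighting(evidence, n_samples=10000):
    """Likelihood weighting on a tiny discrete network A -> B (illustrative).

    Evidence variables are clamped to their observed values and each sample
    is weighted by the probability of that evidence given its parents.
    Estimates P(A = 1 | evidence)."""
    p_a = 0.3                       # P(A = 1), hypothetical number
    p_b_given_a = {1: 0.8, 0: 0.1}  # P(B = 1 | A), hypothetical numbers
    num = den = 0.0
    for _ in range(n_samples):
        w = 1.0
        a = 1 if random.random() < p_a else 0
        if 'B' in evidence:         # clamp B and weight by its likelihood
            b = evidence['B']
            w *= p_b_given_a[a] if b == 1 else 1.0 - p_b_given_a[a]
        else:                       # otherwise sample B as usual
            b = 1 if random.random() < p_b_given_a[a] else 0
        num += w * a
        den += w
    return num / den

print(likelihood_weighting({'B': 1}))  # close to the exact 0.24/0.31 ~ 0.774
```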
Test Generation Algorithm for Fault Detection of Analog Circuits Based on Extreme Learning Machine
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin; Ren, Xuelong
2014-01-01
This paper proposes a novel test generation algorithm based on the extreme learning machine (ELM); the algorithm is cost-effective and low-risk for an analog device under test (DUT). The method uses test patterns derived from the test generation algorithm to stimulate the DUT, and then samples the output responses of the DUT for fault classification and detection. The novel ELM-based test generation algorithm proposed in this paper contains three main innovations. First, the algorithm saves time by classifying the response space with ELM. Second, it avoids loss of test precision when the number of impulse-response samples is reduced. Third, a new test-signal generation process and a test structure for the test generation algorithm are presented, both of which are very simple. Finally, these improvements are confirmed experimentally. PMID:25610458
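The abstract does not give the ELM details; the generic construction is a fixed random hidden layer followed by a closed-form least-squares output layer, which is what makes the response-space classification fast. A minimal sketch in which the data shapes and the activation are assumptions:

```python
import numpy as np

def elm_train(X, Y, n_hidden=64, seed=0):
    """Extreme learning machine: fixed random hidden layer + least-squares output."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # input weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta               # class scores; argmax for label

# Usage (hypothetical): Y_train is one-hot fault labels for sampled DUT responses X_train.
# W, b, beta = elm_train(X_train, Y_train)
# labels = elm_predict(X_test, W, b, beta).argmax(axis=1)
```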
Van Herpe, Tom; De Brabanter, Jos; Beullens, Martine; De Moor, Bart; Van den Berghe, Greet
2008-01-01
Introduction: Blood glucose (BG) control performed by intensive care unit (ICU) nurses is becoming standard practice for critically ill patients. New (semi-automated) 'BG control' algorithms (or 'insulin titration' algorithms) are under development, but these require stringent validation before they can replace the currently used algorithms. Existing methods for objectively comparing different insulin titration algorithms show weaknesses. In the current study, a new approach for appropriately assessing the adequacy of different algorithms is proposed. Methods: Two ICU patient populations (with different baseline characteristics) were studied, both treated with a similar 'nurse-driven' insulin titration algorithm targeting BG levels of 80 to 110 mg/dl. A new method for objectively evaluating BG deviations from normoglycemia was founded on a smooth penalty function. Next, the performance of this new evaluation tool was compared with the current standard assessment methods, on an individual as well as a population basis. Finally, the impact of four selected parameters (the average BG sampling frequency, the duration of algorithm application, the severity of disease, and the type of illness) on the performance of an insulin titration algorithm was determined by multiple regression analysis. Results: The glycemic penalty index (GPI) was proposed as a tool for assessing the overall glycemic control behavior in ICU patients. The GPI of a patient is the average of all penalties that are individually assigned to each measured BG value based on the optimized smooth penalty function. The computation of this index returns a number between 0 (no penalty) and 100 (the highest penalty). For some patients, the assessment of the BG control behavior using the traditional standard evaluation methods was different from the evaluation with GPI. Two parameters were found to have a significant impact on GPI: the BG sampling frequency and the duration of algorithm application. A higher BG sampling frequency and a longer algorithm application duration resulted in an apparently better performance, as indicated by a lower GPI. Conclusion: The GPI is an alternative method for evaluating the performance of BG control algorithms. The blood glucose sampling frequency and the duration of algorithm application should be similar when comparing algorithms. PMID:18302732
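The abstract defines GPI as the average of per-sample penalties from an optimized smooth penalty function whose exact shape is not given here. The sketch below is therefore only a hypothetical stand-in: zero penalty inside the 80 to 110 mg/dl target band, growing smoothly toward 100 at extreme hypo- and hyperglycemia; every constant is invented:

```python
def penalty(bg, lo=80.0, hi=110.0, worst_lo=20.0, worst_hi=500.0):
    """Hypothetical smooth penalty: 0 inside the target band, rising
    quadratically to 100 at extreme hypo-/hyperglycemia (constants invented)."""
    if bg < lo:
        return min(100.0, 100.0 * ((lo - bg) / (lo - worst_lo)) ** 2)
    if bg > hi:
        return min(100.0, 100.0 * ((bg - hi) / (worst_hi - hi)) ** 2)
    return 0.0

def gpi(bg_values):
    """Glycemic penalty index: the average penalty over a patient's BG samples."""
    return sum(penalty(bg) for bg in bg_values) / len(bg_values)

print(gpi([95, 130, 60, 105]))  # small worked example, one penalty per sample
```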
Lu, Qing; Kim, Jaegil; Straub, John E
2013-03-14
The generalized Replica Exchange Method (gREM) is extended into the isobaric-isothermal ensemble, and applied to simulate a vapor-liquid phase transition in Lennard-Jones fluids. Merging an optimally designed generalized ensemble sampling with replica exchange, gREM is particularly well suited for the effective simulation of first-order phase transitions characterized by "backbending" in the statistical temperature. While the metastable and unstable states in the vicinity of the first-order phase transition are masked by the enthalpy gap in temperature replica exchange method simulations, they are transformed into stable states through the parameterized effective sampling weights in gREM simulations, and join vapor and liquid phases with a succession of unimodal enthalpy distributions. The enhanced sampling across metastable and unstable states is achieved without the need to identify a "good" order parameter for biased sampling. We performed gREM simulations at various pressures below and near the critical pressure to examine the change in behavior of the vapor-liquid phase transition at different pressures. We observed a crossover from the first-order phase transition at low pressure, characterized by the backbending in the statistical temperature and the "kink" in the Gibbs free energy, to a continuous second-order phase transition near the critical pressure. The controlling mechanisms of nucleation and continuous phase transition are evident and the coexistence properties and phase diagram are found in agreement with literature results.
Cooperative strings and glassy interfaces
Salez, Thomas; Salez, Justin; Dalnoki-Veress, Kari; Raphaël, Elie; Forrest, James A.
2015-01-01
We introduce a minimal theory of glass formation based on the ideas of molecular crowding and resultant string-like cooperative rearrangement, and address the effects of free interfaces. In the bulk case, we obtain a scaling expression for the number of particles taking part in cooperative strings, and we recover the Adam–Gibbs description of glassy dynamics. Then, by including thermal dilatation, the Vogel–Fulcher–Tammann relation is derived. Moreover, the random and string-like characters of the cooperative rearrangement allow us to predict a temperature-dependent expression for the cooperative length ξ of bulk relaxation. Finally, we explore the influence of sample boundaries when the system size becomes comparable to ξ. The theory is in agreement with measurements of the glass-transition temperature of thin polymer films, and allows quantification of the temperature-dependent thickness hm of the interfacial mobile layer. PMID:26100908
Retention time alignment of LC/MS data by a divide-and-conquer algorithm.
Zhang, Zhongqi
2012-04-01
Liquid chromatography-mass spectrometry (LC/MS) has become the method of choice for characterizing complex mixtures. These analyses often involve quantitative comparison of components in multiple samples. To achieve automated sample comparison, the components of interest must be detected and identified, and their retention times aligned and peak areas calculated. This article describes a simple pairwise iterative retention time alignment algorithm, based on the divide-and-conquer approach, for alignment of ion features detected in LC/MS experiments. In this iterative algorithm, ion features in the sample run are first aligned with features in the reference run by applying a single constant shift of retention time. The sample chromatogram is then divided into two shorter chromatograms, which are aligned to the reference chromatogram the same way. Each shorter chromatogram is further divided into even shorter chromatograms. This process continues until each chromatogram is sufficiently narrow so that ion features within it have a similar retention time shift. In six pairwise LC/MS alignment examples containing a total of 6507 confirmed true corresponding feature pairs with retention time shifts up to five peak widths, the algorithm successfully aligned these features with an error rate of 0.2%. The alignment algorithm is demonstrated to be fast, robust, fully automatic, and superior to other algorithms. After alignment and gap-filling of detected ion features, their abundances can be tabulated for direct comparison between samples.
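A minimal recursive sketch of the divide-and-conquer idea: estimate one constant retention-time shift for the current segment, apply it, split the segment in two, and recurse until segments are narrow. The feature matching and the shift estimator (median of nearest-reference differences) are simplifications of the paper's procedure:

```python
import numpy as np

def align(sample_rt, reference_rt, min_span=1.0):
    """Recursive retention-time alignment (illustrative sketch).

    Estimates one constant shift for the current segment, applies it, then
    splits the segment in two and recurses until segments are narrower
    than min_span. Returns the shifted retention times."""
    sample_rt = np.asarray(sample_rt, dtype=float)
    reference_rt = np.asarray(reference_rt, dtype=float)
    if sample_rt.size == 0:
        return sample_rt
    # Nearest reference feature for each sample feature, then a robust shift.
    nearest = reference_rt[np.abs(reference_rt[None, :] - sample_rt[:, None]).argmin(axis=1)]
    shifted = sample_rt + np.median(nearest - sample_rt)
    span = shifted.max() - shifted.min()
    if span <= min_span or shifted.size < 2:
        return shifted
    mid = shifted.min() + span / 2.0   # divide: split the chromatogram in half
    return np.concatenate([align(shifted[shifted <= mid], reference_rt, min_span),
                           align(shifted[shifted > mid], reference_rt, min_span)])
```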
A novel directional asymmetric sampling search algorithm for fast block-matching motion estimation
NASA Astrophysics Data System (ADS)
Li, Yue-e.; Wang, Qiang
2011-11-01
This paper proposes a novel directional asymmetric sampling search (DASS) algorithm for video compression. Making full use of the error information (block distortions) of the search patterns, eight directional search patterns are designed for various situations. A local sampling search strategy is employed for large motion vectors. To further speed up the search, an early-termination strategy is adopted in the DASS procedure. Compared with conventional fast algorithms, the proposed method achieves the most satisfactory PSNR values across all test sequences.
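For context, the exhaustive full-search baseline that fast algorithms like DASS undercut looks as follows. The sketch shows the SAD block-distortion cost and the early-exit idea, but not the paper's directional sampling patterns; the block size, search radius, and threshold are invented:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences: the block-distortion cost."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def best_match(ref, cur, bx, by, bs=16, radius=7, threshold=128):
    """Full-search block matching with early termination (illustrative).

    ref, cur: 2-D uint8 frames. Returns the motion vector (dx, dy) and cost.
    DASS replaces this exhaustive scan with directional sampling patterns
    chosen from the distortion information; only the cost function and the
    early-exit idea are shown here."""
    block = cur[by:by+bs, bx:bx+bs]
    best, best_cost = (0, 0), sad(block, ref[by:by+bs, bx:bx+bs])
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                cost = sad(block, ref[y:y+bs, x:x+bs])
                if cost < best_cost:
                    best, best_cost = (dx, dy), cost
                    if best_cost < threshold:   # early termination
                        return best, best_cost
    return best, best_cost
```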
Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images
Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun
2013-01-01
This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points where a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves rendering speed by more than a factor of three compared with the conventional algorithm, while image quality is well preserved. PMID:23424608
Roh, Min K; Gillespie, Dan T; Petzold, Linda R
2010-11-07
The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.
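A compact sketch of the wSSA idea with a fixed bias (the paper's contribution is to make the bias state-dependent, which would amount to computing the bias from the state inside the loop): reactions are selected from biased propensities while time advances as in the ordinary SSA, and each trajectory carries the accumulated likelihood ratio. Time accounting at the final step is simplified:

```python
import random

def wssa(x0, rates, stoich, bias, target, t_max, n_runs=10000):
    """Weighted SSA sketch with a fixed selection bias (illustrative).

    rates(x): list of reaction propensities for state x.
    stoich[j]: state-change vector of reaction j.
    bias[j]: factor multiplying reaction j's selection probability; the
             paper's refinement makes this state-dependent.
    target(x): True once the rare event has occurred.
    Returns the weighted estimate of P(event before t_max)."""
    total = 0.0
    for _ in range(n_runs):
        x, t, w = list(x0), 0.0, 1.0
        while t < t_max and not target(x):
            a = rates(x)
            a0 = sum(a)
            if a0 == 0.0:
                break                       # no reaction can fire
            t += random.expovariate(a0)     # time advances as in plain SSA
            b = [ai * gi for ai, gi in zip(a, bias)]
            b0 = sum(b)
            r, acc, j = random.random() * b0, 0.0, 0
            for j, bj in enumerate(b):      # select reaction from biased mix
                acc += bj
                if acc >= r:
                    break
            w *= (a[j] / a0) / (b[j] / b0)  # likelihood-ratio correction
            x = [xi + s for xi, s in zip(x, stoich[j])]
        if target(x):                       # end-of-run event check (simplified)
            total += w
    return total / n_runs
```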
NASA Astrophysics Data System (ADS)
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm employing a combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a presented two-step design-point updating rule. This part finishes after a small number of samples have been generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a presented effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the presented rules are shown.
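The importance-sampling stage such hybrid algorithms start from can be sketched in standard normal space: draw samples from a proposal centred at the design point and re-weight by the exact density ratio. The limit-state function g and the design point x_star are user-supplied here; the paper's two-step design-point updating rule is not reproduced:

```python
import numpy as np

def failure_probability(g, x_star, n=100_000, seed=1):
    """Importance-sampling estimate of P(g(X) <= 0) for X ~ N(0, I).

    Draws from a standard normal shifted to the design point x_star and
    re-weights by the exact density ratio phi(u) / phi(u - x_star)."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n, len(x_star))) + x_star
    log_w = -u @ x_star + 0.5 * x_star @ x_star      # log density ratio
    fails = np.fromiter((g(ui) <= 0.0 for ui in u), dtype=float, count=n)
    return float(np.mean(np.exp(log_w) * fails))

# Check on g(x) = 3 - x[0] with design point x* = (3,):
# the exact answer is 1 - Phi(3), about 1.35e-3.
print(failure_probability(lambda x: 3.0 - x[0], np.array([3.0])))
```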
A method for feature selection of APT samples based on entropy
NASA Astrophysics Data System (ADS)
Du, Zhenyu; Li, Yihong; Hu, Jinsong
2018-05-01
By studying known APT attack events in depth, this paper proposes a feature selection method for APT samples and a logic expression generation algorithm, IOCG (Indicator of Compromise Generate). The algorithm automatically generates machine-readable IOCs (Indicators of Compromise), overcoming the limitations of existing IOCs, whose logical relationships are fixed, whose number of logical items cannot change, which are large in scale, and which cannot be generated directly from samples. At the same time, it reduces redundancy and the time spent processing useless APT samples, improves the sharing rate of analysis information, and supports an active response to a complex and volatile APT attack situation. The samples were divided into an experimental set and a training set, and the algorithm was used to generate logical expressions for the training set with the IOC_Aware plug-in; the generated expressions were then compared in both form and detection results. The experimental results show that the algorithm is effective and improves detection.
Iterative Importance Sampling Algorithms for Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W; Morzfeld, Matthias; Day, Marcus S.
In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is a challenging task. Several sampling algorithms have been proposed over the past years that take an iterative approach to constructing a proposal distribution. We investigate the applicability of such algorithms by applying them to two realistic and challenging test problems, one in subsurface flow, and one in combustion modeling. More specifically, we implement importance sampling algorithms that iterate over the mean and covariance matrix of Gaussian or multivariate t-proposal distributions. Our implementation leverages massively parallel computers, and we present strategies to initialize the iterations using 'coarse' MCMC runs or Gaussian mixture models.
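One round of the moment-iteration scheme described above can be sketched directly: draw from the current Gaussian proposal, compute self-normalised weights against the unnormalised log-posterior, and refit the proposal's mean and covariance from the weighted sample. The parallel sampling, t-proposals, and MCMC/mixture initialisations are omitted:

```python
import numpy as np

def iterative_is(log_post, x0, cov0, n=5000, iters=10, seed=2):
    """Iterate the mean and covariance of a Gaussian proposal (sketch).

    log_post(x): unnormalised log-posterior density.
    Each round draws independent samples, computes self-normalised
    importance weights, and refits the proposal moments."""
    rng = np.random.default_rng(seed)
    mu, cov = np.asarray(x0, float), np.asarray(cov0, float)
    for _ in range(iters):
        xs = rng.multivariate_normal(mu, cov, size=n)
        diff = xs - mu
        sol = np.linalg.solve(cov, diff.T).T
        # Log proposal density (constants cancel in self-normalisation).
        log_q = -0.5 * np.einsum('ij,ij->i', diff, sol) \
                - 0.5 * np.linalg.slogdet(cov)[1]
        log_w = np.array([log_post(x) for x in xs]) - log_q
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        mu = w @ xs                                   # weighted mean
        centered = xs - mu
        cov = (centered * w[:, None]).T @ centered    # weighted covariance
        cov += 1e-9 * np.eye(len(mu))                 # keep positive definite
    return mu, cov
```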
Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco
2015-01-01
In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds—the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(−1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*, especially in high-dimensional configuration spaces and in scenarios where collision-checking is expensive. PMID:27003958
Computational Thermodynamics of Materials Zi-Kui Liu and Yi Wang
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devanathan, Ram
This authoritative volume introduces the reader to computational thermodynamics and the use of this approach to the design of material properties by tailoring the chemical composition. The text covers applications of this approach, introduces the relevant computational codes, and offers exercises at the end of each chapter. The book has nine chapters and two appendices that provide background material on computer codes. Chapter 1 covers the first and second laws of thermodynamics, introduces the spinodal as the limit of stability, and presents the Gibbs-Duhem equation. Chapter 2 focuses on the Gibbs energy function. Starting with a homogeneous system with a single phase, the authors proceed to phases with variable compositions, and polymer blends. The discussion includes the contributions of external electric and magnetic fields to the Gibbs energy. Chapter 3 deals with phase equilibria in heterogeneous systems, the Gibbs phase rule, and phase diagrams. Chapter 4 briefly covers experimental measurements of thermodynamic properties used as input for thermodynamic modeling by Calculation of Phase Diagrams (CALPHAD). Chapter 5 discusses the use of density functional theory to obtain thermochemical data and fill gaps where experimental data is missing. The reader is introduced to the Vienna Ab Initio Simulation Package (VASP) for density functional theory and the YPHON code for phonon calculations. Chapter 6 introduces the modeling of Gibbs energy of phases with the CALPHAD method. Chapter 7 deals with chemical reactions and the Ellingham diagram for metal-oxide systems and presents the calculation of the maximum reaction rate from equilibrium thermodynamics. Chapter 8 is devoted to electrochemical reactions and Pourbaix diagrams with application examples. Chapter 9 concludes this volume with the application of a model of multiple microstates to Ce and Fe3Pt. CALPHAD modeling is briefly discussed in the context of genomics of materials. The book introduces basic thermodynamic concepts clearly and directs readers to appropriate references for advanced concepts and details of software implementation. The list of references is quite comprehensive. The authors make liberal use of diagrams to illustrate key concepts. The two Appendices at the end discuss software requirements and the file structure, and present templates for special quasi-random structures. There is also a link to download pre-compiled binary files of the YPHON code for Linux or Microsoft Windows systems. The exercises at the end of the chapters assume that the reader has access to VASP, which is not freeware. Readers without access to this code can work on a limited number of exercises. However, results from other first principles codes can be organized in the YPHON format as explained in the Appendix. This book will serve as an excellent reference on computational thermodynamics and the exercises provided at the end of each chapter make it valuable as a graduate level textbook. Reviewer: Ram Devanathan is Acting Director of Earth Systems Science Division, Pacific Northwest National Laboratory, USA.
Sampling from complex networks using distributed learning automata
NASA Astrophysics Data System (ADS)
Rezvanian, Alireza; Rahmati, Mohammad; Meybodi, Mohammad Reza
2014-02-01
A complex network provides a framework for modeling many real-world phenomena in the form of a network. In general, a complex network is considered as a graph of real-world phenomena such as biological networks, ecological networks, technological networks, information networks and, particularly, social networks. Recently, many studies have characterized social networks, reflecting the growing trend toward analyzing online social networks as dynamic, complex, large-scale graphs. Because real networks are large and access to them is limited, the network model is characterized from an appropriate part of the network obtained by sampling approaches. In this paper, a new sampling algorithm based on distributed learning automata is proposed for sampling from complex networks. In the proposed algorithm, a set of distributed learning automata cooperate with each other in order to take appropriate samples from the given network. To investigate the performance of the proposed algorithm, several simulation experiments are conducted on well-known complex networks. The experimental results are compared with those of several sampling methods in terms of different measures, and demonstrate the superiority of the proposed algorithm over the others.
Chen, Jun; Quan, Wenting; Cui, Tingwei
2015-01-01
In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three semi-analytical algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated with a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. Comparing the assessment accuracy of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm outperformed the other two. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary reduced the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm and by 29.66% compared with the TSA algorithm. These are very significant improvements upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for both the TSA and UMSA algorithms, or the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results. Thus, good results may also be expected if the UMSA algorithm is applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
Xiong, Zheng; He, Yinyan; Hattrick-Simpers, Jason R; Hu, Jianjun
2017-03-13
The creation of composition-processing-structure relationships currently represents a key bottleneck for data analysis for high-throughput experimental (HTE) material studies. Here we propose an automated phase diagram attribution algorithm for HTE data analysis that uses a graph-based segmentation algorithm and Delaunay tessellation to create a crystal phase diagram from high throughput libraries of X-ray diffraction (XRD) patterns. We also propose the sample-pair based objective evaluation measures for the phase diagram prediction problem. Our approach was validated using 278 diffraction patterns from a Fe-Ga-Pd composition spread sample with a prediction precision of 0.934 and a Matthews Correlation Coefficient score of 0.823. The algorithm was then applied to the open Ni-Mn-Al thin-film composition spread sample to obtain the first predicted phase diagram mapping for that sample.
Mobile robot motion estimation using Hough transform
NASA Astrophysics Data System (ADS)
Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu
2018-05-01
This paper proposes an algorithm for estimating mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of the space geometry at any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. A specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.
An Improved Mnemonic Diagram for Thermodynamic Relationships.
ERIC Educational Resources Information Center
Rodriguez, Joaquin; Brainard, Alan J.
1989-01-01
Considers pressure, volume, entropy, temperature, Helmholtz free energy, Gibbs free energy, enthalpy, and internal energy. Suggests the mnemonic diagram is for use with simple systems that are defined as macroscopically homogeneous, isotropic, uncharged, and chemically inert. (MVL)
A Mechanical Analogue for Chemical Potential, Extent of Reaction, and the Gibbs Energy.
ERIC Educational Resources Information Center
Glass, Samuel V.; DeKock, Roger L.
1998-01-01
Presents an analogy that relates the one-dimensional mechanical equilibrium of a rigid block between two Hooke's law springs and the chemical equilibrium of two perfect gases using ordinary materials. (PVD)
The African Women's Protocol: bringing attention to reproductive rights and the MDGs.
Gerntholtz, Liesl; Gibbs, Andrew; Willan, Samantha
2011-04-01
Andrew Gibbs and colleagues discuss the African Women's Protocol, a framework for ensuring reproductive rights are supported throughout the continent and for supporting interventions to improve women's reproductive health, including the MDGs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mihaescu, Tatiana, E-mail: mihaescu92tatiana@gmail.com; Isar, Aurelian
We describe the evolution of the quantum entanglement of an open system consisting of two bosonic modes interacting with a common thermal environment, described by two different models. The initial state of the system is taken of Gaussian form. In the case of a thermal bath, characterized by temperature and dissipation constant which correspond to an asymptotic Gibbs state of the system, we show that for a zero temperature of the thermal bath an initial entangled Gaussian state remains entangled for all finite times. For an entangled initial squeezed thermal state, the phenomenon of entanglement sudden death takes place and we calculate the survival time of entanglement. For the second model of the environment, corresponding to a non-Gibbs asymptotic state, we study the possibility of generating entanglement. We show that the generation of the entanglement between two uncoupled bosonic modes is possible only for definite values of the temperature and dissipation constant, which characterize the thermal environment.
AC-67/FLTSATCOM Launch with Isolated Cam Views/ Freeze of Lightning/ Press Conference
NASA Technical Reports Server (NTRS)
1987-01-01
The FLTSATCOM system provides worldwide, high-priority UHF communications between naval aircraft, ships, submarines, and ground stations and between the Strategic Air Command and the national command authority network. This videotape shows the attempted launch of the 6th member of the satellite system on an Atlas Centaur rocket. Within a minute of launch a problem developed. The initial sign of the problem was the loss of telemetry data. The videotape shows three isolated views of the launch, and then a freeze shot of a lightning strike shortly after the launch. The tape then shows a press conference, with Mr. Wolmaster, Mr. Gibbs, and Air Force Colonel Alsbrooke. Mr. Gibbs summarizes the steps that would be taken to review the launch failure. The questions from the press mostly concern the weather conditions, and the possibility that the weather might have caused the mission failure.
eQuilibrator—the biochemical thermodynamics calculator
Flamholz, Avi; Noor, Elad; Bar-Even, Arren; Milo, Ron
2012-01-01
The laws of thermodynamics constrain the action of biochemical systems. However, thermodynamic data on biochemical compounds can be difficult to find and is cumbersome to perform calculations with manually. Even simple thermodynamic questions like ‘how much Gibbs energy is released by ATP hydrolysis at pH 5?’ are complicated excessively by the search for accurate data. To address this problem, eQuilibrator couples a comprehensive and accurate database of thermodynamic properties of biochemical compounds and reactions with a simple and powerful online search and calculation interface. The web interface to eQuilibrator (http://equilibrator.weizmann.ac.il) enables easy calculation of Gibbs energies of compounds and reactions given arbitrary pH, ionic strength and metabolite concentrations. The eQuilibrator code is open-source and all thermodynamic source data are freely downloadable in standard formats. Here we describe the database characteristics and implementation and demonstrate its use. PMID:22064852
A possible four-phase coexistence in a single-component system
NASA Astrophysics Data System (ADS)
Akahane, Kenji; Russo, John; Tanaka, Hajime
2016-08-01
For different phases to coexist in equilibrium at constant temperature T and pressure P, the condition of equal chemical potential μ must be satisfied. This condition dictates that, for a single-component system, the maximum number of phases that can coexist is three. Historically this is known as the Gibbs phase rule, and is one of the oldest and venerable rules of thermodynamics. Here we make use of the fact that, by varying model parameters, the Gibbs phase rule can be generalized so that four phases can coexist even in single-component systems. To systematically search for the quadruple point, we use a monoatomic system interacting with a Stillinger-Weber potential with variable tetrahedrality. Our study indicates that the quadruple point provides flexibility in controlling multiple equilibrium phases and may be realized in systems with tunable interactions, which are nowadays feasible in several soft matter systems such as patchy colloids.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
We continue our investigation of overcoming the Gibbs phenomenon, i.e., obtaining exponential accuracy at all points (including at the discontinuities themselves) from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. We show that if we are given the first N Gegenbauer expansion coefficients, based on the Gegenbauer polynomials C_k^μ(x) with the weight function (1 − x²)^(μ − 1/2) for any constant μ ≥ 0, of an L₁ function f(x), we can construct an exponentially convergent approximation to the point values of f(x) in any subinterval in which the function is analytic. The proof covers the cases of Chebyshev or Legendre partial sums, which are most common in applications.
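For orientation, the phenomenon being removed is easy to reproduce numerically: the Fourier partial sums of a square wave overshoot the jump by roughly 9% of the jump height no matter how many terms are kept. A quick NumPy check, which illustrates the problem only, not the Gegenbauer reprojection itself:

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Fourier partial sum of sign(x) on [-pi, pi]: (4/pi) sum sin(kx)/k, k odd."""
    ks = np.arange(1, 2 * n_terms, 2)
    return (4.0 / np.pi) * np.sum(np.sin(np.outer(x, ks)) / ks, axis=1)

x = np.linspace(1e-3, 0.5, 2000)          # just to the right of the jump at 0
for n in (16, 64, 256):
    print(n, square_wave_partial_sum(x, n).max())
# The peak stays near 1.179 (about a 9% overshoot of the jump) as n grows:
# this is the Gibbs phenomenon that the Gegenbauer reprojection eliminates.
```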
Removal of the Gibbs phenomenon and its application to fast-Fourier-transform-based mode solvers.
Wangüemert-Pérez, J G; Godoy-Rubio, R; Ortega-Moñux, A; Molina-Fernández, I
2007-12-01
A simple strategy for accurately recovering discontinuous functions from their Fourier series coefficients is presented. The aim of the proposed approach, named spectrum splitting (SS), is to remove the Gibbs phenomenon by making use of signal-filtering-based concepts and some properties of the Fourier series. While the technique can be used in a vast range of situations, it is particularly suitable for being incorporated into fast-Fourier-transform-based electromagnetic mode solvers (FFT-MSs), which are known to suffer from very poor convergence rates when applied to situations where the field distributions are highly discontinuous (e.g., silicon-on-insulator photonic wires). The resultant method, SS-FFT-MS, is exhaustively tested under the assumption of a simplified one-dimensional model, clearly showing a dramatic improvement of the convergence rates with respect to the original FFT-based methods.
The Gibbs paradox and the physical criteria for indistinguishability of identical particles
NASA Astrophysics Data System (ADS)
Unnikrishnan, C. S.
2016-08-01
The Gibbs paradox in the context of statistical mechanics addresses the issue of the additivity of the entropy of mixing of gases. The usual discussion attributes the paradoxical situation to the classical distinguishability of identical particles and credits quantum theory for enabling the indistinguishability of identical particles that solves the problem. We argue that indistinguishability of identical particles is already a feature in classical mechanics, and this is clearly brought out when the problem is treated in the language of information and the associated entropy. We pinpoint the physical criteria for indistinguishability that are crucial for the treatment of the Gibbs problem and the consistency of its solution with conventional thermodynamics. Quantum mechanics provides a quantitative criterion, not possible in the classical picture, for the degree of indistinguishability in terms of the visibility of quantum interference, or the overlap of the states as pointed out by von Neumann, thereby endowing the entropy expression with mathematical continuity and physical reasonableness.
Tightness of the Ising-Kac Model on the Two-Dimensional Torus
NASA Astrophysics Data System (ADS)
Hairer, Martin; Iberti, Massimo
2018-05-01
We consider the sequence of Gibbs measures of Ising models with Kac interaction defined on a periodic two-dimensional discrete torus near criticality. Using the convergence of the Glauber dynamic proven by Mourrat and Weber (Commun Pure Appl Math 70:717-812, 2017) and a method by Tsatsoulis and Weber employed in (arXiv:1609.08447, 2016), we show tightness for the sequence of Gibbs measures of the Ising-Kac model near criticality and characterise the law of the limit as the Φ^4_2 measure on the torus. Our result is very similar to the one obtained by Cassandro et al. (J Stat Phys 78(3):1131-1138, 1995) on Z^2, but our strategy takes advantage of the dynamic, instead of correlation inequalities. In particular, our result covers the whole critical regime and does not require the large temperature/large mass/small coupling assumption present in earlier results.
Robie, R.A.; Wiggins, L.B.; Barton, P.B.; Hemingway, B.S.
1985-01-01
The heat capacity of CuFeS2 (chalcopyrite) was measured between 6.3 and 303.5 K. At 298.15 K, Cp,m° and Sm°(T) are (95.67 ± 0.14) J·K⁻¹·mol⁻¹ and (124.9 ± 0.2) J·K⁻¹·mol⁻¹, respectively. From a consideration of the results of two sets of equilibrium measurements we conclude that ΔfHm°(CuFeS2, cr, 298.15 K) = −(193.6 ± 1.6) kJ·mol⁻¹ and that the recent bomb-calorimetric determination by Johnson and Steele (J. Chem. Thermodynamics 1981, 13, 991) is in error. The standard molar Gibbs free energy of formation of bornite (Cu5FeS4) is −(444.9 ± 2.1) kJ·mol⁻¹ at 748 K. © 1985.
NASA Technical Reports Server (NTRS)
Sudbrack, Chantal K.; Noebe, Ronald D.; Seidman, David N.
2006-01-01
For a Ni-5.2 Al-14.2 Cr at.% alloy with moderate solute supersaturations, the compositional pathways, as measured with atom-probe tomography, during early- to later-stage γ′ (L1₂) precipitation (R = 0.45-10 nm), aged at 873 K, are discussed in light of a multi-component coarsening model. Employing nondilute thermodynamics, detailed model analyses during quasistationary coarsening of the experimental data establish that the γ/γ′ interfacial free energy is 22-23 ± 7 mJ/m². Additionally, solute diffusivities are significantly slower than model estimates. Strong quantitative evidence indicates that an observed γ′ supersaturation of Al results from the Gibbs-Thomson effect, providing the first experimental verification of this phenomenon. The Gibbs-Thomson relationship for a ternary system, as well as differences in measured phase equilibria with CALPHAD assessments, are considered in great detail.
Quenching the XXZ spin chain: quench action approach versus generalized Gibbs ensemble
NASA Astrophysics Data System (ADS)
Mestyán, M.; Pozsgay, B.; Takács, G.; Werner, M. A.
2015-04-01
Following our previous work (Pozsgay et al 2014 Phys. Rev. Lett. 113 117203) we present here a detailed comparison of the quench action approach and the predictions of the generalized Gibbs ensemble, with the result that while the quench action formalism correctly captures the steady state, the GGE does not give a correct description of local short-distance correlation functions. We extend our studies to include another initial state, the so-called q-dimer state. We present important details of our construction, including new results concerning exact overlaps for the dimer and q-dimer states, and we also give an exact solution of the quench-action-based overlap-TBA for the q-dimer. Furthermore, we extend our computations to include the xx spin correlations besides the zz correlations treated previously, and give a detailed discussion of the underlying reasons for the failure of the GGE, especially in the light of new developments.
Hemingway, Bruch S.; Seal, Robert R.; Chou, I-Ming
2002-01-01
Enthalpy of formation, Gibbs energy of formation, and entropy values have been compiled from the literature for the hydrated ferrous sulfate minerals melanterite, rozenite, and szomolnokite, and for a variety of other hydrated sulfate compounds. On the basis of this compilation, it appears that there is no evidence for an excess enthalpy of mixing for sulfate-H2O systems, except for the first H2O molecule of crystallization. The enthalpy and Gibbs energy of formation of each H2O molecule of crystallization, except the first, in the iron(II) sulfate-H2O system are -295.15 and -238.0 kJ·mol⁻¹, respectively. The absence of an excess enthalpy of mixing is used as the basis for estimating thermodynamic values for a variety of ferrous, ferric, and mixed-valence sulfate salts of relevance to acid-mine drainage systems.
Naumov, Sergej; von Sonntag, Clemens
2011-11-01
Free radicals are common intermediates in the chemistry of ozone in aqueous solution. Their reactions with ozone have been probed by calculating the standard Gibbs free energies of such reactions using density functional theory (Jaguar 7.6 program). O2 reacts fast and irreversibly only with simple carbon-centered radicals. In contrast, ozone also reacts irreversibly with conjugated carbon-centered radicals such as bisallylic (hydroxycyclohexadienyl) radicals, with conjugated carbon/oxygen-centered radicals such as phenoxyl radicals, and even with nitrogen-, oxygen-, sulfur-, and halogen-centered radicals. In these reactions, further ozone-reactive radicals are generated. Chain reactions may destroy ozone without giving rise to products other than O2. This may be of importance when ozonation is used in pollution control, and reactions of free radicals with ozone have to be taken into account in modeling such processes.
Tian; Holt; Apfel
1997-03-01
The experimental results of droplet shape oscillations are reported and applied to the analysis of surface rheological properties of surfactant solutions. An acoustic levitation technique is used to suspend the test drop in air and excite it into quadrupole shape oscillations. The equilibrium surface tension, Gibbs elasticity, and surface dilatational viscosity are determined from the measurements of droplet static shape under different levitation sound pressure, oscillation frequency, and free damping constant. Aqueous solutions of sodium dodecyl sulfate, dodecyltrimethylammonium bromide, and n-octyl beta-d-glucopyranoside are tested with this system. The concentrations of the solutions are below the critical micelle concentration. For these solutions it is found that the surface Gibbs elasticity approaches a maximum at a moderate concentration, and its value is less than that directly calculated from the state equation of a static liquid surface. The surface dilatational viscosity is found to be in a range around 0.1 cps.
Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang
2014-12-01
An appropriate algorithm for calibration-set selection is one of the key technologies for a good NIR quantitative model. Different algorithms are available for calibration-set selection, such as the Random Sampling (RS) algorithm, the Conventional Selection (CS) algorithm, the Kennard-Stone (KS) algorithm, and the Sample set Partitioning based on joint x-y distance (SPXY) algorithm. However, systematic comparisons among these algorithms are lacking. In the present paper, NIR quantitative models to determine the asiaticoside content in Centella total glucosides were established, for which 7 indexes were classified and selected, and the effects of the CS, KS, and SPXY algorithms for calibration-set selection on the accuracy and robustness of the NIR quantitative models were investigated. The accuracy indexes of NIR quantitative models with calibration sets selected by the SPXY algorithm were significantly different from those with calibration sets selected by the CS or KS algorithm, while the robustness indexes, such as RMSECV and |RMSEP−RMSEC|, were not significantly different. Therefore, the SPXY algorithm for calibration-set selection can improve the predictive accuracy of NIR quantitative models for determining asiaticoside content in Centella total glucosides without significantly affecting the robustness of the models, which provides a reference for determining the appropriate calibration-set selection algorithm when NIR quantitative models are established for solid systems of traditional Chinese medicine.
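Of the algorithms compared, Kennard-Stone is the most self-contained to illustrate: it seeds the calibration set with the two most distant samples and then greedily adds the sample farthest from the current set. A minimal sketch over a spectra matrix X, with Euclidean distance assumed; SPXY would augment this distance with a y-distance term:

```python
import numpy as np

def kennard_stone(X, n_select):
    """Kennard-Stone calibration-set selection (illustrative sketch).

    Starts from the two most distant samples, then repeatedly adds the
    sample whose minimum distance to the already-selected set is largest,
    giving a calibration set that covers the spectral space evenly."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(D), D.shape)
    selected = [int(i), int(j)]
    while len(selected) < n_select:
        remaining = [k for k in range(len(X)) if k not in selected]
        dmin = D[np.ix_(remaining, selected)].min(axis=1)  # distance to the set
        selected.append(remaining[int(np.argmax(dmin))])
    return selected
```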
[Purity Detection Model Update of Maize Seeds Based on Active Learning].
Tang, Jin-ya; Huang, Min; Zhu, Qi-bing
2015-08-01
Seed purity reflects the degree to which seed varieties show typical, consistent characteristics, so it is of great importance to improve the reliability and accuracy of seed purity detection to guarantee seed quality. Hyperspectral imaging can reflect the internal and external characteristics of seeds at the same time, and has been widely used in the nondestructive detection of agricultural products. The essence of nondestructive detection of agricultural products using hyperspectral imaging is to establish a mathematical model between the spectral information and the quality of the products. Since the spectral information is easily affected by the sample growth environment, the stability and generalization ability of a model weaken when test samples are harvested from a different origin or year. An active learning algorithm was investigated to add representative samples that expand the sample space of the original model, enabling rapid model updating. Random selection (RS) and the Kennard-Stone algorithm (KS) were used for comparison with the active learning algorithm. The experimental results indicated that, for different sample-set division ratios (1:1, 3:1, 4:1), the updated purity detection model for maize seeds from 2010, augmented with 40 samples selected by active learning from 2011, increased the prediction accuracy for new 2011 samples from 47%, 33.75%, and 49% to 98.89%, 98.33%, and 98.33%, respectively. For the updated 2011 model, the prediction accuracy for new 2010 samples increased by 50.83%, 54.58%, and 53.75% to reach 94.57%, 94.02%, and 94.57% after adding 56 new samples from 2010. The models updated by active learning outperformed those updated by RS and KS. Therefore, updating the purity detection model of maize seeds by active learning is feasible.
Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.
Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong
2014-09-01
A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. The performance of the MICCG algorithm and the SICCG algorithm is compared with that of state-of-the-art approaches.
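The inversion-free idea can be sketched generically: the MVDR weights w = R⁻¹a / (aᴴR⁻¹a) require solving R x = a, which plain conjugate gradients handles without forming R⁻¹. This mirrors the goal of avoiding explicit matrix inversion but is not the papers' exact constrained-CG recursions:

```python
import numpy as np

def mvdr_weights_cg(R, a, iters=None, tol=1e-10):
    """MVDR beamformer weights via conjugate gradients (illustrative).

    Solves R x = a iteratively (R Hermitian positive definite), then
    normalises: w = x / (a^H x). No explicit or implicit inversion of R."""
    n = len(a)
    iters = iters or n
    x = np.zeros(n, dtype=complex)
    r = a - R @ x
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Rp = R @ p
        alpha = rs / np.vdot(p, Rp).real   # step length along direction p
        x += alpha * p
        r -= alpha * Rp
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p          # new conjugate search direction
        rs = rs_new
    return x / np.vdot(a, x)               # distortionless normalisation
```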
Clusternomics: Integrative context-dependent clustering for heterogeneous datasets.
Gabasova, Evelina; Reid, John; Wernisch, Lorenz
2017-10-01
Integrative clustering is used to identify groups of samples by jointly analysing multiple datasets describing the same set of biological samples, such as gene expression, copy number, methylation etc. Most existing algorithms for integrative clustering assume that there is a shared consistent set of clusters across all datasets, and most of the data samples follow this structure. However in practice, the structure across heterogeneous datasets can be more varied, with clusters being joined in some datasets and separated in others. In this paper, we present a probabilistic clustering method to identify groups across datasets that do not share the same cluster structure. The proposed algorithm, Clusternomics, identifies groups of samples that share their global behaviour across heterogeneous datasets. The algorithm models clusters on the level of individual datasets, while also extracting global structure that arises from the local cluster assignments. Clusters on both the local and the global level are modelled using a hierarchical Dirichlet mixture model to identify structure on both levels. We evaluated the model both on simulated and on real-world datasets. The simulated data exemplifies datasets with varying degrees of common structure. In such a setting Clusternomics outperforms existing algorithms for integrative and consensus clustering. In a real-world application, we used the algorithm for cancer subtyping, identifying subtypes of cancer from heterogeneous datasets. We applied the algorithm to TCGA breast cancer dataset, integrating gene expression, miRNA expression, DNA methylation and proteomics. The algorithm extracted clinically meaningful clusters with significantly different survival probabilities. We also evaluated the algorithm on lung and kidney cancer TCGA datasets with high dimensionality, again showing clinically significant results and scalability of the algorithm.
Online selective kernel-based temporal difference learning.
Chen, Xingguo; Gao, Yang; Wang, Ruili
2013-12-01
In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large-scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (i.e., a kernel distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex compared with other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking if a sample needs to be added to the sparsified dictionary. In addition, based on local validity, a selective kernel-based value function is proposed to select the best samples from the sample dictionary for the selective kernel-based value function approximator. The parameters of the selective kernel-based value function are iteratively updated by using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in the OSKTD algorithm is O(n). In addition, two typical experiments (Maze and Mountain Car) are used to compare with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using the kernel-based value function), and the results demonstrate the effectiveness of our proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time compared with other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD can reach a competitive final optimum compared with the up-to-date algorithms.
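The online sparsification test is the easiest part to make concrete. Under a normalised kernel, the squared feature-space distance between a candidate and a dictionary element is 2 − 2k(x, y), so a candidate is added only when this exceeds a threshold μ for every stored sample. The kernel, threshold, and data format below are assumptions:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian kernel; k(x, x) = 1, so feature-space distance is 2 - 2 k."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def maybe_add(dictionary, sample, mu=0.1):
    """Add `sample` to the dictionary only if its squared kernel distance
    to every stored sample exceeds the sparsification threshold mu."""
    if all(2.0 - 2.0 * rbf(sample, d) > mu for d in dictionary):
        dictionary.append(sample)
    return dictionary

# Streaming usage: the dictionary grows only where the state space is new.
D = []
for s in [[0.0], [0.01], [1.0], [1.02], [2.0]]:
    maybe_add(D, s)
print(D)  # [[0.0], [1.0], [2.0]] for this threshold/kernel choice
```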
Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M
2017-06-01
Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied models for the estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance", from pedigree and phenotypic data. Estimates from this model, such as those presented here, are very scarce both in livestock and in wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using closed-form algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number weaned at the Polytechnic University of Valencia. Pedigrees and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 in the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large, increases with inbreeding, and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations.
A time series model: First-order integer-valued autoregressive (INAR(1))
NASA Astrophysics Data System (ADS)
Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.
2017-07-01
Nonnegative integer-valued time series arise in many applications. The first-order integer-valued autoregressive model, INAR(1), is constructed with the binomial thinning operator to model such series; each observation depends on the process one period before. The model parameter can be estimated by conditional least squares (CLS), and the specification of INAR(1) parallels that of AR(1). Forecasting in INAR(1) uses a median or a Bayesian forecasting methodology. The median methodology returns the least integer s at which the cumulative distribution function (CDF) reaches at least 0.5. The Bayesian methodology forecasts h steps ahead by generating the model parameter and the innovation parameter with Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the least integer s at which the CDF reaches at least u, where u is drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 to April 2016.
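A minimal sketch of the ingredients above, assuming Poisson innovations: binomial-thinning simulation, CLS estimation, and the median forecast computed by simulation from the h-step conditional distribution. The ARMS-within-Gibbs Bayesian step is omitted, and all settings are illustrative.

```python
import numpy as np
rng = np.random.default_rng(0)

def simulate_inar1(n, alpha, lam, x0=5):
    """X_t = alpha o X_{t-1} + eps_t, binomial thinning, Poisson innovations."""
    x = np.empty(n, dtype=int); x[0] = x0
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def cls_estimates(x):
    """Conditional least squares: regress X_t on X_{t-1}."""
    y, z = x[1:], x[:-1]
    alpha_hat = np.cov(z, y, bias=True)[0, 1] / np.var(z)
    lam_hat = y.mean() - alpha_hat * z.mean()
    return alpha_hat, lam_hat

def median_forecast(x_t, alpha, lam, h=1, nsim=100_000):
    """Least integer s with P(X_{t+h} <= s | X_t) >= 0.5, by simulation.
    For Poisson innovations, X_{t+h} | X_t is a thinned-survivor binomial
    plus a Poisson with mean lam * (1 - alpha**h) / (1 - alpha)."""
    surv = rng.binomial(x_t, alpha ** h, nsim)
    innov = rng.poisson(lam * (1 - alpha ** h) / (1 - alpha), nsim)
    return int(np.quantile(surv + innov, 0.5))

x = simulate_inar1(300, alpha=0.6, lam=2.0)
a_hat, l_hat = cls_estimates(x)
print(a_hat, l_hat, median_forecast(x[-1], a_hat, l_hat, h=1))
```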
NASA Astrophysics Data System (ADS)
Narayana, C.; Greene, R. G.; Ruoff, A. L.
2008-07-01
Raman and x-ray diffraction studies were made on silane in the diamond anvil cell using three different gaskets: stainless steel, tungsten, and rhenium. The structure existing between 10 and 27 GPa is well characterized by the monoclinic space group P21/c (No. 14). While the Gibbs free energy of formation of silane is positive at one atmosphere, it is calculated from the equations of state of silane and its reactants that this becomes negative near 4 GPa, remains negative until 13 GPa, and then becomes positive again. At about 27 GPa, where quasi-quantum mechanical calculations suggest there should be a transformation from 4-fold to 6-fold (or even higher) coordination, the sample turns black. The Raman modes cease to exist beyond 30 GPa after showing softening above 25 GPa. At higher pressures the sample turns silvery. The gaskets play different roles, as will be discussed. The sample brought back from 70 GPa contains amorphous Si (with attached hydrogen) as well as crystalline silicon. The lowest free-energy system at high pressure is the decomposed reactants, as observed.
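The sign-change argument follows from integrating dG = V dP across the reaction volume change. The sketch below shows only the arithmetic; the ambient value and the ΔV(P) form are placeholders for illustration, not the fitted equations of state used in the paper.

```python
import numpy as np

# At constant T, d(dGf)/dP = dV, so
#   dGf(P) = dGf(P0) + integral of [V(SiH4) - V(Si) - 2 V(H2)] dP.

def dv(p_gpa):
    """Hypothetical delta-V in cm^3/mol; note 1 GPa*cm^3/mol = 1 kJ/mol."""
    return -2.0 + 9.0 / (1.0 + p_gpa)

p = np.linspace(0.0, 30.0, 3001)                      # pressure grid, GPa
# trapezoidal integration of dV over the grid, plus a placeholder ambient value
dG = 5.0 + np.cumsum(np.r_[0.0, 0.5 * (dv(p[1:]) + dv(p[:-1])) * np.diff(p)])
crossings = p[1:][np.diff(np.sign(dG)) != 0]          # where dGf changes sign
print(crossings)
```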
Caliskan, Necla; Kul, Ali Riza; Alkan, Salih; Sogut, Eda Gokirmak; Alacabey, Ihsan
2011-10-15
The removal of Zn(II) ions from aqueous solution was studied using natural and MnO2-modified diatomite samples at different temperatures. The linear Langmuir, Freundlich, and Dubinin-Radushkevich (D-R) adsorption equations were applied to describe the equilibrium isotherms. From the D-R model, the mean adsorption energy was calculated as <8 kJ mol-1, indicating that the adsorption of Zn(II) onto diatomite and Mn-diatomite proceeds by physisorption. In addition, the pseudo-first-order, pseudo-second-order, and intraparticle diffusion models were used to analyze the kinetic data, which were well fitted by the pseudo-second-order kinetic model. Thermodynamic parameters, namely the enthalpy (ΔH°), Gibbs free energy (ΔG°), and entropy (ΔS°), were calculated for natural and MnO2-modified diatomite. These values showed that the adsorption of Zn(II) ions onto the diatomite samples is controlled by a physical mechanism and occurs spontaneously. Copyright © 2011 Elsevier B.V. All rights reserved.
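The D-R analysis reduces to a linear fit of ln qe against the squared Polanyi potential, with the mean adsorption energy E = 1/sqrt(2*beta). A sketch with hypothetical equilibrium data (not the paper's measurements):

```python
import numpy as np

R, T = 8.314, 298.15                                   # J/(mol K), K

# Hypothetical equilibrium data for illustration (Ce in mol/L, qe in mol/g).
Ce = np.array([2e-4, 5e-4, 1e-3, 2e-3, 5e-3])
qe = np.array([0.5e-4, 1.4e-4, 2.4e-4, 3.6e-4, 5.0e-4])

eps = R * T * np.log(1.0 + 1.0 / Ce)                   # Polanyi potential
slope, ln_qm = np.polyfit(eps ** 2, np.log(qe), 1)     # ln qe = ln qm - beta*eps^2
beta = -slope
E = 1.0 / np.sqrt(2.0 * beta)                          # mean adsorption energy, J/mol
print(f"E = {E / 1000:.1f} kJ/mol (values below ~8 kJ/mol read as physisorption)")
```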
An algorithm for extraction of periodic signals from sparse, irregularly sampled data
NASA Technical Reports Server (NTRS)
Wilcox, J. Z.
1994-01-01
Temporal gaps in discrete sampling sequences produce spurious Fourier components at the intermodulation frequencies of an oscillatory signal and the temporal gaps, significantly complicating spectral analysis of such sparsely sampled data. A new fast Fourier transform (FFT)-based algorithm has been developed, suitable for spectral analysis of sparsely sampled data with a relatively small number of oscillatory components buried in background noise. The algorithm's principal idea has its origin in the so-called 'CLEAN' algorithm used to sharpen images of scenes corrupted by atmospheric and sensor-aperture effects. It identifies as the signal's 'true' frequency that oscillatory component which, when passed through the same sampling sequence as the original data, produces a Fourier image that best matches the original Fourier space. The algorithm has generally met with success in trials with simulated data with a low signal-to-noise ratio, including data of a type similar to hourly residuals for Earth orientation parameters extracted from VLBI data. For eight oscillatory components in the diurnal and semidiurnal bands, all components with an amplitude-to-noise ratio greater than 0.2 were successfully extracted for all sequences and duty cycles (greater than 0.1) tested; the amplitude-to-noise ratios of the extracted signals were as low as 0.05 for high duty cycles and long sampling sequences. When, in addition to these high frequencies, strong low-frequency components are present in the data, the low-frequency components are eliminated first, by employing a version of the algorithm that searches for non-integer multiples of the discrete FFT minimum frequency.
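A CLEAN-style extraction can be sketched as repeated least-squares fitting and subtraction of the strongest sinusoid over a trial frequency grid. The version below is a simplified illustration with synthetic irregular samples, not the authors' FFT-based implementation; the grid, component count, and noise level are all hypothetical.

```python
import numpy as np

def clean_extract(t, y, freqs, n_components=3):
    """CLEAN-style loop: find the strongest sinusoid over a trial frequency
    grid (least-squares amplitude/phase), subtract it, and repeat."""
    resid, found = y.astype(float).copy(), []
    for _ in range(n_components):
        best = None
        for f in freqs:
            A = np.column_stack([np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t)])
            coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
            power = coef @ coef
            if best is None or power > best[0]:
                best = (power, f, A, coef)
        _, f, A, coef = best
        found.append((f, float(np.hypot(*coef))))       # (frequency, amplitude)
        resid = resid - A @ coef                        # subtract fitted component
    return found, resid

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 20, 120))                    # sparse, irregular sampling
y = 1.0 * np.sin(2 * np.pi * 1.0 * t) + 0.4 * np.cos(2 * np.pi * 1.93 * t) \
    + 0.3 * rng.standard_normal(t.size)
print(clean_extract(t, y, np.linspace(0.05, 3.0, 600), 2)[0])
```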
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. Hydrological simulation at large scale and high resolution has refined spatial descriptions of hydrological behavior, but this trend is accompanied by growing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and search performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
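The GLUE workflow reduces to (1) sampling parameter sets, (2) scoring each with a likelihood measure, and (3) forming weighted prediction bounds from the "behavioural" sets. The toy sketch below uses uniform random sampling and a Nash-Sutcliffe likelihood; the study's point is precisely to replace step (1) with GA/DE/SCE-UA sampling. The model, data, and behavioural threshold here are all hypothetical.

```python
import numpy as np
rng = np.random.default_rng(2)

def model(theta, x):                       # hypothetical two-parameter toy model
    return theta[0] * x + theta[1] * np.sqrt(x)

def nse(sim, obs):                         # Nash-Sutcliffe efficiency as likelihood
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def wquantile(v, w, q):                    # weighted quantile over the ensemble
    idx = np.argsort(v)
    cw = np.cumsum(w[idx])
    return v[idx][min(np.searchsorted(cw, q * cw[-1]), len(v) - 1)]

x = np.linspace(1.0, 10.0, 50)
obs = model([0.8, 2.0], x) + 0.3 * rng.standard_normal(x.size)

# Step 1: sample candidate parameter sets (uniform here; GA/DE/SCE-UA in the study).
thetas = rng.uniform([0.0, 0.0], [2.0, 4.0], size=(5000, 2))
L = np.array([nse(model(th, x), obs) for th in thetas])

# Steps 2-3: keep behavioural sets, weight by likelihood, form prediction bounds.
keep = L > 0.6                             # hypothetical behavioural threshold
w = L[keep] / L[keep].sum()
sims = np.array([model(th, x) for th in thetas[keep]])
lo = np.array([wquantile(sims[:, j], w, 0.05) for j in range(x.size)])
hi = np.array([wquantile(sims[:, j], w, 0.95) for j in range(x.size)])
print(f"{keep.sum()} behavioural sets; mean 90% band width = {(hi - lo).mean():.3f}")
```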
NASA Astrophysics Data System (ADS)
Ayyad, Yassid; Mittig, Wolfgang; Bazin, Daniel; Beceiro-Novo, Saul; Cortesi, Marco
2018-02-01
The three-dimensional reconstruction of particle tracks in a time projection chamber is a challenging task that requires advanced classification and fitting algorithms. In this work, we have developed and implemented a novel algorithm based on Random Sample Consensus (RANSAC). The RANSAC approach is used to classify tracks, including pile-up, to remove uncorrelated noise hits, and to reconstruct the vertex of the reaction. The algorithm, developed within the Active Target Time Projection Chamber (AT-TPC) framework, was tested and validated by analyzing the 4He+4He reaction. The results, performance, and quality of the proposed algorithm are presented and discussed in detail.
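The core of RANSAC track classification is: repeatedly sample a minimal set of hits, hypothesize a track model, and keep the largest consensus set. A minimal 3-D straight-line version with synthetic hits is sketched below; the AT-TPC implementation additionally handles curved tracks, pile-up separation, and vertex finding, and the tolerance and iteration count here are hypothetical.

```python
import numpy as np
rng = np.random.default_rng(3)

def ransac_line3d(points, n_iter=500, tol=0.5):
    """Fit a 3-D line to hit points with RANSAC: sample two hits, count
    inliers by point-to-line distance, keep the best consensus set."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d /= norm
        v = points - p1                     # distance of every point to the line
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# toy "track" plus uncorrelated noise hits
t = rng.uniform(0, 10, 80)
track = np.column_stack([t, 2 * t, -t]) + 0.1 * rng.standard_normal((80, 3))
noise = rng.uniform(-10, 20, (40, 3))
hits = np.vstack([track, noise])
print(ransac_line3d(hits).sum(), "hits assigned to the track")
```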
NASA Astrophysics Data System (ADS)
Corsi, A.; Gujrati, P. D.
2000-03-01
The Flory model of crystallization of polymers is well known and forms the cornerstone of the Gibbs-DiMarzio theory of the glass transition. The model has no known exact solution, and the original calculation [1] was shown to be incorrect [2]. Still, it is interesting to know the order of the phase transition, if the model has one. We have studied the thermodynamics of the model in the limit of infinite molecular weight and solved it exactly on a recursive lattice with coordination number q=4, relevant for a tetrahedral lattice. Our results show that there is a continuous, i.e., second-order, transition at which the entropy of the system is continuous. The entropy is finite at all temperatures and approaches 0 as T goes to 0, so the system is never completely ordered except at T=0. As the temperature is raised above T=0, the system begins to disorder, with a degree of disorder that depends on T, in accordance with the analysis of Gujrati and Goldstein [2]. Since there is no first-order transition, there is no Kauzmann paradox. Similarly, there is no possible metastable extension in the model, which is central to the Gibbs-DiMarzio conjecture of an ideal glass transition. Thus, our solution does not justify their conjecture. [1] P.J. Flory, Proc. R. Soc. London Ser. A 234, 60 (1956). [2] P.D. Gujrati, J. Phys. A: Math. Gen. 13, L437 (1980); P.D. Gujrati and M. Goldstein, J. Chem. Phys. 74(4), 2596 (1981).
Recommendations for terminology and databases for biochemical thermodynamics.
Alberty, Robert A; Cornish-Bowden, Athel; Goldberg, Robert N; Hammes, Gordon G; Tipton, Keith; Westerhoff, Hans V
2011-05-01
Chemical equations are normally written in terms of specific ionic and elemental species and balance atoms of elements and electric charge. However, in a biochemical context it is usually better to write them with ionic reactants expressed as totals of species in equilibrium with each other. This implies that atoms of elements assumed to be at fixed concentrations, such as hydrogen at a specified pH, should not be balanced in a biochemical equation used for thermodynamic analysis. However, both kinds of equations are needed in biochemistry. The apparent equilibrium constant K′ for a biochemical reaction is written in terms of such sums of species and can be used to calculate standard transformed Gibbs energies of reaction Δ_rG′°. This property for a biochemical reaction can be calculated from the standard transformed Gibbs energies of formation Δ_fG′°(i) of reactants, which can in turn be calculated from the standard Gibbs energies of formation of species Δ_fG°(j) and measured apparent equilibrium constants of enzyme-catalyzed reactions. Tables of Δ_rG′° of reactions and Δ_fG′°(i) of reactants as functions of pH and temperature are available on the web, as are functions for calculating these properties. Biochemical thermodynamics is also important in enzyme kinetics, because the apparent equilibrium constant K′ can be calculated from experimentally determined kinetic parameters when initial velocities have been determined for both forward and reverse reactions. Specific recommendations are made for reporting experimental results in the literature. Copyright © 2011 Elsevier B.V. All rights reserved.
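The central arithmetic is K′ = exp(−Δ_rG′°/RT), with Δ_rG′° assembled from the standard transformed Gibbs energies of formation of the summed reactants. A minimal sketch with hypothetical numbers (the real values depend on pH, temperature, and ionic strength, as the recommendations stress):

```python
import numpy as np

R, T = 8.314e-3, 298.15     # kJ/(mol K), K

def drg_from_formation(dfg_products, dfg_reactants):
    """Delta_r G'° from standard transformed Gibbs energies of formation."""
    return sum(dfg_products) - sum(dfg_reactants)

def k_prime(drg_prime_kj):
    """Apparent equilibrium constant: K' = exp(-Delta_r G'° / RT)."""
    return np.exp(-drg_prime_kj / (R * T))

# hypothetical Delta_f G'° values (kJ/mol) at a specified pH and T,
# for a reaction A + B -> C, purely for illustration
print(k_prime(drg_from_formation([-30.0], [-10.0, -15.0])))
```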
NASA Astrophysics Data System (ADS)
Tang, Huanfeng; Huang, Zaiyin; Xiao, Ming; Liang, Min; Chen, Liying; Tan, XueCai
2017-09-01
The activities, selectivities, and stabilities of nanoparticles in heterogeneous reactions are size-dependent. To investigate how particle size and temperature influence kinetic parameters in heterogeneous reactions, cubic nano-Cu2O particles of four different sizes in the range 40-120 nm were controllably synthesized. In situ microcalorimetry was used to obtain thermodynamic data on the reaction of Cu2O with aqueous HNO3 and, combined with thermodynamic principles and kinetic transition-state theory, the relevant reaction kinetic parameters were evaluated. The size dependences of the kinetic parameters are discussed in terms of the established kinetic model and the experimental results. The reaction rate constants increased with decreasing particle size; accordingly, the apparent activation energy, pre-exponential factor, activation enthalpy, activation entropy, and activation Gibbs energy decreased with decreasing particle size. The reaction rate constants and activation Gibbs energies increased with increasing temperature. Moreover, the logarithms of the apparent activation energies, pre-exponential factors, and rate constants were found to be linearly related to the reciprocal of particle size, consistent with the kinetic models. The influence of particle size on these kinetic parameters may be explained as follows: the apparent activation energy is governed by the partial molar enthalpy, the pre-exponential factor by the partial molar entropy, and the reaction rate constant by the partial molar Gibbs energy.
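The reported linear relations can be checked with ordinary least squares on ln k against 1/d (size dependence at fixed temperature) and against 1/T (Arrhenius fit at fixed size). All numbers below are hypothetical illustrations, not the paper's data.

```python
import numpy as np

R = 8.314                               # J/(mol K)

# Hypothetical rate constants k (s^-1) for four particle sizes d (nm) at one T,
# illustrating the reported linear ln k vs 1/d relation (k rises as d shrinks).
d = np.array([40.0, 60.0, 90.0, 120.0])
k = np.array([3.2e-3, 2.1e-3, 1.5e-3, 1.2e-3])
slope, intercept = np.polyfit(1.0 / d, np.log(k), 1)
print(f"ln k = {intercept:.2f} + {slope:.1f}/d   (positive slope)")

# Arrhenius fit over temperature at a fixed size gives the apparent
# activation energy Ea and pre-exponential factor A (again, toy numbers).
T = np.array([298.15, 308.15, 318.15, 328.15])
kT = np.array([1.5e-3, 2.6e-3, 4.3e-3, 6.9e-3])
sA, iA = np.polyfit(1.0 / T, np.log(kT), 1)
print(f"Ea ~ {-sA * R / 1000:.1f} kJ/mol, A ~ {np.exp(iA):.2e} s^-1")
```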
NASA Astrophysics Data System (ADS)
Bashir, Erum; Huda, Syed Nawaz-ul; Naseem, Shahid; Hamza, Salma; Kaleem, Maria
2017-07-01
Thirty-nine samples (23 from dug wells and 16 from tube wells) were geochemically evaluated to ascertain water quality in Khipro, Sindh. The analytical results exhibited major cations and anions in the sequences Na+ > Ca2+ > Mg2+ > K+ and Cl- > HCO3- > SO42-. Stiff diagrams showed that dug well samples have high Na-Cl and moderate Mg-SO4 content compared with tube well samples. The majority of dug well samples appear as Na-Cl type on the Piper diagram, while tube well samples are of mixed type. The Gibbs diagram reflects evaporation as the dominant process for dug wells, whereas tube well samples incline toward rock dominance. Ion exchange was evidenced by the Na+ versus Cl- and Ca2+ + Mg2+ versus HCO3- + SO42- plots. Principal component analysis also discriminates dug well from tube well water through positive and negative loadings based on the physical and chemical composition of the groundwater. Measured and computed parameters, including pH, EC, TDS, TH, Na+, K+, Ca2+, Mg2+, Cl-, SO42-, HCO3-, sodium adsorption ratio, magnesium adsorption ratio, potential salinity, residual sodium carbonate, Na%, Kelly's ratio, and permeability index, were compared with WHO guidelines to evaluate the water for drinking and agricultural purposes. Except for Na+ and K+, all chemical constituents are within the limits set by WHO for drinking water. Similarly, most of the groundwater is moderately suitable for irrigation, with few exceptions.
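Several of the irrigation indices named above are standard closed-form functions of the ion concentrations in meq/L. A small helper with hypothetical input values (the function and its numbers are illustrative, not the study's data):

```python
import numpy as np

def irrigation_indices(na, ca, mg, k, hco3, co3=0.0):
    """Textbook irrigation-quality indices from ion concentrations in meq/L."""
    sar = na / np.sqrt((ca + mg) / 2.0)                 # sodium adsorption ratio
    mar = 100.0 * mg / (ca + mg)                        # magnesium adsorption ratio
    rsc = (hco3 + co3) - (ca + mg)                      # residual sodium carbonate
    na_pct = 100.0 * (na + k) / (na + k + ca + mg)      # soluble sodium percentage
    kelly = na / (ca + mg)                              # Kelly's ratio
    return sar, mar, rsc, na_pct, kelly

# hypothetical groundwater sample, all in meq/L
print(irrigation_indices(na=8.7, ca=2.5, mg=1.6, k=0.3, hco3=3.9))
```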
New method for detection of gastric cancer by hyperspectral imaging: a pilot study
NASA Astrophysics Data System (ADS)
Kiyotoki, Shu; Nishikawa, Jun; Okamoto, Takeshi; Hamabe, Kouichi; Saito, Mari; Goto, Atsushi; Fujita, Yusuke; Hamamoto, Yoshihiko; Takeuchi, Yusuke; Satori, Shin; Sakaida, Isao
2013-02-01
We developed a new, easy, and objective method to detect gastric cancer using hyperspectral imaging (HSI) technology, which combines spectroscopy and imaging. A total of 16 gastroduodenal tumors removed by endoscopic resection or surgery from 14 patients at Yamaguchi University Hospital, Japan, were recorded using a hyperspectral camera (HSC) equipped with HSI technology. Corrected spectral reflectance was obtained from 10 samples of normal mucosa and 10 samples of tumor for each case. The 16 cases were divided into eight training cases (160 training samples) and eight test cases (160 test samples). We established a diagnostic algorithm with the training samples and evaluated it with the test samples. The diagnostic capability of the algorithm for each tumor was validated, and the enhancement of tumors by image processing using the HSC was evaluated. The diagnostic algorithm used the 726-nm wavelength, with a cutoff point established from the training samples. The sensitivity, specificity, and accuracy of the algorithm on the test samples were 78.8% (63/80), 92.5% (74/80), and 85.6% (137/160), respectively. Tumors in the HSC images of 13 cases (81.3%) were well enhanced by image processing. The differences in spectral reflectance between tumors and normal mucosa suggest that tumors can be clearly distinguished from background mucosa with HSI technology.
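The classifier itself is a one-dimensional threshold on corrected 726-nm reflectance. A toy sketch with simulated reflectances follows; the midpoint cutoff rule, the direction of the threshold, and all numbers are hypothetical (the paper derives its cutoff from the training cases).

```python
import numpy as np

def train_cutoff(reflect_normal, reflect_tumor):
    """Pick a 726-nm reflectance cutoff separating the two training groups
    (midpoint rule; a hypothetical simplification)."""
    return 0.5 * (np.mean(reflect_normal) + np.mean(reflect_tumor))

def classify(reflect_726, cutoff, tumor_below=True):
    """Label a sample as tumor from its corrected 726-nm reflectance."""
    return (reflect_726 < cutoff) if tumor_below else (reflect_726 > cutoff)

rng = np.random.default_rng(4)
normal = rng.normal(0.55, 0.05, 80)        # hypothetical training reflectances
tumor = rng.normal(0.40, 0.06, 80)
c = train_cutoff(normal, tumor)
test_tumor = rng.normal(0.40, 0.06, 80)    # hypothetical test tumors
print(f"cutoff = {c:.3f}, toy sensitivity = {classify(test_tumor, c).mean():.2%}")
```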
Edge-oriented dual-dictionary guided enrichment (EDGE) for MRI-CT image reconstruction.
Li, Liang; Wang, Bigong; Wang, Ge
2016-01-01
In this paper, we formulate the joint/simultaneous X-ray CT and MRI image reconstruction problem. In particular, a novel algorithm is proposed for MRI image reconstruction from highly under-sampled MRI data and CT images. It consists of two steps. First, a training dataset is generated from a series of well-registered MRI and CT images of the same patients; an initial MRI image of a new patient can then be reconstructed via edge-oriented dual-dictionary guided enrichment (EDGE), based on the training dataset and a CT image of the patient. Second, the MRI image is reconstructed using the dictionary learning (DL) algorithm from highly under-sampled k-space data and the initial MRI image. Our algorithm establishes a one-to-one correspondence between the two imaging modalities and obtains a good initial MRI estimate. Both noise-free and noisy simulation studies were performed to evaluate and validate the proposed algorithm. The results at different under-sampling factors show that the proposed algorithm performs significantly better than reconstruction with the DL algorithm from MRI data alone.
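The second step is in the spirit of sparsity-regularized reconstruction, alternating a sparse-coding step with k-space data consistency. The sketch below is only a structural illustration: it substitutes a global transform-domain soft-threshold for patch-wise dictionary coding, omits the CT-guided EDGE initialization, and uses a toy phantom, sampling mask, and threshold.

```python
import numpy as np

def data_consistency(x, kspace_meas, mask):
    """Replace the sampled k-space entries of x with the measured ones."""
    k = np.fft.fft2(x)
    k[mask] = kspace_meas[mask]
    return np.real(np.fft.ifft2(k))

def soft_threshold_transform(x, lam):
    """Crude sparsity prior: soft-threshold transform-domain magnitudes
    (an FFT stand-in for the patch-wise dictionary coding of DL methods)."""
    c = np.fft.fft2(x)
    mag = np.abs(c)
    c *= np.maximum(1 - lam / np.maximum(mag, 1e-12), 0)
    return np.real(np.fft.ifft2(c))

rng = np.random.default_rng(5)
truth = np.zeros((64, 64)); truth[20:44, 24:40] = 1.0     # toy phantom
mask = rng.random((64, 64)) < 0.3                         # 30% random sampling
kmeas = np.fft.fft2(truth) * mask

x = data_consistency(np.zeros((64, 64)), kmeas, mask)     # (CT-guided init in EDGE)
for _ in range(50):
    x = soft_threshold_transform(x, lam=5.0)
    x = data_consistency(x, kmeas, mask)
print("relative error:", np.linalg.norm(x - truth) / np.linalg.norm(truth))
```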
Hedgehogs and foxes (and a bear)
NASA Astrophysics Data System (ADS)
Gibb, Bruce
2017-02-01
The chemical universe is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. Bruce Gibb reminds us that it's somewhat messy too, and so we succeed by recognizing the limits of our knowledge.
NASA Astrophysics Data System (ADS)
Gibb, Bruce C.
2015-11-01
Carl Wilhelm Scheele had a hand in the discovery of at least six elements and contributed to the early development of chemistry in numerous other ways. Bruce Gibb looks into Scheele's story and considers why he doesn't get the credit that he deserves.