Sample records for Dirichlet process mixture

  1. On selecting a prior for the precision parameter of Dirichlet process mixture models

    USGS Publications Warehouse

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
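
    Background for why the choice matters: under a DP(α, G0) prior, the expected number of clusters among n observations is E[K_n] = sum over i = 0..n-1 of α/(α+i), so different values of α encode very different prior beliefs about clustering. The sketch below (plain NumPy/SciPy, not the method developed in the paper) simply inverts this relationship to find the α implied by a prior guess at the number of clusters; the target of 5 clusters and n = 100 are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import brentq

def expected_clusters(alpha, n):
    """E[K_n] under a DP(alpha) prior: sum_{i=0}^{n-1} alpha / (alpha + i)."""
    i = np.arange(n)
    return np.sum(alpha / (alpha + i))

def alpha_for_expected_clusters(k_target, n):
    """Solve E[K_n] = k_target for alpha by root finding (requires 1 < k_target < n)."""
    return brentq(lambda a: expected_clusters(a, n) - k_target, 1e-6, 1e6)

# Illustrative values: n = 100 observations, prior guess of about 5 clusters.
alpha = alpha_for_expected_clusters(5.0, 100)
print(f"alpha ≈ {alpha:.3f}, implied E[K_100] = {expected_clusters(alpha, 100):.2f}")
```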

  2. Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.

    2018-06-01

    We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet Process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ~10^4-10^5 Mpc^3, corresponding to ~10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ϱ_net^-6. Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet Process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
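
    A minimal sketch of the general technique, not the authors' pipeline: scikit-learn's truncated variational approximation to a Dirichlet process Gaussian mixture can turn a cloud of posterior position samples into a smooth three-dimensional density. The sample array X, the truncation level of 20 components, and the concentration value are placeholders.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# X stands in for posterior samples of (RA, dec, distance) from a
# parameter-estimation run, shape (n_samples, 3); here it is random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))

# Truncated variational approximation to a DP Gaussian mixture: unused
# components are driven to near-zero weight, so n_components is only an upper bound.
dpgmm = BayesianGaussianMixture(
    n_components=20,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(X)

# log p(x) at the sample locations; integrating exp of this over a grid
# would give credible volumes.
log_density = dpgmm.score_samples(X)
effective_k = np.sum(dpgmm.weights_ > 1e-2)
print(f"components with non-negligible weight: {effective_k}")
```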

  3. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    PubMed

    Yu, Kezi; Quirk, J Gerald; Djurić, Petar M

    2017-01-01

    In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.

  4. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models

    PubMed Central

    Yu, Kezi; Quirk, J. Gerald

    2017-01-01

    In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting. PMID:28953927
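
    The classify-by-likelihood idea in these abstracts can be illustrated with ordinary Gaussian mixtures standing in for the HDP-based class models: fit one mixture per class and apply Bayes' rule to a new recording's features. Everything below (features, component counts, class prior) is a placeholder, not the authors' model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in feature vectors extracted from FHR recordings (rows = recordings);
# in the paper the class-conditional models are HDP mixtures, here ordinary
# Gaussian mixtures illustrate the same classify-by-likelihood idea.
rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(200, 4))
adverse = rng.normal(1.0, 1.5, size=(60, 4))

m_healthy = GaussianMixture(n_components=3, random_state=0).fit(healthy)
m_adverse = GaussianMixture(n_components=3, random_state=0).fit(adverse)

def prob_healthy(x, prior_healthy=0.5):
    """Posterior probability of the healthy class for one feature vector."""
    x = np.atleast_2d(x)
    log_h = m_healthy.score_samples(x)[0] + np.log(prior_healthy)
    log_a = m_adverse.score_samples(x)[0] + np.log(1.0 - prior_healthy)
    return np.exp(log_h - np.logaddexp(log_h, log_a))

print(prob_healthy(rng.normal(0.0, 1.0, size=4)))
```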

  5. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
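
    A finite-dimensional check of the gamma-Poisson marginalization the abstract builds on: a Poisson count whose rate is gamma distributed is marginally negative binomial. The parameter values below are arbitrary; only the distributional identity is the point.

```python
import numpy as np

# If lambda ~ Gamma(shape=r, scale=p/(1-p)) and x | lambda ~ Poisson(lambda),
# then marginally x ~ NB(r, p) with mean r*p/(1-p) and variance r*p/(1-p)^2.
rng = np.random.default_rng(0)
r, p, n = 3.0, 0.4, 200_000

lam = rng.gamma(shape=r, scale=p / (1 - p), size=n)
x_mixture = rng.poisson(lam)
# NumPy parameterizes negative_binomial by the success probability, which is 1-p here.
x_nb = rng.negative_binomial(r, 1 - p, size=n)

print(x_mixture.mean(), x_nb.mean())   # both ≈ r * p / (1 - p)
print(x_mixture.var(),  x_nb.var())    # both ≈ r * p / (1 - p)**2
```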

  6. Generalized species sampling priors with latent Beta reinforcements

    PubMed Central

    Airoldi, Edoardo M.; Costa, Thiago; Bassetti, Federico; Leisen, Fabrizio; Guindani, Michele

    2014-01-01

    Many popular Bayesian nonparametric priors can be characterized in terms of exchangeable species sampling sequences. However, in some applications, exchangeability may not be appropriate. We introduce a novel and probabilistically coherent family of non-exchangeable species sampling sequences characterized by a tractable predictive probability function with weights driven by a sequence of independent Beta random variables. We compare their theoretical clustering properties with those of the Dirichlet Process and the two-parameter Poisson-Dirichlet process. The proposed construction provides a complete characterization of the joint process, unlike existing work. We then propose the use of such a process as a prior distribution in a hierarchical Bayes modeling framework, and we describe a Markov Chain Monte Carlo sampler for posterior inference. We evaluate the performance of the prior and the robustness of the resulting inference in a simulation study, providing a comparison with popular Dirichlet process mixtures and Hidden Markov Models. Finally, we develop an application to the detection of chromosomal aberrations in breast cancer by leveraging array CGH data. PMID:25870462
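
    For reference, the two exchangeable baselines mentioned above have simple sequential predictive rules that are easy to simulate; the sketch below draws partitions from the Dirichlet process (Chinese restaurant process) and the two-parameter Poisson-Dirichlet (Pitman-Yor) process. It does not implement the paper's non-exchangeable Beta-reinforced construction; the concentration and discount values are illustrative.

```python
import numpy as np

def sample_partition(n, theta, discount=0.0, rng=None):
    """Sequential species-sampling draw: discount=0 gives the Dirichlet process
    (Chinese restaurant process); 0 < discount < 1 gives the two-parameter
    Poisson-Dirichlet (Pitman-Yor) process."""
    rng = rng or np.random.default_rng()
    counts = []                                   # current cluster (species) sizes
    for i in range(n):
        k = len(counts)
        probs = np.array([c - discount for c in counts] + [theta + discount * k])
        probs /= theta + i                        # probabilities sum to one
        j = rng.choice(k + 1, p=probs)
        if j == k:
            counts.append(1)                      # a new species is observed
        else:
            counts[j] += 1
    return counts

rng = np.random.default_rng(0)
print(len(sample_partition(1000, theta=1.0, rng=rng)))                 # DP: ~log(n) clusters
print(len(sample_partition(1000, theta=1.0, discount=0.5, rng=rng)))   # PY: ~n^discount clusters
```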

  7. Scalable Topic Modeling: Online Learning, Diagnostics, and Recommendation

    DTIC Science & Technology

    2017-03-01

    Chinese restaurant processes. Journal of Machine Learning Research, 12:2461–2488, 2011. 15. L. Hannah, D. Blei and W. Powell. Dirichlet process mixtures of...34. S. Ghosh, A. Ungureanu, E. Sudderth, and D. Blei. A Spatial distance dependent Chinese restaurant process for image segmentation. In Neural

  8. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models

    PubMed Central

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated examples of moderate dimension, in which we compare our stochastic search with a Markov chain Monte Carlo algorithm. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348

  9. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models.

    PubMed

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated examples of moderate dimension, in which we compare our stochastic search with a Markov chain Monte Carlo algorithm. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful.

  10. Memoized Online Variational Inference for Dirichlet Process Mixture Models

    DTIC Science & Technology

    2014-06-27

    breaking process [7], which places artificially large mass on the final component. It is more efficient and broadly applicable than an alternative truncation...models. In Uncertainty in Artificial Intelligence, 2008. [13] N. Le Roux, M. Schmidt, and F. Bach. A stochastic gradient method with an exponential

  11. Semiparametric Bayesian classification with longitudinal markers

    PubMed Central

    De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter

    2013-01-01

    Summary We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871

  12. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    PubMed

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.

  13. DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.

    PubMed

    Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei

    2018-01-01

    Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifier (UMI). Despite the technology advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. Particularly, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available on www.pitt.edu/∼wec47/singlecell.html. Contact: wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online.

  14. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.

  15. Meta-analysis using Dirichlet process.

    PubMed

    Muthukumarana, Saman; Tiwari, Ram C

    2016-02-01

    This article develops a Bayesian approach for meta-analysis using the Dirichlet process. The key aspect of the Dirichlet process in meta-analysis is the ability to assess evidence of statistical heterogeneity or variation in the underlying effects across studies while relaxing the distributional assumptions. We assume that the study effects are generated from a Dirichlet process. Under a Dirichlet process model, the study effects parameters have support on a discrete space and enable borrowing of information across studies while facilitating clustering among studies. We illustrate the proposed method by applying it to a dataset on the Program for International Student Assessment on 30 countries. Results from the data analysis, simulation studies, and the log pseudo-marginal likelihood model selection procedure indicate that the Dirichlet process model performs better than conventional alternative methods.
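
    The modelling assumption "study effects are generated from a Dirichlet process" can be visualized with a truncated stick-breaking draw: the random measure is discrete, so several studies share identical effect values, which is the clustering behaviour described above. The base measure, concentration, truncation level and number of studies below are placeholders, not the paper's specification.

```python
import numpy as np

def stick_breaking_dp(alpha, base_sampler, truncation, rng):
    """Truncated stick-breaking draw G ~ DP(alpha, G0): atoms from G0,
    weights from Beta(1, alpha) stick lengths."""
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining
    weights /= weights.sum()          # absorb the truncation remainder
    atoms = base_sampler(truncation)
    return atoms, weights

rng = np.random.default_rng(0)
# Base measure G0 = Normal(0, 1) for standardized study effects (placeholder).
atoms, weights = stick_breaking_dp(alpha=2.0,
                                   base_sampler=lambda m: rng.normal(0, 1, m),
                                   truncation=50, rng=rng)

# Effects for 30 studies drawn from the discrete random measure G: repeated
# values correspond to studies clustered on a common underlying effect.
study_effects = rng.choice(atoms, size=30, p=weights)
print(np.unique(np.round(study_effects, 6)).size, "distinct effect values among 30 studies")
```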

  16. A stochastic diffusion process for Lochner's generalized Dirichlet distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-10-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner’s generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations, equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.

  17. Multimodal brain-tumor segmentation based on Dirichlet process mixture model with anisotropic diffusion and Markov random field prior.

    PubMed

    Lu, Yisu; Jiang, Jun; Yang, Wei; Feng, Qianjin; Chen, Wufan

    2014-01-01

    Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single-modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images using the magnetic resonance (MR) multimodal features, obtaining the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance, showing great potential for practical real-time clinical use.

  18. Multimodal Brain-Tumor Segmentation Based on Dirichlet Process Mixture Model with Anisotropic Diffusion and Markov Random Field Prior

    PubMed Central

    Lu, Yisu; Jiang, Jun; Chen, Wufan

    2014-01-01

    Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single-modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images using the magnetic resonance (MR) multimodal features, obtaining the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance, showing great potential for practical real-time clinical use. PMID:25254064

  19. Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.

    PubMed

    Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry

    2016-09-01

    Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and it is compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software. Contact: konstantinos.koutroumpas@ecp.fr.
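
    A hedged sketch of the kernel idea (not the authors' code): fit a truncated DP Gaussian mixture to the current weighted particle population and draw the next round of ABC-SMC proposals from it. The particle array, weights, truncation level and proposal count are placeholders, and the particle weights are handled by simple resampling before the fit.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def dpm_kernel(particles, weights, n_proposals, rng):
    """Fit a truncated DP Gaussian mixture to the current ABC-SMC population
    (after weight-based resampling) and draw proposals from it."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    dpm = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        random_state=0,
    ).fit(particles[idx])
    proposals, _ = dpm.sample(n_proposals)
    return proposals, dpm

rng = np.random.default_rng(0)
# Placeholder population: 500 particles over 4 model parameters, equal weights.
particles = rng.normal(size=(500, 4))
weights = np.full(500, 1 / 500)
proposals, dpm = dpm_kernel(particles, weights, n_proposals=500, rng=rng)
print(proposals.shape)
```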

  20. A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times

    PubMed Central

    Heath, Tracy A.

    2012-01-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343

  1. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    PubMed Central

    Chen, Yun; Yang, Hui

    2016-01-01

    In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581

  2. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures.

    PubMed

    Chen, Yun; Yang, Hui

    2016-12-14

    In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.
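
    The measure driving the clustering in these two records is pairwise mutual information; a plug-in histogram estimate is enough to see how it captures nonlinear dependence that linear correlation would miss. The sketch below only builds the mutual-information matrix on synthetic variables (x2 is a noisy nonlinear function of x1); the DP clustering over these measures and the group elastic-net step are not reproduced.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of I(X; Y) in nats."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = np.sin(x1) + 0.1 * rng.normal(size=n)      # nonlinearly dependent on x1
x3 = rng.normal(size=n)                          # independent of the others
data = np.column_stack([x1, x2, x3])

p = data.shape[1]
mi = np.array([[mutual_information(data[:, i], data[:, j]) for j in range(p)]
               for i in range(p)])
print(np.round(mi, 2))   # off-diagonal (x1, x2) entry is large; (x1, x3) near zero
```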

  3. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-03-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.

  4. Feature extraction for document text using Latent Dirichlet Allocation

    NASA Astrophysics Data System (ADS)

    Prihatini, P. M.; Suryawan, I. K.; Mandia, IN

    2018-01-01

    Feature extraction is one of the stages in an information retrieval system; it is used to extract the distinctive feature values of a text document. The process of feature extraction can be done by several methods, one of which is Latent Dirichlet Allocation. However, research on text feature extraction using the Latent Dirichlet Allocation method is rarely found for Indonesian text. Therefore, this research implements text feature extraction for Indonesian text. The research method consists of data acquisition, text pre-processing, initialization, topic sampling and evaluation. The evaluation is done by comparing Precision, Recall and F-Measure values between Latent Dirichlet Allocation and Term Frequency-Inverse Document Frequency (TF-IDF) K-Means, which is commonly used for feature extraction. The evaluation results show that the Precision, Recall and F-Measure values of the Latent Dirichlet Allocation method are higher than those of the TF-IDF K-Means method. This shows that the Latent Dirichlet Allocation method is able to extract features and cluster Indonesian text better than the TF-IDF K-Means method.
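
    A rough sketch of the comparison described, using scikit-learn stand-ins: LDA topic proportions as document features versus TF-IDF vectors clustered with K-Means. The tiny document list is a placeholder, and the paper's Indonesian pre-processing, topic sampling and Precision/Recall/F-Measure evaluation are not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

docs = [
    "harga pasar saham naik tajam",       # placeholder documents; the paper
    "tim sepak bola menang besar",        # works with a real Indonesian corpus
    "saham dan obligasi turun",
    "pertandingan bola berakhir imbang",
]

# Route 1: LDA topic proportions as document features.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)             # rows: per-document topic mixtures
lda_clusters = theta.argmax(axis=1)

# Route 2: TF-IDF vectors clustered with K-Means (the baseline in the paper).
tfidf = TfidfVectorizer().fit_transform(docs)
km_clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)

print(lda_clusters, km_clusters)
```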

  5. A New Family of Solvable Pearson-Dirichlet Random Walks

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2011-07-01

    An n-step Pearson-Gamma random walk in ℝ^d starts at the origin and consists of n independent steps with gamma-distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q > 0. Constrained random walks of n steps in ℝ^d are obtained from the latter walks by imposing that the sum of the step lengths is equal to a fixed value. Simple closed-form expressions were obtained in particular for the distribution of the endpoint of such constrained walks for any d ≥ d_0 and any n ≥ 2 when q is either q = d/2 - 1 (d_0 = 3) or q = d - 1 (d_0 = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, then the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q, and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n ≥ 2), with q = d = 2, was shown recently to be a weighted mixture of 1 + floor(n/2) endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). The previous result is generalized to any walk space dimension and any number of steps n ≥ 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, which both have the same n and the same q = d, to obtain a closed-form expression of the endpoint density. The latter is a weighted mixture of 1 + floor(n/2) densities with simple forms, equivalently expressed as a product of a power and a Gauss hypergeometric function. The weights are products of factors which depend both on d and n and Bessel numbers independent of d.
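
    The constrained walk described above is easy to simulate directly, which is useful as a Monte Carlo check of the closed-form endpoint density: the step lengths are jointly Dirichlet(q, ..., q) so they sum to one, and the directions are uniform on the unit sphere in R^d. The choice n = 4, d = 3, q = d below is just an illustrative instance of the q = d > 1 regime treated in the paper.

```python
import numpy as np

def pearson_dirichlet_endpoints(n_steps, d, q, n_walks, rng):
    """Endpoints of n-step Pearson-Dirichlet walks in R^d with total length 1:
    step lengths ~ Dirichlet(q, ..., q), directions uniform on the unit sphere."""
    lengths = rng.dirichlet([q] * n_steps, size=n_walks)         # (n_walks, n_steps)
    dirs = rng.normal(size=(n_walks, n_steps, d))
    dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)          # uniform unit directions
    return np.einsum("ws,wsd->wd", lengths, dirs)                # sum of length * direction

rng = np.random.default_rng(0)
# Illustrative instance of the q = d > 1 regime: d = 3, q = 3, n = 4 steps.
ends = pearson_dirichlet_endpoints(n_steps=4, d=3, q=3.0, n_walks=100_000, rng=rng)
r = np.linalg.norm(ends, axis=1)
print(r.mean(), np.quantile(r, [0.05, 0.95]))   # summary of the endpoint distance distribution
```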

  6. A Dirichlet-Multinomial Bayes Classifier for Disease Diagnosis with Microbial Compositions.

    PubMed

    Gao, Xiang; Lin, Huaiying; Dong, Qunfeng

    2017-01-01

    Dysbiosis of microbial communities is associated with various human diseases, raising the possibility of using microbial compositions as biomarkers for disease diagnosis. We have developed a Bayes classifier by modeling microbial compositions with Dirichlet-multinomial distributions, which are widely used to model multicategorical count data with extra variation. The parameters of the Dirichlet-multinomial distributions are estimated from training microbiome data sets based on maximum likelihood. The posterior probability of a microbiome sample belonging to a disease or healthy category is calculated based on Bayes' theorem, using the likelihood values computed from the estimated Dirichlet-multinomial distribution, as well as a prior probability estimated from the training microbiome data set or previously published information on disease prevalence. When tested on real-world microbiome data sets, our method, called DMBC (for Dirichlet-multinomial Bayes classifier), shows better classification accuracy than the only existing Bayesian microbiome classifier based on a Dirichlet-multinomial mixture model and the popular random forest method. The advantage of DMBC is its built-in automatic feature selection, capable of identifying a subset of microbial taxa with the best classification accuracy between different classes of samples based on cross-validation. This unique ability enables DMBC to maintain and even improve its accuracy at modeling species-level taxa. The R package for DMBC is freely available at https://github.com/qunfengdong/DMBC. IMPORTANCE By incorporating prior information on disease prevalence, Bayes classifiers have the potential to estimate disease probability better than other common machine-learning methods. Thus, it is important to develop Bayes classifiers specifically tailored for microbiome data. Our method shows higher classification accuracy than the only existing Bayesian classifier and the popular random forest method, and thus provides an alternative option for using microbial compositions for disease diagnosis.
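
    A minimal sketch of the classification rule described (Bayes' theorem with Dirichlet-multinomial class likelihoods), not the DMBC package itself: the Dirichlet parameters here come from a crude pseudocount-style fit rather than maximum likelihood, the taxon count tables are simulated placeholders, and the built-in feature selection is omitted.

```python
import numpy as np
from scipy.special import gammaln

def dirmult_loglik(x, alpha):
    """log P(x | alpha) for a Dirichlet-multinomial with parameter vector alpha."""
    n, a0 = x.sum(), alpha.sum()
    return (gammaln(a0) - gammaln(n + a0)
            + np.sum(gammaln(x + alpha) - gammaln(alpha))
            + gammaln(n + 1) - np.sum(gammaln(x + 1)))

def fit_alpha(counts, scale=50.0):
    """Crude moment-style fit: mean taxon proportions scaled to a fixed precision.
    (DMBC uses maximum likelihood; this is only a placeholder.)"""
    props = counts / counts.sum(axis=1, keepdims=True)
    return scale * (props.mean(axis=0) + 1e-6)

rng = np.random.default_rng(0)
# Placeholder training data: taxon count tables for diseased and healthy samples.
diseased = rng.multinomial(2000, rng.dirichlet([2, 1, 1, 1, 5]), size=40)
healthy  = rng.multinomial(2000, rng.dirichlet([5, 1, 1, 1, 2]), size=40)

a_dis, a_hea = fit_alpha(diseased), fit_alpha(healthy)
prior_dis = 0.3                                  # e.g. an assumed disease prevalence

def posterior_diseased(x):
    log_d = dirmult_loglik(x, a_dis) + np.log(prior_dis)
    log_h = dirmult_loglik(x, a_hea) + np.log(1 - prior_dis)
    return np.exp(log_d - np.logaddexp(log_d, log_h))

print(posterior_diseased(rng.multinomial(2000, rng.dirichlet([2, 1, 1, 1, 5]))))
```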

  7. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  8. Posterior consistency in conditional distribution estimation

    PubMed Central

    Pati, Debdeep; Dunson, David B.; Tokdar, Surya T.

    2014-01-01

    A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior. PMID:25067858

  9. Stochastic search, optimization and regression with energy applications

    NASA Astrophysics Data System (ADS)

    Hannah, Lauren A.

    Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage. The one stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values---which depend on the selected portfolio---to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DPGLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel based weights and more generally that nonparametric estimation methods provide good solutions to otherwise intractable problems.

  10. Prior Design for Dependent Dirichlet Processes: An Application to Marathon Modeling

    PubMed Central

    F. Pradier, Melanie; J. R. Ruiz, Francisco; Perez-Cruz, Fernando

    2016-01-01

    This paper presents a novel application of Bayesian nonparametrics (BNP) for marathon data modeling. We make use of two well-known BNP priors, the single-p dependent Dirichlet process and the hierarchical Dirichlet process, in order to address two different problems. First, we study the impact of age, gender and environment on the runners’ performance. We derive a fair grading method that allows direct comparison of runners regardless of their age and gender. Unlike current grading systems, our approach is based not only on top world records, but on the performances of all runners. The presented methodology for comparison of densities can be adopted in many other applications straightforwardly, providing an interesting perspective to build dependent Dirichlet processes. Second, we analyze the running patterns of the marathoners in time, obtaining information that can be valuable for training purposes. We also show that these running patterns can be used to predict finishing time given intermediate interval measurements. We apply our models to New York City, Boston and London marathons. PMID:26821155

  11. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.

    PubMed

    Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.

  12. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm

    PubMed Central

    Baig, Fahd; Little, Max A.

    2016-01-01

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism. PMID:27669525
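
    MAP-DP itself performs hard assignments using collapsed likelihoods under conjugate priors; a much simpler relative that conveys the same "K-means plus a new-cluster option" behaviour is the DP-means algorithm of Kulis and Jordan, sketched below. This is explicitly not the authors' MAP-DP, and the penalty lam and the synthetic data are placeholders.

```python
import numpy as np

def dp_means(X, lam, n_iter=25):
    """DP-means: K-means-style hard assignment, except a point whose squared
    distance to every centroid exceeds lam opens a new cluster. A simplified
    relative of MAP-DP, not the MAP-DP algorithm itself."""
    centroids = [X.mean(axis=0)]
    assign = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for i, x in enumerate(X):
            d2 = [np.sum((x - c) ** 2) for c in centroids]
            if min(d2) > lam:
                centroids.append(x.copy())        # open a new cluster for this point
                assign[i] = len(centroids) - 1
            else:
                assign[i] = int(np.argmin(d2))
        # Update each centroid to the mean of its members (keep it if empty).
        centroids = [X[assign == k].mean(axis=0) if np.any(assign == k) else c
                     for k, c in enumerate(centroids)]
    return assign, centroids

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in ([0, 0], [3, 3], [0, 3])])
labels, centers = dp_means(X, lam=2.0)
print(len(np.unique(labels)), "clusters found")   # K is inferred from the data, not fixed
```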

  13. A Dirichlet process mixture model for automatic (18)F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions.

    PubMed

    Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco

    2016-05-01

    The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10-37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This "calibration curve" was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.

  14. A Dirichlet process mixture model for automatic (18)F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giri, Maria Grazia, E-mail: mariagrazia.giri@ospedaleuniverona.it; Cavedon, Carlo; Mazzarotto, Renzo

    Purpose: The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. Methods: The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10–37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Results: Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This “calibration curve” was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. Conclusions: The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.

  15. Nonparametric Bayesian inference for mean residual life functions in survival analysis.

    PubMed

    Poynor, Valerie; Kottas, Athanasios

    2018-01-19

    Modeling and inference for survival analysis problems typically revolves around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples.
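
    For orientation, the mean residual life function discussed here has a closed form in terms of the survival function, and under any mixture model for the survival distribution it inherits the mixture-of-kernel-MRLs form described above; a minimal statement (standard identities, not reproduced from the paper):

```latex
% MRL of a lifetime T with survival function S(t):
\[
  m(t) \;=\; \mathbb{E}\left[\,T - t \mid T > t\,\right]
        \;=\; \frac{\int_t^{\infty} S(u)\,du}{S(t)} .
\]
% If S(t) = \sum_k w_k S_k(t) is a mixture (e.g. a DP mixture of gamma kernels),
% the MRL is a mixture of the kernel MRLs m_k with time-dependent weights:
\[
  m(t) \;=\; \sum_k \frac{w_k\, S_k(t)}{\sum_j w_j\, S_j(t)} \; m_k(t) .
\]
```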

  16. Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation.

    PubMed

    Chen, Chao; Zare, Alina; Trinh, Huy N; Omotara, Gbenga O; Cobb, James Tory; Lagaunne, Timotius A

    2017-12-01

    Topic models [e.g., probabilistic latent semantic analysis, latent Dirichlet allocation (LDA), and supervised LDA] have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership LDA (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery, where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.

  17. A unified Bayesian semiparametric approach to assess discrimination ability in survival analysis

    PubMed Central

    Zhao, Lili; Feng, Dai; Chen, Guoan; Taylor, Jeremy M.G.

    2015-01-01

    Summary The discriminatory ability of a marker for censored survival data is routinely assessed by the time-dependent ROC curve and the c-index. The time-dependent ROC curve evaluates the ability of a biomarker to predict whether a patient lives past a particular time t. The c-index measures the global concordance of the marker and the survival time regardless of the time point. We propose a Bayesian semiparametric approach to estimate these two measures. The proposed estimators are based on the conditional distribution of the survival time given the biomarker and the empirical biomarker distribution. The conditional distribution is estimated by a linear dependent Dirichlet process mixture model. The resulting ROC curve is smooth as it is estimated by a mixture of parametric functions. The proposed c-index estimator is shown to be more efficient than the commonly used Harrell's c-index since it uses all pairs of data rather than only informative pairs. The proposed estimators are evaluated through simulations and illustrated using a lung cancer dataset. PMID:26676324
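
    As a point of reference for the comparison drawn above, here is a minimal sketch of the classical Harrell's c-index computed over informative pairs only (the estimator the authors improve upon); it assumes higher marker values indicate higher risk, and the function name and toy data are illustrative.

```python
import numpy as np

def harrell_c_index(time, event, marker):
    """Harrell's concordance index over informative pairs.

    time   : observed follow-up times
    event  : 1 if the event was observed, 0 if censored
    marker : biomarker values, assumed higher = higher risk (shorter survival)
    """
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = (i, j) if time[i] < time[j] else (j, i)  # a has the shorter observed time
            if time[a] == time[b] or event[a] == 0:
                continue                                    # tied or censored-first pairs are not usable
            usable += 1
            if marker[a] > marker[b]:
                concordant += 1.0
            elif marker[a] == marker[b]:
                concordant += 0.5
    return concordant / usable

# toy data with a perfectly concordant marker
print(harrell_c_index(np.array([2., 5., 8.]), np.array([1, 1, 0]), np.array([3.0, 2.0, 1.0])))  # -> 1.0
```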

  18. Bayesian hierarchical functional data analysis via contaminated informative priors.

    PubMed

    Scarpa, Bruno; Dunson, David B

    2009-09-01

    A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.

  19. Modeling unobserved sources of heterogeneity in animal abundance using a Dirichlet process prior

    USGS Publications Warehouse

    Dorazio, R.M.; Mukherjee, B.; Zhang, L.; Ghosh, M.; Jelks, H.L.; Jordan, F.

    2008-01-01

    In surveys of natural populations of animals, a sampling protocol is often spatially replicated to collect a representative sample of the population. In these surveys, differences in abundance of animals among sample locations may induce spatial heterogeneity in the counts associated with a particular sampling protocol. For some species, the sources of heterogeneity in abundance may be unknown or unmeasurable, leading one to specify the variation in abundance among sample locations stochastically. However, choosing a parametric model for the distribution of unmeasured heterogeneity is potentially subject to error and can have profound effects on predictions of abundance at unsampled locations. In this article, we develop an alternative approach wherein a Dirichlet process prior is assumed for the distribution of latent abundances. This approach allows for uncertainty in model specification and for natural clustering in the distribution of abundances in a data-adaptive way. We apply this approach in an analysis of counts based on removal samples of an endangered fish species, the Okaloosa darter. Results of our data analysis and simulation studies suggest that our implementation of the Dirichlet process prior has several attractive features not shared by conventional, fully parametric alternatives. ?? 2008, The International Biometric Society.
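
    For readers unfamiliar with the construction, a truncated stick-breaking draw from a Dirichlet process prior can be sketched as follows; the gamma base measure and the concentration value are illustrative placeholders, not the priors used in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking_dp(alpha, base_sampler, n_atoms=100):
    """Truncated stick-breaking draw from DP(alpha, G0): weights w_k = beta_k * prod_{j<k}(1 - beta_j)."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining
    atoms = base_sampler(n_atoms)
    return atoms, weights

# e.g. a hypothetical Gamma(2, 1) base measure over mean site abundances
atoms, weights = stick_breaking_dp(alpha=1.0,
                                   base_sampler=lambda n: rng.gamma(2.0, 1.0, size=n))
print(weights[:5], weights.sum())  # weights concentrate on a few atoms, inducing clustering
```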

  20. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, thereby achieving time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms.

  1. Research of the multimodal brain-tumor segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yisu; Chen, Wufan

    2015-12-01

    It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single-modal brain tumor images, we extended the algorithm to segment multimodal brain tumor images using the magnetic resonance (MR) multimodal features, obtaining the active tumor and edema at the same time. The proposed algorithm is evaluated and compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance.

  2. Bayesian Ensemble Trees (BET) for Clustering and Prediction in Heterogeneous Data

    PubMed Central

    Duan, Leo L.; Clancy, John P.; Szczesniak, Rhonda D.

    2016-01-01

    We propose a novel “tree-averaging” model that utilizes the ensemble of classification and regression trees (CART). Each constituent tree is estimated with a subset of similar data. We treat this grouping of subsets as Bayesian Ensemble Trees (BET) and model them as a Dirichlet process. We show that BET determines the optimal number of trees by adapting to the data heterogeneity. Compared with the other ensemble methods, BET requires far fewer trees and shows equivalent prediction accuracy using weighted averaging. Moreover, each tree in BET provides a variable selection criterion and interpretation for each subset. We developed an efficient estimating procedure with improved estimation strategies in both CART and mixture models. We demonstrate these advantages of BET with simulations and illustrate the approach with a real-world data example involving regression of lung function measurements obtained from patients with cystic fibrosis. Supplemental materials are available online. PMID:27524872

  3. Quantum "violation" of Dirichlet boundary condition

    NASA Astrophysics Data System (ADS)

    Park, I. Y.

    2017-02-01

    Dirichlet boundary conditions have been widely used in general relativity. They seem at odds with the holographic property of gravity simply because a boundary configuration can be varying and dynamic instead of dying out as required by the conditions. In this work we report what should be a tension between the Dirichlet boundary conditions and quantum gravitational effects, and show that a quantum-corrected black hole solution of the 1PI action no longer obeys, in the naive manner one may expect, the Dirichlet boundary conditions imposed at the classical level. We attribute the 'violation' of the Dirichlet boundary conditions to a certain mechanism of the information storage on the boundary.

  4. USING DIRICHLET TESSELLATION TO HELP ESTIMATE MICROBIAL BIOMASS CONCENTRATIONS

    EPA Science Inventory

    Dirichlet tessellation was applied to estimate microbial concentrations from microscope well slides. The use of microscopy/Dirichlet tessellation to quantify biomass was illustrated with two species of morphologically distinct cyanobacteria, and validated empirically by compariso...
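
    A Dirichlet tessellation is the same construction as a Voronoi diagram, so the per-cell areas used to turn counts into concentration estimates can be sketched with SciPy; the random points standing in for observed cell positions on a well slide are hypothetical.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(0)
points = rng.random((50, 2))          # hypothetical cell positions on a unit-square field of view

vor = Voronoi(points)                 # Dirichlet tessellation = Voronoi diagram
areas = np.full(len(points), np.nan)
for i, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if region and -1 not in region:   # skip unbounded cells at the edge of the field
        areas[i] = ConvexHull(vor.vertices[region]).volume  # in 2-D, .volume is the polygon area

# each bounded cell contributes one organism per `area` to a local density estimate
density = 1.0 / areas[np.isfinite(areas)]
print(density.mean())
```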

  5. Study on monostable and bistable reaction-diffusion equations by iteration of travelling wave maps

    NASA Astrophysics Data System (ADS)

    Yi, Taishan; Chen, Yuming

    2017-12-01

    In this paper, based on the iterative properties of travelling wave maps, we develop a new method to obtain spreading speeds and asymptotic propagation for monostable and bistable reaction-diffusion equations. Precisely, for Dirichlet problems of monostable reaction-diffusion equations on the half line, by making links between travelling wave maps and integral operators associated with the Dirichlet diffusion kernel (the latter is NOT invariant under translation), we obtain some iteration properties of the Dirichlet diffusion and some a priori estimates on nontrivial solutions of Dirichlet problems under travelling wave transformation. We then provide the asymptotic behavior of nontrivial solutions in the space-time region for Dirichlet problems. These enable us to develop a unified method to obtain results on heterogeneous steady states, travelling waves, spreading speeds, and asymptotic spreading behavior for Dirichlet problem of monostable reaction-diffusion equations on R+ as well as of monostable/bistable reaction-diffusion equations on R.

  6. Hierarchical Dirichlet process model for gene expression clustering

    PubMed Central

    2013-01-01

    Clustering is an important data processing tool for interpreting microarray data and genomic network inference. In this article, we propose a clustering algorithm based on the hierarchical Dirichlet process (HDP). The HDP clustering introduces a hierarchical structure in the statistical model which captures the hierarchical features prevalent in biological data such as gene expression data. We develop a Gibbs sampling algorithm based on the Chinese restaurant metaphor for the HDP clustering. We apply the proposed HDP algorithm to both regulatory network segmentation and gene expression clustering. The HDP algorithm is shown to outperform several popular clustering algorithms by revealing the underlying hierarchical structure of the data. For the yeast cell cycle data, we compare the HDP result to the standard result and show that the HDP algorithm provides more information and reduces the unnecessary clustering fragments. PMID:23587447
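
    The Chinese restaurant metaphor mentioned above can be illustrated with a short simulation; this is a generic CRP draw (the building block behind such Gibbs samplers), not the authors' HDP implementation, and the concentration value is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def chinese_restaurant_process(n_customers, alpha):
    """Simulate cluster (table) assignments under a CRP with concentration alpha."""
    tables = []        # tables[k] = number of customers seated at table k
    assignments = []
    for n in range(n_customers):
        # existing table k chosen with probability tables[k]/(n+alpha), new table with alpha/(n+alpha)
        probs = np.array(tables + [alpha], dtype=float) / (n + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)   # open a new table (new cluster)
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments

print(chinese_restaurant_process(20, alpha=1.0))
```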

  7. Condition Monitoring for Helicopter Data. Appendix A

    NASA Technical Reports Server (NTRS)

    Wen, Fang; Willett, Peter; Deb, Somnath

    2000-01-01

    In this paper the classical "Westland" set of empirical accelerometer helicopter data is analyzed with the aim of condition monitoring for diagnostic purposes. The goal is to determine features for failure events from these data, via a proprietary signal processing toolbox, and to weigh these according to a variety of classification algorithms. As regards signal processing, it appears that the autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in a relatively few measurements; it has also been found that augmentation of these by harmonic and other parameters can improve classification significantly. As regards classification, several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, and in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior on training data and is thus able to quantify probability of error in an exact manner, such that features may be discarded or coarsened appropriately.
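
    As a rough illustration of the AR-coefficient features described above, the sketch below fits an AR(p) model to a single channel by least squares; the synthetic signal, model order, and function name are illustrative and not part of the proprietary toolbox mentioned in the abstract.

```python
import numpy as np

def ar_coefficients(x, order):
    """Least-squares AR(order) fit: x[t] ~ sum_k a[k] * x[t-k]; returns the coefficients a[1..order]."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[order - k: len(x) - k] for k in range(1, order + 1)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# toy accelerometer-like signal: an oscillation plus noise
t = np.arange(2000)
signal = np.sin(0.2 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
features = ar_coefficients(signal, order=8)   # an 8-dimensional feature vector for a classifier
print(features)
```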

  8. Fungi diversity from different depths and times in chicken manure waste static aerobic composting.

    PubMed

    Gu, Wenjie; Lu, Yusheng; Tan, Zhiyuan; Xu, Peizhi; Xie, Kaizhi; Li, Xia; Sun, Lili

    2017-09-01

    The Dirichlet multinomial mixtures model was used to analyse Illumina sequencing data to reveal both temporal and spatial variation of the fungal community present in the aerobic compost. Results showed that 670 operational taxonomic units (OTUs) were detected, and the dominant phylum was Ascomycota. The sample fungal communities fell into four types during the composting process. Samples from the early composting stage were mainly grouped into type I, in which Saccharomycetales sp. was dominant. Communities in the middle composting stage fell into types II and III, in which Sordariales sp. and Acremonium alcalophilum, and Saccharomycetales sp. and Scedosporium minutisporum, were the dominant OTUs, respectively. Samples from the late composting stage were mainly grouped into type IV, in which Scedosporium minutisporum was the dominant OTU; Scedosporium minutisporum was significantly affected by depth (P<0.05). The results indicate that both time and depth influence fungal distribution and variation in chicken manure waste during static aerobic composting.

  9. Spectral multigrid methods for elliptic equations 2

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Wong, Y. S.; Hussaini, M. Y.

    1983-01-01

    A detailed description of spectral multigrid methods is provided. This includes the interpolation and coarse-grid operators for both periodic and Dirichlet problems. The spectral methods for periodic problems use Fourier series and those for Dirichlet problems are based upon Chebyshev polynomials. An improved preconditioning for Dirichlet problems is given. Numerical examples and practical advice are included.

  10. Quantum Gravitational Effects on the Boundary

    NASA Astrophysics Data System (ADS)

    James, F.; Park, I. Y.

    2018-04-01

    Quantum gravitational effects might hold the key to some of the outstanding problems in theoretical physics. We analyze the perturbative quantum effects on the boundary of a gravitational system and the Dirichlet boundary condition imposed at the classical level. Our analysis reveals that for a black hole solution, there is a contradiction between the quantum effects and the Dirichlet boundary condition: the black hole solution of the one-particle-irreducible action no longer satisfies, in the naive form one would expect, the Dirichlet boundary condition. The analysis also suggests that the tension between the Dirichlet boundary condition and loop effects is connected with a certain mechanism of information storage on the boundary.

  11. A Semiparametric Approach to Simultaneous Covariance Estimation for Bivariate Sparse Longitudinal Data

    PubMed Central

    Das, Kiranmoy; Daniels, Michael J.

    2014-01-01

    Summary Estimation of the covariance structure for irregular sparse longitudinal data has been studied by many authors in recent years but typically using fully parametric specifications. In addition, when data are collected from several groups over time, it is known that assuming the same or completely different covariance matrices over groups can lead to loss of efficiency and/or bias. Nonparametric approaches have been proposed for estimating the covariance matrix for regular univariate longitudinal data by sharing information across the groups under study. For the irregular case, with longitudinal measurements that are bivariate or multivariate, modeling becomes more difficult. In this article, to model bivariate sparse longitudinal data from several groups, we propose a flexible covariance structure via a novel matrix stick-breaking process for the residual covariance structure and a Dirichlet process mixture of normals for the random effects. Simulation studies are performed to investigate the effectiveness of the proposed approach over more traditional approaches. We also analyze a subset of Framingham Heart Study data to examine how the blood pressure trajectories and covariance structures differ for the patients from different BMI groups (high, medium and low) at baseline. PMID:24400941

  12. Wireless Wearable Multisensory Suite and Real-Time Prediction of Obstructive Sleep Apnea Episodes.

    PubMed

    Le, Trung Q; Cheng, Changqing; Sangasoongsong, Akkarapol; Wongdhamma, Woranat; Bukkapatnam, Satish T S

    2013-01-01

    Obstructive sleep apnea (OSA) is a common sleep disorder found in 24% of adult men and 9% of adult women. Although continuous positive airway pressure (CPAP) has emerged as a standard therapy for OSA, a majority of patients are not tolerant of this treatment, largely because of the uncomfortable nasal air delivery during their sleep. Recent advances in wireless communication and advanced ("big data") predictive analytics technologies offer radically new point-of-care treatment approaches for OSA episodes with unprecedented comfort and affordability. We introduce a Dirichlet process-based mixture Gaussian process (DPMG) model to predict the onset of sleep apnea episodes based on analyzing complex cardiorespiratory signals gathered from a custom-designed wireless wearable multisensory suite. Extensive testing with signals from the multisensory suite as well as PhysioNet's OSA database suggests that the accuracy of offline OSA classification is 88%, and accuracy for predicting an OSA episode 1-min ahead is 83% and 3-min ahead is 77%. Such accurate prediction of an impending OSA episode can be used to adaptively adjust CPAP airflow (toward improving the patient's adherence) or the torso posture (e.g., minor chin adjustments to maintain steady levels of the airflow).

  13. Stability estimate for the aligned magnetic field in a periodic quantum waveguide from Dirichlet-to-Neumann map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mejri, Youssef, E-mail: josef-bizert@hotmail.fr; Dép. des Mathématiques, Faculté des Sciences de Bizerte, 7021 Jarzouna; Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT BP 37, Le Belvedere, 1002 Tunis

    In this article, we study the boundary inverse problem of determining the aligned magnetic field appearing in the magnetic Schrödinger equation in a periodic quantum cylindrical waveguide, by knowledge of the Dirichlet-to-Neumann map. We prove a Hölder stability estimate with respect to the Dirichlet-to-Neumann map, by means of the geometrical optics solutions of the magnetic Schrödinger equation.

  14. Constructing Weyl group multiple Dirichlet series

    NASA Astrophysics Data System (ADS)

    Chinta, Gautam; Gunnells, Paul E.

    2010-01-01

    Let Φ be a reduced root system of rank r. A Weyl group multiple Dirichlet series for Φ is a Dirichlet series in r complex variables s_1, ..., s_r, initially converging for Re(s_i) sufficiently large, that has meromorphic continuation to C^r and satisfies functional equations under the transformations of C^r corresponding to the Weyl group of Φ. A heuristic definition of such a series was given by Brubaker, Bump, Chinta, Friedberg, and Hoffstein, and they have been investigated in certain special cases by others. In this paper we generalize results by Chinta and Gunnells to construct Weyl group multiple Dirichlet series by a uniform method and show in all cases that they have the expected properties.

  15. Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers

    ERIC Educational Resources Information Center

    Anaya, Leticia H.

    2011-01-01

    In the Information Age, a proliferation of unstructured text electronic documents exists. Processing these documents by humans is a daunting task as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, text data computer algorithms are being developed.…

  16. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

    Summary Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  17. Evolutionary dynamics of language systems

    PubMed Central

    Wu, Chieh-Hsi; Hua, Xia; Dunn, Michael; Levinson, Stephen C.; Gray, Russell D.

    2017-01-01

    Understanding how and why language subsystems differ in their evolutionary dynamics is a fundamental question for historical and comparative linguistics. One key dynamic is the rate of language change. While it is commonly thought that the rapid rate of change hampers the reconstruction of deep language relationships beyond 6,000–10,000 y, there are suggestions that grammatical structures might retain more signal over time than other subsystems, such as basic vocabulary. In this study, we use a Dirichlet process mixture model to infer the rates of change in lexical and grammatical data from 81 Austronesian languages. We show that, on average, most grammatical features actually change faster than items of basic vocabulary. The grammatical data show less schismogenesis, higher rates of homoplasy, and more bursts of contact-induced change than the basic vocabulary data. However, there is a core of grammatical and lexical features that are highly stable. These findings suggest that different subsystems of language have differing dynamics and that careful, nuanced models of language change will be needed to extract deeper signal from the noise of parallel evolution, areal readaptation, and contact. PMID:29073028

  18. Leveraging constraints and biotelemetry data to pinpoint repetitively used spatial features

    USGS Publications Warehouse

    Brost, Brian M.; Hooten, Mevin B.; Small, Robert J.

    2016-01-01

    Satellite telemetry devices collect valuable information concerning the sites visited by animals, including the location of central places like dens, nests, rookeries, or haul‐outs. Existing methods for estimating the location of central places from telemetry data require user‐specified thresholds and ignore common nuances like measurement error. We present a fully model‐based approach for locating central places from telemetry data that accounts for multiple sources of uncertainty and uses all of the available locational data. Our general framework consists of an observation model to account for large telemetry measurement error and animal movement, and a highly flexible mixture model specified using a Dirichlet process to identify the location of central places. We also quantify temporal patterns in central place use by incorporating ancillary behavioral data into the model; however, our framework is also suitable when no such behavioral data exist. We apply the model to a simulated data set as proof of concept. We then illustrate our framework by analyzing an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that exhibits fidelity to terrestrial haul‐out sites.

  19. A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION

    EPA Science Inventory

    We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...

  20. Bounded solutions in a T-shaped waveguide and the spectral properties of the Dirichlet ladder

    NASA Astrophysics Data System (ADS)

    Nazarov, S. A.

    2014-08-01

    The Dirichlet problem is considered on the junction of thin quantum waveguides (of thickness h ≪ 1) in the shape of an infinite two-dimensional ladder. Passage to the limit as h → +0 is discussed. It is shown that the asymptotically correct transmission conditions at nodes of the corresponding one-dimensional quantum graph are Dirichlet conditions rather than the conventional Kirchhoff transmission conditions. The result is obtained by analyzing bounded solutions of a problem in the T-shaped waveguide that describes the boundary layer phenomenon.

  1. General stability of memory-type thermoelastic Timoshenko beam acting on shear force

    NASA Astrophysics Data System (ADS)

    Apalara, Tijani A.

    2018-03-01

    In this paper, we consider a linear thermoelastic Timoshenko system with memory effects where the thermoelastic coupling is acting on shear force under Neumann-Dirichlet-Dirichlet boundary conditions. The same system with fully Dirichlet boundary conditions was considered by Messaoudi and Fareh (Nonlinear Anal TMA 74(18):6895-6906, 2011, Acta Math Sci 33(1):23-40, 2013), but they obtained a general stability result which depends on the speeds of wave propagation. In our case, we obtained a general stability result irrespective of the wave speeds of the system.

  2. Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.

    PubMed

    Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo

    Unsupervised object discovery and localization aims to discover dominant object classes and localize all object instances in a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge for the given image collection is exploited to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue: some inferred topics do not have clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in terms of the so-called must-links is exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined so that one must-link only constrains one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes, so the must-links in our approach are semantic-specific, which allows discriminative prior knowledge from Web images to be exploited more efficiently. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.

  3. Neighbor-Dependent Ramachandran Probability Distributions of Amino Acids Developed from a Hierarchical Dirichlet Process Model

    PubMed Central

    Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.

    2010-01-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867

  4. Diffusion Processes Satisfying a Conservation Law Constraint

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2014-03-04

    We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.

  5. Diffusion Processes Satisfying a Conservation Law Constraint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakosi, J.; Ristorcelli, J. R.

    We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.

  6. Application of the perfectly matched layer in 3-D marine controlled-source electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Li, Gang; Li, Yuguo; Han, Bo; Liu, Zhan

    2018-01-01

    In this study, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 3-D frequency-domain marine controlled-source electromagnetic (CSEM) field modelling. The Dirichlet boundary, which is usually used within the traditional framework of EM modelling algorithms, assumes that the electric or magnetic field values are zero at the boundaries. This requires the boundaries to be sufficiently far away from the area of interest. To mitigate the boundary artefacts, a large modelling area may be necessary even though cell sizes are allowed to grow toward the boundaries due to the diffusion of the electromagnetic wave propagation. Compared with the conventional Dirichlet boundary, the PML boundary is preferred as the modelling area of interest can be restricted to the target region, and only a few surrounding absorbing layers can effectively suppress the artificial boundary effect without loss of numerical accuracy. Furthermore, for joint inversion of seismic and marine CSEM data, if we use the PML for CSEM field simulation instead of the conventional Dirichlet, the modelling area for these two different geophysical data sets collected from the same survey area could be the same, which is convenient for joint inversion grid matching. We apply the CFS-PML boundary to 3-D marine CSEM modelling by using the staggered finite-difference discretization. Numerical tests indicate that the modelling algorithm using the CFS-PML shows good accuracy compared to the Dirichlet. Furthermore, the modelling algorithm using the CFS-PML shows advantages in computational time and memory over that using the Dirichlet boundary. For the 3-D example in this study, the memory saving using the PML is nearly 42 per cent and the time saving is around 48 per cent compared to using the Dirichlet.

  7. A classical Perron method for existence of smooth solutions to boundary value and obstacle problems for degenerate-elliptic operators via holomorphic maps

    NASA Astrophysics Data System (ADS)

    Feehan, Paul M. N.

    2017-09-01

    We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate" boundaries touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16]. Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].

  8. Sine-gordon type field in spacetime of arbitrary dimension. II: Stochastic quantization

    NASA Astrophysics Data System (ADS)

    Kirillov, A. I.

    1995-11-01

    Using the theory of Dirichlet forms, we prove the existence of a distribution-valued diffusion process such that the Nelson measure of a field with a bounded interaction density is its invariant probability measure. A Langevin equation in mathematically correct form is formulated which is satisfied by the process. The drift term of the equation is interpreted as a renormalized Euclidean current operator.

  9. Null boundary controllability of a one-dimensional heat equation with an internal point mass and variable coefficients

    NASA Astrophysics Data System (ADS)

    Ben Amara, Jamel; Bouzidi, Hedi

    2018-01-01

    In this paper, we consider a linear hybrid system which is composed by two non-homogeneous rods connected by a point mass with Dirichlet boundary conditions on the left end and a boundary control acts on the right end. We prove that this system is null controllable with Dirichlet or Neumann boundary controls. Our approach is mainly based on a detailed spectral analysis together with the moment method. In particular, we show that the associated spectral gap in both cases (Dirichlet or Neumann boundary controls) is positive without further conditions on the coefficients other than the regularities.

  10. Polynomial decay rate of a thermoelastic Mindlin-Timoshenko plate model with Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Grobbelaar-Van Dalsen, Marié

    2015-02-01

    In this article, we are concerned with the polynomial stabilization of a two-dimensional thermoelastic Mindlin-Timoshenko plate model with no mechanical damping. The model is subject to Dirichlet boundary conditions on the elastic as well as the thermal variables. The work complements our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 64:1305-1325, 2013) on the polynomial stabilization of a Mindlin-Timoshenko model in a radially symmetric domain under Dirichlet boundary conditions on the displacement and thermal variables and free boundary conditions on the shear angle variables. In particular, our aim is to investigate the effect of the Dirichlet boundary conditions on all the variables on the polynomial decay rate of the model. By once more applying a frequency domain method in which we make critical use of an inequality for the trace of Sobolev functions on the boundary of a bounded, open connected set we show that the decay is slower than in the model considered in the cited work. A comparison of our result with our polynomial decay result for a magnetoelastic Mindlin-Timoshenko model subject to Dirichlet boundary conditions on the elastic variables in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) also indicates a correlation between the robustness of the coupling between parabolic and hyperbolic dynamics and the polynomial decay rate in the two models.

  11. Dual Sticky Hierarchical Dirichlet Process Hidden Markov Model and Its Application to Natural Language Description of Motions.

    PubMed

    Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen

    2017-09-25

    In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. On combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.

  12. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds.

    PubMed

    Cruz-Marcelo, Alejandro; Ensor, Katherine B; Rosner, Gary L

    2011-06-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material.

  13. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds1

    PubMed Central

    Cruz-Marcelo, Alejandro; Ensor, Katherine B.; Rosner, Gary L.

    2011-01-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material. PMID:21765566

  14. Latent Dirichlet Allocation (LDA) Model and kNN Algorithm to Classify Research Project Selection

    NASA Astrophysics Data System (ADS)

    Safi’ie, M. A.; Utami, E.; Fatta, H. A.

    2018-03-01

    Universitas Sebelas Maret has a teaching staff of more than 1500 people, and one of its tasks is to carry out research. On the other hand, funding support for research and community service is limited, so submitted research and community service (P2M) proposals need to be evaluated and selected. At the selection stage, research proposal documents are collected as unstructured data, and the volume of stored data is very large. Extracting the information contained in these documents requires text mining technology, which gains knowledge from the documents by automating information extraction. In this article we apply Latent Dirichlet Allocation (LDA) to the documents as a model in the feature extraction process, to obtain terms that represent each document. We then use the k-Nearest Neighbour (kNN) algorithm to classify the documents based on these terms.
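
    A minimal sketch of this LDA-then-kNN pipeline using scikit-learn as a stand-in for the authors' implementation; the toy proposal texts, topic count, neighbour count, and labels are all hypothetical.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import KNeighborsClassifier

# hypothetical proposal abstracts and funding decisions (1 = accepted, 0 = rejected)
docs = [
    "bayesian model for rice crop yield prediction",
    "community service program for village water sanitation",
    "deep learning for medical image segmentation",
    "training workshop on financial literacy for small businesses",
]
labels = [1, 0, 1, 0]

counts = CountVectorizer().fit_transform(docs)                                  # bag-of-words counts
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)  # topic mixtures

clf = KNeighborsClassifier(n_neighbors=1).fit(topics, labels)  # classify proposals by topic mixture
print(clf.predict(topics[:1]))
```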

  15. A Bayesian Semiparametric Item Response Model with Dirichlet Process Priors

    ERIC Educational Resources Information Center

    Miyazaki, Kei; Hoshino, Takahiro

    2009-01-01

    In Item Response Theory (IRT), item characteristic curves (ICCs) are illustrated through logistic models or normal ogive models, and the probability that examinees give the correct answer is usually a monotonically increasing function of their ability parameters. However, since only limited patterns of shapes can be obtained from logistic models…

  16. Using Dirichlet Processes for Modeling Heterogeneous Treatment Effects across Sites

    ERIC Educational Resources Information Center

    Miratrix, Luke; Feller, Avi; Pillai, Natesh; Pati, Debdeep

    2016-01-01

    Modeling the distribution of site level effects is an important problem, but it is also an incredibly difficult one. Current methods rely on distributional assumptions in multilevel models for estimation. There it is hoped that the partial pooling of site level estimates with overall estimates, designed to take into account individual variation as…

  17. Nonparametric Bayesian predictive distributions for future order statistics

    Treesearch

    Richard A. Johnson; James W. Evans; David W. Green

    1999-01-01

    We derive the predictive distribution for a specified order statistic, determined from a future random sample, under a Dirichlet process prior. Two variants of the approach are treated and some limiting cases studied. A practical application to monitoring the strength of lumber is discussed including choices of prior expectation and comparisons made to a Bayesian...

  18. Modeling virtual organizations with Latent Dirichlet Allocation: a case for natural language processing.

    PubMed

    Gross, Alexander; Murthy, Dhiraj

    2014-10-01

    This paper explores a variety of methods for applying the Latent Dirichlet Allocation (LDA) automated topic modeling algorithm to the modeling of the structure and behavior of virtual organizations found within modern social media and social networking environments. As the field of Big Data reveals, an increase in the scale of social data available presents new challenges which are not tackled by merely scaling up hardware and software. Rather, they necessitate new methods and, indeed, new areas of expertise. Natural language processing provides one such method. This paper applies LDA to the study of scientific virtual organizations whose members employ social technologies. Because of the vast data footprint in these virtual platforms, we found that natural language processing was needed to 'unlock' and render visible latent, previously unseen conversational connections across large textual corpora (spanning profiles, discussion threads, forums, and other social media incarnations). We introduce variants of LDA and ultimately make the argument that natural language processing is a critical interdisciplinary methodology to make better sense of social 'Big Data' and we were able to successfully model nested discussion topics from forums and blog posts using LDA.

  19. On the Dirichlet's Box Principle

    ERIC Educational Resources Information Center

    Poon, Kin-Keung; Shiu, Wai-Chee

    2008-01-01

    In this note, we will focus on several applications on the Dirichlet's box principle in Discrete Mathematics lesson and number theory lesson. In addition, the main result is an innovative game on a triangular board developed by the authors. The game has been used in teaching and learning mathematics in Discrete Mathematics and some high schools in…
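
    For completeness, the box (pigeonhole) principle the note builds on can be stated in one line (a standard formulation, not quoted from the article):

```latex
% If m objects are placed into n boxes B_1, \dots, B_n with m > n, then some box
% receives at least \lceil m/n \rceil \ge 2 of them:
\[
  m > n \;\Longrightarrow\; \max_{1 \le i \le n} |B_i| \;\ge\; \left\lceil \frac{m}{n} \right\rceil \;\ge\; 2 .
\]
```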

  20. Decomposing biodiversity data using the Latent Dirichlet Allocation model, a probabilistic multivariate statistical method

    Treesearch

    Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave

    2014-01-01

    We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...

  1. Uniform gradient estimates on manifolds with a boundary and applications

    NASA Astrophysics Data System (ADS)

    Cheng, Li-Juan; Thalmaier, Anton; Thompson, James

    2018-04-01

    We revisit the problem of obtaining uniform gradient estimates for Dirichlet and Neumann heat semigroups on Riemannian manifolds with boundary. As applications, we obtain isoperimetric inequalities, using Ledoux's argument, and uniform quantitative gradient estimates, firstly for C^2_b functions with boundary conditions and then for the unit spectral projection operators of Dirichlet and Neumann Laplacians.

  2. Entity Relation Detection with Factorial Hidden Markov Models and Maximum Entropy Discriminant Latent Dirichlet Allocations

    ERIC Educational Resources Information Center

    Li, Dingcheng

    2011-01-01

    Coreference resolution (CR) and entity relation detection (ERD) aim at finding predefined relations between pairs of entities in text. CR focuses on resolving identity relations while ERD focuses on detecting non-identity relations. Both CR and ERD are important as they can potentially improve other natural language processing (NLP) related tasks…

  3. Analysis of the Westland Data Set

    NASA Technical Reports Server (NTRS)

    Wen, Fang; Willett, Peter; Deb, Somnath

    2001-01-01

    The "Westland" set of empirical accelerometer helicopter data with seeded and labeled faults is analyzed with the aim of condition monitoring. The autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in a relatively few measurements; and it has also been found that augmentation of these by harmonic and other parameters call improve classification significantly. Several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, and in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior oil training data and is thus able to quantify probability of error in all exact manner, such that features may be discarded or coarsened appropriately.

  4. Dirichlet to Neumann operator for Abelian Yang-Mills gauge fields

    NASA Astrophysics Data System (ADS)

    Díaz-Marín, Homero G.

    We consider the Dirichlet to Neumann operator for Abelian Yang-Mills boundary conditions. The aim is to construct a complex structure for the symplectic space of boundary conditions of Euler-Lagrange solutions modulo gauge for space-time manifolds with smooth boundary. Thus we prepare a suitable scenario for geometric quantization within the reduced symplectic space of boundary conditions of Abelian gauge fields.

  5. An improved approximate-Bayesian model-choice method for estimating shared evolutionary history

    PubMed Central

    2014-01-01

    Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937

  6. Repeated Red-Black ordering

    NASA Astrophysics Data System (ADS)

    Ciarlet, P.

    1994-09-01

    Hereafter, we describe and analyze, from both a theoretical and a numerical point of view, an iterative method for efficiently solving symmetric elliptic problems with possibly discontinuous coefficients. In the following, we use the Preconditioned Conjugate Gradient method to solve the symmetric positive definite linear systems which arise from the finite element discretization of the problems. We focus our interest on sparse and efficient preconditioners. In order to define the preconditioners, we perform two steps: first we reorder the unknowns and then we carry out a (modified) incomplete factorization of the original matrix. We study numerically and theoretically two preconditioners, the second preconditioner corresponding to the one investigated by Brand and Heinemann [2]. We prove convergence results about the Poisson equation with either Dirichlet or periodic boundary conditions. For a meshsize h, Brand proved that the condition number of the preconditioned system is bounded by O(h^{-1/2}) for Dirichlet boundary conditions. By slightly modifying the preconditioning process, we prove that the condition number is bounded by O(h^{-1/3}).
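
    The red-black reordered incomplete factorization analyzed in the paper is not reproduced here; the following minimal sketch only shows the generic preconditioned conjugate gradient loop the paper builds on, applied to a 1D Poisson matrix with a simple Jacobi preconditioner. The problem size and tolerance are arbitrary.

    ```python
    import numpy as np

    def pcg(A, b, M_inv, tol=1e-10, maxit=500):
        """Preconditioned conjugate gradient; M_inv applies the preconditioner."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for k in range(maxit):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                return x, k + 1
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, maxit

    # 1D Poisson (Dirichlet) test problem.
    n = 100
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n) / (n + 1) ** 2
    jacobi = lambda r: r / np.diag(A)        # diagonal (Jacobi) preconditioner
    x, iters = pcg(A, b, jacobi)
    print(iters, np.linalg.norm(A @ x - b))  # iterations and final residual
    ```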

  7. Poisson Coordinates.

    PubMed

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.

  8. Generalized Riemann hypothesis and stochastic time series

    NASA Astrophysics Data System (ADS)

    Mussardo, Giuseppe; LeClair, André

    2018-06-01

    Using the Dirichlet theorem on the equidistribution of residue classes modulo q and the Lemke Oliver–Soundararajan conjecture on the distribution of pairs of residues on consecutive primes, we show that the domain of convergence of the infinite product of Dirichlet L-functions of non-principal characters can be extended from Re(s) > 1 down to Re(s) > 1/2, without encountering any zeros before reaching this critical line. The possibility of doing so can be traced back to a universal diffusive random walk behavior of a series C_N over the primes which underlies the convergence of the infinite product of the Dirichlet functions. The series C_N presents several aspects in common with stochastic time series and its control requires addressing a problem similar to the single Brownian trajectory problem in statistical mechanics. In the case of the Dirichlet functions of non-principal characters, we show that this problem can be solved in terms of a self-averaging procedure based on an ensemble of block variables computed on extended intervals of primes. Those intervals, called inertial intervals, ensure the ergodicity and stationarity of the time series underlying the quantity C_N. The infinity of primes also ensures the absence of rare events which would have been responsible for a different scaling behavior than the universal law of the random walks.

  9. Parametric embedding for class visualization.

    PubMed

    Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B

    2007-09-01

    We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.

  10. Recommender system based on scarce information mining.

    PubMed

    Lu, Wei; Chung, Fu-Lai; Lai, Kunfeng; Zhang, Liang

    2017-09-01

    Guessing what user may like is now a typical interface for video recommendation. Nowadays, the highly popular user generated content sites provide various sources of information such as tags for recommendation tasks. Motivated by a real world online video recommendation problem, this work targets at the long tail phenomena of user behavior and the sparsity of item features. A personalized compound recommendation framework for online video recommendation called Dirichlet mixture probit model for information scarcity (DPIS) is hence proposed. Assuming that each clicking sample is generated from a representation of user preferences, DPIS models the sample level topic proportions as a multinomial item vector, and utilizes topical clustering on the user part for recommendation through a probit classifier. As demonstrated by the real-world application, the proposed DPIS achieves better performance in accuracy, perplexity as well as diversity in coverage than traditional methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Ages of Records in Random Walks

    NASA Astrophysics Data System (ADS)

    Szabó, Réka; Vető, Bálint

    2016-12-01

    We consider random walks with continuous and symmetric step distributions. We prove universal asymptotics for the average proportion of the age of the kth longest lasting record for k = 1, 2, … and for the probability that the record of the kth longest age is broken at step n. Due to the relation to the Chinese restaurant process, the ranked sequence of proportions of ages converges to the Poisson-Dirichlet distribution.
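
    For intuition about the quantity being studied, the hypothetical sketch below simulates symmetric random walks and measures the fraction of time occupied by the longest-lasting record; the step distribution, walk length and sample size are arbitrary choices, not the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def longest_record_age_fraction(n_steps):
        """Simulate one symmetric random walk and return the fraction of time
        occupied by the longest-lasting (upper) record."""
        walk = np.cumsum(rng.standard_normal(n_steps))
        record_times = [0]
        current_max = walk[0]
        for t in range(1, n_steps):
            if walk[t] > current_max:
                current_max = walk[t]
                record_times.append(t)
        record_times.append(n_steps)         # close the last record's age
        ages = np.diff(record_times)
        return ages.max() / n_steps

    samples = [longest_record_age_fraction(5000) for _ in range(200)]
    print("mean fraction of longest record age:", np.mean(samples))
    ```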

  12. Laplace-Beltrami Eigenvalues and Topological Features of Eigenfunctions for Statistical Shape Analysis

    PubMed Central

    Reuter, Martin; Wolter, Franz-Erich; Shenton, Martha; Niethammer, Marc

    2009-01-01

    This paper proposes the use of the surface based Laplace-Beltrami and the volumetric Laplace eigenvalues and -functions as shape descriptors for the comparison and analysis of shapes. These spectral measures are isometry invariant and therefore allow for shape comparisons with minimal shape pre-processing. In particular, no registration, mapping, or remeshing is necessary. The discriminatory power of the 2D surface and 3D solid methods is demonstrated on a population of female caudate nuclei (a subcortical gray matter structure of the brain, involved in memory function, emotion processing, and learning) of normal control subjects and of subjects with schizotypal personality disorder. The behavior and properties of the Laplace-Beltrami eigenvalues and -functions are discussed extensively for both the Dirichlet and Neumann boundary conditions, showing advantages of the Neumann vs. the Dirichlet spectra in 3D. Furthermore, topological analyses employing the Morse-Smale complex (on the surfaces) and the Reeb graph (in the solids) are performed on selected eigenfunctions, yielding shape descriptors that are capable of localizing geometric properties and detecting shape differences by indirectly registering topological features such as critical points, level sets and integral lines of the gradient field across subjects. The use of these topological features of the Laplace-Beltrami eigenfunctions in 2D and 3D for statistical shape analysis is novel. PMID:20161035

  13. Classification of iRBD and Parkinson's disease patients based on eye movements during sleep.

    PubMed

    Christensen, Julie A E; Koch, Henriette; Frandsen, Rune; Kempfner, Jacob; Arvastson, Lars; Christensen, Soren R; Sorensen, Helge B D; Jennum, Poul

    2013-01-01

    Patients suffering from the sleep disorder idiopathic rapid-eye-movement sleep behavior disorder (iRBD) have been observed to be at high risk of developing Parkinson's disease (PD). This makes it essential to analyze them in the search for PD biomarkers. This study aims at classifying patients suffering from iRBD or PD based on features reflecting eye movements (EMs) during sleep. A Latent Dirichlet Allocation (LDA) topic model was developed based on features extracted from two electrooculographic (EOG) signals measured as part of full-night polysomnographic (PSG) recordings from ten control subjects. The trained model was tested on ten other control subjects, ten iRBD patients and ten PD patients, obtaining an EM topic mixture diagram for each subject in the test dataset. Three features were extracted from the topic mixture diagrams, reflecting "certainty", "fragmentation" and "stability" in the temporal distribution of the EM topics. Using a Naive Bayes (NB) classifier and the features "certainty" and "stability" yielded the best classification result, and the subjects were classified with a sensitivity of 95%, a specificity of 80% and an accuracy of 90%. This study demonstrates, in a data-driven approach, that iRBD and PD patients may exhibit abnormal form and/or temporal distribution of EMs during sleep.

  14. Uniqueness for the electrostatic inverse boundary value problem with piecewise constant anisotropic conductivities

    NASA Astrophysics Data System (ADS)

    Alessandrini, Giovanni; de Hoop, Maarten V.; Gaburro, Romina

    2017-12-01

    We discuss the inverse problem of determining the, possibly anisotropic, conductivity of a body Ω ⊂ R^n when the so-called Neumann-to-Dirichlet map is locally given on a non-empty curved portion Σ of the boundary ∂Ω. We prove that anisotropic conductivities that are a priori known to be piecewise constant matrices on a given partition of Ω with curved interfaces can be uniquely determined in the interior from the knowledge of the local Neumann-to-Dirichlet map.

  15. Quasi-measures on the group G^m, Dirichlet sets, and uniqueness problems for multiple Walsh series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plotnikov, Mikhail G

    2011-02-11

    Multiple Walsh series (S) on the group G^m are studied. It is proved that every at most countable set is a uniqueness set for series (S) under convergence over cubes. The recovery problem is solved for the coefficients of series (S) that converge outside countable sets or outside sets of Dirichlet type. A number of analogues of the de la Vallée Poussin theorem are established for series (S). Bibliography: 28 titles.

  16. Effect of background dielectric on TE-polarized photonic bandgap of metallodielectric photonic crystals using Dirichlet-to-Neumann map method.

    PubMed

    Sedghi, Aliasghar; Rezaei, Behrooz

    2016-11-20

    Using the Dirichlet-to-Neumann map method, we have calculated the photonic band structure of two-dimensional metallodielectric photonic crystals having the square and triangular lattices of circular metal rods in a dielectric background. We have selected the transverse electric mode of electromagnetic waves, and the resulting band structures showed the existence of photonic bandgap in these structures. We theoretically study the effect of background dielectric on the photonic bandgap.

  17. Characterizing Twitter Discussions About HPV Vaccines Using Topic Modeling and Community Detection.

    PubMed

    Surian, Didi; Nguyen, Dat Quoc; Kennedy, Georgina; Johnson, Mark; Coiera, Enrico; Dunn, Adam G

    2016-08-29

    In public health surveillance, measuring how information enters and spreads through online communities may help us understand geographical variation in decision making associated with poor health outcomes. Our aim was to evaluate the use of community structure and topic modeling methods as a process for characterizing the clustering of opinions about human papillomavirus (HPV) vaccines on Twitter. The study examined Twitter posts (tweets) collected between October 2013 and October 2015 about HPV vaccines. We tested Latent Dirichlet Allocation and Dirichlet Multinomial Mixture (DMM) models for inferring topics associated with tweets, and community agglomeration (Louvain) and the encoding of random walks (Infomap) methods to detect community structure of the users from their social connections. We examined the alignment between community structure and topics using several common clustering alignment measures and introduced a statistical measure of alignment based on the concentration of specific topics within a small number of communities. Visualizations of the topics and the alignment between topics and communities are presented to support the interpretation of the results in context of public health communication and identification of communities at risk of rejecting the safety and efficacy of HPV vaccines. We analyzed 285,417 Twitter posts (tweets) about HPV vaccines from 101,519 users connected by 4,387,524 social connections. Examining the alignment between the community structure and the topics of tweets, the results indicated that the Louvain community detection algorithm together with DMM produced consistently higher alignment values and that alignments were generally higher when the number of topics was lower. After applying the Louvain method and DMM with 30 topics and grouping semantically similar topics in a hierarchy, we characterized 163,148 (57.16%) tweets as evidence and advocacy, and 6244 (2.19%) tweets describing personal experiences. Among the 4548 users who posted experiential tweets, 3449 users (75.84%) were found in communities where the majority of tweets were about evidence and advocacy. The use of community detection in concert with topic modeling appears to be a useful way to characterize Twitter communities for the purpose of opinion surveillance in public health applications. Our approach may help identify online communities at risk of being influenced by negative opinions about public health interventions such as HPV vaccines.
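
    The alignment between communities and topics can be probed with standard clustering-comparison scores; a minimal sketch using scikit-learn's normalized mutual information between community labels and each user's dominant topic is given below. The labels and topic mixtures here are synthetic placeholders, not the study's data or its specific alignment statistic.

    ```python
    import numpy as np
    from sklearn.metrics import normalized_mutual_info_score

    rng = np.random.default_rng(1)

    # Synthetic stand-ins: a community label per user and a topic-probability
    # row per user (e.g. averaged over that user's tweets).
    n_users, n_topics = 1000, 10
    communities = rng.integers(0, 5, size=n_users)
    topic_mix = rng.dirichlet(np.ones(n_topics), size=n_users)

    # Assign each user to their dominant topic and compare with communities.
    dominant_topic = topic_mix.argmax(axis=1)
    print("NMI(community, dominant topic):",
          normalized_mutual_info_score(communities, dominant_topic))
    ```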

  18. Characterizing Twitter Discussions About HPV Vaccines Using Topic Modeling and Community Detection

    PubMed Central

    Nguyen, Dat Quoc; Kennedy, Georgina; Johnson, Mark; Coiera, Enrico; Dunn, Adam G

    2016-01-01

    Background In public health surveillance, measuring how information enters and spreads through online communities may help us understand geographical variation in decision making associated with poor health outcomes. Objective Our aim was to evaluate the use of community structure and topic modeling methods as a process for characterizing the clustering of opinions about human papillomavirus (HPV) vaccines on Twitter. Methods The study examined Twitter posts (tweets) collected between October 2013 and October 2015 about HPV vaccines. We tested Latent Dirichlet Allocation and Dirichlet Multinomial Mixture (DMM) models for inferring topics associated with tweets, and community agglomeration (Louvain) and the encoding of random walks (Infomap) methods to detect community structure of the users from their social connections. We examined the alignment between community structure and topics using several common clustering alignment measures and introduced a statistical measure of alignment based on the concentration of specific topics within a small number of communities. Visualizations of the topics and the alignment between topics and communities are presented to support the interpretation of the results in context of public health communication and identification of communities at risk of rejecting the safety and efficacy of HPV vaccines. Results We analyzed 285,417 Twitter posts (tweets) about HPV vaccines from 101,519 users connected by 4,387,524 social connections. Examining the alignment between the community structure and the topics of tweets, the results indicated that the Louvain community detection algorithm together with DMM produced consistently higher alignment values and that alignments were generally higher when the number of topics was lower. After applying the Louvain method and DMM with 30 topics and grouping semantically similar topics in a hierarchy, we characterized 163,148 (57.16%) tweets as evidence and advocacy, and 6244 (2.19%) tweets describing personal experiences. Among the 4548 users who posted experiential tweets, 3449 users (75.84%) were found in communities where the majority of tweets were about evidence and advocacy. Conclusions The use of community detection in concert with topic modeling appears to be a useful way to characterize Twitter communities for the purpose of opinion surveillance in public health applications. Our approach may help identify online communities at risk of being influenced by negative opinions about public health interventions such as HPV vaccines. PMID:27573910

  19. Topic detection using paragraph vectors to support active learning in systematic reviews.

    PubMed

    Hashimoto, Kazuma; Kontonatsios, Georgios; Miwa, Makoto; Ananiadou, Sophia

    2016-08-01

    Systematic reviews require expert reviewers to manually screen thousands of citations in order to identify all relevant articles to the review. Active learning text classification is a supervised machine learning approach that has been shown to significantly reduce the manual annotation workload by semi-automating the citation screening process of systematic reviews. In this paper, we present a new topic detection method that induces an informative representation of studies, to improve the performance of the underlying active learner. Our proposed topic detection method uses a neural network-based vector space model to capture semantic similarities between documents. We firstly represent documents within the vector space, and cluster the documents into a predefined number of clusters. The centroids of the clusters are treated as latent topics. We then represent each document as a mixture of latent topics. For evaluation purposes, we employ the active learning strategy using both our novel topic detection method and a baseline topic model (i.e., Latent Dirichlet Allocation). Results obtained demonstrate that our method is able to achieve a high sensitivity of eligible studies and a significantly reduced manual annotation cost when compared to the baseline method. This observation is consistent across two clinical and three public health reviews. The tool introduced in this work is available from https://nactem.ac.uk/pvtopic/. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthias C. M. Troffaes; Gero Walter; Dana Kelly

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model.
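
    A minimal sketch of an imprecise-Dirichlet-style update of the kind described here is shown below, assuming the prior is summarized by lower and upper expectations for each alpha-factor together with a learning parameter s; the exact parameterization and the numbers used in the paper may differ, and the counts below are purely illustrative.

    ```python
    def idm_posterior_bounds(counts, prior_lower, prior_upper, s=2.0):
        """Lower/upper posterior expectations of each alpha-factor.

        Each Dirichlet prior in the set has strength s and mean t, with t_k
        ranging over [prior_lower[k], prior_upper[k]]; the posterior mean is
        (n_k + s * t_k) / (N + s), so bounds follow from the endpoints."""
        n = sum(counts)
        lower = [(nk + s * lk) / (n + s) for nk, lk in zip(counts, prior_lower)]
        upper = [(nk + s * uk) / (n + s) for nk, uk in zip(counts, prior_upper)]
        return lower, upper

    # Illustrative counts of 1-, 2- and 3-component failure events.
    counts = [30, 4, 1]
    lo, up = idm_posterior_bounds(counts,
                                  prior_lower=[0.80, 0.05, 0.01],
                                  prior_upper=[0.95, 0.15, 0.05],
                                  s=5.0)
    for k, (l, u) in enumerate(zip(lo, up), start=1):
        print(f"alpha_{k}: [{l:.3f}, {u:.3f}]")
    ```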

  1. Thermodynamic Identities and Symmetry Breaking in Short-Range Spin Glasses

    NASA Astrophysics Data System (ADS)

    Arguin, L.-P.; Newman, C. M.; Stein, D. L.

    2015-10-01

    We present a technique to generate relations connecting pure state weights, overlaps, and correlation functions in short-range spin glasses. These are obtained directly from the unperturbed Hamiltonian and hold for general coupling distributions. All are satisfied in phases with simple thermodynamic structure, such as the droplet-scaling and chaotic pairs pictures. If instead nontrivial mixed-state pictures hold, the relations suggest that replica symmetry is broken as described by a Derrida-Ruelle cascade, with pure state weights distributed as a Poisson-Dirichlet process.

  2. Low frequency acoustic and electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Maccamy, R. C.

    1986-01-01

    This paper deals with two classes of problems arising from acoustic and electromagnetic scattering in the low frequency situation. The first class of problem is solving the Helmholtz equation with Dirichlet boundary conditions on an arbitrary two dimensional body while the second one is an interior-exterior interface problem with the Helmholtz equation in the exterior. Low frequency analysis shows that there are two intermediate problems which solve the above problems accurately to O(k^2 log k) where k is the frequency. These solutions greatly differ from the zero frequency approximations. For the Dirichlet problem, numerical examples are shown to verify the theoretical estimates.

  3. The first eigenvalue of the p-Laplacian on quantum graphs

    NASA Astrophysics Data System (ADS)

    Del Pezzo, Leandro M.; Rossi, Julio D.

    2016-12-01

    We study the first eigenvalue of the p-Laplacian (with 1 < p < ∞)…

  4. Detecting Anisotropic Inclusions Through EIT

    NASA Astrophysics Data System (ADS)

    Cristina, Jan; Päivärinta, Lassi

    2017-12-01

    We study the evolution equation ∂_t u = -Λ_t u, where Λ_t is the Dirichlet-Neumann operator of a decreasing family of Riemannian manifolds with boundary Σ_t. We derive a lower bound for the solution of such an equation, and apply it to a quantitative density estimate for the restriction of harmonic functions on M = Σ_0 to the boundaries ∂Σ_t. Consequently we are able to derive a lower bound for the difference of the Dirichlet-Neumann maps in terms of the difference of a background metric g and an inclusion metric g + χ_Σ(h - g) on a manifold M.

  5. Analysing the health effects of simultaneous exposure to physical and chemical properties of airborne particles

    PubMed Central

    Pirani, Monica; Best, Nicky; Blangiardo, Marta; Liverani, Silvia; Atkinson, Richard W.; Fuller, Gary W.

    2015-01-01

    Background Airborne particles are a complex mix of organic and inorganic compounds, with a range of physical and chemical properties. Estimation of how simultaneous exposure to air particles affects the risk of adverse health response represents a challenge for scientific research and air quality management. In this paper, we present a Bayesian approach that can tackle this problem within the framework of time series analysis. Methods We used Dirichlet process mixture models to cluster time points with similar multipollutant and response profiles, while adjusting for seasonal cycles, trends and temporal components. Inference was carried out via Markov Chain Monte Carlo methods. We illustrated our approach using daily data of a range of particle metrics and respiratory mortality for London (UK) 2002–2005. To better quantify the average health impact of these particles, we measured the same set of metrics in 2012, and we computed and compared the posterior predictive distributions of mortality under the exposure scenario in 2012 vs 2005. Results The model resulted in a partition of the days into three clusters. We found a relative risk of 1.02 (95% credible intervals (CI): 1.00, 1.04) for respiratory mortality associated with days characterised by high posterior estimates of non-primary particles, especially nitrate and sulphate. We found a consistent reduction in the airborne particles in 2012 vs 2005 and the analysis of the posterior predictive distributions of respiratory mortality suggested an average annual decrease of − 3.5% (95% CI: − 0.12%, − 5.74%). Conclusions We proposed an effective approach that enabled the better understanding of hidden structures in multipollutant health effects within time series analysis. It allowed the identification of exposure metrics associated with respiratory mortality and provided a tool to assess the changes in health effects from various policies to control the ambient particle matter mixtures. PMID:25795926

  6. Analysing the health effects of simultaneous exposure to physical and chemical properties of airborne particles.

    PubMed

    Pirani, Monica; Best, Nicky; Blangiardo, Marta; Liverani, Silvia; Atkinson, Richard W; Fuller, Gary W

    2015-06-01

    Airborne particles are a complex mix of organic and inorganic compounds, with a range of physical and chemical properties. Estimation of how simultaneous exposure to air particles affects the risk of adverse health response represents a challenge for scientific research and air quality management. In this paper, we present a Bayesian approach that can tackle this problem within the framework of time series analysis. We used Dirichlet process mixture models to cluster time points with similar multipollutant and response profiles, while adjusting for seasonal cycles, trends and temporal components. Inference was carried out via Markov Chain Monte Carlo methods. We illustrated our approach using daily data of a range of particle metrics and respiratory mortality for London (UK) 2002-2005. To better quantify the average health impact of these particles, we measured the same set of metrics in 2012, and we computed and compared the posterior predictive distributions of mortality under the exposure scenario in 2012 vs 2005. The model resulted in a partition of the days into three clusters. We found a relative risk of 1.02 (95% credible intervals (CI): 1.00, 1.04) for respiratory mortality associated with days characterised by high posterior estimates of non-primary particles, especially nitrate and sulphate. We found a consistent reduction in the airborne particles in 2012 vs 2005 and the analysis of the posterior predictive distributions of respiratory mortality suggested an average annual decrease of -3.5% (95% CI: -0.12%, -5.74%). We proposed an effective approach that enabled the better understanding of hidden structures in multipollutant health effects within time series analysis. It allowed the identification of exposure metrics associated with respiratory mortality and provided a tool to assess the changes in health effects from various policies to control the ambient particle matter mixtures. Copyright © 2015. Published by Elsevier Ltd.

  7. Dirichlet boundary conditions for arbitrary-shaped boundaries in stellarator-like magnetic fields for the Flux-Coordinate Independent method

    NASA Astrophysics Data System (ADS)

    Hill, Peter; Shanahan, Brendan; Dudson, Ben

    2017-04-01

    We present a technique for handling Dirichlet boundary conditions with the Flux Coordinate Independent (FCI) parallel derivative operator with arbitrary-shaped material geometry in general 3D magnetic fields. The FCI method constructs a finite difference scheme for ∇∥ by following field lines between poloidal planes and interpolating within planes. Doing so removes the need for field-aligned coordinate systems that suffer from singularities in the metric tensor at null points in the magnetic field (or equivalently, when q → ∞). One cost of this method is that as the field lines are not on the mesh, they may leave the domain at any point between neighbouring planes, complicating the application of boundary conditions. The Leg Value Fill (LVF) boundary condition scheme presented here involves an extrapolation/interpolation of the boundary value onto the field line end point. The usual finite difference scheme can then be used unmodified. We implement the LVF scheme in BOUT++ and use the Method of Manufactured Solutions to verify the implementation in a rectangular domain, and show that it does not modify the error scaling of the finite difference scheme. The use of LVF for arbitrary wall geometry is outlined. We also demonstrate the feasibility of using the FCI approach in non-axisymmetric configurations for a simple diffusion model in a "straight stellarator" magnetic field. A Gaussian blob diffuses along the field lines, tracing out flux surfaces. Dirichlet boundary conditions impose a last closed flux surface (LCFS) that confines the density. Including a poloidal limiter moves the LCFS to a smaller radius. The expected scaling of the numerical perpendicular diffusion, which is a consequence of the FCI method, in stellarator-like geometry is recovered. A novel technique for increasing the parallel resolution during post-processing, in order to reduce artefacts in visualisations, is described.

  8. Numerical reconstruction of unknown Robin inclusions inside a heat conductor by a non-iterative method

    NASA Astrophysics Data System (ADS)

    Nakamura, Gen; Wang, Haibing

    2017-05-01

    Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. By using a finite sequence of transient inputs over a time interval, we propose a new sampling method over the time interval from a single measurement, which is most likely to be practical.

  9. A Matlab-based finite-difference solver for the Poisson problem with mixed Dirichlet-Neumann boundary conditions

    NASA Astrophysics Data System (ADS)

    Reimer, Ashton S.; Cheviakov, Alexei F.

    2013-03-01

    A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of large system of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
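
    The published solver is Matlab; as a point of comparison, the following is a minimal Python analogue for a rectangle with homogeneous Dirichlet conditions only (no Neumann/mixed boundaries, disks, spheres or mesh refinement), included to make the underlying five-point discretization concrete. Grid size and right-hand side are arbitrary.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def poisson_dirichlet(nx, ny, f, lx=1.0, ly=1.0):
        """Solve -Laplace(u) = f on a rectangle with u = 0 on the boundary,
        using the standard 5-point finite-difference stencil."""
        hx, hy = lx / (nx + 1), ly / (ny + 1)
        ex, ey = np.ones(nx), np.ones(ny)
        Dxx = sp.diags([ex[:-1], -2.0 * ex, ex[:-1]], [-1, 0, 1]) / hx**2
        Dyy = sp.diags([ey[:-1], -2.0 * ey, ey[:-1]], [-1, 0, 1]) / hy**2
        A = -(sp.kron(sp.eye(ny), Dxx) + sp.kron(Dyy, sp.eye(nx))).tocsc()
        x = np.linspace(hx, lx - hx, nx)
        y = np.linspace(hy, ly - hy, ny)
        X, Y = np.meshgrid(x, y)                  # interior grid, shape (ny, nx)
        u = spsolve(A, f(X, Y).ravel())
        return u.reshape(ny, nx), X, Y

    # Manufactured solution u = sin(pi x) sin(pi y), so f = 2 pi^2 u.
    f = lambda X, Y: 2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
    u, X, Y = poisson_dirichlet(80, 80, f)
    print("max error:", np.abs(u - np.sin(np.pi * X) * np.sin(np.pi * Y)).max())
    ```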

  10. A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.

    2014-01-01

    We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.

  11. On the exterior Dirichlet problem for Hessian quotient equations

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Li, Zhisu

    2018-06-01

    In this paper, we establish the existence and uniqueness theorem for solutions of the exterior Dirichlet problem for Hessian quotient equations with prescribed asymptotic behavior at infinity. This extends the previous related results on the Monge-Ampère equations and on the Hessian equations, and rearranges them in a systematic way. Based on Perron's method, the main ingredient of this paper is to construct some appropriate subsolutions of the Hessian quotient equation, which is realized by introducing some new quantities about the elementary symmetric polynomials and using them to analyze the corresponding ordinary differential equation related to the generalized radially symmetric subsolutions of the original equation.

  12. A three dimensional Dirichlet-to-Neumann map for surface waves over topography

    NASA Astrophysics Data System (ADS)

    Nachbin, Andre; Andrade, David

    2016-11-01

    We consider three dimensional surface water waves in the potential theory regime. The bottom topography can have a quite general profile. In the case of linear waves the Dirichlet-to-Neumann operator is formulated in a matrix decomposition form. Computational simulations illustrate the performance of the method. Two dimensional periodic bottom variations are considered in both the Bragg resonance regime as well as the rapidly varying (homogenized) regime. In the three-dimensional case we use the Luneburg lens-shaped submerged mound, which promotes the focusing of the underlying rays. FAPERJ Cientistas do Nosso Estado Grant 102917/2011 and ANP/PRH-32.

  13. Regularization of moving boundaries in a laplacian field by a mixed Dirichlet-Neumann boundary condition: exact results.

    PubMed

    Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar

    2005-11-04

    The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.

  14. Two-point correlation function for Dirichlet L-functions

    NASA Astrophysics Data System (ADS)

    Bogomolny, E.; Keating, J. P.

    2013-03-01

    The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.

  15. Synchronization of Reaction-Diffusion Neural Networks With Dirichlet Boundary Conditions and Infinite Delays.

    PubMed

    Sheng, Yin; Zhang, Hao; Zeng, Zhigang

    2017-10-01

    This paper is concerned with synchronization for a class of reaction-diffusion neural networks with Dirichlet boundary conditions and infinite discrete time-varying delays. By utilizing theories of partial differential equations, Green's formula, inequality techniques, and the concept of comparison, algebraic criteria are presented to guarantee master-slave synchronization of the underlying reaction-diffusion neural networks via a designed controller. Additionally, sufficient conditions on exponential synchronization of reaction-diffusion neural networks with finite time-varying delays are established. The proposed criteria herein enhance and generalize some published ones. Three numerical examples are presented to substantiate the validity and merits of the obtained theoretical results.

  16. Locating Temporal Functional Dynamics of Visual Short-Term Memory Binding using Graph Modular Dirichlet Energy

    NASA Astrophysics Data System (ADS)

    Smith, Keith; Ricaud, Benjamin; Shahid, Nauman; Rhodes, Stephen; Starr, John M.; Ibáñez, Augustin; Parra, Mario A.; Escudero, Javier; Vandergheynst, Pierre

    2017-02-01

    Visual short-term memory binding tasks are a promising early marker for Alzheimer’s disease (AD). To uncover functional deficits of AD in these tasks it is meaningful to first study unimpaired brain function. Electroencephalogram recordings were obtained from encoding and maintenance periods of tasks performed by healthy young volunteers. We probe the task’s transient physiological underpinnings by contrasting shape only (Shape) and shape-colour binding (Bind) conditions, displayed in the left and right sides of the screen, separately. Particularly, we introduce and implement a novel technique named Modular Dirichlet Energy (MDE) which allows robust and flexible analysis of the functional network with unprecedented temporal precision. We find that connectivity in the Bind condition is less integrated with the global network than in the Shape condition in occipital and frontal modules during the encoding period of the right screen condition. Using MDE we are able to discern driving effects in the occipital module between 100-140 ms, coinciding with the P100 visually evoked potential, followed by a driving effect in the frontal module between 140-180 ms, suggesting that the differences found constitute an information processing difference between these modules. This provides temporally precise information over a heterogeneous population in promising tasks for the detection of AD.
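
    The paper's Modular Dirichlet Energy is defined precisely there; as rough intuition for the kind of quantity involved, the hypothetical sketch below computes the graph Dirichlet energy x^T L x of a signal and its restriction to a module (a subset of nodes), with a random weight matrix standing in for a functional connectivity estimate.

    ```python
    import numpy as np

    def dirichlet_energy(W, x):
        """Graph Dirichlet energy 0.5 * sum_ij W_ij (x_i - x_j)^2 = x^T L x."""
        L = np.diag(W.sum(axis=1)) - W
        return float(x @ L @ x)

    def modular_dirichlet_energy(W, x, module):
        """Energy of the signal restricted to the edges inside one module."""
        idx = np.asarray(module)
        return dirichlet_energy(W[np.ix_(idx, idx)], x[idx])

    rng = np.random.default_rng(0)
    n = 6
    W = rng.random((n, n))
    W = (W + W.T) / 2                       # symmetric weights
    np.fill_diagonal(W, 0)                  # no self-loops
    x = rng.standard_normal(n)              # stand-in for band-limited activity
    print(dirichlet_energy(W, x))
    print(modular_dirichlet_energy(W, x, [0, 1, 2]))   # one module of nodes
    ```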

  17. Strong Asymmetric Limit of the Quasi-Potential of the Boundary Driven Weakly Asymmetric Exclusion Process

    NASA Astrophysics Data System (ADS)

    Bertini, Lorenzo; Gabrielli, Davide; Landim, Claudio

    2009-07-01

    We consider the weakly asymmetric exclusion process on a bounded interval with particles reservoirs at the endpoints. The hydrodynamic limit for the empirical density, obtained in the diffusive scaling, is given by the viscous Burgers equation with Dirichlet boundary conditions. In the case in which the bulk asymmetry is in the same direction as the drift due to the boundary reservoirs, we prove that the quasi-potential can be expressed in terms of the solution to a one-dimensional boundary value problem which has been introduced by Enaud and Derrida [16]. We consider the strong asymmetric limit of the quasi-potential and recover the functional derived by Derrida, Lebowitz, and Speer [15] for the asymmetric exclusion process.

  18. Large eddy simulation of turbulent premixed combustion using tabulated detailed chemistry and presumed probability density function

    NASA Astrophysics Data System (ADS)

    Zhang, Hongda; Han, Chao; Ye, Taohong; Ren, Zhuyin

    2016-03-01

    A method of chemistry tabulation combined with presumed probability density function (PDF) is applied to simulate piloted premixed jet burner flames with high Karlovitz number using large eddy simulation. Thermo-chemistry states are tabulated by the combination of auto-ignition and extended auto-ignition models. To evaluate the predictive capability of the proposed tabulation method to represent the thermo-chemistry states under the condition of different fresh gas temperatures, an a-priori study is conducted by performing idealised transient one-dimensional premixed flame simulations. Presumed PDF is used to involve the interaction of turbulence and flame with beta PDF to model the reaction progress variable distribution. Two presumed PDF models, Dirichlet distribution and independent beta distribution, respectively, are applied for representing the interaction between two mixture fractions that are associated with three inlet streams. Comparisons of statistical results show that two presumed PDF models for the two mixture fractions are both capable of predicting temperature and major species profiles; however, they are shown to have a significant effect on the predictions for intermediate species. An analysis of the thermo-chemical state-space representation of the sub-grid scale (SGS) combustion model is performed by comparing correlations between the carbon monoxide mass fraction and temperature. The SGS combustion model based on the proposed chemistry tabulation can reasonably capture the peak value and change trend of intermediate species. Aspects regarding model extensions to adequately predict the peak location of intermediate species are discussed.
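
    To make the presumed-PDF closure concrete, the hypothetical sketch below averages a tabulated quantity over a beta PDF of the reaction progress variable, recovering the beta shape parameters from a prescribed mean and variance; this is only the generic single-variable closure, not the paper's chemistry table or its Dirichlet/independent-beta treatment of the two mixture fractions.

    ```python
    import numpy as np
    from scipy.stats import beta

    def presumed_beta_average(table, c_mean, c_var, n=2001):
        """Average table(c) over a presumed beta PDF of the progress variable c,
        whose shape parameters are recovered from the mean and variance
        (requires 0 < c_var < c_mean * (1 - c_mean))."""
        g = c_mean * (1.0 - c_mean) / c_var - 1.0
        a, b = c_mean * g, (1.0 - c_mean) * g
        c = np.linspace(1e-6, 1.0 - 1e-6, n)
        w = beta.pdf(c, a, b)
        w /= w.sum()                              # discrete quadrature weights
        return float(np.sum(table(c) * w))

    # Stand-in "source term" peaking near c = 0.8 (purely illustrative).
    omega = lambda c: np.exp(-((c - 0.8) / 0.1) ** 2)
    print("beta-PDF average:", presumed_beta_average(omega, c_mean=0.5, c_var=0.05))
    print("delta-PDF value: ", omega(0.5))        # value with fluctuations ignored
    ```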

  19. Asymptotic stability of a nonlinear Korteweg-de Vries equation with critical lengths

    NASA Astrophysics Data System (ADS)

    Chu, Jixun; Coron, Jean-Michel; Shang, Peipei

    2015-10-01

    We study an initial-boundary-value problem of a nonlinear Korteweg-de Vries equation posed on the finite interval (0, 2kπ) where k is a positive integer. The whole system has a Dirichlet boundary condition at the left end-point, and both Dirichlet and Neumann homogeneous boundary conditions at the right end-point. It is known that the origin is not asymptotically stable for the linearized system around the origin. We prove that the origin is (locally) asymptotically stable for the nonlinear system if the integer k is such that the kernel of the linear Korteweg-de Vries stationary equation is of dimension 1. This is for example the case if k = 1.

  20. Scalar Casimir densities and forces for parallel plates in cosmic string spacetime

    NASA Astrophysics Data System (ADS)

    Bezerra de Mello, E. R.; Saharian, A. A.; Abajyan, S. V.

    2018-04-01

    We analyze the Green function, the Casimir densities and forces associated with a massive scalar quantum field confined between two parallel plates in a higher dimensional cosmic string spacetime. The plates are placed orthogonal to the string, and the field obeys the Robin boundary conditions on them. The boundary-induced contributions are explicitly extracted in the vacuum expectation values (VEVs) of the field squared and of the energy-momentum tensor for both the single plate and two plates geometries. The VEV of the energy-momentum tensor, in addition to the diagonal components, contains an off diagonal component corresponding to the shear stress. The latter vanishes on the plates in special cases of Dirichlet and Neumann boundary conditions. For points outside the string core the topological contributions in the VEVs are finite on the plates. Near the string the VEVs are dominated by the boundary-free part, whereas at large distances the boundary-induced contributions dominate. Due to the nonzero off diagonal component of the vacuum energy-momentum tensor, in addition to the normal component, the Casimir forces have a nonzero component parallel to the boundary (shear force). Unlike the problem on the Minkowski bulk, the normal forces acting on the separate plates, in general, do not coincide if the corresponding Robin coefficients are different. Another difference is that in the presence of the cosmic string the Casimir forces for Dirichlet and Neumann boundary conditions differ. For Dirichlet boundary condition the normal Casimir force does not depend on the curvature coupling parameter. This is not the case for other boundary conditions. A new qualitative feature induced by the cosmic string is the appearance of the shear stress acting on the plates. The corresponding force is directed along the radial coordinate and vanishes for Dirichlet and Neumann boundary conditions. Depending on the parameters of the problem, the radial component of the shear force can be either positive or negative.

  1. A Meinardus Theorem with Multiple Singularities

    NASA Astrophysics Data System (ADS)

    Granovsky, Boris L.; Stark, Dudley

    2012-09-01

    Meinardus proved a general theorem about the asymptotics of the number of weighted partitions, when the Dirichlet generating function for weights has a single pole on the positive real axis. Continuing (Granovsky et al., Adv. Appl. Math. 41:307-328, 2008), we derive asymptotics for the numbers of three basic types of decomposable combinatorial structures (or, equivalently, ideal gas models in statistical mechanics) of size n, when their Dirichlet generating functions have multiple simple poles on the positive real axis. Examples to which our theorem applies include ones related to vector partitions and quantum field theory. Our asymptotic formula for the number of weighted partitions disproves the belief accepted in the physics literature that the main term in the asymptotics is determined by the rightmost pole.

  2. Analytical solutions for coupling fractional partial differential equations with Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Ding, Xiao-Li; Nieto, Juan J.

    2017-11-01

    In this paper, we consider the analytical solutions of coupling fractional partial differential equations (FPDEs) with Dirichlet boundary conditions on a finite domain. Firstly, the method of successive approximations is used to obtain the analytical solutions of coupling multi-term time fractional ordinary differential equations. Then, the technique of spectral representation of the fractional Laplacian operator is used to convert the coupling FPDEs to the coupling multi-term time fractional ordinary differential equations. By applying the obtained analytical solutions to the resulting multi-term time fractional ordinary differential equations, the desired analytical solutions of the coupling FPDEs are given. Our results are applied to derive the analytical solutions of some special cases to demonstrate their applicability.

  3. Stability and Hopf Bifurcation in a Reaction-Diffusion Model with Chemotaxis and Nonlocal Delay Effect

    NASA Astrophysics Data System (ADS)

    Li, Dong; Guo, Shangjiang

    Chemotaxis is an observed phenomenon in which a biological individual moves preferentially toward a relatively high concentration, which is contrary to the process of natural diffusion. In this paper, we study a reaction-diffusion model with chemotaxis and nonlocal delay effect under Dirichlet boundary condition by using Lyapunov-Schmidt reduction and the implicit function theorem. The existence, multiplicity, stability and Hopf bifurcation of spatially nonhomogeneous steady state solutions are investigated. Moreover, our results are illustrated by an application to the model with a logistic source, homogeneous kernel and one-dimensional spatial domain.

  4. Clustering and variable selection in the presence of mixed variable types and missing data.

    PubMed

    Storlie, C B; Myers, S M; Katusic, S K; Weaver, A L; Voigt, R G; Croarkin, P E; Stoeckel, R E; Port, J D

    2018-05-17

    We consider the problem of model-based clustering in the presence of many correlated, mixed continuous, and discrete variables, some of which may have missing values. Discrete variables are treated with a latent continuous variable approach, and the Dirichlet process is used to construct a mixture model with an unknown number of components. Variable selection is also performed to identify the variables that are most influential for determining cluster membership. The work is motivated by the need to cluster patients thought to potentially have autism spectrum disorder on the basis of many cognitive and/or behavioral test scores. There are a modest number of patients (486) in the data set along with many (55) test score variables (many of which are discrete valued and/or missing). The goal of the work is to (1) cluster these patients into similar groups to help identify those with similar clinical presentation and (2) identify a sparse subset of tests that inform the clusters in order to eliminate unnecessary testing. The proposed approach compares very favorably with other methods via simulation of problems of this type. The results of the autism spectrum disorder analysis suggested 3 clusters to be most likely, while only 4 test scores had high (>0.5) posterior probability of being informative. This will result in much more efficient and informative testing. The need to cluster observations on the basis of many correlated, continuous/discrete variables with missing values is a common problem in the health sciences as well as in many other disciplines. Copyright © 2018 John Wiley & Sons, Ltd.
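
    For intuition about how the Dirichlet process lets the number of clusters be inferred rather than fixed in advance, the sketch below draws partitions from the Chinese restaurant process for a few concentration parameters; it is not the authors' mixture model with latent continuous variables and variable selection, and the sample size merely echoes the 486 patients mentioned above.

    ```python
    import numpy as np
    from collections import Counter

    def chinese_restaurant_process(n, alpha, rng):
        """Draw one CRP partition of n items with concentration alpha."""
        assignments = [0]
        for i in range(1, n):
            counts = Counter(assignments)
            tables = list(counts)
            probs = np.array([counts[t] for t in tables] + [alpha], dtype=float)
            probs /= probs.sum()
            choice = rng.choice(len(probs), p=probs)
            assignments.append(tables[choice] if choice < len(tables)
                               else max(tables) + 1)   # open a new cluster
        return assignments

    rng = np.random.default_rng(0)
    for alpha in (0.5, 2.0, 10.0):
        ks = [len(set(chinese_restaurant_process(486, alpha, rng)))
              for _ in range(20)]
        print(f"alpha={alpha}: mean number of clusters ~ {np.mean(ks):.1f}")
    ```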

  5. SIBIS: a Bayesian model for inconsistent protein sequence estimation.

    PubMed

    Khenoussi, Walyd; Vanhoutrève, Renaud; Poch, Olivier; Thompson, Julie D

    2014-09-01

    The prediction of protein coding genes is a major challenge that depends on the quality of genome sequencing, the accuracy of the model used to elucidate the exonic structure of the genes and the complexity of the gene splicing process leading to different protein variants. As a consequence, today's protein databases contain a huge amount of inconsistency, due to both natural variants and sequence prediction errors. We have developed a new method, called SIBIS, to detect such inconsistencies based on the evolutionary information in multiple sequence alignments. A Bayesian framework, combined with Dirichlet mixture models, is used to estimate the probability of observing specific amino acids and to detect inconsistent or erroneous sequence segments. We evaluated the performance of SIBIS on a reference set of protein sequences with experimentally validated errors and showed that the sensitivity is significantly higher than previous methods, with only a small loss of specificity. We also assessed a large set of human sequences from the UniProt database and found evidence of inconsistency in 48% of the previously uncharacterized sequences. We conclude that the integration of quality control methods like SIBIS in automatic analysis pipelines will be critical for the robust inference of structural, functional and phylogenetic information from these sequences. Source code, implemented in C on a linux system, and the datasets of protein sequences are freely available for download at http://www.lbgi.fr/∼julie/SIBIS. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Revealing common disease mechanisms shared by tumors of different tissues of origin through semantic representation of genomic alterations and topic modeling.

    PubMed

    Chen, Vicky; Paisley, John; Lu, Xinghua

    2017-03-14

    Cancer is a complex disease driven by somatic genomic alterations (SGAs) that perturb signaling pathways and consequently cellular function. Identifying patterns of pathway perturbations would provide insights into common disease mechanisms shared among tumors, which is important for guiding treatment and predicting outcome. However, identifying perturbed pathways is challenging, because different tumors can have the same perturbed pathways that are perturbed by different SGAs. Here, we designed novel semantic representations that capture the functional similarity of distinct SGAs perturbing a common pathway in different tumors. Combining this representation with topic modeling would allow us to identify patterns in altered signaling pathways. We represented each gene with a vector of words describing its function, and we represented the SGAs of a tumor as a text document by pooling the words representing individual SGAs. We applied the nested hierarchical Dirichlet process (nHDP) model to a collection of tumors of 5 cancer types from TCGA. We identified topics (consisting of co-occurring words) representing the common functional themes of different SGAs. Tumors were clustered based on their topic associations, such that each cluster consists of tumors sharing common functional themes. The resulting clusters contained mixtures of cancer types, which indicates that different cancer types can share disease mechanisms. Survival analysis based on the clusters revealed significant differences in survival among the tumors of the same cancer type that were assigned to different clusters. The results indicate that applying topic modeling to semantic representations of tumors identifies patterns in the combinations of altered functional pathways in cancer.

  7. Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images

    PubMed Central

    Zhou, Mingyuan; Chen, Haojun; Paisley, John; Ren, Lu; Li, Lingbo; Xing, Zhengming; Dunson, David; Sapiro, Guillermo; Carin, Lawrence

    2013-01-01

    Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature. PMID:21693421

  8. The Casimir effect for parallel plates revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawakami, N. A.; Nemes, M. C.; Wreszinski, Walter F.

    2007-10-15

    The Casimir effect for a massless scalar field with Dirichlet and periodic boundary conditions (bc's) on infinite parallel plates is revisited in the local quantum field theory (lqft) framework introduced by Kay [Phys. Rev. D 20, 3052 (1979)]. The model displays a number of more realistic features than the ones he treated. In addition to local observables, such as the energy density, we propose to consider intensive variables, such as the energy per unit area ε, as fundamental observables. Adopting this view, lqft rejects Dirichlet (the same result may be proved for Neumann or mixed) bc, and accepts periodic bc: in the former case ε diverges, in the latter it is finite, as is shown by an expression for the local energy density obtained from lqft through the use of the Poisson summation formula. Another way to see this uses methods from the Euler summation formula: in the proof of regularization independence of the energy per unit area, a regularization-dependent surface term arises upon use of Dirichlet bc, but not periodic bc. For the conformally invariant scalar quantum field, this surface term is absent due to the condition of zero trace of the energy momentum tensor, as remarked by De Witt [Phys. Rep. 19, 295 (1975)]. The latter property does not hold in the application to the dark energy problem in cosmology, in which we argue that periodic bc might play a distinguished role.

  9. Stable, high-order computation of impedance-impedance operators for three-dimensional layered medium simulations.

    PubMed

    Nicholls, David P

    2018-04-01

    The faithful modelling of the propagation of linear waves in a layered, periodic structure is of paramount importance in many branches of the applied sciences. In this paper, we present a novel numerical algorithm for the simulation of such problems which is free of the artificial singularities present in related approaches. We advocate for a surface integral formulation which is phrased in terms of impedance-impedance operators that are immune to the Dirichlet eigenvalues which plague the Dirichlet-Neumann operators that appear in classical formulations. We demonstrate a high-order spectral algorithm to simulate these latter operators based upon a high-order perturbation of surfaces methodology which is rapid, robust and highly accurate. We demonstrate the validity and utility of our approach with a sequence of numerical simulations.

  10. A three-dimensional Dirichlet-to-Neumann operator for water waves over topography

    NASA Astrophysics Data System (ADS)

    Andrade, D.; Nachbin, A.

    2018-06-01

    Surface water waves are considered propagating over highly variable non-smooth topographies. For this three-dimensional problem, a Dirichlet-to-Neumann (DtN) operator is constructed, reducing the numerical modeling and evolution to the two-dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care and a Galerkin method is provided accordingly. One-dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large amplitude, rapidly varying topography. An alternative, conformal-mapping-based method is used for benchmarking. A two-dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three-dimensional DtN operator.

  11. Stable, high-order computation of impedance-impedance operators for three-dimensional layered medium simulations

    NASA Astrophysics Data System (ADS)

    Nicholls, David P.

    2018-04-01

    The faithful modelling of the propagation of linear waves in a layered, periodic structure is of paramount importance in many branches of the applied sciences. In this paper, we present a novel numerical algorithm for the simulation of such problems which is free of the artificial singularities present in related approaches. We advocate for a surface integral formulation which is phrased in terms of impedance-impedance operators that are immune to the Dirichlet eigenvalues which plague the Dirichlet-Neumann operators that appear in classical formulations. We demonstrate a high-order spectral algorithm to simulate these latter operators based upon a high-order perturbation of surfaces methodology which is rapid, robust and highly accurate. We demonstrate the validity and utility of our approach with a sequence of numerical simulations.

  12. Modification of Classical SPM for Slightly Rough Surface Scattering with Low Grazing Angle Incidence

    NASA Astrophysics Data System (ADS)

    Guo, Li-Xin; Wei, Guo-Hui; Kim, Cheyoung; Wu, Zhen-Sen

    2005-11-01

    Based on the impedance/admittance rough boundaries, the reflection coefficients and the scattering cross section with low grazing angle incidence are obtained for both VV and HH polarizations. The error of the classical perturbation method at grazing angle is overcome for the vertical polarization at a rough Neumann boundary of infinite extent. The derivation of the formulae and the numerical results show that the backscattering cross section depends on the grazing angle to the fourth power for both Neumann and Dirichlet boundary conditions with low grazing angle incidence. Our results reduce to those of the classical small perturbation method when the Neumann and Dirichlet boundary conditions are neglected. The project was supported by the National Natural Science Foundation of China under Grant No. 60101001 and the National Defense Foundation of China.

  13. Traffic Behavior Recognition Using the Pachinko Allocation Model

    PubMed Central

    Huynh-The, Thien; Banos, Oresti; Le, Ba-Vui; Bui, Dinh-Mao; Yoon, Yongik; Lee, Sungyoung

    2015-01-01

    CCTV-based behavior recognition systems have gained considerable attention in recent years in the transportation surveillance domain for identifying unusual patterns, such as traffic jams, accidents, dangerous driving and other abnormal behaviors. In this paper, a novel approach for traffic behavior modeling is presented for video-based road surveillance. The proposed system combines the pachinko allocation model (PAM) and support vector machine (SVM) for a hierarchical representation and identification of traffic behavior. A background subtraction technique using Gaussian mixture models (GMMs) and an object tracking mechanism based on Kalman filters are utilized to firstly construct the object trajectories. Then, the sparse features comprising the locations and directions of the moving objects are modeled by PAM into traffic topics, namely activities and behaviors. As a key innovation, PAM captures not only the correlation among the activities, but also among the behaviors based on the arbitrary directed acyclic graph (DAG). The SVM classifier is then utilized on top to train and recognize the traffic activity and behavior. The proposed model shows more flexibility and greater expressive power than the commonly-used latent Dirichlet allocation (LDA) approach, leading to a higher recognition accuracy in the behavior classification. PMID:26151213

  14. Numerical Study of Periodic Traveling Wave Solutions for the Predator-Prey Model with Landscape Features

    NASA Astrophysics Data System (ADS)

    Yun, Ana; Shin, Jaemin; Li, Yibao; Lee, Seunggyu; Kim, Junseok

    We numerically investigate periodic traveling wave solutions for a diffusive predator-prey system with landscape features. The landscape features are modeled through the homogeneous Dirichlet boundary condition which is imposed at the edge of the obstacle domain. To effectively treat the Dirichlet boundary condition, we employ a robust and accurate numerical technique by using a boundary control function. We also propose a robust algorithm for calculating the numerical periodicity of the traveling wave solution. In numerical experiments, we show that periodic traveling waves which move out and away from the obstacle are effectively generated. We explain the formation of the traveling waves by comparing the wavelengths. The spatial asynchrony has been shown in quantitative detail for various obstacles. Furthermore, we apply our numerical technique to the complicated real landscape features.

  15. Sound-turbulence interaction in transonic boundary layers

    NASA Astrophysics Data System (ADS)

    Lelostec, Ludovic; Scalo, Carlo; Lele, Sanjiva

    2014-11-01

    Acoustic wave scattering in a transonic boundary layer is investigated through a novel approach. Instead of simulating directly the interaction of an incoming oblique acoustic wave with a turbulent boundary layer, suitable Dirichlet conditions are imposed at the wall to reproduce only the reflected wave resulting from the interaction of the incident wave with the boundary layer. The method is first validated using the laminar boundary layer profiles in a parallel flow approximation. For this scattering problem an exact inviscid solution can be found in the frequency domain which requires numerical solution of an ODE. The Dirichlet conditions are imposed in a high-fidelity unstructured compressible flow solver for Large Eddy Simulation (LES), CharLESx. The acoustic field of the reflected wave is then solved and the interaction between the boundary layer and sound scattering can be studied.

  16. Step scaling and the Yang-Mills gradient flow

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2014-06-01

    The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0 , T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.

  17. Heat kernel for the elliptic system of linear elasticity with boundary conditions

    NASA Astrophysics Data System (ADS)

    Taylor, Justin; Kim, Seick; Brown, Russell

    2014-10-01

    We consider the elliptic system of linear elasticity with bounded measurable coefficients in a domain where the second Korn inequality holds. We construct the heat kernel of the system subject to Dirichlet, Neumann, or mixed boundary conditions under the assumption that weak solutions of the elliptic system are Hölder continuous in the interior. Moreover, we show that if weak solutions of the mixed problem are Hölder continuous up to the boundary, then the corresponding heat kernel has a Gaussian bound. In particular, if the domain is a two-dimensional Lipschitz domain satisfying a corkscrew or non-tangential accessibility condition on the set where we specify the Dirichlet boundary condition, then we show that the heat kernel has a Gaussian bound. As an application, we construct the Green's function for the elliptic mixed problem in such a domain.

  18. A simple way to unify multicriteria decision analysis (MCDA) and stochastic multicriteria acceptability analysis (SMAA) using a Dirichlet distribution in benefit-risk assessment.

    PubMed

    Saint-Hilary, Gaelle; Cadour, Stephanie; Robert, Veronique; Gasparini, Mauro

    2017-05-01

    Quantitative methodologies have been proposed to support decision making in drug development and monitoring. In particular, multicriteria decision analysis (MCDA) and stochastic multicriteria acceptability analysis (SMAA) are useful tools to assess the benefit-risk ratio of medicines according to the performances of the treatments on several criteria, accounting for the preferences of the decision makers regarding the relative importance of these criteria. However, even in its probabilistic form, MCDA requires exact elicitation of the weights of the criteria by the decision makers, which may be difficult to achieve in practice. SMAA allows for more flexibility and can be used with unknown or partially known preferences, but it is less popular due to its increased complexity and the high degree of uncertainty in its results. In this paper, we propose a simple model as a generalization of MCDA and SMAA, by applying a Dirichlet distribution to the weights of the criteria and by making its parameters vary. This single model makes it possible to fit both MCDA and SMAA, and allows for a more extended exploration of the benefit-risk assessment of treatments. The precision of its results depends on the precision parameter of the Dirichlet distribution, which can be naturally interpreted as the strength of confidence of the decision makers in their elicitation of preferences. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
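
    The following hedged sketch illustrates the basic mechanism described above: criteria weights are drawn from a Dirichlet distribution centred on the elicited weights, and the precision parameter moves the analysis between an SMAA-like and an MCDA-like regime. The criteria, utilities and elicited weights are invented for illustration and do not come from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      elicited = np.array([0.5, 0.3, 0.2])          # decision makers' point weights
      utilities = {"drug A": np.array([0.70, 0.40, 0.90]),
                   "drug B": np.array([0.60, 0.80, 0.50])}

      def acceptability(precision, n_draws=20_000):
          """P(drug A outscores drug B) when weights ~ Dirichlet(precision * elicited)."""
          w = rng.dirichlet(precision * elicited, size=n_draws)
          return float((w @ (utilities["drug A"] - utilities["drug B"]) > 0).mean())

      for s in (1.0, 10.0, 1000.0):                 # SMAA-like ... MCDA-like
          print(f"precision={s:7.1f}  P(A better than B) = {acceptability(s):.3f}")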

  19. Multiple Positive Solutions in the Second Order Autonomous Nonlinear Boundary Value Problems

    NASA Astrophysics Data System (ADS)

    Atslega, Svetlana; Sadyrbaev, Felix

    2009-09-01

    We construct the second order autonomous equations with arbitrarily large number of positive solutions satisfying homogeneous Dirichlet boundary conditions. Phase plane approach and bifurcation of solutions are the main tools.

  20. Variational Problems with Long-Range Interaction

    NASA Astrophysics Data System (ADS)

    Soave, Nicola; Tavares, Hugo; Terracini, Susanna; Zilio, Alessandro

    2018-06-01

    We consider a class of variational problems for densities that repel each other at a distance. Typical examples are given by the Dirichlet functional and the Rayleigh functional, the former being D(u) = \sum_{i=1}^k \int_{Ω} |\nabla u_i|^2 \, dx.

  1. Bayesian correlated clustering to integrate multiple datasets

    PubMed Central

    Kirk, Paul; Griffin, Jim E.; Savage, Richard S.; Ghahramani, Zoubin; Wild, David L.

    2012-01-01

    Motivation: The integration of multiple datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct—but often complementary—information. We present a Bayesian method for the unsupervised integrative modelling of multiple datasets, which we refer to as MDI (Multiple Dataset Integration). MDI can integrate information from a wide range of different datasets and data types simultaneously (including the ability to model time series data explicitly using Gaussian processes). Each dataset is modelled using a Dirichlet-multinomial allocation (DMA) mixture model, with dependencies between these models captured through parameters that describe the agreement among the datasets. Results: Using a set of six artificially constructed time series datasets, we show that MDI is able to integrate a significant number of datasets simultaneously, and that it successfully captures the underlying structural similarity between the datasets. We also analyse a variety of real Saccharomyces cerevisiae datasets. In the two-dataset case, we show that MDI’s performance is comparable with the present state-of-the-art. We then move beyond the capabilities of current approaches and integrate gene expression, chromatin immunoprecipitation–chip and protein–protein interaction data, to identify a set of protein complexes for which genes are co-regulated during the cell cycle. Comparisons to other unsupervised data integration techniques—as well as to non-integrative approaches—demonstrate that MDI is competitive, while also providing information that would be difficult or impossible to extract using other methods. Availability: A Matlab implementation of MDI is available from http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/. Contact: D.L.Wild@warwick.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23047558

  2. Automatic sleep classification using a data-driven topic model reveals latent sleep states.

    PubMed

    Koch, Henriette; Christensen, Julie A E; Frandsen, Rune; Zoetmulder, Marielle; Arvastson, Lars; Christensen, Soren R; Jennum, Poul; Sorensen, Helge B D

    2014-09-30

    The gold standard for sleep classification uses manual scoring of polysomnography despite points of criticism such as oversimplification, low inter-rater reliability and the standard having been designed using young and healthy subjects. To address this criticism and reveal the latent sleep states, this study developed a general and automatic sleep classifier using a data-driven approach. Spectral EEG and EOG measures and eye correlation in 1 s windows were calculated and each sleep epoch was expressed as a mixture of probabilities of latent sleep states by using the topic model Latent Dirichlet Allocation. Model application was tested on control subjects and patients with periodic leg movements (PLM) representing a non-neurodegenerative group, and patients with idiopathic REM sleep behavior disorder (iRBD) and Parkinson's Disease (PD) representing a neurodegenerative group. The model was optimized using 50 subjects and validated on 76 subjects. The optimized sleep model used six topics, and the topic probabilities changed smoothly during transitions. According to the manual scorings, the model scored an overall subject-specific accuracy of 68.3 ± 7.44 (% μ ± σ) and group-specific accuracies of 69.0 ± 4.62 (control), 70.1 ± 5.10 (PLM), 67.2 ± 8.30 (iRBD) and 67.7 ± 9.07 (PD). Statistics of the latent sleep state content showed accordance with the sleep stages defined in the gold standard. However, this study indicates that sleep contains six diverse latent sleep states and that state transitions are continuous processes. The model is generally applicable and may contribute to research in neurodegenerative diseases and sleep disorders. Copyright © 2014 Elsevier B.V. All rights reserved.
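
    A rough sketch of the modelling step only, not the authors' pipeline: each sleep epoch is treated as a document whose words are discretised 1 s spectral/EOG features, and Latent Dirichlet Allocation expresses it as a mixture over six latent sleep states. The feature vocabulary and counts below are synthetic placeholders.

      import numpy as np
      from sklearn.decomposition import LatentDirichletAllocation

      rng = np.random.default_rng(2)
      n_epochs, vocab_size = 200, 40               # 40 discretised feature "words"
      counts = rng.poisson(lam=2.0, size=(n_epochs, vocab_size))

      lda = LatentDirichletAllocation(n_components=6, random_state=0)  # six latent states
      state_mix = lda.fit_transform(counts)        # per-epoch probabilities over states

      print(state_mix[0].round(3), state_mix[0].sum())  # mixture for the first epoch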

  3. Geometric comparison of popular mixture-model distances.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Scott A.

    2010-09-01

    Statistical Latent Dirichlet Analysis produces mixture model data that are geometrically equivalent to points lying on a regular simplex in moderate to high dimensions. Numerous other statistical models and techniques also produce data in this geometric category, even though the meaning of the axes and coordinate values differs significantly. A distance function is used to further analyze these points, for example to cluster them. Several different distance functions are popular amongst statisticians; which distance function is chosen is usually driven by the historical preference of the application domain, information-theoretic considerations, or by the desirability of the clustering results. Relatively little consideration is usually given to how distance functions geometrically transform data, or to the distances' algebraic properties. Here we take a look at these issues, in the hope of providing complementary insight and inspiring further geometric thought. Several popular distances, χ², Jensen-Shannon divergence, and the square of the Hellinger distance, are shown to be nearly equivalent: in terms of functional forms after transformations, factorizations, and series expansions, and in terms of the shape and proximity of constant-value contours. This is somewhat surprising given that their original functional forms look quite different. Cosine similarity is the square of the Euclidean distance, and a similar geometric relationship is shown with Hellinger and another cosine. We suggest a geodesic variation of Hellinger. The square-root projection that arises in Hellinger distance is briefly compared to standard normalization for Euclidean distance. We include detailed derivations of some ratio and difference bounds for illustrative purposes. We provide some constructions that nearly achieve the worst-case ratios, relevant for contours.
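
    A small numerical illustration of the near-equivalence claim, using one common symmetrised form of each distance on nearby points of the probability simplex; the example vectors are arbitrary and the constant factors are chosen so that the three quantities agree to leading order.

      import numpy as np

      def chi2(p, q):                    # symmetrised chi-squared distance
          return 0.5 * np.sum((p - q) ** 2 / (p + q))

      def hellinger_sq(p, q):            # squared Hellinger distance
          return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

      def jensen_shannon(p, q):
          m = 0.5 * (p + q)
          kl = lambda a, b: np.sum(np.where(a > 0, a * np.log(a / b), 0.0))
          return 0.5 * kl(p, m) + 0.5 * kl(q, m)

      p = np.array([0.60, 0.30, 0.10])
      q = np.array([0.55, 0.33, 0.12])
      print("chi^2 / 2      =", round(chi2(p, q) / 2, 6))
      print("Jensen-Shannon =", round(jensen_shannon(p, q), 6))
      print("Hellinger^2    =", round(hellinger_sq(p, q), 6))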

  4. Study of a mixed dispersal population dynamics model

    DOE PAGES

    Chugunova, Marina; Jadamba, Baasansuren; Kao, Chiu -Yen; ...

    2016-08-27

    In this study, we consider a mixed dispersal model with periodic and Dirichlet boundary conditions and its corresponding linear eigenvalue problem. This model describes the time evolution of a population which disperses both locally and non-locally. We investigate how the long-time dynamics depend on the parameter values. Furthermore, we study the minimization of the principal eigenvalue under the constraints that the resource function is bounded from above and below, and with a fixed total integral. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for the species to die out more slowly or survive more easily. Our numerical simulations indicate that the optimal favorable region tends to be a simply-connected domain. Numerous results are shown to demonstrate various scenarios of optimal favorable regions for periodic and Dirichlet boundary conditions.

  5. The spectrum, radiation conditions and the Fredholm property for the Dirichlet Laplacian in a perforated plane with semi-infinite inclusions

    NASA Astrophysics Data System (ADS)

    Cardone, G.; Durante, T.; Nazarov, S. A.

    2017-07-01

    We consider the spectral Dirichlet problem for the Laplace operator in the plane Ω∘ with a double-periodic perforation, as well as in the domain Ω• with a semi-infinite foreign inclusion, for which the Floquet-Bloch technique and the Gelfand transform do not apply directly. We describe waves which are localized near the inclusion and propagate along it. We give a formulation of the problem with radiation conditions that provides a Fredholm operator of index zero. The main conclusion concerns the spectra σ∘ and σ• of the problems in Ω∘ and Ω•: we present a concrete geometry which supports the relation σ∘ ⫋ σ•, due to a new non-empty spectral band caused by the semi-infinite inclusion, called an open waveguide in the double-periodic medium.

  6. Dirichlet Component Regression and its Applications to Psychiatric Data.

    PubMed

    Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel

    2008-08-15

    We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook's distance, and a local jackknife influence metric.
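
    An illustrative sketch, not the authors' estimator, of what a Dirichlet regression for compositional outcomes involves: component proportions follow a Dirichlet distribution whose log-parameters depend linearly on a covariate, and the coefficients are estimated by maximum likelihood. The data are simulated and the parameterisation is one simple choice among several.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import dirichlet

      rng = np.random.default_rng(4)
      n, k = 300, 3                                 # subjects, score components
      x = rng.normal(size=n)                        # one baseline covariate
      B_true = np.array([[1.0, 0.4], [0.8, -0.3], [0.6, 0.2]])  # (intercept, slope) per component

      def alphas(B_flat):
          B = B_flat.reshape(k, 2)
          return np.exp(B[:, 0] + np.outer(x, B[:, 1]))         # shape (n, k)

      Y = np.vstack([rng.dirichlet(a) for a in alphas(B_true.ravel())])

      def nll(B_flat):                              # negative Dirichlet log-likelihood
          A = alphas(B_flat)
          return -sum(dirichlet.logpdf(Y[i], A[i]) for i in range(n))

      fit = minimize(nll, np.zeros(2 * k), method="L-BFGS-B", bounds=[(-3.0, 3.0)] * (2 * k))
      print(np.round(fit.x.reshape(k, 2), 2))       # should be close to B_true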

  7. Unstable Mode Solutions to the Klein-Gordon Equation in Kerr-anti-de Sitter Spacetimes

    NASA Astrophysics Data System (ADS)

    Dold, Dominic

    2017-03-01

    For any cosmological constant Λ = -3/ℓ^2 < 0 and any α < 9/4, we find a Kerr-AdS spacetime (M, g_KAdS) in which the Klein-Gordon equation □_{g_KAdS} ψ + (α/ℓ^2) ψ = 0 has an exponentially growing mode solution satisfying a Dirichlet boundary condition at infinity. The spacetime violates the Hawking-Reall bound r_+^2 > |a|ℓ. We obtain an analogous result for Neumann boundary conditions if 5/4 < α < 9/4. Moreover, in the Dirichlet case, one can prove that, for any Kerr-AdS spacetime violating the Hawking-Reall bound, there exists an open family of masses α such that the corresponding Klein-Gordon equation permits exponentially growing mode solutions. Our result adopts methods of Shlapentokh-Rothman developed in (Commun. Math. Phys. 329:859-891, 2014) and provides the first rigorous construction of a superradiant instability for negative cosmological constant.

  8. First-passage dynamics of linear stochastic interface models: weak-noise theory and influence of boundary conditions

    NASA Astrophysics Data System (ADS)

    Gross, Markus

    2018-03-01

    We consider a one-dimensional fluctuating interfacial profile governed by the Edwards–Wilkinson or the stochastic Mullins-Herring equation for periodic, standard Dirichlet and Dirichlet no-flux boundary conditions. The minimum action path of an interfacial fluctuation conditioned to reach a given maximum height M at a finite (first-passage) time T is calculated within the weak-noise approximation. Dynamic and static scaling functions for the profile shape are obtained in the transient and the equilibrium regime, i.e. for first-passage times T smaller or larger than the characteristic relaxation time, respectively. In both regimes, the profile approaches the maximum height M with a universal algebraic time dependence characterized solely by the dynamic exponent of the model. It is shown that, in the equilibrium regime, the spatial shape of the profile depends sensitively on boundary conditions and conservation laws, but it is essentially independent of them in the transient regime.

  9. Stereochemistry of silicon in oxygen-containing compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serezhkin, V. N., E-mail: Serezhkin@samsu.ru; Urusov, V. S.

    2017-01-15

    Specific stereochemical features of silicon in oxygen-containing compounds, including hybrid silicates with all oxygen atoms of SiO_n groups (n = 4, 5, or 6) entering into the composition of organic anions or molecules, are described by characteristics of Voronoi-Dirichlet polyhedra. It is found that in rutile-like stishovite and post-stishovite phases with structures similar to those of CaCl_2, α-PbO_2, or pyrite FeS_2, the volume of the Voronoi-Dirichlet polyhedra of silicon and oxygen atoms decreases linearly with pressure increasing to 268 GPa. Based on these results, the possibility of formation of new post-stishovite phases is shown, namely the fluorite-like structure (transition predicted at ~400 GPa) and a body-centered cubic lattice with statistical arrangement of silicon and oxygen atoms (~900 GPa).

  10. Explicit treatment for Dirichlet, Neumann and Cauchy boundary conditions in POD-based reduction of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2018-05-01

    In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurate for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
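
    The following numpy sketch conveys the general idea in a much-simplified 1D setting and is not the authors' exact construction: snapshots are split into a lifting profile that carries the Dirichlet boundary values exactly and a homogeneous remainder that is reduced by POD, so the reduced reconstruction satisfies the boundary values by construction. The grid, snapshots and boundary heads are invented.

      import numpy as np

      rng = np.random.default_rng(3)
      x = np.linspace(0.0, 1.0, 101)
      h_left, h_right = 2.0, 1.0                   # Dirichlet boundary heads

      lift = h_left + (h_right - h_left) * x       # satisfies the boundary values exactly
      snaps = np.stack([lift + a * np.sin(np.pi * x) + b * np.sin(2 * np.pi * x)
                        for a, b in rng.normal(size=(50, 2))], axis=1)

      # POD basis from the homogeneous (boundary-free) part of the snapshots only.
      U, s, _ = np.linalg.svd(snaps - lift[:, None], full_matrices=False)
      basis = U[:, :3]

      # Reduce/reconstruct a new field: lifting term + projected homogeneous part.
      new_field = lift + 0.7 * np.sin(np.pi * x) - 0.2 * np.sin(2 * np.pi * x)
      recon = lift + basis @ (basis.T @ (new_field - lift))
      print("boundary error:", abs(recon[0] - h_left), abs(recon[-1] - h_right))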

  11. On the connection between multigrid and cyclic reduction

    NASA Technical Reports Server (NTRS)

    Merriam, M. L.

    1984-01-01

    A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in one, two, or three dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known as Multigrid Reduction methods in the literature. It involves one approximation with a known error term. It is possible to relate the error term in this approximation to certain eigenvector components of the error. These are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.
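
    A small numpy sketch of the one-dimensional statement above: for the standard (-1, 2, -1) discretisation of Poisson's equation with Dirichlet values folded into the right-hand side, cyclic reduction solves the linear system exactly (to rounding error). The grid size and data are illustrative.

      import numpy as np

      def cyclic_reduction(r):
          """Solve tridiag(-1, 2, -1) u = r for len(r) = 2**k - 1 (zero external values)."""
          n = len(r)
          if n == 1:
              return np.array([r[0] / 2.0])
          # Eliminating the odd-numbered unknowns leaves the same (-1, 2, -1) stencil
          # on the even-numbered ones, with right-hand side r[i-1] + 2 r[i] + r[i+1].
          coarse = cyclic_reduction(r[0:n-2:2] + 2.0 * r[1::2] + r[2::2])
          u = np.zeros(n)
          u[1::2] = coarse
          left = np.concatenate(([0.0], coarse))   # lower neighbour (0 = boundary)
          right = np.concatenate((coarse, [0.0]))  # upper neighbour (0 = boundary)
          u[0::2] = 0.5 * (r[0::2] + left + right) # back-substitution
          return u

      n = 2**7 - 1
      h = 1.0 / (n + 1)
      xg = h * np.arange(1, n + 1)
      rhs = h**2 * np.pi**2 * np.sin(np.pi * xg)   # -u'' = pi^2 sin(pi x)
      rhs[0] += 1.0                                # u(0) = 1 folded into the rhs
      rhs[-1] += 2.0                               # u(1) = 2 folded into the rhs

      A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
      print("max deviation from direct solve:",
            np.abs(cyclic_reduction(rhs) - np.linalg.solve(A, rhs)).max())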

  12. On degenerate coupled transport processes in porous media with memory phenomena

    NASA Astrophysics Data System (ADS)

    Beneš, Michal; Pažanin, Igor

    2018-06-01

    In this paper we prove the existence of weak solutions to degenerate parabolic systems arising from the fully coupled moisture movement, solute transport of dissolved species and heat transfer through porous materials. Physically relevant mixed Dirichlet-Neumann boundary conditions and initial conditions are considered. Existence of a global weak solution of the problem is proved by means of semidiscretization in time, by proving the necessary uniform estimates, and by passing to the limit of the discrete approximations. Degeneration occurs in the nonlinear transport coefficients, which are not assumed to be bounded below and above by positive constants. Degeneracies in the transport coefficients are overcome by proving suitable a priori L^∞ estimates based on the De Giorgi and Moser iteration techniques.

  13. A Case Study on Sepsis Using PubMed and Deep Learning for Ontology Learning.

    PubMed

    Arguello Casteleiro, Mercedes; Maseda Fernandez, Diego; Demetriou, George; Read, Warren; Fernandez Prieto, Maria Jesus; Des Diz, Julio; Nenadic, Goran; Keane, John; Stevens, Robert

    2017-01-01

    We investigate the application of distributional semantics models for facilitating unsupervised extraction of biomedical terms from unannotated corpora. Term extraction is used as the first step of an ontology learning process that aims at the (semi-)automatic annotation of biomedical concepts and relations from more than 300K PubMed titles and abstracts. We experimented both with traditional distributional semantics methods, such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA), and with the neural language models CBOW and Skip-gram from Deep Learning. The evaluation concentrates on sepsis, a major life-threatening condition, and shows that the Deep Learning models outperform LSA and LDA with much higher precision.

  14. Casimir interaction between spheres in (D + 1)-dimensional Minkowski spacetime

    NASA Astrophysics Data System (ADS)

    Teo, L. P.

    2014-05-01

    We consider the Casimir interaction between two spheres in (D + 1)-dimensional Minkowski spacetime due to the vacuum fluctuations of scalar fields. We consider combinations of Dirichlet and Neumann boundary conditions. The TGTG formula of the Casimir interaction energy is derived. The computations of the T matrices of the two spheres are straightforward. To compute the two G matrices, known as translation matrices, which relate the hyper-spherical waves in two spherical coordinate frames that differ by a translation, we generalize the operator approach employed in [39]. The result is expressed in terms of an integral over Gegenbauer polynomials. In contrast to the D = 3 case, we do not re-express the integral in terms of 3j-symbols and hyper-spherical waves, which in principle can be done but does not simplify the formula. Using our expression for the Casimir interaction energy, we derive the large separation and small separation asymptotic expansions of the Casimir interaction energy. In the large separation regime, we find that the Casimir interaction energy is of order L^{-2D+3}, L^{-2D+1} and L^{-2D-1} respectively for Dirichlet-Dirichlet, Dirichlet-Neumann and Neumann-Neumann boundary conditions, where L is the center-to-center distance of the two spheres. In the small separation regime, we confirm that the leading term of the Casimir interaction agrees with the proximity force approximation, whose order is set by the distance d between the two spheres. Another main result of this work is the analytic computation of the next-to-leading order term in the small separation asymptotic expansion. This term is computed using careful order analysis as well as a perturbation method. In the case where the radius of one of the spheres goes to infinity, we find that the results agree with the ones we derive for the sphere-plate configuration. When D = 3, we also recover previously known results. We find that when D is large, the ratio of the next-to-leading order term to the leading order term is linear in D, indicating a larger correction at higher dimensions. The methodologies employed in this work and the results obtained can be used to study the one-loop effective action of the system of two spherical objects in the universe.

  15. Using Bayesian Nonparametric Hidden Semi-Markov Models to Disentangle Affect Processes during Marital Interaction

    PubMed Central

    Griffin, William A.; Li, Xun

    2016-01-01

    Sequential affect dynamics generated during the interaction of intimate dyads, such as married couples, are associated with a cascade of effects—some good and some bad—on each partner, close family members, and other social contacts. Although the effects are well documented, the probabilistic structures associated with micro-social processes connected to the varied outcomes remain enigmatic. Using extant data we developed a method of classifying and subsequently generating couple dynamics using a Hierarchical Dirichlet Process Hidden semi-Markov Model (HDP-HSMM). Our findings indicate that several key aspects of existing models of marital interaction are inadequate: affect state emissions and their durations, along with the expected variability differences between distressed and nondistressed couples are present but highly nuanced; and most surprisingly, heterogeneity among highly satisfied couples necessitate that they be divided into subgroups. We review how this unsupervised learning technique generates plausible dyadic sequences that are sensitive to relationship quality and provide a natural mechanism for computational models of behavioral and affective micro-social processes. PMID:27187319

  16. Boundary conditions in Chebyshev and Legendre methods

    NASA Technical Reports Server (NTRS)

    Canuto, C.

    1984-01-01

    Two different ways of treating non-Dirichlet boundary conditions in Chebyshev and Legendre collocation methods are discussed for second order differential problems. An error analysis is provided. The effect of preconditioning the corresponding spectral operators by finite difference matrices is also investigated.

  17. Bacterial diversity among four healthcare-associated institutes in Taiwan.

    PubMed

    Chen, Chang-Hua; Lin, Yaw-Ling; Chen, Kuan-Hsueh; Chen, Wen-Pei; Chen, Zhao-Feng; Kuo, Han-Yueh; Hung, Hsueh-Fen; Tang, Chuan Yi; Liou, Ming-Li

    2017-08-15

    Indoor microbial communities have important implications for human health, especially in health-care institutes (HCIs). The factors that determine the diversity and composition of microbiomes in a built environment remain unclear. Herein, we used 16S rRNA amplicon sequencing to investigate the relationships between building attributes and surface bacterial communities among four HCIs located in three buildings. We examined the surface bacterial communities and environmental parameters in the buildings supplied with different ventilation types and compared the results using a Dirichlet multinomial mixture (DMM)-based approach. A total of 203 samples from the four HCIs were analyzed. Four bacterial communities were grouped using the DMM-based approach, which were highly similar to those in the 4 HCIs. The α-diversity and β-diversity in the naturally ventilated building were different from the conditioner-ventilated building. The bacterial source composition varied across each building. Nine genera were found as the core microbiota shared by all the areas, of which Acinetobacter, Enterobacter, Pseudomonas, and Staphylococcus are regarded as healthcare-associated pathogens (HAPs). The observed relationship between environmental parameters such as core microbiota and surface bacterial diversity suggests that we might manage indoor environments by creating new sanitation protocols, adjusting the ventilation design, and further understanding the transmission routes of HAPs.

  18. Analytical Solutions for an Escape Problem in a Disc with an Arbitrary Distribution of Exit Holes Along Its Boundary

    NASA Astrophysics Data System (ADS)

    Marshall, J. S.

    2016-12-01

    We analytically construct solutions for the mean first-passage time and splitting probabilities for the escape problem of a particle moving with continuous Brownian motion in a confining planar disc with an arbitrary distribution (i.e., of any number, size and spacing) of exit holes/absorbing sections along its boundary. The governing equations for these quantities are Poisson's equation with a (non-zero) constant forcing term and Laplace's equation, respectively, and both are subject to a mixture of homogeneous Neumann and Dirichlet boundary conditions. Our solutions are expressed as explicit closed formulae written in terms of a parameterising variable via a conformal map, using special transcendental functions that are defined in terms of an associated Schottky group. They are derived by exploiting recent results for a related problem of fluid mechanics that describes a unidirectional flow over "no-slip/no-shear" surfaces, as well as results from potential theory, all of which were themselves derived using the same theory of Schottky groups. They are exact up to the determination of a finite set of mapping parameters, which is performed numerically. Their evaluation also requires the numerical inversion of the parameterising conformal map. Computations for a series of illustrative examples are also presented.

  19. Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot

    PubMed Central

    Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki

    2018-01-01

    In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. The results support our theoretical outcomes. PMID:29872389

  20. Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot.

    PubMed

    Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki

    2018-01-01

    In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback-Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. The results support our theoretical outcomes.
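
    As a compact illustration of the lazy greedy selection step mentioned above, the sketch below maximises a toy monotone submodular set function (weighted coverage of object "features") in place of the Monte Carlo information-gain estimate; the action names and feature sets are made up.

      import heapq

      coverage = {                       # action -> features of the object it reveals
          "look":  {"color", "shape"},
          "grasp": {"weight", "shape"},
          "shake": {"sound", "weight"},
          "knock": {"sound"},
      }

      def gain(selected, action):
          """Marginal value of one more action: number of newly covered features."""
          seen = set().union(*(coverage[a] for a in selected)) if selected else set()
          return len(coverage[action] - seen)

      def lazy_greedy(budget):
          selected = []
          heap = [(-gain([], a), a) for a in coverage]   # stale upper bounds on gains
          heapq.heapify(heap)
          while len(selected) < budget and heap:
              neg_g, a = heapq.heappop(heap)
              g = gain(selected, a)                      # refresh the stale bound
              if not heap or g >= -heap[0][0]:           # still the best candidate
                  if g > 0:
                      selected.append(a)
              else:
                  heapq.heappush(heap, (-g, a))          # push back with updated gain
          return selected

      print(lazy_greedy(budget=2))       # e.g. ['grasp', 'shake'] for these toy sets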

  1. A Dirichlet process model for classifying and forecasting epidemic curves.

    PubMed

    Nsoesie, Elaine O; Leman, Scotland C; Marathe, Madhav V

    2014-01-09

    A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997-2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods' performance was comparable. Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial.
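
    A toy, much-simplified stand-in for the matching idea, not the DP model itself: a partially observed epidemic curve is compared against a small library of simulated curves under i.i.d. Gaussian noise, and the resulting posterior weights give a forecast of the peak week. All curves and the noise scale are synthetic.

      import numpy as np

      rng = np.random.default_rng(5)
      weeks = np.arange(30)

      def curve(peak, height, width):
          return height * np.exp(-0.5 * ((weeks - peak) / width) ** 2)

      library = [curve(p, h, w) for p, h, w in
                 [(12, 800, 3), (15, 600, 4), (18, 900, 5), (20, 400, 6)]]

      truth = curve(15, 620, 4)
      observed = truth[:10] + rng.normal(scale=30.0, size=10)   # first 10 weeks only

      log_w = np.array([-0.5 * np.sum((observed - c[:10]) ** 2) / 30.0**2 for c in library])
      post = np.exp(log_w - log_w.max()); post /= post.sum()
      peaks = np.array([weeks[np.argmax(c)] for c in library])

      print("posterior over library curves:", post.round(3))
      print("forecast peak week:", float(post @ peaks), "(true peak: 15)")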

  2. A characteristic based volume penalization method for general evolution problems applied to compressible viscous flows

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Kasimov, Nurlybek; Vasilyev, Oleg V.

    2014-04-01

    In order to introduce solid obstacles into flows, several different methods are used, including volume penalization methods which prescribe appropriate boundary conditions by applying local forcing to the constitutive equations. One well known method is Brinkman penalization, which models solid obstacles as porous media. While it has been adapted for compressible, incompressible, viscous and inviscid flows, it is limited in the types of boundary conditions that it imposes, as are most volume penalization methods. Typically, approaches are limited to Dirichlet boundary conditions. In this paper, Brinkman penalization is extended for generalized Neumann and Robin boundary conditions by introducing hyperbolic penalization terms with characteristics pointing inward on solid obstacles. This Characteristic-Based Volume Penalization (CBVP) method is a comprehensive approach to conditions on immersed boundaries, providing for homogeneous and inhomogeneous Dirichlet, Neumann, and Robin boundary conditions on hyperbolic and parabolic equations. This CBVP method can be used to impose boundary conditions for both integrated and non-integrated variables in a systematic manner that parallels the prescription of exact boundary conditions. Furthermore, the method does not depend upon a physical model, as with porous media approach for Brinkman penalization, and is therefore flexible for various physical regimes and general evolutionary equations. Here, the method is applied to scalar diffusion and to direct numerical simulation of compressible, viscous flows. With the Navier-Stokes equations, both homogeneous and inhomogeneous Neumann boundary conditions are demonstrated through external flow around an adiabatic and heated cylinder. Theoretical and numerical examination shows that the error from penalized Neumann and Robin boundary conditions can be rigorously controlled through an a priori penalization parameter η. The error on a transient boundary is found to converge as O(η), which is more favorable than the error convergence of the already established Dirichlet boundary condition.
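
    For contrast with the characteristic-based method described above, the sketch below shows only the classical Brinkman-type (Dirichlet) volume penalization it generalises: a 1D heat equation with an extra forcing term that drives the solution towards a prescribed wall temperature inside a masked obstacle. The grid, time step and penalization parameter eta are illustrative.

      import numpy as np

      nx, L, kappa, eta = 201, 1.0, 1.0, 1.0e-4    # eta = penalization parameter
      dx = L / (nx - 1)
      dt = 0.2 * dx**2 / kappa                     # explicit stability margin
      x = np.linspace(0.0, L, nx)

      mask = (x > 0.4) & (x < 0.6)                 # obstacle occupies 0.4 < x < 0.6
      T_wall = 1.0                                 # Dirichlet value enforced inside it
      T = np.zeros(nx)

      for _ in range(20_000):
          lap = np.zeros(nx)
          lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
          T += dt * (kappa * lap - mask * (T - T_wall) / eta)
          T[0], T[-1] = 0.0, 0.0                   # outer Dirichlet boundaries

      print("mean value inside obstacle:", T[mask].mean(), "(target", T_wall, ")")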

  3. A numerical technique for linear elliptic partial differential equations in polygonal domains.

    PubMed

    Hashemzadeh, P; Fokas, A S; Smitheman, S A

    2015-03-08

    Integral representations for the solution of linear elliptic partial differential equations (PDEs) can be obtained using Green's theorem. However, these representations involve both the Dirichlet and the Neumann values on the boundary, and for a well-posed boundary-value problem (BVP) one of these functions is unknown. A new transform method for solving BVPs for linear and integrable nonlinear PDEs, usually referred to as the unified transform (or the Fokas transform), was introduced by the second author in the late nineties. For linear elliptic PDEs, this method can be considered as the analogue of the Green's function approach, but now it is formulated in the complex Fourier plane instead of the physical plane. It employs two global relations, also formulated in the Fourier plane, which couple the Dirichlet and the Neumann boundary values. These relations can be used to characterize the unknown boundary values in terms of the given boundary data, yielding an elegant approach for determining the Dirichlet-to-Neumann map. The numerical implementation of the unified transform can be considered as the counterpart in the Fourier plane of the well-known boundary integral method, which is formulated in the physical plane. For this implementation, one must choose (i) a suitable basis for expanding the unknown functions and (ii) an appropriate set of complex values, which we refer to as collocation points, at which to evaluate the global relations. Here, by employing a variety of examples, we present simple guidelines for how the above choices can be made. Furthermore, we provide concrete rules for choosing the collocation points so that the condition number of the matrix of the associated linear system remains low.

  4. Nonparametric Bayesian models for a spatial covariance.

    PubMed

    Reich, Brian J; Fuentes, Montserrat

    2012-01-01

    A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.

  5. Mixture and odorant processing in the olfactory systems of insects: a comparative perspective.

    PubMed

    Clifford, Marie R; Riffell, Jeffrey A

    2013-11-01

    Natural olfactory stimuli are often complex mixtures of volatiles, of which the identities and ratios of constituents are important for odor-mediated behaviors. Despite this importance, the mechanism by which the olfactory system processes this complex information remains an area of active study. In this review, we describe recent progress in how odorants and mixtures are processed in the brain of insects. We use a comparative approach toward contrasting olfactory coding and the behavioral efficacy of mixtures in different insect species, and organize these topics around four sections: (1) Examples of the behavioral efficacy of odor mixtures and the olfactory environment; (2) mixture processing in the periphery; (3) mixture coding in the antennal lobe; and (4) evolutionary implications and adaptations for olfactory processing. We also include pertinent background information about the processing of individual odorants and comparative differences in wiring and anatomy, as these topics have been richly investigated and inform the processing of mixtures in the insect olfactory system. Finally, we describe exciting studies that have begun to elucidate the role of the processing of complex olfactory information in evolution and speciation.

  6. DUTIR at TREC 2009: Chemical IR Track

    DTIC Science & Technology

    2009-11-01

    We set the Dirichlet prior empirically at 1,500 as recommended in [2]. For example, Topic 15 “Betaines for peripheral arterial disease” is ... converted into the following Indri query: #combine(betaines for peripheral arterial disease), which produces results rank-equivalent to a simple query
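
    For context, the Dirichlet prior mentioned above refers to Dirichlet (Bayesian) smoothing of document language models, p(w|d) = (tf(w,d) + μ·p(w|C)) / (|d| + μ), here with μ = 1,500. The sketch below, with an invented helper name and toy corpus, scores documents by Dirichlet-smoothed query likelihood; it is a generic illustration, not the DUTIR system.

        # Sketch of Dirichlet-prior (Bayesian) smoothing for query-likelihood
        # retrieval, with the smoothing parameter mu = 1500 mentioned above.
        # p(w | d) = (tf(w, d) + mu * p(w | C)) / (|d| + mu)
        # Names (score_document, collection_prob, ...) are illustrative.
        import math
        from collections import Counter

        MU = 1500.0

        def score_document(query_terms, doc_terms, collection_prob):
            """Log query-likelihood of a document under Dirichlet smoothing."""
            tf = Counter(doc_terms)
            dlen = len(doc_terms)
            score = 0.0
            for w in query_terms:
                p_wc = collection_prob.get(w, 1e-10)     # background model p(w|C)
                p_wd = (tf[w] + MU * p_wc) / (dlen + MU)
                score += math.log(p_wd)
            return score

        # Toy usage
        docs = {"d1": "betaines peripheral arterial disease therapy".split(),
                "d2": "topic model dirichlet allocation".split()}
        all_terms = [w for d in docs.values() for w in d]
        coll = Counter(all_terms)
        p_c = {w: c / len(all_terms) for w, c in coll.items()}
        query = "betaines peripheral arterial disease".split()
        print(sorted(docs, key=lambda d: score_document(query, docs[d], p_c), reverse=True))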

  7. Modifications to holographic entanglement entropy in warped CFT

    NASA Astrophysics Data System (ADS)

    Song, Wei; Wen, Qiang; Xu, Jianfei

    2017-02-01

    In [1] it was observed that asymptotic boundary conditions play an important role in the study of holographic entanglement beyond AdS/CFT. In particular, the Ryu-Takayanagi proposal must be modified for warped AdS3 (WAdS3) with Dirichlet boundary conditions. In this paper, we consider AdS3 and WAdS3 with Dirichlet-Neumann boundary conditions. The conjectured holographic duals are warped conformal field theories (WCFTs), featuring a Virasoro-Kac-Moody algebra. We provide a holographic calculation of the entanglement entropy and Rényi entropy using AdS3/WCFT and WAdS3/WCFT dualities. Our bulk results are consistent with the WCFT results derived by Castro-Hofman-Iqbal using the Rindler method. Comparing with [1], we explicitly show that the holographic entanglement entropy is indeed affected by boundary conditions. Both results differ from the Ryu-Takayanagi proposal, indicating new relations between spacetime geometry and quantum entanglement for holographic dualities beyond AdS/CFT.

  8. A generalized Poisson solver for first-principles device simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch; Brück, Sascha

    2016-01-28

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
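
    A minimal sketch of the constrained variational idea, assuming a 1D finite-difference Poisson problem rather than the paper's plane-wave setting: Dirichlet values at arbitrary nodes are enforced with Lagrange multipliers, producing the saddle-point system mentioned above. All sizes, node indices, and imposed values are illustrative.

        # Sketch of enforcing Dirichlet values at arbitrary nodes of a discrete
        # Poisson problem via Lagrange multipliers (a saddle-point / KKT system),
        # in the spirit of the constrained variational formulation described above.
        # 1D finite differences here instead of the paper's plane-wave setting.
        import numpy as np

        n = 101
        h = 1.0 / (n - 1)
        rho = np.ones(n)                       # source term

        # Discrete Laplacian for -u'' = rho with homogeneous Dirichlet ends
        A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        A[0, :], A[-1, :] = 0.0, 0.0
        A[0, 0] = A[-1, -1] = 1.0
        b = rho.copy()
        b[0] = b[-1] = 0.0

        # Constrain the potential at two "gate" nodes (illustrative values)
        idx = np.array([30, 70])
        vals = np.array([0.2, -0.1])
        B = np.zeros((len(idx), n))
        B[np.arange(len(idx)), idx] = 1.0

        # Saddle-point system  [A  B^T; B  0] [u; lam] = [b; vals]
        K = np.block([[A, B.T], [B, np.zeros((len(idx), len(idx)))]])
        rhs = np.concatenate([b, vals])
        u = np.linalg.solve(K, rhs)[:n]
        print(u[idx])                          # recovers the imposed values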

  9. Dirichlet Component Regression and its Applications to Psychiatric Data

    PubMed Central

    Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel

    2011-01-01

    Summary: We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each, in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook’s distance, and a local jackknife influence metric. PMID:22058582
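
    A hedged sketch of Dirichlet regression for compositional responses, assuming a log-linear link α_j = exp(xᵀβ_j) for each component; this is a generic maximum-likelihood formulation for illustration, not necessarily the exact parameterization or inferential machinery of the paper.

        # Sketch of Dirichlet regression by maximum likelihood: the concentration
        # of each component j is modelled as alpha_j = exp(x^T beta_j).  Generic
        # formulation for illustration only.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        rng = np.random.default_rng(0)
        n, p, k = 200, 2, 3                       # observations, covariates, components
        X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
        true_beta = rng.normal(size=(p, k))
        alpha = np.exp(X @ true_beta)
        Y = np.vstack([rng.dirichlet(a) for a in alpha])   # compositional responses

        def neg_loglik(beta_flat):
            beta = beta_flat.reshape(p, k)
            a = np.exp(X @ beta)
            ll = (gammaln(a.sum(axis=1)) - gammaln(a).sum(axis=1)
                  + ((a - 1) * np.log(Y)).sum(axis=1))
            return -ll.sum()

        fit = minimize(neg_loglik, np.zeros(p * k), method="L-BFGS-B")
        print(fit.x.reshape(p, k))                 # estimated coefficients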

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Dr. Li; Cui, Xiaohui; Cemerlic, Alma

    Ad hoc networks are very helpful in situations when no fixed network infrastructure is available, such as natural disasters and military conflicts. In such a network, all wireless nodes are equal peers simultaneously serving as both senders and routers for other nodes. Therefore, how to route packets through reliable paths becomes a fundamental problem when the behaviors of certain nodes deviate from wireless ad hoc routing protocols. We proposed a novel Dirichlet reputation model based on Bayesian inference theory which evaluates the reliability of each node in terms of packet delivery. Our system offers a way to predict and select a reliable path through a combination of first-hand observation and second-hand reputation reports. We also proposed a moving-window mechanism which helps to adjust the responsiveness of our system to changes in node behaviors. We integrated the Dirichlet reputation into the routing protocol of wireless ad hoc networks. Our extensive simulation indicates that our proposed reputation system can improve the good throughput of the network and reduce negative impacts caused by misbehaving nodes.
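
    A minimal sketch of the kind of Dirichlet reputation update described above: behavior evidence is accumulated as Dirichlet pseudo-counts, second-hand reports are down-weighted, and a decay factor stands in for the moving-window mechanism. The class, parameter names, and weights are invented for illustration.

        # Minimal sketch of a Dirichlet reputation update: behaviours fall into
        # categories (e.g. "forwarded", "dropped"), evidence is accumulated as
        # Dirichlet pseudo-counts, and the expected reliability is the posterior
        # mean of the "forwarded" category.  The decay factor plays the role of
        # a moving window; all names and weights here are illustrative.
        import numpy as np

        class DirichletReputation:
            def __init__(self, n_categories=2, prior=1.0, decay=0.9):
                self.alpha = np.full(n_categories, prior)   # Dirichlet pseudo-counts
                self.decay = decay

            def observe(self, counts, weight=1.0):
                """Fold in first-hand counts (or down-weighted second-hand reports)."""
                self.alpha = self.decay * self.alpha + weight * np.asarray(counts, float)

            def expected(self):
                """Posterior mean over behaviour categories."""
                return self.alpha / self.alpha.sum()

        node = DirichletReputation()
        node.observe([8, 2])                 # first-hand: 8 packets forwarded, 2 dropped
        node.observe([3, 7], weight=0.5)     # second-hand report, trusted less
        print(node.expected()[0])            # estimated probability of forwarding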

  11. Positivity and Almost Positivity of Biharmonic Green's Functions under Dirichlet Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Grunau, Hans-Christoph; Robert, Frédéric

    2010-03-01

    In general, for higher order elliptic equations and boundary value problems like the biharmonic equation and the linear clamped plate boundary value problem, neither a maximum principle nor a comparison principle or—equivalently—a positivity preserving property is available. The problem is rather involved since the clamped boundary conditions prevent the boundary value problem from being reasonably written as a system of second order boundary value problems. It is shown that, on the other hand, for bounded smooth domains Ω ⊂ ℝ^n, the negative part of the corresponding Green’s function is “small” when compared with its singular positive part, provided n ≥ 3. Moreover, the biharmonic Green’s function in balls B ⊂ ℝ^n under Dirichlet (that is, clamped) boundary conditions is known explicitly and is positive. It has been known for some time that positivity is preserved under small regular perturbations of the domain, if n = 2. In the present paper, such a stability result is proved for n ≥ 3.

  12. New solutions to the constant-head test performed at a partially penetrating well

    NASA Astrophysics Data System (ADS)

    Chang, Y. C.; Yeh, H. D.

    2009-05-01

    Summary: The mathematical model describing the aquifer response to a constant-head test performed at a fully penetrating well can be easily solved by the conventional integral transform technique. In addition, the Dirichlet-type condition should be chosen as the boundary condition along the rim of the wellbore for such a test well. However, the boundary condition for a test well with partial penetration must be considered as a mixed-type condition. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann type no-flow condition is specified over the unscreened part of the test well. The model for such a mixed boundary problem in a confined aquifer system of infinite radial extent and finite vertical extent is solved by the dual series equations and perturbation method. This approach provides analytical results for the drawdown in the partially penetrating well and the well discharge along the screen. The semi-analytical solutions are particularly useful for practical applications from the computational point of view.

  13. Probabilistic sensitivity analysis for decision trees with multiple branches: use of the Dirichlet distribution in a Bayesian framework.

    PubMed

    Briggs, Andrew H; Ades, A E; Price, Martin J

    2003-01-01

    In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes using probabilistic sensitivity analysis, and a method is required for generating probabilities over multiple branches that appropriately represents uncertainty while satisfying the requirement that mutually exclusive event probabilities should sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
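
    To make the suggestion concrete, the sketch below draws each row of a Markov transition matrix from a Dirichlet whose parameters are prior pseudo-counts plus observed transition counts, so every sampled row automatically sums to 1; the counts and prior are invented for illustration.

        # Sketch of the Dirichlet approach to probabilistic sensitivity analysis
        # for multi-branch probabilities: each row of a Markov transition matrix
        # is drawn from a Dirichlet whose parameters are prior pseudo-counts plus
        # observed transition counts, so sampled probabilities always sum to 1.
        # The counts below are made up for illustration.
        import numpy as np

        rng = np.random.default_rng(42)

        observed = np.array([[120,  30,  10],     # transitions out of state A
                             [ 15, 200,  35],     # transitions out of state B
                             [  0,   5, 145]])    # transitions out of state C
        prior = 1.0                               # uniform Dirichlet prior avoids
                                                  # zero probabilities for unseen moves

        def sample_transition_matrix():
            return np.vstack([rng.dirichlet(row + prior) for row in observed])

        draws = np.stack([sample_transition_matrix() for _ in range(1000)])
        print(draws.mean(axis=0))                 # posterior mean transition matrix
        print(draws[:, 2, 0].mean())              # e.g. uncertainty about the C->A move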

  14. Modeling Information Content Via Dirichlet-Multinomial Regression Analysis.

    PubMed

    Ferrari, Alberto

    2017-01-01

    Shannon entropy is being increasingly used in biomedical research as an index of complexity and information content in sequences of symbols, e.g. languages, amino acid sequences, DNA methylation patterns and animal vocalizations. Yet, distributional properties of information entropy as a random variable have seldom been the object of study, leading researchers to rely mainly on linear models or simulation-based analytical approaches to assess differences in information content when entropy is measured repeatedly in different experimental conditions. Here a method to perform inference on entropy in such conditions is proposed. Building on results coming from studies in the field of Bayesian entropy estimation, a symmetric Dirichlet-multinomial regression model, able to deal efficiently with the issue of mean entropy estimation, is formulated. Through a simulation study the model is shown to outperform linear modeling in a vast range of scenarios and to have promising statistical properties. As a practical example, the method is applied to a data set coming from a real experiment on animal communication.

  15. Exact harmonic solutions to Guyer-Krumhansl-type equation and application to heat transport in thin films

    NASA Astrophysics Data System (ADS)

    Zhukovsky, K.; Oskolkov, D.

    2018-03-01

    A system of hyperbolic-type inhomogeneous differential equations (DE) is considered for non-Fourier heat transfer in thin films. Exact harmonic solutions to the Guyer-Krumhansl-type heat equation and to the system of inhomogeneous DE are obtained under Cauchy- and Dirichlet-type conditions. The contributions of the ballistic-type heat transport, of the Cattaneo heat waves and of the Fourier heat diffusion are discussed and compared with each other in various conditions. The application of the study to ballistic heat transport in thin films is performed. The rapid evolution of the ballistic quasi-temperature component in low-dimensional systems is elucidated and compared with the slow evolution of its diffusive counterpart. The effect of the ballistic quasi-temperature component on the evolution of the complete quasi-temperature is explored. In this context, the influence of the Knudsen number and of Cauchy- and Dirichlet-type conditions on the evolution of the temperature distribution is explored. A comparative analysis of the obtained solutions is performed.

  16. Exclusion Process with Slow Boundary

    NASA Astrophysics Data System (ADS)

    Baldasso, Rangel; Menezes, Otávio; Neumann, Adriana; Souza, Rafael R.

    2017-06-01

    We study the hydrodynamic and the hydrostatic behavior of the simple symmetric exclusion process with slow boundary. The term slow boundary means that particles can be born or die at the boundary sites, at a rate proportional to N^{-θ}, where θ > 0 and N is the scaling parameter. In the bulk, the particle exchange rate is equal to 1. In the hydrostatic scenario, we obtain three different linear profiles, depending on the value of the parameter θ; in the hydrodynamic scenario, we obtain that the time evolution of the spatial density of particles, in the diffusive scaling, is given by the weak solution of the heat equation, with boundary conditions that depend on θ. If θ ∈ (0,1), we get Dirichlet boundary conditions (which is the same behavior as for θ = 0, see Farfán in Hydrostatics, statical and dynamical large deviations of boundary driven gradient symmetric exclusion processes, 2008); if θ = 1, we get Robin boundary conditions; and, if θ ∈ (1,∞), we get Neumann boundary conditions.

  17. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large N limit coalescents structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta (2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward in time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson Point Process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  18. Automated airplane surface generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.E.; Cordero, Y.; Jones, W.

    1996-12-31

    An efficient methodology and software are presented for defining a class of airplane configurations. A small set of engineering design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. Wing, canard, and tail surface grids are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage is described by an algebraic function with four design parameters. The computed surface grids are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations. Both batch and interactive software are discussed for applying the methodology.

  19. Bounded Partial Sums?

    ERIC Educational Resources Information Center

    Brilleslyper, Michael A.; Wolverton, Robert H.

    2008-01-01

    In this article we consider an example suitable for investigation in many mid and upper level undergraduate mathematics courses. Fourier series provide an excellent example of the differences between uniform and non-uniform convergence. We use Dirichlet's test to investigate the convergence of the Fourier series for a simple periodic saw tooth…
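
    For reference, Dirichlet's test and the sawtooth-type series it is typically applied to can be written as follows (a standard statement, not quoted from the article):

        % Dirichlet's test: if the partial sums of \sum a_n are bounded and
        % (b_n) decreases monotonically to 0, then \sum a_n b_n converges.
        \[
        \left|\sum_{n=1}^{N} a_n\right| \le M \ \ \forall N,
        \qquad b_n \downarrow 0
        \ \Longrightarrow\
        \sum_{n=1}^{\infty} a_n b_n \ \text{converges.}
        \]
        % Applied to the sawtooth-type series with a_n = \sin(nx) and b_n = 1/n:
        \[
        \sum_{n=1}^{\infty} \frac{\sin(nx)}{n} \ \text{converges for every } x,
        \qquad
        \left|\sum_{n=1}^{N} \sin(nx)\right| \le \frac{1}{|\sin(x/2)|}
        \ \ (x \not\equiv 0 \bmod 2\pi).
        \]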

  20. Linguistic Extensions of Topic Models

    ERIC Educational Resources Information Center

    Boyd-Graber, Jordan

    2010-01-01

    Topic models like latent Dirichlet allocation (LDA) provide a framework for analyzing large datasets where observations are collected into groups. Although topic modeling has been fruitfully applied to problems in social science, biology, and computer vision, it has been most widely used to model datasets where documents are modeled as exchangeable…
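
    A minimal LDA example in the grouped-documents setting described above, using scikit-learn on an invented toy corpus; the number of topics and all other settings are illustrative.

        # Minimal LDA example with scikit-learn on a toy corpus: documents are
        # bags of words, and the model recovers a small number of topics.  The
        # corpus and hyperparameters below are invented for illustration.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        docs = [
            "gene expression protein sequence dna",
            "protein dna methylation sequence cell",
            "election vote policy government senate",
            "government policy vote law election",
        ]
        counts = CountVectorizer().fit(docs)
        X = counts.transform(docs)

        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        doc_topics = lda.fit_transform(X)         # per-document topic proportions

        vocab = counts.get_feature_names_out()
        for k, topic in enumerate(lda.components_):
            top = topic.argsort()[::-1][:4]
            print(f"topic {k}:", [vocab[i] for i in top])
        print(doc_topics.round(2))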

  1. Robin Gravity

    NASA Astrophysics Data System (ADS)

    Krishnan, Chethan; Maheshwari, Shubham; Bala Subramanian, P. N.

    2017-08-01

    We write down a Robin boundary term for general relativity. The construction relies on the Neumann result of arXiv:1605.01603 in an essential way. This is unlike in mechanics and (polynomial) field theory, where two formulations of the Robin problem exist: one with Dirichlet as the natural limiting case, and another with Neumann.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manjunath, Naren; Samajdar, Rhine; Jain, Sudhir R., E-mail: srjain@barc.gov.in

    Recently, the nodal domain counts of planar, integrable billiards with Dirichlet boundary conditions were shown to satisfy certain difference equations in Samajdar and Jain (2014). The exact solutions of these equations give the number of domains explicitly. For complete generality, we demonstrate this novel formulation for three additional separable systems and thus extend the statement to all integrable billiards.

  3. A weighted anisotropic variant of the Caffarelli-Kohn-Nirenberg inequality and applications

    NASA Astrophysics Data System (ADS)

    Bahrouni, Anouar; Rădulescu, Vicenţiu D.; Repovš, Dušan D.

    2018-04-01

    We present a weighted version of the Caffarelli-Kohn-Nirenberg inequality in the framework of variable exponents. The combination of this inequality with a variant of the fountain theorem yields the existence of infinitely many solutions for a class of non-homogeneous problems with Dirichlet boundary condition.

  4. The use of MACSYMA for solving elliptic boundary value problems

    NASA Technical Reports Server (NTRS)

    Thejll, Peter; Gilbert, Robert P.

    1990-01-01

    A boundary method is presented for the solution of elliptic boundary value problems. An approach based on the use of complete systems of solutions is emphasized. The discussion is limited to the Dirichlet problem, even though the present method can possibly be adapted to treat other boundary value problems.

  5. Test Design Project: Studies in Test Adequacy. Annual Report.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…

  6. Solution of a Nonlinear Heat Conduction Equation for a Curvilinear Region with Dirichlet Conditions by the Fast-Expansion Method

    NASA Astrophysics Data System (ADS)

    Chernyshov, A. D.

    2018-05-01

    The analytical solution of the nonlinear heat conduction problem for a curvilinear region is obtained with the use of the fast-expansion method together with the method of extension of boundaries and pointwise technique of computing Fourier coefficients.

  7. Pig Data and Bayesian Inference on Multinomial Probabilities

    ERIC Educational Resources Information Center

    Kern, John C.

    2006-01-01

    Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs[R]. Prior information on these probabilities is readily available from the instruction manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical…

  8. Comment Data Mining to Estimate Student Performance Considering Consecutive Lessons

    ERIC Educational Resources Information Center

    Sorour, Shaymaa E.; Goda, Kazumasa; Mine, Tsunenori

    2017-01-01

    The purpose of this study is to examine different formats of comment data to predict student performance. Having students write comment data after every lesson can reflect students' learning attitudes, tendencies and learning activities involved with the lesson. In this research, Latent Dirichlet Allocation (LDA) and Probabilistic Latent Semantic…

  9. The Effect of Multigrid Parameters in a 3D Heat Diffusion Equation

    NASA Astrophysics Data System (ADS)

    Oliveira, F. De; Franco, S. R.; Pinto, M. A. Villela

    2018-02-01

    The aim of this paper is to reduce the CPU time needed to solve the three-dimensional heat diffusion equation using Dirichlet boundary conditions. The finite difference method (FDM) is used to discretize the differential equations with a second-order accurate central difference scheme (CDS). The algebraic equation systems are solved using the lexicographical and red-black Gauss-Seidel methods, associated with the geometric multigrid method with a correction scheme (CS) and V-cycle. Comparisons are made between two types of restriction: injection and full weighting. The prolongation process used is trilinear interpolation. This work is concerned with the study of the influence of the smoothing value (ν), number of mesh levels (L) and number of unknowns (N) on the CPU time, as well as the analysis of algorithm complexity.
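
    A sketch of the red-black Gauss-Seidel smoother referred to above, shown in 2D for brevity rather than the paper's 3D setting; this is the relaxation that would be applied ν times per level inside the multigrid V-cycle. Grid size and number of sweeps are illustrative.

        # Sketch of a red-black Gauss-Seidel smoother for a Poisson/heat problem
        # with Dirichlet boundaries, shown in 2D for brevity (the paper's setting
        # is 3D).  Sizes and sweep counts below are illustrative.
        import numpy as np

        def redblack_gauss_seidel(u, f, h, sweeps):
            """In-place red-black sweeps for -lap(u) = f, u fixed on the boundary."""
            for _ in range(sweeps):
                for colour in (0, 1):                    # 0 = red points, 1 = black points
                    for i in range(1, u.shape[0] - 1):
                        for j in range(1, u.shape[1] - 1):
                            if (i + j) % 2 == colour:
                                u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                                  + u[i, j - 1] + u[i, j + 1]
                                                  + h * h * f[i, j])
            return u

        n = 33
        h = 1.0 / (n - 1)
        u = np.zeros((n, n))                             # Dirichlet value 0 on the boundary
        f = np.ones((n, n))
        u = redblack_gauss_seidel(u, f, h, sweeps=50)
        print(u[n // 2, n // 2])                         # centre value after smoothing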

  10. The Riemann-Hilbert approach to the Helmholtz equation in a quarter-plane: Neumann, Robin and Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Its, Alexander; Its, Elizabeth

    2018-04-01

    We revisit the Helmholtz equation in a quarter-plane in the framework of the Riemann-Hilbert approach to linear boundary value problems suggested in the late 1990s by A. Fokas. We show the role of the Sommerfeld radiation condition in Fokas' scheme.

  11. Vectorized multigrid Poisson solver for the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Barkai, D.; Brandt, M. A.

    1984-01-01

    The full multigrid (FMG) method is applied to the two-dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.

  12. ANALYTICAL SOLUTIONS OF THE ATMOSPHERIC DIFFUSION EQUATION WITH MULTIPLE SOURCES AND HEIGHT-DEPENDENT WIND SPEED AND EDDY DIFFUSIVITIES. (R825689C072)

    EPA Science Inventory

    Abstract

    Three-dimensional analytical solutions of the atmospheric diffusion equation with multiple sources and height-dependent wind speed and eddy diffusivities are derived in a systematic fashion. For homogeneous Neumann (total reflection), Dirichlet (total adsorpti...

  13. ANALYTICAL SOLUTIONS OF THE ATMOSPHERIC DIFFUSION EQUATION WITH MULTIPLE SOURCES AND HEIGHT-DEPENDENT WIND SPEED AND EDDY DIFFUSIVITIES. (R825689C048)

    EPA Science Inventory

    Abstract

    Three-dimensional analytical solutions of the atmospheric diffusion equation with multiple sources and height-dependent wind speed and eddy diffusivities are derived in a systematic fashion. For homogeneous Neumann (total reflection), Dirichlet (total adsorpti...

  14. Boundary conditions and formation of pure spin currents in magnetic field

    NASA Astrophysics Data System (ADS)

    Eliashvili, Merab; Tsitsishvili, George

    2017-09-01

    The Schrödinger equation for an electron confined to a two-dimensional strip is considered in the presence of a homogeneous orthogonal magnetic field. Since the system has edges, the eigenvalue problem is supplemented with boundary conditions (BC) aimed at preventing the leakage of matter across the edges. In the case of spinless electrons the Dirichlet and Neumann BC are considered. The Dirichlet BC result in the existence of charge-carrying edge states. For the Neumann BC each separate edge comprises two counterflow sub-currents which precisely cancel each other out provided the system is populated by electrons up to a certain Fermi level. Cancelation of the electric current is a good starting point for developing spin effects. In this scope we reconsider the problem for a spinning electron with Rashba coupling. The Neumann BC are replaced by Robin BC. Again, the two counterflow electric sub-currents cancel each other out for a separate edge, while the spin current survives, thus modeling what is known as a pure spin current: spin flow without charge flow.

  15. Inverse scattering for an exterior Dirichlet program

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.

    1981-01-01

    Scattering due to a metallic cylinder which is in the field of a wire carrying a periodic current is considered. The location and shape of the cylinder are obtained with a far field measurement between the wire and the cylinder. The same analysis is applicable in acoustics in the situation that the cylinder is a soft-wall body and the wire is a line source. The associated direct problem in this situation is an exterior Dirichlet problem for the Helmholtz equation in two dimensions. An improved low frequency estimate for the solution of this problem using integral equation methods is presented. The far field measurements are related to the solutions of boundary integral equations in the low frequency situation. These solutions are expressed in terms of a mapping function which maps the exterior of the unknown curve onto the exterior of a unit disk. The coefficients of the Laurent expansion of the conformal transformation are related to the far field coefficients. The first far field coefficient leads to the calculation of the distance between the source and the cylinder.

  16. Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.

    PubMed

    Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu

    2017-07-01

    In the field of pathology, whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSI pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for a breast histopathological image. Specifically, the method presents a local statistical feature of nuclei for morphology and distribution of nuclei, and employs the Gabor feature to describe the texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.

  17. Direct regeneration of recycled cathode material mixture from scrapped LiFePO4 batteries

    NASA Astrophysics Data System (ADS)

    Li, Xuelei; Zhang, Jin; Song, Dawei; Song, Jishun; Zhang, Lianqi

    2017-03-01

    A new green recycling process (referred to as the direct regeneration process) for the cathode material mixture from scrapped LiFePO4 batteries is designed for the first time. Through this direct regeneration process, high-purity cathode material mixture (LiFePO4 + acetylene black), anode material mixture (graphite + acetylene black) and other by-products (shell, Al foil, Cu foil and electrolyte solvent, etc.) are recycled from scrapped LiFePO4 batteries with high yield. Subsequently, the recycled cathode material mixture, without acid leaching, is further directly regenerated with Li2CO3. The direct regeneration procedure of the recycled cathode material mixture from 600 to 800 °C is investigated in detail. The cathode material mixture regenerated at 650 °C displays excellent physical, chemical and electrochemical performance, which meets the reuse requirements for middle-end Li-ion batteries. The results indicate that the green direct regeneration process, with low cost and high added value, is feasible.

  18. Processing of odor mixtures in the zebrafish olfactory bulb.

    PubMed

    Tabor, Rico; Yaksi, Emre; Weislogel, Jan-Marek; Friedrich, Rainer W

    2004-07-21

    Components of odor mixtures often are not perceived individually, suggesting that neural representations of mixtures are not simple combinations of the representations of the components. We studied odor responses to binary mixtures of amino acids and food extracts at different processing stages in the olfactory bulb (OB) of zebrafish. Odor-evoked input to the OB was measured by imaging Ca2+ signals in afferents to olfactory glomeruli. Activity patterns evoked by mixtures were predictable within narrow limits from the component patterns, indicating that mixture interactions in the peripheral olfactory system are weak. OB output neurons, the mitral cells (MCs), were recorded extra- and intracellularly and responded to odors with stimulus-dependent temporal firing rate modulations. Responses to mixtures of amino acids often were dominated by one of the component responses. Responses to mixtures of food extracts, in contrast, were more distinct from both component responses. These results show that mixture interactions can result from processing in the OB. Moreover, our data indicate that mixture interactions in the OB become more pronounced with increasing overlap of input activity patterns evoked by the components. Emerging from these results are rules of mixture interactions that may explain behavioral data and provide a basis for understanding the processing of natural odor stimuli in the OB.

  19. Process for producing an activated carbon adsorbent with integral heat transfer apparatus

    NASA Technical Reports Server (NTRS)

    Jones, Jack A. (Inventor); Yavrouian, Andre H. (Inventor)

    1996-01-01

    A process for producing an integral adsorbent-heat exchanger apparatus useful in ammonia refrigerant heat pump systems. In one embodiment, the process wets an activated carbon particles-solvent mixture with a binder-solvent mixture, presses the binder wetted activated carbon mixture on a metal tube surface and thereafter pyrolyzes the mixture to form a bonded activated carbon matrix adjoined to the tube surface. The integral apparatus can be easily and inexpensively produced by the process in large quantities.

  20. Process Dissociation and Mixture Signal Detection Theory

    ERIC Educational Resources Information Center

    DeCarlo, Lawrence T.

    2008-01-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely…

  1. The impact of the rate prior on Bayesian estimation of divergence times with multiple Loci.

    PubMed

    Dos Reis, Mario; Zhu, Tianqi; Yang, Ziheng

    2014-07-01

    Bayesian methods provide a powerful way to estimate species divergence times by combining information from molecular sequences with information from the fossil record. With the explosive increase of genomic data, divergence time estimation increasingly uses data of multiple loci (genes or site partitions). Widely used computer programs to estimate divergence times use independent and identically distributed (i.i.d.) priors on the substitution rates for different loci. The i.i.d. prior is problematic. As the number of loci (L) increases, the prior variance of the average rate across all loci goes to zero at the rate 1/L. As a consequence, the rate prior dominates posterior time estimates when many loci are analyzed, and if the rate prior is misspecified, the estimated divergence times will converge to wrong values with very narrow credibility intervals. Here we develop a new prior on the locus rates based on the Dirichlet distribution that corrects the problematic behavior of the i.i.d. prior. We use computer simulation and real data analysis to highlight the differences between the old and new priors. For a dataset for six primate species, we show that with the old i.i.d. prior, if the prior rate is too high (or too low), the estimated divergence times are too young (or too old), outside the bounds imposed by the fossil calibrations. In contrast, with the new Dirichlet prior, posterior time estimates are insensitive to the rate prior and are compatible with the fossil calibrations. We re-analyzed a phylogenomic data set of 36 mammal species and show that using many fossil calibrations can alleviate the adverse impact of a misspecified rate prior to some extent. We recommend the use of the new Dirichlet prior in Bayesian divergence time estimation. [Bayesian inference, divergence time, relaxed clock, rate prior, partition analysis.]. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
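
    A small simulation of the prior behaviour described above (with illustrative priors, not the exact ones used in the paper): under an i.i.d. gamma prior on locus rates the prior variance of the average rate shrinks roughly as 1/L, whereas assigning a single overall rate and partitioning it across loci with a Dirichlet keeps the prior on the average rate fixed as L grows.

        # Toy simulation of the 1/L behaviour of the i.i.d. rate prior versus a
        # Dirichlet-style partition of an overall rate; priors are illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        shape, scale = 2.0, 0.5          # gamma prior with mean 1 on each rate

        for L in (2, 10, 100, 1000):
            # i.i.d. prior: average of L independent gamma rates
            iid_mean = rng.gamma(shape, scale, size=(20000, L)).mean(axis=1)

            # Dirichlet-style prior: one overall rate, partitioned across loci
            overall = rng.gamma(shape, scale, size=20000)
            props = rng.dirichlet(np.ones(L), size=20000)            # sums to 1 per draw
            dir_mean = (overall[:, None] * L * props).mean(axis=1)   # equals `overall`

            print(L, np.round(iid_mean.var(), 4), np.round(dir_mean.var(), 4))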

  2. Atmospheric effect in three-space scenario for the Stokes-Helmert method of geoid determination

    NASA Astrophysics Data System (ADS)

    Yang, H.; Tenzer, R.; Vanicek, P.; Santos, M.

    2004-05-01

    According to the Stokes-Helmert method for geoid determination by Vanicek and Martinec (1994) and Vanicek et al. (1999), the Helmert gravity anomalies are computed at the earth surface. To formulate the fundamental formula of physical geodesy, Helmert's gravity anomalies are then downward continued from the earth surface onto the geoid. This procedure, i.e., the inverse Dirichlet boundary value problem, is realized by solving the Poisson integral equation. The above mentioned "classical" approach can be modified so that the inverse Dirichlet boundary value problem is solved in the No Topography (NT) space (Vanicek et al., 2004) instead of in the Helmert (H) space. This technique was introduced by Vanicek et al. (2003) and was used by Tenzer and Vanicek (2003) for the determination of the geoid in the region of the Canadian Rocky Mountains. According to this new approach, the gravity anomalies referred to the earth surface are first transformed into the NT-space. This transformation is realized by subtracting the gravitational attraction of topographical and atmospheric masses from the gravity anomalies at the earth surface. Since the NT-anomalies are harmonic above the geoid, the Dirichlet boundary value problem is solved in the NT-space instead of the Helmert space according to the standard formulation. After being obtained on the geoid, the NT-anomalies are transformed into the H-space to minimize the indirect effect on the geoidal heights. This step, i.e., the transformation from NT-space to H-space, is realized by adding the gravitational attraction of condensed topographical and condensed atmospheric masses to the NT-anomalies at the geoid. The effects of the atmosphere in the standard Stokes-Helmert method were intensively investigated by Sjöberg (1998 and 1999) and Novák (2000). In this presentation, the effect of the atmosphere in the three-space scenario for the Stokes-Helmert method is discussed and numerical results over Canada are shown. Key words: Atmosphere - Geoid - Gravity

  3. Separation process using microchannel technology

    DOEpatents

    Tonkovich, Anna Lee [Dublin, OH; Perry, Steven T [Galloway, OH; Arora, Ravi [Dublin, OH; Qiu, Dongming [Bothell, WA; Lamont, Michael Jay [Hilliard, OH; Burwell, Deanna [Cleveland Heights, OH; Dritz, Terence Andrew [Worthington, OH; McDaniel, Jeffrey S [Columbus, OH; Rogers, Jr; William, A [Marysville, OH; Silva, Laura J [Dublin, OH; Weidert, Daniel J [Lewis Center, OH; Simmons, Wayne W [Dublin, OH; Chadwell, G Bradley [Reynoldsburg, OH

    2009-03-24

    The disclosed invention relates to a process and apparatus for separating a first fluid from a fluid mixture comprising the first fluid. The process comprises: (A) flowing the fluid mixture into a microchannel separator in contact with a sorption medium, the fluid mixture being maintained in the microchannel separator until at least part of the first fluid is sorbed by the sorption medium, removing non-sorbed parts of the fluid mixture from the microchannel separator; and (B) desorbing first fluid from the sorption medium and removing desorbed first fluid from the microchannel separator. The process and apparatus are suitable for separating nitrogen or methane from a fluid mixture comprising nitrogen and methane. The process and apparatus may be used for rejecting nitrogen in the upgrading of sub-quality methane.

  4. A Dirichlet process model for classifying and forecasting epidemic curves

    PubMed Central

    2014-01-01

    Background: A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. Methods: The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997–2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). Results: We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods’ performance was comparable. Conclusions: Although RF requires less computational time compared to the DP model, the algorithm is fully supervised, implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial. PMID:24405642
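
    As a generic illustration of a Dirichlet process mixture (not the authors' epidemic-curve model), the sketch below fits a truncated stick-breaking DP Gaussian mixture with scikit-learn and shows that only a few of the available components receive appreciable weight; all data and settings are invented.

        # Generic illustration of a (truncated) Dirichlet process mixture using
        # scikit-learn's BayesianGaussianMixture with a stick-breaking
        # (Dirichlet-process) prior on the weights.  This only shows how a DP
        # prior lets the data decide how many components are actually used.
        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(0)
        X = np.concatenate([rng.normal(-4, 0.5, 300),
                            rng.normal(0, 0.7, 300),
                            rng.normal(5, 1.0, 300)]).reshape(-1, 1)

        dpgmm = BayesianGaussianMixture(
            n_components=10,                                  # truncation level
            weight_concentration_prior_type="dirichlet_process",
            weight_concentration_prior=1.0,                   # DP precision parameter
            random_state=0,
        ).fit(X)

        # Only a few components receive non-negligible weight.
        print(np.round(dpgmm.weights_, 3))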

  5. Infinite hidden conditional random fields for human behavior analysis.

    PubMed

    Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja

    2013-01-01

    Hidden conditional random fields (HCRFs) are discriminative latent variable models that have been shown to successfully learn the hidden structure of a given classification problem (provided an appropriate validation of the number of hidden states). In this brief, we present the infinite HCRF (iHCRF), which is a nonparametric model based on hierarchical Dirichlet processes and is capable of automatically learning the optimal number of hidden states for a classification task. We show how we learn the model hyperparameters with an effective Markov-chain Monte Carlo sampling technique, and we explain the process that underlines our iHCRF model with the Restaurant Franchise Rating Agencies analogy. We show that the iHCRF is able to converge to a correct number of represented hidden states, and outperforms the best finite HCRFs--chosen via cross-validation--for the difficult tasks of recognizing instances of agreement, disagreement, and pain. Moreover, the iHCRF manages to achieve this performance in significantly less total training, validation, and testing time.

  6. Membrane permeation process for dehydration of organic liquid mixtures using sulfonated ion-exchange polyalkene membranes

    DOEpatents

    Cabasso, Israel; Korngold, Emmanuel

    1988-01-01

    A membrane permeation process for dehydrating a mixture of organic liquids, such as alcohols or close boiling, heat sensitive mixtures. The process comprises causing a component of the mixture to selectively sorb into one side of sulfonated ion-exchange polyalkene (e.g., polyethylene) membranes and selectively diffuse or flow therethrough, and then desorbing the component into a gas or liquid phase on the other side of the membranes.

  7. Process dissociation and mixture signal detection theory.

    PubMed

    DeCarlo, Lawrence T

    2008-11-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely analyzed study. The results suggest that a process other than recollection may be involved in the process dissociation procedure.

  8. Local recovery of the compressional and shear speeds from the hyperbolic DN map

    NASA Astrophysics Data System (ADS)

    Stefanov, Plamen; Uhlmann, Gunther; Vasy, Andras

    2018-01-01

    We study the isotropic elastic wave equation in a bounded domain with boundary. We show that local knowledge of the Dirichlet-to-Neumann map determines uniquely the speed of the p-wave locally if there is a strictly convex foliation with respect to it, and similarly for the s-wave speed.

  9. The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples

    ERIC Educational Resources Information Center

    Avetisyan, Marianna; Fox, Jean-Paul

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…

  10. Existence and uniqueness of steady state solutions of a nonlocal diffusive logistic equation

    NASA Astrophysics Data System (ADS)

    Sun, Linan; Shi, Junping; Wang, Yuwen

    2013-08-01

    In this paper, we consider a dynamical model of population biology which is of the classical Fisher type, but the competition interaction between individuals is nonlocal. The existence, uniqueness, and stability of the steady state solution of the nonlocal problem on a bounded interval with homogeneous Dirichlet boundary conditions are studied.

  11. Using Dirichlet Priors to Improve Model Parameter Plausibility

    ERIC Educational Resources Information Center

    Rai, Dovan; Gong, Yue; Beck, Joseph E.

    2009-01-01

    Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…

  12. Quantum field between moving mirrors: A three dimensional example

    NASA Technical Reports Server (NTRS)

    Hacyan, S.; Jauregui, Roco; Villarreal, Carlos

    1995-01-01

    The scalar quantum field between uniformly moving plates in three-dimensional space is studied. Field equations for Dirichlet boundary conditions are solved exactly. Comparison of the resulting wavefunctions with their instantaneous static counterparts is performed via Bogolubov coefficients. Unlike in the one-dimensional problem, 'particle' creation as well as squeezing may occur. The time-dependent Casimir energy is also evaluated.

  13. Odourant dominance in olfactory mixture processing: what makes a strong odourant?

    PubMed Central

    Schubert, Marco; Sandoz, Jean-Christophe; Galizia, Giovanni; Giurfa, Martin

    2015-01-01

    The question of how animals process stimulus mixtures remains controversial, as opposing views propose that mixtures are processed analytically, as the sum of their elements, or holistically, as unique entities different from their elements. Overshadowing is a widespread phenomenon that can help decide between these alternatives. In overshadowing, an individual trained with a binary mixture learns one element better at the expense of the other. Although element salience (learning success) has been suggested as a main explanation for overshadowing, the mechanisms underlying this phenomenon remain unclear. We studied olfactory overshadowing in honeybees to uncover the mechanisms underlying olfactory-mixture processing. We provide, to our knowledge, the most comprehensive dataset on overshadowing to date, based on 90 experimental groups involving more than 2700 bees trained either with six odourants or with their resulting 15 binary mixtures. We found that bees process olfactory mixtures analytically and that salience alone cannot predict overshadowing. After normalizing learning success, we found that an unexpected feature, the generalization profile of an odourant, was determinant for overshadowing. Odourants that induced less generalization enhanced their distinctiveness and became dominant in the mixture. Our study thus uncovers features that determine odourant dominance within olfactory mixtures and allows this phenomenon to be related to differences in neural activity both at the receptor and the central level in the insect nervous system. PMID:25652840

  14. Supercritical separation process for complex organic mixtures

    DOEpatents

    Chum, Helena L.; Filardo, Giuseppe

    1990-01-01

    A process is disclosed for separating low molecular weight components from complex aqueous organic mixtures. The process includes preparing a separation solution of supercritical carbon dioxide with an effective amount of an entrainer to modify the solvation power of the supercritical carbon dioxide and extract preselected low molecular weight components. The separation solution is maintained at a temperature of at least about 70.degree. C. and a pressure of at least about 1,500 psi. The separation solution is then contacted with the organic mixtures while maintaining the temperature and pressure as above until the mixtures and solution reach equilibrium to extract the preselected low molecular weight components from the organic mixtures. Finally, the entrainer/extracted components portion of the equilibrium mixture is isolated from the separation solution.

  15. Einstein-Gauss-Bonnet theory of gravity: The Gauss-Bonnet-Katz boundary term

    NASA Astrophysics Data System (ADS)

    Deruelle, Nathalie; Merino, Nelson; Olea, Rodrigo

    2018-05-01

    We propose a boundary term to the Einstein-Gauss-Bonnet action for gravity, which uses the Chern-Weil theorem plus a dimensional continuation process, such that the extremization of the full action yields the equations of motion when Dirichlet boundary conditions are imposed. When translated into tensorial language, this boundary term is the generalization to this theory of the Katz boundary term and vector for general relativity. The boundary term constructed in this paper allows to deal with a general background and is not equivalent to the Gibbons-Hawking-Myers boundary term. However, we show that they coincide if one replaces the background of the Katz procedure by a product manifold. As a first application we show that this Einstein Gauss-Bonnet Katz action yields, without any extra ingredients, the expected mass of the Boulware-Deser black hole.

  16. A Probabilistic Approach to Interior Regularity of Fully Nonlinear Degenerate Elliptic Equations in Smooth Domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Wei, E-mail: zhoux123@umn.edu

    2013-06-15

    We consider the value function of a stochastic optimal control of degenerate diffusion processes in a domain D. We study the smoothness of the value function, under the assumption of the non-degeneracy of the diffusion term along the normal to the boundary and an interior condition weaker than the non-degeneracy of the diffusion term. When the diffusion term, drift term, discount factor, running payoff and terminal payoff are all in the class C^{1,1}(D̄), the value function turns out to be the unique solution in the class C^{1,1}_{loc}(D) ∩ C^{0,1}(D̄) to the associated degenerate Bellman equation with Dirichlet boundary data. Our approach is probabilistic.

  17. Supercritical separation process for complex organic mixtures

    DOEpatents

    Chum, H.L.; Filardo, G.

    1990-10-23

    A process is disclosed for separating low molecular weight components from complex aqueous organic mixtures. The process includes preparing a separation solution of supercritical carbon dioxide with an effective amount of an entrainer to modify the solvation power of the supercritical carbon dioxide and extract preselected low molecular weight components. The separation solution is maintained at a temperature of at least about 70 C and a pressure of at least about 1,500 psi. The separation solution is then contacted with the organic mixtures while maintaining the temperature and pressure as above until the mixtures and solution reach equilibrium to extract the preselected low molecular weight components from the organic mixtures. Finally, the entrainer/extracted components portion of the equilibrium mixture is isolated from the separation solution. 1 fig.

  18. Process for separating nitrogen from methane using microchannel process technology

    DOEpatents

    Tonkovich, Anna Lee [Marysville, OH; Qiu, Dongming [Dublin, OH; Dritz, Terence Andrew [Worthington, OH; Neagle, Paul [Westerville, OH; Litt, Robert Dwayne [Westerville, OH; Arora, Ravi [Dublin, OH; Lamont, Michael Jay [Hilliard, OH; Pagnotto, Kristina M [Cincinnati, OH

    2007-07-31

    The disclosed invention relates to a process for separating methane or nitrogen from a fluid mixture comprising methane and nitrogen, the process comprising: (A) flowing the fluid mixture into a microchannel separator, the microchannel separator comprising a plurality of process microchannels containing a sorption medium, the fluid mixture being maintained in the microchannel separator until at least part of the methane or nitrogen is sorbed by the sorption medium, and removing non-sorbed parts of the fluid mixture from the microchannel separator; and (B) desorbing the methane or nitrogen from the sorption medium and removing the desorbed methane or nitrogen from the microchannel separator. The process is suitable for upgrading methane from coal mines, landfills, and other sub-quality sources.

  19. Mathematical Model of Nonstationary Separation Processes Proceeding in the Cascade of Gas Centrifuges in the Process of Separation of Multicomponent Isotope Mixtures

    NASA Astrophysics Data System (ADS)

    Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.

    2017-03-01

    We have developed and implemented in software a mathematical model of the nonstationary separation processes proceeding in cascades of gas centrifuges during the separation of multicomponent isotope mixtures. With the use of this model, the parameters of the separation process for germanium isotopes have been calculated. It has been shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters during the separation of multicomponent isotope mixtures.

  20. Rotationally symmetric viscous gas flows

    NASA Astrophysics Data System (ADS)

    Weigant, W.; Plotnikov, P. I.

    2017-03-01

    The Dirichlet boundary value problem for the Navier-Stokes equations of a barotropic viscous compressible fluid is considered. The flow region and the data of the problem are assumed to be invariant under rotations about a fixed axis. The existence of rotationally symmetric weak solutions for all adiabatic exponents from the interval (γ*,∞) with a critical exponent γ* < 4/3 is proved.

  1. Thermoelectric DC conductivities in hyperscaling violating Lifshitz theories

    NASA Astrophysics Data System (ADS)

    Cremonini, Sera; Cvetič, Mirjam; Papadimitriou, Ioannis

    2018-04-01

    We analytically compute the thermoelectric conductivities at zero frequency (DC) in the holographic dual of a four dimensional Einstein-Maxwell-Axion-Dilaton theory that admits a class of asymptotically hyperscaling violating Lifshitz backgrounds with a dynamical exponent z and hyperscaling violating parameter θ. We show that the heat current in the dual Lifshitz theory involves the energy flux, which is an irrelevant operator for z > 1. The linearized fluctuations relevant for computing the thermoelectric conductivities turn on a source for this irrelevant operator, leading to several novel and non-trivial aspects in the holographic renormalization procedure and the identification of the physical observables in the dual theory. Moreover, imposing Dirichlet or Neumann boundary conditions on the spatial components of one of the two Maxwell fields present leads to different thermoelectric conductivities. Dirichlet boundary conditions reproduce the thermoelectric DC conductivities obtained from the near horizon analysis of Donos and Gauntlett, while Neumann boundary conditions result in a new set of DC conductivities. We make preliminary analytical estimates for the temperature behavior of the thermoelectric matrix in appropriate regions of parameter space. In particular, at large temperatures we find that the only case which could lead to a linear resistivity ρ ∼ T corresponds to z = 4/3.

  2. Repulsive Casimir effect from extra dimensions and Robin boundary conditions: From branes to pistons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elizalde, E.; Odintsov, S. D.; Institucio Catalana de Recerca i Estudis Avanccats

    2009-03-15

    We evaluate the Casimir energy and force for a massive scalar field with general curvature coupling parameter, subject to Robin boundary conditions on two codimension-one parallel plates, located on a (D+1)-dimensional background spacetime with an arbitrary internal space. The most general case of different Robin coefficients on the two separate plates is considered. Independently of the geometry of the internal space, the Casimir forces are seen to be attractive for the special cases of Dirichlet or Neumann boundary conditions on both plates and repulsive for Dirichlet boundary conditions on one plate and Neumann boundary conditions on the other. For Robin boundary conditions, the Casimir forces can be either attractive or repulsive, depending on the Robin coefficients and the separation between the plates, which is remarkable and useful. Indeed, we demonstrate the existence of an equilibrium point for the interplate distance, which is stabilized due to the Casimir force, and show that stability is enhanced by the presence of the extra dimensions. Applications of these properties in braneworld models are discussed. Finally, the corresponding results are generalized to the geometry of a piston of arbitrary cross section.

  3. Latent Dirichlet Allocation (LDA) for Sentiment Analysis Toward Tourism Review in Indonesia

    NASA Astrophysics Data System (ADS)

    Putri, IR; Kusumaningrum, R.

    2017-01-01

    The tourism industry is a foreign-exchange sector with considerable development potential in Indonesia. Compared to other Southeast Asian countries such as Malaysia, with 18 million tourists, and Singapore, with 20 million, Indonesia, the largest country in Southeast Asia, has failed to attract comparable tourist numbers: it attracted only 8.8 million foreign tourists in 2013, and the number of foreign visitors tends to decrease each year. Apart from infrastructure problems, marketing and management are also obstacles to tourism growth. Stakeholders should carry out evaluation and self-analysis to respond to this problem and capture opportunities related to tourism satisfaction as expressed in tourist reviews. Existing approaches rely on subjective statistics collected by voting or grading from random users, so the results are not fully accountable. We therefore propose sentiment analysis with a probabilistic topic model, the Latent Dirichlet Allocation (LDA) method, to read the general tendency of tourist reviews as topics that can be classified into positive and negative sentiment.
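
    A rough illustration of the topic-modelling step described above, using scikit-learn's LatentDirichletAllocation on a handful of invented review snippets. The example texts, the number of topics, and the idea of labelling topics as positive or negative afterwards are assumptions made purely for illustration, not the authors' actual pipeline.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Hypothetical tourist reviews; in practice these would be scraped review texts.
    reviews = [
        "beautiful beach and friendly people, great food",
        "hotel was dirty and the staff were rude",
        "amazing temples, but the traffic was terrible",
        "loved the diving, clear water and cheap guesthouses",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(reviews)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(counts)   # per-review topic proportions

    # Inspect the top words of each topic; assigning topics to positive or
    # negative sentiment would be a separate, manual or lexicon-based step.
    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[-5:][::-1]]
        print(f"topic {k}: {top}")
    ```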

  4. Synthesis and X-ray Crystallography of [Mg(H2O)6][AnO2(C2H5COO)3]2 (An = U, Np, or Pu).

    PubMed

    Serezhkin, Viktor N; Grigoriev, Mikhail S; Abdulmyanov, Aleksey R; Fedoseev, Aleksandr M; Savchenkov, Anton V; Serezhkina, Larisa B

    2016-08-01

    Synthesis and X-ray crystallography of single crystals of [Mg(H2O)6][AnO2(C2H5COO)3]2, where An = U (I), Np (II), or Pu (III), are reported. Compounds I-III are isostructural and crystallize in the trigonal crystal system. The structures of I-III are built of hydrated magnesium cations [Mg(H2O)6](2+) and mononuclear [AnO2(C2H5COO)3](-) complexes, which belong to the AB(01)3 crystallochemical group of uranyl complexes (A = AnO2(2+), B(01) = C2H5COO(-)). Peculiarities of intermolecular interactions in the structures of [Mg(H2O)6][UO2(L)3]2 complexes depending on the carboxylate ion L (acetate, propionate, or n-butyrate) are investigated using the method of molecular Voronoi-Dirichlet polyhedra. Actinide contraction in the series of U(VI)-Np(VI)-Pu(VI) in compounds I-III is reflected in a decrease in the mean An═O bond lengths and in the volume and sphericity degree of Voronoi-Dirichlet polyhedra of An atoms.

  5. Application of the perfectly matched layer in 2.5D marine controlled-source electromagnetic modeling

    NASA Astrophysics Data System (ADS)

    Li, Gang; Han, Bo

    2017-09-01

    In the traditional framework of EM modeling algorithms, the Dirichlet boundary condition is usually used, which assumes that the field values are zero at the boundaries. This crude condition requires that the boundaries be sufficiently far away from the area of interest. Although cell sizes can become larger toward the boundaries as the electromagnetic wave propagates diffusively, a large modeling area may still be necessary to mitigate the boundary artifacts. In this paper, the complex frequency-shifted perfectly matched layer (CFS-PML) in stretching Cartesian coordinates is successfully applied to 2.5D frequency-domain marine controlled-source electromagnetic (CSEM) field modeling. By using this PML boundary, one can restrict the modeling area of interest to the target region. Only a few absorbing layers surrounding the computational area can effectively suppress the artificial boundary effect without losing numerical accuracy. A 2.5D marine CSEM modeling scheme with the CFS-PML is developed using the staggered finite-difference discretization. This modeling algorithm using the CFS-PML is of high accuracy and offers savings in computational time and memory compared with the scheme using the Dirichlet boundary. For 3D problems, these time and memory savings should be even more significant.
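
    The essential ingredient of a CFS-PML is the complex coordinate-stretching factor s = κ + σ/(α + iω) applied inside the absorbing layers. The sketch below evaluates a graded version of this factor; the grading profile and the default parameter values are generic illustrations, not the settings of the cited 2.5D CSEM scheme.

    ```python
    import numpy as np

    def cfs_pml_stretch(depth, thickness, omega, kappa_max=1.0,
                        sigma_max=1.0, alpha_max=0.1, m=2):
        """Complex frequency-shifted PML stretching factor s = kappa + sigma/(alpha + i*omega).

        `depth` is the distance into a PML of the given `thickness`; the polynomial
        grading exponent `m` and the parameter maxima are illustrative defaults.
        """
        xi = np.clip(depth / thickness, 0.0, 1.0)
        kappa = 1.0 + (kappa_max - 1.0) * xi**m
        sigma = sigma_max * xi**m
        alpha = alpha_max * (1.0 - xi)      # CFS shift, largest at the inner interface
        return kappa + sigma / (alpha + 1j * omega)

    # Example: stretching factors across a 5-point profile in a 500 m thick PML at 0.25 Hz
    omega = 2 * np.pi * 0.25
    print(cfs_pml_stretch(np.linspace(0.0, 500.0, 5), 500.0, omega))
    ```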

  6. The spectra of rectangular lattices of quantum waveguides

    NASA Astrophysics Data System (ADS)

    Nazarov, S. A.

    2017-02-01

    We obtain asymptotic formulae for the spectral segments of a thin (h ≪ 1) rectangular lattice of quantum waveguides which is described by a Dirichlet problem for the Laplacian. We establish that the structure of the spectrum of the lattice is incorrectly described by the commonly accepted quantum graph model with the traditional Kirchhoff conditions at the vertices. It turns out that the lengths of the spectral segments are infinitesimals of order O(e^{-δ/h}), δ > 0, and O(h) as h → +0, and gaps of width O(h^{-2}) and O(1) arise between them in the low-frequency and middle-frequency spectral ranges respectively. The first spectral segment is generated by the (unique) eigenvalue in the discrete spectrum of an infinite cross-shaped waveguide Θ. The absence of bounded solutions of the problem in Θ at the threshold frequency means that the correct model of the lattice is a graph with Dirichlet conditions at the vertices which splits into two infinite subsets of identical edges (intervals). By using perturbations of finitely many joints, we construct any given number of discrete spectrum points of the lattice below the essential spectrum as well as inside the gaps.

  7. A new analytical solution solved by triple series equations method for constant-head tests in confined aquifers

    NASA Astrophysics Data System (ADS)

    Chang, Ya-Chi; Yeh, Hund-Der

    2010-06-01

    Constant-head pumping tests are usually employed to determine aquifer parameters, and they can be performed in fully or partially penetrating wells. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann-type no-flow condition is specified over the unscreened part of the test well. The mathematical model describing the aquifer response to a constant-head test performed in a fully penetrating well can be easily solved by the conventional integral transform technique under the uniform Dirichlet-type condition along the rim of the wellbore. However, the boundary condition for a test well with partial penetration should be considered as a mixed-type condition. This mixed boundary value problem in a confined aquifer system of infinite radial extent and finite vertical extent is solved by the Laplace and finite Fourier transforms in conjunction with the triple series equations method. This approach provides analytical results for the drawdown in a partially penetrating well for an arbitrary location of the well screen in a finite-thickness aquifer. The semi-analytical solutions are particularly useful in practical applications from a computational point of view.

  8. Extending information retrieval methods to personalized genomic-based studies of disease.

    PubMed

    Ye, Shuyun; Dawson, John A; Kendziorski, Christina

    2014-01-01

    Genomic-based studies of disease now involve diverse types of data collected on large groups of patients. A major challenge facing statistical scientists is how best to combine the data, extract important features, and comprehensively characterize the ways in which they affect an individual's disease course and likelihood of response to treatment. We have developed a survival-supervised latent Dirichlet allocation (survLDA) modeling framework to address these challenges. Latent Dirichlet allocation (LDA) models have proven extremely effective at identifying themes common across large collections of text, but applications to genomics have been limited. Our framework extends LDA to the genome by considering each patient as a "document" with "text" detailing his/her clinical events and genomic state. We then further extend the framework to allow for supervision by a time-to-event response. The model enables the efficient identification of collections of clinical and genomic features that co-occur within patient subgroups, and then characterizes each patient by those features. An application of survLDA to The Cancer Genome Atlas ovarian project identifies informative patient subgroups showing differential response to treatment, and validation in an independent cohort demonstrates the potential for patient-specific inference.

  9. Honeybees Learn Odour Mixtures via a Selection of Key Odorants

    PubMed Central

    Reinhard, Judith; Sinclair, Michael; Srinivasan, Mandyam V.; Claudianos, Charles

    2010-01-01

    Background The honeybee has to detect, process and learn numerous complex odours from her natural environment on a daily basis. Most of these odours are floral scents, which are mixtures of dozens of different odorants. To date, it is still unclear how the bee brain unravels the complex information contained in scent mixtures. Methodology/Principal Findings This study investigates learning of complex odour mixtures in honeybees using a simple olfactory conditioning procedure, the Proboscis-Extension-Reflex (PER) paradigm. Restrained honeybees were trained to three scent mixtures composed of 14 floral odorants each, and then tested with the individual odorants of each mixture. Bees did not respond to all odorants of a mixture equally: They responded well to a selection of key odorants, which were unique for each of the three scent mixtures. Bees showed less or very little response to the other odorants of the mixtures. The bees' response to mixtures composed of only the key odorants was as good as to the original mixtures of 14 odorants. A mixture composed of the other, non-key odorants elicited a significantly lower response. Neither an odorant's volatility, nor its molecular structure, nor learning efficiencies for individual odorants affected whether an odorant became a key odorant for a particular mixture. Odorant concentration had a positive effect, with odorants at high concentration likely to become key odorants. Conclusions/Significance Our study suggests that the brain processes complex scent mixtures by predominantly learning information from selected key odorants. Our observations on key odorant learning lend significant support to previous work on olfactory learning and mixture processing in honeybees. PMID:20161714

  10. Simultaneous resonant enhanced multiphoton ionization and electron avalanche ionization in gas mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shneider, Mikhail N.; Zhang Zhili; Miles, Richard B.

    2008-07-15

    Resonant enhanced multiphoton ionization (REMPI) and electron avalanche ionization (EAI) are measured simultaneously in Ar:Xe mixtures at different partial pressures of mixture components. A simple theory for combined REMPI+EAI in gas mixtures is developed. It is shown that the REMPI electrons seed the avalanche process, and thus the avalanche process amplifies the REMPI signal. Possible applications are discussed.

  11. 3D variational brain tumor segmentation using Dirichlet priors on a clustered feature set.

    PubMed

    Popuri, Karteek; Cobzas, Dana; Murtha, Albert; Jägersand, Martin

    2012-07-01

    Brain tumor segmentation is a required step before any radiation treatment or surgery. When performed manually, segmentation is time consuming and prone to human errors. Therefore, there have been significant efforts to automate the process. But, automatic tumor segmentation from MRI data is a particularly challenging task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. In our work, we propose an automatic brain tumor segmentation method that addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multidimensional feature set. Then, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this work is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned region statistics in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters from the normal brain region to be in the tumor region. This leads to a better disambiguation of the tumor from brain tissue. We evaluated the performance of our automatic segmentation method on 15 real MRI scans of brain tumor patients, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Validation with the expert segmentation labels yielded encouraging results: Jaccard (58%), Precision (81%), Recall (67%), Hausdorff distance (24 mm). Using priors on the brain/tumor appearance, our proposed automatic 3D variational segmentation method was able to better disambiguate the tumor from the surrounding tissue.
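
    One simple way to bias region statistics with a Dirichlet prior, in the spirit of the approach described above, is to estimate the cluster proportions of a region as Dirichlet-smoothed counts. The sketch below is a generic illustration under assumed pseudo-counts; it is not the variational energy actually used in the paper.

    ```python
    import numpy as np

    def cluster_proportions(labels, n_clusters, alpha):
        """Posterior-mean cluster proportions under a Dirichlet(alpha) prior.

        Giving clusters that dominate normal brain tissue small pseudo-counts in
        `alpha` (when estimating the tumor region) discourages them from being
        attributed to the tumor, mirroring the role of the prior in the abstract.
        """
        counts = np.bincount(np.asarray(labels), minlength=n_clusters).astype(float)
        alpha = np.asarray(alpha, dtype=float)
        return (counts + alpha) / (counts.sum() + alpha.sum())

    # Hypothetical cluster labels of voxels currently inside the evolving contour
    labels = [0, 0, 1, 2, 2, 2, 3]
    print(cluster_proportions(labels, n_clusters=4, alpha=[0.1, 0.1, 5.0, 5.0]))
    ```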

  12. What impact do assumptions about missing data have on conclusions? A practical sensitivity analysis for a cancer survival registry.

    PubMed

    Smuk, M; Carpenter, J R; Morris, T P

    2017-02-06

    Within epidemiological and clinical research, missing data are a common issue and often overlooked in publications. When the issue of missing observations is addressed, it is usually assumed that the missing data are 'missing at random' (MAR). This assumption should be checked for plausibility; however, it is untestable, so inferences should be assessed for robustness to departures from missing at random. We highlight the method of pattern mixture sensitivity analysis after multiple imputation, using colorectal cancer data as an example. We focus on the Dukes' stage variable, which has the highest proportion of missing observations. First, we find the probability of being in each Dukes' stage given the MAR imputed dataset. We use these probabilities in a questionnaire to elicit prior beliefs from experts on what they believe the probability would be in the missing data. The questionnaire responses are then used in a Dirichlet draw to create a Bayesian 'missing not at random' (MNAR) prior to impute the missing observations. The model of interest is applied and inferences are compared to those from the MAR imputed data. The inferences were largely insensitive to departure from MAR. Inferences under MNAR suggested a smaller association between Dukes' stage and death, though the association remained positive and with similarly low p values. We conclude by discussing the positives and negatives of our method and highlight the importance of making people aware of the need to test the MAR assumption.
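
    A minimal sketch of the Dirichlet-draw imputation step described above: expert beliefs, encoded as pseudo-counts over the Dukes' stages, are used to draw stage probabilities and impute the missing categories. The pseudo-count values and the number of missing records are invented for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical elicited prior: experts' pseudo-counts for the probability of
    # each Dukes' stage (A-D) among records where the stage is missing.
    expert_pseudocounts = np.array([4.0, 10.0, 18.0, 8.0])

    n_missing = 120                                    # records with missing stage
    stage_probs = rng.dirichlet(expert_pseudocounts)   # one MNAR draw of stage probabilities
    imputed = rng.choice(4, size=n_missing, p=stage_probs)

    # In a full sensitivity analysis this draw-and-impute step is repeated for each
    # imputed dataset, the analysis model refitted, and the results pooled.
    print(np.bincount(imputed, minlength=4) / n_missing)
    ```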

  13. Three Boundary Conditions for Computing the Fixed-Point Property in Binary Mixture Data.

    PubMed

    van Maanen, Leendert; Couto, Joaquina; Lebreton, Mael

    2016-01-01

    The notion of "mixtures" has become pervasive in behavioral and cognitive sciences, due to the success of dual-process theories of cognition. However, providing support for such dual-process theories is not trivial, as it crucially requires properties in the data that are specific to a mixture of cognitive processes. In theory, one such property could be the fixed-point property of binary mixture data, applied, for instance, to response times. In that case, the fixed-point property entails that response time distributions obtained in an experiment in which the mixture proportion is manipulated would have a common density point. In the current article, we discuss the application of the fixed-point property and identify three boundary conditions under which the fixed-point property will not be interpretable. In Boundary condition 1, a finding in support of the fixed-point will be moot because of a lack of difference between conditions. Boundary condition 2 refers to the case in which the extreme conditions are so different that a mixture may display bimodality. In this case, a mixture hypothesis is clearly supported, yet the fixed-point may not be found. In Boundary condition 3 the fixed-point may also not be present, yet a mixture might still exist but is occluded due to additional changes in behavior. Finding the fixed-point property provides strong support for a dual-process account, yet the boundary conditions that we identify should be considered before making inferences about underlying psychological processes.

  14. Numerical study of underwater dispersion of dilute and dense sediment-water mixtures

    NASA Astrophysics Data System (ADS)

    Chan, Ziying; Dao, Ho-Minh; Tan, Danielle S.

    2018-05-01

    As part of the nodule-harvesting process, sediment tailings are released underwater. Due to the long period of clouding in the water during the settling process, this presents a significant environmental and ecological concern. One possible solution is to release a mixture of sediment tailings and seawater, with the aim of reducing the settling duration as well as the amount of spreading. In this paper, we present some results of numerical simulations using the smoothed particle hydrodynamics (SPH) method to model the release of a fixed volume of pre-mixed sediment-water mixture into a larger body of quiescent water. Both the sediment-water mixture and the “clean” water are modeled as two different fluids, with concentration-dependent bulk properties of the sediment-water mixture adjusted according to the initial solids concentration. This numerical model was validated in a previous study, which indicated significant differences in the dispersion and settling process between dilute and dense mixtures, and that a dense mixture may be preferable. For this study, we investigate a wider range of volumetric concentration with the aim of determining the optimum volumetric concentration, as well as its overall effectiveness compared to the original process (100% sediment).

  15. 76 FR 31824 - Chemical Mixtures Containing Listed Forms of Phosphorus and Change in Application Process

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-02

    ... 1117-AA66 Chemical Mixtures Containing Listed Forms of Phosphorus and Change in Application Process... establish those chemical mixtures containing red phosphorus or hypophosphorous acid and its salts (hereinafter ``regulated phosphorus'') that shall automatically qualify for exemption from the [[Page 31825...

  16. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849

  17. Multiclass Data Segmentation using Diffuse Interface Methods on Graphs

    DTIC Science & Technology

    2014-01-01

    ...[37] that performs interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph... continuous setting carry over to the discrete graph representation. For general data segmentation, Bresson et al. in [8] present rigorous convergence...

  18. On the Effective Construction of Compactly Supported Wavelets Satisfying Homogenous Boundary Conditions on the Interval

    NASA Technical Reports Server (NTRS)

    Chiavassa, G.; Liandrat, J.

    1996-01-01

    We construct compactly supported wavelet bases satisfying homogeneous boundary conditions on the interval (0,1). The maximum features of multiresolution analysis on the line are retained, including polynomial approximation and tree algorithms. The case of H_0^1(0, 1) is detailed, and numerical values, required for the implementation, are provided for the Neumann and Dirichlet boundary conditions.

  19. Interactions between Mathematics and Physics: The History of the Concept of Function--Teaching with and about Nature of Mathematics

    ERIC Educational Resources Information Center

    Kjeldsen, Tinne Hoff; Lützen, Jesper

    2015-01-01

    In this paper, we discuss the history of the concept of function and emphasize in particular how problems in physics have led to essential changes in its definition and application in mathematical practices. Euler defined a function as an analytic expression, whereas Dirichlet defined it as a variable that depends in an arbitrary manner on another…

  20. The accurate solution of Poisson's equation by expansion in Chebyshev polynomials

    NASA Technical Reports Server (NTRS)

    Haidvogel, D. B.; Zang, T.

    1979-01-01

    A Chebyshev expansion technique is applied to Poisson's equation on a square with homogeneous Dirichlet boundary conditions. The spectral equations are solved in two ways - by alternating direction and by matrix diagonalization methods. Solutions are sought to both oscillatory and mildly singular problems. The accuracy and efficiency of the Chebyshev approach compare favorably with those of standard second- and fourth-order finite-difference methods.
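
    A one-dimensional analogue of the approach described above, assuming the standard Chebyshev (Gauss-Lobatto) collocation construction: the differentiation matrix is squared, the first and last rows and columns are dropped to impose homogeneous Dirichlet conditions, and the resulting dense system is solved directly. The right-hand side is an arbitrary example, and the 2D alternating-direction and matrix-diagonalization variants discussed in the abstract are not shown.

    ```python
    import numpy as np

    def cheb(N):
        """Chebyshev differentiation matrix and Gauss-Lobatto points on [-1, 1]."""
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))
        return D, x

    # Solve u''(x) = f(x) on (-1, 1) with u(-1) = u(1) = 0
    N = 32
    D, x = cheb(N)
    D2 = (D @ D)[1:N, 1:N]          # restrict to interior collocation points
    f = np.exp(x[1:N])              # example right-hand side
    u = np.concatenate(([0.0], np.linalg.solve(D2, f), [0.0]))
    ```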

  1. Manifold Matching: Joint Optimization of Fidelity and Commensurability

    DTIC Science & Technology

    2011-11-12

    ...identified separately in p◦m, will be geometrically incommensurate (see Figure 7). Thus the null distribution of the test statistic will be inflated... into the objective function obviates the geometric incommensurability phenomenon. Thus we can establish that, for a range of Dirichlet product model... from the geometric incommensurability phenomenon. Then q p implies that CCA suffers from the spurious correlation phenomenon with high probability...

  2. The tunneling effect for a class of difference operators

    NASA Astrophysics Data System (ADS)

    Klein, Markus; Rosenberger, Elke

    We analyze a general class of self-adjoint difference operators H𝜀 = T𝜀 + V𝜀 on ℓ^2((𝜀ℤ)^d), where V𝜀 is a multi-well potential and 𝜀 is a small parameter. We give a coherent review of our results on tunneling up to new sharp results on the level of complete asymptotic expansions (see [30-35]). Our emphasis is on general ideas and strategy, possibly of interest for a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. Then the eigenvalue problem for the Hamiltonian H𝜀 is treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by H𝜀, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows with microlocal techniques that the first n eigenvalues of H𝜀 converge to the first n eigenvalues of the direct sum of harmonic oscillators on ℝ^d located at several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of H𝜀. These are obtained from eigenfunctions or quasimodes for the operator H𝜀, acting on L^2(ℝ^d), via restriction to the lattice (𝜀ℤ)^d. Tunneling is then described by a certain interaction matrix, similar to the analysis for the Schrödinger operator (see [22]); the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted ℓ^2-estimates for the difference of eigenfunctions of Dirichlet operators in neighborhoods of the different wells and the associated WKB-expansions at the wells. In the last step, we derive full asymptotic expansions for interactions between two “wells” (minima) of the potential energy, in particular for the discrete tunneling effect. Here we essentially use analysis on phase space, complexified in the momentum variable. These results are as sharp as the classical results for the Schrödinger operator in [22].

  3. Rabbit Neonates and Human Adults Perceive a Blending 6-Component Odor Mixture in a Comparable Manner

    PubMed Central

    Sinding, Charlotte; Thomas-Danguin, Thierry; Chambault, Adeline; Béno, Noelle; Dosne, Thibaut; Chabanet, Claire; Schaal, Benoist; Coureaud, Gérard

    2013-01-01

    Young and adult mammals are constantly exposed to chemically complex stimuli. The olfactory system allows for a dual processing of relevant information from the environment either as single odorants in mixtures (elemental perception) or as mixtures of odorants as a whole (configural perception). However, it seems that human adults have certain limits in elemental perception of odor mixtures, as suggested by their inability to identify each odorant in mixtures of more than 4 components. Here, we explored some of these limits by evaluating the perception of three 6-odorant mixtures in human adults and newborn rabbits. Using free-sorting tasks in humans, we investigated the configural or elemental perception of these mixtures, or of 5-component sub-mixtures, or of the 6-odorant mixtures with modified odorants' proportion. In rabbit pups, the perception of the same mixtures was evaluated by measuring the orocephalic sucking response to the mixtures or their components after conditioning to one of these stimuli. The results revealed that one mixture, previously shown to carry the specific odor of red cordial in humans, was indeed configurally processed in humans and in rabbits while the two other 6-component mixtures were not. Moreover, in both species, such configural perception was specific not only to the 6 odorants included in the mixture but also to their respective proportion. Interestingly, rabbit neonates also responded to each odorant after conditioning to the red cordial mixture, which demonstrates their ability to perceive elements in addition to configuration in this complex mixture. Taken together, the results provide new insights related to the processing of relatively complex odor mixtures in mammals and the inter-species conservation of certain perceptual mechanisms; the results also revealed some differences in the expression of these capacities between species putatively linked to developmental and ecological constraints. PMID:23341948

  4. PROCESS OF PRODUCING SHAPED PLUTONIUM

    DOEpatents

    Anicetti, R.J.

    1959-08-11

    A process is presented for producing and casting high purity plutonium metal in one step from plutonium tetrafluoride. The process comprises heating a mixture of the plutonium tetrafluoride with calcium while the mixture is in contact with and defined as to shape by a material obtained by firing a mixture consisting of calcium oxide and from 2 to 10% by its weight of calcium fluoride at from 1260 to 1370 deg C.

  5. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    PubMed

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  6. Incorporating Topic Assignment Constraint and Topic Correlation Limitation into Clinical Goal Discovering for Clinical Pathway Mining.

    PubMed

    Xu, Xiao; Jin, Tao; Wei, Zhijie; Wang, Jianmin

    2017-01-01

    Clinical pathways are widely used around the world for providing quality medical treatment and controlling healthcare cost. However, expert-designed clinical pathways can hardly deal with the variances among hospitals and patients. This calls for a more dynamic and adaptive process, derived from various clinical data. Topic-based clinical pathway mining is an effective approach to discover a concise process model. Through this approach, the latent topics found by latent Dirichlet allocation (LDA) represent the clinical goals, and process mining methods are used to extract the temporal relations between these topics. However, the topic quality is usually not desirable due to the low performance of LDA on clinical data. In this paper, we incorporate topic assignment constraint and topic correlation limitation into the LDA to enhance the ability of discovering high-quality topics. Two real-world datasets are used to evaluate the proposed method. The results show that the topics discovered by our method have higher coherence, informativeness, and coverage than those of the original LDA. These quality topics are suitable to represent the clinical goals. Also, we illustrate that our method is effective in generating a comprehensive topic-based clinical pathway model.

  7. Incorporating Topic Assignment Constraint and Topic Correlation Limitation into Clinical Goal Discovering for Clinical Pathway Mining

    PubMed Central

    Xu, Xiao; Wei, Zhijie

    2017-01-01

    Clinical pathways are widely used around the world for providing quality medical treatment and controlling healthcare cost. However, expert-designed clinical pathways can hardly deal with the variances among hospitals and patients. This calls for a more dynamic and adaptive process, derived from various clinical data. Topic-based clinical pathway mining is an effective approach to discover a concise process model. Through this approach, the latent topics found by latent Dirichlet allocation (LDA) represent the clinical goals, and process mining methods are used to extract the temporal relations between these topics. However, the topic quality is usually not desirable due to the low performance of LDA on clinical data. In this paper, we incorporate topic assignment constraint and topic correlation limitation into the LDA to enhance the ability of discovering high-quality topics. Two real-world datasets are used to evaluate the proposed method. The results show that the topics discovered by our method have higher coherence, informativeness, and coverage than those of the original LDA. These quality topics are suitable to represent the clinical goals. Also, we illustrate that our method is effective in generating a comprehensive topic-based clinical pathway model. PMID:29065617

  8. Differential Topic Models.

    PubMed

    Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan

    2015-02-01

    In applications we may want to compare different document collections: they could have shared content but also different and unique aspects in particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge such as vocabulary variations in different collections into the model. To deal with the non-conjugate issue between model prior and likelihood in the TPYP, we thus propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections.
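
    To illustrate the distinction the abstract draws between the Pitman-Yor process and the Dirichlet process, the sketch below samples a random partition from the Pitman-Yor Chinese restaurant process; setting the discount d to zero recovers the Dirichlet-process case. The parameter values are arbitrary, and this is not the collapsed sampler used in the paper.

    ```python
    import numpy as np

    def pitman_yor_partition(n, alpha=1.0, d=0.5, rng=None):
        """Sample a partition of n items from the Pitman-Yor Chinese restaurant process.

        `alpha` is the concentration and `d` the discount; d = 0 gives the
        Dirichlet-process CRP, while d > 0 produces heavier (power-law) tails
        in the cluster-size distribution.
        """
        rng = np.random.default_rng() if rng is None else rng
        counts, labels = [], []
        for i in range(n):
            k = len(counts)
            probs = np.array([c - d for c in counts] + [alpha + d * k])
            probs /= alpha + i            # total mass after i customers is alpha + i
            table = rng.choice(k + 1, p=probs)
            if table == k:
                counts.append(1)
            else:
                counts[table] += 1
            labels.append(table)
        return labels, counts

    labels, counts = pitman_yor_partition(1000, alpha=1.0, d=0.5)
    print(len(counts), "clusters; largest has", max(counts), "items")
    ```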

  9. Monitoring and optimizing the co-composting of dewatered sludge: a mixture experimental design approach.

    PubMed

    Komilis, Dimitrios; Evangelou, Alexandros; Voudrias, Evangelos

    2011-09-01

    The management of dewatered wastewater sludge is a major issue worldwide. Sludge disposal to landfills is not sustainable and thus alternative treatment techniques are being sought. The objective of this work was to determine optimal mixing ratios of dewatered sludge with other organic amendments in order to maximize the degradability of the mixtures during composting. This objective was achieved using mixture experimental design principles. An additional objective was to study the impact of the initial C/N ratio and moisture contents on the co-composting process of dewatered sludge. The composting process was monitored through measurements of O(2) uptake rates, CO(2) evolution, temperature profile and solids reduction. Eight (8) runs were performed in 100 L insulated air-tight bioreactors under a dynamic air flow regime. The initial mixtures were prepared using dewatered wastewater sludge, mixed paper wastes, food wastes, tree branches and sawdust at various initial C/N ratios and moisture contents. According to empirical modeling, mixtures of sludge and food waste mixtures at 1:1 ratio (ww, wet weight) maximize degradability. Structural amendments should be maintained below 30% to reach thermophilic temperatures. The initial C/N ratio and initial moisture content of the mixture were not found to influence the decomposition process. The bio C/bio N ratio started from around 10, for all runs, decreased during the middle of the process and increased to up to 20 at the end of the process. The solid carbon reduction of the mixtures without the branches ranged from 28% to 62%, whilst solid N reductions ranged from 30% to 63%. Respiratory quotients had a decreasing trend throughout the composting process. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Method for producing catalysts from coal

    DOEpatents

    Farcasiu, Malvina; Derbyshire, Frank; Kaufman, Phillip B.; Jagtoyen, Marit

    1998-01-01

    A method for producing catalysts from coal is provided comprising mixing an aqueous alkali solution with the coal, heating the aqueous mixture to treat the coal, drying the now-heated aqueous mixture, reheating the mixture to form carbonized material, cooling the mixture, removing excess alkali from the carbonized material, and recovering the carbonized material, wherein the entire process is carried out in controlled atmospheres, and the carbonized material is a hydrocracking or hydrodehalogenation catalyst for liquid phase reactions. The invention also provides for a one-step method for producing catalysts from coal comprising mixing an aqueous alkali solution with the coal to create a mixture, heating the aqueous mixture from an ambient temperature to a predetermined temperature at a predetermined rate, cooling the mixture, and washing the mixture to remove excess alkali from the treated and carbonized material, wherein the entire process is carried out in a controlled atmosphere.

  11. Method for producing catalysts from coal

    DOEpatents

    Farcasiu, M.; Derbyshire, F.; Kaufman, P.B.; Jagtoyen, M.

    1998-02-24

    A method for producing catalysts from coal is provided comprising mixing an aqueous alkali solution with the coal, heating the aqueous mixture to treat the coal, drying the now-heated aqueous mixture, reheating the mixture to form carbonized material, cooling the mixture, removing excess alkali from the carbonized material, and recovering the carbonized material, wherein the entire process is carried out in controlled atmospheres, and the carbonized material is a hydrocracking or hydrodehalogenation catalyst for liquid phase reactions. The invention also provides for a one-step method for producing catalysts from coal comprising mixing an aqueous alkali solution with the coal to create a mixture, heating the aqueous mixture from an ambient temperature to a predetermined temperature at a predetermined rate, cooling the mixture, and washing the mixture to remove excess alkali from the treated and carbonized material, wherein the entire process is carried out in a controlled atmosphere. 1 fig.

  12. Empirical evaluation of sufficient similarity in dose-response for environmental risk assessment of a mixture of 11 pyrethroids.

    EPA Science Inventory

    Chemical mixtures in the environment are often the result of a dynamic process. When dose-response data are available on random samples throughout the process, equivalence testing can be used to determine whether the mixtures are sufficiently similar based on a pre-specified biol...

  13. Process for removing cadmium from scrap metal

    DOEpatents

    Kronberg, J.W.

    1995-04-11

    A process is described for the recovery of a metal, in particular, cadmium contained in scrap, in a stable form. The process comprises the steps of mixing the cadmium-containing scrap with an ammonium carbonate solution, preferably at least a stoichiometric amount of ammonium carbonate, and/or free ammonia, and an oxidizing agent to form a first mixture so that the cadmium will react with the ammonium carbonate to form a water-soluble ammine complex; evaporating the first mixture so that ammine complex dissociates from the first mixture leaving carbonate ions to react with the cadmium and form a second mixture that includes cadmium carbonate; optionally adding water to the second mixture to form a third mixture; adjusting the pH of the third mixture to the acid range whereby the cadmium carbonate will dissolve; and adding at least a stoichiometric amount of sulfide, preferably in the form of hydrogen sulfide or an aqueous ammonium sulfide solution, to the third mixture to precipitate cadmium sulfide. This mixture of sulfide is then preferably digested by heating to facilitate precipitation of large particles of cadmium sulfide. The scrap may be divided by shredding or breaking up to expose additional surface area. Finally, the precipitated cadmium sulfide can be mixed with glass formers and vitrified for permanent disposal. 2 figures.

  14. Process for removing cadmium from scrap metal

    DOEpatents

    Kronberg, J.W.

    1994-01-01

    A process for the recovery of a metal, in particular, cadmium contained in scrap, in a stable form. The process comprises the steps of mixing the cadmium-containing scrap with an ammonium carbonate solution, preferably at least a stoichiometric amount of ammonium carbonate, and/or free ammonia, and an oxidizing agent to form a first mixture so that the cadmium will react with the ammonium carbonate to form a water-soluble ammine complex; evaporating the first mixture so that ammine complex dissociates from the first mixture leaving carbonate ions to react with the cadmium and form a second mixture that includes cadmium carbonate; optionally adding water to the second mixture to form a third mixture; adjusting the pH of the third mixture to the acid range whereby the cadmium carbonate will dissolve; and adding at least a stoichiometric amount of sulfide, preferably in the form of hydrogen sulfide or an aqueous ammonium sulfide solution, to the third mixture to precipitate cadmium sulfide. This mixture of sulfide is then preferably digested by heating to facilitate precipitation of large particles of cadmium sulfide. The scrap may be divided by shredding or breaking up to expose additional surface area. Finally, the precipitated cadmium sulfide can be mixed with glass formers and vitrified for permanent disposal.

  15. Process for removing cadmium from scrap metal

    DOEpatents

    Kronberg, James W.

    1995-01-01

    A process for the recovery of a metal, in particular, cadmium contained in scrap, in a stable form. The process comprises the steps of mixing the cadmium-containing scrap with an ammonium carbonate solution, preferably at least a stoichiometric amount of ammonium carbonate, and/or free ammonia, and an oxidizing agent to form a first mixture so that the cadmium will react with the ammonium carbonate to form a water-soluble ammine complex; evaporating the first mixture so that ammine complex dissociates from the first mixture leaving carbonate ions to react with the cadmium and form a second mixture that includes cadmium carbonate; optionally adding water to the second mixture to form a third mixture; adjusting the pH of the third mixture to the acid range whereby the cadmium carbonate will dissolve; and adding at least a stoichiometric amount of sulfide, preferably in the form of hydrogen sulfide or an aqueous ammonium sulfide solution, to the third mixture to precipitate cadmium sulfide. This mixture of sulfide is then preferably digested by heating to facilitate precipitation of large particles of cadmium sulfide. The scrap may be divided by shredding or breaking up to expose additional surface area. Finally, the precipitated cadmium sulfide can be mixed with glass formers and vitrified for permanent disposal.

  16. Quantitative characterization of the spatial distribution of particles in materials: Application to materials processing

    NASA Technical Reports Server (NTRS)

    Parse, Joseph B.; Wert, J. A.

    1991-01-01

    Inhomogeneities in the spatial distribution of second phase particles in engineering materials are known to affect certain mechanical properties. Progress in this area has been hampered by the lack of a convenient method for quantitative description of the spatial distribution of the second phase. This study intends to develop a broadly applicable method for the quantitative analysis and description of the spatial distribution of second phase particles. The method was designed to operate on a desktop computer. The Dirichlet tessellation technique (geometrical method for dividing an area containing an array of points into a set of polygons uniquely associated with the individual particles) was selected as the basis of an analysis technique implemented on a PC. This technique is being applied to the production of Al sheet by PM processing methods; vacuum hot pressing, forging, and rolling. The effect of varying hot working parameters on the spatial distribution of aluminum oxide particles in consolidated sheet is being studied. Changes in distributions of properties such as through-thickness near-neighbor distance correlate with hot-working reduction.
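
    A minimal sketch of the tessellation step described above, using SciPy's Voronoi construction (the Dirichlet tessellation) on randomly placed points standing in for particle centres. Cells touching the outer boundary are simply skipped here, whereas a production analysis would handle edge effects explicitly; the point set and statistics are illustrative only.

    ```python
    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull

    # Hypothetical particle centres, e.g. extracted from a micrograph
    rng = np.random.default_rng(0)
    points = rng.uniform(0.0, 1.0, size=(200, 2))

    vor = Voronoi(points)

    # Area of each bounded Voronoi (Dirichlet) cell; unbounded edge cells are skipped
    areas = []
    for region_index in vor.point_region:
        region = vor.regions[region_index]
        if len(region) == 0 or -1 in region:
            continue                                             # cell extends to infinity
        areas.append(ConvexHull(vor.vertices[region]).volume)    # 2-D "volume" is the area

    areas = np.array(areas)
    print(f"mean cell area {areas.mean():.4f}, "
          f"coefficient of variation {areas.std() / areas.mean():.2f}")
    ```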

  17. A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*

    PubMed Central

    Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.

    2013-01-01

    This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186

  18. Regenerative process and system for the simultaneous removal of particulates and the oxides of sulfur and nitrogen from a gas stream

    DOEpatents

    Cohen, M.R.; Gal, E.

    1993-04-13

    A process and system are described for simultaneously removing from a gaseous mixture, sulfur oxides by means of a solid sulfur oxide acceptor on a porous carrier, nitrogen oxides by means of ammonia gas and particulate matter by means of filtration and for the regeneration of loaded solid sulfur oxide acceptor. Finely-divided solid sulfur oxide acceptor is entrained in a gaseous mixture to deplete sulfur oxides from the gaseous mixture, the finely-divided solid sulfur oxide acceptor being dispersed on a porous carrier material having a particle size up to about 200 microns. In the process, the gaseous mixture is optionally pre-filtered to remove particulate matter and thereafter finely-divided solid sulfur oxide acceptor is injected into the gaseous mixture.

  19. Increased yield stability of field-grown winter barley (Hordeum vulgare L.) varietal mixtures through ecological processes

    PubMed Central

    Creissen, Henry E.; Jorgensen, Tove H.; Brown, James K.M.

    2016-01-01

    Crop variety mixtures have the potential to increase yield stability in highly variable and unpredictable environments, yet knowledge of the specific mechanisms underlying enhanced yield stability has been limited. Ecological processes in genetically diverse crops were investigated by conducting field trials with winter barley varieties (Hordeum vulgare), grown as monocultures or as three-way mixtures in fungicide treated and untreated plots at three sites. Mixtures achieved yields comparable to the best performing monocultures whilst enhancing yield stability despite being subject to multiple predicted and unpredicted abiotic and biotic stresses including brown rust (Puccinia hordei) and lodging. There was compensation through competitive release because the most competitive variety overyielded in mixtures thereby compensating for less competitive varieties. Facilitation was also identified as an important ecological process within mixtures by reducing lodging. This study indicates that crop varietal mixtures have the capacity to stabilise productivity even when environmental conditions and stresses are not predicted in advance. Varietal mixtures provide a means of increasing crop genetic diversity without the need for extensive breeding efforts. They may confer enhanced resilience to environmental stresses and thus be a desirable component of future cropping systems for sustainable arable farming. PMID:27375312

  20. Sterilization of fermentation vessels by ethanol/water mixtures

    DOEpatents

    Wyman, Charles E.

    1999-02-09

    A method for sterilizing process fermentation vessels with a concentrated alcohol and water mixture integrated in a fuel alcohol or other alcohol production facility. Hot, concentrated alcohol is drawn from a distillation or other purification stage and sprayed into the empty fermentation vessels. This sterilizing alcohol/water mixture should be of a sufficient concentration, preferably higher than 12% alcohol by volume, to be toxic to undesirable microorganisms. Following sterilization, this sterilizing alcohol/water mixture can be recovered back into the same distillation or other purification stage from which it was withdrawn. The process of this invention has its best application in, but is not limited to, batch fermentation processes, wherein the fermentation vessels must be emptied, cleaned, and sterilized following completion of each batch fermentation process.

  1. Processes of Heat Transfer in Rheologically Unstable Mixtures of Organic Origin

    NASA Astrophysics Data System (ADS)

    Tkachenko, S. I.; Pishenina, N. V.; Rumyantseva, T. Yu.

    2014-05-01

    The dependence of the coefficient of heat transfer from the heat-exchange surface to a rheologically unstable organic mixture on the thermohydrodynamic state of the mixture and its prehistory has been established. A method for multivariant investigation of the process of heat transfer in compound organic mixtures has been proposed; this method makes it possible to evaluate the character and peculiarities of change in the rheological structure of the mixture as functions of the thermohydrodynamic conditions of its treatment. The possibility of evaluating the intensity of heat transfer in a biotechnological system for production of energy carriers at the step of its designing by multivariant investigation of the heat-transfer intensity in rheologically unstable organic mixtures with account of their prehistory has been shown.

  2. Multiclass Data Segmentation Using Diffuse Interface Methods on Graphs

    DTIC Science & Technology

    2014-01-01

    ...interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph... Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph-based energy function that... over to the discrete graph representation. For general data segmentation, Bresson et al. in [8] present rigorous convergence results for two algorithms...

  3. Parameter Estimation for the Dirichlet-Multinomial Distribution Using Supplementary Beta-Binomial Data.

    DTIC Science & Technology

    1987-07-01

    ...multinomial distribution as a magazine exposure model. J. of Marketing Research, 21, 100-106. Lehmann, E.L. (1983). Theory of Point Estimation. John Wiley and... Marketing Research, 21, 89-99.

  4. Multispike solutions for the Brezis-Nirenberg problem in dimension three

    NASA Astrophysics Data System (ADS)

    Musso, Monica; Salazar, Dora

    2018-06-01

    We consider the problem Δu + λu + u^5 = 0, u > 0, in a smooth bounded domain Ω in ℝ^3, under zero Dirichlet boundary conditions. We obtain solutions to this problem exhibiting multiple bubbling behavior at k different points of the domain as λ tends to a special positive value λ_0, which we characterize in terms of the Green function of -Δ - λ.

  5. Characterization and Modeling of Thoraco-Abdominal Response to Blast Waves. Volume 4. Biomechanical Model of Thorax Response to Blast Loading

    DTIC Science & Technology

    1985-05-01

    ...non-zero Dirichlet boundary conditions and/or general mixed-type boundary conditions. Note that the Neumann-type boundary condition enters the problem by... Background... General Description... ANATOMICAL... human and various loading conditions for the definition of a generalized safety guideline of blast exposure. To model the response of a sheep torso...

  6. Visibility of quantum graph spectrum from the vertices

    NASA Astrophysics Data System (ADS)

    Kühn, Christian; Rohleder, Jonathan

    2018-03-01

    We investigate the relation between the eigenvalues of the Laplacian with Kirchhoff vertex conditions on a finite metric graph and a corresponding Titchmarsh-Weyl function (a parameter-dependent Neumann-to-Dirichlet map). We give a complete description of all real resonances, including multiplicities, in terms of the edge lengths and the connectivity of the graph, and apply it to characterize all eigenvalues which are visible for the Titchmarsh-Weyl function.

  7. A nonlinear ordinary differential equation associated with the quantum sojourn time

    NASA Astrophysics Data System (ADS)

    Benguria, Rafael D.; Duclos, Pierre; Fernández, Claudio; Sing-Long, Carlos

    2010-11-01

    We study a nonlinear ordinary differential equation on the half-line, with the Dirichlet boundary condition at the origin. This equation arises when studying the local maxima of the sojourn time for a free quantum particle whose states belong to an adequate subspace of the unit sphere of the corresponding Hilbert space. We establish several results concerning the existence and asymptotic behavior of the solutions.

  8. Mappings of Least Dirichlet Energy and their Hopf Differentials

    NASA Astrophysics Data System (ADS)

    Iwaniec, Tadeusz; Onninen, Jani

    2013-08-01

    The paper is concerned with mappings h: X → Y (onto) between planar domains having least Dirichlet energy. The existence and uniqueness (up to a conformal change of variables in X) of the energy-minimal mappings is established within the class H̄_2(X, Y) of strong limits of homeomorphisms in the Sobolev space W^{1,2}(X, Y), a result of considerable interest in the mathematical models of nonlinear elasticity. The inner variation of the independent variable in X leads to the Hopf differential h_z \overline{h_{\bar z}} dz ⊗ dz and its trajectories. For a pair of doubly connected domains, in which X has finite conformal modulus, we establish the following principle: a mapping h ∈ H̄_2(X, Y) is energy-minimal if and only if its Hopf differential is analytic in X and real along ∂X. In general, the energy-minimal mappings may not be injective, in which case one observes the occurrence of slits in X (cognate with cracks). Slits are triggered by points of concavity of Y. They originate from ∂X and advance along vertical trajectories of the Hopf differential toward X, where they eventually terminate, so no crosscuts are created.

  9. Simulation of mixture microstructures via particle packing models and their direct comparison with real mixtures

    NASA Astrophysics Data System (ADS)

    Gulliver, Eric A.

    The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop and roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but realistic-looking mixture microstructures otherwise. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations. Control chart analysis showed Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered particle-size distributions or mixture composition. Control charts based on tessellation measurements were used for direct, quantitative comparisons between real and simulated mixtures. Four sets of simulated and real mixtures were examined. Data from real mixtures matched simulated data when the samples were well mixed and the particle size distributions and volume fractions of the components were identical. Analysis of mixture components that occupied less than approximately 10 vol% of the mixture was not practical unless the particle size of the component was extremely small and excellent-quality high-resolution compositional micrographs of the real sample were available. These methods of analysis should allow future researchers to systematically evaluate and predict the impact and importance of variables such as component volume fraction and component particle size distribution as they pertain to the uniformity of powder mixture microstructures.

  10. Process for forming shaped group III-V semiconductor nanocrystals, and product formed using process

    DOEpatents

    Alivisatos, A. Paul; Peng, Xiaogang; Manna, Liberato

    2001-01-01

    A process for the formation of shaped Group III-V semiconductor nanocrystals comprises contacting the semiconductor nanocrystal precursors with a liquid medium comprising a binary mixture of phosphorus-containing organic surfactants capable of promoting the growth of either spherical semiconductor nanocrystals or rod-like semiconductor nanocrystals, whereby the shape of the semiconductor nanocrystals formed in said binary mixture of surfactants is controlled by adjusting the ratio of the surfactants in the binary mixture.

  11. Process for forming shaped group II-VI semiconductor nanocrystals, and product formed using process

    DOEpatents

    Alivisatos, A. Paul; Peng, Xiaogang; Manna, Liberato

    2001-01-01

    A process for the formation of shaped Group II-VI semiconductor nanocrystals comprises contacting the semiconductor nanocrystal precursors with a liquid medium comprising a binary mixture of phosphorus-containing organic surfactants capable of promoting the growth of either spherical semiconductor nanocrystals or rod-like semiconductor nanocrystals, whereby the shape of the semiconductor nanocrystals formed in said binary mixture of surfactants is controlled by adjusting the ratio of the surfactants in the binary mixture.

  12. Central Composite Design (CCD) applied for statistical optimization of glucose and sucrose binary carbon mixture in enhancing the denitrification process

    NASA Astrophysics Data System (ADS)

    Lim, Jun-Wei; Beh, Hoe-Guan; Ching, Dennis Ling Chuan; Ho, Yeek-Chia; Baloo, Lavania; Bashir, Mohammed J. K.; Wee, Seng-Kew

    2017-11-01

    The present study provides an insight into the optimization of a glucose and sucrose mixture to enhance the denitrification process. Central Composite Design was applied to design the batch experiments, with the factors of glucose and sucrose each measured as a carbon-to-nitrogen (C:N) ratio and the response being the percentage removal of nitrate-nitrogen (NO3⁻-N). Results showed that a polynomial regression model of NO3⁻-N removal was successfully derived, capable of describing the interactive relationship between glucose and sucrose that influenced the denitrification process. Furthermore, the presence of glucose was found to have a more consequential effect on NO3⁻-N removal than sucrose. The optimum carbon source mixture to achieve complete removal of NO3⁻-N required less glucose (C:N ratio of 1.0:1.0) than sucrose (C:N ratio of 2.4:1.0). At the optimum glucose and sucrose mixture, the activated sludge showed faster acclimation towards the glucose used to perform the denitrification process. Upon later acclimation with sucrose, the glucose uptake rate by the activated sludge abated. Therefore, it is vital to optimize the added carbon source mixture to ensure rapid and complete removal of NO3⁻-N via the denitrification process.
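
    The following sketch shows the kind of second-order (response-surface) polynomial fit that underlies a Central Composite Design analysis; the design points, removal percentages and fitted coefficients are invented placeholders, not the study's data or model.

```python
# Minimal sketch of fitting a full quadratic response-surface model of the
# kind used with a Central Composite Design. All numbers are placeholders.
import numpy as np

# columns: glucose C:N, sucrose C:N, observed NO3--N removal (%)
data = np.array([
    [0.5, 0.5, 40.0], [0.5, 2.5, 55.0], [2.5, 0.5, 70.0], [2.5, 2.5, 85.0],
    [0.1, 1.5, 35.0], [2.9, 1.5, 88.0], [1.5, 0.1, 60.0], [1.5, 2.9, 80.0],
    [1.5, 1.5, 75.0], [1.5, 1.5, 76.0], [1.5, 1.5, 74.0],
])
g, s, y = data[:, 0], data[:, 1], data[:, 2]

# Full quadratic model: b0 + b1*g + b2*s + b3*g*s + b4*g^2 + b5*s^2
X = np.column_stack([np.ones_like(g), g, s, g * s, g**2, s**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(coef, 3))

def predicted_removal(glucose_cn, sucrose_cn):
    x = np.array([1.0, glucose_cn, sucrose_cn, glucose_cn * sucrose_cn,
                  glucose_cn**2, sucrose_cn**2])
    return x @ coef

print("predicted removal at glucose C:N 1.0, sucrose C:N 2.4:",
      round(predicted_removal(1.0, 2.4), 1), "%")
```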

  13. Process and apparatus for igniting a burner in an inert atmosphere

    DOEpatents

    Coolidge, Dennis W.; Rinker, Franklin G.

    1994-01-01

    According to this invention there is provided a process and apparatus for the ignition of a pilot burner in an inert atmosphere without substantially contaminating the inert atmosphere. The process includes the steps of providing a controlled amount of combustion air for a predetermined interval of time to the combustor, then substantially simultaneously providing a controlled mixture of fuel and air to the pilot burner and to a flame generator. The controlled mixture of fuel and air to the flame generator is then periodically energized to produce a secondary flame. With the secondary flame, the controlled mixture of fuel and air to the pilot burner and the combustion air is ignited to produce a pilot burner flame. The pilot burner flame is then used to ignite a mixture of main fuel and combustion air to produce a main burner flame. The main burner flame is then used to ignite a mixture of process-derived fuel and combustion air to produce products of combustion for use as an inert gas in a heat treatment process.

  14. Three Boundary Conditions for Computing the Fixed-Point Property in Binary Mixture Data

    PubMed Central

    Couto, Joaquina; Lebreton, Mael

    2016-01-01

    The notion of “mixtures” has become pervasive in behavioral and cognitive sciences, due to the success of dual-process theories of cognition. However, providing support for such dual-process theories is not trivial, as it crucially requires properties in the data that are specific to a mixture of cognitive processes. In theory, one such property could be the fixed-point property of binary mixture data, applied, for instance, to response times. In that case, the fixed-point property entails that response time distributions obtained in an experiment in which the mixture proportion is manipulated would have a common density point. In the current article, we discuss the application of the fixed-point property and identify three boundary conditions under which the fixed-point property will not be interpretable. In Boundary condition 1, a finding in support of the fixed-point will be moot because of a lack of difference between conditions. Boundary condition 2 refers to the case in which the extreme conditions are so different that a mixture may display bimodality. In this case, a mixture hypothesis is clearly supported, yet the fixed-point may not be found. In Boundary condition 3 the fixed-point may also not be present, yet a mixture might still exist but is occluded due to additional changes in behavior. Finding the fixed-point property provides strong support for a dual-process account, yet the boundary conditions that we identify should be considered before making inferences about underlying psychological processes. PMID:27893868
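
    The numerical sketch below illustrates the fixed-point property itself: mixtures p·f1 + (1−p)·f2 of two fixed component densities all share a common density point where f1 = f2. The Gaussian components stand in for hypothetical fast/slow response-time distributions and are not taken from the article.

```python
# Illustrative sketch (not from the paper): mixture densities for several
# mixture proportions all pass through the point where the two components cross.
import numpy as np
from scipy.stats import norm

x = np.linspace(-2, 4, 2001)
f1 = norm(loc=0.0, scale=1.0).pdf(x)      # hypothetical "fast process" density
f2 = norm(loc=2.0, scale=1.0).pdf(x)      # hypothetical "slow process" density

mixtures = [p * f1 + (1 - p) * f2 for p in (0.2, 0.5, 0.8)]

crossing = x[np.argmin(np.abs(f1 - f2))]  # where the two components intersect
at_crossing = [m[np.argmin(np.abs(x - crossing))] for m in mixtures]
print("crossing point:", round(float(crossing), 3))
print("mixture densities there:", np.round(at_crossing, 4))
# All mixture proportions give (numerically) the same density at the crossing.
```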

  15. Sterilization of fermentation vessels by ethanol/water mixtures

    DOEpatents

    Wyman, C.E.

    1999-02-09

    A method is described for sterilizing process fermentation vessels with a concentrated alcohol and water mixture integrated in a fuel alcohol or other alcohol production facility. Hot, concentrated alcohol is drawn from a distillation or other purification stage and sprayed into the empty fermentation vessels. This sterilizing alcohol/water mixture should be of a sufficient concentration, preferably higher than 12% alcohol by volume, to be toxic to undesirable microorganisms. Following sterilization, this sterilizing alcohol/water mixture can be recovered back into the same distillation or other purification stage from which it was withdrawn. The process of this invention has its best application in, but is not limited to, batch fermentation processes, wherein the fermentation vessels must be emptied, cleaned, and sterilized following completion of each batch fermentation process. 2 figs.

  16. Separation of organic azeotropic mixtures by pervaporation. Final technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, R.W.

    1991-12-01

    Distillation is a commonly used separation technique in the petroleum refining and chemical processing industries. However, there are a number of potential separations involving azeotropic and close-boiling organic mixtures that cannot be separated efficiently by distillation. Pervaporation is a membrane-based process that uses selective permeation through membranes to separate liquid mixtures. Because the separation process is not affected by the relative volatility of the mixture components being separated, pervaporation can be used to separate azeotropes and close-boiling mixtures. Our results showed that pervaporation membranes can be used to separate azeotropic mixtures efficiently, a result that is not achievable with simple distillation. The membranes were 5-10 times more permeable to one of the components of the mixture, concentrating it in the permeate stream. For example, the membrane was 10 times more permeable to ethanol than methyl ethyl ketone, producing 60% ethanol permeate from an azeotropic mixture of ethanol and methyl ethyl ketone containing 18% ethanol. For the ethyl acetate/water mixture, the membranes showed a very high selectivity to water (> 300) and the permeate was 50-100 times enriched in water relative to the feed. The membranes had permeate fluxes on the order of 0.1-1 kg/(m²·h) in the operating range of 55-70 °C. Higher fluxes were obtained by increasing the operating temperature.

  17. Separation of organic azeotropic mixtures by pervaporation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, R.W.

    1991-12-01

    Distillation is a commonly used separation technique in the petroleum refining and chemical processing industries. However, there are a number of potential separations involving azeotropic and close-boiling organic mixtures that cannot be separated efficiently by distillation. Pervaporation is a membrane-based process that uses selective permeation through membranes to separate liquid mixtures. Because the separation process is not affected by the relative volatility of the mixture components being separated, pervaporation can be used to separate azeotropes and close-boiling mixtures. Our results showed that pervaporation membranes can be used to separate azeotropic mixtures efficiently, a result that is not achievable with simple distillation. The membranes were 5-10 times more permeable to one of the components of the mixture, concentrating it in the permeate stream. For example, the membrane was 10 times more permeable to ethanol than methyl ethyl ketone, producing 60% ethanol permeate from an azeotropic mixture of ethanol and methyl ethyl ketone containing 18% ethanol. For the ethyl acetate/water mixture, the membranes showed a very high selectivity to water (> 300) and the permeate was 50-100 times enriched in water relative to the feed. The membranes had permeate fluxes on the order of 0.1-1 kg/(m²·h) in the operating range of 55-70 °C. Higher fluxes were obtained by increasing the operating temperature.

  18. Nitrification during extended co-composting of extreme mixtures of green waste and solid fraction of cattle slurry to obtain growing media.

    PubMed

    Cáceres, Rafaela; Coromina, Narcís; Malińska, Krystyna; Martínez-Farré, F Xavier; López, Marga; Soliva, Montserrat; Marfà, Oriol

    2016-12-01

    The next generation of waste management systems should apply product-oriented bioconversion processes that produce composts or biofertilisers of desired quality that can be sold in high-priced markets such as horticulture. Natural acidification linked to nitrification can be promoted during composting. If nitrification is enhanced, compost with a suitable pH can be obtained for use in horticultural substrates. Green waste compost (GW) represents a potentially suitable product for use in growing medium mixtures. However, its low N content provides very limited slow-release nitrogen fertilization for suitable plant growth, and GW should therefore be composted with a complementary N-rich raw material such as the solid fraction of cattle slurry (SFCS). Therefore, it is important to determine how very different or extreme proportions of the two materials in the mixture can limit or otherwise affect the nitrification process. The objectives of this work were two-fold: (a) to assess the changes in chemical and physicochemical parameters during the prolonged composting of extreme mixtures of green waste (GW) and separated cattle slurry (SFCS) and the feasibility of using the composts as growing media; (b) to check for nitrification during composting in two different extreme mixtures of GW and SFCS and to describe the conditions under which this process can be maintained and its consequences. The physical and physicochemical properties of both composts obtained indicated that they were appropriate for use as ingredients in horticultural substrates. The nitrification process occurred in both mixtures in the medium-late thermophilic stage of the composting process. In particular, its feasibility has been demonstrated in the mixtures with a low N content. Nitrification led to the inversion of each mixture's initial pH. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Investigating the mixture and subdivision of perceptual and conceptual processing in Japanese memory tests.

    PubMed

    Cabeza, R.

    1995-03-01

    The dual nature of the Japanese writing system was used to investigate two assumptions of the processing view of memory transfer: (1) that both perceptual and conceptual processing can contribute to the same memory test (mixture assumption) and (2) that both can be broken into more specific processes (subdivision assumption). Supporting the mixture assumption, a word fragment completion test based on ideographic kanji characters (kanji fragment completion test) was affected by both perceptual (hiragana/kanji script shift) and conceptual (levels-of-processing) study manipulations; this pattern apparently reflects the meaningful nature of kanji fragments, because it did not occur with the use of meaningless hiragana fragments. The mixture assumption is also supported by an effect of study script on an implicit conceptual test (sentence completion), and the subdivision assumption is supported by a crossover dissociation between hiragana and kanji fragment completion as a function of study script.

  20. Influence of different processing techniques on the mechanical properties of used tires in embankment construction.

    PubMed

    Edinçliler, Ayşe; Baykal, Gökhan; Saygili, Altug

    2010-06-01

    Use of processed used tires in embankment construction is becoming an accepted way of beneficially recycling scrap tires, due to shortages of natural mineral resources and increasing waste disposal costs. Using these used tires in construction requires an awareness of the properties and the limitations associated with their use. The main objective of this paper is to assess the effect of different processing techniques on the mechanical properties of used tire-sand mixtures in order to improve the engineering properties of the available soil. In the first part, a literature study on the mechanical properties of processed used tires such as tire shreds, tire chips and tire buffings, and their mixtures with sand, is summarized. In the second part, large-scale direct shear tests are performed to evaluate the shear strength of tire crumb-sand mixtures, for which information is not readily available in the literature. The test results with tire crumb were compared with those of the other processed used tire-sand mixtures. Sand-used tire mixtures have higher shear strength than sand alone, and the shear strength parameters depend on the processing conditions of the used tires. Three factors are found to significantly affect the mechanical properties: normal stress, processing technique, and used tire content. Copyright 2009. Published by Elsevier Ltd.

  1. Process for the separation of components from gas mixtures

    DOEpatents

    Merriman, J.R.; Pashley, J.H.; Stephenson, M.J.; Dunthorn, D.I.

    1973-10-01

    A process for the removal, from gaseous mixtures, of a desired component selected from oxygen, iodine, methyl iodide, and the lower oxides of carbon, nitrogen, and sulfur is described. The gaseous mixture is contacted with a liquid fluorocarbon in an absorption zone maintained at superatmospheric pressure to preferentially absorb the desired component in the fluorocarbon. Unabsorbed constituents of the gaseous mixture are withdrawn from the absorption zone. Liquid fluorocarbon enriched in the desired component is withdrawn separately from the zone, following which the desired component is recovered from the fluorocarbon absorbent. (Official Gazette)

  2. Comparison of Chemical Composition of Complex Disinfection Byproduct (DBP) Mixtures Produced by Different Treatment Methods - slides

    EPA Science Inventory

    Analyses of the chemical composition of complex DBP mixtures, produced by different drinking water treatment processes, are essential to generate toxicity data required for assessing their risks to humans. For mixture risk assessments, whole mixture toxicology studies generally a...

  3. Comparison of Chemical Composition of Complex Disinfection Byproduct (DBP) Mixtures Produced by Different Treatment Methods

    EPA Science Inventory

    Analyses of the chemical composition of complex DBP mixtures, produced by different drinking water treatment processes, are essential to generate toxicity data required for assessing their risks to humans. For mixture risk assessments, whole mixture toxicology studies generally a...

  4. Hydrothermal pretreatment of several lignocellulosic mixtures containing wheat straw and two hardwood residues available in Southern Europe.

    PubMed

    Silva-Fernandes, Talita; Duarte, Luís Chorão; Carvalheiro, Florbela; Loureiro-Dias, Maria Conceição; Fonseca, César; Gírio, Francisco

    2015-05-01

    This work studied the processing of biomass mixtures containing three lignocellulosic materials largely available in Southern Europe: eucalyptus residues (ER), wheat straw (WS) and olive tree pruning (OP). The mixtures were chemically characterized, and their pretreatment by autohydrolysis was evaluated within a severity factor (log R0) range from 1.73 up to 4.24. A simple modeling strategy was used to optimize the autohydrolysis conditions based on the chemical characterization of the liquid fraction. The solid fraction was characterized to quantify the polysaccharide and lignin content. The pretreatment conditions for maximal saccharide recovery in the liquid fraction were at a severity range (log R0) of 3.65-3.72, independently of the mixture tested, which suggests that autohydrolysis can effectively process mixtures of lignocellulosic materials for further biochemical conversion processes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Near-Native Protein Loop Sampling Using Nonparametric Density Estimation Accommodating Sparsity

    PubMed Central

    Day, Ryan; Lennox, Kristin P.; Sukhanov, Paul; Dahl, David B.; Vannucci, Marina; Tsai, Jerry

    2011-01-01

    Unlike the core structural elements of a protein like regular secondary structure, template based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and are enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD, and with a worst case of 3.66 Å, were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD <2.0 Å), the DPM-HMM method performs as well as or better than the best templates, demonstrating that our automated method recaptures these canonical loops without inclusion of any IgG-specific terms or manual intervention. In cases with poor or few good templates (mean RMSD >7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops of up to 17 residues. In a direct comparison against the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near-native loop structures. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/. PMID:22028638
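
    As a rough, hedged stand-in for the authors' DPM-HMM (which models sequences of torsion angles), the sketch below fits scikit-learn's truncated Dirichlet-process Gaussian mixture to synthetic (φ, ψ) pairs at a single loop position and samples candidate angles from it; the angle data, truncation level and component shapes are all assumptions for illustration.

```python
# Not the authors' method: a truncated Dirichlet-process Gaussian mixture
# (sklearn) used to estimate a joint (phi, psi) density from synthetic angles.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# two artificial clusters of (phi, psi) angles, in degrees
angles = np.vstack([
    rng.normal([-60, -45], 10, size=(150, 2)),    # helix-like region
    rng.normal([-120, 130], 15, size=(100, 2)),   # sheet-like region
])

dpgmm = BayesianGaussianMixture(
    n_components=10,                               # truncation level
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(angles)

# Sample candidate (phi, psi) pairs, as one might when generating loop models
samples, _ = dpgmm.sample(5)
print(np.round(samples, 1))
```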

  6. The effect of feed composition on anaerobic co-digestion of animal-processing by-products.

    PubMed

    Hidalgo, D; Martín-Marroquín, J M; Corona, F

    2018-06-15

    Four streams and their mixtures have been considered for anaerobic co-digestion, all of them generated during pig carcass processing or in related industrial activities: meat flour (MF), process water (PW), pig manure (PM) and glycerin (GL). Biochemical methane potential assays were conducted at 37 °C to evaluate the effects of the substrate mix ratio on methane generation and process behavior. The results show that co-digestion of these products favors the anaerobic fermentation process when the amount of meat flour in the mixture to be co-digested is limited; it should not exceed 10%. The ratio of the other tested substrates is less critical, because different mixtures reach similar values of methane generation. The presence of process water in the mixture contributes to a quick start of the digester, something very interesting when operating an industrial reactor. The analysis of the digested fraction reveals that the four analyzed streams can be, a priori, suitable for agronomic valorization once digested. Copyright © 2017. Published by Elsevier Ltd.

  7. Separation processes using expulsion from dilute supercritical solutions

    DOEpatents

    Cochran, Jr., Henry D.

    1993-01-01

    A process for separating isotopes as well as other mixtures by utilizing the behavior of dilute repulsive or weakly attractive elements of the mixtures as the critical point of the solvent is approached.

  8. UTD at TREC 2014: Query Expansion for Clinical Decision Support

    DTIC Science & Technology

    2014-11-01

    Description: A 62-year-old man sees a neurologist for progressive memory loss and jerking movements of the lower extremities. Neurologic examination confirms...infiltration. Summary: 62-year-old man with progressive memory loss and involuntary leg movements. Brain MRI reveals cortical atrophy, and cortical...latent topics produced by Latent Dirichlet Allocation (LDA) on the TREC-CDS corpus of scientific articles. The position of the words "loss" and "memory

  9. Nondestructive Testing and Target Identification

    DTIC Science & Technology

    2016-12-21

    Dirichlet obstacle coated by a thin layer of non-absorbing media, IMA J. Appl. Math., 80, 1063-1098, (2015). Abstract: We consider the transmission...F. Cakoni, I. De Teresa, H. Haddar and P. Monk, Nondestructive testing of the delaminated interface between two materials, SIAM J. Appl. Math., 76...then they form a discrete set. 22. F. Cakoni, D. Colton, S. Meng and P. Monk, Steklov eigenvalues in inverse scattering, SIAM J. Appl. Math. 76, 1737

  10. Single-grid spectral collocation for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Canuto, Claudio; Maday, Yvon; Metivet, Brigitte

    1988-01-01

    The aim of the paper is to study a collocation spectral method to approximate the Navier-Stokes equations: only one grid is used, which is built from the nodes of a Gauss-Lobatto quadrature formula, either of Legendre or of Chebyshev type. The convergence is proven for the Stokes problem provided with inhomogeneous Dirichlet conditions, then thoroughly analyzed for the Navier-Stokes equations. The practical implementation algorithm is presented, together with numerical results.
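
    A minimal sketch of one ingredient of such a single-grid method, assuming a 1D Chebyshev-type grid: the Gauss-Lobatto nodes and the standard collocation differentiation matrix on them. This is a generic textbook construction, not the scheme of the paper.

```python
# Illustrative sketch: Chebyshev-Gauss-Lobatto nodes and the classical
# differentiation matrix on them, the basic building block of collocation
# spectral methods on a single grid.
import numpy as np

def cheb(n):
    """Return the (n+1)x(n+1) differentiation matrix and the n+1 nodes."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)            # nodes in [-1, 1]
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal entries
    return D, x

D, x = cheb(16)
u = np.exp(x) * np.sin(5 * x)
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print("max derivative error:", np.abs(D @ u - du_exact).max())
```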

  11. The Smoothed Dirichlet Distribution: Understanding Cross-Entropy Ranking in Information Retrieval

    DTIC Science & Technology

    2006-07-01

    Unigram language modeling is a successful probabilistic framework for Information Retrieval (IR) that uses...the Relevance model (RM), a state-of-the-art model for IR in the language modeling framework that uses the same cross-entropy as its ranking function...In addition, the SD-based classifier provides more flexibility than RM in modeling documents owing to a consistent generative framework. We

  12. Augmenting Latent Dirichlet Allocation and Rank Threshold Detection with Ontologies

    DTIC Science & Technology

    2010-03-01

    Probabilistic Latent Semantic Indexing (PLSI) is an automated indexing information retrieval model [20]. It is based on a statistical latent class model which is...uses a statistical foundation that is more accurate in finding hidden semantic relationships [20]. The model uses factor analysis of count data, number...principle of statistical inference which asserts that all of the information in a sample is contained in the likelihood function [20]. The statistical

  13. REFLEAK: NIST Leak/Recharge Simulation Program for Refrigerant Mixtures

    National Institute of Standards and Technology Data Gateway

    SRD 73 NIST REFLEAK: NIST Leak/Recharge Simulation Program for Refrigerant Mixtures (PC database for purchase)   REFLEAK estimates composition changes of zeotropic mixtures in leak and recharge processes.

  14. Differential gene expression pattern in human mammary epithelial cells induced by realistic organochlorine mixtures described in healthy women and in women diagnosed with breast cancer.

    PubMed

    Rivero, Javier; Henríquez-Hernández, Luis Alberto; Luzardo, Octavio P; Pestano, José; Zumbado, Manuel; Boada, Luis D; Valerón, Pilar F

    2016-03-30

    Organochlorine pesticides (OCs) have been associated with breast cancer development and progression, but the mechanisms underlying this phenomenon are not well known. In this work, we evaluated the effects exerted on normal human mammary epithelial cells (HMEC) by the OC mixtures most frequently detected in healthy women (H-mixture) and in women diagnosed with breast cancer (BC-mixture), as identified in a previous case-control study developed in Spain. Cytotoxicity and the gene expression profiles of human kinases (n=68) and non-kinases (n=26) were tested at concentrations similar to those described in the serum of those cases and controls. Although both mixtures caused a down-regulation of genes involved in the ATP binding process, our results clearly indicate that the two mixtures may exert very different effects on the gene expression profile of HMEC. Thus, while the BC-mixture up-regulated the expression of oncogenes associated with breast cancer (GFRA1 and BHLHB8), the H-mixture down-regulated the expression of tumor suppressor genes (EPHA4 and EPHB2). Our results indicate that the composition of the OC mixture could play a role in the initiation processes of breast cancer. In addition, the present results suggest that subtle changes in the composition and levels of pollutants involved in environmentally relevant mixtures might induce very different biological effects, which explains, at least partially, why some mixtures seem to be more carcinogenic than others. Nonetheless, our findings confirm that environmentally relevant pollutants may modulate the expression of genes closely related to carcinogenic processes in the breast, reinforcing the role exerted by the environment in the regulation of genes involved in breast carcinogenesis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Dynamics and associations of microbial community types across the human body.

    PubMed

    Ding, Tao; Schloss, Patrick D

    2014-05-15

    A primary goal of the Human Microbiome Project (HMP) was to provide a reference collection of 16S ribosomal RNA gene sequences collected from sites across the human body that would allow microbiologists to better associate changes in the microbiome with changes in health. The HMP Consortium has reported the structure and function of the human microbiome in 300 healthy adults at 18 body sites from a single time point. Using additional data collected over the course of 12-18 months, we used Dirichlet multinomial mixture models to partition the data into community types for each body site and made three important observations. First, there were strong associations between whether individuals had been breastfed as an infant, their gender, and their level of education with their community types at several body sites. Second, although the specific taxonomic compositions of the oral and gut microbiomes were different, the community types observed at these sites were predictive of each other. Finally, over the course of the sampling period, the community types from sites within the oral cavity were the least stable, whereas those in the vagina and gut were the most stable. Our results demonstrate that even with the considerable intra- and interpersonal variation in the human microbiome, this variation can be partitioned into community types that are predictive of each other and are probably the result of life-history characteristics. Understanding the diversity of community types and the mechanisms that result in an individual having a particular type or changing types, will allow us to use their community types to assess disease risk and to personalize therapies.

  16. Intraventricular vector flow mapping—a Doppler-based regularized problem with automatic model selection

    NASA Astrophysics Data System (ADS)

    Assi, Kondo Claude; Gay, Etienne; Chnafa, Christophe; Mendez, Simon; Nicoud, Franck; Abascal, Juan F. P. J.; Lantelme, Pierre; Tournoux, François; Garcia, Damien

    2017-09-01

    We propose a regularized least-squares method for reconstructing 2D velocity vector fields within the left ventricular cavity from single-view color Doppler echocardiographic images. Vector flow mapping is formulated as a quadratic optimization problem based on an ℓ2-norm minimization of a cost function composed of a Doppler data-fidelity term and a regularizer. The latter contains three physically interpretable expressions related to 2D mass conservation, Dirichlet boundary conditions, and smoothness. A finite difference discretization of the continuous problem was adopted in a polar coordinate system, leading to a sparse symmetric positive-definite system. The three regularization parameters were determined automatically by analyzing the L-hypersurface, a generalization of the L-curve. The performance of the proposed method was numerically evaluated using (1) a synthetic flow composed of a mixture of divergence-free and curl-free flow fields and (2) simulated flow data from a patient-specific CFD (computational fluid dynamics) model of a human left heart. The numerical evaluations showed that the vector flow fields reconstructed from the Doppler components were in good agreement with the original velocities, with a relative error less than 20%. It was also demonstrated that a perturbation of the domain contour has little effect on the rebuilt velocity fields. The capability of our intraventricular vector flow mapping (iVFM) algorithm was finally illustrated on in vivo echocardiographic color Doppler data acquired in patients. The vortex that forms during the rapid filling was clearly deciphered. This improved iVFM algorithm is expected to have a significant clinical impact in the assessment of diastolic function.
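
    The schematic example below mirrors the generic structure of such a regularized least-squares problem (a quadratic data-fidelity term plus a smoothness penalty solved as one sparse linear system) on a toy 1D signal; the operators, noise level and regularization weight are placeholders, and this is not the iVFM implementation.

```python
# Schematic sketch (not iVFM): minimize ||A u - b||^2 + lam * ||L u||^2 by
# solving the normal equations (A^T A + lam L^T L) u = A^T b.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 200
rng = np.random.default_rng(0)
u_true = np.sin(np.linspace(0, 2 * np.pi, n))            # "true" 1D profile
b = u_true + 0.2 * rng.standard_normal(n)                # noisy data

A = sp.identity(n, format="csr")                         # data-fidelity operator
L = sp.diags([1, -2, 1], [0, 1, 2], shape=(n - 2, n))    # 2nd-difference smoother
lam = 10.0                                               # regularization weight

u = spsolve((A.T @ A + lam * L.T @ L).tocsc(), A.T @ b)
print("rms error, raw vs regularized:",
      np.sqrt(np.mean((b - u_true) ** 2)),
      np.sqrt(np.mean((u - u_true) ** 2)))
```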

  17. Process monitored spectrophotometric titration coupled with chemometrics for simultaneous determination of mixtures of weak acids.

    PubMed

    Liao, Lifu; Yang, Jing; Yuan, Jintao

    2007-05-15

    A new spectrophotometric titration method coupled with chemometrics for the simultaneous determination of mixtures of weak acids has been developed. In this method, the titrant is a mixture of sodium hydroxide and an acid-base indicator, and the indicator is used to monitor the titration process. In a titration, both the added volume of titrant and the solution acidity at each titration point can be obtained simultaneously from an absorption spectrum by a least-squares algorithm, and the concentration of each component in the mixture can then be obtained from the titration curves by principal component regression. The method only needs the information contained in the absorbance spectra to obtain the analytical results, and is free of volumetric measurements. The analyses are independent of the titration end point and do not require accurate values of the dissociation constants of the indicator and the acids. The method has been applied to the simultaneous determination of mixtures of benzoic acid and salicylic acid, and mixtures of phenol, o-chlorophenol and p-chlorophenol, with satisfactory results.

  18. Separation processes using expulsion from dilute supercritical solutions

    DOEpatents

    Cochran, H.D. Jr.

    1993-04-20

    A process is described for separating isotopes as well as other mixtures by utilizing the behavior of dilute repulsive or weakly attractive elements of the mixtures as the critical point of the solvent is approached.

  19. Phenol removal pretreatment process

    DOEpatents

    Hames, Bonnie R.

    2004-04-13

    A process for removing phenols from an aqueous solution is provided, which comprises the steps of contacting a mixture comprising the solution and a metal oxide, forming a phenol metal oxide complex, and removing the complex from the mixture.

  20. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods, however, are either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  1. Automatic Control of the Concrete Mixture Homogeneity in Cycling Mixers

    NASA Astrophysics Data System (ADS)

    Anatoly Fedorovich, Tikhonov; Drozdov, Anatoly

    2018-03-01

    The article describes the factors affecting concrete mixture quality that are related to the moisture content of aggregates, since the effectiveness of concrete mixture production is largely determined by the availability of quality management tools at all stages of the technological process. It is established that unaccounted-for moisture in the aggregates adversely affects the concrete mixture homogeneity and, accordingly, the strength of building structures. A new control method and an automatic control system for concrete mixture homogeneity during the mixing of components are proposed, in which the task of producing a consistent concrete mixture is performed by an automatic control system for the kneading-and-mixing machinery with operational automatic control of homogeneity. Theoretical underpinnings of the control of mixture homogeneity are presented, which relate homogeneity to a change in the frequency of vibrodynamic vibrations of the mixer body. The structure of the technical means of the automatic control system for regulating the supply of water is determined depending on the change in concrete mixture homogeneity during the continuous mixing of components. The following technical means for establishing automatic control have been chosen: vibro-acoustic sensors, remote terminal units, electropneumatic control actuators, etc. To characterize the quality of automatic control, the system is described by a structure flowchart with transfer functions that determine the ACS operation in transient dynamic mode.

  2. Parameters of Solidifying Mixtures Transporting at Underground Ore Mining

    NASA Astrophysics Data System (ADS)

    Golik, Vladimir; Dmitrak, Yury

    2017-11-01

    The article is devoted to the problem of supplying mining enterprises with solidifying filling mixtures in underground mining. The results of analytical studies using data from foreign and domestic practice of delivering solidifying mixtures to stopes are given. On the basis of experimental practice, parameters for transporting solidifying filling mixtures are given, with an increase in mixture quality achieved through the effect of vibration in the pipeline. The mechanism of the delivery process and the procedure for determining the parameters of the forced oscillations of the pipeline, the characteristics of the transport processes, the rigidity of the elastic elements of the pipeline section supports and the magnitude of the vibrator's driving force are detailed. It is determined that the quality of solidifying filling mixtures can be increased through the rational use of technical resources during transportation, with the resulting mixtures characterized by a more even distribution of the aggregate. The algorithm for calculating the parameters of pipeline vibro-transport of solidifying filling mixtures can be of use in the design of underground mining technology for mineral deposits.

  3. Applicability study of classical and contemporary models for effective complex permittivity of metal powders.

    PubMed

    Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien

    2012-01-01

    Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder over the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
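
    For reference, the sketch below implements two of the classical mixture rules named above, Maxwell Garnett and Lichtenecker, for a matrix with spherical inclusions; the permittivity values and volume fractions are invented placeholders rather than the measured data.

```python
# Sketch of two classical effective-permittivity mixture rules: eps_m is the
# matrix permittivity, eps_i the inclusion permittivity, f the inclusion
# volume fraction. Numerical values are placeholders, not measured data.
import numpy as np

def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett rule for spherical inclusions of volume fraction f."""
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

def lichtenecker(eps_m, eps_i, f):
    """Lichtenecker logarithmic mixing rule."""
    return np.exp(f * np.log(eps_i) + (1 - f) * np.log(eps_m))

eps_matrix = 2.6 - 0.02j        # placeholder: organic binder
eps_inclusion = 30.0 - 15.0j    # placeholder: lossy powder
for f in (0.1, 0.2, 0.3):
    print(f, maxwell_garnett(eps_matrix, eps_inclusion, f),
          lichtenecker(eps_matrix, eps_inclusion, f))
```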

  4. Prediction of the properties of anhydrite construction mixtures based on a neural network approach

    NASA Astrophysics Data System (ADS)

    Fedorchuk, Y. M.; Zamyatin, N. V.; Smirnov, G. V.; Rusina, O. N.; Sadenova, M. A.

    2017-08-01

    The article considers the application of a neural-network modeling approach for predicting the properties of anhydrite mixtures from their components, as part of managing the technological processes for producing construction products based on fluoranhydrite.

  5. Mesoporous metal oxides and processes for preparation thereof

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suib, Steven L.; Poyraz, Altug Suleyman

    A process for preparing a mesoporous metal oxide, i.e., a transition metal oxide, a lanthanide metal oxide, a post-transition metal oxide or a metalloid oxide. The process comprises providing an acidic mixture comprising a metal precursor, an interface modifier, a hydrotropic ion precursor, and a surfactant; and heating the acidic mixture at a temperature and for a period of time sufficient to form the mesoporous metal oxide. A mesoporous metal oxide prepared by the above process. A method of controlling nano-sized wall crystallinity and mesoporosity in mesoporous metal oxides. The method comprises providing an acidic mixture comprising a metal precursor, an interface modifier, a hydrotropic ion precursor, and a surfactant; and heating the acidic mixture at a temperature and for a period of time sufficient to control nano-sized wall crystallinity and mesoporosity in the mesoporous metal oxides. Mesoporous metal oxides and a method of tuning structural properties of mesoporous metal oxides.

  6. A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2010-08-01

    A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007; and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained altogether for any n for d=1,2,4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q>0. Given the total walk length being equal to 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q=1. Simple analytical expressions are obtained for any d≥2 and n≥2 for the endpoint distributions of two families of walks whose q are integers or half-integers which depend solely on d. These endpoint distributions have a simple geometrical interpretation. Expressed for a two-step planar walk whose q=1, it means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection on the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint in the inside of a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4. Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law, are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are more particularly considered.
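
    The short simulation below, a hedged illustration rather than anything from the paper, draws Dirichlet-distributed step lengths and uniform directions for a two-step planar walk with q = 1 and checks the endpoint distances against the sphere-projection distribution quoted in the abstract.

```python
# Illustrative simulation: planar walks with Dirichlet(q,...,q) step lengths
# (so the total length is 1) and uniform directions.
import numpy as np

def pearson_dirichlet_walk(n_steps, q, n_walks, rng):
    """Endpoint distances of planar walks with Dirichlet-distributed step lengths."""
    lengths = rng.dirichlet([q] * n_steps, size=n_walks)          # lengths sum to 1
    angles = rng.uniform(0, 2 * np.pi, size=(n_walks, n_steps))   # uniform directions
    dx = (lengths * np.cos(angles)).sum(axis=1)
    dy = (lengths * np.sin(angles)).sum(axis=1)
    return np.hypot(dx, dy)

rng = np.random.default_rng(0)
r = pearson_dirichlet_walk(n_steps=2, q=1.0, n_walks=200_000, rng=rng)

# For d=2, n=2, q=1 the abstract's geometric interpretation (projection of a
# uniform point on the 3D unit sphere) gives the CDF F(r) = 1 - sqrt(1 - r^2).
for r0 in (0.25, 0.5, 0.75, 0.95):
    print(r0, round(float((r <= r0).mean()), 4), round(1 - np.sqrt(1 - r0**2), 4))
```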

  7. Clinical progress of human papillomavirus genotypes and their persistent infection in subjects with atypical squamous cells of undetermined significance cytology: Statistical and latent Dirichlet allocation analysis

    PubMed Central

    Kim, Yee Suk; Lee, Sungin; Zong, Nansu; Kahng, Jimin

    2017-01-01

    The present study aimed to investigate differences in prognosis based on human papillomavirus (HPV) infection, persistent infection and genotype variations for patients exhibiting atypical squamous cells of undetermined significance (ASCUS) in their initial Papanicolaou (PAP) test results. A latent Dirichlet allocation (LDA)-based tool was developed that may offer a facilitated means of communication to be employed during patient-doctor consultations. The present study assessed 491 patients (139 HPV-positive and 352 HPV-negative cases) with a PAP test result of ASCUS with a follow-up period ≥2 years. Patients underwent PAP and HPV DNA chip tests between January 2006 and January 2009. The HPV-positive subjects were followed up with at least 2 instances of PAP and HPV DNA chip tests. The most common genotypes observed were HPV-16 (25.9%, 36/139), HPV-52 (14.4%, 20/139), HPV-58 (13.7%, 19/139), HPV-56 (11.5%, 16/139), HPV-51 (9.4%, 13/139) and HPV-18 (8.6%, 12/139). A total of 33.3% (12/36) patients positive for HPV-16 had cervical intraepithelial neoplasia (CIN)2 or a worse result, which was significantly higher than the prevalence of CIN2 of 1.8% (8/455) in patients negative for HPV-16 (P<0.001), while no significant association was identified for other genotypes in terms of genotype and clinical progress. There was a significant association between clearance and good prognosis (P<0.001). Persistent infection was higher in patients aged ≥51 years (38.7%) than in those aged ≤50 years (20.4%; P=0.036). Progression from persistent infection to CIN2 or worse (19/34, 55.9%) was higher than clearance (0/105, 0.0%; P<0.001). In the LDA analysis, using symmetric Dirichlet priors α=0.1 and β=0.01, and clusters (k)=5 or 10 provided the most meaningful groupings. Statistical and LDA analyses produced consistent results regarding the association between persistent infection of HPV-16, old age and long infection period with a clinical progression of CIN2 or worse. Therefore, LDA results may be presented as explanatory evidence during time-constrained patient-doctor consultations in order to deliver information regarding the patient's status. PMID:28587376
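
    As an illustration of the kind of topic-model configuration described above, the sketch below fits scikit-learn's LDA with symmetric Dirichlet priors (α = 0.1, β = 0.01) and k = 5 topics to a few toy token strings; the documents, vocabulary and priors are placeholders, not the study's clinical data or its LDA implementation.

```python
# Minimal sketch of LDA with symmetric Dirichlet priors; the "documents" are
# invented genotype/outcome tokens, purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "hpv16 persistent cin2 age55",
    "hpv52 clearance normal age40",
    "hpv16 persistent cin3 age60",
    "hpv58 clearance normal age35",
]
X = CountVectorizer().fit_transform(docs)

lda = LatentDirichletAllocation(
    n_components=5,           # k = 5 clusters/topics
    doc_topic_prior=0.1,      # symmetric Dirichlet alpha
    topic_word_prior=0.01,    # symmetric Dirichlet beta
    random_state=0,
).fit(X)

print(lda.transform(X).round(2))   # per-document topic proportions
```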

  8. Evaluation of Skid Resistance of Wearing Course Made Of Stone Mastic Asphalt Mixture in Laboratory Conditions

    NASA Astrophysics Data System (ADS)

    Wasilewska, Marta

    2017-10-01

    This paper presents a comparison of the skid resistance of wearing courses made of SMA (Stone Mastic Asphalt) mixtures which differ in the polishing resistance of their coarse aggregate. Dolomite, limestone, granite and trachybasalt were taken for investigation. The SMA mixtures had the same nominal aggregate size (11 mm) and very similar aggregate particle-size distributions in the mineral mixtures. The tested SMA11 mixtures were designed according to EN 13108-5 and Polish National Specification WT-2: 2014. Evaluation of the skid resistance was performed using the FAP (Friction After Polishing) test equipment, also known as the Wehner/Schulze machine. The laboratory method enables comparison of the skid resistance of different types of mixtures under specified conditions simulating polishing processes. Tests were performed on specimens made of each coarse aggregate and on SMA11 mixtures containing these aggregates. The friction coefficient μm was measured before and during the polishing process, up to 180,000 passes of the polishing head. Comparison of the results showed differences in sensitivity to polishing among the mixtures, which depend on the petrographic properties of the rock used to produce the aggregate. Limestone and dolomite tend to have a fairly uniform texture with low hardness, which makes these rock types susceptible to rapid polishing. This resulted in lower coefficients of friction for the SMA11 mixtures with limestone and dolomite in comparison with the other tested mixtures. These significant differences were already registered at the beginning of the polishing process: the limestone aggregate had a lower value of μm before the process started than the trachybasalt and granite aggregates had after its completion. Despite the differences in structure and mineralogical composition between the granite and trachybasalt, only slightly different values of the friction coefficient were obtained at the end of polishing. Images of the surface were taken with an optical microscope for a better understanding of the phenomena occurring on the specimen surface. The results may provide valuable information when selecting aggregates for asphalt mixtures at the design stage and during the maintenance of existing road pavements.

  9. Fingerprinting selection for agroenvironmental catchment studies: EDXRF analysis for solving complex artificial mixtures

    NASA Astrophysics Data System (ADS)

    Torres Astorga, Romina; Velasco, Hugo; Dercon, Gerd; Mabit, Lionel

    2017-04-01

    Soil erosion and the associated sediment transportation and deposition processes are key environmental problems in Central Argentinian watersheds. Several land use practices, such as intensive grazing and crop cultivation, are considered likely to significantly increase land degradation and soil/sediment erosion processes. Characterized by highly erodible soils, the sub-catchment Estancia Grande (12.3 km²), located 23 km north-east of San Luis, has been investigated using sediment source fingerprinting techniques to identify critical hot spots of land degradation. The authors created 4 artificial mixtures using known quantities of the most representative sediment sources of the studied catchment. The first mixture was made using four rotation crop soil sources. The second and third mixtures were created using different proportions of 4 different soil sources, including soils from a feedlot, a rotation crop, a walnut forest and a grazing area. The last tested mixture contained the same sources as the third mixture but with the addition of a fifth soil source (a native bank soil). The Energy-Dispersive X-Ray Fluorescence (EDXRF) analytical technique was used to reconstruct the source sediment proportions of the original mixtures. Besides using traditional methods of fingerprint selection such as the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA), the authors used the actual source proportions in the mixtures and, from the subset of tracers that passed the statistical tests, selected specific elemental tracers that were in agreement with the expected mixture contents. The selection process ended by testing, in a mixing model, all possible combinations of the reduced set of tracers. Alkaline earth metals, especially strontium (Sr) and barium (Ba), were identified as the most effective fingerprints and provided a reduced Mean Absolute Error (MAE) of approximately 2% when reconstructing the 4 artificial mixtures. This study demonstrates that the EDXRF fingerprinting approach performed very well in reconstructing our original mixtures, especially in identifying and quantifying the contribution of the 4 rotation crop soil sources in the first mixture.
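
    The hedged sketch below shows the unmixing step in its simplest form: recovering source proportions from elemental fingerprints by bounded least squares with a sum-to-one constraint. The tracer concentrations, source names and weighting are invented for illustration and are not the EDXRF measurements from the study.

```python
# Hedged sketch of a mixing model: estimate source proportions from tracer
# concentrations. All concentrations below are placeholders.
import numpy as np
from scipy.optimize import lsq_linear

# rows: tracers (Sr, Ba, Rb, Ca); columns: sources (crop, feedlot, forest, grazing)
sources = np.array([
    [ 120.0,   180.0,   95.0,   150.0],   # Sr (mg/kg)
    [ 400.0,   520.0,  310.0,   460.0],   # Ba (mg/kg)
    [  60.0,    45.0,   80.0,    55.0],   # Rb (mg/kg)
    [9000.0, 12000.0, 7000.0, 10000.0],   # Ca (mg/kg)
])
true_props = np.array([0.4, 0.3, 0.2, 0.1])
mixture = sources @ true_props             # synthetic "measured" mixture

# augment with the sum-to-one constraint as an extra, heavily weighted row
w = 1e3
A = np.vstack([sources, w * np.ones(sources.shape[1])])
b = np.concatenate([mixture, [w]])
res = lsq_linear(A, b, bounds=(0, 1))
print("estimated proportions:", np.round(res.x, 3))
```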

  10. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    PubMed

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, which is implemented by changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.
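
    The rough sketch below is not the authors' algorithm, but it shows the two generic ingredients the abstract relies on: Gaussian mixture clustering of image intensities and a model-selection criterion (here BIC, as a simple stand-in for the modified Bayes factor) for deciding when to change the number of components.

```python
# Rough sketch: Gaussian mixture clustering of synthetic pixel intensities,
# with BIC used as a placeholder model-selection criterion.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic "image": three intensity populations
pixels = np.concatenate([
    rng.normal(0.2, 0.05, 4000),
    rng.normal(0.5, 0.05, 3000),
    rng.normal(0.8, 0.05, 3000),
]).reshape(-1, 1)

bics = {}
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(pixels)
    bics[k] = gmm.bic(pixels)
best_k = min(bics, key=bics.get)
print("BIC per k:", {k: round(v) for k, v in bics.items()}, "-> chosen k:", best_k)

labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(pixels)
print("cluster sizes:", np.bincount(labels))
```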

  11. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing

    PubMed Central

    Leong, Siow Hoo

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, which is implemented by changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index. PMID:28686634

  12. The nonlinear model for emergence of stable conditions in gas mixture in force field

    NASA Astrophysics Data System (ADS)

    Kalutskov, Oleg; Uvarova, Liudmila

    2016-06-01

    The case of M-component liquid evaporation from a straight cylindrical capillary into an N-component gas mixture in the presence of external forces is reviewed. It is assumed that the gas mixture is not ideal. Stable states in the gas phase can form during the evaporation process for certain model parameter values because of the nonlinearity of the initial mass transfer equations. The critical concentrations of the resulting gas mixture components (the component concentrations at which stable states occur in the mixture) were determined mathematically for the case of single-component fluid evaporation into a two-component atmosphere. It was concluded that this equilibrium concentration ratio of the mixture components can be achieved by the influence of external forces on the mass transfer processes. This is one way to create sustainable gas clusters that can be used effectively in modern nanotechnology.

  13. Simplex-centroid mixture formulation for optimised composting of kitchen waste.

    PubMed

    Abdullah, N; Chin, N L

    2010-11-01

    Composting is a good recycling method to fully utilise all the organic wastes present in kitchen waste, owing to the highly nutritious matter within the waste. In the present study, the optimised mixture proportions of kitchen waste containing vegetable scraps (V), fish processing waste (F) and newspaper (N) or onion peels (O) were determined by applying the simplex-centroid mixture design method to achieve the desired initial moisture content and carbon-to-nitrogen (C:N) ratio for an effective composting process. The best mixture was 48.5% V, 17.7% F and 33.7% N for blends with newspaper, while for blends with onion peels the mixture proportion was 44.0% V, 19.7% F and 36.2% O. The predicted responses from these mixture proportions fall within the acceptable limits of moisture content of 50% to 65% and C:N ratio of 20-40, and were also validated experimentally. Copyright 2010 Elsevier Ltd. All rights reserved.
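
    The sketch below evaluates candidate blends on the mixture simplex (proportions summing to one) against the moisture and C:N targets quoted above; the per-component moisture, carbon and nitrogen values are illustrative placeholders, and the simple mass-weighted calculation is a simplification rather than the paper's method.

```python
# Hedged sketch: screening blend proportions against moisture and C:N targets.
# Component property values are placeholders, not the paper's data.
import numpy as np

# columns: moisture fraction, %C (dry basis), %N (dry basis) -- placeholders
components = {
    "vegetable scraps": (0.85, 45.0, 1.8),
    "fish waste":       (0.70, 50.0, 6.0),
    "newspaper":        (0.08, 49.0, 0.25),
}
names = list(components)
props_to_test = [
    np.array([1 / 3, 1 / 3, 1 / 3]),      # simplex centroid
    np.array([0.485, 0.177, 0.337]),      # near the blend reported above
]

for p in props_to_test:
    vals = np.array([components[n] for n in names])
    moisture = p @ vals[:, 0]                      # mass-weighted moisture
    cn = (p @ vals[:, 1]) / (p @ vals[:, 2])       # simplified C:N estimate
    ok = 0.50 <= moisture <= 0.65 and 20 <= cn <= 40
    print(dict(zip(names, np.round(p, 3))),
          f"moisture={moisture:.2f}, C:N={cn:.1f}, acceptable={ok}")
```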

  14. Prediction of vapour-liquid and vapour-liquid-liquid equilibria of nitrogen-hydrocarbon mixtures used in J-T refrigerators

    NASA Astrophysics Data System (ADS)

    Narayanan, Vineed; Venkatarathnam, G.

    2018-03-01

    Nitrogen-hydrocarbon mixtures are widely used as refrigerants in J-T refrigerators operating with mixtures, as well as in natural gas liquefiers. The Peng-Robinson equation of state has traditionally been used to simulate these cryogenic processes. Multi-parameter Helmholtz energy equations are now preferred for determining the properties of natural gas. They have, however, been used only to predict vapour-liquid equilibria, and not the vapour-liquid-liquid equilibria that can occur in mixtures used in cryogenic mixed refrigerant processes. In this paper the vapour-liquid equilibria of binary mixtures of nitrogen-methane, nitrogen-ethane, nitrogen-propane and nitrogen-isobutane, and of three-component mixtures of nitrogen-methane-ethane and nitrogen-methane-propane, have been studied with the Peng-Robinson and the Helmholtz energy equations of state of NIST REFPROP and compared with experimental data available in the literature.
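
    A minimal sketch of the Peng-Robinson equation of state for a pure fluid is given below, solving the cubic for the compressibility factor of nitrogen; the critical constants are standard literature values, and extending this to the mixtures studied in the paper would additionally require mixing rules and binary interaction parameters.

```python
# Sketch: Peng-Robinson compressibility factor for pure nitrogen.
import numpy as np

R = 8.314462618                              # J/(mol K)
Tc, Pc, omega = 126.19, 3.3958e6, 0.0372     # nitrogen critical constants

def pr_Z(T, P):
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
    A = a * alpha * P / (R * T)**2
    B = b * P / (R * T)
    # Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1 - B), A - 3 * B**2 - 2 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-10].real
    # at subcritical conditions the cubic can have up to three real roots:
    # the largest is vapour-like, the smallest liquid-like
    return real.max(), real.min()

print(pr_Z(T=100.0, P=5e5))
```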

  15. Plutonium dissolution process

    DOEpatents

    Vest, Michael A.; Fink, Samuel D.; Karraker, David G.; Moore, Edwin N.; Holcomb, H. Perry

    1996-01-01

    A two-step process for dissolving plutonium metal, in which the two steps can be carried out sequentially or simultaneously. Plutonium metal is exposed to a first mixture containing approximately 1.0 M to 1.67 M sulfamic acid and 0.0025 M to 0.1 M fluoride, the mixture having been heated to a temperature between 45 °C and 70 °C. The mixture will dissolve a first portion of the plutonium metal but leave a portion of the plutonium in an oxide residue. Then, a mineral acid and additional fluoride are added to dissolve the residue. Alternatively, nitric acid in a concentration between approximately 0.05 M and 0.067 M is added to the first mixture to dissolve the residue as it is produced. Hydrogen released during the dissolution process is diluted with nitrogen.

  16. Global Binary Optimization on Graphs for Classification of High Dimensional Data

    DTIC Science & Technology

    2014-09-01

    Buades et al. in [10] introduce a new non-local means algorithm for image denoising and compare it to some of the best methods. In [28], Grady describes a random walk algorithm for image segmentation using the solution to a Dirichlet problem. Elmoataz et al. present generalizations of the graph Laplacian [19] for image denoising and manifold smoothing. Couprie et al. in [16] propose a parameterized graph-based energy function that unifies

  17. Implementation of Nonhomogeneous Dirichlet Boundary Conditions in the p- Version of the Finite Element Method

    DTIC Science & Technology

    1988-09-01

    (Title-page excerpt) Implementation of Nonhomogeneous Dirichlet Boundary Conditions in the p-Version of the Finite Element Method, by Ivo Babuska (Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742) and B. Guo (Engineering Mechanics Research Corporation, Troy). Research partially supported by the National Science Foundation under Grant DMS-85-16191 during the stay at the Institute for Physical Science and Technology.

  18. Lifshits Tails for Randomly Twisted Quantum Waveguides

    NASA Astrophysics Data System (ADS)

    Kirsch, Werner; Krejčiřík, David; Raikov, Georgi

    2018-03-01

    We consider the Dirichlet Laplacian H_γ on a 3D twisted waveguide with random Anderson-type twisting γ. We introduce the integrated density of states N_γ for the operator H_γ, and investigate the Lifshits tails of N_γ, i.e. the asymptotic behavior of N_γ(E) as E ↓ inf supp dN_γ. In particular, we study the dependence of the Lifshits exponent on the decay rate of the single-site twisting at infinity.

  19. Evaluation of the path integral for flow through random porous media

    NASA Astrophysics Data System (ADS)

    Westbroek, Marise J. E.; Coche, Gil-Arnaud; King, Peter R.; Vvedensky, Dimitri D.

    2018-04-01

    We present a path integral formulation of Darcy's equation in one dimension with random permeability described by a correlated multivariate lognormal distribution. This path integral is evaluated with the Markov chain Monte Carlo method to obtain pressure distributions, which are shown to agree with the solutions of the corresponding stochastic differential equation for Dirichlet and Neumann boundary conditions. The extension of our approach to flow through random media in two and three dimensions is discussed.
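    A minimal Python sketch of the reference solution the abstract compares against: direct Monte Carlo over realizations of a correlated lognormal permeability, each solved by finite differences with Dirichlet boundary conditions. The grid size, covariance parameters, and boundary pressures are illustrative assumptions; this is not the path-integral MCMC formulation itself.

    import numpy as np

    def sample_log_permeability(x, sigma=1.0, corr_len=0.2, rng=None):
        """Gaussian log-permeability with exponential covariance."""
        if rng is None:
            rng = np.random.default_rng()
        cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
        L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))
        return L @ rng.standard_normal(len(x))

    def solve_darcy_1d(K, p_left=1.0, p_right=0.0):
        """Solve d/dx(K dp/dx) = 0 on a uniform grid with Dirichlet ends."""
        n = len(K)
        A = np.zeros((n, n)); b = np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0
        b[0], b[-1] = p_left, p_right
        Kh = 0.5 * (K[:-1] + K[1:])           # simple arithmetic face average
        for i in range(1, n - 1):
            A[i, i - 1], A[i, i], A[i, i + 1] = Kh[i - 1], -(Kh[i - 1] + Kh[i]), Kh[i]
        return np.linalg.solve(A, b)

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 101)
    mid_pressure = [solve_darcy_1d(np.exp(sample_log_permeability(x, rng=rng)))[50]
                    for _ in range(500)]
    print("mean and std of p(x=0.5):", np.mean(mid_pressure), np.std(mid_pressure))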

  20. Using phrases and document metadata to improve topic modeling of clinical reports.

    PubMed

    Speier, William; Ong, Michael K; Arnold, Corey W

    2016-06-01

    Probabilistic topic models provide an unsupervised method for analyzing unstructured text, and they have the potential to be integrated into clinical automatic summarization systems. Clinical documents are accompanied by metadata from a patient's medical history and frequently contain multiword concepts that can be valuable for accurately interpreting the included text. While existing methods have attempted to address these problems individually, we present a unified model for free-text clinical documents that integrates contextual patient- and document-level data and discovers multi-word concepts. In the proposed model, phrases are represented by chained n-grams, and a Dirichlet hyper-parameter is weighted by both document-level and patient-level context. This method and three other Latent Dirichlet allocation models were fit to a large collection of clinical reports. Examples of resulting topics demonstrate the results of the new model, and the quality of the representations is evaluated using empirical log likelihood. The proposed model was able to create informative prior probabilities based on patient and document information, and captured phrases that represented various clinical concepts. The representation using the proposed model had a significantly higher empirical log likelihood than the compared methods. Integrating document metadata and capturing phrases in clinical text greatly improves the topic representation of clinical documents. The resulting clinically informative topics may effectively serve as the basis for an automatic summarization system for clinical reports. Copyright © 2016 Elsevier Inc. All rights reserved.
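    For orientation, a plain Latent Dirichlet Allocation baseline in Python using scikit-learn. The proposed model additionally chains n-grams into phrases and weights the Dirichlet hyper-parameter by patient and document metadata, which this baseline does not do; the toy documents and topic count are invented for illustration.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "chest pain shortness of breath troponin elevated",
        "fracture left femur after fall pain control",
        "pneumonia cough fever infiltrate on chest x ray",
        "myocardial infarction stent placed chest pain resolved",
    ]

    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    terms = vec.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = [terms[i] for i in topic.argsort()[-5:][::-1]]   # five most probable words per topic
        print(f"topic {k}: {top}")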

  1. Nonlocal Reformulations of Water and Internal Waves and Asymptotic Reductions

    NASA Astrophysics Data System (ADS)

    Ablowitz, Mark J.

    2009-09-01

    Nonlocal reformulations of the classical equations of water waves and two ideal fluids separated by a free interface, bounded above by either a rigid lid or a free surface, are obtained. The kinematic equations may be written in terms of integral equations with a free parameter. By expressing the pressure, or Bernoulli, equation in terms of the surface/interface variables, a closed system is obtained. An advantage of this formulation, referred to as the nonlocal spectral (NSP) formulation, is that the vertical component is eliminated, thus reducing the dimensionality and fixing the domain in which the equations are posed. The NSP equations and the Dirichlet-Neumann operators associated with the water wave or two-fluid equations can be related to each other and the Dirichlet-Neumann series can be obtained from the NSP equations. Important asymptotic reductions obtained from the two-fluid nonlocal system include the generalizations of the Benney-Luke and Kadomtsev-Petviashvili (KP) equations, referred to as intermediate-long wave (ILW) generalizations. These 2+1 dimensional equations possess lump type solutions. In the water wave problem high-order asymptotic series are obtained for two and three dimensional gravity-capillary solitary waves. In two dimensions, the first term in the asymptotic series is the well-known hyperbolic secant squared solution of the KdV equation; in three dimensions, the first term is the rational lump solution of the KP equation.

  2. Characterization of airborne particles generated from metal active gas welding process.

    PubMed

    Guerreiro, C; Gomes, J F; Carvalho, P; Santos, T J G; Miranda, R M; Albuquerque, P

    2014-05-01

    This study is focused on the characterization of particles emitted in the metal active gas welding of carbon steel using mixture of Ar + CO2, and intends to analyze which are the main process parameters that influence the emission itself. It was found that the amount of emitted particles (measured by particle number and alveolar deposited surface area) are clearly dependent on the distance to the welding front and also on the main welding parameters, namely the current intensity and heat input in the welding process. The emission of airborne fine particles seems to increase with the current intensity as fume-formation rate does. When comparing the tested gas mixtures, higher emissions are observed for more oxidant mixtures, that is, mixtures with higher CO2 content, which result in higher arc stability. These mixtures originate higher concentrations of fine particles (as measured by number of particles by cm(3) of air) and higher values of alveolar deposited surface area of particles, thus resulting in a more severe worker's exposure.

  3. 7 CFR 52.3182 - Varietal types of dried prunes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT PROCESSED FRUITS AND VEGETABLES, PROCESSED PRODUCTS THEREOF, AND CERTAIN OTHER PROCESSED FOOD PRODUCTS 1 United States Standards for Grades of Dried...; or Sugar; or a mixture of Imperial and Sugar. (d) Type IV. Any other types; or mixtures of any types...

  4. 7 CFR 52.3182 - Varietal types of dried prunes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT PROCESSED FRUITS AND VEGETABLES, PROCESSED PRODUCTS THEREOF, AND CERTAIN OTHER PROCESSED FOOD PRODUCTS 1 United States Standards for Grades of Dried...; or Sugar; or a mixture of Imperial and Sugar. (d) Type IV. Any other types; or mixtures of any types...

  5. Greedy feature selection for glycan chromatography data with the generalized Dirichlet distribution

    PubMed Central

    2013-01-01

    Background Glycoproteins are involved in a diverse range of biochemical and biological processes. Changes in protein glycosylation are believed to occur in many diseases, particularly during cancer initiation and progression. The identification of biomarkers for human disease states is becoming increasingly important, as early detection is key to improving survival and recovery rates. To this end, the serum glycome has been proposed as a potential source of biomarkers for different types of cancers. High-throughput hydrophilic interaction liquid chromatography (HILIC) technology for glycan analysis allows for the detailed quantification of the glycan content in human serum. However, the experimental data from this analysis is compositional by nature. Compositional data are subject to a constant-sum constraint, which restricts the sample space to a simplex. Statistical analysis of glycan chromatography datasets should account for their unusual mathematical properties. As the volume of glycan HILIC data being produced increases, there is a considerable need for a framework to support appropriate statistical analysis. Proposed here is a methodology for feature selection in compositional data. The principal objective is to provide a template for the analysis of glycan chromatography data that may be used to identify potential glycan biomarkers. Results A greedy search algorithm, based on the generalized Dirichlet distribution, is carried out over the feature space to search for the set of “grouping variables” that best discriminate between known group structures in the data, modelling the compositional variables using beta distributions. The algorithm is applied to two glycan chromatography datasets. Statistical classification methods are used to test the ability of the selected features to differentiate between known groups in the data. Two well-known methods are used for comparison: correlation-based feature selection (CFS) and recursive partitioning (rpart). CFS is a feature selection method, while recursive partitioning is a learning tree algorithm that has been used for feature selection in the past. Conclusions The proposed feature selection method performs well for both glycan chromatography datasets. It is computationally slower, but results in a lower misclassification rate and a higher sensitivity rate than both correlation-based feature selection and the classification tree method. PMID:23651459
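    A minimal sketch of the greedy idea under stated assumptions: each candidate compositional feature is scored by how well class-specific Beta distributions fit it, and the selected set grows one feature at a time. The score here is marginal (it does not condition on already-selected features) and the synthetic data are invented, so this is a simplification of the generalized-Dirichlet treatment in the paper.

    import numpy as np
    from scipy import stats

    def beta_class_score(x, y):
        """Sum of per-class Beta log-likelihoods for one compositional feature x in (0,1)."""
        score = 0.0
        for c in np.unique(y):
            xc = np.clip(x[y == c], 1e-6, 1 - 1e-6)
            a, b, *_ = stats.beta.fit(xc, floc=0, fscale=1)   # fit shape parameters only
            score += stats.beta.logpdf(xc, a, b).sum()
        return score

    rng = np.random.default_rng(0)
    n, p = 60, 8
    X = rng.dirichlet(np.ones(p), size=n)                      # synthetic compositional data
    y = rng.integers(0, 2, size=n)
    X[y == 1, 0] += 0.15                                        # make feature 0 informative
    X /= X.sum(axis=1, keepdims=True)                           # restore the constant-sum constraint

    selected, remaining = [], list(range(p))
    for _ in range(3):                                          # pick three features greedily
        best = max(remaining, key=lambda j: beta_class_score(X[:, j], y))
        selected.append(best); remaining.remove(best)
    print("selected features:", selected)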

  6. Flexible link functions in nonparametric binary regression with Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K

    2016-09-01

    In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. © 2015, The International Biometric Society.
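    A short sketch of the flexible link itself: the generalized extreme value (GEV) response function for binary regression. With shape parameter xi near zero it reduces to a Gumbel-type (complementary log-log style) link, while other xi values give asymmetric links. The convention p = 1 - GEV_cdf(-eta) follows common usage and may differ from the paper's exact parameterization.

    import numpy as np

    def gev_cdf(z, xi):
        """CDF of the standardized GEV distribution."""
        z = np.asarray(z, dtype=float)
        if abs(xi) < 1e-8:                       # Gumbel limit
            return np.exp(-np.exp(-z))
        arg = 1.0 + xi * z
        t = np.where(arg > 0, np.maximum(arg, 1e-12) ** (-1.0 / xi),
                     np.inf if xi > 0 else 0.0)  # outside the support: CDF is 0 or 1
        return np.exp(-t)

    def gev_link_prob(eta, xi):
        """Success probability for linear predictor eta under the GEV link."""
        return 1.0 - gev_cdf(-eta, xi)

    for xi in (-0.3, 0.0, 0.5):
        print(xi, np.round(gev_link_prob(np.array([-2.0, 0.0, 2.0]), xi), 3))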

  7. Flexible Link Functions in Nonparametric Binary Regression with Gaussian Process Priors

    PubMed Central

    Li, Dan; Lin, Lizhen; Dey, Dipak K.

    2015-01-01

    Summary In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. PMID:26686333

  8. Boundary Regularity for the Porous Medium Equation

    NASA Astrophysics Data System (ADS)

    Björn, Anders; Björn, Jana; Gianazza, Ugo; Siljander, Juhana

    2018-05-01

    We study the boundary regularity of solutions to the porous medium equation u_t = Δu^m in the degenerate range m > 1. In particular, we show that in cylinders the Dirichlet problem with positive continuous boundary data on the parabolic boundary has a solution which attains the boundary values, provided that the spatial domain satisfies the elliptic Wiener criterion. This condition is known to be optimal, and it is a consequence of our main theorem which establishes a barrier characterization of regular boundary points for general (not necessarily cylindrical) domains in R^{n+1}. One of our fundamental tools is a new strict comparison principle between sub- and superparabolic functions, which makes it essential for us to study both nonstrict and strict Perron solutions to be able to develop a fruitful boundary regularity theory. Several other comparison principles and pasting lemmas are also obtained. In the process we obtain a rather complete picture of the relation between sub/superparabolic functions and weak sub/supersolutions.

  9. Knowledge-based probabilistic representations of branching ratios in chemical networks: The case of dissociative recombinations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal

    Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the unique information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbons ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.

  10. Decoding brain activity using a large-scale probabilistic functional-anatomical atlas of human cognition

    PubMed Central

    Jones, Michael N.

    2017-01-01

    A central goal of cognitive neuroscience is to decode human brain activity—that is, to infer mental processes from observed patterns of whole-brain activation. Previous decoding efforts have focused on classifying brain activity into a small set of discrete cognitive states. To attain maximal utility, a decoding framework must be open-ended, systematic, and context-sensitive—that is, capable of interpreting numerous brain states, presented in arbitrary combinations, in light of prior information. Here we take steps towards this objective by introducing a probabilistic decoding framework based on a novel topic model—Generalized Correspondence Latent Dirichlet Allocation—that learns latent topics from a database of over 11,000 published fMRI studies. The model produces highly interpretable, spatially-circumscribed topics that enable flexible decoding of whole-brain images. Importantly, the Bayesian nature of the model allows one to “seed” decoder priors with arbitrary images and text—enabling researchers, for the first time, to generate quantitative, context-sensitive interpretations of whole-brain patterns of brain activity. PMID:29059185

  11. Complex temporal topic evolution modelling using the Kullback-Leibler divergence and the Bhattacharyya distance.

    PubMed

    Andrei, Victor; Arandjelović, Ognjen

    2016-12-01

    The rapidly expanding corpus of medical research literature presents major challenges in the understanding of previous work, the extraction of maximum information from collected data, and the identification of promising research directions. We present a case for the use of advanced machine learning techniques as an aid in this task and introduce a novel methodology that is shown to be capable of extracting meaningful information from large longitudinal corpora and of tracking complex temporal changes within them. Our framework is based on (i) the discretization of time into epochs, (ii) epoch-wise topic discovery using a hierarchical Dirichlet process-based model, and (iii) a temporal similarity graph which allows for the modelling of complex topic changes. More specifically, this is the first work that discusses and distinguishes between two groups of particularly challenging topic evolution phenomena: topic splitting and speciation and topic convergence and merging, in addition to the more widely recognized emergence and disappearance and gradual evolution. The proposed framework is evaluated on a public medical literature corpus.
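    A short Python sketch of the two similarity measures named in the title, applied to a pair of discrete topic-word distributions; epoch-wise topics would be compared this way when building the temporal similarity graph. The example distributions are invented.

    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        """KL(p || q) for discrete distributions (asymmetric)."""
        p = np.asarray(p, float) + eps; p /= p.sum()
        q = np.asarray(q, float) + eps; q /= q.sum()
        return float(np.sum(p * np.log(p / q)))

    def bhattacharyya_distance(p, q, eps=1e-12):
        """Negative log of the Bhattacharyya coefficient (symmetric)."""
        p = np.asarray(p, float) + eps; p /= p.sum()
        q = np.asarray(q, float) + eps; q /= q.sum()
        return float(-np.log(np.sum(np.sqrt(p * q))))

    topic_t1 = [0.50, 0.30, 0.15, 0.05]   # word distribution of a topic in epoch t
    topic_t2 = [0.45, 0.25, 0.20, 0.10]   # candidate continuation in epoch t+1
    print("KL:", kl_divergence(topic_t1, topic_t2))
    print("Bhattacharyya:", bhattacharyya_distance(topic_t1, topic_t2))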

  12. Rapid Airplane Parametric Input Design(RAPID)

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Bloor, Malcolm I. G.; Wilson, Michael J.; Thomas, Almuttil M.

    2004-01-01

    An efficient methodology is presented for defining a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. A small set of design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. The wing, tail, and canard components are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Grid sensitivity is obtained by applying the automatic differentiation precompiler ADIFOR to software for the grid generation. The computed surface grids, volume grids, and sensitivity derivatives are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations.

  13. Knowledge-based probabilistic representations of branching ratios in chemical networks: the case of dissociative recombinations.

    PubMed

    Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal

    2010-10-07

    Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the unique information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbons ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.
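    A minimal sketch of the probabilistic idea in Python: incompletely measured branching ratios are represented by a Dirichlet distribution and the uncertainty is propagated to a radical production rate. The channel list, nominal fractions, concentration parameter, rate coefficient, and densities are all illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Three product channels with nominal fractions; the concentration parameter
    # expresses how informative the measurement is (larger = tighter around nominal).
    nominal = np.array([0.55, 0.30, 0.15])
    concentration = 50.0
    branching_samples = rng.dirichlet(concentration * nominal, size=10000)

    k_dr = 7e-7            # assumed total dissociative-recombination rate coefficient (cm^3 s^-1)
    n_ion, n_e = 1e3, 1e3  # assumed ion and electron densities (cm^-3)

    # Production rate of the radical formed in channel 0, with full uncertainty.
    rate_samples = k_dr * n_ion * n_e * branching_samples[:, 0]
    lo, mid, hi = np.percentile(rate_samples, [16, 50, 84])
    print(f"channel-0 radical production rate: {mid:.2e} (+{hi - mid:.1e} / -{mid - lo:.1e}) cm^-3 s^-1")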

  14. Theoretical aspect of suitable spatial boundary condition specified for adjoint model on limited area

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Wu, Rongsheng

    2001-12-01

    A theoretical argument for the so-called suitable spatial boundary condition is developed with the aid of a homotopy framework to demonstrate that the proposed condition removes the over-specification problem that arises for an adjoint model on a limited area, while preserving well-posedness and optimality in the boundary setting. The ill-posedness of an over-specified spatial boundary condition is, in a sense, inevitable for an adjoint model, since data assimilation must adapt to prescribed observations that are over-specified at the spatial boundaries of the modeling domain. From the viewpoint of practical implementation, the theoretical framework of the proposed condition reduces to a hybrid formulation of a nudging filter, a radiation condition that accounts for ambient forcing, and a Dirichlet-type boundary condition compatible with the observations prescribed in the data assimilation procedure. All of these treatments are familiar to mesoscale modelers.

  15. Conceptual design of distillation-based hybrid separation processes.

    PubMed

    Skiborowski, Mirko; Harwardt, Andreas; Marquardt, Wolfgang

    2013-01-01

    Hybrid separation processes combine different separation principles and constitute a promising design option for the separation of complex mixtures. Particularly, the integration of distillation with other unit operations can significantly improve the separation of close-boiling or azeotropic mixtures. Although the design of single-unit operations is well understood and supported by computational methods, the optimal design of flowsheets of hybrid separation processes is still a challenging task. The large number of operational and design degrees of freedom requires a systematic and optimization-based design approach. To this end, a structured approach, the so-called process synthesis framework, is proposed. This article reviews available computational methods for the conceptual design of distillation-based hybrid processes for the separation of liquid mixtures. Open problems are identified that must be addressed to finally establish a structured process synthesis framework for such processes.

  16. Systematic identification of latent disease-gene associations from PubMed articles.

    PubMed

    Zhang, Yuji; Shen, Feichen; Mojarad, Majid Rastegar; Li, Dingcheng; Liu, Sijia; Tao, Cui; Yu, Yue; Liu, Hongfang

    2018-01-01

    Recent scientific advances have accumulated a tremendous amount of biomedical knowledge providing novel insights into the relationship between molecular and cellular processes and diseases. Literature mining is one of the commonly used methods to retrieve and extract information from scientific publications for understanding these associations. However, due to the large data volume and complicated, noisy associations, the interpretability of such association data for semantic knowledge discovery is challenging. In this study, we describe an integrative computational framework aiming to expedite the discovery of latent disease mechanisms by dissecting 146,245 disease-gene associations from over 25 million PubMed-indexed articles. We take advantage of both Latent Dirichlet Allocation (LDA) modeling and network-based analysis for their capabilities of detecting latent associations and reducing noise in large-volume data, respectively. Our results demonstrate that (1) the LDA-based modeling is able to group similar diseases into disease topics; (2) the disease-specific association networks follow the scale-free network property; (3) certain subnetwork patterns were enriched in the disease-specific association networks; and (4) genes were enriched in topic-specific biological processes. Our approach offers promising opportunities for latent disease-gene knowledge discovery in biomedical research.

  17. Systematic identification of latent disease-gene associations from PubMed articles

    PubMed Central

    Mojarad, Majid Rastegar; Li, Dingcheng; Liu, Sijia; Tao, Cui; Yu, Yue; Liu, Hongfang

    2018-01-01

    Recent scientific advances have accumulated a tremendous amount of biomedical knowledge providing novel insights into the relationship between molecular and cellular processes and diseases. Literature mining is one of the commonly used methods to retrieve and extract information from scientific publications for understanding these associations. However, due to the large data volume and complicated, noisy associations, the interpretability of such association data for semantic knowledge discovery is challenging. In this study, we describe an integrative computational framework aiming to expedite the discovery of latent disease mechanisms by dissecting 146,245 disease-gene associations from over 25 million PubMed-indexed articles. We take advantage of both Latent Dirichlet Allocation (LDA) modeling and network-based analysis for their capabilities of detecting latent associations and reducing noise in large-volume data, respectively. Our results demonstrate that (1) the LDA-based modeling is able to group similar diseases into disease topics; (2) the disease-specific association networks follow the scale-free network property; (3) certain subnetwork patterns were enriched in the disease-specific association networks; and (4) genes were enriched in topic-specific biological processes. Our approach offers promising opportunities for latent disease-gene knowledge discovery in biomedical research. PMID:29373609

  18. Methods and systems for deacidizing gaseous mixtures

    DOEpatents

    Hu, Liang

    2010-05-18

    An improved process for deacidizing a gaseous mixture using phase enhanced gas-liquid absorption is described. The process utilizes a multiphasic absorbent that absorbs an acid gas at increased rate and leads to reduced overall energy costs for the deacidizing operation.

  19. ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics

    PubMed Central

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.

    2014-01-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156
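    A toy sketch of the ingredients under stated assumptions: two subpopulations follow the same simple kinetic ODE with different rate constants, a noisy snapshot is observed at one time point, and a two-component Gaussian mixture recovers the subpopulation structure. The full method couples the ODE and mixture fits jointly; the ODE, rates, and noise model here are invented for illustration.

    import numpy as np
    from scipy.integrate import odeint
    from sklearn.mixture import GaussianMixture

    def phospho_ode(x, t, k_on, k_off):
        """Fraction of phosphorylated protein x(t)."""
        return k_on * (1.0 - x) - k_off * x

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 6)
    rates = [(0.8, 0.1), (0.2, 0.1)]          # two subpopulations with different k_on

    cells = []
    for k_on, k_off in rates:
        traj = odeint(phospho_ode, 0.0, t, args=(k_on, k_off)).ravel()
        # 200 cells per subpopulation, log-normal measurement noise at the last time point
        cells.append(traj[-1] * rng.lognormal(mean=0.0, sigma=0.2, size=200))
    data = np.concatenate(cells).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
    print("estimated subpopulation means:", np.sort(gmm.means_.ravel()))
    print("estimated weights:", np.sort(gmm.weights_))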

  20. Lunar dust simulant containing nanophase iron and method for making the same

    NASA Technical Reports Server (NTRS)

    Hung, Chin-cheh (Inventor); McNatt, Jeremiah (Inventor)

    2012-01-01

    A lunar dust simulant containing nanophase iron and a method for making the same. Process (1) comprises a mixture of ferric chloride, fluorinated carbon powder, and glass beads, treating the mixture to produce nanophase iron, wherein the resulting lunar dust simulant contains .alpha.-iron nanoparticles, Fe.sub.2O.sub.3, and Fe.sub.3O.sub.4. Process (2) comprises a mixture of a material of mixed-metal oxides that contain iron and carbon black, treating the mixture to produce nanophase iron, wherein the resulting lunar dust simulant contains .alpha.-iron nanoparticles and Fe.sub.3O.sub.4.

  1. The influence of surface-active agents in gas mixture on the intensity of jet condensation

    NASA Astrophysics Data System (ADS)

    Yezhov, YV; Okhotin, VS

    2017-11-01

    The report presents a methodology for calculating the contact condensation of steam from a steam-gas mixture onto a stream of water, taking into account the mass flow of steam through the phase boundary and, in particular, the change in turbulent transport properties near the interface and their connection to interface perturbations caused by the surface tension of the mixture. It also presents a method for calculating the surface tension at the interface between water and a mixture of fluorocarbon vapor and water vapor, based on previously established analytical methods for the surface tension of simple one-component liquid-vapor systems. The resulting analytical relation expresses the surface tension of the mixture as a function of temperature and of the volume concentration of the fluorocarbon gas in the mixture, and holds for all sizes of gas molecules. On a newly built experimental stand, verification experiments were performed to determine the surface tension of the pure substances (water and its vapor, C3F8 and its vapor), and the first experimental data were obtained on the surface tension at the interface between water and a mixture of water vapor and fluorocarbon C3F8. These experimental data allow the values of the two constants used in the calculation model of the mixture surface tension to be refined. An experimental study of jet condensation was carried out with different gases flowing into the condensation zone. The condensation process was monitored by measuring the flow rate of water leaving the nozzle and the amount of condensate formed. When C3F8 was supplied, a noticeable intensification of the condensation process was observed compared with the condensation of pure water vapor. The calculation results are in satisfactory agreement with the experimental data on the surface tension of the mixture and on steam condensation from the steam-gas mixture. Analysis of the calculation results shows that the presence of surfactants in the condensation zone affects the partial vapor pressure at the interfacial surface and the thermal conductivity of the liquid jet; the first circumstance degrades the condensation process, while the second intensifies it. There is evidently an optimum concentration of the surfactant additive in the vapour at which the condensation rate is maximal. The developed calculation methodology for contact condensation can be used to evaluate these optimum conditions and their practical effect in the field studied.

  2. Characterization of concrete hardness by using sugarcane bagasse waste mixture by carbon oven curing process

    NASA Astrophysics Data System (ADS)

    Rino, Agus; Farida, Elvaswer, Dahlan, Dahyunir

    2017-01-01

    Sugarcane bagasse is one of the solid wastes that can be processed as an admixture for structural materials. In previous research, sugarcane bagasse has been processed and used as an admixture in Portland cement, in the manufacture of asbestos, and in mixtures for manufacturing the brake pads frequently used in motor vehicles. Building on these results, further research on sugarcane bagasse for structural materials is warranted. The mechanical properties were determined by tensile and compression tests. To study the effect of material variation, bagasse carbon was sieved into aggregates of various sizes and mixed into the tile material. The aggregate sizes used in the concrete material were 200 µm, 400 µm and 600 µm, chosen in accordance with the sizes of the sieving apparatus. The sugarcane bagasse carbon was obtained by an oven curing process at 200 °C for 3 hours. The best result in this study, a strength of 7.2 MPa, was obtained with the 200 µm bagasse powder mixture.

  3. PCBs: Cancer Dose-Response Assessment and Application to Environmental Mixtures (1996)

    EPA Science Inventory

    This report updates the cancer dose-response assessment for polychlorinated biphenyls (PCBs) and shows how information on toxicity, disposition, and environmental processes can be considered together to evaluate health risks from PCB mixtures in the environment. Processes that ch...

  4. Methods for deacidizing gaseous mixtures by phase enhanced absorption

    DOEpatents

    Hu, Liang

    2012-11-27

    An improved process for deacidizing a gaseous mixture using phase enhanced gas-liquid absorption is described. The process utilizes a multiphasic absorbent that absorbs an acid gas at increased rate and leads to reduced overall energy costs for the deacidizing operation.

  5. Is There Evidence for a Mixture of Processes in Speed-Accuracy Trade-Off Behavior?

    PubMed

    van Maanen, Leendert

    2016-01-01

    The speed-accuracy trade-off (SAT) effect refers to the behavioral trade-off between fast yet error-prone responses and accurate but slow responses. Multiple theories on the cognitive mechanisms behind SAT exist. One theory assumes that SAT is a consequence of strategically adjusting the amount of evidence required for overt behaviors, such as perceptual choices. Another theory hypothesizes that SAT is the consequence of a mixture of multiple categorically different cognitive processes. In this paper, these theories are disambiguated by assessing whether the fixed-point property of mixture distributions holds, in both simulations and data. I conclude that, at least for perceptual decision making, there is no evidence for a mixture of different cognitive processes to trade off accuracy of responding for speed. Copyright © 2016 Cognitive Science Society, Inc.
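    A small Python sketch of the fixed-point property being tested: if response time densities across speed/accuracy conditions are mixtures of the same two component densities with different mixing proportions, every pair of mixture densities crosses at one common point. The component distributions and proportions below are arbitrary choices for illustration, not fits to data.

    import numpy as np
    from scipy import stats

    x = np.linspace(0.05, 3.0, 2000)
    f_fast = stats.gamma(a=4, scale=0.10).pdf(x)     # hypothetical "fast guess" RT density
    f_slow = stats.gamma(a=6, scale=0.25).pdf(x)     # hypothetical "controlled" RT density

    densities = [p * f_fast + (1 - p) * f_slow for p in (0.2, 0.5, 0.8)]

    # Under the mixture account, all pairwise crossing points coincide.
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        diff = densities[i] - densities[j]
        k = np.where(np.diff(np.sign(diff)) != 0)[0][0]   # first sign change = crossing
        print(f"densities {i} and {j} cross near RT = {x[k]:.3f} s")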

  6. Characterisation of aerosol combustible mixtures generated using condensation process

    NASA Astrophysics Data System (ADS)

    Saat, Aminuddin; Dutta, Nilabza; Wahid, Mazlan A.

    2012-06-01

    An accidentally released flammable liquid may form an aerosol (a droplet and vapour mixture). This can occur through high-pressure sprays, pressurised liquid leaks, or condensation when hot vapour is rapidly cooled. Such phenomena require a fundamental characterisation of the mixture prior to any subsequent process such as evaporation and combustion. This paper describes a characterisation study of droplet and vapour mixtures generated in a fan-stirred vessel using a condensation technique. Aerosols of isooctane were generated by expansion from an initially premixed gaseous fuel-air mixture. The distribution of droplets within the mixture was characterised using laser diagnostics. Nearly monosized droplet clouds were generated, and the droplet diameter was determined as a function of expansion time. The effect of changes in pressure, temperature, fuel-air fraction and expansion ratio on droplet diameter was evaluated. It is shown that aerosol generation by expansion was influenced by the initial pressure and temperature, the equivalence ratio and the expansion rate. All these parameters affected the onset of condensation, which in turn affected the variation in droplet diameter.

  7. Shock wave induced condensation in fuel-rich gaseous and gas-particles mixtures

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.

    2018-03-01

    The possibility of fuel vapor condensation in shock waves in fuel-rich (cyclohexane-oxygen) gaseous mixtures and the explosion safety aspects of this effect are discussed. It is shown that the condensation process can essentially change the chemical composition of the gas. For example, the molar fraction of the oxidizer can increase several times. As a result, mixtures in which the initial concentration of fuel vapor exceeds the upper flammability limit can nevertheless explode if condensation shifts the composition of the mixture into the ignition region. The rate of the condensation process is estimated. This process can be fast enough to significantly change the chemical composition of the gas and shift it into the flammable range during the compression phase of blast waves generated by explosions of fuel-vapor clouds or rupture of pressurized chemical reactors with characteristic sizes of a few meters. It is shown that the presence of chemically inert microparticles in the gas mixtures under consideration increases the degree of supercooling and the mass of fuel vapors that pass into the liquid, and reduces the characteristic condensation time in comparison with the gas mixture without microparticles. Fuel vapor condensation should be taken into account when estimating the explosion hazard of chemical reactors and of industrial and civil structures that may contain fuel-rich gaseous mixtures of heavy hydrocarbons with air.

  8. The perception of odor objects in everyday life: a review on the processing of odor mixtures

    PubMed Central

    Thomas-Danguin, Thierry; Sinding, Charlotte; Romagny, Sébastien; El Mountassir, Fouzia; Atanasova, Boriana; Le Berre, Elodie; Le Bon, Anne-Marie; Coureaud, Gérard

    2014-01-01

    Smelling monomolecular odors hardly ever occurs in everyday life, and the daily functioning of the sense of smell relies primarily on the processing of complex mixtures of volatiles that are present in the environment (e.g., emanating from food or conspecifics). Such processing allows for the instantaneous recognition and categorization of smells and also for the discrimination of odors among others to extract relevant information and to adapt efficiently in different contexts. The neurophysiological mechanisms underpinning this highly efficient analysis of complex mixtures of odorants is beginning to be unraveled and support the idea that olfaction, as vision and audition, relies on odor-objects encoding. This configural processing of odor mixtures, which is empirically subject to important applications in our societies (e.g., the art of perfumers, flavorists, and wine makers), has been scientifically studied only during the last decades. This processing depends on many individual factors, among which are the developmental stage, lifestyle, physiological and mood state, and cognitive skills; this processing also presents striking similarities between species. The present review gathers the recent findings, as observed in animals, healthy subjects, and/or individuals with affective disorders, supporting the perception of complex odor stimuli as odor objects. It also discusses peripheral to central processing, and cognitive and behavioral significance. Finally, this review highlights that the study of odor mixtures is an original window allowing for the investigation of daily olfaction and emphasizes the need for knowledge about the underlying biological processes, which appear to be crucial for our representation and adaptation to the chemical environment. PMID:24917831

  9. Functional level-set derivative for a polymer self consistent field theory Hamiltonian

    NASA Astrophysics Data System (ADS)

    Ouaknin, Gaddiel; Laachi, Nabil; Bochkov, Daniil; Delaney, Kris; Fredrickson, Glenn H.; Gibou, Frederic

    2017-09-01

    We derive functional level-set derivatives for the Hamiltonian arising in self-consistent field theory, which are required to solve free boundary problems in the self-assembly of polymeric systems such as block copolymer melts. In particular, we consider Dirichlet, Neumann and Robin boundary conditions. We provide numerical examples that illustrate how these shape derivatives can be used to find equilibrium and metastable structures of block copolymer melts with a free surface in both two and three spatial dimensions.

  10. Image Annotation and Topic Extraction Using Super-Word Latent Dirichlet Allocation

    DTIC Science & Technology

    2013-09-01

    an image can be used to improve automated image annotation performance over existing generalized annotators. Second, image annotations can be used ... the other variables. The first ratio in the sampling Equation 2.18 uses word frequency by total words, φ̂_j^(w). The second ratio divides word ... topics by total words in that document, θ̂_j^(d). Both leave out the current assignment of z_i and the results are used to randomly choose a new topic
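    The excerpt describes the collapsed Gibbs sampling update of ordinary LDA, in which a topic for word i is resampled with probability proportional to a word-by-topic ratio (φ̂) times a document-by-topic count (θ̂), leaving out the current assignment. A minimal Python sketch of that update follows; the hyperparameters and toy documents are invented, and the "super-word" phrase handling of the report is not reproduced.

    import numpy as np

    def gibbs_lda(docs, n_topics, n_vocab, alpha=0.1, beta=0.01, n_iter=200, seed=0):
        rng = np.random.default_rng(seed)
        nwt = np.zeros((n_vocab, n_topics))          # word-topic counts
        ndt = np.zeros((len(docs), n_topics))        # document-topic counts
        nt = np.zeros(n_topics)                      # topic totals
        z = [rng.integers(n_topics, size=len(d)) for d in docs]
        for d, doc in enumerate(docs):               # initialize counts
            for w, t in zip(doc, z[d]):
                nwt[w, t] += 1; ndt[d, t] += 1; nt[t] += 1
        for _ in range(n_iter):
            for d, doc in enumerate(docs):
                for i, w in enumerate(doc):
                    t = z[d][i]                      # remove the current assignment
                    nwt[w, t] -= 1; ndt[d, t] -= 1; nt[t] -= 1
                    # word-by-topic ratio times document-topic count, as in the excerpt
                    p = (nwt[w] + beta) / (nt + n_vocab * beta) * (ndt[d] + alpha)
                    t = rng.choice(n_topics, p=p / p.sum())
                    z[d][i] = t
                    nwt[w, t] += 1; ndt[d, t] += 1; nt[t] += 1
        return nwt, ndt

    docs = [[0, 1, 2, 1], [3, 4, 3, 5], [0, 2, 1, 0]]   # toy documents as word-id lists
    nwt, ndt = gibbs_lda(docs, n_topics=2, n_vocab=6)
    print(np.round(ndt, 1))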

  11. Time-Bound Analytic Tasks on Large Data Sets Through Dynamic Configuration of Workflows

    DTIC Science & Technology

    2013-11-01

    Assessment and Efficient Retrieval of Semantic Workflows.” Information Systems Journal, 2012. [2] Blei, D., Ng, A., and M. Jordan. “Latent Dirichlet...25 (561-567), 2009. [5] Furlani, T. R., Jones, M. D., Gallo, S. M., Bruno, A. E., Lu, C., Ghadersohi, A., Gentner, R. J., Patra, A., DeLeon, R. L...Proceedings of the IEEE e-Science Conference, Oxford, UK, pages 244–351. 2009. [8] Gil, Y.; Deelman, E.; Ellisman, M. H.; Fahringer, T.; Fox, G.; Gannon, D

  12. Moving finite elements in 2-D

    NASA Technical Reports Server (NTRS)

    Gelinas, R. J.; Doss, S. K.; Vajk, J. P.; Djomehri, J.; Miller, K.

    1983-01-01

    The mathematical background regarding the moving finite element (MFE) method of Miller and Miller (1981) is discussed, taking into account a general system of partial differential equations (PDE) and the amenability of the MFE method in two dimensions to code modularization and to semiautomatic user-construction of numerous PDE systems for both Dirichlet and zero-Neumann boundary conditions. A description of test problem results is presented, giving attention to aspects of single square wave propagation, and a solution of the heat equation.

  13. On the Boussinesq-Burgers equations driven by dynamic boundary conditions

    NASA Astrophysics Data System (ADS)

    Zhu, Neng; Liu, Zhengrong; Zhao, Kun

    2018-02-01

    We study the qualitative behavior of the Boussinesq-Burgers equations on a finite interval subject to the Dirichlet type dynamic boundary conditions. Assuming H1 ×H2 initial data which are compatible with boundary conditions and utilizing energy methods, we show that under appropriate conditions on the dynamic boundary data, there exist unique global-in-time solutions to the initial-boundary value problem, and the solutions converge to the boundary data as time goes to infinity, regardless of the magnitude of the initial data.

  14. Quasi-periodic solutions of nonlinear beam equation with prescribed frequencies

    NASA Astrophysics Data System (ADS)

    Chang, Jing; Gao, Yixian; Li, Yong

    2015-05-01

    Consider the one-dimensional nonlinear beam equation u_tt + u_xxxx + mu + u^3 = 0 under Dirichlet boundary conditions. We show that for all m > 0 outside a set of small Lebesgue measure, the above equation admits a family of small-amplitude quasi-periodic solutions with n-dimensional Diophantine frequencies. These Diophantine frequencies are small dilations of a prescribed Diophantine vector. The proofs are based on an infinite-dimensional Kolmogorov-Arnold-Moser iteration procedure and a partial Birkhoff normal form.

  15. Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi

    1996-01-01

    An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.

  16. Optimal decay rate for the wave equation on a square with constant damping on a strip

    NASA Astrophysics Data System (ADS)

    Stahn, Reinhard

    2017-04-01

    We consider the damped wave equation with Dirichlet boundary conditions on the unit square parametrized by Cartesian coordinates x and y. We assume the damping a to be strictly positive and constant for x<σ and zero for x>σ . We prove the exact t^{-4/3}-decay rate for the energy of classical solutions. Our main result (Theorem 1) answers question (1) of Anantharaman and Léautaud (Anal PDE 7(1):159-214, 2014, Section 2C).

  17. Method for forming an abrasive surface on a tool

    DOEpatents

    Seals, Roland D.; White, Rickey L.; Swindeman, Catherine J.; Kahl, W. Keith

    1999-01-01

    A method for fabricating a tool used in cutting, grinding and machining operations, is provided. The method is used to deposit a mixture comprising an abrasive material and a bonding material on a tool surface. The materials are propelled toward the receiving surface of the tool substrate using a thermal spray process. The thermal spray process melts the bonding material portion of the mixture, but not the abrasive material. Upon impacting the tool surface, the mixture or composition solidifies to form a hard abrasive tool coating.

  18. Microbial solubilization of phosphate

    DOEpatents

    Rogers, R.D.; Wolfram, J.H.

    1993-10-26

    A process is provided for solubilizing phosphate from phosphate containing ore by treatment with microorganisms which comprises forming an aqueous mixture of phosphate ore, microorganisms operable for solubilizing phosphate from the phosphate ore and maintaining the aqueous mixture for a period of time and under conditions operable to effect the microbial solubilization process. An aqueous solution containing soluble phosphorus can be separated from the reacted mixture by precipitation, solvent extraction, selective membrane, exchange resin or gravity methods to recover phosphate from the aqueous solution. 6 figures.

  19. Microbial solubilization of phosphate

    DOEpatents

    Rogers, Robert D.; Wolfram, James H.

    1993-01-01

    A process is provided for solubilizing phosphate from phosphate containing ore by treatment with microorganisms which comprises forming an aqueous mixture of phosphate ore, microorganisms operable for solubilizing phosphate from the phosphate ore and maintaining the aqueous mixture for a period of time and under conditions operable to effect the microbial solubilization process. An aqueous solution containing soluble phosphorus can be separated from the reacted mixture by precipitation, solvent extraction, selective membrane, exchange resin or gravity methods to recover phosphate from the aqueous solution.

  20. Measurement of plasma decay processes in mixture of sodium and argon by coherent microwave scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Zhili; Shneider, Mikhail N.

    2010-03-15

    This paper presents the experimental measurement and computational model of sodium plasma decay processes in mixture of sodium and argon by using radar resonance-enhanced multiphoton ionization (REMPI), coherent microwave Rayleigh scattering of REMPI. A single laser beam resonantly ionizes the sodium atoms by means of 2+1 REMPI process. The laser beam can only generate the ionization of the sodium atoms and have negligible ionization of argon. Coherent microwave scattering in situ measures the total electron number in the laser-induced plasma. Since the sodium ions decay by recombination with electrons, microwave scattering directly measures the plasma decay processes of the sodium ions. A theoretical plasma dynamic model, including REMPI of the sodium and electron avalanche ionization (EAI) of sodium and argon in the gas mixture, has been developed. It confirms that the EAI of argon is several orders of magnitude lower than the REMPI of sodium. The theoretical prediction made for the plasma decay process of sodium plasma in the mixture matches the experimental measurement.

  1. Improvement of In-Flight Alumina Spheroidization Process Using a Small Power Argon DC-RF Hybrid Plasma Flow System by Helium Mixture

    NASA Astrophysics Data System (ADS)

    Takana, Hidemasa; Jang, Juyong; Igawa, Junji; Nakajima, Tomoki; Solonenko, Oleg P.; Nishiyama, Hideya

    2011-03-01

    To further improve the in-flight alumina spheroidization process with a low-power direct-current radio-frequency (DC-RF) hybrid plasma flow system, the effects of adding a small amount of helium to the argon main gas and of increasing the DC nozzle diameter on the powder spheroidization ratio were experimentally clarified by correlating the helium mixture percentage, plasma enthalpy, and powder in-flight velocity and temperature. The alumina spheroidization ratio increases with the helium admixture as a result of the enhanced plasma enthalpy. The highest spheroidization ratio is obtained with 4% helium in the central gas and a nozzle diameter enlarged from 3 to 4 mm, even at the same low input electric power supplied to the DC-RF hybrid plasma flow system.

  2. Plutonium dissolution process

    DOEpatents

    Vest, M.A.; Fink, S.D.; Karraker, D.G.; Moore, E.N.; Holcomb, H.P.

    1994-01-01

    A two-step process for dissolving Pu metal is disclosed in which two steps can be carried out sequentially or simultaneously. Pu metal is exposed to a first mixture of 1.0-1.67 M sulfamic acid and 0.0025-0.1 M fluoride, the mixture having been heated to 45-70 C. The mixture will dissolve a first portion of the Pu metal but leave a portion of the Pu in an oxide residue. Then, a mineral acid and additional fluoride are added to dissolve the residue. Alternatively, nitric acid between 0.05 and 0.067 M is added to the first mixture to dissolve the residue as it is produced. Hydrogen released during the dissolution is diluted with nitrogen.

  3. Count distribution for mixture of two exponentials as renewal process duration with applications

    NASA Astrophysics Data System (ADS)

    Low, Yeh Ching; Ong, Seng Huat

    2016-06-01

    A count distribution is presented by considering a renewal process in which the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model over-dispersion, a feature often found in observed count data. The computation of the probabilities and of the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting over-dispersion. It is shown that the mixture-of-exponentials count distribution fits over-dispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
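    A short Monte Carlo sketch of the counting process described above: renewal durations are drawn from a two-component mixture of exponentials, and the dispersion index of the resulting counts exceeds 1, illustrating the over-dispersion relative to a Poisson process. The mixing weight and rates are illustrative, not estimates from the paper.

    import numpy as np

    def renewal_count(t_end, rng, w=0.3, rate_fast=5.0, rate_slow=0.8):
        """Number of renewals in (0, t_end] with mixture-of-exponentials durations."""
        t, n = 0.0, 0
        while True:
            rate = rate_fast if rng.random() < w else rate_slow
            t += rng.exponential(1.0 / rate)
            if t > t_end:
                return n
            n += 1

    rng = np.random.default_rng(0)
    counts = np.array([renewal_count(10.0, rng) for _ in range(5000)])
    print("mean:", counts.mean(), "variance:", counts.var())
    print("dispersion index (var/mean):", counts.var() / counts.mean())  # > 1 indicates over-dispersion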

  4. Process for producing wurtzitic or cubic boron nitride

    DOEpatents

    Holt, J.B.; Kingman, D.D.; Bianchini, G.M.

    1992-04-28

    Disclosed is a process for producing wurtzitic or cubic boron nitride comprising the steps of: [A] preparing an intimate mixture of powdered boron oxide, a powdered metal selected from the group consisting of magnesium or aluminum, and a powdered metal azide; [B] igniting the mixture and bringing it to a temperature at which self-sustaining combustion occurs; [C] shocking the mixture at the end of the combustion thereof with a high pressure wave, thereby forming as a reaction product, wurtzitic or cubic boron nitride and occluded metal oxide; and, optionally [D] removing the occluded metal oxide from the reaction product. Also disclosed are reaction products made by the process described.

  5. Process for producing wurtzitic or cubic boron nitride

    DOEpatents

    Holt, J. Birch; Kingman, deceased, Donald D.; Bianchini, Gregory M.

    1992-01-01

    Disclosed is a process for producing wurtzitic or cubic boron nitride comprising the steps of: [A] preparing an intimate mixture of powdered boron oxide, a powdered metal selected from the group consisting of magnesium or aluminum, and a powdered metal azide; [B] igniting the mixture and bringing it to a temperature at which self-sustaining combustion occurs; [C] shocking the mixture at the end of the combustion thereof with a high pressure wave, thereby forming as a reaction product, wurtzitic or cubic boron nitride and occluded metal oxide; and, optionally [D] removing the occluded metal oxide from the reaction product. Also disclosed are reaction products made by the process described.

  6. Microlayered flow structure around an acoustically levitated droplet under a phase-change process.

    PubMed

    Hasegawa, Koji; Abe, Yutaka; Goda, Atsushi

    2016-01-01

    The acoustic levitation method (ALM) has found extensive applications in the fields of materials science, analytical chemistry, and biomedicine. This paper describes an experimental investigation of a levitated droplet in a 19.4-kHz single-axis acoustic levitator. We used water, ethanol, water/ethanol mixture, and hexane as test samples to investigate the effect of saturated vapor pressure on the flow field and evaporation process using a high-speed camera. In the case of ethanol, water/ethanol mixtures with initial ethanol fractions of 50 and 70 wt%, and hexane droplets, microlayered toroidal vortexes are generated in the vicinity of the droplet interface. Experimental results indicate the presence of two stages in the evaporation process of ethanol and binary mixture droplets for ethanol content >10%. The internal and external flow fields of the acoustically levitated droplet of pure and binary mixtures are clearly observed. The binary mixture of the levitated droplet shows the interaction between the configurations of the internal and external flow fields of the droplet and the concentration of the volatile fluid. Our findings can contribute to the further development of existing theoretical prediction.

  7. Large eddy simulation of the low temperature ignition and combustion processes on spray flame with the linear eddy model

    NASA Astrophysics Data System (ADS)

    Wei, Haiqiao; Zhao, Wanhui; Zhou, Lei; Chen, Ceyuan; Shu, Gequn

    2018-03-01

    Large eddy simulation coupled with the linear eddy model (LEM) is employed for the simulation of n-heptane spray flames to investigate the low temperature ignition and combustion process in a constant-volume combustion vessel under diesel-engine relevant conditions. Parametric studies are performed to give a comprehensive understanding of the ignition processes. A non-reacting case is first carried out to validate the present model by comparing the predicted results with the experimental data from the Engine Combustion Network (ECN). Good agreement is observed in terms of liquid and vapour penetration length, as well as the mixture fraction distributions at different times and different axial locations. For the reacting cases, the flame index was introduced to distinguish between premixed and non-premixed combustion. A reaction region (RR) parameter is used to investigate the ignition and combustion characteristics, and to distinguish the different combustion stages. Results show that the two-stage combustion process can be identified in spray flames, and different ignition positions in the mixture fraction versus RR space are well described at low and high initial ambient temperatures. At an initial condition of 850 K, the first-stage ignition is initiated in the fuel-lean region, followed by the reactions in fuel-rich regions. High-temperature reactions then occur mainly where the mixture fraction is close to stoichiometric. At an initial temperature of 1000 K, the first-stage ignition occurs in the fuel-rich region first and then moves towards even richer regions. Afterwards, the high-temperature reactions move back to the stoichiometric mixture fraction region. For all of the initial temperatures considered, high-temperature ignition kernels are initiated in regions richer than the stoichiometric mixture fraction. By increasing the initial ambient temperature, the high-temperature ignition kernels move towards richer mixture regions. After the spray flame becomes quasi-steady, most heat is released in the stoichiometric mixture fraction regions. In addition, combustion mode analysis based on key intermediate species illustrates three-mode combustion processes in diesel spray flames.

  8. 75 FR 36306 - Chemical Mixtures Containing Listed Forms of Phosphorus and Change in Application Process

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-25

    ... 1117-AA66 Chemical Mixtures Containing Listed Forms of Phosphorus and Change in Application Process... phosphorus, white phosphorus (also known as yellow phosphorus), or hypophosphorous acid and its salts (hereinafter ``regulated phosphorus'') that shall automatically qualify for exemption from the Controlled...

  9. Cermet crucible for metallurgical processing

    DOEpatents

    Boring, Christopher P.

    1995-01-01

    A cermet crucible for metallurgically processing metals having high melting points comprising a body consisting essentially of a mixture of calcium oxide and erbium metal, the mixture comprising calcium oxide in a range between about 50 and 90% by weight and erbium metal in a range between about 10 and 50% by weight.

  10. Phylogenetic mixtures and linear invariants for equal input models.

    PubMed

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
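
    As a rough illustration of the 'equal input' idea (under an assumed rate normalization, not necessarily the paper's), the sketch below builds a transition matrix in which the probability of substituting to state j depends only on the stationary frequency pi_j, and checks that pi is stationary.

```python
import numpy as np

pi = np.array([0.1, 0.2, 0.3, 0.4])   # stationary state frequencies (assumed)

def equal_input_transition(t, pi):
    """P(t) in which the probability of ending in state j depends only on pi_j."""
    n = len(pi)
    return np.exp(-t) * np.eye(n) + (1.0 - np.exp(-t)) * np.outer(np.ones(n), pi)

P = equal_input_transition(0.5, pi)
assert np.allclose(P.sum(axis=1), 1.0)   # rows are probability distributions
assert np.allclose(pi @ P, pi)           # pi is stationary under P(t)
```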

  11. Weaker Ligands Can Dominate an Odor Blend due to Syntopic Interactions

    PubMed Central

    2013-01-01

    Most odors in natural environments are mixtures of several compounds. Perceptually, these can blend into a new “perfume,” or some components may dominate as elements of the mixture. In order to understand such mixture interactions, it is necessary to study the events at the olfactory periphery, down to the level of single-odorant receptor cells. Does a strong ligand present at a low concentration outweigh the effect of weak ligands present at high concentrations? We used the fruit fly receptor dOr22a and a banana-like odor mixture as a model system. We show that an intermediate ligand at an intermediate concentration alone elicits the neuron’s blend response, despite the presence of both weaker ligands at higher concentration, and of better ligands at lower concentration in the mixture. Because all of these components, when given alone, elicited significant responses, this reveals specific mixture processing already at the periphery. By measuring complete dose–response curves we show that these mixture effects can be fully explained by a model of syntopic interaction at a single-receptor binding site. Our data have important implications for how odor mixtures are processed in general, and what preprocessing occurs before the information reaches the brain. PMID:23315042

  12. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Fletcher, James C. (Inventor); Pratt, J. Richard (Inventor); St.clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)

    1992-01-01

    A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.

  13. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Pratt, J. Richard (Inventor); St.clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)

    1993-01-01

    A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of the additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.

  14. Surface modification of active material structures in battery electrodes

    DOEpatents

    Erickson, Michael; Tikhonov, Konstantin

    2016-02-02

    Provided herein are methods of processing electrode active material structures for use in electrochemical cells or, more specifically, methods of forming surface layers on these structures. The structures are combined with a liquid to form a mixture. The mixture includes a surface reagent that chemically reacts and forms a surface layer covalently bound to the structures. The surface reagent may be a part of the initial liquid or added to the mixture after the liquid is combined with the structures. In some embodiments, the mixture may be processed to form a powder containing the structures with the surface layer thereon. Alternatively, the mixture may be deposited onto a current collecting substrate and dried to form an electrode layer. Furthermore, the liquid may be an electrolyte containing the surface reagent and a salt. The liquid soaks the previously arranged electrodes in order to contact the structures with the surface reagent.

  15. Application of wavelet and Fourier transforms as powerful alternatives for derivative spectrophotometry in analysis of binary mixtures: A comparative study

    NASA Astrophysics Data System (ADS)

    Hassan, Said A.; Abdel-Gawad, Sherif A.

    2018-02-01

    Two signal processing methods, namely Continuous Wavelet Transform (CWT) and Discrete Fourier Transform (DFT), were introduced as alternatives to classical Derivative Spectrophotometry (DS) in the analysis of binary mixtures. To show the advantages of these methods, a comparative study was performed on a binary mixture of Naltrexone (NTX) and Bupropion (BUP). The methods were compared by analyzing laboratory-prepared mixtures of the two drugs. By comparing the performance of the three methods, it was shown that the CWT and DFT methods are more efficient and advantageous than DS in the analysis of mixtures with overlapped spectra. The three signal processing methods were adopted for the quantification of NTX and BUP in pure and tablet forms. The adopted methods were validated according to the ICH guidelines, where accuracy, precision and specificity were found to be within appropriate limits.
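
    For a sense of what the compared representations look like, the sketch below computes a second-derivative signal, Fourier coefficients, and wavelet coefficients for a synthetic overlapped two-band spectrum; band positions, widths, and scales are assumptions, and the published calibration procedure is not reproduced.

```python
import numpy as np
import pywt

wl = np.linspace(200, 400, 512)                                  # wavelength axis, nm (assumed)
band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
mix = 0.8 * band(262, 10) + 0.5 * band(275, 14)                  # two overlapped bands (assumed)

second_derivative = np.gradient(np.gradient(mix, wl), wl)        # classical DS signal
fourier_coeffs = np.fft.rfft(mix)                                # DFT representation
wavelet_coeffs, _ = pywt.cwt(mix, np.arange(1, 33), "mexh")      # CWT representation

print(second_derivative.shape, fourier_coeffs.shape, wavelet_coeffs.shape)
```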

  16. Synthesis of fine-grained .alpha.-silicon nitride by a combustion process

    DOEpatents

    Holt, J. Birch; Kingman, Donald D.; Bianchini, Gregory M.

    1990-01-01

    A combustion synthesis process for the preparation of .alpha.-silicon nitride and composites thereof is disclosed. Preparation of the .alpha.-silicon nitride comprises the steps of dry mixing silicon powder with an alkali metal azide, such as sodium azide, cold-pressing the mixture into any desired shape, or loading the mixture into a fused, quartz crucible, loading the crucible into a combustion chamber, pressurizing the chamber with nitrogen and igniting the mixture using an igniter pellet. The method for the preparation of the composites comprises dry mixing silicon powder (Si) or SiO.sub.2, with a metal or metal oxide, adding a small amount of an alkali metal azide such as sodium azide, introducing the mixture into a suitable combustion chamber, pressurizing the combustion chamber with nitrogen, igniting the mixture within the combustion chamber, and isolating the .alpha.-silicon nitride formed as a reaction product.

  17. Correlating the cold flow and melting properties of fatty acid methyl ester (FAME) mixtures

    USDA-ARS?s Scientific Manuscript database

    Fatty acid methyl ester (FAME) mixtures derived from plant oils or animal fats are used to make biodiesel, lubricants, surfactants, plasticizers, ink solvents, paint strippers and other products. Processing requires a precise knowledge of the physico-chemical properties of mixtures with diverse and ...

  18. Dynamics and associations of microbial community types across the human body

    PubMed Central

    Ding, Tao; Schloss, Patrick D.

    2014-01-01

    A primary goal of the Human Microbiome Project (HMP) was to provide a reference collection of 16S rRNA gene sequences collected from sites across the human body that would allow microbiologists to better associate changes in the microbiome with changes in health 1. The HMP Consortium has reported the structure and function of the human microbiome in 300 healthy adults at 18 body sites from a single time point 2,3. Using additional data collected over the course of 12–18 months, we used Dirichlet multinomial mixture models 4 to partition the data into community types for each body site and made three important observations. First, whether individuals had been breastfed as infants, their gender, and their level of education were strongly associated with their community types at several body sites. Second, although the specific taxonomic compositions of the oral and gut microbiomes were different, the community types observed at these sites were predictive of each other. Finally, over the course of the sampling period, the community types from sites within the oral cavity were the least stable, while those in the vagina and gut were the most stable. Our results demonstrate that even with the considerable intra- and inter-personal variation in the human microbiome, this variation can be partitioned into community types that are predictive of each other and are likely the result of life history characteristics. Understanding the diversity of community types, and the mechanisms that result in an individual having a particular type or changing types, will allow us to use community types to assess disease risk and to personalize therapies. PMID:24739969
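
    A toy, hard-assignment version of Dirichlet multinomial mixture (DMM) community typing is sketched below; the published approach fits the mixture properly and selects the number of types, whereas this simplification only illustrates how count vectors are scored and grouped. The count table, number of types, and precision are assumptions.

```python
import numpy as np
from scipy.special import gammaln

def dmn_logpmf(x, alpha):
    """Log Dirichlet-multinomial pmf of count vector x (the multinomial
    coefficient is omitted because it is constant across components)."""
    A, N = alpha.sum(), x.sum()
    return (gammaln(A) - gammaln(N + A)
            + np.sum(gammaln(x + alpha) - gammaln(alpha)))

def fit_dmm(X, K, precision=50.0, n_iter=25, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.integers(K, size=len(X))                      # random initial types
    for _ in range(n_iter):
        # M-step: per-type Dirichlet parameters from mean relative abundances
        # (the +1 terms are pseudo-counts that also guard against empty types).
        props = np.array([(X[z == k] + 1).sum(axis=0) + 1.0 for k in range(K)], dtype=float)
        alphas = precision * props / props.sum(axis=1, keepdims=True)
        # E-step: reassign each sample to its best-scoring community type
        z = np.array([np.argmax([dmn_logpmf(x, a) for a in alphas]) for x in X])
    return z, alphas

X = np.random.default_rng(1).poisson(5, size=(60, 20))    # toy OTU count table (assumed)
types, alphas = fit_dmm(X, K=3)
```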

  19. Clusternomics: Integrative context-dependent clustering for heterogeneous datasets

    PubMed Central

    Wernisch, Lorenz

    2017-01-01

    Integrative clustering is used to identify groups of samples by jointly analysing multiple datasets describing the same set of biological samples, such as gene expression, copy number, methylation etc. Most existing algorithms for integrative clustering assume that there is a shared consistent set of clusters across all datasets, and most of the data samples follow this structure. However in practice, the structure across heterogeneous datasets can be more varied, with clusters being joined in some datasets and separated in others. In this paper, we present a probabilistic clustering method to identify groups across datasets that do not share the same cluster structure. The proposed algorithm, Clusternomics, identifies groups of samples that share their global behaviour across heterogeneous datasets. The algorithm models clusters on the level of individual datasets, while also extracting global structure that arises from the local cluster assignments. Clusters on both the local and the global level are modelled using a hierarchical Dirichlet mixture model to identify structure on both levels. We evaluated the model both on simulated and on real-world datasets. The simulated data exemplifies datasets with varying degrees of common structure. In such a setting Clusternomics outperforms existing algorithms for integrative and consensus clustering. In a real-world application, we used the algorithm for cancer subtyping, identifying subtypes of cancer from heterogeneous datasets. We applied the algorithm to TCGA breast cancer dataset, integrating gene expression, miRNA expression, DNA methylation and proteomics. The algorithm extracted clinically meaningful clusters with significantly different survival probabilities. We also evaluated the algorithm on lung and kidney cancer TCGA datasets with high dimensionality, again showing clinically significant results and scalability of the algorithm. PMID:29036190

  20. Clusternomics: Integrative context-dependent clustering for heterogeneous datasets.

    PubMed

    Gabasova, Evelina; Reid, John; Wernisch, Lorenz

    2017-10-01

    Integrative clustering is used to identify groups of samples by jointly analysing multiple datasets describing the same set of biological samples, such as gene expression, copy number, methylation etc. Most existing algorithms for integrative clustering assume that there is a shared consistent set of clusters across all datasets, and most of the data samples follow this structure. However in practice, the structure across heterogeneous datasets can be more varied, with clusters being joined in some datasets and separated in others. In this paper, we present a probabilistic clustering method to identify groups across datasets that do not share the same cluster structure. The proposed algorithm, Clusternomics, identifies groups of samples that share their global behaviour across heterogeneous datasets. The algorithm models clusters on the level of individual datasets, while also extracting global structure that arises from the local cluster assignments. Clusters on both the local and the global level are modelled using a hierarchical Dirichlet mixture model to identify structure on both levels. We evaluated the model both on simulated and on real-world datasets. The simulated data exemplifies datasets with varying degrees of common structure. In such a setting Clusternomics outperforms existing algorithms for integrative and consensus clustering. In a real-world application, we used the algorithm for cancer subtyping, identifying subtypes of cancer from heterogeneous datasets. We applied the algorithm to TCGA breast cancer dataset, integrating gene expression, miRNA expression, DNA methylation and proteomics. The algorithm extracted clinically meaningful clusters with significantly different survival probabilities. We also evaluated the algorithm on lung and kidney cancer TCGA datasets with high dimensionality, again showing clinically significant results and scalability of the algorithm.
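
    The context-dependent idea can be illustrated crudely without the hierarchical Dirichlet machinery: cluster each dataset on its own and let global groups be the observed combinations of local labels, so the views need not share one consistent structure. The sketch below uses KMeans as a stand-in for the model-based fit; all sizes and cluster counts are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 120
expression = rng.normal(size=(n, 50))          # toy "gene expression" view (assumed)
methylation = rng.normal(size=(n, 30))         # toy "methylation" view (assumed)

local_expr = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(expression)
local_meth = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(methylation)

# Global clusters: each distinct pair of local labels is its own group, so the
# two views do not have to agree on a single shared cluster structure.
pairs = list(zip(local_expr, local_meth))
global_ids = {p: i for i, p in enumerate(sorted(set(pairs)))}
global_labels = np.array([global_ids[p] for p in pairs])
print(f"{len(global_ids)} global clusters from 3 x 2 local clusters")
```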

  1. Oral microbial community typing of caries and pigment in primary dentition.

    PubMed

    Li, Yanhui; Zou, Cheng-Gang; Fu, Yu; Li, Yanhong; Zhou, Qing; Liu, Bo; Zhang, Zhigang; Liu, Juan

    2016-08-05

    Black extrinsic discoloration in primary dentition is a common clinical and aesthetic problem that can co-occur with dental caries, the most common oral disease in childhood. Although the role of bacteria in the formation of pigment and caries in primary dentition is important, their basic features remain largely unknown. Using targeted sequencing of the V1-V3 hypervariable regions of bacterial 16S ribosomal RNA (rRNA) genes, we obtained a dataset consisting of 831,381 sequences from 111 saliva samples and 110 supragingival plaque samples from 40 patients with pigment (black extrinsic stain), 20 with caries (obvious decay), and 25 with both pigment and caries, and from 26 healthy individuals. We applied a Dirichlet multinomial mixture (DMM)-based community typing approach to investigate oral microbial community types. Our results revealed significant structural segregation of microbial communities, as indicated by the identification of two plaque community types (A and B) and three saliva community types (C-E). We found that the independent occurrence of the two plaque community types, A and B, was potentially associated with our oral diseases of interest. For type A, three co-occurring bacterial genus pairs could separately play a potential role in the formation of pigment (Leptotrichia and Fusobacterium), caries (unclassified Gemellales and Granulicatella), and mixed caries and pigment (Streptococcus and Mogibacterium). For type B, three co-occurring bacterial genera (unclassified Clostridiaceae, Peptostreptococcus, and Clostridium) were related to mixed pigment and caries. Three dominant bacterial genera (Selenomonas, Gemella, and Streptobacillus) were linked to the presence of caries. Our study demonstrates that plaque-associated oral microbial communities could contribute substantially to the formation of pigment and caries in primary dentition and suggests potential clinical applications of monitoring the oral microbiota as an indicator for disease diagnosis and prognosis.

  2. Numerical model for the evaluation of Earthquake effects on a magmatic system.

    NASA Astrophysics Data System (ADS)

    Garg, Deepak; Longo, Antonella; Papale, Paolo

    2016-04-01

    A finite element numerical model is presented to compute the effect of an earthquake on the dynamics of magma in reservoirs with deformable walls. The magmatic system is hit by a Mw 7.2 earthquake (Petrolia/Cape Mendocino, 1992) with hypocenter at 15 km diagonal distance. At subsequent times the seismic wave reaches the nearest side of the magmatic system boundary, travels through the magmatic fluid and arrives at the other side of the boundary. The modelled physical system consists of the magmatic reservoir with a thin surrounding layer of rocks. Magma is considered as a homogeneous multicomponent multiphase Newtonian mixture with exsolution and dissolution of volatiles (H2O+CO2). The magmatic reservoir is made of a small shallow magma chamber filled with degassed phonolite, connected by a vertical dike to a larger deeper chamber filled with gas-rich shoshonite, in a condition of gravitational instability. The coupling between the earthquake and the magmatic system is computed by solving the elastostatic equation for the deformation of the magmatic reservoir walls, along with the conservation equations of mass of components and momentum of the magmatic mixture. The characteristic elastic parameters of rocks are assigned to the computational domain at the boundary of the magmatic system. Physically consistent Dirichlet and Neumann boundary conditions are assigned according to the evolution of the seismic signal. Seismic forced displacements and velocities are set on the part of the boundary that is hit by the wave. On the other part of the boundary, motion is governed by the action of fluid pressure and deviatoric stress forces due to fluid dynamics. The constitutive equations for the magma are solved in a monolithic way by a space-time finite element method that is discontinuous in time. To attain additional stability, least-squares and discontinuity-capturing operators are included in the formulation. A partitioned algorithm is used to couple the magma and the thin layer of rocks. The magmatic system is highly disturbed during the maximum amplitude of the seismic wave, showing random to oscillatory velocity and pressure, after which it follows the natural dynamic state of gravitational destabilization. The seismic disturbance markedly triggers the propagation of pressure waves at the magma sound speed, reflecting from bottom to top, left and right of the magmatic system. A signal analysis of the frequency energy content is reported.

  3. Process for oxidation of hydrogen halides to elemental halogens

    DOEpatents

    Lyke, Stephen E.

    1992-01-01

    An improved process for generating an elemental halogen selected from chlorine, bromine or iodine from a corresponding hydrogen halide by absorbing a molten salt mixture, which includes sulfur, alkali metals and oxygen with a sulfur-to-metal molar ratio between 0.9 and 1.1 and includes a dissolved oxygen compound capable of reacting with hydrogen halide to produce elemental halogen, into a porous, relatively inert substrate to produce a substrate-supported salt mixture. Thereafter, the substrate-supported salt mixture is contacted (stage 1) with a hydrogen halide while maintaining the substrate-supported salt mixture during the contacting at an elevated temperature sufficient to sustain a reaction between the oxygen compound and the hydrogen halide to produce a gaseous elemental halogen product. This is followed by purging the substrate-supported salt mixture with steam (stage 2), thereby recovering any unreacted hydrogen halide and additional elemental halogen for recycle to stage 1. The dissolved oxygen compound is regenerated in a high-temperature stage (stage 3) and an optional intermediate-temperature stage (stage 4) by contacting the substrate-supported salt mixture with a gas containing oxygen, whereby the dissolved oxygen compound in the substrate-supported salt mixture is regenerated by being oxidized to a higher valence state.

  4. An efficient algorithm for accurate computation of the Dirichlet-multinomial log-likelihood function.

    PubMed

    Yu, Peng; Shaw, Chad A

    2014-06-01

    The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics, including applications to metagenomics data, transcriptomics and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of [Formula: see text]. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood to solve the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method improves both the accuracy of log-likelihood evaluation and the runtime by several orders of magnitude, especially in high-count data situations that are common in deep sequencing data. Using real metagenomic data, our method achieves manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
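
    For reference, one standard way to evaluate the DMN log-likelihood through log-gamma functions in the parametrization alpha_k = p_k/psi is sketched below; in this toy case it approaches the multinomial value as psi goes to 0, but it is not the paper's algorithm, and the data are assumed.

```python
import numpy as np
from scipy.special import gammaln

def dmn_loglik(x, p, psi):
    """log P(x | p, psi) with alpha = p / psi, including the multinomial coefficient."""
    x = np.asarray(x)
    alpha = np.asarray(p) / psi
    A, N = alpha.sum(), x.sum()
    return (gammaln(N + 1) - gammaln(x + 1).sum()
            + gammaln(A) - gammaln(N + A)
            + np.sum(gammaln(x + alpha) - gammaln(alpha)))

x = np.array([7, 2, 1])
p = np.array([0.6, 0.3, 0.1])
# Multinomial log-likelihood for comparison (the psi -> 0 limit).
ll_multinomial = gammaln(x.sum() + 1) - gammaln(x + 1).sum() + (x * np.log(p)).sum()
print(dmn_loglik(x, p, psi=0.2))                    # overdispersed case
print(dmn_loglik(x, p, psi=1e-6), ll_multinomial)   # values nearly coincide
```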

  5. Process for making ultra-fine ceramic particles

    DOEpatents

    Stangle, Gregory C.; Venkatachari, Koththavasal R.; Ostrander, Steven P.; Schulze, Walter A.

    1995-01-01

    A process for producing ultra-fine ceramic particles in which droplets are formed from a ceramic precursor mixture containing a metal cation, a nitrogen-containing fuel, a solvent, and an anion capable of participating in an anionic oxidation-reduction reaction with the nitrogen containing fuel. The nitrogen-containing fuel contains at least three nitrogen atoms, at least one oxygen atom, and at least one carbon atom. The ceramic precursor mixture is dried to remove at least 85 weight percent of the solvent, and the dried mixture is then ignited to form a combusted powder.

  6. Treatment of mercury containing waste

    DOEpatents

    Kalb, Paul D.; Melamed, Dan; Patel, Bhavesh R; Fuhrmann, Mark

    2002-01-01

    A process is provided for the treatment of mercury containing waste in a single reaction vessel which includes a) stabilizing the waste with sulfur polymer cement under an inert atmosphere to form a resulting mixture and b) encapsulating the resulting mixture by heating the mixture to form a molten product and casting the molten product as a monolithic final waste form. Additional sulfur polymer cement can be added in the encapsulation step if needed, and a stabilizing additive can be added in the process to improve the leaching properties of the waste form.

  7. Cluster kinetics model for mixtures of glassformers

    NASA Astrophysics Data System (ADS)

    Brenskelle, Lisa A.; McCoy, Benjamin J.

    2007-10-01

    For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.

  8. Synthesis of highly phase pure BSCCO superconductors

    DOEpatents

    Dorris, S.E.; Poeppel, R.B.; Prorok, B.C.; Lanagan, M.T.; Maroni, V.A.

    1995-11-21

    An article and method of manufacture of a (Bi, Pb)-Sr-Ca-Cu-O superconductor are disclosed. The superconductor is manufactured by preparing a first powdered mixture of bismuth oxide, lead oxide, strontium carbonate, calcium carbonate and copper oxide. A second powdered mixture is then prepared of strontium carbonate, calcium carbonate and copper oxide. The mixtures are calcined separately with the two mixtures then combined. The resulting combined mixture is then subjected to a powder in tube deformation and thermal processing to produce a substantially phase pure (Bi, Pb)-Sr-Ca-Cu-O superconductor. 5 figs.

  9. Synthesis of highly phase pure BSCCO superconductors

    DOEpatents

    Dorris, Stephen E.; Poeppel, Roger B.; Prorok, Barton C.; Lanagan, Michael T.; Maroni, Victor A.

    1995-01-01

    An article and method of manufacture of (Bi, Pb)-Sr-Ca-Cu-O superconductor. The superconductor is manufactured by preparing a first powdered mixture of bismuth oxide, lead oxide, strontium carbonate, calcium carbonate and copper oxide. A second powdered mixture is then prepared of strontium carbonate, calcium carbonate and copper oxide. The mixtures are calcined separately with the two mixtures then combined. The resulting combined mixture is then subjected to a powder in tube deformation and thermal processing to produce a substantially phase pure (Bi, Pb)-Sr-Ca-Cu-O superconductor.

  10. Synthesis of highly phase pure (Bi, Pb)-Sr-Ca-Cu-O superconductor

    DOEpatents

    Dorris, Stephen E.; Poeppel, Roger B.; Prorok, Barton C.; Lanagan, Michael T.; Maroni, Victor A.

    1994-01-01

    An article and method of manufacture of (Bi,Pb)-Sr-Ca-Cu-O superconductor. The superconductor is manufactured by preparing a first powdered mixture of bismuth oxide, lead oxide, strontium carbonate, calcium carbonate and copper oxide. A second powdered mixture is then prepared of strontium carbonate, calcium carbonate and copper oxide. The mixtures are calcined separately with the two mixtures then combined. The resulting combined mixture is then subjected to a powder in tube deformation and thermal processing to produce a substantially phase pure (Bi,Pb)-Sr-Ca-Cu-O superconductor.

  11. Near azeotropic mixture substitute for dichlorodifluoromethane

    NASA Technical Reports Server (NTRS)

    Jones, Jack A. (Inventor)

    1998-01-01

    A refrigerant and a process of formulating thereof that consists of a mixture of a first mole fraction of CH.sub.2 FCF.sub.3 and a second mole fraction of a component selected from the group consisting of a mixture of CHClFCF.sub.3 and CH.sub.3 CClF.sub.2 ; a mixture of CHF.sub.2 CH.sub.3 and CH.sub.3 CClF.sub.2 ; and a mixture of CHClFCF.sub.3, CH.sub.3 CClF.sub.2 and CHF.sub.2 CH.sub.3.

  12. DIRECT INGOT PROCESS FOR PRODUCING URANIUM

    DOEpatents

    Leaders, W.M.; Knecht, W.S.

    1960-11-15

    A process is given in which uranium tetrafluoride is reduced to the metal with magnesium and in the same step the uranium metal formed is cast into an ingot. For this purpose a mold is arranged under and connected with the reaction bomb, and both are filled with the reaction mixture. The entire mixture is first heated to just below reaction temperature, and thereafter heating is restricted to the mixture in the mold. The reaction starts in the mold whereby heat is released which brings the rest of the mixture to reaction temperature. Pure uranium metal settles in the mold while the magnesium fluoride slag floats on top of it. After cooling, the uranium is separated from the slag by mechanical means.

  13. Process feasibility study in support of silicon material, task 1

    NASA Technical Reports Server (NTRS)

    Li, K. Y.; Hansen, K. C.; Yaws, C. L.

    1979-01-01

    Analyses of process system properties were continued for materials involved in the alternate processes under consideration for semiconductor silicon. Primary efforts centered on physical and thermodynamic property data for dichlorosilane. The following property data are reported for dichlorosilane which is involved in processing operations for solar cell grade silicon: critical temperature, critical pressure, critical volume, critical density, acentric factor, vapor pressure, heat of vaporization, gas heat capacity, liquid heat capacity and density. Work was initiated on the assembly of a system to prepare binary gas mixtures of known proportions and to measure the thermal conductivity of these mixtures between 30 and 350 °C. The binary gas mixtures include silicon source material such as silanes and halogenated silanes which are used in the production of semiconductor silicon.

  14. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
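
    A small sketch of the comparison for the gamma-mixture case (negative binomial) versus a plain Poisson fit is given below, using moment estimates on simulated overdispersed counts; the inverse Gaussian mixture discussed in the paper would need a custom pmf and is not reproduced, and all data values are assumptions.

```python
import numpy as np
from scipy.stats import poisson, nbinom

rng = np.random.default_rng(0)
counts = rng.negative_binomial(n=2.0, p=0.25, size=500)   # toy overdispersed counts (assumed)

m, v = counts.mean(), counts.var(ddof=1)
ll_pois = poisson.logpmf(counts, m).sum()

# Method-of-moments negative binomial: var = m + m^2/n  =>  n = m^2 / (v - m)
n_hat = m**2 / (v - m)
p_hat = n_hat / (n_hat + m)
ll_nb = nbinom.logpmf(counts, n_hat, p_hat).sum()

print(f"var/mean = {v/m:.2f}  logL(Poisson) = {ll_pois:.1f}  logL(NB) = {ll_nb:.1f}")
```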

  15. Intermetallic nanoparticles

    DOEpatents

    Singh, Dileep; Yusufoglu, Yusuf; Timofeeva, Elena; Routbort, Jules

    2015-07-14

    A process for preparing intermetallic nanoparticles of two or more metals is provided. In particular, the process includes the steps: a) dispersing nanoparticles of a first metal in a solvent to prepare a first metal solution, b) forming a reaction mixture with the first metal solution and a reducing agent, c) heating the reaction mixture to a reaction temperature; and d) adding a second metal solution containing a salt of a second metal to the reaction mixture. During this process, intermetallic nanoparticles, which contain a compound with the first and second metals, are formed. Intermetallic nanoparticles with uniform size and a narrow size distribution are also provided. An electrochemical device such as a battery with the intermetallic nanoparticles is also provided.

  16. Intermetallic nanoparticles

    DOEpatents

    Singh, Dileep; Yusufoglu, Yusuf; Timofeeva, Elena; Routbort, Jules L.

    2015-11-20

    A process for preparing intermetallic nanoparticles of two or more metals is provided. In particular, the process includes the steps: a) dispersing nanoparticles of a first metal in a solvent to prepare a first metal solution, b) forming a reaction mixture with the first metal solution and a reducing agent, c) heating the reaction mixture to a reaction temperature; and d) adding a second metal solution containing a salt of a second metal to the reaction mixture. During this process, intermetallic nanoparticles, which contain a compound with the first and second metals, are formed. Intermetallic nanoparticles with uniform size and a narrow size distribution are also provided. An electrochemical device such as a battery with the intermetallic nanoparticles is also provided.

  17. Intermetallic nanoparticles

    DOEpatents

    Singh, Dileep; Yusufoglu, Yusuf; Timofeeva, Elena; Routbort, Jules L.

    2017-01-03

    A process for preparing intermetallic nanoparticles of two or more metals is provided. In particular, the process includes the steps: a) dispersing nanoparticles of a first metal in a solvent to prepare a first metal solution, b) forming a reaction mixture with the first metal solution and a reducing agent, c) heating the reaction mixture to a reaction temperature; and d) adding a second metal solution containing a salt of a second metal to the reaction mixture. During this process, intermetallic nanoparticles, which contain a compound with the first and second metals, are formed. Intermetallic nanoparticles with uniform size and a narrow size distribution are also provided. An electrochemical device such as a battery with the intermetallic nanoparticles is also provided.

  18. Chemistry in Titan

    NASA Astrophysics Data System (ADS)

    Plessis, S.; Carrasco, N.; Pernot, P.

    2009-04-01

    Modelling the chemical composition of Titan's ionosphere is a very challenging issue. Recent works perform either inversion of CASSINI's INMS mass spectra (neutral[1] or ion[2]) or design coupled ion-neutral chemistry models[3]. Coupling ionic and neutral chemistry has been reported to be an essential feature of accurate modelling[3]. Electron Dissociative Recombination (EDR), where free electrons recombine with positive ions to produce neutral species, is a key component of ion-neutral coupling. There is a major difficulty in EDR modelling: for heavy ions, the distribution of neutral products is incompletely characterized by experiments. For instance, for some hydrocarbon ions only the carbon repartition is measured, leaving the hydrogen repartition, and thus the exact neutral species identity, unknown[4]. This precludes reliable deterministic modelling of this process and of ion-neutral coupling. We propose a novel stochastic description of the EDR chemical reactions which enables efficient representation and simulation of the partial experimental knowledge. The description of product distributions in multi-pathway reactions is based on branching ratios, which must sum to unity. The keystone of our approach is the design of a probability density function accounting for all available information and physical constraints. This is done by Dirichlet modelling, which enables one to sample random variables whose sum is constant[5]. The specifics of EDR partial uncertainty call for a hierarchical Dirichlet representation, which generalizes our previous work[5]. We present results on the importance of ion-neutral coupling based on our stochastic model. Scheme (1) for the dissociative recombination of C4H9+ with electrons summarizes the measured carbon repartition (C4, C3 + C, and C2 + C2 channels) together with example hydrogen repartitions that remain unknown (e.g. C4H2 + 3H2 + H or C4H2 + 7H for the C4 channel; C3H8 + CH or C3H3 + CH2 + 2H2 for the C3 + C channel; C2H6 + C2H2 + H or 2C2H2 + 2H2 + H for the C2 + C2 channel). References: [1] J. Cui, R.V. Yelle, V. Vuitton, J.H. Waite Jr., W.T. Kasprzak, D.A. Gell, H.B. Niemann, I.C.F. Müller-Wodarg, N. Borggren, G.G. Fletcher, E.L. Patrick, E. Raaen, and B.A. Magee. Analysis of Titan's neutral upper atmosphere from Cassini ion neutral mass spectrometer measurements. Icarus, in press, 2008. [2] V. Vuitton, R.V. Yelle, and M.J. McEwan. Ion chemistry and N-containing molecules in Titan's upper atmosphere. Icarus, 191:722-742, 2007. [3] V. De La Haye, J.H. Waite Jr., T.E. Cravens, I.P. Robertson, and S. Lebonnois. Coupled ion and neutral rotating model of Titan's upper atmosphere. Icarus, 197(1):110-136, 2008. [4] J.B.A. Mitchell, C. Rebrion-Rowe, J.L. Le Garrec, G. Angelova, H. Bluhme, K. Seiersen, and L.H. Andersen. Branching ratios for the dissociative recombination of hydrocarbon ions. I: The cases of C4H9+ and C4H5+. International Journal of Mass Spectrometry, 227(2):273-279, June 2003. [5] N. Carrasco and P. Pernot. Modeling of branching ratio uncertainty in chemical networks by Dirichlet distributions. Journal of Physical Chemistry A, 11(18):3507-3512, 2007.
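
    The hierarchical Dirichlet sampling idea can be sketched as follows: the measured carbon-repartition branching ratios constrain a top-level Dirichlet, and the unknown hydrogen repartition within each carbon channel is drawn from a flat Dirichlet. All numerical values in the sketch are illustrative assumptions, not the measured data.

```python
import numpy as np

rng = np.random.default_rng(0)

carbon_fractions = np.array([0.2, 0.5, 0.3])  # C4 / C3+C / C2+C2 channel fractions (assumed)
concentration = 200.0                         # tight top-level prior (assumed)
n_h_paths = [2, 2, 2]                         # candidate H-repartitions per channel (assumed)

def sample_branching_ratios(rng):
    top = rng.dirichlet(concentration * carbon_fractions)    # carbon repartition
    within = [top[i] * rng.dirichlet(np.ones(k))              # flat split over unknown H paths
              for i, k in enumerate(n_h_paths)]
    return np.concatenate(within)                             # sums to 1 by construction

samples = np.array([sample_branching_ratios(rng) for _ in range(10_000)])
print(samples.mean(axis=0), float(samples.sum(axis=1).max()))
```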

  19. An approach of characterizing the degree of spatial color mixture

    NASA Astrophysics Data System (ADS)

    Chu, Miao; Tian, Shaohui

    2017-07-01

    The digital camouflage technique arranges mosaics of different colors according to certain rules; compared with traditional camouflage, it performs better against reconnaissance at different distances. The better performance of digital camouflage is mainly attributed to spatial color mixture, which is also a key factor in improving digital camouflage design. However, research on spatial color mixture is relatively scarce and cannot provide adequate support for digital camouflage design. Therefore, based on the process of spatial color mixture, this paper proposes an effective parameter, the spatial-color-mixture ratio, to characterize the degree of spatial color mixture. The experimental results show that the spatial-color-mixture ratio is feasible and effective in practice, and could provide a new direction for further research on digital camouflage.

  20. Cermet crucible for metallurgical processing

    DOEpatents

    Boring, C.P.

    1995-02-14

    A cermet crucible is disclosed for metallurgically processing metals having high melting points comprising a body consisting essentially of a mixture of calcium oxide and erbium metal, the mixture comprising calcium oxide in a range between about 50 and 90% by weight and erbium metal in a range between about 10 and 50% by weight.

  1. Development of mesoporous structures of composite silica particles with various organic functional groups in the presence and absence of ammonia catalyst

    NASA Astrophysics Data System (ADS)

    Park, Tae Jae; Jung, Gyu Il; Kim, Euk Hyun; Koo, Sang Man

    2017-06-01

    Development of mesoporous structures of composite silica particles with various organic functional groups was investigated by using a two-step process, consisting of a one-pot sol-gel process in the presence or absence of ammonium hydroxide and a selective dissolution process with an ethanol-water mixture. Five different organosilanes, including methyltrimethoxysilane (MTMS), 3-mercaptopropyltrimethoxysilane (MPTMS), phenyltrimethoxysilane (PTMS), vinyltrimethoxysilane (VTMS), and 3-aminopropyltrimethoxysilane (APTMS), were employed. The mesoporous ORMOSIL (organically modified silica) particles were obtained even in the absence of ammonium hydroxide when the reaction mixture contained APTMS. The morphology of the particles, however, was different from that of particles prepared with the ammonia catalyst and the same organosilane mixtures, probably because the overall hydrolysis/condensation rates became slower. Co-existence of APTMS and VTMS was essential to prepare mesoporous particles from ternary organosilane mixtures. The work presented here demonstrates that organosilica particles with the desired functionality and the desired mesoporous structures can be obtained by selecting proper types of organosilane monomers and performing a facile and mild process either with or without ammonium hydroxide.

  2. Dynamics of Uncrystallized Water, Ice, and Hydrated Protein in Partially Crystallized Gelatin-Water Mixtures Studied by Broadband Dielectric Spectroscopy.

    PubMed

    Sasaki, Kaito; Panagopoulou, Anna; Kita, Rio; Shinyashiki, Naoki; Yagihara, Shin; Kyritsis, Apostolos; Pissis, Polycarpos

    2017-01-12

    The glass transition of partially crystallized gelatin-water mixtures was investigated using broadband dielectric spectroscopy (BDS) over a wide range of frequencies (10 mHz to 10 MHz), temperatures (113-298 K), and concentrations (10-45 wt %). Three dielectric relaxation processes (processes I, II, and III) were clearly observed. Processes I, II, and III originate from uncrystallized water (UCW) in the hydration shells of gelatin, ice, and hydrated gelatin, respectively. A dynamic crossover, called the Arrhenius to non-Arrhenius transition of UCW, was observed at the glass transition temperature of the relaxation process of hydrated gelatin for all mixtures. The amount of UCW increases with increasing gelatin content. However, above 35 wt % gelatin, the amount of UCW became more dependent on the gelatin concentration. This increase in UCW causes a decrease in the glass transition temperature of the cooperative motion of gelatin and UCW, which appears to result from a change in the aggregation structure of gelatin in the mixture at a gelatin concentration of approximately 35 wt %. The temperature dependence of the relaxation time of process II has nearly the same activation energy as pure ice made by slow crystallization of ice Ih. This implies that process II originates from the dynamics of slowly crystallized ice Ih.

  3. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

    NASA Astrophysics Data System (ADS)

    Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-01

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study of the efficiency of continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS) was conducted. These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT method, in contrast, failed to determine the quaternary mixture simultaneously; it was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and the absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and the concentration matrices, and validation was performed by both cross-validation and external validation sets. Both methods were successfully applied for the determination of the studied drugs in pharmaceutical formulations.
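
    A minimal sketch of the CWT-PLS idea, with assumed spectra, scale, and wavelet (not the published calibration design): wavelet coefficients of the mixture spectra serve as predictors for a partial-least-squares regression onto the component concentrations.

```python
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavelengths = np.linspace(200, 400, 256)
pure = np.vstack([np.exp(-0.5 * ((wavelengths - c) / 12) ** 2) for c in (250, 265, 280)])

C = rng.uniform(0.1, 1.0, size=(40, 3))                 # training concentrations (assumed)
spectra = C @ pure + rng.normal(0, 0.005, (40, len(wavelengths)))

def cwt_features(S, scale=8):
    """One fixed-scale CWT plane per spectrum, flattened as PLS predictors."""
    return np.vstack([pywt.cwt(s, [scale], "mexh")[0].ravel() for s in S])

pls = PLSRegression(n_components=3).fit(cwt_features(spectra), C)
print(pls.score(cwt_features(spectra), C))              # calibration R^2
```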

  4. Vacuum distillation of a mixture of LiCl-KCl eutectic salts and RE oxidative precipitates and a dechlorination and oxidation of RE oxychlorides.

    PubMed

    Eun, Hee Chul; Yang, Hee Chul; Cho, Yung Zun; Lee, Han Soo; Kim, In Tae

    2008-12-30

    In this study, a vacuum distillation of a mixture of LiCl-KCl eutectic salt and rare-earth oxidative precipitates was performed to separate a pure LiCl-KCl eutectic salt from the mixture. In addition, dechlorination and oxidation of the rare-earth oxychlorides were carried out to stabilize a final waste form. The mixture was distilled at reduced pressures in the range of 710-759.5 Torr at a fixed heating rate of 4 °C/min, and the LiCl-KCl eutectic salt was completely separated from the mixture. The time required for the salt distillation and the starting temperature for the salt vaporization were lowered with a reduction in the pressure. Dechlorination and oxidation of the rare-earth oxychlorides were completed at temperatures below 1300 °C, and this was dependent on the partial pressure of O2. The rare-earth oxychlorides (NdOCl/PrOCl) were transformed into oxides (Nd2O3/PrO2) during the dechlorination and oxidation process. These results will be used to design a concept for a process for recycling the waste salt from an electrorefining process.

  5. Microlayered flow structure around an acoustically levitated droplet under a phase-change process

    PubMed Central

    Hasegawa, Koji; Abe, Yutaka; Goda, Atsushi

    2016-01-01

    The acoustic levitation method (ALM) has found extensive applications in the fields of materials science, analytical chemistry, and biomedicine. This paper describes an experimental investigation of a levitated droplet in a 19.4-kHz single-axis acoustic levitator. We used water, ethanol, water/ethanol mixture, and hexane as test samples to investigate the effect of saturated vapor pressure on the flow field and evaporation process using a high-speed camera. In the case of ethanol, water/ethanol mixtures with initial ethanol fractions of 50 and 70 wt%, and hexane droplets, microlayered toroidal vortexes are generated in the vicinity of the droplet interface. Experimental results indicate the presence of two stages in the evaporation process of ethanol and binary mixture droplets for ethanol content >10%. The internal and external flow fields of the acoustically levitated droplet of pure and binary mixtures are clearly observed. The binary mixture of the levitated droplet shows the interaction between the configurations of the internal and external flow fields of the droplet and the concentration of the volatile fluid. Our findings can contribute to the further development of existing theoretical prediction. PMID:28725723

  6. Electron temperature and density measurement of tungsten inert gas arcs with Ar-He shielding gas mixture

    NASA Astrophysics Data System (ADS)

    Kühn-Kauffeldt, M.; Marques, J.-L.; Forster, G.; Schein, J.

    2013-10-01

    The diagnostics of atmospheric welding plasma is a well-established technology. In most cases the measurements are limited to processes using pure shielding gas. However in many applications shielding gas is a mixture of various components including metal vapor in gas metal arc welding (GMAW). Shielding gas mixtures are intentionally used for tungsten inert gas (TIG) welding in order to improve the welding performance. For example adding Helium to Argon shielding gas allows the weld geometry and porosity to be influenced. Yet thermal plasmas produced with gas mixtures or metal vapor still require further experimental investigation. In this work coherent Thomson scattering is used to measure electron temperature and density in these plasmas, since this technique allows independent measurements of electron and ion temperature. Here thermal plasmas generated by a TIG process with 50% Argon and 50% Helium shielding gas mixture have been investigated. Electron temperature and density measured by coherent Thomson scattering have been compared to the results of spectroscopic measurements of the plasma density using Stark broadening of the 696.5 nm Argon spectral line. Further investigations of MIG processes using Thomson scattering technique are planned.

  7. Valorization of a pharmaceutical organic sludge through different composting treatments.

    PubMed

    Cucina, Mirko; Tacconi, Chiara; Sordi, Simone; Pezzolla, Daniela; Gigliotti, Giovanni; Zadra, Claudia

    2018-04-01

    Nowadays, the agricultural reuse of pharmaceutical sludge is still limited due to environmental and agronomic issues (e.g. low stabilization of the organic matter, phytotoxicity). The aim of the present study was to evaluate the characteristics of a pharmaceutical sludge derived from daptomycin production and to study the possibility of improving its quality through composting. The pharmaceutical sludge showed a high content of macronutrients (e.g. total Kjeldahl N content was 38 g kg-1), but it was also characterized by high salinity (7.9 dS m-1), phytotoxicity (germination index was 36.7%) and low organic matter stabilization. Two different mixtures were prepared (mixture A: 70% sludge + 30% wood chips w/w; mixture B: 45% sludge + 45% wood chips + 10% cereal straw w/w) and treated through static composting using two different aeration systems: active and passive aeration. The mixtures resulted in the production of two different composts, and the evolution of the process management parameters was different. The low total solids and organic matter content of mixture A led to the failure of the process. The addition of cereal straw in mixture B resulted in increased porosity and C/N ratio and, consequently, in an optimal development of the composting process (e.g. the final organic matter loss was 54.1% and 63.1% for the passively and actively aerated treatments, respectively). Both passively and actively aerated composting of mixture B improved the quality of the pharmaceutical sludge by increasing its organic matter stabilization and removing phytotoxicity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Regenerative process and system for the simultaneous removal of particulates and the oxides of sulfur and nitrogen from a gas stream

    DOEpatents

    Cohen, Mitchell R.; Gal, Eli

    1993-01-01

    A process and system for simultaneously removing from a gaseous mixture sulfur oxides by means of a solid sulfur oxide acceptor on a porous carrier, nitrogen oxides by means of ammonia gas, and particulate matter by means of filtration, and for the regeneration of the loaded solid sulfur oxide acceptor. Finely-divided solid sulfur oxide acceptor is entrained in a gaseous mixture to deplete sulfur oxides from the gaseous mixture, the finely-divided solid sulfur oxide acceptor being dispersed on a porous carrier material having a particle size up to about 200 microns. In the process, the gaseous mixture is optionally pre-filtered to remove particulate matter, and thereafter finely-divided solid sulfur oxide acceptor is injected into the gaseous mixture. The government of the United States of America has rights in this invention pursuant to Contract No. DE-AC21-88MC 23174 awarded by the U.S. Department of Energy.

  9. Synthesis of highly phase pure (Bi, Pb)-Sr-Ca-Cu-O superconductor

    DOEpatents

    Dorris, S.E.; Poeppel, R.B.; Prorok, B.C.; Lanagan, M.T.; Maroni, V.A.

    1994-10-11

    An article and method of manufacture of (Bi,Pb)-Sr-Ca-Cu-O superconductor are disclosed. The superconductor is manufactured by preparing a first powdered mixture of bismuth oxide, lead oxide, strontium carbonate, calcium carbonate and copper oxide. A second powdered mixture is then prepared of strontium carbonate, calcium carbonate and copper oxide. The mixtures are calcined separately with the two mixtures then combined. The resulting combined mixture is then subjected to a powder in tube deformation and thermal processing to produce a substantially phase pure (Bi,Pb)-Sr-Ca-Cu-O superconductor. 5 figs.

  10. Production and delivery of a fluid mixture to an annular volume of a wellbore

    DOEpatents

    Hermes, Robert E [Los Alamos, NM; Bland, Ronald Gene [Houston, TX; Foley, Ron Lee [Magnolia, TX; Bloys, James B [Katy, TX; Gonzalez, Manuel E [Kingwood, NM; Daniel, John M [Germantown, TN; Robinson, Ian M [Guisborough, GB; Carpenter, Robert B [Tomball, TX

    2012-01-24

    The methods described herein generally relate to preparing and delivering a fluid mixture to a confined volume, specifically an annular volume located between two concentrically oriented casing strings within a hydrocarbon fluid producing well. The fluid mixtures disclosed herein are useful in controlling pressure in localized volumes. The fluid mixtures comprise at least one polymerizable monomer and at least one inhibitor. The processes and methods disclosed herein allow the fluid mixture to be stored, shipped and/or injected into localized volumes, for example, an annular volume defined by concentric well casing strings.

  11. The forces on a single interacting Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Thu, Nguyen Van

    2018-04-01

    Using the double parabola approximation for a single Bose-Einstein condensate confined between double slabs, we show that in the grand canonical ensemble (GCE) the ground state with the Robin boundary condition (BC) is favored, whereas in the canonical ensemble (CE) the system passes from the ground state with the Robin BC to the one with the Dirichlet BC in the small-L region, and vice versa in the large-L region; the phase transition in the space of ground states is of first order. The surface tension force and the Casimir force are also considered in detail in both the CE and the GCE.

  12. Application of fractional derivative with exponential law to bi-fractional-order wave equation with frictional memory kernel

    NASA Astrophysics Data System (ADS)

    Cuahutenango-Barro, B.; Taneco-Hernández, M. A.; Gómez-Aguilar, J. F.

    2017-12-01

    Analytical solutions of the wave equation with bi-fractional order and a frictional memory kernel of Mittag-Leffler type are obtained via the Caputo-Fabrizio fractional derivative in the Liouville-Caputo sense. Through the method of separation of variables and the Laplace transform method, we derive closed-form solutions and establish fundamental solutions. Special cases with homogeneous Dirichlet boundary conditions and nonhomogeneous initial conditions, as well as for the external force, are considered. Numerical simulations of the special solutions were performed and novel behaviors were obtained.
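
    A generic sketch of the separation-of-variables step for homogeneous Dirichlet conditions on an assumed interval (0, L) is shown below, with \mathcal{D}_t standing in for the paper's bi-fractional time operator and f_n(t) for the projected external force; the Caputo-Fabrizio/Mittag-Leffler specifics are not reproduced.

```latex
% Generic separation-of-variables sketch for homogeneous Dirichlet conditions on (0, L);
% \mathcal{D}_t is a stand-in for the bi-fractional time operator of the paper.
\[
  u(x,t) \;=\; \sum_{n=1}^{\infty} T_n(t)\,\sin\!\left(\frac{n\pi x}{L}\right),
  \qquad u(0,t) = u(L,t) = 0,
\]
\[
  \text{each mode satisfying }\;
  \mathcal{D}_t\, T_n(t) + c^{2}\left(\frac{n\pi}{L}\right)^{2} T_n(t) = f_n(t),
  \qquad \text{solved mode by mode via the Laplace transform } \widetilde{T}_n(s).
\]
```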

  13. Faà di Bruno's formula and the distributions of random partitions in population genetics and physics.

    PubMed

    Hoppe, Fred M

    2008-06-01

    We show that the formula of Faà di Bruno for the derivative of a composite function gives, in special cases, the sampling distributions in population genetics that are due to Ewens and to Pitman. The composite function is the same in each case. Other sampling distributions also arise in this way, such as those arising from Dirichlet, multivariate hypergeometric, and multinomial models, special cases of which correspond to Bose-Einstein, Fermi-Dirac, and Maxwell-Boltzmann distributions in physics. Connections are made to compound sampling models.
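
    For readers unfamiliar with the formula, the classical partition-indexed statement of Faà di Bruno's result for the n-th derivative of a composite function is sketched below; this is the textbook form and is not quoted from the record.

```latex
% Faà di Bruno's formula: the sum runs over all nonnegative integers
% m_1, ..., m_n satisfying 1*m_1 + 2*m_2 + ... + n*m_n = n.
\[
  \frac{\mathrm{d}^n}{\mathrm{d}x^n} f\bigl(g(x)\bigr)
  = \sum \frac{n!}{m_1!\, m_2! \cdots m_n!}\;
    f^{(m_1+\cdots+m_n)}\bigl(g(x)\bigr)
    \prod_{j=1}^{n}\left(\frac{g^{(j)}(x)}{j!}\right)^{m_j}.
\]
```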

  14. Vacuum Energy Induced by AN Impenetrable Flux Tube of Finite Radius

    NASA Astrophysics Data System (ADS)

    Gorkavenko, V. M.; Sitenko, Yu. A.; Stepanov, O. B.

    2011-06-01

    We consider the effect of the magnetic field background in the form of a tube of the finite transverse size on the vacuum of the quantized charged massive scalar field which is subject to the Dirichlet boundary condition at the edge of the tube. The vacuum energy is induced, being periodic in the value of the magnetic flux enclosed in the tube. The dependence of the vacuum energy density on the distance from the tube and on the coupling to the space-time curvature scalar is comprehensively analyzed.

  16. Conference on Ordinary and Partial Differential Equations, 29 March to 2 April 1982.

    DTIC Science & Technology

    1982-04-02

    Abstract: Boundary value problems for elliptic and parabolic equations in domains with corners. The paper concerns initial-Dirichlet and initial-mixed boundary value problems for parabolic equations of the form $a_{ij}(x,t)u_{x_i x_j} + a_i(x,t)u_{x_i} + a(x,t)u - u_t = f(x,t)$, $x = (x_1, \ldots, x_n)$, $n \ge 2$. We consider the case of ... Though it is well known that the electron possesses an anomalous magnetic moment, this term has not been considered so far in the mathematical ...

  17. Introduction to Real Orthogonal Polynomials

    DTIC Science & Technology

    1992-06-01

    uses Green's functions. As motivation, consider the Dirichlet problem for the unit circle in the plane, which involves finding a harmonic function u(r, ...) ... the orthogonality relation for the q-orthogonal polynomials p_n(q^x; a, b; q) ... motivation and justification for continued study of the intrinsic structure of orthogonal polynomials.

  18. On the existence of mosaic-skeleton approximations for discrete analogues of integral operators

    NASA Astrophysics Data System (ADS)

    Kashirin, A. A.; Taltykina, M. Yu.

    2017-09-01

    Exterior three-dimensional Dirichlet problems for the Laplace and Helmholtz equations are considered. By applying methods of potential theory, they are reduced to equivalent Fredholm boundary integral equations of the first kind, for which discrete analogues, i.e., systems of linear algebraic equations (SLAEs) are constructed. The existence of mosaic-skeleton approximations for the matrices of the indicated systems is proved. These approximations make it possible to reduce the computational complexity of an iterative solution of the SLAEs. Numerical experiments estimating the capabilities of the proposed approach are described.

  19. The Theory and Practice of the h-p Version of Finite Element Method.

    DTIC Science & Technology

    1987-04-01

    The problem with non-homogeneous Dirichlet data was studied by Babuška and Guo; the problem is to find the finite element solution u ... The h-p version has been implemented in the commercial code PROBE by Noetic Tech., St. Louis; see [27, 28]. The commercial program FIESTA ... collaboration with government agencies such as the National Bureau of Standards ... to be an international center of study and research for foreign ...

  20. Global bifurcation of solutions of the mean curvature spacelike equation in certain Friedmann-Lemaître-Robertson-Walker spacetimes

    NASA Astrophysics Data System (ADS)

    Dai, Guowei; Romero, Alfonso; Torres, Pedro J.

    2018-06-01

    We study the existence of spacelike graphs for the prescribed mean curvature equation in the Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime. By using a conformal change of variable, this problem is translated into an equivalent problem in the Lorentz-Minkowski spacetime. Then, by using Rabinowitz's global bifurcation method, we obtain the existence and multiplicity of positive solutions for this equation with 0-Dirichlet boundary condition on a ball. Moreover, the global structure of the positive solution set is studied.

  1. Contribution to the benchmark for ternary mixtures: Transient analysis in microgravity conditions.

    PubMed

    Ahadi, Amirhossein; Ziad Saghir, M

    2015-04-01

    We present a transient experimental analysis of the DCMIX1 project conducted onboard the International Space Station for a ternary tetrahydronaphthalene, isobutylbenzene, n-dodecane mixture. Raw images taken in the microgravity environment using the SODI (Selectable Optical Diagnostic) apparatus, which is equipped with a two-wavelength diagnostic, were processed and the results were analyzed in this work. We measured the concentration profile of the mixture containing 80% THN, 10% IBB and 10% nC12 during the entire experiment using an advanced image processing technique, and accordingly we determined the Soret coefficients using an advanced curve-fitting and post-processing technique. It must be noted that the experiment was repeated five times to ensure repeatability.

  2. System configured for applying a modifying agent to a non-equidimensional substrate

    DOEpatents

    Janikowski, Stuart K.; Argyle, Mark D.; Fox, Robert V.; Propp, W. Alan; Toth, William J. [Idaho Falls, ID]; Ginosar, Daniel M.; Allen, Charles A.; Miller, David L. [Idaho Falls, ID]

    2007-07-10

    The present invention is related to systems and methods for modifying various non-equidimensional substrates with modifying agents. The system comprises a processing chamber configured for passing the non-equidimensional substrate therethrough, wherein the processing chamber is further configured to accept a treatment mixture into the chamber during movement of the non-equidimensional substrate through the processing chamber. The treatment mixture can comprise the modifying agent in a carrier medium, wherein the carrier medium is selected from the group consisting of a supercritical fluid, a near-critical fluid, a superheated fluid, a superheated liquid, and a liquefied gas. Thus, the modifying agent can be applied to the non-equidimensional substrate upon contact between the treatment mixture and the non-equidimensional substrate.

  3. Development of a process for the extraction of {sup 137}Cs from acidic HLLW based on crown-calix extractant use of di-alkylamide modifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexova, J.; Sirova, M.; Rais, J.

    2008-07-01

    Within the framework of the ARTIST project of total fuel retreatment with ecological mixtures of solvents and extractants containing only C, H, O, and N atoms, a process segment for the extraction of {sup 137}Cs from the acidic stream was developed. The process with 25,27-Bis(1-octyloxy)calix[4]arene-crown-6, DOC[4]C6, dissolved at a 0.01 M concentration in a mixture of 90 vol % 1-octanol and 10% dihexyl octanamide (DHOA), was proposed as a viable variant due to its good multicycle performance, even with irradiated solvent, and due to the good chemical stability of the chosen solvent combination. (authors)

  4. System configured for applying a modifying agent to a non-equidimensional substrate

    DOEpatents

    Janikowski, Stuart K.; Toth, William J.; Ginosar, Daniel M.; Allen, Charles A.; Argyle, Mark D.; Fox, Robert V.; Propp, W. Alan; Miller, David L.

    2003-09-23

    The present invention is related to systems and methods for modifying various non-equidimensional substrates with modifying agents. The system comprises a processing chamber configured for passing the non-equidimensional substrate therethrough, wherein the processing chamber is further configured to accept a treatment mixture into the chamber during movement of the non-equidimensional substrate through the processing chamber. The treatment mixture can comprise the modifying agent in a carrier medium, wherein the carrier medium is selected from the group consisting of a supercritical fluid, a near-critical fluid, a superheated fluid, a superheated liquid, and a liquefied gas. Thus, the modifying agent can be applied to the non-equidimensional substrate upon contact between the treatment mixture and the non-equidimensional substrate.

  5. Catalytic partial oxidation of hydrocarbons

    DOEpatents

    Schmidt, Lanny D.; Krummenacher, Jakob J.; West, Kevin N.

    2007-08-28

    A process for the production of a reaction product including a carbon containing compound. The process includes providing a film of a fuel source including at least one organic compound on a wall of a reactor, contacting the fuel source with a source of oxygen, forming a vaporized mixture of fuel and oxygen, and contacting the vaporized mixture of fuel and oxygen with a catalyst under conditions effective to produce a reaction product including a carbon containing compound. Preferred products include α-olefins and synthesis gas. A preferred catalyst is a supported metal catalyst, preferably including rhodium, platinum, and mixtures thereof.

  6. Catalytic partial oxidation of hydrocarbons

    DOEpatents

    Schmidt, Lanny D [Minneapolis, MN; Krummenacher, Jakob J [Minneapolis, MN; West, Kevin N [Minneapolis, MN

    2009-05-19

    A process for the production of a reaction product including a carbon containing compound. The process includes providing a film of a fuel source including at least one organic compound on a wall of a reactor, contacting the fuel source with a source of oxygen, forming a vaporized mixture of fuel and oxygen, and contacting the vaporized mixture of fuel and oxygen with a catalyst under conditions effective to produce a reaction product including a carbon containing compound. Preferred products include α-olefins and synthesis gas. A preferred catalyst is a supported metal catalyst, preferably including rhodium, platinum, and mixtures thereof.

  7. PROCESS FOR PRODUCTION OF URANIUM

    DOEpatents

    Crawford, J.W.C.

    1959-09-29

    A process is described for the production of uranium by the autothermic reduction of an anhydrous uranium halide with an alkaline earth metal, preferably magnesium. One feature is the initial reduction step, which is brought about by locally bringing to reaction temperature a portion of a mixture of the reactants in an open reaction vessel having in contact with the mixture a lining of substantial thickness composed of calcium fluoride. The lining is prepared by coating the interior surface with a plastic mixture of calcium fluoride and water and subsequently heating the coating in situ until at least the exposed surface is substantially anhydrous.

  8. 78 FR 41768 - Chemical Substances and Mixtures Used in Oil and Gas Exploration or Production; TSCA Section 21...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-11

    ... Substances and Mixtures Used in Oil and Gas Exploration or Production; TSCA Section 21 Petition; Reasons for... processors of oil and gas exploration and production (E&P) chemical substances and mixtures to maintain... interest to you if you manufacture (including import), process, or distribute chemical substances or...

  9. Explosives mimic for testing, training, and monitoring

    DOEpatents

    Reynolds, John G.; Durban, Matthew M.; Gash, Alexander E.; Grapes, Michael D.; Kelley, Ryan S.; Sullivan, Kyle T.

    2018-02-13

    Additive Manufacturing (AM) is used to make mimics for explosives. The process uses mixtures of explosives and matrices commonly used in AM. The explosives are formulated into a mixture with the matrix and printed using AM techniques and equipment. The explosive concentrations are kept less than 10% by wt. of the mixture to conform to requirements of shipping and handling.

  10. A molecular dynamics simulation study of dynamic process and mesoscopic structure in liquid mixture systems

    NASA Astrophysics Data System (ADS)

    Yang, Peng

    The focus of this dissertation is the Molecular Dynamics (MD) simulation study of two different systems. In the first system, we study the dynamic process of graphene exfoliation, particularly graphene dispersion using ionic surfactants (Chapter 2). In the second system, we investigate the mesoscopic structure of binary solute/ionic liquid (IL) mixtures through the comparison between simulations and corresponding experiments (Chapters 3 and 4). In the graphene exfoliation study, we consider two separation mechanisms: changing the interlayer distance and sliding the two single-layer graphene sheets apart. By calculating the energy barrier as a function of separation (interlayer or sliding-away) distance and performing sodium dodecyl sulfate (SDS) structure analysis around the graphene surface in SDS surfactant/water + bilayer graphene mixture systems, we find that the sliding-away mechanism is the dominant, feasible separation process. In this process, the SDS-graphene interaction gradually replaces the graphene-graphene Van der Waals (VdW) interaction and decreases the energy barrier until it is almost zero at the critical SDS concentration. In the solute/IL study, we investigate nonpolar (CS2) and dipolar (CH3CN) solute/IL mixture systems. MD simulation shows that at low concentrations, the IL is nanosegregated into an ionic network and a nonpolar domain. It is also found that CS2 molecules tend to be localized in the nonpolar domain, while CH3CN interacts with the nonpolar domain as well as with the charged head groups in the ionic network because of its amphiphilicity. At high concentrations, CH3CN molecules eventually disrupt the nanostructural organization. This dissertation is organized in five chapters: (1) introduction to graphene, ionic liquids and the methodology of MD; (2) MD simulation of graphene exfoliation; (3) nanostructural organization in acetonitrile/IL mixtures; (4) nanostructural organization in carbon disulfide/IL mixtures; (5) conclusions. Results of the MD simulations of liquid mixture systems carried out in this research explain observed experiments and show the details of nanostructural organization in small solute molecule/IL mixtures. Additionally, the research successfully reveals the correct mechanism of the graphene exfoliation process in liquid solution. (This is summarized in Chapter 5.) The research presented in this dissertation enhances our understanding of the microscopic behaviors in complex liquid systems as well as the theoretical methods to explore them.

  11. Discovering temporal patterns in water quality time series, focusing on floods with the LDA method

    NASA Astrophysics Data System (ADS)

    Hélène Aubert, Alice; Tavenard, Romain; Emonet, Rémi; Malinowski, Simon; Guyet, Thomas; Quiniou, René; Odobez, Jean-Marc; Gascuel-Odoux, Chantal

    2013-04-01

    Studying floods has been a major issue in hydrological research for years. It is often done in terms of water quantity but it is also of interest in terms of water quality. Stream chemistry is a mix of solutes. They originate from various sources in the catchment, reach the stream by various flow pathways and are transformed by biogeochemical reactions at different locations. Therefore, we hypothesized that reaction of the stream chemistry to a rainfall event is not unique but varies according to the season (1), and the global meteorological conditions of the year (2). Identifying a typology of temporal chemical patterns of reaction to a rainfall event is a way to better understand catchment processes at the flood time scale. To answer this issue, we applied a probabilistic model (Latent Dirichlet Allocation or LDA (3)) mining recurrent sequential patterns to a dataset of floods. The dataset is 12 years long and daily recorded. It gathers a broad range of parameters from which we selected rainfall, discharge, water table depth, temperature as well as nitrate, dissolved organic carbon, sulphate and chloride concentrations. It comes from a long-term hydrological observatory (AgrHys, western France) located at Kervidy-Naizin. A set of 472 floods was automatically extracted (4). From each flood, a document has been generated that is made of a set of "hydrological words". Each hydrological word corresponds to a measurement: it is a triplet made of the considered variable, the time at which the measurement is made (relative to the beginning of the flood), and its magnitude (that can be low, medium or high). The documents are used as input data to the LDA algorithm. LDA relies on spotting co-occurrences (as an alternative to the more traditional study of correlation) between words that appear within the flood documents. It has two nice properties that are its ability to easily deal with missing data and its additive property that allows a document to be seen as a mixture of several flood patterns. The output of LDA is a set of patterns that can easily be represented in graphics. These patterns correspond to typical reactions to rainfall events. The patterns themselves are carefully studied, as well as their repartition along the year and along the 12 years of the dataset. The novelties are fourfold. First, as a methodological point of view, we learn that hydrological data can be analyzed with this LDA model giving a typology of a multivariate chemical signature of floods. Second, we outline that chemistry parameters are sufficient to obtain meaningful patterns. There is no need to include hydro-meteorological parameters to define the patterns. However, hydro-meteorological parameters are useful to understand the processes leading to these patterns. Third, our hypothesis of seasonal specific reaction to rainfall is verified, moreover detailed; so is our hypothesis of different reactions to rainfall for years with different hydro-meteorological conditions. Fourth, this method allows the consideration of overlapping floods that are usually not studied. We would recommend the use of such model to study chemical reactions of stream after rainfall events, or more broadly after any hydrological events. The typology that has been provided by this method is a kind of bar code of water chemistry during floods. It could be well suited to compare different geographical locations by using the same patterns and analysing the resulting different pattern distributions. (1) Aubert, A.H. et al., 2012. 
The chemical signature of a livestock farming catchment: synthesis from a high-frequency multi-element long term monitoring. HESSD, 9(8): 9715 - 9741. (2) Aubert, A.H., Gascuel-Odoux, C., Merot, P., 2013. Annual hysteresis of water quality: A method to analyse the effect of intra- and inter-annual climatic conditions. Journal of Hydrology, 478(0): 29-39. (3) Blei, D. M.; Ng, A. Y.; Jordan, M. I., 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(4-5): 993-1022. (4) de Lavenne, A., Cudennec, C., Streamflow velocity estimation in GIUH-type approach: what can neighbouring basins tell us? Poster Presentation - EGU General Assembly, 22-27 April 2012, Vienna, Austria.
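
    To make the encoding concrete, the sketch below illustrates the general idea of turning each flood into a bag of "hydrological words" (variable, time step, magnitude class) and fitting LDA, using scikit-learn's LatentDirichletAllocation. The discretization thresholds, variable names, toy data, and number of patterns are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' code): encode floods as "hydrological words"
# (variable, time step, magnitude class) and fit LDA to recover flood patterns.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def flood_to_words(flood, q=(0.33, 0.66)):
    """flood: dict mapping variable name -> 1D array of measurements
    (time relative to flood start). Quantile thresholds are illustrative."""
    words = []
    for var, series in flood.items():
        lo, hi = np.nanquantile(series, q)
        for t, value in enumerate(series):
            if np.isnan(value):
                continue  # missing measurements are simply skipped
            level = "low" if value <= lo else ("med" if value <= hi else "high")
            words.append(f"{var}_t{t}_{level}")
    return " ".join(words)

# Toy floods standing in for the 472 automatically extracted events.
rng = np.random.default_rng(0)
floods = [{"nitrate": rng.random(10), "discharge": rng.random(10)} for _ in range(20)]
docs = [flood_to_words(f) for f in floods]

counts = CountVectorizer(token_pattern=r"\S+").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(counts)
print(lda.transform(counts)[:3].round(2))  # each flood as a mixture of patterns
```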

  12. Inflammable Gas Mixture Detection with a Single Catalytic Sensor Based on the Electric Field Effect

    PubMed Central

    Tong, Ziyuan; Tong, Min-Ming; Meng, Wen; Li, Meng

    2014-01-01

    This paper introduces a new way to analyze mixtures of inflammable gases with a single catalytic sensor. The analysis technology is based on a new finding that an electric field applied to the catalytic sensor can change its output sensitivity. The analysis of mixed inflammable gases results from processing the output signals obtained by adjusting the electric field parameter of the catalytic sensor. For the signal processing, we designed a group of equations based on the heat balance of the catalytic sensor, expressing the relationship between the output signals and the gas concentrations. With these equations and the outputs at different electric fields, the concentration of each gas in a mixture could be calculated. In experiments, a mixture of methane, butane and ethane was analyzed by this new method, and the results showed that the concentration of each gas in the mixture could be detected with a single catalytic sensor, with a maximum relative error of less than 5%. PMID:24717635
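
    The heat-balance equations themselves are not reproduced in the record. Purely as an illustration of the inversion step, the sketch below assumes the simplest case in which the output at each electric-field setting is approximately a linear combination of the gas concentrations with field-dependent sensitivities; the sensitivity matrix and readings are made-up numbers.

```python
# Hypothetical sketch: if the output at field setting k is
#   s_k ~= sum_i A[k, i] * c[i]  (A = calibrated, field-dependent sensitivities),
# the mixture concentrations c follow from a least-squares solve.
import numpy as np

# Illustrative sensitivities (rows: 3 field settings; columns: CH4, C4H10, C2H6).
A = np.array([[1.00, 0.65, 0.80],
              [0.90, 0.95, 0.85],
              [0.70, 1.20, 0.90]])
s = np.array([1.05, 1.30, 1.45])           # measured outputs at the 3 settings

c, *_ = np.linalg.lstsq(A, s, rcond=None)  # estimated concentrations
print(c.round(3))
```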

  13. Flash-point prediction for binary partially miscible mixtures of flammable solvents.

    PubMed

    Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng

    2008-05-30

    Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.
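
    For orientation, flash-point models of the kind developed by Liaw and co-workers for fully miscible mixtures rest on a Le Chatelier-type criterion with activity coefficients; one commonly quoted form is sketched below, where x_i is the liquid mole fraction, γ_i the activity coefficient, P_i^sat the pure-component vapor pressure, and T_fp,i the pure-component flash point. For partially miscible mixtures, the subject of this record, the criterion is applied to the composition of each liquid phase obtained from liquid-liquid equilibrium; the paper's exact working equations are not reproduced here.

```latex
% Le Chatelier-type flash-point criterion for a miscible liquid mixture:
% the mixture flash point T_{fp} is the temperature at which the sum equals 1.
\[
  \sum_i \frac{x_i\,\gamma_i\,P_i^{\mathrm{sat}}(T_{fp})}
              {P_i^{\mathrm{sat}}(T_{fp,i})} = 1 .
\]
```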

  14. Melting and solidification characteristics of a mixture of two types of latent heat storage material in a vessel

    NASA Astrophysics Data System (ADS)

    Yu, JikSu; Horibe, Akihiko; Haruki, Naoto; Machida, Akito; Kato, Masashi

    2016-11-01

    In this study, we investigated the fundamental melting and solidification characteristics of mannitol, erythritol, and their mixture (70 % by mass mannitol: 30 % by mass erythritol) as potential phase-change materials (PCMs) for latent heat thermal energy storage systems, specifically those pertaining to industrial waste heat, having temperatures in the range of 100-250 °C. The melting point of erythritol and mannitol, the melting peak temperature of their mixture, and latent heat were measured using differential scanning calorimetry. The thermal performance of the mannitol mixture was determined during melting and solidification processes, using a heat storage vessel with a pipe heat exchanger. Our results indicated phase-change (fusion) temperatures of 160 °C for mannitol and 113 and 150 °C for the mannitol mixture. Nondimensional correlation equations of the average heat transfer during the solidification process, as well as the temperature and velocity efficiencies of flowing silicon oil in the pipe and the phase-change material (PCM), were derived using several nondimensional parameters.

  15. Photooxidation and Microbial Processing of Ancient and Modern Dissolved Organic Carbon in the Kolyma River, Siberia.

    NASA Astrophysics Data System (ADS)

    Behnke, M. I.; Mann, P. J.; Schade, J. D.; Spawn, S.; Zimov, N.

    2015-12-01

    Permafrost soils in northern high latitudes store large quantities of organic carbon that have remained frozen for thousands of years. As global temperatures increase, permafrost deposits have begun to thaw, releasing previously stored ancient carbon to streams and rivers in the form of dissolved organic carbon (DOC). Newly mobilized DOC is then subjected to processing by photooxidation and microbial metabolism. Permafrost-derived DOC is highly bioavailable directly upon release relative to modern DOC derived from plants and surface active layer soils. Our objectives were to assess the interaction of photodegradation and microbial processing, and to quantify any light priming effect on the microbial consumption of both ancient and modern sourced DOC pools. We exposed sterilized mixtures of ancient and modern DOC to ambient sunlight for six days, and then inoculated mixtures (0, 1, 10, 25, 50 & 100% ancient DOC) with microbes from both modern and ancient water sources. After inoculation, samples were incubated in the dark for five days. We measured biological oxygen demand, changes in absorbance, and DOC concentrations to quantify microbial consumption of DOC and identify shifts in DOC composition and biolability. We found evidence of photobleaching during irradiation (decreasing S275-295, increasing slope ratio, and decreasing SUVA254). Once inoculated, mixtures with more ancient DOC showed initially increased microbial respiration compared to mixtures with primarily modern DOC. During the first 24 hours, the light-exposed mixture with 50% ancient DOC showed 47.6% more oxygen consumption than did the dark 50% mixture, while the purely modern DOC showed 11.5% greater oxygen consumption after light exposure. After 5 days, the modern light priming was comparable to the 50% mixture (31.2% compared to 20.5%, respectively). Our results indicate that natural photoexposure of both modern and newly released DOC increases microbial processing rates over non photo-exposed DOC.

  16. A traceable reference for direct comparative assessment of total naphthenic acid concentrations in commercial and acid extractable organic mixtures derived from oil sands process water.

    PubMed

    Brunswick, Pamela; Hewitt, L Mark; Frank, Richard A; Kim, Marcus; van Aggelen, Graham; Shang, Dayue

    2017-02-23

    The advantage of using naphthenic acid (NA) mixtures for the determination of total NA lies in their chemical characteristics and identification of retention times distinct from isobaric interferences. However, the differing homolog profiles and unknown chemical structures of NA mixtures do not allow them to be considered a traceable reference material. The current study provides a new tool for the comparative assessment of different NA mixtures by direct reference to a single, well-defined and traceable compound, decanoic-d 19 acid. The method employed an established liquid chromatography time-of-flight mass spectrometry (LC/QToF) procedure that was applicable both to the classic O2 NA species dominating commercial mixtures and additionally to the O4 species known to be present in acid extractable organics (AEOs) derived from oil sands process water (OSPW). Four different commercial NA mixtures and one OSPW-derived AEOs mixture were comparatively assessed. Results showed significant difference among Merichem Technical, Aldrich, Acros, and Kodak commercial NA mixtures with respect to "equivalent to decanoic-d 19 acid" concentration ratios to nominal. Furthermore, different lot numbers of single commercial NA mixtures were found to be inconsistent with respect to their homolog content by percent response. Differences in the observed homolog content varied significantly, particularly at the lower (n = 9-14) and higher (n = 20-23) carbon number ranges. Results highlighted the problem between using NA mixtures from different sources and different lot numbers but offered a solution to the problem from a concentration perspective. It is anticipated that this tool may be utilized in review of historical data in addition to future studies, such as the study of OSPW derived acid extractable organics (AEOs) and fractions employed during toxicological studies.

  17. A practical guide to big data research in psychology.

    PubMed

    Chen, Eric Evan; Wojcik, Sean P

    2016-12-01

    The massive volume of data that now covers a wide variety of human behaviors offers researchers in psychology an unprecedented opportunity to conduct innovative theory- and data-driven field research. This article is a practical guide to conducting big data research, covering data management, acquisition, processing, and analytics (including key supervised and unsupervised learning data mining methods). It is accompanied by walkthrough tutorials on data acquisition, text analysis with latent Dirichlet allocation topic modeling, and classification with support vector machines. Big data practitioners in academia, industry, and the community have built a comprehensive base of tools and knowledge that makes big data research accessible to researchers in a broad range of fields. However, big data research does require knowledge of software programming and a different analytical mindset. For those willing to acquire the requisite skills, innovative analyses of unexpected or previously untapped data sources can offer fresh ways to develop, test, and extend theories. When conducted with care and respect, big data research can become an essential complement to traditional research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
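
    The article's accompanying walkthrough tutorials are not reproduced in this record. As a generic illustration of the two techniques it names, the sketch below runs LDA topic modeling and linear SVM classification with scikit-learn on toy texts; the corpus, labels, and parameter choices are placeholders.

```python
# Generic sketch of the two analyses named in the record (not the article's
# own tutorials): LDA topic modeling, then SVM text classification.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

texts = ["happy great day", "sad bad news", "great happy news", "bad sad day"]
labels = [1, 0, 1, 0]  # toy sentiment labels

# Topic modeling with latent Dirichlet allocation.
counts = CountVectorizer().fit_transform(texts)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# Classification with a linear support vector machine on TF-IDF features.
X = TfidfVectorizer().fit_transform(texts)
clf = LinearSVC().fit(X, labels)
print(topics.round(2), clf.predict(X))
```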

  18. Output Feedback-Based Boundary Control of Uncertain Coupled Semilinear Parabolic PDE Using Neurodynamic Programming.

    PubMed

    Talaei, Behzad; Jagannathan, Sarangapani; Singler, John

    2018-04-01

    In this paper, neurodynamic programming-based output feedback boundary control of distributed parameter systems governed by uncertain coupled semilinear parabolic partial differential equations (PDEs) under Neumann or Dirichlet boundary control conditions is introduced. First, Hamilton-Jacobi-Bellman (HJB) equation is formulated in the original PDE domain and the optimal control policy is derived using the value functional as the solution of the HJB equation. Subsequently, a novel observer is developed to estimate the system states given the uncertain nonlinearity in PDE dynamics and measured outputs. Consequently, the suboptimal boundary control policy is obtained by forward-in-time estimation of the value functional using a neural network (NN)-based online approximator and estimated state vector obtained from the NN observer. Novel adaptive tuning laws in continuous time are proposed for learning the value functional online to satisfy the HJB equation along system trajectories while ensuring the closed-loop stability. Local uniformly ultimate boundedness of the closed-loop system is verified by using Lyapunov theory. The performance of the proposed controller is verified via simulation on an unstable coupled diffusion reaction process.

  19. Nonparametric Hierarchical Bayesian Model for Functional Brain Parcellation

    PubMed Central

    Lashkari, Danial; Sridharan, Ramesh; Vul, Edward; Hsieh, Po-Jang; Kanwisher, Nancy; Golland, Polina

    2011-01-01

    We develop a method for unsupervised analysis of functional brain images that learns group-level patterns of functional response. Our algorithm is based on a generative model that comprises two main layers. At the lower level, we express the functional brain response to each stimulus as a binary activation variable. At the next level, we define a prior over the sets of activation variables in all subjects. We use a Hierarchical Dirichlet Process as the prior in order to simultaneously learn the patterns of response that are shared across the group, and to estimate the number of these patterns supported by data. Inference based on this model enables automatic discovery and characterization of salient and consistent patterns in functional signals. We apply our method to data from a study that explores the response of the visual cortex to a collection of images. The discovered profiles of activation correspond to selectivity to a number of image categories such as faces, bodies, and scenes. More generally, our results appear superior to the results of alternative data-driven methods in capturing the category structure in the space of stimuli. PMID:21841977

  20. Latent topic discovery of clinical concepts from hospital discharge summaries of a heterogeneous patient cohort.

    PubMed

    Lehman, Li-Wei; Long, William; Saeed, Mohammed; Mark, Roger

    2014-01-01

    Patients in critical care often exhibit complex disease patterns. A fundamental challenge in clinical research is to identify clinical features that may be characteristic of adverse patient outcomes. In this work, we propose a data-driven approach for phenotype discovery of patients in critical care. We used Hierarchical Dirichlet Process (HDP) as a non-parametric topic modeling technique to automatically discover the latent "topic" structure of diseases, symptoms, and findings documented in hospital discharge summaries. We show that the latent topic structure can be used to reveal phenotypic patterns of diseases and symptoms shared across subgroups of a patient cohort, and may contain prognostic value in stratifying patients' post hospital discharge mortality risks. Using discharge summaries of a large patient cohort from the MIMIC II database, we evaluate the clinical utility of the discovered topic structure in identifying patients who are at high risk of mortality within one year post hospital discharge. We demonstrate that the learned topic structure has statistically significant associations with mortality post hospital discharge, and may provide valuable insights in defining new feature sets for predicting patient outcomes.
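
    As a minimal illustration of the non-parametric topic modeling described here (not the study's actual pipeline, which used MIMIC II discharge summaries), the sketch below fits a hierarchical Dirichlet process topic model with gensim's HdpModel to a few toy token lists standing in for discharge summaries.

```python
# Illustrative HDP topic discovery with gensim (toy documents, not MIMIC II data).
from gensim.corpora import Dictionary
from gensim.models import HdpModel

docs = [["sepsis", "hypotension", "fever"],
        ["pneumonia", "fever", "cough"],
        ["sepsis", "renal", "failure"],
        ["cough", "pneumonia", "hypoxia"]]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

hdp = HdpModel(corpus, id2word=dictionary, random_state=0)
for topic in hdp.print_topics(num_topics=3, num_words=4):
    print(topic)  # the number of topics is inferred, not fixed in advance
```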

  1. Tunable integration of absorption-membrane-adsorption for efficiently separating low boiling gas mixtures near normal temperature

    PubMed Central

    Liu, Huang; Pan, Yong; Liu, Bei; Sun, Changyu; Guo, Ping; Gao, Xueteng; Yang, Lanying; Ma, Qinglan; Chen, Guangjin

    2016-01-01

    Separation of low-boiling gas mixtures is a widespread concern in the process industries, and such separations currently rely heavily upon energy-intensive cryogenic processes. Here, we report a pseudo-absorption process for separating low-boiling gas mixtures near normal temperature. In this process, absorption-membrane-adsorption is integrated by suspending a suitable porous ZIF material in a suitable solvent and forming a selectively permeable liquid membrane around the ZIF particles. Green solvents like water and glycol were used to form the ZIF-8 slurry and tune the permeability of the liquid membrane surrounding the ZIF-8 particles. We found that glycol molecules form a tighter membrane while water molecules form a looser membrane because of the hydrophobicity of ZIF-8. When using mixed solvents composed of glycol and water, the permeability of the liquid membrane becomes tunable. It is shown that the ZIF-8/water slurry always manifests remarkably higher separation selectivity than solid ZIF-8, and it can be tuned to further enhance the capture of light hydrocarbons by adding a suitable quantity of glycol to the water. Because of its lower viscosity and higher sorption/desorption rate, the tunable ZIF-8/water-glycol slurry can readily be used as a liquid absorbent to separate different kinds of low-boiling gas mixtures by applying a multistage separation process in one traditional absorption tower, especially for the capture of light hydrocarbons. PMID:26892255

  2. Crystallization-induced dynamic resolution R-epimer from 25-OCH3-PPD epimeric mixture.

    PubMed

    Zhang, Sainan; Tang, Yun; Cao, Jiaqing; Zhao, Chen; Zhao, Yuqing

    2015-11-15

    25-OCH3-PPD is a promising antitumor dammarane sapogenin isolated from the total saponin-hydrolyzed extract of Panax ginseng berry and Panax notoginseng leaves. 20(R)-25-OCH3-PPD was more potent as an anti-cancer agent than 20(S)-25-OCH3-PPD and epimeric mixture of 25-OCH3-PPD. This paper describes the rapid separation process of the R-epimer of 25-OCH3-PPD from its epimeric mixture by crystallization-induced dynamic resolution (CIDR). The optimized CIDR process was based on single factor analysis and nine well-planned orthogonal design experiments (OA9 matrix). A rapid and sensitive reverse phase high-performance liquid chromatographic (HPLC) method with evaporative light-scattering detector (ELSD) was developed and validated for the quantitation of 25-OCH3-PPD epimeric mixture and crystalline product. Separation and quantitation were achieved with a silica column using a mobile phase consisting of methanol and water (87:13, v/v) at a flow rate of 1.0mL/min. The ELSD detection was performed at 50°C and 3L/min. Under conditions involving 3mL of 95% ethanol, 8% HCl, and a hermetically sealed environment for 72h, the maximum production of 25(R)-OCH3-PPD was achieved with a chemical purity of 97% and a total yield of 87% through the CIDR process. The 25(R)-OCH3-PPD was nearly completely separated from the 220mg 25-OCH3-PPD epimeric mixture. Overall, a simple and steady small-batch purification process for the large-scale production of 25(R)-OCH3-PPD from 25-OCH3-PPD epimeric mixture was developed. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Integration of chemical and toxicological tools to assess the bioavailability of copper derived from different copper-based fungicides in soil.

    PubMed

    Wang, Quan-Ying; Sun, Jing-Yue; Xu, Xing-Jian; Yu, Hong-Wen

    2018-06-20

    Because of the extensive use of Cu-based fungicides, the accumulation of Cu in agricultural soil has been widely reported. However, little is known about the bioavailability of Cu derived from different fungicides in soil. This paper investigated both the distribution behavior of Cu from two commonly used fungicides (Bordeaux mixture and copper oxychloride) during the aging process and the toxicological effects of Cu on earthworms. Copper nitrate was selected as a comparison during the aging process. The distribution of exogenous Cu into different soil fractions involved an initial rapid retention (the first 8 weeks) followed by a slow continuous retention. Moreover, Cu mainly moved from the exchangeable and carbonate fractions to the Fe-Mn oxide-bound fraction during the aging process. The Elovich model fit the aging of available Cu well, and the transformation rate was in the order Cu(NO3)2 > Bordeaux mixture > copper oxychloride. On the other hand, the biological responses of earthworms showed that catalase activities and malondialdehyde contents of the copper oxychloride-treated earthworms were significantly higher than those of the Bordeaux mixture-treated earthworms. Also, body Cu loads of earthworms from the soils spiked with the different Cu compounds were in the following order: copper oxychloride > Bordeaux mixture. Thus, the bioavailability of Cu from copper oxychloride in soil was significantly higher than that of Bordeaux mixture, and the specific Cu compound should be taken into consideration when studying the bioavailability of Cu-based fungicides in soil. Copyright © 2018 Elsevier Inc. All rights reserved.
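
    The record does not state which parameterization of the Elovich model was fitted. For orientation only, the classical integrated Elovich kinetic equation often used for metal aging/retention in soils is sketched below, with α an initial rate constant and β a constant related to the retention rate.

```latex
% Classical integrated Elovich equation; the simplified logarithmic form
% holds when \alpha\beta t \gg 1. The record's exact parameterization may differ.
\[
  q_t = \frac{1}{\beta}\,\ln\!\left(1 + \alpha\beta t\right)
      \;\approx\; \frac{1}{\beta}\ln(\alpha\beta) + \frac{1}{\beta}\ln t .
\]
```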

  4. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures.

    PubMed

    Hegazy, Maha A; Lotfy, Hayam M; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-05

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, in contrast, failed to simultaneously determine the quaternary mixture components; it was able to determine only PAR and PAP, and the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and the absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficient and concentration matrices, and validation was performed with both cross-validation and external validation sets. Both methods were successfully applied for the determination of the studied drugs in pharmaceutical formulations. Copyright © 2016 Elsevier B.V. All rights reserved.
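
    As a minimal illustration of the CWT-as-preprocessing idea (not the paper's validated procedure), the sketch below turns each synthetic spectrum into continuous-wavelet coefficients with PyWavelets and regresses concentrations on them with scikit-learn's PLSRegression; the wavelet family, scale, and simulated spectra are arbitrary choices for the example.

```python
# Sketch of CWT preprocessing followed by PLS regression (illustrative settings).
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavelengths = np.linspace(200, 400, 256)

def spectrum(conc):
    # Toy two-component absorbance spectrum with overlapping Gaussian bands.
    return (conc[0] * np.exp(-((wavelengths - 270) / 15) ** 2)
            + conc[1] * np.exp(-((wavelengths - 285) / 15) ** 2)
            + 0.01 * rng.standard_normal(wavelengths.size))

Y = rng.uniform(0.1, 1.0, size=(30, 2))                # calibration concentrations
spectra = np.vstack([spectrum(c) for c in Y])

# One CWT scale per spectrum as the feature block ('mexh' at scale 10 is only
# an example; the paper screens several wavelet families).
X = np.vstack([pywt.cwt(s, scales=[10], wavelet="mexh")[0].ravel() for s in spectra])

pls = PLSRegression(n_components=2).fit(X, Y)
print(pls.predict(X[:3]).round(2), Y[:3].round(2))
```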

  5. FLUORINATION PROCESS

    DOEpatents

    McMillan, T.S.

    1957-10-29

    A process for the fluorination of uranium metal is described. It is known that uranium will react with liquid chlorine trifluoride, but the reaction proceeds at a slow rate. However, a mixture of a halogen trifluoride together with hydrogen fluoride reacts with uranium at a significantly faster rate than does a halogen trifluoride alone. Bromine trifluoride is suitable for use in the process, but chlorine trifluoride is preferred. Particularly suitable is a mixture of ClF3 and HF having a mole ratio (moles

  6. Catalysts and process for liquid hydrocarbon fuel production

    DOEpatents

    White, Mark G.; Ranaweera, Samantha A.; Henry, William P.

    2016-08-02

    The present invention provides a novel process and system in which a mixture of carbon monoxide and hydrogen synthesis gas, or syngas, is converted into hydrocarbon mixtures composed of high quality distillates, gasoline components, and lower molecular weight gaseous olefins in one reactor or step. The invention utilizes a novel supported bimetallic ion complex catalyst for conversion, and provides methods of preparing such novel catalysts and use of the novel catalysts in the process and system of the invention.

  7. 33 CFR 155.420 - Pumping, piping and discharge requirements for oceangoing ships of 100 gross tons and above but...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ship that has approved oily-water separating equipment for the processing of oily mixtures from bilges... least one pump installed to discharge oily mixtures through a fixed piping system to a reception... to stop each pump that is used to discharge oily mixtures; and (6) The ship has a stop valve...

  8. 33 CFR 155.420 - Pumping, piping and discharge requirements for oceangoing ships of 100 gross tons and above but...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ship that has approved oily-water separating equipment for the processing of oily mixtures from bilges... least one pump installed to discharge oily mixtures through a fixed piping system to a reception... to stop each pump that is used to discharge oily mixtures; and (6) The ship has a stop valve...

  9. 33 CFR 155.420 - Pumping, piping and discharge requirements for oceangoing ships of 100 gross tons and above but...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ship that has approved oily-water separating equipment for the processing of oily mixtures from bilges... least one pump installed to discharge oily mixtures through a fixed piping system to a reception... to stop each pump that is used to discharge oily mixtures; and (6) The ship has a stop valve...

  10. 33 CFR 155.350 - Oily mixture (bilge slops)/fuel oil tank ballast water discharges on oceangoing ships of less...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Oily mixture (bilge slops)/fuel... mixture (bilge slops)/fuel oil tank ballast water discharges on oceangoing ships of less than 400 gross... to a reception facility; or (2) Has approved oily-water separating equipment for processing oily...

  11. 33 CFR 155.350 - Oily mixture (bilge slops)/fuel oil tank ballast water discharges on oceangoing ships of less...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Oily mixture (bilge slops)/fuel... mixture (bilge slops)/fuel oil tank ballast water discharges on oceangoing ships of less than 400 gross... to a reception facility; or (2) Has approved oily-water separating equipment for processing oily...

  12. 33 CFR 155.350 - Oily mixture (bilge slops)/fuel oil tank ballast water discharges on oceangoing ships of less...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Oily mixture (bilge slops)/fuel... mixture (bilge slops)/fuel oil tank ballast water discharges on oceangoing ships of less than 400 gross... to a reception facility; or (2) Has approved oily-water separating equipment for processing oily...

  13. 33 CFR 155.350 - Oily mixture (bilge slops)/fuel oil tank ballast water discharges on oceangoing ships of less...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Oily mixture (bilge slops)/fuel... mixture (bilge slops)/fuel oil tank ballast water discharges on oceangoing ships of less than 400 gross... to a reception facility; or (2) Has approved oily-water separating equipment for processing oily...

  14. 33 CFR 155.420 - Pumping, piping and discharge requirements for oceangoing ships of 100 gross tons and above but...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ship that has approved oily-water separating equipment for the processing of oily mixtures from bilges... least one pump installed to discharge oily mixtures through a fixed piping system to a reception... to stop each pump that is used to discharge oily mixtures; and (6) The ship has a stop valve...

  15. Thermo-Chemical Conversion of Microwave Activated Biomass Mixtures

    NASA Astrophysics Data System (ADS)

    Barmina, I.; Kolmickovs, A.; Valdmanis, R.; Vostrikovs, S.; Zake, M.

    2018-05-01

    Thermo-chemical conversion of microwave activated wheat straw mixtures with wood or peat pellets is studied experimentally with the aim to provide more effective application of wheat straw for heat energy production. Microwave pre-processing of straw pellets is used to provide a partial decomposition of the main constituents of straw and to activate the thermo-chemical conversion of wheat straw mixtures with wood or peat pellets. The experimental study includes complex measurements of the elemental composition of biomass pellets (wheat straw, wood, peat), DTG analysis of their thermal degradation, FTIR analysis of the composition of combustible volatiles entering the combustor, the flame temperature, the heat output of the device and composition of the products by comparing these characteristics for mixtures with unprocessed and mw pre-treated straw pellets. The results of experimental study confirm that mw pre-processing of straw activates the thermal decomposition of mixtures providing enhanced formation of combustible volatiles. This leads to improvement of the combustion conditions in the flame reaction zone, completing thus the combustion of volatiles, increasing the flame temperature, the heat output from the device, the produced heat energy per mass of burned mixture and decreasing at the same time the mass fraction of unburned volatiles in the products.

  16. The use of gaseous fuels mixtures for SI engines propulsion

    NASA Astrophysics Data System (ADS)

    Flekiewicz, M.; Kubica, G.

    2016-09-01

    This paper presents the results of SI engine tests carried out for different gaseous fuels. The analysis made it possible to define correlations between fuel composition and engine operating parameters. The tests covered various gaseous mixtures: methane with hydrogen, and LPG with DME, in different shares. The first group, considered low-carbon-content fuels, can be characterized by low CO2 emissions. The flammability of the hydrogen added to these mixtures makes it act as a combustion-process activator, which is why hydrogen addition improves the energy conversion by about 3%. The second group of fuels consists of LPG and DME mixtures. DME mixes perfectly with LPG and, unlike other hydrocarbon fuels, also contains oxygen, which makes the stoichiometric mixture less oxygen-demanding. For this fuel an improvement in volumetric and overall engine efficiency was observed compared to LPG; for an 11% DME share in the mixture, an improvement of 2% in efficiency was noted. Standard CNG/LPG feeding systems were used during the tests, which underlines the practical value of the research. The test-stand results were followed by a combustion process simulation, including exhaust formation and charge exchange.

  17. A Novel Calibration-Minimum Method for Prediction of Mole Fraction in Non-Ideal Mixture.

    PubMed

    Shibayama, Shojiro; Kaneko, Hiromasa; Funatsu, Kimito

    2017-04-01

    This article proposes a novel concentration prediction model that requires little training data and is useful for rapid process understanding. Process analytical technology is currently popular, especially in the pharmaceutical industry, for enhancing process understanding and process control. A calibration-free method, iterative optimization technology (IOT), was previously proposed to predict pure-component concentrations, because calibration methods such as partial least squares require a large number of training samples, leading to high costs. However, IOT cannot be applied to concentration prediction in non-ideal mixtures because its basic equation is derived from the Beer-Lambert law, which does not hold for non-ideal mixtures. We propose a novel method that predicts pure-component concentrations in mixtures from a small number of training samples, assuming that spectral changes arising from molecular interactions can be expressed as a function of concentration. The proposed method is named IOT with virtual molecular interaction spectra (IOT-VIS) because it takes the spectral change into account as a virtual spectrum x_nonlin,i. It was confirmed through two case studies that the predictive accuracy of IOT-VIS was the highest among existing IOT methods.
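
    The ideal-mixture baseline that basic IOT builds on is Beer-Lambert linearity: the mixture spectrum is modeled as a fraction-weighted sum of pure-component spectra. A minimal classical-least-squares sketch of that ideal case is given below with synthetic spectra; the virtual interaction spectrum x_nonlin,i that IOT-VIS adds for non-ideal mixtures is not reproduced here.

```python
# Ideal-mixture (Beer-Lambert) baseline behind IOT: recover mole fractions by
# non-negative least squares against pure-component spectra. Synthetic data only.
import numpy as np
from scipy.optimize import nnls

wavenumbers = np.linspace(1000, 1800, 200)
pure_a = np.exp(-((wavenumbers - 1250) / 40) ** 2)   # pure spectrum, component A
pure_b = np.exp(-((wavenumbers - 1450) / 60) ** 2)   # pure spectrum, component B
S = np.column_stack([pure_a, pure_b])

true_x = np.array([0.3, 0.7])
mixture = S @ true_x + 0.005 * np.random.default_rng(1).standard_normal(wavenumbers.size)

x, _ = nnls(S, mixture)   # non-negative least-squares estimate
x /= x.sum()              # normalize to mole fractions
print(x.round(3))         # approximately [0.3, 0.7]
```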

  18. Development of a Process for the Spinning of Synthetic Spider Silk

    DOE PAGES

    Copeland, Cameron G.; Bell, Brianne E.; Christensen, Chad D.; ...

    2015-06-05

    Spider silks have unique mechanical properties, but current efforts to duplicate those properties with recombinant proteins have been unsuccessful. This study was designed to develop a single process to spin fibers with excellent and consistent mechanical properties. The as-spun fibers produced were brittle, but stretching the fibers greatly improved their mechanical properties. A water dip or water stretch further increased the strength and elongation of the synthetic spider silk fibers. Given the promising results of the water stretch, a mechanical double-stretch system was developed. Both a methanol/water mixture and an isopropanol/water mixture were independently used to stretch the fibers with this system. We found that the methanol mixture produced fibers with high tensile strength while the isopropanol mixture produced fibers with high elongation.

  19. Premixed flame propagation in combustible particle cloud mixtures

    NASA Technical Reports Server (NTRS)

    Seshadri, K.; Yang, B.

    1993-01-01

    The structures of premixed flames propagating in combustible systems, containing uniformly distributed volatile fuel particles, in an oxidizing gas mixtures is analyzed. The experimental results show that steady flame propagation occurs even if the initial equivalence ratio of the combustible mixture based on the gaseous fuel available in the particles, phi(u) is substantially larger than unity. A model is developed to explain these experimental observations. In the model it is presumed that the fuel particles vaporize first to yield a gaseous fuel of known chemical composition which then reacts with oxygen in a one-step overall process. It is shown that the interplay of vaporization kinetics and oxidation process, can result in steady flame propagation in combustible mixtures where the value of phi(u) is substantially larger than unity. This prediction is in agreement with experimental observations.

  20. Radiant non-catalytic recuperative reformer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khinkis, Mark J.; Kozlov, Aleksandr P.

    A radiant, non-catalytic recuperative reformer has a flue gas flow path for conducting hot exhaust gas from a thermal process and a reforming mixture flow path for conducting a reforming mixture. At least a portion of the reforming mixture flow path is positioned adjacent to the flue gas flow path to permit heat transfer from the hot exhaust gas to the reforming mixture. The reforming mixture flow path contains substantially no material commonly used as a catalyst for reforming hydrocarbon fuel (e.g., nickel oxide, platinum group elements or rhenium), but instead the reforming mixture is reformed into a higher calorific fuel via reactions due to the heat transfer and residence time. In a preferred embodiment, a portion of the reforming mixture flow path is positioned outside of the flue gas flow path for a relatively large residence time.

  1. Non-catalytic recuperative reformer

    DOEpatents

    Khinkis, Mark J.; Kozlov, Aleksandr P.; Kurek, Harry

    2015-12-22

    A non-catalytic recuperative reformer has a flue gas flow path for conducting hot flue gas from a thermal process and a reforming mixture flow path for conducting a reforming mixture. At least a portion of the reforming mixture flow path is embedded in the flue gas flow path to permit heat transfer from the hot flue gas to the reforming mixture. The reforming mixture flow path contains substantially no material commonly used as a catalyst for reforming hydrocarbon fuel (e.g., nickel oxide, platinum group elements or rhenium), but instead the reforming mixture is reformed into a higher calorific fuel via reactions due to the heat transfer and residence time. In a preferred embodiment, extended surfaces of metal material such as stainless steel or metal alloy that are high in nickel content are included within at least a portion of the reforming mixture flow path.

  2. Characterization of composting mixtures and compost of rabbit by-products to obtain a quality product and plant proposal for industrial production.

    PubMed

    Bianchi, Biagio; Papajova, Ingrid; Tamborrino, Rosanna; Ventrella, Domenico; Vitti, Carolina

    2015-01-01

    In this study we observed the effects of using rabbit manure and slaughtering by-products in a composting process. Three piles of this material, 4700 kg each, with different amounts and C/N ratios, were investigated, and experimental tests were carried out in an industrial horizontal-axis reactor using a prototype turning machine. The composting time lasted 85 days, and two experimental cycles were conducted: one in winter and one in summer. In the winter test, a mesophilic reaction started only in the control mixture (animal manure + slaughtering by-products without straw). It is noteworthy that the three investigated mixtures produced a compost soil amendment with good agronomic potential but with parameters close to the limits of the law. In the summer test, there was thermophilic fermentation in all mixtures and a better-quality compost was obtained, meeting all the agronomic and legislative constraints. For each pile, we examined the progression of the fermentation process and thus the plant limitations that prevented a correct composting process. The results obtained in this study are useful for the development of appropriate mixtures, machines, and plants assuring continuity and reliability in the composting of biomass from the rabbit industry.

  3. Terahertz acoustic phonon detection from a compact surface layer of spherical nanoparticles powder mixture of aluminum, alumina and multi-walled carbon nanotube

    NASA Astrophysics Data System (ADS)

    Abouelsayed, A.; Ebrahim, M. R.; El hotaby, W.; Hassan, S. A.; Al-Ashkar, Emad

    2017-10-01

    We present a terahertz spectroscopy study of a spherical nanoparticle powder mixture of aluminum, alumina, and MWCNTs produced by surface mechanical attrition treatment (SMAT) of aluminum substrates. Surface alloying of the Al, Al2O3 (0.95%) and MWCNT (0.05%) powder mixture occurred during the SMAT process, in which ball bombardment formed a compact surface layer of about 200 μm from the mixture. The Al2O3 (alumina) powder played a significant role in the distribution of MWCNTs on the surface, which were held in micro-cavities at deformation sites created by the SMAT of Al. The benefit is the effect on the resulting optical properties of the surface in the terahertz frequency range, arising from electrical-isolation confinement effects and disturbances exerted on the electronic resonance of Al in the same frequency range. A THz acoustic phonon around 0.53-0.6 THz (17-20 cm-1) was observed at ambient conditions for the spherical nanoparticle powder mixture of Al, Al2O3 and MWCNTs. These results suggest that the presence of Al2O3 and MWCNTs during the SMAT process enables optical detection of this acoustic phonon in the THz frequency range.

  4. Method for converting uranium oxides to uranium metal

    DOEpatents

    Duerksen, Walter K.

    1988-01-01

    A process is described for converting scrap and waste uranium oxide to uranium metal. The uranium oxide is sequentially reduced with a suitable reducing agent to a mixture of uranium metal and oxide products. The uranium metal is then converted to uranium hydride and the uranium hydride-containing mixture is then cooled to a temperature less than -100 °C in an inert liquid which renders the uranium hydride ferromagnetic. The uranium hydride is then magnetically separated from the cooled mixture. The separated uranium hydride is readily converted to uranium metal by heating in an inert atmosphere. This process is environmentally acceptable and eliminates the use of hydrogen fluoride as well as the explosive conditions encountered in the previously employed bomb-reduction processes utilized for converting uranium oxides to uranium metal.

  5. Smooth operator: The effects of different 3D mesh retriangulation protocols on the computation of Dirichlet normal energy.

    PubMed

    Spradley, Jackson P; Pampush, James D; Morse, Paul E; Kay, Richard F

    2017-05-01

    Dirichlet normal energy (DNE) is a metric of surface topography that has been used to evaluate the relationship between the surface complexity of primate cheek teeth and dietary categories. This study examines the effects of different 3D mesh retriangulation protocols on DNE. We examine how different protocols influence the DNE of a simple geometric shape (a hemisphere) to gain a more thorough understanding than can be achieved by investigating a complex biological surface such as a tooth crown. We calculate DNE on 3D surface meshes of hemispheres and on primate molars subjected to various retriangulation protocols, including smoothing algorithms, smoothing amounts, target face counts, and criteria for boundary face exclusion. Software used includes R, MorphoTester, Avizo, and MeshLab. DNE was calculated using the R package "molaR." In all cases, smoothing as performed in Avizo sharply decreases DNE initially, after which DNE becomes stable. Using a broader boundary exclusion criterion or performing additional smoothing (using "mesh fairing" methods) further decreases DNE. Increasing the mesh face count also results in increased DNE on tooth surfaces. Different retriangulation protocols yield different DNE values for the same surfaces, and should not be combined in meta-analyses. Increasing face count will capture surface microfeatures, but at the expense of computational speed. More aggressive smoothing is more likely to alter the essential geometry of the surface. A protocol is proposed that limits potential artifacts created during surface production while preserving pertinent features on the occlusal surface. © 2017 Wiley Periodicals, Inc.
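
    For readers who want to see what the metric itself computes, the sketch below is a minimal NumPy version of one common per-face formulation of DNE (energy density tr(G^{-1} H) per triangle, summed with area weights). It assumes vertex normals are already available and deliberately ignores the smoothing, face-count, and boundary-exclusion choices that this study evaluates; for real analyses the molaR or MorphoTester implementations cited in the record should be used.

```python
import numpy as np

def dirichlet_normal_energy(vertices, faces, normals):
    """Sum over faces of the Dirichlet energy density of the normal map
    times face area.

    vertices : (V, 3) array of vertex coordinates
    faces    : (F, 3) integer array of vertex indices per triangle
    normals  : (V, 3) array of unit vertex normals
    """
    dne = 0.0
    for i, j, k in faces:
        u1, u2 = vertices[j] - vertices[i], vertices[k] - vertices[i]
        nu1, nu2 = normals[j] - normals[i], normals[k] - normals[i]
        area = 0.5 * np.linalg.norm(np.cross(u1, u2))
        if area < 1e-12:                      # skip degenerate triangles
            continue
        # Metric of the position map ...
        G = np.array([[u1 @ u1, u1 @ u2],
                      [u1 @ u2, u2 @ u2]])
        # ... and the analogous matrix for the normal map
        H = np.array([[nu1 @ nu1, nu1 @ nu2],
                      [nu1 @ nu2, nu2 @ nu2]])
        e = np.trace(np.linalg.solve(G, H))   # energy density on this face
        dne += e * area
    return dne
```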

  6. Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data

    NASA Astrophysics Data System (ADS)

    Glüsenkamp, Thorsten

    2018-06-01

    Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function F_D, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average R_n with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
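
    The general weighted-event result in this record involves the Lauricella function F_D, but the idea can be illustrated in the simplest special case: all Monte Carlo events carry the same weight and a flat prior is placed on the true MC expectation, in which case the marginalization can be done by hand and yields a negative-binomial PMF. The sketch below shows only that special case, not the paper's general formula, and the parameter values are illustrative.

```python
from math import lgamma, exp, log

def marginalized_poisson_pmf(n, k, w):
    """P(n observed events | k MC events of equal weight w), obtained by
    marginalizing the Poisson mean lam = w * mu over a Gamma(k + 1, 1)
    posterior for the true MC expectation mu (flat prior).  This is a
    negative-binomial distribution, broader than a plain Poisson."""
    log_p = (lgamma(n + k + 1) - lgamma(n + 1) - lgamma(k + 1)
             + n * log(w) - (n + k + 1) * log(1.0 + w))
    return exp(log_p)

def poisson_pmf(n, lam):
    return exp(n * log(lam) - lam - lgamma(n + 1))

# 5 MC events of weight 2 give an estimated expectation of 10; with so few
# MC events the marginalized PMF is visibly wider than the plain Poisson.
k, w = 5, 2.0
for n in (5, 10, 15):
    print(n, marginalized_poisson_pmf(n, k, w), poisson_pmf(n, k * w))
```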

  7. Electrochemical separation of hydrogen from reformate using PEM fuel cell technology

    NASA Astrophysics Data System (ADS)

    Gardner, C. L.; Ternan, M.

    This article is an examination of the feasibility of electrochemically separating hydrogen obtained by steam reforming a hydrocarbon or alcohol source. A potential advantage of this process is that the carbon dioxide-rich exhaust stream should be able to be captured and stored, thereby reducing greenhouse gas emissions. Results are presented for the performance of the anode of a proton exchange membrane (PEM) electrochemical cell for the separation of hydrogen from a H2-CO2 gas mixture and from a H2-CO2-CO gas mixture. Experiments were carried out using a single-cell, state-of-the-art PEM fuel cell. The anode was fed with either a H2-CO2 gas mixture or a H2-CO2-CO gas mixture and hydrogen was evolved at the cathode. All experiments were performed at room temperature and atmospheric pressure. With the H2-CO2 gas mixture the hydrogen extraction efficiency is quite high. When the gas mixture included CO, however, the hydrogen extraction efficiency is relatively poor. To improve the efficiency for the separation of the gas mixture containing CO, the effect of periodic pulsing on the anode potential was examined. Results show that pulsing can substantially reduce the anode potential, thereby improving the overall efficiency of the separation process, although the anode potential of the CO-poisoned and pulsed cell still lies above that of an unpoisoned cell.

  8. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data.

    PubMed

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2014-06-01

    Emission sources of volatile organic compounds (VOCs*) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) evaluate mixtures (including dependencies), and (3) identify determinants of VOC exposure. METHODS VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999-2001) and the National Health and Nutrition Examination Survey (NHANES; 1999-2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods. Specific Aim 1. 
To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model's goodness of fit. Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data. Specific Aim 2. Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture's components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations. Specific Aim 3. Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs). Specific Aim 1. Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10(-4), and 13% of all participants had risk levels above 10(-2). Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models. DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture distributions, adaptively selecting the number of needed clusters, and providing uncertainty estimates. Although the results apply to the RIOPA data set, GEV distributions and mixture models appear more broadly applicable. These models can be used to simulate VOC distributions, which are neither normally nor lognormally distributed, and they accurately represent the highest exposures, which may have the greatest health significance. Specific Aim 2. Four VOC mixtures were identified and apportioned by PMF; they represented gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection byproducts, and cleaning products and odorants. The last mixture (cleaning products and odorants) accounted for the largest fraction of an individual's total exposure (average of 42% across RIOPA participants). Often, a single compound dominated a mixture but the mixture fractions were heterogeneous; that is, the fractions of the compounds changed with the concentration of the mixture. 
Three VOC mixtures were identified by toxicologic mode of action and represented VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10(-3) for about 10% of RIOPA participants. The dependency structures of the VOC mixtures in the RIOPA data set fitted Gumbel (two mixtures) and t copulas (four mixtures). These copula types emphasize dependencies found in the upper and lower tails of a distribution. The copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions. Specific Aim 3. In an analysis focused on the home environment and the outdoor (close to home) environment, home VOC concentrations dominated personal exposures (66% to 78% of the total exposure, depending on VOC); this was largely the result of the amount of time participants spent at home and the fact that indoor concentrations were much higher than outdoor concentrations for most VOCs. In a different analysis focused on the sources inside the home and outside (but close to the home), it was assumed that 100% of VOCs from outside sources would penetrate the home. Outdoor VOC sources accounted for 5% (d-limonene) to 81% (carbon tetrachloride [CTC]) of the total exposure. Personal exposure and indoor measurements had similar determinants depending on the VOC. Gasoline-related VOCs (e.g., benzene and methyl tert-butyl ether [MTBE]) were associated with city, residences with attached garages, pumping gas, wind speed, and home air exchange rate (AER). Odorant and cleaning-related VOCs (e.g., 1,4-DCB and chloroform) also were associated with city, and a residence's AER, size, and family members showering. Dry-cleaning and industry-related VOCs (e.g., tetrachloroethylene [or perchloroethylene, PERC] and trichloroethylene [TCE]) were associated with city, type of water supply to the home, and visits to the dry cleaner. These and other relationships were significant, they explained from 10% to 40% of the variance in the measurements, and are consistent with known emission sources and those reported in the literature. Outdoor concentrations of VOCs had only two determinants in common: city and wind speed. Overall, personal exposure was dominated by the home setting, although a large fraction of indoor VOC concentrations were due to outdoor sources. City of residence, personal activities, household characteristics, and meteorology were significant determinants. Concentrations in RIOPA were considerably lower than levels in the nationally representative NHANES for all VOCs except MTBE and 1,4-DCB. Differences between RIOPA and NHANES results can be explained by contrasts between the sampling designs and staging in the two studies, and by differences in the demographics, smoking, employment, occupations, and home locations. (ABSTRACT TRUNCATED)
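
    As a rough, self-contained illustration of the Dirichlet process mixture (DPM) idea used in Specific Aim 1, the sketch below fits scikit-learn's variational, truncated DP mixture of normals to synthetic log-concentration data. The data, truncation level, and prior settings are placeholders, not values from the RIOPA analysis, which used a fully Bayesian fit.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Stand-in for log-transformed personal VOC concentrations:
# a two-mode population plus a heavy right tail.
x = np.concatenate([rng.normal(-1.0, 0.5, 400),
                    rng.normal(1.5, 0.7, 150),
                    rng.normal(3.5, 0.9, 30)]).reshape(-1, 1)

# Truncated Dirichlet-process mixture of normals: the stick-breaking prior
# lets the model switch off unneeded components instead of fixing their number.
dpm = BayesianGaussianMixture(
    n_components=10,                                 # truncation level
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,                  # DP precision (alpha)
    covariance_type="full",
    max_iter=1000,
    random_state=0,
).fit(x)

print("component weights:", np.round(dpm.weights_, 3))
print("effective clusters:", int((dpm.weights_ > 0.01).sum()))
```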

  9. How Is the Freezing Point of a Binary Mixture of Liquids Related to the Composition? A Guided Inquiry Experiment

    ERIC Educational Resources Information Center

    Hunnicutt, Sally S.; Grushow, Alexander; Whitnell, Rob

    2017-01-01

    The principles of process-oriented guided inquiry learning (POGIL) are applied to a binary solid-liquid mixtures experiment. Over the course of two learning cycles, students predict, measure, and model the phase diagram of a mixture of fatty acids. The enthalpy of fusion of each fatty acid is determined from the results. This guided inquiry…

  10. 33 CFR 158.220 - Ports and terminals loading more than 1,000 metric tons of oil other than crude oil or bunker oil.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... processing in the amount of 10 metric tons (11 short tons); (b) Bilge water containing oily mixtures in the... average, whichever quantity is greater; (c) Ballast water containing oily mixtures in the amount of 30% of... FACILITIES FOR OIL, NOXIOUS LIQUID SUBSTANCES, AND GARBAGE Criteria for Reception Facilities: Oily Mixtures...

  11. 33 CFR 158.220 - Ports and terminals loading more than 1,000 metric tons of oil other than crude oil or bunker oil.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... processing in the amount of 10 metric tons (11 short tons); (b) Bilge water containing oily mixtures in the... average, whichever quantity is greater; (c) Ballast water containing oily mixtures in the amount of 30% of... FACILITIES FOR OIL, NOXIOUS LIQUID SUBSTANCES, AND GARBAGE Criteria for Reception Facilities: Oily Mixtures...

  12. 33 CFR 158.220 - Ports and terminals loading more than 1,000 metric tons of oil other than crude oil or bunker oil.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... processing in the amount of 10 metric tons (11 short tons); (b) Bilge water containing oily mixtures in the... average, whichever quantity is greater; (c) Ballast water containing oily mixtures in the amount of 30% of... FACILITIES FOR OIL, NOXIOUS LIQUID SUBSTANCES, AND GARBAGE Criteria for Reception Facilities: Oily Mixtures...

  13. 33 CFR 158.220 - Ports and terminals loading more than 1,000 metric tons of oil other than crude oil or bunker oil.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... processing in the amount of 10 metric tons (11 short tons); (b) Bilge water containing oily mixtures in the... average, whichever quantity is greater; (c) Ballast water containing oily mixtures in the amount of 30% of... FACILITIES FOR OIL, NOXIOUS LIQUID SUBSTANCES, AND GARBAGE Criteria for Reception Facilities: Oily Mixtures...

  14. 33 CFR 158.220 - Ports and terminals loading more than 1,000 metric tons of oil other than crude oil or bunker oil.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... processing in the amount of 10 metric tons (11 short tons); (b) Bilge water containing oily mixtures in the... average, whichever quantity is greater; (c) Ballast water containing oily mixtures in the amount of 30% of... FACILITIES FOR OIL, NOXIOUS LIQUID SUBSTANCES, AND GARBAGE Criteria for Reception Facilities: Oily Mixtures...

  15. Coal liquefaction process using pretreatment with a binary solvent mixture

    DOEpatents

    Miller, R.N.

    1986-10-14

    An improved process for thermal solvent refining or hydroliquefaction of non-anthracitic coal at elevated temperatures under hydrogen pressure in a hydrogen donor solvent comprises pretreating the coal with a binary mixture of an aromatic hydrocarbon and an aliphatic alcohol at a temperature below 300 °C before the hydroliquefaction step. This treatment generally increases both conversion of coal and yields of oil. 1 fig.

  16. Coal liquefaction process using pretreatment with a binary solvent mixture

    DOEpatents

    Miller, Robert N.

    1986-01-01

    An improved process for thermal solvent refining or hydroliquefaction of non-anthracitic coal at elevated temperatures under hydrogen pressure in a hydrogen donor solvent comprises pretreating the coal with a binary mixture of an aromatic hydrocarbon and an aliphatic alcohol at a temperature below 300 °C before the hydroliquefaction step. This treatment generally increases both conversion of coal and yields of oil.

  17. Mixture risk assessment: a case study of Monsanto experiences.

    PubMed

    Nair, R S; Dudek, B R; Grothe, D R; Johannsen, F R; Lamb, I C; Martens, M A; Sherman, J H; Stevens, M W

    1996-01-01

    Monsanto employs several pragmatic approaches for evaluating the toxicity of mixtures. These approaches are similar to those recommended by many national and international agencies. When conducting hazard and risk assessments, priority is always given to using data collected directly on the mixture of concern. To provide an example of the first tier of evaluation, actual data on acute respiratory irritation studies on mixtures were evaluated to determine whether the principle of additivity was applicable to the mixture evaluated. If actual data on the mixture are unavailable, extrapolation across similar mixtures is considered. Because many formulations are quite similar in composition, the toxicity data from one mixture can be extended to a closely related mixture in a scientifically justifiable manner. An example of a family of products where such extrapolations have been made is presented to exemplify this second approach. Lastly, if data on similar mixtures are unavailable, data on component fractions are used to predict the toxicity of the mixture. In this third approach, process knowledge and scientific judgement are used to determine how the known toxicological properties of the individual fractions affect toxicity of the mixture. Three examples of plant effluents where toxicological data on fractions were used to predict the toxicity of the mixture are discussed. The results of the analysis are used to discuss the predictive value of each of the above mentioned toxicological approaches for evaluating chemical mixtures.
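
    The additivity screening mentioned in the first tier of evaluation is often carried out with the classic dose-addition formula for mixture effect concentrations. The sketch below shows that calculation with invented numbers; these are not Monsanto data, and the formula ignores response addition and interaction terms.

```python
def dose_additive_ec(fractions, component_ecs):
    """Predicted mixture effect concentration under dose additivity:
    1 / EC_mix = sum_i (fraction_i / EC_i).
    `fractions` must sum to 1; `component_ecs` are the single-substance
    effect concentrations in the same units as the desired mixture EC."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(f / ec for f, ec in zip(fractions, component_ecs))

# Hypothetical three-component mixture (illustrative numbers only, in ppm):
print(dose_additive_ec([0.5, 0.3, 0.2], [1500.0, 400.0, 5000.0]))
```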

  18. Impact of Adding Biopreparations on the Anaerobic Co-Digestion of Sewage Sludge with Grease Trap Waste

    NASA Astrophysics Data System (ADS)

    Worwąg, Małgorzata

    2016-09-01

    The aim of the study was to evaluate the effect of using biopreparations on the efficiency of the co-fermentation process. Commercial bacterial biopreparations DBC Plus Type L and DBC Plus Type R5 and yeast biopreparations were used in the study. The co-fermentation of sewage sludge with grease trap waste from a production plant that manufactured methyl esters of fatty acids was analysed in the laboratory under mesophilic conditions. The sludge in the reactor was replaced once a day, with a hydraulic retention time of 10 days. Grease trap waste accounted for 35 wt% of the fermentation mixture. The stabilization process was monitored every day based on measurements of biogas volume. Addition of the yeast biopreparation to methane fermentation of sewage sludge with grease trap waste increased mean daily biogas production from 6.9 dm3 (control mixture) to 9.21 dm3 (mixture M3). No differences in biogas production were found for the other cases (mixtures M1 and M2). A similar relationship was observed for the methane content of the biogas.

  19. Constant-Pressure Combustion Charts Including Effects of Diluent Addition

    NASA Technical Reports Server (NTRS)

    Turner, L Richard; Bogart, Donald

    1949-01-01

    Charts are presented for the calculation of (a) the final temperatures and the temperature changes involved in constant-pressure combustion processes of air and in products of combustion of air and hydrocarbon fuels, and (b) the quantity of hydrocarbon fuels required in order to attain a specified combustion temperature when water, alcohol, water-alcohol mixtures, liquid ammonia, liquid carbon dioxide, liquid nitrogen, liquid oxygen, or their mixtures are added to air as diluents or refrigerants. The ideal combustion process and combustion with incomplete heat release from the primary fuel and from combustible diluents are considered. The effect of preheating the mixture of air and diluents and the effect of an initial water-vapor content in the combustion air on the required fuel quantity are also included. The charts are applicable only to processes in which the final mixture is leaner than stoichiometric and at temperatures where dissociation is unimportant. A chart is also included to permit the calculation of the stoichiometric ratio of hydrocarbon fuel to air with diluent addition. The use of the charts is illustrated by numerical examples.
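
    The chart calculations can be approximated, very crudely, by a constant-specific-heat energy balance per kilogram of air. The sketch below is such a simplification (constant mean cp, complete combustion, no dissociation, illustrative property values), not the chart method itself; it only shows how diluent addition raises the fuel requirement for a given final temperature.

```python
def fuel_per_kg_air(T1, T2, d, cp=1.15e3, lhv=43.0e6, latent=2.26e6):
    """Rough fuel requirement (kg fuel per kg dry air) to reach a final
    temperature T2 [K] from inlet T1 [K] when d kg of liquid water per kg
    of air is added as diluent.  cp is a constant mean specific heat
    [J/(kg K)], lhv the fuel lower heating value [J/kg], latent the water
    heat of vaporization [J/kg].
    Energy balance: f*lhv = (1 + f + d)*cp*(T2 - T1) + d*latent."""
    dT = T2 - T1
    return ((1.0 + d) * cp * dT + d * latent) / (lhv - cp * dT)

# Dry air versus 5% water injection (illustrative numbers only):
print(fuel_per_kg_air(T1=600.0, T2=1400.0, d=0.00))
print(fuel_per_kg_air(T1=600.0, T2=1400.0, d=0.05))
```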

  20. Evolution of process control parameters during extended co-composting of green waste and solid fraction of cattle slurry to obtain growing media.

    PubMed

    Cáceres, Rafaela; Coromina, Narcís; Malińska, Krystyna; Marfà, Oriol

    2015-03-01

    This study aimed to monitor process parameters when two by-products (green waste - GW, and the solid fraction of cattle slurry - SFCS) were composted to obtain growing media. Using compost in growing medium mixtures involves prolonged composting processes that can last at least half a year. It is therefore crucial to study the parameters that affect compost stability as measured in the field in order to shorten the composting process at composting facilities. Two mixtures were prepared: GW25 (25% GW and 75% SFCS, v/v) and GW75 (75% GW and 25% SFCS, v/v). The different raw mixtures resulted in the production of two different growing media, and the evolution of process management parameters was different. A new parameter is proposed that reflects both attaining the thermophilic temperature range and maintaining it during composting; it would be useful not only for optimizing composting processes but also for assessing the degree of hygienization. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Simulation of uranium and plutonium oxides compounds obtained in plasma

    NASA Astrophysics Data System (ADS)

    Novoselov, Ivan Yu.; Karengin, Alexander G.; Babaev, Renat G.

    2018-03-01

    The aim of this paper is to carry out thermodynamic simulation of the mixed plutonium and uranium oxide compounds obtained after plasma treatment of plutonium and uranium nitrates, and to determine the optimal water-salt-organic mixture composition as well as the conditions for plasma treatment (temperature, air mass fraction). The authors conclude that the treatment of nitrate solutions should be carried out in the form of water-salt-organic mixtures to guarantee energy-efficient production of oxide compounds for mixed-oxide fuel, and they explain the choice of the chemical composition of the water-salt-organic mixture. It has been confirmed that a temperature of 1200 °C is optimal for carrying out the process. The authors demonstrate that the condensed products of plasma treatment of the water-salt-organic mixture contain the targeted products (uranium and plutonium oxides) and that the gaseous products are environmentally friendly. In conclusion, basic operational modes for carrying out the process are presented.

  2. Thermally induced processes in mixtures of aluminum with organic acids after plastic deformations under high pressure

    NASA Astrophysics Data System (ADS)

    Zhorin, V. A.; Kiselev, M. R.; Roldugin, V. I.

    2017-11-01

    DSC is used to measure the thermal effects of processes in mixtures of solid organic dibasic acids with powdered aluminum, subjected to plastic deformation under pressures in the range of 0.5-4.0 GPa using an anvil-type high-pressure setup. Analysis of thermograms obtained for the samples after plastic deformation suggests a correlation between the exothermal peaks observed around the temperatures of degradation of the acids and the thermally induced chemical reactions between products of acid degradation and freshly formed surfaces of aluminum particles. The release of heat in the mixtures begins at 30-40°C. The thermal effects in the mixtures of different acids change according to the order of acid reactivity in solutions. The extrema in the pressure dependences of the enthalpies of these thermal effects are associated with the rearrangement of the electron subsystem of aluminum upon plastic deformation at high pressures.

  3. Rapid gas hydrate formation process

    DOEpatents

    Brown, Thomas D.; Taylor, Charles E.; Unione, Alfred J.

    2013-01-15

    The disclosure provides a method and apparatus for forming gas hydrates from a two-phase mixture of water and a hydrate-forming gas. The two-phase mixture is created in a mixing zone which may be wholly included within the body of a spray nozzle. The two-phase mixture is subsequently sprayed into a reaction zone, where the reaction zone is under pressure and temperature conditions suitable for formation of the gas hydrate. The reaction zone pressure is less than the mixing zone pressure so that expansion of the hydrate-forming gas in the mixture provides a degree of cooling by the Joule-Thomson effect and provides more intimate mixing between the water and the hydrate-forming gas. The result of the process is the formation of gas hydrates continuously and with a greatly reduced induction time. An apparatus for conduct of the method is further provided.

  4. Chemosensitivity of the osphradium of the pond snail Lymnaea stagnalis

    PubMed

    Wedemeyer; Schild

    1995-01-01

    The osphradium of the pond snail Lymnaea stagnalis was studied to determine the stimuli to which this organ responds. The following stimuli were tested: hypoxia, hypercapnia, a mixture of amino acids, a mixture of citralva and amyl acetate and a mixture of lyral, lilial and ethylvanillin. The mean nerve activity consistently increased with elevated PCO2, whereas hypoxia produced variable effects. The nerve activity became rhythmic upon application of citralva and amyl acetate, but it increased in a non-rhythmic way upon application of the other two odorant mixtures tested. Whole-cell patch-clamp recordings were made from a group of 15 neurones that lay next to the issuing osphradial nerve, to determine whether ganglion cells were involved in olfactory signal processing. All neurones tested responded to at least one of the three mixtures of odorants. Both excitatory and inhibitory responses occurred. Our results indicate that the osphradium of the pond snail Lymnaea stagnalis is sensitive to elevated PCO2 as well as to three different classes of odorants. In addition, at least some neurones within the osphradium are involved in the processing of olfactory information.

  5. Process of making carbon-carbon composites

    NASA Technical Reports Server (NTRS)

    Kowbel, Witold (Inventor); Withers, James C. (Inventor); Bruce, Calvin (Inventor); Vaidyanathan, Ranji (Inventor); Loutfy, Raouf O. (Inventor)

    2000-01-01

    A carbon composite structure, for example, an automotive engine piston, is made by preparing a matrix including a mixture of non-crystalline carbon particulate, soluble in an organic solvent, and a binder that has a liquid phase. The non-crystalline particulate also contains residual carbon-hydrogen bonding. An uncured structure is formed by combining the matrix mixture with reinforcement, for example, carbon fibers such as graphite dispersed in the mixture and/or graphite cloth embedded in the mixture. The uncured structure is cured by pyrolyzing it in an inert atmosphere such as argon. Advantageously, the graphite reinforcement material is whiskered prior to combining it with the matrix mixture by a novel method involving passing a gaseous metal suboxide over the graphite surface.

  6. Alternative process schemes for coal conversion. Progress report No. 1, October 1, 1978--January 31, 1979

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sansone, M.J.

    1979-02-01

    On the basis of simple, first-approximation calculations, it has been shown that catalytic gasification and hydrogasification are inherently superior to conventional gasification with respect to carbon utilization and thermal efficiency. However, most processes which are directed toward the production of substitute natural gas (SNG) by direct combination of coal with steam at low temperatures (catalytic processes) or with hydrogen (hydrogasification) will require a step for separation of product SNG from a recycle stream. The success or failure of the process could well depend upon the economics of this separation scheme. The energetics of the separation of mixtures of ideal gases have been considered in some detail. Minimum energies for complete separation of representative effluent mixtures have been calculated, as well as energies for separation into product and recycle streams. The gas mixtures include binary systems of H2 and CH4 and ternary mixtures of H2, CH4, and CO. A brief summary of a number of different real separation schemes has also been included. We have arbitrarily divided these into five categories: liquefaction, absorption, adsorption, chemical, and diffusional methods. These separation methods will be screened and the more promising methods examined in more detail in later reports. Finally, a brief mention of alternative coal conversion processes concludes this report.

  7. Mixture optimization for mixed gas Joule-Thomson cycle

    NASA Astrophysics Data System (ADS)

    Detlor, J.; Pfotenhauer, J.; Nellis, G.

    2017-12-01

    An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters. This study expands on prior research by exploring higher heat rejection temperatures and lower pressure ratios. A mixture optimization model has been developed which determines an optimal three-component mixture by maximizing the minimum value of the isothermal enthalpy change, Δh_T, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed-gas JT system with load temperatures down to 110 K and supply temperatures above room temperature for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the percentage of the heat exchanger that operates in the two-phase range, in order to begin the process of selecting a mixture for experimental investigation.
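
    The selection criterion described here (maximize the minimum isothermal enthalpy change over the working temperature range) is straightforward to prototype once a mixture enthalpy routine is available. In the sketch below the enthalpy function, its coefficients, the pressures, and the composition grid are all placeholders rather than values from the study; a real tool would call an equation-of-state property library instead.

```python
import numpy as np

def enthalpy(T, P, x):
    """Placeholder mixture enthalpy h(T, P, x) [J/mol].  A real analysis would
    call a property library here; this stub exists only so the loop runs."""
    a = float(np.dot(x, [35.0, 45.0, 60.0]))   # pseudo heat capacities
    b = float(np.dot(x, [0.8, 2.5, 5.0]))      # pseudo pressure-dependence terms
    return a * T - b * P / 1e5

def min_isothermal_dh(x, temps, p_low=3e5, p_high=9e5):
    """Minimum over the temperature range of h(T, p_low) - h(T, p_high),
    the quantity the abstract says is maximized when choosing the mixture."""
    return min(enthalpy(T, p_low, x) - enthalpy(T, p_high, x) for T in temps)

temps = np.linspace(110.0, 300.0, 20)          # load temperature up to ambient
grid = np.arange(0.0, 1.0001, 0.05)            # coarse mole-fraction grid
best_val, best_x = -np.inf, None
for a in grid:
    for b in grid:
        c = 1.0 - a - b
        if c < -1e-9:
            continue
        x = np.array([a, b, max(c, 0.0)])
        val = min_isothermal_dh(x, temps)
        if val > best_val:
            best_val, best_x = val, x
print("best composition:", np.round(best_x, 2), "min dh_T:", round(best_val, 1), "J/mol")
```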

  8. Continuous production of exfoliated graphite composite compositions and flow field plates

    DOEpatents

    Shi, Jinjun; Zhamu, Aruna; Jang, Bor Z.

    2010-07-20

    A process of continuously producing a more isotropic, electrically conductive composite composition is provided. The process comprises: (a) continuously supplying a compressible mixture comprising exfoliated graphite worms and a binder or matrix material, wherein the binder or matrix material is in an amount of between 3% and 60% by weight based on the total weight of the mixture; (b) continuously compressing the compressible mixture at a pressure within the range of from about 5 psi or 0.035 MPa to about 50,000 psi or 350 MPa in at least a first direction into a cohered graphite composite compact; and (c) continuously compressing the composite compact in a second direction, different from the first direction, to form the composite composition in a sheet or plate form. The process leads to composite plates with exceptionally high thickness-direction electrical conductivity.

  9. Uphill diffusion in multicomponent mixtures.

    PubMed

    Krishna, Rajamani

    2015-05-21

    Molecular diffusion is an omnipresent phenomenon that is important in a wide variety of contexts in chemical, physical, and biological processes. In the majority of cases, the diffusion process can be adequately described by Fick's law, which postulates a linear relationship between the flux of any species and its own concentration gradient. Most commonly, a component diffuses down the concentration gradient. The major objective of this review is to highlight a very wide variety of situations that cause the uphill transport of one constituent in the mixture. Uphill diffusion may occur in multicomponent mixtures in which the diffusion flux of any species is strongly coupled to that of its partner species. Such coupling effects often arise from strong thermodynamic non-idealities. For a quantitative description we need to use chemical potential gradients as driving forces. The transport of an ionic species in aqueous solutions is coupled with that of its partner ions because of electro-neutrality constraints; such constraints may accelerate or decelerate a specific ion. When uphill diffusion occurs, we observe transient overshoots during equilibration; the equilibration process follows serpentine trajectories in composition space. For mixtures of liquids, alloys, ceramics and glasses the serpentine trajectories could cause entry into meta-stable composition zones; such entry could result in phenomena such as spinodal decomposition, spontaneous emulsification, and the Ouzo effect. For distillation of multicomponent mixtures that form azeotropes, uphill diffusion may allow crossing of distillation boundaries that are normally forbidden. For mixture separations with microporous adsorbents, uphill diffusion can cause supra-equilibrium loadings to be achieved during transient uptake within crystals; this allows the possibility of over-riding adsorption equilibrium for achieving difficult separations.
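
    The coupling that drives uphill diffusion is easy to see in the generalized Fick formulation for a ternary mixture, where off-diagonal diffusivities let one component be dragged up its own concentration gradient. The numbers in the sketch below are invented purely for illustration.

```python
import numpy as np

# Generalized Fick formulation for a ternary mixture: the fluxes of the two
# independent components are coupled through off-diagonal diffusivities.
D = np.array([[1.0e-9, 1.5e-9],        # m^2/s; strong cross-coefficient D12
              [0.1e-9, 2.0e-9]])

grad_c = np.array([  5.0,              # mol/m^4: component 1 increases along z
                   -40.0])             # component 2 decreases steeply along z

J = -D @ grad_c
print("J1 =", J[0], "mol/(m^2 s)")     # positive: component 1 moves *up* its own gradient
print("J2 =", J[1], "mol/(m^2 s)")
```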

  10. K-9 training aids made using additive manufacturing

    DOEpatents

    Reynolds, John G.; Durban, Matthew M.; Gash, Alexander E.; Grapes, Michael D.; Kelley, Ryan S.; Sullivan, Kyle T.

    2018-02-20

    Additive Manufacturing (AM) is used to make aids that target the training of K-9s to detect explosives. The process uses mixtures of explosives and matrices commonly used in AM. The explosives are formulated into a mixture with the matrix and printed using AM techniques and equipment. The explosive concentrations are kept less than 10% by wt. of the mixture to conform to requirements of shipping and handling.

  11. Process optimisation of microwave-assisted extraction of peony ( Paeonia suffruticosa Andr .) seed oil using hexane-ethanol mixture and its characterisation

    Treesearch

    Xiaoli Sun; Wengang Li; Jian Li; Yuangang Zu; Chung-Yun Hse; Jiulong Xie; Xiuhua Zhao

    2016-01-01

    A microwave-assisted extraction (MAE) method using an ethanol-hexane solvent mixture was applied to extract peony (Paeonia suffruticosa Andr.) seed oil (PSO). The aim of the study was to optimise the mixed-solvent MAE for both yield and energy consumption. The highest oil yield (34.49%) and lowest unit energy consumption (14 125.4 J g-1)...

  12. Experimental Equipment for Powder Processing

    DTIC Science & Technology

    2009-08-20

    for a series of alumina and zirconia powder mixtures by TMDAR, CR-15 (alumina), as well as TZ3YS and CERAC-2003 (zirconia). The proportion of TMDAR...is known to cause abnormal grain growth. Fig.15 shows the seven representative curves obtained for our zirconia powder system. The 10% and 20...various zirconia powder mixtures. The maximum densification rate for each of our zirconia powder mixtures occurs within the relative density range of

  13. Convergence of the flow of a chemically reacting gaseous mixture to incompressible Euler equations in an unbounded domain

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Sam

    2017-12-01

    The flow of a chemically reacting gaseous mixture is associated with a variety of phenomena and processes. We study the combined quasineutral and inviscid limit from the flow of a chemically reacting gaseous mixture governed by the Poisson equation to the incompressible Euler equations with ill-prepared initial data in the unbounded domain R^2 × T. Furthermore, the convergence rates are obtained.

  14. Hydrogen-Selective Membrane

    DOEpatents

    Collins, John P.; Way, J. Douglas

    1995-09-19

    A hydrogen-selective membrane comprises a tubular porous ceramic support having a palladium metal layer deposited on an inside surface of the ceramic support. The thickness of the palladium layer is greater than about 10 µm but typically less than about 20 µm. The hydrogen permeation rate of the membrane is greater than about 1.0 moles/m²·s at a temperature of greater than about 500 °C and a transmembrane pressure difference of about 1,500 kPa. Moreover, the hydrogen-to-nitrogen selectivity is greater than about 600 at a temperature of greater than about 500 °C and a transmembrane pressure of about 700 kPa. Hydrogen can be separated from a mixture of gases using the membrane. The method may include the step of heating the mixture of gases to a temperature of greater than about 400 °C and less than about 1000 °C before the step of flowing the mixture of gases past the membrane. The mixture of gases may include ammonia. The ammonia typically is decomposed to provide nitrogen and hydrogen using a catalyst such as nickel. The catalyst may be placed inside the tubular ceramic support. The mixture of gases may be supplied by an industrial process such as the mixture of exhaust gases from the IGCC process.

  15. Hydrogen-selective membrane

    DOEpatents

    Collins, J.P.; Way, J.D.

    1995-09-19

    A hydrogen-selective membrane comprises a tubular porous ceramic support having a palladium metal layer deposited on an inside surface of the ceramic support. The thickness of the palladium layer is greater than about 10 µm but typically less than about 20 µm. The hydrogen permeation rate of the membrane is greater than about 1.0 moles/m²·s at a temperature of greater than about 500 °C and a transmembrane pressure difference of about 1,500 kPa. Moreover, the hydrogen-to-nitrogen selectivity is greater than about 600 at a temperature of greater than about 500 °C and a transmembrane pressure of about 700 kPa. Hydrogen can be separated from a mixture of gases using the membrane. The method may include the step of heating the mixture of gases to a temperature of greater than about 400 °C and less than about 1000 °C before the step of flowing the mixture of gases past the membrane. The mixture of gases may include ammonia. The ammonia typically is decomposed to provide nitrogen and hydrogen using a catalyst such as nickel. The catalyst may be placed inside the tubular ceramic support. The mixture of gases may be supplied by an industrial process such as the mixture of exhaust gases from the IGCC process. 9 figs.

  16. Hydrogen-selective membrane

    DOEpatents

    Collins, J.P.; Way, J.D.

    1997-07-29

    A hydrogen-selective membrane comprises a tubular porous ceramic support having a palladium metal layer deposited on an inside surface of the ceramic support. The thickness of the palladium layer is greater than about 10 µm but typically less than about 20 µm. The hydrogen permeation rate of the membrane is greater than about 1.0 moles/m²·s at a temperature of greater than about 500 °C and a transmembrane pressure difference of about 1,500 kPa. Moreover, the hydrogen-to-nitrogen selectivity is greater than about 600 at a temperature of greater than about 500 °C and a transmembrane pressure of about 700 kPa. Hydrogen can be separated from a mixture of gases using the membrane. The method may include the step of heating the mixture of gases to a temperature of greater than about 400 °C and less than about 1000 °C before the step of flowing the mixture of gases past the membrane. The mixture of gases may include ammonia. The ammonia typically is decomposed to provide nitrogen and hydrogen using a catalyst such as nickel. The catalyst may be placed inside the tubular ceramic support. The mixture of gases may be supplied by an industrial process such as the mixture of exhaust gases from the IGCC process. 9 figs.

  17. Hydrogen-selective membrane

    DOEpatents

    Collins, John P.; Way, J. Douglas

    1997-01-01

    A hydrogen-selective membrane comprises a tubular porous ceramic support having a palladium metal layer deposited on an inside surface of the ceramic support. The thickness of the palladium layer is greater than about 10 µm but typically less than about 20 µm. The hydrogen permeation rate of the membrane is greater than about 1.0 moles/m²·s at a temperature of greater than about 500 °C and a transmembrane pressure difference of about 1,500 kPa. Moreover, the hydrogen-to-nitrogen selectivity is greater than about 600 at a temperature of greater than about 500 °C and a transmembrane pressure of about 700 kPa. Hydrogen can be separated from a mixture of gases using the membrane. The method may include the step of heating the mixture of gases to a temperature of greater than about 400 °C and less than about 1000 °C before the step of flowing the mixture of gases past the membrane. The mixture of gases may include ammonia. The ammonia typically is decomposed to provide nitrogen and hydrogen using a catalyst such as nickel. The catalyst may be placed inside the tubular ceramic support. The mixture of gases may be supplied by an industrial process such as the mixture of exhaust gases from the IGCC process.

  18. Nonassociative Plasticity Alters Competitive Interactions Among Mixture Components In Early Olfactory Processing

    PubMed Central

    Locatelli, Fernando F; Fernandez, Patricia C; Villareal, Francis; Muezzinoglu, Kerem; Huerta, Ramon; Galizia, C. Giovanni; Smith, Brian H.

    2012-01-01

    Experience-related plasticity is an essential component of networks involved in early olfactory processing. However, the mechanisms and functions of plasticity in these neural networks are not well understood. We studied nonassociative plasticity by evaluating responses to two pure odors (A and X) and their binary mixture using calcium imaging of odor-elicited activity in output neurons of the honey bee antennal lobe. Unreinforced exposure to A or X produced no change in the neural response elicited by the pure odors. However, exposure to one odor (e.g. A) caused the response to the mixture to become more similar to the other component (X). We also show in behavioral analyses that unreinforced exposure to A caused the mixture to become perceptually more similar to X. These results suggest that nonassociative plasticity modifies neural networks in such a way that it affects local competitive interactions among mixture components. We used a computational model to evaluate the most likely targets for modification. Hebbian modification of synapses from inhibitory local interneurons to projection neurons most reliably produces the observed shift in response to the mixture. These results are consistent with a model in which the antennal lobe acts to filter olfactory information according to its relevance for performing a particular task. PMID:23167675

  19. Cancer Dose-Response Assessment for Polychlorinated Biphenyls (PCBs) and Application to Environmental Mixtures

    EPA Pesticide Factsheets

    This report updates the cancer dose-response assessment for PCBs and shows how information on toxicity, disposition, and environmental processes can be considered together to evaluate health risks from PCB mixtures in the environment.

  20. Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro

    2017-04-01

    We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of the Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
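
    As a much simpler stand-in for the boundary-value problems mentioned here, the sketch below solves the linearized Poisson-Boltzmann equation in 1D with Dirichlet boundary conditions using plain finite differences; it is not the I2SPH discretization of the record, and the Debye length, domain size, and wall potentials are illustrative assumptions.

```python
import numpy as np

# Linearized Poisson-Boltzmann equation  d^2(psi)/dz^2 = kappa^2 * psi
# on [0, L] with Dirichlet values at both ends, solved by central differences.
kappa = 1.0e8            # inverse Debye length, 1/m (illustrative)
L, n = 100e-9, 201       # domain length and number of grid points
psi0, psiL = 0.025, 0.0  # Dirichlet boundary potentials, volts

z = np.linspace(0.0, L, n)
h = z[1] - z[0]
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0          # Dirichlet rows: psi fixed at the walls
b[0], b[-1] = psi0, psiL
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
    A[i, i] = -2.0 / h**2 - kappa**2
psi = np.linalg.solve(A, b)

# For kappa*L >> 1 the solution near the left wall approaches psi0*exp(-kappa*z).
print(psi[:5])
print(psi0 * np.exp(-kappa * z[:5]))
```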
