Sample records for normal mixture model

  1. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distribution model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010, using a two-component univariate normal mixture model. First, we present the application of the normal mixture model in empirical finance, where we fit it to the real data. Second, we present its application in risk analysis, where we use it to evaluate value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture model fits the data well and performs better in estimating VaR and CVaR, since it captures the stylized facts of non-normality and leptokurtosis in the return distribution.
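
    For illustration, a minimal sketch of this pipeline (our code, not the authors'): fit a two-component normal mixture to a return series, invert the mixture CDF for VaR, and use the per-component normal partial expectations for CVaR. The library choices and the 5% level are assumptions.

      import numpy as np
      from scipy import stats, optimize
      from sklearn.mixture import GaussianMixture

      def var_cvar(returns, alpha=0.05):
          """VaR/CVaR at level alpha from a two-component normal mixture."""
          gm = GaussianMixture(n_components=2).fit(returns.reshape(-1, 1))
          w = gm.weights_
          mu = gm.means_.ravel()
          s = np.sqrt(gm.covariances_).ravel()
          # VaR: solve mixture CDF(x) = alpha numerically
          cdf = lambda x: np.dot(w, stats.norm.cdf(x, mu, s)) - alpha
          var = optimize.brentq(cdf, mu.min() - 10 * s.max(), mu.max() + 10 * s.max())
          # CVaR = E[X | X <= VaR], using E[X 1{X<=v}] = mu*Phi(z) - s*phi(z) per component
          z = (var - mu) / s
          cvar = np.dot(w, mu * stats.norm.cdf(z) - s * stats.norm.pdf(z)) / alpha
          return var, cvar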

  2. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed Central

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-01-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
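
    To make the LOD computation concrete, here is a sketch (ours, not the authors') of standard interval mapping for a backcross-type design: each individual carries the QTL allele with a marker-derived probability p1_i, the mixture is fit by EM, and the LOD compares it against a single normal. Note that the mixture log-likelihood can never fall below the single-normal one, which is exactly the source of the spurious peaks the paper addresses.

      import numpy as np
      from scipy.stats import norm

      def lod_score(y, p1, n_iter=200):
          """LOD at one position: y_i ~ p1_i*N(mu1,s^2) + (1-p1_i)*N(mu0,s^2),
          where p1_i is the prob. (from flanking markers) that individual i
          carries the QTL allele. Fit by EM, compare to a single normal."""
          mu0, mu1, s = y.mean(), y.mean() + y.std(), y.std()
          for _ in range(n_iter):
              f1 = p1 * norm.pdf(y, mu1, s)
              f0 = (1 - p1) * norm.pdf(y, mu0, s)
              w = f1 / (f1 + f0)                      # E-step responsibilities
              mu1 = (w * y).sum() / w.sum()           # M-step updates
              mu0 = ((1 - w) * y).sum() / (1 - w).sum()
              s = np.sqrt((w * (y - mu1) ** 2 + (1 - w) * (y - mu0) ** 2).mean())
          l1 = np.log(p1 * norm.pdf(y, mu1, s) + (1 - p1) * norm.pdf(y, mu0, s)).sum()
          l0 = norm.logpdf(y, y.mean(), y.std()).sum()
          return (l1 - l0) / np.log(10)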

  3. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.

  4. Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors

    ERIC Educational Resources Information Center

    Guerra-Peña, Kiero; Steinley, Douglas

    2016-01-01

    Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…

  5. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375

  6. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
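
    As a small illustration of the Box-Cox t idea (a univariate sketch under our own parameterization; the paper's model is multivariate and fitted by EM), a component density is a Student-t density on the transformed scale times the Box-Cox Jacobian:

      import numpy as np
      from scipy import stats

      def boxcox_t_logpdf(y, lam, mu, sigma, nu):
          """Log-density of one Box-Cox t mixture component for y > 0:
          Student-t on the Box-Cox transformed scale, plus the log-Jacobian
          (lam - 1) * log(y) of the transformation."""
          z = np.log(y) if lam == 0 else (y ** lam - 1.0) / lam
          return stats.t.logpdf(z, df=nu, loc=mu, scale=sigma) + (lam - 1.0) * np.log(y)

    A mixture density is then a weighted sum of exponentiated component log-densities, with the transformation parameter estimated jointly with the component parameters.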

  7. A Skew-Normal Mixture Regression Model

    ERIC Educational Resources Information Center

    Liu, Min; Lin, Tsung-I

    2014-01-01

    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  8. Robust Bayesian Analysis of Heavy-tailed Stochastic Volatility Models using Scale Mixtures of Normal Distributions

    PubMed Central

    Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.

    2009-01-01

    A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash and variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock return data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit and prediction for the S&P500 index data over the usual normal model. PMID:20730043
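
    The scale-mixture representation behind this class can be demonstrated in a few lines: drawing a gamma mixing variable and then a normal with that precision reproduces a Student-t (the degrees of freedom below are an arbitrary choice for the demo). Small mixing draws correspond to inflated variances, which is how the representation flags outliers.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      nu = 4.0
      lam = rng.gamma(nu / 2, 2 / nu, size=100_000)  # mixing variable, Gamma(nu/2, rate nu/2)
      x = rng.normal(0.0, 1.0 / np.sqrt(lam))        # N(0, 1/lam) given lam
      print(stats.kstest(x, stats.t(df=nu).cdf))     # consistent with Student-t(nu)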

  9. Scale Mixture Models with Applications to Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Qin, Zhaohui S.; Damien, Paul; Walker, Stephen

    2003-11-01

    Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
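
    One classical fact behind this approach (our illustration, not from the abstract): the standard normal is itself a scale mixture of uniforms. If V ~ Gamma(shape 3/2, scale 2) and Y | V ~ Uniform(-sqrt(V), sqrt(V)), then Y ~ N(0, 1), which a quick Monte Carlo check confirms:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      v = rng.gamma(1.5, 2.0, size=100_000)       # V ~ Gamma(3/2, scale 2)
      y = rng.uniform(-np.sqrt(v), np.sqrt(v))    # Y | V ~ U(-sqrt(V), sqrt(V))
      print(stats.kstest(y, "norm"))              # consistent with N(0, 1)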

  10. Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George

    2012-01-01

    Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…

  11. Using partially labeled data for normal mixture identification with application to class definition

    NASA Technical Reports Server (NTRS)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples are considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and unsupervised learning processes.
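
    A one-dimensional sketch of this semi-supervised EM (the paper treats the multidimensional case; variable names and the 1-D restriction are ours): labeled samples have their responsibilities restricted to the components of their known class, unlabeled samples to all components. Both cls_l and comp_class are integer arrays.

      import numpy as np
      from scipy.stats import norm

      def semi_supervised_em(x_u, x_l, cls_l, comp_class, n_iter=100):
          """EM for a 1-D normal mixture from unlabeled samples x_u plus
          partially labeled samples x_l whose class (not component) is known.
          comp_class[k] is the class index of mixture component k."""
          K = len(comp_class)
          x = np.concatenate([x_u, x_l])
          mu = np.quantile(x, np.linspace(0.1, 0.9, K))
          s = np.full(K, x.std())
          w = np.full(K, 1.0 / K)
          mask_u = np.ones((len(x_u), K))                          # unlabeled: any component
          mask_l = (comp_class[None, :] == cls_l[:, None]) * 1.0   # labeled: own class only

          def resp(xs, mask):
              r = w * norm.pdf(xs[:, None], mu, s) * mask
              return r / r.sum(axis=1, keepdims=True)

          for _ in range(n_iter):
              r = np.vstack([resp(x_u, mask_u), resp(x_l, mask_l)])
              nk = r.sum(axis=0)
              w = nk / nk.sum()
              mu = (r * x[:, None]).sum(axis=0) / nk
              s = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
          return w, mu, s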

  12. Mapping of quantitative trait loci using the skew-normal distribution.

    PubMed

    Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos

    2007-11-01

    In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model that includes the usual symmetric normal distribution as a special case is important, allowing continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.

  13. Investigation into the performance of different models for predicting stutter.

    PubMed

    Bright, Jo-Anne; Curran, James M; Buckleton, John S

    2013-07-01

    In this paper we have examined five possible models for the behaviour of the stutter ratio, SR: two log-normal models, two gamma models, and a two-component normal mixture model. The two-component normal mixture model was chosen to allow different behaviours of the variance; at each locus SR was described with two distributions, both with the same mean but with different variances: one for the majority of the observations and a second for the less well-behaved ones. We apply each model to a set of known single source Identifiler™, NGM SElect™ and PowerPlex® 21 DNA profiles to show the applicability of our findings to different data sets. SR determined from the single source profiles were compared to the calculated SR after application of the models. The model performance was tested by calculating the log-likelihoods and comparing the difference in Akaike information criterion (AIC). The two-component normal mixture model systematically outperformed all others, despite the increase in the number of parameters. This model, as well as performing well statistically, has intuitive appeal for forensic biologists and could be implemented in an expert system with a continuous method for DNA interpretation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
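
    A sketch of the winning model and the AIC comparison (fitting choices below are ours; the paper works per locus): the two-component mixture shares a mean across components but lets the variances differ.

      import numpy as np
      from scipy import stats, optimize

      def fit_common_mean_mixture(sr):
          """Max log-likelihood of w*N(mu, s1^2) + (1-w)*N(mu, s2^2)."""
          def nll(theta):
              mu, ls1, ls2, a = theta
              w = 1.0 / (1.0 + np.exp(-a))          # logistic keeps w in (0, 1)
              pdf = (w * stats.norm.pdf(sr, mu, np.exp(ls1))
                     + (1 - w) * stats.norm.pdf(sr, mu, np.exp(ls2)))
              return -np.log(pdf + 1e-300).sum()
          x0 = [sr.mean(), np.log(sr.std()), np.log(3 * sr.std()), 2.0]
          res = optimize.minimize(nll, x0, method="Nelder-Mead")
          return -res.fun

      def aic(loglik, n_params):
          return 2 * n_params - 2 * loglik

      # e.g. against a log-normal fit (Jacobian term included):
      # lg = np.log(sr)
      # ll_ln = stats.norm.logpdf(lg, lg.mean(), lg.std()).sum() - lg.sum()
      # print(aic(fit_common_mean_mixture(sr), 4), aic(ll_ln, 2))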

  14. Discrete Velocity Models for Polyatomic Molecules Without Nonphysical Collision Invariants

    NASA Astrophysics Data System (ADS)

    Bernhoff, Niclas

    2018-05-01

    An important aspect of constructing discrete velocity models (DVMs) for the Boltzmann equation is to obtain the right number of collision invariants. Unlike for the Boltzmann equation, for DVMs there can appear extra, so-called spurious, collision invariants in addition to the physical ones. A DVM with only physical collision invariants, and hence without spurious ones, is called normal. The construction of such normal DVMs has been studied extensively in the literature for single species, but also for binary mixtures and, recently, for multicomponent mixtures. In this paper, we address ways of constructing normal DVMs for polyatomic molecules (here represented by molecules carrying an internal energy, to account for non-translational energies, which can change during collisions), under the assumption that the set of allowed internal energies is finite. We present general algorithms for constructing such models, but we also give concrete examples of such constructions. This approach can also be combined with similar constructions for multicomponent mixtures to obtain multicomponent mixtures with polyatomic molecules, which is also briefly outlined. Chemical reactions can then be added as well.

  15. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data: a two-component normal mixture model is fitted to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.

  16. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.

  17. Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution

    NASA Astrophysics Data System (ADS)

    Baldacchino, Tara; Worden, Keith; Rowson, Jennifer

    2017-02-01

    A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.

  18. Apparent Transition in the Human Height Distribution Caused by Age-Dependent Variation during Puberty Period

    NASA Astrophysics Data System (ADS)

    Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto

    2013-08-01

    In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.
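
    The mechanism is easy to reproduce in simulation (the means and SDs below are illustrative values of ours, not the study's): pooling age groups whose means increase while spreads shrink yields a negatively skewed mixture, closer to a normal than to a log-normal; the opposite pattern yields positive skew.

      import numpy as np
      from scipy.stats import skew

      rng = np.random.default_rng(0)
      # late puberty: higher means, smaller variation with age
      means, sds = [150, 158, 164, 168], [8, 6, 4, 3]
      heights = np.concatenate([rng.normal(m, s, 10_000) for m, s in zip(means, sds)])
      print(skew(heights))   # negative: the pooled distribution is left-skewed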

  19. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.

  20. Mixture Factor Analysis for Approximating a Nonnormally Distributed Continuous Latent Factor with Continuous and Dichotomous Observed Variables

    ERIC Educational Resources Information Center

    Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo

    2012-01-01

    Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…

  21. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  22. Weibull mixture regression for marginal inference in zero-heavy continuous outcomes.

    PubMed

    Gebregziabher, Mulugeta; Voronca, Delia; Teklehaimanot, Abeba; Santa Ana, Elizabeth J

    2017-06-01

    Continuous outcomes with a preponderance of zero values are ubiquitous in data that arise from biomedical studies, for example studies of addictive disorders. This is known to lead to violations of standard assumptions in parametric inference and enhances the risk of misleading conclusions unless managed properly. Two-part models are commonly used to deal with this problem. However, standard two-part models have limitations with respect to obtaining parameter estimates that have a marginal interpretation of covariate effects, which is important in many biomedical applications. Recently, marginalized two-part models have been proposed, but their development is limited to log-normal and log-skew-normal distributions. Thus, in this paper, we propose a finite mixture approach, with Weibull mixture regression as a special case, to deal with the problem. We use an extensive simulation study to assess the performance of the proposed model in finite samples and to make comparisons with other families of models via statistical information and mean squared error criteria. We demonstrate its application on real data from a randomized controlled trial of addictive disorders. Our results show that a two-component Weibull mixture model is preferred for modeling zero-heavy continuous data when the non-zero part is generated from a Weibull or similar distribution such as the gamma or truncated Gaussian.

  23. Bayesian Regularization for Normal Mixture Estimation and Model-Based Clustering

    DTIC Science & Technology

    2005-08-04

    [Abstract not available; the indexed fragments describe a four-band magnetic resonance image (MRI) of a brain with a tumor, consisting of 23,712 pixels, and cite Figueiredo, M. A. T. and A. K. Jain (2002), "Unsupervised learning of finite mixture models," among other references.]

  24. PLEMT: A NOVEL PSEUDOLIKELIHOOD BASED EM TEST FOR HOMOGENEITY IN GENERALIZED EXPONENTIAL TILT MIXTURE MODELS.

    PubMed

    Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J

    2017-01-01

    Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and to capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems, as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, a test with a simple asymptotic distribution has computational advantages compared with permutation-based tests for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood based expectation-maximization test, and show the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between the two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.

  25. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    NASA Astrophysics Data System (ADS)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri

    2017-12-01

    Bayesian mixture modeling requires identifying the most appropriate number of mixture components, so that the resulting mixture model fits the data in a data-driven way. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with Markov Chain Monte Carlo (MCMC), and has been used by researchers to identify the number of mixture components when it is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm needs to be developed according to the case under study. The purpose of this study is to assess the performance of an RJMCMC algorithm developed to identify the unknown number of mixture components in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model for the Indonesian microarray data, where that number is not known with certainty.

  26. Heterogeneous mixture distributions for multi-source extreme rainfall

    NASA Astrophysics Data System (ADS)

    Ouarda, T.; Shin, J.; Lee, T. S.

    2013-12-01

    Mixture distributions have been used to model hydro-meteorological variables showing mixture distributional characteristics, e.g. bimodality. Homogeneous mixture (HOM) distributions (e.g. Normal-Normal and Gumbel-Gumbel) have traditionally been applied to hydro-meteorological variables. However, there is no reason to restrict the mixture to components of one identical type; it might be beneficial to characterize the statistical behavior of hydro-meteorological variables with heterogeneous mixture (HTM) distributions such as Normal-Gamma. In the present work, we focus on assessing the suitability of HTM distributions for the frequency analysis of hydro-meteorological variables. To estimate the parameters of HTM distributions, a meta-heuristic algorithm (Genetic Algorithm) is employed to maximize the likelihood function. A number of distributions are compared, including the Gamma-Extreme value type-one (EV1) HTM distribution, the EV1-EV1 HOM distribution, and the EV1 distribution. The proposed distribution models are applied to annual maximum precipitation data in South Korea. The Akaike Information Criterion (AIC), the root mean squared error (RMSE) and the log-likelihood are used as measures of goodness-of-fit of the tested distributions. Results indicate that the HTM distribution (Gamma-EV1) presents the best fit, with significant improvement in the estimation of quantiles corresponding to the 20-year return period. Extreme rainfall in the coastal region of South Korea shows strong heterogeneous mixture distributional characteristics, indicating that HTM distributions are a good alternative for the frequency analysis of hydro-meteorological variables when disparate statistical characteristics are present.
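
    A sketch of fitting such a heterogeneous Gamma-EV1 (Gumbel) mixture by maximum likelihood with a global optimizer; we use scipy's differential evolution as a stand-in for the paper's genetic algorithm, and the bounds are assumptions an analyst would tune to the data:

      import numpy as np
      from scipy import stats, optimize

      def fit_gamma_ev1(x):
          """MLE for w*Gamma(a, scale=sg) + (1-w)*Gumbel(loc, scale=se)."""
          def nll(theta):
              w, a, sg, loc, se = theta
              pdf = (w * stats.gamma.pdf(x, a, scale=sg)
                     + (1 - w) * stats.gumbel_r.pdf(x, loc=loc, scale=se))
              return -np.log(pdf + 1e-300).sum()
          bounds = [(0.01, 0.99), (0.1, 50.0), (1.0, 500.0), (0.0, 500.0), (1.0, 200.0)]
          return optimize.differential_evolution(nll, bounds, seed=1)

    The returned result's .x holds (w, a, scale_gamma, loc_ev1, scale_ev1), from which quantiles and AIC can be computed for the goodness-of-fit comparison.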

  27. Mixture modelling for cluster analysis.

    PubMed

    McLachlan, G J; Chang, S U

    2004-10-01

    Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.

  28. On an interface of the online system for a stochastic analysis of the varied information flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorshenin, Andrey K.; Kuzmin, Victor Yu.

    The article describes a possible approach to the construction of an interface of an online asynchronous system that allows researchers to analyse varied information flows. The implemented stochastic methods are based on the mixture models and the method of moving separation of mixtures. The general ideas of the system functionality are demonstrated on an example for some moments of a finite normal mixture.

  29. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    PubMed

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.

  30. Atmospheric Photochemical Modeling of Turbine Engine Fuels and Exhausts. Phase 2. Computer Model Development. Volume 2

    DTIC Science & Technology

    1988-05-01

    [Abstract not available; the indexed fragments list emitted organic species included in all models (CO, ethene, formaldehyde, acetaldehyde, propionaldehyde and others), note that composition files should be "normalized" so that the number of carbons in the mixture is consistent, and document interactive program commands such as LIST ALLCOMP and LIST ALLSCE.]

  31. Differential models of twin correlations in skew for body-mass index (BMI).

    PubMed

    Tsang, Siny; Duncan, Glen E; Dinescu, Diana; Turkheimer, Eric

    2018-01-01

    Body Mass Index (BMI), like most human phenotypes, is substantially heritable. However, BMI is not normally distributed; the skew appears to be structural, and increases as a function of age. Moreover, twin correlations for BMI commonly violate the assumptions of the most common variety of the classical twin model, with the MZ twin correlation greater than twice the DZ correlation. This study aimed to decompose twin correlations for BMI using more general skew-t distributions. Same sex MZ and DZ twin pairs (N = 7,086) from the community-based Washington State Twin Registry were included. We used latent profile analysis (LPA) to decompose twin correlations for BMI into multiple mixture distributions. LPA was performed using the default normal mixture distribution and the skew-t mixture distribution. Similar analyses were performed for height as a comparison. Our analyses are then replicated in an independent dataset. A two-class solution under the skew-t mixture distribution fits the BMI distribution for both genders. The first class consists of a relatively normally distributed, highly heritable BMI with a mean in the normal range. The second class is a positively skewed BMI in the overweight and obese range, with lower twin correlations. In contrast, height is normally distributed, highly heritable, and is well-fit by a single latent class. Results in the replication dataset were highly similar. Our findings suggest that two distinct processes underlie the skew of the BMI distribution. The contrast between height and weight is in accord with subjective psychological experience: both are under obvious genetic influence, but BMI is also subject to behavioral control, whereas height is not.

  32. Strelka: accurate somatic small-variant calling from sequenced tumor-normal sample pairs.

    PubMed

    Saunders, Christopher T; Wong, Wendy S W; Swamy, Sajani; Becq, Jennifer; Murray, Lisa J; Cheetham, R Keira

    2012-07-15

    Whole genome and exome sequencing of matched tumor-normal sample pairs is becoming routine in cancer research. The consequent increased demand for somatic variant analysis of paired samples requires methods specialized to model this problem so as to sensitively call variants at any practical level of tumor impurity. We describe Strelka, a method for somatic SNV and small indel detection from sequencing data of matched tumor-normal samples. The method uses a novel Bayesian approach which represents continuous allele frequencies for both tumor and normal samples, while leveraging the expected genotype structure of the normal. This is achieved by representing the normal sample as a mixture of germline variation with noise, and representing the tumor sample as a mixture of the normal sample with somatic variation. A natural consequence of the model structure is that sensitivity can be maintained at high tumor impurity without requiring purity estimates. We demonstrate that the method has superior accuracy and sensitivity on impure samples compared with approaches based on either diploid genotype likelihoods or general allele-frequency tests. The Strelka workflow source code is available at ftp://strelka@ftp.illumina.com/. csaunders@illumina.com

  33. Mixture models for estimating the size of a closed population when capture rates vary among individuals

    USGS Publications Warehouse

    Dorazio, R.M.; Royle, J. Andrew

    2003-01-01

    We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.

  34. An a priori DNS study of the shadow-position mixing model

    DOE PAGES

    Zhao, Xin-Yu; Bhagatwala, Ankit; Chen, Jacqueline H.; ...

    2016-01-15

    In this study, the modeling of mixing by molecular diffusion is a central aspect for transported probability density function (tPDF) methods. In this paper, the newly-proposed shadow position mixing model (SPMM) is examined, using a DNS database for a temporally evolving di-methyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that the composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates that are derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found for the prediction of the locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model. A higher-Reynolds-number test case is anticipated to be more appropriate to evaluate the mixing models, and stand-alone transported PDF simulations are required to more fully enforce localness and to assess model performance.

  35. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed

    Chen, D G; Pounds, J G

    1998-12-01

    The linear logistical isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where parameters Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of the binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium.

  36. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed Central

    Chen, D G; Pounds, J G

    1998-01-01

    The linear logistical isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where parameters Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of the binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium. PMID:9860894

  37. An a priori DNS study of the shadow-position mixing model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Xin-Yu; Bhagatwala, Ankit; Chen, Jacqueline H.

    In this study, the modeling of mixing by molecular diffusion is a central aspect for transported probability density function (tPDF) methods. In this paper, the newly-proposed shadow position mixing model (SPMM) is examined, using a DNS database for a temporally evolving di-methyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that the composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates that are derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found for the prediction of the locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model. A higher-Reynolds-number test case is anticipated to be more appropriate to evaluate the mixing models, and stand-alone transported PDF simulations are required to more fully enforce localness and to assess model performance.

  38. The Use of Growth Mixture Modeling for Studying Resilience to Major Life Stressors in Adulthood and Old Age: Lessons for Class Size and Identification and Model Selection.

    PubMed

    Infurna, Frank J; Grimm, Kevin J

    2017-12-15

    Growth mixture modeling (GMM) combines latent growth curve and mixture modeling approaches and is typically used to identify discrete trajectories following major life stressors (MLS). However, GMM is often applied to data that do not meet the statistical assumptions of the model (e.g., within-class normality), and researchers often do not test additional model constraints (e.g., homogeneity of variance across classes), which can lead to incorrect conclusions regarding the number and nature of the trajectories. We evaluate how these methodological assumptions influence trajectory size and identification in the study of resilience to MLS. We use data on changes in subjective well-being and depressive symptoms following spousal loss from the HILDA and HRS. Findings drastically differ when constraining the variances to be homogeneous versus heterogeneous across trajectories, with overextraction being more common under the homogeneity constraint. In instances when the data are non-normally distributed, assuming normality increases the extraction of latent classes. Our findings showcase that the assumptions typically underlying GMM are not tenable, influencing trajectory size and identification and, most importantly, misinforming conceptual models of resilience. The discussion focuses on how GMM can be leveraged to effectively examine trajectories of adaptation following MLS and avenues for future research. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  39. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.

    PubMed

    Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten

    2017-10-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.

  40. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    NASA Astrophysics Data System (ADS)

    Baidillah, Marlin R.; Takei, Masahiro

    2017-06-01

    A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions has been developed. The exponential normalization model is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential curve fitted to simulation data, and a scaling function is added to adjust for the experimental system conditions. The exponential normalization model was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies, and compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.
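
    A sketch of the normalization step; the functional form and parameter name below are our guess at a generic exponential law, not the paper's exact equation. The idea is to fit the nonlinearity on simulated calibration pairs and then apply it to measured normalized capacitances:

      import numpy as np
      from scipy.optimize import curve_fit

      def exp_norm(c_n, alpha):
          """Hypothetical exponential normalization: maps normalized capacitance
          c_n in [0, 1] to normalized permittivity; tends to the linear
          (parallel) model as alpha -> 0."""
          return (np.exp(alpha * c_n) - 1.0) / (np.exp(alpha) - 1.0)

      # c_sim, eps_sim: normalized capacitance/permittivity pairs from simulation
      # (alpha_hat,), _ = curve_fit(exp_norm, c_sim, eps_sim, p0=[1.0])
      # eps_est = exp_norm(c_measured, alpha_hat)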

  41. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays.

    PubMed

    McLachlan, G J; Bean, R W; Jones, L Ben-Tovim

    2006-07-01

    An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or, with more specific assumptions, are computationally intensive. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that models adequately the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
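
    A sketch of the two-component idea on z-scores; for simplicity we fix the null component at N(0, 1), which is an assumption of this sketch (the paper fits a general two-component normal mixture). EM returns, per gene, the posterior probability of being null.

      import numpy as np
      from scipy.stats import norm

      def posterior_null(z, n_iter=200):
          """EM for f(z) = pi0*N(0,1) + (1-pi0)*N(mu1, s1^2), null fixed.
          Returns per-gene posterior probabilities of being null."""
          pi0, mu1, s1 = 0.9, float(np.mean(z)), float(np.std(z)) + 1.0
          for _ in range(n_iter):
              f0 = pi0 * norm.pdf(z)                  # null component
              f1 = (1.0 - pi0) * norm.pdf(z, mu1, s1) # non-null component
              tau0 = f0 / (f0 + f1)                   # E-step: P(null | z_i)
              w1 = 1.0 - tau0                         # M-step updates below
              pi0 = tau0.mean()
              mu1 = (w1 * z).sum() / w1.sum()
              s1 = np.sqrt((w1 * (z - mu1) ** 2).sum() / w1.sum())
          return tau0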

  42. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors Environmetrics Published by John Wiley & Sons Ltd.
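
    A minimal sketch of the predictive density only (our simplification: the linkage of parameters to the ensemble forecasts and the scoring-rule estimation described in the abstract are omitted):

      import numpy as np
      from scipy import stats

      def tn_ln_mixture_pdf(x, w, mu_t, s_t, mu_l, s_l):
          """Weighted mixture of a normal truncated to [0, inf) and a
          log-normal, as a predictive density for non-negative wind speed."""
          tn = stats.truncnorm.pdf(x, a=-mu_t / s_t, b=np.inf, loc=mu_t, scale=s_t)
          ln = stats.lognorm.pdf(x, s=s_l, scale=np.exp(mu_l))
          return w * tn + (1.0 - w) * ln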

  3. Detection of mastitis in dairy cattle by use of mixture models for repeated somatic cell scores: a Bayesian approach via Gibbs sampling.

    PubMed

    Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B

    2003-11-01

    The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
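
    A bare-bones Gibbs sampler for a stripped-down version of this setup (two-component normal mixture with a common variance; the genetic and permanent environmental random effects of the full model are omitted, and the priors are simple defaults):

```python
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(2)
# Illustrative somatic-cell-score-like data: "healthy" and "diseased" components
y = np.concatenate([rng.normal(3.0, 1.0, 400), rng.normal(6.0, 1.0, 100)])
n = len(y)

pm, mu, var = 0.5, np.array([y.min(), y.max()]), y.var()
draws = []
for it in range(2000):
    # 1. Sample component indicators from their Bernoulli full conditionals
    w0 = (1 - pm) * np.exp(-(y - mu[0]) ** 2 / (2 * var))
    w1 = pm * np.exp(-(y - mu[1]) ** 2 / (2 * var))
    z = rng.random(n) < w1 / (w0 + w1)
    # 2. Sample the probability of group membership Pm (uniform prior -> Beta)
    pm = rng.beta(1 + z.sum(), 1 + n - z.sum())
    # 3. Sample component means from their normal full conditionals (flat priors)
    for k, idx in enumerate([~z, z]):
        if idx.sum() > 0:
            mu[k] = rng.normal(y[idx].mean(), np.sqrt(var / idx.sum()))
    if mu[0] > mu[1]:
        mu = mu[::-1].copy(); z = ~z   # crude relabeling against label switching
    # 4. Sample the common variance from its inverse-gamma full conditional
    resid = y - np.where(z, mu[1], mu[0])
    var = invgamma.rvs(a=n / 2, scale=(resid ** 2).sum() / 2, random_state=rng)
    if it >= 500:                      # discard burn-in
        draws.append(pm)

print("posterior mean of Pm:", np.mean(draws))
```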

  4. Quantiles for Finite Mixtures of Normal Distributions

    ERIC Educational Resources Information Center

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
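
    The quantile of a mixture is not the mixture of the component quantiles, which is precisely the distinction emphasized above; numerically, a mixture quantile can be obtained by inverting the mixture CDF. A small sketch with arbitrary example components:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

weights = [0.6, 0.4]
mus, sigmas = [0.0, 5.0], [1.0, 0.5]

def mixture_cdf(x):
    return sum(w * norm.cdf(x, m, s) for w, m, s in zip(weights, mus, sigmas))

def mixture_quantile(p):
    # the mixture quantile always lies between the extreme component quantiles,
    # so they give a safe bracket for root finding
    lo = min(norm.ppf(p, m, s) for m, s in zip(mus, sigmas)) - 1.0
    hi = max(norm.ppf(p, m, s) for m, s in zip(mus, sigmas)) + 1.0
    return brentq(lambda x: mixture_cdf(x) - p, lo, hi)

# ~0.97 here, far from the weighted sum of component medians (0.6*0 + 0.4*5 = 2)
print(mixture_quantile(0.5))
```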

  5. Two-component mixture model: Application to palm oil and exchange rate

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-12-01

    Palm oil is a seed crop widely used in food and non-food products such as cookies, vegetable oil, cosmetics, and household products. It is grown mainly in Malaysia and Indonesia. Demand for palm oil has grown rapidly over the years, encouraging illegal logging and the destruction of natural habitat. The present paper therefore investigates the relationship between the exchange rate and the palm oil price in Malaysia, using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. A mixture of normal distributions is proposed to accommodate the asymmetric and platykurtic characteristics of the time series data.
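
    A sketch of direct maximum likelihood for a two-component normal mixture using a quasi-Newton optimizer (standing in for the authors' Newton-Raphson iteration; the data are simulated, not the palm oil and exchange rate series):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_loglik(theta, x):
    # theta = (logit weight, mu1, mu2, log sd1, log sd2)
    w = 1.0 / (1.0 + np.exp(-theta[0]))
    mu1, mu2 = theta[1], theta[2]
    s1, s2 = np.exp(theta[3]), np.exp(theta[4])
    dens = w * norm.pdf(x, mu1, s1) + (1 - w) * norm.pdf(x, mu2, s2)
    return -np.sum(np.log(dens + 1e-300))

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(1.5, 1.0, 200)])

res = minimize(neg_loglik, x0=np.array([0.0, -0.5, 1.0, 0.0, 0.0]),
               args=(x,), method="BFGS")
w_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
print(w_hat, res.x[1:3], np.exp(res.x[3:5]))   # weight, means, std devs
```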

  6. Latent Partially Ordered Classification Models and Normal Mixtures

    ERIC Educational Resources Information Center

    Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith

    2013-01-01

    Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…

  7. Finding Groups Using Model-Based Cluster Analysis: Heterogeneous Emotional Self-Regulatory Processes and Heavy Alcohol Use Risk

    ERIC Educational Resources Information Center

    Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.

    2008-01-01

    Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models using the Bayesian information criterion to compare multiple models and identify the…
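
    The core loop of such a procedure can be illustrated with scikit-learn: fit Gaussian mixtures of increasing size and keep the model with the lowest BIC (the data here are illustrative, not the study's):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
X = np.vstack([rng.normal([0, 0], 1.0, (150, 2)),
               rng.normal([4, 4], 0.8, (100, 2))])

models = [GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(X) for k in range(1, 7)]
bics = [m.bic(X) for m in models]
best = models[int(np.argmin(bics))]   # lowest BIC wins
print("chosen number of components:", best.n_components)
labels = best.predict(X)              # hard cluster assignments
```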

  8. Mixture model normalization for non-targeted gas chromatography/mass spectrometry metabolomics data.

    PubMed

    Reisetter, Anna C; Muehlbauer, Michael J; Bain, James R; Nodzenski, Michael; Stevens, Robert D; Ilkayeva, Olga; Metzger, Boyd E; Newgard, Christopher B; Lowe, William L; Scholtens, Denise M

    2017-02-02

    Metabolomics offers a unique integrative perspective for health research, reflecting genetic and environmental contributions to disease-related phenotypes. Identifying robust associations in population-based or large-scale clinical studies demands large numbers of subjects and therefore sample batching for gas-chromatography/mass spectrometry (GC/MS) non-targeted assays. When run over weeks or months, technical noise due to batch and run-order threatens data interpretability. Application of existing normalization methods to metabolomics is challenged by unsatisfied modeling assumptions and, notably, failure to address batch-specific truncation of low abundance compounds. To curtail technical noise and make GC/MS metabolomics data amenable to analyses describing biologically relevant variability, we propose mixture model normalization (mixnorm) that accommodates truncated data and estimates per-metabolite batch and run-order effects using quality control samples. Mixnorm outperforms other approaches across many metrics, including improved correlation of non-targeted and targeted measurements and superior performance when metabolite detectability varies according to batch. For some metrics, particularly when truncation is less frequent for a metabolite, mean centering and median scaling demonstrate comparable performance to mixnorm. When quality control samples are systematically included in batches, mixnorm is uniquely suited to normalizing non-targeted GC/MS metabolomics data due to explicit accommodation of batch effects, run order and varying thresholds of detectability. Especially in large-scale studies, normalization is crucial for drawing accurate conclusions from non-targeted GC/MS metabolomics data.

  9. Estimation of Microbial Contamination of Food from Prevalence and Concentration Data: Application to Listeria monocytogenes in Fresh Vegetables▿

    PubMed Central

    Crépet, Amélie; Albert, Isabelle; Dervin, Catherine; Carlin, Frédéric

    2007-01-01

    A normal distribution and a mixture model of two normal distributions in a Bayesian approach using prevalence and concentration data were used to establish the distribution of contamination of the food-borne pathogenic bacteria Listeria monocytogenes in unprocessed and minimally processed fresh vegetables. A total of 165 prevalence studies, including 15 studies with concentration data, were taken from the scientific literature and from technical reports and used for statistical analysis. The predicted mean of the normal distribution of the logarithms of viable L. monocytogenes per gram of fresh vegetables was −2.63 log viable L. monocytogenes organisms/g, and its standard deviation was 1.48 log viable L. monocytogenes organisms/g. These values were determined by treating one sample as contaminated in prevalence studies in which all samples are in fact negative; this deliberate overestimation is necessary to complete the calculations. With the mixture model, the predicted mean of the distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −3.38 log viable L. monocytogenes organisms/g and its standard deviation was 1.46 log viable L. monocytogenes organisms/g. The probabilities of fresh unprocessed and minimally processed vegetables being contaminated at concentrations higher than 1, 2, and 3 log viable L. monocytogenes organisms/g were 1.44, 0.63, and 0.17%, respectively. Introducing a sensitivity rate of 80% or 95% in the mixture model had a small effect on the estimation of the contamination. In contrast, introducing a low sensitivity rate (40%) resulted in marked differences, especially for high percentiles. Contamination estimates were significantly lower in the papers and reports of 2000 to 2005 than in those of 1988 to 1999, and lower for leafy salads than for sprouts and other vegetables. The value of the mixture model for estimating microbial contamination is discussed. PMID:17098926

  10. Realized Volatility Analysis in A Spin Model of Financial Markets

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    We calculate the realized volatility of returns in the spin model of financial markets and examine the returns standardized by the realized volatility. We find that moments of the standardized returns agree with the theoretical values of standard normal variables. This is the first evidence that the return distributions of the spin financial markets are consistent with a finite-variance mixture of normal distributions, as is also observed empirically in real financial markets.

  11. The Manhattan Frame Model-Manhattan World Inference in the Space of Surface Normals.

    PubMed

    Straub, Julian; Freifeld, Oren; Rosman, Guy; Leonard, John J; Fisher, John W

    2018-01-01

    Objects and structures within man-made environments typically exhibit a high degree of organization in the form of orthogonal and parallel planes. Traditional approaches utilize these regularities via the restrictive, and rather local, Manhattan World (MW) assumption which posits that every plane is perpendicular to one of the axes of a single coordinate system. The aforementioned regularities are especially evident in the surface normal distribution of a scene where they manifest as orthogonally-coupled clusters. This motivates the introduction of the Manhattan-Frame (MF) model which captures the notion of an MW in the surface normals space, the unit sphere, and two probabilistic MF models over this space. First, for a single MF we propose novel real-time MAP inference algorithms, evaluate their performance and their use in drift-free rotation estimation. Second, to capture the complexity of real-world scenes at a global scale, we extend the MF model to a probabilistic mixture of Manhattan Frames (MMF). For MMF inference we propose a simple MAP inference algorithm and an adaptive Markov-Chain Monte-Carlo sampling algorithm with Metropolis-Hastings split/merge moves that let us infer the unknown number of mixture components. We demonstrate the versatility of the MMF model and inference algorithm across several scales of man-made environments.

  12. Approximation of a radial diffusion model with a multiple-rate model for hetero-disperse particle mixtures

    PubMed Central

    Ju, Daeyoung; Young, Thomas M.; Ginn, Timothy R.

    2012-01-01

    An innovative method is proposed for approximating the set of radial diffusion equations governing mass exchange between the aqueous bulk phase and the intra-particle phase for a hetero-disperse mixture of particles, such as occur in suspension in surface water, in riverine/estuarine sediment beds, in soils, and in aquifer materials. For this purpose, the temporal variation of concentration at several uniformly distributed points within a normalized representative particle of spherical, cylindrical, or planar shape is fitted with a 2-domain linear reversible mass exchange model. The approximation is then superposed in order to generalize the model to a hetero-disperse mixture of particles. The method can significantly reduce the computational effort needed to solve the intra-particle mass exchange of a hetero-disperse mixture of particles, and the error due to the approximation is shown to be relatively small. The method is applied to describe a desorption batch experiment of 1,2-dichlorobenzene from four different soils with known particle size distributions, and it produces good agreement with the experimental data. PMID:18304692

  13. Intestinal absorption of an arginine-containing peptide in cystinuria

    PubMed Central

    Asatoor, A. M.; Harrison, B. D. W.; Milne, M. D.; Prosser, D. I.

    1972-01-01

    Separate tolerance tests involving oral intake of the dipeptide, L-arginyl-L-aspartate, and of a corresponding free amino acid mixture, were carried out in a single type 2 cystinuric patient. Absorption of aspartate was within normal limits, whilst that of arginine was normal after the peptide but considerably reduced after the amino acid mixture. The results are compared with the increments of serum arginine found in eight normal subjects after the oral intake of the free amino acid mixture. Analyses of urinary pyrrolidine and of tetramethylenediamine in urine samples obtained after the two tolerance tests in the patient support the view that arginine absorption was subnormal after the amino acid mixture but within normal limits after the dipeptide. PMID:5045711

  14. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA.

    PubMed

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-06-01

    We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models, including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and that a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.

  15. Modeling of active transmembrane transport in a mixture theory framework.

    PubMed

    Ateshian, Gerard A; Morrison, Barclay; Hung, Clark T

    2010-05-01

    This study formulates governing equations for active transport across semi-permeable membranes within the framework of the theory of mixtures. In mixture theory, which models the interactions of any number of fluid and solid constituents, a supply term appears in the conservation of linear momentum to describe momentum exchanges among the constituents. In past applications, this momentum supply was used to model frictional interactions only, thereby describing passive transport processes. In this study, it is shown that active transport processes, which impart momentum to solutes or solvent, may also be incorporated in this term. By projecting the equation of conservation of linear momentum along the normal to the membrane, a jump condition is formulated for the mechano-electrochemical potential of fluid constituents which is generally applicable to nonequilibrium processes involving active transport. The resulting relations are simple and easy to use, and address an important need in the membrane transport literature.

  16. Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.

    PubMed

    Shireman, Emilie; Steinley, Douglas; Brusco, Michael J

    2017-02-01

    Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
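
    A small experiment in the spirit of this comparison, contrasting two initialization strategies available in scikit-learn (k-means versus random starts) by the final log-likelihood reached and a rough count of distinct local optima:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 0.6, (120, 2))
               for m in ([0, 0], [2.5, 0], [0, 2.5])])

for init in ("kmeans", "random"):
    scores = []
    for seed in range(20):                  # 20 different sets of start values
        gm = GaussianMixture(n_components=3, init_params=init,
                             n_init=1, random_state=seed).fit(X)
        scores.append(gm.lower_bound_)      # final per-sample log-likelihood
    print(f"{init:7s} best={max(scores):.4f} "
          f"distinct_solutions~{len(np.unique(np.round(scores, 6)))}")
```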

  17. Mixture models in diagnostic meta-analyses--clustering summary receiver operating characteristic curves accounted for heterogeneity and correlation.

    PubMed

    Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario

    2015-01-01

    Bivariate linear and generalized linear random-effects models are frequently used to perform diagnostic meta-analyses. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but clearly different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities differ such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to modeling between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Use of a glimpsing model to understand the performance of listeners with and without hearing loss in spatialized speech mixtures

    PubMed Central

    Best, Virginia; Mason, Christine R.; Swaminathan, Jayaganesh; Roverud, Elin; Kidd, Gerald

    2017-01-01

    In many situations, listeners with sensorineural hearing loss demonstrate reduced spatial release from masking compared to listeners with normal hearing. This deficit is particularly evident in the “symmetric masker” paradigm in which competing talkers are located to either side of a central target talker. However, there is some evidence that reduced target audibility (rather than a spatial deficit per se) under conditions of spatial separation may contribute to the observed deficit. In this study a simple “glimpsing” model (applied separately to each ear) was used to isolate the target information that is potentially available in binaural speech mixtures. Intelligibility of these glimpsed stimuli was then measured directly. Differences between normally hearing and hearing-impaired listeners observed in the natural binaural condition persisted for the glimpsed condition, despite the fact that the task no longer required segregation or spatial processing. This result is consistent with the idea that the performance of listeners with hearing loss in the spatialized mixture was limited by their ability to identify the target speech based on sparse glimpses, possibly as a result of some of those glimpses being inaudible. PMID:28147587

  19. A multivariate spatial mixture model for areal data: examining regional differences in standardized test scores

    PubMed Central

    Neelon, Brian; Gelfand, Alan E.; Miranda, Marie Lynn

    2013-01-01

    Summary: Researchers in the health and social sciences often wish to examine joint spatial patterns for two or more related outcomes. Examples include infant birth weight and gestational length, psychosocial and behavioral indices, and educational test scores from different cognitive domains. We propose a multivariate spatial mixture model for the joint analysis of continuous individual-level outcomes that are referenced to areal units. The responses are modeled as a finite mixture of multivariate normals, which accommodates a wide range of marginal response distributions and allows investigators to examine covariate effects within subpopulations of interest. The model has a hierarchical structure built at the individual level (i.e., individuals are nested within areal units), and thus incorporates both individual- and areal-level predictors as well as spatial random effects for each mixture component. Conditional autoregressive (CAR) priors on the random effects provide spatial smoothing and allow the shape of the multivariate distribution to vary flexibly across geographic regions. We adopt a Bayesian modeling approach and develop an efficient Markov chain Monte Carlo model fitting algorithm that relies primarily on closed-form full conditionals. We use the model to explore geographic patterns in end-of-grade math and reading test scores among school-age children in North Carolina. PMID:26401059

  20. Mixed and Mixture Regression Models for Continuous Bounded Responses Using the Beta Distribution

    ERIC Educational Resources Information Center

    Verkuilen, Jay; Smithson, Michael

    2012-01-01

    Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…

  1. A Bayesian Beta-Mixture Model for Nonparametric IRT (BBM-IRT)

    ERIC Educational Resources Information Center

    Arenson, Ethan A.; Karabatsos, George

    2017-01-01

    Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, which are strictly monotone functions of person test ability. Such assumptions can be overly-restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…

  2. Evaluating sufficient similarity for drinking-water disinfection by-product (DBP) mixtures with bootstrap hypothesis test procedures.

    PubMed

    Feder, Paul I; Ma, Zhenxu J; Bull, Richard J; Teuschler, Linda K; Rice, Glenn

    2009-01-01

    In chemical mixtures risk assessment, the use of dose-response data developed for one mixture to estimate risk posed by a second mixture depends on whether the two mixtures are sufficiently similar. While evaluations of similarity may be made using qualitative judgments, this article uses nonparametric statistical methods based on the "bootstrap" resampling technique to address the question of similarity among mixtures of chemical disinfectant by-products (DBP) in drinking water. The bootstrap resampling technique is a general-purpose, computer-intensive approach to statistical inference that substitutes empirical sampling for theoretically based parametric mathematical modeling. Nonparametric, bootstrap-based inference involves fewer assumptions than parametric normal theory based inference. The bootstrap procedure is appropriate, at least in an asymptotic sense, whether or not the parametric, distributional assumptions hold, even approximately. The statistical analysis procedures in this article are initially illustrated with data from 5 water treatment plants (Schenck et al., 2009), and then extended using data developed from a study of 35 drinking-water utilities (U.S. EPA/AMWA, 1989), which permits inclusion of a greater number of water constituents and increased structure in the statistical models.
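
    A toy version of the bootstrap idea for judging whether two mixtures could come from the same distribution, using a Kolmogorov-Smirnov-type statistic under pooled resampling (the DBP data and the authors' exact statistics are not reproduced here):

```python
import numpy as np

def max_cdf_diff(a, b):
    """Kolmogorov-Smirnov-type distance between two empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(Fa - Fb).max()

def bootstrap_pvalue(a, b, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    observed = max_cdf_diff(a, b)
    pooled = np.concatenate([a, b])   # resample under H0: same parent mixture
    count = 0
    for _ in range(n_boot):
        a_star = rng.choice(pooled, size=len(a), replace=True)
        b_star = rng.choice(pooled, size=len(b), replace=True)
        count += max_cdf_diff(a_star, b_star) >= observed
    return count / n_boot

rng = np.random.default_rng(6)
mix1 = rng.choice([0.0, 3.0], 200) + rng.normal(0, 1, 200)   # two similar mixtures
mix2 = rng.choice([0.2, 3.1], 180) + rng.normal(0, 1, 180)
print(bootstrap_pvalue(mix1, mix2))
```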

  3. Internal structure of shock waves in disparate mass mixtures

    NASA Technical Reports Server (NTRS)

    Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.

    1992-01-01

    The detailed flow structure of a normal shock wave for a gas mixture is investigated using the direct-simulation Monte Carlo method. A variable diameter hard-sphere (VDHS) model is employed to investigate the effect of different viscosity temperature exponents (VTE) for each species in a gas mixture. Special attention is paid to the irregular behavior in the density profiles which was previously observed in a helium-xenon experiment. It is shown that the VTE can have substantial effects in the prediction of the structure of shock waves. The variable hard-sphere model of Bird shows good agreement, but with some limitations, with the experimental data if a common VTE is chosen properly for each case. The VDHS model shows better agreement with the experimental data without adjusting the VTE. The irregular behavior of the light-gas component in shock waves of disparate mass mixtures is observed not only in the density profile, but also in the parallel temperature profile. The strength of the shock wave, the type of molecular interactions, and the mole fraction of heavy species have substantial effects on the existence and structure of the irregularities.

  4. Continuous plutonium dissolution apparatus

    DOEpatents

    Meyer, F.G.; Tesitor, C.N.

    1974-02-26

    This invention concerns the continuous dissolution of metals such as plutonium. A high-normality acid mixture is fed into a boiler vessel, vaporized, and subsequently condensed as a low-normality acid mixture. This condensate is conveyed to a dissolution vessel, where it contacts and dissolves the plutonium metal, forming plutonium nitrate. The reaction products are then conveyed to the mixing vessel, where the high-normality acid keeps them soluble, and the desired constituent is separated and removed. (Official Gazette)

  5. A comparative study of mixture cure models with covariate

    NASA Astrophysics Data System (ADS)

    Leng, Oh Yit; Khalid, Zarina Mohd

    2017-05-01

    In survival analysis, the survival time is assumed to follow a non-negative distribution, such as the exponential, Weibull, or log-normal distribution. In some cases, the survival time is influenced by observed factors, and omitting them may cause inaccurate estimation of the survival function. A survival model which incorporates the influence of observed factors, included as covariates, is therefore more appropriate in such cases. Moreover, there are cases in which a group of individuals is cured, that is, never experiences the event of interest; ignoring this cure fraction may lead to overestimation in the survival function. A mixture cure model is thus more suitable for modelling survival data in the presence of a cure fraction. In this study, three mixture cure survival models are used to analyse survival data with a covariate and a cure fraction. The first model includes the covariate in the parameterization of the survival function of susceptible individuals, the second model allows the cure fraction to depend on the covariate, and the third model incorporates the covariate in both the cure fraction and the survival function of susceptible individuals. This study aims to compare the performance of these models via a simulation approach. Survival data with varying sample sizes and cure fractions are therefore simulated, with the survival time assumed to follow the Weibull distribution. The simulated data are then modelled using the three mixture cure survival models. The results show that the three mixture cure models are appropriate for modelling survival data in the presence of a cure fraction and an observed factor.
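
    A sketch of the third model's structure (covariate in both the cure fraction and the susceptible survival function), with a logit link for the cure fraction, a Weibull survival part, and maximum likelihood fitting; all names, link choices, and parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, t, event, x):
    b0, b1, g0, g1, log_shape = theta
    pi = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))     # cure fraction, logit link
    lam = np.exp(g0 + g1 * x)                     # Weibull scale for susceptibles
    k = np.exp(log_shape)
    Su = np.exp(-(t / lam) ** k)                  # susceptible survival function
    fu = (k / lam) * (t / lam) ** (k - 1) * Su    # susceptible density
    # events come only from susceptibles; censored subjects may be cured
    ll = np.where(event == 1,
                  np.log((1 - pi) * fu + 1e-300),
                  np.log(pi + (1 - pi) * Su + 1e-300))
    return -ll.sum()

# Simulate: cured subjects never fail; everyone is censored at tau = 5
rng = np.random.default_rng(7)
n, tau = 500, 5.0
x = rng.binomial(1, 0.5, n)
pi_true = 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * x)))
cured = rng.random(n) < pi_true
t_lat = rng.weibull(1.5, n) * np.exp(0.3 + 0.4 * x)
t = np.where(cured, tau, np.minimum(t_lat, tau))
event = ((~cured) & (t_lat < tau)).astype(int)

res = minimize(neg_loglik, np.zeros(5), args=(t, event, x),
               method="Nelder-Mead", options={"maxiter": 20000})
print(res.x)   # (b0, b1) cure part, (g0, g1) scale part, log shape
```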

  6. Identification of Allelic Imbalance with a Statistical Model for Subtle Genomic Mosaicism

    PubMed Central

    Xia, Rui; Vattathil, Selina; Scheet, Paul

    2014-01-01

    Genetic heterogeneity in a mixed sample of tumor and normal DNA can confound characterization of the tumor genome. Numerous computational methods have been proposed to detect aberrations in DNA samples from tumor and normal tissue mixtures. Most of these require tumor purities to be at least 10–15%. Here, we present a statistical model to capture information, contained in the individual's germline haplotypes, about expected patterns in the B allele frequencies from SNP microarrays while fully modeling their magnitude, the first such model for SNP microarray data. Our model consists of a pair of hidden Markov models—one for the germline and one for the tumor genome—which, conditional on the observed array data and patterns of population haplotype variation, have a dependence structure induced by the relative imbalance of an individual's inherited haplotypes. Together, these hidden Markov models offer a powerful approach for dealing with mixtures of DNA where the main component represents the germline, thus suggesting natural applications for the characterization of primary clones when stromal contamination is extremely high, and for identifying lesions in rare subclones of a tumor when tumor purity is sufficient to characterize the primary lesions. Our joint model for germline haplotypes and acquired DNA aberration is flexible, allowing a large number of chromosomal alterations, including balanced and imbalanced losses and gains, copy-neutral loss-of-heterozygosity (LOH) and tetraploidy. We found our model (which we term J-LOH) to be superior for localizing rare aberrations in a simulated 3% mixture sample. More generally, our model provides a framework for full integration of the germline and tumor genomes to deal more effectively with missing or uncertain features, and thus extract maximal information from difficult scenarios where existing methods fail. PMID:25166618

  7. Combining Mixture Components for Clustering*

    PubMed Central

    Baudry, Jean-Patrick; Raftery, Adrian E.; Celeux, Gilles; Lo, Kenneth; Gottardo, Raphaël

    2010-01-01

    Model-based clustering consists of fitting a mixture model to data and identifying each cluster with one of its components. Multivariate normal distributions are typically used. The number of clusters is usually determined from the data, often using BIC. In practice, however, individual clusters can be poorly fitted by Gaussian distributions, and in that case model-based clustering tends to represent one non-Gaussian cluster by a mixture of two or more Gaussian distributions. If the number of mixture components is interpreted as the number of clusters, this can lead to overestimation of the number of clusters. This is because BIC selects the number of mixture components needed to provide a good approximation to the density, rather than the number of clusters as such. We propose first selecting the total number of Gaussian mixture components, K, using BIC and then combining them hierarchically according to an entropy criterion. This yields a unique soft clustering for each number of clusters less than or equal to K. These clusterings can be compared on substantive grounds, and we also describe an automatic way of selecting the number of clusters via a piecewise linear regression fit to the rescaled entropy plot. We illustrate the method with simulated data and a flow cytometry dataset. Supplemental Materials are available on the journal Web site and described at the end of the paper. PMID:20953302
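
    A simplified rendition of the two-stage procedure: select K by BIC, then repeatedly merge the pair of components whose combination yields the lowest entropy of the soft classification (this mirrors the criterion described, not the authors' code, and omits their regression-based choice of the final number of clusters):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def entropy(tau):
    return -np.sum(tau * np.log(tau + 1e-12))

rng = np.random.default_rng(8)
# One curved, non-Gaussian cluster plus one Gaussian cluster
a = rng.normal(0, 1, (200, 2)); a[:, 1] += 0.7 * a[:, 0] ** 2
X = np.vstack([a, rng.normal([6, 3], 0.7, (150, 2))])

gm = min((GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 8)), key=lambda m: m.bic(X))
tau = gm.predict_proba(X)              # soft classification with K columns
print("K chosen by BIC:", tau.shape[1])

while tau.shape[1] > 1:
    K = tau.shape[1]
    # pick the pair whose merge gives the lowest classification entropy
    best = min(((i, j) for i in range(K) for j in range(i + 1, K)),
               key=lambda ij: entropy(np.column_stack(
                   [np.delete(tau, list(ij), axis=1),
                    tau[:, ij[0]] + tau[:, ij[1]]])))
    merged = np.column_stack([np.delete(tau, list(best), axis=1),
                              tau[:, best[0]] + tau[:, best[1]]])
    print(f"merge {best}: entropy {entropy(tau):.1f} -> {entropy(merged):.1f}")
    tau = merged
```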

  8. Human Language Technology: Opportunities and Challenges

    DTIC Science & Technology

    2005-01-01

    because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ...to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using...maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with

  9. Determining the Number of Component Clusters in the Standard Multivariate Normal Mixture Model Using Model-Selection Criteria.

    DTIC Science & Technology

    1983-06-16

    has been advocated by Gnanadesikan and Wilk (1969), and others in the literature. This suggests that, if we use the formal significance test type...American Statistical Asso., 62, 1159-1178. Gnanadesikan, R., and Wilk, M. B. (1969). Data Analytic Methods in Multivariate Statistical Analysis. In

  10. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA*

    PubMed Central

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-01-01

    We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models, including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and that a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474

  11. Bayesian mixture modeling for blood sugar levels of diabetes mellitus patients (case study in RSUD Saiful Anwar Malang Indonesia)

    NASA Astrophysics Data System (ADS)

    Budi Astuti, Ani; Iriawan, Nur; Irhamah; Kuswanto, Heri; Sasiarini, Laksmi

    2017-10-01

    Bayesian statistics offers an approach that is very flexible with respect to the number of samples and the distribution of the data. The Bayesian Mixture Model (BMM) is a Bayesian approach for multimodal models. Diabetes Mellitus (DM) is commonly known in the Indonesian community as "sweet pee". It is a chronic non-communicable disease, but it is very dangerous to humans because of the complications it causes. A 2013 WHO report ranked DM sixth among the leading causes of human death worldwide. In Indonesia, DM continues to increase over time. This research studies the patterns of DM data and builds BMM models of it through simulation studies, where the simulated data are built on cases of blood sugar levels of DM patients in RSUD Saiful Anwar Malang. The results successfully demonstrate the pattern of the distribution of the DM data, which follows a normal mixture distribution. The BMM models succeed in accommodating the real condition of the DM data based on the data-driven concept.

  12. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data.

    PubMed

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2014-06-01

    Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern.

    The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However, most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain.

    To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) evaluate mixtures (including dependencies), and (3) identify determinants of VOC exposure.

    METHODS: VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999-2001) and the National Health and Nutrition Examination Survey (NHANES; 1999-2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods.

    Specific Aim 1. To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model's goodness of fit. Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data.

    Specific Aim 2. Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture's components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations.

    Specific Aim 3. Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs).

    RESULTS: Specific Aim 1. Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10^(-4), and 13% of all participants had risk levels above 10^(-2). Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models. DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture distributions, adaptively selecting the number of needed clusters, and providing uncertainty estimates. Although the results apply to the RIOPA data set, GEV distributions and mixture models appear more broadly applicable. These models can be used to simulate VOC distributions, which are neither normally nor lognormally distributed, and they accurately represent the highest exposures, which may have the greatest health significance.

    Specific Aim 2. Four VOC mixtures were identified and apportioned by PMF; they represented gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection byproducts, and cleaning products and odorants. The last mixture (cleaning products and odorants) accounted for the largest fraction of an individual's total exposure (average of 42% across RIOPA participants). Often, a single compound dominated a mixture but the mixture fractions were heterogeneous; that is, the fractions of the compounds changed with the concentration of the mixture. Three VOC mixtures were identified by toxicologic mode of action and represented VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10^(-3) for about 10% of RIOPA participants. The dependency structures of the VOC mixtures in the RIOPA data set fitted Gumbel (two mixtures) and t copulas (four mixtures). These copula types emphasize dependencies found in the upper and lower tails of a distribution. The copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions.

    Specific Aim 3. In an analysis focused on the home environment and the outdoor (close to home) environment, home VOC concentrations dominated personal exposures (66% to 78% of the total exposure, depending on VOC); this was largely the result of the amount of time participants spent at home and the fact that indoor concentrations were much higher than outdoor concentrations for most VOCs. In a different analysis focused on the sources inside the home and outside (but close to the home), it was assumed that 100% of VOCs from outside sources would penetrate the home. Outdoor VOC sources accounted for 5% (d-limonene) to 81% (carbon tetrachloride [CTC]) of the total exposure. Personal exposure and indoor measurements had similar determinants depending on the VOC. Gasoline-related VOCs (e.g., benzene and methyl tert-butyl ether [MTBE]) were associated with city, residences with attached garages, pumping gas, wind speed, and home air exchange rate (AER). Odorant and cleaning-related VOCs (e.g., 1,4-DCB and chloroform) also were associated with city, and a residence's AER, size, and family members showering. Dry-cleaning and industry-related VOCs (e.g., tetrachloroethylene [or perchloroethylene, PERC] and trichloroethylene [TCE]) were associated with city, type of water supply to the home, and visits to the dry cleaner. These and other relationships were significant, they explained from 10% to 40% of the variance in the measurements, and are consistent with known emission sources and those reported in the literature. Outdoor concentrations of VOCs had only two determinants in common: city and wind speed. Overall, personal exposure was dominated by the home setting, although a large fraction of indoor VOC concentrations were due to outdoor sources. City of residence, personal activities, household characteristics, and meteorology were significant determinants. Concentrations in RIOPA were considerably lower than levels in the nationally representative NHANES for all VOCs except MTBE and 1,4-DCB. Differences between RIOPA and NHANES results can be explained by contrasts between the sampling designs and staging in the two studies, and by differences in the demographics, smoking, employment, occupations, and home locations. (ABSTRACT TRUNCATED)
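
    The extreme-value piece of Specific Aim 1 is easy to illustrate with scipy: fit a three-parameter GEV to the top decile of a simulated exposure variable and compare its tail quantile with a lognormal fit (illustrative data, not RIOPA measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
expo = rng.lognormal(0.5, 1.0, 5000) + rng.pareto(3.0, 5000)   # heavy upper tail
top = np.sort(expo)[-len(expo) // 10:]       # highest 10% of exposures

gev = stats.genextreme.fit(top)              # 3-parameter GEV (shape, loc, scale)
logn = stats.lognorm.fit(expo, floc=0)       # lognormal fitted to all data

# the 99th percentile within the top decile ~ the overall 99.9th percentile
print("GEV 99.9th pct:      ", stats.genextreme.ppf(0.99, *gev))
print("lognormal 99.9th pct:", stats.lognorm.ppf(0.999, *logn))
print("empirical 99.9th pct:", np.quantile(expo, 0.999))
```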

  13. Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong

    2018-03-01

    Traffic safety research has developed spatiotemporal models to explore the variations in the spatial pattern of crash risk over time. Many studies observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research comparing different temporal treatments and their interactions with the spatial component. This study developed four spatiotemporal models of varying complexity, with different temporal treatments: (I) linear time trend; (II) quadratic time trend; (III) Autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a two-component mixture for the space-time interaction which allows greater flexibility than the traditional linear space-time interaction. The mixture component accommodates a global space-time interaction as well as departures from the overall spatial and temporal risk patterns. The mixture models were comprehensively assessed using diverse criteria pertaining to goodness-of-fit, cross-validation, and in-sample evaluation of the predictive accuracy of crash estimates. In terms of goodness-of-fit, the time-adjacency specification was clearly superior; although it is more complex because it borrows information from neighboring years, the additional parameters yielded a substantially lower posterior deviance, which benefited the overall fit to the crash data. Base models were also developed to compare the proposed mixture with the traditional space-time component for each temporal model. The mixture models consistently outperformed the corresponding Base models, with much lower deviance. In the cross-validation comparison of predictive accuracy, the linear time trend model performed best, recording the highest log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for validation using the same data employed for model development. Under each criterion, observed crash counts were compared with three types of data: Bayesian estimates, normal predictions, and model replicates. The linear model again performed best in most scenarios, except one case using model-replicated data and two cases involving prediction without random effects. This indicates the mediocre performance of the linear trend when random effects are excluded, possibly because the flexible mixture space-time interaction efficiently absorbs residual variability that escapes the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further supported the superiority of the mixture models, which generated more precise estimated crash counts across all four models, suggesting that the advantages of the mixture component at the model-fitting stage carry over to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of random-effects models, validating the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers the number of defective units in an arrival order to be a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. We also assume that the backorder rate depends on the length of the lead time through the amount of shortages, and we let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions; we then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution-free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are given to illustrate the results.
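
    The expected-shortage building block of such models has a closed form when lead time demand is a mixture of normals; a sketch follows (the parameters are illustrative, and the backorder-rate control and the minimax distribution-free step are omitted):

```python
import numpy as np
from scipy.stats import norm

def normal_loss(r, mu, sigma):
    """E[(X - r)^+] for X ~ N(mu, sigma^2): the standard normal loss function."""
    z = (r - mu) / sigma
    return sigma * (norm.pdf(z) - z * (1.0 - norm.cdf(z)))

def mixture_expected_shortage(r, weights, mus, sigmas):
    """Expected shortage when lead time demand is a mixture of normals."""
    return sum(w * normal_loss(r, m, s)
               for w, m, s in zip(weights, mus, sigmas))

# Lead time demand: mixture of a "normal supply" and a "disrupted" regime
weights, mus, sigmas = [0.8, 0.2], [100.0, 140.0], [12.0, 20.0]
for r in (110, 130, 150):   # candidate reorder points
    print(r, mixture_expected_shortage(r, weights, mus, sigmas))
```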

  15. Effect of the NACA Injection Impeller on the Mixture Distribution of a Double-row Radial Aircraft Engine

    NASA Technical Reports Server (NTRS)

    Marble, Frank E.; Ritter, William K.; Miller, Mahlon A.

    1946-01-01

    For the normal range of engine power the impeller provided marked improvement over the standard spray-bar injection system. Mixture distribution at cruising was excellent, maximum cylinder temperatures were reduced about 30 degrees F, and general temperature distribution was improved. The uniform mixture distribution restored the normal response of cylinder temperature to mixture enrichment and it reduced the possibility of carburetor icing, while no serious loss in supercharger pressure rise resulted from injection of fuel near the impeller outlet. The injection impeller also furnished a convenient means of adding water to the charge mixture for internal cooling.

  16. Item selection via Bayesian IRT models.

    PubMed

    Arima, Serena

    2015-02-10

    With reference to a questionnaire that aimed to assess the quality of life for dysarthric speakers, we investigate the usefulness of a model-based procedure for reducing the number of items. We propose a mixed cumulative logit model, which is known in the psychometrics literature as the graded response model: responses to different items are modelled as a function of individual latent traits and as a function of item characteristics, such as their difficulty and their discrimination power. We jointly model the discrimination and the difficulty parameters by using a k-component mixture of normal distributions. Mixture components correspond to disjoint groups of items. Items that belong to the same groups can be considered equivalent in terms of both difficulty and discrimination power. According to decision criteria, we select a subset of items such that the reduced questionnaire is able to provide the same information that the complete questionnaire provides. The model is estimated by using a Bayesian approach, and the choice of the number of mixture components is justified according to information criteria. We illustrate the proposed approach on the basis of data that are collected for 104 dysarthric patients by local health authorities in Lecce and in Milan. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Reduced-order modelling for high-pressure transient flow of hydrogen-natural gas mixture

    NASA Astrophysics Data System (ADS)

    Agaie, Baba G.; Khan, Ilyas; Alshomrani, Ali Saleh; Alqahtani, Aisha M.

    2017-05-01

    In this paper, the transient flow of a hydrogen compressed-natural-gas (HCNG) mixture, also referred to as a hydrogen-natural gas mixture, in a pipeline is computed numerically using the reduced-order modelling technique. The study of transient conditions is important because pipeline flows are normally unsteady due to the sudden opening and closure of control valves, yet most existing studies analyse the flow only under steady-state conditions. The mathematical model consists of a set of non-linear partial differential equations in conservation form. The objective of this paper is to improve the accuracy of the prediction of HCNG transient flow parameters using Reduced-Order Modelling (ROM). The ROM technique has been used successfully for single-gas and aerodynamic flow problems, but it has not previously been applied to gas mixtures. The study is based on the velocity change created by the operation of the valves upstream and downstream of the pipeline. Results on the flow characteristics, namely the pressure, density, celerity and mass flux, are presented for variations of the mixing ratio and of the valve reaction and actuation times; the computational time-cost advantages of the ROM are also presented.

  18. Normal uniform mixture differential gene expression detection for cDNA microarrays

    PubMed Central

    Dean, Nema; Raftery, Adrian E

    2005-01-01

    Background: One of the primary tasks in analysing gene expression data is finding genes that are differentially expressed in different samples. Multiple testing issues due to the thousands of tests run make some of the more popular methods for doing this problematic.

    Results: We propose a simple method, Normal Uniform Differential Gene Expression (NUDGE) detection, for finding differentially expressed genes in cDNA microarrays. The method uses a simple univariate normal-uniform mixture model, in combination with new normalization methods for spread as well as mean that extend the lowess normalization of Dudoit, Yang, Callow and Speed (2002) [1]. It takes account of multiple testing, and gives probabilities of differential expression as part of its output. It can be applied to either single-slide or replicated experiments, and it is very fast. Three datasets are analyzed using NUDGE, and the results are compared to those given by other popular methods: unadjusted and Bonferroni-adjusted t tests, Significance Analysis of Microarrays (SAM), and Empirical Bayes for microarrays (EBarrays) with both Gamma-Gamma and Lognormal-Normal models.

    Conclusion: The method gives a high probability of differential expression to genes known/suspected a priori to be differentially expressed and a low probability to the others. In terms of known false positives and false negatives, the method outperforms all multiple-replicate methods except for the Gamma-Gamma EBarrays method, to which it offers comparable results with the added advantages of greater simplicity, speed, fewer assumptions and applicability to the single replicate case. An R package called nudge to implement the methods in this paper will be made available soon. PMID:16011807
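
    The heart of the model is a univariate normal-uniform mixture on normalized log-ratios; a compact EM sketch, with the simplifying assumption that the uniform support is fixed to the data range (the normalization steps and the R package itself are not reproduced):

```python
import numpy as np
from scipy.stats import norm, uniform

def nudge_em(d, n_iter=200):
    """f(d) = p*N(0, sigma^2) + (1-p)*U(min, max); returns P(differential)."""
    lo, hi = d.min(), d.max()
    u = uniform.pdf(d, loc=lo, scale=hi - lo)   # fixed uniform component
    p, sigma = 0.9, np.std(d)
    for _ in range(n_iter):
        f0 = p * norm.pdf(d, 0.0, sigma)        # null: no differential expression
        f1 = (1 - p) * u
        tau0 = f0 / (f0 + f1)
        p = tau0.mean()
        sigma = np.sqrt(np.sum(tau0 * d ** 2) / tau0.sum())
    return 1 - tau0                             # posterior prob. of diff. expression

rng = np.random.default_rng(10)
d = np.concatenate([rng.normal(0, 0.3, 950), rng.uniform(-3, 3, 50)])
prob_de = nudge_em(d)
print((prob_de > 0.5).sum(), "genes flagged")
```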

  19. CFD Modeling of Helium Pressurant Effects on Cryogenic Tank Pressure Rise Rates in Normal Gravity

    NASA Technical Reports Server (NTRS)

    Grayson, Gary; Lopez, Alfredo; Chandler, Frank; Hastings, Leon; Hedayat, Ali; Brethour, James

    2007-01-01

    A recently developed computational fluid dynamics modeling capability for cryogenic tanks is used to simulate both self-pressurization from external heating and also depressurization from thermodynamic vent operation. Axisymmetric models using a modified version of the commercially available FLOW-3D software are used to simulate actual physical tests. The models assume an incompressible liquid phase with density that is a function of temperature only. A fully compressible formulation is used for the ullage gas mixture that contains both condensable vapor and a noncondensable gas component. The tests, conducted at the NASA Marshall Space Flight Center, include both liquid hydrogen and nitrogen in tanks with ullage gas mixtures of each liquid's vapor and helium. Pressure and temperature predictions from the model are compared to sensor measurements from the tests and a good agreement is achieved. This further establishes the accuracy of the developed FLOW-3D based modeling approach for cryogenic systems.

  20. The Regular Interaction Pattern among Odorants of the Same Type and Its Application in Odor Intensity Assessment.

    PubMed

    Yan, Luchun; Liu, Jiemin; Jiang, Shen; Wu, Chuandong; Gao, Kewei

    2017-07-13

    The olfactory evaluation function (e.g., odor intensity rating) of the e-nose has always been one of the most challenging issues in research on odor pollution monitoring. Odor is normally produced by a set of stimuli, and odor interactions among constituents significantly influence the mixture's odor intensity. This study investigated the odor interaction principle in odor mixtures of aldehydes and esters, respectively. A modified vector model (MVM) was then proposed, and it successfully demonstrated the similarity of the odor interaction pattern among odorants of the same type. Based on this regular interaction pattern, and unlike the determined empirical models of conventional approaches that fit only a specific odor mixture, the MVM distinctly simplified the odor intensity prediction of odor mixtures. Furthermore, the MVM also provided a way of directly converting constituents' chemical concentrations to their mixture's odor intensity. By combining the MVM with the usual data-processing algorithm of the e-nose, a new e-nose system was established for odor intensity rating. Compared with instrumental analysis and human assessors, it exhibited good accuracy in both quantitative analysis (Pearson correlation coefficient was 0.999 for individual aldehydes (n = 12), 0.996 for their binary mixtures (n = 36) and 0.990 for their ternary mixtures (n = 60)) and odor intensity assessment (Pearson correlation coefficient was 0.980 for individual aldehydes (n = 15), 0.973 for their binary mixtures (n = 24), and 0.888 for their ternary mixtures (n = 25)). Thus, the observed regular interaction pattern is considered an important foundation for accelerating the extensive application of olfactory evaluation in odor pollution monitoring.
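
    The MVM itself is specified in the paper; for orientation, the classical vector model it modifies combines two constituent intensities through an empirically fitted interaction constant cos α. A minimal sketch with hypothetical values:

    ```python
    import math

    def vector_model_intensity(i1, i2, cos_alpha):
        """Classical vector model for the odor intensity of a binary mixture.

        i1, i2    : perceived intensities of the single odorants
        cos_alpha : empirically fitted interaction constant (-1..1)
        """
        return math.sqrt(i1**2 + i2**2 + 2 * i1 * i2 * cos_alpha)

    # Hypothetical values: two aldehydes with a fitted cos_alpha of 0.3
    print(vector_model_intensity(4.2, 3.1, 0.3))
    ```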

  1. On The Value at Risk Using Bayesian Mixture Laplace Autoregressive Approach for Modelling the Islamic Stock Risk Investment

    NASA Astrophysics Data System (ADS)

    Miftahurrohmah, Brina; Iriawan, Nur; Fithriasari, Kartika

    2017-06-01

    Stocks are financial instruments traded in the capital market that carry a high level of risk. Their risk is indicated by the uncertainty of their returns, which investors have to accept in the future. The higher the risk to be faced, the higher the return to be gained. Therefore, measurements need to be made of the risk. Value at Risk (VaR), the most popular risk measurement method, frequently performs poorly when the pattern of returns is not unimodal Normal. The calculation of risk using the VaR method with the Normal Mixture Autoregressive (MNAR) approach has been considered previously. This paper proposes a VaR method coupled with the Mixture Laplace Autoregressive (MLAR) model, implemented to analyse the returns of the three largest-capitalization Islamic stocks in JII, namely PT. Astra International Tbk (ASII), PT. Telekomunikasi Indonesia Tbk (TLMK), and PT. Unilever Indonesia Tbk (UNVR). Parameter estimation is performed by employing Bayesian Markov Chain Monte Carlo (MCMC) approaches.
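
    The MLAR machinery is not reproduced here, but the final VaR/CVaR step is easy to illustrate: draw returns from a fitted mixture and read off the loss quantile. A Monte Carlo sketch with a static two-component Laplace mixture and placeholder parameters (not estimates from the paper):

    ```python
    import numpy as np

    def mixture_laplace_var(weights, locs, scales, alpha=0.05,
                            n=200_000, seed=1):
        """Monte Carlo VaR and CVaR for a two-component Laplace mixture.

        The mixture stands in for the conditional return distribution;
        all parameter values here are placeholders.
        """
        rng = np.random.default_rng(seed)
        comp = rng.choice(len(weights), size=n, p=weights)
        r = rng.laplace(np.take(locs, comp), np.take(scales, comp))
        var = -np.quantile(r, alpha)        # loss exceeded with prob alpha
        cvar = -r[r <= -var].mean()         # mean loss beyond VaR
        return var, cvar

    print(mixture_laplace_var([0.8, 0.2], [0.001, -0.004], [0.008, 0.025]))
    ```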

  2. Protanomaly-without-darkened-red is deuteranopia with rods

    PubMed Central

    Shevell, Steven K.; Sun, Yang; Neitz, Maureen

    2008-01-01

    The Rayleigh match, a color match between a mixture of 545+670 nm lights and 589 nm light in modern instruments, is the definitive measurement for the diagnosis of inherited red/green color defects. All trichromats, whether normal or anomalous, have a limited range of 545+670 nm mixtures they perceive to match 589 nm: a typical color-normal match-range is about 50–55% of 670 nm in the mixture (deutan mode), while deuteranomals have a range that includes mixtures with less 670 nm than normal and protanomals a range that includes mixtures with more 670 nm than normal. Further, the matching luminance of the 589 nm light for deuteranomals is the same as for normals but for protanomals is below normal. An example of an unexpected Rayleigh match, therefore, is a match range above normal (typical of protanomaly) and a normal luminance setting for 589 nm (typical of deuteranomaly), a match that Pickford (1950) called protanomaly “when the red end of the spectrum is not darkened”. In this case, Rayleigh matching does not yield a clear diagnosis. Aside from Pickford, we are aware of only one other report of a similar observer (Pokorny and Smith, 1981); this study predated modern genetic techniques that can reveal the cone photopigment(s) in the red/green range. We recently had the opportunity to conduct genetic and psychophysical tests on such an observer. Genetic results predict he is a deuteranope. His Rayleigh match is consistent with L cones and a contribution from rods. Further, with a rod-suppressing background, his Rayleigh match is characteristic of a single L-cone photopigment (deuteranopia). PMID:18423511

  3. Protanomaly without darkened red is deuteranopia with rods.

    PubMed

    Shevell, Steven K; Sun, Yang; Neitz, Maureen

    2008-11-01

    The Rayleigh match, a color match between a mixture of 545+670 nm lights and 589 nm light in modern instruments, is the definitive measurement for the diagnosis of inherited red-green color defects. All trichromats, whether normal or anomalous, have a limited range of 545+670 nm mixtures they perceive to match 589 nm: a typical color-normal match range is about 50-55% of 670 nm in the mixture (deutan mode), while deuteranomals have a range that includes mixtures with less 670 nm than normal and protanomals a range that includes mixtures with more 670 nm than normal. Further, the matching luminance of the 589 nm light for deuteranomals is the same as for normals but for protanomals is below normal. An example of an unexpected Rayleigh match, therefore, is a match range above normal (typical of protanomaly) and a normal luminance setting for 589 nm (typical of deuteranomaly), a match called protanomaly "when the red end of the spectrum is not darkened" [Pickford, R.W. (1950). Three pedigrees for color blindness. Nature, 165, 182.]. In this case, Rayleigh matching does not yield a clear diagnosis. Aside from Pickford, we are aware of only one other report of a similar observer [Pokorny, J., & Smith, V. C. (1981). A variant of red-green color defect. Vision Research, 21, 311-317]; this study predated modern genetic techniques that can reveal the cone photopigment(s) in the red-green range. We recently had the opportunity to conduct genetic and psychophysical tests on such an observer. Genetic results predict he is a deuteranope. His Rayleigh match is consistent with L cones and a contribution from rods. Further, with a rod-suppressing background, his Rayleigh match is characteristic of a single L-cone photopigment (deuteranopia).

  4. Personal Exposure to Mixtures of Volatile Organic Compounds: Modeling and Further Analysis of the RIOPA Data

    PubMed Central

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2015-01-01

    INTRODUCTION Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) evaluate mixtures (including dependencies), and (3) identify determinants of VOC exposure. METHODS VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999–2001) and the National Health and Nutrition Examination Survey (NHANES; 1999–2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods. 
Specific Aim 1 To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model’s goodness of fit. Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data. Specific Aim 2 Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture’s components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations. Specific Aim 3 Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs). RESULTS Specific Aim 1 Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10−4, and 13% of all participants had risk levels above 10−2. Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models. DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture distributions, adaptively selecting the number of needed clusters, and providing uncertainty estimates. Although the results apply to the RIOPA data set, GEV distributions and mixture models appear more broadly applicable. These models can be used to simulate VOC distributions, which are neither normally nor lognormally distributed, and they accurately represent the highest exposures, which may have the greatest health significance. Specific Aim 2 Four VOC mixtures were identified and apportioned by PMF; they represented gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection byproducts, and cleaning products and odorants. The last mixture (cleaning products and odorants) accounted for the largest fraction of an individual’s total exposure (average of 42% across RIOPA participants). 
Often, a single compound dominated a mixture but the mixture fractions were heterogeneous; that is, the fractions of the compounds changed with the concentration of the mixture. Three VOC mixtures were identified by toxicologic mode of action and represented VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10−3 for about 10% of RIOPA participants. The dependency structures of the VOC mixtures in the RIOPA data set fitted Gumbel (two mixtures) and t copulas (four mixtures). These copula types emphasize dependencies found in the upper and lower tails of a distribution. The copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions. Specific Aim 3 In an analysis focused on the home environment and the outdoor (close to home) environment, home VOC concentrations dominated personal exposures (66% to 78% of the total exposure, depending on VOC); this was largely the result of the amount of time participants spent at home and the fact that indoor concentrations were much higher than outdoor concentrations for most VOCs. In a different analysis focused on the sources inside the home and outside (but close to the home), it was assumed that 100% of VOCs from outside sources would penetrate the home. Outdoor VOC sources accounted for 5% (d-limonene) to 81% (carbon tetrachloride [CTC]) of the total exposure. Personal exposure and indoor measurements had similar determinants depending on the VOC. Gasoline-related VOCs (e.g., benzene and methyl tert-butyl ether [MTBE]) were associated with city, residences with attached garages, pumping gas, wind speed, and home air exchange rate (AER). Odorant and cleaning-related VOCs (e.g., 1,4-DCB and chloroform) also were associated with city, and a residence’s AER, size, and family members showering. Dry-cleaning and industry-related VOCs (e.g., tetrachloroethylene [or perchloroethylene, PERC] and trichloroethylene [TCE]) were associated with city, type of water supply to the home, and visits to the dry cleaner. These and other relationships were significant; they explained from 10% to 40% of the variance in the measurements and are consistent with known emission sources and those reported in the literature. Outdoor concentrations of VOCs had only two determinants in common: city and wind speed. Overall, personal exposure was dominated by the home setting, although a large fraction of indoor VOC concentrations were due to outdoor sources. City of residence, personal activities, household characteristics, and meteorology were significant determinants. Concentrations in RIOPA were considerably lower than levels in the nationally representative NHANES for all VOCs except MTBE and 1,4-DCB. Differences between RIOPA and NHANES results can be explained by contrasts between the sampling designs and staging in the two studies, and by differences in the demographics, smoking, employment, occupations, and home locations. A portion of these differences is due to the nature of the convenience (RIOPA) and representative (NHANES) sampling strategies used in the two studies. CONCLUSIONS Accurate models for exposure data, which can feature extreme values, multiple modes, data below the MDL, heterogeneous interpollutant dependency structures, and other complex characteristics, are needed to estimate exposures and risks and to develop control and management guidelines and policies. 
Conventional and novel statistical methods were applied to data drawn from two large studies to understand the nature and significance of VOC exposures. Both extreme value distributions and mixture models were found to provide excellent fit to single VOC compounds (univariate distributions), and copulas may be the method of choice for VOC mixtures (multivariate distributions), especially for the highest exposures, which fit parametric models poorly and which may represent the greatest health risk. The identification of exposure determinants, including the influence of both certain activities (e.g., pumping gas) and environments (e.g., residences), provides information that can be used to manage and reduce exposures. The results obtained using the RIOPA data set add to our understanding of VOC exposures and further investigations using a more representative population and a wider suite of VOCs are suggested to extend and generalize results. PMID:25145040
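
    To make the extreme-value step of Specific Aim 1 concrete: a minimal sketch that fits a three-parameter GEV to the top 10% of an exposure sample, using scipy and synthetic stand-in data.

    ```python
    import numpy as np
    from scipy import stats

    def fit_upper_tail_gev(exposures, top_fraction=0.10):
        """Fit a 3-parameter GEV to the top fraction of exposure values,
        mirroring the extreme-value analysis of Specific Aim 1
        (illustrative only; threshold and data here are hypothetical)."""
        x = np.sort(np.asarray(exposures))
        tail = x[int(len(x) * (1 - top_fraction)):]
        shape, loc, scale = stats.genextreme.fit(tail)
        return shape, loc, scale

    rng = np.random.default_rng(0)
    sample = rng.lognormal(mean=1.0, sigma=0.9, size=500)  # stand-in data
    print(fit_upper_tail_gev(sample))
    ```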

  5. A Bootstrap Algorithm for Mixture Models and Interval Data in Inter-Comparisons

    DTIC Science & Technology

    2001-07-01

    …parametric bootstrap. The present algorithm will be applied to a thermometric inter-comparison, where data cannot be assumed to be normally distributed. […] experimental methods used in each laboratory often imply that the statistical assumptions are not satisfied, as for example in several thermometric […] triangular). Indeed, in thermometric experiments these three probabilistic models can represent several common stochastic variabilities for […]

  6. The propulsive capability of explosives heavily loaded with inert materials

    NASA Astrophysics Data System (ADS)

    Loiseau, J.; Georges, W.; Frost, D. L.; Higgins, A. J.

    2018-01-01

    The effect of inert dilution on the accelerating ability of high explosives for both grazing and normal detonations was studied. The explosives considered were: (1) neat, amine-sensitized nitromethane (NM), (2) packed beds of glass, steel, or tungsten particles saturated with amine-sensitized NM, (3) NM gelled with PMMA containing dispersed glass microballoons, (4) NM gelled with PMMA containing glass microballoons and steel particles, and (5) C-4 containing varying mass fractions of glass or steel particles. Flyer velocity was measured via photonic Doppler velocimetry, and the results were analysed using a Gurney model augmented to include the influence of the diluent. Reduction in accelerating ability with increasing dilution for the amine-sensitized NM, gelled NM, and C-4 was measured experimentally. Variation of flyer terminal velocity with the ratio of flyer mass to charge mass (M/C) was measured for both grazing and normally incident detonations in gelled NM containing 10% microballoons by mass and for steel beads saturated with amine-sensitized NM. Finally, flyer velocity was measured in grazing versus normal loading for a number of explosive admixtures. The augmented Gurney model predicted the effect of dilution on accelerating ability and the scaling of flyer velocity with M/C for mixtures containing low-density diluents. The augmented Gurney model failed to predict the scaling of flyer velocity with M/C for mixtures heavily loaded with dense diluents. In all cases, normally incident detonations propelled flyers to higher velocity than the equivalent grazing detonations because of material velocity imparted by the incident shock wave and momentum/energy transfer from the slapper used to uniformly initiate the charge.
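
    For orientation, the classical Gurney relation underlying the augmented model ties flyer velocity to the flyer-to-charge mass ratio M/C. The sketch below uses the symmetric-sandwich form v = √(2E)·(M/C + 1/3)^(-1/2) with placeholder numbers; the authors' augmented model adds diluent terms not shown here.

    ```python
    import math

    def gurney_symmetric_sandwich(m_over_c, gurney_velocity):
        """Flyer velocity from the classical symmetric-sandwich Gurney
        formula, v = sqrt(2E) * (M/C + 1/3)^(-1/2).

        m_over_c        : flyer-to-charge mass ratio M/C
        gurney_velocity : sqrt(2E) for the explosive, in m/s
        """
        return gurney_velocity * (m_over_c + 1.0 / 3.0) ** -0.5

    # Hypothetical: sqrt(2E) ~ 2400 m/s for a diluted NM mixture, M/C = 1.0
    print(gurney_symmetric_sandwich(1.0, 2400.0))
    ```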

  7. The propulsive capability of explosives heavily loaded with inert materials

    NASA Astrophysics Data System (ADS)

    Loiseau, J.; Georges, W.; Frost, D. L.; Higgins, A. J.

    2018-07-01

    The effect of inert dilution on the accelerating ability of high explosives for both grazing and normal detonations was studied. The explosives considered were: (1) neat, amine-sensitized nitromethane (NM), (2) packed beds of glass, steel, or tungsten particles saturated with amine-sensitized NM, (3) NM gelled with PMMA containing dispersed glass microballoons, (4) NM gelled with PMMA containing glass microballoons and steel particles, and (5) C-4 containing varying mass fractions of glass or steel particles. Flyer velocity was measured via photonic Doppler velocimetry, and the results were analysed using a Gurney model augmented to include the influence of the diluent. Reduction in accelerating ability with increasing dilution for the amine-sensitized NM, gelled NM, and C-4 was measured experimentally. Variation of flyer terminal velocity with the ratio of flyer mass to charge mass (M/C) was measured for both grazing and normally incident detonations in gelled NM containing 10% microballoons by mass and for steel beads saturated with amine-sensitized NM. Finally, flyer velocity was measured in grazing versus normal loading for a number of explosive admixtures. The augmented Gurney model predicted the effect of dilution on accelerating ability and the scaling of flyer velocity with M/C for mixtures containing low-density diluents. The augmented Gurney model failed to predict the scaling of flyer velocity with M/C for mixtures heavily loaded with dense diluents. In all cases, normally incident detonations propelled flyers to higher velocity than the equivalent grazing detonations because of material velocity imparted by the incident shock wave and momentum/energy transfer from the slapper used to uniformly initiate the charge.

  8. Dynamics of Aqueous Foam Drops

    NASA Technical Reports Server (NTRS)

    Akhatov, Iskander; McDaniel, J. Gregory; Holt, R. Glynn

    2001-01-01

    We develop a model for the nonlinear oscillations of spherical drops composed of aqueous foam. Beginning with a simple mixture law, and utilizing a mass-conserving bubble-in-cell scheme, we obtain a Rayleigh-Plesset-like equation for the dynamics of bubbles in a foam mixture. The dispersion relation for sound waves in a bubbly liquid is then coupled with a normal modes expansion to derive expressions for the frequencies of eigenmodal oscillations. These eigenmodal (breathing plus higher-order shape modes) frequencies are elicited as a function of the void fraction of the foam. A Mathieu-like equation is obtained for the dynamics of the higher-order shape modes and their parametric coupling to the breathing mode. The proposed model is used to explain recently obtained experimental data.

  9. The CLASSY clustering algorithm: Description, evaluation, and comparison with the iterative self-organizing clustering system (ISOCLS). [used for LACIE data

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Malek, H.

    1978-01-01

    A clustering method, CLASSY, was developed, which alternates maximum likelihood iteration with a procedure for splitting, combining, and eliminating the resulting statistics. The method maximizes the fit of a mixture of normal distributions to the observed first through fourth central moments of the data and produces an estimate of the proportions, means, and covariances in this mixture. The mathematical model that is the basis for CLASSY and the actual operation of the algorithm are described. Data comparing the performances of CLASSY and ISOCLS on simulated and actual LACIE data are presented.
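
    CLASSY's split/combine/eliminate machinery is not public in this record; a modern stand-in for the underlying idea is EM fitting of normal mixtures with the component count chosen by an information criterion. A minimal sketch using scikit-learn (an assumption, not the original implementation):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def best_gaussian_mixture(X, max_components=6):
        """Fit normal mixtures by EM and pick the component count by BIC.
        A stand-in for CLASSY's split/combine logic, not a
        reimplementation of it."""
        models = [GaussianMixture(n_components=k, n_init=5,
                                  random_state=0).fit(X)
                  for k in range(1, max_components + 1)]
        return min(models, key=lambda m: m.bic(X))

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (150, 2))])
    gmm = best_gaussian_mixture(X)
    print(gmm.n_components, gmm.weights_)
    ```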

  10. 14 CFR 23.1147 - Mixture controls.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Mixture controls. 23.1147 Section 23.1147... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Powerplant Powerplant Controls and Accessories § 23.1147 Mixture controls. (a) If there are mixture controls, each engine must have a separate...

  11. 14 CFR 23.1147 - Mixture controls.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Mixture controls. 23.1147 Section 23.1147... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Powerplant Powerplant Controls and Accessories § 23.1147 Mixture controls. (a) If there are mixture controls, each engine must have a separate...

  12. Mixture modeling methods for the assessment of normal and abnormal personality, part I: cross-sectional models.

    PubMed

    Hallquist, Michael N; Wright, Aidan G C

    2014-01-01

    Over the past 75 years, the study of personality and personality disorders has been informed considerably by an impressive array of psychometric instruments. Many of these tests draw on the perspective that personality features can be conceptualized in terms of latent traits that vary dimensionally across the population. A purely trait-oriented approach to personality, however, might overlook heterogeneity that is related to similarities among subgroups of people. This article describes how factor mixture modeling (FMM), which incorporates both categories and dimensions, can be used to represent person-oriented and trait-oriented variability in the latent structure of personality. We provide an overview of different forms of FMM that vary in the degree to which they emphasize trait- versus person-oriented variability. We also provide practical guidelines for applying FMM to personality data, and we illustrate model fitting and interpretation using an empirical analysis of general personality dysfunction.

  13. Estimating Mixture of Gaussian Processes by Kernel Smoothing

    PubMed Central

    Huang, Mian; Li, Runze; Wang, Hansheng; Yao, Weixin

    2014-01-01

    When the functional data are not homogeneous, e.g., there exist multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this paper, we propose a new estimation procedure for the Mixture of Gaussian Processes, to incorporate both functional and inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. However, the key difference is that smoothed structures are imposed for both the mean and covariance functions. The model is shown to be identifiable, and can be estimated efficiently by a combination of the ideas from the EM algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset. PMID:24976675

  14. 14 CFR 27.1147 - Mixture controls.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Mixture controls. 27.1147 Section 27.1147... STANDARDS: NORMAL CATEGORY ROTORCRAFT Powerplant Powerplant Controls and Accessories § 27.1147 Mixture controls. If there are mixture controls, each engine must have a separate control and the controls must be...

  15. Use of an Amino Acid Mixture in Treatment of Phenylketonuria

    PubMed Central

    Bentovim, A.; Clayton, Barbara E.; Francis, Dorothy E. M.; Shepherd, Jean; Wolff, O. H.

    1970-01-01

    Twelve children with phenylketonuria diagnosed and treated from the first few weeks of life were grouped into pairs. Before the trial all of them were receiving a commercial preparation containing a protein hydrolysate low in phenylalanine (Cymogran, Allen and Hanburys Ltd.) as a substitute for natural protein. One of each pair was given an amino acid mixture instead of Cymogran for about 6 months. Use of the mixture involved considerable modification of the diet, and in particular the inclusion of greater amounts of phenylalanine-free foods. All six accepted the new mixture without difficulty, food problems were greatly reduced, parents welcomed the new preparation, and the quality of family life improved. Normal growth was maintained and with a mixture of L-amino acids the plasma and urinary amino acid levels were normal. Further studies are needed before the mixture can be recommended for children under 20 months of age. PMID:5477678

  16. Unsupervised Gaussian Mixture-Model With Expectation Maximization for Detecting Glaucomatous Progression in Standard Automated Perimetry Visual Fields.

    PubMed

    Yousefi, Siamak; Balasubramanian, Madhusudhanan; Goldbaum, Michael H; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher

    2016-05-01

    To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-models (VIM) for detecting glaucomatous progression along visual field (VF) defect patterns (GEM-progression of patterns (POP) and VIM-POP). To compare GEM-POP and VIM-POP with other methods. GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD), and visual field index (VFI). Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. GEM-POP was significantly more sensitive to PGON than PoPLR and linear regression of MD and VFI in our sample, while providing localized progression information. Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning.
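
    The decision rule described, progression flagged when change along any defect-pattern axis exceeds a confidence limit derived from stable eyes, can be sketched as follows; the axis values and percentile choice are hypothetical.

    ```python
    import numpy as np

    def stability_limit(stable_changes, percentile=95):
        """Confidence limit of stability for one defect-pattern axis,
        taken from longitudinal changes observed in known-stable eyes."""
        return np.percentile(np.abs(stable_changes), percentile)

    def progressed(eye_changes, limits):
        """Flag an eye as progressed if its change along any axis
        exceeds that axis's confidence limit (illustrative rule only)."""
        return any(abs(c) > l for c, l in zip(eye_changes, limits))

    # Hypothetical: two axes, limits estimated from 84 stable eyes
    rng = np.random.default_rng(0)
    limits = [stability_limit(rng.normal(0, 0.5, 84)) for _ in range(2)]
    print(progressed([1.8, 0.1], limits))
    ```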

  17. Selecting statistical model and optimum maintenance policy: a case study of hydraulic pump.

    PubMed

    Ruhi, S; Karim, M R

    2016-01-01

    Proper maintenance policy can play a vital role in the effective investigation of product reliability. Every engineered object such as a product, plant or infrastructure needs preventive and corrective maintenance. In this paper we look at a real case study. It deals with the maintenance of hydraulic pumps used in excavators by a mining company. We obtain the data that the owner had collected and carry out an analysis, building models for pump failures. The data consist of both failure and censored lifetimes of the hydraulic pump. Different competitive mixture models are applied to analyze a set of maintenance data for a hydraulic pump. Various characteristics of the mixture models, such as the cumulative distribution function, reliability function and mean time to failure, are estimated to assess the reliability of the pump. The Akaike Information Criterion, adjusted Anderson-Darling test statistic, Kolmogorov-Smirnov test statistic and root mean square error are considered to select suitable models from a set of competitive models. The maximum likelihood estimation method via the EM algorithm is applied for estimating the parameters of the models and reliability-related quantities. In this study, it is found that a threefold mixture model (Weibull-Normal-Exponential) fits the hydraulic pump failure data set well. This paper also illustrates how a suitable statistical model can be applied to estimate the optimum maintenance period at minimum cost for a hydraulic pump.
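
    The reliability function of such a threefold mixture is just the weighted sum of the component survival functions, and the MTTF is its integral. A minimal sketch with placeholder parameters (not the paper's estimates); the Normal component's mass below t = 0 is assumed negligible.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    def mixture_reliability(t, w, weib, normal, expo):
        """R(t) for a Weibull-Normal-Exponential mixture with weights w.
        All parameter values below are placeholders."""
        return (w[0] * stats.weibull_min.sf(t, weib[0], scale=weib[1])
                + w[1] * stats.norm.sf(t, *normal)
                + w[2] * stats.expon.sf(t, scale=expo))

    w, weib, normal, expo = ([0.5, 0.3, 0.2], (1.8, 4000.0),
                             (6000.0, 800.0), 2500.0)
    # MTTF = integral of R(t) from 0 to infinity
    mttf, _ = quad(mixture_reliability, 0, np.inf,
                   args=(w, weib, normal, expo))
    print(mttf)
    ```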

  18. Determining prescription durations based on the parametric waiting time distribution.

    PubMed

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2016-12-01

    The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.
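
    A minimal sketch of the two-component WTD likelihood: prevalent users follow a Log-Normal forward recurrence density, and incident users are modeled here as Uniform over the sampling window, which is an illustrative assumption rather than the paper's exact parameterization.

    ```python
    import numpy as np
    from scipy import stats, optimize

    def wtd_nll(params, t, window):
        """Negative log-likelihood of a two-component waiting time
        distribution: prevalent users follow a Log-Normal forward
        recurrence density, incident users a Uniform over the window
        (the uniform incident component is an illustrative assumption)."""
        logit_p, mu, log_sigma = params
        p = 1.0 / (1.0 + np.exp(-logit_p))
        f_prev = stats.lognorm.pdf(t, s=np.exp(log_sigma), scale=np.exp(mu))
        f_inc = 1.0 / window
        return -np.sum(np.log(p * f_prev + (1 - p) * f_inc))

    rng = np.random.default_rng(0)
    window = 365.0
    t = np.concatenate([rng.lognormal(4.5, 0.5, 800),   # prevalent users
                        rng.uniform(0, window, 200)])   # incident users
    t = t[t <= window]
    res = optimize.minimize(wtd_nll, x0=[1.0, 4.0, 0.0], args=(t, window),
                            method="Nelder-Mead")
    print(res.x)
    ```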

  19. A Preliminary Comparison of the Effectiveness of Cluster Analysis Weighting Procedures for Within-Group Covariance Structure.

    ERIC Educational Resources Information Center

    Donoghue, John R.

    A Monte Carlo study compared the usefulness of six variable weighting methods for cluster analysis. Data were 100 bivariate observations from 2 subgroups, generated according to a finite normal mixture model. Subgroup size, within-group correlation, within-group variance, and distance between subgroup centroids were manipulated. Of the clustering…

  20. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575

  1. A Zero- and K-Inflated Mixture Model for Health Questionnaire Data

    PubMed Central

    Finkelman, Matthew D.; Green, Jennifer Greif; Gruber, Michael J.; Zaslavsky, Alan M.

    2011-01-01

    In psychiatric assessment, Item Response Theory (IRT) is a popular tool to formalize the relation between the severity of a disorder and associated responses to questionnaire items. Practitioners of IRT sometimes make the assumption of normally distributed severities within a population; while convenient, this assumption is often violated when measuring psychiatric disorders. Specifically, there may be a sizable group of respondents whose answers place them at an extreme of the latent trait spectrum. In this article, a zero- and K-inflated mixture model is developed to account for the presence of such respondents. The model is fitted using an expectation-maximization (E-M) algorithm to estimate the percentage of the population at each end of the continuum, concurrently analyzing the remaining “graded component” via IRT. A method to perform factor analysis for only the graded component is introduced. In assessments of oppositional defiant disorder and conduct disorder, the zero- and K-inflated model exhibited better fit than the standard IRT model. PMID:21365673

  2. Impact of Lead Time and Safety Factor in Mixed Inventory Models with Backorder Discounts

    NASA Astrophysics Data System (ADS)

    Lo, Ming-Cheng; Chao-Hsien Pan, Jason; Lin, Kai-Cing; Hsu, Jia-Wei

    This study investigates the impact of the safety factor on a continuous review inventory model involving controllable lead time with a mixture of backorder discounts and partial lost sales. The objective is to minimize the expected total annual cost with respect to order quantity, backorder price discount, safety factor and lead time. A model with normal demand is also discussed. Numerical examples are presented to illustrate the procedures of the algorithms, and the effects of the parameters on the results of the proposed models are analyzed.

  3. Nutritional support contributes to recuperation in a rat model of aplastic anemia by enhancing mitochondrial function.

    PubMed

    Yang, Guang; Zhao, Lifen; Liu, Bing; Shan, Yujia; Li, Yang; Zhou, Huimin; Jia, Li

    2018-02-01

    Acquired aplastic anemia (AA) is a hematopoietic stem cell disease that leads to hematopoietic disorder and peripheral blood pancytopenia. We investigated whether nutritional support is helpful for recovery from AA. We established a rat model of AA. A nutrient mixture was administered to rats with AA by gavage at different doses once per day for 55 d. Animals in this study were assigned to one of five groups: normal control (NC; normal rats); AA (rats with AA); high dose (AA + nutritional mixture, 2266.95 mg/kg/d); medium dose (1511.3 mg/kg/d); and low dose (1057.91 mg/kg/d). The effects of nutrient administration on the general status and mitochondrial function of rats with AA were evaluated. The nutrient mixture significantly improved the weight, peripheral blood parameters, and histologic parameters of rats with AA in a dose-dependent manner. Furthermore, transmission electron microscopy analysis showed that the number of mitochondria in the liver, spleen, kidney, and brain was increased after supplementation. Nutrient administration also improved mitochondrial DNA content, adenosine triphosphate content, and membrane potential but inhibited oxidative stress, thus repairing the mitochondrial dysfunction of the rats with AA. Taken together, nutrition supplements may contribute to the improvement of mitochondrial function and play an important role in the recuperation of rats with AA. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. A mixture model-based approach to the clustering of microarray expression data.

    PubMed

    McLachlan, G J; Bean, R W; Peel, D

    2002-03-01

    This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic used in conjunction with a threshold on the size of a cluster allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to reduce effectively the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes are able to be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/

  5. Mixture distributions of wind speed in the UAE

    NASA Astrophysics Data System (ADS)

    Shin, J.; Ouarda, T.; Lee, T. S.

    2013-12-01

    Wind speed probability distributions are commonly used to estimate potential wind energy. The 2-parameter Weibull distribution has been most widely used to characterize the distribution of wind speed. However, it is unable to properly model wind speed regimes when the wind speed distribution presents bimodal and kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without investigation of the wind speed distribution. Due to these mixture distributional characteristics of wind speed data, the application of mixture distributions should be further investigated in the frequency analysis of wind speed. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula, and mixture distributional characteristics of wind speed were detected in some of them. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. In order to improve our understanding of wind energy potential in the Arabian Peninsula, mixture distributions should be tested for the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. Hourly mean wind speed data at 10-m height from 7 stations were used in the current study. The Weibull and Kappa distributions were employed as representatives of the conventional non-mixture distributions. Ten mixture distributions were constructed by mixing four probability distributions: the Normal, Gamma, Weibull and Extreme value type-one (EV-1) distributions. Three parameter estimation methods, the Expectation Maximization algorithm, the Least Squares method and the Meta-Heuristic Maximum Likelihood (MHML) method, were employed to estimate the parameters of the mixture distributions. In order to compare the goodness-of-fit of the tested distributions and parameter estimation methods for the sample wind data, the adjusted coefficient of determination, the Bayesian Information Criterion (BIC) and the Chi-squared statistic were computed. Results indicate that MHML presents the best parameter estimation performance for the mixture distributions used. In most of the 7 stations employed, mixture distributions give the best fit. When the wind speed regime shows mixture distributional characteristics, most of these regimes present kurtotic behaviour. In particular, applications of mixture distributions for these stations show a significant improvement in explaining the whole wind speed regime. In addition, the Weibull-Weibull mixture distribution presents the best fit for the wind speed data in the UAE.
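
    A minimal sketch of fitting the best-performing Weibull-Weibull mixture by direct likelihood maximization (plain Nelder-Mead here; the paper's MHML estimator is not reproduced), on synthetic two-regime wind data:

    ```python
    import numpy as np
    from scipy import stats, optimize

    def ww_mixture_nll(params, v):
        """Negative log-likelihood of a Weibull-Weibull mixture for wind
        speed v. Shape and scale parameters are log-transformed so the
        optimizer keeps them positive."""
        logit_w, k1, c1, k2, c2 = params
        w = 1.0 / (1.0 + np.exp(-logit_w))
        f = (w * stats.weibull_min.pdf(v, np.exp(k1), scale=np.exp(c1))
             + (1 - w) * stats.weibull_min.pdf(v, np.exp(k2),
                                               scale=np.exp(c2)))
        return -np.sum(np.log(f + 1e-300))

    rng = np.random.default_rng(0)
    v = np.concatenate([rng.weibull(2.0, 5000) * 4.0,    # sea-breeze regime
                        rng.weibull(3.5, 3000) * 9.0])   # synoptic regime
    res = optimize.minimize(ww_mixture_nll, x0=[0.0, 0.7, 1.3, 1.2, 2.2],
                            args=(v,), method="Nelder-Mead",
                            options={"maxiter": 5000})
    print(res.x)
    ```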

  6. Pairwise mixture model for unmixing partial volume effect in multi-voxel MR spectroscopy of brain tumour patients

    NASA Astrophysics Data System (ADS)

    Olliverre, Nathan; Asad, Muhammad; Yang, Guang; Howe, Franklyn; Slabaugh, Gregory

    2017-03-01

    Multi-Voxel Magnetic Resonance Spectroscopy (MV-MRS) provides an important and insightful technique for the examination of the chemical composition of brain tissue, making it an attractive medical imaging modality for the examination of brain tumours. MRS, however, is affected by the Partial Volume Effect (PVE), where the signals of multiple tissue types can be found within a single voxel, which presents an obstacle to the interpretation of the data. The PVE results from the low resolution achieved in MV-MRS images, which is related to the signal-to-noise ratio (SNR). To counteract the PVE, this paper proposes a novel Pairwise Mixture Model (PMM) that extends a recently reported Signal Mixture Model (SMM) for representing the MV-MRS signal as normal, low grade or high grade tissue types. Inspired by the Conditional Random Field (CRF) and its continuous variant, the PMM incorporates the surrounding voxel neighbourhood into an optimisation problem, the solution of which provides an estimate of a set of coefficients. The values of the estimated coefficients represent the amount of each tissue type (normal, low or high grade) found within a voxel. These coefficients can then be visualised as a nosological rendering using a coloured grid representing the MV-MRS image overlaid on a structural image, such as a Magnetic Resonance Image (MRI). Experimental results show an accuracy of 92.69% in classifying patient tumours as either low or high grade, compared against the histopathology for each patient. Compared to the 91.96% achieved by the SMM, the proposed PMM demonstrates the importance of incorporating spatial coherence into the estimation, as well as its potential clinical usage.

  7. Evaluation of Thermodynamic Models for Predicting Phase Equilibria of CO2 + Impurity Binary Mixture

    NASA Astrophysics Data System (ADS)

    Shin, Byeong Soo; Rho, Won Gu; You, Seong-Sik; Kang, Jeong Won; Lee, Chul Soo

    2018-03-01

    For the design and operation of CO2 capture and storage (CCS) processes, equation of state (EoS) models are used for phase equilibrium calculations. Reliability of an EoS model plays a crucial role, and many variations of EoS models have been reported and continue to be published. The prediction of phase equilibria for CO2 mixtures containing SO2, N2, NO, H2, O2, CH4, H2S, Ar, and H2O is important for CO2 transportation because the captured gas normally contains small amounts of impurities even though it is purified in advance. For the design of pipelines in deep sea or arctic conditions, flow assurance and safety are considered priority issues, and highly reliable calculations are required. In this work, predictive Soave-Redlich-Kwong, cubic plus association, Groupe Européen de Recherches Gazières (GERG-2008), perturbed-chain statistical associating fluid theory, and non-random lattice fluids hydrogen bond EoS models were compared regarding performance in calculating phase equilibria of CO2-impurity binary mixtures and with the collected literature data. No single EoS could cover the entire range of systems considered in this study. Weaknesses and strong points of each EoS model were analyzed, and recommendations are given as guidelines for safe design and operation of CCS processes.

  8. Electron Transport Coefficients and Effective Ionization Coefficients in SF6-O2 and SF6-Air Mixtures Using Boltzmann Analysis

    NASA Astrophysics Data System (ADS)

    Wei, Linsheng; Xu, Min; Yuan, Dingkun; Zhang, Yafang; Hu, Zhaoji; Tan, Zhihong

    2014-10-01

    The electron drift velocity, electron energy distribution function (EEDF), density-normalized effective ionization coefficient and density-normalized longitudinal diffusion velocity are calculated in SF6-O2 and SF6-Air mixtures. The experimental results from a pulsed Townsend discharge are plotted for comparison with the numerical results. The reduced field strength varies from 40 Td to 500 Td (1 Townsend = 10⁻¹⁷ V·cm²) and the SF6 concentration ranges from 10% to 100%. A Boltzmann equation with the two-term spherical harmonic expansion approximation is utilized to obtain the swarm parameters under steady-state Townsend conditions. Results show that the accuracy of the Boltzmann solution with a two-term expansion in calculating the electron drift velocity, electron energy distribution function, and density-normalized effective ionization coefficient is acceptable. The effective ionization coefficient presents a distinct relationship with the SF6 content in the mixtures. Moreover, the E/Ncr values in SF6-Air mixtures are higher than those in SF6-O2 mixtures, and the calculated E/Ncr values in SF6-O2 and SF6-Air mixtures are lower than the measured value in SF6-N2. Parametric studies conducted on these parameters using the Boltzmann analysis offer substantial insight into the plasma physics, as well as a basis to explore the ozone generation process.

  9. Spherically symmetric Einstein-aether perfect fluid models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coley, Alan A.; Latta, Joey; Leon, Genly

    We investigate spherically symmetric cosmological models in Einstein-aether theory with a tilted (non-comoving) perfect fluid source. We use a 1+3 frame formalism and adopt the comoving aether gauge to derive the evolution equations, which form a well-posed system of first order partial differential equations in two variables. We then introduce normalized variables. The formalism is particularly well-suited for numerical computations and the study of the qualitative properties of the models, which are also solutions of Horava gravity. We study the local stability of the equilibrium points of the resulting dynamical system corresponding to physically realistic inhomogeneous cosmological models and astrophysical objects with values for the parameters which are consistent with current constraints. In particular, we consider dust models in (β−) normalized variables and derive a reduced (closed) evolution system, and we obtain the general evolution equations for the spatially homogeneous Kantowski-Sachs models using appropriate bounded normalized variables. We then analyse these models, with special emphasis on the future asymptotic behaviour for different values of the parameters. Finally, we investigate static models for a mixture of a (necessarily non-tilted) perfect fluid with a barotropic equation of state and a scalar field.

  10. Determining inert content in coal dust/rock dust mixture

    DOEpatents

    Sapko, Michael J.; Ward, Jr., Jack A.

    1989-01-01

    A method and apparatus for determining the inert content of a coal dust and rock dust mixture uses a transparent window pressed against the mixture. An infrared light beam is directed through the window such that a portion of the infrared light beam is reflected from the mixture. The concentration of the reflected light is detected and a signal indicative of the reflected light is generated. A normalized value for the generated signal is determined according to the relationship φ = (log i_c − log i_c0) / (log i_c100 − log i_c0), where i_c0 is the measured signal at 0% rock dust, i_c100 is the measured signal at 100% rock dust, and i_c is the measured signal of the mixture. This normalized value is then correlated to a predetermined relationship of φ to rock dust percentage to determine the rock dust content of the mixture. The rock dust content is displayed where the percentage is between 30 and 100%, and an indication of out-of-range is displayed where the rock dust percentage is less than 30%. Preferably, the rock dust percentage (RD%) is calculated from the predetermined relationship RD% = 100 + 30 log φ. Where the dust mixture initially includes moisture, the dust mixture is dried before measuring by use of 8 to 12 mesh molecular sieves, which are shaken with the dust mixture and subsequently screened from the dust mixture.
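
    The patent's relationships translate directly into code; base-10 logarithms are assumed, and the signal levels below are hypothetical calibration values.

    ```python
    import math

    def rock_dust_percent(i_c, i_c0, i_c100):
        """Rock dust content from reflectance signals, per the patent's
        relationships: phi = (log i_c - log i_c0) / (log i_c100 - log i_c0)
        and RD% = 100 + 30 log(phi). Base-10 logs are assumed."""
        phi = ((math.log10(i_c) - math.log10(i_c0))
               / (math.log10(i_c100) - math.log10(i_c0)))
        rd = 100.0 + 30.0 * math.log10(phi)
        return rd if rd >= 30.0 else None   # below 30% reads out-of-range

    # Hypothetical signals from the 0% and 100% calibrations and a sample
    print(rock_dust_percent(i_c=0.55, i_c0=0.20, i_c100=0.90))
    ```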

  11. Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-07-01

    What is the "best" model? The answer to this question lies in part in the eyes of the beholder, nevertheless a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, M_k, k = 1, …, K, and help identify which model is most supported by the observed data, Ỹ = (ỹ_1, …, ỹ_n). Here, we introduce a new and robust estimator of the model evidence, p(Ỹ | M_k), which acts as normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p(Ỹ | M_k) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p(Ỹ | M_k) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost, outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and simplifies considerably scientific inquiry through hypothesis testing and model selection.
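
    GAME proper uses bridge sampling; a simplified cousin that conveys the idea is plain importance sampling with a Gaussian mixture proposal fitted to the posterior samples. A sketch under that assumption (log_post is a user-supplied, vectorized log of likelihood × prior):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_importance_evidence(posterior_samples, log_post, n_draws=50_000,
                                n_components=3, seed=0):
        """Importance-sampling estimate of log p(Y|M) using a Gaussian
        mixture fitted to MCMC posterior samples as the proposal. This is
        a simplified stand-in for GAME sampling, which uses bridge
        sampling; log_post(theta) must return log[p(Y|theta) p(theta)]
        for an array of parameter vectors."""
        gmm = GaussianMixture(n_components=n_components,
                              random_state=seed).fit(posterior_samples)
        theta, _ = gmm.sample(n_draws)
        log_w = log_post(theta) - gmm.score_samples(theta)
        m = log_w.max()                 # log-sum-exp for numerical stability
        return m + np.log(np.mean(np.exp(log_w - m)))
    ```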

  12. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
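
    The iterative procedure in question is what is now called the EM algorithm for normal mixtures. A minimal univariate two-component version is sketched below; the initialization and the fixed iteration count are simplifying assumptions.

```python
import numpy as np

def em_two_normals(x, iters=200):
    """EM for a two-component univariate normal mixture (a sketch)."""
    pi, mu1, mu2 = 0.5, x.min(), x.max()   # crude starting values
    s1 = s2 = x.std()
    for _ in range(iters):
        # E-step: posterior probability each point came from component 1
        # (the common 1/sqrt(2*pi) factor cancels in the ratio)
        d1 = pi * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        d2 = (1 - pi) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        r = d1 / (d1 + d2)
        # M-step: reweighted proportion, means, and standard deviations
        pi = r.mean()
        mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
        s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
        s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))
    return pi, (mu1, s1), (mu2, s2)
```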

  13. Chemical mixtures in untreated water from public-supply wells in the U.S. — Occurrence, composition, and potential toxicity

    USGS Publications Warehouse

    Toccalino, Patricia L.; Norman, Julia E.; Scott, Jonathon C.

    2012-01-01

    Chemical mixtures are prevalent in groundwater used for public water supply, but little is known about their potential health effects. As part of a large-scale ambient groundwater study, we evaluated chemical mixtures across multiple chemical classes, and included more chemical contaminants than in previous studies of mixtures in public-supply wells. We (1) assessed the occurrence of chemical mixtures in untreated source-water samples from public-supply wells, (2) determined the composition of the most frequently occurring mixtures, and (3) characterized the potential toxicity of mixtures using a new screening approach. The U.S. Geological Survey collected one untreated water sample from each of 383 public wells distributed across 35 states, and analyzed the samples for as many as 91 chemical contaminants. Concentrations of mixture components were compared to individual human-health benchmarks; the potential toxicity of mixtures was characterized by addition of benchmark-normalized component concentrations. Most samples (84%) contained mixtures of two or more contaminants, each at concentrations greater than one-tenth of individual benchmarks. The chemical mixtures that most frequently occurred and had the greatest potential toxicity primarily were composed of trace elements (including arsenic, strontium, or uranium), radon, or nitrate. Herbicides, disinfection by-products, and solvents were the most common organic contaminants in mixtures. The sum of benchmark-normalized concentrations was greater than 1 for 58% of samples, suggesting that there could be potential for mixtures toxicity in more than half of the public-well samples. Our findings can be used to help set priorities for groundwater monitoring and suggest future research directions for drinking-water treatment studies and for toxicity assessments of chemical mixtures in water resources.
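
    The screening arithmetic described, dividing each component concentration by its human-health benchmark and summing the ratios, is straightforward to reproduce. The sketch below uses hypothetical concentrations and benchmarks in µg/L; it is not the USGS code.

```python
def mixture_screen(concentrations, benchmarks):
    """Benchmark-normalized mixture screen for one water sample (illustrative)."""
    ratios = {c: concentrations[c] / benchmarks[c] for c in concentrations}
    hazard_index = sum(ratios.values())                 # additive screening measure
    n_over_tenth = sum(v > 0.1 for v in ratios.values())
    return hazard_index, n_over_tenth

# Hypothetical sample: hazard index 0.7, two components above 1/10 of benchmark
hi, n = mixture_screen({"arsenic": 4.0, "nitrate": 3000.0},
                       {"arsenic": 10.0, "nitrate": 10000.0})
```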

  14. Comparison of NIR chemical imaging with conventional NIR, Raman and ATR-IR spectroscopy for quantification of furosemide crystal polymorphs in ternary powder mixtures.

    PubMed

    Schönbichler, S A; Bittner, L K H; Weiss, A K H; Griesser, U J; Pallua, J D; Huck, C W

    2013-08-01

    The aim of this study was to evaluate the ability of near-infrared chemical imaging (NIR-CI), near-infrared (NIR), Raman and attenuated-total-reflectance infrared (ATR-IR) spectroscopy to quantify three polymorphic forms (I, II, III) of furosemide in ternary powder mixtures. For this purpose, partial least-squares (PLS) regression models were developed, and different data preprocessing algorithms such as normalization, standard normal variate (SNV), multiplicative scatter correction (MSC) and 1st to 3rd derivatives were applied to reduce the influence of systematic disturbances. The performance of the methods was evaluated by comparison of the standard error of cross-validation (SECV), R(2), and the ratio performance deviation (RPD). Limits of detection (LOD) and limits of quantification (LOQ) of all methods were determined. For NIR-CI, a SECVcorr-spec and a SECVsingle-pixel corrected were calculated to assess the loss of accuracy by taking advantage of the spatial information. NIR-CI showed a SECVcorr-spec (SECVsingle-pixel corrected) of 2.82% (3.71%), 3.49% (4.65%), and 4.10% (5.06%) for form I, II, III. NIR had a SECV of 2.98%, 3.62%, and 2.75%, and Raman reached 3.25%, 3.08%, and 3.18%. The SECV of the ATR-IR models were 7.46%, 7.18%, and 12.08%. This study proves that NIR-CI, NIR, and Raman are well suited to quantify forms I-III of furosemide in ternary mixtures. Because of the pressure-dependent conversion of form II to form I, ATR-IR was found to be less appropriate for an accurate quantification of the mixtures. In this study, the capability of NIR-CI for the quantification of polymorphic ternary mixtures was compared with conventional spectroscopic techniques for the first time. For this purpose, a new way of spectra selection was chosen, and two kinds of SECVs were calculated to achieve a better comparability of NIR-CI to NIR, Raman, and ATR-IR. Copyright © 2013 Elsevier B.V. All rights reserved.
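
    As an illustration of the preprocessing-plus-PLS pipeline (not the study's actual models), the sketch below applies standard normal variate correction and fits a multi-response PLS regression with scikit-learn on synthetic stand-ins for the spectra and the three polymorph fractions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.random((30, 200))   # 30 mixtures x 200 wavelengths (synthetic)
Y = rng.random((30, 3))     # fractions of polymorphic forms I, II, III (synthetic)
pls = PLSRegression(n_components=5).fit(snv(X), Y)
Y_hat = pls.predict(snv(X))
```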

  15. Segmentation and intensity estimation of microarray images using a gamma-t mixture model.

    PubMed

    Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J

    2007-02-15

    We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data, but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal, so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers segmentation and intensity estimation simultaneously, not separately as in commonly used existing software, and it works with the red and green intensities in a bivariate framework, as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the use of the bivariate t distribution for the foreground intensity provides a model that is less sensitive to extreme observations; (5) as a consequence of the aforementioned properties, it allows segmentation to be undertaken for a wide range of spot shapes, including doughnut, sickle shape, and artifacts. We apply our method for gridding, segmentation, and estimation to real cDNA microarray images and artificial data. Our method provides better segmentation results in spot shapes, as well as better intensity estimation, than the Spot and spotSegmentation R packages. It detected blank spots as well as bright artifacts in the real data, and estimated spot intensities with high accuracy for the synthetic data. The algorithms were implemented in Matlab. The Matlab codes implementing both the gridding and segmentation/estimation are available upon request. Supplementary material is available at Bioinformatics online.
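
    To make the two-component structure concrete, here is a hedged sketch of the posterior foreground probability for a set of pixels, using SciPy's shifted (three-parameter) gamma for the independent background channels and a bivariate t for the foreground. Parameter names are illustrative, and the EM fitting itself is omitted.

```python
import numpy as np
from scipy.stats import gamma, multivariate_t

def foreground_prob(rg, bg_params, fg_mean, fg_cov, fg_df, pi_fg):
    """P(foreground | intensities) under a gamma-t two-component mixture.

    rg        -- (n, 2) array of red/green intensities
    bg_params -- ((a_r, loc_r, scale_r), (a_g, loc_g, scale_g)): independent
                 three-parameter gammas for the background channels
    """
    (ar, lr, sr), (ag, lg, sg) = bg_params
    f_bg = (gamma.pdf(rg[:, 0], ar, loc=lr, scale=sr)
            * gamma.pdf(rg[:, 1], ag, loc=lg, scale=sg))
    f_fg = multivariate_t.pdf(rg, loc=fg_mean, shape=fg_cov, df=fg_df)
    return pi_fg * f_fg / (pi_fg * f_fg + (1 - pi_fg) * f_bg)
```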

  16. Production of Normal Mammalian Organ Culture Using a Medium Containing Mem-Alpha, Leibovitz L 15, Glucose Galactose Fructose

    NASA Technical Reports Server (NTRS)

    Goodwin, Thomas J. (Inventor); Wolf, David A. (Inventor); Spaulding, Glenn F. (Inventor); Prewett, Tacey L. (Inventor)

    1999-01-01

    Normal mammalian tissue and a process for culturing it have been developed for three tissue groups: organ, structural, and blood tissue. The cells are grown in vitro under microgravity culture conditions and form three-dimensional cell aggregates with normal cell function. The microgravity culture conditions may be actual microgravity or simulated microgravity created in a horizontal rotating wall culture vessel. The medium used for culturing the cells, especially a mixture of epithelial and mesenchymal cells, contains a mixture of Mem-alpha and Leibovitz L15 supplemented with glucose, galactose, and fructose.

  17. Experimental evidence for killing the resistant cells and raising the efficacy and decreasing the toxicity of cytostatics and irradiation by mixtures of the agents of the passive antitumor defense system in the case of various tumor and normal cell lines in vitro.

    PubMed

    Kulcsár, Gyula

    2009-02-01

    Despite the substantial decline of the immune system in AIDS, only a few kinds of tumors increase in incidence. This shows that the immune system has no absolute role in the prevention of tumors. Therefore, the fact that tumors do not develop in the majority of the population during their lifetime indicates the existence of other defense system(s). According to our hypothesis, the defense is provided by certain substances of the circulatory system. Earlier, on the basis of this hypothesis, we experimentally selected 16 substances of the circulatory system and demonstrated that a mixture of them (called the active mixture) had a cytotoxic effect (inducing apoptosis) in vitro and in vivo on different tumor cell lines, but not on normal cells and animals. In this paper, we provide evidence that different cytostatic drugs or irradiation in combination with the active mixture killed significantly more cancer cells than either treatment alone. The active mixture decreased, to a certain extent, the toxicity of cytostatics and irradiation on normal cells, but the most important result was that the active mixture destroyed the multidrug-resistant cells. Our results suggest the possibility of improving the efficacy and reducing the side-effects of chemotherapy and radiation therapy, and of preventing relapse by killing the resistant cells.

  18. Analyzing repeated measures semi-continuous data, with application to an alcohol dependence study.

    PubMed

    Liu, Lei; Strawderman, Robert L; Johnson, Bankole A; O'Quigley, John M

    2016-02-01

    Two-part random effects models (Olsen and Schafer,(1) Tooze et al.(2)) have been applied to repeated measures of semi-continuous data, characterized by a mixture of a substantial proportion of zero values and a skewed distribution of positive values. In the original formulation of this model, the natural logarithm of the positive values is assumed to follow a normal distribution with a constant variance parameter. In this article, we review and consider three extensions of this model, allowing the positive values to follow (a) a generalized gamma distribution, (b) a log-skew-normal distribution, and (c) a normal distribution after the Box-Cox transformation. We allow for the possibility of heteroscedasticity. Maximum likelihood estimation is shown to be conveniently implemented in SAS Proc NLMIXED. The performance of the methods is compared through applications to daily drinking records in a secondary data analysis from a randomized controlled trial of topiramate for alcohol dependence treatment. We find that all three models provide a significantly better fit than the log-normal model, and there exists strong evidence for heteroscedasticity. We also compare the three models by the likelihood ratio tests for non-nested hypotheses (Vuong(3)). The results suggest that the generalized gamma distribution provides the best fit, though no statistically significant differences are found in pairwise model comparisons. © The Author(s) 2012.
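
    A two-part likelihood of this kind is easy to write down for the baseline log-normal case; the extensions reviewed in the article swap the log-normal density for a generalized gamma, log-skew-normal, or Box-Cox-normal part. The sketch below omits random effects and assumes a homoscedastic sigma, both simplifications.

```python
import numpy as np
from scipy.stats import norm

def two_part_loglik(y, p_zero, mu, sigma):
    """Log-likelihood of a two-part model for semi-continuous data (a sketch)."""
    y = np.asarray(y, dtype=float)
    zeros = y == 0
    ll = zeros.sum() * np.log(p_zero) + (~zeros).sum() * np.log1p(-p_zero)
    pos = y[~zeros]
    # log-normal density for the positive part: normal on log(y), Jacobian 1/y
    ll += np.sum(norm.logpdf(np.log(pos), mu, sigma) - np.log(pos))
    return ll
```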

  19. Deterministic annealing for density estimation by multivariate normal mixtures

    NASA Astrophysics Data System (ADS)

    Kloppenburg, Martin; Tavan, Paul

    1997-03-01

    An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.
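
    A minimal univariate illustration of the idea is sketched below: responsibilities are computed from tempered densities raised to an inverse temperature that is annealed toward 1, which smooths the likelihood surface early on and stabilizes the updates. The schedule and initialization are assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.stats import norm

def annealed_em(x, K=2, betas=np.linspace(0.1, 1.0, 30), inner=20):
    """Deterministic-annealing EM for a K-component univariate normal mixture."""
    pi = np.full(K, 1.0 / K)
    mu = np.quantile(x, np.linspace(0.1, 0.9, K))
    sd = np.full(K, x.std())
    for beta in betas:                       # slowly remove the tempering
        for _ in range(inner):
            logp = np.log(pi) + norm.logpdf(x[:, None], mu, sd)   # (n, K)
            w = np.exp(beta * (logp - logp.max(axis=1, keepdims=True)))
            r = w / w.sum(axis=1, keepdims=True)                  # tempered E-step
            nk = r.sum(axis=0)
            pi = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sd
```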

  20. Characterization and quantification of grape variety by means of shikimic acid concentration and protein fingerprint in still white wines.

    PubMed

    Chabreyrie, David; Chauvet, Serge; Guyon, François; Salagoïty, Marie-Hélène; Antinelli, Jean-François; Medina, Bernard

    2008-08-27

    Protein profiles, obtained by high-performance capillary electrophoresis (HPCE) on white wines previously dialyzed, combined with shikimic acid concentration and multivariate analysis, were used for the determination of grape variety composition of a still white wine. Six varieties were studied through monovarietal wines elaborated in the laboratory: Chardonnay (24 samples), Chenin (24), Petit Manseng (7), Sauvignon (37), Semillon (24), and Ugni Blanc (9). Homemade mixtures were elaborated from authentic monovarietal wines according to a Plackett-Burman sampling plan. After protein peak area normalization, a matrix was elaborated containing protein results of wines (mixtures and monovarietal). Partial least-squares processing was applied to this matrix allowing the elaboration of a model that provided a varietal quantification precision of around 20% for most of the grape varieties studied. The model was applied to commercial samples from various geographical origins, providing encouraging results for control purposes.

  1. Modeling the use of a binary mixture as a control scheme for two-phase thermal systems

    NASA Technical Reports Server (NTRS)

    Benner, S. M.; Costello, Frederick A.

    1990-01-01

    Two-phase thermal loops using mechanical pumps, capillary pumps, or a combination of the two have been chosen as the main heat transfer systems for the space station. For these systems to operate optimally, the flow rate in the loop should be controlled in response to the vapor/liquid ratio leaving the evaporator. By substituting a mixture of two non-azeotropic fluids in place of the single fluid normally used in these systems, it may be possible to monitor the temperature of the exiting vapor and determine the vapor/liquid ratio. The flow rate would then be adjusted to maximize the load capability with minimum energy input. A FLUINT model was developed to study the system dynamics of a hybrid capillary pumped loop using this type of control and was found to be stable under all the test conditions.

  2. A mixture theory model of fluid and solute transport in the microvasculature of normal and malignant tissues. I. Theory.

    PubMed

    Schuff, M M; Gore, J P; Nauman, E A

    2013-05-01

    In order to better understand the mechanisms governing transport of drugs, nanoparticle-based treatments, and therapeutic biomolecules, and the role of the various physiological parameters, a number of mathematical models have previously been proposed. The limitations of the existing transport models indicate the need for a comprehensive model that includes transport in the vessel lumen, the vessel wall, and the interstitial space and considers the effects of the solute concentration on fluid flow. In this study, a general model to describe the transient distribution of fluid and multiple solutes at the microvascular level was developed using mixture theory. The model captures the experimentally observed dependence of the hydraulic permeability coefficient of the capillary wall on the concentration of solutes present in the capillary wall and the surrounding tissue. Additionally, the model demonstrates that transport phenomena across the capillary wall and in the interstitium are related to the solute concentration as well as the hydrostatic pressure. The model is used in a companion paper to examine fluid and solute transport for the simplified case of an axisymmetric geometry with no solid deformation or interconversion of mass.

  3. Combustion of Gaseous Mixtures

    NASA Technical Reports Server (NTRS)

    Duchene, R

    1932-01-01

    This report not only presents matters of practical importance in the classification of engine fuels, for which other means have proved inadequate, but also makes a few suggestions. It confirms the results of Withrow and Boyd which localize the explosive wave in the last portions of the mixture burned. This being the case, it may be assumed that the greater the normal combustion, the less the energy developed in the explosive form. In order to combat the detonation, it is therefore necessary to try to render the normal combustion swift and complete, as produced in carbureted mixtures containing benzene (benzol), in which the flame propagation, beginning at the spark, yields a progressive and pronounced darkening on the photographic film.

  4. Theoretical Calculation of the Electron Transport Parameters and Energy Distribution Function for CF3I with noble gases mixtures using Monte Carlo simulation program

    NASA Astrophysics Data System (ADS)

    Jawad, Enas A.

    2018-05-01

    In this paper, a Monte Carlo simulation program has been used to calculate the electron energy distribution function (EEDF) and electron transport parameters for gas mixtures of trifluoroiodomethane (CF3I), an environmentally friendly gas, with the noble gases argon, helium, krypton, neon, and xenon. The electron transport parameters are assessed in the range of E/N (E is the electric field and N is the gas number density of background gas molecules) between 100 and 2000 Td (1 Townsend = 10^-17 V cm^2) at room temperature. These parameters are the electron mean energy (ε), the density-normalized longitudinal diffusion coefficient (NDL), and the density-normalized mobility (μN). The impact of CF3I in the noble gas mixtures is strongly apparent in the values of the electron mean energy, the density-normalized longitudinal diffusion coefficient, and the density-normalized mobility. The results of the calculation agreed well with experimental results.

  5. Modelling Spatial Dependence Structures Between Climate Variables by Combining Mixture Models with Copula Models

    NASA Astrophysics Data System (ADS)

    Khan, F.; Pilz, J.; Spöck, G.

    2017-12-01

    Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. They further affect the hydrological conditions and will consequently yield misleading results if they are not taken into account properly. In this study we modeled the spatial dependence structure between climate variables, including maximum temperature, minimum temperature, and precipitation, in the monsoon-dominated region of Pakistan. Six meteorological stations were considered for temperature and four for precipitation. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine, and Student t-copula models. Under the copula models, multivariate mixture normal distributions were used as marginals for temperature, and gamma distributions for precipitation. C-Vine, D-Vine, and Student t-copula were compared through observed and simulated spatial dependence structures to choose an appropriate model for the climate data. The results show that all copula models performed well; however, there are subtle differences in their performance. The copula models captured the patterns of spatial dependence structure between climate variables at multiple meteorological sites, although the t-copula showed poor performance in reproducing the magnitude of the dependence structure. Important statistics of the observed data were closely approximated, except for the maximum values of maximum temperature and the minimum values of minimum temperature. Probability density functions of simulated data closely follow those of observed data for all variables. C- and D-Vines are better tools for modelling the dependence between variables, although Student t-copulas compete closely for precipitation. Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan, Mixture models, EM algorithm.

  6. On the cause of the non-Gaussian distribution of residuals in geomagnetism

    NASA Astrophysics Data System (ADS)

    Hulot, G.; Khokhlov, A.

    2017-12-01

    To describe errors in the data, Gaussian distributions naturally come to mind. In many practical instances, indeed, Gaussian distributions are appropriate. In the broad field of geomagnetism, however, it has repeatedly been noted that residuals between data and models often display much sharper distributions, sometimes better described by a Laplace distribution. In the present study, we make the case that such non-Gaussian behaviors are very likely the result of what is known as mixture of distributions in the statistical literature. Mixtures arise as soon as the data do not follow a common distribution or are not properly normalized, the resulting global distribution being a mix of the various distributions followed by subsets of the data, or even individual datum. We provide examples of the way such mixtures can lead to distributions that are much sharper than Gaussian distributions and discuss the reasons why such mixtures are likely the cause of the non-Gaussian distributions observed in geomagnetism. We also show that when properly selecting sub-datasets based on geophysical criteria, statistical mixture can sometimes be avoided and much more Gaussian behaviors recovered. We conclude with some general recommendations and point out that although statistical mixture always tends to sharpen the resulting distribution, it does not necessarily lead to a Laplacian distribution. This needs to be taken into account when dealing with such non-Gaussian distributions.

  7. Estimation and confidence intervals for empirical mixing distributions

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1995-01-01

    Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.
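
    For the parametric normal-normal case, the posterior mean of each underlying parameter has a closed form once the prior is estimated from the ensemble. The sketch below uses a method-of-moments prior fit; the empirical Bayes bootstrap that the report emphasizes for interval estimation is not shown.

```python
import numpy as np

def eb_posterior_means(est, se):
    """Empirical Bayes posterior means under a normal-normal model (a sketch).

    est -- vector of parameter estimates (e.g. population trend estimates)
    se  -- their standard errors; the prior N(m, tau^2) is fitted by moments
    """
    m = est.mean()
    tau2 = max(est.var(ddof=1) - np.mean(se ** 2), 0.0)
    shrink = tau2 / (tau2 + se ** 2)
    return m + shrink * (est - m)    # each estimate shrunk toward the overall mean
```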

  8. Generalized weighted likelihood density estimators with application to finite mixture of exponential family distributions

    PubMed Central

    Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris

    2010-01-01

    The family of weighted likelihood estimators largely overlaps with minimum divergence estimators. They are robust to data contamination compared to the MLE. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function, and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first- and second-order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of three-component normal mixture models with large overlaps and heavy contamination. A real data example is also provided. PMID:20835375

  9. CEC-normalized clay-water sorption isotherm

    NASA Astrophysics Data System (ADS)

    Woodruff, W. F.; Revil, A.

    2011-11-01

    A normalized clay-water isotherm model, based on BET theory and describing the sorption and desorption of bound water in clays, sand-clay mixtures, and shales, is presented. Clay-water sorption isotherms (sorption and desorption) of clayey materials are normalized by their cation exchange capacity (CEC), accounting for a correction factor depending on the type of counterion sorbed on the mineral surface in the so-called Stern layer. With such normalizations, all the data collapse into two master curves, one for sorption and one for desorption, independent of the clay mineralogy, crystallographic considerations, and bound cation type, thereby neglecting the true heterogeneity of water sorption/desorption in smectite. The two master curves show the general hysteretic behavior of the capillary pressure curve at low relative humidity (below 70%). The model is validated against several data sets obtained from the literature comprising a broad range of clay types and clay mineralogies. The CEC values, derived by inverting the sorption/desorption curves using a Markov chain Monte Carlo approach, are consistent with the CEC associated with the clay mineralogy.

  10. Incorporation of β-glucans in meat emulsions through an optimal mixture modeling systems.

    PubMed

    Vasquez Mejia, Sandra M; de Francisco, Alicia; Manique Barreto, Pedro L; Damian, César; Zibetti, Andre Wüst; Mahecha, Hector Suárez; Bohrer, Benjamin M

    2018-09-01

    The effects of β-glucans (βG) in beef emulsions with carrageenan and starch were evaluated using an optimal mixture modeling system. The best mathematical models to describe the cooking loss, color, and textural profile analysis (TPA) were selected and optimized. Cubic models described the cooking loss, color, and TPA parameters best, with the exception of springiness. Emulsions with greater levels of βG and starch had less cooking loss (<1%), intermediate L* (>54 and <62), and greater hardness, cohesiveness, and springiness values. Subsequently, during the optimization phase, the use of carrageenan was eliminated. The optimized emulsion contained 3.13 ± 0.11% βG, which could cover the recommended daily intake of βG. However, the hardness of the optimized emulsion was greater (60,224 ± 1025 N) than expected. The optimized emulsion had a homogeneous structure and normal thermal behavior by DSC, and allowed for the manufacture of products with high amounts of βG and desired functional attributes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. The effects of temperature on nitrous oxide and oxygen mixture homogeneity and stability.

    PubMed

    Litwin, Patrick D

    2010-10-15

    For many long-standing practices, the rationale for them is often lost as time passes. This is the situation with respect to the storage and handling of equimolar 50% nitrous oxide and 50% oxygen volume/volume (v/v) mixtures. A review of the existing literature was undertaken to examine the developmental history of nitrous oxide and oxygen mixtures for anesthesia and analgesia, to ascertain whether sufficient bibliographic data were available to support the position that the contents of a cylinder of a 50%/50% v/v mixture of nitrous oxide and oxygen are in a homogenous single gas phase under normal conditions of handling and storage, and to determine whether justification could be found for the standard instructions given for handling before use. After ranking and removing duplicates, a total of fifteen articles were identified by the various search strategies and formed the basis of this literature review. Several studies were identified that confirmed that a 50%/50% v/v mixture of nitrous oxide and oxygen is in a homogenous single gas phase in a filled cylinder under normal conditions of handling and storage. The effect of temperature on the change of phase of the nitrous oxide in this mixture was further examined by several authors. These studies demonstrated that although it is possible to cause condensation and phase separation by cooling the cylinder, by allowing the cylinder to rewarm to room temperature for at least 48 hours, preferably in a horizontal orientation, and inverting it three times before use, the cylinder consistently delivered the proper proportions of the component gases as a homogenous mixture. The contents of a cylinder of a 50%/50% v/v mixture of nitrous oxide and oxygen are in a homogenous single gas phase under normal conditions of handling and storage, and the standard instructions given for handling before use are justified based on previously conducted studies.

  12. Discrim: a computer program using an interactive approach to dissect a mixture of normal or lognormal distributions

    USGS Publications Warehouse

    Bridges, N.J.; McCammon, R.B.

    1980-01-01

    DISCRIM is an interactive computer graphics program that dissects mixtures of normal or lognormal distributions. The program was written in an effort to obtain a more satisfactory solution to the dissection problem than that offered by a graphical or numerical approach alone. It combines graphic and analytic techniques using a Tektronix terminal in a time-share computing environment. The main program and subroutines were written in the FORTRAN language. © 1980.

  13. The application of Gaussian mixture models for signal quantification in MALDI-TOF mass spectrometry of peptides.

    PubMed

    Spainhour, John Christian G; Janech, Michael G; Schwacke, John H; Velez, Juan Carlos Q; Ramakrishnan, Viswanathan

    2014-01-01

    Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry coupled with stable isotope standards (SIS) has been used to quantify native peptides. This approach to peptide quantification has difficulty with samples containing peptides whose ion currents overlap in the spectra. In these overlapping spectra the currents sum together, which modifies the peak heights and makes normal SIS estimation problematic. An approach using Gaussian mixtures based on known physical constants to model the isotopic cluster of a known compound is proposed here. The characteristics of this approach are examined for single and overlapping compounds. The approach is compared to two commonly used SIS quantification methods for single compounds, namely the peak intensity method and the Riemann sum area under the curve (AUC) method. For studying the characteristics of the Gaussian mixture method, Angiotensin II, Angiotensin-2-10, and Angiotensin-1-9 and their associated SIS peptides were used. The findings suggest that the Gaussian mixture method has characteristics similar to the two comparison methods for estimating the quantity of isolated isotopic clusters of single compounds. All three methods were tested using MALDI-TOF mass spectra collected for peptides of the renin-angiotensin system. The Gaussian mixture method accurately estimated the native-to-labeled ratio of several isolated angiotensin peptides (5.2% error in ratio estimation), with estimation errors similar to those calculated using the peak intensity and Riemann sum AUC methods (5.9% and 7.7%, respectively). For overlapping angiotensin peptides (where the other two methods are not applicable), the estimation error of the Gaussian mixture was 6.8%, which is within the acceptable range. In summary, for single compounds the Gaussian mixture method is equivalent or marginally superior to the existing methods of peptide quantification, and it is capable of quantifying overlapping (convolved) peptides within an acceptable margin of error.
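
    Because the isotope peak spacing and the instrument peak width can be treated as known physical constants, fitting a cluster reduces to linear least squares in the Gaussian amplitudes. The constants and names below are illustrative, not the paper's implementation.

```python
import numpy as np

def fit_isotope_cluster(mz, intensity, mono_mz, n_peaks=4, sigma=0.05, spacing=1.00335):
    """Fit an isotopic cluster as a sum of fixed-center Gaussians (a sketch)."""
    centers = mono_mz + spacing * np.arange(n_peaks)
    # Design matrix: one Gaussian basis function per isotope peak
    basis = np.exp(-0.5 * ((mz[:, None] - centers) / sigma) ** 2)
    amps, *_ = np.linalg.lstsq(basis, intensity, rcond=None)
    return amps, amps.sum()   # per-peak amplitudes and total cluster abundance
```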

  14. Using a multinomial tree model for detecting mixtures in perceptual detection

    PubMed Central

    Chechile, Richard A.

    2014-01-01

    In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741

  15. Influence of the normalized ion flux on the constitution of alumina films deposited by plasma-assisted chemical vapor deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurapov, Denis; Reiss, Jennifer; Trinh, David H.

    2007-07-15

    Alumina thin films were deposited onto tempered hot working steel substrates from an AlCl3-O2-Ar-H2 gas mixture by plasma-assisted chemical vapor deposition. The normalized ion flux was varied during deposition through changes in precursor content while keeping the cathode voltage and the total pressure constant. As the precursor content in the total gas mixture was increased from 0.8% to 5.8%, the deposition rate increased 12-fold, while the normalized ion flux decreased by approximately 90%. The constitution, morphology, impurity incorporation, and the elastic properties of the alumina thin films were found to depend on the normalized ion flux. These changes in structure, composition, and properties induced by the normalized ion flux may be understood by considering mechanisms related to surface and bulk diffusion.

  16. Relationship Between Speed of Sound in and Density of Normal and Diseased Rat Livers

    NASA Astrophysics Data System (ADS)

    Hachiya, Hiroyuki; Ohtsuki, Shigeo; Tanaka, Motonao

    1994-05-01

    Speed of sound is an important acoustic parameter for quantitative characterization of living tissues. In this paper, the relationship between speed of sound in and density of rat liver tissues is investigated. The speed of sound was measured by a nondeformable technique based on frequency-time analysis of a 3.5 MHz pulse response. The speed of sound in normal livers varied minimally between individuals and was not related to body weight or age. In liver tissues from animals administered CCl4, the speed of sound was lower than in normal tissues. The relationship between speed of sound and density in normal, fatty, and cirrhotic livers can be fitted well by the line estimated using the immiscible liquid model, assuming a mixture of normal liver and fat tissue. For 3.5 MHz ultrasound, the speed of sound in fresh liver with fatty degeneration is considered to reflect the fat content and not to depend strongly on the degree of fibrosis.
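
    The immiscible liquid mixture rule referred to is commonly written as Wood's equation: the effective compressibility and density both average linearly in volume fraction, and c = 1/sqrt(kappa*rho). The tissue values below are illustrative defaults, not the paper's measurements.

```python
def mixture_sound_speed(vol_fat, rho_liver=1.06, c_liver=1.58,
                        rho_fat=0.92, c_fat=1.44):
    """Speed of sound (km/s) in a liver-fat mixture via a Wood-type model.

    Densities in g/cm^3, speeds in km/s; compressibility kappa_i = 1/(rho_i c_i^2)
    and density both mix linearly by volume fraction.
    """
    kappa = (vol_fat / (rho_fat * c_fat ** 2)
             + (1 - vol_fat) / (rho_liver * c_liver ** 2))
    rho = vol_fat * rho_fat + (1 - vol_fat) * rho_liver
    return (1.0 / (kappa * rho)) ** 0.5
```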

  17. 241Am Ingrowth and Its Effect on Internal Dose

    DOE PAGES

    Konzen, Kevin

    2016-07-01

    Generally, plutonium has been manufactured to support commercial and military applications involving heat sources, weapons, and reactor fuel. This work focuses on three typical plutonium mixtures, while observing the potential of 241Am ingrowth and its effect on internal dose. The term “ingrowth” is used to describe 241Am production due solely to the decay of 241Pu as part of a plutonium mixture, where it is initially absent or present in a smaller quantity. Dose calculation models do not account for 241Am ingrowth unless the 241Pu quantity is specified. This work suggested that 241Am ingrowth be considered in bioassay analysis when there is a potential of a 10% increase to the individual’s committed effective dose. It was determined that plutonium fuel mixtures, initially absent of 241Am, would likely exceed 10% for typical reactor grade fuel aged less than 30 years; however, heat source grade and aged weapons grade fuel would normally fall below this threshold. In conclusion, although this work addresses typical plutonium mixtures following separation, it may be extended to irradiated commercial uranium fuel and is expected to be a concern in the recycling of spent fuel.

  19. Co-pyrolysis kinetics of sewage sludge and bagasse using multiple normal distributed activation energy model (M-DAEM).

    PubMed

    Lin, Yan; Chen, Zhihao; Dai, Minquan; Fang, Shiwen; Liao, Yanfen; Yu, Zhaosheng; Ma, Xiaoqian

    2018-07-01

    In this study, kinetic models of bagasse, sewage sludge, and their mixture were established using the multiple normal distributed activation energy model. When blended with sewage sludge, the initial temperature declined from 437 K to 418 K. The pyrolytic species could be divided into five categories: analogous hemicelluloses I, hemicelluloses II, cellulose, lignin, and bio-char. For these species, the average activation energies and their deviations lay in reasonable ranges of 166.4673-323.7261 kJ/mol and 0.1063-35.2973 kJ/mol, respectively, in agreement with the references. The kinetic models matched the experimental data well, with R^2 greater than 99.999%. In the local sensitivity analysis, the distributed average activation energy had a stronger effect on robustness than the other kinetic parameters, and the content of the pyrolytic species determined which series of kinetic parameters were more important. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Sensitivity of chloride efflux vs. transepithelial measurements in mixed CF and normal airway epithelial cell populations.

    PubMed

    Illek, Beate; Lei, Dachuan; Fischer, Horst; Gruenert, Dieter C

    2010-01-01

    While Cl(-) efflux assays are relatively straightforward, their ability to assess the efficacy of phenotypic correction in cystic fibrosis (CF) tissue or cells may be limited. Accurate assessment of therapeutic efficacy, i.e., correlating wild-type CF transmembrane conductance regulator (CFTR) levels with phenotypic correction in tissue or individual cells, requires a sensitive assay. Radioactive chloride ((36)Cl) efflux was compared to Ussing chamber analysis for measuring cAMP-dependent Cl(-) transport in mixtures of human normal (16HBE14o-) and cystic fibrosis (CFTE29o- or CFBE41o-) airway epithelial cells. Cell mixtures with decreasing amounts of 16HBE14o- cells were evaluated. Efflux and Ussing chamber studies on mixed populations of normal and CF airway epithelial cells showed that, as the number of CF cells within the population was progressively increased, the cAMP-dependent Cl(-) transport decreased. The (36)Cl efflux assay was effective for measuring Cl(-) transport when ≥25% of the cells were normal. If <25% of the cells were phenotypically wild-type (wt), the (36)Cl efflux assay was no longer reliable. Polarized CFBE41o- cells, also homozygous for the ΔF508 mutation, were used in the Ussing chamber studies. Ussing analysis detected cAMP-dependent Cl(-) currents in mixtures with ≥1% wild-type cells, indicating that Ussing analysis is more sensitive than (36)Cl efflux analysis for detection of functional CFTR. Assessment of CFTR function by Ussing analysis is therefore the more sensitive of the two approaches: cell mixtures containing 10% 16HBE14o- cells showed 40-50% of normal cAMP-dependent Cl(-) transport, which drops off exponentially between 10% and 1% wild-type cells. Copyright © 2010 S. Karger AG, Basel.

  1. The non-trusty clown attack on model-based speaker recognition systems

    NASA Astrophysics Data System (ADS)

    Farrokh Baroughi, Alireza; Craver, Scott

    2015-03-01

    Biometric detectors for speaker identification commonly employ a statistical model of a subject's voice, such as a Gaussian Mixture Model, that combines multiple means to improve detector performance. This allows a malicious insider to amend or append a component of a subject's statistical model so that a detector behaves normally except under a carefully engineered circumstance, letting the attacker force a misclassification of his or her voice only when desired by smuggling data into a database far in advance of an attack. Note that the attack is possible if the attacker has access to the database, even for a limited time, to modify the victim's model. We exhibit such an attack on a speaker identification system, in which an attacker forces a misclassification by speaking in an unusual voice after replacing the least weighted component of the victim's model with the most heavily weighted component of the attacker's unusual-voice model. The attacker makes his or her voice unusual during the attack because the attacker's normal voice model may already be in the database; by attacking with an unusual voice, the attacker retains the option of being recognized as himself or herself when talking normally, or as the victim when talking in the unusual manner. By attaching an appropriately weighted vector to a victim's model, we can impersonate all users in our simulations while avoiding unwanted false rejections.

  2. Error reduction and representation in stages (ERRIS) in hydrological modelling for ensemble streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.

    2016-09-01

    This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
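
    The staging logic can be mimicked in a few lines. The toy sketch below implements only Stages 2 and 3 (bias correction and AR(1) updating) on arrays of simulated and observed flows; Stage 1 (hydrological model plus normalizing transformation) and Stage 4 (fitting a two-component Gaussian mixture to the final residuals, e.g. with scikit-learn's GaussianMixture) are omitted. All names are illustrative, and this is not the authors' code.

```python
import numpy as np

def stages_2_and_3(sim, obs):
    """Bias correction then AR(1) residual updating (toy version of two ERRIS stages)."""
    # Stage 2: linear bias correction of the simulated flows
    a, b = np.polyfit(sim, obs, 1)
    e2 = obs - (a * sim + b)
    # Stage 3: AR(1) coefficient from the lag-1 autocorrelation of the residuals
    rho = np.corrcoef(e2[:-1], e2[1:])[0, 1]
    e3 = e2[1:] - rho * e2[:-1]       # residuals after one-step-ahead updating
    return rho, e2, e3                # Stage 4 would model e3 with a Gaussian mixture
```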

  3. Contaminant source identification using semi-supervised machine learning

    NASA Astrophysics Data System (ADS)

    Vesselinov, Velimir V.; Alexandrov, Boian S.; O'Malley, Daniel

    2018-05-01

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).
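
    The decomposition step, without NMFk's custom cluster-based selection of the number of sources, can be reproduced with scikit-learn's NMF on a synthetic mixing example (all data below are made up; NMF recovers sources only up to permutation and scaling).

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
true_sources = rng.random((3, 8))             # 3 hypothetical source signatures
mixing = rng.dirichlet(np.ones(3), size=20)   # unknown mixing ratios per well
V = mixing @ true_sources                     # observed (wells x constituents) matrix

model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(V)   # estimated mixing ratios
H = model.components_        # estimated source signatures
```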

  4. Contaminant source identification using semi-supervised machine learning

    DOE PAGES

    Vesselinov, Velimir Valentinov; Alexandrov, Boian S.; O’Malley, Dan

    2017-11-08

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. Finally, the NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).

  5. Contaminant source identification using semi-supervised machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesselinov, Velimir Valentinov; Alexandrov, Boian S.; O’Malley, Dan

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. Finally, the NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).

  6. Estimating wetland vegetation abundance from Landsat-8 operational land imager imagery: a comparison between linear spectral mixture analysis and multinomial logit modeling methods

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Gong, Zhaoning; Zhao, Wenji; Pu, Ruiliang; Liu, Ke

    2016-01-01

    Mapping vegetation abundance using remote sensing data is an efficient means of detecting changes in an eco-environment. With Landsat-8 Operational Land Imager (OLI) imagery acquired on July 31, 2013, both linear spectral mixture analysis (LSMA) and multinomial logit model (MNLM) methods were applied to estimate and assess the vegetation abundance in the Wild Duck Lake Wetland in Beijing, China. To improve the mapping of vegetation abundance and increase the number of endmembers in spectral mixture analysis, the normalized difference vegetation index was extracted from the OLI imagery along with the seven reflective bands of OLI data for estimating the vegetation abundance. Five endmembers were selected: terrestrial plants, aquatic plants, bare soil, high albedo, and low albedo. The vegetation abundance mapping results from Landsat OLI data were finally evaluated using WorldView-2 multispectral imagery. Similar spatial patterns of vegetation abundance were produced by both the fully constrained LSMA algorithm and the MNLM method: higher vegetation abundance levels were distributed in agricultural and riparian areas, and lower levels in urban/built-up areas. The experimental results also indicate that the MNLM model outperformed the LSMA algorithm, with a smaller root mean square error (0.0152 versus 0.0252) and a higher coefficient of determination (0.7856 versus 0.7214), as the MNLM model could handle the nonlinear reflection phenomenon of mixed pixels better than the LSMA.
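
    For reference, the fully constrained unmixing step solves, for each pixel, a least-squares problem with non-negative abundances that sum to one. A common implementation trick, shown in this hedged sketch, appends a heavily weighted row of ones so that NNLS enforces the sum-to-one constraint softly.

```python
import numpy as np
from scipy.optimize import nnls

def fcls(pixel, endmembers, delta=1e3):
    """Fully constrained linear unmixing of one pixel (a sketch).

    pixel      -- (n_bands,) reflectance vector
    endmembers -- (n_bands, n_endmembers) endmember signature matrix
    """
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    x = np.append(pixel, delta)      # weighted sum-to-one row and target
    abundances, _ = nnls(E, x)       # non-negativity from NNLS
    return abundances
```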

  7. Admixture analysis of age at onset in first episode bipolar disorder.

    PubMed

    Nowrouzi, Behdin; McIntyre, Roger S; MacQueen, Glenda; Kennedy, Sidney H; Kennedy, James L; Ravindran, Arun; Yatham, Lakshmi; De Luca, Vincenzo

    2016-09-01

    Many studies have used admixture analysis to separate age-at-onset (AAO) subgroups in bipolar disorder, but none of them has examined first-episode patients. The purpose of this study was to investigate the influence of clinical variables on AAO in first-episode bipolar patients. Admixture analysis was applied to identify the model best fitting the observed AAO distribution of a sample of 194 patients with a DSM-IV diagnosis of bipolar disorder, and the finite mixture model was applied to assess the effect of clinical covariates on AAO. Using the BIC method, the model that best fitted the observed distribution of AAO was a mixture of three normal distributions. We identified three AAO groups: early age at onset (EAO) (µ=18.0, σ=2.88), intermediate age at onset (IAO) (µ=28.7, σ=3.5), and late age at onset (LAO) (µ=47.3, σ=7.8), comprising 69%, 22%, and 9% of the sample, respectively. Our first-episode sample distribution model was significantly different from most of the other studies that applied the mixture analysis. The main limitation is that our sample may have inadequate statistical power to detect the clinical associations with the AAO subgroups. This study confirms that bipolar disorder can be classified into three groups based on AAO distribution. The data reported in our paper provide more insight into the diagnostic heterogeneity of bipolar disorder across the three AAO subgroups. Copyright © 2016 Elsevier B.V. All rights reserved.
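
    The model-selection step, fitting normal mixtures with increasing numbers of components and choosing by BIC, can be sketched with scikit-learn; this is an illustration, not the authors' admixture software.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def best_aao_mixture(ages, max_k=4):
    """Choose the number of age-at-onset subgroups by BIC (a sketch)."""
    X = np.asarray(ages, dtype=float).reshape(-1, 1)
    fits = [GaussianMixture(n_components=k, random_state=0).fit(X)
            for k in range(1, max_k + 1)]
    best = min(fits, key=lambda m: m.bic(X))
    return best.n_components, best.means_.ravel(), np.sqrt(best.covariances_.ravel())
```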

  8. Comparing Factor, Class, and Mixture Models of Cannabis Initiation and DSM Cannabis Use Disorder Criteria, Including Craving, in the Brisbane Longitudinal Twin Study

    PubMed Central

    Kubarych, Thomas S.; Kendler, Kenneth S.; Aggen, Steven H.; Estabrook, Ryne; Edwards, Alexis C.; Clark, Shaunna L.; Martin, Nicholas G.; Hickie, Ian B.; Neale, Michael C.; Gillespie, Nathan A.

    2014-01-01

    Accumulating evidence suggests that the Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnostic criteria for cannabis abuse and dependence are best represented by a single underlying factor. However, it remains possible that models with additional factors, or latent class models or hybrid models, may better explain the data. Using structured interviews, 626 adult male and female twins provided complete data on symptoms of cannabis abuse and dependence, plus a craving criterion. We compared latent factor analysis, latent class analysis, and factor mixture modeling using normal theory marginal maximum likelihood for ordinal data. Our aim was to derive a parsimonious, best-fitting cannabis use disorder (CUD) phenotype based on DSM-IV criteria and determine whether DSM-5 craving loads onto a general factor. When compared with latent class and mixture models, factor models provided a better fit to the data. When conditioned on initiation and cannabis use, the association between criteria for abuse, dependence, withdrawal, and craving were best explained by two correlated latent factors for males and females: a general risk factor to CUD and a factor capturing the symptoms of social and occupational impairment as a consequence of frequent use. Secondary analyses revealed a modest increase in the prevalence of DSM-5 CUD compared with DSM-IV cannabis abuse or dependence. It is concluded that, in addition to a general factor with loadings on cannabis use and symptoms of abuse, dependence, withdrawal, and craving, a second clinically relevant factor defined by features of social and occupational impairment was also found for frequent cannabis use. PMID:24588857

  9. Effective chemotherapy of heterogeneous and drug-resistant early colon cancers by intermittent dose schedules: a computer simulation study.

    PubMed

    Axelrod, David E; Vedula, Sudeepti; Obaniyi, James

    2017-05-01

    The effectiveness of cancer chemotherapy is limited by intra-tumor heterogeneity, the emergence of spontaneous and induced drug-resistant mutant subclones, and the maximum dose to which normal tissues can be exposed without adverse side effects. The goal of this project was to determine whether intermittent schedules at the maximum dose that allows colon crypt maintenance could overcome these limitations, specifically by eliminating mixtures of drug-resistant mutants from heterogeneous early colon adenomas while maintaining colon crypt function. A computer model of cell dynamics in human colon crypts was calibrated with measurements of human biopsy specimens. The model allowed simulation of continuous and intermittent dose schedules of a cytotoxic chemotherapeutic drug, as well as the drug's effect on the elimination of mutant cells and the maintenance of crypt function. Colon crypts can tolerate a tenfold greater intermittent dose than a constant dose. This allows elimination of a mixture of relatively drug-sensitive and drug-resistant mutant subclones from heterogeneous colon crypts. Mutants can be eliminated whether they arise spontaneously or are induced by the cytotoxic drug. An intermittent dose, at the maximum that allows colon crypt maintenance, can be effective in eliminating a heterogeneous mixture of mutant subclones before they fill the crypt and form an adenoma.

  10. [Use of the Six Sigma methodology for the preparation of parenteral nutrition mixtures].

    PubMed

    Silgado Bernal, M F; Basto Benítez, I; Ramírez García, G

    2014-04-01

    To apply the tools of the Six Sigma methodology to the statistical control of parenteral nutrition compounding at the critical checkpoint of specific density. Between August 2010 and September 2013, specific density analysis was performed on 100% of the samples, and the data were divided into two groups, adults and neonates. The percentage of acceptance, the trend charts, and the sigma level were determined. A normality analysis was carried out using the Shapiro-Wilk test, and the total percentage of mixtures within the specification limits was calculated. The specific density data between August 2010 and September 2013 pass the normality test (W = 0.94) and show improvement in sigma level over time, reaching 6/6 in adults and 3.8/6 in neonates. 100% of the mixtures comply with the specification limits for adults and neonates, remaining within the control limits throughout the process. The improvement plans, together with the Six Sigma methodology, allow the process to be controlled and warrant agreement between the medical prescription and the content of the mixture. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  11. Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.

    ERIC Educational Resources Information Center

    Wang, Yuh-Yin Wu; Schafer, William D.

    This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…

  12. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise within the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using Poisson mixture regression models.
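
    A two-component Poisson mixture is the covariate-free core of the regression models discussed above. The hedged sketch below fits one by expectation-maximization on simulated counts; the rates, weights, and variable names are illustrative assumptions, not the paper's model.

    ```python
    # Minimal sketch: EM for a two-component Poisson mixture (no covariates).
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(1)
    y = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 100)])

    w, lam = np.array([0.5, 0.5]), np.array([1.0, 5.0])  # initial guesses
    for _ in range(200):
        # E-step: posterior probability of each component per observation
        p = w * poisson.pmf(y[:, None], lam)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: weighted updates of the mixing weights and Poisson rates
        w = r.mean(axis=0)
        lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)

    print("weights:", w.round(3), "rates:", lam.round(3))
    ```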

  13. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise within the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using Poisson mixture regression models. PMID:27999611

  14. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial and correct-classification probabilities when the classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
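
    The bias mechanism the authors address can be seen by simulating the data-generating process: per-unit correct-classification probabilities drawn logit-normally make the raw observed category frequency drift away from the true multinomial probability. The sketch below, with invented parameter values, is an assumption-laden illustration of that process rather than the authors' Bayesian estimator.

    ```python
    # Sketch: logit-normal heterogeneity in correct-classification probability.
    import numpy as np

    rng = np.random.default_rng(2)
    n_units, n_draws = 200, 50
    pi_true = 0.3                 # true probability of category 1 (of 2)
    mu, sigma = 1.5, 0.8          # logit-scale mean/sd of P(correct)

    # Each sampling unit gets its own correct-classification probability
    theta = 1 / (1 + np.exp(-rng.normal(mu, sigma, n_units)))
    true_cat = rng.random((n_units, n_draws)) < pi_true
    correct = rng.random((n_units, n_draws)) < theta[:, None]
    obs_cat = np.where(correct, true_cat, ~true_cat)   # flipped when misclassified

    # The naive observed frequency is biased away from the true probability
    print("observed frequency:", obs_cat.mean().round(3), "true pi:", pi_true)
    ```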

  15. Early Menarche and Gestational Diabetes Mellitus at First Live Birth.

    PubMed

    Shen, Yun; Hu, Hui; D Taylor, Brandie; Kan, Haidong; Xu, Xiaohui

    2017-03-01

    To examine the association between early menarche and gestational diabetes mellitus (GDM). Data from the National Health and Nutrition Examination Survey 2007-2012 were used to investigate the association between age at menarche and the risk of GDM at first birth among 5914 women. A growth mixture model was used to detect distinctive menarche onset patterns based on self-reported age at menarche. Logistic regression models were then used to examine the associations between menarche initiation patterns and GDM after adjusting for sociodemographic factors, family history of diabetes mellitus, lifetime greatest body mass index, smoking status, and physical activity level. Among the 5914 first-time mothers, 3.4% had self-reported GDM. We detected three groups with heterogeneous menarche onset patterns: the Early, Normal, and Late Menarche Groups. The regression model shows that, compared to the Normal Menarche Group, the Early Menarche Group had 1.75 (95% CI 1.10, 2.79) times the odds of having GDM. No statistically significant difference was observed between the Normal and the Late Menarche Groups. This study suggests that early menarche may be a risk factor for GDM. Future studies are warranted to examine and confirm this finding.

  16. New approach application of data transformation in mean centering of ratio spectra method

    NASA Astrophysics Data System (ADS)

    Issa, Mahmoud M.; Nejem, R.'afat M.; Van Staden, Raluca Ioana Stefan; Aboul-Enein, Hassan Y.

    2015-05-01

    Most mean centering of ratio spectra (MCR) methods are designed to be used with data sets whose values have a normal or nearly normal distribution. The errors associated with the values are also assumed to be independent and random. If the data are skewed, the results obtained may be doubtful. Most of the time, a normal distribution was assumed, and if a confidence interval included a negative value, it was cut off at zero. However, it is possible to transform the data so that at least an approximately normal distribution is attained. Taking the logarithm of each data point is one frequently used transformation. As a result, the geometric mean is considered a better measure of central tendency than the arithmetic mean. The developed MCR method using the geometric mean has been successfully applied to the analysis of a ternary mixture of aspirin (ASP), atorvastatin (ATOR) and clopidogrel (CLOP) as a model. The results obtained were statistically compared with a reported HPLC method.
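
    The transformation argument above reduces to a few lines of arithmetic: taking logs, averaging, and exponentiating yields the geometric mean, exp(mean(log x)) = (prod x)^(1/n), which then replaces the arithmetic mean in the centering step. A minimal sketch with made-up skewed data:

    ```python
    # Geometric mean via the log transform, on invented skewed intensities.
    import numpy as np

    x = np.array([0.8, 1.1, 1.3, 2.0, 9.5])      # skewed positive data
    geometric_mean = np.exp(np.log(x).mean())    # = (prod x)^(1/n)
    centered = x / geometric_mean                # ratio-centering on the geometric mean
    print(geometric_mean.round(4), centered.round(4))
    ```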

  17. Temporal and spatial patterns in vegetation and atmospheric properties from AVIRIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, D.A.; Green, R.O.; Adams, J.B.

    1997-12-01

    Little research has focused on the use of imaging spectrometry for change detection. In this paper, the authors apply Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data to the monitoring of seasonal changes in atmospheric water vapor, liquid water, and surface cover in the vicinity of Jasper Ridge, CA, for three dates in 1992. Apparent surface reflectance was retrieved and water vapor and liquid water mapped by using a radiative-transfer-based inversion that accounts for spatially variable atmospheres. Spectral mixture analysis (SMA) was used to model reflectance data as mixtures of green vegetation (GV), nonphotosynthetic vegetation (NPV), soil, and shade. Temporal and spatial patterns in endmember fractions and liquid water were compared to the normalized difference vegetation index (NDVI). The reflectance retrieval algorithm was tested by using a temporally invariant target.

  18. A hybrid expectation maximisation and MCMC sampling algorithm to implement Bayesian mixture model based genomic prediction and QTL mapping.

    PubMed

    Wang, Tingting; Chen, Yi-Ping Phoebe; Bowman, Phil J; Goddard, Michael E; Hayes, Ben J

    2016-09-21

    Bayesian mixture models in which the effects of SNPs are assumed to come from normal distributions with different variances are attractive for simultaneous genomic prediction and QTL mapping. These models are usually implemented with Markov chain Monte Carlo (MCMC) sampling, which requires long compute times with large genomic data sets. Here, we present an efficient approach (termed HyB_BR), which is a hybrid of an expectation-maximisation algorithm followed by a limited number of MCMC iterations, without the requirement for burn-in. To test prediction accuracy from HyB_BR, dairy cattle and human disease trait data were used. In the dairy cattle data, there were four quantitative traits (milk volume, protein kg, fat% in milk and fertility) measured in 16,214 cattle from two breeds genotyped for 632,002 SNPs. Validation of genomic predictions was in a subset of cattle either from the reference set or in animals from a third breed that was not in the reference set. In all cases, HyB_BR gave almost identical accuracies to Bayesian mixture models implemented with full MCMC, while computational time was reduced to as little as 1/17 of that required by full MCMC. The SNPs with high posterior probability of a non-zero effect were also very similar between full MCMC and HyB_BR, with several known genes affecting milk production in this category, as well as some novel genes. HyB_BR was also applied to seven human diseases with 4890 individuals genotyped for around 300 K SNPs in a case/control design, from the Wellcome Trust Case Control Consortium (WTCCC). In this data set, the results demonstrated again that HyB_BR performed as well as Bayesian mixture models with full MCMC for genomic prediction and genetic architecture inference, while reducing the computational time from 45 h with full MCMC to 3 h with HyB_BR. The results for quantitative traits in cattle and disease in humans demonstrate that HyB_BR can perform as well as Bayesian mixture models implemented with full MCMC in terms of prediction accuracy, but up to 17 times faster than the full MCMC implementations. The HyB_BR algorithm makes simultaneous genomic prediction, QTL mapping and inference of genetic architecture feasible in large genomic data sets.
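
    The prior at the heart of such models, SNP effects drawn from a mixture of zero-mean normals with different variances, can be fitted in miniature by EM. The sketch below is not the authors' HyB_BR implementation; the simulated effects, starting values, and two-component restriction are all assumptions.

    ```python
    # Hedged sketch: EM for a mixture of two zero-mean normal distributions
    # with different variances, the kind of prior placed on SNP effects.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    b = np.concatenate([rng.normal(0, 0.01, 5000),   # many near-null effects
                        rng.normal(0, 0.30, 100)])   # a few large effects

    w, s = np.array([0.9, 0.1]), np.array([0.05, 0.5])  # weights, sds (initial)
    for _ in range(300):
        p = w * norm.pdf(b[:, None], 0.0, s)             # E-step
        r = p / p.sum(axis=1, keepdims=True)
        w = r.mean(axis=0)                               # M-step
        s = np.sqrt((r * b[:, None] ** 2).sum(axis=0) / r.sum(axis=0))

    print("weights:", w.round(4), "component sds:", s.round(4))
    ```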

  19. The effect of binary mixtures of zinc, copper, cadmium, and nickel on the growth of the freshwater diatom Navicula pelliculosa and comparison with mixture toxicity model predictions.

    PubMed

    Nagai, Takashi; De Schamphelaere, Karel A C

    2016-11-01

    The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used for each binary metal mixture, with each combination tested in a 3-d fluorescence microplate toxicity assay. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.
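
    For readers wanting the two reference models in executable form, the sketch below computes CA and IA predictions for a binary mixture under assumed log-logistic single-chemical concentration-response curves. The EC50s, slopes, and mixture concentrations are invented, not the paper's fitted values; CA is solved by finding the effect level at which the toxic units sum to one.

    ```python
    # Hedged sketch of concentration addition (CA) and independent action (IA).
    import numpy as np
    from scipy.optimize import brentq

    ec50 = np.array([1.0, 4.0])    # hypothetical free-ion activities at 50% effect
    slope = np.array([2.0, 1.5])   # hypothetical log-logistic slopes

    def effect(c, ec50, slope):                 # fraction affected, log-logistic
        return 1 / (1 + (ec50 / c) ** slope)

    def ia(conc):                               # independent action
        return 1 - np.prod(1 - effect(conc, ec50, slope))

    def ca(conc):                               # concentration addition:
        def toxic_units(x):                     # solve sum_i c_i / EC_x,i = 1
            ecx = ec50 * (x / (1 - x)) ** (1 / slope)
            return np.sum(conc / ecx) - 1
        return brentq(toxic_units, 1e-9, 1 - 1e-9)

    mix = np.array([0.6, 2.0])
    print("CA predicted effect:", round(ca(mix), 3), "IA:", round(ia(mix), 3))
    ```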

  20. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  1. Mathematical modeling of erythrocyte chimerism informs genetic intervention strategies for sickle cell disease.

    PubMed

    Altrock, Philipp M; Brendel, Christian; Renella, Raffaele; Orkin, Stuart H; Williams, David A; Michor, Franziska

    2016-09-01

    Recent advances in gene therapy and genome-engineering technologies offer the opportunity to correct sickle cell disease (SCD), a heritable disorder caused by a point mutation in the β-globin gene. The developmental switch from fetal γ-globin to adult β-globin is governed in part by the transcription factor (TF) BCL11A. This TF has been proposed as a therapeutic target for reactivation of γ-globin and concomitant reduction of β-sickle globin. In this and other approaches, genetic alteration of a portion of the hematopoietic stem cell (HSC) compartment leads to a mixture of sickling and corrected red blood cells (RBCs) in the periphery. To reverse the sickling phenotype, a certain proportion of corrected RBCs is necessary; the degree of HSC alteration required to achieve a desired fraction of corrected RBCs remains unknown. To address this issue, we developed a mathematical model describing aging and survival of sickle-susceptible and normal RBCs; the former can have a selective survival advantage leading to their overrepresentation. We identified the level of bone marrow chimerism required for successful stem cell-based gene therapies in SCD. Our findings were further informed using an experimental mouse model, in which we transplanted mixtures of Berkeley SCD and normal murine bone marrow cells to establish chimeric grafts in murine hosts. Our integrative theoretical and experimental approach identifies the target frequency of HSC alterations required for effective treatment of sickling syndromes in humans. Our work replaces episodic observations of such target frequencies with a mathematical modeling framework that covers a large and continuous spectrum of chimerism conditions. Am. J. Hematol. 91:931-937, 2016. © 2016 Wiley Periodicals, Inc.

  2. Effect of roll compaction on granule size distribution of microcrystalline cellulose–mannitol mixtures: computational intelligence modeling and parametric analysis

    PubMed Central

    Kazemi, Pezhman; Khalid, Mohammad Hassan; Pérez Gago, Ana; Kleinebudde, Peter; Jachowicz, Renata; Szlęk, Jakub; Mendyk, Aleksander

    2017-01-01

    Dry granulation using roll compaction is a typical unit operation for producing solid dosage forms in the pharmaceutical industry. Dry granulation is commonly used if the powder mixture is sensitive to heat and moisture and has poor flow properties. The output of roll compaction is compacted ribbons that exhibit different properties based on the adjusted process parameters. These ribbons are then milled into granules and finally compressed into tablets. The properties of the ribbons directly affect the granule size distribution (GSD) and the quality of final products; thus, it is imperative to study the effect of roll compaction process parameters on GSD. The understanding of how the roll compactor process parameters and material properties interact with each other will allow accurate control of the process, leading to the implementation of quality by design practices. Computational intelligence (CI) methods have a great potential for being used within the scope of quality by design approach. The main objective of this study was to show how the computational intelligence techniques can be useful to predict the GSD by using different process conditions of roll compaction and material properties. Different techniques such as multiple linear regression, artificial neural networks, random forest, Cubist and k-nearest neighbors algorithm assisted by sevenfold cross-validation were used to present generalized models for the prediction of GSD based on roll compaction process setting and material properties. The normalized root-mean-squared error and the coefficient of determination (R2) were used for model assessment. The best fit was obtained by Cubist model (normalized root-mean-squared error =3.22%, R2=0.95). Based on the results, it was confirmed that the material properties (true density) followed by compaction force have the most significant effect on GSD. PMID:28176905

  3. Effect of roll compaction on granule size distribution of microcrystalline cellulose-mannitol mixtures: computational intelligence modeling and parametric analysis.

    PubMed

    Kazemi, Pezhman; Khalid, Mohammad Hassan; Pérez Gago, Ana; Kleinebudde, Peter; Jachowicz, Renata; Szlęk, Jakub; Mendyk, Aleksander

    2017-01-01

    Dry granulation using roll compaction is a typical unit operation for producing solid dosage forms in the pharmaceutical industry. Dry granulation is commonly used if the powder mixture is sensitive to heat and moisture and has poor flow properties. The output of roll compaction is compacted ribbons that exhibit different properties based on the adjusted process parameters. These ribbons are then milled into granules and finally compressed into tablets. The properties of the ribbons directly affect the granule size distribution (GSD) and the quality of final products; thus, it is imperative to study the effect of roll compaction process parameters on GSD. The understanding of how the roll compactor process parameters and material properties interact with each other will allow accurate control of the process, leading to the implementation of quality by design practices. Computational intelligence (CI) methods have a great potential for being used within the scope of quality by design approach. The main objective of this study was to show how the computational intelligence techniques can be useful to predict the GSD by using different process conditions of roll compaction and material properties. Different techniques such as multiple linear regression, artificial neural networks, random forest, Cubist and k-nearest neighbors algorithm assisted by sevenfold cross-validation were used to present generalized models for the prediction of GSD based on roll compaction process setting and material properties. The normalized root-mean-squared error and the coefficient of determination (R2) were used for model assessment. The best fit was obtained by Cubist model (normalized root-mean-squared error =3.22%, R2=0.95). Based on the results, it was confirmed that the material properties (true density) followed by compaction force have the most significant effect on GSD.

  4. Growth mixture modelling in families of the Framingham Heart Study

    PubMed Central

    2009-01-01

    Growth mixture modelling, a less explored method in genetic research, addresses unobserved heterogeneity in population samples. We applied this technique to longitudinal data of the Framingham Heart Study. We examined systolic blood pressure (BP) measures in 1060 males from 692 families and detected three subclasses, which varied significantly in their developmental trajectories over time. The first class consisted of 60 high-risk individuals with elevated BP early in life and a steep increase over time. The second group of 131 individuals displayed normal BP at first but showed a significant increase over time and reached high BP values late in their lifetime. The largest group of 869 individuals could be considered a normative group with normal BP on all exams. To identify genetic modulators for this phenotype, we tested 2,340 single-nucleotide polymorphisms on chromosome 8 for association with the class membership probabilities of our model. The probability of being in Class 1 was significantly associated with a very rare variant (rs1445404), present in only four individuals from four different families, located in the coding region of the gene EYA1 (eyes absent homolog 1 in Drosophila) (p = 1.39 × 10^-13). Mutations in EYA1 are known to cause branchio-oto-renal syndrome, as well as isolated renal malformations. Renal malformations could cause high BP early in life. This result awaits replication; however, it suggests that analyzing genetic data stratified for high-risk subgroups defined by a unique development over time could be useful for the detection of rare mutations in common multi-factorial diseases. PMID:20017979

  5. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    PubMed Central

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite highlighting potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach had so far not been developed to a stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the fragment mixture models are then aggregated to form the mixture model of the whole spectrum. We compare the algorithm to existing peak detection algorithms and demonstrate the improvements in peak detection efficiency obtained with Gaussian mixture modeling. We also show applications of the algorithm to real proteomic datasets of low and high resolution. PMID:26230717

  6. On the asymptotic improvement of supervised learning by utilizing additional unlabeled samples - Normal mixture density case

    NASA Technical Reports Server (NTRS)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, combined supervised-unsupervised learning is always superior to supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.
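
    The combined supervised-unsupervised estimator studied above can be sketched as an EM procedure in which labeled samples keep fixed 0/1 responsibilities while unlabeled samples receive posterior responsibilities. The univariate two-component example below, with simulated data and arbitrary starting values, is an illustrative assumption rather than the paper's estimator.

    ```python
    # Sketch: semi-supervised EM for a two-component univariate normal mixture.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    xl = np.concatenate([rng.normal(0, 1, 20), rng.normal(3, 1, 20)])    # labeled
    zl = np.repeat([0, 1], 20)
    xu = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])  # unlabeled

    x = np.concatenate([xl, xu])
    r = np.zeros((x.size, 2))
    r[np.arange(zl.size), zl] = 1.0          # clamp labeled responsibilities

    w, mu, sd = np.array([.5, .5]), np.array([0., 1.]), np.array([1., 1.])
    for _ in range(100):
        p = w * norm.pdf(xu[:, None], mu, sd)
        r[zl.size:] = p / p.sum(axis=1, keepdims=True)   # E-step, unlabeled only
        w = r.mean(axis=0)                               # M-step over all samples
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))

    print(mu.round(3), sd.round(3), w.round(3))
    ```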

  7. Effect of clay content and mineralogy on frictional sliding behavior of simulated gouges: binary and ternary mixtures of quartz, illite, and montmorillonite

    USGS Publications Warehouse

    Tembe, Sheryl; Lockner, David A.; Wong, Teng-Fong

    2010-01-01

    We investigated the frictional sliding behavior of simulated quartz-clay gouges under stress conditions relevant to seismogenic depths. Conventional triaxial compression tests were conducted at 40 MPa effective normal stress on saturated saw cut samples containing binary and ternary mixtures of quartz, montmorillonite, and illite. In all cases, frictional strengths of mixtures fall between the end-members of pure quartz (strongest) and clay (weakest). The overall trend was a decrease in strength with increasing clay content. In the illite/quartz mixture the trend was nearly linear, while in the montmorillonite mixtures a sigmoidal trend with three strength regimes was noted. Microstructural observations were performed on the deformed samples to characterize the geometric attributes of shear localization within the gouge layers. Two micromechanical models were used to analyze the critical clay fractions for the two-regime transitions on the basis of clay porosity and packing of the quartz grains. The transition from regime 1 (high strength) to 2 (intermediate strength) is associated with the shift from a stress-supporting framework of quartz grains to a clay matrix embedded with disperse quartz grains, manifested by the development of P-foliation and reduction in Riedel shear angle. The transition from regime 2 (intermediate strength) to 3 (low strength) is attributed to the development of shear localization in the clay matrix, occurring only when the neighboring layers of quartz grains are separated by a critical clay thickness. Our mixture data relating strength degradation to clay content agree well with strengths of natural shear zone materials obtained from scientific deep drilling projects.

  8. Analysis of Spin Financial Market by GARCH Model

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2013-08-01

    A spin model is used for simulations of financial markets. To determine return volatility in the spin financial market we use the GARCH model often used for volatility estimation in empirical finance. We apply the Bayesian inference performed by the Markov Chain Monte Carlo method to the parameter estimation of the GARCH model. It is found that volatility determined by the GARCH model exhibits "volatility clustering" also observed in the real financial markets. Using volatility determined by the GARCH model we examine the mixture-of-distribution hypothesis (MDH) suggested for the asset return dynamics. We find that the returns standardized by volatility are approximately standard normal random variables. Moreover we find that the absolute standardized returns show no significant autocorrelation. These findings are consistent with the view of the MDH for the return dynamics.

  9. Rapid and effective decontamination of chlorophenol-contaminated soil by sorption into commercial polymers: concept demonstration and process modeling.

    PubMed

    Tomei, M Concetta; Mosca Angelucci, Domenica; Ademollo, Nicoletta; Daugulis, Andrew J

    2015-03-01

    Solid phase extraction performed with commercial polymer beads to treat soil contaminated by chlorophenols (4-chlorophenol, 2,4-dichlorophenol and pentachlorophenol), as single compounds and in a mixture, was investigated in this study. Soil-water-polymer partition tests were conducted to determine the relative affinities of the single compounds in soil-water and polymer-water pairs. Subsequent soil extraction tests were performed with Hytrel 8206, the polymer showing the highest affinity for the tested chlorophenols. The factors examined were polymer type, moisture content, and contamination level. Increased moisture content (up to 100%) improved the extraction efficiency for all three compounds. Extraction tests at this upper level of moisture content showed removal efficiencies ≥70% for all the compounds and their ternary mixture within 24 h of contact time, in contrast to the weeks or months normally required for conventional ex situ remediation processes. A dynamic model characterizing the rate and extent of decontamination was also formulated, calibrated and validated with the experimental data. The proposed model, based on the simplified approach of "lumped parameters" for the mass transfer coefficients, provided very good predictions of the experimental data for the absorptive removal of contaminants from soil at different individual solute levels. Parameters evaluated by calibration against single-compound data were successfully applied to predict mixture data, with differences between experimental and predicted values in all cases being ≤3%. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Materials Related Forensic Analysis and Special Testing : Drying Shrinkage Evaluation of Bridge Decks with Class AAA and Class W/WD Type K Cement

    DOT National Transportation Integrated Search

    2001-07-01

    This work pertains to preparation of concrete drying shrinkage data for proposed concrete mixtures during normal concrete trial batch verification. Selected concrete mixtures will include PennDOT Classes AAA and AA and will also include the use of ...

  11. Survival and growth of trees and shrubs on different lignite minesoils in Louisiana

    Treesearch

    James D. Haywood; Allan E. Tiarks; James P. Barnett

    1993-01-01

    In 1980, an experimental opencast lignite mine was developed to compare redistributed A horizon with three minesoil mixtures as growth media for woody plants. The three minesoil mixtures contained different amounts and types of overburden materials, and normal reclamation practices were followed. Loblolly pine (Pinus taeda, L.), sawtooth oak (

  12. Identifiability in N-mixture models: a large-scale screening test with bird data.

    PubMed

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
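
    A hedged sketch of the Poisson binomial N-mixture likelihood follows: the latent abundance N is summed out up to a truncation point, and the two parameters are found by direct numerical maximization on simulated counts. The survey design, truncation choice, and all names are assumptions; an identifiability check in the spirit of the paper might, for example, examine whether the estimates drift as the truncation point grows.

    ```python
    # Hedged sketch: maximum likelihood for a Poisson binomial N-mixture model.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom, poisson

    rng = np.random.default_rng(5)
    lam_true, p_true, n_sites, n_visits = 4.0, 0.5, 150, 3
    N = rng.poisson(lam_true, n_sites)                       # latent abundances
    y = rng.binomial(N[:, None], p_true, (n_sites, n_visits))  # repeated counts

    N_max = y.max() + 50                 # truncation point for the sum over N
    Ns = np.arange(N_max + 1)

    def negloglik(theta):
        lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
        # Marginal site likelihood: sum over candidate N, product over visits
        site_lik = np.array([
            np.sum(poisson.pmf(Ns, lam) *
                   np.prod(binom.pmf(yi[:, None], Ns, p), axis=0))
            for yi in y])
        return -np.log(site_lik).sum()

    fit = minimize(negloglik, x0=[np.log(2.0), 0.0], method="Nelder-Mead")
    print("lambda-hat:", np.exp(fit.x[0]).round(2),
          "p-hat:", (1 / (1 + np.exp(-fit.x[1]))).round(2))
    ```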

  13. A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing

    NASA Astrophysics Data System (ADS)

    Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.

    2018-05-01

    Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.
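
    The paper's central closure property, that a linear mixture of GMM-distributed endmembers is itself a GMM, can be written out directly: the mixed-pixel components enumerate all combinations of endmember components, with products of weights, abundance-weighted means, and squared-abundance-weighted covariances. The two-endmember, two-component parameters below are invented for illustration.

    ```python
    # Sketch: mixed-pixel GMM from endmember GMMs under the linear mixing model.
    import itertools
    import numpy as np

    # Two endmembers in 3 bands, each a 2-component GMM: (weight, mean, cov)
    gmm1 = [(0.6, np.array([.2, .4, .6]), 0.01 * np.eye(3)),
            (0.4, np.array([.3, .5, .7]), 0.02 * np.eye(3))]
    gmm2 = [(0.5, np.array([.8, .1, .2]), 0.01 * np.eye(3)),
            (0.5, np.array([.7, .2, .3]), 0.03 * np.eye(3))]
    a = np.array([0.7, 0.3])                        # abundances, sum to 1

    mixed = []
    for (w1, m1, c1), (w2, m2, c2) in itertools.product(gmm1, gmm2):
        mixed.append((w1 * w2,                      # component weight
                      a[0] * m1 + a[1] * m2,        # mixed mean
                      a[0]**2 * c1 + a[1]**2 * c2)) # mixed covariance
    print(len(mixed), "components; weights:", [round(w, 2) for w, _, _ in mixed])
    ```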

  14. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.

  15. Baseline Correction of Diffuse Reflection Near-Infrared Spectra Using Searching Region Standard Normal Variate (SRSNV).

    PubMed

    Genkawa, Takuma; Shinzawa, Hideyuki; Kato, Hideaki; Ishikawa, Daitaro; Murayama, Kodai; Komiyama, Makoto; Ozaki, Yukihiro

    2015-12-01

    An alternative baseline correction method for diffuse reflection near-infrared (NIR) spectra, searching region standard normal variate (SRSNV), is proposed. Standard normal variate (SNV) is an effective pretreatment method for baseline correction of diffuse reflection NIR spectra of powder and granular samples; however, its baseline correction performance depends on the NIR region used for the SNV calculation. To search for an optimal NIR region for baseline correction using SNV, SRSNV employs moving window partial least squares regression (MWPLSR), and an optimal NIR region is identified based on the root mean square error (RMSE) of cross-validation of partial least squares regression (PLSR) models with the first latent variable (LV). The performance of SRSNV was evaluated using diffuse reflection NIR spectra of mixture samples consisting of wheat flour and granular glucose (0-100% glucose at 5% intervals). From the obtained NIR spectra of the mixtures in the 10,000-4000 cm(-1) region at 4 cm(-1) intervals (1501 spectral channels), a series of spectral windows consisting of 80 spectral channels was constructed, and SNV spectra were calculated for each spectral window. Using these SNV spectra, a series of PLSR models with the first LV for glucose concentration was built. A plot of RMSE versus spectral window position obtained using the PLSR models revealed that the 8680-8364 cm(-1) region was optimal for baseline correction using SNV. In the SNV spectra calculated using the 8680-8364 cm(-1) region (SRSNV spectra), a remarkable relative intensity change between a band due to wheat flour at 8500 cm(-1) and one due to glucose at 8364 cm(-1) was observed, owing to successful baseline correction by SNV. A PLSR model with the first LV based on the SRSNV spectra yielded a determination coefficient (R2) of 0.999 and an RMSE of 0.70%, while a PLSR model with three LVs based on SNV spectra calculated over the full spectral region gave an R2 of 0.995 and an RMSE of 2.29%. Additional evaluation of SRSNV was carried out using diffuse reflection NIR spectra of marzipan and corn samples, and PLSR models based on SRSNV spectra showed good prediction results. These results indicate that SRSNV is effective for baseline correction of diffuse reflection NIR spectra and provides regression models with good prediction accuracy.
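
    A hedged sketch of the searching step follows: slide a fixed-width window across the spectrum, apply SNV within the window, fit a one-latent-variable PLS model, and keep the window minimizing cross-validated RMSE. The random stand-in spectra, window width, stride, and fold count are assumptions for illustration, not the study's data or settings.

    ```python
    # Sketch: moving-window SNV + 1-LV PLS search (the SRSNV idea in miniature).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(6)
    X = rng.normal(size=(40, 1501))          # 40 stand-in spectra, 1501 channels
    y = rng.uniform(0, 100, 40)              # stand-in reference concentrations (%)

    def snv(x):                              # standard normal variate, row-wise
        return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

    window, best = 80, (np.inf, 0)
    for start in range(0, X.shape[1] - window + 1, 20):
        Xs = snv(X[:, start:start + window])
        pred = cross_val_predict(PLSRegression(n_components=1), Xs, y, cv=7)
        rmse = np.sqrt(np.mean((pred.ravel() - y) ** 2))
        if rmse < best[0]:
            best = (rmse, start)
    print("best window starts at channel", best[1], "RMSECV =", round(best[0], 2))
    ```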

  16. A mixture of extracts from Peruvian plants (black maca and yacon) improves sperm count and reduced glycemia in mice with streptozotocin-induced diabetes.

    PubMed

    Gonzales, Gustavo F; Gonzales-Castañeda, Cynthia; Gasco, Manuel

    2013-09-01

    We investigated the effect of two extracts from Peruvian plants, given alone or in a mixture, on sperm count and glycemia in streptozotocin-diabetic mice. Normal or diabetic mice were divided into groups receiving vehicle, black maca (Lepidium meyenii), yacon (Smallanthus sonchifolius), or one of three black maca/yacon extract mixtures (90/10, 50/50 and 10/90%). Normal or diabetic mice were treated for 7 d with each extract, mixture or vehicle. Glycemia, daily sperm production (DSP), and epididymal and vas deferens sperm counts in mice, and polyphenol content and antioxidant activity in each extract, were assessed. Black maca (BM), yacon and the mixtures of extracts reduced glucose levels in diabetic mice. Non-diabetic mice treated with BM and yacon showed higher DSP than those treated with vehicle (p < 0.05). Diabetic mice treated with BM, yacon and the maca/yacon mixture showed increased DSP and sperm counts in the vas deferens and epididymis with respect to non-diabetic and diabetic mice treated with vehicle (p < 0.05). Yacon had 3.05 times higher polyphenol content than maca, and this was associated with higher antioxidant activity. The combination of the two extracts improved glycemic levels and male reproductive function in diabetic mice. Streptozotocin increased liver weight 1.43-fold, an effect that was reversed by the assessed plant extracts. In summary, streptozotocin-induced diabetes resulted in reduced sperm counts and liver damage; these effects could be reduced with BM, yacon and the BM+yacon mixture.

  17. Prevention of propofol injection pain in children: a comparison of pretreatment with tramadol and propofol-lidocaine mixture.

    PubMed

    Borazan, Hale; Sahin, Osman; Kececioglu, Ahmet; Uluer, M Selcuk; Et, Tayfun; Otelcioglu, Seref

    2012-01-01

    Pain on propofol injection is a common and difficult-to-eliminate problem in children. In this study, we aimed to compare the efficacy of pretreatment with tramadol 1 mg.kg(-1) and a propofol-lidocaine 20 mg mixture for prevention of propofol-induced pain in children. One hundred and twenty ASA I-II patients undergoing orthopedic and otolaryngological surgery were included in this study and were divided into three groups using a random number table. Group C (n=39) received normal saline placebo and Group T (n=40) received 1 mg.kg(-1) tramadol 60 s before propofol (180 mg 1% propofol with 2 ml normal saline), whereas Group L (n=40) received normal saline placebo before a propofol-lidocaine mixture (180 mg 1% propofol with 2 ml 1% lidocaine). One patient in Group C dropped out of the study because of difficulty in inserting an iv cannula; thus, one hundred and nineteen patients were analyzed. After the calculated dose of propofol was given, a blinded observer assessed pain with a four-point behavioral scale. There were no significant differences in patient characteristics or intraoperative variables (p>0.05) except intraoperative fentanyl consumption and analgesic requirement one hour after surgery (p<0.05). Both tramadol 1 mg.kg(-1) and the lidocaine 20 mg mixture significantly reduced propofol pain compared with the control group. Moderate and severe pain were more frequent in the control group (p<0.05). The incidence of overall pain was 79.4% in the control group, 35% in the tramadol group, and 25% in the lidocaine group (p<0.001). Pretreatment with tramadol 60 s before propofol injection and the propofol-lidocaine mixture both significantly reduced propofol injection pain compared with placebo in children.

  18. Concentration addition and independent action model: Which is better in predicting the toxicity for metal mixtures on zebrafish larvae.

    PubMed

    Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin

    2018-01-01

    The joint toxicity of chemical mixtures has emerged as a popular topic, particularly regarding the additive and potentially synergistic actions of environmental mixtures. We investigated the 24-h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and the 96-h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models, which make different assumptions about the mode of toxic action in toxicodynamic processes, based on single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the mode of toxic action may depend on the combinations and concentrations of the tested metal mixtures. Statistical analysis of antagonistic or synergistic interactions indicated synergistic interactions for the Cu-Cd and Cu-Pb mixtures, no interactions for the Cd-Pb mixtures, and slight antagonistic interactions for the Cu-Zn mixtures. These results illustrate that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Concentration Addition, Independent Action and Generalized Concentration Addition Models for Mixture Effect Prediction of Sex Hormone Synthesis In Vitro

    PubMed Central

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie

    2013-01-01

    Humans are concomitantly exposed to numerous chemicals, and an infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment, the mathematical prediction of mixture effects using knowledge of single chemicals is therefore desirable. We investigated the pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured the effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single-chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency-adjusted mixture containing five pesticides. Predictions of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for the effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose-response curve. Regarding effects on progesterone and estradiol, some chemicals had stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that no single chemical was responsible for the mixture effects. In conclusion, the GCA model seemed superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency-adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906
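
    Assuming unit Hill slopes, the GCA prediction admits a closed form (following Howard and Webster's formulation): effect = Σ(α_i c_i/K_i) / (1 + Σ(c_i/K_i)), which, unlike CA, remains defined when a partial agonist cannot reach the mixture's effect level. The parameter values below are invented for illustration.

    ```python
    # Hedged sketch of generalized concentration addition (GCA), unit Hill slopes.
    import numpy as np

    alpha = np.array([1.0, 0.4])        # maximal effects: full and partial agonist
    K = np.array([2.0, 5.0])            # EC50s (arbitrary units)

    def gca(conc):
        u = conc / K
        return np.sum(alpha * u) / (1 + np.sum(u))

    print("GCA-predicted effect at c=(1, 10):", round(gca(np.array([1.0, 10.0])), 3))
    ```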

  20. Non-linear models for the detection of impaired cerebral blood flow autoregulation.

    PubMed

    Chacón, Max; Jara, José Luis; Miranda, Rodrigo; Katsogridakis, Emmanuel; Panerai, Ronney B

    2018-01-01

    The ability to discriminate between normal and impaired dynamic cerebral autoregulation (CA), based on measurements of spontaneous fluctuations in arterial blood pressure (BP) and cerebral blood flow (CBF), has considerable clinical relevance. We studied 45 normal subjects at rest and under hypercapnia induced by breathing a mixture of carbon dioxide and air. Non-linear models with BP as input and CBF velocity (CBFV) as output were implemented with support vector machines (SVM), using separate recordings for learning and validation. Dynamic SVM implementations used either moving-average or autoregressive structures. The efficiency of dynamic CA was estimated from the model-derived CBFV response to a step change in BP as an autoregulation index, for both linear and non-linear models. Non-linear models with recurrences (autoregressive) showed the best results, with CA indexes of 5.9 ± 1.5 in normocapnia and 2.5 ± 1.2 in hypercapnia, and an area under the receiver-operator curve of 0.955. The high performance achieved by non-linear SVM models in detecting deterioration of dynamic CA should encourage further assessment of their applicability to clinical conditions where CA might be impaired.
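
    A minimal sketch of the moving-average variant follows: lagged BP samples feed a support vector regressor, and the fitted model's predicted response to a sustained unit pressure step plays the role of input to an autoregulation index. The signals are synthetic, and the lag count and SVR settings are illustrative assumptions, not the study's configuration.

    ```python
    # Hedged sketch: SVR on lagged BP features as a non-linear BP -> CBFV model.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(7)
    bp = rng.normal(size=2000)                       # synthetic BP (a.u.)
    cbfv = np.convolve(bp, [0.5, 0.3, 0.1], mode="same") + 0.05 * rng.normal(size=2000)

    lags = 8                                         # moving-average feature window
    Xtr = np.column_stack([bp[i:len(bp) - lags + i] for i in range(lags)])
    ytr = cbfv[lags:]

    model = SVR(kernel="rbf", C=1.0, epsilon=0.01).fit(Xtr[:1500], ytr[:1500])
    step = model.predict(np.ones((1, lags)))         # response to a sustained BP step
    print("predicted CBFV after a unit BP step:", step.round(3))
    ```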

  1. Stability of faults with heterogeneous friction properties and effective normal stress

    NASA Astrophysics Data System (ADS)

    Luo, Yingdi; Ampuero, Jean-Paul

    2018-05-01

    Abundant geological, seismological and experimental evidence of the heterogeneous structure of natural faults motivates the theoretical and computational study of the mechanical behavior of heterogeneous frictional fault interfaces. Fault zones are composed of a mixture of materials with contrasting strength, which may affect the spatial variability of seismic coupling, the location of high-frequency radiation and the diversity of slip behavior observed in natural faults. To develop a quantitative understanding of the effect of strength heterogeneity on the mechanical behavior of faults, here we investigate a fault model with spatially variable frictional properties and pore pressure. Conceptually, this model may correspond to two rough surfaces in contact along discrete asperities, the space in between being filled by compressed gouge. The asperities have different permeability than the gouge matrix and may be hydraulically sealed, resulting in different pore pressure. We consider faults governed by rate-and-state friction, with mixtures of velocity-weakening and velocity-strengthening materials and contrasts of effective normal stress. We systematically study the diversity of slip behaviors generated by this model through multi-cycle simulations and linear stability analysis. The fault can be either stable without spontaneous slip transients, or unstable with spontaneous rupture. When the fault is unstable, slip can rupture either part or the entire fault. In some cases the fault alternates between these behaviors throughout multiple cycles. We determine how the fault behavior is controlled by the proportion of velocity-weakening and velocity-strengthening materials, their relative strength and other frictional properties. We also develop, through heuristic approximations, closed-form equations to predict the stability of slip on heterogeneous faults. Our study shows that a fault model with heterogeneous materials and pore pressure contrasts is a viable framework to reproduce the full spectrum of fault behaviors observed in natural faults: from fast earthquakes, to slow transients, to stable sliding. In particular, this model constitutes a building block for models of episodic tremor and slow slip events.

  2. Detecting Mixtures from Structural Model Differences Using Latent Variable Mixture Modeling: A Comparison of Relative Model Fit Statistics

    ERIC Educational Resources Information Center

    Henson, James M.; Reise, Steven P.; Kim, Kevin H.

    2007-01-01

    The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) × 3 (exogenous latent mean difference) × 3 (endogenous latent mean difference) × 3 (correlation between factors) × 3 (mixture proportions) factorial design. In addition, the efficacy of several…

  3. Wavelength modulation diode laser absorption spectroscopy for high-pressure gas sensing

    NASA Astrophysics Data System (ADS)

    Sun, K.; Chao, X.; Sur, R.; Jeffries, J. B.; Hanson, R. K.

    2013-03-01

    A general model for 1f-normalized wavelength modulation absorption spectroscopy with nf detection (i.e., WMS-nf) is presented that considers the performance of injection-current-tuned diode lasers and the reflective interference produced by other optical components on the line-of-sight (LOS) transmission intensity. This model explores the optimization of sensitive detection of optical absorption by species with structured spectra at elevated pressures. Predictions have been validated by comparison with measurements of the 1f-normalized WMS-nf (for n = 2-6) lineshape of the R(11) transition in the first overtone band of CO near 2.3 μm at four different pressures ranging from 5 to 20 atm, all at room temperature. The CO mole fractions measured by 1f-normalized WMS-2f, 3f, and 4f techniques agree with calibrated mixtures within 2.0%. At conditions where absorption features are significantly broadened and large modulation depths are required, uncertainties in the WMS background signals due to reflective interference in the optical path can produce significant error in gas mole fraction measurements by 1f-normalized WMS-2f. However, such potential errors can be greatly reduced by using the higher harmonics, i.e., 1f-normalized WMS-nf with n > 2. In addition, less interference from pressure-broadened neighboring transitions has been observed for WMS with higher harmonics than for WMS-2f.

  4. Spatial resolution of the electrical conductance of ionic fluids using a Green-Kubo method.

    PubMed

    Jones, R E; Ward, D K; Templeton, J A

    2014-11-14

    We present a Green-Kubo method to spatially resolve transport coefficients in compositionally heterogeneous mixtures. We develop the underlying theory based on well-known results from mixture theory, Irving-Kirkwood field estimation, and linear response theory. Then, using standard molecular dynamics techniques, we apply the methodology to representative systems. With a homogeneous salt water system, where the expectation of the distribution of conductivity is clear, we demonstrate the sensitivities of the method to system size, and other physical and algorithmic parameters. Then we present a simple model of an electrochemical double layer where we explore the resolution limit of the method. In this system, we observe significant anisotropy in the wall-normal vs. transverse ionic conductances, as well as near wall effects. Finally, we discuss extensions and applications to more realistic systems such as batteries where detailed understanding of the transport properties in the vicinity of the electrodes is of technological importance.

  5. Table and charts of equilibrium normal-shock properties for hydrogen-helium mixtures with velocities to 70 km/sec. Volume 1: 0.95 H2-0.05 He (by volume)

    NASA Technical Reports Server (NTRS)

    Miller, C. G., III; Wilder, S. E.

    1976-01-01

    Equilibrium thermodynamic and flow properties are presented in tabulated and graphical form for moving, standing, and reflected normal shock waves into hydrogen-helium mixtures representative of postulated outer planet atmospheres. These results are presented in four volumes; the volumetric compositions of the mixtures are 0.95H2-0.05He in Volume 1, 0.90H2-0.10He in Volume 2, 0.85H2-0.15He in Volume 3, and 0.75H2-0.25He in Volume 4. Properties include pressure, temperature, density, enthalpy, speed of sound, entropy, molecular-weight ratio, isentropic exponent, velocity, and species mole fractions. Incident (moving) shock velocities are varied from 4 to 70 km/sec for a range of initial pressures of 5 N/sq m to 100 kN/sq m. Results are applicable to shock-tube flows and for determining flow conditions behind the normal portion of the bow shock about a blunt body at high velocities in postulated outer planet atmospheres. The document is a revised version of the original edition of NASA SP-3085 published in 1974.
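
    The tabulated results are equilibrium, real-gas solutions and cannot be reproduced this simply, but the frozen (perfect-gas) Rankine-Hugoniot jump conditions for the Volume 1 mixture show the skeleton of the calculation, with the mixture's ratio of specific heats built from mole-fraction-weighted molar heat capacities. A sketch under those stated assumptions:

    ```python
    # Frozen-gas normal-shock jump conditions for a 0.95 H2 / 0.05 He mixture.
    import numpy as np

    R = 8.314                                 # J/(mol K)
    x = np.array([0.95, 0.05])                # mole fractions: H2, He
    cp = np.array([3.5, 2.5]) * R             # molar cp: diatomic H2, monatomic He
    cv = np.array([2.5, 1.5]) * R
    g = (x @ cp) / (x @ cv)                   # mixture ratio of specific heats

    def normal_shock(M1, g):
        p_ratio = 1 + 2 * g / (g + 1) * (M1**2 - 1)
        rho_ratio = (g + 1) * M1**2 / ((g - 1) * M1**2 + 2)
        T_ratio = p_ratio / rho_ratio
        return p_ratio, rho_ratio, T_ratio

    print("gamma_mix =", round(g, 4))
    print("M1=5 -> p2/p1, rho2/rho1, T2/T1 =",
          tuple(round(v, 3) for v in normal_shock(5.0, g)))
    ```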

  6. Empirical Reference Distributions for Networks of Different Size

    PubMed Central

    Smith, Anna; Calder, Catherine A.; Browning, Christopher R.

    2016-01-01

    Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
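
    The simplest instance of the proposed adjustment, a reference distribution of density-matched Bernoulli random graphs, takes only a few lines; the statistic, replicate count, and example graph below are illustrative choices, not the authors' pipeline.

    ```python
    # Sketch: z-scoring a network statistic against a Bernoulli reference.
    import networkx as nx
    import numpy as np

    G = nx.karate_club_graph()
    n, p = G.number_of_nodes(), nx.density(G)
    obs = nx.transitivity(G)

    # Reference distribution: Erdos-Renyi graphs with matched size and density
    ref = [nx.transitivity(nx.gnp_random_graph(n, p, seed=s)) for s in range(500)]
    z = (obs - np.mean(ref)) / np.std(ref)
    print("observed transitivity:", round(obs, 3),
          "z vs. Bernoulli reference:", round(z, 2))
    ```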

  7. Predicting herbicide mixture effects on multiple algal species using mixture toxicity models.

    PubMed

    Nagai, Takashi

    2017-10-01

    The validity of the application of mixture toxicity models, concentration addition and independent action, to a species sensitivity distribution (SSD) for calculation of a multisubstance potentially affected fraction was examined in laboratory experiments. Toxicity assays of herbicide mixtures using 5 species of periphytic algae were conducted. Two mixture experiments were designed: a mixture of 5 herbicides with similar modes of action and a mixture of 5 herbicides with dissimilar modes of action, corresponding to the assumptions of the concentration addition and independent action models, respectively. Experimentally obtained mixture effects on 5 algal species were converted to the fraction of affected (>50% effect on growth rate) species. The predictive ability of the concentration addition and independent action models with direct application to SSD depended on the mode of action of chemicals. That is, prediction was better for the concentration addition model than the independent action model for the mixture of herbicides with similar modes of action. In contrast, prediction was better for the independent action model than the concentration addition model for the mixture of herbicides with dissimilar modes of action. Thus, the concentration addition and independent action models could be applied to SSD in the same manner as for a single-species effect. The present study to validate the application of the concentration addition and independent action models to SSD supports the usefulness of the multisubstance potentially affected fraction as the index of ecological risk. Environ Toxicol Chem 2017;36:2624-2630. © 2017 SETAC.
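
    For reference, the two classical models compared above can be written down compactly for a single species. The sketch below assumes two-parameter log-logistic dose-response curves; the EC50s, slopes, and mixture concentrations are illustrative placeholders, not values from the study, and the SSD step is omitted.

      import numpy as np
      from scipy.optimize import brentq

      ec50 = np.array([2.0, 5.0, 1.0])     # hypothetical EC50s of 3 herbicides
      slope = np.array([1.5, 2.0, 1.2])    # hypothetical Hill slopes
      conc = np.array([0.5, 1.0, 0.2])     # mixture concentrations

      def effect(c, ec50, slope):
          # Fractional effect (0..1) of a single chemical at concentration c.
          return 1.0 / (1.0 + (ec50 / c) ** slope)

      def effect_conc(e, ec50, slope):
          # Inverse dose-response: concentration producing effect e.
          return ec50 * (e / (1.0 - e)) ** (1.0 / slope)

      # Independent action: effects combine probabilistically.
      e_ia = 1.0 - np.prod(1.0 - effect(conc, ec50, slope))

      # Concentration addition: solve sum_i c_i / EC(e)_i = 1 for the mixture effect e.
      e_ca = brentq(lambda e: np.sum(conc / effect_conc(e, ec50, slope)) - 1.0,
                    1e-9, 1.0 - 1e-9)
      print(e_ia, e_ca)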

  8. Characterization and Separation of Cancer Cells with a Wicking Fiber Device.

    PubMed

    Tabbaa, Suzanne M; Sharp, Julia L; Burg, Karen J L

    2017-12-01

    Current cancer diagnostic methods lack the ability to quickly, simply, efficiently, and inexpensively screen cancer cells from a mixed population of cancer and normal cells. Methods based on biomarkers are unreliable due to complexity of cancer cells, plasticity of markers, and lack of common tumorigenic markers. Diagnostics are time intensive, require multiple tests, and provide limited information. In this study, we developed a novel wicking fiber device that separates cancer and normal cell types. To the best of our knowledge, no previous work has used vertical wicking of cells through fibers to identify and isolate cancer cells. The device separated mouse mammary tumor cells from a cellular mixture containing normal mouse mammary cells. Further investigation showed the device separated and isolated human cancer cells from a heterogeneous mixture of normal and cancerous human cells. We report a simple, inexpensive, and rapid technique that has potential to identify and isolate cancer cells from large volumes of liquid samples that can be translated to on-site clinic diagnosis.

  9. Internal combustion engine controls for reduced exhausts contaminants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, D.R. Jr.

    1974-06-04

    An electrochemical control system for achieving optimum efficiency in the catalytic conversion of hydrocarbon and carbon monoxide emissions from internal combustion engines is described. The system automatically maintains catalyst temperature at a point for maximum pollutant conversion by adjusting ignition timing and fuel/air ratio during warm-up and subsequent operation. Ignition timing is retarded during engine warm-up to bring the catalytic converter to an efficient operating temperature within a minimum period of time. After the converter reaches a predetermined minimum temperature, the spark is advanced to within its normal operating range. A needle-valve adjustment during warm-up is employed to enrich the fuel/air mixture by approximately 10 percent. Following warm-up and attainment of a predetermined catalyst temperature, the needle valve is moved automatically to its normal position (e.g., a fuel/air ratio of 16:1). Although the normal lean mixture causes increased amounts of nitrogen oxide emissions, present NOx converters appear capable of handling the increased emissions under normal operating conditions.

  10. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios, as measured by the log-likelihood-ratio cost (Cllr), in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
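
    A hedged sketch of the core idea: model the between-source distribution with a Gaussian mixture and score evidence with a likelihood ratio. The synthetic background data and the simplified same-source model below are illustrative only and flatten the paper's full two-level formulation.

      import numpy as np
      from sklearn.mixture import GaussianMixture
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(1)
      # Hypothetical background database: mean feature vectors of many sources.
      background = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(4, 1, (200, 3))])

      gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
      gmm.fit(background)  # between-source distribution

      within_cov = 0.1 * np.eye(3)  # assumed constant within-source covariance

      def log_lr(trace, reference):
          # log LR for 'same source' vs 'different sources' (simplified: evaluate
          # the trace around the reference mean vs under the background GMM).
          same = multivariate_normal.logpdf(trace, mean=reference, cov=within_cov)
          different = gmm.score_samples(trace[None, :])[0]
          return same - different

      print(log_lr(trace=np.array([0.1, -0.2, 0.05]), reference=np.zeros(3)))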

  11. Non-linear models for the detection of impaired cerebral blood flow autoregulation

    PubMed Central

    Miranda, Rodrigo; Katsogridakis, Emmanuel

    2018-01-01

    The ability to discriminate between normal and impaired dynamic cerebral autoregulation (CA), based on measurements of spontaneous fluctuations in arterial blood pressure (BP) and cerebral blood flow (CBF), has considerable clinical relevance. We studied 45 normal subjects at rest and under hypercapnia induced by breathing a mixture of carbon dioxide and air. Non-linear models with BP as input and CBF velocity (CBFV) as output were implemented with support vector machines (SVM) using separate recordings for learning and validation. Dynamic SVM implementations used either moving average or autoregressive structures. The efficiency of dynamic CA was estimated as an autoregulation index from the model-derived CBFV response to a step change in BP, for both linear and non-linear models. Non-linear models with recurrences (autoregressive) showed the best results, with CA indexes of 5.9 ± 1.5 in normocapnia and 2.5 ± 1.2 in hypercapnia, with an area under the receiver operating characteristic curve of 0.955. The high performance achieved by non-linear SVM models in detecting deterioration of dynamic CA should encourage further assessment of its applicability to clinical conditions where CA might be impaired. PMID:29381724

  12. Evaluation of a speaker identification system with and without fusion using three databases in the presence of noise and handset effects

    NASA Astrophysics Data System (ADS)

    S. Al-Kaltakchi, Musab T.; Woo, Wai L.; Dlay, Satnam; Chambers, Jonathon A.

    2017-12-01

    In this study, a speaker identification system is considered, consisting of a feature extraction stage which utilizes both power normalized cepstral coefficients (PNCCs) and Mel frequency cepstral coefficients (MFCCs). Normalization is applied by employing cepstral mean and variance normalization (CMVN) and feature warping (FW), together with acoustic modeling using a Gaussian mixture model-universal background model (GMM-UBM). The main contributions are comprehensive evaluations of the effect of both additive white Gaussian noise (AWGN) and non-stationary noise (NSN) (with and without a G.712 type handset) upon identification performance. In particular, three NSN types with varying signal to noise ratios (SNRs) were tested, corresponding to street traffic, a bus interior, and a crowded talking environment. The performance evaluation also considered the effect of late fusion techniques based on score fusion, namely mean, maximum, and linear weighted sum fusion. The databases employed were TIMIT, SITW, and NIST 2008; 120 speakers were selected from each database to yield 3600 speech utterances. The study recommends mean fusion, which yields the best overall performance in terms of speaker identification accuracy (SIA) with noisy speech, whereas linear weighted sum fusion is best overall for original database recordings.
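
    The three late-fusion rules compared in the study are simple to state. The sketch below applies them to per-speaker scores from two hypothetical subsystems; the score values and the weight are illustrative, not fitted values from the paper.

      import numpy as np

      scores_mfcc = np.array([0.62, 0.15, 0.23])  # hypothetical per-speaker scores
      scores_pncc = np.array([0.55, 0.30, 0.15])

      mean_fused = (scores_mfcc + scores_pncc) / 2.0
      max_fused = np.maximum(scores_mfcc, scores_pncc)
      w = 0.7  # assumed weight on the first subsystem
      weighted_fused = w * scores_mfcc + (1.0 - w) * scores_pncc

      for name, fused in [("mean", mean_fused), ("max", max_fused),
                          ("weighted", weighted_fused)]:
          print(name, "-> speaker", int(np.argmax(fused)))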

  13. 3-D Numerical Simulation for Gas-Liquid Two-Phase Flow in Aeration Tank

    NASA Astrophysics Data System (ADS)

    Xue, R.; Tian, R.; Yan, S. Y.; Li, S.

    In activated sludge treatment, the oxygen supply and keeping the activated sludge in suspension are the primary factors that allow the biochemical process to proceed normally. Both are controlled by aeration, so aeration is crucial. This paper focuses on aeration, using CFD software to simulate the flow field of an aeration tank designed by the sludge load method. The main design dimensions of the aeration tank are: total volume 20 000 m3; corridor width 8 m; total corridor length 139 m; number of corridors 3; length of a single corridor 48 m; effective depth 4.5 m; additional depth 0.5 m. According to similarity theory, a geometrical model was set up at a 10:1 scale. The liquid inlet is submerged to prevent the liquid from flowing out directly. The grid is generated by dividing the whole computational domain into two parts: the bottom part, which contains the gas pipe and gas exit holes, is meshed with tetrahedra, and the upper part, the main flow area, with hexahedra. In the boundary conditions, gas is defined as the primary phase and liquid as the secondary phase. Using the mixture model, the two-phase flow field of the aeration tank is simulated by solving the continuity equation for the mixture, the momentum equation for the mixture, the volume fraction equation for the secondary phase, and the relative velocity formula, at gas velocities of 10 m/s, 20 m/s, and 30 m/s. The resulting figure shows the contour of velocity magnitude for the mixture phase at a gas velocity of 20 m/s. The simulated tendency agrees with the actual operation of the aeration tank, so it is feasible to simulate the flow field of an aeration tank with the mixture model in Fluent. Based on the simulation results, a better liquid or gas velocity (quantity of inlet air) can be chosen at lower cost, and the performance of the aeration tank can be forecast, which is helpful for design and operation.

  14. Generalized Pseudo-Reaction Zone Model for Non-Ideal Explosives

    NASA Astrophysics Data System (ADS)

    Wescott, B. L.

    2007-12-01

    The pseudo-reaction zone model was proposed to improve engineering scale simulations with high explosives that have a slow reaction component. In this work an extension of the pseudo-reaction zone model is developed for non-ideal explosives that propagate well below the steady-planar Chapman-Jouguet velocity. A programmed burn method utilizing Detonation Shock Dynamics (DSD) and a detonation velocity dependent pseudo-reaction rate has been developed for non-ideal explosives and applied to the explosive mixture of ammonium nitrate and fuel oil (ANFO). The pseudo-reaction rate is calibrated to the experimentally obtained normal detonation velocity-shock curvature relation. Cylinder test simulations predict the proper expansion to within 1% even though significant reaction occurs as the cylinder expands.

  15. Stability of the accelerated expansion in nonlinear electrodynamics

    NASA Astrophysics Data System (ADS)

    Sharif, M.; Mumtaz, Saadia

    2017-02-01

    This paper is devoted to the phase space analysis of an isotropic and homogeneous model of the universe by taking a noninteracting mixture of the electromagnetic and viscous radiating fluids whose viscous pressure satisfies a nonlinear version of the Israel-Stewart transport equation. We establish an autonomous system of equations by introducing normalized dimensionless variables. In order to analyze the stability of the system, we find corresponding critical points for different values of the parameters. We also evaluate the power-law scale factor whose behavior indicates different phases of the universe in this model. It is concluded that the bulk viscosity as well as electromagnetic field enhances the stability of the accelerated expansion of the isotropic and homogeneous model of the universe.

  16. Effect of the Intermittent Hypoxia on the Bone Tissue State After Microgravitation Modeling

    NASA Astrophysics Data System (ADS)

    Berezovskiy, V. A.; Litovka, I. G.; Chaka, H. G.; Magomedov, S.; Mehed, N. V.

    The authors studied the influence of low PO2 under normal atmospheric pressure on Ca and P metabolism, bone remodeling markers, and the biomechanical properties of the femur in rats with their hind limbs unloaded. A hypoxic gas mixture (HGM) was given in intermittent regimes A and B for 8 hours/day during 28 days. It was shown that regime A slows down the development of osteopenia and may be used in combination with other rehabilitation procedures for preventing unloading osteopenia.

  17. Generalized Pseudo-Reaction Zone Model for Non-Ideal Explosives

    NASA Astrophysics Data System (ADS)

    Wescott, Bradley

    2007-06-01

    The pseudo-reaction zone model was proposed to improve engineering scale simulations when using Detonation Shock Dynamics with high explosives that have a slow reaction component. In this work an extension of the pseudo-reaction zone model is developed for non-ideal explosives that propagate well below their steady-planar Chapman-Jouguet velocity. A programmed burn method utilizing Detonation Shock Dynamics and a detonation velocity dependent pseudo-reaction rate has been developed for non-ideal explosives and applied to the explosive mixture of ammonium nitrate and fuel oil (ANFO). The pseudo-reaction rate is calibrated to the experimentally obtained normal detonation velocity-shock curvature relation. The generalized pseudo-reaction zone model proposed here predicts the cylinder expansion to within 1% by accounting for the slow reaction in ANFO.

  18. Application of a fuzzy neural network model in predicting polycyclic aromatic hydrocarbon-mediated perturbations of the Cyp1b1 transcriptional regulatory network in mouse skin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larkin, Andrew; Department of Statistics, Oregon State University; Superfund Research Center, Oregon State University

    2013-03-01

    Polycyclic aromatic hydrocarbons (PAHs) are present in the environment as complex mixtures with components that have diverse carcinogenic potencies and mostly unknown interactive effects. Non-additive PAH interactions have been observed in regulation of cytochrome P450 (CYP) gene expression in the CYP1 family. To better understand and predict biological effects of complex mixtures, such as environmental PAHs, an 11 gene input-1 gene output fuzzy neural network (FNN) was developed for predicting PAH-mediated perturbations of dermal Cyp1b1 transcription in mice. Input values were generalized using fuzzy logic into low, medium, and high fuzzy subsets, and sorted using k-means clustering to create Mamdani logic functions for predicting Cyp1b1 mRNA expression. Model testing was performed with data from microarray analysis of skin samples from FVB/N mice treated with toluene (vehicle control), dibenzo[def,p]chrysene (DBC), benzo[a]pyrene (BaP), or 1 of 3 combinations of diesel particulate extract (DPE), coal tar extract (CTE) and cigarette smoke condensate (CSC), using leave-one-out cross-validation. Predictions were within 1 log2 fold change unit of microarray data, with the exception of the DBC treatment group, where the unexpected down-regulation of Cyp1b1 expression was predicted but did not reach statistical significance on the microarrays. Adding CTE to DPE was predicted to increase Cyp1b1 expression, whereas adding CSC to CTE and DPE was predicted to have no effect, in agreement with microarray results. The aryl hydrocarbon receptor repressor (Ahrr) was determined to be the most significant input variable for model predictions using back-propagation and normalization of FNN weights. Highlights: a model to predict PAH mixture-mediated changes in Cyp1b1 expression was tested; quantitative predictions agreed with microarrays for Cyp1b1 induction; an unexpected difference in expression between DBC and other treatments was predicted; model predictions for combined PAH mixtures agreed with microarrays; predictions were highly dependent on aryl hydrocarbon receptor repressor expression.

  19. Amorphization and Frictional Processes in Smectite-Quartz Gouge Mixtures Sheared from Sub-seismic to Seismic Slip Rates

    NASA Astrophysics Data System (ADS)

    Aretusini, S.; Mittempergher, S.; Spagnuolo, E.; Di Toro, G.; Gualtieri, A.; Plümper, O.

    2015-12-01

    Slipping zones in shallow sections of megathrusts and large landslides are often made of smectite and quartz gouge mixtures. Experiments aimed at investigating the frictional processes operating at high slip rates (>1 m/s) may unravel the mechanics of these natural phenomena. Here we present a new dataset obtained with two rotary shear apparatus (ROSA, Padua University; SHIVA, INGV-Rome). Experiments were performed at room humidity and temperature on four mixtures of smectite (Ca-Montmorillonite) and quartz with 68, 50, 25, and 0 wt% of smectite. The gouges were slid for 3 m at a normal stress of 5 MPa and slip rates V from 300 µm/s to 1.5 m/s. Temperature during the experiments was monitored with four thermocouples and modeled with COMSOL Multiphysics. In smectite-rich mixtures, the friction coefficient µ evolved with slip according to three slip rate regimes: in regime 1 (V<0.1 m/s) initial slip-weakening was followed by slip-strengthening; in regime 2 (0.1-0.3 m/s) µ had strong slip-weakening behavior. Instead, in quartz-rich mixtures the gouge had monotonic slip-weakening behavior, independent of V. Temperature modelling showed that the fraction of work rate converted into heat decreased with increasing smectite content and slip rate. Quantitative X-ray powder diffraction (Rietveld method) indicates that the production of amorphous material from smectite breakdown increased with frictional work but was independent of work rate. Scanning Electron Microscopy investigation revealed strain localization and the presence of dehydrated clays for V≥0.3 m/s; for V<0.3 m/s, strain was distributed and the gouge layer pervasively foliated. In conclusion, amorphization of the sheared gouges was not responsible for the measured frictional weakening. Instead, slip-weakening was concomitant with strain localization and possible vaporization of water adsorbed on smectite grain surfaces.

  20. New views of granular mass flows

    USGS Publications Warehouse

    Iverson, R.M.; Vallance, J.W.

    2001-01-01

    Concentrated grain-fluid mixtures in rock avalanches, debris flows, and pyroclastic flows do not behave as simple materials with fixed rheologies. Instead, rheology evolves as mixture agitation, grain concentration, and fluid-pressure change during flow initiation, transit, and deposition. Throughout a flow, however, normal forces on planes parallel to the free upper surface approximately balance the weight of the superincumbent mixture, and the Coulomb friction rule describes bulk intergranular shear stresses on such planes. Pore-fluid pressure can temporarily or locally enhance mixture mobility by reducing Coulomb friction and transferring shear stress to the fluid phase. Initial conditions, boundary conditions, and grain comminution and sorting can influence pore-fluid pressures and cause variations in flow dynamics and deposits.

  1. Uncertainty quantification and experimental design based on unsupervised machine learning identification of contaminant sources and groundwater types using hydrogeochemical data

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.

    2017-12-01

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical species. Numerous geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. As a result, these types of model analyses are typically extremely challenging. Here, we demonstrate a new contaminant source identification approach that performs decomposition of the observation mixtures based on Nonnegative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. We also demonstrate how NMFk can be extended to perform uncertainty quantification and experimental design related to real-world site characterization. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios). The NMFk algorithm has been extensively tested on synthetic datasets; NMFk analyses have been actively performed on real-world data collected at the Los Alamos National Laboratory (LANL) groundwater sites related to Chromium and RDX contamination.
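
    A hedged sketch of the blind-source-separation step behind NMFk: factor a matrix of observed geochemical mixtures into nonnegative source signatures and mixing ratios. The synthetic end-members below are illustrative; the real NMFk adds its custom semi-supervised clustering and automatic selection of the number of sources.

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(2)
      true_sources = np.array([[10.0, 0.1, 5.0, 0.2],   # hypothetical end-member
                               [0.5, 8.0, 1.0, 3.0]])   # concentrations (2 sources, 4 species)
      mixing = rng.dirichlet(np.ones(2), size=30)        # unknown mixing ratios per sample
      observed = mixing @ true_sources + rng.normal(0, 0.05, (30, 4)).clip(0)

      model = NMF(n_components=2, init='nndsvda', max_iter=2000, random_state=0)
      W = model.fit_transform(observed)  # estimated mixing ratios (up to scaling)
      H = model.components_              # estimated source signatures
      print(H.round(2))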

  2. Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC

    ERIC Educational Resources Information Center

    Depaoli, Sarah

    2012-01-01

    Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…

  3. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  4. Evaluation on expansive performance of the expansive soil using electrical responses

    NASA Astrophysics Data System (ADS)

    Chu, Ya; Liu, Songyu; Bate, Bate; Xu, Lei

    2018-01-01

    Light structures, such as highways and railroads, built on expansive soils are prone to damage from the swelling of their underlying soil layers. A considerable amount of research has been conducted to characterize the swelling properties of expansive soils. Current swell characterization models, however, are limited by a lack of standardized tests. Electrical methods are non-destructive, and are faster and less expensive than traditional geotechnical methods. Therefore, geo-electrical methods are attractive for defining soil characteristics, including swelling behavior. In this study, comprehensive laboratory experiments were undertaken to measure the free swelling and electrical resistivity of mixtures of commercial kaolinite and bentonite. The electrical conductivity of kaolinite-bentonite mixtures was measured by a self-developed four-electrode soil resistivity box. Increasing the free swelling rate of the kaolinite-bentonite mixtures (soil sample porosity from 0.72 to 1) led to a reduction in electrical resistivity and an increase in conductivity. A unique relationship between free swelling rate and normalized surface conductivity was constructed for expansive soils by eliminating the influences of porosity and the exponent m. Electrical response measurements can therefore be used to characterize the free swelling rate of expansive soils.

  5. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have placed emphasis on fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because it has asymptotic properties which provide remarkable results, and it also shows a consistency characteristic, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines, and Indonesia. The results showed that there is a negative effect between rubber price and stock market price for all selected countries.
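
    The BIC-based choice of the number of components is easy to sketch. The example below uses a frequentist stand-in (EM-fitted Gaussian mixtures) rather than the paper's Bayesian fitting, and synthetic two-regime returns in place of the real price data.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(3)
      returns = np.concatenate([rng.normal(0.001, 0.01, 700),    # calm regime
                                rng.normal(-0.002, 0.04, 300)])  # volatile regime
      X = returns.reshape(-1, 1)

      # Fit k = 1..5 component mixtures and keep the BIC-minimizing k.
      bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
              for k in range(1, 6)}
      best_k = min(bics, key=bics.get)
      print(bics, "-> chosen k =", best_k)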

  6. Accounting for non-independent detection when estimating abundance of organisms with a Bayesian approach

    USGS Publications Warehouse

    Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth

    2011-01-01

    Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, affecting the non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
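
    A short simulation sketch of the paper's central point, under assumed parameter values: counts generated with correlated (beta-binomial) detection are overdispersed relative to the binomial detection assumed by a standard N-mixture model, which is what drives the overestimation of abundance.

      import numpy as np
      from scipy import stats

      N, p, rho = 50, 0.3, 0.4        # abundance, detection probability, correlation
      a = p * (1 - rho) / rho          # beta-binomial parameters giving
      b = (1 - p) * (1 - rho) / rho    # mean p and intra-class correlation rho

      binom_counts = stats.binom.rvs(N, p, size=5000, random_state=0)
      bb_counts = stats.betabinom.rvs(N, a, b, size=5000, random_state=0)

      print("binomial count variance:", binom_counts.var())    # ~ N p (1 - p)
      print("beta-binomial count variance:", bb_counts.var())  # much larger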

  7. A competitive binding model predicts the response of mammalian olfactory receptors to mixtures

    NASA Astrophysics Data System (ADS)

    Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay

    Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system we need methods to predict responses to complex mixtures from single odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions where only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts responses of 15 (of 18 tested) receptors to within 10 - 30 % of the observed values, for mixtures with 2, 3 and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions lead to additional nonlinearities observed in mixture response like suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
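
    A sketch of the competitive binding idea described above: odorants compete for a single receptor site, so the mixture response saturates sublinearly. The EC50s, efficacies, and concentrations are illustrative placeholders for the parameters the authors fit to single-odorant dose-response data.

      import numpy as np

      def mixture_response(conc, ec50, efficacy):
          # Receptor response under competitive binding:
          # r = sum_i e_i (c_i / K_i) / (1 + sum_j c_j / K_j)
          occupancy = conc / ec50
          return np.sum(efficacy * occupancy) / (1.0 + np.sum(occupancy))

      ec50 = np.array([1.0, 10.0])     # hypothetical half-activation concentrations
      efficacy = np.array([1.0, 0.4])  # hypothetical maximal responses
      print(mixture_response(np.array([1.0, 0.0]), ec50, efficacy))   # odorant 1 alone
      print(mixture_response(np.array([1.0, 10.0]), ec50, efficacy))  # mixture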

  8. Excited-state proton transfer dynamics of firefly's chromophore D-luciferin in DMSO-water binary mixture.

    PubMed

    Kuchlyan, Jagannath; Banik, Debasis; Roy, Arpita; Kundu, Niloy; Sarkar, Nilmoni

    2014-12-04

    In this article we have investigated intermolecular excited-state proton transfer (ESPT) of the firefly chromophore D-luciferin in DMSO-water binary mixtures using steady-state and time-resolved fluorescence spectroscopy. The unusual behavior of the DMSO-water binary mixture reported by Bagchi et al. (J. Phys. Chem. B 2010, 114, 12875-12882) was also found using D-luciferin as an intermolecular ESPT probe. In our systematic investigation, the binary mixture gave evidence of its anomalous nature at low mole fractions of DMSO (below XD = 0.4). Upon excitation of the neutral D-luciferin molecule, dual fluorescence emission (from the protonated and deprotonated forms) is observed in the DMSO-water binary mixture. A clear isoemissive point in the time-resolved area normalized emission spectra further indicates two emissive species in the excited state of D-luciferin in the DMSO-water binary mixture. DMSO-water binary mixtures of different compositions are fascinating hydrogen bonding systems. Accordingly, we observed unusual changes in the fluorescence emission intensity, fluorescence quantum yield, and fluorescence lifetime of the more hydrogen-bonding-sensitive anionic form of D-luciferin at low DMSO content of the DMSO-water binary mixture.

  9. Trajectories of body mass and self-concept in black and white girls: the lingering effects of stigma.

    PubMed

    Mustillo, Sarah A; Hendrix, Kimber L; Schafer, Markus H

    2012-03-01

    As a stigmatizing condition, obesity may lead to the internalization of devalued labels and threats to self-concept. Modified labeling theory suggests that the effects of stigma may outlive direct manifestations of the discredited characteristic itself. This article considers whether obesity's effects on self-concept linger when obese youth enter the normal body mass range. Using longitudinal data from the National Growth and Health Study on 2,206 black and white girls, we estimated a parallel-process growth mixture model of body mass linked to growth models of body image discrepancy and self-esteem. We found that discrepancy was higher and self-esteem lower in formerly obese girls compared to girls always in the normal range and comparable to chronically obese girls. Neither body image discrepancy nor self-esteem rebounded in white girls despite reduction in body mass, suggesting that the effects of stigma linger. Self-esteem, but not discrepancy, did rebound in black girls.

  10. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining consistent maximum-likelihood estimates of the parameters of a mixture of normal distributions. In addition, local maxima of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
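
    The kind of iterative maximum-likelihood procedure this report describes survives today as the EM algorithm. A compact sketch for a two-component univariate normal mixture follows; the data and initial values are illustrative, and this is the modern EM form rather than the report's exact iteration.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(5)
      x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 700)])

      w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
      for _ in range(200):
          # E-step: posterior responsibility of each component for each point.
          dens = w * norm.pdf(x[:, None], mu, sd)
          resp = dens / dens.sum(axis=1, keepdims=True)
          # M-step: weighted updates of proportions, means, and standard deviations.
          nk = resp.sum(axis=0)
          w = nk / len(x)
          mu = (resp * x[:, None]).sum(axis=0) / nk
          sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

      print(w.round(3), mu.round(3), sd.round(3))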

  11. Different applications of isosbestic points, normalized spectra and dual wavelength as powerful tools for resolution of multicomponent mixtures with severely overlapping spectra.

    PubMed

    Mohamed, Ekram H; Lotfy, Hayam M; Hegazy, Maha A; Mowaka, Shereen

    2017-05-25

    Analysis of complex mixtures containing three or more components represents a challenge for analysts. New smart spectrophotometric methods without this limitation have recently evolved. A study of different novel and smart spectrophotometric techniques for the resolution of severely overlapping spectra is presented in this work, utilizing isosbestic points present in different absorption spectra, normalized spectra as a divisor, and dual wavelengths. A quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PCT) and para-aminophenol (PAP) was taken as an example for application of the proposed techniques without any separation steps. The adopted techniques consist of successive and progressive steps manipulating zero-order, ratio, or derivative spectra. The proposed techniques include eight novel and simple methods, namely direct spectrophotometry after applying derivative transformation (DT) via multiplying by a decoding spectrum, spectrum subtraction (SS), advanced absorbance subtraction (AAS), advanced amplitude modulation (AAM), simultaneous derivative ratio (S1DD), advanced ratio difference (ARD), induced ratio difference (IRD) and finally double divisor-ratio difference-dual wavelength (DD-RD-DW) methods. The proposed methods were assessed by analyzing synthetic mixtures of the studied drugs. They were also successfully applied to commercial pharmaceutical formulations without interference from other dosage form additives. The methods were validated according to the ICH guidelines; accuracy, precision, and repeatability were found to be within the acceptable limits. The proposed procedures are accurate, simple, reproducible, and economic. They are also sensitive and selective and could be used for routine analysis of most binary, ternary, and quaternary mixtures and even more complex ones.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Tapan; Das, B. P.; Pai, Ramesh V.

    We present a scenario where a supersolid is induced in a component of a two-species bosonic atom mixture that itself has no long-range interactions. We study a mixture of normal and hard-core bosons with only the former possessing long-range interactions. We consider three cases: the first where the total density is commensurate and the other two where it is incommensurate with the lattice. By suitable choices of the densities of normal and hard-core bosons and the interaction strengths between them, we predict that the charge density wave and the supersolid orders can be induced in the hard-core species as a result of the competing interatomic interactions.

  13. Tables and charts of equilibrium normal shock and shock-tube solutions for helium-hydrogen mixtures with velocities to 70 km/sec

    NASA Technical Reports Server (NTRS)

    Miller, C. G., III; Wilder, S. E.

    1974-01-01

    Equilibrium thermodynamic and flow properties are presented in tabulated and graphical form for moving, standing, and reflected normal shock waves into helium-hydrogen mixtures representative of proposed outer planet atmospheres. The volumetric compositions of these mixtures are 0.35He-0.65H2, 0.20He-0.80H2, and 0.05He-0.95H2. Properties include pressure, temperature, density, enthalpy, speed of sound, entropy, molecular-weight ratio, isentropic exponent, velocity, and species mole fractions. Incident (moving) shock velocities are varied from 4 to 70 km/sec for a range of initial pressure of 5 N/sq m to 100 kN/sq m. The present results are applicable to shock-tube flows and to free-flight conditions for a blunt body at high velocities. A working chart illustrating idealized shock-tube performance with a 0.20He-0.80H2 test gas and heated helium driver gas is also presented.

  14. Assessment of the individual and mixture toxicity of cadmium, copper and oxytetracycline, on the embryo-larval development of the sea urchin Paracentrotus lividus.

    PubMed

    Gharred, Tahar; Jebali, Jamel; Belgacem, Mariem; Mannai, Rabeb; Achour, Sami

    2016-09-01

    Pollution by multiple contaminants such as trace metals and pharmaceuticals has become one of the most important problems in marine coastal areas because of its excessive toxicity to organisms living in these areas. This study aimed to assess the individual and mixture toxicity of Cu, Cd, and oxytetracycline (OTC), which frequently occur in contaminated marine areas, on the embryo-larval development of the sea urchin Paracentrotus lividus. Individual contamination of the spermatozoids for 1 h with increasing concentrations of Cd, Cu, and OTC decreased the fertility rate and increased larval anomalies in the order Cu > Cd > OTC. Moreover, the normal larva frequency and the length of spicules were more sensitive endpoints than the fertilization rate and normal gastrula frequency. The mixture toxicity assessed by multiple experimental designs showed clearly that concentrations of Cd, Cu, and OTC above 338 μg/L, 0.56 μg/L, and 0.83 mg/L, respectively, cause significant larval malformations.

  15. The detection of cancer in living tissue with single-cell precision and the development of a system for targeted drug delivery to cancer

    NASA Astrophysics Data System (ADS)

    Fields, Adam; Pi, Sean; Ramek, Alex; Bernheim, Taylor; Fields, Jessica; Pernodet, Nadine; Rafailovich, Miriam

    2007-03-01

    The development of innovations in the field of cancer diagnostics is imperative to improve the early identification of malignant cells within the human body. Two novel techniques are presented for the detection of cancer cells in living tissue. First, shear modulation force microscopy (SMFM) was employed to measure cell mechanics of normal and cancer cells in separate and mixed tissue cultures. We found that the moduli of normal keratinocytes were twice as high as the moduli of SCC cancerous keratinocytes, and that the cancer cells were unambiguously identifiable from a mixture of both kinds of cells. Second, confocal microscopy and the BIAcore 2000 were used to demonstrate the preferential adhesion of glass micro-beads impregnated with fluorescent dye to the membranes of cancer cells as compared to those of normal cells. In addition to their use as a cancer detection system, these hollow and porous beads present a model system for targeted drug delivery in the treatment of cancer.

  16. Simultaneous calibration of ensemble river flow predictions over an entire range of lead times

    NASA Astrophysics Data System (ADS)

    Hemri, S.; Fundel, F.; Zappa, M.

    2013-10-01

    Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions statistical postprocessing is required. In this work Bayesian model averaging (BMA) is applied to statistically postprocess ensemble runoff raw forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage against univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
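
    A hedged sketch of the univariate BMA step: the postprocessed forecast is a weighted mixture of normals centred on the ensemble members. The weights, common variance, and member forecasts below are illustrative rather than fitted values, and the bias correction and Box-Cox transform are omitted.

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import brentq

      members = np.array([12.0, 14.5, 13.2])  # hypothetical runoff forecasts, m3/s
      weights = np.array([0.5, 0.2, 0.3])     # BMA weights (sum to 1)
      sigma = 1.8                             # common mixture standard deviation

      def bma_cdf(y):
          # Predictive CDF of the BMA normal mixture at value y.
          return np.sum(weights * norm.cdf(y, loc=members, scale=sigma))

      # 90% central prediction interval by root-finding on the mixture CDF.
      lo = brentq(lambda y: bma_cdf(y) - 0.05, 0.0, 50.0)
      hi = brentq(lambda y: bma_cdf(y) - 0.95, 0.0, 50.0)
      print(round(lo, 2), round(hi, 2))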

  17. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.

    2018-03-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) a single hydrocarbon fuel CnHm (for example, methane, propane, or cyclohexane) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, or benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures whose reaction products contain carbon molecules. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. The models can also be used to calculate the thermodynamic parameters of the mixture in a state of chemical equilibrium.

  18. ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics

    PubMed Central

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.

    2014-01-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156

  19. Applicability study of classical and contemporary models for effective complex permittivity of metal powders.

    PubMed

    Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien

    2012-01-01

    Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powders over the full temperature range, input data on effective complex permittivity obtained from direct measurement have, up to now, no substitute.
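
    One of the classical rules evaluated above, the Maxwell Garnett formula for spherical inclusions in a host matrix, is compact enough to sketch directly. The permittivity values and volume fraction below are illustrative, not the paper's measured data.

      import numpy as np

      def maxwell_garnett(eps_host, eps_incl, f):
          # Effective complex permittivity for inclusion volume fraction f.
          beta = (eps_incl - eps_host) / (eps_incl + 2.0 * eps_host)
          return eps_host * (1.0 + 2.0 * f * beta) / (1.0 - f * beta)

      eps_matrix = 2.6 - 0.02j  # hypothetical dielectric host (e.g., a binder)
      eps_metal = -1e4 - 1e6j   # hypothetical metal-particle permittivity
      print(maxwell_garnett(eps_matrix, eps_metal, f=0.15))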

  20. Modeling and analysis of personal exposures to VOC mixtures using copulas

    PubMed Central

    Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart

    2014-01-01

    Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs) with aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often, a single compound dominated the mixture, however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration. Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10−3 for about 10% of RIOPA participants. Factors affecting the likelihood of high concentration mixtures included city, participant ethnicity, and house air exchange rates. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy, and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that poorly fit lognormal distributions and that represent the greatest risks. PMID:24333991
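
    A brief sketch of how a copula couples mixture components: correlated uniforms from a multivariate normal are mapped through lognormal marginals (a Gaussian copula). The correlation and marginal parameters are illustrative, and the paper itself favoured Gumbel and t copulas for the VOC mixtures because of their tail dependence.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      corr = np.array([[1.0, 0.7],
                       [0.7, 1.0]])  # assumed dependence between two VOCs
      z = rng.multivariate_normal(np.zeros(2), corr, size=10_000)
      u = stats.norm.cdf(z)           # correlated uniforms (the copula)
      voc1 = stats.lognorm.ppf(u[:, 0], s=1.0, scale=2.0)  # hypothetical marginal 1
      voc2 = stats.lognorm.ppf(u[:, 1], s=0.8, scale=5.0)  # hypothetical marginal 2
      print(np.corrcoef(voc1, voc2)[0, 1])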

  1. Detection of Pathological Voice Using Cepstrum Vectors: A Deep Learning Approach.

    PubMed

    Fang, Shih-Hau; Tsao, Yu; Hsiao, Min-Jing; Chen, Ji-Ying; Lai, Ying-Hui; Lin, Feng-Chuan; Wang, Chi-Te

    2018-03-19

    Computerized detection of voice disorders has attracted considerable academic and clinical interest in the hope of providing an effective screening method for voice diseases before endoscopic confirmation. This study proposes a deep-learning-based approach to detect pathological voice and examines its performance and utility compared with other automatic classification algorithms. This study retrospectively collected 60 normal voice samples and 402 pathological voice samples of 8 common clinical voice disorders in a voice clinic of a tertiary teaching hospital. We extracted Mel frequency cepstral coefficients from 3-second samples of a sustained vowel. The performances of three machine learning algorithms, namely, deep neural network (DNN), support vector machine, and Gaussian mixture model, were evaluated based on a fivefold cross-validation. Collective cases from the voice disorder database of MEEI (Massachusetts Eye and Ear Infirmary) were used to verify the performance of the classification mechanisms. The experimental results demonstrated that DNN outperforms Gaussian mixture model and support vector machine. Its accuracy in detecting voice pathologies reached 94.26% and 90.52% in male and female subjects, based on three representative Mel frequency cepstral coefficient features. When applied to the MEEI database for validation, the DNN also achieved a higher accuracy (99.32%) than the other two classification algorithms. By stacking several layers of neurons with optimized weights, the proposed DNN algorithm can fully utilize the acoustic features and efficiently differentiate between normal and pathological voice samples. Based on this pilot study, future research may proceed to explore more application of DNN from laboratory and clinical perspectives. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
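
    A hedged sketch of the Gaussian-mixture baseline in the comparison: fit one GMM to MFCC frames from normal voices and one to frames from pathological voices, then classify a sample by average log-likelihood. The random arrays below are placeholders for MFCC frames extracted from 3-second vowel recordings.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(7)
      normal_frames = rng.normal(0.0, 1.0, (2000, 13))        # stand-in MFCC frames
      pathological_frames = rng.normal(0.8, 1.4, (2000, 13))  # stand-in MFCC frames

      gmm_norm = GaussianMixture(16, covariance_type='diag', random_state=0).fit(normal_frames)
      gmm_path = GaussianMixture(16, covariance_type='diag', random_state=0).fit(pathological_frames)

      def classify(frames):
          # Higher average log-likelihood decides the label.
          return "pathological" if gmm_path.score(frames) > gmm_norm.score(frames) else "normal"

      print(classify(rng.normal(0.8, 1.4, (300, 13))))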

  2. ACTIVE SUPPRESSION OF IMMUNOGLOBULIN ALLOTYPE SYNTHESIS

    PubMed Central

    Herzenberg, Leonore A.; Chan, Eva L.; Ravitch, Myrnice M.; Riblet, Roy J.; Herzenberg, Leonard A.

    1973-01-01

    Thymus-derived cells (T cells) that actively suppress production of IgG2a immunoglobulins carrying the Ig-1b allotype have been found in adult (SJL x BALB/c)F1 mice exposed to anti-Ig-1b early in life. The suppression is specific for Ig-1b. The allelic product, Ig-1a, is unaffected. Spleen, lymph node, bone marrow, or thymus cells from suppressed mice suppress production of Ig-1b by syngeneic spleen cells from normal F1 mice. When a mixture of suppressed and normal cells is transferred into lethally irradiated BALB/c mice, there is a short burst of Ig-1b production after which Ig-1b levels in the recipient fall rapidly below detectability. Pretreatment of the cells from the suppressed mice with antiserum specific for T cells (anti-Thy-1b) plus complement before mixture destroys the suppressing activity. Similar results with suppressor cells were obtained in vitro using Mishell-Dutton cultures. Mixture of spleen cells from suppressed animals with sheep erythrocyte (SRBC)-primed syngeneic normal spleen before culture suppresses Ig-1b plaque-forming cell (PFC) formation while leaving Ig-1a PFC unaffected. Treatment of the suppressed spleen with anti-Thy-1b before transfer removes the suppressing activity. PMID:4541122

  3. Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?

    PubMed Central

    Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610

  4. Feedforward inhibition and synaptic scaling--two sides of the same coin?

    PubMed

    Keck, Christian; Savin, Cristina; Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.

  5. Estimation and Model Selection for Finite Mixtures of Latent Interaction Models

    ERIC Educational Resources Information Center

    Hsu, Jui-Chen

    2011-01-01

    Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to proceed when unobserved population heterogeneity exists in the endogenous latent variables of nonlinear structural equation models. The current study estimates a mixture of latent interaction…

  6. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    PubMed

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

    In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we developed a QSPR model of an excess thermodynamic property of binary mixtures, i.e., excess molar volume (V(E)). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V(E)), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
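
    The mixing-rule idea, building mixture descriptors from single-component descriptors, can be sketched as below. The three rules shown (mole-fraction-weighted mean, component contrast, symmetric interaction product) are common illustrative choices, not the specific descriptors designed in this series:

    ```python
    import numpy as np

    def mixture_descriptors(d_a, d_b, x_a):
        """Combine component descriptor vectors d_a, d_b for a binary
        mixture with mole fraction x_a of component A."""
        x_b = 1.0 - x_a
        return np.concatenate([
            x_a * d_a + x_b * d_b,    # mole-fraction weighted mean
            np.abs(d_a - d_b),        # contrast between the components
            x_a * x_b * d_a * d_b,    # symmetric interaction term
        ])
    ```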

  7. Randomized structured triglycerides increase lymphatic absorption of tocopherol and retinol compared with the equivalent physical mixture in a rat model of fat malabsorption.

    PubMed

    Tso, P; Lee, T; DeMichele, S J

    2001-08-01

    Previously we demonstrated that the digestion, absorption and lymphatic transport of lipid and key essential fatty acids (EFA) from randomly interesterified fish oil/medium-chain structured triglycerides (STG) were significantly higher than an equivalent physical mixture (PM) in a normal lymph fistula rat model and in a rat model of lipid malabsorption caused by ischemia/reperfusion (I/R) injury. The goals of this study were to further explore the potential absorptive benefits of STG by comparing the intestinal absorption and lymphatic transport of tocopherol and retinol when delivered gastrically with either STG or PM under normal conditions and after I/R injury to the small bowel. Food-deprived male Sprague-Dawley rats were randomly assigned to two treatments (sham controls or I/R). Under halothane anesthesia, the superior mesenteric artery (SMA) was occluded for 20 min and then reperfused in I/R rats. The SMA was isolated but not occluded in control rats. In both groups, the mesenteric lymph duct was cannulated and a gastric tube was inserted. Each treatment group received 1 mL of the fish oil/MCT STG or PM (7 rats/group) along with (14)C-alpha-tocopherol and (3)H-retinol through the gastric tube followed by an infusion of PBS at 3 mL/h for 8 h. Lymph was collected hourly for 8 h. Under steady-state conditions, the amount of (14)C-alpha-tocopherol and (3)H-retinol transported into lymph was significantly higher in the STG-fed rats compared with those fed PM in both control and I/R groups. In addition, control and I/R rats given STG had earlier steady-state outputs of (14)C-alpha-tocopherol and (3)H-retinol and maintained approximately 30% higher outputs in lymph throughout the 8-h lymph collection period compared with rats given the PM. We conclude that STG provides the opportunity to potentiate improved absorption of fat-soluble vitamins under normal and malabsorptive states.

  8. Fabrication of metallized nanoporous films from the self-assembly of a block copolymer and homopolymer mixture.

    PubMed

    Li, Xue; Zhao, Shuying; Zhang, Shuxiang; Kim, Dong Ha; Knoll, Wolfgang

    2007-06-19

    Inorganic compound HAuCl4, which can form a complex with pyridine, is introduced into a poly(styrene-block-2-vinylpyridine) (PS-b-P2VP) block copolymer/poly(methyl methacrylate) (PMMA) homopolymer mixture. The orientation of the cylindrical microdomains formed by the P2VP block, PMMA, and HAuCl4 normal to the substrate surface can be generated via cooperative self-assembly of the mixture. Selective removal of the homopolymer can lead to porous nanostructures containing metal components in P2VP domains, which have a novel photoluminescence property.

  9. Flame Speeds and Energy Considerations for Explosions in a Spherical Bomb

    NASA Technical Reports Server (NTRS)

    Fiock, Ernest F; Marvin, Charles F, Jr; Caldwell, Frank R; Roeder, Carl H

    1940-01-01

    Simultaneous measurements were made of the speed of flame and the rise in pressure during explosions of mixtures of carbon monoxide, normal heptane, iso-octane, and benzene in a 10-inch spherical bomb with central ignition. From these records, fundamental properties of the explosive mixtures, which are independent of the apparatus, were computed. The transformation velocity, or speed at which flame advances into and transforms the explosive mixture, increases with both the temperature and the pressure of the unburned gas. The rise in pressure was correlated with the mass of charge inflamed to show the course of the energy developed.

  10. QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.

    PubMed

    Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng

    2018-05-01

    Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model (coefficient of determination 0.9366, root mean square error 0.1345) predicted the 45 mixture toxicities, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may fill a gap in predicting the non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  12. Prevention of Propofol Injection Pain in Children: A Comparison of Pretreatment with Tramadol and Propofol-Lidocaine Mixture

    PubMed Central

    Borazan, Hale; Sahin, Osman; Kececioglu, Ahmet; Uluer, M.Selcuk; Et, Tayfun; Otelcioglu, Seref

    2012-01-01

    Background: Pain on propofol injection is a common and difficult-to-eliminate problem in children. In this study, we aimed to compare the efficacy of pretreatment with tramadol 1 mg.kg-1 and a propofol-lidocaine 20 mg mixture for the prevention of propofol-induced pain in children. Methods: One hundred and twenty ASA I-II patients undergoing orthopedic and otolaryngological surgery were included in this study and were divided into three groups using random table numbers. Group C (n=39) received normal saline placebo and Group T (n=40) received 1 mg.kg-1 tramadol 60 sec before propofol (180 mg 1% propofol with 2 ml normal saline), whereas Group L (n=40) received normal saline placebo before a propofol-lidocaine mixture (180 mg 1% propofol with 2 ml 1% lidocaine). One patient in Group C was withdrawn from the study because of difficulty in inserting an iv cannula; thus, one hundred and nineteen patients were analyzed. After administration of the calculated dose of propofol, a blinded observer assessed the pain with a four-point behavioral scale. Results: There were no significant differences in patient characteristics and intraoperative variables among the groups (p>0.05), except for intraoperative fentanyl consumption and analgesic requirement one hour after surgery (p<0.05). Both tramadol 1 mg.kg-1 and the lidocaine 20 mg mixture significantly reduced propofol injection pain compared with the control group. Moderate and severe pain were more frequent in the control group (p<0.05). The overall incidence of pain was 79.4% in the control group, 35% in the tramadol group, and 25% in the lidocaine group (p<0.001). Conclusions: Pretreatment with tramadol 60 sec before propofol injection and the propofol-lidocaine mixture both significantly reduced propofol injection pain compared with placebo in children. PMID:22927775

  13. Development and validation of a metal mixture bioavailability model (MMBM) to predict chronic toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia.

    PubMed

    Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C

    2017-01-01

    Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, we developed in the present study a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7d) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; Dissolved Organic Carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmental (i.e. realistic) metal concentration ratios. Statistical analysis of mixture effects revealed that observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) in 85% of the treatments with less than 20% error. Moreover, the MMBM predicted chronic toxicity of the ternary Ni-Zn-Pb mixture at least equally accurately as the toxicity of the individual metal treatments (RMSE_Mix = 16; RMSE_Zn-only = 18; RMSE_Ni-only = 17; RMSE_Pb-only = 23). Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on metal mixture toxicity during chronic exposure and could be used in metal risk assessment frameworks. Copyright © 2016 Elsevier Ltd. All rights reserved.
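
    The two reference models against which the MMBM is evaluated have standard forms, sketched below for fractional effects; the chronic bioavailability sub-models that produce the single-metal effects are not reproduced, and the helper names are hypothetical:

    ```python
    import numpy as np

    def independent_action(effects):
        """IA: combined fractional effect of non-interacting stressors,
        E = 1 - prod(1 - e_i), each e_i in [0, 1]."""
        e = np.asarray(effects, dtype=float)
        return 1.0 - np.prod(1.0 - e)

    def toxic_units(conc, ec50):
        """CA expressed as toxic units, sum(c_i / EC50_i); a mixture at
        1.0 toxic units is predicted to produce a 50% effect."""
        return float(np.sum(np.asarray(conc, float) / np.asarray(ec50, float)))
    ```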

  14. Chemical-agnostic hazard prediction: statistical inference of in vitro toxicity pathways from proteomics responses to chemical mixtures

    EPA Science Inventory

    Toxicity pathways have been defined as normal cellular pathways that, when sufficiently perturbed as a consequence of chemical exposure, lead to an adverse outcome. If an exposure alters one or more normal biological pathways to an extent that leads to an adverse toxicity outcome...

  15. Modelling carotid artery adaptations to dynamic alterations in pressure and flow over the cardiac cycle

    PubMed Central

    Cardamone, L.; Valentín, A.; Eberth, J. F.; Humphrey, J. D.

    2010-01-01

    Motivated by recent clinical and laboratory findings of important effects of pulsatile pressure and flow on arterial adaptations, we employ and extend an established constrained mixture framework of growth (change in mass) and remodelling (change in structure) to include such dynamical effects. New descriptors of cell and tissue behavior (constitutive relations) are postulated and refined based on new experimental data from a transverse aortic arch banding model in the mouse that increases pulsatile pressure and flow in one carotid artery. In particular, it is shown that there was a need to refine constitutive relations for the active stress generated by smooth muscle, to include both stress- and stress rate-mediated control of the turnover of cells and matrix and to account for a cyclic stress-mediated loss of elastic fibre integrity and decrease in collagen stiffness in order to capture the reported evolution, over 8 weeks, of luminal radius, wall thickness, axial force and in vivo axial stretch of the hypertensive mouse carotid artery. We submit, therefore, that complex aspects of adaptation by elastic arteries can be predicted by constrained mixture models wherein individual constituents are produced or removed at individual rates and to individual extents depending on changes in both stress and stress rate from normal values. PMID:20484365

  16. Modeling of First-Passage Processes in Financial Markets

    NASA Astrophysics Data System (ADS)

    Inoue, Jun-Ichi; Hino, Hikaru; Sazuka, Naoya; Scalas, Enrico

    2010-03-01

    In this talk, we attempt a microscopic model of the first-passage process (or first-exit process) of the BUND future by means of a minority game with market history. We find that the first-passage process of the minority game with an appropriate history length generates the same properties as the BTP future (the middle- and long-term Italian Government bonds with fixed interest rates); namely, both first-passage time distributions have a crossover at some specific time scale, as is the case for the Mittag-Leffler function. We also provide a macroscopic (or phenomenological) model of the first-passage process of the BTP future and show analytically that the first-passage time distribution of the simplest mixture of normal compound Poisson processes does not have such a crossover.
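
    A first-passage time of a compound Poisson process, the building block of the phenomenological model mentioned above, is straightforward to simulate. A minimal sketch with assumed parameters (rate, jump scale, and threshold are illustrative, not fitted to BUND or BTP data):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def first_passage_time(threshold, lam=1.0, sigma=1.0, t_max=1e4):
        """First time a compound Poisson process (rate lam, Normal(0, sigma)
        jumps) leaves the interval (-threshold, threshold)."""
        t, x = 0.0, 0.0
        while t < t_max:
            t += rng.exponential(1.0 / lam)   # waiting time to the next jump
            x += rng.normal(0.0, sigma)       # jump size
            if abs(x) >= threshold:
                return t
        return np.nan                         # censored at t_max

    fpt_samples = [first_passage_time(3.0) for _ in range(10_000)]
    ```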

  17. Patterns of glaucomatous visual field loss in SITA fields automatically identified using independent component analysis.

    PubMed

    Goldbaum, Michael H; Jang, Gil-Jin; Bowd, Chris; Hao, Jiucang; Zangwill, Linda M; Liebmann, Jeffrey; Girkin, Christopher; Jung, Tzyy-Ping; Weinreb, Robert N; Sample, Pamela A

    2009-12-01

    To determine if the patterns uncovered with variational Bayesian-independent component analysis-mixture model (VIM) applied to a large set of normal and glaucomatous fields obtained with the Swedish Interactive Thresholding Algorithm (SITA) are distinct, recognizable, and useful for modeling the severity of the field loss. SITA fields were obtained with the Humphrey Visual Field Analyzer (Carl Zeiss Meditec, Inc, Dublin, California) on 1,146 normal eyes and 939 glaucoma eyes from subjects followed by the Diagnostic Innovations in Glaucoma Study and the African Descent and Glaucoma Evaluation Study. VIM modifies independent component analysis (ICA) to develop separate sets of ICA axes in the cluster of normal fields and the 2 clusters of abnormal fields. Of 360 models, the model with the best separation of normal and glaucomatous fields was chosen for creating the maximally independent axes. Grayscale displays of fields generated by VIM on each axis were compared. SITA fields most closely associated with each axis and displayed in grayscale were evaluated for consistency of pattern at all severities. The best VIM model had 3 clusters. Cluster 1 (1,193) was mostly normal (1,089, 95% specificity) and had 2 axes. Cluster 2 (596) contained mildly abnormal fields (513) and 2 axes; cluster 3 (323) held mostly moderately to severely abnormal fields (322) and 5 axes. Sensitivity for clusters 2 and 3 combined was 88.9%. The VIM-generated field patterns differed from each other and resembled glaucomatous defects (eg, nasal step, arcuate, temporal wedge). SITA fields assigned to an axis resembled each other and the VIM-generated patterns for that axis. Pattern severity increased in the positive direction of each axis by expansion or deepening of the axis pattern. VIM worked well on SITA fields, separating them into distinctly different yet recognizable patterns of glaucomatous field defects. The axis and pattern properties make VIM a good candidate as a preliminary process for detecting progression.

  18. Rasch Mixture Models for DIF Detection

    PubMed Central

    Strobl, Carolin; Zeileis, Achim

    2014-01-01

    Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a new score distribution for the Rasch mixture model is introduced here. It ensures the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study, and its application is illustrated in a study of verbal aggression. PMID:29795819

  19. Inferring network structure in non-normal and mixed discrete-continuous genomic data.

    PubMed

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2018-03-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.
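
    As a one-line illustration of the Gaussian scale mixture device (not the article's full graphical-model framework), a heavy-tailed Student-t marginal arises from rescaling a normal variate by a random scale:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def student_t_via_scale_mixture(nu, size):
        """Student-t draws as a Gaussian scale mixture:
        X = Z / sqrt(G / nu) with Z ~ N(0, 1) and G ~ chi^2(nu)."""
        z = rng.standard_normal(size)
        g = rng.chisquare(nu, size)
        return z / np.sqrt(g / nu)

    heavy_tailed = student_t_via_scale_mixture(nu=3, size=10_000)
    ```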

  20. Inferring network structure in non-normal and mixed discrete-continuous genomic data

    PubMed Central

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2017-01-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. PMID:28437848

  1. Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data

    ERIC Educational Resources Information Center

    Kim, Su-Young; Kim, Jee-Seon

    2012-01-01

    This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…

  2. Mixture Modeling: Applications in Educational Psychology

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  3. Effect of the addition of rocuronium to 2% lignocaine in peribulbar block for cataract surgery.

    PubMed

    Patil, Vishalakshi; Farooqy, Allauddin; Chaluvadi, Balaraju Thayappa; Rajashekhar, Vinayak; Malshetty, Ashwini

    2017-01-01

    Peribulbar anesthesia is associated with delayed orbital akinesia compared with retrobulbar anesthesia. To test the hypothesis that rocuronium added to a mixture of local anesthetics (LAs) could improve the speed of onset of akinesia in peribulbar block (PB), we designed this study. This study examined the effects of adding rocuronium 5 mg to 2% lignocaine with adrenaline on orbital and eyelid akinesia in patients undergoing cataract surgery. In a prospective, randomized, double-blind study, 100 patients were equally randomized to receive either a mixture of 0.5 ml normal saline, 6 ml lidocaine 2% with adrenaline and hyaluronidase 50 IU/ml (Group I), or a mixture of rocuronium 0.5 ml (5 mg), 6 ml lidocaine 2% with adrenaline and hyaluronidase 50 IU/ml (Group II). Orbital akinesia was assessed on a 0-8 score (0 = no movement, 8 = normal movement) at 2 min intervals for 10 min. Time to adequate anesthesia was also recorded. Results are presented as mean ± standard deviation. The rocuronium group demonstrated significantly better akinesia scores than the control group at 2 min intervals post-PB (significant P values). No significant complications were recorded. Rocuronium added to a mixture of LAs improved the quality of akinesia in PB and reduced the need for supplementary injections. The addition of rocuronium 5 mg to a mixture of lidocaine 2% with adrenaline and hyaluronidase 50 IU/ml shortened the onset time of peribulbar anesthesia in patients undergoing cataract surgery without causing adverse effects.

  4. Tunable integration of absorption-membrane-adsorption for efficiently separating low boiling gas mixtures near normal temperature

    PubMed Central

    Liu, Huang; Pan, Yong; Liu, Bei; Sun, Changyu; Guo, Ping; Gao, Xueteng; Yang, Lanying; Ma, Qinglan; Chen, Guangjin

    2016-01-01

    Separation of low boiling gas mixtures is a widespread concern in the process industries. Their separation currently relies heavily upon energy-intensive cryogenic processes. Here, we report a pseudo-absorption process for separating low boiling gas mixtures near normal temperature. In this process, absorption-membrane-adsorption is integrated by suspending a suitable porous ZIF material in a suitable solvent and forming a selectively permeable liquid membrane around the ZIF particles. Green solvents like water and glycol were used to form ZIF-8 slurry and tune the permeability of the liquid membrane surrounding the ZIF-8 particles. We found that glycol molecules form a tighter membrane while water molecules form a looser membrane because of the hydrophobicity of ZIF-8. When mixed solvents composed of glycol and water are used, the permeability of the liquid membrane becomes tunable. It is shown that ZIF-8/water slurry always manifests remarkably higher separation selectivity than solid ZIF-8, and that it can be tuned to further enhance the capture of light hydrocarbons by adding a suitable quantity of glycol to the water. Because of its lower viscosity and higher sorption/desorption rate, the tunable ZIF-8/water-glycol slurry can readily be used as a liquid absorbent to separate different kinds of low boiling gas mixtures by applying a multistage separation process in one traditional absorption tower, especially for the capture of light hydrocarbons. PMID:26892255

  5. Temperature and pressure influence on maximum rates of pressure rise during explosions of propane-air mixtures in a spherical vessel.

    PubMed

    Razus, D; Brinzea, V; Mitu, M; Movileanu, C; Oancea, D

    2011-06-15

    The maximum rates of pressure rise during closed vessel explosions of propane-air mixtures are reported, for systems with various initial concentrations, pressures and temperatures ([C(3)H(8)]=2.50-6.20 vol.%, p(0)=0.3-1.3 bar; T(0)=298-423 K). Experiments were performed in a spherical vessel (Φ=10 cm) with central ignition. The deflagration (severity) index K(G), calculated from experimental values of maximum rates of pressure rise, is examined against the adiabatic deflagration index, K(G, ad), computed from normal burning velocities and peak explosion pressures. At constant temperature and fuel/oxygen ratio, both the maximum rates of pressure rise and the deflagration indices are linear functions of total initial pressure, as reported for other fuel-air mixtures. At constant initial pressure and composition, the maximum rates of pressure rise and deflagration indices are slightly influenced by the initial temperature; some influence of the initial temperature on maximum rates of pressure rise is observed only for propane-air mixtures far from stoichiometric composition. The differentiated temperature influence on the normal burning velocities and the peak explosion pressures might explain this behaviour. Copyright © 2011 Elsevier B.V. All rights reserved.
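
    The deflagration index referred to above follows the standard cubic-root law, K_G = (dp/dt)_max * V^(1/3). A minimal sketch; the example numbers are hypothetical, not the paper's measurements:

    ```python
    def deflagration_index(dpdt_max, volume):
        """Cubic-root law: K_G = (dp/dt)_max * V**(1/3).
        dpdt_max in bar/s, volume in m^3, K_G in bar*m/s."""
        return dpdt_max * volume ** (1.0 / 3.0)

    # Hypothetical example for a 10 cm diameter sphere (V ~ 5.24e-4 m^3):
    k_g = deflagration_index(350.0, 5.24e-4)   # ~28 bar*m/s
    ```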

  6. Local Solutions in the Estimation of Growth Mixture Models

    ERIC Educational Resources Information Center

    Hipp, John R.; Bauer, Daniel J.

    2006-01-01

    Finite mixture models are well known to have poorly behaved likelihood functions featuring singularities and multiple optima. Growth mixture models may suffer from fewer of these problems, potentially benefiting from the structure imposed on the estimated class means and covariances by the specified growth model. As demonstrated here, however,…

  7. Novel absorptivity centering method utilizing normalized and factorized spectra for analysis of mixtures with overlapping spectra in different matrices using built-in spectrophotometer software

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom

    2018-07-01

    A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), is the one with partially overlapping spectra, formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), is that with completely overlapping spectra, formulated as eye drops. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures has been achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at the respective maxima of 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA. Calibration graphs were established with good correlation coefficients. The method shows significant advantages of simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is good enough to be applied to the assay of drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing.

  8. Novel absorptivity centering method utilizing normalized and factorized spectra for analysis of mixtures with overlapping spectra in different matrices using built-in spectrophotometer software.

    PubMed

    Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom

    2018-07-05

    A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), is the one with partially overlapping spectra, formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), is that with completely overlapping spectra, formulated as eye drops. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures has been achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at the respective maxima of 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA. Calibration graphs were established with good correlation coefficients. The method shows significant advantages of simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is good enough to be applied to the assay of drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Recognizing patterns of visual field loss using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher

    2014-03-01

    Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian mixture model with expectation maximization (GEM; EM is used to estimate the model parameters) to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center, clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity) and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2 and 3, the optimal numbers of PCA-identified axes were 2, 2 and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
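
    The GEM-plus-PCA pipeline described here can be sketched with standard library calls; the 52-point input shape and the per-cluster axis counts come from the abstract, while everything else (function name, file name, defaults) is an illustrative assumption:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    def gem_with_pca(fields, n_clusters=3, axes_per_cluster=(2, 2, 5)):
        """Cluster 52-point fields with a Gaussian mixture fit by EM,
        then decompose each cluster into its principal axes (patterns)."""
        gmm = GaussianMixture(n_components=n_clusters, random_state=0)
        labels = gmm.fit_predict(fields)
        axes = []
        for k, n_axes in enumerate(axes_per_cluster):
            pca = PCA(n_components=n_axes).fit(fields[labels == k])
            axes.append(pca.components_)   # (n_axes, 52) loss patterns
        return labels, axes

    # fields = np.load("fdt_fields.npy")   # hypothetical (n_eyes, 52) array
    # labels, axes = gem_with_pca(fields)
    ```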

  10. A path integral methodology for obtaining thermodynamic properties of nonadiabatic systems using Gaussian mixture distributions

    NASA Astrophysics Data System (ADS)

    Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel

    2018-05-01

    We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
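
    The role of the sampling distribution can be illustrated with plain importance sampling under a Gaussian-mixture proposal; this toy estimate of a configuration integral is an assumed stand-in, far simpler than the paper's nonadiabatic path-integral estimator:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Toy target: Z = integral exp(-V(x)) dx for a double-well potential.
    V = lambda x: 0.5 * (x**2 - 2.0) ** 2

    # Gaussian-mixture proposal roughly matched to the two wells.
    w = np.array([0.5, 0.5])
    means = np.array([-1.4, 1.4])
    sds = np.array([0.5, 0.5])

    comp = rng.choice(2, size=100_000, p=w)
    x = rng.normal(means[comp], sds[comp])
    q = sum(wk * stats.norm.pdf(x, mk, sk) for wk, mk, sk in zip(w, means, sds))
    Z_hat = np.mean(np.exp(-V(x)) / q)     # importance-sampling estimate of Z
    ```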

  11. Substitution of the human αC region with the analogous chicken domain generates a fibrinogen with severely impaired lateral aggregation: fibrin monomers assemble into protofibrils but protofibrils do not assemble into fibers.

    PubMed Central

    Ping, Lifang; Huang, Lihong; Cardinali, Barbara; Profumo, Aldo; Gorkun, Oleg V.; Lord, Susan T.

    2011-01-01

    Fibrin polymerization occurs in two steps: the assembly of fibrin monomers into protofibrils and the lateral aggregation of protofibrils into fibers. Here we describe a novel fibrinogen that apparently impairs only lateral aggregation. This variant is a hybrid, where the human αC region has been replaced with the homologous chicken region. Several experiments indicate this hybrid human-chicken (HC) fibrinogen has an overall structure similar to normal. Thrombin-catalyzed fibrinopeptide release from HC fibrinogen was normal. Plasmin digests of HC fibrinogen produced fragments that were similar to normal D and E; further, as with normal fibrinogen, the knob ‘A’ peptide, GPRP, reversed the plasmin cleavage associated with addition of EDTA. Dynamic light scattering and turbidity studies with HC fibrinogen showed polymerization was not normal. Whereas early small increases in hydrodynamic radius and absorbance paralleled the increases seen during the assembly of normal protofibrils, HC fibrinogen showed no dramatic increase in scattering as observed with normal lateral aggregation. To determine whether HC and normal fibrinogen could form a copolymer, we examined mixtures of these. Polymerization of normal fibrinogen was markedly changed by HC fibrinogen, as expected for mixed polymers. When the mixture contained 0.45 μM normal and 0.15 μM HC fibrinogen, the initiation of lateral aggregation was delayed and the final fiber size was reduced relative to normal fibrinogen at 0.45 μM. Considered altogether, our data suggest that HC fibrin monomers can assemble into protofibrils or protofibril-like structures but these either cannot assemble into fibers or assemble into very thin fibers. PMID:21932842

  12. Cluster kinetics model for mixtures of glassformers

    NASA Astrophysics Data System (ADS)

    Brenskelle, Lisa A.; McCoy, Benjamin J.

    2007-10-01

    For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.

  13. Surface tensions of inorganic multicomponent aqueous electrolyte solutions and melts.

    PubMed

    Dutcher, Cari S; Wexler, Anthony S; Clegg, Simon L

    2010-11-25

    A semiempirical model is presented that predicts surface tensions (σ) of aqueous electrolyte solutions and their mixtures, for concentrations ranging from infinitely dilute solution to molten salt. The model requires, at most, only two temperature-dependent terms to represent surface tensions of either pure aqueous solutions, or aqueous or molten mixtures, over the entire composition range. A relationship was found for the coefficients of the equation σ = c(1) + c(2)T (where T (K) is temperature) for molten salts in terms of ion valency and radius, melting temperature, and salt molar volume. Hypothetical liquid surface tensions can thus be estimated for electrolytes for which there are no data, or which do not exist in molten form. Surface tensions of molten (single) salts, when extrapolated to normal temperatures, were found to be consistent with data for aqueous solutions. This allowed surface tensions of very concentrated, supersaturated, aqueous solutions to be estimated. The model has been applied to the following single electrolytes over the entire concentration range, using data for aqueous solutions over the temperature range 233-523 K, and extrapolated surface tensions of molten salts and pure liquid electrolytes: HCl, HNO(3), H(2)SO(4), NaCl, NaNO(3), Na(2)SO(4), NaHSO(4), Na(2)CO(3), NaHCO(3), NaOH, NH(4)Cl, NH(4)NO(3), (NH(4))(2)SO(4), NH(4)HCO(3), NH(4)OH, KCl, KNO(3), K(2)SO(4), K(2)CO(3), KHCO(3), KOH, CaCl(2), Ca(NO(3))(2), MgCl(2), Mg(NO(3))(2), and MgSO(4). The average absolute percentage error between calculated and experimental surface tensions is 0.80% (for 2389 data points). The model extrapolates smoothly to temperatures as low as 150 K. Also, the model successfully predicts surface tensions of ternary aqueous mixtures; the effect of salt-salt interactions in these calculations was explored.
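
    The two-coefficient temperature form quoted above, sigma = c(1) + c(2)T, is a straight line in T, so the coefficients for a single salt can be recovered by a linear fit; the data values below are hypothetical placeholders, not the paper's:

    ```python
    import numpy as np

    # Hypothetical surface-tension data for one molten salt.
    T = np.array([1100.0, 1150.0, 1200.0, 1250.0])    # K
    sigma = np.array([114.0, 112.2, 110.4, 108.6])    # mN/m

    c2, c1 = np.polyfit(T, sigma, 1)                  # slope, then intercept
    print(f"sigma(T) ~ {c1:.1f} + ({c2:.4f})*T mN/m")
    ```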

  14. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    PubMed

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed that effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF will suggest domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.

  15. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing

    PubMed Central

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed that effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF will suggest domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index. PMID:28686634

  16. Clustering of time-course gene expression profiles using normal mixture models with autoregressive random effects

    PubMed Central

    2012-01-01

    Background: Time-course gene expression data such as yeast cell cycle data may be periodically expressed. To cluster such data, currently used Fourier series approximations of periodic gene expressions have been found not to be sufficiently adequate to model the complexity of the time-course data, partly due to their ignoring the dependence between the expression measurements over time and the correlation among gene expression profiles. We further investigate the advantages and limitations of available models in the literature and propose a new mixture model with autoregressive random effects of the first order for the clustering of time-course gene-expression profiles. Some simulations and real examples are given to demonstrate the usefulness of the proposed models. Results: We illustrate the applicability of our new model using synthetic and real time-course datasets. We show that our model outperforms existing models to provide more reliable and robust clustering of time-course data. Our model provides superior results when genetic profiles are correlated. It also gives comparable results when the correlation between the gene profiles is weak. In the applications to real time-course data, relevant clusters of coregulated genes are obtained, which are supported by gene-function annotation databases. Conclusions: Our new model under our extension of the EMMIX-WIRE procedure is more reliable and robust for clustering time-course data because it adopts a random effects model that allows for the correlation among observations at different time points. It postulates gene-specific random effects with an autocorrelation variance structure that models coregulation within the clusters. The developed R package is flexible in its specification of the random effects through user-input parameters that enables improved modelling and consequent clustering of time-course data. PMID:23151154
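
    The covariance structure that motivates the model, gene-specific AR(1) random effects around a shared cluster mean, can be made concrete by simulation; the parameter values here are illustrative, and this sketch is not the EMMIX-WIRE estimation procedure itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_cluster(mean_curve, n_genes, rho=0.6, tau=0.5, noise=0.3):
        """One cluster: shared mean curve plus gene-specific stationary
        AR(1) random effects (marginal sd tau) and measurement noise."""
        t = len(mean_curve)
        out = np.empty((n_genes, t))
        for g in range(n_genes):
            b = np.empty(t)
            b[0] = rng.normal(0.0, tau)
            for j in range(1, t):          # AR(1) recursion over time points
                b[j] = rho * b[j - 1] + rng.normal(0.0, tau * np.sqrt(1 - rho**2))
            out[g] = mean_curve + b + rng.normal(0.0, noise, size=t)
        return out

    profiles = simulate_cluster(np.sin(np.linspace(0, 2 * np.pi, 12)), n_genes=50)
    ```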

  17. Effects of Flame Structure and Hydrodynamics on Soot Particle Inception and Flame Extinction in Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Axelbaum, R. L.; Chen, R.; Sunderland, P. B.; Urban, D. L.; Liu, S.; Chao, B. H.

    2001-01-01

    This paper summarizes recent studies of the effects of stoichiometric mixture fraction (structure) and hydrodynamics on soot particle inception and flame extinction in diffusion flames. Microgravity experiments are uniquely suited for these studies because, unlike normal gravity experiments, they allow structural and hydrodynamic effects to be independently studied. As part of this recent flight definition program, microgravity studies have been performed in the 2.2 second drop tower. Normal gravity counterflow studies also have been employed and analytical and numerical models have been developed. A goal of this program is to develop sufficient understanding of the effects of flame structure that flames can be "designed" to specifications - hence the program name, Flame Design. In other words, if a soot-free, strong, low temperature flame is required, can one produce such a flame by designing its structure? Certainly, as in any design, there will be constraints imposed by the properties of the available "materials." For hydrocarbon combustion, the base materials are fuel and air. Additives could be considered, but for this work only fuel, oxygen and nitrogen are considered. Also, the structure of these flames is "designed" by varying the stoichiometric mixture fraction. Following this line of reasoning, the studies described are aimed at developing the understanding of flame structure that is needed to allow for optimum design.

  18. Performance characterizations of asphalt binders and mixtures incorporating silane additive ZycoTherm

    NASA Astrophysics Data System (ADS)

    Hasan, Mohd Rosli Mohd; Hamzah, Meor Othman; Yee, Teh Sek

    2017-10-01

    Experimental work was conducted to evaluate the properties of asphalt binders and mixtures produced using a relatively new silane additive named ZycoTherm. In this study, 0.1 wt% ZycoTherm was blended with asphalt binder to enable production of asphalt mixture at lower than normal temperatures, as well as to improve mix workability and compactability. The performance of the asphalt mixtures against pavement distresses in a tropical climate was also investigated. The properties of the control asphalt binders (60/70 and 80/100 penetration grade) and asphalt binders incorporating 0.1% ZycoTherm were reported based on penetration, softening point, rotational viscosity, complex modulus and phase angle. Subsequently, to compare the performance of the asphalt mixture incorporating ZycoTherm with the control asphalt mixture, cylindrical samples were prepared at the recommended temperatures and air voids depending on the binder types and test requirements. The samples were tested for indirect tensile strength (ITS), resilient modulus, dynamic creep, Hamburg wheel tracking and moisture-induced damage. From compaction data using the Servopak gyratory compactor, specimens prepared using ZycoTherm exhibited higher workability and compactability compared to the conventional mixture. From the mixture performance test results, mixtures prepared with ZycoTherm showed comparable if not better performance than the control sample in terms of resistance to moisture damage, permanent deformation and cracking.

  19. Statistical Modeling of Retinal Optical Coherence Tomography.

    PubMed

    Amini, Zahra; Rabbani, Hossein

    2016-06-01

    In this paper, a new model for retinal Optical Coherence Tomography (OCT) images is proposed. This statistical model is based on introducing a nonlinear Gaussianization transform to convert the probability distribution function (pdf) of each OCT intra-retinal layer to a Gaussian distribution. The retina is a layered structure and in OCT each of these layers has a specific pdf which is corrupted by speckle noise; therefore, a mixture model for statistical modeling of OCT images is proposed. A Normal-Laplace distribution, which is a convolution of a Laplace pdf and Gaussian noise, is proposed as the distribution of each component of this model. The reason for choosing the Laplace pdf is the monotonically decaying behavior of OCT intensities in each layer for healthy cases. After fitting a mixture model to the data, each component is gaussianized and all of them are combined by the Averaged Maximum A Posteriori (AMAP) method. To demonstrate the ability of this method, a new contrast enhancement method based on this statistical model is proposed and tested on thirteen healthy 3D OCTs taken by the Topcon 3D OCT and five 3D OCTs from Age-related Macular Degeneration (AMD) patients, taken by the Zeiss Cirrus HD-OCT. Comparison of the results with two contending techniques demonstrates the superiority of the proposed method both visually and numerically. Furthermore, to prove the efficacy of the proposed method for a more direct and specific purpose, an improvement in the segmentation of intra-retinal layers, using the proposed contrast enhancement method as a preprocessing step, is demonstrated.
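
    The Gaussianization step named above can be sketched generically: map each sample through an estimate of its CDF and then through the standard normal quantile function. This empirical-CDF version is an assumed simplification; the paper fits a Normal-Laplace mixture rather than using ranks:

    ```python
    import numpy as np
    from scipy import stats

    def gaussianize(x):
        """Map samples to approximately N(0, 1) via the probability
        integral transform: ranks -> uniforms -> normal quantiles."""
        ranks = stats.rankdata(x)          # 1..n, ties averaged
        u = ranks / (len(x) + 1.0)         # keep u strictly inside (0, 1)
        return stats.norm.ppf(u)
    ```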

  20. Evaluating differential effects using regression interactions and regression mixture models

    PubMed Central

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results to those obtained using an interaction term in linear regression. The research questions that each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described, and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects and that regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
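
    The data-generating situation a regression mixture targets is easy to state in code: a small number of unobserved classes, each with its own predictor-outcome slope. The class proportion and slopes below are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two latent classes with different slopes; membership is unobserved,
    # so a single interaction term has no observed moderator to work with.
    n = 1_000
    latent = rng.random(n) < 0.3              # 30% "sensitive" class
    x = rng.standard_normal(n)
    slope = np.where(latent, 1.5, 0.2)        # class-specific effects
    y = 1.0 + slope * x + rng.normal(0.0, 1.0, size=n)
    ```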

  1. Nonlinear Structured Growth Mixture Models in M"plus" and OpenMx

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…

  2. The Potential of Growth Mixture Modelling

    ERIC Educational Resources Information Center

    Muthen, Bengt

    2006-01-01

    The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…

  3. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    PubMed

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
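
    For the single (unmixed) zero-truncated Poisson case, the estimator referred to above is short enough to sketch; the example counts are hypothetical:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def zt_poisson_lambda(counts):
        """MLE of lambda for a zero-truncated Poisson: solve
        mean(x) = lambda / (1 - exp(-lambda)). Assumes mean(x) > 1."""
        xbar = float(np.mean(counts))
        f = lambda lam: lam / (1.0 - np.exp(-lam)) - xbar
        return brentq(f, 1e-8, 10.0 * xbar + 10.0)

    def horvitz_thompson_n(counts):
        """Population size N_hat = n / (1 - exp(-lambda_hat)) from the
        n individuals observed at least once."""
        lam = zt_poisson_lambda(counts)
        return len(counts) / (1.0 - np.exp(-lam))

    n_hat = horvitz_thompson_n([1, 1, 2, 1, 3, 1, 2, 1, 1, 4])  # hypothetical counts
    ```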

  4. Development of PBPK Models for Gasoline in Adult and ...

    EPA Pesticide Factsheets

    Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of

  5. Modelling the average velocity of propagation of the flame front in a gasoline engine with hydrogen additives

    NASA Astrophysics Data System (ADS)

    Smolenskaya, N. M.; Smolenskii, V. V.

    2018-01-01

The paper presents models for calculating the average propagation velocity of the flame front, obtained from the results of experimental studies. Experimental studies were carried out on a single-cylinder gasoline engine UIT-85 with hydrogen additives of up to 6% of the mass of fuel. The article shows the influence of hydrogen addition on the average propagation velocity of the flame front in the main combustion phase. Dependences of the turbulent propagation velocity of the flame front in the second combustion phase on the mixture composition and operating modes were also obtained. The article shows the influence of the normal combustion rate on the average flame propagation velocity in the third combustion phase.

  6. Velocity autocorrelation function in supercooled liquids: Long-time tails and anomalous shear-wave propagation.

    PubMed

    Peng, H L; Schober, H R; Voigtmann, Th

    2016-12-01

Molecular dynamics simulations are performed to reveal the long-time behavior of the velocity autocorrelation function (VAF) by utilizing the finite-size effect in a Lennard-Jones binary mixture. Whereas in normal liquids the classical positive t^{-3/2} long-time tail is observed, we find a negative tail in supercooled liquids. It is strongly influenced by the transfer of the transverse current wave across the periodic boundary. The t^{-5/2} decay of the negative long-time tail is confirmed in the spectrum of the VAF. Modeling the long-time transverse current within a generalized Maxwell model, we reproduce the negative long-time tail of the VAF, but with a slower algebraic t^{-2} decay.

  7. Parametric Model of an Aerospike Rocket Engine

    NASA Technical Reports Server (NTRS)

    Korte, J. J.

    2000-01-01

A suite of computer codes was assembled to simulate the performance of an aerospike engine and to generate the engine input for the Program to Optimize Simulated Trajectories. First an engine simulator module was developed that predicts the aerospike engine performance for a given mixture ratio, power level, thrust vectoring level, and altitude. This module was then used to rapidly generate the aerospike engine performance tables for axial thrust, normal thrust, pitching moment, and specific thrust. Parametric engine geometry was defined for use with the engine simulator module. The parametric model was also integrated into the iSIGHT multidisciplinary framework so that alternate designs could be determined. The computer codes were used to support in-house conceptual studies of reusable launch vehicle designs.

  8. Parametric Model of an Aerospike Rocket Engine

    NASA Technical Reports Server (NTRS)

    Korte, J. J.

    2000-01-01

    A suite of computer codes was assembled to simulate the performance of an aerospike engine and to generate the engine input for the Program to Optimize Simulated Trajectories. First an engine simulator module was developed that predicts the aerospike engine performance for a given mixture ratio, power level, thrust vectoring level, and altitude. This module was then used to rapidly generate the aerospike engine performance tables for axial thrust, normal thrust, pitching moment, and specific thrust. Parametric engine geometry was defined for use with the engine simulator module. The parametric model was also integrated into the iSIGHT multidisciplinary framework so that alternate designs could be determined. The computer codes were used to support in-house conceptual studies of reusable launch vehicle designs.

  9. Mixture-mixture design for the fingerprint optimization of chromatographic mobile phases and extraction solutions for Camellia sinensis.

    PubMed

    Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S

    2007-07-09

    A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.

  10. Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.

    2017-10-01

Two-step approximate models of the chemical kinetics of detonation combustion of (i) one-fuel (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air) are developed; the models for multi-fuel mixtures are proposed for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. Advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of the detonation wave in stoichiometric and fuel-rich methane-air mixtures and a stoichiometric methane-oxygen mixture. The dominant size of the detonation cell, determined in the calculations, is in good agreement with all known experimental data.

  11. Fitting a Mixture Item Response Theory Model to Personality Questionnaire Data: Characterizing Latent Classes and Investigating Possibilities for Improving Prediction

    ERIC Educational Resources Information Center

    Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk

    2008-01-01

    Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…

  12. Disease Mapping of Zero-excessive Mesothelioma Data in Flanders

    PubMed Central

    Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel

    2016-01-01

Purpose: To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods: The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in the literature. Results: The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Conclusions: Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID:27908590

  13. Disease mapping of zero-excessive mesothelioma data in Flanders.

    PubMed

    Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S; Faes, Christel

    2017-01-01

To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero inflation, and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion, and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in the literature. The results indicate that hurdle models with a random effects term accounting for extra variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Models taking into account zero inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Heterogeneous OH oxidation of motor oil particles causes selective depletion of branched and less cyclic hydrocarbons.

    PubMed

    Isaacman, Gabriel; Chan, Arthur W H; Nah, Theodora; Worton, David R; Ruehl, Chris R; Wilson, Kevin R; Goldstein, Allen H

    2012-10-02

Motor oil serves as a useful model system for atmospheric oxidation of hydrocarbon mixtures typical of anthropogenic atmospheric particulate matter, but its complexity often prevents comprehensive chemical speciation. In this work we fully characterize this formerly "unresolved complex mixture" at the molecular level using recently developed soft ionization gas chromatography techniques. Nucleated motor oil particles are oxidized in a flow tube reactor to investigate the relative reaction rates of the observed hydrocarbon classes: alkanes, cycloalkanes, bicycloalkanes, tricycloalkanes, and steranes. Oxidation of hydrocarbons in a complex aerosol is found to be efficient, with approximately three-quarters (0.72 ± 0.06) of OH collisions yielding a reaction. Reaction rates of individual hydrocarbons are structurally dependent: compared to normal alkanes, reaction rates increased by 20-50% with branching, while rates decreased ∼20% per nonaromatic ring present. These differences in rates are expected to alter particle composition as a function of oxidation, with depletion of branched and enrichment of cyclic hydrocarbons. Due to this expected shift toward ring-opening reactions, heterogeneous oxidation of the unreacted hydrocarbon mixture is less likely to proceed through fragmentation pathways in more oxidized particles. Based on the observed oxidation-induced changes in composition, isomer-resolved analysis has potential utility for determining the photochemical age of atmospheric particulate matter with respect to heterogeneous oxidation.

  15. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.

  16. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
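
    A minimal Python sketch of the relaxed fixed-point iteration discussed in these two records, assuming a two-component normal mixture: the step size omega = 1 recovers the classical successive-approximations (EM-type) procedure, and 0 < omega < 2 corresponds to the convergent range identified by the authors. This is an illustration, not the authors' implementation.

      # Relaxed successive-approximations iteration for a two-component
      # normal mixture; omega = 1 is the classical fixed-point procedure.
      import numpy as np

      def fixed_point_step(x, w, mu, sigma):
          # posterior responsibilities (normal densities up to a shared constant)
          d1 = w * np.exp(-0.5 * ((x - mu[0]) / sigma[0]) ** 2) / sigma[0]
          d2 = (1 - w) * np.exp(-0.5 * ((x - mu[1]) / sigma[1]) ** 2) / sigma[1]
          r = d1 / (d1 + d2)
          # weighted parameter updates
          w_new = r.mean()
          mu_new = np.array([(r * x).sum() / r.sum(),
                             ((1 - r) * x).sum() / (1 - r).sum()])
          sg_new = np.array([np.sqrt((r * (x - mu_new[0]) ** 2).sum() / r.sum()),
                             np.sqrt(((1 - r) * (x - mu_new[1]) ** 2).sum() / (1 - r).sum())])
          return w_new, mu_new, sg_new

      def estimate(x, w, mu, sigma, omega=1.0, n_iter=300):
          for _ in range(n_iter):
              w1, mu1, sg1 = fixed_point_step(x, w, mu, sigma)
              # deflected-gradient relaxation with step size omega
              w = w + omega * (w1 - w)
              mu = mu + omega * (mu1 - mu)
              sigma = sigma + omega * (sg1 - sigma)
          return w, mu, sigma

      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 700)])
      print(estimate(x, 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])))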

  17. Evaporation characteristics of ETBE-blended gasoline.

    PubMed

    Okamoto, Katsuhiro; Hiramatsu, Muneyuki; Hino, Tomonori; Otake, Takuma; Okamoto, Takashi; Miyamoto, Hiroki; Honma, Masakatsu; Watanabe, Norimichi

    2015-04-28

To reduce greenhouse gas emissions, which contribute to global warming, production of gasoline blended with ethyl tert-butyl ether (ETBE) is increasing annually. The flash point of ETBE is higher than that of gasoline, and blending ETBE into gasoline changes the flash point and the vapor pressure. Therefore, it is expected that the fire hazard posed by ETBE-blended gasoline would differ from that posed by normal gasoline. The aim of this study was to acquire the knowledge required for estimating the fire hazard of ETBE-blended gasoline. Treating ETBE-blended gasoline as a two-component mixture of gasoline and ETBE, we developed a prediction model that describes the vapor pressure and flash point of ETBE-blended gasoline at an arbitrary ETBE blending ratio. We chose an 8-component hydrocarbon mixture as a model gasoline, and defined the relation between the molar mass of gasoline and the mass loss fraction. We measured the changes in the vapor pressure and flash point of gasoline caused by blending ETBE and by evaporation, and compared the predicted values with the measured values in order to verify the prediction model. The calculated vapor pressures and flash points corresponded well to the measured values. Thus, we confirmed that the change in the evaporation characteristics of ETBE-blended gasoline due to evaporation can be predicted by the proposed model. Furthermore, the vapor pressure constants of ETBE-blended gasoline were obtained with the model, and the distillation curves were then developed. Copyright © 2015 Elsevier B.V. All rights reserved.
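
    As a rough illustration of the two-component idea (and only that; the paper's calibrated model, which ties the molar mass of gasoline to its mass loss fraction, is not reproduced here), an ideal-solution sketch in Python using Raoult's law. The Antoine constants for ETBE are placeholders and gasoline is treated as a single pseudo-component with an assumed vapor pressure.

      # Idealized sketch, not the paper's model: mixture vapor pressure from
      # Raoult's law with placeholder pure-component values.
      import numpy as np

      def antoine(A, B, C, T_celsius):
          # pure-component vapor pressure (kPa), log10 form of Antoine's equation
          return 10 ** (A - B / (C + T_celsius))

      T = 37.8                                    # test temperature, deg C
      P_gasoline = 60.0                           # assumed pseudo-component value, kPa
      P_etbe = antoine(6.073, 1206.9, 223.0, T)   # placeholder Antoine constants

      for x_etbe in np.linspace(0.0, 0.5, 6):
          P_mix = (1 - x_etbe) * P_gasoline + x_etbe * P_etbe  # Raoult's law
          print(f"x_ETBE={x_etbe:.1f}: P_mix={P_mix:.1f} kPa")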

Log-normal distribution of the trace element data results from a mixture of stochastic input and deterministic internal dynamics.

    PubMed

    Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya

    2002-04-01

In a previous article, we showed a log-normal distribution of boron and lithium in human urine. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, each of which may have any underlying distribution. Although elemental excretion depends on many variables, the one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount of any given element present; that is, the same percentage of an existing element is eliminated per unit time, and the element concentration is represented by a deterministic negative power function of time in the elimination time-course. Sampling is of a stochastic nature, so the dataset of time variables in the elimination phase when the sample was obtained is expected to show a normal distribution. The time variable appears as an exponent of the power function, so a concentration histogram is that of an exponential transformation of normally distributed time. This is the reason why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that defines the pharmacokinetic equation.
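
    A short Python sketch of the mechanism described above, under assumed illustrative values of the initial concentration, the elimination rate constant, and the sampling-time distribution: normally distributed sampling times pushed through first-order elimination yield log-normally distributed concentrations.

      # Normal sampling times + first-order elimination -> log-normal concentrations.
      import numpy as np

      rng = np.random.default_rng(1)
      t = rng.normal(loc=12.0, scale=3.0, size=100_000)   # sampling times (h), assumed
      C = 50.0 * np.exp(-0.2 * t)                         # one-compartment decay, assumed C0 and k

      # log(C) = log(50) - 0.2 * t is a linear function of normal t, hence normal:
      # the concentrations themselves are log-normally distributed.
      logC = np.log(C)
      print(logC.mean(), logC.std())   # approx log(50) - 0.2*12 and 0.2*3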

  19. Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis

    DTIC Science & Technology

    2005-07-25

Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery. ... spatial resolution permits different materials to be present in the area covered by a single pixel. The linear mixture model considers a pixel reflectance r as the linear mixture of the endmember signatures m1, m2, ..., mP: r = M α + n (1), where n is included to account for ...
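
    A minimal sketch of unmixing under equation (1), using nonnegativity-constrained least squares rather than the report's matrix factorization approach; the endmember matrix and pixel spectrum below are synthetic stand-ins.

      # Unmixing a pixel under the linear mixture model r = M @ alpha + n.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(2)
      M = rng.uniform(0.0, 1.0, size=(50, 3))       # 50 bands, 3 endmembers (synthetic)
      alpha_true = np.array([0.6, 0.3, 0.1])        # true abundances
      r = M @ alpha_true + rng.normal(0, 0.01, 50)  # observed pixel + noise

      alpha_hat, residual = nnls(M, r)              # nonnegativity-constrained LS
      alpha_hat /= alpha_hat.sum()                  # optional sum-to-one renormalization
      print(alpha_hat)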

  20. Microstructure and hydrogen bonding in water-acetonitrile mixtures.

    PubMed

    Mountain, Raymond D

    2010-12-16

    The connection of hydrogen bonding between water and acetonitrile in determining the microheterogeneity of the liquid mixture is examined using NPT molecular dynamics simulations. Mixtures for six, rigid, three-site models for acetonitrile and one water model (SPC/E) were simulated to determine the amount of water-acetonitrile hydrogen bonding. Only one of the six acetonitrile models (TraPPE-UA) was able to reproduce both the liquid density and the experimental estimates of hydrogen bonding derived from Raman scattering of the CN stretch band or from NMR quadrupole relaxation measurements. A simple modification of the acetonitrile model parameters for the models that provided poor estimates produced hydrogen-bonding results consistent with experiments for two of the models. Of these, only one of the modified models also accurately determined the density of the mixtures. The self-diffusion coefficient of liquid acetonitrile provided a final winnowing of the modified model and the successful, unmodified model. The unmodified model is provisionally recommended for simulations of water-acetonitrile mixtures.

  1. General mixture item response models with different item response structures: Exposition with an application to Likert scales.

    PubMed

    Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong

    2018-01-10

    This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.

  2. Applications of the Simple Multi-Fluid Model to Correlations of the Vapor-Liquid Equilibrium of Refrigerant Mixtures Containing Carbon Dioxide

    NASA Astrophysics Data System (ADS)

    Akasaka, Ryo

    This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. Therefore, the model can be applied to mixtures where experimental data is limited. Vapor-liquid equilibrium (VLE) of the following seven mixtures have been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2- tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations in calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for design and simulations of heat pumps and refrigeration systems using the mixtures as working fluid.

  3. Concordance measure and discriminatory accuracy in transformation cure models.

    PubMed

    Zhang, Yilong; Shao, Yongzhao

    2018-01-01

Many populations of early-stage cancer patients have non-negligible latent cure fractions that can be modeled using transformation cure models. However, there is a lack of statistical metrics to evaluate the prognostic utility of biomarkers in this context due to the challenges associated with unknown cure status and heavy censoring. In this article, we develop general concordance measures as evaluation metrics for the discriminatory accuracy of transformation cure models, including the so-called promotion time cure models and mixture cure models. We introduce explicit formulas for the consistent estimates of the concordance measures, and show that their asymptotically normal distributions do not depend on the unknown censoring distribution. The estimates work for both parametric and semiparametric transformation models as well as transformation cure models. Numerical feasibility of the estimates and their robustness to the censoring distributions are illustrated via simulation studies and demonstrated using a melanoma data set. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Different Approaches to Covariate Inclusion in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Li, Tongyun; Jiao, Hong; Macready, George B.

    2016-01-01

    The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…

  5. A compressibility based model for predicting the tensile strength of directly compressed pharmaceutical powder mixtures.

    PubMed

    Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J

    2017-10-05

    A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Self-healing properties of recycled asphalt mixtures containing metal waste: An approach through microwave radiation heating.

    PubMed

    González, A; Norambuena-Contreras, J; Storey, L; Schlangen, E

    2018-05-15

The concept of self-healing asphalt mixtures via bitumen temperature increase has been used by researchers to create asphalt mixtures with crack-healing properties through microwave or induction heating. Metals, normally steel wool fibers (SWF), are added to asphalt mixtures prepared with virgin materials to absorb and conduct thermal energy. Metal shavings, a waste material from the metal industry, could be used to replace SWF. In addition, reclaimed asphalt pavement (RAP) could be added to these mixtures to make a more sustainable road material. This research aimed to evaluate the effect of adding metal shavings and RAP on the properties of asphalt mixtures with crack-healing capabilities by microwave heating. The research indicates that metal shavings have an irregular shape, with widths larger than those of the typical SWF used for asphalt self-healing purposes. The general effect of adding metal shavings was an improvement in the crack-healing of asphalt mixtures, while adding RAP to mixtures with metal shavings reduced the healing. The average surface temperature of the asphalt samples after microwave heating was higher than the temperatures obtained by induction heating, indicating that shavings are more efficient when mixtures are heated by microwave radiation. CT scan analysis showed that shavings distribute uniformly in the mixture, and the addition of metal shavings increases the air voids. Overall, it is concluded that asphalt mixtures with RAP and waste metal shavings have the potential to be crack-healed by microwave heating. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Hydrogen isotope separation utilizing bulk getters

    DOEpatents

    Knize, R.J.; Cecchi, J.L.

    1991-08-20

    Tritium and deuterium are separated from a gaseous mixture thereof, derived from a nuclear fusion reactor or some other source, by providing a casing with a bulk getter therein for absorbing the gaseous mixture to produce an initial loading of the getter, partially desorbing the getter to produce a desorbed mixture which is tritium-enriched, pumping the desorbed mixture into a separate container, the remaining gaseous loading in the getter being deuterium-enriched, desorbing the getter to a substantially greater extent to produce a deuterium-enriched gaseous mixture, and removing the deuterium-enriched mixture into another container. The bulk getter may comprise a zirconium-aluminum alloy, or a zirconium-vanadium-iron alloy. The partial desorption may reduce the loading by approximately fifty percent. The basic procedure may be extended to produce a multistage isotope separator, including at least one additional bulk getter into which the tritium-enriched mixture is absorbed. The second getter is then partially desorbed to produce a desorbed mixture which is further tritium-enriched. The last-mentioned mixture is then removed from the container for the second getter, which is then desorbed to a substantially greater extent to produce a desorbed mixture which is deuterium-enriched. The last-mentioned mixture is then removed so that the cycle can be continued and repeated. The method of isotope separation is also applicable to other hydrogen isotopes, in that the method can be employed for separating either deuterium or tritium from normal hydrogen. 4 figures.

  8. Hydrogen isotope separation utilizing bulk getters

    DOEpatents

    Knize, Randall J.; Cecchi, Joseph L.

    1991-01-01

    Tritium and deuterium are separated from a gaseous mixture thereof, derived from a nuclear fusion reactor or some other source, by providing a casing with a bulk getter therein for absorbing the gaseous mixture to produce an initial loading of the getter, partially desorbing the getter to produce a desorbed mixture which is tritium-enriched, pumping the desorbed mixture into a separate container, the remaining gaseous loading in the getter being deuterium-enriched, desorbing the getter to a substantially greater extent to produce a deuterium-enriched gaseous mixture, and removing the deuterium-enriched mixture into another container. The bulk getter may comprise a zirconium-aluminum alloy, or a zirconium-vanadium-iron alloy. The partial desorption may reduce the loading by approximately fifty percent. The basic procedure may be extended to produce a multistage isotope separator, including at least one additional bulk getter into which the tritium-enriched mixture is absorbed. The second getter is then partially desorbed to produce a desorbed mixture which is further tritium-enriched. The last-mentioned mixture is then removed from the container for the second getter, which is then desorbed to a substantially greater extent to produce a desorbed mixture which is deuterium-enriched. The last-mentioned mixture is then removed so that the cycle can be continued and repeated. The method of isotope separation is also applicable to other hydrogen isotopes, in that the method can be employed for separating either deuterium or tritium from normal hydrogen.

  9. Hydrogen isotope separation utilizing bulk getters

    DOEpatents

    Knize, Randall J.; Cecchi, Joseph L.

    1990-01-01

    Tritium and deuterium are separated from a gaseous mixture thereof, derived from a nuclear fusion reactor or some other source, by providing a casing with a bulk getter therein for absorbing the gaseous mixture to produce an initial loading of the getter, partially desorbing the getter to produce a desorbed mixture which is tritium-enriched, pumping the desorbed mixture into a separate container, the remaining gaseous loading in the getter being deuterium-enriched, desorbing the getter to a substantially greater extent to produce a deuterium-enriched gaseous mixture, and removing the deuterium-enriched mixture into another container. The bulk getter may comprise a zirconium-aluminum alloy, or a zirconium-vanadium-iron alloy. The partial desorption may reduce the loading by approximately fifty percent. The basic procedure may be extended to produce a multistage isotope separator, including at least one additional bulk getter into which the tritium-enriched mixture is absorbed. The second getter is then partially desorbed to produce a desorbed mixture which is further tritium-enriched. The last-mentioned mixture is then removed from the container for the second getter, which is then desorbed to a substantially greater extent to produce a desorbed mixture which is deuterium-enriched. The last-mentioned mixture is then removed so that the cycle can be continued and repeated. The method of isotope separation is also applicable to other hydrogen isotopes, in that the method can be employed for separating either deuterium or tritium from normal hydrogen.

  10. An integrative approach to assess X-chromosome inactivation using allele-specific expression with applications to epithelial ovarian cancer.

    PubMed

    Larson, Nicholas B; Fogarty, Zachary C; Larson, Melissa C; Kalli, Kimberly R; Lawrenson, Kate; Gayther, Simon; Fridley, Brooke L; Goode, Ellen L; Winham, Stacey J

    2017-12-01

    X-chromosome inactivation (XCI) epigenetically silences transcription of an X chromosome in females; patterns of XCI are thought to be aberrant in women's cancers, but are understudied due to statistical challenges. We develop a two-stage statistical framework to assess skewed XCI and evaluate gene-level patterns of XCI for an individual sample by integration of RNA sequence, copy number alteration, and genotype data. Our method relies on allele-specific expression (ASE) to directly measure XCI and does not rely on male samples or paired normal tissue for comparison. We model ASE using a two-component mixture of beta distributions, allowing estimation for a given sample of the degree of skewness (based on a composite likelihood ratio test) and the posterior probability that a given gene escapes XCI (using a Bayesian beta-binomial mixture model). To illustrate the utility of our approach, we applied these methods to data from tumors of ovarian cancer patients. Among 99 patients, 45 tumors were informative for analysis and showed evidence of XCI skewed toward a particular parental chromosome. For 397 X-linked genes, we observed tumor XCI patterns largely consistent with previously identified consensus states based on multiple normal tissue types. However, 37 genes differed in XCI state between ovarian tumors and the consensus state; 17 genes aberrantly escaped XCI in ovarian tumors (including many oncogenes), whereas 20 genes were unexpectedly inactivated in ovarian tumors (including many tumor suppressor genes). These results provide evidence of the importance of XCI in ovarian cancer and demonstrate the utility of our two-stage analysis. © 2017 WILEY PERIODICALS, INC.
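
    A hedged sketch of the gene-level classification step, assuming a two-component beta-binomial mixture for allele-specific read counts; the component parameters and the prior escape probability below are hypothetical placeholders, not the paper's fitted values.

      # Posterior probability that a gene escapes XCI under a two-component
      # beta-binomial mixture for allele-specific expression (ASE) read counts.
      from scipy.stats import betabinom

      a1, b1 = 9.0, 1.0        # "inactivated": ASE skewed toward one allele (hypothetical)
      a2, b2 = 5.0, 5.0        # "escape": ASE closer to balanced (hypothetical)
      prior_escape = 0.2       # hypothetical prior weight of the escape component

      def posterior_escape(ref_reads, total_reads):
          lik_inact = betabinom.pmf(ref_reads, total_reads, a1, b1)
          lik_escape = betabinom.pmf(ref_reads, total_reads, a2, b2)
          num = prior_escape * lik_escape
          return num / (num + (1 - prior_escape) * lik_inact)

      print(posterior_escape(48, 100))   # balanced ASE -> high escape probability
      print(posterior_escape(95, 100))   # skewed ASE -> low escape probability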

  11. Trajectories of depressive and anxiety symptoms in older adults: a 6-year prospective cohort study.

    PubMed

    Holmes, Sophie E; Esterlis, Irina; Mazure, Carolyn M; Lim, Yen Ying; Ames, David; Rainey-Smith, Stephanie; Fowler, Chris; Ellis, Kathryn; Martins, Ralph N; Salvado, Olivier; Doré, Vincent; Villemagne, Victor L; Rowe, Christopher C; Laws, Simon M; Masters, Colin L; Pietrzak, Robert H; Maruff, Paul

    2018-02-01

    Depressive and anxiety symptoms are common in older adults, significantly affect quality of life, and are risk factors for Alzheimer's disease. We sought to identify the determinants of predominant trajectories of depressive and anxiety symptoms in cognitively normal older adults. Four hundred twenty-three older adults recruited from the general community underwent Aβ positron emission tomography imaging, apolipoprotein and brain-derived neurotrophic factor genotyping, and cognitive testing at baseline and had follow-up assessments. All participants were cognitively normal and free of clinical depression at baseline. Latent growth mixture modeling was used to identify predominant trajectories of subthreshold depressive and anxiety symptoms over 6 years. Binary logistic regression analysis was used to identify baseline predictors of symptomatic depressive and anxiety trajectories. Latent growth mixture modeling revealed two predominant trajectories of depressive and anxiety symptoms: a chronically elevated trajectory and a low, stable symptom trajectory, with almost one in five participants falling into the elevated trajectory groups. Male sex (relative risk ratio (RRR) = 3.23), lower attentional function (RRR = 1.90), and carriage of the brain-derived neurotrophic factor Val66Met allele in women (RRR = 2.70) were associated with increased risk for chronically elevated depressive symptom trajectory. Carriage of the apolipoprotein epsilon 4 allele (RRR = 1.92) and lower executive function in women (RRR = 1.74) were associated with chronically elevated anxiety symptom trajectory. Our results indicate distinct and sex-specific risk factors linked to depressive and anxiety trajectories, which may help inform risk stratification and management of these symptoms in older adults at risk for Alzheimer's disease. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.

  12. The intermediates take it all: asymptotics of higher criticism statistics and a powerful alternative based on equal local levels.

    PubMed

    Gontscharuk, Veronika; Landwehr, Sandra; Finner, Helmut

    2015-01-01

The higher criticism (HC) statistic, which can be seen as a normalized version of the famous Kolmogorov-Smirnov statistic, has a long history, dating back to the mid-seventies. Originally, HC statistics were used in connection with goodness of fit (GOF) tests, but they recently gained some attention in the context of testing the global null hypothesis in high dimensional data. The continuing interest in HC seems to be inspired by a series of nice asymptotic properties related to this statistic. For example, unlike Kolmogorov-Smirnov tests, GOF tests based on the HC statistic are known to be asymptotically sensitive in the moderate tails; hence the HC statistic is favorably applied for detecting the presence of signals in sparse mixture models. However, some questions around the asymptotic behavior of the HC statistic are still open. We focus on two of them, namely, why a specific intermediate range is crucial for GOF tests based on the HC statistic and why the convergence of the HC distribution to the limiting one is extremely slow. Moreover, the inconsistency between the asymptotic and finite-sample behavior of the HC statistic prompts us to provide a new HC test that has better finite-sample properties than the original HC test while showing the same asymptotics. This test is motivated by the asymptotic behavior of the so-called local levels related to the original HC test. By means of numerical calculations and simulations we show that the new HC test is typically more powerful than the original HC test in normal mixture models. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
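
    For concreteness, a small Python sketch of a higher criticism statistic computed from p-values. The index range over which the maximum is taken differs between HC variants; the moderate-range restriction below is one common choice, not necessarily the authors'.

      # Higher criticism statistic from p-values (one common variant).
      import numpy as np

      def higher_criticism(pvals, alpha0=0.5):
          p = np.sort(pvals)
          n = p.size
          i = np.arange(1, n + 1)
          hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
          keep = (i <= alpha0 * n) & (p > 1.0 / n)   # restrict to the moderate range
          return hc[keep].max()

      rng = np.random.default_rng(3)
      null_p = rng.uniform(size=10_000)               # global null: uniform p-values
      sparse_p = null_p.copy()
      sparse_p[:30] = rng.uniform(0, 1e-4, 30)        # a few sparse signals
      print(higher_criticism(null_p), higher_criticism(sparse_p))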

  13. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0, 1]; and (2) on a logarithmic scale, the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
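
    A small simulation sketch of the monkey model under the stated construction, with letter probabilities taken as the spacings of a random division of the unit interval; even with a modest alphabet the fitted rank-probability exponent sits near -1. All sizes and cutoffs are illustrative.

      # Monkey-typing Zipf simulation: random letter probabilities from spacings,
      # word probabilities enumerated up to a finite length cutoff.
      import numpy as np
      from itertools import product

      rng = np.random.default_rng(4)
      n_letters, p_space = 8, 0.2
      cuts = np.sort(rng.uniform(size=n_letters - 1))
      letter_p = np.diff(np.concatenate(([0.0], cuts, [1.0]))) * (1 - p_space)

      # enumerate all words up to length 4 and their probabilities
      probs = []
      for L in range(1, 5):
          for word in product(letter_p, repeat=L):
              probs.append(np.prod(word) * p_space)
      probs = np.sort(probs)[::-1]

      ranks = np.arange(1, probs.size + 1)
      slope = np.polyfit(np.log(ranks), np.log(probs), 1)[0]
      print("fitted rank-probability exponent:", round(slope, 2))  # close to -1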

  14. A Voltammetric Electronic Tongue for the Resolution of Ternary Nitrophenol Mixtures

    PubMed Central

    González-Calabuig, Andreu; Cetó, Xavier

    2018-01-01

This work reports the applicability of a voltammetric sensor array able to quantify the content of 2,4-dinitrophenol, 4-nitrophenol, and picric acid in artificial samples using electronic tongue (ET) principles. The ET is based on cyclic voltammetry signals, obtained from an array of metal disk electrodes and a graphite epoxy composite electrode, which are compressed using the discrete wavelet transform and modeled with chemometric tools such as artificial neural networks (ANNs). ANNs were employed to build the quantitative prediction model. In this manner, a set of standards based on a full factorial design, ranging from 0 to 300 mg/L, was prepared to build the model; afterward, the model was validated with a completely independent set of standards. The model successfully predicted the concentration of the three considered phenols with a normalized root mean square error of 0.030 and 0.076 for the training and test subsets, respectively, and r ≥ 0.948. PMID:29342848

  15. Linear mixing model applied to AVHRR LAC data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 micron channel was extracted and used with the two reflective channels, 0.58 - 0.68 microns and 0.725 - 1.1 microns, to run a constrained least squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.

  16. The effect of crystal shape, size and bimodality on the maximum packing and the rheology of crystal bearing magma

    NASA Astrophysics Data System (ADS)

    Moitra, Pranabendu; Gonnermann, Helge

    2014-05-01

Magma often contains crystals of various shapes and sizes. We present experimental results on the effect of the shape and size distribution of solid particles on the rheological properties of solid-liquid suspensions, which are hydrodynamically analogous to crystal-bearing magmas. The suspensions were comprised of either a single particle shape and size (unimodal) or a mixture of two different particle shapes and sizes (bimodal). For each type of suspension we characterized the dry maximum packing fraction of the particle mixture using the tap density method. We then systematically varied the total volume fraction of particles in the suspension, as well as the relative proportion of the two different particle types in the bimodal suspensions. For each of the resultant mixtures (suspensions) we performed controlled shear stress experiments using a rotational rheometer in parallel-plate geometry, spanning 4 orders of magnitude in shear stress. The resultant curves of shear stress as a function of shear rate were fitted using a Herschel-Bulkley rheological model. We find that the dry maximum packing decreases with increasing particle aspect ratio (ar) and decreasing particle size ratio (Λ). The highest dry maximum packing was obtained at 60-75% volume of larger particles for the bimodal spherical particle mixture. The normalized consistency, Kr, defined as the ratio of the consistency of the suspension to the viscosity of the suspending liquid, was fitted using a Krieger-Dougherty model as a function of the total solid volume fraction (φ). The maximum packing fractions (φm) obtained from fitting the shear experimental data for the unimodal suspensions were similar in magnitude to the dry maximum packing fractions of the unimodal particles. Subsequently, we used the dry maximum packing fractions of the bimodal particle mixtures to fit Kr as a function of φ for the bimodal suspensions. We find that Kr increases rapidly for suspensions with larger ar and smaller Λ. We also find that both the apparent yield stress and the shear thinning behavior of the suspensions increase with increasing ar and become significant at φ/φm ≥ 0.4.
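
    A minimal Python sketch of the flow-curve fitting step, assuming the Herschel-Bulkley form tau = tau_y + K*gammadot^n; the synthetic data below stand in for the rheometer measurements described above.

      # Fitting a Herschel-Bulkley model to a (synthetic) flow curve.
      import numpy as np
      from scipy.optimize import curve_fit

      def herschel_bulkley(gammadot, tau_y, K, n):
          return tau_y + K * gammadot ** n

      gammadot = np.logspace(-2, 2, 25)                 # shear rates, 1/s
      tau = herschel_bulkley(gammadot, 5.0, 2.0, 0.7)   # "true" curve (illustrative)
      noise = 0.03 * np.random.default_rng(5).normal(size=tau.size)
      tau_noisy = tau * (1 + noise)

      params, _ = curve_fit(herschel_bulkley, gammadot, tau_noisy, p0=[1.0, 1.0, 1.0])
      print("tau_y = %.2f Pa, K = %.2f Pa s^n, n = %.2f" % tuple(params))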

  17. Method for removing sulfur oxide from waste gases and recovering elemental sulfur

    DOEpatents

    Moore, Raymond H.

    1977-01-01

    A continuous catalytic fused salt extraction process is described for removing sulfur oxides from gaseous streams. The gaseous stream is contacted with a molten potassium sulfate salt mixture having a dissolved catalyst to oxidize sulfur dioxide to sulfur trioxide and molten potassium normal sulfate to solvate the sulfur trioxide to remove the sulfur trioxide from the gaseous stream. A portion of the sulfur trioxide loaded salt mixture is then dissociated to produce sulfur trioxide gas and thereby regenerate potassium normal sulfate. The evolved sulfur trioxide is reacted with hydrogen sulfide as in a Claus reactor to produce elemental sulfur. The process may be advantageously used to clean waste stack gas from industrial plants, such as copper smelters, where a supply of hydrogen sulfide is readily available.

  18. Off-line real-time FTIR analysis of a process step in imipenem production

    NASA Astrophysics Data System (ADS)

    Boaz, Jhansi R.; Thomas, Scott M.; Meyerhoffer, Steven M.; Staskiewicz, Steven J.; Lynch, Joseph E.; Egan, Richard S.; Ellison, Dean K.

    1992-08-01

We have developed an FT-IR method, using a Spectra-Tech Monit-IR 400 system, to monitor off-line, in real time, the completion of a reaction. The reaction is moisture-sensitive, and analysis by more conventional methods (normal-phase HPLC) is difficult to reproduce. The FT-IR method is based on the shift of a diazo band when a conjugated beta-diketone is transformed into a silyl enol ether during the reaction. The reaction mixture is examined directly by IR and does not require sample workup. Data acquisition time is less than one minute. The method has been validated for specificity, precision and accuracy. The results obtained by the FT-IR method for known mixtures and in-process samples compare favorably with those from a normal-phase HPLC method.

  19. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), which is the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support the model receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
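
    A simplified Python sketch of the GMIS idea on a toy conjugate-normal problem: fit a Gaussian mixture to posterior samples and use it as the importance density for the marginal likelihood. This is plain importance sampling rather than the full bridge-sampling estimator, and the direct posterior draws stand in for DREAM output.

      # Evidence estimation with a Gaussian-mixture importance density.
      import numpy as np
      from scipy.stats import norm
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(6)
      data = rng.normal(1.0, 1.0, size=50)          # y_i ~ N(theta, 1)

      def log_prior(theta):                         # theta ~ N(0, 10)
          return norm.logpdf(theta, 0.0, 10.0)

      def log_lik(theta):
          return norm.logpdf(data[:, None], theta, 1.0).sum(axis=0)

      # stand-in for DREAM output: draws from the known conjugate posterior
      post_var = 1.0 / (1.0 / 100 + len(data))
      post_mean = post_var * data.sum()
      samples = rng.normal(post_mean, np.sqrt(post_var), size=(5000, 1))

      gm = GaussianMixture(n_components=2).fit(samples)
      theta_q = gm.sample(20_000)[0].ravel()        # draws from the mixture proposal
      log_w = log_prior(theta_q) + log_lik(theta_q) - gm.score_samples(theta_q[:, None])
      log_evidence = log_w.max() + np.log(np.exp(log_w - log_w.max()).mean())
      print("estimated log-evidence:", log_evidence)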

  20. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    PubMed

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability.

  1. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    PubMed Central

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability. PMID:26941699

  2. Easy-interactive and quick psoriasis lesion segmentation

    NASA Astrophysics Data System (ADS)

    Ma, Guoli; He, Bei; Yang, Wenming; Shu, Chang

    2013-12-01

This paper proposes an interactive psoriasis lesion segmentation algorithm based on the Gaussian Mixture Model (GMM). Psoriasis is an incurable skin disease that affects a large population in the world. PASI (Psoriasis Area and Severity Index) is the gold standard utilized by dermatologists to monitor the severity of psoriasis. Computer-aided methods of calculating PASI are more objective and accurate than human visual assessment, and psoriasis lesion segmentation is the basis of the whole calculation. This segmentation is different from common foreground/background segmentation problems. Our algorithm is inspired by GrabCut and consists of three main stages. First, the skin area is extracted from the background scene by transforming the RGB values into the YCbCr color space. Second, a rough segmentation of normal skin and psoriasis lesion is given. This is an initial segmentation obtained by thresholding a single Gaussian model; the thresholds are adjustable, which enables user interaction. Third, two GMMs, one for the initial normal skin and one for the psoriasis lesion, are built to refine the segmentation. Experimental results demonstrate the effectiveness of the proposed algorithm.
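
    A hedged Python sketch of the three-stage pipeline described above. The YCbCr skin bounds, the redness-based initial split, and the GMM component counts are illustrative choices, not the authors' settings.

      # Three-stage lesion segmentation: YCbCr skin mask, threshold-based
      # initial split, then two GMMs refined by likelihood comparison.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      def segment(rgb):                      # rgb: (H, W, 3) floats in [0, 255]
          # stage 1: skin mask from fixed YCbCr bounds (illustrative bounds)
          ycbcr = rgb @ np.array([[0.299, -0.169, 0.5],
                                  [0.587, -0.331, -0.419],
                                  [0.114, 0.5, -0.081]])
          ycbcr[..., 1:] += 128
          skin = (ycbcr[..., 1] > 77) & (ycbcr[..., 1] < 127) & \
                 (ycbcr[..., 2] > 133) & (ycbcr[..., 2] < 173)

          # stage 2: rough lesion/normal split by an adjustable redness threshold
          redness = rgb[..., 0] - rgb[..., 1]
          init_lesion = skin & (redness > np.percentile(redness[skin], 75))

          # stage 3: one GMM per class, refine labels by comparing likelihoods
          gm_normal = GaussianMixture(n_components=3).fit(rgb[skin & ~init_lesion])
          gm_lesion = GaussianMixture(n_components=3).fit(rgb[init_lesion])
          refined = np.zeros(rgb.shape[:2], bool)
          refined[skin] = gm_lesion.score_samples(rgb[skin]) > \
                          gm_normal.score_samples(rgb[skin])
          return refined                     # usage: mask = segment(image_array)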

  3. Spin Imbalanced Quasi-Two-Dimensional Fermi Gases

    NASA Astrophysics Data System (ADS)

    Ong, Willie C.

Spin-imbalanced Fermi gases serve as a testbed for fundamental notions and are efficient table-top emulators of a variety of quantum matter ranging from neutron stars, the quark-gluon plasma, to high critical temperature superconductors. A macroscopic quantum phenomenon which occurs in spin-imbalanced Fermi gases is that of phase separation; in three dimensions, a spin-balanced, fully-paired superfluid core is surrounded by an imbalanced normal-fluid shell, followed by a fully polarized shell. In one dimension, the behavior is reversed; a balanced phase appears outside a spin-imbalanced core. This thesis details the first density profile measurements and studies on spin-imbalanced quasi-2D Fermi gases, accomplished with high-resolution, rapid sequential spin-imaging. The measured cloud radii and central densities are in disagreement with mean-field Bardeen-Cooper-Schrieffer theory for a 2D system. Data for normal-fluid mixtures are well fit by a simple 2D polaron model of the free energy. Not predicted by the model is an observed phase transition to a spin-balanced central core above a critical polarization.

  4. Solubility modeling of refrigerant/lubricant mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michels, H.H.; Sienel, T.H.

    1996-12-31

    A general model for predicting the solubility properties of refrigerant/lubricant mixtures has been developed based on applicable theory for the excess Gibbs energy of non-ideal solutions. In our approach, flexible thermodynamic forms are chosen to describe the properties of both the gas and liquid phases of refrigerant/lubricant mixtures. After an extensive study of models for describing non-ideal liquid effects, the Wohl-suffix equations, which have been extensively utilized in the analysis of hydrocarbon mixtures, have been developed into a general form applicable to mixtures where one component is a POE lubricant. In the present study we have analyzed several POEs where structural and thermophysical property data were available. Data were also collected from several sources on the solubility of refrigerant/lubricant binary pairs. We have developed a computer code (NISC), based on the Wohl model, that predicts dew point or bubble point conditions over a wide range of composition and temperature. Our present analysis covers mixtures containing up to three refrigerant molecules and one lubricant. The present code can be used to analyze the properties of R-410a and R-407c in mixtures with a POE lubricant. Comparisons with other models, such as the Wilson or modified Wilson equations, indicate that the Wohl-suffix equations yield more reliable predictions for HFC/POE mixtures.

  5. Wavelength and energy dependent absorption of unconventional fuel mixtures

    NASA Astrophysics Data System (ADS)

    Khan, N.; Saleem, Z.; Mirza, A. A.

    2005-11-01

    Economic advantages of laser-induced ignition over normal electrical ignition in direct-injected compressed natural gas (CNG) engines have motivated the automobile industry to pursue extensive research on the basic characteristics of leaner unconventional fuel mixtures, in order to evaluate the practical possibility of switching over to the emerging technologies. This paper briefly reviews ongoing research on the minimum ignition energy and power requirements of natural gas fuels and reports the results of the present study of laser absorption coefficients in air/CNG mixtures. The study was arranged to determine the thermo-optical characteristics of high air/fuel ratio mixtures using laser techniques. We measured the absorption coefficient using four lasers of multiple wavelengths over a wide range of temperatures and pressures. The absorption coefficient of the mixture was found to vary significantly with mixture temperature and probe laser wavelength. The absorption coefficients of air/CNG mixtures were measured using a 20 W CW/pulsed CO2 laser at 10.6 μm, a pulsed Nd:YAG laser at 1.06 μm and 532 nm (2nd harmonic), and a 4 mW CW HeNe laser at 645 nm and 580 nm, for temperatures varying from 290 to 1000 K, using an optical transmission loss technique.

  6. Review of Statistical Methods for Analysing Healthcare Resources and Costs

    PubMed Central

    Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G

    2011-01-01

    We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their suitability for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of the above data characteristics, may be preferable, but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344

  7. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    USGS Publications Warehouse

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
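
    For readers unfamiliar with the model being fit, the sketch below evaluates the N-mixture likelihood for a single site by summing the latent abundance N out of a Poisson-binomial hierarchy. The truncation bound K and all numbers are illustrative; the Bayesian treatment discussed in the paper additionally places priors on lambda and p.

    ```python
    # Sketch: N-mixture likelihood for one site with T repeated counts.
    import numpy as np
    from scipy.stats import poisson, binom

    def site_likelihood(counts, lam, p, K=200):
        counts = np.asarray(counts)
        Ns = np.arange(counts.max(), K + 1)      # feasible latent abundances
        prior = poisson.pmf(Ns, lam)             # P(N | lambda)
        # P(y_1..y_T | N, p), broadcast over the candidate values of N
        detect = np.prod(binom.pmf(counts[:, None], Ns[None, :], p), axis=0)
        return np.sum(prior * detect)            # marginalize over N

    print(site_likelihood([3, 5, 4], lam=6.0, p=0.7))
    ```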

  8. 49 CFR 173.313 - UN Portable Tank Table for Liquefied Compressed Gases.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    [Table excerpt, garbled in extraction: rows of the UN Portable Tank Table for liquefied compressed gases, including UN 1012 Butylene, UN 1017 Chlorine, UN 1041 Ethylene oxide and carbon dioxide mixture with more than 9% ethylene oxide, and UN 1079 Sulphur dioxide, with minimum test pressures, filling limits, and applicable provisions under §§ 173.32(f) and 178.276.]

  9. 49 CFR 173.313 - UN Portable Tank Table for Liquefied Compressed Gases.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    [Table excerpt, garbled in extraction: rows of the UN Portable Tank Table for liquefied compressed gases, including UN 1012 Butylene, UN 1017 Chlorine, UN 1041 Ethylene oxide and carbon dioxide mixture with more than 9% ethylene oxide, and UN 1079 Sulphur dioxide, with minimum test pressures, filling limits, and applicable provisions under §§ 173.32(f) and 178.276.]

  10. Process Dissociation and Mixture Signal Detection Theory

    ERIC Educational Resources Information Center

    DeCarlo, Lawrence T.

    2008-01-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely…

  11. Investigating Approaches to Estimating Covariate Effects in Growth Mixture Modeling: A Simulation Study

    ERIC Educational Resources Information Center

    Li, Ming; Harring, Jeffrey R.

    2017-01-01

    Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…

  12. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  13. Linear modeling of the soil-water partition coefficient normalized to organic carbon content by reversed-phase thin-layer chromatography.

    PubMed

    Andrić, Filip; Šegan, Sandra; Dramićanin, Aleksandra; Majstorović, Helena; Milojković-Opsenica, Dušanka

    2016-08-05

    The soil-water partition coefficient normalized to the organic carbon content (KOC) is one of the crucial properties influencing the fate of organic compounds in the environment. Chromatographic methods are a well-established alternative to the direct sorption techniques used for KOC determination. The present work proposes reversed-phase thin-layer chromatography (RP-TLC) as a simpler, yet equally accurate, method compared with the officially recommended HPLC technique. Several TLC systems were studied, including octadecyl-(RP18) and cyano-(CN) modified silica layers in combination with methanol-water and acetonitrile-water mixtures as mobile phases. In total, 50 compounds of different molecular shape and size and various ability to establish specific interactions were selected (phenols, benzodiazepines, triazine herbicides, and polyaromatic hydrocarbons). A calibration set of 29 compounds with known logKOC values determined by sorption experiments was used to build simple univariate calibrations, Principal Component Regression (PCR), and Partial Least Squares (PLS) models between logKOC and TLC retention parameters. The models exhibit good statistical performance, indicating that CN-layers contribute better to logKOC modeling than RP18-silica. The most promising TLC methods, the officially recommended HPLC method, and four in silico estimation approaches were compared by the non-parametric Sum of Ranking Differences (SRD) approach. The best estimates of logKOC values were achieved by simple univariate calibration of TLC retention data involving CN-silica layers and a moderate content of methanol (40-50% v/v); these ranked far better than the officially recommended HPLC method, which placed mid-ranking. The worst estimates were obtained from in silico computations based on the octanol-water partition coefficient. A Linear Solvation Energy Relationship study revealed that the increased polarity of CN-layers over RP18, in combination with methanol-water mixtures, is the key to better modeling of logKOC, through a significant reduction of the dipolar and proton-accepting influence of the mobile phase as well as an enhanced role of excess molar refraction in the chromatographic systems. Copyright © 2016 Elsevier B.V. All rights reserved.
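
    As a rough illustration of the calibration workflow (a PLS model between TLC retention parameters and logKOC, validated leave-one-out), the sketch below uses synthetic stand-in data; the number of latent variables and every array here are placeholders rather than the published model.

    ```python
    # Sketch: PLS calibration with leave-one-out cross-validation.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(29, 6))      # retention parameters for 29 calibrants (synthetic)
    y = X @ np.array([1.2, -0.4, 0.0, 0.3, 0.0, 0.1]) + rng.normal(0, 0.2, 29)

    pls = PLSRegression(n_components=2)
    y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut())   # leave-one-out predictions
    rmsd = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))
    print(f"LOO RMSD: {rmsd:.3f}")
    ```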

  14. Odourant dominance in olfactory mixture processing: what makes a strong odourant?

    PubMed Central

    Schubert, Marco; Sandoz, Jean-Christophe; Galizia, Giovanni; Giurfa, Martin

    2015-01-01

    The question of how animals process stimulus mixtures remains controversial, as opposing views propose that mixtures are processed analytically, as the sum of their elements, or holistically, as unique entities different from their elements. Overshadowing is a widespread phenomenon that can help decide between these alternatives. In overshadowing, an individual trained with a binary mixture learns one element better at the expense of the other. Although element salience (learning success) has been suggested as a main explanation for overshadowing, the mechanisms underlying this phenomenon remain unclear. We studied olfactory overshadowing in honeybees to uncover the mechanisms underlying olfactory-mixture processing. We provide, to our knowledge, the most comprehensive dataset on overshadowing to date, based on 90 experimental groups involving more than 2700 bees trained either with six odourants or with their resulting 15 binary mixtures. We found that bees process olfactory mixtures analytically and that salience alone cannot predict overshadowing. After normalizing learning success, we found that an unexpected feature, the generalization profile of an odourant, was determinant for overshadowing. Odourants that induced less generalization enhanced their distinctiveness and became dominant in the mixture. Our study thus uncovers features that determine odourant dominance within olfactory mixtures and allows this phenomenon to be related to differences in neural activity at both the receptor and the central level in the insect nervous system. PMID:25652840

  15. Approximation of the breast height diameter distribution of two-cohort stands by mixture models I Parameter estimation

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2013-01-01

    This study assessed the usefulness of various methods for choosing initial values for the numerical procedures that estimate the parameters of mixture distributions, and analysed a variety of mixture models for approximating empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...

  16. Cure models for estimating hospital-based breast cancer survival.

    PubMed

    Rama, Ranganathan; Swaminathan, Rajaraman; Venkatesan, Perumal

    2010-01-01

    Research on cancer survival is enriched by the development and application of innovative analytical approaches alongside standard methods. The aim of the present paper is to document the utility of a mixture model to estimate the cure fraction and to compare it with other approaches. The data were for 1,107 patients with locally advanced breast cancer who completed the neo-adjuvant treatment protocol during 1990-99 at the Cancer Institute (WIA), Chennai, India. Tumour stage, post-operative pathological node (PN) status and tumour residue (TR) status were studied. Event-free survival probability was estimated using the Kaplan-Meier method. Cure models under proportional and non-proportional hazard assumptions, with a log-normal distribution for survival time, were used to estimate both the cure fraction and the survival function for the uncured. Event-free survival at 5 and 10 years was 64.2% and 52.6% respectively, and the cure fraction was 47.5% for all cases together. Follow-up ranged between 0 and 15 years, and survival probabilities showed minimal changes after 7 years of follow-up. TR and PN emerged as independent prognostic factors using Cox and proportional hazard (PH) cure models. The proportionality condition was violated when tumour stage was considered, and tumour stage was statistically significant only under the PH and not under the non-PH cure model. However, TR and PN continued to be independent prognostic factors after adjusting for tumour stage using the non-PH cure model. A consistent ordering of cure fractions with respect to PN and TR was forthcoming across tumour stages using PH and non-PH cure models, but perceptible differences in survival were observed between the two. If PH conditions are violated, analysis using a non-PH model is advocated; mixture cure models are useful for estimating the cure fraction and constructing survival curves for the uncured.
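
    The sketch below illustrates the core of a mixture cure likelihood with log-normal survival for the uncured, S(t) = pi + (1 - pi) * S_u(t), maximized over the cure fraction pi. It uses toy data, no covariates, and no PH structure, so it is only a schematic of the models compared in the paper.

    ```python
    # Sketch: maximum likelihood for a log-normal mixture cure model (no covariates).
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    def neg_log_lik(params, t, event):
        pi, mu, log_sigma = params
        sigma = np.exp(log_sigma)
        z = (np.log(t) - mu) / sigma
        f_u = norm.pdf(z) / (sigma * t)      # log-normal density for the uncured
        S_u = norm.sf(z)                     # log-normal survival for the uncured
        f = (1 - pi) * f_u                   # only the uncured experience events
        S = pi + (1 - pi) * S_u
        return -np.sum(event * np.log(f) + (1 - event) * np.log(S))

    t = np.array([1.2, 3.4, 0.8, 10.0, 12.0, 2.2, 9.5])   # toy follow-up times
    d = np.array([1, 1, 1, 0, 0, 1, 0])                   # 1 = event, 0 = censored
    fit = minimize(neg_log_lik, x0=[0.4, 1.0, 0.0], args=(t, d),
                   bounds=[(0.01, 0.99), (None, None), (None, None)])
    print("estimated cure fraction:", fit.x[0])
    ```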

  17. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach.

    Treesearch

    Rafal Podlaski; Francis Roesch

    2014-01-01

    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...

  18. A general mixture model and its application to coastal sandbar migration simulation

    NASA Astrophysics Data System (ADS)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles separately: a modified k-ɛ model is used to describe the fluid turbulence, while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solutions. The model is validated against suspended sediment motion in steady open channel flows, in both equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity, and turbulence kinetic energy of the mixture all agree well with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework, coupled with a VOF method for the description of the water-air free surface and a topography-change model. The bed load transport rate and suspended load entrainment rate are both determined by the seabed shear stress, which is obtained from the boundary-layer-resolving mixture model. The simulation results indicate that, under small-amplitude regular waves, erosion occurs on the sandbar slope facing against the wave propagation direction, while deposition dominates on the slope facing towards wave propagation, indicating an onshore migration tendency. The computed results also show that the suspended load makes a substantial contribution to topography change in the surf zone, a contribution usually neglected in previous research.

  19. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio, E-mail: drpqam@cid.csic.es

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing the thyroid hormone synthesis. The present study used the intrafollicular T4-content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas the response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO-inhibitors and sodium-iodide symporter (NIS)-inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO-inhibitor) + KClO4 (NIS-inhibitor) dosed at a fixed ratio of EC10, which yielded similar CA and RA predictions, so no conclusive result could be obtained. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of mixtures of goitrogens.
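
    A hedged sketch of the two competing mixture predictions follows, assuming log-logistic (Hill) concentration-response curves for each component; the EC50 and slope values are invented placeholders, not the paper's fitted parameters.

    ```python
    # Sketch: concentration addition (CA) vs response addition (RA) for a binary mixture.
    import numpy as np
    from scipy.optimize import brentq

    def effect(c, ec50, h):
        return c**h / (ec50**h + c**h)       # fractional effect in [0, 1)

    def ra_effect(c1, c2, p1, p2):
        e1, e2 = effect(c1, *p1), effect(c2, *p2)
        return e1 + e2 - e1 * e2             # independent (response) addition

    def ca_effect(c1, c2, p1, p2):
        # CA: the joint effect E satisfies c1/EC_E1 + c2/EC_E2 = 1,
        # where EC_E = ec50 * (E / (1 - E))**(1 / h) is the iso-effective concentration
        def g(E):
            ec1 = p1[0] * (E / (1 - E)) ** (1 / p1[1])
            ec2 = p2[0] * (E / (1 - E)) ** (1 / p2[1])
            return c1 / ec1 + c2 / ec2 - 1.0
        return brentq(g, 1e-9, 1 - 1e-9)

    p_tpo, p_nis = (1.0, 1.5), (4.0, 2.0)    # (EC50, Hill slope), illustrative only
    print("CA prediction:", ca_effect(0.5, 2.0, p_tpo, p_nis))
    print("RA prediction:", ra_effect(0.5, 2.0, p_tpo, p_nis))
    ```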

  20. Bayesian analysis and classification of two Enzyme-Linked Immunosorbent Assay (ELISA) tests without a gold standard

    PubMed Central

    Zhang, Jingyang; Chaloner, Kathryn; McLinden, James H.; Stapleton, Jack T.

    2013-01-01

    Reconciling two quantitative ELISA tests for an antibody to an RNA virus, in a situation without a gold standard and where false negatives may occur, is the motivation for this work. False negatives occur when access of the antibody to the binding site is blocked. Based on the mechanism of the assay, a mixture of four bivariate normal distributions is proposed with the mixture probabilities depending on a two-stage latent variable model including the prevalence of the antibody in the population and the probabilities of blocking on each test. There is prior information on the prevalence of the antibody, and also on the probability of false negatives, and so a Bayesian analysis is used. The dependence between the two tests is modeled to be consistent with the biological mechanism. Bayesian decision theory is utilized for classification. The proposed method is applied to the motivating data set to classify the data into two groups: those with and those without the antibody. Simulation studies describe the properties of the estimation and the classification. Sensitivity to the choice of the prior distribution is also addressed by simulation. The same model with two levels of latent variables is applicable in other testing procedures such as quantitative polymerase chain reaction tests where false negatives occur when there is a mutation in the primer sequence. PMID:23592433

  1. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    PubMed

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  2. Domain wall suppression in trapped mixtures of Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Pepe, Francesco V.; Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio

    2012-08-01

    The ground-state energy of a binary mixture of Bose-Einstein condensates can be estimated for large atomic samples by making use of suitably regularized Thomas-Fermi density profiles. By exploiting a variational method on the trial densities the energy can be computed by explicitly taking into account the normalization condition. This yields analytical results and provides the basis for further improvement of the approximation. As a case study, we consider a binary mixture of 87Rb atoms in two different hyperfine states in a double-well potential and discuss the energy crossing between density profiles with different numbers of domain walls, as the number of particles and the interspecies interaction vary.

  3. Single-Particle Properties of a Strongly Interacting Bose-Fermi Mixture Above the BEC Phase Transition Temperature

    NASA Astrophysics Data System (ADS)

    Kharga, D.; Inotani, D.; Hanai, R.; Ohashi, Y.

    2017-06-01

    We theoretically investigate the normal state properties of a Bose-Fermi mixture with a strong attractive interaction between Fermi and Bose atoms. We extend the ordinary T-matrix approximation (TMA) with respect to Bose-Fermi pairing fluctuations to include the Hugenholtz-Pines relation for all Bose Green's functions appearing in TMA self-energy diagrams. This extension is shown to be essential for correctly describing the physical properties of the Bose-Fermi mixture, especially near the Bose-Einstein condensation instability. Using this improved TMA, we clarify how the formation of composite fermions affects Bose and Fermi single-particle excitation spectra over the entire range of interaction strengths.

  4. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
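
    The distribution family the model generates, a two-component exponential mixture, can be fit with a few lines of EM; the sketch below is illustrative and is not the authors' estimation procedure.

    ```python
    # Sketch: EM for a two-component exponential mixture.
    import numpy as np

    def em_exp_mixture(x, n_iter=200):
        w, l1, l2 = 0.5, 2.0 / np.mean(x), 0.5 / np.mean(x)   # crude starting values
        for _ in range(n_iter):
            d1 = w * l1 * np.exp(-l1 * x)
            d2 = (1 - w) * l2 * np.exp(-l2 * x)
            r = d1 / (d1 + d2)                    # E-step: component responsibilities
            w = r.mean()                          # M-step: weight and rate updates
            l1 = r.sum() / (r * x).sum()
            l2 = (1 - r).sum() / ((1 - r) * x).sum()
        return w, l1, l2

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.exponential(1 / 3.0, 700), rng.exponential(1 / 0.3, 300)])
    print(em_exp_mixture(x))                      # should recover weights near 0.7 / 0.3
    ```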

  5. Structure-reactivity modeling using mixture-based representation of chemical reactions.

    PubMed

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

    We describe a novel approach of reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenated product and reactant descriptors or the difference between descriptors of products and reactants. This reaction representation does not need explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.

  6. CRAFT (complete reduction to amplitude frequency table)--robust and time-efficient Bayesian approach for quantitative mixture analysis by NMR.

    PubMed

    Krishnamurthy, Krish

    2013-12-01

    The intrinsic quantitative nature of NMR is increasingly exploited in areas ranging from complex mixture analysis (as in metabolomics and reaction monitoring) to quality assurance/control. Complex NMR spectra are more common than not, and therefore, extraction of quantitative information generally involves significant prior knowledge and/or operator interaction to characterize resonances of interest. Moreover, in most NMR-based metabolomic experiments, the signals from metabolites are normally present as a mixture of overlapping resonances, making quantification difficult. Time-domain Bayesian approaches have been reported to be better than conventional frequency-domain analysis at identifying subtle changes in signal amplitude. We discuss an approach that exploits Bayesian analysis to achieve a complete reduction to amplitude frequency table (CRAFT) in an automated and time-efficient fashion - thus converting the time-domain FID to a frequency-amplitude table. CRAFT uses a two-step approach to FID analysis. First, the FID is digitally filtered and downsampled to several sub-FIDs, and second, these sub-FIDs are modeled as sums of decaying sinusoids using the Bayesian approach. CRAFT tables can be used for further data mining of quantitative information using fingerprint chemical shifts of compounds of interest and/or statistical analysis of modulation of chemical quantity in a biological study (metabolomics) or process study (reaction monitoring) or quality assurance/control. The basic principles behind this approach, as well as results evaluating its effectiveness in mixture analysis, are presented. Copyright © 2013 John Wiley & Sons, Ltd.
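
    As a schematic of the second step, modeling sub-FIDs as sums of decaying sinusoids, the sketch below fits a single decaying sinusoid to a synthetic FID by nonlinear least squares. The actual CRAFT procedure is Bayesian and handles sums of such terms; this shows only the building block.

    ```python
    # Sketch: least-squares fit of one decaying sinusoid to a synthetic FID.
    import numpy as np
    from scipy.optimize import curve_fit

    def fid(t, a, f, r, phi):
        return a * np.exp(-r * t) * np.cos(2 * np.pi * f * t + phi)

    t = np.linspace(0, 1, 2048)
    rng = np.random.default_rng(2)
    y = fid(t, 1.0, 50.0, 3.0, 0.2) + rng.normal(0, 0.05, t.size)   # noisy synthetic signal

    popt, _ = curve_fit(fid, t, y, p0=[0.8, 49.0, 2.0, 0.0])
    print("amplitude, frequency (Hz), decay rate, phase:", popt)
    ```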

  7. An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol

    2016-01-01

    The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…

  8. A Systematic Investigation of Within-Subject and Between-Subject Covariance Structures in Growth Mixture Models

    ERIC Educational Resources Information Center

    Liu, Junhui

    2012-01-01

    The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…

  9. Effects of three veterinary antibiotics and their binary mixtures on two green alga species.

    PubMed

    Carusso, S; Juárez, A B; Moretton, J; Magdaleno, A

    2018-03-01

    The individual and combined toxicities of chlortetracycline (CTC), oxytetracycline (OTC) and enrofloxacin (ENF) have been examined in two green algae representative of the freshwater environment, the international standard strain Pseudokichneriella subcapitata and the native strain Ankistrodesmus fusiformis. The toxicities of the three antibiotics and their mixtures were similar in both strains, although low concentrations of ENF and CTC + ENF were more toxic in A. fusiformis than in the standard strain. The toxicological interactions of binary mixtures were predicted using the two classical models of additivity, Concentration Addition (CA) and Independent Action (IA), and compared with the experimentally determined toxicities over a range of concentrations between 0.1 and 10 mg L⁻¹. The CA model predicted the inhibition of algal growth in the three mixtures in P. subcapitata, and in the CTC + OTC and CTC + ENF mixtures in A. fusiformis. However, this model underestimated the experimental results obtained in the OTC + ENF mixture in A. fusiformis. The IA model did not predict the experimental toxicological effects of the three mixtures in either strain. The sum of the toxic units (TU) for the mixtures was calculated. According to these values, the binary mixtures CTC + ENF and OTC + ENF showed an additive effect and the CTC + OTC mixture showed antagonism in P. subcapitata, whereas the three mixtures showed synergistic effects in A. fusiformis. Although A. fusiformis was isolated from a polluted river, it showed a sensitivity similar to that of P. subcapitata when exposed to binary mixtures of antibiotics. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Statistical approaches for the determination of cut points in anti-drug antibody bioassays.

    PubMed

    Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A

    2015-03-01

    Cut points in immunogenicity assays are used to classify future specimens into anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates taking sources of future variability into account, such as runs, days, analysts, gender, drug-spiked and the biological variability of un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may contain already ADA-positives or lead to signals that erroneously appear to be ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers and variance structure, and v) the choice between normal and log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly larger proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow the post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. Copyright © 2015 Elsevier B.V. All rights reserved.
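
    Issue (v), the impact of the normal versus log-normal choice, is easy to demonstrate numerically: the sketch below computes a 95th-percentile screening cut point both ways on the same skewed toy data. It is purely illustrative and omits the mixture and random effects machinery recommended in the paper (implemented in the R package mixADA).

    ```python
    # Sketch: parametric cut points under normal vs log-normal assumptions.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    signal = rng.lognormal(mean=0.0, sigma=0.4, size=120)   # skewed drug-naive responses

    cut_normal = norm.ppf(0.95, loc=signal.mean(), scale=signal.std(ddof=1))
    logs = np.log(signal)
    cut_lognormal = np.exp(norm.ppf(0.95, loc=logs.mean(), scale=logs.std(ddof=1)))
    print(f"normal: {cut_normal:.3f}  log-normal: {cut_lognormal:.3f}")
    ```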

  11. 4-D segmentation and normalization of 3He MR images for intrasubject assessment of ventilated lung volumes

    NASA Astrophysics Data System (ADS)

    Contrella, Benjamin; Tustison, Nicholas J.; Altes, Talissa A.; Avants, Brian B.; Mugler, John P., III; de Lange, Eduard E.

    2012-03-01

    Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisitions, the latter complicating longitudinal investigations of ventilation variation with respiratory alterations. To address these potential difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of lung hyperpolarized 3He MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling of the combined intensity histogram as a Gaussian mixture model and modulating the spatially heterogeneous tissue class assignments through Markov random field modeling. The algorithm was evaluated retrospectively on a cohort of 10 asthmatics aged 19-25 years in whom spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions 7 to 467 days (mean +/- standard deviation: 185 +/- 37.2) later. Several techniques were evaluated for matching intensities between the pre- and post-methacholine images, with 95th-percentile histogram matching demonstrating superior correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
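
    A minimal sketch of the relative intensity scaling step, matching a follow-up scan to its baseline by 95th-percentile values (the variant reported above to correlate best with spirometry); array shapes and names are illustrative.

    ```python
    # Sketch: 95th-percentile intensity matching between longitudinal scans.
    import numpy as np

    def scale_to_baseline(followup, baseline, q=95):
        """Rescale follow-up intensities so its q-th percentile equals the baseline's."""
        return followup * (np.percentile(baseline, q) / np.percentile(followup, q))

    rng = np.random.default_rng(4)
    baseline = rng.gamma(2.0, 50.0, size=(64, 64, 32))
    followup = 0.7 * rng.gamma(2.0, 50.0, size=(64, 64, 32))    # globally dimmer scan
    matched = scale_to_baseline(followup, baseline)
    print(np.percentile(matched, 95), np.percentile(baseline, 95))   # now equal
    ```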

  12. Polaron Thermodynamics of Spin-Imbalanced Quasi-Two-Dimensional Fermi Gases

    NASA Astrophysics Data System (ADS)

    Ong, Willie; Cheng, Chingyun; Arakelyan, Ilya; Thomas, John

    2015-05-01

    We present the first spatial profile measurements for spin-imbalanced mixtures of atomic 6Li fermions in a quasi-2D geometry with tunable strong interactions. The observed minority and majority profiles are not correctly predicted by BCS theory for a true 2D system, but are reasonably well fit by a 2D-polaron model of the free energy. Density difference profiles reveal a flat center with two peaks at the edges, consistent with a fully paired core of the corresponding 2D density profiles. These features are more prominent for higher interaction strengths. Not predicted by the polaron model is an observed transition from a spin-imbalanced normal fluid phase to a spin-balanced central core above a critical imbalance. Supported by ARO, DOE, AFOSR, NSF.

  13. A Computational Investigation of Sooting Limits of Spherical Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Lecoustre, V. R.; Chao, B. H.; Sunderland, P. B.; Urban, D. L.; Stocker, D. P.; Axelbaum, R. L.

    2007-01-01

    Limiting conditions for soot particle inception in spherical diffusion flames were investigated numerically. The flames were modeled using a one-dimensional, time accurate diffusion flame code with detailed chemistry and transport and an optically thick radiation model. Seventeen normal and inverse flames were considered, covering a wide range of stoichiometric mixture fraction, adiabatic flame temperature, and residence time. These flames were previously observed to reach their sooting limits after 2 s of microgravity. Sooting-limit diffusion flames with residence times longer than 200 ms were found to have temperatures near 1190 K where C/O = 0.6, whereas flames with shorter residence times required increased temperatures. Acetylene was found to be a reasonable surrogate for soot precursor species in these flames, having peak mole fractions of about 0.01.

  14. Effects of C/O Ratio and Temperature on Sooting Limits of Spherical Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Lecoustre, V. R.; Sunderland, P. B.; Chao, B. H.; Urban, D. L.; Stocker, D. P.; Axelbaum, R. L.

    2008-01-01

    Limiting conditions for soot particle inception in spherical diffusion flames were investigated numerically. The flames were modeled using a one-dimensional, time accurate diffusion flame code with detailed chemistry and transport and an optically thick radiation model. Seventeen normal and inverse flames were considered, covering a wide range of stoichiometric mixture fraction, adiabatic flame temperature, residence time and scalar dissipation rate. These flames were previously observed to reach their sooting limits after 2 s of microgravity. Sooting-limit diffusion flames with scalar dissipation rate lower than 2/s were found to have temperatures near 1400 K where C/O = 0.51, whereas flames with greater scalar dissipation rate required increased temperatures. This finding was valid across a broad range of fuel and oxidizer compositions and convection directions.

  15. General Blending Models for Data From Mixture Experiments

    PubMed Central

    Brown, L.; Donev, A. N.; Bissett, A. C.

    2015-01-01

    We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
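
    For context, the sketch below fits the classical Scheffé quadratic mixture model that the proposed class unifies and extends: one linear blending coefficient per component plus pairwise blending terms, with no intercept because the components sum to one. All data here are synthetic placeholders.

    ```python
    # Sketch: fitting a Scheffé quadratic mixture model by least squares.
    import numpy as np
    from itertools import combinations
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    X = rng.dirichlet(np.ones(3), size=40)     # 3-component mixtures; rows sum to 1
    y = 2*X[:, 0] + 1*X[:, 1] + 3*X[:, 2] + 4*X[:, 0]*X[:, 1] + rng.normal(0, 0.05, 40)

    pairs = np.column_stack([X[:, i] * X[:, j] for i, j in combinations(range(3), 2)])
    design = np.hstack([X, pairs])             # linear + binary blending terms
    model = LinearRegression(fit_intercept=False).fit(design, y)   # Scheffé form has no intercept
    print(model.coef_)
    ```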

  16. A new model of strabismic amblyopia: Loss of spatial acuity due to increased temporal dispersion of geniculate X-cell afferents on to cortical neurons.

    PubMed

    Crewther, D P; Crewther, S G

    2015-09-01

    Although the neural locus of strabismic amblyopia has been shown to lie at the first site of binocular integration, first in cat and then in primate, an adequate mechanism is still lacking. Here we hypothesise that increased temporal dispersion of LGN X-cell afferents driven by the deviating eye onto single cortical neurons may provide a neural mechanism for strabismic amblyopia. This idea was investigated via single cell extracellular recordings of 93 X and 50 Y type LGN neurons from strabismic and normal cats. Both X and Y neurons driven by the non-deviating eye showed shorter latencies than those driven by either the strabismic or normal eyes. Also the mean latency difference between X and Y neurons was much greater for the strabismic cells compared with the other two groups. The incidence of lagged X-cells driven by the deviating eye of the strabismic cats was higher than that of LGN X-cells from normal animals. Remarkably, none of the cells recorded from the laminae driven by the non-deviating eye were of the lagged class. A simple computational model was constructed in which a mixture of lagged and non-lagged afferents converge on to single cortical neurons. Model cut-off spatial frequencies to a moving grating stimulus were sensitive to the temporal dispersion of the geniculate afferents. Thus strabismic amblyopia could be viewed as a lack of developmental tuning of geniculate lags for neurons driven by the amblyopic eye. Monocular control of fixation by the non-deviating eye is associated with reduced incidence of lagged neurons, suggesting that in normal vision, lagged neurons might play a role in maintaining binocular connections for cortical neurons. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Mixed-up trees: the structure of phylogenetic mixtures.

    PubMed

    Matsen, Frederick A; Mossel, Elchanan; Steel, Mike

    2008-05-01

    In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how "common" non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning "mixed branch repulsion" on trees larger than quartet trees under the CFN model.

  18. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    PubMed Central

    Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters; thus, they lack the degrees of freedom to perform Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space, enabling prediction of the entire response surface. A new strategy for adding multiple interior points for symmetric mixture models is also proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574

  19. New approach in direct-simulation of gas mixtures

    NASA Technical Reports Server (NTRS)

    Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren

    1991-01-01

    Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.

  20. Investigation of Dalton and Amagat's laws for gas mixtures with shock propagation

    NASA Astrophysics Data System (ADS)

    Wayne, Patrick; Trueba Monje, Ignacio; Yoo, Jason H.; Truman, C. Randall; Vorobieff, Peter

    2016-11-01

    Two common models describing gas mixtures are Dalton's Law and Amagat's Law (also known as the laws of partial pressures and partial volumes, respectively). Our work is focused on determining the suitability of these models for predicting the effects of shock propagation through gas mixtures. Experiments are conducted at the Shock Tube Facility at the University of New Mexico (UNM). To validate experimental data, possible sources of uncertainty associated with the experimental setup are identified and analyzed. The gaseous mixture of interest consists of a prescribed combination of disparate gases - helium and sulfur hexafluoride (SF6). The equations of state (EOS) considered are the ideal gas EOS for helium and a virial EOS for SF6. The property values provided by these EOS are then used to model shock propagation through the mixture in accordance with Dalton's and Amagat's laws. Results of the modeling are compared with experiment to determine which law produces better agreement for the mixture. This work is funded by NNSA Grant DE-NA0002913.
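
    The two laws are easy to state in code. The sketch below does so for an ideal-gas helium/SF6 mixture, for which Dalton's and Amagat's laws coincide exactly; the differences the experiment probes arise once a real-gas (e.g., virial) EOS replaces the ideal-gas law for SF6. Compositions and conditions are invented.

    ```python
    # Sketch: Dalton (partial pressures) vs Amagat (partial volumes), ideal-gas case.
    R = 8.314462618                 # J/(mol K)

    def dalton_pressure(n_parts, T, V):
        """Total pressure = sum of component pressures, each occupying the full volume V."""
        return sum(n * R * T / V for n in n_parts)

    def amagat_volume(n_parts, T, p):
        """Total volume = sum of component volumes, each at the full pressure p."""
        return sum(n * R * T / p for n in n_parts)

    n_he, n_sf6 = 0.8, 0.2          # mol (illustrative composition)
    T, V = 300.0, 0.01              # K, m^3
    p = dalton_pressure([n_he, n_sf6], T, V)
    print("Dalton p:", p, "Pa; Amagat V at that p:", amagat_volume([n_he, n_sf6], T, p), "m^3")
    ```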

  1. Bayesian 2-Stage Space-Time Mixture Modeling With Spatial Misalignment of the Exposure in Small Area Health Data.

    PubMed

    Lawson, Andrew B; Choi, Jungsoon; Cai, Bo; Hossain, Monir; Kirby, Russell S; Liu, Jihong

    2012-09-01

    We develop a new Bayesian two-stage space-time mixture model to investigate the effects of air pollution on asthma. The two-stage mixture model proposed allows for the identification of temporal latent structure as well as the estimation of the effects of covariates on health outcomes. In the paper, we also consider spatial misalignment of exposure and health data. A simulation study is conducted to assess the performance of the 2-stage mixture model. We apply our statistical framework to a county-level ambulatory care asthma data set in the US state of Georgia for the years 1999-2008.

  2. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.

    PubMed

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-03-13

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2, with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A), on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and to carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.

  3. Determination of the main solid-state form of albendazole in bulk drug, employing Raman spectroscopy coupled to multivariate analysis.

    PubMed

    Calvo, Natalia L; Arias, Juan M; Altabef, Aída Ben; Maggio, Rubén M; Kaufman, Teodoro S

    2016-09-10

    Albendazole (ALB) is a broad-spectrum anthelmintic which exhibits two solid-state forms (Forms I and II). Form I is the metastable crystal at room temperature, while Form II is the stable one. Because the drug has poor aqueous solubility and Form II is less soluble than Form I, it is desirable to have a method to assess the solid-state form of the drug employed for manufacturing purposes. Therefore, a Partial Least Squares (PLS) model was developed for the determination of Form I of ALB in its mixtures with Form II. For model development, both solid-state forms of ALB were prepared and characterized by microscopic (optical, with normal and polarized light), thermal (DSC) and spectroscopic (ATR-FTIR, Raman) techniques. Mixtures of the solids in different ratios were prepared by weighing and mechanical mixing of the components. Their Raman spectra were acquired and subjected to peak smoothing, normalization, standard normal variate correction and de-trending before performing the PLS calculations. The optimal spectral region (1396-1280 cm(-1)) and number of latent variables (LV=3) were obtained employing a moving window of variable size strategy. The method was internally validated by means of the leave-one-out procedure, providing satisfactory statistics (r(2)=0.9729 and RMSD=5.6%) and figures of merit (LOD=9.4% and MDDC=1.4). Furthermore, the method's performance was also evaluated by analysis of two validation sets. Validation set I was used for assessment of linearity and range, and Validation set II to demonstrate accuracy and precision (Recovery=101.4% and RSD=2.8%). Additionally, a third set of spiked commercial samples was evaluated, exhibiting excellent recoveries (94.2±6.4%). The results suggest that the combination of Raman spectroscopy with multivariate analysis could be applied to the assessment of the main crystal form and its quantitation in samples of ALB bulk drug in the routine quality control laboratory. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Some comments on thermodynamic consistency for equilibrium mixture equations of state

    DOE PAGES

    Grove, John W.

    2018-03-28

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.

  5. Differential chemosensory feeding behaviour by three co-occurring mysids (Crustacea, Mysidacea) from southeastern Tasmania.

    PubMed

    Metillo, Ephrime B; Ritz, David A

    2003-02-01

    Three mysid species showed differences in chemosensory feeding as judged from stereotyped food capturing responses to dissolved mixtures of feeding stimulant (either betaine-HCl or glycine) and suppressant (ammonium). The strongest responses were to 50:50 mixtures of both betaine-ammonium and glycine-ammonium solutions. In general, the response curve to the different mixtures tested was bell-shaped. Anisomysis mixta australis only showed the normal curve in response to the glycine-ammonium mixture. The platykurtic curve for Tenagomysis tasmaniae suggests a less optimal response to the betaine-HCl-ammonium solution. Paramesopodopsis rufa reacted more strongly to the betaine-ammonium than to the glycine-ammonium solutions, and more individuals of this species responded to both solutions than the other two species. It is suggested that these contrasting chemosensitivities of the three coexisting mysid species serve as a means of partitioning the feeding niche.

  6. Microwave Determination of Water Mole Fraction in Humid Gas Mixtures

    NASA Astrophysics Data System (ADS)

    Cuccaro, R.; Gavioso, R. M.; Benedetto, G.; Madonna Ripa, D.; Fernicola, V.; Guianvarc'h, C.

    2012-09-01

    A small volume (65 cm³) gold-plated quasi-spherical microwave resonator has been used to measure the water vapor mole fraction x_w of H2O/N2 and H2O/air mixtures. This experimental technique exploits the high precision achievable in the determination of the cavity microwave resonance frequencies and is particularly sensitive to the presence of small concentrations of water vapor as a result of the high polarizability of this substance. The mixtures were prepared using the INRIM standard humidity generator for frost-point temperatures T_fp in the range between 241 K and 270 K and a commercial two-pressure humidity generator operated at a dew-point temperature between 272 K and 291 K. The experimental measurements compare favorably with the calculated molar fractions of the mixture supplied by the humidity generators, showing a normalized error lower than 0.8.

  7. Evaluation of a linear spectral mixture model and vegetation indices (NDVI and EVI) in a study of schistosomiasis mansoni and Biomphalaria glabrata distribution in the state of Minas Gerais, Brazil.

    PubMed

    Guimarães, Ricardo J P S; Freitas, Corina C; Dutra, Luciano V; Scholte, Ronaldo G C; Amaral, Ronaldo S; Drummond, Sandra C; Shimabukuro, Yosio E; Oliveira, Guilherme C; Carvalho, Omar S

    2010-07-01

    This paper analyses the associations of the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI) with the prevalence of schistosomiasis and the presence of Biomphalaria glabrata in the state of Minas Gerais (MG), Brazil. Additionally, vegetation, soil and shade fraction images were created using a Linear Spectral Mixture Model (LSMM) from the blue, red and infrared channels of the Moderate Resolution Imaging Spectroradiometer spaceborne sensor, and the relationship between these images and the prevalence of schistosomiasis and the presence of B. glabrata was analysed. First, we found a high correlation between the vegetation fraction image and EVI and, second, a high correlation between the soil fraction image and NDVI. The results also indicate a positive correlation between prevalence and the vegetation fraction image (July 2002), a negative correlation between prevalence and the soil fraction image (July 2002), and a positive correlation between B. glabrata and the shade fraction image (July 2002). This paper demonstrates that the LSMM variables can be used as a substitute for the standard vegetation indices (EVI and NDVI) to determine and delimit risk areas for B. glabrata and schistosomiasis in MG, which can be used to improve the allocation of resources for disease control.
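
    A minimal sketch of the linear spectral unmixing underlying an LSMM follows: per-pixel fractions of vegetation, soil, and shade endmembers are recovered by non-negative least squares with a weighted sum-to-one constraint. The endmember spectra and pixel values below are invented, and the paper's exact inversion method may differ.

    ```python
    # Sketch: constrained linear spectral unmixing of one pixel.
    import numpy as np
    from scipy.optimize import nnls

    # rows: blue, red, near-infrared bands; columns: vegetation, soil, shade endmembers
    E = np.array([[0.04, 0.20, 0.08],
                  [0.05, 0.30, 0.06],
                  [0.45, 0.25, 0.03]])

    def unmix(pixel, E, weight=1e3):
        # append a heavily weighted row enforcing fractions that sum to one;
        # nnls handles the non-negativity constraint
        A = np.vstack([E, weight * np.ones(E.shape[1])])
        b = np.append(pixel, weight)
        fractions, _ = nnls(A, b)
        return fractions

    print(unmix(np.array([0.06, 0.12, 0.30]), E))   # a mostly-vegetation pixel
    ```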

  8. Numerical Prediction of Radiation Measurements Taken in the X2 Facility for Mars and Titan Gas Mixtures

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Prabhu, Dinesh; Brandis, Aaron; McIntyre, Timothy J.

    2011-01-01

    Thermochemical relaxation behind a normal shock in Mars and Titan gas mixtures is simulated using a CFD solver, DPLR, for a hemisphere of 1 m radius; the thermochemical relaxation along the stagnation streamline is considered equivalent to the flow behind a normal shock. Flow simulations are performed for a Titan gas mixture (98% N2, 2% CH4 by volume) for shock speeds of 5.7 and 7.6 km/s and pressures ranging from 20 to 1000 Pa, and a Mars gas mixture (96% CO2, and 4% N2 by volume) for a shock speed of 8.6 km/s and freestream pressure of 13 Pa. For each case, the temperatures and number densities of chemical species obtained from the CFD flow predictions are used as an input to a line-by-line radiation code, NEQAIR. The NEQAIR code is then used to compute the spatial distribution of volumetric radiance starting from the shock front to the point where thermochemical equilibrium is nominally established. Computations of volumetric spectral radiance assume Boltzmann distributions over radiatively linked electronic states of atoms and molecules. The results of these simulations are compared against experimental data acquired in the X2 facility at the University of Queensland, Australia. The experimental measurements were taken over a spectral range of 310-450 nm where the dominant contributor to radiation is the CN violet band system. In almost all cases, the present approach of computing the spatial variation of post-shock volumetric radiance by applying NEQAIR along a stagnation line computed using a high-fidelity flow solver with good spatial resolution of the relaxation zone is shown to replicate trends in measured relaxation of radiance for both Mars and Titan gas mixtures.
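
    The Boltzmann assumption above fixes the relative populations of the electronic states at a given temperature; the following minimal sketch uses invented two-level energies and degeneracies (not CN data).

      import numpy as np

      k_B = 8.617333262e-5  # Boltzmann constant in eV/K

      def boltzmann_fractions(E, g, T):
          """Fractional level populations n_i proportional to g_i * exp(-E_i/kT),
          for level energies E (eV), degeneracies g, and temperature T (K)."""
          w = np.asarray(g) * np.exp(-np.asarray(E) / (k_B * T))
          return w / w.sum()

      # Illustrative two-level example at 8000 K
      print(boltzmann_fractions(E=[0.0, 2.0], g=[2, 4], T=8000.0))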

  9. Issues and Perspectives in Species Delimitation using Phenotypic Data: Atlantean Evolution in Darwin's Finches.

    PubMed

    Cadena, Carlos Daniel; Zapata, Felipe; Jiménez, Iván

    2018-03-01

    Progress in the development and use of methods for species delimitation employing phenotypic data lags behind conceptual and practical advances in molecular genetic approaches. The basic evolutionary model underlying the use of phenotypic data to delimit species assumes random mating and quantitative polygenic traits, so that phenotypic distributions within a species should be approximately normal for individuals of the same sex and age. Accordingly, two or more distinct normal distributions of phenotypic traits suggest the existence of multiple species. In light of this model, we show that analytical approaches employed in taxonomic studies using phenotypic data are often compromised by three issues: 1) reliance on graphical analyses that convey little information on phenotype frequencies; 2) exclusion of characters potentially important for species delimitation following reduction of data dimensionality; and 3) use of measures of central tendency to evaluate phenotypic distinctiveness. We outline approaches to overcome these issues based on statistical developments related to normal mixture models (NMMs) and illustrate them empirically with a reanalysis of morphological data recently used to claim that there are no morphologically distinct species of Darwin's ground-finches (Geospiza). We found negligible support for this claim relative to taxonomic hypotheses recognizing multiple species. Although species limits among ground-finches merit further assessments using additional sources of information, our results bear implications for other areas of inquiry including speciation research: because ground-finches have likely speciated and are not trapped in a process of "Sisyphean" evolution as recently argued, they remain useful models to understand the evolutionary forces involved in speciation. Our work underscores the importance of statistical approaches grounded on appropriate evolutionary models for species delimitation. We discuss how NMMs offer new perspectives in the kind of inferences available to systematists, with significant repercussions on ideas about the phenotypic structure of biodiversity.
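
    A minimal sketch of the core NMM comparison for a single trait, using scikit-learn's GaussianMixture and BIC on simulated data; the component means, spreads, and sample sizes are invented, and the paper's actual analyses are multivariate and considerably more elaborate.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Simulated beak-depth-like measurements from two overlapping groups
      x = np.concatenate([rng.normal(8.5, 0.6, 150),
                          rng.normal(10.5, 0.7, 150)]).reshape(-1, 1)

      for k in (1, 2, 3):
          gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(x)
          print(k, "component(s): BIC =", round(gm.bic(x), 1))
      # The lowest BIC indicates the best-supported number of normal components,
      # i.e. the number of putative species under the model described above.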

  10. Robust Bayesian clustering.

    PubMed

    Archambeau, Cédric; Verleysen, Michel

    2007-01-01

    A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.
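
    The latent scale variables mentioned above reflect the standard construction of the Student-t as a Gaussian scale mixture: u ~ Gamma(nu/2, rate nu/2) and x | u ~ N(mu, sigma^2/u). A minimal sampling sketch of that construction follows (this is not the paper's variational algorithm).

      import numpy as np

      rng = np.random.default_rng(1)

      def student_t_scale_mixture(mu, sigma, nu, size):
          """Draw Student-t samples via the Gaussian scale mixture:
          u ~ Gamma(shape=nu/2, rate=nu/2), x | u ~ N(mu, sigma**2 / u)."""
          u = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=size)  # rate -> scale
          return mu + sigma * rng.standard_normal(size) / np.sqrt(u)

      x = student_t_scale_mixture(mu=0.0, sigma=1.0, nu=3.0, size=100_000)
      print(np.mean(np.abs(x) > 4.0))  # far heavier tails than a standard normal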

  11. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    PubMed

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters; thus, they lack the degrees of freedom to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs are on the boundary of the design simplex: vertices, edges, or faces. To address these limitations, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.
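
    To make the D-criterion concrete, the sketch below compares det(X'X) under Scheffé's quadratic model for three components, for the classic {3,2} simplex-lattice design (known to be D-optimal in this setting) against an illustrative variant with the edge points pulled toward the interior; neither design is taken from the paper.

      import numpy as np
      from itertools import combinations

      def scheffe_quadratic(X):
          """Model matrix for Scheffé's quadratic mixture model:
          columns x_i plus all cross products x_i * x_j (i < j)."""
          cross = [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
          return np.column_stack([X] + cross)

      vertices = np.eye(3)
      lattice = np.vstack([vertices, [[.5, .5, 0], [.5, 0, .5], [0, .5, .5]]])
      pulled = np.vstack([vertices, [[.4, .4, .2], [.4, .2, .4], [.2, .4, .4]]])

      for name, D in [("simplex-lattice", lattice), ("pulled-inward", pulled)]:
          Xm = scheffe_quadratic(D)
          print(name, "det(X'X) =", np.linalg.det(Xm.T @ Xm))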

  12. [Effect of different nutritional support on pancreatic secretion in acute pancreatitis].

    PubMed

    Achkasov, E E; Pugaev, A V; Nabiyeva, Zh G; Kalachev, S V

    To develop and justify optimal nutritional support in the early phase of acute pancreatitis (AP), 140 AP patients were enrolled. They were divided into groups depending on nutritional support: group I (n=70) - early enteral tube feeding (ETF) with balanced mixtures; group II (n=30) - early ETF with an oligopeptide mixture; group III (n=40) - total parenteral nutrition (TPN). Subgroups were also defined depending on medication: A - Octreotide, B - Quamatel, C - Octreotide + Quamatel. Pancreatic secretion was evaluated from the clinical course of the disease, instrumental methods, and APUD-system hormone levels (secretin, cholecystokinin, somatostatin, vasoactive intestinal peptide). ETF was followed by pancreatic enlargement despite ongoing therapy, while TPN led to gradual reduction of pancreatic size to normal values. α-amylase levels progressively decreased in all groups; however, in patients who underwent ETF (groups I and II), mean enzyme values were significantly higher than with TPN (group III). Secretin, cholecystokinin and vasoactive intestinal peptide increased in most cases, while somatostatin remained below normal in all groups. Enteral tube feeding (balanced and oligopeptide mixtures) stimulates pancreatic secretion compared with TPN, but this negative impact is eliminated by antisecretory therapy. Dual medication (Octreotide + Quamatel) is preferable to monotherapy (Octreotide or Quamatel).

  13. Mixture of autoregressive modeling orders and its implication on single trial EEG classification

    PubMed Central

    Atyabi, Adham; Shic, Frederick; Naples, Adam

    2016-01-01

    Autoregressive (AR) models are among the most commonly utilized feature types in Electroencephalogram (EEG) studies because they offer better resolution, smoother spectra and applicability to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders poorly represent the signal, while higher orders increase noise. Conventional methods for estimating the modeling order include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond more quickly and accurately to the operator's thoughts. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR-mixtures is assessed against several conventional methods utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
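
    For reference, a minimal sketch of the conventional single-order selection that the article benchmarks against, using AIC from statsmodels on a simulated AR(4) series; the simulated process and the candidate order range are assumptions, and the proposed fusion and ensemble mechanisms are not reproduced here.

      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      rng = np.random.default_rng(2)
      # Simulate an AR(4) process as a stand-in for a short EEG segment
      x = np.zeros(512)
      for t in range(4, 512):
          x[t] = (0.5 * x[t-1] - 0.3 * x[t-2] + 0.2 * x[t-3]
                  - 0.1 * x[t-4] + rng.standard_normal())

      aic = {p: AutoReg(x, lags=p).fit().aic for p in range(1, 13)}
      print("AIC-selected order:", min(aic, key=aic.get))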

  14. Studies of satellite and planetary surfaces and atmospheres. [Jupiter, Saturn, and Mars and their satellites

    NASA Technical Reports Server (NTRS)

    Sagan, C.

    1978-01-01

    Completed or published research supported by NASA is summarized. Topics cover limb darkening and the structure of the Jovian atmosphere; the application of generalized inverse theory to the recovery of temperature profiles; models for the reflection spectrum of Jupiter's North Equatorial Belt; isotropic scattering layer models for the red chromophore on Titan; radiative-convective equilibrium models of the Titan atmosphere; temperature structure and emergent flux of the Jovian planets; occultation of epsilon Geminorum by Mars and the structure and extinction of the Martian upper atmosphere; lunar occultation of Saturn; astrometric results and the normal reflectances of Rhea, Titan, and Iapetus; near limb darkening of solids of planetary interest; light scattering from particulate surfaces; comparing the surface of Io to laboratory samples; and matching the spectrum of Io: variations in the photometric properties of sulfur-containing mixtures.

  15. Workshop on Algorithms for Time-Series Analysis

    NASA Astrophysics Data System (ADS)

    Protopapas, Pavlos

    2012-04-01

    Summary: This Workshop covered the four major subjects listed below in two 90-minute sessions. Each talk or tutorial allowed questions, and concluded with a discussion. Classification: Automatic classification using machine-learning methods is becoming a standard in surveys that generate large datasets. Ashish Mahabal (Caltech) reviewed various methods, and presented examples of several applications. Time-Series Modelling: Suzanne Aigrain (Oxford University) discussed autoregressive models and multivariate approaches such as Gaussian Processes. Meta-classification/mixture of expert models: Karim Pichara (Pontificia Universidad Católica, Chile) described the substantial promise which machine-learning classification methods are now showing in automatic classification, and discussed how the various methods can be combined together. Event Detection: Pavlos Protopapas (Harvard) addressed methods of fast identification of events with low signal-to-noise ratios, enlarging on the characterization and statistical issues of low signal-to-noise ratios and rare events.

  16. Phase space analysis for anisotropic universe with nonlinear bulk viscosity

    NASA Astrophysics Data System (ADS)

    Sharif, M.; Mumtaz, Saadia

    2018-06-01

    In this paper, we discuss the phase space analysis of a locally rotationally symmetric Bianchi type I universe model by taking a noninteracting mixture of dust-like and viscous radiation-like fluid whose viscous pressure satisfies a nonlinear version of the Israel-Stewart transport equation. An autonomous system of equations is established by defining normalized dimensionless variables. In order to investigate the stability of the system, we evaluate the corresponding critical points for different values of the parameters. We also compute the power-law scale factor, whose behavior indicates different phases of the universe model. It is found that our analysis does not provide complete immunity from fine-tuning, because the exponentially expanding solution occurs only for a particular range of parameters. We conclude that stable solutions exist in the presence of a nonlinear model for bulk viscosity with different choices of the constant parameter m for the anisotropic universe.

  17. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, so determining the zero velocity interval plays a key role during normal walking. However, as walking gaits are complicated and vary from person to person, it is difficult to detect walking gaits with a fixed-threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of analyzing the characteristics of the pedestrian walk, a single-direction angular rate gyro output is used to classify gait features. The angular rate data are modeled as a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm, and the sliding window Viterbi algorithm is then used to decode the gait. Walking data are collected from eight subjects walking along the same route at three different speeds; the leave-one-subject-out cross validation method is used to test the model. Experimental results show that the proposed algorithm can accurately detect the zero velocity interval of different walking gaits. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% compared to the angular rate threshold method.
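
    A minimal sketch of this model family using hmmlearn's GMMHMM, with four states, three Gaussian mixture components per state, and a hand-set left-right transition structure; the synthetic gyro-like data and all parameter choices are assumptions, and the paper's sliding-window decoding is not reproduced.

      import numpy as np
      from hmmlearn.hmm import GMMHMM

      rng = np.random.default_rng(3)
      # Synthetic stand-in for a single-axis angular rate signal, shape (n, 1)
      X = np.concatenate([rng.normal(m, 0.3, 200)
                          for m in (0.0, 1.5, 3.0, 0.2)]).reshape(-1, 1)

      model = GMMHMM(n_components=4, n_mix=3, covariance_type="diag",
                     n_iter=50, init_params="mcw", random_state=0)
      # Left-right topology: start in state 0, allow only self and forward moves
      model.startprob_ = np.array([1.0, 0.0, 0.0, 0.0])
      model.transmat_ = np.array([[0.9, 0.1, 0.0, 0.0],
                                  [0.0, 0.9, 0.1, 0.0],
                                  [0.0, 0.0, 0.9, 0.1],
                                  [0.0, 0.0, 0.0, 1.0]])
      model.fit(X)                       # Baum-Welch re-estimation
      logprob, states = model.decode(X)  # Viterbi state sequence
      print(states[:5], states[-5:])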

  18. Single- and mixture toxicity of three organic UV-filters, ethylhexyl methoxycinnamate, octocrylene, and avobenzone on Daphnia magna.

    PubMed

    Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun

    2017-03-01

    In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate the combined toxicity of these substances when they occur in a mixture. The effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; the concentration combinations of the three different UV-filters in a mixture were determined by the fraction of components based on EC25 values predicted by the concentration addition (CA) model. The interaction between the UV-filters was also assessed by the model deviation ratio (MDR), using observed and predicted toxicity values obtained from the mixture-exposure tests and the CA model. The results indicated that the observed ECx,mix values (e.g., EC10,mix, EC25,mix, or EC50,mix) obtained from mixture-exposure tests were higher than the ECx,mix values predicted by the CA model. MDR values were also less than a factor of 1.0 in mixtures of the three different UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. Our findings will provide important information for hazard or risk assessment of organic UV-filters when they exist together in the aquatic environment. To better understand mixture toxicity and the interaction of components in a mixture, further studies of various combinations of mixture components are also required. Copyright © 2016 Elsevier Inc. All rights reserved.
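
    A minimal sketch of the CA prediction and the MDR, assuming the usual Loewe formulation 1/ECx_mix = sum_i (p_i / ECx_i) and MDR = predicted/observed; the EC50 values and mixture fractions below are illustrative and are not taken from the paper.

      def ca_mixture_ecx(fractions, ecx_values):
          """Concentration-addition ECx of a mixture:
          1 / ECx_mix = sum_i p_i / ECx_i."""
          return 1.0 / sum(p / e for p, e in zip(fractions, ecx_values))

      def model_deviation_ratio(predicted, observed):
          """By a common convention, MDR < 1 suggests antagonism and
          MDR > 1 suggests synergism."""
          return predicted / observed

      # Illustrative EC50s (mg/L) for three components mixed in equal fractions
      ec50 = [1.2, 0.8, 2.5]
      pred = ca_mixture_ecx([1/3] * 3, ec50)
      print("CA-predicted EC50_mix:", round(pred, 3))
      print("MDR:", round(model_deviation_ratio(pred, observed=1.9), 2))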

  19. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    NASA Astrophysics Data System (ADS)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent to the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  20. Cumulative toxicity of neonicotinoid insecticide mixtures to Chironomus dilutus under acute exposure scenarios.

    PubMed

    Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten

    2017-11-01

    Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because of the similar modes of action of neonicotinoids, concentration-additive cumulative mixture toxicity was the predicted model. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity. Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, the development of better toxicity models for neonicotinoid mixture exposures, and the consideration of mixture effects when setting water quality guidelines for this class of pesticides. Environ Toxicol Chem 2017;36:3091-3101. © 2017 SETAC.

  1. Numerical simulation of asphalt mixtures fracture using continuum models

    NASA Astrophysics Data System (ADS)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures the random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.

  2. Concrete pavement mixture design and analysis (MDA) : factors influencing drying shrinkage.

    DOT National Transportation Integrated Search

    2014-10-01

    This literature review focuses on factors influencing drying shrinkage of concrete. Although the factors are normally interrelated, they can be categorized into three groups: paste quantity, paste quality, and other factors.

  3. Frequency-sensitive competitive learning for scalable balanced clustering on high-dimensional hyperspheres.

    PubMed

    Banerjee, Arindam; Ghosh, Joydeep

    2004-05-01

    Competitive learning mechanisms for clustering, in general, suffer from poor performance for very high-dimensional (>1000) data because of "curse of dimensionality" effects. In applications such as document clustering, it is customary to normalize the high-dimensional input vectors to unit length, and it is sometimes also desirable to obtain balanced clusters, i.e., clusters of comparable sizes. The spherical kmeans (spkmeans) algorithm, which normalizes the cluster centers as well as the inputs, has been successfully used to cluster normalized text documents in 2000+ dimensional space. Unfortunately, like regular kmeans and its soft expectation-maximization-based version, spkmeans tends to generate extremely imbalanced clusters in high-dimensional spaces when the desired number of clusters is large (tens or more). This paper first shows that the spkmeans algorithm can be derived from a certain maximum likelihood formulation using a mixture of von Mises-Fisher distributions as the generative model, and in fact, it can be considered as a batch-mode version of (normalized) competitive learning. The proposed generative model is then adapted in a principled way to yield three frequency-sensitive competitive learning variants that are applicable to static data and produce high-quality, well-balanced clusters for high-dimensional data. Like kmeans, each iteration is linear in the number of data points and in the number of clusters for all three algorithms. A frequency-sensitive algorithm to cluster streaming data is also proposed. Experimental results on clustering of high-dimensional text data sets are provided to show the effectiveness and applicability of the proposed techniques. Index Terms: balanced clustering, expectation maximization (EM), frequency-sensitive competitive learning (FSCL), high-dimensional clustering, kmeans, normalized data, scalable clustering, streaming data, text clustering.
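
    A minimal numpy sketch of the batch spkmeans step described above, in which both the data and the centroids are kept on the unit hypersphere and assignment maximizes the dot (cosine) product; the frequency-sensitive variants proposed in the paper are not reproduced.

      import numpy as np

      def spkmeans(X, k, n_iter=50, seed=0):
          """Spherical k-means: rows of X and the centroids live on the unit
          hypersphere; points are assigned to the centroid of maximum cosine."""
          rng = np.random.default_rng(seed)
          X = X / np.linalg.norm(X, axis=1, keepdims=True)
          C = X[rng.choice(len(X), size=k, replace=False)]
          for _ in range(n_iter):
              labels = np.argmax(X @ C.T, axis=1)       # cosine assignment
              for j in range(k):
                  members = X[labels == j]
                  if len(members):
                      c = members.sum(axis=0)
                      C[j] = c / np.linalg.norm(c)      # renormalized centroid
          return labels, C

      X = np.random.default_rng(1).random((200, 1000))  # toy high-dimensional data
      labels, _ = spkmeans(X, k=5)
      print(np.bincount(labels, minlength=5))           # cluster sizes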

  4. Introduction to the special section on mixture modeling in personality assessment.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  5. Predicting the shock compression response of heterogeneous powder mixtures

    NASA Astrophysics Data System (ADS)

    Fredenburg, D. A.; Thadhani, N. N.

    2013-06-01

    A model framework for predicting the dynamic shock-compression response of heterogeneous powder mixtures using readily obtained measurements from quasi-static tests is presented. Low-strain-rate compression data are first analyzed to determine the region of the bulk response over which particle rearrangement does not contribute to compaction. This region is then fit to determine the densification modulus of the mixture, σD, a newly defined parameter describing the resistance of the mixture to yielding. The measured densification modulus, reflective of the diverse yielding phenomena that occur at the meso-scale, is implemented in a rate-independent formulation of the P-α model, which is combined with an isobaric equation of state to predict the low- and high-stress dynamic compression response of heterogeneous powder mixtures. The framework is applied to two metal + metal-oxide (thermite) powder mixtures, and good agreement between model and experiment is obtained for all mixtures at stresses near and above those required to reach full density. At lower stresses, rate dependencies of the constituents, specifically those of the matrix constituent, determine the ability of the model to predict the measured response in the incomplete compaction regime.

  6. Feasibility of correlating separation of ternary mixtures of neutral analytes via thin layer chromatography with supercritical fluid chromatography in support of green flash separations.

    PubMed

    Ashraf-Khorassani, M; Yan, Q; Akin, A; Riley, F; Aurigemma, C; Taylor, L T

    2015-10-30

    Method development for normal phase flash liquid chromatography traditionally employs preliminary screening using thin layer chromatography (TLC) with conventional solvents on bare silica. Extension to green flash chromatography, via correlation of TLC migration results obtained with conventional polar/nonpolar liquid mixtures and packed-column supercritical fluid chromatography (SFC) retention times obtained via gradient elution on bare silica with a suite of carbon dioxide mobile-phase modifiers, is reported. The feasibility of TLC/SFC correlation is described individually for eight ternary mixtures, for a total of 24 neutral analytes. The experimental criterion for TLC/SFC correlation was assumed to be as follows: SFC/UV/MS retention (tR) increases among each of the three resolved mixture components, while TLC migration (Rf) decreases among the same resolved mixture components. Successful correlation of TLC to SFC was observed for most of the polar organic solvents tested, with the best results observed via SFC on bare silica with methanol as the CO2 modifier and TLC on bare silica with a methanol/dichloromethane mixture. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Antihyperlipidemic Effect of a Polyherbal Mixture in Streptozotocin-Induced Diabetic Rats

    PubMed Central

    Shafiee-Nick, Reza; Rakhshandeh, Hassan; Borji, Abasalt

    2013-01-01

    The effects of a polyherbal mixture containing Allium sativum, Cinnamomum zeylanicum, Citrullus colocynthis, Juglans regia, Nigella sativa, Olea europaea, Punica granatum, Salvia officinalis, Teucrium polium, Trigonella foenum, Urtica dioica, and Vaccinium arctostaphylos were tested on biochemical parameters in diabetic rats. The animals were randomized into three groups: (1) normal control, (2) diabetic control, and (3) diabetic rats which received diet containing 15% (w/w) of this mixture for 4 weeks. Diabetes was induced by intraperitoneal injection of streptozotocin (55 mg/kg). At the end of experiment, the mixture had no significant effect on serum hepatic enzymes, aspartate aminotransferase, and alanine aminotransferase activities. However, the level of fasting blood glucose, water intake, and urine output in treated group was lower than that in diabetic control rats (P < 0.01). Also, the levels of triglyceride and total cholesterol in polyherbal mixture treated rats were significantly lower than those in diabetic control group (P < 0.05). Our results demonstrated that this polyherbal mixture has beneficial effects on blood glucose and lipid profile and it has the potential to be used as a dietary supplement for the management of diabetes. PMID:24383002

  8. D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.

    PubMed

    Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W

    2005-12-01

    Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.

  9. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    NASA Astrophysics Data System (ADS)

    Konishi, C.

    2014-12-01

    A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e. gravel, sand, and clay). The well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particles and considers the rest of the volume to be that of the large particles. This simple approach has been commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows an impact of the presence of another large particle; i.e. only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of considering the gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosity of the two cases, one in which sand is the smaller particle and the other in which sand is the larger particle, can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as a weighted average of the two cases by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay has zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation. Furthermore, elastic properties are obtainable from the general Hashin-Shtrikman-Walpole bounds. The predictions of this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: A part of this study was accomplished with a subsidy of the River Environment Fund of Japan.
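
    A minimal sketch of the final permeability step, assuming the common Kozeny-Carman form k = d^2 * phi^3 / (180 * (1 - phi)^2) with effective grain size d and effective porosity phi; the paper's mixing rules for deriving d and phi from the three volume fractions are not reproduced here.

      def kozeny_carman(d_eff, phi_eff):
          """Permeability (in d_eff**2 units, e.g. m^2) from the Kozeny-Carman
          relation k = d^2 * phi^3 / (180 * (1 - phi)^2)."""
          return d_eff**2 * phi_eff**3 / (180.0 * (1.0 - phi_eff)**2)

      # Illustrative values: 0.2 mm effective grain size, 30% effective porosity
      print(kozeny_carman(d_eff=0.2e-3, phi_eff=0.30), "m^2")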

  10. High/variable mixture ratio O2/H2 engine

    NASA Technical Reports Server (NTRS)

    Adams, A.; Parsley, R. C.

    1988-01-01

    Vehicle/engine analysis studies have identified the High/Dual Mixture Ratio O2/H2 Engine cycle as a leading candidate for an advanced Single Stage to Orbit (SSTO) propulsion system. This cycle is designed to allow operation at a higher than normal O/F ratio of 12 during liftoff and then transition to a more optimum O/F ratio of 6 at altitude. While operation at high mixture ratios lowers specific impulse, the resultant high propellant bulk density and high power density combine to minimize the influence of atmospheric drag and low altitude gravitational forces. Transition to a lower mixture ratio at altitude then provides improved specific impulse relative to a single mixture ratio engine that must select a mixture ratio that is balanced for both low and high altitude operation. This combination of increased altitude specific impulse and high propellant bulk density more than offsets the compromised low altitude performance and results in an overall mission benefit. Two areas of technical concern relative to the execution of this dual mixture ratio cycle concept are addressed. First, actions required to transition from high to low mixture ratio are examined, including an assessment of the main chamber environment as the main chamber mixture ratio passes through stoichiometric. Secondly, two approaches to meet a requirement for high turbine power at high mixture ratio condition are examined. One approach uses high turbine temperature to produce the power and requires cooled turbines. The other approach incorporates an oxidizer-rich preburner to increase turbine work capability via increased turbine mass flow.

  11. A numerical study of granular dam-break flow

    NASA Astrophysics Data System (ADS)

    Pophet, N.; Rébillout, L.; Ozeren, Y.; Altinakar, M.

    2017-12-01

    Accurate prediction of granular flow behavior is essential to optimize mitigation measures for hazardous natural granular flows such as landslides, debris flows and tailings-dam break flows. So far, most successful models for these types of flows focus on either pure granular flows or flows of saturated grain-fluid mixtures, employing a constant friction model or more complex rheological models. These saturated models often produce non-physical results when applied to flows of partially saturated mixtures; therefore, more advanced models are needed. A numerical model was developed for granular flow employing constant friction and μ(I) rheology (Jop et al., J. Fluid Mech. 2005) coupled with a groundwater flow model for seepage flow. The granular flow is simulated by solving a mixture model with the Finite Volume Method (FVM). The Volume-of-Fluid (VOF) technique is used to capture the free surface motion. The constant friction and μ(I) rheological models are incorporated in the mixture model. The seepage flow is modeled by solving the Richards equation. A framework is developed to couple these two solvers in OpenFOAM. The model was validated and tested by reproducing laboratory experiments of partially and fully channelized dam-break flows of dry and initially saturated granular material. To obtain appropriate parameters for the rheological models, a series of simulations with different sets of rheological parameters was performed. The simulation results obtained with the constant friction and μ(I) rheological models are compared with laboratory experiments for the granular free-surface interface, front position and velocity field during the flows. The numerical predictions indicate that the proposed model is promising in predicting the dynamics of the flow and the deposition process. The proposed model may provide more reliable insight than previous saturated mixture models when saturated and partially saturated portions of the granular mixture co-exist.
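
    For reference, a minimal sketch of the μ(I) friction law cited above, μ(I) = μ_s + (μ_2 − μ_s) / (1 + I_0/I) with inertial number I = γ̇ d / sqrt(P/ρ); the default parameter values are commonly quoted glass-bead values, not those calibrated in this study.

      import math

      def inertial_number(shear_rate, d, pressure, rho):
          """I = gamma_dot * d / sqrt(P / rho), for grain diameter d (m),
          pressure P (Pa) and grain density rho (kg/m^3)."""
          return shear_rate * d / math.sqrt(pressure / rho)

      def mu_of_I(I, mu_s=0.38, mu_2=0.64, I0=0.279):
          """mu(I) law: friction rises from mu_s at vanishing I toward mu_2."""
          return mu_s + (mu_2 - mu_s) / (1.0 + I0 / I)

      I = inertial_number(shear_rate=10.0, d=1e-3, pressure=500.0, rho=2500.0)
      print(I, mu_of_I(I))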

  12. Kinematics and dynamics of salt movement driven by sub-salt normal faulting and supra-salt sediment accumulation - combined analogue experiments and analytical calculations

    NASA Astrophysics Data System (ADS)

    Warsitzka, Michael; Kukowski, Nina; Kley, Jonas

    2017-04-01

    In extensional sedimentary basins, the movement of ductile salt is mainly controlled by the vertical displacement of the salt layer, differential loading due to syn-kinematic deposition, and tectonic shearing at the top and the base of the salt layer. During basement normal faulting, salt either tends to flow downward to the basin centre driven by its own weight or it is squeezed upward due to differential loading. In analogue experiments and analytical models, we address the interplay between normal faulting of the sub-salt basement, compaction and density inversion of the supra-salt cover and the kinematic response of the ductile salt layer. The analogue experiments consist of a ductile substratum (silicone putty) beneath a denser cover layer (sand mixture). Both layers are displaced by normal faults mimicked through a downward moving block within the rigid base of the experimental apparatus and the resulting flow patterns in the ductile layer are monitored and analysed. In the computational models using an analytical approximative solution of the Navier-Stokes equation, the steady-state flow velocity in an idealized natural salt layer is calculated in order to evaluate how flow patterns observed in the analogue experiments can be translated to nature. The analytical calculations provide estimations of the prevailing direction and velocity of salt flow above a sub-salt normal fault. The results of both modelling approaches show that under most geological conditions salt moves downwards to the hanging wall side as long as vertical offset and compaction of the cover layer are small. As soon as an effective average density of the cover is exceeded, the direction of the flow velocity reverses and the viscous material is squeezed towards the elevated footwall side. The analytical models reveal that upward flow occurs even if the average density of the overburden does not exceed the density of salt. By testing various scenarios with different layer thicknesses, displacement rate or lithological parameters of the cover, our models suggest that the reversal of material flow usually requires vertical displacements between 700 and 2000 m. The transition from downward to upward flow occurs at smaller fault displacements, if the initial overburden thickness and the overburden density are high and if sedimentation rate keeps pace with the displacement rate of the sub-salt normal fault.

  13. Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1994-01-01

    The primary accomplishments of the project were as follows: (1) From an overall standpoint, the primary accomplishment of this research was the development of a complete gasdynamic-radiatively coupled nonequilibrium viscous shock layer solution method for axisymmetric blunt bodies. This method can be used for rapid engineering modeling of nonequilibrium re-entry flowfields over a wide range of conditions. (2) Another significant accomplishment was the development of an air radiation model that included local thermodynamic nonequilibrium (LTNE) phenomena. (3) As part of this research, three electron-electronic energy models were developed. The first was a quasi-equilibrium electron (QEE) model which determined an effective free electron temperature and assumed that the electronic states were in equilibrium with the free electrons. The second was a quasi-equilibrium electron-electronic (QEEE) model which computed an effective electron-electronic temperature. The third model was a full electron-electronic (FEE) differential equation model which included convective, collisional, viscous, conductive, vibrational coupling, and chemical effects on electron-electronic energy. (4) Since vibration-dissociation coupling phenomena as well as vibrational thermal nonequilibrium phenomena are important in the nonequilibrium zone behind a shock front, a vibrational energy and vibration-dissociation coupling model was developed and included in the flowfield model. This model was a modified coupled vibrational dissociation vibrational (MCVDV) model and also included electron-vibrational coupling. (5) Another accomplishment of the project was the use of the developed models to investigate radiative heating. (6) A multi-component diffusion model, which properly models the multi-component nature of diffusion in complex gas mixtures such as air, was developed and incorporated into the blunt body model. (7) A model was developed to predict the magnitude and characteristics of the shock wave precursor ahead of vehicles entering the Earth's atmosphere. (8) Since considerable data exist for radiating nonequilibrium flow behind normal shock waves, a normal shock wave version of the blunt body code was developed. (9) By comparing predictions from the models and codes with available normal shock data and the flight data of Fire II, it is believed that the developed flowfield and nonequilibrium radiation models have been essentially validated for engineering applications.

  14. Mixture theory-based poroelasticity as a model of interstitial tissue growth

    PubMed Central

    Cowin, Stephen C.; Cardoso, Luis

    2011-01-01

    This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues. PMID:22184481

  15. Mixture theory-based poroelasticity as a model of interstitial tissue growth.

    PubMed

    Cowin, Stephen C; Cardoso, Luis

    2012-01-01

    This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues.

  16. Differential gene expression pattern in human mammary epithelial cells induced by realistic organochlorine mixtures described in healthy women and in women diagnosed with breast cancer.

    PubMed

    Rivero, Javier; Henríquez-Hernández, Luis Alberto; Luzardo, Octavio P; Pestano, José; Zumbado, Manuel; Boada, Luis D; Valerón, Pilar F

    2016-03-30

    Organochlorine pesticides (OCs) have been associated with breast cancer development and progression, but the mechanisms underlying this phenomenon are not well known. In this work, we evaluated the effects exerted on normal human mammary epithelial cells (HMEC) by the OC mixtures most frequently detected in healthy women (H-mixture) and in women diagnosed with breast cancer (BC-mixture), as identified in a previous case-control study developed in Spain. Cytotoxicity and the gene expression profile of human kinases (n=68) and non-kinases (n=26) were tested at concentrations similar to those described in the serum of those cases and controls. Although both mixtures caused a down-regulation of genes involved in the ATP binding process, our results clearly indicate that the two mixtures may exert very different effects on the gene expression profile of HMEC. Thus, while the BC-mixture up-regulated the expression of oncogenes associated with breast cancer (GFRA1 and BHLHB8), the H-mixture down-regulated the expression of tumor suppressor genes (EPHA4 and EPHB2). Our results indicate that the composition of the OC mixture could play a role in the initiation processes of breast cancer. In addition, the present results suggest that subtle changes in the composition and levels of pollutants involved in environmentally relevant mixtures might induce very different biological effects, which explains, at least partially, why some mixtures seem to be more carcinogenic than others. Nonetheless, our findings confirm that environmentally relevant pollutants may modulate the expression of genes closely related to carcinogenic processes in the breast, reinforcing the role exerted by the environment in the regulation of genes involved in breast carcinogenesis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Miscibility and interactions of animal and plant sterols with choline plasmalogen in binary and multicomponent model systems.

    PubMed

    Hąc-Wydro, Katarzyna; Luty, Katarzyna

    2014-04-01

    In this work the miscibility and interactions of sterols with choline plasmalogen (PC-plasm) in Langmuir monolayers were studied. Moreover, the properties of cholesterol/phosphatidylcholine/plasmalogen mixtures with different PC-plasm concentrations were investigated. The foregoing systems were treated as a model of cancer cell membranes, which have a higher plasmalogen level than normal cells. Finally, the influence of β-sitosterol and stigmasterol (phytosterols differing in anticancer potency) on these mixtures was verified. The properties of the monolayers were analyzed based on the parameters derived from the surface pressure-area isotherms and images taken with a Brewster Angle Microscope. It was found that at 30% sterol in the sterol/plasmalogen monolayer the lipids are immiscible and 3D crystallites are formed within the film. Cholesterol molecules mix favorably with PC-plasm at Xchol ≥ 0.5, while the investigated phytosterols do so only at their prevailing proportion in the binary system. An increase of choline plasmalogen in the cholesterol/phosphatidylcholine monolayer causes destabilization of the system. Moreover, the incorporation of phytosterols into cholesterol/phosphatidylcholine+PC-plasm mixtures disturbed membrane morphology, and this effect was stronger for β-sitosterol than for stigmasterol. It was concluded that the presence of the vinyl ether bond at the sn-1 position in the PC-plasm molecule strongly affects the miscibility of choline plasmalogen with sterols. The comparison of the collected data with those reported in the literature allowed one to conclude that the miscibility and interactions of sterols with PC-plasm are less favorable than those with phosphatidylcholine. It was also suggested that overexpression of plasmalogens in cancer cell membranes may be a factor differentiating the sensitivity of cells to the anticancer effect of phytosterols. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Thermodynamic optimization of mixed refrigerant Joule- Thomson systems constrained by heat transfer considerations

    NASA Astrophysics Data System (ADS)

    Hinze, J. F.; Klein, S. A.; Nellis, G. F.

    2015-12-01

    Mixed refrigerant (MR) working fluids can significantly increase the cooling capacity of a Joule-Thomson (JT) cycle. The optimization of MRJT systems has been the subject of substantial research. However, most optimization techniques do not model the recuperator in sufficient detail. For example, the recuperator is usually assumed to have a heat transfer coefficient that does not vary with the mixture. Ongoing work at the University of Wisconsin-Madison has shown that the heat transfer coefficients for two-phase flow are approximately three times greater than for a single phase mixture when the mixture quality is between 15% and 85%. As a result, a system that optimizes a MR without also requiring that the flow be in this quality range may require an extremely large recuperator or not achieve the performance predicted by the model. To ensure optimal performance of the JT cycle, the MR should be selected such that it is entirely two-phase within the recuperator. To determine the optimal MR composition, a parametric study was conducted assuming a thermodynamically ideal cycle. The results of the parametric study are graphically presented on a contour plot in the parameter space consisting of the extremes of the qualities that exist within the recuperator. The contours show constant values of the normalized refrigeration power. This ‘map’ shows the effect of MR composition on the cycle performance and it can be used to select the MR that provides a high cooling load while also constraining the recuperator to be two phase. The predicted best MR composition can be used as a starting point for experimentally determining the best MR.

  19. The Hb A variant (beta73 Asp-->Leu) disrupts Hb S polymerization by a novel mechanism.

    PubMed

    Adachi, Kazuhiko; Ding, Min; Surrey, Saul; Rotter, Maria; Aprelev, Alexey; Zakharov, Mikhail; Weng, Weijun; Ferrone, Frank A

    2006-09-22

    Polymerization of a 1:1 mixture of hemoglobin S (Hb S) and the artificial mutant HbAbeta73Leu produces a dramatic morphological change in the polymer domains in 1.0 M phosphate buffer that are a characteristic feature of polymer formation. Instead of the feathery domains with quasi 2-fold symmetry that characterize polymerization of Hb S and all previously known mixtures, such as Hb A/S and Hb F/S mixtures, these domains are compact structures of quasi-spherical symmetry. The solubility of Hb S/Abeta73Leu mixtures was similar to that of Hb S/F mixtures. Kinetics of polymerization indicated that homogeneous nucleation rates of Hb S/Abeta73Leu mixtures were the same as those of Hb S/F mixtures, while exponential polymer growth (B) of Hb S/Abeta73Leu mixtures was about three times slower than that of Hb S/F mixtures. Differential interference contrast (DIC) image analysis, based on direct measurements of the exponential growth of polymer mass in a domain, also showed that fibers in the mixture appear to elongate three to five times more slowly than in equivalent Hb S/F mixtures. We propose that these results for Hb S/Abeta73Leu mixtures arise from non-productive binding of the hybrid species of this mixture to the end of the growing polymer. This "cap" prohibits growth of polymers, but by nature is temporary, so that the net effect is a lowered growth rate of polymers. Such a cap is consistent with known features of the structure of the Hb S polymer. Domains would be more spherulitic because slower growth provides more opportunity for fiber bending to spread domains from their initial 2-fold symmetry. Moreover, since monomer depletion proceeds more slowly in this mixture, more homogeneous nucleation events occur, and the resulting gel has a far more granular character than is normally seen in mixtures of non-polymerizing hemoglobins with Hb S. This mixture is likely to be less stiff than polymerized mixtures of other hybrids, such as Hb S with HbF, potentially providing a novel approach to therapy.

  20. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete

    PubMed Central

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-01-01

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and to investigate extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of −1 to +1, eight axial mixtures were prepared at extreme values of −2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A), on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model. PMID:28787990
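
    The run structure described (16 fractional factorial points at −1/+1, axial points at −2/+2, and replicated center points) resembles a central composite layout; the sketch below generates such coded design points, assuming a 2^(5-1) half fraction with defining relation I = ABCDE. A full five-factor composite design has 10 axial points, whereas the paper used 8, so this is a generic illustration rather than a reconstruction.

      import numpy as np
      from itertools import product

      # 2^(5-1) half fraction in coded units, assuming defining relation I = ABCDE
      base = np.array(list(product([-1, 1], repeat=4)))
      factorial = np.column_stack([base, base.prod(axis=1)])  # E = A*B*C*D

      # Axial (star) points at +/-2, two per factor, plus center replicates
      axial = np.vstack([s * 2 * np.eye(5)[i] for i in range(5) for s in (-1, 1)])
      center = np.zeros((4, 5))

      design = np.vstack([factorial, axial, center])
      print(design.shape)  # (16 + 10 + 4) runs x 5 coded factors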

  1. The toxicity potential of pharmaceuticals found in the Douro River estuary (Portugal)--experimental assessment using a zebrafish embryo test.

    PubMed

    Madureira, Tânia Vieira; Cruzeiro, Catarina; Rocha, Maria João; Rocha, Eduardo

    2011-09-01

    Fish embryos are a particularly vulnerable stage of development, so they represent optimal targets for screening toxicological effects of waterborne xenobiotics. Herein, the toxicity potential of two mixtures of pharmaceuticals was evaluated using a zebrafish embryo test. One of the mixtures corresponds to an environmentally realistic scenario, and both contain carbamazepine, fenofibric acid, propranolol, trimethoprim and sulfamethoxazole. The results revealed morphological alterations, such as spinal deformities and yolk-sac oedemas. Moreover, heart rates decreased after both mixture exposures, e.g., at 48 hpf, highest mixture versus blank control (47.8±4.9 and 55.8±3.7 beats/30 s, respectively). The tail lengths also diminished significantly, from 3208±145 μm in the blank control to 3130±126 μm in the highest mixture. The toxicological effects were concentration dependent. Mortality, hatching rate and the number of spontaneous movements were not affected. However, the low levels of pharmaceuticals did interfere with the normal development of zebrafish, which indicates risks for wild organisms. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Modelling stock order flows with non-homogeneous intensities from high-frequency data

    NASA Astrophysics Data System (ADS)

    Gorshenin, Andrey K.; Korolev, Victor Yu.; Zeifman, Alexander I.; Shorgin, Sergey Ya.; Chertok, Andrey V.; Evstafyev, Artem I.; Korchagin, Alexander Yu.

    2013-10-01

    A micro-scale model is proposed for the evolution of such information system as the limit order book in financial markets. Within this model, the flows of orders (claims) are described by doubly stochastic Poisson processes taking account of the stochastic character of intensities of buy and sell orders that determine the price discovery mechanism. The proposed multiplicative model of stochastic intensities makes it possible to analyze the characteristics of the order flows as well as the instantaneous proportion of the forces of buyers and sellers, that is, the imbalance process, without modelling the external information background. The proposed model gives the opportunity to link the micro-scale (high-frequency) dynamics of the limit order book with the macro-scale models of stock price processes of the form of subordinated Wiener processes by means of limit theorems of probability theory and hence, to use the normal variance-mean mixture models of the corresponding heavy-tailed distributions. The approach can be useful in different areas with similar properties (e.g., in plasma physics).
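
    Since the core construction here is a doubly stochastic Poisson (Cox) process, a minimal simulation sketch may help fix ideas: order counts are Poisson conditional on a randomly evolving intensity. The square-root intensity dynamics, the common driver for buys and sells, and all parameter values below are illustrative assumptions, not the authors' specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_cox_order_flow(T=1.0, dt=1e-3, lam0=100.0,
                                kappa=5.0, theta=100.0, sigma=30.0):
        """Simulate buy/sell order counts driven by a stochastic
        intensity (CIR-type diffusion, an illustrative assumption)."""
        n = int(T / dt)
        lam = np.empty(n)
        lam[0] = lam0
        for t in range(1, n):
            # Euler step of the square-root intensity, floored at zero
            step = (kappa * (theta - lam[t-1]) * dt
                    + sigma * np.sqrt(max(lam[t-1], 0.0) * dt)
                    * rng.standard_normal())
            lam[t] = max(lam[t-1] + step, 0.0)
        # Conditional on the intensity path, counts are Poisson per step
        buys = rng.poisson(lam * dt)
        sells = rng.poisson(lam * dt)
        imbalance = np.cumsum(buys - sells)  # proxy for buyer/seller imbalance
        return lam, buys, sells, imbalance

    lam, buys, sells, imb = simulate_cox_order_flow()
    print(buys.sum(), sells.sum(), imb[-1])
    ```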

  3. The aeromedical significance of sickle-cell trait : a review.

    DOT National Transportation Integrated Search

    1976-01-01

    This report presents some of the technical background necessary for understanding the aeromedical importance of sickle-cell disease and the sickle-trait carrier, whose erythrocytes contain mixtures of hemoglobin S and normal hemoglobin A. This carrier...

  4. The potential for chemical mixtures from the environment to enable the cancer hallmark of sustained proliferative signalling

    PubMed Central

    Engström, Wilhelm; Darbre, Philippa; Eriksson, Staffan; Gulliver, Linda; Hultman, Tove; Karamouzis, Michalis V.; Klaunig, James E.; Mehta, Rekha; Moorwood, Kim; Sanderson, Thomas; Sone, Hideko; Vadgama, Pankaj; Wagemaker, Gerard; Ward, Andrew; Singh, Neetu; Al-Mulla, Fahd; Al-Temaimi, Rabeah; Amedei, Amedeo; Colacci, Anna Maria; Vaccari, Monica; Mondello, Chiara; Scovassi, A. Ivana; Raju, Jayadev; Hamid, Roslida A.; Memeo, Lorenzo; Forte, Stefano; Roy, Rabindra; Woodrick, Jordan; Salem, Hosni K.; Ryan, Elizabeth; Brown, Dustin G.; Bisson, William H.

    2015-01-01

    The aim of this work is to review current knowledge relating the established cancer hallmark, sustained cell proliferation to the existence of chemicals present as low dose mixtures in the environment. Normal cell proliferation is under tight control, i.e. cells respond to a signal to proliferate, and although most cells continue to proliferate into adult life, the multiplication ceases once the stimulatory signal disappears or if the cells are exposed to growth inhibitory signals. Under such circumstances, normal cells remain quiescent until they are stimulated to resume further proliferation. In contrast, tumour cells are unable to halt proliferation, either when subjected to growth inhibitory signals or in the absence of growth stimulatory signals. Environmental chemicals with carcinogenic potential may cause sustained cell proliferation by interfering with some cell proliferation control mechanisms committing cells to an indefinite proliferative span. PMID:26106143

  5. Catalytic distillation process

    DOEpatents

    Smith, Jr., Lawrence A.

    1982-01-01

    A method for conducting chemical reactions and fractionation of the reaction mixture comprising feeding reactants to a distillation column reactor into a feed zone and concurrently contacting the reactants with a fixed bed catalytic packing to concurrently carry out the reaction and fractionate the reaction mixture. For example, a method for preparing methyl tertiary butyl ether in high purity from a mixed feed stream of isobutene and normal butene comprising feeding the mixed feed stream to a distillation column reactor into a feed zone at the lower end of a distillation reaction zone, and methanol into the upper end of said distillation reaction zone, which is packed with a properly supported cationic ion exchange resin, contacting the C4 feed and methanol with the catalytic distillation packing to react methanol and isobutene, and concurrently fractionating the ether from the column below the catalytic zone and removing normal butene overhead above the catalytic zone.

  6. Catalytic distillation process

    DOEpatents

    Smith, L.A. Jr.

    1982-06-22

    A method is described for conducting chemical reactions and fractionation of the reaction mixture comprising feeding reactants to a distillation column reactor into a feed zone and concurrently contacting the reactants with a fixed bed catalytic packing to concurrently carry out the reaction and fractionate the reaction mixture. For example, a method for preparing methyl tertiary butyl ether in high purity from a mixed feed stream of isobutene and normal butene comprising feeding the mixed feed stream to a distillation column reactor into a feed zone at the lower end of a distillation reaction zone, and methanol into the upper end of said distillation reaction zone, which is packed with a properly supported cationic ion exchange resin, contacting the C4 feed and methanol with the catalytic distillation packing to react methanol and isobutene, and concurrently fractionating the ether from the column below the catalytic zone and removing normal butene overhead above the catalytic zone.

  7. NGMIX: Gaussian mixture models for 2D images

    NASA Astrophysics Data System (ADS)

    Sheldon, Erin

    2015-08-01

    NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
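
    As a concrete illustration of the underlying idea (not NGMIX's actual API), the sketch below renders an image as a sum of elliptical 2D Gaussians with full covariances; the component fluxes, centers, and covariances are arbitrary illustrative values.

    ```python
    import numpy as np

    def gauss2d_image(shape, components):
        """Render an image as a sum of 2D Gaussians.

        components: list of (flux, x0, y0, sxx, sxy, syy), with a full
        covariance so elliptical profiles are possible.
        """
        y, x = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
        img = np.zeros(shape)
        for flux, x0, y0, sxx, sxy, syy in components:
            det = sxx * syy - sxy**2
            dx, dy = x - x0, y - y0
            # Quadratic form of the inverse covariance matrix
            chi2 = (syy * dx**2 - 2 * sxy * dx * dy + sxx * dy**2) / det
            img += flux / (2 * np.pi * np.sqrt(det)) * np.exp(-0.5 * chi2)
        return img

    # Three cocentric, co-elliptical components approximating a smooth profile
    img = gauss2d_image((64, 64), [(1.0, 32, 32, 4.0, 1.0, 3.0),
                                   (0.5, 32, 32, 16.0, 4.0, 12.0),
                                   (0.1, 32, 32, 64.0, 16.0, 48.0)])
    print(img.sum())  # total flux, up to edge truncation
    ```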

  8. Foetal hypothalamic and pituitary expression of gonadotrophin-releasing hormone and galanin systems is disturbed by exposure to sewage sludge chemicals via maternal ingestion.

    PubMed

    Bellingham, M; Fowler, P A; Amezaga, M R; Whitelaw, C M; Rhind, S M; Cotinot, C; Mandon-Pepin, B; Sharpe, R M; Evans, N P

    2010-06-01

    Animals and humans are chronically exposed to endocrine disrupting chemicals (EDCs) that are ubiquitous in the environment. There are strong circumstantial links between environmental EDC exposure and both declining human/wildlife reproductive health and the increasing incidence of reproductive system abnormalities. The verification of such links, however, is difficult and requires animal models exposed to 'real life', environmentally relevant concentrations/mixtures of environmental contaminants (ECs), particularly in utero, when sensitivity to EC exposure is high. The present study aimed to determine whether the foetal sheep reproductive neuroendocrine axis, particularly the gonadotrophin-releasing hormone (GnRH) and galaninergic systems, was affected by maternal exposure to a complex mixture of chemicals, applied to pasture, in the form of sewage sludge. Sewage sludge contains high concentrations of a spectrum of EDCs and other pollutants, relative to environmental concentrations, but is frequently recycled to land as a fertiliser. We found that foetuses exposed to the EDC mixture in utero through their mothers had lower GnRH mRNA expression in the hypothalamus and lower GnRH receptor (GnRHR) and galanin receptor (GALR) mRNA expression in the hypothalamus and pituitary gland. Strikingly, this treatment had no significant effect on maternal GnRH or GnRHR mRNA expression, although GALR mRNA expression within the maternal hypothalamus and pituitary gland was reduced. The present study clearly demonstrates that the developing foetal neuroendocrine axis is sensitive to real-world mixtures of environmental chemicals. Given the important role of GnRH and GnRHR in the regulation of reproductive function, its known programming role in utero, and the role of galanin in the regulation of many physiological/neuroendocrine systems, in utero changes in the activity of these systems are likely to have long-term consequences in adulthood and represent a novel pathway through which EC mixtures could perturb normal reproductive function.

  9. Studies on drug metabolism by fungi colonizing decomposing human cadavers. Part II: biotransformation of five model drugs by fungi isolated from post-mortem material.

    PubMed

    Martínez-Ramírez, Jorge A; Walther, Grit; Peters, Frank T

    2015-04-01

    The present study investigated the in vitro metabolic capacity of 28 fungal strains isolated from post-mortem material towards five model drugs: amitriptyline, metoprolol, mirtazapine, promethazine, and zolpidem. Each fungal strain was incubated at 25 °C for up to 120 h with each of the five model drugs. Cunninghamella elegans was used as positive control. Aliquots of the incubation mixture were centrifuged and 50 μL of the supernatants were diluted and directly analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) with product ion scanning. The remaining mixture was analyzed by full scan gas chromatography-mass spectrometry (GC-MS) after liquid-liquid extraction and acetylation. The metabolic activity was evaluated through the total number of detected metabolites (NDM) produced for each model drug and fungal strain and the percentage of parent drug remaining (%RPD) after up to five days of incubation. All the tested fungal strains were capable of forming mammalian phase I metabolites. Fungi from the normal fungal flora of the human body, such as Candida sp., Geotrichum candidum, and Trichosporon asahii, formed up to seven metabolites at %RPD values greater than 52% but no new fungal metabolites (NFM). In contrast, some airborne fungal strains like Bjerkandera adusta, Chaetomium sp., Coriolopsis sp., Fusarium solani and Mucor plumbeus showed NDM values exceeding those of the positive control, complete metabolism of the parent drug in some models and formation of NFM. NFM (numbers in brackets) were detected in four of the five model drugs: amitriptyline (18), metoprolol (4), mirtazapine (8), and zolpidem (2). The latter NFM are potential candidates for marker substances indicating post-mortem fungal metabolism. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Estimating fractional vegetation cover and the vegetation index of bare soil and highly dense vegetation with a physically based method

    NASA Astrophysics Data System (ADS)

    Song, Wanjuan; Mu, Xihan; Ruan, Gaiyan; Gao, Zhan; Li, Linyuan; Yan, Guangjian

    2017-06-01

    Normalized difference vegetation index (NDVI) of highly dense vegetation (NDVIv) and bare soil (NDVIs), identified as the key parameters for Fractional Vegetation Cover (FVC) estimation, are usually obtained with empirical statistical methods. However, it is often difficult to obtain reasonable values of NDVIv and NDVIs at a coarse resolution (e.g., 1 km), or in arid, semiarid, and evergreen areas. The uncertainty of estimated NDVIs and NDVIv can cause substantial errors in FVC estimations when a simple linear mixture model is used. To address this problem, this paper proposes a physically based method. The leaf area index (LAI) and directional NDVI are introduced in a gap fraction model and a linear mixture model for FVC estimation to calculate NDVIv and NDVIs. The model incorporates the Moderate Resolution Imaging Spectroradiometer (MODIS) Bidirectional Reflectance Distribution Function (BRDF) model parameters product (MCD43B1) and LAI product, which are convenient to acquire. Two types of evaluation experiments are designed: 1) with data simulated by a canopy radiative transfer model and 2) with satellite observations. The root-mean-square deviation (RMSD) for simulated data is less than 0.117, depending on the type of noise added to the data. In the real data experiment, the RMSD for cropland is 0.127, for grassland is 0.075, and for forest is 0.107. The experimental areas respectively lack fully vegetated and non-vegetated pixels at 1 km resolution. Consequently, a relatively large uncertainty is found while using the statistical methods, and the RMSD ranges from 0.110 to 0.363 based on the real data. The proposed method is convenient for producing NDVIv and NDVIs maps for FVC estimation on regional and global scales.
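
    The linear mixture model referred to here treats each pixel's NDVI as an area-weighted average of a fully vegetated endmember and a bare-soil endmember, so FVC follows by inversion. A minimal sketch (the endmember values below are illustrative; the paper derives them per pixel from MODIS BRDF parameters and LAI rather than fixing them globally):

    ```python
    import numpy as np

    def fvc_linear_mixture(ndvi, ndvi_s, ndvi_v):
        """Linear NDVI mixture model: NDVI = FVC*NDVIv + (1-FVC)*NDVIs,
        inverted for the fractional vegetation cover FVC."""
        fvc = (ndvi - ndvi_s) / (ndvi_v - ndvi_s)
        return np.clip(fvc, 0.0, 1.0)

    # Illustrative endmember values for a hypothetical scene
    print(fvc_linear_mixture(np.array([0.2, 0.5, 0.8]), 0.1, 0.85))
    ```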

  11. A non-ideal model for predicting the effect of dissolved salt on the flash point of solvent mixtures.

    PubMed

    Liaw, Horng-Jang; Wang, Tzu-Ai

    2007-03-06

    Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Herein, a liquid with dissolved salt is presented in a salt-distillation process for separating close-boiling or azeotropic systems. The addition of salts to a liquid may reduce fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with the experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in terms of prediction of the flash points of these mixtures. The experimental results confirm marked increases in liquid flash point increment with addition of inorganic salts relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessment of the fire and explosion hazard for solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.

  12. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teng, S.; Tebby, C.

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  13. Determination of Failure Point of Asphalt-Mixture Fatigue-Test Results Using the Flow Number Method

    NASA Astrophysics Data System (ADS)

    Wulan, C. E. P.; Setyawan, A.; Pramesti, F. P.

    2018-03-01

    The failure point of the results of fatigue tests of asphalt mixtures performed in controlled stress mode is difficult to determine. However, several methods from empirical studies are available to solve this problem. The objectives of this study are to determine the fatigue failure point of the results of indirect tensile fatigue tests using the Flow Number Method and to determine the best Flow Number model for the asphalt mixtures tested. In order to achieve these goals, first, the best of three asphalt mixtures was selected based on their Marshall properties. Next, the Indirect Tensile Fatigue Test was performed on the chosen asphalt mixture. The stress-controlled fatigue tests were conducted at a temperature of 20°C and frequency of 10 Hz, with the application of three loads: 500, 600, and 700 kPa. The last step was the application of the Flow Number methods, namely the Three-Stages Model, FNest Model, Francken Model, and Stepwise Method, to the results of the fatigue tests to determine the failure point of the specimen. The chosen asphalt mixture is an EVA (ethyl vinyl acetate) polymer-modified asphalt mixture with 6.5% OBC (Optimum Bitumen Content). Furthermore, the results of this study show that the failure points of the EVA-modified asphalt mixture under loads of 500, 600, and 700 kPa are 6621, 4841, and 611 for the Three-Stages Model; 4271, 3266, and 537 for the FNest Model; 3401, 2431, and 421 for the Francken Model; and 6901, 6841, and 1291 for the Stepwise Method, respectively. These different results show that the larger the load, the smaller the number of cycles to failure. However, the best FN results are shown by the Three-Stages Model and the Stepwise Method, which exhibit extreme increases after the constant development of accumulated strain.

  14. Model Selection Methods for Mixture Dichotomous IRT Models

    ERIC Educational Resources Information Center

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

    This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), the pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…
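
    Although this record concerns mixture IRT models, the mechanics of information-criterion comparison are easy to show with a stand-in Gaussian mixture via scikit-learn; the synthetic data and candidate class counts below are purely illustrative.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # Synthetic two-class data standing in for latent-class structure
    X = np.concatenate([rng.normal(-2, 1, 300),
                        rng.normal(2, 1, 300)]).reshape(-1, 1)

    # Fit candidate numbers of components and compare AIC/BIC
    for k in (1, 2, 3, 4):
        gm = GaussianMixture(n_components=k, random_state=0).fit(X)
        print(k, round(gm.aic(X)), round(gm.bic(X)))  # smaller is better
    ```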

  15. Altered mechano-chemical environment in hip articular cartilage: effect of obesity.

    PubMed

    Travascio, Francesco; Eltoukhy, Moataz; Cami, Sonila; Asfour, Shihab

    2014-10-01

    The production of extracellular matrix (ECM) components of articular cartilage is regulated, among other factors, by an intercellular signaling mechanism mediated by the interaction of cell surface receptors (CSR) with insulin-like growth factor-1 (IGF-1). In ECM, the presence of binding proteins (IGFBP) hinders IGF-1 delivery to CSR. It has been reported that levels of IGF-1 and IGFBP in the obese population are, respectively, lower and higher than those found in the normal population. In this study, an experimental-numerical approach was adopted to quantify the effect of this metabolic alteration found in the obese population on the homeostasis of femoral hip cartilage. A new computational model, based on the mechano-electrochemical mixture theory, was developed to describe competitive binding kinetics of IGF-1 with IGFBP and CSR, and associated glycosaminoglycan (GAG) biosynthesis. Moreover, a gait analysis was carried out on obese and normal subjects to experimentally characterize mechanical loads on hip cartilage during walking. This information was deployed into the model to account for effects of physiologically relevant tissue deformation on GAG production in ECM. Numerical simulations were performed to compare GAG biosynthesis in femoral hip cartilage of normal and obese subjects. Results indicated that the lower ratio of IGF-1 to IGFBP found in the obese population reduces cartilage GAG concentration up to 18% when compared to the normal population. Moreover, moderate physical activity, such as walking, has a modest beneficial effect on GAG production. The findings of this study suggest that IGF-1/IGFBP metabolic unbalance should be accounted for when considering the association of obesity with hip osteoarthritis.

  16. Chemical mixtures in potable water in the U.S.

    USGS Publications Warehouse

    Ryker, Sarah J.

    2014-01-01

    In recent years, regulators have devoted increasing attention to health risks from exposure to multiple chemicals. In 1996, the US Congress directed the US Environmental Protection Agency (EPA) to study mixtures of chemicals in drinking water, with a particular focus on potential interactions affecting chemicals' joint toxicity. The task is complicated by the number of possible mixtures in drinking water and lack of toxicological data for combinations of chemicals. As one step toward risk assessment and regulation of mixtures, the EPA and the Agency for Toxic Substances and Disease Registry (ATSDR) have proposed to estimate mixtures' toxicity based on the interactions of individual component chemicals. This approach permits the use of existing toxicological data on individual chemicals, but still requires additional information on interactions between chemicals and environmental data on the public's exposure to combinations of chemicals. Large compilations of water-quality data have recently become available from federal and state agencies. This chapter demonstrates the use of these environmental data, in combination with the available toxicological data, to explore scenarios for mixture toxicity and develop priorities for future research and regulation. Occurrence data on binary and ternary mixtures of arsenic, cadmium, and manganese are used to parameterize the EPA and ATSDR models for each drinking water source in the dataset. The models' outputs are then mapped at county scale to illustrate the implications of the proposed models for risk assessment and rulemaking. For example, according to the EPA's interaction model, the levels of arsenic and cadmium found in US groundwater are unlikely to have synergistic cardiovascular effects in most areas of the country, but the same mixture's potential for synergistic neurological effects merits further study. Similar analysis could, in future, be used to explore the implications of alternative risk models for the toxicity and interaction of complex mixtures, and to identify the communities with the highest and lowest expected value for regulation of chemical mixtures.

  17. Testing and Improving Theories of Radiative Transfer for Determining the Mineralogy of Planetary Surfaces

    NASA Astrophysics Data System (ADS)

    Gudmundsson, E.; Ehlmann, B. L.; Mustard, J. F.; Hiroi, T.; Poulet, F.

    2012-12-01

    Two radiative transfer theories, the Hapke and Shkuratov models, have been used to estimate the mineralogic composition of laboratory mixtures of anhydrous mafic minerals from reflected near-infrared light, accurately modeling abundances to within 10%. For this project, we tested the efficacy of the Hapke model for determining the composition of mixtures (weight fraction, particle diameter) containing hydrous minerals, including phyllosilicates. Modal mineral abundances for some binary mixtures were modeled to +/-10% of actual values, but other mixtures showed higher inaccuracies (up to 25%). Consequently, a sensitivity analysis of selected input and model parameters was performed. We first examined the shape of the model's error function (RMS error between modeled and measured spectra) over a large range of endmember weight fractions and particle diameters and found that there was a single global minimum for each mixture (rather than local minima). The minimum was sensitive to modeled particle diameter but comparatively insensitive to modeled endmember weight fraction. Derivation of the endmembers' k optical constant spectra using the Hapke model showed differences with the Shkuratov-derived optical constants originally used. Model runs with different sets of optical constants suggest that slight differences in the optical constants used significantly affect the accuracy of model predictions. Even for mixtures where abundance was modeled correctly, particle diameter agreed inconsistently with sieved particle sizes and varied greatly for individual mixtures within a suite. Particle diameter was highly sensitive to the optical constants, possibly indicating that changes in modeled path length (proportional to particle diameter) compensate for changes in the k optical constant. Alternatively, it may not be appropriate to model path length and particle diameter with the same proportionality for all materials. Across mixtures, RMS error increased in proportion to the fraction of the darker endmember. Analyses are ongoing and further studies will investigate the effect of sample hydration, permitted variability in particle size, assumed photometric functions and use of different wavelength ranges on model results. Such studies will advance understanding of how to best apply radiative transfer modeling to geologically complex planetary surfaces. Corresponding authors: eyjolfur88@gmail.com, ehlmann@caltech.edu

  18. Applying mixture toxicity modelling to predict bacterial bioluminescence inhibition by non-specifically acting pharmaceuticals and specifically acting antibiotics.

    PubMed

    Neale, Peta A; Leusch, Frederic D L; Escher, Beate I

    2017-04-01

    Pharmaceuticals and antibiotics co-occur in the aquatic environment but mixture studies to date have mainly focused on pharmaceuticals alone or antibiotics alone, although differences in mode of action may lead to different effects in mixtures. In this study we used the Bacterial Luminescence Toxicity Screen (BLT-Screen) after acute (0.5 h) and chronic (16 h) exposure to evaluate how non-specifically acting pharmaceuticals and specifically acting antibiotics act together in mixtures. Three models were applied to predict mixture toxicity, including concentration addition, independent action and the two-step prediction (TSP) model, which groups similarly acting chemicals together using concentration addition, followed by independent action to combine the two groups. All non-antibiotic pharmaceuticals had similar EC50 values at both 0.5 and 16 h, indicating together with a QSAR (Quantitative Structure-Activity Relationship) analysis that they act as baseline toxicants. In contrast, the antibiotics' EC50 values decreased by up to three orders of magnitude after 16 h, which can be explained by their specific effect on bacteria. Equipotent mixtures of non-antibiotic pharmaceuticals only, antibiotics only and both non-antibiotic pharmaceuticals and antibiotics were prepared based on the single chemical results. The mixture toxicity models were all in close agreement with the experimental results, with predicted EC50 values within a factor of two of the experimental results. This suggests that concentration addition can be applied to bacterial assays to model the mixture effects of environmental samples containing both specifically and non-specifically acting chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.
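
    The two reference models have simple closed forms: under concentration addition the mixture EC50 follows from the components' EC50s weighted by their mixture fractions, while independent action multiplies the probabilities of non-response. A minimal sketch (the log-logistic effect curves and potencies are illustrative assumptions):

    ```python
    import numpy as np

    def ca_ec50(fractions, ec50s):
        """Concentration addition: mixture EC50 from component EC50s and
        the fractions p_i of each component in the mixture."""
        return 1.0 / np.sum(np.asarray(fractions) / np.asarray(ec50s))

    def ia_effect(conc, fractions, effect_fns):
        """Independent action: E_mix = 1 - prod_i(1 - E_i(p_i * c))."""
        return 1.0 - np.prod([1.0 - f(p * conc)
                              for p, f in zip(fractions, effect_fns)])

    # Illustrative log-logistic effect curves (EC50s 1.0 and 4.0, slope 2)
    hill = lambda ec50: (lambda c: c**2 / (c**2 + ec50**2))
    print(ca_ec50([0.5, 0.5], [1.0, 4.0]))                     # -> 1.6
    print(ia_effect(2.0, [0.5, 0.5], [hill(1.0), hill(4.0)]))
    ```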

  19. Combination of near infrared spectroscopy and chemometrics for authentication of taro flour from wheat and sago flour

    NASA Astrophysics Data System (ADS)

    Rachmawati; Rohaeti, E.; Rafi, M.

    2017-05-01

    Taro flour on the market is usually sold at a higher price than wheat and sago flour. This situation could be an incentive for adulteration of taro flour with wheat and sago flour. For this reason, identification and authentication are needed. A combination of near infrared (NIR) spectra with multivariate analysis was used in this study to identify and authenticate taro flour adulterated with wheat and sago flour. The authentication model was developed using mixtures of taro flour adulterated with 5%, 25%, and 50% wheat or sago flour. Before being subjected to multivariate analysis, the NIR spectra were preprocessed using normalization and the standard normal variate (SNV) transformation. We used principal component analysis followed by discriminant analysis to build the identification and authentication model of taro flour. From the results obtained, about 90.48% of the taro flour mixed with wheat flour and 85% of the taro flour mixed with sago flour were successfully classified into their groups. Thus, the combination of NIR spectra with chemometrics could be used for identification and authentication of taro flour adulterated with wheat and sago flour.
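
    SNV is a row-wise standardization of each spectrum, commonly used to remove multiplicative scatter effects before chemometric modeling. A minimal sketch, with synthetic spectra for illustration:

    ```python
    import numpy as np

    def snv(spectra):
        """Standard normal variate: center and scale each spectrum (row)
        by its own mean and standard deviation."""
        spectra = np.asarray(spectra, dtype=float)
        mean = spectra.mean(axis=1, keepdims=True)
        std = spectra.std(axis=1, ddof=1, keepdims=True)
        return (spectra - mean) / std

    # Each row is one NIR spectrum; offset and scaling are removed row-wise
    X = np.array([[0.10, 0.12, 0.15, 0.11],
                  [0.30, 0.36, 0.45, 0.33]])  # same shape, offset/scaled
    print(np.round(snv(X), 2))  # rows become directly comparable
    ```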

  20. Co Laser.

    DTIC Science & Technology

    1976-01-01

    [List-of-figures fragment recovered from OCR residue; recoverable captions: "Comparison of the Experimental and Predicted Temporal Behavior of the Laser Output Pulse for a 20% CO and 80% N2 Mixture"; "Comparison of the Normalized Experimental and Predicted Temporal Behavior of the Laser Output Pulse for a 20% CO and 80% Ar Mixture"; "Predictions of the Temporal Variation of Small..." (truncated). An accompanying plot fragment, listing CO partial pressures of 700, 350, 200, and 100 Torr, was otherwise unrecoverable and has been removed.]

  1. Step-wise supercritical extraction of carbonaceous residua

    DOEpatents

    Warzinski, Robert P.

    1987-01-01

    A method of fractionating a mixture containing high-boiling carbonaceous material and normally solid mineral matter includes processing with a plurality of different supercritical solvents. The mixture is treated with a first solvent of high critical temperature and solvent capacity to extract a large fraction as solute. The solute is released as liquid from the solvent and successively treated with other supercritical solvents of different critical values to extract fractions of differing properties. Fractionation can be supplemented by solute reflux over a temperature gradient, pressure letdown in steps, and extraction at varying temperature and pressure values.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grove, John W.

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.
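
    As a schematic of the closure being discussed (the notation here is assumed for illustration: λ_i are component mass fractions, v_i and e_i the component specific volumes and internal energies), the mass-fraction-average equation of state under pressure/temperature equilibrium can be written:

    ```latex
    % Mass-fraction-average mixture EOS closed by pressure--temperature
    % equilibrium (schematic; \lambda_i are component mass fractions)
    v = \sum_i \lambda_i\, v_i, \qquad
    e = \sum_i \lambda_i\, e_i(v_i, T_i), \qquad
    P_i(v_i, T_i) = P, \quad T_i = T \;\; \text{for all } i
    ```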

  3. Estimating and modeling the cure fraction in population-based cancer survival analysis.

    PubMed

    Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W

    2007-07-01

    In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are two main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the two types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model. The standard forms of the two model types are sketched below.
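
    In the usual relative-survival notation (π the cure fraction, S*(t) the expected background survival, S_u(t) and F(t) the survival and distribution functions for the uncured), the two model types are:

    ```latex
    % Mixture cure model: a fraction \pi is cured, the rest follow S_u(t)
    S(t) = S^{*}(t)\left[\pi + (1-\pi)\,S_u(t)\right]

    % Non-mixture cure model: the cumulative excess hazard tends to -\log\pi
    S(t) = S^{*}(t)\,\pi^{F(t)}, \qquad F(0)=0, \quad F(\infty)=1
    ```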

  4. Process dissociation and mixture signal detection theory.

    PubMed

    DeCarlo, Lawrence T

    2008-11-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely analyzed study. The results suggest that a process other than recollection may be involved in the process dissociation procedure.

  5. Statistical-thermodynamic model for light scattering from eye lens protein mixtures

    NASA Astrophysics Data System (ADS)

    Bell, Michael M.; Ross, David S.; Bautista, Maurino P.; Shahmohamad, Hossein; Langner, Andreas; Hamilton, John F.; Lahnovych, Carrie N.; Thurston, George M.

    2017-02-01

    We model light-scattering cross sections of concentrated aqueous mixtures of the bovine eye lens proteins γB- and α-crystallin by adapting a statistical-thermodynamic model of mixtures of spheres with short-range attractions. The model reproduces measured static light scattering cross sections, or Rayleigh ratios, of γB-α mixtures from dilute concentrations where light scattering intensity depends on molecular weights and virial coefficients, to realistically high concentration protein mixtures like those of the lens. The model relates γB-γB and γB-α attraction strengths and the γB-α size ratio to the free energy curvatures that set light scattering efficiency in tandem with protein refractive index increments. The model includes (i) hard-sphere α-α interactions, which create short-range order and transparency at high protein concentrations, (ii) short-range attractive plus hard-core γ-γ interactions, which produce intense light scattering and liquid-liquid phase separation in aqueous γ-crystallin solutions, and (iii) short-range attractive plus hard-core γ-α interactions, which strongly influence highly non-additive light scattering and phase separation in concentrated γ-α mixtures. The model reveals a new lens transparency mechanism, that prominent equilibrium composition fluctuations can be perpendicular to the refractive index gradient. The model reproduces the concave-up dependence of the Rayleigh ratio on α/γ composition at high concentrations, its concave-down nature at intermediate concentrations, non-monotonic dependence of light scattering on γ-α attraction strength, and more intricate, temperature-dependent features. We analytically compute the mixed virial series for light scattering efficiency through third order for the sticky-sphere mixture, and find that the full model represents the available light scattering data at concentrations several times those where the second and third mixed virial contributions fail. The model indicates that increased γ-γ attraction can raise γ-α mixture light scattering far more than it does for solutions of γ-crystallin alone, and can produce marked turbidity tens of degrees celsius above liquid-liquid separation.

  6. Efficient Bayesian mixed model analysis increases association power in large cohorts

    PubMed Central

    Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L

    2014-01-01

    Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN²) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
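
    The "Bayesian mixture prior" referred to is, schematically, a two-component Gaussian (spike-and-slab) mixture on each marker effect; the notation below is a generic rendering of that idea, not BOLT-LMM's exact parameterization:

    ```latex
    % Two-component Gaussian mixture prior on the effect \beta_j of SNP j:
    % a small fraction p of markers carries most of the genetic signal
    \beta_j \sim p\,\mathcal{N}\!\left(0, \sigma_1^2\right)
            + (1-p)\,\mathcal{N}\!\left(0, \sigma_2^2\right),
    \qquad \sigma_1^2 > \sigma_2^2
    ```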

  7. Molecular Simulation of the Vapor-Liquid Phase Behavior of Lennard-Jones Mixtures in Porous Solids

    DTIC Science & Technology

    2006-09-01

    [Report documentation-page residue; recoverable affiliations: Centre National de la Recherche Scientifique (Group de Chimie Theorique), Villeurbanne, France, and Group de Chimie Theorique, Ecole Normale Superieure de Lyon, France. Abstract (truncated): "We present vapor..."]

  8. Viscosity Difference Measurements for Normal and Para Liquid Hydrogen Mixtures

    NASA Technical Reports Server (NTRS)

    Webeler, R.; Bedard, F.

    1961-01-01

    The absence of experimental data in the literature concerning a viscosity difference for normal and equilibrium liquid hydrogen may be attributed to the limited reproducibility of "oscillating disk" measurements in a liquid-hydrogen environment. Indeed, there is disagreement over the viscosity values for equilibrium liquid hydrogen even without proton spin considerations. Measurements presented here represent the first application of the piezoelectric alpha quartz torsional oscillator technique to liquid-hydrogen viscosity measurements.

  9. Toxicity interactions between manganese (Mn) and lead (Pb) or cadmium (Cd) in a model organism the nematode C. elegans.

    PubMed

    Lu, Cailing; Svoboda, Kurt R; Lenz, Kade A; Pattison, Claire; Ma, Hongbo

    2018-06-01

    Manganese (Mn) is considered an emerging metal contaminant in the environment. However, its potential interactions with accompanying toxic metals and the associated mixture effects are largely unknown. Here, we investigated the toxicity interactions between Mn and two common co-occurring toxic metals, Pb and Cd, in the model organism Caenorhabditis elegans. The acute lethal toxicity of mixtures of Mn+Pb and Mn+Cd was first assessed using a toxic unit model. Multiple toxicity endpoints including reproduction, lifespan, stress response, and neurotoxicity were then examined to evaluate the mixture effects at sublethal concentrations. Stress response was assessed using a daf-16::GFP transgenic strain that expresses GFP under the control of the DAF-16 promotor. Neurotoxicity was assessed using a dat-1::GFP transgenic strain that expresses GFP in dopaminergic neurons. The mixture of Mn+Pb induced a more-than-additive (synergistic) lethal toxicity in the worm whereas the mixture of Mn+Cd induced a less-than-additive (antagonistic) toxicity. Mixture effects on sublethal toxicity showed more complex patterns and were dependent on the toxicity endpoints as well as the modes of toxic action of the metals. The mixture of Mn+Pb induced additive effects on both reproduction and lifespan, whereas the mixture of Mn+Cd induced additive effects on lifespan but not reproduction. Both mixtures seemed to induce additive effects on stress response and neurotoxicity, although a quantitative assessment was not possible due to the single concentrations used in mixture tests. Our findings demonstrate the complexity of metal interactions and the associated mixture effects. Assessment of metal mixture toxicity should take into consideration the unique property of individual metals, their potential toxicity mechanisms, and the toxicity endpoints examined.
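
    The toxic unit approach referenced here is conventionally written as follows (a standard convention, not a formula quoted from the paper): each concentration is scaled by that metal's own median lethal concentration, and departures from additivity are read off the toxic-unit sum evaluated at the observed mixture LC50.

    ```latex
    % Toxic-unit sum for a mixture; strict concentration additivity
    % corresponds to lethality at TU = 1
    \mathrm{TU} = \sum_i \frac{c_i}{\mathrm{LC50}_i}, \qquad
    \mathrm{TU}_{\mathrm{mix}} < 1 \Rightarrow \text{more-than-additive (synergism)},
    \quad \mathrm{TU}_{\mathrm{mix}} > 1 \Rightarrow \text{less-than-additive (antagonism)}
    ```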

  10. Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity

    NASA Astrophysics Data System (ADS)

    Chen, Hsieh; Panagiotopoulos, Athanassios Z.

    2018-01-01

    We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.
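
    The abstract does not state the functional form of the combining rule, so the sketch below uses a simple mole-fraction-weighted average purely as an assumed placeholder, to show how such a rule would be applied in practice:

    ```python
    def mixture_permittivity(x_fracs, eps_pure):
        """Assumed mole-fraction-weighted combining rule (placeholder;
        the paper's actual combining rule is not given in the abstract)."""
        assert abs(sum(x_fracs) - 1.0) < 1e-9, "fractions must sum to 1"
        return sum(x * e for x, e in zip(x_fracs, eps_pure))

    # Pure-electrolyte permittivities at matched conditions (illustrative)
    print(mixture_permittivity([0.6, 0.4], [65.0, 58.0]))
    ```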

  11. Comparison of Efficacy and Toxicity of Traditional Chinese Medicine (TCM) Herbal Mixture LQ and Conventional Chemotherapy on Lung Cancer Metastasis and Survival in Mouse Models

    PubMed Central

    Zhang, Lei; Wu, Chengyu; Zhang, Yong; Liu, Fang; Wang, Xiaoen; Zhao, Ming; Hoffman, Robert M.

    2014-01-01

    Unlike Western medicine that generally uses purified compounds and aims to target a single molecule or pathway, traditional Chinese medicine (TCM) compositions usually comprise multiple herbs and components that are necessary for efficacy. Despite the very long-standing and widespread use of TCM, there are very few direct comparisons of TCM and standard cytotoxic chemotherapy. In the present report, we compared the efficacy of the TCM herbal mixture LQ against lung cancer in mouse models with doxorubicin (DOX) and cyclophosphamide (CTX). LQ inhibited tumor size and weight, measured directly as well as by fluorescent-protein imaging, in subcutaneous, orthotopic, spontaneous experimental metastasis and angiogenesis mouse models of lung cancer. LQ was efficacious against primary and metastatic lung cancer without weight loss and organ toxicity. In contrast, CTX and DOX, although efficacious in the lung cancer models, caused significant weight loss and organ toxicity. LQ also had anti-angiogenic activity as observed in lung tumors growing in nestin-driven green fluorescent protein (ND-GFP) transgenic nude mice, which selectively express GFP in nascent blood vessels. Survival of tumor-bearing mice was also prolonged by LQ, comparable to DOX. In vitro, lung cancer cells were killed by LQ as observed by time-lapse imaging, comparable to cisplatinum. Unlike cytotoxic chemotherapy, LQ induced cell death more potently in cancer cell lines than in normal cell lines. The results indicate that LQ has non-toxic efficacy against metastatic lung cancer. PMID:25286158

  12. 21 CFR 520.903b - Febantel suspension.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...) Limitations. Administer by stomach tube or drench, or by mixing well into a portion of the normal grain ration...; administer mixture by stomach tube at rate of 18 milliliters per 100 pounds of body weight. [45 FR 8587, Feb...

  13. 21 CFR 520.903b - Febantel suspension.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...) Limitations. Administer by stomach tube or drench, or by mixing well into a portion of the normal grain ration...; administer mixture by stomach tube at rate of 18 milliliters per 100 pounds of body weight. [45 FR 8587, Feb...

  14. 21 CFR 520.903b - Febantel suspension.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) Limitations. Administer by stomach tube or drench, or by mixing well into a portion of the normal grain ration...; administer mixture by stomach tube at rate of 18 milliliters per 100 pounds of body weight. [45 FR 8587, Feb...

  15. 21 CFR 520.903b - Febantel suspension.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) Limitations. Administer by stomach tube or drench, or by mixing well into a portion of the normal grain ration...; administer mixture by stomach tube at rate of 18 milliliters per 100 pounds of body weight. [45 FR 8587, Feb...

  16. Mixture IRT Model with a Higher-Order Structure for Latent Traits

    ERIC Educational Resources Information Center

    Huang, Hung-Yu

    2017-01-01

    Mixture item response theory (IRT) models have been suggested as an efficient method of detecting the different response patterns derived from latent classes when developing a test. In testing situations, multiple latent traits measured by a battery of tests can exhibit a higher-order structure, and mixtures of latent classes may occur on…

  17. Beta Regression Finite Mixture Models of Polarization and Priming

    ERIC Educational Resources Information Center

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  18. Current estimates of the cure fraction: a feasibility study of statistical cure for breast and colorectal cancer.

    PubMed

    Stedman, Margaret R; Feuer, Eric J; Mariotto, Angela B

    2014-11-01

    The probability of cure is a long-term prognostic measure of cancer survival. Estimates of the cure fraction, the proportion of patients "cured" of the disease, are based on extrapolating survival models beyond the range of data. The objective of this work is to evaluate the sensitivity of cure fraction estimates to model choice and study design. Data were obtained from the Surveillance, Epidemiology, and End Results (SEER)-9 registries to construct a cohort of breast and colorectal cancer patients diagnosed from 1975 to 1985. In a sensitivity analysis, cure fraction estimates are compared from different study designs with short- and long-term follow-up. Methods tested include: cause-specific and relative survival, parametric mixture, and flexible models. In a separate analysis, estimates are projected for 2008 diagnoses using study designs including the full cohort (1975-2008 diagnoses) and restricted to recent diagnoses (1998-2008) with follow-up to 2009. We show that flexible models often provide higher estimates of the cure fraction compared to parametric mixture models. Log normal models generate lower estimates than Weibull parametric models. In general, 12 years is enough follow-up time to estimate the cure fraction for regional and distant stage colorectal cancer but not for breast cancer. 2008 colorectal cure projections show a 15% increase in the cure fraction since 1985. Estimates of the cure fraction are model and study design dependent. It is best to compare results from multiple models and examine model fit to determine the reliability of the estimate. Early-stage cancers are sensitive to survival type and follow-up time because of their longer survival. More flexible models are susceptible to slight fluctuations in the shape of the survival curve which can influence the stability of the estimate; however, stability may be improved by lengthening follow-up and restricting the cohort to reduce heterogeneity in the data. Published by Oxford University Press 2014.

  19. Predicting mixture toxicity of seven phenolic compounds with similar and dissimilar action mechanisms to Vibrio qinghaiensis sp.nov.Q67.

    PubMed

    Huang, Wei Ying; Liu, Fei; Liu, Shu Shen; Ge, Hui Lin; Chen, Hong Han

    2011-09-01

    The predictions of mixture toxicity for chemicals are commonly based on two models: concentration addition (CA) and independent action (IA). Whether CA and IA can predict the mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms was studied. The mixture toxicity was predicted on the basis of the concentration-response data of the individual compounds. Test mixtures at different concentration ratios and concentration levels were designed using two methods. The results showed that the Weibull function fit the concentration-response data of all the components and their mixtures well, with all correlation coefficients (R) greater than 0.99 and root mean squared errors (RMSEs) less than 0.04. The predicted values from the CA and IA models conformed to the observed values for the mixtures. Therefore, it can be concluded that both CA and IA can give reliable predictions of the mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.
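
    The Weibull concentration-response function referred to is commonly parameterized as below (one standard form in mixture toxicology; the paper's exact parameterization may differ):

    ```latex
    % Weibull concentration--response function: effect E as a function of
    % concentration c, with location \alpha and slope \beta
    E(c) = 1 - \exp\!\bigl(-\exp\bigl(\alpha + \beta \log_{10} c\bigr)\bigr)
    ```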

  20. Mixture optimization for mixed gas Joule-Thomson cycle

    NASA Astrophysics Data System (ADS)

    Detlor, J.; Pfotenhauer, J.; Nellis, G.

    2017-12-01

    An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters. This study expands on prior research by exploring higher heat-rejection temperatures and lower pressure ratios. A mixture optimization model has been developed which determines an optimal three-component mixture by maximizing the minimum value of the isothermal enthalpy change, ΔhT, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed-gas JT system with load temperatures down to 110 K and supply temperatures above room temperature for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the percentage of the heat exchanger that operates in the two-phase region in order to begin the process of selecting a mixture for experimental investigation.
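
    The figure of merit here, the minimum isothermal enthalpy difference between the high- and low-pressure streams over the working temperature span, is straightforward to evaluate with a property library. A minimal sketch using CoolProp follows; the species, composition, pressures, and temperature span are illustrative assumptions, and mixture property calls can fail inside the two-phase region for some states.

    ```python
    import numpy as np
    from CoolProp.CoolProp import PropsSI  # requires the CoolProp package

    # Illustrative three-component mixture; composition and species are
    # assumptions, not the paper's optimized blend.
    mix = "HEOS::Methane[0.4]&Ethane[0.3]&Nitrogen[0.3]"
    p_high, p_low = 9e5, 3e5               # Pa, a 3:1 pressure ratio
    temps = np.linspace(110.0, 300.0, 20)  # K, load to heat-rejection span

    # Isothermal enthalpy change between the low- and high-pressure
    # streams; its minimum over the span limits the gross cooling.
    dh_T = [PropsSI("H", "T", T, "P", p_low, mix)
            - PropsSI("H", "T", T, "P", p_high, mix) for T in temps]
    print(min(dh_T))  # J/kg; the optimizer maximizes this minimum
    ```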

  1. Existence, uniqueness and positivity of solutions for BGK models for mixtures

    NASA Astrophysics Data System (ADS)

    Klingenberg, C.; Pirner, M.

    2018-01-01

    We consider kinetic models for a multi-component gas mixture without chemical reactions. In the literature, one can find two types of BGK models for describing gas mixtures. One type has a sum of BGK-type interaction terms in the relaxation operator, for example the model described by Klingenberg, Pirner and Puppo [20], which contains well-known models of physicists and engineers, for example Hamel [16] and Gross and Krook [15], as special cases. The other type contains only one collision term on the right-hand side, for example the well-known model of Andries, Aoki and Perthame [1]. For each of these two models [20] and [1], we prove existence, uniqueness and positivity of solutions in the first part of the paper. In the second part, we use the first model [20] in order to determine an unknown function in the energy exchange of the macroscopic equations for gas mixtures described by Dellacherie [11].
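
    Schematically, the first (sum-of-relaxation-terms) type has, for a binary mixture, the structure below; the notation (collision frequencies ν_kj, species Maxwellians M_k and mixture Maxwellians M_kj) is a generic rendering rather than the exact form used in [20]:

    ```latex
    % BGK model with a sum of relaxation terms (schematic, species 1 and 2)
    \partial_t f_1 + v \cdot \nabla_x f_1
        = \nu_{11}\,(M_1 - f_1) + \nu_{12}\,(M_{12} - f_1)
    \partial_t f_2 + v \cdot \nabla_x f_2
        = \nu_{22}\,(M_2 - f_2) + \nu_{21}\,(M_{21} - f_2)
    ```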

  2. Magnetic Reconnection and Modification of the Hall Physics Due to Cold Ions at the Magnetopause

    NASA Technical Reports Server (NTRS)

    Andre, M.; Li, W.; Toledo-Redondo, S.; Khotyaintsev, Yu. V.; Vaivads, A.; Graham, D. B.; Norgren, C.; Burch, J.; Lindqvist, P.-A.; Marklund, G.

    2016-01-01

    Observations by the four Magnetospheric Multiscale spacecraft are used to investigate the Hall physics of a magnetopause magnetic reconnection separatrix layer. Inside this layer of currents and strong normal electric fields, cold (eV) ions of ionospheric origin can remain frozen-in together with the electrons. The cold ions reduce the Hall current. Using a generalized Ohm's law, the electric field is balanced by the sum of the terms corresponding to the Hall current, the v × B drifting cold ions, and the divergence of the electron pressure tensor. A mixture of hot and cold ions is common at the subsolar magnetopause. A mixture of length scales caused by a mixture of ion temperatures has significant effects on the Hall physics of magnetic reconnection.
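
    The balance described can be written schematically as follows (a generic rendering of the abstract's description, with v_c the cold-ion velocity, n the total density, and P_e the electron pressure tensor; the density weighting of the cold-ion term is simplified here):

    ```latex
    % Generalized Ohm's law balance at the separatrix (schematic)
    \mathbf{E} \;\simeq\; -\,\mathbf{v}_c \times \mathbf{B}
     \;+\; \frac{\mathbf{J} \times \mathbf{B}}{n e}
     \;-\; \frac{\nabla \cdot \mathbf{P}_e}{n e}
    ```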

  3. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models.

    PubMed

    Teng, S; Tebby, C; Barcellini-Couget, S; De Sousa, G; Brochot, C; Rahmani, R; Pery, A R R

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    NASA Astrophysics Data System (ADS)

    Sardet, Laure; Patilea, Valentin

    When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem to be quite well adapted to capture the skewness, the long tails, as well as the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture modeling, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.
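
    The transform-smooth-back-transform pipeline is compact enough to sketch end to end. Below, a two-component lognormal mixture plays the role of the parsimonious parametric start, the CDF transform maps claims to [0,1], a Chen-type beta-kernel estimate smooths there, and a change-of-variables factor recovers a density on the original scale. All weights, parameters, and the bandwidth are illustrative assumptions, not the paper's fitted values or its bandwidth rule.

    ```python
    import numpy as np
    from scipy import stats

    def beta_kernel_density(u_grid, u_data, b):
        """Chen-type beta-kernel density estimate on [0,1]: at each point u,
        average a Beta(u/b + 1, (1-u)/b + 1) kernel over the data."""
        est = np.empty_like(u_grid)
        for i, u in enumerate(u_grid):
            est[i] = stats.beta.pdf(u_data, u / b + 1, (1 - u) / b + 1).mean()
        return est

    # Two-component lognormal mixture as the parametric start (illustrative)
    w, m1, s1, m2, s2 = 0.7, 0.0, 0.5, 2.0, 1.0
    rng = np.random.default_rng(2)
    comp = rng.random(2000) < w
    claims = np.where(comp, rng.lognormal(m1, s1, 2000),
                      rng.lognormal(m2, s2, 2000))

    mix_cdf = lambda x: (w * stats.lognorm.cdf(x, s1, scale=np.exp(m1))
                         + (1 - w) * stats.lognorm.cdf(x, s2, scale=np.exp(m2)))
    mix_pdf = lambda x: (w * stats.lognorm.pdf(x, s1, scale=np.exp(m1))
                         + (1 - w) * stats.lognorm.pdf(x, s2, scale=np.exp(m2)))

    u = mix_cdf(claims)                 # transform data to the unit interval
    x_grid = np.linspace(0.5, 30, 200)
    # Back-transform: f(x) = h(F(x)) * f_mix(x), h the beta-kernel estimate
    f_hat = beta_kernel_density(mix_cdf(x_grid), u, b=0.05) * mix_pdf(x_grid)
    print(np.trapz(f_hat, x_grid))      # close to 1 over the bulk of the data
    ```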

  5. Finite mixture modeling for vehicle crash data with application to hotspot identification.

    PubMed

    Park, Byung-Jung; Lord, Dominique; Lee, Chungwon

    2014-10-01

    The application of finite mixture regression models has recently gained interest from highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations of a sample arise from two or more unobserved components with unknown proportions. Both fixed and varying weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study, using observed and simulated data, investigated the relative performance of the finite mixture model and the traditional negative binomial (NB) model in terms of hotspot identification. For the observed data, rural multilane segment crash data for divided highways in California and Texas were used. The results showed that the difference measured by the percentage deviation in ranking orders was relatively small for this dataset. Nevertheless, the ranking results from the finite mixture model were considered more reliable than those of the NB model because of the better model specification. This finding was also supported by the simulation study, which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding an optimal threshold value for identifying hotspots, another simulation analysis indicated a trade-off between the false discovery rate (increasing) and the false negative rate (decreasing). Since the costs associated with false positives and false negatives are different, it is suggested that the selected optimal threshold value be decided by considering the trade-off between these two costs so that unnecessary expenses are minimized. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Mathematical Model of Nonstationary Separation Processes Proceeding in the Cascade of Gas Centrifuges in the Process of Separation of Multicomponent Isotope Mixtures

    NASA Astrophysics Data System (ADS)

    Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.

    2017-03-01

    We have developed and realized on software a mathematical model of the nonstationary separation processes proceeding in the cascades of gas centrifuges in the process of separation of multicomponent isotope mixtures. With the use of this model the parameters of the separation process of germanium isotopes have been calculated. It has been shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters in the process of separation of multicomponent isotope mixtures.

  7. [Effect of Acupoint Injection on Eosinophil Counts,Protein and mRNA Expressions of Eotaxin in Nasal Mucosa of Allergic Rhinitis Rats].

    PubMed

    Zhang, Yuan; Hou, Xun-Rui; Li, Li-Hong; Yang, Meng; Liang, Fei-Hong

    2017-04-25

    To observe the effect of acupoint injection on eosinophil (EOS) counts and the expression of eotaxin in the nasal mucosa of allergic rhinitis (AR) rats, so as to explore the mechanism underlying its improvement of AR. Twenty-four Sprague-Dawley (SD) rats were randomly divided into normal, model and acupoint injection groups (n = 8 in each group). The AR model was established by intraperitoneal injection of ovalbumin for sensitization. Bilateral "Yingxiang" (LI 20) and "Yintang" (GV 29) were selected for acupoint injection of a mixture solution of lidocaine, dexamethasone, and transfer factor (0.1 mL/acupoint) on the 1st, 5th, 9th, and 13th days after the AR model was established, for a total of four treatments. EOS in the nasal mucosa were counted under a light microscope after HE staining. Protein and mRNA expressions of eotaxin in the nasal mucosa were detected by immunohistochemistry and RT-PCR, respectively. Compared with the normal group, EOS counts and the protein and mRNA expressions of eotaxin in the nasal mucosa were significantly higher in the model group (P < 0.05). Compared with the model group, EOS counts and the protein and mRNA expressions of eotaxin in the nasal mucosa were significantly lower in the acupoint injection group (P < 0.05). Acupoint injection can reduce nasal mucosa inflammation by suppressing the protein and mRNA expressions of eotaxin, decreasing the infiltration and gathering of EOS in the nasal mucosa.

  8. Rational synthesis of low-polydispersity block copolymer vesicles in concentrated solution via polymerization-induced self-assembly.

    PubMed

    Gonzato, Carlo; Semsarilar, Mona; Jones, Elizabeth R; Li, Feng; Krooshof, Gerard J P; Wyman, Paul; Mykhaylyk, Oleksandr O; Tuinier, Remco; Armes, Steven P

    2014-08-06

    Block copolymer self-assembly is normally conducted via post-polymerization processing at high dilution. In the case of block copolymer vesicles (or "polymersomes"), this approach normally leads to relatively broad size distributions, which is problematic for many potential applications. Herein we report the rational synthesis of low-polydispersity diblock copolymer vesicles in concentrated solution via polymerization-induced self-assembly using reversible addition-fragmentation chain transfer (RAFT) polymerization of benzyl methacrylate. Our strategy utilizes a binary mixture of a relatively long and a relatively short poly(methacrylic acid) stabilizer block, which become preferentially expressed at the outer and inner surfaces of the poly(benzyl methacrylate) membrane, respectively. Dynamic light scattering was utilized to construct phase diagrams to identify suitable conditions for the synthesis of relatively small, low-polydispersity vesicles. Small-angle X-ray scattering (SAXS) was used to verify that this binary mixture approach produced vesicles with significantly narrower size distributions compared to conventional vesicles prepared using a single (short) stabilizer block. Calculations performed using self-consistent mean field theory (SCMFT) account for the preferred self-assembled structures of the block copolymer binary mixtures and are in reasonable agreement with experiment. Finally, both SAXS and SCMFT indicate a significant degree of solvent plasticization for the membrane-forming poly(benzyl methacrylate) chains.

  9. Closed-form solutions in stress-driven two-phase integral elasticity for bending of functionally graded nano-beams

    NASA Astrophysics Data System (ADS)

    Barretta, Raffaele; Fabbrocino, Francesco; Luciano, Raimondo; Sciarra, Francesco Marotti de

    2018-03-01

    Strain-driven and stress-driven integral elasticity models are formulated for the analysis of the structural behaviour of functionally graded nano-beams. An innovative stress-driven two-phase constitutive mixture, defined by a convex combination of local and nonlocal phases, is presented. The analysis reveals that the Eringen strain-driven fully nonlocal model cannot be used in structural mechanics since it is ill-posed, and that local-nonlocal mixtures based on the Eringen integral model only partially resolve the ill-posedness of the model. In fact, a singular behaviour of continuous nano-structures appears if the local fraction tends to vanish, so the ill-posedness of the Eringen integral model is not eliminated. On the contrary, local-nonlocal mixtures based on the stress-driven theory are mathematically and mechanically appropriate for nanosystems. Exact solutions for inflected functionally graded nanobeams of technical interest are established by adopting the new local-nonlocal mixture stress-driven integral relation. The effectiveness of the new nonlocal approach is tested by comparing the contributed results with those corresponding to the mixture Eringen theory.

  10. A modified procedure for mixture-model clustering of regional geochemical data

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David B.; Horton, John D.

    2014-01-01

    A modified procedure is proposed for mixture-model clustering of regional-scale geochemical data. The key modification is the robust principal component transformation of the isometric log-ratio transforms of the element concentrations. This principal component transformation and the associated dimension reduction are applied before the data are clustered. The principal advantage of this modification is that it significantly improves the stability of the clustering. The principal disadvantage is that it requires subjective selection of the number of clusters and the number of principal components. To evaluate the efficacy of this modified procedure, it is applied to soil geochemical data that comprise 959 samples from the state of Colorado (USA) for which the concentrations of 44 elements are measured. The distributions of element concentrations that are derived from the mixture model and from the field samples are similar, indicating that the mixture model is a suitable representation of the transformed geochemical data. Each cluster and the associated distributions of the element concentrations are related to specific geologic and anthropogenic features. In this way, mixture model clustering facilitates interpretation of the regional geochemical data.
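
    A hedged sketch of the modified procedure follows: isometric log-ratio transform of the compositions, dimension reduction, then mixture-model clustering. Plain PCA stands in for the robust principal component transformation used by the authors, and the data, number of components, and number of clusters are all illustrative.

    ```python
    import numpy as np
    from scipy.linalg import helmert
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    def ilr(X):
        """Isometric log-ratio transform of compositions (rows sum to 1)."""
        clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)
        H = helmert(X.shape[1])        # (D-1, D) orthonormal contrast matrix
        return clr @ H.T

    # X: n_samples x n_elements proportions; a Dirichlet draw stands in for
    # the 959 Colorado soil samples with 44 measured elements.
    rng = np.random.default_rng(0)
    X = rng.dirichlet(alpha=np.ones(44), size=959)

    Z = ilr(X)
    # The paper uses a *robust* PCA; ordinary PCA is a stand-in here.
    scores = PCA(n_components=5).fit_transform(Z)   # reduce dimension first
    gmm = GaussianMixture(n_components=6, covariance_type='full',
                          random_state=0).fit(scores)
    labels = gmm.predict(scores)                    # cluster memberships
    ```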

  11. Toxicities of glyphosate- and cypermethrin-based pesticides are antagonistic in the tenspotted livebearer fish (Cnesterodon decemmaculatus).

    PubMed

    Brodeur, Julie Céline; Malpel, Solène; Anglesio, Ana Belén; Cristos, Diego; D'Andrea, María Florencia; Poliserpi, María Belén

    2016-07-01

    Although pesticide contamination of surface waters normally occurs in the form of mixtures, the toxicity and interactions displayed by such mixtures have been little characterized until now. The present study examined the interactions prevailing in equitoxic and non-equitoxic binary mixtures of formulations of glyphosate (Glifoglex(®)) and cypermethrin (Glextrin(®)) in the tenspotted livebearer (Cnesterodon decemmaculatus), a widely distributed South American fish. The following 96 h-LC50s were obtained when the pesticide formulations were tested individually: Glifoglex(®) 41.4 and 53 mg ae glyphosate/L; Glextrin(®) 1.89 and 2.60 μg cypermethrin/L. Equitoxic and non-equitoxic mixtures were significantly antagonistic in all combinations tested. The magnitude of the antagonism (the factor by which toxicity differed from concentration addition) varied between 1.37 and 3.09 times in the different non-equitoxic mixtures tested. Antagonism was due to a strong inhibition of cypermethrin toxicity by the glyphosate formulation, the toxicity of the cypermethrin-based pesticide being almost completely overridden by the glyphosate formulation. The results obtained in the current study with fish are radically opposite to those previously observed in tadpoles, where synergy was observed when Glifoglex(®) and Glextrin(®) were present in mixtures (Brodeur et al., 2014). Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely, Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in pharmaceutical dosage form by processing the UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
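
    As a rough illustration of the calibration/validation split described above, the sketch below fits a PLS model to spectral data with scikit-learn. Note this is PLS-2 (all three analytes regressed at once) rather than the paper's PLS-1, which fits one analyte at a time; the matrices and number of latent factors are placeholders.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import mean_squared_error

    # A: absorbance matrix (n_mixtures x n_wavelengths); C: concentrations of
    # AML, VAL and HCT (n_mixtures x 3). Random values stand in for spectra.
    rng = np.random.default_rng(1)
    A_cal, C_cal = rng.random((15, 200)), rng.random((15, 3))
    A_val, C_val = rng.random((10, 200)), rng.random((10, 3))

    pls = PLSRegression(n_components=5)   # factors chosen by cross-validation
    pls.fit(A_cal, C_cal)
    C_pred = pls.predict(A_val)
    # Root-mean-square error of prediction, one value per analyte.
    rmsep = np.sqrt(mean_squared_error(C_val, C_pred,
                                       multioutput='raw_values'))
    ```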

  13. Flash-point prediction for binary partially miscible mixtures of flammable solvents.

    PubMed

    Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng

    2008-05-30

    Flash point is the most important variable used to characterize the fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes the development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in the assessment of fire and explosion hazards, and in the development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.

  14. Optimizing the use of natural gravel from the Brantas river in a normal concrete mix with quality fc = 19.3 MPa

    NASA Astrophysics Data System (ADS)

    Limantara, A. D.; Widodo, A.; Winarto, S.; Krisnawati, L. D.; Mudjanarko, S. W.

    2018-04-01

    The use of natural (river) gravel in concrete mixtures is rarely encountered nowadays, given the demand for higher-strength concrete. Moreover, high-performance concrete has been developed in which the coarse aggregate consists mostly of crushed stone, although the fine-grain material is still natural sand. Is it possible that a concrete mixture using natural gravel as the coarse aggregate can produce concrete with a compressive strength equivalent to that of a mixture using crushed stone? To investigate this, a series of tests on concrete mixes with crushed aggregates from the Kalitelu Crusher, Gondang, Tulungagung and natural stone (river gravel) from the Brantas River, Ngujang, Tulungagung was carried out in the Materials Testing Laboratory of the Tugu Dam Construction Project, Kab. Trenggalek. The compressive strength of the concrete using the natural gravel was 19.47 MPa, while that of the concrete mixed with crushed stone was 21.12 MPa.

  15. Effect of water on self-assembled tubules in β-sitosterol + γ-oryzanol-based organogels

    NASA Astrophysics Data System (ADS)

    den Adel, Ruud; Heussen, Patricia C. M.; Bot, Arjen

    2010-10-01

    Mixtures of β-sitosterol and γ-oryzanol form a network in triglyceride oil that may serve as an alternative to the network of small crystallites of triglycerides occurring in regular oil structuring. The present x-ray diffraction study investigates the relation between the crystal forms of the individual compounds and the mixture in oil, water and emulsion. β-Sitosterol and γ-oryzanol form normal crystals in oil, in water, or in emulsions. The crystals are sensitive to the presence of water. The mixture of β-sitosterol + γ-oryzanol forms crystals in water and emulsions that can be traced back to the crystals of the pure compounds. Only in oil, a completely different structure emerges in the mixture of β-sitosterol + γ-oryzanol, which bears no relation to the structures that are formed by both individual compounds, and which can be identified as a self-assembled tubule (diameter 7.2±0.1 nm, wall thickness 0.8±0.2 nm).

  16. Visible-light OCT to quantify retinal oxygen metabolism (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhang, Hao F.; Yi, Ji; Chen, Siyu; Liu, Wenzhong; Soetikno, Brian T.

    2016-03-01

    We explored, both numerically and experimentally, whether OCT can be a good candidate to accurately measure retinal oxygen metabolism. We first used statistical methods to numerically simulate photon transport in the retina, mimicking OCT working in different spectral ranges. We then analyzed the accuracy of OCT oximetry subject to parameter variations such as vessel size, pigmentation, and oxygenation. We further developed an experimental OCT system based on the spectral range identified by our simulation work. We applied the newly developed OCT to measure both retinal hemoglobin oxygen saturation (sO2) and retinal blood flow. After obtaining the retinal sO2 and blood velocity, we further measured retinal vessel diameter and calculated the retinal oxygen metabolic rate (MRO2). To test the capability of our OCT, we imaged wild-type Long-Evans rats ventilated with both normal air and air mixtures with various oxygen concentrations. Our simulations suggested that OCT working within the visible spectral range is able to provide accurate measurement of retinal MRO2 using inverse Fourier transform spectral reconstruction. We called this newly developed technology vis-OCT, and showed that vis-OCT was able to measure the sO2 value in every single major retinal vessel around the optic disk as well as in retinal microvessels. When breathing normal air, the average sO2 in arterial and venous blood in Long-Evans rats was measured to be 95% and 72%, respectively. When we challenged the rats with air mixtures of different oxygen concentrations, the vis-OCT measurements followed analytical models of retinal oxygen diffusion and pulse oximeter readings well.

  17. VOXEL-LEVEL MAPPING OF TRACER KINETICS IN PET STUDIES: A STATISTICAL APPROACH EMPHASIZING TISSUE LIFE TABLES.

    PubMed

    O'Sullivan, Finbarr; Muzi, Mark; Mankoff, David A; Eary, Janet F; Spence, Alexander M; Krohn, Kenneth A

    2014-06-01

    Most radiotracers used in dynamic positron emission tomography (PET) scanning act in a linear time-invariant fashion, so that the measured time-course data are a convolution between the time course of the tracer in the arterial supply and the local tissue impulse response, known as the tissue residue function. In statistical terms the residue is a life table for the transit time of injected radiotracer atoms. The residue provides a description of the tracer kinetic information measurable by a dynamic PET scan. Decomposition of the residue function allows separation of rapid vascular kinetics from slower blood-tissue exchanges and tissue retention. For voxel-level analysis, we propose that residues be modeled by mixtures of nonparametrically derived basis residues obtained by segmentation of the full data volume. Spatial and temporal aspects of diagnostics associated with voxel-level model fitting are emphasized. Illustrative examples, some involving cancer imaging studies, are presented. Data from cerebral PET scanning with 18F-fluorodeoxyglucose (FDG) and 15O-water (H2O) in normal subjects are used to evaluate the approach. Cross-validation is used to make regional comparisons between residues estimated using adaptive mixture models and more conventional compartmental modeling techniques. Simulation studies are used to theoretically examine mean square error performance and to explore the benefit of voxel-level analysis when the primary interest is a statistical summary of regional kinetics. The work highlights the contribution that multivariate analysis tools and life-table concepts can make in the recovery of local metabolic information from dynamic PET studies, particularly ones in which the assumptions of compartmental-like models, with residues that are sums of exponentials, might not hold.
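
    The mixture-of-basis-residues idea can be sketched as a nonnegative least-squares fit: each voxel time course is modeled as the input function convolved with a weighted sum of basis residues. Everything below (the toy input function, basis time constants, frame times) is synthetic and illustrative, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # t: frame mid-times (min); cp: arterial input; basis_residues: K basis
    # residue curves that would come from segmenting the data volume.
    t = np.linspace(0, 60, 30)
    dt = t[1] - t[0]
    cp = np.exp(-t / 10.0) * t                  # toy arterial input function
    basis_residues = np.array([np.exp(-t / tau) for tau in (0.5, 5.0, 50.0)])

    def convolve_with_input(residue):
        """Tissue curve = Cp convolved with the residue (discrete approx.)."""
        return np.convolve(cp, residue)[:len(t)] * dt

    # Design matrix: one convolved basis residue per column.
    M = np.column_stack([convolve_with_input(r) for r in basis_residues])

    voxel_tac = M @ np.array([0.2, 0.5, 0.1])   # synthetic voxel time course
    alpha, _ = nnls(M, voxel_tac)               # nonnegative mixture weights
    fitted_residue = basis_residues.T @ alpha   # voxel-level residue estimate
    ```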

  18. Diel periodicity and chronology of upstream migration in yellow-phase American eels (Anguilla rostrata)

    USGS Publications Warehouse

    Aldinger, Joni L.; Welsh, Stuart A.

    2017-01-01

    Yellow-phase American eel (Anguilla rostrata) upstream migration is temporally punctuated, yet migration chronology within diel time periods is not well-understood. This study examined diel periodicity, chronology, and total length (TL) of six multi-day, high-count (285–1,868 eels) passage events of upstream migrant yellow-phase American eels at the Millville Dam eel ladder, lower Shenandoah River, West Virginia during 2011–2014. We categorized passage by diel periods (vespertine, nocturnal, matutinal, diurnal) and season (spring, summer, late summer/early fall, fall). We depicted passage counts as time-series histograms and used time-series spectral analysis (Fast Fourier Transformation) to identify cyclical patterns and diel periodicity of upstream migration. We created histograms to examine movement patterns within diel periods for each passage event and fit normal mixture models (2–9 mixtures) to describe multiple peaks of passage counts. Periodicity of movements for each passage event followed a 24-h activity cycle with mostly nocturnal movement. Multimodal models were supported by the data; most modes represented nocturnal movements, but modes at or near the transition between twilight and night were also common. We used mixed-model methodology to examine relationships among TL, diel period, and season. An additive-effects model of diel period + season was the best approximating model. A decreasing trend of mean TL occurred across diel movement periods, with the highest mean TL occurring during fall relative to similar mean values of TL for spring, summer, and late summer/early fall. This study increased our understanding of yellow-phase American eels by demonstrating the non-random nature of their upstream migration.

  19. Nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates of typical desert vegetation in western China.

    PubMed

    Ji, Cuicui; Jia, Yonghong; Gao, Zhihai; Wei, Huaidong; Li, Xiaosong

    2017-01-01

    Desert vegetation plays significant roles in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates are investigated by comparing the performance of linear and nonlinear spectral mixture models with different endmembers, applied to field spectral measurements of two types of typical desert vegetation, namely, Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates; in particular, shadow endmembers cannot be neglected. (2) For both the Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. In consideration of computational complexity and accuracy requirements, the Linear Spectral Mixture Model (LSMM) could be adopted for the Nitraria shrub plots, but this will result in significant errors for the Haloxylon plots, since the nonlinear spectral mixture effects were more obvious for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Therefore, for both Nitraria shrubs and Haloxylon, nonlinear spectral mixing effects between the photosynthetic/non-photosynthetic vegetation and the bare soil do exist, and their strength depends on the three-dimensional structure of the vegetation canopy. The choice between linear and nonlinear spectral mixture models should therefore balance computational complexity against the required accuracy.
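
    For reference, the LSMM baseline discussed above reduces to a constrained least-squares problem. A minimal sketch follows, with nonnegativity enforced directly and the sum-to-one constraint handled by renormalization; the endmember spectra and mixed pixel are synthetic, and the kernel-based nonlinear variant is not shown.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Endmember spectra as columns: photosynthetic vegetation,
    # non-photosynthetic vegetation, bare soil, and shadow (all illustrative).
    n_bands = 100
    rng = np.random.default_rng(2)
    E = rng.random((n_bands, 4))

    pixel = E @ np.array([0.3, 0.2, 0.4, 0.1])   # synthetic mixed pixel

    # LSMM unmixing with nonnegative fractions, renormalized to sum to one.
    f, _ = nnls(E, pixel)
    fractions = f / f.sum()
    ```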

  20. Nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates of typical desert vegetation in western China

    PubMed Central

    Jia, Yonghong; Gao, Zhihai; Wei, Huaidong

    2017-01-01

    Desert vegetation plays significant roles in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates are investigated by comparing the performance of linear and nonlinear spectral mixture models with different endmembers, applied to field spectral measurements of two types of typical desert vegetation, namely, Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates; in particular, shadow endmembers cannot be neglected. (2) For both the Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. In consideration of computational complexity and accuracy requirements, the Linear Spectral Mixture Model (LSMM) could be adopted for the Nitraria shrub plots, but this will result in significant errors for the Haloxylon plots, since the nonlinear spectral mixture effects were more obvious for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Therefore, for both Nitraria shrubs and Haloxylon, nonlinear spectral mixing effects between the photosynthetic/non-photosynthetic vegetation and the bare soil do exist, and their strength depends on the three-dimensional structure of the vegetation canopy. The choice between linear and nonlinear spectral mixture models should therefore balance computational complexity against the required accuracy. PMID:29240777

  1. Semiparametric Bayesian classification with longitudinal markers

    PubMed Central

    De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter

    2013-01-01

    We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871

  2. 40 CFR 80.1100 - How is the statutory default requirement for 2006 implemented?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the quantity of fossil fuel present in a fuel mixture used to operate a motor vehicle, and which: (A... more of the fossil fuel normally used in the production of ethanol. (3) Waste derived ethanol means...

  3. 40 CFR 80.1100 - How is the statutory default requirement for 2006 implemented?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the quantity of fossil fuel present in a fuel mixture used to operate a motor vehicle, and which: (A... more of the fossil fuel normally used in the production of ethanol. (3) Waste derived ethanol means...

  4. 40 CFR 80.1100 - How is the statutory default requirement for 2006 implemented?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the quantity of fossil fuel present in a fuel mixture used to operate a motor vehicle, and which: (A... more of the fossil fuel normally used in the production of ethanol. (3) Waste derived ethanol means...

  5. A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.

    PubMed

    Carreau, Julie; Bengio, Yoshua

    2009-07-01

    In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y, with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
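
    A simplified sketch of the hybrid Pareto building block is given below: a Gaussian body is stitched to a generalized Pareto tail, with the GPD scale chosen so the density is continuous at the junction. The full hybrid Pareto of the paper additionally matches the derivative at the junction, which ties the junction point to the tail index; this sketch omits that, and all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def hybrid_pareto_pdf(x, mu=0.0, sigma=1.0, xi=0.3, u=1.5):
        """Gaussian body below the junction u, generalized Pareto tail above.

        Simplified: the GPD scale is set only for density continuity at u,
        so the result integrates to one without extra renormalization.
        """
        x = np.asarray(x, dtype=float)
        phi_u = stats.norm.pdf(u, mu, sigma)
        Phi_u = stats.norm.cdf(u, mu, sigma)
        beta = (1.0 - Phi_u) / phi_u            # continuity at the junction
        body = stats.norm.pdf(x, mu, sigma)
        tail = (1.0 - Phi_u) * stats.genpareto.pdf(x, xi, loc=u, scale=beta)
        return np.where(x <= u, body, tail)

    def mixture_pdf(x, weights, params):
        """Mixture of hybrid Pareto components (weights sum to one)."""
        return sum(w * hybrid_pareto_pdf(x, *p) for w, p in zip(weights, params))

    x = np.linspace(-4, 10, 500)
    dens = mixture_pdf(x, [0.6, 0.4],
                       [(0.0, 1.0, 0.2, 1.0), (3.0, 0.5, 0.4, 4.0)])
    ```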

  6. Neurotoxicological and statistical analyses of a mixture of five organophosphorus pesticides using a ray design.

    PubMed

    Moser, V C; Casey, M; Hamm, A; Carter, W H; Simmons, J E; Gennings, C

    2005-07-01

    Environmental exposures generally involve chemical mixtures instead of single chemicals. Statistical models such as the fixed-ratio ray design, wherein the mixing ratio (proportions) of the chemicals is fixed across increasing mixture doses, allow for the detection and characterization of interactions among the chemicals. In this study, we tested for interactions in a mixture of five organophosphorus (OP) pesticides (chlorpyrifos, diazinon, dimethoate, acephate, and malathion). The ratio of the five pesticides (full ray) reflected the relative dietary exposure estimates of the general population as projected by the US EPA Dietary Exposure Evaluation Model (DEEM). A second mixture was tested using the same dose levels of all pesticides, but excluding malathion (reduced ray). The experimental approach first required characterization of dose-response curves for the individual OPs to build a dose-additivity model. A series of behavioral measures were evaluated in adult male Long-Evans rats at the time of peak effect following a single oral dose, and then tissues were collected for measurement of cholinesterase (ChE) activity. Neurochemical (blood and brain ChE activity) and behavioral (motor activity, gait score, tail-pinch response score) endpoints were evaluated statistically for evidence of additivity. The additivity model constructed from the single-chemical data was used to predict the effects of the pesticide mixture along the full ray (10-450 mg/kg) and the reduced ray (1.75-78.8 mg/kg). The experimental mixture data were also modeled and statistically compared to the additivity models. Analysis of the 5-OP mixture (the full ray) revealed significant deviation from additivity for all endpoints except tail-pinch response. Greater-than-additive responses (synergism) were observed at the lower doses of the 5-OP mixture, which contained non-effective dose levels of each of the components. The predicted effective doses (ED20, ED50) were about half those predicted by additivity, and for brain ChE and motor activity there was a threshold shift in the dose-response curves. For brain ChE and motor activity, there was no difference between the full (5-OP mixture) and reduced (4-OP mixture) rays, indicating that malathion did not influence the non-additivity. While the reduced ray for blood ChE showed greater deviation from additivity without malathion in the mixture, the non-additivity observed for the gait score was reversed when malathion was removed. Thus, greater-than-additive interactions were detected for both the full and reduced ray mixtures, and the role of malathion in the interactions varied depending on the endpoint. In all cases, the deviations from additivity occurred at the lower end of the dose-response curves.

  7. Self-organising mixture autoregressive model for non-stationary time series modelling.

    PubMed

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks a time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a globally non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial and benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  8. Numerical study of underwater dispersion of dilute and dense sediment-water mixtures

    NASA Astrophysics Data System (ADS)

    Chan, Ziying; Dao, Ho-Minh; Tan, Danielle S.

    2018-05-01

    As part of the nodule-harvesting process, sediment tailings are released underwater. Due to the long period of clouding in the water during the settling process, this presents a significant environmental and ecological concern. One possible solution is to release a mixture of sediment tailings and seawater, with the aim of reducing the settling duration as well as the amount of spreading. In this paper, we present some results of numerical simulations using the smoothed particle hydrodynamics (SPH) method to model the release of a fixed volume of pre-mixed sediment-water mixture into a larger body of quiescent water. Both the sediment-water mixture and the “clean” water are modeled as two different fluids, with concentration-dependent bulk properties of the sediment-water mixture adjusted according to the initial solids concentration. This numerical model was validated in a previous study, which indicated significant differences in the dispersion and settling process between dilute and dense mixtures, and that a dense mixture may be preferable. For this study, we investigate a wider range of volumetric concentration with the aim of determining the optimum volumetric concentration, as well as its overall effectiveness compared to the original process (100% sediment).

  9. Detection of molecular signatures of oral squamous cell carcinoma and normal epithelium - application of a novel methodology for unsupervised segmentation of imaging mass spectrometry data.

    PubMed

    Widlak, Piotr; Mrukwa, Grzegorz; Kalinowska, Magdalena; Pietrowska, Monika; Chekan, Mykola; Wierzgon, Janusz; Gawin, Marta; Drazek, Grzegorz; Polanska, Joanna

    2016-06-01

    Intra-tumor heterogeneity is a vivid problem of molecular oncology that can be addressed by imaging mass spectrometry. Here we aimed to assess the molecular heterogeneity of oral squamous cell carcinoma and to detect signatures discriminating normal and cancerous epithelium. Tryptic peptides were analyzed by MALDI-IMS in tissue specimens from five patients with oral cancer. A novel algorithm for IMS data analysis was developed and implemented, combining Gaussian mixture modeling for the detection of spectral components with an iterative k-means algorithm for unsupervised spectra clustering, performed in a domain reduced to a subset of the most dispersed components. About 4% of the detected peptides showed significantly different abundances between normal epithelium and tumor, and could be considered a molecular signature of oral cancer. Moreover, unsupervised clustering revealed two major sub-regions within expert-defined tumor areas. One of them showed molecular similarity with histologically normal epithelium. The other showed similarity with connective tissue, yet was markedly different from normal epithelium. The pathologist's re-inspection of tissue specimens confirmed distinct features in both tumor sub-regions: foci of actual cancer cells or cancer microenvironment-related cells prevailed in the corresponding areas. Hence, molecular differences detected during automated segmentation of IMS data had an apparent reflection in real structures present in the tumor. © 2016 The Authors. Proteomics Published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
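
    A loose sketch of the segmentation pipeline: model the mean spectrum as a Gaussian mixture, represent each pixel by per-component abundances, keep the most dispersed components, and cluster. This substitutes scikit-learn's GMM and a single k-means pass for the authors' iterative algorithm, and all array sizes and component counts are placeholders.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.cluster import KMeans

    # spectra: (n_pixels, n_mz) intensity map; mz: the m/z axis.
    # Random values stand in for MALDI-IMS peptide data.
    rng = np.random.default_rng(4)
    n_pixels, n_mz = 1000, 2000
    mz = np.linspace(800, 3000, n_mz)
    spectra = rng.random((n_pixels, n_mz))

    # 1) Model the mean spectrum as a Gaussian mixture over m/z by sampling
    #    the axis proportionally to intensity, then fitting a GMM.
    mean_spec = spectra.mean(axis=0)
    mz_sample = rng.choice(mz, size=20000,
                           p=mean_spec / mean_spec.sum()).reshape(-1, 1)
    gmm = GaussianMixture(n_components=50, random_state=0).fit(mz_sample)

    # 2) Per-pixel abundance of each spectral component.
    resp = gmm.predict_proba(mz.reshape(-1, 1))    # (n_mz, n_components)
    features = spectra @ resp

    # 3) Keep the most dispersed components, then cluster the pixels.
    keep = np.argsort(features.var(axis=0))[-10:]
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
        features[:, keep])
    ```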

  10. Space-time variation of respiratory cancers in South Carolina: a flexible multivariate mixture modeling approach to risk estimation.

    PubMed

    Carroll, Rachel; Lawson, Andrew B; Kirby, Russell S; Faes, Christel; Aregay, Mehreteab; Watjou, Kevin

    2017-01-01

    Many types of cancer have an underlying spatiotemporal distribution. Spatiotemporal mixture modeling can offer a flexible approach to risk estimation via the inclusion of latent variables. In this article, we examine the application and benefits of using four different spatiotemporal mixture modeling methods in the modeling of cancer of the lung and bronchus as well as "other" respiratory cancer incidences in the state of South Carolina. Of the methods tested, no single method outperforms the other methods; which method is best depends on the cancer under consideration. The lung and bronchus cancer incidence outcome is best described by the univariate modeling formulation, whereas the "other" respiratory cancer incidence outcome is best described by the multivariate modeling formulation. Spatiotemporal multivariate mixture methods can aid in the modeling of cancers with small and sparse incidences when including information from a related, more common type of cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Impact of chemical proportions on the acute neurotoxicity of a mixture of seven carbamates in preweanling and adult rats.

    PubMed

    Moser, Virginia C; Padilla, Stephanie; Simmons, Jane Ellen; Haber, Lynne T; Hertzberg, Richard C

    2012-09-01

    Statistical design and environmental relevance are important aspects of studies of chemical mixtures, such as pesticides. We used a dose-additivity model to test experimentally the default assumptions of dose additivity for two mixtures of seven N-methylcarbamates (carbaryl, carbofuran, formetanate, methomyl, methiocarb, oxamyl, and propoxur). The best-fitting models were selected for the single-chemical dose-response data and used to develop a combined prediction model, which was then compared with the experimental mixture data. We evaluated behavioral (motor activity) and cholinesterase (ChE)-inhibitory (brain, red blood cells) outcomes at the time of peak acute effects following oral gavage in adult and preweanling (17 days old) Long-Evans male rats. The mixtures varied only in their mixing ratios. In the relative potency mixture, proportions of each carbamate were set at equitoxic component doses. A California environmental mixture was based on the 2005 sales of each carbamate in California. In adult rats, the relative potency mixture showed dose additivity for red blood cell ChE and motor activity, and brain ChE inhibition showed a modest greater-than additive (synergistic) response, but only at a middle dose. In rat pups, the relative potency mixture was either dose-additive (brain ChE inhibition, motor activity) or slightly less-than additive (red blood cell ChE inhibition). On the other hand, at both ages, the environmental mixture showed greater-than additive responses on all three endpoints, with significant deviations from predicted at most to all doses tested. Thus, we observed different interactive properties for different mixing ratios of these chemicals. These approaches for studying pesticide mixtures can improve evaluations of potential toxicity under varying experimental conditions that may mimic human exposures.

  12. Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2017-01-01

    In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra using a matrix inverse operation directly, the introduced solution estimates stain spectra and stain depths via probabilistic reasoning individually. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimal decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address the color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
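
    The circular mixture at the core of the method can be sketched as a small EM fit of a von Mises mixture to hue angles in radians. The concentration update uses the standard Fisher approximation; the initialization, component count, and synthetic angles are illustrative, and the paper's saturation weighting is omitted.

    ```python
    import numpy as np
    from scipy.stats import vonmises

    def fit_vonmises_mixture(theta, K=3, n_iter=100, seed=0):
        """EM for a K-component von Mises mixture on angles (radians)."""
        rng = np.random.default_rng(seed)
        pi = np.full(K, 1.0 / K)
        mu = rng.uniform(-np.pi, np.pi, K)
        kappa = np.full(K, 1.0)
        for _ in range(n_iter):
            # E-step: soft responsibilities of each component for each angle.
            dens = np.stack([vonmises.pdf(theta, kappa[k], loc=mu[k])
                             for k in range(K)], axis=1)
            r = pi * dens
            r /= r.sum(axis=1, keepdims=True)
            # M-step: weights, circular means, and concentrations.
            Nk = r.sum(axis=0)
            pi = Nk / len(theta)
            C = r.T @ np.cos(theta)
            S = r.T @ np.sin(theta)
            mu = np.arctan2(S, C)
            Rbar = np.sqrt(C**2 + S**2) / Nk
            # Fisher's approximation for the concentration parameter.
            kappa = np.where(
                Rbar < 0.53, 2*Rbar + Rbar**3 + 5*Rbar**5/6,
                np.where(Rbar < 0.85, -0.4 + 1.39*Rbar + 0.43/(1 - Rbar),
                         1.0 / (Rbar**3 - 4*Rbar**2 + 3*Rbar)))
        return pi, mu, kappa

    theta = np.concatenate([np.random.vonmises(0.5, 8, 300),
                            np.random.vonmises(2.5, 6, 200)])
    pi, mu, kappa = fit_vonmises_mixture(theta, K=2)
    ```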

  13. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    ERIC Educational Resources Information Center

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  14. Dielectric relaxation and hydrogen bonding interaction in xylitol-water mixtures using time domain reflectometry

    NASA Astrophysics Data System (ADS)

    Rander, D. N.; Joshi, Y. S.; Kanse, K. S.; Kumbharkhane, A. C.

    2016-01-01

    Measurements of the complex dielectric permittivity of xylitol-water mixtures have been carried out in the frequency range of 10 MHz-30 GHz using a time domain reflectometry technique. Measurements were made at six temperatures from 0 to 25 °C and at different weight fractions of xylitol (0 < W_X ≤ 0.7) in water. Different models exist to explain the dielectric relaxation behaviour of binary mixtures, such as the Debye, Cole-Cole and Cole-Davidson models. We observed that the dielectric relaxation behaviour of binary xylitol-water mixtures is well described by the Cole-Davidson model, which has an asymmetric distribution of relaxation times. Dielectric parameters such as the static dielectric constant and the relaxation time for the mixtures have been evaluated. The molecular interaction between xylitol and water molecules is discussed using the Kirkwood correlation factor (g_eff) and thermodynamic parameters.
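
    For reference, the three relaxation models compared above differ only in how the Debye denominator is generalized. A short sketch follows; the xylitol-water parameter values are chosen purely for illustration.

    ```python
    import numpy as np

    def debye(omega, eps_s, eps_inf, tau):
        return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau)

    def cole_cole(omega, eps_s, eps_inf, tau, alpha):
        # Symmetric broadening of the relaxation-time distribution.
        return eps_inf + (eps_s - eps_inf) / (1 + (1j * omega * tau)**(1 - alpha))

    def cole_davidson(omega, eps_s, eps_inf, tau, beta):
        # Asymmetric distribution of relaxation times; beta = 1 recovers Debye.
        return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau)**beta

    f = np.logspace(7, 10.5, 400)           # 10 MHz to ~30 GHz
    omega = 2 * np.pi * f
    # Illustrative parameters for one xylitol-water composition.
    eps = cole_davidson(omega, eps_s=70.0, eps_inf=4.5, tau=25e-12, beta=0.85)
    eps_real, eps_loss = eps.real, -eps.imag
    ```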

  15. Effects of Fermented Milk with Mixed Strains as a Probiotic on the Inhibition of Loperamide-Induced Constipation.

    PubMed

    Kim, Byoung-Kook; Choi, In Suk; Kim, Jihee; Han, Sung Hee; Suh, Hyung Joo; Hwang, Jae-Kwan

    2017-01-01

    To investigate the effects of a single bacterium and a mixture of bacteria as probiotics in loperamide-treated animal models, loperamide (3 mg/kg) was administered to SD rats to induce constipation. The individual lactic acid bacterial doses, Enterococcus faecium (EF), Lactobacillus acidophilus (LA), Streptococcus thermophilus (ST), Bifidobacterium bifidum (BB), Bifidobacterium lactis (BL), Pediococcus pentosaceus (PP), and a mixture of the bacteria were orally administered to loperamide-induced constipated rats at a concentration of 10^8 CFU/kg for 14 days. The weights and water contents of their stools were found to be significantly higher in the PP, CKDB (mixture of the 5 strains except PP), and CKDBP (CKDB+PP) groups than in the normal (constipation not induced) and control (constipation-induced) groups (p < 0.05). The intestinal transit ratio was significantly higher in all probiotic-treated groups than in the control group, and was highest in the CKDBP group (p < 0.05). The mucosal length and mucus secretion were significantly improved in all probiotic-treated groups compared to the control group, and the CKDBP group was found to be the most effective according to immunohistochemistry (IHC) staining and total short-chain fatty acid content analysis (p < 0.05). Lastly, PP, CKDB, and CKDBP showed relatively higher Lactobacillus sp. ratios of 61.94%, 60.31% and 51.94%, respectively, compared to the other groups, based on metagenomic analysis.

  16. Broad Feshbach resonance in the 6Li-40K mixture.

    PubMed

    Tiecke, T G; Goosen, M R; Ludewig, A; Gensemer, S D; Kraft, S; Kokkelmans, S J J M F; Walraven, J T M

    2010-02-05

    We study the widths of interspecies Feshbach resonances in a mixture of the fermionic quantum gases 6Li and 40K. We develop a model to calculate the width and position of all available Feshbach resonances for a system. Using the model, we select the optimal resonance to study the 6Li/40K mixture. Experimentally, we obtain the asymmetric Fano line shape of the interspecies elastic cross section by measuring the distillation rate of 6Li atoms from a potassium-rich 6Li/40K mixture as a function of magnetic field. This provides us with the first experimental determination of the width of a resonance in this mixture, ΔB = 1.5(5) G. Our results offer good perspectives for the observation of universal crossover physics using this mass-imbalanced fermionic mixture.

  17. Nanomechanical characterization of heterogeneous and hierarchical biomaterials and tissues using nanoindentation: the role of finite mixture models.

    PubMed

    Zadpoor, Amir A

    2015-03-01

    Mechanical characterization of biological tissues and biomaterials at the nano-scale is often performed using nanoindentation experiments. The different constituents of the characterized materials then appear in the histogram that shows the probability of measuring a certain range of mechanical properties. An objective technique is needed to separate the probability distributions that are mixed together in such a histogram. In this paper, finite mixture models (FMMs) are proposed as a tool capable of performing this type of analysis. Finite Gaussian mixture models assume that the measured probability distribution is a weighted combination of a finite number of Gaussian distributions with separate mean and standard deviation values. Dedicated optimization algorithms are available for fitting such a weighted mixture model to experimental data. Moreover, objective criteria are available to determine the optimum number of Gaussian distributions. In this paper, FMMs are used for interpreting the probability distribution functions representing the distributions of the elastic moduli of osteoarthritic human cartilage and co-polymeric microspheres. For the cartilage experiments, FMMs indicate that at least three mixture components are needed to describe the measured histogram. While the mechanical properties of the softest mixture component, often assumed to be associated with glycosaminoglycans, were found to be more or less constant regardless of whether two or three mixture components were used, those of the second mixture component (i.e. the collagen network) changed considerably depending on the number of mixture components. Regarding the co-polymeric microspheres, the optimum number of mixture components estimated by the FMM theory, i.e. 3, nicely matches the number of co-polymeric components used in the structure of the polymer. The computer programs used for the presented analyses are made freely available online for other researchers to use. Copyright © 2014 Elsevier B.V. All rights reserved.
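
    The model-selection step described above is easy to reproduce in outline: fit Gaussian mixtures with an increasing number of components and keep the fit minimizing an information criterion (BIC is used here as one such objective criterion). The moduli below are synthetic stand-ins for nanoindentation data.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # E: elastic moduli, one value per indent (synthetic three-phase sample).
    rng = np.random.default_rng(3)
    E = np.concatenate([rng.normal(0.2, 0.05, 300),   # soft constituent
                        rng.normal(0.8, 0.15, 200),   # stiffer network
                        rng.normal(2.5, 0.40, 100)]).reshape(-1, 1)

    # Fit 1..6 components and pick the number minimizing BIC.
    fits = [GaussianMixture(n_components=k, random_state=0).fit(E)
            for k in range(1, 7)]
    best = min(fits, key=lambda g: g.bic(E))
    print(best.n_components, best.means_.ravel(), best.weights_)
    ```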

  18. Assessment of the Risks of Mixtures of Major Use Veterinary Antibiotics in European Surface Waters.

    PubMed

    Guo, Jiahua; Selby, Katherine; Boxall, Alistair B A

    2016-08-02

    Effects of single veterinary antibiotics on a range of aquatic organisms have been explored in many studies. In reality, surface waters will be exposed to mixtures of these substances. In this study, we present an approach for establishing risks of antibiotic mixtures to surface waters and illustrate this by assessing risks of mixtures of three major use antibiotics (trimethoprim, tylosin, and lincomycin) to algal and cyanobacterial species in European surface waters. Ecotoxicity tests were initially performed to assess the combined effects of the antibiotics to the cyanobacteria Anabaena flos-aquae. The results were used to evaluate two mixture prediction models: concentration addition (CA) and independent action (IA). The CA model performed best at predicting the toxicity of the mixture with the experimental 96 h EC50 for the antibiotic mixture being 0.248 μmol/L compared to the CA predicted EC50 of 0.21 μmol/L. The CA model was therefore used alongside predictions of exposure for different European scenarios and estimations of hazards obtained from species sensitivity distributions to estimate risks of mixtures of the three antibiotics. Risk quotients for the different scenarios ranged from 0.066 to 385 indicating that the combination of three substances could be causing adverse impacts on algal communities in European surface waters. This could have important implications for primary production and nutrient cycling. Tylosin contributed most to the risk followed by lincomycin and trimethoprim. While we have explored only three antibiotics, the combined experimental and modeling approach could readily be applied to the wider range of antibiotics that are in use.
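
    The two reference models can be written down compactly. The sketch below predicts a mixture EC50 under concentration addition and, assuming illustrative Hill dose-response curves for the single compounds, locates the independent-action EC50 by bisection; all EC50s and mixture fractions are placeholders, not the paper's measurements.

    ```python
    import numpy as np

    # Single-compound EC50s (umol/L) and mixture fractions; illustrative
    # stand-ins for trimethoprim, tylosin, and lincomycin.
    ec50 = np.array([10.0, 0.5, 2.0])
    p = np.array([1/3, 1/3, 1/3])          # molar fractions in the mixture

    # Concentration addition: 1 / EC50_mix = sum_i p_i / EC50_i
    ec50_ca = 1.0 / np.sum(p / ec50)

    def hill(c, ec50, n=1.0):
        """Fractional effect of one compound (Hill model with slope n)."""
        return c**n / (c**n + ec50**n)

    def ia_effect(c_total):
        """Independent action: E = 1 - prod_i (1 - e_i(p_i * c_total))."""
        return 1.0 - np.prod([1.0 - hill(pi * c_total, e)
                              for pi, e in zip(p, ec50)])

    # Independent-action EC50 found by bisection on the total concentration.
    lo, hi = 1e-6, 1e4
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ia_effect(mid) < 0.5 else (lo, mid)
    ec50_ia = 0.5 * (lo + hi)
    ```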

  19. Moving target detection method based on improved Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

    The Gaussian mixture model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model. According to the gray-level convergence of each pixel, the number of Gaussian distributions is chosen adaptively to learn and update the background model. A morphological reconstruction method is adopted to eliminate shadows. Experiments show that the proposed method is robust, adaptable, and effective at detection, performing well even in special cases such as large grayscale changes.
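
    A minimal background-subtraction loop in this spirit can be assembled from OpenCV's adaptive-GMM subtractor (MOG2, which adapts the number of Gaussian components per pixel). Morphological opening stands in for the paper's morphological reconstruction step, and the video path is hypothetical.

    ```python
    import cv2

    cap = cv2.VideoCapture("traffic.avi")        # hypothetical input video
    mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                              detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = mog2.apply(frame)                 # learns/updates the GMM
        mask[mask == 127] = 0                    # drop pixels flagged as shadow
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        # connected regions above a minimum area would be the moving targets
    cap.release()
    ```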

  20. Evaluation of Asphalt Mixture Low-Temperature Performance in Bending Beam Creep Test.

    PubMed

    Pszczola, Marek; Jaczewski, Mariusz; Rys, Dawid; Jaskula, Piotr; Szydlowski, Cezary

    2018-01-10

    Low-temperature cracking is one of the most common road pavement distress types in Poland. While bitumen performance can be evaluated in detail using bending beam rheometer (BBR) or dynamic shear rheometer (DSR) tests, none of the normalized test methods gives a comprehensive representation of low-temperature performance of the asphalt mixtures. This article presents the Bending Beam Creep test performed at temperatures from -20 °C to +10 °C in order to evaluate the low-temperature performance of asphalt mixtures. Both validation of the method and its utilization for the assessment of eight types of wearing courses commonly used in Poland were described. The performed test indicated that the source of bitumen and its production process (and not necessarily only bitumen penetration) had a significant impact on the low-temperature performance of the asphalt mixtures, comparable to the impact of binder modification (neat, polymer-modified, highly modified) and the aggregate skeleton used in the mixture (Stone Mastic Asphalt (SMA) vs. Asphalt Concrete (AC)). Obtained Bending Beam Creep test results were compared with the BBR bitumen test. Regression analysis confirmed that performing solely bitumen tests is insufficient for comprehensive low-temperature performance analysis.

  1. Evaluation of Asphalt Mixture Low-Temperature Performance in Bending Beam Creep Test

    PubMed Central

    Rys, Dawid; Jaskula, Piotr; Szydlowski, Cezary

    2018-01-01

    Low-temperature cracking is one of the most common road pavement distress types in Poland. While bitumen performance can be evaluated in detail using bending beam rheometer (BBR) or dynamic shear rheometer (DSR) tests, none of the normalized test methods gives a comprehensive representation of low-temperature performance of the asphalt mixtures. This article presents the Bending Beam Creep test performed at temperatures from −20 °C to +10 °C in order to evaluate the low-temperature performance of asphalt mixtures. Both validation of the method and its utilization for the assessment of eight types of wearing courses commonly used in Poland were described. The performed test indicated that the source of bitumen and its production process (and not necessarily only bitumen penetration) had a significant impact on the low-temperature performance of the asphalt mixtures, comparable to the impact of binder modification (neat, polymer-modified, highly modified) and the aggregate skeleton used in the mixture (Stone Mastic Asphalt (SMA) vs. Asphalt Concrete (AC)). Obtained Bending Beam Creep test results were compared with the BBR bitumen test. Regression analysis confirmed that performing solely bitumen tests is insufficient for comprehensive low-temperature performance analysis. PMID:29320443

  2. 30 CFR 18.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... enough electrical or thermal energy to ignite a flammable mixture of the most easily ignitable composition. Intrinsically safe means incapable of releasing enough electrical or thermal energy under normal... portable cables may be connected to a source of electrical energy, and which contains a short-circuit...

  3. COLLAPSE OF A FISH POPULATION FOLLOWING EXPOSURE TO A SYNTHETIC ESTROGEN

    EPA Science Inventory

    Municipal wastewaters are a complex mixture containing estrogens and estrogen mimics that are known to affect the reproductive health of wild fishes. Male fishes downstream of some wastewater outfalls produce vitellogenin (VTG) (a protein normally synthesized by females during oo...

  4. Mesoscale Modeling of LX-17 Under Isentropic Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springer, H K; Willey, T M; Friedman, G

    Mesoscale simulations of LX-17 incorporating different equilibrium mixture models were used to investigate the unreacted equation-of-state (UEOS) of TATB. Candidate TATB UEOS were calculated using the equilibrium mixture models and benchmarked with mesoscale simulations of isentropic compression experiments (ICE). X-ray computed tomography (XRCT) data provided the basis for initializing the simulations with realistic microstructural details. Three equilibrium mixture models were used in this study. The single constituent with conservation equations (SCCE) model was based on a mass-fraction weighted specific volume and the conservation of mass, momentum, and energy. The single constituent equation-of-state (SCEOS) model was based on a mass-fraction weighted specific volume and the equation-of-state of the constituents. The kinetic energy averaging (KEA) model was based on a mass-fraction weighted particle velocity mixture rule and the conservation equations. The SCEOS model yielded the stiffest TATB EOS (0.121μ + 0.4958μ² + 2.0473μ³) and, when incorporated in mesoscale simulations of the ICE, demonstrated the best agreement with VISAR velocity data for both specimen thicknesses. The SCCE model yielded a relatively more compliant EOS (0.1999μ - 0.6967μ² + 4.9546μ³) and the KEA model yielded the most compliant EOS (0.1999μ - 0.6967μ² + 4.9546μ³) of all the equilibrium mixture models. Mesoscale simulations with the lower-density TATB adiabatic EOS data demonstrated the least agreement with VISAR velocity data.

  5. Latent Transition Analysis with a Mixture Item Response Theory Measurement Model

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian

    2010-01-01

    A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…

  6. Activities of mixtures of soil-applied herbicides with different molecular targets.

    PubMed

    Kaushik, Shalini; Streibig, Jens Carl; Cedergreen, Nina

    2006-11-01

    The joint action of soil-applied herbicide mixtures with similar or different modes of action has been assessed by using the additive dose model (ADM). The herbicides chlorsulfuron, metsulfuron-methyl, pendimethalin and pretilachlor, applied either singly or in binary mixtures, were used on rice (Oryza sativa L.). The growth (shoot) response curves were described by a logistic dose-response model. The ED50 values and their corresponding standard errors obtained from the response curves were used to test statistically if the shape of the isoboles differed from the reference model (ADM). Results showed that mixtures of herbicides with similar molecular targets, i.e. chlorsulfuron and metsulfuron (acetolactate synthase (ALS) inhibitors), and with different molecular targets, i.e. pendimethalin (microtubule assembly inhibitor) and pretilachlor (very long chain fatty acids (VLCFAs) inhibitor), followed the ADM. Mixing herbicides with different molecular targets gave different results depending on whether pretilachlor or pendimethalin was involved. In general, mixtures of pretilachlor and sulfonylureas showed synergistic interactions, whereas mixtures of pendimethalin and sulfonylureas exhibited either antagonistic or additive activities. Hence, there is a large potential for both increasing the specificity of herbicides by using mixtures and lowering the total dose for weed control, while at the same time delaying the development of herbicide resistance by using mixtures with different molecular targets. Copyright (c) 2006 Society of Chemical Industry.
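
    For reference, the additivity criterion behind the isobole analysis can be stated compactly; the following is a standard form of the additive dose model for a binary mixture, in our notation rather than the authors':

        % Additive dose model (ADM) isobole for a binary mixture at a fixed
        % effect level (e.g., 50% shoot growth reduction). d_i is the dose of
        % herbicide i in the mixture; ED50_i is the dose of herbicide i alone
        % producing the same effect.
        \[
          \frac{d_1}{\mathrm{ED50}_1} + \frac{d_2}{\mathrm{ED50}_2} = 1
        \]
        % Isobole points below this line indicate synergism; points above it
        % indicate antagonism.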

  7. Comparative performance of conventional OPC concrete and HPC designed by densified mixture design algorithm

    NASA Astrophysics Data System (ADS)

    Huynh, Trong-Phuoc; Hwang, Chao-Lung; Yang, Shu-Ti

    2017-12-01

    This experimental study evaluated the performance of normal ordinary Portland cement (OPC) concrete and high-performance concrete (HPC) designed by the conventional (ACI) method and the densified mixture design algorithm (DMDA) method, respectively. Engineering properties and durability performance of both the OPC and HPC samples were studied using tests of workability, compressive strength, water absorption, ultrasonic pulse velocity, and electrical surface resistivity. Test results show that the HPC exhibited good fresh properties and showed better performance in terms of strength and durability compared with the OPC.

  8. [Phospholipids under combined ozone-oxygen administration].

    PubMed

    Müller-Tyl, E; Hernuss, P; Salzer, H; Reisinger, L; Washüttl, J; Wurst, F

    1975-01-01

    The parenteral application of an oxygen-ozone gas mixture gives good results in the treatment of various diseases. Ozone appears to influence fat metabolism, so it was of interest to analyze this influence, particularly on phospholipids. Forty women with gynaecological cancer received 10 ml of an oxygen-ozone gas mixture containing 450 γ of ozone into the cubital vein. Venous blood was drawn before and 10 minutes after application, and the levels of lecithin, lysolecithin, cephalin, and sphingomyelin were determined by the method of Randerath. A decrease in all four substances was evident, although all values remained within the normal range.

  9. Hydrogen-Helium shock Radiation tests for Saturn Entry Probes

    NASA Technical Reports Server (NTRS)

    Cruden, Brett A.

    2016-01-01

    This paper describes the measurement of shock layer radiation in hydrogen/helium mixtures representative of those encountered by probes entering the Saturn atmosphere. Normal shock waves are measured in hydrogen-helium mixtures (89:11% by volume) at freestream pressures between 13 and 66 Pa (0.1-0.5 Torr) and velocities from 20 to 30 km/s. Radiance is quantified from the vacuum ultraviolet through the near infrared. An induction region of several centimeters is observed, in which electron density and radiance remain well below their equilibrium values. Radiance is also observed ahead of the shock layer, with characteristics that match the expected diffusion length of hydrogen.

  10. An EM-based semi-parametric mixture model approach to the regression analysis of competing-risks data.

    PubMed

    Ng, S K; McLachlan, G J

    2003-04-15

    We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
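
    In symbols, the two model components described above can be sketched as follows; the parameterization is our reading of the abstract, not necessarily the authors' exact notation:

        % Mixture decomposition for competing-risks data with g failure types.
        % pi_j(x): probability of failure from type j (logistic/multinomial model);
        % lambda_j(t|x): hazard conditional on type j (proportional hazards with
        % completely unspecified component-baseline hazard lambda_{0j}).
        \[
          \pi_j(x) = \frac{\exp(\gamma_j^{\top} x)}{\sum_{k=1}^{g} \exp(\gamma_k^{\top} x)},
          \qquad
          \lambda_j(t \mid x) = \lambda_{0j}(t) \exp(\beta_j^{\top} x)
        \]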

  11. Modeling Math Growth Trajectory--An Application of Conventional Growth Curve Model and Growth Mixture Model to ECLS K-5 Data

    ERIC Educational Resources Information Center

    Lu, Yi

    2016-01-01

    To model students' math growth trajectory, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of conventional growth curve model show gender differences on math IRT scores. When holding socio-economic…

  12. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…

  13. Numerical modeling and analytical modeling of cryogenic carbon capture in a de-sublimating heat exchanger

    NASA Astrophysics Data System (ADS)

    Yu, Zhitao; Miller, Franklin; Pfotenhauer, John M.

    2017-12-01

    Both a numerical and an analytical model of the heat and mass transfer processes in a CO2/N2 mixture-gas de-sublimating cross-flow finned duct heat exchanger system are developed to predict the heat transferred from the mixture gas to liquid nitrogen and the de-sublimation rate of CO2 in the mixture gas. The mixture gas outlet temperature, liquid nitrogen outlet temperature, CO2 mole fraction, temperature distribution, and de-sublimation rate of CO2 through the whole heat exchanger were computed using both the numerical and the analytical model. The numerical model is built using EES [1] (Engineering Equation Solver). Based on the simulations, a cross-flow finned duct heat exchanger can be designed and fabricated to validate the models. The performance of the heat exchanger is evaluated as a function of dimensionless variables, such as the ratio of the mass flow rate of liquid nitrogen to the mass flow rate of inlet flue gas.

  14. Structure investigations on assembled astaxanthin molecules

    NASA Astrophysics Data System (ADS)

    Köpsel, Christian; Möltgen, Holger; Schuch, Horst; Auweter, Helmut; Kleinermanns, Karl; Martin, Hans-Dieter; Bettermann, Hans

    2005-08-01

    The carotenoid r,r-astaxanthin (3R,3′R-dihydroxy-4,4′-diketo-β-carotene) forms different types of aggregates in acetone-water mixtures. H-type aggregates were found in mixtures with a high proportion of water (e.g., a 1:9 acetone-water mixture), whereas two different types of J-aggregates were identified in mixtures with a lower proportion of water (a 3:7 acetone-water mixture). These aggregates were characterized by recording UV/Vis absorption spectra, CD spectra, and fluorescence emission. The sizes of the molecular assemblies were determined by dynamic light scattering experiments. The hydrodynamic diameter of the assemblies amounts to 40 nm in 1:9 acetone-water mixtures and reaches up to 1 μm in 3:7 acetone-water mixtures. Scanning tunneling microscopy monitored astaxanthin aggregates on graphite surfaces. The structure of the H-aggregate was obtained by molecular modeling calculations and confirmed by calculating the electronic absorption spectrum and the CD spectrum, with the molecular modeling structure used as input.

  15. Quantification of brain lipids by FTIR spectroscopy and partial least squares regression

    NASA Astrophysics Data System (ADS)

    Dreissig, Isabell; Machill, Susanne; Salzer, Reiner; Krafft, Christoph

    2009-01-01

    Brain tissue is characterized by high lipid content. This content decreases and the lipid composition changes during the transformation from normal brain tissue to tumors. Therefore, the analysis of brain lipids might complement existing diagnostic tools to determine tumor type and grade. The objective of this work is to extract lipids from the gray matter and white matter of porcine brain tissue, record infrared (IR) spectra of these extracts, and develop a quantification model for the main lipids based on partial least squares (PLS) regression. IR spectra of the pure lipids cholesterol, cholesterol ester, phosphatidic acid, phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine, phosphatidylinositol, sphingomyelin, galactocerebroside, and sulfatide were used as references. Two lipid mixtures were prepared for training and validation of the quantification model. The composition of lipid extracts predicted by PLS regression of the IR spectra was compared with lipid quantification by thin layer chromatography.
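
    A minimal sketch of the calibration step, under the assumption that the training spectra and known lipid fractions are available as arrays (the scikit-learn implementation and all names are illustrative, not the authors' code):

        # Sketch: PLS calibration of lipid composition from IR spectra.
        # X rows are absorbance spectra; Y rows are known lipid mass fractions.
        # All shapes and data here are illustrative placeholders.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X_train = rng.random((20, 500))           # spectra of 20 training mixtures
        Y_train = rng.dirichlet(np.ones(10), 20)  # fractions of the 10 reference lipids

        pls = PLSRegression(n_components=8)       # latent variables; tune by cross-validation
        pls.fit(X_train, Y_train)

        X_extract = rng.random((1, 500))          # IR spectrum of a brain lipid extract
        print(pls.predict(X_extract))             # predicted composition of the extract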

  16. Spin-Imbalanced Quasi-Two-Dimensional Fermi Gases

    NASA Astrophysics Data System (ADS)

    Ong, W.; Cheng, Chingyun; Arakelyan, I.; Thomas, J. E.

    2015-03-01

    We measure the density profiles for a Fermi gas of Li 6 containing N1 spin-up atoms and N2 spin-down atoms, confined in a quasi-two-dimensional geometry. The spatial profiles are measured as a function of spin imbalance N2/N1 and interaction strength, which is controlled by means of a collisional (Feshbach) resonance. The measured cloud radii and central densities are in disagreement with mean-field Bardeen-Cooper-Schrieffer theory for a true two-dimensional system. We find that the data for normal-fluid mixtures are reasonably well fit by a simple two-dimensional polaron model of the free energy. Not predicted by the model is a phase transition to a spin-balanced central core, which is observed above a critical value of N2/N1. Our observations provide important benchmarks for predictions of the phase structure of quasi-two-dimensional Fermi gases.

  17. A preliminary evaluation of immune stimulation following exposure to metal particles and ions using the mouse popliteal lymph node assay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tvermoes, Brooke E., E-mail: brooke.tvermoes@cardn

    The objective of this preliminary study was to evaluate the threshold for immune stimulation in mice following local exposure to metal particles and ions representative of normal-functioning cobalt-chromium (CoCr) metal-on-metal (MoM) hip implants. The popliteal lymph node assay (PLNA) was used in this study to assess immune responses in BALB/c mice following treatment with chromium-oxide (Cr2O3) particles, metal salts (CoCl2, CrCl3 and NiCl2), or Cr2O3 particles together with metal salts, using single-dose exposures representing approximately 10 days (0.000114 mg), 19 years (0.0800 mg), and 40 years (0.171 mg) of normal implant wear. The immune response elicited following treatment with Cr2O3 particles together with metal salts was also assessed at four additional doses equivalent to approximately 1.5 months (0.0005 mg), 0.6 years (0.0025 mg), 2.3 years (0.01 mg), and 9.3 years (0.04 mg) of normal implant wear. Mice were injected subcutaneously (50 μL) into the right hind foot with the test article or with the relevant vehicle control. The proliferative response of the draining lymph node cells (LNC) was measured four days after treatment, and stimulation indices (SI) were derived relative to vehicle controls. The PLNA was negative (SI < 3) for all Cr2O3 particle doses, for the lowest dose of the metal salt mixture, and for the lowest four doses of the Cr2O3 particles with metal salt mixture. The PLNA was positive (SI > 3) at the highest two doses of the metal salt mixture and the highest three doses of the Cr2O3 particles with the metal salt mixture. The provisional NOAEL and LOAEL values identified in this study for immune activation correspond to Co and Cr concentrations in the synovial fluid approximately 500 and 2000 times higher than those reported for normal-functioning MoM hip implants, respectively. Overall, these results indicate that normal wear conditions are unlikely to result in immune stimulation in individuals not previously sensitized to metals. - Highlights: • Immune responses in mice were assessed following treatment with Cr2O3 particles with metal salts. • The PLNA was negative (SI < 3) for all Cr2O3 particle doses. • A LOAEL for immune activation was identified at 0.04 mg of metal particles with metal salts. • A NOAEL for immune activation was identified at 0.01 mg of metal particles with metal salts.

  18. NTP technical report on the toxicity studies of Pesticide/Fertilizer Mixtures Administered in Drinking Water to F344/N Rats and B6C3F1 Mice.

    PubMed

    Yang, R.

    1993-08-01

    Toxicity studies were performed with pesticide and fertilizer mixtures representative of groundwater contamination found in California and Iowa. The California mixture was composed of aldicarb, atrazine, 1,2-dibromo-3-chloropropane, 1,2-dichloropropane, ethylene dibromide, simazine, and ammonium nitrate. The Iowa mixture contained alachlor, atrazine, cyanazine, metolachlor, metribuzin, and ammonium nitrate. The mixtures were administered in drinking water (with 512 ppm propylene glycol) to F344/N rats and B6C3F1 mice of each sex at concentrations ranging from 0.1x to 100x, where 1x represented the median concentrations of the individual chemicals found in studies of groundwater contamination from normal agricultural activities. This report focuses primarily on 26-week toxicity studies describing histopathology, clinical pathology, neurobehavior/neuropathology, and reproductive system effects. The genetic toxicity of the mixtures was assessed by determining the frequency of micronuclei in peripheral blood of mice and evaluating micronuclei and sister chromatid exchanges in splenocytes from female mice and male rats. Additional studies with these mixtures that are briefly reviewed in this report include teratology studies with Sprague-Dawley rats and continuous breeding studies with CD-1 Swiss mice. In 26-week drinking water studies of the California and the Iowa mixtures, all rats (10 per sex and group) survived to the end of the studies, and there were no significant effects on body weight gains. Water consumption was not affected by the pesticide/fertilizer contaminants, and there were no clinical signs of toxicity or neurobehavioral effects as measured by a functional observational battery, motor activity evaluations, thermal sensitivity evaluations, and startle response. There were no clear adverse effects noted in clinical pathology (including serum cholinesterase activity), organ weight, reproductive system, or histopathologic evaluations, although absolute and relative liver weights were marginally increased with increasing exposure concentration in both male and female rats consuming the Iowa mixture. In 26-week drinking water studies in mice, one male receiving the California mixture at 100x died during the study, and one control female and one female in the 100x group in the Iowa mixture study also died early. It could not be determined if the death of either of the mice in the 100x groups was related to consumption of the pesticide/fertilizer mixtures. Water consumption and body weight gains were not affected in these studies, and no signs of toxicity were noted in clinical observations or in neurobehavioral assessments. No clear adverse effects were noted in clinical pathology, reproductive system, organ weight, or histopathologic evaluations of exposed mice. The pesticide/fertilizer mixtures, when tested over a concentration range similar to that used in the 26-week studies, were found to have no effects in teratology studies or in a continuous breeding assay examining reproductive and developmental toxicity. The California and Iowa pesticide mixtures were tested for induction of micronuclei in peripheral blood erythrocytes of female mice. Results of tests with the California mixture were negative. Significant increases in micronucleated normochromatic erythrocytes were seen at the two highest concentrations (10x and 100x) of the Iowa mixture, but the increases were within the normal range of micronuclei in historical control animals.
Splenocytes of male rats and female mice exposed to these mixtures were examined for micronucleus and sister chromatid exchange frequencies. Sister chromatid exchange frequencies were marginally increased in rats and mice receiving the California mixture, but neither species exhibited increased frequencies of micronucleated splenocytes. None of these changes were considered to have biological importance. In summary, studies of potential toxicity associated with the consumption of mixtures of pesticides and a fertilizer representative of groundwater contamination in agricultural areas of Iowa and California failed to demonstrate any significant adverse effects in rats or mice receiving the mixtures in drinking water at concentrations as high as 100 times the median concentrations of the individual chemicals determined by groundwater surveys. NOTE: These studies were supported in part by funds from the Comprehensive Environmental Response, Compensation, and Liability Act trust fund (Superfund) by an interagency agreement with the Agency for Toxic Substances and Disease Registry, U.S. Public Health Service.

  19. Glaucomatous patterns in Frequency Doubling Technology (FDT) perimetry data identified by unsupervised machine learning classifiers.

    PubMed

    Bowd, Christopher; Weinreb, Robert N; Balasubramanian, Madhusudhanan; Lee, Intae; Jang, Giljin; Yousefi, Siamak; Zangwill, Linda M; Medeiros, Felipe A; Girkin, Christopher A; Liebmann, Jeffrey M; Goldbaum, Michael H

    2014-01-01

    The variational Bayesian independent component analysis-mixture model (VIM), an unsupervised machine-learning classifier, was used to automatically separate Matrix Frequency Doubling Technology (FDT) perimetry data into clusters of healthy and glaucomatous eyes, and to identify axes representing statistically independent patterns of defect in the glaucoma clusters. FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal FDT results from the UCSD-based Diagnostic Innovations in Glaucoma Study (DIGS) and African Descent and Glaucoma Evaluation Study (ADAGES). For all eyes, VIM input was 52 threshold test points from the 24-2 test pattern, plus age. FDT mean deviation was -1.00 dB (S.D. = 2.80 dB) and -5.57 dB (S.D. = 5.09 dB) in FDT-normal eyes and FDT-abnormal eyes, respectively (p<0.001). VIM identified meaningful clusters of FDT data and positioned a set of statistically independent axes through the mean of each cluster. The optimal VIM model separated the FDT fields into 3 clusters. Cluster N contained primarily normal fields (1109/1190, specificity 93.1%) and clusters G1 and G2 combined, contained primarily abnormal fields (651/786, sensitivity 82.8%). For clusters G1 and G2 the optimal number of axes were 2 and 5, respectively. Patterns automatically generated along axes within the glaucoma clusters were similar to those known to be indicative of glaucoma. Fields located farther from the normal mean on each glaucoma axis showed increasing field defect severity. VIM successfully separated FDT fields from healthy and glaucoma eyes without a priori information about class membership, and identified familiar glaucomatous patterns of loss.

  20. Establishment method of a mixture model and its practical application for transmission gears in an engineering vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping

    2012-09-01

    Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to the complexity of the load. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, it cannot be well represented by a single-peak function. To represent the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method to establish a mixture model. Based on linear regression models and correlation coefficients, the proposed method can be used to automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are chosen and then used as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated by field testing data from a wheel loader. Meanwhile, the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for describing the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces the statistical error, and enhances the fitting accuracy, and the load spectra compiled by this method can better reflect the actual load characteristics of the gear component.
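
    A minimal sketch of the three judging criteria named above, comparing a candidate mixture PDF against the load histogram; the two-component normal mixture used here is illustrative, not the authors' fitted model:

        # Sketch: the three judging criteria, comparing a candidate mixture PDF
        # against the load histogram. The two-component normal mixture here is
        # an illustrative stand-in for the fitted model.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        loads = np.concatenate([rng.normal(50, 5, 3000), rng.normal(80, 8, 2000)])
        hist, edges = np.histogram(loads, bins=60, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])

        pdf = 0.6 * stats.norm.pdf(centers, 50, 5) + 0.4 * stats.norm.pdf(centers, 80, 8)

        ss_res = np.sum((hist - pdf) ** 2)
        r2 = 1.0 - ss_res / np.sum((hist - hist.mean()) ** 2)  # coefficient of determination
        mse = np.mean((hist - pdf) ** 2)                       # mean square error
        max_dev = np.max(np.abs(hist - pdf))                   # maximum deviation
        print(r2, mse, max_dev)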

  1. Compact determination of hydrogen isotopes

    DOE PAGES

    Robinson, David

    2017-04-06

    Scanning calorimetry of a confined, reversible hydrogen sorbent material has been previously proposed as a method to determine compositions of unknown mixtures of diatomic hydrogen isotopologues and helium. Application of this concept could result in greater process knowledge during the handling of these gases. Previously published studies have focused on mixtures that do not include tritium. This paper focuses on modeling to predict the effect of tritium in mixtures of the isotopologues on a calorimetry scan. Furthermore, the model predicts that tritium can be measured with a sensitivity comparable to that observed for hydrogen-deuterium mixtures, and that under some conditions, it may be possible to determine the atomic fractions of all three isotopes in a gas mixture.

  2. Healing effect of sea buckthorn, olive oil, and their mixture on full-thickness burn wounds.

    PubMed

    Edraki, Mitra; Akbarzadeh, Armin; Hosseinzadeh, Massood; Tanideh, Nader; Salehi, Alireza; Koohi-Hosseinabadi, Omid

    2014-07-01

    The purpose of this study is to evaluate the healing effect of silver sulfadiazine (SSD), sea buckthorn, olive oil, and a 5% sea buckthorn and olive oil mixture on full-thickness burn wounds with respect to both gross and histopathologic features. Full-thickness burns were induced on 60 rats; the rats were then divided into 5 groups and treated with sea buckthorn, olive oil, a 5% sea buckthorn/olive oil mixture, SSD, and normal saline (control). They were observed for 28 days, and the wounds' healing process was evaluated. Wound contraction occurred faster in the sea buckthorn, olive oil, and sea buckthorn/olive oil mixture groups compared with the SSD and control groups. The volume of the exudates was controlled more effectively in wounds treated with the sea buckthorn/olive oil mixture. Purulent exudates were observed in the control group, but the other groups did not show infection. The group treated with the sea buckthorn/olive oil mixture revealed more developed re-epithelialization with a continuous basement membrane and mature granulation tissue, whereas the SSD-treated group showed ulceration, necrosis, and immature granulation. The results show that sea buckthorn and olive oil individually are proper dressings for burn wounds and that they also show a synergistic effect when used together. A sea buckthorn and olive oil mixture could be considered as an alternative dressing for full-thickness burns because of its improved wound healing characteristics and antibacterial property.

  3. Lattice model for water-solute mixtures.

    PubMed

    Furlan, A P; Almarza, N G; Barbosa, M C

    2016-10-14

    A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of the pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low-density liquid, and high-density liquid phases. In the case of the solute, the regions of coexistence are substantially reduced when compared with both the water and the standard ALG models. A numerical procedure has been developed in order to obtain series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as functions of the solute fraction, have been studied for different interaction parameters of the model. Our model is able to reproduce qualitatively well the excess volume and enthalpy of different aqueous solutions. For the hydrophilic case, we show that the model is able to reproduce the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.

  4. Indices for estimating fractional snow cover in the western Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Shreve, Cheney M.; Okin, Gregory S.; Painter, Thomas H.

    Snow cover in the Tibetan Plateau is highly variable in space and time and plays a key role in ecological processes of this cold-desert ecosystem. Resolution of passive microwave data is too low for regional-scale estimates of snow cover on the Tibetan Plateau, requiring an alternate data source. Optically derived snow indices allow for more accurate quantification of snow cover using higher-resolution datasets subject to the constraint of cloud cover. This paper introduces a new optical snow index and assesses four optically derived MODIS snow indices using Landsat-based validation scenes: MODIS Snow-Covered Area and Grain Size (MODSCAG), Relative Multiple Endmember Spectral Mixture Analysis (RMESMA), Relative Spectral Mixture Analysis (RSMA) and the normalized-difference snow index (NDSI). Pearson correlation coefficients were positively correlated with the validation datasets for all four optical snow indices, suggesting each provides a good measure of total snow extent. At the 95% confidence level, linear least-squares regression showed that MODSCAG and RMESMA had accuracy comparable to validation scenes. Fusion of optical snow indices with passive microwave products, which provide snow depth and snow water equivalent, has the potential to contribute to hydrologic and energy-balance modeling in the Tibetan Plateau.
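
    Of the four indices, only the NDSI has a simple closed form. It is conventionally computed from green and shortwave-infrared reflectances (for MODIS, typically bands 4 and 6; the band choice is the usual convention, not stated in this abstract):

        def ndsi(green, swir):
            """Normalized-difference snow index from green and SWIR reflectances."""
            return (green - swir) / (green + swir)

        # Snow is bright in the green and dark in the shortwave infrared:
        print(ndsi(0.70, 0.10))  # 0.75; values above ~0.4 are commonly mapped as snow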

  5. Gaussian mixture models for detection of autism spectrum disorders (ASD) in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Almeida, Javier; Velasco, Nelson; Alvarez, Charlens; Romero, Eduardo

    2017-11-01

    Autism Spectrum Disorder (ASD) is a complex neurological condition characterized by a triad of signs including stereotyped behaviors and verbal and non-verbal communication problems. The scientific community has been interested in quantifying anatomical brain alterations of this disorder. Several studies have focused on measuring brain cortical and sub-cortical volumes. This article presents a fully automatic method that finds differences between patients diagnosed with autism and control patients. After the usual pre-processing, a template (MNI152) is registered to an evaluated brain, which then becomes a set of regions. Each of these regions is then represented by its normalized histogram of intensities, which is approximated by a Gaussian mixture model (GMM). The gray and white matter are separated to calculate the mean and standard deviation of each Gaussian. These features are then used to train, region per region, a binary SVM classifier. The method was evaluated in an adult population aged from 18 to 35 years from the public database Autism Brain Imaging Data Exchange (ABIDE). The highest discrimination values were found for the right middle temporal gyrus, with an area under the receiver operating characteristic (ROC) curve of 0.72.
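
    A minimal sketch of the per-region pipeline described above, assuming each region's voxel intensities have already been extracted; the scikit-learn usage and all names are ours, not the authors':

        # Sketch: GMM features from one region's intensity histogram, fed to a
        # binary SVM. Data, shapes, and parameter choices are illustrative.
        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.svm import SVC

        def region_features(intensities, n_components=2):
            """Fit a GMM to a region's normalized intensities; return sorted means and stds."""
            gmm = GaussianMixture(n_components=n_components, random_state=0)
            gmm.fit(intensities.reshape(-1, 1))
            means = gmm.means_.ravel()
            stds = np.sqrt(gmm.covariances_.ravel())
            order = np.argsort(means)  # fix component order so features are comparable
            return np.concatenate([means[order], stds[order]])

        rng = np.random.default_rng(0)
        X = np.array([region_features(rng.normal(0, 1, 2000)) for _ in range(40)])
        y = rng.integers(0, 2, 40)     # 1 = ASD, 0 = control (placeholder labels)

        clf = SVC(kernel="rbf").fit(X, y)  # one classifier per region, as described
        print(clf.score(X, y))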

  6. Effect of solvent quality on aggregate structures of common surfactants.

    PubMed

    Hollamby, Martin J; Tabor, Rico; Mutch, Kevin J; Trickett, Kieran; Eastoe, Julian; Heenan, Richard K; Grillo, Isabelle

    2008-11-04

    Aggregate structures of two model surfactants, AOT and C12E5, are studied in the pure solvents D2O, dioxane-d8 (d-diox), and cyclohexane-d12 (C6D12), as well as in formulated D2O/d-diox and d-diox/C6D12 mixtures. As such, these solvents and mixtures span a wide and continuous range of polarities. Small-angle neutron scattering (SANS) has been employed to follow the evolution of the preferred aggregate curvature, from normal micelles in high-polarity solvents through to reversed micelles in low-polarity media. SANS has also been used to elucidate micellar size and shape, as well as to highlight intermicellar interactions. The results shed new light on the nature of aggregation structures in intermediate-polarity solvents and point to a region of solvent quality (as characterized by the Hildebrand solubility parameter, the Snyder polarity parameter, or the dielectric constant) in which aggregation is not favored. Finally, these observed trends in aggregation as a function of solvent quality are successfully used to predict the self-assembly behavior of C12E5 in a different solvent, hexane-d14 (C6D14).

  7. DNS study of speed of sound in two-phase flows with phase change

    NASA Astrophysics Data System (ADS)

    Fu, Kai; Deng, Xiaolong

    2017-11-01

    Heat transfer through pipe flow is important for the safety of thermal power plants. Normally the flow is considered incompressible. However, in some conditions compressibility effects can deteriorate the heat transfer efficiency and even result in pipe rupture, especially when there is obvious phase change, because of the much lower sound speed in liquid-gas mixture flows. Based on the stratified multiphase flow model (Chang and Liou, JCP 2007), we present a new approach to simulate the sound speed in 3-D compressible two-phase dispersed flows, in which each face is divided into gas-gas, gas-liquid, and liquid-liquid parts via reconstruction by volume fraction, and fluxes are calculated correspondingly. Applying it to well-distributed air-water bubbly flows and comparing with experimental measurements in air-water mixtures (Karplus, JASA 1957), the effects of adiabaticity, viscosity, and isothermality are examined. Under viscous and isothermal conditions, the simulation results match the experimental ones very well, showing that DNS with the current method is an effective way to study the sound speed of complex two-phase dispersed flows. By including the two-phase Riemann solver with phase change (Fechter et al., JCP 2017), more complex problems can be studied numerically.
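
    The low mixture sound speed mentioned above is captured, for homogeneous bubbly mixtures, by Wood's classical relation; the following quick estimate uses Wood's formula as a standard reference model, not the DNS approach of this work:

        # Sketch: Wood's formula for the sound speed of a homogeneous air-water
        # mixture, 1/(rho_m c_m^2) = a/(rho_g c_g^2) + (1 - a)/(rho_l c_l^2),
        # where a is the gas volume fraction (property values are illustrative).
        def wood_speed(a, rho_g=1.2, c_g=340.0, rho_l=1000.0, c_l=1480.0):
            rho_m = a * rho_g + (1 - a) * rho_l
            compressibility = a / (rho_g * c_g**2) + (1 - a) / (rho_l * c_l**2)
            return (rho_m * compressibility) ** -0.5

        for a in (0.0, 0.01, 0.1, 0.5):
            print(a, round(wood_speed(a), 1))
        # Even 1% air by volume drops the mixture sound speed to roughly 120 m/s,
        # far below that of either pure phase.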

  8. Approximation of the breast height diameter distribution of two-cohort stands by mixture models III Kernel density estimators vs mixture models

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...

  9. Learning coefficient of generalization error in Bayesian estimation and Vandermonde matrix-type singularity.

    PubMed

    Aoyagi, Miki; Nagata, Kenji

    2012-06-01

    The term algebraic statistics arises from the study of probabilistic models and techniques for statistical inference using methods from algebra and geometry (Sturmfels, 2009). The purpose of our study is to consider the generalization error and stochastic complexity in learning theory by using the log-canonical threshold in algebraic geometry. Such thresholds correspond to the main term of the generalization error in Bayesian estimation, which is called a learning coefficient (Watanabe, 2001a, 2001b). The learning coefficient serves to measure the learning efficiencies in hierarchical learning models. In this letter, we consider learning coefficients for Vandermonde matrix-type singularities, by using a new approach: focusing on the generators of the ideal which defines the singularities. We give tight new bound values of learning coefficients for the Vandermonde matrix-type singularities and the explicit values under certain conditions. By applying our results, we can show the learning coefficients of three-layered neural networks and normal mixture models.

  10. The nonlinear model for emergence of stable conditions in gas mixture in force field

    NASA Astrophysics Data System (ADS)

    Kalutskov, Oleg; Uvarova, Liudmila

    2016-06-01

    The case of M-component liquid evaporation from a straight cylindrical capillary into an N-component gas mixture in the presence of external forces is reviewed. It is assumed that the gas mixture is not ideal. Stable states in the gas phase can form during the evaporation process for certain model parameter values because of the nonlinearity of the initial mass transfer equations. The critical concentrations of the resulting gas mixture components (the critical component concentrations at which stable states occur in the mixture) were determined mathematically for the case of single-component fluid evaporation into a two-component atmosphere. It was concluded that this equilibrium concentration ratio of the mixture components can be achieved through the influence of an external force on the mass transfer processes. This is one way to create stable gas clusters that can be used effectively in modern nanotechnology.

  11. A general mixture theory. I. Mixtures of spherical molecules

    NASA Astrophysics Data System (ADS)

    Hamad, Esam Z.

    1996-08-01

    We present a new general theory for obtaining mixture properties from the pure-species equations of state. The theory addresses the dependence of the mixture equation of state on composition and on unlike interactions. The density expansion of the mixture equation gives the exact composition dependence of all virial coefficients. The theory introduces multiple-index parameters that can be calculated from binary unlike-interaction parameters. In this first part of the work, details are presented for the first and second levels of approximation for spherical molecules. The second-order model is simple and very accurate. It predicts the compressibility factor of additive hard spheres within simulation uncertainty (equimolar mixture with a size ratio of three). For nonadditive hard spheres, comparison with compressibility-factor simulation data over a wide range of density, composition, and nonadditivity parameter gave an average error of 2%. For mixtures of Lennard-Jones molecules, the model predictions are better than the Weeks-Chandler-Andersen perturbation theory.

  12. Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀.

    PubMed

    Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan

    2017-09-01

    Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null hypothesis H0 (when their distribution is uniform) or from the alternative hypothesis H1 (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H0, but it also estimates the probability that each specific p value originates from H0. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H0 than from H1. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
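
    A minimal sketch of the two-component idea, using plain EM rather than the authors' Bayesian machinery; the Beta-shaped H1 component is an illustrative stand-in for their parametric model:

        # Sketch: EM for a two-component mixture on significant p values:
        # Uniform(0, .05) under H0 and a right-skewed density under H1. The
        # Beta-shaped H1 component stands in for the authors' parametric model.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        p = np.concatenate([rng.uniform(0, 0.05, 60),        # H0-like p values
                            rng.beta(0.5, 20, 140) * 0.05])  # H1-like, piled near 0

        f0 = stats.uniform.pdf(p, 0, 0.05)                   # H0 density (flat)
        f1 = stats.beta.pdf(p / 0.05, 0.5, 20) / 0.05        # H1 density (fixed shape)
        pi0 = 0.5                                            # initial H0 proportion
        for _ in range(200):
            w0 = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)      # P(H0 | p), per p value
            pi0 = w0.mean()                                  # M-step update
        print("estimated proportion of p values from H0:", round(pi0, 3))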

  13. Numerical trials of HISSE

    NASA Technical Reports Server (NTRS)

    Peters, C.; Kampe, F. (Principal Investigator)

    1980-01-01

    The mathematical description and implementation of the statistical estimation procedure known as the Houston integrated spatial/spectral estimator (HISSE) is discussed. HISSE is based on a normal mixture model and is designed to take advantage of spectral and spatial information of LANDSAT data pixels, utilizing the initial classification and clustering information provided by the AMOEBA algorithm. The HISSE calculates parametric estimates of class proportions which reduce the error inherent in estimates derived from typical classify and count procedures common to nonparametric clustering algorithms. It also singles out spatial groupings of pixels which are most suitable for labeling classes. These calculations are designed to aid the analyst/interpreter in labeling patches with a crop class label. Finally, HISSE's initial performance on an actual LANDSAT agricultural ground truth data set is reported.

  14. Normal shock wave reflection on porous compressible material

    NASA Astrophysics Data System (ADS)

    Gvozdeva, L. G.; Faresov, Iu. M.; Brossard, J.; Charpentier, N.

    The present experimental investigation of the interaction of plane shock waves in air with a rigid wall coated with flat layers of expanded polymers was conducted in a standard diaphragm shock tube with an initial test section pressure of 100,000 Pa. The Mach number of the incident shock wave was varied from 1.1 to 2.7; the peak pressures measured on the wall behind polyurethane at various incident-wave Mach numbers are compared with calculated values, with the ideal model of propagation, and with the reflection of shock waves in a porous material treated as a homogeneous mixture. The effect of the elasticity and permeability of the porous material's structure on the pressure pulse parameters at the rigid wall is qualitatively studied.

  15. Effect of Penetration Enhancers on the Percutaneous Delivery of Hormone Replacement Actives.

    PubMed

    Trimble, John O; Light, Bob

    2017-01-01

    Transdermal compositions for hormone replacement comprise exogenous hormones that are biochemically similar to those produced endogenously by the ovaries or elsewhere in the body. In this work, estradiol, estriol, and testosterone were loaded into transdermal vehicles prepared using one of three selected penetration-enhancer mixtures: Vehicle 1 (olive oil and oleic acid), Vehicle 2 (isopropyl palmitate and lecithin), and Vehicle 3 (isopropyl myristate and lecithin). The influence of the penetration enhancers on transdermal delivery was evaluated using Franz-type diffusion cells and a Normal Human 3D Model of Epidermal Tissue. Results showed that drug delivery is affected by the penetration enhancer used in the transdermal composition. Copyright© by International Journal of Pharmaceutical Compounding, Inc.

  16. Superconducting states of topological surface states in β-PdBi2 investigated by STM/STS

    NASA Astrophysics Data System (ADS)

    Iwaya, Katsuya; Okawa, Kenjiro; Hanaguri, Tetsuo; Kohsaka, Yuhki; Machida, Tadashi; Sasagawa, Takao

    We investigate the superconducting (SC) states of topological surface states in β-PdBi2 using very-low-temperature STM. Characteristic quasiparticle interference patterns strongly support the existence of spin-polarized surface states at the Fermi level in the normal state. A fully opened SC gap, well described by the conventional BCS model, is observed, indicating that the SC gap opens at the spin-polarized Fermi surfaces. Considering a possible mixing of odd- and even-parity orbital functions in the C4v group symmetry, lowered from D4h near the surface, we suggest that the SC gap consists of a mixture of s- and p-wave SC gap functions in the two-dimensional state.

  17. Thermodynamics of concentrated electrolyte mixtures and the prediction of mineral solubilities to high temperatures for mixtures in the system Na-K-Mg-Cl-SO4-OH-H2O

    NASA Astrophysics Data System (ADS)

    Pabalan, Roberto T.; Pitzer, Kenneth S.

    1987-09-01

    Mineral solubilities in binary and ternary electrolyte mixtures in the system Na-K-Mg-Cl-SO4-OH-H2O are calculated to high temperatures using available thermodynamic data for solids and for aqueous electrolyte solutions. Activity and osmotic coefficients are derived from the ion-interaction model of Pitzer (1973, 1979) and co-workers, the parameters of which are evaluated from experimentally determined solution properties or from solubility data in binary and ternary mixtures. Excellent to good agreement with experimental solubilities for binary and ternary mixtures indicates that the model can be successfully used to predict mineral-solution equilibria to high temperatures. Although there are currently no theoretical forms for the temperature dependencies of the various model parameters, the solubility data in ternary mixtures can be adequately represented by constant values of the mixing term θij and values of ψijk that are either constant or have a simple temperature dependence. Since no additional parameters are needed to describe the thermodynamic properties of more complex electrolyte mixtures, the calculations can be extended to equilibrium studies relevant to natural systems. Examples of predicted solubilities are given for the quaternary system NaCl-KCl-MgCl2-H2O.

  18. Guidelines for determining the capacity of d-regions with premature concrete deterioration of ASR/DEF.

    DOT National Transportation Integrated Search

    2012-11-01

    When a bridge engineer encounters a design or analysis problem concerning a bridge substructure, that structure will commonly have a mixture of member types, some slender, and some squat. Slender members are generally governed by flexure, and normal ...

  19. Variable Screening for Cluster Analysis.

    ERIC Educational Resources Information Center

    Donoghue, John R.

    Inclusion of irrelevant variables in a cluster analysis adversely affects subgroup recovery. This paper examines using moment-based statistics to screen variables; only variables that pass the screening are then used in clustering. Normal mixtures are analytically shown often to possess negative kurtosis. Two related measures, "m" and…

  20. Lattice Boltzmann scheme for mixture modeling: analysis of the continuum diffusion regimes recovering Maxwell-Stefan model and incompressible Navier-Stokes equations.

    PubMed

    Asinari, Pietro

    2009-11-01

    A finite-difference lattice Boltzmann scheme for homogeneous mixture modeling, which recovers the Maxwell-Stefan diffusion model in the continuum limit without the restriction of the mixture-averaged diffusion approximation, was recently proposed [P. Asinari, Phys. Rev. E 77, 056706 (2008)]. The theoretical basis is the Bhatnagar-Gross-Krook-type kinetic model for gas mixtures [P. Andries, K. Aoki, and B. Perthame, J. Stat. Phys. 106, 993 (2002)]. In the present paper, the recovered macroscopic equations in the continuum limit are systematically investigated by varying the ratio between the characteristic diffusion speed and the characteristic barycentric speed. It turns out that the diffusion speed must be at least one order of magnitude (in terms of Knudsen number) smaller than the barycentric speed in order to recover the Navier-Stokes equations for mixtures in the incompressible limit. Some further numerical tests are also reported. In particular, (1) the solvent and dilute test cases are considered, because they are limiting cases in which the Maxwell-Stefan model reduces automatically to the Fickian case. Moreover, (2) some tests based on the Stefan diffusion tube are reported to demonstrate the capabilities of the proposed scheme in solving Maxwell-Stefan diffusion problems. The proposed scheme agrees well with the expected theoretical results.
