Sample records for mixture modeling identified

  1. Identifiability in N-mixture models: a large-scale screening test with bird data.

    PubMed

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
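
    To make the binomial N-mixture structure concrete, the sketch below maximizes the Poisson N-mixture likelihood for repeated counts at a set of sites with scipy on simulated data. It is a generic illustration only (no site covariates, no ZIP or NB mixture, and not Kéry's code); the parameter values and the latent-abundance upper bound K are assumptions.

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

def nmix_negloglik(params, counts, K=100):
    """Negative log-likelihood of a Poisson binomial N-mixture model.
    counts: R x T array of repeated counts; K: upper bound for latent abundance N."""
    lam = np.exp(params[0])                    # expected abundance (log link)
    p = 1.0 / (1.0 + np.exp(-params[1]))       # detection probability (logit link)
    Ns = np.arange(K + 1)
    prior = poisson.pmf(Ns, lam)               # P(N = n)
    ll = 0.0
    for y in counts:                           # loop over sites
        # P(y | N = n) for each candidate abundance n, product over repeat visits
        lik_given_N = np.prod(binom.pmf(y[:, None], Ns[None, :], p), axis=0)
        ll += np.log(np.sum(prior * lik_given_N) + 1e-300)
    return -ll

# toy data: 50 sites, 3 visits, true lambda = 4, true p = 0.4
rng = np.random.default_rng(1)
N_true = rng.poisson(4, size=50)
counts = rng.binomial(N_true[:, None], 0.4, size=(50, 3))
fit = minimize(nmix_negloglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
print("lambda_hat =", np.exp(fit.x[0]), "p_hat =", 1 / (1 + np.exp(-fit.x[1])))
```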

  2. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    NASA Astrophysics Data System (ADS)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah; Kuswanto, Heri

    2017-12-01

    Bayesian mixture modeling requires identifying the most appropriate number of mixture components so that the resulting mixture model fits the data in a data-driven way. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with the Markov Chain Monte Carlo (MCMC) concept and has been used by several researchers to solve the problem of identifying the number of mixture components when it is not known with certainty. In its application, RJMCMC uses birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm must be developed according to the case under study. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the unknown number of mixture components in Bayesian mixture modeling for microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to correctly identify the number of mixture components in a Bayesian normal mixture model for Indonesian microarray data in which the number of components is not known in advance.
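
    The trans-dimensional birth/death and split-merge moves are too involved for a short example, but the within-model updates listed above (w updating, θ updating, and z updating) can be illustrated with a plain Gibbs sampler for a fixed-k normal mixture with known component variance. This is only a sketch of the building blocks that RJMCMC extends, not the authors' algorithm, and all priors and settings are assumptions.

```python
import numpy as np

def gibbs_normal_mixture(x, k=3, iters=2000, sigma2=1.0, mu0=0.0, tau2=100.0,
                         alpha=1.0, seed=0):
    """Gibbs sampler for a k-component normal mixture with known component variance.
    Illustrates only the w (weights), theta (means) and z (allocations) updates."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)                          # initial component means
    w = np.full(k, 1.0 / k)                        # initial mixture weights
    for _ in range(iters):
        # z updating: sample allocations from their full conditionals
        logp = np.log(w) - 0.5 * (x[:, None] - mu[None, :]) ** 2 / sigma2
        prob = np.exp(logp - logp.max(axis=1, keepdims=True))
        prob /= prob.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(k, p=pi) for pi in prob])
        # w updating: Dirichlet full conditional
        counts = np.bincount(z, minlength=k)
        w = rng.dirichlet(alpha + counts)
        # theta updating: conjugate normal full conditional for each mean
        for j in range(k):
            xj = x[z == j]
            prec = len(xj) / sigma2 + 1.0 / tau2
            mean = (xj.sum() / sigma2 + mu0 / tau2) / prec
            mu[j] = rng.normal(mean, np.sqrt(1.0 / prec))
    return w, mu

# toy data drawn from a two-component mixture, fitted with k = 3
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)])
print(gibbs_normal_mixture(x, k=3))
```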

  3. Mixed-up trees: the structure of phylogenetic mixtures.

    PubMed

    Matsen, Frederick A; Mossel, Elchanan; Steel, Mike

    2008-05-01

    In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how "common" non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning "mixed branch repulsion" on trees larger than quartet trees under the CFN model.

  4. Fitting a Mixture Item Response Theory Model to Personality Questionnaire Data: Characterizing Latent Classes and Investigating Possibilities for Improving Prediction

    ERIC Educational Resources Information Center

    Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk

    2008-01-01

    Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…

  5. Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors

    ERIC Educational Resources Information Center

    Guerra-Peña, Kiero; Steinley, Douglas

    2016-01-01

    Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…

  6. New theoretical framework for designing nonionic surfactant mixtures that exhibit a desired adsorption kinetics behavior.

    PubMed

    Moorkanikkara, Srinivas Nageswaran; Blankschtein, Daniel

    2010-12-21

    How does one design a surfactant mixture using a set of available surfactants such that it exhibits a desired adsorption kinetics behavior? The traditional approach used to address this design problem involves conducting trial-and-error experiments with specific surfactant mixtures. This approach is typically time-consuming and resource-intensive and becomes increasingly challenging when the number of surfactants that can be mixed increases. In this article, we propose a new theoretical framework to identify a surfactant mixture that most closely meets a desired adsorption kinetics behavior. Specifically, the new theoretical framework involves (a) formulating the surfactant mixture design problem as an optimization problem using an adsorption kinetics model and (b) solving the optimization problem using a commercial optimization package. The proposed framework aims to identify the surfactant mixture that most closely satisfies the desired adsorption kinetics behavior subject to the predictive capabilities of the chosen adsorption kinetics model. Experiments can then be conducted at the identified surfactant mixture condition to validate the predictions. We demonstrate the reliability and effectiveness of the proposed theoretical framework through a realistic case study by identifying a nonionic surfactant mixture consisting of up to four alkyl poly(ethylene oxide) surfactants (C(10)E(4), C(12)E(5), C(12)E(6), and C(10)E(8)) such that it most closely exhibits a desired dynamic surface tension (DST) profile. Specifically, we use the Mulqueen-Stebe-Blankschtein (MSB) adsorption kinetics model (Mulqueen, M.; Stebe, K. J.; Blankschtein, D. Langmuir 2001, 17, 5196-5207) to formulate the optimization problem as well as the SNOPT commercial optimization solver to identify a surfactant mixture consisting of these four surfactants that most closely exhibits the desired DST profile. Finally, we compare the experimental DST profile measured at the surfactant mixture condition identified by the new theoretical framework with the desired DST profile and find good agreement between the two profiles.

  7. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early control of heart disease can be achieved through efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is addressed here under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, as reflected by its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model.
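
    As a rough illustration of the standard (non-concomitant) two-component Poisson mixture regression described above, the sketch below alternates EM responsibility updates with weighted Poisson-regression M-steps. It is a generic sketch on simulated data, not the authors' implementation, and the helper functions and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def weighted_poisson_fit(X, y, w, beta0):
    """Weighted Poisson regression (log link) by direct likelihood maximization."""
    def nll(beta):
        eta = X @ beta
        return -np.sum(w * (y * eta - np.exp(eta) - gammaln(y + 1)))
    return minimize(nll, beta0, method="BFGS").x

def poisson_mixture_em(X, y, n_iter=50):
    """EM algorithm for a two-component Poisson mixture regression."""
    betas = [np.zeros(X.shape[1]), np.zeros(X.shape[1]) + 0.5]
    pi = 0.5
    for _ in range(n_iter):
        # E-step: responsibilities of component 1
        def logpmf(beta):
            eta = X @ beta
            return y * eta - np.exp(eta) - gammaln(y + 1)
        l1, l2 = logpmf(betas[0]), logpmf(betas[1])
        num = pi * np.exp(l1)
        resp = num / (num + (1 - pi) * np.exp(l2) + 1e-300)
        # M-step: mixing proportion and weighted Poisson regressions
        pi = resp.mean()
        betas[0] = weighted_poisson_fit(X, y, resp, betas[0])
        betas[1] = weighted_poisson_fit(X, y, 1 - resp, betas[1])
    return pi, betas

# simulated data: two latent groups with different covariate effects
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
group = rng.random(n) < 0.4
beta_true = np.where(group[:, None], [0.2, 1.0], [1.5, -0.5])
y = rng.poisson(np.exp(np.sum(X * beta_true, axis=1)))
print(poisson_mixture_em(X, y))
```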

  8. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early control of heart disease can be achieved through efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is addressed here under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, as reflected by its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611

  9. Methods and Measures: Growth Mixture Modeling--A Method for Identifying Differences in Longitudinal Change among Unobserved Groups

    ERIC Educational Resources Information Center

    Ram, Nilam; Grimm, Kevin J.

    2009-01-01

    Growth mixture modeling (GMM) is a method for identifying multiple unobserved sub-populations, describing longitudinal change within each unobserved sub-population, and examining differences in change among unobserved sub-populations. We provide a practical primer that may be useful for researchers beginning to incorporate GMM analysis into their…

  10. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, while the Bayesian method is the statistical method used to fit the mixture model. Bayesian methods are widely used because their asymptotic properties provide remarkable results; they also show consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components of the mixture model is chosen using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.
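
    A generic sketch of the component-selection step described above: fit mixtures with increasing numbers of components and keep the one with the lowest Bayesian Information Criterion. The example uses scikit-learn's GaussianMixture on simulated data rather than reproducing the paper's Bayesian fit to financial time series.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# simulated data with three underlying components
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(-3, 1, 300),
                    rng.normal(0, 0.5, 200),
                    rng.normal(4, 1.5, 250)]).reshape(-1, 1)

# fit mixtures with k = 1..6 components and record the BIC of each
bics = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(x)
    bics[k] = gm.bic(x)

best_k = min(bics, key=bics.get)
print("BIC by k:", {k: round(v, 1) for k, v in bics.items()})
print("selected number of components:", best_k)
```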

  11. Modeling and analysis of personal exposures to VOC mixtures using copulas

    PubMed Central

    Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart

    2014-01-01

    Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs) with aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction, which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often a single compound dominated the mixture; however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration. Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10−3 for about 10% of RIOPA participants. Factors affecting the likelihood of high concentration mixtures included city, participant ethnicity, and house air exchange rates. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy, and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that poorly fit lognormal distributions and that represent the greatest risks. PMID:24333991
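
    The Gumbel and t copulas favored in this study require a dedicated copula package, but the simplest of the five candidates, the Gaussian copula, can be fitted with scipy alone via normal scores of the ranks. The sketch below shows the fit-then-simulate workflow on simulated two-pollutant data (not RIOPA measurements); the marginal distributions and correlation are invented for illustration.

```python
import numpy as np
from scipy import stats

def fit_gaussian_copula(data):
    """Estimate the Gaussian-copula correlation matrix from normal scores of the ranks."""
    n, d = data.shape
    u = stats.rankdata(data, axis=0) / (n + 1)      # pseudo-observations in (0, 1)
    z = stats.norm.ppf(u)                           # normal scores
    return np.corrcoef(z, rowvar=False)

def simulate_gaussian_copula(corr, marginals, n, seed=0):
    """Draw n samples whose dependence follows the Gaussian copula and whose
    marginals are the given frozen scipy distributions."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(marginals)), corr, size=n)
    u = stats.norm.cdf(z)
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

# toy example: two correlated lognormal "exposure" variables
rng = np.random.default_rng(1)
latent = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=1000)
data = np.exp(latent * [0.5, 0.8] + [1.0, 0.2])
corr = fit_gaussian_copula(data)
sims = simulate_gaussian_copula(corr, [stats.lognorm(0.5, scale=np.exp(1.0)),
                                       stats.lognorm(0.8, scale=np.exp(0.2))], n=1000)
print("estimated copula correlation:\n", corr)
```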

  12. Mixture of autoregressive modeling orders and its implication on single trial EEG classification

    PubMed Central

    Atyabi, Adham; Shic, Frederick; Naples, Adam

    2016-01-01

    Autoregressive (AR) models are among the most commonly utilized feature types in electroencephalogram (EEG) studies because they offer better resolution, smoother spectra, and applicability to short segments of data. Identifying the correct AR modeling order is an open challenge. Lower model orders poorly represent the signal while higher orders increase noise. Conventional methods for estimating the modeling order include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain-Computer Interface (BCI) systems by making such systems respond to the operator's thoughts more promptly and accurately. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional methods utilized by the community, including (1) a well-known set of commonly used orders suggested by the literature, (2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and (3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based order mixture and evolutionary-based order fusion methods across all datasets. PMID:28740331
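
    As a rough sketch of the conventional baselines mentioned above (AIC/BIC order selection) and of a blind, equal-weight mixture of orders, the snippet below fits several AR orders to one simulated channel with statsmodels and averages their in-sample one-step-ahead predictions. It is illustrative only and does not reproduce the paper's evolutionary fusion or ensemble mixture methods; the signal parameters are assumptions.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# simulated EEG-like signal: an AR(6) process plus observation noise
rng = np.random.default_rng(0)
n = 1000
x = np.zeros(n)
coefs = [0.5, -0.3, 0.2, 0.1, -0.15, 0.1]
for t in range(6, n):
    x[t] = np.dot(coefs, x[t - 6:t][::-1]) + rng.normal(scale=1.0)
x += rng.normal(scale=0.2, size=n)

orders = range(2, 13)
fits = {p: AutoReg(x, lags=p).fit() for p in orders}

# conventional single-order selection
aic_order = min(orders, key=lambda p: fits[p].aic)
bic_order = min(orders, key=lambda p: fits[p].bic)
print("AIC picks order", aic_order, "| BIC picks order", bic_order)

# blind mixture of orders: equal-weight average of one-step-ahead in-sample predictions
start = max(orders)
preds = np.column_stack([fits[p].predict(start=start, end=n - 1) for p in orders])
mixture_pred = preds.mean(axis=1)
rmse = np.sqrt(np.mean((mixture_pred - x[start:]) ** 2))
print("RMSE of equal-weight mixture of AR orders:", round(rmse, 3))
```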

  13. Adapting cultural mixture modeling for continuous measures of knowledge and memory fluency.

    PubMed

    Tan, Yin-Yin Sarah; Mueller, Shane T

    2016-09-01

    Previous research (e.g., cultural consensus theory (Romney, Weller, & Batchelder, American Anthropologist, 88, 313-338, 1986); cultural mixture modeling (Mueller & Veinott, 2008)) has used overt response patterns (i.e., responses to questionnaires and surveys) to identify whether a group shares a single coherent attitude or belief set. Yet many domains in social science have focused on implicit attitudes that are not apparent in overt responses but still may be detected via response time patterns. We propose a method for modeling response times as a mixture of Gaussians, adapting the strong-consensus model of cultural mixture modeling to model this implicit measure of knowledge strength. We report the results of two behavioral experiments and one simulation experiment that establish the usefulness of the approach, as well as some of the boundary conditions under which distinct groups of shared agreement might be recovered, even when the group identity is not known. The results reveal that the ability to recover and identify shared-belief groups depends on (1) the level of noise in the measurement, (2) the differential signals for strong versus weak attitudes, and (3) the similarity between group attitudes. Consequently, the method shows promise for identifying latent groups among a population whose overt attitudes do not differ, but whose implicit or covert attitudes or knowledge may differ.

  14. Concentration Addition, Independent Action and Generalized Concentration Addition Models for Mixture Effect Prediction of Sex Hormone Synthesis In Vitro

    PubMed Central

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie

    2013-01-01

    Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906
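
    For readers unfamiliar with the two classical models, the sketch below computes concentration addition (CA) and independent action (IA) predictions for a binary mixture whose single-chemical dose-responses follow full-efficacy Hill curves; the parameter values are invented for illustration, and the GCA extension for partial agonists is not shown.

```python
import numpy as np
from scipy.optimize import brentq

def hill_effect(c, ec50, h):
    """Fractional effect (0-1) of a single chemical: full-efficacy Hill curve."""
    return c**h / (c**h + ec50**h)

def hill_inverse(e, ec50, h):
    """Concentration producing fractional effect e for a single chemical."""
    return ec50 * (e / (1.0 - e)) ** (1.0 / h)

def ia_prediction(concs, ec50s, hills):
    """Independent action: combined effect from probabilistic independence."""
    effects = [hill_effect(c, ec50, h) for c, ec50, h in zip(concs, ec50s, hills)]
    return 1.0 - np.prod([1.0 - e for e in effects])

def ca_prediction(concs, ec50s, hills):
    """Concentration addition: solve sum_i c_i / EC(e)_i = 1 for the mixture effect e."""
    def toxic_units(e):
        return sum(c / hill_inverse(e, ec50, h)
                   for c, ec50, h in zip(concs, ec50s, hills)) - 1.0
    return brentq(toxic_units, 1e-9, 1 - 1e-9)

# hypothetical binary mixture (all parameters invented for illustration)
ec50s, hills = [2.0, 10.0], [1.5, 1.0]
mix = [1.0, 5.0]  # concentrations of chemicals A and B in the mixture
print("IA prediction:", round(ia_prediction(mix, ec50s, hills), 3))
print("CA prediction:", round(ca_prediction(mix, ec50s, hills), 3))
```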

  15. A Twin Factor Mixture Modeling Approach to Childhood Temperament: Differential Heritability

    ERIC Educational Resources Information Center

    Scott, Brandon G.; Lemery-Chalfant, Kathryn; Clifford, Sierra; Tein, Jenn-Yun; Stoll, Ryan; Goldsmith, H.Hill

    2016-01-01

    Twin factor mixture modeling was used to identify temperament profiles while simultaneously estimating a latent factor model for each profile with a sample of 787 twin pairs (M[subscript age] = 7.4 years, SD = 0.84; 49% female; 88.3% Caucasian), using mother- and father-reported temperament. A four-profile, one-factor model fit the data well.…

  16. Detecting Social Desirability Bias Using Factor Mixture Models

    ERIC Educational Resources Information Center

    Leite, Walter L.; Cooper, Lou Ann

    2010-01-01

    Based on the conceptualization that social desirability bias (SDB) is a discrete event resulting from an interaction between a scale's items, the testing situation, and the respondent's latent trait on a social desirability factor, we present a method that makes use of factor mixture models to identify which examinees are most likely to provide…

  17. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

    ERIC Educational Resources Information Center

    Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

    2012-01-01

    Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…

  18. Mixture models in diagnostic meta-analyses--clustering summary receiver operating characteristic curves accounted for heterogeneity and correlation.

    PubMed

    Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario

    2015-01-01

    Bivariate linear and generalized linear random effects are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities are quite different as such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to model between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Investigation of Dalton and Amagat's laws for gas mixtures with shock propagation

    NASA Astrophysics Data System (ADS)

    Wayne, Patrick; Trueba Monje, Ignacio; Yoo, Jason H.; Truman, C. Randall; Vorobieff, Peter

    2016-11-01

    Two common models describing gas mixtures are Dalton's Law and Amagat's Law (also known as the laws of partial pressures and partial volumes, respectively). Our work is focused on determining the suitability of these models for predicting the effects of shock propagation through gas mixtures. Experiments are conducted at the Shock Tube Facility at the University of New Mexico (UNM). To validate experimental data, possible sources of uncertainty associated with the experimental setup are identified and analyzed. The gaseous mixture of interest consists of a prescribed combination of disparate gases - helium and sulfur hexafluoride (SF6). The equations of state (EOS) considered are the ideal gas EOS for helium and a virial EOS for SF6. The values for the properties provided by these EOS are then used to model shock propagation through the mixture in accordance with Dalton's and Amagat's laws. Results of the modeling are compared with experiment to determine which law produces better agreement for the mixture. This work is funded by NNSA Grant DE-NA0002913.
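
    As a minimal numerical illustration of the two mixing laws compared in this work, the snippet below treats both components as ideal gases (rather than using the virial EOS for SF6) and computes the mixture pressure under Dalton's law and the mixture volume under Amagat's law; the amounts, temperature, volume, and pressure are hypothetical.

```python
R = 8.314  # J / (mol K)

# hypothetical helium / SF6 mixture, ideal-gas treatment for both components
n_he, n_sf6 = 0.7, 0.3          # moles of each component
T = 300.0                        # temperature, K
V = 0.01                         # total volume, m^3 (for Dalton's law)
P = 101325.0                     # total pressure, Pa (for Amagat's law)

# Dalton's law: each gas fills the whole volume; partial pressures add
p_he = n_he * R * T / V
p_sf6 = n_sf6 * R * T / V
print("Dalton mixture pressure:", p_he + p_sf6, "Pa")

# Amagat's law: each gas is at the total pressure; partial volumes add
v_he = n_he * R * T / P
v_sf6 = n_sf6 * R * T / P
print("Amagat mixture volume:", v_he + v_sf6, "m^3")
```

    For ideal gases the two laws give consistent results; they diverge only once a non-ideal equation of state, such as the virial EOS used here for SF6, enters the calculation.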

  20. Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.

    PubMed

    Shireman, Emilie; Steinley, Douglas; Brusco, Michael J

    2017-02-01

    Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
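
    A quick way to see the initialization sensitivity discussed above is to fit the same Gaussian mixture repeatedly with k-means-based and random starts and compare the log-likelihoods of the resulting solutions. The sketch below uses scikit-learn (one of several packages offering such start-value strategies) on simulated data; it is an illustration, not a reproduction of the study's comparison.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# simulated data with four overlapping components
X, _ = make_blobs(n_samples=600, centers=4, cluster_std=[1.0, 2.5, 0.5, 1.5],
                  random_state=7)

for init in ("kmeans", "random"):
    loglik = []
    for seed in range(20):                       # 20 single-start fits per strategy
        gm = GaussianMixture(n_components=4, init_params=init, n_init=1,
                             random_state=seed, max_iter=500).fit(X)
        loglik.append(gm.score(X) * len(X))      # total log-likelihood of the solution
    loglik = np.array(loglik)
    print(f"{init:>7}: best={loglik.max():.1f}  worst={loglik.min():.1f}  "
          f"distinct local optima found={len(np.unique(loglik.round(1)))}")
```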

  1. Estimating abundance in the presence of species uncertainty

    USGS Publications Warehouse

    Chambert, Thierry A.; Hossack, Blake R.; Fishback, LeeAnn; Davenport, Jon M.

    2016-01-01

    1. N-mixture models have become a popular method for estimating abundance of free-ranging animals that are not marked or identified individually. These models have been used on count data for single species that can be identified with certainty. However, co-occurring species often look similar during one or more life stages, making it difficult to assign species for all recorded captures. This uncertainty creates problems for estimating species-specific abundance and it can often limit life stages to which we can make inference. 2. We present a new extension of N-mixture models that accounts for species uncertainty. In addition to estimating site-specific abundances and detection probabilities, this model allows estimating the probability of correct assignment of species identity. We implement this hierarchical model in a Bayesian framework and provide all code for running the model in BUGS-language programs. 3. We present an application of the model on count data from two sympatric freshwater fishes, the brook stickleback (Culaea inconstans) and the ninespine stickleback (Pungitius pungitius), and illustrate implementation of covariate effects (habitat characteristics). In addition, we used a simulation study to validate the model and illustrate potential sample size issues. We also compared, for both real and simulated data, estimates provided by our model to those obtained by a simple N-mixture model when captures of unknown species identification were discarded. In the latter case, abundance estimates appeared highly biased and very imprecise, while our new model provided unbiased estimates with higher precision. 4. This extension of the N-mixture model should be useful for a wide variety of studies and taxa, as species uncertainty is a common issue. It should notably help improve investigation of abundance and vital rate characteristics of organisms’ early life stages, which are sometimes more difficult to identify than adults.

  2. Analyzing gene expression time-courses based on multi-resolution shape mixture model.

    PubMed

    Li, Ying; He, Ye; Zhang, Yu

    2016-11-01

    Biological processes are dynamic molecular processes that unfold over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biological processes and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which explores patterns of change in gene expression over time at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm on yeast time-course gene expression profiles in comparison with several popular clustering methods for gene expression profiles. The gene groups identified by the different methods are evaluated by enrichment analysis of biological pathways and known protein-protein interactions from experimental evidence. The gene groups identified by our proposed algorithm have stronger biological significance. A novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed. Our proposed model provides new horizons and an alternative tool for visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. A framework for the use of single-chemical transcriptomics data in predicting the hazards associated with complex mixtures of polycyclic aromatic hydrocarbons.

    PubMed

    Labib, Sarah; Williams, Andrew; Kuo, Byron; Yauk, Carole L; White, Paul A; Halappanavar, Sabina

    2017-07-01

    The assumption of additivity applied in the risk assessment of environmental mixtures containing carcinogenic polycyclic aromatic hydrocarbons (PAHs) was investigated using transcriptomics. Muta™Mouse were gavaged for 28 days with three doses of eight individual PAHs, two defined mixtures of PAHs, or coal tar, an environmentally ubiquitous complex mixture of PAHs. Microarrays were used to identify differentially expressed genes (DEGs) in lung tissue collected 3 days post-exposure. Cancer-related pathways perturbed by the individual or mixtures of PAHs were identified, and dose-response modeling of the DEGs was conducted to calculate gene/pathway benchmark doses (BMDs). Individual PAH-induced pathway perturbations (the median gene expression changes for all genes in a pathway relative to controls) and pathway BMDs were applied to models of additivity [i.e., concentration addition (CA), generalized concentration addition (GCA), and independent action (IA)] to generate predicted pathway-specific dose-response curves for each PAH mixture. The predicted and observed pathway dose-response curves were compared to assess the sensitivity of different additivity models. Transcriptomics-based additivity calculation showed that IA accurately predicted the pathway perturbations induced by all mixtures of PAHs. CA did not support the additivity assumption for the defined mixtures; however, GCA improved the CA predictions. Moreover, pathway BMDs derived for coal tar were comparable to BMDs derived from previously published coal tar-induced mouse lung tumor incidence data. These results suggest that in the absence of tumor incidence data, individual chemical-induced transcriptomics changes associated with cancer can be used to investigate the assumption of additivity and to predict the carcinogenic potential of a mixture.

  4. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.

    PubMed

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-03-13

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.

  5. Mixture Rasch model for guessing group identification

    NASA Astrophysics Data System (ADS)

    Siow, Hoo Leong; Mahdi, Rasidah; Siew, Eng Ling

    2013-04-01

    Several alternative dichotomous Item Response Theory (IRT) models have been introduced to account for the guessing effect in multiple-choice assessment. The guessing effect in these models has been considered to be item-related. In the most classic case, pseudo-guessing in the three-parameter logistic IRT model is modeled to be the same for all the subjects but may vary across items. This is not realistic because subjects can guess worse or better than the pseudo-guessing. A derivation from the three-parameter logistic IRT model improves the situation by incorporating ability in guessing. However, it does not model non-monotone functions. This paper proposes to study guessing from a subject-related aspect, namely guessing as a test-taking behavior. A mixture Rasch model is employed to detect latent groups. A hybrid of the mixture Rasch and three-parameter logistic IRT models is proposed to model behavior-based guessing from the subjects' ways of responding to the items. The subjects are assumed to simply choose a response at random. An information criterion is proposed to identify the behavior-based guessing group. Results show that the proposed model selection criterion provides a promising method to identify the guessing group modeled by the hybrid model.

  6. Ensemble Learning Method for Outlier Detection and its Application to Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Nun, Isadora; Protopapas, Pavlos; Sim, Brandon; Chen, Wesley

    2016-09-01

    Outlier detection is necessary for automated data analysis, with specific applications spanning almost every domain from financial markets to epidemiology to fraud detection. We introduce a novel mixture-of-experts outlier detection model, which uses a dynamically trained, weighted network of five distinct outlier detection methods. After dimensionality reduction, individual outlier detection methods score each data point for “outlierness” in this new feature space. Our model then uses dynamically trained parameters to weigh the scores of each method, allowing for a finalized outlier score. We find that the mixture-of-experts model performs, on average, better than any single expert model in identifying both artificially and manually picked outliers. This mixture model is applied to a data set of astronomical light curves, after dimensionality reduction via time series feature extraction. Our model was tested using three fields from the MACHO catalog and generated a list of anomalous candidates. We confirm that the outliers detected using this method belong to rare classes, like Novae, He-burning, and red giant stars; other outlier light curves identified have no available information associated with them. To elucidate their nature, we created a website containing the light-curve data and information about these objects. Users can attempt to classify the light curves, give conjectures about their identities, and sign up for follow-up messages about the progress made on identifying these objects. These user-submitted data can be used to further train our mixture-of-experts model. Our code is publicly available to all who are interested.
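
    The experts' weights in this model are trained dynamically; as a simplified, statically weighted stand-in, the sketch below standardizes the scores of three scikit-learn outlier detectors and averages them into a single outlier score. The choice of detectors, the equal weights, and the toy feature data are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.covariance import EllipticEnvelope

def ensemble_outlier_scores(X, weights=(1/3, 1/3, 1/3), seed=0):
    """Combine three outlier detectors into one score (higher = more anomalous)."""
    iso = -IsolationForest(random_state=seed).fit(X).score_samples(X)
    lof = -LocalOutlierFactor(n_neighbors=20).fit(X).negative_outlier_factor_
    mah = -EllipticEnvelope(random_state=seed).fit(X).score_samples(X)
    scores = np.column_stack([iso, lof, mah])
    # standardize each expert's score before the weighted combination
    scores = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    return scores @ np.asarray(weights)

# toy "light-curve feature" data with a few injected outliers
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, size=(500, 4)), rng.normal(6, 1, size=(5, 4))])
combined = ensemble_outlier_scores(X)
print("indices of the 5 highest combined scores:", np.argsort(combined)[-5:])
```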

  7. PLEMT: A NOVEL PSEUDOLIKELIHOOD BASED EM TEST FOR HOMOGENEITY IN GENERALIZED EXPONENTIAL TILT MIXTURE MODELS.

    PubMed

    Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J

    2017-01-01

    Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, the test with simple asymptotic distribution has computational advantages compared with permutation-based test for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood based expectation-maximization test, and show the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.

  8. MixGF: spectral probabilities for mixture spectra from more than one peptide.

    PubMed

    Wang, Jian; Bourne, Philip E; Bandeira, Nuno

    2014-12-01

    In large-scale proteomic experiments, multiple peptide precursors are often cofragmented simultaneously in the same mixture tandem mass (MS/MS) spectrum. These spectra tend to elude current computational tools because of the ubiquitous assumption that each spectrum is generated from only one peptide. Therefore, tools that consider multiple peptide matches to each MS/MS spectrum can potentially improve the relatively low spectrum identification rate often observed in proteomics experiments. More importantly, data independent acquisition protocols promoting the cofragmentation of multiple precursors are emerging as alternative methods that can greatly improve the throughput of peptide identifications but their success also depends on the availability of algorithms to identify multiple peptides from each MS/MS spectrum. Here we address a fundamental question in the identification of mixture MS/MS spectra: determining the statistical significance of multiple peptides matched to a given MS/MS spectrum. We propose the MixGF generating function model to rigorously compute the statistical significance of peptide identifications for mixture spectra and show that this approach improves the sensitivity of current mixture spectra database search tools by ≈30-390%. Analysis of multiple data sets with MixGF reveals that in complex biological samples the number of identified mixture spectra can be as high as 20% of all the identified spectra and the number of unique peptides identified only in mixture spectra can be up to 35.4% of those identified in single-peptide spectra. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.

  9. MixGF: Spectral Probabilities for Mixture Spectra from more than One Peptide*

    PubMed Central

    Wang, Jian; Bourne, Philip E.; Bandeira, Nuno

    2014-01-01

    In large-scale proteomic experiments, multiple peptide precursors are often cofragmented simultaneously in the same mixture tandem mass (MS/MS) spectrum. These spectra tend to elude current computational tools because of the ubiquitous assumption that each spectrum is generated from only one peptide. Therefore, tools that consider multiple peptide matches to each MS/MS spectrum can potentially improve the relatively low spectrum identification rate often observed in proteomics experiments. More importantly, data independent acquisition protocols promoting the cofragmentation of multiple precursors are emerging as alternative methods that can greatly improve the throughput of peptide identifications but their success also depends on the availability of algorithms to identify multiple peptides from each MS/MS spectrum. Here we address a fundamental question in the identification of mixture MS/MS spectra: determining the statistical significance of multiple peptides matched to a given MS/MS spectrum. We propose the MixGF generating function model to rigorously compute the statistical significance of peptide identifications for mixture spectra and show that this approach improves the sensitivity of current mixture spectra database search tools by ≈30–390%. Analysis of multiple data sets with MixGF reveals that in complex biological samples the number of identified mixture spectra can be as high as 20% of all the identified spectra and the number of unique peptides identified only in mixture spectra can be up to 35.4% of those identified in single-peptide spectra. PMID:25225354

  10. Mixture experiment methods in the development and optimization of microemulsion formulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furlanetto, Sandra; Cirri, Marzia; Piepel, Gregory F.

    2011-06-25

    Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, allowing for improving their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly-water-soluble hypoglycaemic agent) as a model drug. First, the area of stable microemulsion (ME) formations was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil, and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfactant (a mixture of Tween 20 and Transcutol, 1:1 v/v), 5% oil (Labrafac Hydro) and 17% aqueous (water). The stable region of MEs was identified using mixture experiment methods for the first time.

  11. The utility of estimating population-level trajectories of terminal wellbeing decline within a growth mixture modelling framework.

    PubMed

    Burns, R A; Byles, J; Magliano, D J; Mitchell, P; Anstey, K J

    2015-03-01

    Mortality-related decline has been identified across multiple domains of human functioning, including mental health and wellbeing. The current study utilised a growth mixture modelling framework to establish whether a single population-level trajectory best describes mortality-related changes in both wellbeing and mental health, or whether subpopulations report quite different mortality-related changes. Participants were older-aged (M = 69.59 years; SD = 8.08 years) deceased females (N = 1,862) from the dynamic analyses to optimise ageing (DYNOPTA) project. Growth mixture models analysed participants' responses on measures of mental health and wellbeing for up to 16 years from death. Multi-level models confirmed overall terminal decline and terminal drop in both mental health and wellbeing. However, modelling data from the same participants within a latent class growth mixture framework indicated that most participants reported stability in mental health (90.3 %) and wellbeing (89.0 %) in the years preceding death. Whilst confirming other population-level analyses which support terminal decline and drop hypotheses in both mental health and wellbeing, we subsequently identified that most of this effect is driven by a small, but significant minority of the population. Instead, most individuals report stable levels of mental health and wellbeing in the years preceding death.

  12. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete

    PubMed Central

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-01-01

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of −1 to +1, eight axial mixtures were prepared at extreme values of −2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model. PMID:28787990

  13. Structure investigations on assembled astaxanthin molecules

    NASA Astrophysics Data System (ADS)

    Köpsel, Christian; Möltgen, Holger; Schuch, Horst; Auweter, Helmut; Kleinermanns, Karl; Martin, Hans-Dieter; Bettermann, Hans

    2005-08-01

    The carotenoid r,r-astaxanthin (3R,3′R-dihydroxy-4,4′-diketo-β-carotene) forms different types of aggregates in acetone-water mixtures. H-type aggregates were found in mixtures with a high proportion of water (e.g. 1:9 acetone-water mixture) whereas two different types of J-aggregates were identified in mixtures with a lower proportion of water (3:7 acetone-water mixture). These aggregates were characterized by recording UV/vis-absorption spectra, CD-spectra and fluorescence emissions. The sizes of the molecular assemblies were determined by dynamic light scattering experiments. The hydrodynamic diameter of the assemblies amounts to 40 nm in 1:9 acetone-water mixtures and exceeds 1 μm in 3:7 acetone-water mixtures. Scanning tunneling microscopy monitored astaxanthin aggregates on graphite surfaces. The structure of the H-aggregate was obtained by molecular modeling calculations. The structure was confirmed by calculating the electronic absorption spectrum and the CD-spectrum where the molecular modeling structure was used as input.

  14. Mixture experiment methods in the development and optimization of microemulsion formulations.

    PubMed

    Furlanetto, S; Cirri, M; Piepel, G; Mennini, N; Mura, P

    2011-06-25

    Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, allowing for improving their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly-water-soluble hypoglycaemic agent) as a model drug. First, the area of stable microemulsion (ME) formations was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfactant (a mixture of Tween 20 and Transcutol, 1:1, v/v), 5% oil (Labrafac Hydro) and 17% aqueous phase (water). The stable region of MEs was identified using mixture experiment methods for the first time. Copyright © 2011 Elsevier B.V. All rights reserved.
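
    As a generic illustration of how responses from a three-component mixture design like this one can be modeled, the sketch below fits a quadratic Scheffé polynomial (linear blending terms plus pairwise interaction terms, no intercept) to made-up transmittance data by least squares; the design points and responses are invented, not the published 13-run data.

```python
import numpy as np

def scheffe_quadratic(X):
    """Expand mixture proportions (rows sum to 1) into quadratic Scheffé terms:
    x1, x2, x3, x1*x2, x1*x3, x2*x3 (no intercept)."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# hypothetical 13-run design: proportions of aqueous, oil and surfactant/cosurfactant
design = np.array([
    [0.80, 0.05, 0.15], [0.60, 0.10, 0.30], [0.40, 0.05, 0.55],
    [0.20, 0.10, 0.70], [0.17, 0.05, 0.78], [0.50, 0.15, 0.35],
    [0.30, 0.20, 0.50], [0.65, 0.05, 0.30], [0.45, 0.10, 0.45],
    [0.25, 0.05, 0.70], [0.55, 0.05, 0.40], [0.35, 0.15, 0.50],
    [0.45, 0.05, 0.50],
])
transmittance = np.array([20, 45, 80, 95, 99, 60, 85, 35, 70, 97, 55, 82, 75.0])

# least-squares fit of the Scheffé model and prediction at a new blend
coef, *_ = np.linalg.lstsq(scheffe_quadratic(design), transmittance, rcond=None)
new_blend = np.array([[0.17, 0.05, 0.78]])
print("fitted coefficients:", np.round(coef, 1))
print("predicted %T at 17/5/78 blend:", round((scheffe_quadratic(new_blend) @ coef)[0], 1))
```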

  15. Identifying when tagged fishes have been consumed by piscivorous predators: application of multivariate mixture models to movement parameters of telemetered fishes

    USGS Publications Warehouse

    Romine, Jason G.; Perry, Russell W.; Johnston, Samuel V.; Fitzer, Christopher W.; Pagliughi, Stephen W.; Blake, Aaron R.

    2013-01-01

    Mixture models proved valuable as a means to differentiate between salmonid smolts and predators that consumed salmonid smolts. However, successful application of this method requires that telemetered fishes and their predators exhibit measurable differences in movement behavior. Our approach is flexible, allows inclusion of multiple track statistics and improves upon rule-based manual classification methods.

  16. Automated deconvolution of structured mixtures from heterogeneous tumor genomic data

    PubMed Central

    Roman, Theodore; Xie, Lu

    2017-01-01

    With increasing appreciation for the extent and importance of intratumor heterogeneity, much attention in cancer research has focused on profiling heterogeneity on a single patient level. Although true single-cell genomic technologies are rapidly improving, they remain too noisy and costly at present for population-level studies. Bulk sequencing remains the standard for population-scale tumor genomics, creating a need for computational tools to separate contributions of multiple tumor clones and assorted stromal and infiltrating cell populations to pooled genomic data. All such methods are limited to coarse approximations of only a few cell subpopulations, however. In prior work, we demonstrated the feasibility of improving cell type deconvolution by taking advantage of substructure in genomic mixtures via a strategy called simplicial complex unmixing. We improve on past work by introducing enhancements to automate learning of substructured genomic mixtures, with specific emphasis on genome-wide copy number variation (CNV) data, as well as the ability to process quantitative RNA expression data, and heterogeneous combinations of RNA and CNV data. We introduce methods for dimensionality estimation to better decompose mixture model substructure; fuzzy clustering to better identify substructure in sparse, noisy data; and automated model inference methods for other key model parameters. We further demonstrate their effectiveness in identifying mixture substructure in true breast cancer CNV data from the Cancer Genome Atlas (TCGA). Source code is available at https://github.com/tedroman/WSCUnmix PMID:29059177

  17. A Two-Locus Model of the Evolution of Insecticide Resistance to Inform and Optimise Public Health Insecticide Deployment Strategies

    PubMed Central

    2017-01-01

    We develop a flexible, two-locus model for the spread of insecticide resistance applicable to mosquito species that transmit human diseases such as malaria. The model allows differential exposure of males and females, allows them to encounter high or low concentrations of insecticide, and allows selection pressures and dominance values to differ depending on the concentration of insecticide encountered. We demonstrate its application by investigating the relative merits of sequential use of insecticides versus their deployment as a mixture to minimise the spread of resistance. We recover previously published results as subsets of this model and conduct a sensitivity analysis over an extensive parameter space to identify what circumstances favour mixtures over sequences. Both strategies lasted more than 500 mosquito generations (or about 40 years) in 24% of runs, while in those runs where resistance had spread to high levels by 500 generations, 56% favoured sequential use and 44% favoured mixtures. Mixtures are favoured when insecticide effectiveness (their ability to kill homozygous susceptible mosquitoes) is high and exposure (the proportion of mosquitoes that encounter the insecticide) is low. If insecticides do not reliably kill homozygous sensitive genotypes, it is likely that sequential deployment will be a more robust strategy. Resistance to an insecticide always spreads slower if that insecticide is used in a mixture although this may be insufficient to outperform sequential use: for example, a mixture may last 5 years while the two insecticides deployed individually may last 3 and 4 years giving an overall ‘lifespan’ of 7 years for sequential use. We emphasise that this paper is primarily about designing and implementing a flexible modelling strategy to investigate the spread of insecticide resistance in vector populations and demonstrate how our model can identify vector control strategies most likely to minimise the spread of insecticide resistance. PMID:28095406

  18. Regression mixture models: Does modeling the covariance between independent variables and latent classes improve the results?

    PubMed Central

    Lamont, Andrea E.; Vermunt, Jeroen K.; Van Horn, M. Lee

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we test the effects of violating an implicit assumption often made in these models, namely that the independent variables in the model are not directly related to the latent classes. Results indicated that the major risk of failing to model the relationship between predictor and latent class was an increase in the probability of selecting additional latent classes and biased class proportions. Additionally, this study tests whether regression mixture models can detect a piecewise relationship between a predictor and outcome. Results suggest that these models are able to detect piecewise relations, but only when the relationship between the latent class and the predictor is included in model estimation. We illustrate the implications of making this assumption through a re-analysis of applied data examining heterogeneity in the effects of family resources on academic achievement. We compare previous results (which assumed no relation between independent variables and latent class) to the model where this assumption is lifted. Implications and analytic suggestions for conducting regression mixture analyses based on these findings are noted. PMID:26881956
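
    The base model such studies build on can be sketched with a short EM routine for a two-class Gaussian regression mixture. This is a minimal illustration, not the authors' software, and it deliberately omits the extension the study evaluates (letting class membership depend on the predictor, for instance through a logistic model for the class indicator). All names and the toy data are hypothetical.

        import numpy as np

        def regression_mixture_em(x, y, n_iter=200, seed=0):
            """Two-component Gaussian regression mixture fitted by EM.
            Class membership here does NOT depend on x; adding that dependency
            is the modelling choice examined in the study above."""
            rng = np.random.default_rng(seed)
            X = np.column_stack([np.ones_like(x), x])
            beta = rng.normal(size=(2, 2))            # intercept and slope per class
            sigma = np.array([y.std(), y.std()])
            pi = np.array([0.5, 0.5])
            for _ in range(n_iter):
                # E-step: responsibilities (the Gaussian normalizing constant cancels)
                dens = np.stack([
                    pi[k] * np.exp(-0.5 * ((y - X @ beta[k]) / sigma[k]) ** 2) / sigma[k]
                    for k in range(2)], axis=1)
                r = dens / dens.sum(axis=1, keepdims=True)
                # M-step: weighted least squares and variance update per class
                for k in range(2):
                    w = r[:, k]
                    beta[k] = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
                    resid = y - X @ beta[k]
                    sigma[k] = np.sqrt((w * resid ** 2).sum() / w.sum())
                pi = r.mean(axis=0)
            return beta, sigma, pi

        # Toy data in which the x -> y effect differs between two latent classes
        rng = np.random.default_rng(1)
        x = rng.uniform(-2, 2, 500)
        z = rng.random(500) < 0.5
        y = np.where(z, 1.0 + 2.0 * x, 1.0 - 0.5 * x) + rng.normal(0, 0.3, 500)
        print(regression_mixture_em(x, y)[0])          # two distinct slope estimates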

  19. Use of a combined effect model approach for discriminating between ABCB1- and ABCC1-type efflux activities in native bivalve gill tissue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faria, Melissa; CESAM & Departamento de Biologia, Universidade de Aveiro, 3810-193 Aveiro; Pavlichenko, Vasiliy

    Aquatic organisms, such as bivalves, employ ATP binding cassette (ABC) transporters for efflux of potentially toxic chemicals. Anthropogenic water contaminants can, as chemosensitizers, disrupt efflux transporter function, enabling other, putatively toxic compounds to enter the organism. Applying rapid amplification of cDNA ends (RACE) PCR, we identified complete cDNAs encoding ABCB1- and ABCC1-type transporter homologs from zebra mussel, providing the molecular basis for expression of both transporter types in zebra mussel gills. Further, efflux activities of both transporter types in gills were indicated with dye accumulation assays, where efflux of the dye calcein-am was sensitive to both ABCB1- (reversin 205, verapamil) and ABCC1- (MK571) type-specific inhibitors. The assumption that different inhibitors targeted different efflux pump types was confirmed when comparing measured effects of binary inhibitor mixtures in dye accumulation assays with predictions from mixture effect models. Effects of the MK571/reversin 205 mixture corresponded better with independent action, whereas reversin 205/verapamil joint effects were better predicted by the concentration addition model, indicating different and equal targets, respectively. The binary mixture approach was further applied to identify the efflux pump type targeted by environmentally relevant chemosensitizing compounds. Pentachlorophenol and musk ketone, which were selected after a pre-screen of twelve compounds previously identified as chemosensitizers, showed mixture effects that corresponded better with concentration addition when combined with reversin 205 but with independent action predictions when combined with MK571, indicating targeting of an ABCB1-type efflux pump by these compounds. - Highlights: • Sequences and function of ABC efflux transporters in bivalve gills were explored. • Full-length Dreissena polymorpha abcb1 and abcc1 cDNA sequences were identified. • A mixture effect design with inhibitors was applied in transporter activity assays. • ABCB1- and ABCC1-type efflux activities were distinguished in native gill tissue. • Inhibitory action of environmental chemicals targeted ABCB1-type efflux activity.
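
    The two reference models used to interpret the binary mixtures above can be illustrated with a short sketch: independent action combines fractional effects as statistically independent events, while concentration addition treats the mixture as a dilution of a single compound and solves for the effect level at which the toxic units sum to one. The Hill-type dose-response parameters and concentrations below are hypothetical, not measured values from the study.

        import numpy as np
        from scipy.optimize import brentq

        def hill_effect(c, ec50, slope):
            """Fractional effect (0..1) of a single inhibitor at concentration c."""
            return c ** slope / (c ** slope + ec50 ** slope)

        def independent_action(conc, ec50, slope):
            """IA: effects combine as independent events."""
            effects = [hill_effect(c, e, s) for c, e, s in zip(conc, ec50, slope)]
            return 1.0 - np.prod([1.0 - e for e in effects])

        def concentration_addition(conc, ec50, slope):
            """CA: solve for the effect level x at which the toxic units sum to 1."""
            def toxic_unit_sum(x):
                # EC_x of compound i from the inverse Hill function
                ecx = [e * (x / (1.0 - x)) ** (1.0 / s) for e, s in zip(ec50, slope)]
                return sum(c / e for c, e in zip(conc, ecx)) - 1.0
            return brentq(toxic_unit_sum, 1e-9, 1 - 1e-9)

        # Hypothetical binary mixture of two efflux-pump inhibitors
        conc, ec50, slope = [2.0, 5.0], [4.0, 12.0], [1.2, 1.0]
        print("IA prediction:", round(independent_action(conc, ec50, slope), 3))
        print("CA prediction:", round(concentration_addition(conc, ec50, slope), 3))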

  20. Mixture models with entropy regularization for community detection in networks

    NASA Astrophysics Data System (ADS)

    Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang

    2018-04-01

    Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures, including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we propose an entropy-regularized mixture model (called EMM), which is capable of inferring the number of communities and identifying the network structure contained in a network simultaneously. In the model, by minimizing the entropy of the mixing coefficients of NMM within an EM (expectation-maximization) solution, small clusters containing little information can be discarded step by step. An empirical study on both synthetic and real networks shows that the proposed EMM is superior to state-of-the-art methods.
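
    One plausible way to write the penalized objective described above (the exact functional form in the paper may differ; lambda and the generic component densities here are illustrative) is, in LaTeX notation,

        \mathcal{J}(\Theta, \pi) = \sum_{i=1}^{n} \log \sum_{k=1}^{K} \pi_k \, p(x_i \mid \theta_k) + \lambda \sum_{k=1}^{K} \pi_k \log \pi_k ,

    where the second term is the negative entropy of the mixing coefficients. Maximizing this objective with lambda > 0 rewards low-entropy weight vectors, so the weights of components carrying little information shrink toward zero during EM and those components can be pruned, which is how the model avoids fixing the number of communities in advance.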

  1. Bayesian kernel machine regression for estimating the health effects of multi-pollutant mixtures.

    PubMed

    Bobb, Jennifer F; Valeri, Linda; Claus Henn, Birgit; Christiani, David C; Wright, Robert O; Mazumdar, Maitreyi; Godleski, John J; Coull, Brent A

    2015-07-01

    Because humans are invariably exposed to complex chemical mixtures, estimating the health effects of multi-pollutant exposures is of critical concern in environmental epidemiology, and to regulatory agencies such as the U.S. Environmental Protection Agency. However, most health effects studies focus on single agents or consider simple two-way interaction models, in part because we lack the statistical methodology to more realistically capture the complexity of mixed exposures. We introduce Bayesian kernel machine regression (BKMR) as a new approach to study mixtures, in which the health outcome is regressed on a flexible function of the mixture (e.g. air pollution or toxic waste) components that is specified using a kernel function. In high-dimensional settings, a novel hierarchical variable selection approach is incorporated to identify important mixture components and account for the correlated structure of the mixture. Simulation studies demonstrate the success of BKMR in estimating the exposure-response function and in identifying the individual components of the mixture responsible for health effects. We demonstrate the features of the method through epidemiology and toxicology applications. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
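
    BKMR itself is a Bayesian hierarchical method with variable selection (the authors distribute an R implementation, the bkmr package); as a loose, non-Bayesian illustration of the central idea, regressing the outcome on a flexible kernel function of the mixture components, here is a Gaussian-process sketch in Python with simulated exposures. All data and parameter choices are hypothetical.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Hypothetical exposure matrix: 300 subjects x 4 pollutants, plus an outcome
        rng = np.random.default_rng(0)
        Z = rng.normal(size=(300, 4))
        y = np.sin(Z[:, 0]) + 0.5 * Z[:, 1] * Z[:, 2] + rng.normal(0, 0.2, 300)

        # The exposure-response surface h(.) is modeled nonparametrically via an RBF kernel,
        # echoing the kernel-machine representation of the mixture effect.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                                      normalize_y=True)
        gp.fit(Z, y)

        # Exposure-response profile for pollutant 1 with the others held at their medians
        grid = np.tile(np.median(Z, axis=0), (50, 1))
        grid[:, 0] = np.linspace(Z[:, 0].min(), Z[:, 0].max(), 50)
        print(gp.predict(grid)[:5].round(2))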

  2. Fingerprinting selection for agroenvironmental catchment studies: EDXRF analysis for solving complex artificial mixtures

    NASA Astrophysics Data System (ADS)

    Torres Astorga, Romina; Velasco, Hugo; Dercon, Gerd; Mabit, Lionel

    2017-04-01

    Soil erosion and the associated sediment transport and deposition processes are key environmental problems in Central Argentinian watersheds. Several land use practices - such as intensive grazing and crop cultivation - are considered likely to significantly increase land degradation and soil/sediment erosion. Characterized by highly erodible soils, the Estancia Grande subcatchment (12.3 km2), located 23 km north-east of San Luis, has been investigated using sediment source fingerprinting techniques to identify critical hotspots of land degradation. The authors created 4 artificial mixtures using known quantities of the most representative sediment sources of the studied catchment. The first mixture was made using four rotation crop soil sources. The second and third mixtures were created using different proportions of 4 different soil sources, including soils from a feedlot, a rotation crop, a walnut forest and a grazing soil. The last tested mixture contained the same sources as the third mixture but with the addition of a fifth soil source (i.e. a native bank soil). The Energy Dispersive X-Ray Fluorescence (EDXRF) analytical technique was used to reconstruct the source proportions of the original mixtures. Besides using traditional methods of fingerprint selection, such as the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA), the authors used the actual source proportions in the mixtures and, from the subset of tracers that passed the statistical tests, selected specific elemental tracers that agreed with the expected mixture contents. The selection process ended by testing, in a mixing model, all possible combinations of the reduced set of tracers. Alkaline earth metals, especially strontium (Sr) and barium (Ba), were identified as the most effective fingerprints and provided a low Mean Absolute Error (MAE) of approximately 2% when reconstructing the 4 artificial mixtures. This study demonstrates that the EDXRF fingerprinting approach performed very well in reconstructing the original mixtures, especially in identifying and quantifying the contributions of the 4 rotation crop soil sources in the first mixture.
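
    The mixing-model step described above amounts to recovering non-negative source proportions that sum to one from tracer concentrations measured in the sources and in the mixture. Below is a minimal sketch with made-up tracer values (not the study's EDXRF measurements), solved as a constrained least-squares problem.

        import numpy as np
        from scipy.optimize import minimize

        # Rows: tracers (e.g. Sr, Ba, ...); columns: candidate sources. Values are hypothetical.
        A = np.array([[120., 80., 200., 150.],     # tracer 1 concentration in each source
                      [ 35., 60.,  20.,  45.],     # tracer 2
                      [ 10., 15.,   5.,  12.]])    # tracer 3
        b = np.array([140., 40., 10.5])            # same tracers measured in the mixture

        def objective(x):
            return np.sum((A @ x - b) ** 2)

        x0 = np.full(A.shape[1], 1.0 / A.shape[1])
        res = minimize(objective, x0, method="SLSQP",
                       bounds=[(0.0, 1.0)] * A.shape[1],
                       constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])
        proportions = res.x

        known = np.array([0.4, 0.2, 0.25, 0.15])   # hypothetical 'true' mixture recipe
        print("estimated source proportions:", proportions.round(3))
        print("mean absolute error:", np.abs(proportions - known).mean().round(3))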

  3. An introduction to mixture item response theory models.

    PubMed

    De Ayala, R J; Santiago, S Y

    2017-02-01

    Mixture item response theory (IRT) allows one to address situations that involve a mixture of latent subpopulations that are qualitatively different but within which a measurement model based on a continuous latent variable holds. In this modeling framework, one can characterize students by both their location on a continuous latent variable as well as by their latent class membership. For example, in a study of risky youth behavior this approach would make it possible to estimate an individual's propensity to engage in risky youth behavior (i.e., on a continuous scale) and to use these estimates to identify youth who might be at the greatest risk given their class membership. Mixture IRT can be used with binary response data (e.g., true/false, agree/disagree, endorsement/not endorsement, correct/incorrect, presence/absence of a behavior), Likert response scales, partial correct scoring, nominal scales, or rating scales. In the following, we present mixture IRT modeling and two examples of its use. Data needed to reproduce analyses in this article are available as supplemental online materials at http://dx.doi.org/10.1016/j.jsp.2016.01.002. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  4. Population heterogeneity in the salience of multiple risk factors for adolescent delinquency.

    PubMed

    Lanza, Stephanie T; Cooper, Brittany R; Bray, Bethany C

    2014-03-01

    To present mixture regression analysis as an alternative to more standard regression analysis for predicting adolescent delinquency. We demonstrate how mixture regression analysis allows for the identification of population subgroups defined by the salience of multiple risk factors. We identified population subgroups (i.e., latent classes) of individuals based on their coefficients in a regression model predicting adolescent delinquency from eight previously established risk indices drawn from the community, school, family, peer, and individual levels. The study included N = 37,763 10th-grade adolescents who participated in the Communities That Care Youth Survey. Standard, zero-inflated, and mixture Poisson and negative binomial regression models were considered. Standard and mixture negative binomial regression models were selected as optimal. The five-class regression model was interpreted based on the class-specific regression coefficients, indicating that risk factors had varying salience across classes of adolescents. Standard regression showed that all risk factors were significantly associated with delinquency. Mixture regression provided more nuanced information, suggesting a unique set of risk factors that were salient for different subgroups of adolescents. Implications for the design of subgroup-specific interventions are discussed. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  5. Combined acute ecotoxicity of malathion and deltamethrin to Daphnia magna (Crustacea, Cladocera): comparison of different data analysis approaches.

    PubMed

    Toumi, Héla; Boumaiza, Moncef; Millet, Maurice; Radetski, Claudemir Marcos; Camara, Baba Issa; Felten, Vincent; Masfaraud, Jean-François; Férard, Jean-François

    2018-04-19

    We studied the combined acute effect (i.e., after 48 h) of deltamethrin (a pyrethroid insecticide) and malathion (an organophosphate insecticide) on Daphnia magna. Two approaches were used to examine the potential interaction effects of eight mixtures of deltamethrin and malathion: (i) calculation of the mixture toxicity index (MTI) and the safety factor index (SFI) and (ii) response surface methodology coupled with an isobole-based statistical model (using a generalized linear model). According to the MTI and SFI calculations, one tested mixture was found to be additive while the two other tested mixtures were found to be non-additive (MTI) or antagonistic (SFI); these differences between index responses are due only to differences in the terminology associated with the two indexes. Through the response surface approach and isobologram analysis, we concluded that the binary mixtures of deltamethrin and malathion had a significant antagonistic effect on D. magna immobilization after 48 h of exposure. The index approaches and the response surface approach with isobologram analysis are complementary. Calculating the mixture toxicity index and safety factor index identifies the type of interaction for each tested mixture individually, while the response surface approach with isobologram analysis integrates all the data, providing a global assessment of the type of interactive effect. Only the response surface approach with isobologram analysis allowed a statistical assessment of the ecotoxicological interaction. Nevertheless, we recommend the use of both approaches (i) to identify the combined effects of contaminants and (ii) to improve risk assessment and environmental management.

  6. Prospective aquatic risk assessment for chemical mixtures in agricultural landscapes

    PubMed Central

    Brown, Colin D.; Hamer, Mick; Jones, Russell; Maltby, Lorraine; Posthuma, Leo; Silberhorn, Eric; Teeter, Jerold Scott; Warne, Michael St J; Weltje, Lennart

    2018-01-01

    Abstract Environmental risk assessment of chemical mixtures is challenging because of the multitude of possible combinations that may occur. Aquatic risk from chemical mixtures in an agricultural landscape was evaluated prospectively in 2 exposure scenario case studies: at field scale for a program of 13 plant‐protection products applied annually for 20 yr and at a watershed scale for a mixed land‐use scenario over 30 yr with 12 plant‐protection products and 2 veterinary pharmaceuticals used for beef cattle. Risk quotients were calculated from regulatory exposure models with typical real‐world use patterns and regulatory acceptable concentrations for individual chemicals. The results could differentiate situations when there was concern associated with single chemicals from those when concern was associated with a mixture (based on concentration addition) with no single chemical triggering concern. Potential mixture risk was identified on 0.02 to 7.07% of the total days modeled, depending on the scenario, the taxa, and whether considering acute or chronic risk. Taxa at risk were influenced by receiving water body characteristics along with chemical use profiles and associated properties. The present study demonstrates that a scenario‐based approach can be used to determine whether mixtures of chemicals pose risks over and above any identified using existing approaches for single chemicals, how often and to what magnitude, and ultimately which mixtures (and dominant chemicals) cause greatest concern. Environ Toxicol Chem 2018;37:674–689. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC. PMID:29193235

  7. Prospective aquatic risk assessment for chemical mixtures in agricultural landscapes.

    PubMed

    Holmes, Christopher M; Brown, Colin D; Hamer, Mick; Jones, Russell; Maltby, Lorraine; Posthuma, Leo; Silberhorn, Eric; Teeter, Jerold Scott; Warne, Michael St J; Weltje, Lennart

    2018-03-01

    Environmental risk assessment of chemical mixtures is challenging because of the multitude of possible combinations that may occur. Aquatic risk from chemical mixtures in an agricultural landscape was evaluated prospectively in 2 exposure scenario case studies: at field scale for a program of 13 plant-protection products applied annually for 20 yr and at a watershed scale for a mixed land-use scenario over 30 yr with 12 plant-protection products and 2 veterinary pharmaceuticals used for beef cattle. Risk quotients were calculated from regulatory exposure models with typical real-world use patterns and regulatory acceptable concentrations for individual chemicals. The results could differentiate situations when there was concern associated with single chemicals from those when concern was associated with a mixture (based on concentration addition) with no single chemical triggering concern. Potential mixture risk was identified on 0.02 to 7.07% of the total days modeled, depending on the scenario, the taxa, and whether considering acute or chronic risk. Taxa at risk were influenced by receiving water body characteristics along with chemical use profiles and associated properties. The present study demonstrates that a scenario-based approach can be used to determine whether mixtures of chemicals pose risks over and above any identified using existing approaches for single chemicals, how often and to what magnitude, and ultimately which mixtures (and dominant chemicals) cause greatest concern. Environ Toxicol Chem 2018;37:674-689. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
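
    The mixture screening logic described in these two records reduces, for a single day and water body, to summing single-chemical risk quotients under concentration addition. A minimal sketch follows; the chemical names, concentrations, and thresholds are hypothetical, not values from the study.

        # Hypothetical predicted environmental concentrations (PEC, ug/L) and
        # regulatory acceptable concentrations (RAC, ug/L) for three co-occurring chemicals.
        pec = {"herbicide_A": 0.8, "fungicide_B": 0.05, "pharmaceutical_C": 0.36}
        rac = {"herbicide_A": 2.0, "fungicide_B": 0.5, "pharmaceutical_C": 0.6}

        rq = {chem: pec[chem] / rac[chem] for chem in pec}   # single-chemical risk quotients
        rq_mixture = sum(rq.values())                        # concentration-addition mixture RQ

        print({k: round(v, 2) for k, v in rq.items()})
        print("mixture RQ:", round(rq_mixture, 2))
        # No single chemical exceeds an RQ of 1 here, but the summed mixture RQ (1.1) does,
        # which is exactly the situation the scenario analysis above is designed to flag.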

  8. Finite mixture modeling for vehicle crash data with application to hotspot identification.

    PubMed

    Park, Byung-Jung; Lord, Dominique; Lee, Chungwon

    2014-10-01

    The application of finite mixture regression models has recently gained an interest from highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations of a sample arise from two or more unobserved components with unknown proportions. Both fixed and varying weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study, using observed and simulated data, investigated the relative performance of the finite mixture model and the traditional negative binomial (NB) model in terms of hotspot identification. For the observed data, rural multilane segment crash data for divided highways in California and Texas were used. The results showed that the difference measured by the percentage deviation in ranking orders was relatively small for this dataset. Nevertheless, the ranking results from the finite mixture model were considered more reliable than the NB model because of the better model specification. This finding was also supported by the simulation study which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding an optimal threshold value for identifying hotspots, another simulation analysis indicated that there is a discrepancy between false discovery (increasing) and false negative rates (decreasing). Since the costs associated with false positives and false negatives are different, it is suggested that the selected optimal threshold value should be decided by considering the trade-offs between these two costs so that unnecessary expenses are minimized. Copyright © 2014 Elsevier Ltd. All rights reserved.
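
    As a stripped-down illustration of the finite-mixture idea behind such hotspot screening (without the covariates and negative binomial components of the models compared above), a two-component Poisson mixture can be fitted by EM and sites ranked by their posterior mean rate. The crash counts below are simulated.

        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(0)
        # Simulated crash counts: most sites are low-risk, a minority are high-risk.
        counts = np.concatenate([rng.poisson(1.0, 800), rng.poisson(6.0, 200)])

        lam, w = np.array([0.5, 5.0]), np.array([0.5, 0.5])   # initial rates and weights
        for _ in range(200):                                  # EM iterations
            resp = w * poisson.pmf(counts[:, None], lam)      # E-step (n_sites x 2)
            resp /= resp.sum(axis=1, keepdims=True)
            lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)   # M-step
            w = resp.mean(axis=0)

        site_rate = resp @ lam                                # posterior mean rate per site
        hotspots = np.argsort(site_rate)[::-1][:20]           # top-20 candidate hotspots
        print("estimated rates:", lam.round(2), "weights:", w.round(2))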

  9. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data.

    PubMed

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2014-06-01

    Emission sources of volatile organic compounds (VOCs*) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) evaluate mixtures (including dependencies), and (3) identify determinants of VOC exposure. METHODS VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999-2001) and the National Health and Nutrition Examination Survey (NHANES; 1999-2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods. Specific Aim 1. 
To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model's goodness of fit. Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data. Specific Aim 2. Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture's components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations. Specific Aim 3. Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs). Specific Aim 1. Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10(-4), and 13% of all participants had risk levels above 10(-2). Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models. DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture distributions, adaptively selecting the number of needed clusters, and providing uncertainty estimates. Although the results apply to the RIOPA data set, GEV distributions and mixture models appear more broadly applicable. These models can be used to simulate VOC distributions, which are neither normally nor lognormally distributed, and they accurately represent the highest exposures, which may have the greatest health significance. Specific Aim 2. Four VOC mixtures were identified and apportioned by PMF; they represented gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection byproducts, and cleaning products and odorants. The last mixture (cleaning products and odorants) accounted for the largest fraction of an individual's total exposure (average of 42% across RIOPA participants). Often, a single compound dominated a mixture but the mixture fractions were heterogeneous; that is, the fractions of the compounds changed with the concentration of the mixture. 
Three VOC mixtures were identified by toxicologic mode of action and represented VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10(-3) for about 10% of RIOPA participants. The dependency structures of the VOC mixtures in the RIOPA data set fitted Gumbel (two mixtures) and t copulas (four mixtures). These copula types emphasize dependencies found in the upper and lower tails of a distribution. The copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions. Specific Aim 3. In an analysis focused on the home environment and the outdoor (close to home) environment, home VOC concentrations dominated personal exposures (66% to 78% of the total exposure, depending on VOC); this was largely the result of the amount of time participants spent at home and the fact that indoor concentrations were much higher than outdoor concentrations for most VOCs. In a different analysis focused on the sources inside the home and outside (but close to the home), it was assumed that 100% of VOCs from outside sources would penetrate the home. Outdoor VOC sources accounted for 5% (d-limonene) to 81% (carbon tetrachloride [CTC]) of the total exposure. Personal exposure and indoor measurements had similar determinants depending on the VOC. Gasoline-related VOCs (e.g., benzene and methyl tert-butyl ether [MTBE]) were associated with city, residences with attached garages, pumping gas, wind speed, and home air exchange rate (AER). Odorant and cleaning-related VOCs (e.g., 1,4-DCB and chloroform) also were associated with city, and a residence's AER, size, and family members showering. Dry-cleaning and industry-related VOCs (e.g., tetrachloroethylene [or perchloroethylene, PERC] and trichloroethylene [TCE]) were associated with city, type of water supply to the home, and visits to the dry cleaner. These and other relationships were significant, they explained from 10% to 40% of the variance in the measurements, and are consistent with known emission sources and those reported in the literature. Outdoor concentrations of VOCs had only two determinants in common: city and wind speed. Overall, personal exposure was dominated by the home setting, although a large fraction of indoor VOC concentrations were due to outdoor sources. City of residence, personal activities, household characteristics, and meteorology were significant determinants. Concentrations in RIOPA were considerably lower than levels in the nationally representative NHANES for all VOCs except MTBE and 1,4-DCB. Differences between RIOPA and NHANES results can be explained by contrasts between the sampling designs and staging in the two studies, and by differences in the demographics, smoking, employment, occupations, and home locations. (ABSTRACT TRUNCATED)
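
    The extreme-value step in Specific Aim 1 can be illustrated with scipy: fit a three-parameter GEV (and the two-parameter Gumbel special case) to the top 10% of an exposure variable and compare the implied tail quantiles. The data here are synthetic, not RIOPA measurements.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        exposure = rng.lognormal(mean=0.0, sigma=1.2, size=5000)   # synthetic personal exposures

        # Fit a three-parameter GEV to the top 10% of measurements (the tail of interest)
        top10 = np.sort(exposure)[-500:]
        shape, loc, scale = stats.genextreme.fit(top10)
        print("GEV shape/loc/scale:", round(shape, 3), round(loc, 2), round(scale, 2))

        # A Gumbel fit (two-parameter special case) for comparison, as in the report
        gumbel_loc, gumbel_scale = stats.gumbel_r.fit(top10)
        print("Gumbel loc/scale:", round(gumbel_loc, 2), round(gumbel_scale, 2))

        # Tail quantile implied by each fit, e.g. the level exceeded by 1% of the extreme subset
        print("GEV 99th pct:", round(stats.genextreme.ppf(0.99, shape, loc, scale), 2))
        print("Gumbel 99th pct:", round(stats.gumbel_r.ppf(0.99, gumbel_loc, gumbel_scale), 2))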

  10. Chemical mixtures in potable water in the U.S.

    USGS Publications Warehouse

    Ryker, Sarah J.

    2014-01-01

    In recent years, regulators have devoted increasing attention to health risks from exposure to multiple chemicals. In 1996, the US Congress directed the US Environmental Protection Agency (EPA) to study mixtures of chemicals in drinking water, with a particular focus on potential interactions affecting chemicals' joint toxicity. The task is complicated by the number of possible mixtures in drinking water and lack of toxicological data for combinations of chemicals. As one step toward risk assessment and regulation of mixtures, the EPA and the Agency for Toxic Substances and Disease Registry (ATSDR) have proposed to estimate mixtures' toxicity based on the interactions of individual component chemicals. This approach permits the use of existing toxicological data on individual chemicals, but still requires additional information on interactions between chemicals and environmental data on the public's exposure to combinations of chemicals. Large compilations of water-quality data have recently become available from federal and state agencies. This chapter demonstrates the use of these environmental data, in combination with the available toxicological data, to explore scenarios for mixture toxicity and develop priorities for future research and regulation. Occurrence data on binary and ternary mixtures of arsenic, cadmium, and manganese are used to parameterize the EPA and ATSDR models for each drinking water source in the dataset. The models' outputs are then mapped at county scale to illustrate the implications of the proposed models for risk assessment and rulemaking. For example, according to the EPA's interaction model, the levels of arsenic and cadmium found in US groundwater are unlikely to have synergistic cardiovascular effects in most areas of the country, but the same mixture's potential for synergistic neurological effects merits further study. Similar analysis could, in future, be used to explore the implications of alternative risk models for the toxicity and interaction of complex mixtures, and to identify the communities with the highest and lowest expected value for regulation of chemical mixtures.

  11. Determining of migraine prognosis using latent growth mixture models.

    PubMed

    Tasdelen, Bahar; Ozge, Aynur; Kaleagasi, Hakan; Erdogan, Semra; Mengi, Tufan

    2011-04-01

    This paper presents a retrospective study to classify patients into treatment subtypes according to baseline and longitudinally observed values, accounting for heterogeneity in migraine prognosis. In classical prospective clinical studies, participants are classified with respect to baseline status and followed over a certain time period. The latent growth mixture model, however, is the most suitable method because it accounts for population heterogeneity and is not affected by drop-outs when they are missing at random. Hence, we planned this comprehensive study to identify prognostic factors in migraine. The study was based on 10 years of computer-based follow-up data from the Mersin University Headache Outpatient Department. The developmental trajectories within subgroups were described separately for the severity, frequency, and duration of headache, and the probabilities of each subgroup were estimated using latent growth mixture models. The SAS PROC TRAJ procedure, a semiparametric, group-based mixture modeling approach, was applied to define the developmental trajectories. While a three-group model was appropriate for the severity (mild, moderate, severe) and frequency (low, medium, high) of headache, a four-group model was more suitable for duration (low, medium, high, extremely high). The severity of headache increased in patients with nausea, vomiting, photophobia and phonophobia. The frequency of headache was especially related to increasing age and unilateral pain. Nausea and photophobia were also related to headache duration. Nausea, vomiting and photophobia were the most significant factors for identifying developmental trajectories. The remission time was not the same for the severity, frequency, and duration of headache.

  12. Gene selection and cancer type classification of diffuse large-B-cell lymphoma using a bivariate mixture model for two-species data.

    PubMed

    Su, Yuhua; Nielsen, Dahlia; Zhu, Lei; Richards, Kristy; Suter, Steven; Breen, Matthew; Motsinger-Reif, Alison; Osborne, Jason

    2013-01-05

    A bivariate mixture model utilizing information across two species was proposed to solve the fundamental problem of identifying differentially expressed genes in microarray experiments. The model utility was illustrated using a dog and human lymphoma data set prepared by a group of scientists in the College of Veterinary Medicine at North Carolina State University. A small number of genes were identified as being differentially expressed in both species and the human genes in this cluster serve as a good predictor for classifying diffuse large-B-cell lymphoma (DLBCL) patients into two subgroups, the germinal center B-cell-like diffuse large B-cell lymphoma and the activated B-cell-like diffuse large B-cell lymphoma. The number of human genes that were observed to be significantly differentially expressed (21) from the two-species analysis was very small compared to the number of human genes (190) identified with only one-species analysis (human data). The genes may be clinically relevant/important, as this small set achieved low misclassification rates of DLBCL subtypes. Additionally, the two subgroups defined by this cluster of human genes had significantly different survival functions, indicating that the stratification based on gene-expression profiling using the proposed mixture model provided improved insight into the clinical differences between the two cancer subtypes.

  13. Probe-level linear model fitting and mixture modeling results in high accuracy detection of differential gene expression.

    PubMed

    Lemieux, Sébastien

    2006-08-25

    The identification of differentially expressed genes (DEGs) from Affymetrix GeneChips arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied on the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental design with or without replication. On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes within 10% of false positives. This accuracy was achieved both using the three replicates per conditions available in the dataset and using only one replicate per condition. The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
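
    A schematic of the two stages described above (not the PL-LM implementation itself): estimate a per-probe-set treatment effect with a linear model, which for a single two-level factor reduces to a mean difference, then fit a finite Gaussian mixture to the estimated coefficients and flag the non-null component. The data and cutoffs below are simulated and hypothetical.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        n_genes, n_per_group = 2000, 3
        # Simulated log-intensities: two conditions, 5% of genes truly shifted
        effect = np.where(rng.random(n_genes) < 0.05, rng.normal(2.0, 0.5, n_genes), 0.0)
        ctrl = rng.normal(0.0, 1.0, (n_genes, n_per_group))
        trt = rng.normal(0.0, 1.0, (n_genes, n_per_group)) + effect[:, None]

        # Stage 1: per-gene linear model; with one two-level factor this is a mean difference
        coef = trt.mean(axis=1) - ctrl.mean(axis=1)

        # Stage 2: finite Gaussian mixture on the coefficients; the component with the
        # largest mean is treated as the "differentially expressed" cluster
        gm = GaussianMixture(n_components=2, random_state=0).fit(coef.reshape(-1, 1))
        de_component = int(np.argmax(gm.means_.ravel()))
        is_de = gm.predict(coef.reshape(-1, 1)) == de_component
        print("genes flagged as DE:", int(is_de.sum()), "of", n_genes)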

  14. Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.

    PubMed

    Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi

    2013-12-01

    Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Commonly, data generated by mass spectrometry have many missing values, which arise when a compound is absent from a sample or is present at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare the power and estimation properties of a mixture model with those of an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, while estimates from the mixture model were unbiased except when all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrated this approach through application to glycomics data of serum samples from women with ovarian cancer and matched controls.
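
    The mixture model contrasted with the AFT model above can be written as a simple likelihood: each missing value is either a true absence (a point mass with probability pi) or a measurement censored below the detection limit. The sketch below fits that likelihood to simulated data on the log scale; parameter names and values are illustrative, not from the glycomics application.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        def neg_loglik(params, x, detected, log_dl):
            """Mixture model for one compound on the log scale.
            pi = probability the compound is truly absent; (mu, sigma) describe the
            log-abundance of present compounds; log_dl is the log detection limit."""
            pi, mu, log_sigma = params
            sigma = np.exp(log_sigma)
            ll_obs = np.log(1 - pi) + norm.logpdf(x[detected], mu, sigma)
            # a missing value is either absent (pi) or present but censored below the DL
            p_miss = pi + (1 - pi) * norm.cdf(log_dl, mu, sigma)
            ll_miss = np.log(p_miss) * np.sum(~detected)
            return -(ll_obs.sum() + ll_miss)

        rng = np.random.default_rng(0)
        present = rng.random(400) < 0.7                      # 30% true point mass
        log_abund = rng.normal(1.0, 0.8, 400)
        log_dl = 0.0
        detected = present & (log_abund > log_dl)
        x = np.where(detected, log_abund, np.nan)

        fit = minimize(neg_loglik, x0=[0.2, 0.5, 0.0], args=(x, detected, log_dl),
                       bounds=[(1e-3, 0.999), (None, None), (None, None)])
        print("estimated point-mass proportion, mu, sigma:",
              np.round([fit.x[0], fit.x[1], np.exp(fit.x[2])], 2))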

  15. Accurate Identification of Unknown and Known Metabolic Mixture Components by Combining 3D NMR with Fourier Transform Ion Cyclotron Resonance Tandem Mass Spectrometry.

    PubMed

    Wang, Cheng; He, Lidong; Li, Da-Wei; Bruschweiler-Li, Lei; Marshall, Alan G; Brüschweiler, Rafael

    2017-10-06

    Metabolite identification in metabolomics samples is a key step that critically impacts downstream analysis. We recently introduced the SUMMIT NMR/mass spectrometry (MS) hybrid approach for the identification of the molecular structure of unknown metabolites based on the combination of NMR, MS, and combinatorial cheminformatics. Here, we demonstrate the feasibility of the approach for an untargeted analysis of both a model mixture and E. coli cell lysate based on 2D/3D NMR experiments in combination with Fourier transform ion cyclotron resonance MS and MS/MS data. For 19 of the 25 model metabolites, SUMMIT yielded complete structures that matched those in the mixture independent of database information. Of those, seven top-ranked structures matched those in the mixture, and four of those were further validated by positive ion MS/MS. For five metabolites, not part of the 19 metabolites, correct molecular structural motifs could be identified. For E. coli, SUMMIT MS/NMR identified 20 previously known metabolites with three or more 1 H spins independent of database information. Moreover, for 15 unknown metabolites, molecular structural fragments were determined consistent with their spin systems and chemical shifts. By providing structural information for entire metabolites or molecular fragments, SUMMIT MS/NMR greatly assists the targeted or untargeted analysis of complex mixtures of unknown compounds.

  16. A Twin Factor Mixture Modeling Approach to Childhood Temperament: Differential Heritability

    PubMed Central

    Scott, Brandon G.; Lemery-Chalfant, Kathryn; Clifford, Sierra; Tein, Jenn-Yun; Stoll, Ryan; Goldsmith, H. Hill

    2016-01-01

    Twin factor mixture modeling was used to identify temperament profiles, while simultaneously estimating a latent factor model for each profile with a sample of 787 twin pairs (mean age = 7.4 years; SD = 0.84; 49% female; 88.3% Caucasian), using mother- and father-reported temperament. A 4-profile, 1-factor model fit the data well. Profiles included ‘Regulated, Typical Reactive’, ‘Well-regulated, Positive Reactive’, ‘Regulated, Surgent’, and ‘Dysregulated, Negative Reactive.’ All profiles were heritable, with heritability lower and shared environment also contributing to membership in the ‘Regulated, Typical Reactive’ and ‘Dysregulated, Negative Reactive’ profiles. PMID:27291568

  17. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    PubMed

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.

  18. Personal Exposure to Mixtures of Volatile Organic Compounds: Modeling and Further Analysis of the RIOPA Data

    PubMed Central

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2015-01-01

    INTRODUCTION Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) evaluate mixtures (including dependencies), and (3) identify determinants of VOC exposure. METHODS VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999–2001) and the National Health and Nutrition Examination Survey (NHANES; 1999–2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods. 
Specific Aim 1 To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model’s goodness of fit. Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data. Specific Aim 2 Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture’s components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations. Specific Aim 3 Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs). RESULTS Specific Aim 1 Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10−4, and 13% of all participants had risk levels above 10−2. Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models. DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture distributions, adaptively selecting the number of needed clusters, and providing uncertainty estimates. Although the results apply to the RIOPA data set, GEV distributions and mixture models appear more broadly applicable. These models can be used to simulate VOC distributions, which are neither normally nor lognormally distributed, and they accurately represent the highest exposures, which may have the greatest health significance. Specific Aim 2 Four VOC mixtures were identified and apportioned by PMF; they represented gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection byproducts, and cleaning products and odorants. The last mixture (cleaning products and odorants) accounted for the largest fraction of an individual’s total exposure (average of 42% across RIOPA participants). 
Often, a single compound dominated a mixture but the mixture fractions were heterogeneous; that is, the fractions of the compounds changed with the concentration of the mixture. Three VOC mixtures were identified by toxicologic mode of action and represented VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10−3 for about 10% of RIOPA participants. The dependency structures of the VOC mixtures in the RIOPA data set fitted Gumbel (two mixtures) and t copulas (four mixtures). These copula types emphasize dependencies found in the upper and lower tails of a distribution. The copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions. Specific Aim 3 In an analysis focused on the home environment and the outdoor (close to home) environment, home VOC concentrations dominated personal exposures (66% to 78% of the total exposure, depending on VOC); this was largely the result of the amount of time participants spent at home and the fact that indoor concentrations were much higher than outdoor concentrations for most VOCs. In a different analysis focused on the sources inside the home and outside (but close to the home), it was assumed that 100% of VOCs from outside sources would penetrate the home. Outdoor VOC sources accounted for 5% (d-limonene) to 81% (carbon tetrachloride [CTC]) of the total exposure. Personal exposure and indoor measurements had similar determinants depending on the VOC. Gasoline-related VOCs (e.g., benzene and methyl tert-butyl ether [MTBE]) were associated with city, residences with attached garages, pumping gas, wind speed, and home air exchange rate (AER). Odorant and cleaning-related VOCs (e.g., 1,4-DCB and chloroform) also were associated with city, and a residence’s AER, size, and family members showering. Dry-cleaning and industry-related VOCs (e.g., tetrachloroethylene [or perchloroethylene, PERC] and trichloroethylene [TCE]) were associated with city, type of water supply to the home, and visits to the dry cleaner. These and other relationships were significant, they explained from 10% to 40% of the variance in the measurements, and are consistent with known emission sources and those reported in the literature. Outdoor concentrations of VOCs had only two determinants in common: city and wind speed. Overall, personal exposure was dominated by the home setting, although a large fraction of indoor VOC concentrations were due to outdoor sources. City of residence, personal activities, household characteristics, and meteorology were significant determinants. Concentrations in RIOPA were considerably lower than levels in the nationally representative NHANES for all VOCs except MTBE and 1,4-DCB. Differences between RIOPA and NHANES results can be explained by contrasts between the sampling designs and staging in the two studies, and by differences in the demographics, smoking, employment, occupations, and home locations. A portion of these differences are due to the nature of the convenience (RIOPA) and representative (NHANES) sampling strategies used in the two studies. CONCLUSIONS Accurate models for exposure data, which can feature extreme values, multiple modes, data below the MDL, heterogeneous interpollutant dependency structures, and other complex characteristics, are needed to estimate exposures and risks and to develop control and management guidelines and policies. 
Conventional and novel statistical methods were applied to data drawn from two large studies to understand the nature and significance of VOC exposures. Both extreme value distributions and mixture models were found to provide excellent fit to single VOC compounds (univariate distributions), and copulas may be the method of choice for VOC mixtures (multivariate distributions), especially for the highest exposures, which fit parametric models poorly and which may represent the greatest health risk. The identification of exposure determinants, including the influence of both certain activities (e.g., pumping gas) and environments (e.g., residences), provides information that can be used to manage and reduce exposures. The results obtained using the RIOPA data set add to our understanding of VOC exposures and further investigations using a more representative population and a wider suite of VOCs are suggested to extend and generalize results. PMID:25145040

  19. Identifying Aerosol Type/Mixture from Aerosol Absorption Properties Using AERONET

    NASA Technical Reports Server (NTRS)

    Giles, D. M.; Holben, B. N.; Eck, T. F.; Sinyuk, A.; Dickerson, R. R.; Thompson, A. M.; Slutsker, I.; Li, Z.; Tripathi, S. N.; Singh, R. P.; hide

    2010-01-01

    Aerosols are generated in the atmosphere through anthropogenic and natural mechanisms. These sources have signatures in the aerosol optical and microphysical properties that can be used to identify the aerosol type/mixture. Spectral aerosol absorption information (absorption Angstrom exponent; AAE) used in conjunction with the particle size parameterization (extinction Angstrom exponent; EAE) can only identify the dominant absorbing aerosol type in the sample volume (e.g., black carbon vs. iron oxides in dust). This AAE/EAE relationship can be expanded to also identify non-absorbing aerosol types/mixtures by applying an absorption weighting. This new relationship provides improved aerosol type distinction when the magnitude of absorption is not equal (e.g., black carbon vs. sulfates). The Aerosol Robotic Network (AERONET) data provide spectral aerosol optical depth and single scattering albedo, key parameters used to determine EAE and AAE. The proposed aerosol type/mixture relationship is demonstrated using the long-term data archive acquired at AERONET sites within various source regions. The preliminary analysis has found that dust, sulfate, organic carbon, and black carbon aerosol types/mixtures can be determined from this AAE/EAE relationship when applying the absorption weighting for each available wavelength (i.e., 440, 675, and 870 nm). Large, non-spherical dust particles absorb in the shorter wavelengths, and the application of 440 nm wavelength absorption weighting produced the best particle type definition. Sulfate particles scatter light efficiently and organic carbon particles are small near the source and aggregate over time to form larger, less absorbing particles. Both sulfates and organic carbon showed generally better definition using the 870 nm wavelength absorption weighting. Black carbon generation results from varying combustion rates from a number of sources including industrial processes and biomass burning. Cases with primarily black carbon showed improved definition in the 870 nm wavelength absorption weighting due to the increased absorption in the near-infrared wavelengths, while the 440 nm wavelength provided better definition when black carbon was mixed with dust. Utilization of this particle type scheme provides necessary information for remote sensing applications, which need a priori knowledge of aerosol type to model the retrieved properties, especially over semi-bright surfaces. In fact, this analysis reveals that the aerosol types occurred in mixtures with varying magnitudes of absorption and requires the use of more than one assumed aerosol mixture model. Furthermore, this technique will provide the aerosol transport model community a data set for validating aerosol type.
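
    For readers who want to reproduce the basic quantities in this relationship, the sketch below computes the extinction and absorption Angstrom exponents from spectral AOD and single scattering albedo by a log-log regression across the three wavelengths; the retrieval values are hypothetical stand-ins, not AERONET data.

      import numpy as np

      # Hypothetical AERONET-style retrievals at 440, 675, and 870 nm:
      # aerosol optical depth (AOD) and single scattering albedo (SSA).
      wavelengths = np.array([440.0, 675.0, 870.0])   # nm
      aod = np.array([0.42, 0.22, 0.15])              # extinction AOD
      ssa = np.array([0.88, 0.90, 0.92])              # single scattering albedo

      # Absorption AOD is the non-scattered fraction of extinction.
      aaod = (1.0 - ssa) * aod

      def angstrom_exponent(tau, wl):
          """Least-squares Angstrom exponent alpha in tau ~ wl**(-alpha)."""
          slope, _ = np.polyfit(np.log(wl), np.log(tau), 1)
          return -slope

      eae = angstrom_exponent(aod, wavelengths)    # extinction Angstrom exponent
      aae = angstrom_exponent(aaod, wavelengths)   # absorption Angstrom exponent
      print(f"EAE = {eae:.2f}, AAE = {aae:.2f}")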

  20. Robust Bayesian Analysis of Heavy-tailed Stochastic Volatility Models using Scale Mixtures of Normal Distributions

    PubMed Central

    Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.

    2009-01-01

    A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash, and variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock return data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction for the S&P500 index data over the usual normal model. PMID:20730043
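
    The scale-mixture-of-normals idea behind these robust error distributions can be sketched in a few lines: a Student-t draw is a normal draw whose variance is inflated by a Gamma-distributed mixing parameter, and small mixing parameters flag candidate outliers, as the abstract notes. The snippet below is a minimal illustration with arbitrary parameter values, not the paper's MCMC sampler.

      import numpy as np

      rng = np.random.default_rng(0)

      def student_t_smn(nu, sigma, size):
          """Draw Student-t errors via a scale mixture of normals:
          lam ~ Gamma(nu/2, rate=nu/2), eps | lam ~ N(0, sigma**2 / lam)."""
          lam = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=size)  # mixing parameters
          eps = rng.normal(0.0, sigma / np.sqrt(lam), size=size)
          return eps, lam

      eps, lam = student_t_smn(nu=5.0, sigma=1.0, size=10_000)
      # Small mixing parameters inflate the conditional variance, so draws paired
      # with small lam are candidate outliers (the diagnostic use noted above).
      outlier_flags = lam < np.quantile(lam, 0.01)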

  1. Using Factor Mixture Models to Evaluate the Type A/B Classification of Alcohol Use Disorders in a Heterogeneous Treatment Sample

    PubMed Central

    Hildebrandt, Tom; Epstein, Elizabeth E.; Sysko, Robyn; Bux, Donald A.

    2017-01-01

    Background: The type A/B classification model for alcohol use disorders (AUDs) has received considerable empirical support. However, few studies examine the underlying latent structure of this subtyping model, which has been challenged as a dichotomization of a single drinking severity dimension. Type B alcoholics, relative to type A, represent those with early age of onset, greater familial risk, and worse outcomes from alcohol use. Method: We examined the latent structure of the type A/B model using categorical, dimensional, and factor mixture models in a mixed-gender community treatment-seeking sample of adults with an AUD. Results: Factor analytic models identified 2 factors (drinking severity/externalizing psychopathology and internalizing psychopathology) underlying the type A/B indicators. A factor mixture model with 2 dimensions and 3 classes emerged as the best overall fitting model. The classes reflected a type A class and two type B classes (B1 and B2) that differed on the respective level of drinking severity/externalizing pathology and internalizing pathology. Type B1 had a greater prevalence of women and more internalizing pathology, and B2 had a greater prevalence of men and more drinking severity/externalizing pathology. The 2-factor, 3-class model also exhibited predictive validity by explaining significant variance in 12-month drinking and drug use outcomes. Conclusions: The model identified in the current study may provide a basis for examining different sources of heterogeneity in the course and outcome of AUDs. PMID:28247423

  2. The Integration of Continuous and Discrete Latent Variable Models: Potential Problems and Promising Opportunities

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Curran, Patrick J.

    2004-01-01

    Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…

  3. Deposition efficiency optimization in cold spraying of metal-ceramic powder mixtures

    NASA Astrophysics Data System (ADS)

    Klinkov, S. V.; Kosarev, V. F.

    2017-10-01

    In the present paper, results of optimization of the cold spray deposition process of a metal-ceramic powder mixture involving impacts of ceramic particles onto coating surface are reported. In the optimization study, a two-probability model was used to take into account the surface activation induced by the ceramic component of the mixture. The dependence of mixture deposition efficiency on the concentration and size of ceramic particles was analysed to identify the ranges of both parameters in which the effect due to ceramic particles on the mixture deposition efficiency was positive. The dependences of the optimum size and concentration of ceramic particles, and also the maximum gain in deposition efficiency, on the probability of adhesion of metal particles to non-activated coating surface were obtained.

  4. Phylogenetic mixtures and linear invariants for equal input models.

    PubMed

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
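
    As a concrete illustration of the equal input model named above, the sketch below builds its rate matrix for a hypothetical stationary distribution and checks the closed-form transition probabilities P_ij(t) = exp(-t)*delta_ij + (1 - exp(-t))*pi_j against a numerical matrix exponential; the four-state case reduces to the Felsenstein 1981 model.

      import numpy as np
      from scipy.linalg import expm

      # Hypothetical stationary distribution over k states (k = 4 recovers Felsenstein 1981).
      pi = np.array([0.1, 0.2, 0.3, 0.4])
      k = len(pi)

      # Equal input rate matrix: the rate into state j is pi_j for every other state,
      # and each row sums to zero.
      Q = np.tile(pi, (k, 1))
      np.fill_diagonal(Q, 0.0)
      np.fill_diagonal(Q, -Q.sum(axis=1))

      t = 0.7  # branch length
      P_numeric = expm(Q * t)

      # Closed form for the equal input model.
      P_closed = np.exp(-t) * np.eye(k) + (1.0 - np.exp(-t)) * np.tile(pi, (k, 1))
      assert np.allclose(P_numeric, P_closed)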

  5. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    USGS Publications Warehouse

    Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.

    2012-01-01

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.
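
    A stripped-down sketch of the joint-method idea is given below: a Poisson N-mixture likelihood in which two detection methods with different detection probabilities share the same latent site abundances. The simulated data, detection probabilities, and summation bound are hypothetical, and the sketch omits the covariates and hierarchical structure used in the study.

      import numpy as np
      from scipy.stats import poisson, binom
      from scipy.optimize import minimize

      # Simulated counts at n sites: rows are sites, columns are repeat visits.
      rng = np.random.default_rng(1)
      n_sites, n_visits, lam_true = 100, 3, 4.0
      N = rng.poisson(lam_true, n_sites)
      y_hair = rng.binomial(N[:, None], 0.4, (n_sites, n_visits))  # method 1 (e.g., hair traps)
      y_rub = rng.binomial(N[:, None], 0.2, (n_sites, n_visits))   # method 2 (e.g., bear rubs)

      K = 50  # upper bound for the latent abundance summation

      def neg_log_lik(theta):
          lam = np.exp(theta[0])
          p1 = 1.0 / (1.0 + np.exp(-theta[1]))
          p2 = 1.0 / (1.0 + np.exp(-theta[2]))
          Ns = np.arange(K + 1)
          prior = poisson.pmf(Ns, lam)  # P(N = n) at each candidate abundance
          ll = 0.0
          for i in range(n_sites):
              # Both methods condition on the same latent N_i.
              like_N = (binom.pmf(y_hair[i][:, None], Ns, p1).prod(axis=0) *
                        binom.pmf(y_rub[i][:, None], Ns, p2).prod(axis=0))
              ll += np.log((like_N * prior).sum())
          return -ll

      fit = minimize(neg_log_lik, x0=np.zeros(3), method="Nelder-Mead")
      lam_hat = np.exp(fit.x[0])  # estimated mean local abundance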

  6. Analytic Complexity and Challenges in Identifying Mixtures of Exposures Associated with Phenotypes in the Exposome Era.

    PubMed

    Patel, Chirag J

    2017-01-01

    Mixtures, or combinations and interactions between multiple environmental exposures, are hypothesized to be causally linked with disease and health-related phenotypes. Established and emerging molecular measurement technologies to assay the exposome, the comprehensive battery of exposures encountered from birth to death, promise a new way of identifying mixtures in disease in the epidemiological setting. In this opinion, we describe the analytic complexity and challenges in identifying mixtures associated with phenotype and disease. Existing and emerging machine-learning methods and data analytic approaches (e.g., "environment-wide association studies" [EWASs]), as well as large cohorts may enhance possibilities to identify mixtures of correlated exposures associated with phenotypes; however, the analytic complexity of identifying mixtures is immense. If the exposome concept is realized, new analytical methods and large sample sizes will be required to ascertain how mixtures are associated with disease. The author recommends documenting prevalent correlated exposures and replicated main effects prior to identifying mixtures.

  7. Thermal Infrared Spectroscopy and Modeled Mineralogy of Fine-Grained Mineral Mixtures: Implications for Martian Surface Mineralogy

    NASA Astrophysics Data System (ADS)

    Rampe, E. B.; Kraft, M. D.; Sharp, T. G.; Michalski, J. R.

    2006-12-01

    Spectral data suggest that the Martian surface may be chemically altered. However, TES data show evidence for abundant primary glass, and Mini-TES data from MER Spirit in the Columbia Hills identify primary basaltic glass in rocks that are believed to be altered (Haskin et al., 2005; Ming et al., 2006; Wang et al., 2006). Debate over whether the primary glass identified spectrally may instead be interpreted as alteration products, such as clay minerals and/or amorphous silica coatings (Wyatt and McSween, 2002; Kraft et al., 2003), has focused on their spectral similarities (Koeppen and Hamilton, 2005). We suggest that some of the putative primary glass may be due to nonlinear spectral mixing of primary and secondary phases. We created physical mixtures made up of a primary phase (augite, andesine, or a 50:50 weight percent mixture of augite and andesine) and a secondary phase (montmorillonite clay or amorphous silica in 2.5, 5, 10, and 20 weight percent abundances) to test how secondary phases affect primary mineral thermal infrared spectra and modeled mineralogies. We found that the presence of small to moderate amounts of secondary material strongly affects modeled mineralogies, causes the false identification of primary glass in abundances as high as 40 volume percent, and yields modeled plagioclase-to-pyroxene ratios that differ from the actual ratios in the mixtures. These results are important for the surface mineralogy of Mars because surface type two (ST2), which may be altered, has the highest modeled plagioclase-to-pyroxene ratio. The presence of alteration material on Mars may cause the false identification or overestimation of primary glass in TES and Mini-TES data and may cause incorrect modeling of primary phases on Mars.
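
    The kind of linear spectral deconvolution whose behavior these physical mixtures probe can be sketched as a non-negative least-squares fit of a measured spectrum to an end-member library; the example below uses synthetic stand-in spectra rather than laboratory emissivities, and by construction it cannot capture the nonlinear mixing effects the abstract describes.

      import numpy as np
      from scipy.optimize import nnls

      # Hypothetical end-member library: columns of E are reference spectra
      # (e.g., augite, andesine, montmorillonite, amorphous silica) on a common grid.
      n_bands = 200
      rng = np.random.default_rng(2)
      E = np.abs(rng.normal(size=(n_bands, 4)))             # stand-in end-member spectra
      true_frac = np.array([0.45, 0.35, 0.15, 0.05])
      mixed = E @ true_frac + rng.normal(0, 0.01, n_bands)  # "measured" mixture spectrum

      # Linear (areal) unmixing with non-negative abundances, then renormalize to sum to 1.
      frac, resid = nnls(E, mixed)
      frac /= frac.sum()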

  8. MULTIVARIATE RECEPTOR MODELS-CURRENT PRACTICE AND FUTURE TRENDS. (R826238)

    EPA Science Inventory

    Multivariate receptor models have been applied to the analysis of air quality data for some time. However, solving the general mixture problem is important in several other fields. This paper looks at the panoply of these models with a view to identifying common challenges and ...

  9. Delineating Cultural Models: Extending the Cultural Mixture Model

    DTIC Science & Technology

    2011-12-01

    identify appropriate therapies or interventions, and provide insight into their behaviors. Personality factors are typically developed through...us information on if they like radishes too. Perhaps there is another group of people who predominantly like chocolate ice cream and carrots

  10. Finding Groups Using Model-Based Cluster Analysis: Heterogeneous Emotional Self-Regulatory Processes and Heavy Alcohol Use Risk

    ERIC Educational Resources Information Center

    Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.

    2008-01-01

    Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models using the Bayesian information criterion to compare multiple models and identify the…
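
    A minimal sketch of the procedure described above, using scikit-learn and synthetic data: fit finite mixtures of multivariate normals for a range of component counts and select the number of components by the Bayesian information criterion (BIC).

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Hypothetical data matrix X: rows are subjects, columns are measured variables.
      rng = np.random.default_rng(3)
      X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (80, 2))])

      # Fit mixtures of multivariate normals and pick the number of components by BIC.
      models = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 7)]
      bics = [m.bic(X) for m in models]
      best = models[int(np.argmin(bics))]
      labels = best.predict(X)            # hard cluster assignments
      posteriors = best.predict_proba(X)  # soft assignments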

  11. Method optimization for drug impurity profiling in supercritical fluid chromatography: Application to a pharmaceutical mixture.

    PubMed

    Muscat Galea, Charlene; Didion, David; Clicq, David; Mangelings, Debby; Vander Heyden, Yvan

    2017-12-01

    A supercritical chromatographic method for the separation of a drug and its impurities has been developed and optimized applying an experimental design approach and chromatogram simulations. Stationary phase screening was followed by optimization of the modifier and injection solvent composition. A design-of-experiment (DoE) approach was then used to optimize column temperature, back-pressure and the gradient slope simultaneously. Regression models for the retention times and peak widths of all mixture components were built. The factor levels for different grid points were then used to predict the retention times and peak widths of the mixture components using the regression models and the best separation for the worst separated peak pair in the experimental domain was identified. A plot of the minimal resolutions was used to help identifying the factor levels leading to the highest resolution between consecutive peaks. The effects of the DoE factors were visualized in a way that is familiar to the analytical chemist, i.e. by simulating the resulting chromatogram. The mixture of an active ingredient and seven impurities was separated in less than eight minutes. The approach discussed in this paper demonstrates how SFC methods can be developed and optimized efficiently using simple concepts and tools. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Combining Mixture Components for Clustering*

    PubMed Central

    Baudry, Jean-Patrick; Raftery, Adrian E.; Celeux, Gilles; Lo, Kenneth; Gottardo, Raphaël

    2010-01-01

    Model-based clustering consists of fitting a mixture model to data and identifying each cluster with one of its components. Multivariate normal distributions are typically used. The number of clusters is usually determined from the data, often using BIC. In practice, however, individual clusters can be poorly fitted by Gaussian distributions, and in that case model-based clustering tends to represent one non-Gaussian cluster by a mixture of two or more Gaussian distributions. If the number of mixture components is interpreted as the number of clusters, this can lead to overestimation of the number of clusters. This is because BIC selects the number of mixture components needed to provide a good approximation to the density, rather than the number of clusters as such. We propose first selecting the total number of Gaussian mixture components, K, using BIC and then combining them hierarchically according to an entropy criterion. This yields a unique soft clustering for each number of clusters less than or equal to K. These clusterings can be compared on substantive grounds, and we also describe an automatic way of selecting the number of clusters via a piecewise linear regression fit to the rescaled entropy plot. We illustrate the method with simulated data and a flow cytometry dataset. Supplemental Materials are available on the journal Web site and described at the end of the paper. PMID:20953302
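
    The two-stage idea can be sketched with scikit-learn and synthetic data: choose the number of Gaussian components by BIC, then greedily merge the pair of components whose combination gives the lowest entropy of the resulting soft clustering. The piecewise linear regression step the authors use to pick the final number of clusters is not shown.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Hypothetical two-dimensional data with one elongated, non-Gaussian cluster.
      rng = np.random.default_rng(4)
      X = np.vstack([rng.normal(0, 1, (150, 2)),
                     rng.normal(5, 1, (100, 2)) * np.array([1.0, 0.3])])

      # Step 1: choose the number of Gaussian components K by BIC.
      fits = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 8)]
      best = min(fits, key=lambda m: m.bic(X))
      tau = best.predict_proba(X)  # n x K posterior membership matrix

      def entropy(tau):
          t = np.clip(tau, 1e-12, 1.0)
          return -(t * np.log(t)).sum()

      # Step 2: hierarchically combine the pair of columns whose merger
      # yields the lowest entropy of the resulting soft clustering.
      merge_history = []
      while tau.shape[1] > 1:
          K = tau.shape[1]
          candidates = []
          for j in range(K):
              for l in range(j + 1, K):
                  merged = np.delete(tau, [j, l], axis=1)
                  merged = np.column_stack([merged, tau[:, j] + tau[:, l]])
                  candidates.append((entropy(merged), j, l, merged))
          ent, j, l, tau = min(candidates, key=lambda c: c[0])
          merge_history.append((K - 1, ent, (j, l)))  # entropy after each merge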

  13. PACE: Probabilistic Assessment for Contributor Estimation- A machine learning-based assessment of the number of contributors in DNA mixtures.

    PubMed

    Marciano, Michael A; Adelman, Jonathan D

    2017-03-01

    The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. In addition, of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important, and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove to be especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare classification performances of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in the field of contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed that 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1 s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state-of-the-art model versus seconds required by a machine learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
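
    The model-comparison step described above can be sketched generically with scikit-learn: cross-validate several candidate classifiers on a feature matrix and keep the best performer. The features and labels below are random placeholders, not the electropherogram-derived features or the specific algorithms used by PACE.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.svm import SVC
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.linear_model import LogisticRegression

      # Hypothetical training set: each row holds summary features computed from a
      # profile (e.g., allele counts and peak-height statistics per locus), and y is
      # the known number of contributors (1-4) in that simulated mixture.
      rng = np.random.default_rng(5)
      X = rng.normal(size=(400, 20))
      y = rng.integers(1, 5, size=400)

      candidates = {
          "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
          "SVM (RBF)": SVC(),
          "k-NN": KNeighborsClassifier(),
          "logistic regression": LogisticRegression(max_iter=1000),
      }
      for name, clf in candidates.items():
          acc = cross_val_score(clf, X, y, cv=5).mean()
          print(f"{name}: mean CV accuracy = {acc:.3f}")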

  14. Disentangling the effects of low pH and metal mixture toxicity on macroinvertebrate diversity

    USGS Publications Warehouse

    Fornaroli, Riccardo; Ippolito, Alessio; Tolkkinen, Mari J.; Mykrä, Heikki; Muotka, Timo; Balistrieri, Laurie S.; Schmidt, Travis S.

    2018-01-01

    One of the primary goals of biological assessment of streams is to identify which of a suite of chemical stressors is limiting their ecological potential. Elevated metal concentrations in streams are often associated with low pH, yet the effects of these two potentially limiting factors of freshwater biodiversity are rarely considered to interact beyond the effects of pH on metal speciation. Using a dataset from two continents, a biogeochemical model of the toxicity of metal mixtures (Al, Cd, Cu, Pb, Zn) and quantile regression, we addressed the relative importance of both pH and metals as limiting factors for macroinvertebrate communities. Current environmental quality standards for metals proved to be protective of stream macroinvertebrate communities and were used as a starting point to assess metal mixture toxicity. A model of metal mixture toxicity accounting for metal interactions was a better predictor of macroinvertebrate responses than a model considering individual metal toxicity. We showed that the direct limiting effect of pH on richness was of the same magnitude as that of chronic metal toxicity, independent of its influence on the availability and toxicity of metals. By accounting for the direct effect of pH on macroinvertebrate communities, we were able to determine that acidic streams supported less diverse communities than neutral streams even when metals were below no-effect thresholds. Through a multivariate quantile model, we untangled the limiting effect of both pH and metals and predicted the maximum diversity that could be expected at other sites as a function of these variables. This model can be used to identify which of the two stressors is more limiting to the ecological potential of running waters.

  15. Disentangling the effects of low pH and metal mixture toxicity on macroinvertebrate diversity.

    PubMed

    Fornaroli, Riccardo; Ippolito, Alessio; Tolkkinen, Mari J; Mykrä, Heikki; Muotka, Timo; Balistrieri, Laurie S; Schmidt, Travis S

    2018-04-01

    One of the primary goals of biological assessment of streams is to identify which of a suite of chemical stressors is limiting their ecological potential. Elevated metal concentrations in streams are often associated with low pH, yet the effects of these two potentially limiting factors of freshwater biodiversity are rarely considered to interact beyond the effects of pH on metal speciation. Using a dataset from two continents, a biogeochemical model of the toxicity of metal mixtures (Al, Cd, Cu, Pb, Zn) and quantile regression, we addressed the relative importance of both pH and metals as limiting factors for macroinvertebrate communities. Current environmental quality standards for metals proved to be protective of stream macroinvertebrate communities and were used as a starting point to assess metal mixture toxicity. A model of metal mixture toxicity accounting for metal interactions was a better predictor of macroinvertebrate responses than a model considering individual metal toxicity. We showed that the direct limiting effect of pH on richness was of the same magnitude as that of chronic metal toxicity, independent of its influence on the availability and toxicity of metals. By accounting for the direct effect of pH on macroinvertebrate communities, we were able to determine that acidic streams supported less diverse communities than neutral streams even when metals were below no-effect thresholds. Through a multivariate quantile model, we untangled the limiting effect of both pH and metals and predicted the maximum diversity that could be expected at other sites as a function of these variables. This model can be used to identify which of the two stressors is more limiting to the ecological potential of running waters. Copyright © 2018 Elsevier Ltd. All rights reserved.
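
    A minimal sketch of the quantile-regression idea underlying this limiting-factor analysis is shown below, with hypothetical site-level variables standing in for the richness, pH, and metal mixture toxicity measures used in the study.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical site-level data: taxon richness, pH, and an index summarizing
      # metal mixture toxicity (stand-in for the biogeochemical model output).
      rng = np.random.default_rng(6)
      df = pd.DataFrame({
          "richness": rng.poisson(20, 300),
          "pH": rng.uniform(4.5, 8.5, 300),
          "metal_index": rng.lognormal(0, 1, 300),
      })

      # Upper-quantile regression approximates the maximum richness attainable at
      # given pH and metal levels, i.e., which stressor is limiting.
      model = smf.quantreg("richness ~ pH + np.log(metal_index)", df).fit(q=0.9)
      print(model.summary())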

  16. A new approach for handling longitudinal count data with zero-inflation and overdispersion: poisson geometric process model.

    PubMed

    Wan, Wai-Yin; Chan, Jennifer S K

    2009-08-01

    In time series of count data, correlated measurements, clustering, and excessive zeros often occur simultaneously in biomedical applications. Ignoring such effects might contribute to misleading treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally developed for modelling positive continuous data and was extended to handle count data. These models are motivated by evaluating the trend development of new tumour counts for bladder cancer patients as well as by identifying useful covariates which affect the count level. The models are implemented using a Bayesian method with Markov chain Monte Carlo (MCMC) algorithms and are assessed using the deviance information criterion (DIC).

  17. Spectrophotometry and organic matter on Iapetus. 1: Composition models

    NASA Technical Reports Server (NTRS)

    Wilson, Peter D.; Sagan, Carl

    1995-01-01

    Iapetus shows a greater hemispheric albedo asymmetry than any other body in the solar system. Hapke scattering theory and optical constants measured in the laboratory are used to identify possible compositions for the dark material on the leading hemisphere of Iapetus. The materials considered are poly-HCN, kerogen, Murchison organic residue, Titan tholin, ice tholin, and water ice. Three-component mixtures of these materials are modeled; an intraparticle mixture of 25% poly-HCN, 10% Murchison residue, and 65% water ice is found to best fit the spectrum, albedo, and phase behavior of the dark material. The Murchison residue and/or water ice can be replaced by kerogen and ice tholin, respectively, and still produce very good fits. Areal and particle mixtures of poly-HCN, Titan tholin, and either ice tholin or Murchison residue are also possible models. Poly-HCN is a necessary component in almost all good models. The presence of poly-HCN can be further tested by high-resolution observations near 4.5 micrometers.

  18. Bayesian Variable Selection for Hierarchical Gene-Environment and Gene-Gene Interactions

    PubMed Central

    Liu, Changlu; Ma, Jianzhong; Amos, Christopher I.

    2014-01-01

    We propose a Bayesian hierarchical mixture model framework that allows us to investigate the genetic and environmental effects, gene-by-gene interactions, and gene-by-environment interactions in the same model. Our approach incorporates the natural hierarchical structure between the main effects and interaction effects into a mixture model, such that our methods tend to remove the irrelevant interaction effects more effectively, resulting in more robust and parsimonious models. We consider both strong and weak hierarchical models. For a strong hierarchical model, both of the main effects between interacting factors must be present for the interactions to be considered in the model development, while for a weak hierarchical model, only one of the two main effects is required to be present for the interaction to be evaluated. Our simulation results show that, in most of the scenarios simulated, the proposed strong and weak hierarchical mixture models work well in controlling false positive rates and provide a powerful approach for identifying the predisposing effects and interactions in gene-environment interaction studies, in comparison with a naive model that does not impose this hierarchical constraint. We illustrated our approach using data for lung cancer and cutaneous melanoma. PMID:25154630

  19. Pattern Recognition Algorithm for High-Sensitivity Odorant Detection in Unknown Environments

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2012-01-01

    In a realistic odorant detection application environment, the collected sensory data is a mix of unknown chemicals with unknown concentrations and noise. The identification of the odorants among these mixtures is a challenge in data recognition. In addition, deriving their individual concentrations in the mix is also a challenge. A deterministic analytical model was developed to accurately identify odorants and calculate their concentrations in a mixture with noisy data.

  20. Heterogeneity in the Latent Structure of PTSD Symptoms among Canadian Veterans

    ERIC Educational Resources Information Center

    Naifeh, James A.; Richardson, J. Don; Del Ben, Kevin S.; Elhai, Jon D.

    2010-01-01

    The current study used factor mixture modeling to identify heterogeneity (i.e., latent classes) in 2 well-supported models of posttraumatic stress disorder's (PTSD) factor structure. Data were analyzed from a clinical sample of 405 Canadian veterans evaluated for PTSD. Results were consistent with our hypotheses. Each PTSD factor model was best…

  1. Using dynamic N-mixture models to test cavity limitation on northern flying squirrel demographic parameters using experimental nest box supplementation.

    PubMed

    Priol, Pauline; Mazerolle, Marc J; Imbeau, Louis; Drapeau, Pierre; Trudeau, Caroline; Ramière, Jessica

    2014-06-01

    Dynamic N-mixture models have been recently developed to estimate demographic parameters of unmarked individuals while accounting for imperfect detection. We propose an application of the Dail and Madsen (2011: Biometrics, 67, 577-587) dynamic N-mixture model in a manipulative experiment using a before-after control-impact design (BACI). Specifically, we tested the hypothesis of cavity limitation of a cavity specialist species, the northern flying squirrel, using nest box supplementation on half of 56 trapping sites. Our main purpose was to evaluate the impact of an increase in cavity availability on flying squirrel population dynamics in deciduous stands in northwestern Québec with the dynamic N-mixture model. We compared abundance estimates from this recent approach with those from classic capture-mark-recapture models and generalized linear models. We compared apparent survival estimates with those from Cormack-Jolly-Seber (CJS) models. Average recruitment rate was 6 individuals per site after 4 years. Nevertheless, we found no effect of cavity supplementation on apparent survival and recruitment rates of flying squirrels. Contrary to our expectations, initial abundance was not affected by conifer basal area (food availability) and was negatively affected by snag basal area (cavity availability). Northern flying squirrel population dynamics are not influenced by cavity availability at our deciduous sites. Consequently, we suggest that this species should not be considered an indicator of old forest attributes in our study area, especially in view of apparent wide population fluctuations across years. Abundance estimates from N-mixture models were similar to those from capture-mark-recapture models, although the latter had greater precision. Generalized linear mixed models produced lower abundance estimates, but revealed the same relationship between abundance and snag basal area. Apparent survival estimates from N-mixture models were higher and less precise than those from CJS models. However, N-mixture models can be particularly useful to evaluate management effects on animal populations, especially for species that are difficult to detect in situations where individuals cannot be uniquely identified. They also allow investigating the effects of covariates at the site level, when low recapture rates would require restricting classic CMR analyses to a subset of sites with the most captures.

  2. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    USGS Publications Warehouse

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.

  3. A two-component Bayesian mixture model to identify implausible gestational age.

    PubMed

    Mohammadian-Khoshnoud, Maryam; Moghimbeigi, Abbas; Faradmal, Javad; Yavangi, Mahnaz

    2016-01-01

    Background: Birth weight and gestational age are two important variables in obstetric research. The primary measure of gestational age is based on a mother's recall of her last menstrual period. This recall may cause random or systematic errors. Therefore, the objective of this study is to utilize a Bayesian mixture model to identify implausible gestational ages. Methods: In this cross-sectional study, medical documents of 502 preterm infants born and hospitalized in Hamadan Fatemieh Hospital from 2009 to 2013 were gathered. Preterm infants were classified into less than 28 weeks and 28 to 31 weeks. A two-component Bayesian mixture model was utilized to identify implausible gestational ages; the first component represents the probability of correct classification and the second the probability of incorrect classification of gestational ages. The data were analyzed through OpenBUGS 3.2.2 and the 'coda' package of R 3.1.1. Results: The means (SDs) of the second component for the less than 28 weeks and 28 to 31 weeks groups were 1179 (0.0123) and 1620 (0.0074), respectively. These values were larger than the means of the first component for the two groups, which were 815.9 (0.0123) and 1061 (0.0074), respectively. Conclusion: Errors in recording the gestational ages of these two groups of preterm infants included recording a gestational age lower than the actual value at birth. Therefore, developing scientific methods to correct these errors is essential to providing desirable health services and adjusting accurate health indicators.

  4. Multi-species detection using multi-mode absorption spectroscopy (MUMAS)

    NASA Astrophysics Data System (ADS)

    Northern, J. H.; Thompson, A. W. J.; Hamilton, M. L.; Ewart, P.

    2013-06-01

    The detection of multiple species using a single laser and single detector employing multi-mode absorption spectroscopy (MUMAS) is reported. An in-house constructed, diode-pumped, Er:Yb:glass micro-laser operating at 1,565 nm with 10 modes separated by 18 GHz was used to record MUMAS signals in a gas mixture containing C2H2, N2O and CO. The components of the mixture were detected simultaneously by identifying multiple transitions in each of the species. By using temperature- and pressure-dependent modelled spectral fits to the data, partial pressures of each species in the mixture were determined with an uncertainty of ±2 %.

  5. Experimental and modeling study on effects of N2 and CO2 on ignition characteristics of methane/air mixture

    PubMed Central

    Zeng, Wen; Ma, Hongan; Liang, Yuntao; Hu, Erjiang

    2014-01-01

    The ignition delay times of a methane/air mixture diluted by N2 and CO2 were experimentally measured in a chemical shock tube. The experiments were performed over the temperature range of 1300–2100 K, pressure range of 0.1–1.0 MPa, equivalence ratio range of 0.5–2.0 and for the dilution coefficients of 0%, 20% and 50%. The results suggest that a linear relationship exists between the reciprocal of temperature and the logarithm of the ignition delay times. Meanwhile, the measured ignition delay times of the methane/air mixture decrease with increasing ignition temperature and pressure. Furthermore, an increase in the dilution coefficient of N2 or CO2 results in increasing ignition delays, and the inhibition effect of CO2 on methane/air mixture ignition is stronger than that of N2. Simulated ignition delays of the methane/air mixture using three kinetic models were compared to the experimental data. Results show that the GRI_3.0 mechanism gives the best prediction of ignition delays of the methane/air mixture, and it was selected to identify the effects of N2 and CO2 on ignition delays and the key elementary reactions in the ignition chemistry of the methane/air mixture. Comparisons of the calculated ignition delays with the experimental data of the methane/air mixture diluted by N2 and CO2 show excellent agreement, and sensitivity coefficients of chain branching reactions which promote mixture ignition decrease with increasing dilution coefficient of N2 or CO2. PMID:25750753
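
    The linear relation between 1/T and the logarithm of the ignition delay reported above is an Arrhenius-type correlation, which can be fitted as follows; the delay times and temperatures are illustrative values, not the shock-tube measurements.

      import numpy as np

      # Hypothetical ignition delay times (microseconds) at several temperatures (K),
      # for one pressure, equivalence ratio, and dilution coefficient.
      T = np.array([1350.0, 1500.0, 1650.0, 1800.0, 1950.0])
      tau = np.array([2100.0, 820.0, 390.0, 210.0, 120.0])

      # Arrhenius-type correlation: ln(tau) = ln(A) + Ea / (R * T),
      # so the slope of ln(tau) versus 1/T is Ea / R.
      slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
      R = 8.314  # J/(mol K)
      Ea = slope * R       # effective global activation energy, J/mol
      A = np.exp(intercept)
      print(f"tau ~ {A:.3g} * exp({Ea / 1000:.1f} kJ/mol / (R*T))")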

  6. Development of Kinetics and Mathematical Models for High-Pressure Gasification of Lignite-Switchgrass Blends: Cooperative Research and Development Final Report, CRADA Number CRD-11-447

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iisa, Kristiina

    2016-04-06

    NREL will work with Participant as a subtier partner under DE-FOA-0000240 titled "Co-Production of Power, Fuels, and Chemicals via Coal/Biomass Mixtures." The goal of the project is to determine the gasification characteristics of switchgrass and lignite mixtures and develop kinetic models. NREL will utilize a pressurized thermogravimetric analyzer to measure the reactivity of chars generated in a pressurized entrained-flow reactor at Participant's facilities and to determine the evolution of gaseous species during pyrolysis of switchgrass-lignite mixtures. Mass spectrometry and Fourier-transform infrared analysis will be used to identify and quantify the gaseous species. The results of the project will aid in defining key reactive properties of mixed coal biomass fuels.

  7. Advanced oxidation of commercial herbicides mixture: experimental design and phytotoxicity evaluation.

    PubMed

    López, Alejandro; Coll, Andrea; Lescano, Maia; Zalazar, Cristina

    2017-05-05

    In this work, the suitability of the UV/H2O2 process for the degradation of a commercial herbicide mixture was studied. Glyphosate, the herbicide most widely used in the world, was mixed with other herbicides that have residual activity, such as 2,4-D and atrazine. Modeling of the process response related to specific operating conditions, such as initial pH and the initial H2O2 to total organic carbon (TOC) molar ratio, was assessed by response surface methodology (RSM). Results have shown that a second-order polynomial regression model could well describe and predict the system behavior within the tested experimental region. It also correctly explained the variability in the experimental data. Experimental values were in good agreement with the modeled ones, confirming the significance of the model and highlighting the success of RSM for UV/H2O2 process modeling. Phytotoxicity evolution throughout the photolytic degradation process was checked through germination tests, indicating that the phytotoxicity of the herbicide mixture was significantly reduced after the treatment. The end point for the treatment at the operating conditions for maximum TOC conversion was also identified.
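
    The second-order polynomial response surface mentioned above can be sketched with a plain least-squares fit; the design points, responses, and factor ranges below are hypothetical placeholders rather than the study's experimental region.

      import numpy as np

      # Hypothetical design points: factors are initial pH and the initial
      # H2O2-to-TOC molar ratio; the response is TOC conversion.
      pH = np.array([3.0, 3.0, 7.0, 7.0, 5.0, 5.0, 5.0, 2.2, 7.8, 5.0])
      ratio = np.array([5.0, 15.0, 5.0, 15.0, 10.0, 10.0, 10.0, 10.0, 10.0, 3.0])
      toc_conv = np.array([0.35, 0.62, 0.28, 0.55, 0.58, 0.60, 0.57, 0.40, 0.33, 0.25])

      # Full second-order (quadratic) surface: intercept, linear, interaction, squared terms.
      X = np.column_stack([np.ones_like(pH), pH, ratio, pH * ratio, pH**2, ratio**2])
      coef, *_ = np.linalg.lstsq(X, toc_conv, rcond=None)

      def predict(ph, r):
          return coef @ np.array([1.0, ph, r, ph * r, ph**2, r**2])

      # Evaluate the fitted surface over a grid to locate conditions maximizing TOC conversion.
      grid = [(p, r, predict(p, r))
              for p in np.linspace(2.2, 7.8, 30) for r in np.linspace(3, 15, 30)]
      best = max(grid, key=lambda g: g[2])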

  8. Modeling sports highlights using a time-series clustering framework and model interpretation

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, Regunathan; Otsuka, Isao; Xiong, Ziyou; Divakaran, Ajay

    2005-01-01

    In our past work on sports highlights extraction, we have shown the utility of detecting audience reaction using an audio classification framework. The audio classes in the framework were chosen based on intuition. In this paper, we present a systematic way of identifying the key audio classes for sports highlights extraction using a time series clustering framework. We treat the low-level audio features as a time series and model the highlight segments as "unusual" events in a background of a "usual" process. The set of audio classes to characterize the sports domain is then identified by analyzing the consistent patterns in each of the clusters output from the time series clustering framework. The distribution of features from the training data so obtained for each of the key audio classes is parameterized by a Minimum Description Length Gaussian Mixture Model (MDL-GMM). We also interpret the meaning of each of the mixture components of the MDL-GMM for the key audio class (the "highlight" class) that is correlated with highlight moments. Our results show that the "highlight" class is a mixture of audience cheering and commentator's excited speech. Furthermore, we show that the precision-recall performance for highlights extraction based on this "highlight" class is better than that of our previous approach which uses only audience cheering as the key highlight class.

  9. Characterization of low-temperature properties of plant-produced rap mixtures in the Northeast

    NASA Astrophysics Data System (ADS)

    Medeiros, Marcelo S., Junior

    The dissertation outlined herein results from a Federal Highway Administration sponsored project intended to investigate the impacts of high percentages of RAP material in the performance of pavements under cold climate conditions. It is comprised of two main sections that were incorporated into the body of this dissertation as Part I and Part II. In Part I a reduced testing framework for analysis of HMA mixes was proposed to replace the IDT creep compliance and strength testing by dynamic modulus and fatigue tests performed on an AMPT device. A continuum damage model that incorporates the nonlinear constitutive behavior of the HMA mixtures was also successfully implemented and validated. Mixtures with varying percentages of reclaimed material (RAP) ranging from 0 to 40% were used in this research effort in order to verify the applicability of the proposed methodology to RAP mixtures. Part II is concerned with evaluating the effects of various binder grades on the properties of plant-produced mixtures with various percentages of RAP. The effects of RAP on mechanical and rheological properties of mixtures and extracted binders were studied in order to identify some of the deficiencies in the current production methodologies. The results of this dissertation will help practitioners to identify optimal RAP usage from a material property perspective. It also establishes some guidelines and best practices for the use of higher RAP percentages in HMA.

  10. Employment Trajectories: Exploring Gender Differences and Impacts of Drug Use

    ERIC Educational Resources Information Center

    Huang, David Y. C.; Evans, Elizabeth; Hara, Motoaki; Weiss, Robert E.; Hser, Yih-Ing

    2011-01-01

    This study investigated the impact of drug use on employment over 20 years among men and women, utilizing data on 7661 participants in the National Longitudinal Survey of Youth. Growth mixture modeling was applied, and five distinct employment trajectory groups were identified for both men and women. The identified patterns were largely similar…

  11. Component spectra extraction from terahertz measurements of unknown mixtures.

    PubMed

    Li, Xian; Hou, D B; Huang, P J; Cai, J H; Zhang, G X

    2015-10-20

    The aim of this work is to extract component spectra from unknown mixtures in the terahertz region. To that end, a method, hard modeling factor analysis (HMFA), was applied to resolve terahertz spectral matrices collected from the unknown mixtures. This method does not require any expertise of the user and allows the consideration of nonlinear effects such as peak variations or peak shifts. It describes the spectra using a peak-based nonlinear mathematic model and builds the component spectra automatically by recombination of the resolved peaks through correlation analysis. Meanwhile, modifications on the method were made to take the features of terahertz spectra into account and to deal with the artificial baseline problem that troubles the extraction process of some terahertz spectra. In order to validate the proposed method, simulated wideband terahertz spectra of binary and ternary systems and experimental terahertz absorption spectra of amino acids mixtures were tested. In each test, not only the number of pure components could be correctly predicted but also the identified pure spectra had a good similarity with the true spectra. Moreover, the proposed method associated the molecular motions with the component extraction, making the identification process more physically meaningful and interpretable compared to other methods. The results indicate that the HMFA method with the modifications can be a practical tool for identifying component terahertz spectra in completely unknown mixtures. This work reports the solution to this kind of problem in the terahertz region for the first time, to the best of the authors' knowledge, and represents a significant advance toward exploring physical or chemical mechanisms of unknown complex systems by terahertz spectroscopy.

  12. Endocrine disrupting chemicals in mixture and obesity, diabetes and related metabolic disorders

    PubMed Central

    Le Magueresse-Battistoni, Brigitte; Labaronne, Emmanuel; Vidal, Hubert; Naville, Danielle

    2017-01-01

    Obesity and associated metabolic disorders represent a major societal challenge in health and quality of life with large psychological consequences in addition to physical disabilities. They are also one of the leading causes of morbidity and mortality. Although different etiologic factors, including excessive food intake and reduced physical activity, have been well identified, they cannot explain the kinetics of the epidemic evolution of obesity and diabetes, with prevalence rates reaching pandemic proportions. Interestingly, convincing data have shown that environmental pollutants, specifically those endowed with endocrine disrupting activities, could contribute to the etiology of these multifactorial metabolic disorders. Within this review, we will recapitulate characteristics of endocrine disruption. We will demonstrate that metabolic disorders could originate from endocrine disruption with a particular focus on convincing data from the literature. Finally, we will present how an original mouse model of chronic exposure to a mixture of pollutants made it possible to demonstrate that a mixture of pollutants, each at doses beyond their active dose, could induce substantial deleterious effects on several metabolic end-points. This proof-of-concept study, as well as other studies on mixtures of pollutants, stresses the need to revisit the current threshold model used in risk assessment, which does not take into account potential effects of mixtures containing pollutants at environmental doses, i.e., real-life exposure. Certainly, more studies are necessary to better determine the nature of the chemicals to which humans are exposed and at which level, and their health impact. As well, research studies on substitute products are essential to identify harmless molecules. PMID:28588754

  13. Trajectories of adolescent substance use development and the influence of healthy leisure: A growth mixture modeling approach.

    PubMed

    Weybright, Elizabeth H; Caldwell, Linda L; Ram, Nilam; Smith, Edward A; Wegner, Lisa

    2016-06-01

    Considerable heterogeneity exists in adolescent substance use development. To most effectively prevent use, distinct trajectories of use must be identified, as well as differential associations with predictors of use, such as leisure experience. The current study used a person-centered approach to identify distinct substance use trajectories and how leisure is associated with trajectory classes. Data came from a larger efficacy trial of 2,249 South African high school students who reported substance use at any time across 8 waves. Growth mixture modeling was used to identify developmental trajectories of substance use and the influence of healthy leisure. Results identified three increasing substance use trajectories and one stable trajectory, and subjective healthy leisure served to protect against use. This study is the first of its kind to focus on a sample of South African adolescents and serves to develop a richer understanding of substance use development and the role of healthy leisure. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  14. A Mathematical Model of the Olfactory Bulb for the Selective Adaptation Mechanism in the Rodent Olfactory System.

    PubMed

    Soh, Zu; Nishikawa, Shinya; Kurita, Yuichi; Takiguchi, Noboru; Tsuji, Toshio

    2016-01-01

    To predict the odor quality of an odorant mixture, the interaction between odorants must be taken into account. Previously, an experiment in which mice discriminated between odorant mixtures identified a selective adaptation mechanism in the olfactory system. This paper proposes an olfactory model for odorant mixtures that can account for selective adaptation in terms of neural activity. The proposed model uses the spatial activity pattern of the mitral layer obtained from model simulations to predict the perceptual similarity between odors. Measured glomerular activity patterns are used as input to the model. The neural interaction between mitral cells and granular cells is then simulated, and a dissimilarity index between odors is defined using the activity patterns of the mitral layer. An odor set composed of three odorants is used to test the ability of the model. Simulations are performed based on the odor discrimination experiment on mice. As a result, we observe that part of the neural activity in the glomerular layer is enhanced in the mitral layer, whereas another part is suppressed. We find that the dissimilarity index strongly correlates with the odor discrimination rate of mice: r = 0.88 (p = 0.019). We conclude that our model has the ability to predict the perceptual similarity of odorant mixtures. In addition, the model also accounts for selective adaptation via the odor discrimination rate, and the enhancement and inhibition in the mitral layer may be related to this selective adaptation.

  15. Extraction and identification of mixed pesticides’ Raman signal and establishment of their prediction models

    USDA-ARS?s Scientific Manuscript database

    A nondestructive and sensitive method was developed to detect the presence of mixed pesticides of acetamiprid, chlorpyrifos and carbendazim on apples by surface-enhanced Raman spectroscopy (SERS). Self-modeling mixture analysis (SMA) was used to extract and identify the Raman spectra of individual p...

  16. Application of Biologically Based Lumping To Investigate the Toxicokinetic Interactions of a Complex Gasoline Mixture.

    PubMed

    Jasper, Micah N; Martin, Sheppard A; Oshiro, Wendy M; Ford, Jermaine; Bushnell, Philip J; El-Masri, Hisham

    2016-03-15

    People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate the performance of our PBPK model and chemical lumping method. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course toxicokinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 nontarget chemicals. The same biologically based lumping approach can be used to simplify any complex mixture with tens, hundreds, or thousands of constituents.

  17. Simulation of toluene decomposition in a pulse-periodic discharge operating in a mixture of molecular nitrogen and oxygen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trushkin, A. N.; Kochetov, I. V.

    The kinetic model of toluene decomposition in nonequilibrium low-temperature plasma generated by a pulse-periodic discharge operating in a mixture of nitrogen and oxygen is developed. The results of numerical simulation of plasma-chemical conversion of toluene are presented; the main processes responsible for C6H5CH3 decomposition are identified; the contribution of each process to total removal of toluene is determined; and the intermediate and final products of C6H5CH3 decomposition are identified. It was shown that toluene in pure nitrogen is mostly decomposed in its reactions with metastable N2(A3Σu+) and N2(a'1Σu−) molecules. In the presence of oxygen, in the N2:O2 gas mixture, the largest contribution to C6H5CH3 removal is made by the hydroxyl radical OH, which is generated in this mixture exclusively due to plasma-chemical reactions between toluene and oxygen decomposition products. Numerical simulation showed the existence of an optimum oxygen concentration in the mixture, at which toluene removal is maximum at a fixed energy deposition.

  18. Simulation of mixture microstructures via particle packing models and their direct comparison with real mixtures

    NASA Astrophysics Data System (ADS)

    Gulliver, Eric A.

    The objective of this thesis was to identify and develop techniques providing direct comparison between simulated and real packed particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop-and-roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and were free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations. Control chart analysis showed that Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered particle-size distributions or mixture composition. Control charts based on tessellation measurements were used for direct, quantitative comparisons between real and simulated mixtures. Four sets of simulated and real mixtures were examined. Data from real mixtures matched simulated data when the samples were well mixed and the particle size distributions and volume fractions of the components were identical. Analysis of mixture components that occupied less than approximately 10 vol% of the mixture was not practical unless the particle size of the component was extremely small and excellent-quality, high-resolution compositional micrographs of the real sample were available. These methods of analysis should allow future researchers to systematically evaluate and predict the impact and importance of variables such as component volume fraction and component particle size distribution as they pertain to the uniformity of powder mixture microstructures.

  19. Modular analysis of the probabilistic genetic interaction network.

    PubMed

    Hou, Lin; Wang, Lin; Qian, Minping; Li, Dong; Tang, Chao; Zhu, Yunping; Deng, Minghua; Li, Fangting

    2011-03-15

    Epistatic Miniarray Profiles (EMAP) has enabled the mapping of large-scale genetic interaction networks; however, the quantitative information gained from EMAP cannot be fully exploited since the data are usually interpreted as a discrete network based on an arbitrary hard threshold. To address such limitations, we adopted a mixture modeling procedure to construct a probabilistic genetic interaction network and then implemented a Bayesian approach to identify densely interacting modules in the probabilistic network. Mixture modeling has been demonstrated as an effective soft-threshold technique of EMAP measures. The Bayesian approach was applied to an EMAP dataset studying the early secretory pathway in Saccharomyces cerevisiae. Twenty-seven modules were identified, and 14 of those were enriched by gold standard functional gene sets. We also conducted a detailed comparison with state-of-the-art algorithms, hierarchical cluster and Markov clustering. The experimental results show that the Bayesian approach outperforms others in efficiently recovering biologically significant modules.
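    As a hedged illustration of the soft-thresholding idea described above, the sketch below fits a two-component Gaussian mixture to simulated interaction scores and uses the posterior probability of the "interaction" component as a probabilistic edge weight; it is not the EMAP pipeline itself, and the simulated scores are assumptions.

```python
# Sketch: soft-threshold genetic interaction scores with a two-component
# Gaussian mixture and use posterior probabilities as probabilistic edges.
# The scores below are simulated, not EMAP measurements.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Simulated S-scores: a background component plus an "interaction" component.
scores = np.concatenate([rng.normal(0.0, 1.0, 5000), rng.normal(-4.0, 1.5, 500)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))

# Treat the component with the more extreme (lower) mean as "interaction".
interaction_comp = int(np.argmin(gmm.means_.ravel()))

# Posterior probability of interaction for each gene pair: a soft edge weight
# for a probabilistic network instead of a hard cutoff.
edge_weight = gmm.predict_proba(scores.reshape(-1, 1))[:, interaction_comp]
print("mean edge weight:", edge_weight.mean().round(3))
```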

  20. Enhancements to the caliop aerosol subtyping and lidar ratio selection algorithms for level II version 4

    NASA Astrophysics Data System (ADS)

    Omar, A.; Tackett, J.; Kim, M.-H.; Vaughan, M.; Kar, J.; Trepte, C.; Winker, D.

    2018-04-01

    Several enhancements have been implemented for the version 4 aerosol subtyping and lidar ratio selection algorithms of Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP). Version 4 eliminates the confusion between smoke and clean marine aerosols seen in version 3 by modifications to the elevated layer flag definitions used to identify smoke aerosols over the ocean. To differentiate between mixtures of dust and smoke, and dust and marine aerosols, a new aerosol type will be added in the version 4 data products. In the marine boundary layer, moderately depolarizing aerosols are no longer modeled as mixtures of dust and smoke (polluted dust) but rather as mixtures of dust and seasalt (dusty marine). Some lidar ratios have been updated in the version 4 algorithms. In particular, the dust lidar ratios have been adjusted to reflect the latest measurements and model studies.

  1. State of research: environmental pathways and food chain transfer.

    PubMed Central

    Vaughan, B E

    1984-01-01

    Data on the chemistry of biologically active components of petroleum, synthetic fuel oils, certain metal elements and pesticides provide valuable generic information needed for predicting the long-term fate of buried waste constituents and their likelihood of entering food chains. Components of such complex mixtures partition between solid and solution phases, influencing their mobility, volatility and susceptibility to microbial transformation. Estimating health hazards from indirect exposures to organic chemicals involves an ecosystem's approach to understanding the unique behavior of complex mixtures. Metabolism by microbial organisms fundamentally alters these complex mixtures as they move through food chains. Pathway modeling of organic chemicals must consider the nature and magnitude of food chain transfers to predict biological risk where metabolites may become more toxic than the parent compound. To obtain predictions, major areas are identified where data acquisition is essential to extend our radiological modeling experience to the field of organic chemical contamination. PMID:6428875

  2. Discrimination of biological and chemical threat simulants in residue mixtures on multiple substrates.

    PubMed

    Gottfried, Jennifer L

    2011-07-01

    The potential of laser-induced breakdown spectroscopy (LIBS) to discriminate biological and chemical threat simulant residues prepared on multiple substrates and in the presence of interferents has been explored. The simulant samples tested include Bacillus atrophaeus spores, Escherichia coli, MS-2 bacteriophage, α-hemolysin from Staphylococcus aureus, 2-chloroethyl ethyl sulfide, and dimethyl methylphosphonate. The residue samples were prepared on polycarbonate, stainless steel and aluminum foil substrates by Battelle Eastern Science and Technology Center. LIBS spectra were collected by Battelle on a portable LIBS instrument developed by A3 Technologies. This paper presents the chemometric analysis of the LIBS spectra using partial least-squares discriminant analysis (PLS-DA). The performance of PLS-DA models developed based on the full LIBS spectra, and selected emission intensities and ratios have been compared. The full-spectra models generally provided better classification results based on the inclusion of substrate emission features; however, the intensity/ratio models were able to correctly identify more types of simulant residues in the presence of interferents. The fusion of the two types of PLS-DA models resulted in a significant improvement in classification performance for models built using multiple substrates. In addition to identifying the major components of residue mixtures, minor components such as growth media and solvents can be identified with an appropriately designed PLS-DA model.
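    A common recipe for PLS-DA, which may approximate what was done here, is to regress spectra onto one-hot class indicators with PLS and classify by the largest predicted indicator. The sketch below applies that recipe to simulated spectra; the class structure and component count are illustrative assumptions rather than the study's settings.

```python
# Sketch: PLS-DA via PLS regression on one-hot class labels.
# Spectra and labels are simulated, not the LIBS data from the study.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_per_class, n_channels, n_classes = 40, 300, 3

# Simulated "spectra": each class is a mean-shifted noisy vector.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_channels))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Regress spectra onto one-hot class indicators, then classify by argmax.
Y_tr = np.eye(n_classes)[y_tr]
pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
y_pred = np.argmax(pls.predict(X_te), axis=1)

print("classification accuracy:", (y_pred == y_te).mean())
```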

  3. Extending the Distributed Lag Model framework to handle chemical mixtures.

    PubMed

    Bello, Ghalib A; Arora, Manish; Austin, Christine; Horton, Megan K; Wright, Robert O; Gennings, Chris

    2017-07-01

    Distributed Lag Models (DLMs) are used in environmental health studies to analyze the time-delayed effect of an exposure on an outcome of interest. Given the increasing need for analytical tools for evaluation of the effects of exposure to multi-pollutant mixtures, this study attempts to extend the classical DLM framework to accommodate and evaluate multiple longitudinally observed exposures. We introduce 2 techniques for quantifying the time-varying mixture effect of multiple exposures on an outcome of interest. Lagged WQS, the first technique, is based on Weighted Quantile Sum (WQS) regression, a penalized regression method that estimates mixture effects using a weighted index. We also introduce Tree-based DLMs, a nonparametric alternative for assessment of lagged mixture effects. This technique is based on the Random Forest (RF) algorithm, a nonparametric, tree-based estimation technique that has shown excellent performance in a wide variety of domains. In a simulation study, we tested the feasibility of these techniques and evaluated their performance in comparison to standard methodology. Both methods exhibited relatively robust performance, accurately capturing pre-defined non-linear functional relationships in different simulation settings. Further, we applied these techniques to data on perinatal exposure to environmental metal toxicants, with the goal of evaluating the effects of exposure on neurodevelopment. Our methods identified critical neurodevelopmental windows showing significant sensitivity to metal mixtures. Copyright © 2017 Elsevier Inc. All rights reserved.
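    The tree-based idea can be illustrated, in much simplified form, by expanding a longitudinal exposure into lagged columns, fitting a random forest, and reading per-lag importances as a rough indicator of sensitive windows. The data and the importance-based interpretation below are illustrative assumptions, not the paper's estimator.

```python
# Sketch: a tree-based distributed-lag analysis on simulated data.
# Lagged exposure columns are fed to a random forest and per-lag importances
# are inspected as a rough indicator of sensitive exposure windows.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n_subjects, n_lags = 300, 12

# Simulated exposure history (columns = lags 0..11) for a single pollutant.
exposure = rng.normal(size=(n_subjects, n_lags))

# Simulated outcome that depends non-linearly on lags 3-5 only.
y = np.tanh(exposure[:, 3] + 0.8 * exposure[:, 4] + 0.5 * exposure[:, 5])
y += rng.normal(scale=0.2, size=n_subjects)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(exposure, y)

for lag, imp in enumerate(rf.feature_importances_):
    print(f"lag {lag:2d}: importance {imp:.3f}")
```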

  4. Behavioural and biochemical responses to metals tested alone or in mixture (Cd-Cu-Ni-Pb-Zn) in Gammarus fossarum: From a multi-biomarker approach to modelling metal mixture toxicity.

    PubMed

    Lebrun, Jérémie D; Uher, Emmanuelle; Fechner, Lise C

    2017-12-01

    Metals are usually present as mixtures at low concentrations in aquatic ecosystems. However, the toxicity and sub-lethal effects of metal mixtures on organisms are still poorly addressed in environmental risk assessment. Here we investigated the biochemical and behavioural responses of Gammarus fossarum to Cu, Cd, Ni, Pb and Zn tested individually or in mixture (M2X) at concentrations twice the levels of environmental quality standards (EQSs) from the European Water Framework Directive. The same metal mixture was also tested with concentrations equivalent to EQSs (M1X), thus in a regulatory context, as EQSs are proposed to protect aquatic biota. For each exposure condition, mortality, locomotion, respiration and enzymatic activities involved in digestive metabolism and moult were monitored over a 120-h exposure period. Multi-metric variations were summarized by the integrated biomarker response index (IBR). Mono-metallic exposures shed light on biological alterations occurring in gammarids at environmental exposure levels, which depended on the metal considered and on gender. As regards mixtures, biomarkers were altered for both M2X and M1X. However, no additive or synergistic effect of metals was observed compared to mono-metallic exposures. Indeed, bioaccumulation data highlighted competitive interactions between metals in M2X, subsequently decreasing their internalisation and toxicity. IBR values indicated that the health of gammarids was more impacted by M1X than by M2X, because of reduced competition and enhanced uptake of metals in the mixture at the lower, EQS-level concentrations. Models using bioconcentration data obtained from mono-metallic exposures generated successful predictions of global toxicity for both M1X and M2X. We conclude that sub-lethal effects of mixtures identified by the multi-biomarker approach can lead to disturbances in the population dynamics of gammarids. Although IBR-based models offer promising lines of enquiry to predict metal mixture toxicity, further studies are needed to confirm their predictive quality on larger ranges of metallic combinations before their use in field conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Using Latent Class Analysis to Model Temperament Types.

    PubMed

    Loken, Eric

    2004-10-01

    Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.
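    The multiple-imputation idea mentioned above can be sketched on simulated data: class-membership draws are taken from posterior probabilities rather than assigning each case to its most probable class. In the sketch, a Gaussian mixture stands in for the latent class model, which would normally use categorical indicators, and all data are invented.

```python
# Sketch: mixture-based classification with multiple imputation of class
# membership drawn from posterior probabilities. Data are simulated, and a
# Gaussian mixture stands in for a latent class model on categorical codes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
X = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(100, 2)),
               rng.normal([3.0, 3.0], 1.0, size=(80, 2)),
               rng.normal([0.0, 4.0], 1.0, size=(60, 2))])

gmm = GaussianMixture(n_components=3, n_init=5, random_state=0).fit(X)
posterior = gmm.predict_proba(X)          # per-case class probabilities

# Draw several imputed memberships instead of one hard assignment, so that
# downstream analyses can reflect classification uncertainty.
n_imputations = 20
imputed = np.array([[rng.choice(3, p=p) for p in posterior]
                    for _ in range(n_imputations)])

print("hard assignments per class:", np.bincount(posterior.argmax(axis=1)))
print("one imputed draw per class:", np.bincount(imputed[0]))
```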

  6. What is the "Clim-Likely" aerosol product?

    Atmospheric Science Data Center

    2014-12-08

    ... identifying a range of components and mixtures for the MISR Standard Aerosol Retrieval Algorithm climatology, and as one standard against ... retrieval results. Six component aerosols included in the model were medium and coarse mode mineral dust, sulfate, sea salt, black ...

  7. Admixture analysis of age at onset in first episode bipolar disorder.

    PubMed

    Nowrouzi, Behdin; McIntyre, Roger S; MacQueen, Glenda; Kennedy, Sidney H; Kennedy, James L; Ravindran, Arun; Yatham, Lakshmi; De Luca, Vincenzo

    2016-09-01

    Many studies have used the admixture analysis to separate age-at-onset (AAO) subgroups in bipolar disorder, but none of them examined first episode patients. The purpose of this study was to investigate the influence of clinical variables on AAO in first episode bipolar patients. The admixture analysis was applied to identify the model best fitting the observed AAO distribution of a sample of 194 patients with DSM-IV diagnosis of bipolar disorder and the finite mixture model was applied to assess the effect of clinical covariates on AAO. Using the BIC method, the model that was best fitting the observed distribution of AAO was a mixture of three normal distributions. We identified three AAO groups: early age-at-onset (EAO) (µ=18.0, σ=2.88), intermediate-age-at-onset (IAO) (µ=28.7, σ=3.5), and late-age-at-onset (LAO) (µ=47.3, σ=7.8), comprising 69%, 22%, and 9% of the sample respectively. Our first episode sample distribution model was significantly different from most of the other studies that applied the mixture analysis. The main limitation is that our sample may have inadequate statistical power to detect the clinical associations with the AAO subgroups. This study confirms that bipolar disorder can be classified into three groups based on AAO distribution. The data reported in our paper provide more insight into the diagnostic heterogeneity of bipolar disorder across the three AAO subgroups. Copyright © 2016 Elsevier B.V. All rights reserved.
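    A minimal sketch of BIC-guided selection of the number of normal components for an age-at-onset distribution is shown below; the simulated ages only mimic the three-group pattern reported above and are not the study data.

```python
# Sketch: BIC-based choice of the number of normal components for an
# age-at-onset distribution (ages are simulated, not the study sample).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
aao = np.concatenate([rng.normal(18, 3, 140),    # early onset
                      rng.normal(29, 4, 40),     # intermediate onset
                      rng.normal(47, 8, 20)])    # late onset
aao = aao.reshape(-1, 1)

fits = {k: GaussianMixture(n_components=k, n_init=10, random_state=0).fit(aao)
        for k in range(1, 5)}
best_k = min(fits, key=lambda k: fits[k].bic(aao))
best = fits[best_k]

print("selected number of components:", best_k)
for w, mu, var in zip(best.weights_, best.means_.ravel(),
                      best.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={mu:.1f}  sd={np.sqrt(var):.1f}")
```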

  8. Optical Constants of Mars Candidate Materials used to Model Laboratory Reflectance Spectra of Mixtures

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; Brown, Adrian Jon; Blake, D.; Bristow, T.

    2014-01-01

    Data obtained at visible and nearinfrared wavelengths by OMEGA on MarsExpress and CRISM on MRO provide definitive evidence for the presence of phyllosilicates and other hydrated phases on Mars. A diverse range of both Fe/Mg-OH and Al- OH-bearing phyllosilicates were identified including the smectites nontronite, saponite, and montmorillonite. To constrain the abundances of these phyllosilicates, spectral analyses of mixtures are needed. We report on our effort to enable the quantitative evaluation of the abundance of hydrated-hydroxylated silicates when they are contained in mixtures. Here we focus on two component mixtures of the hydrated/ hydroxylated silicates, saponite and montmorillonite (Mg- and Al-rich smectites) with each other and with two analogs for other Martian materials; pyroxene (enstatite) and palagonitic soil (an alteration product of basaltic glass, hereafter referred to as palagonite). We prepared three size separates of each end-member for study: 20-45, 63-90, and 125-150 micron. Here we focus upon mixtures of the 63-90 m size fractions.

  9. Systematic Proteomic Approach to Characterize the Impacts of ...

    EPA Pesticide Factsheets

    Chemical interactions have posed a big challenge in toxicity characterization and human health risk assessment of environmental mixtures. To characterize the impacts of chemical interactions on protein and cytotoxicity responses to environmental mixtures, we established a systems biology approach integrating proteomics, bioinformatics, statistics, and computational toxicology to measure expression or phosphorylation levels of 21 critical toxicity pathway regulators and 445 downstream proteins in human BEAS-28 cells treated with 4 concentrations of nickel, 2 concentrations each of cadmium and chromium, as well as 12 defined binary and 8 defined ternary mixtures of these metals in vitro. Multivariate statistical analysis and mathematical modeling of the metal-mediated proteomic response patterns showed a high correlation between changes in protein expression or phosphorylation and cellular toxic responses to both individual metals and metal mixtures. Of the identified correlated proteins, only a small set of proteins including HIF-1a is likely to be responsible for selective cytotoxic responses to different metals and metals mixtures. Furthermore, support vector machine learning was utilized to computationally predict protein responses to uncharacterized metal mixtures using experimentally generated protein response profiles corresponding to known metal mixtures. This study provides a novel proteomic approach for characterization and prediction of toxicities of

  10. Discriminant analysis of fused positive and negative ion mobility spectra using multivariate self-modeling mixture analysis and neural networks.

    PubMed

    Chen, Ping; Harrington, Peter B

    2008-02-01

    A new method coupling multivariate self-modeling mixture analysis and pattern recognition has been developed to identify toxic industrial chemicals using fused positive and negative ion mobility spectra (dual scan spectra). A Smiths lightweight chemical detector (LCD), which can measure positive and negative ion mobility spectra simultaneously, was used to acquire the data. Simple-to-use interactive self-modeling mixture analysis (SIMPLISMA) was used to separate the analytical peaks in the ion mobility spectra from the background reactant ion peaks (RIP). The SIMPLSIMA analytical components of the positive and negative ion peaks were combined together in a butterfly representation (i.e., negative spectra are reported with negative drift times and reflected with respect to the ordinate and juxtaposed with the positive ion mobility spectra). Temperature constrained cascade-correlation neural network (TCCCN) models were built to classify the toxic industrial chemicals. Seven common toxic industrial chemicals were used in this project to evaluate the performance of the algorithm. Ten bootstrapped Latin partitions demonstrated that the classification of neural networks using the SIMPLISMA components was statistically better than neural network models trained with fused ion mobility spectra (IMS).

  11. NGS-based likelihood ratio for identifying contributors in two- and three-person DNA mixtures.

    PubMed

    Chan Mun Wei, Joshua; Zhao, Zicheng; Li, Shuai Cheng; Ng, Yen Kaow

    2018-06-01

    DNA fingerprinting, also known as DNA profiling, serves as a standard procedure in forensics to identify a person by the short tandem repeat (STR) loci in their DNA. By comparing the STR loci between DNA samples, practitioners can calculate a probability of match to identify the contributors of a DNA mixture. Most existing methods are based on the 13 core STR loci identified by the Federal Bureau of Investigation (FBI). Forensic analyses of DNA mixtures based on these loci are highly variable in procedure and suffer from subjectivity as well as bias in complex mixture interpretation. With the emergence of next-generation sequencing (NGS) technologies, the sequencing of billions of DNA molecules can be parallelized, thus greatly increasing throughput and reducing the associated costs. This allows the creation of new techniques that incorporate more loci to enable complex mixture interpretation. In this paper, we propose a likelihood ratio computation that uses NGS data for DNA testing on mixed samples. We have applied the method to 4480 simulated DNA mixtures, which consist of various mixture proportions of 8 unrelated whole-genome sequencing samples. The results confirm the feasibility of utilizing NGS data in DNA mixture interpretations. We observed an average likelihood ratio as high as 285,978 for two-person mixtures. Using our method, all 224 identity tests for two-person and three-person mixtures yielded correct identifications. Copyright © 2018 Elsevier Ltd. All rights reserved.
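    To illustrate the likelihood-ratio idea on NGS-style data, the toy sketch below compares binomial read-count likelihoods at a single biallelic locus under two hypothesized contributor pairs. Genotypes, read counts, and mixture proportions are invented, and real forensic models add many effects omitted here.

```python
# Sketch: a toy single-locus likelihood ratio from NGS read counts.
# Genotypes, read counts, and mixture proportions are invented; real forensic
# LR models handle stutter, drop-in/drop-out, degradation, and many loci.
from scipy.stats import binom

def allele_a_fraction(genotypes, proportions):
    """Expected fraction of reads carrying allele 'A', given contributor
    genotypes (two alleles each) and their mixture proportions."""
    return sum(p * g.count("A") / 2.0 for g, p in zip(genotypes, proportions))

reads_total, reads_a = 1000, 830       # hypothetical read counts at one locus
proportions = (0.7, 0.3)               # hypothetical contributor proportions

# H1: suspect "AA" plus known contributor "AB".
# H2: unknown contributor "AB" plus known contributor "AB".
f_h1 = allele_a_fraction(("AA", "AB"), proportions)
f_h2 = allele_a_fraction(("AB", "AB"), proportions)

lr = binom.pmf(reads_a, reads_total, f_h1) / binom.pmf(reads_a, reads_total, f_h2)
print(f"expected A-fraction under H1 = {f_h1:.2f}, under H2 = {f_h2:.2f}")
print(f"likelihood ratio = {lr:.3g}")
```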

  12. The Association of Latino Children's Kindergarten School Readiness Profiles with Grade 2-5 Literacy Achievement Trajectories

    ERIC Educational Resources Information Center

    Quirk, Matthew; Grimm, Ryan; Furlong, Michael J.; Nylund-Gibson, Karen; Swami, Sruthi

    2016-01-01

    This study utilized latent class analysis (LCA) to identify 5 discernible profiles of Latino children's (N = 1,253) social-emotional, physical, and cognitive school readiness at the time of kindergarten entry. In addition, a growth mixture modeling (GMM) approach was used to identify 3 unique literacy achievement trajectories, across Grades 2-5,…

  13. Transformation and model choice for RNA-seq co-expression analysis.

    PubMed

    Rau, Andrea; Maugis-Rabusseau, Cathy

    2018-05-01

    Although a large number of clustering algorithms have been proposed to identify groups of co-expressed genes from microarray data, the question of if and how such methods may be applied to RNA sequencing (RNA-seq) data remains unaddressed. In this work, we investigate the use of data transformations in conjunction with Gaussian mixture models for RNA-seq co-expression analyses, as well as a penalized model selection criterion to select both an appropriate transformation and number of clusters present in the data. This approach has the advantage of accounting for per-cluster correlation structures among samples, which can be strong in RNA-seq data. In addition, it provides a rigorous statistical framework for parameter estimation, an objective assessment of data transformations and number of clusters and the possibility of performing diagnostic checks on the quality and homogeneity of the identified clusters. We analyze four varied RNA-seq data sets to illustrate the use of transformations and model selection in conjunction with Gaussian mixture models. Finally, we propose a Bioconductor package coseq (co-expression of RNA-seq data) to facilitate implementation and visualization of the recommended RNA-seq co-expression analyses.
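    The coseq package mentioned above is an R/Bioconductor package; the sketch below is only a rough Python analogue of the transformation-plus-Gaussian-mixture workflow, using an arcsine square-root transform of normalized expression profiles and BIC to pick the number of clusters on simulated counts.

```python
# Sketch: a rough Python analogue of transformation + Gaussian mixture
# co-expression clustering. Counts are simulated and the transform choice is
# illustrative only; the study's coseq package (R/Bioconductor) differs.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
n_genes, n_samples = 500, 8
counts = rng.poisson(lam=rng.gamma(2.0, 50.0, size=(n_genes, 1)),
                     size=(n_genes, n_samples)).astype(float)

# Normalize each gene to a compositional profile across samples, then apply
# an arcsine square-root transform so Gaussian components are more plausible.
profiles = counts / counts.sum(axis=1, keepdims=True)
z = np.arcsin(np.sqrt(profiles))

# Select the number of clusters by BIC over a small range of K.
candidates = range(2, 9)
models = {k: GaussianMixture(n_components=k, covariance_type="full",
                             n_init=3, random_state=0).fit(z)
          for k in candidates}
best_k = min(candidates, key=lambda k: models[k].bic(z))
labels = models[best_k].predict(z)
print("chosen K:", best_k, "cluster sizes:", np.bincount(labels))
```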

  14. Multinomial N-mixture models improve the applicability of electrofishing for developing population estimates of stream-dwelling Smallmouth Bass

    USGS Publications Warehouse

    Mollenhauer, Robert; Brewer, Shannon K.

    2017-01-01

    Failure to account for variable detection across survey conditions constrains progressive stream ecology and can lead to erroneous stream fish management and conservation decisions. In addition to variable detection’s confounding long-term stream fish population trends, reliable abundance estimates across a wide range of survey conditions are fundamental to establishing species–environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and CPUE remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates that were related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys. Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the hierarchical framework. We demonstrate the application of this contemporary population estimation method to address a longstanding stream fish management issue. We also detail the advantages and trade-offs of hierarchical population estimation methods relative to CPUE and estimation methods that model each site separately.
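    A minimal removal-sampling multinomial N-mixture likelihood, with constant abundance and detection and no covariates, can be written in a few lines. The sketch below fits it to simulated counts by direct maximization; it is not the covariate model or software used in the study.

```python
# Sketch: a minimal multinomial N-mixture likelihood for removal sampling
# with constant abundance (lambda) and per-pass detection (p). Simulated
# counts only; the study modeled detection covariates with other software.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson
from scipy.special import gammaln

def removal_cell_probs(p, n_passes):
    """Multinomial cell probabilities pi_j = p (1 - p)^(j-1) for passes 1..J."""
    return np.array([p * (1 - p) ** j for j in range(n_passes)])

def neg_log_lik(theta, y, n_max=200):
    lam, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))
    pi = removal_cell_probs(p, y.shape[1])
    log_q = np.log(max(1.0 - pi.sum(), 1e-300))    # prob. of never being caught
    nll = 0.0
    for counts in y:                               # removal counts at one site
        caught = counts.sum()
        ns = np.arange(caught, n_max + 1)          # candidate true abundances
        log_coef = gammaln(ns + 1) - gammaln(counts + 1).sum() - gammaln(ns - caught + 1)
        log_cells = (counts * np.log(pi)).sum() + (ns - caught) * log_q
        nll -= np.log(np.exp(log_coef + log_cells + poisson.logpmf(ns, lam)).sum())
    return nll

# Simulate removal counts at 50 sites with 3 passes, lambda = 6, p = 0.4.
rng = np.random.default_rng(7)
true_lam, true_p, n_sites, n_passes = 6.0, 0.4, 50, 3
pi = removal_cell_probs(true_p, n_passes)
N = rng.poisson(true_lam, size=n_sites)
y = np.array([rng.multinomial(n, np.append(pi, 1 - pi.sum()))[:n_passes] for n in N])

fit = minimize(neg_log_lik, x0=[np.log(5.0), 0.0], args=(y,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1]))
print(f"lambda_hat = {lam_hat:.2f}, p_hat = {p_hat:.2f}")
```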

  15. Application of Biologically-Based Lumping To Investigate the ...

    EPA Pesticide Factsheets

    People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. However, investigators have often considered complex mixtures as one lumped entity. Valuable information can be obtained from these experiments, though this simplification provides little insight into the impact of a mixture's chemical composition on toxicologically-relevant metabolic interactions that may occur among its constituents. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically-based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate performance of our PBPK model. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course kinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for the 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 non-target chemicals. Application of this biologic

  16. Investigation of Profiles of Risk Factors for Adolescent Psychopathology: A Person-Centered Approach

    ERIC Educational Resources Information Center

    Parra, Gilbert R.; DuBois, David L.; Sher, Kenneth J.

    2006-01-01

    Latent variable mixture modeling was used to identify subgroups of adolescents with distinct profiles of risk factors from individual, family, peer, and broader contextual domains. Data were drawn from the National Longitudinal Study of Adolescent Health. Four-class models provided the most theoretically meaningful solutions for both 7th (n = 907;…

  17. A Just-in-Time Learning based Monitoring and Classification Method for Hyper/Hypocalcemia Diagnosis.

    PubMed

    Peng, Xin; Tang, Yang; He, Wangli; Du, Wenli; Qian, Feng

    2017-01-20

    This study focuses on the classification and pathological status monitoring of hyper/hypocalcemia in the calcium regulatory system. By utilizing the Independent Component Analysis (ICA) mixture model, samples from healthy patients are collected, diagnosed, and subsequently classified according to their underlying behaviors, characteristics, and mechanisms. A Just-in-Time Learning (JITL) scheme is then employed to estimate the disease status dynamically. For JITL, a novel similarity index based on the ICA mixture model is proposed in this paper to identify relevant datasets and improve online model quality. The validity and effectiveness of the proposed approach have been demonstrated by applying it to the calcium regulatory system under various hypocalcemic and hypercalcemic diseased conditions.

  18. The effect of binary mixtures of zinc, copper, cadmium, and nickel on the growth of the freshwater diatom Navicula pelliculosa and comparison with mixture toxicity model predictions.

    PubMed

    Nagai, Takashi; De Schamphelaere, Karel A C

    2016-11-01

    The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-d fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC. © 2016 SETAC.
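    The two reference models can be written compactly in terms of single-chemical concentration-response curves. The sketch below computes CA and IA predictions for a hypothetical binary mixture using Hill curves; the EC50 and slope values are invented, not the study's fits.

```python
# Sketch: concentration addition (CA) vs independent action (IA) predictions
# for a binary mixture, from single-chemical Hill concentration-response
# curves. EC50 and slope values are hypothetical, not the study's fits.
import numpy as np
from scipy.optimize import brentq

def effect(c, ec50, hill):
    """Fraction affected for a single chemical (Hill model)."""
    return c**hill / (ec50**hill + c**hill)

def ec_for_effect(e, ec50, hill):
    """Concentration of a single chemical producing fractional effect e."""
    return ec50 * (e / (1 - e)) ** (1 / hill)

# Hypothetical single-metal parameters (free-ion activities, arbitrary units).
params = {"Zn": (10.0, 2.0), "Cu": (1.5, 1.5)}      # (EC50, Hill slope)
mix = {"Zn": 6.0, "Cu": 0.8}                        # mixture concentrations

# Independent action: combined effect of statistically independent actions.
ia = 1 - np.prod([1 - effect(mix[m], *params[m]) for m in params])

# Concentration addition: find the effect e such that sum_i c_i / EC_e,i = 1.
def ca_equation(e):
    return sum(mix[m] / ec_for_effect(e, *params[m]) for m in params) - 1

ca = brentq(ca_equation, 1e-9, 1 - 1e-9)

print(f"IA predicted effect: {ia:.2f}")
print(f"CA predicted effect: {ca:.2f}")
```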

  19. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  20. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    PubMed Central

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite the potential advantages of mixture modeling of mass spectra of peptide/protein mixtures highlighted by preliminary results presented in several papers, the mixture modeling approach had so far not been developed to a stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and we demonstrate improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
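    A toy version of the partition-then-fit idea is sketched below: a synthetic spectrum is split at near-baseline valleys and a small Gaussian mixture is fitted to each fragment. Intensity-weighted resampling is used only because scikit-learn's GaussianMixture does not accept sample weights; the published algorithm differs in detail.

```python
# Sketch: partition a spectrum into fragments at near-baseline valleys and fit
# a small Gaussian mixture to each fragment. The spectrum is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
mz = np.linspace(1000, 1100, 2000)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Synthetic spectrum: two clusters of overlapping peaks plus mild noise.
intensity = (5 * gauss(mz, 1012, 0.8) + 3 * gauss(mz, 1015, 0.9)
             + 4 * gauss(mz, 1060, 1.2) + 2 * gauss(mz, 1064, 1.0))
intensity += rng.normal(0, 0.02, mz.size).clip(min=0)

# Partition: contiguous runs where intensity exceeds a small baseline threshold.
above = intensity > 0.1
edges = np.flatnonzero(np.diff(above.astype(int)))
fragments = [(s + 1, e + 1) for s, e in zip(edges[::2], edges[1::2])]

components = []
for start, end in fragments:
    frag_mz, frag_int = mz[start:end], intensity[start:end]
    # Resample m/z values proportional to intensity, then fit a 2-component GMM
    # (a stand-in for weighted EM, which GaussianMixture does not support).
    sample = rng.choice(frag_mz, size=3000, p=frag_int / frag_int.sum())
    gmm = GaussianMixture(n_components=2, random_state=0).fit(sample.reshape(-1, 1))
    # Weights here are per-fragment; a full implementation would rescale them
    # by the fragment's total intensity when aggregating the whole-spectrum model.
    components.extend(zip(gmm.weights_, gmm.means_.ravel()))

for w, mu in sorted(components, key=lambda c: c[1]):
    print(f"component at m/z {mu:7.2f} with per-fragment weight {w:.2f}")
```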

  1. Bio-remediation of colored industrial wastewaters by the white-rot fungi Phanerochaete chrysosporium and Pleurotus ostreatus and their enzymes.

    PubMed

    Faraco, V; Pezzella, C; Miele, A; Giardina, P; Sannia, G

    2009-04-01

    The effect of Phanerochaete chrysosporium and Pleurotus ostreatus whole cells and their ligninolytic enzymes on models of colored industrial wastewaters was evaluated. Models of acid, direct and reactive dye wastewaters from textile industry have been defined on the basis of discharged amounts, economic relevance and representativeness of chemical structures of the contained dyes. Phanerochaete chrysosporium provided an effective decolourization of direct dye wastewater model, reaching about 45% decolourization in only 1 day of treatment, and about 90% decolourization within 7 days, whilst P. ostreatus was able to decolorize and detoxify acid dye wastewater model providing 40% decolourization in only 1 day, and 60% in 7 days. P. ostreatus growth conditions that induce laccase production (up to 130,000 U/l) were identified, and extra-cellular enzyme mixtures, with known laccase isoenzyme composition, were produced and used in wastewater models decolourization. The mixtures decolorized and detoxified the acid dye wastewater model, suggesting laccases as the main agents of wastewater decolourization by P. ostreatus. A laccase mixture was immobilized by entrapment in Cu-alginate beads, and the immobilized enzymes were shown to be effective in batch decolourization, even after 15 stepwise additions of dye for a total exposure of about 1 month.

  2. The influence of different loads on the remodeling process of a bone and bioresorbable material mixture with voids

    NASA Astrophysics Data System (ADS)

    Giorgio, Ivan; Andreaus, Ugo; Madeo, Angela

    2016-03-01

    A model of a mixture of bone tissue and bioresorbable material with voids was used to numerically analyze the physiological balance between the processes of bone growth and resorption and artificial material resorption in a plate-like sample. The adopted model was derived from a theory for the behavior of porous solids in which the matrix material is linearly elastic and the interstices are void of material. The specimen, constituted by a region of living bone tissue and one of bioresorbable material, was subjected to different in-plane loading conditions, namely pure bending and shear. Ranges of load magnitudes were identified within which physiological states become possible. Furthermore, the consequences of applying different loading conditions were examined at the end of the remodeling process. In particular, the maximum values of bone and material mass densities, and the extents of the zones where bone is reconstructed, were identified and compared for the two different load conditions. From the practical viewpoint, during surgery planning and later rehabilitation, some choice of the following parameters is given: porosity of the graft, material characteristics of the graft, and the composition of the initial tissue/bioresorbable-material mixture, and later, during healing and remodeling, the optimal loading conditions.

  3. A Mixture Approach to Vagueness and Ambiguity

    PubMed Central

    Verheyen, Steven; Storms, Gert

    2013-01-01

    When asked to indicate which items from a set of candidates belong to a particular natural language category, inter-individual differences occur: Individuals disagree which items should be considered category members. The premise of this paper is that these inter-individual differences in semantic categorization reflect both ambiguity and vagueness. Categorization differences are said to be due to ambiguity when individuals employ different criteria for categorization. For instance, individuals may disagree whether hiking or darts is the better example of sports because they emphasize, respectively, whether an activity is strenuous or whether rules apply. Categorization differences are said to be due to vagueness when individuals employ different cut-offs for separating members from non-members. For instance, the decision whether to include hiking in the sports category may hinge on how strenuous different individuals require sports to be. This claim is supported by the application of a mixture model to categorization data for eight natural language categories. The mixture model can identify latent groups of categorizers who regard different items as likely category members (i.e., ambiguity), with categorizers within each of the groups differing in their propensity to provide membership responses (i.e., vagueness). The identified subgroups are shown to emphasize different sets of category attributes when making their categorization decisions. PMID:23667627

  4. Soil mixture composition alters Arabidopsis susceptibility to Pseudomonas syringae infection

    USDA-ARS?s Scientific Manuscript database

    Pseudomonas syringae is a Gram-negative bacterial pathogen that causes disease on more than 100 different plant species, including the model plant Arabidopsis thaliana. Dissection of the Arabidopsis thaliana-Pseudomonas syringae pathosystem has identified many factors that contribute to successful ...

  5. Identifying Latent Classes and Testing Their Determinants in Early Adolescents' Use of Computers and Internet for Learning

    ERIC Educational Resources Information Center

    Heo, Gyun

    2013-01-01

    The purpose of the present study was to identify latent classes resting on early adolescents' change trajectory patterns in using computers and the Internet for learning and to test the effects of gender, self-control, self-esteem, and game use in South Korea. Latent growth mixture modeling (LGMM) was used to identify subpopulations in the Korea…

  6. Nonnegative Matrix Factorization for identification of unknown number of sources emitting delayed signals

    PubMed Central

    Iliev, Filip L.; Stanev, Valentin G.; Vesselinov, Velimir V.

    2018-01-01

    Factor analysis is broadly used as a powerful unsupervised machine learning tool for reconstruction of hidden features in recorded mixtures of signals. In the case of a linear approximation, the mixtures can be decomposed by a variety of model-free Blind Source Separation (BSS) algorithms. Most of the available BSS algorithms consider an instantaneous mixing of signals, while the case when the mixtures are linear combinations of signals with delays is less explored. Especially difficult is the case when the number of sources of the signals with delays is unknown and has to be determined from the data as well. To address this problem, in this paper, we present a new method based on Nonnegative Matrix Factorization (NMF) that is capable of identifying: (a) the unknown number of the sources, (b) the delays and speed of propagation of the signals, and (c) the locations of the sources. Our method can be used to decompose records of mixtures of signals with delays emitted by an unknown number of sources in a nondispersive medium, based only on recorded data. This is the case, for example, when electromagnetic signals from multiple antennas are received asynchronously; or mixtures of acoustic or seismic signals recorded by sensors located at different positions; or when a shift in frequency is induced by the Doppler effect. By applying our method to synthetic datasets, we demonstrate its ability to identify the unknown number of sources as well as the waveforms, the delays, and the strengths of the signals. Using Bayesian analysis, we also evaluate estimation uncertainties and identify the region of likelihood where the positions of the sources can be found. PMID:29518126
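    Plain nonnegative matrix factorization of an instantaneously mixed, nonnegative recording matrix can be demonstrated in a few lines. The sketch below does only that, on simulated signals; it does not implement the delay and source-number estimation described in this record.

```python
# Sketch: basic NMF separation of nonnegative, instantaneously mixed signals.
# The method in the record also estimates delays and the number of sources;
# plain scikit-learn NMF, used here on simulated data, does neither.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(9)
t = np.linspace(0, 10, 1000)

# Two nonnegative source signals and four sensors with random mixing weights.
sources = np.vstack([np.abs(np.sin(2 * np.pi * 0.5 * t)),
                     np.exp(-((t - 6.0) ** 2) / 0.5)])
mixing = rng.uniform(0.2, 1.0, size=(4, 2))
records = mixing @ sources + rng.uniform(0, 0.01, size=(4, t.size))

# Factorize records (sensors x time) into mixing and source estimates.
model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(records)      # estimated mixing matrix (4 x 2)
H = model.components_                 # estimated source signals (2 x time)

print("reconstruction error:", round(model.reconstruction_err_, 4))
print("estimated mixing matrix:\n", W.round(2))
```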

  7. Nonnegative Matrix Factorization for identification of unknown number of sources emitting delayed signals.

    PubMed

    Iliev, Filip L; Stanev, Valentin G; Vesselinov, Velimir V; Alexandrov, Boian S

    2018-01-01

    Factor analysis is broadly used as a powerful unsupervised machine learning tool for reconstruction of hidden features in recorded mixtures of signals. In the case of a linear approximation, the mixtures can be decomposed by a variety of model-free Blind Source Separation (BSS) algorithms. Most of the available BSS algorithms consider an instantaneous mixing of signals, while the case when the mixtures are linear combinations of signals with delays is less explored. Especially difficult is the case when the number of sources of the signals with delays is unknown and has to be determined from the data as well. To address this problem, in this paper, we present a new method based on Nonnegative Matrix Factorization (NMF) that is capable of identifying: (a) the unknown number of the sources, (b) the delays and speed of propagation of the signals, and (c) the locations of the sources. Our method can be used to decompose records of mixtures of signals with delays emitted by an unknown number of sources in a nondispersive medium, based only on recorded data. This is the case, for example, when electromagnetic signals from multiple antennas are received asynchronously; or mixtures of acoustic or seismic signals recorded by sensors located at different positions; or when a shift in frequency is induced by the Doppler effect. By applying our method to synthetic datasets, we demonstrate its ability to identify the unknown number of sources as well as the waveforms, the delays, and the strengths of the signals. Using Bayesian analysis, we also evaluate estimation uncertainties and identify the region of likelihood where the positions of the sources can be found.

  8. Lubrication model for evaporation of binary sessile drops

    NASA Astrophysics Data System (ADS)

    Williams, Adam; Sáenz, Pedro; Karapetsas, George; Matar, Omar; Sefiane, Khellil; Valluri, Prashant

    2017-11-01

    Evaporation of a binary mixture sessile drop from a solid substrate is a highly dynamic and complex process with flow driven by both thermal and solutal Marangoni stresses. Experiments on ethanol/water drops have identified chaotic regimes on both the surface and interior of the droplet, while mixture composition has also been seen to govern drop wettability. Using a lubrication-type approach, we present a finite element model for the evaporation of an axisymmetric binary drop deposited on a heated substrate. We consider a thin drop with a moving contact line, also taking into account the commonly ignored effects of inertia, which drive interfacial instability. We derive evolution equations for the film height, the temperature, and the concentration field, considering that the mixture comprises two ideally mixed volatile components with a surface tension linearly dependent on both temperature and concentration. The properties of the mixture, such as viscosity, also vary locally with concentration. We explore the parameter space to examine the resultant effects on wetting and evaporation, where we find qualitative agreement with experiments in both areas. This enables us to understand the nature of the instabilities that spontaneously emerge over the drop lifetime. EPSRC - EP/K00963X/1.

  9. Modern Methods for Modeling Change in Obesity Research in Nursing.

    PubMed

    Sereika, Susan M; Zheng, Yaguang; Hu, Lu; Burke, Lora E

    2017-08-01

    Persons receiving treatment for weight loss often demonstrate heterogeneity in lifestyle behaviors and health outcomes over time. Traditional repeated measures approaches focus on the estimation and testing of an average temporal pattern, ignoring the interindividual variability about the trajectory. An alternate person-centered approach, group-based trajectory modeling, can be used to identify distinct latent classes of individuals following similar trajectories of behavior or outcome change as a function of age or time and can be expanded to include time-invariant and time-dependent covariates and outcomes. Another latent class method, growth mixture modeling, builds on group-based trajectory modeling to investigate heterogeneity within the distinct trajectory classes. In this applied methodologic study, group-based trajectory modeling for analyzing changes in behaviors or outcomes is described and contrasted with growth mixture modeling. An illustration of group-based trajectory modeling is provided using calorie intake data from a single-group, single-center prospective study for weight loss in adults who are either overweight or obese.

  10. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.

  11. Modeling Growth in Boys' Aggressive Behavior across Elementary School: Links to Later Criminal Involvement, Conduct Disorder, and Antisocial Personality Disorder

    ERIC Educational Resources Information Center

    Schaeffer, Cindy M.; Petras, Hanno; Ialongo, Nicholas; Poduska, Jeanne; Kellam, Sheppard

    2003-01-01

    The present study used general growth mixture modeling to identify pathways of antisocial behavior development within an epidemiological sample of urban, primarily African American boys. Teacher-rated aggression, measured longitudinally from 1st to 7th grade, was used to define growth trajectories. Three high-risk trajectories (chronic high,…

  12. Identification of Anxiety Sensitivity Classes and Clinical Cut-Scores in a Sample of Adult Smokers: Results from a Factor Mixture Model

    PubMed Central

    Allan, Nicholas P.; Raines, Amanda M.; Capron, Daniel W.; Norr, Aaron M.; Zvolensky, Michael J.; Schmidt, Norman B.

    2014-01-01

    Anxiety sensitivity (AS), a multidimensional construct, has been implicated in the development and maintenance of anxiety and related disorders. Recent evidence suggests that AS is a dimensional-categorical construct within individuals. Factor mixture modeling was conducted in a sample of 579 adult smokers (M age = 36.87 years, SD = 13.47) to examine the underlying structure. Participants completed the Anxiety Sensitivity Index-3 and were also given a Structured Clinical Interview for DSM-IV-TR. Three classes of individuals emerged, a high AS (5.2% of the sample), a moderate AS (19.0%), and a normative AS class (75.8%). A cut-score of 23 to identify high AS individuals, and a cut-score of 17 to identify moderate-to-high AS individuals were supported in this study. In addition, the odds of having a concurrent anxiety disorder (controlling for other Axis I disorders) were the highest in the high AS class and the lowest in the normative AS class. PMID:25128664

  13. Application of linear mixed-effects model with LASSO to identify metal components associated with cardiac autonomic responses among welders: a repeated measures study

    PubMed Central

    Zhang, Jinming; Cavallari, Jennifer M; Fang, Shona C; Weisskopf, Marc G; Lin, Xihong; Mittleman, Murray A; Christiani, David C

    2017-01-01

    Background: Environmental and occupational exposure to metals is ubiquitous worldwide, and understanding the hazardous metal components in this complex mixture is essential for environmental and occupational regulations. Objective: To identify hazardous components from metal mixtures that are associated with alterations in cardiac autonomic responses. Methods: Urinary concentrations of 16 types of metals were examined and ‘acceleration capacity’ (AC) and ‘deceleration capacity’ (DC), indicators of cardiac autonomic effects, were quantified from ECG recordings among 54 welders. We fitted linear mixed-effects models with least absolute shrinkage and selection operator (LASSO) to identify metal components that are associated with AC and DC. The Bayesian Information Criterion was used as the criterion for model selection procedures. Results: Mercury and chromium were selected for DC analysis, whereas mercury, chromium and manganese were selected for AC analysis through the LASSO approach. When we fitted the linear mixed-effects models with ‘selected’ metal components only, the effect of mercury remained significant. Every 1 µg/L increase in urinary mercury was associated with −0.58 ms (−1.03, –0.13) changes in DC and 0.67 ms (0.25, 1.10) changes in AC. Conclusion: Our study suggests that exposure to several metals is associated with impaired cardiac autonomic functions. Our findings should be replicated in future studies with larger sample sizes. PMID:28663305
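    Setting aside the mixed-effects structure, the variable-selection step can be sketched with a cross-validated LASSO on standardized urinary metal concentrations. The data below are simulated, and the model ignores the repeated-measures design handled in the study.

```python
# Sketch: cross-validated LASSO to screen metal components associated with a
# cardiac autonomic outcome. Data are simulated, and the repeated-measures
# (mixed-effects) structure used in the study is ignored here for brevity.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)
metals = ["Hg", "Cr", "Mn", "Pb", "Cd", "Ni", "As", "Cu"]
n = 200

X = rng.lognormal(mean=0.0, sigma=0.8, size=(n, len(metals)))
# Simulated deceleration capacity driven mainly by the first two columns.
y = -0.6 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=1.0, size=n)

Xz = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(Xz, y)

selected = [(m, round(c, 3)) for m, c in zip(metals, lasso.coef_) if c != 0.0]
print("selected components:", selected)
```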

  14. Direct Phenotypic Screening in Mice: Identification of Individual, Novel Antinociceptive Compounds from a Library of 734,821 Pyrrolidine Bis-piperazines.

    PubMed

    Houghten, Richard A; Ganno, Michelle L; McLaughlin, Jay P; Dooley, Colette T; Eans, Shainnel O; Santos, Radleigh G; LaVoi, Travis; Nefzi, Adel; Welmaker, Greg; Giulianotti, Marc A; Toll, Lawrence

    2016-01-11

    The hypothesis in the current study is that the simultaneous direct in vivo testing of thousands to millions of systematically arranged mixture-based libraries will facilitate the identification of enhanced individual compounds. Individual compounds identified from such libraries may have increased specificity and decreased side effects early in the discovery phase. Testing began by screening ten diverse scaffolds as single mixtures (ranging from 17,340 to 4,879,681 compounds) for analgesia directly in the mouse tail withdrawal model. The "all X" mixture representing the library TPI-1954 was found to produce significant antinociception and lacked respiratory depression and hyperlocomotor effects using the Comprehensive Laboratory Animal Monitoring System (CLAMS). The TPI-1954 library is a pyrrolidine bis-piperazine and totals 738,192 compounds. This library has 26 functionalities at the first three positions of diversity made up of 28,392 compounds each (26 × 26 × 42) and 42 functionalities at the fourth made up of 19,915 compounds each (26 × 26 × 26). The 120 resulting mixtures representing each of the variable four positions were screened directly in vivo in the mouse 55 °C warm-water tail-withdrawal assay (ip administration). The 120 samples were then ranked in terms of their antinociceptive activity. The synthesis of 54 individual compounds was then carried out. Nine of the individual compounds produced dose-dependent antinociception equivalent to morphine. In practical terms what this means is that one would not expect multiexponential increases in activity as we move from the all-X mixture, to the positional scanning libraries, to the individual compounds. Actually because of the systematic formatting one would typically anticipate steady increases in activity as the complexity of the mixtures is reduced. This is in fact what we see in the current study. One of the final individual compounds identified, TPI 2213-17, lacked significant respiratory depression, locomotor impairment, or sedation. Our results represent an example of this unique approach for screening large mixture-based libraries directly in vivo to rapidly identify individual compounds.

  15. Potential Chemical Effects of Changes in the Source of Water Supply for the Albuquerque Bernalillo County Water Utility Authority

    USGS Publications Warehouse

    Bexfield, Laura M.; Anderholm, Scott K.

    2008-01-01

    Chemical modeling was used by the U.S. Geological Survey, in cooperation with the Albuquerque Bernalillo County Water Utility Authority (henceforth, Authority), to gain insight into the potential chemical effects that could occur in the Authority's water distribution system as a result of changing the source of water used for municipal and industrial supply from ground water to surface water, or to some mixture of the two sources. From historical data, representative samples of ground-water and surface-water chemistry were selected for modeling under a range of environmental conditions anticipated to be present in the distribution system. Mineral phases calculated to have the potential to precipitate from ground water were compared with the compositions of precipitate samples collected from the current water distribution system and with mineral phases calculated to have the potential to precipitate from surface water and ground-water/surface-water mixtures. Several minerals that were calculated to have the potential to precipitate from ground water in the current distribution system were identified in precipitate samples from pipes, reservoirs, and water heaters. These minerals were the calcium carbonates aragonite and calcite, and the iron oxides/hydroxides goethite, hematite, and lepidocrocite. Several other minerals that were indicated by modeling to have the potential to precipitate were not found in precipitate samples. For most of these minerals, either the kinetics of formation were known to be unfavorable under conditions present in the distribution system or the minerals typically are not formed through direct precipitation from aqueous solutions. The minerals with potential to precipitate as simulated for surface-water samples and ground-water/surface-water mixtures were quite similar to the minerals with potential to precipitate from ground-water samples. Based on the modeling results along with kinetic considerations, minerals that appear most likely to either dissolve or newly precipitate when surface water or ground-water/surface-water mixtures are delivered through the Authority's current distribution system are carbonates (particularly aragonite and calcite). Other types of minerals having the potential to dissolve or newly precipitate under conditions present throughout most of the distribution system include a form of silica, an aluminum hydroxide (gibbsite or diaspore), or the Fe-containing mineral Fe3(OH)8. Dissolution of most of these minerals (except perhaps the Fe-containing minerals) is not likely to substantially affect trace-element concentrations or aesthetic characteristics of delivered water, except perhaps hardness. Precipitation of these minerals would probably be of concern only if the quantities of material involved were large enough to clog pipes or fixtures. The mineral Fe3(OH)8 was not found in the current distribution system. Some Fe-containing minerals that were identified in the distribution system were associated with relatively high contents of selected elements, including As, Cr, Cu, Mn, Pb, and Zn. However, these Fe-containing minerals were not identified as minerals likely to dissolve when the source of water was changed from ground water to surface water or a ground-water/surface-water mixture. Based on the modeled potential for calcite precipitation and additional calculations of corrosion indices, ground water, surface water, and ground-water/surface-water mixtures are not likely to differ greatly in corrosion potential.
In particular, surface water and ground-water/surface-water mixtures do not appear likely to dissolve large quantities of existing calcite and expose metal surfaces in the distribution system to substantially increased corrosion. Instead, modeling calculations indicate that somewhat larger masses of material would tend to precipitate from surface water or ground-water/surface-water mixtures compared to ground water alone.
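
    A worked example of the saturation-index reasoning that underlies this kind of precipitation/dissolution modeling is sketched below; the study itself used full geochemical speciation modeling, and the ion activities and the calcite log Ksp of -8.48 (25 degrees C) used here are illustrative assumptions rather than values from the report.

```python
import math

def saturation_index(activity_ca, activity_co3, log_ksp=-8.48):
    """SI = log10(IAP / Ksp) for calcite, with IAP = a(Ca2+) * a(CO3^2-).
    SI > 0: supersaturated (potential to precipitate); SI < 0: undersaturated
    (potential to dissolve); SI near 0: close to equilibrium."""
    iap = activity_ca * activity_co3
    return math.log10(iap) - log_ksp

# Hypothetical ion activities for a ground-water/surface-water blend
print("calcite SI:", round(saturation_index(2.0e-3, 8.0e-6), 2))
```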

  16. Flavor Identification and Intensity: Effects of Stimulus Context

    PubMed Central

    Hallowell, Emily S.; Parikh, Roshan; Veldhuizen, Maria G.

    2016-01-01

    Two experiments presented oral mixtures containing different proportions of the gustatory flavorant sucrose and an olfactory flavorant, either citral (Experiment 1) or lemon (Experiment 2). In 4 different sessions of each experiment, subjects identified each mixture as “mostly sugar” or “mostly citrus/lemon” or rated the perceived intensities of the sweet and citrus components. Different sessions also presented the mixtures in different contexts, with mixtures containing relatively high concentrations of sucrose or citral/lemon presented more often (skew sucrose or skew citral/lemon). As expected, in both experiments, varying stimulus context affected both identification and perceived intensity: Skewing to sucrose versus citral/lemon decreased the probability of identifying the stimuli as “mostly sugar” and reduced the ratings of sweet intensity relative to citrus intensity. Across both contextual conditions of both experiments, flavor identification was closely associated with the ratio of the perceived sweet and citrus intensities. The results accord with a model, extrapolated from signal-detection theory, in which sensory events are represented as multisensory–multidimensional distributions in perceptual space. Changing stimulus context can shift the locations of the distributions relative to response criteria. Decision rules guide judgments based on both sensory events and criteria, and these rules are not necessarily identical in tasks of identification and intensity rating. PMID:26830499

  17. Concentration addition and independent action model: Which is better in predicting the toxicity for metal mixtures on zebrafish larvae.

    PubMed

    Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin

    2018-01-01

    The joint toxicity of chemical mixtures has emerged as a popular topic, particularly regarding the additive and potentially synergistic actions of environmental mixtures. We investigated the 24-h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and the 96-h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models, which make different assumptions about the mode of toxic action in toxicodynamic processes, through single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the toxic action mode may depend on the combinations and concentrations of tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions were observed for the Cd-Pb mixtures, and slight antagonistic interactions were observed for the Cu-Zn mixtures. These results illustrated that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
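
    A minimal sketch of the two prediction rules compared above, assuming a log-logistic dose-response for each metal; the EC50s, slopes, and concentrations below are hypothetical placeholders rather than the study's fitted values.

```python
import numpy as np
from scipy.optimize import brentq

def effect(c, ec50, slope):
    """Log-logistic fractional effect (e.g., mortality) of a single metal."""
    return 1.0 / (1.0 + (ec50 / c) ** slope)

def ca_effect(concs, ec50s, slopes):
    """Concentration addition: solve sum_i c_i / EC_{x,i} = 1 for the effect x,
    where EC_{x,i} is the concentration of metal i alone producing effect x."""
    def toxic_units_minus_one(x):
        ec_x = ec50s * (x / (1.0 - x)) ** (1.0 / slopes)  # inverse dose-response
        return np.sum(concs / ec_x) - 1.0
    return brentq(toxic_units_minus_one, 1e-9, 1.0 - 1e-9)

def ia_effect(concs, ec50s, slopes):
    """Independent action: E_mix = 1 - prod_i (1 - E_i)."""
    return 1.0 - np.prod(1.0 - effect(concs, ec50s, slopes))

# Hypothetical binary Cu-Cd mixture (mg/L); purely illustrative parameters
concs, ec50s, slopes = np.array([0.05, 0.10]), np.array([0.08, 0.25]), np.array([2.0, 1.5])
print("CA predicted effect:", round(ca_effect(concs, ec50s, slopes), 3))
print("IA predicted effect:", round(ia_effect(concs, ec50s, slopes), 3))
```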

  18. Modeling and multi-response optimization of pervaporation of organic aqueous solutions using desirability function approach.

    PubMed

    Cojocaru, C; Khayet, M; Zakrzewska-Trznadel, G; Jaworska, A

    2009-08-15

    The factorial design of experiments and the desirability function approach have been applied for multi-response optimization of the pervaporation separation process. Two organic aqueous solutions were considered as model mixtures: water/acetonitrile and water/ethanol. Two responses were employed in the multi-response optimization of pervaporation: total permeate flux and organic selectivity. The effects of three experimental factors (feed temperature, initial concentration of organic compound in the feed solution, and downstream pressure) on the pervaporation responses have been investigated. The experiments were performed according to a 2^3 full factorial experimental design. The factorial models have been obtained from the experimental design and validated statistically by analysis of variance (ANOVA). The spatial representations of the response functions were drawn together with the corresponding contour line plots. The factorial models have been used to develop the overall desirability function. In addition, overlap contour plots were presented to identify the desirability zone and to determine the optimum point. The optimal operating conditions were found to be, in the case of the water/acetonitrile mixture, a feed temperature of 55 degrees C, an initial concentration of 6.58% and a downstream pressure of 13.99 kPa, while for the water/ethanol mixture they were a feed temperature of 55 degrees C, an initial concentration of 4.53% and a downstream pressure of 9.57 kPa. Under these optimum conditions, improvements in both the total permeate flux and the selectivity were observed experimentally.
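
    The overall-desirability aggregation used in such multi-response optimization can be sketched as below; the response values, bounds, and equal weights are hypothetical placeholders, not the fitted factorial-model predictions from the paper.

```python
import numpy as np

def desirability_larger_is_better(y, low, target, weight=1.0):
    """Derringer-type individual desirability for a response to be maximized."""
    d = (y - low) / (target - low)
    return np.clip(d, 0.0, 1.0) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return ds.prod() ** (1.0 / len(ds))

# Hypothetical model predictions at one candidate factor setting
# (flux in kg m-2 h-1, selectivity dimensionless); purely illustrative
flux, selectivity = 0.85, 120.0
d_flux = desirability_larger_is_better(flux, low=0.2, target=1.0)
d_sel = desirability_larger_is_better(selectivity, low=20.0, target=150.0)
print("Overall desirability D =", round(overall_desirability([d_flux, d_sel]), 3))
```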

  19. Flow of variably fluidized granular masses across three-dimensional terrain I. Coulomb mixture theory

    USGS Publications Warehouse

    Iverson, R.M.; Denlinger, R.P.

    2001-01-01

    Rock avalanches, debris flows, and related phenomena consist of grain-fluid mixtures that move across three-dimensional terrain. In all these phenomena the same basic forces govern motion, but differing mixture compositions, initial conditions, and boundary conditions yield varied dynamics and deposits. To predict motion of diverse grain-fluid masses from initiation to deposition, we develop a depth-averaged, three-dimensional mathematical model that accounts explicitly for solid- and fluid-phase forces and interactions. Model input consists of initial conditions, path topography, basal and internal friction angles of solid grains, viscosity of pore fluid, mixture density, and a mixture diffusivity that controls pore pressure dissipation. Because these properties are constrained by independent measurements, the model requires little or no calibration and yields readily testable predictions. In the limit of vanishing Coulomb friction due to persistent high fluid pressure the model equations describe motion of viscous floods, and in the limit of vanishing fluid stress they describe one-phase granular avalanches. Analysis of intermediate phenomena such as debris flows and pyroclastic flows requires use of the full mixture equations, which can simulate interaction of high-friction surge fronts with more-fluid debris that follows. Special numerical methods (described in the companion paper) are necessary to solve the full equations, but exact analytical solutions of simplified equations provide critical insight. An analytical solution for translational motion of a Coulomb mixture accelerating from rest and descending a uniform slope demonstrates that steady flow can occur only asymptotically. A solution for the asymptotic limit of steady flow in a rectangular channel explains why shear may be concentrated in narrow marginal bands that border a plug of translating debris. Solutions for static equilibrium of source areas describe conditions of incipient slope instability, and other static solutions show that nonuniform distributions of pore fluid pressure produce bluntly tapered vertical profiles at the margins of deposits. Simplified equations and solutions may apply in additional situations identified by a scaling analysis. Assessment of dimensionless scaling parameters also reveals that miniature laboratory experiments poorly simulate the dynamics of full-scale flows in which fluid effects are significant. Therefore large geophysical flows can exhibit dynamics not evident at laboratory scales.

  20. Flow of variably fluidized granular masses across three-dimensional terrain: 1. Coulomb mixture theory

    NASA Astrophysics Data System (ADS)

    Iverson, Richard M.; Denlinger, Roger P.

    2001-01-01

    Rock avalanches, debris flows, and related phenomena consist of grain-fluid mixtures that move across three-dimensional terrain. In all these phenomena the same basic forces govern motion, but differing mixture compositions, initial conditions, and boundary conditions yield varied dynamics and deposits. To predict motion of diverse grain-fluid masses from initiation to deposition, we develop a depth-averaged, three-dimensional mathematical model that accounts explicitly for solid- and fluid-phase forces and interactions. Model input consists of initial conditions, path topography, basal and internal friction angles of solid grains, viscosity of pore fluid, mixture density, and a mixture diffusivity that controls pore pressure dissipation. Because these properties are constrained by independent measurements, the model requires little or no calibration and yields readily testable predictions. In the limit of vanishing Coulomb friction due to persistent high fluid pressure the model equations describe motion of viscous floods, and in the limit of vanishing fluid stress they describe one-phase granular avalanches. Analysis of intermediate phenomena such as debris flows and pyroclastic flows requires use of the full mixture equations, which can simulate interaction of high-friction surge fronts with more-fluid debris that follows. Special numerical methods (described in the companion paper) are necessary to solve the full equations, but exact analytical solutions of simplified equations provide critical insight. An analytical solution for translational motion of a Coulomb mixture accelerating from rest and descending a uniform slope demonstrates that steady flow can occur only asymptotically. A solution for the asymptotic limit of steady flow in a rectangular channel explains why shear may be concentrated in narrow marginal bands that border a plug of translating debris. Solutions for static equilibrium of source areas describe conditions of incipient slope instability, and other static solutions show that nonuniform distributions of pore fluid pressure produce bluntly tapered vertical profiles at the margins of deposits. Simplified equations and solutions may apply in additional situations identified by a scaling analysis. Assessment of dimensionless scaling parameters also reveals that miniature laboratory experiments poorly simulate the dynamics of full-scale flows in which fluid effects are significant. Therefore large geophysical flows can exhibit dynamics not evident at laboratory scales.

  1. Estimating Mixture of Gaussian Processes by Kernel Smoothing

    PubMed Central

    Huang, Mian; Li, Runze; Wang, Hansheng; Yao, Weixin

    2014-01-01

    When the functional data are not homogeneous, e.g., there exist multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this paper, we propose a new estimation procedure for the Mixture of Gaussian Processes, to incorporate both functional and inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. However, the key difference is that smoothed structures are imposed for both the mean and covariance functions. The model is shown to be identifiable, and can be estimated efficiently by a combination of the ideas from EM algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset. PMID:24976675

  2. Co-digestion of solid waste: Towards a simple model to predict methane production.

    PubMed

    Kouas, Mokhles; Torrijos, Michel; Schmitz, Sabine; Sousbie, Philippe; Sayadi, Sami; Harmand, Jérôme

    2018-04-01

    Modeling methane production is a key issue for solid waste co-digestion. Here, the effect of a step-wise increase in the organic loading rate (OLR) on reactor performance was investigated, and four new models were evaluated to predict methane yields using data acquired in batch mode. Four co-digestion experiments with mixtures of 2 solid substrates were conducted in semi-continuous mode. Experimental methane yields were always higher than the BMP values of the mixtures calculated from the BMP of each substrate, highlighting the importance of endogenous production (methane produced from auto-degradation of the microbial community and generated solids). The experimental methane production under increasing OLRs corresponded well to data modeled with constant endogenous production and kinetics identified from 80% of the total batch time. This model provides a simple and useful tool for technical design consultancies and plant operators to optimize co-digestion and the choice of the OLRs. Copyright © 2018 Elsevier Ltd. All rights reserved.
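
    The idea of combining first-order substrate kinetics with a constant endogenous production term can be sketched as below; the BMP values, rate constants, and endogenous rate are hypothetical placeholders, not the parameters identified in the study.

```python
import numpy as np

def methane_yield(t, bmps, masses, ks, endogenous_rate):
    """Cumulative methane (e.g., NmL) from a mixture of substrates, assuming
    first-order kinetics per substrate plus constant endogenous production."""
    t = np.asarray(t, dtype=float)
    substrate = sum(bmp * m * (1.0 - np.exp(-k * t))
                    for bmp, m, k in zip(bmps, masses, ks))
    return substrate + endogenous_rate * t

# Hypothetical binary mixture: BMP in NmL CH4 per g VS, masses in g VS,
# first-order constants in 1/day, endogenous rate in NmL/day
t = np.linspace(0, 30, 301)
total = methane_yield(t, bmps=[350.0, 250.0], masses=[1.0, 1.5],
                      ks=[0.25, 0.12], endogenous_rate=5.0)
print("Predicted cumulative methane after 30 days:", round(total[-1], 1))
```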

  3. Detecting Mixtures from Structural Model Differences Using Latent Variable Mixture Modeling: A Comparison of Relative Model Fit Statistics

    ERIC Educational Resources Information Center

    Henson, James M.; Reise, Steven P.; Kim, Kevin H.

    2007-01-01

    The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) [times] 3 (exogenous latent mean difference) [times] 3 (endogenous latent mean difference) [times] 3 (correlation between factors) [times] 3 (mixture proportions) factorial design. In addition, the efficacy of several…

  4. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model and explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia.
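
    A compact sketch of maximum likelihood fitting of a two-component normal mixture by the EM algorithm, run here on synthetic data rather than the stock market and rubber price series used in the paper.

```python
import numpy as np
from scipy.stats import norm

def em_two_component_normal(x, n_iter=200):
    """Maximum likelihood fit of a two-component normal mixture via EM."""
    x = np.asarray(x, dtype=float)
    # crude initialisation from the data
    w, mu1, mu2 = 0.5, x.min(), x.max()
    s1 = s2 = x.std()
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point
        p1 = w * norm.pdf(x, mu1, s1)
        p2 = (1 - w) * norm.pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: weighted maximum likelihood updates
        w = r.mean()
        mu1 = np.sum(r * x) / r.sum()
        mu2 = np.sum((1 - r) * x) / (1 - r).sum()
        s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / r.sum())
        s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / (1 - r).sum())
    return w, (mu1, s1), (mu2, s2)

# Illustrative synthetic data; not the paper's economic series
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-0.5, 0.3, 400), rng.normal(0.8, 0.5, 600)])
print(em_two_component_normal(x))
```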

  5. Use of Chemical Mixtures to Differentiate Mechanisms of Endocrine Action in a Small Fish Model

    EPA Science Inventory

    Various assays with adult fish have been developed to identify potential endocrine-disrupting chemicals (EDCs) which may cause toxicity via alterations in the hypothalamic-pituitary-gonadal (HPG) axis via different mechanisms/modes of action (MOA). These assays can be sensitive ...

  6. Trajectories of Social Withdrawal from Middle Childhood to Early Adolescence

    ERIC Educational Resources Information Center

    Oh, Wonjung; Rubin, Kenneth H.; Bowker, Julie C.; Booth-LaForce, Cathryn; Rose-Krasnor, Linda; Laursen, Brett

    2008-01-01

    Heterogeneity and individual differences in the developmental course of social withdrawal were examined longitudinally in a community sample (N = 392). General Growth Mixture Modeling (GGMM) was used to identify distinct pathways of social withdrawal, differentiate valid subgroup trajectories, and examine factors that predicted change in…

  7. Large eddy simulation of the low temperature ignition and combustion processes on spray flame with the linear eddy model

    NASA Astrophysics Data System (ADS)

    Wei, Haiqiao; Zhao, Wanhui; Zhou, Lei; Chen, Ceyuan; Shu, Gequn

    2018-03-01

    Large eddy simulation coupled with the linear eddy model (LEM) is employed for the simulation of n-heptane spray flames to investigate the low temperature ignition and combustion process in a constant-volume combustion vessel under diesel-engine relevant conditions. Parametric studies are performed to give a comprehensive understanding of the ignition processes. A non-reacting case is first carried out to validate the present model by comparing the predicted results with the experimental data from the Engine Combustion Network (ECN). Good agreement is observed in terms of liquid and vapour penetration lengths, as well as the mixture fraction distributions at different times and different axial locations. For the reacting cases, the flame index is introduced to distinguish between premixed and non-premixed combustion. A reaction region (RR) parameter is used to investigate the ignition and combustion characteristics, and to distinguish the different combustion stages. Results show that the two-stage combustion process can be identified in spray flames, and different ignition positions in the mixture fraction versus RR space are well described at low and high initial ambient temperatures. At an initial condition of 850 K, the first-stage ignition is initiated in the fuel-lean region, followed by reactions in fuel-rich regions. High-temperature reaction then occurs mainly where the mixture concentration is near the stoichiometric mixture fraction. At an initial temperature of 1000 K, by contrast, the first-stage ignition occurs in the fuel-rich region first and then moves towards even richer regions. Afterwards, the high-temperature reactions move back to the stoichiometric mixture fraction region. For all of the initial temperatures considered, high-temperature ignition kernels are initiated in regions richer than the stoichiometric mixture fraction. As the initial ambient temperature increases, the high-temperature ignition kernels move towards richer mixture regions. After the spray flame becomes quasi-steady, most heat is released at the stoichiometric mixture fraction regions. In addition, combustion mode analysis based on key intermediate species illustrates three-mode combustion processes in diesel spray flames.

  8. Predicting herbicide mixture effects on multiple algal species using mixture toxicity models.

    PubMed

    Nagai, Takashi

    2017-10-01

    The validity of the application of mixture toxicity models, concentration addition and independent action, to a species sensitivity distribution (SSD) for calculation of a multisubstance potentially affected fraction was examined in laboratory experiments. Toxicity assays of herbicide mixtures using 5 species of periphytic algae were conducted. Two mixture experiments were designed: a mixture of 5 herbicides with similar modes of action and a mixture of 5 herbicides with dissimilar modes of action, corresponding to the assumptions of the concentration addition and independent action models, respectively. Experimentally obtained mixture effects on 5 algal species were converted to the fraction of affected (>50% effect on growth rate) species. The predictive ability of the concentration addition and independent action models with direct application to SSD depended on the mode of action of chemicals. That is, prediction was better for the concentration addition model than the independent action model for the mixture of herbicides with similar modes of action. In contrast, prediction was better for the independent action model than the concentration addition model for the mixture of herbicides with dissimilar modes of action. Thus, the concentration addition and independent action models could be applied to SSD in the same manner as for a single-species effect. The present study to validate the application of the concentration addition and independent action models to SSD supports the usefulness of the multisubstance potentially affected fraction as the index of ecological risk. Environ Toxicol Chem 2017;36:2624-2630. © 2017 SETAC.

  9. Modeling plant interspecific interactions from experiments with perennial crop mixtures to predict optimal combinations.

    PubMed

    Halty, Virginia; Valdés, Matías; Tejera, Mauricio; Picasso, Valentín; Fort, Hugo

    2017-12-01

    The contribution of plant species richness to productivity and ecosystem functioning is a longstanding issue in ecology, with relevant implications for both conservation and agriculture. Both experiments and quantitative modeling are fundamental to the design of sustainable agroecosystems and the optimization of crop production. We modeled communities of perennial crop mixtures by using a generalized Lotka-Volterra model, i.e., a model such that the interspecific interactions are more general than purely competitive. We estimated the model parameters (carrying capacities and interaction coefficients) from, respectively, the observed biomass of monocultures and bicultures measured in a large diversity experiment of seven perennial forage species in Iowa, United States. The sign and absolute value of the interaction coefficients showed that the biological interactions between species pairs included amensalism, competition, and parasitism (asymmetric positive-negative interaction), with various degrees of intensity. We tested the model fit by simulating the combinations of more than two species and comparing them with the experimental polyculture data. Overall, theoretical predictions are in good agreement with the experiments. Using this model, we also simulated species combinations that were not sown. From all possible mixtures (sown and not sown), we identified the most productive species combinations. Our results demonstrate that a combination of experiments and modeling can contribute to the design of sustainable agricultural systems in general and to the optimization of crop production in particular. © 2017 by the Ecological Society of America.
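
    A minimal sketch of simulating such a generalized Lotka-Volterra community once its parameters are in hand; the growth rates, carrying capacities, and interaction coefficients below are hypothetical (in the study they were estimated from monoculture and biculture biomass), and the three-species setup is arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

def glv(t, b, r, K, A):
    """One common generalized Lotka-Volterra form:
    dB_i/dt = r_i B_i (1 - (B_i + sum_j A_ij B_j) / K_i).
    Off-diagonal A_ij may be positive or negative, so interactions are not
    restricted to pure competition."""
    interaction = b + A @ b  # A has zero diagonal here
    return r * b * (1.0 - interaction / K)

# Hypothetical three-species parameterisation (biomass units arbitrary)
r = np.array([0.8, 0.6, 1.0])            # intrinsic growth rates
K = np.array([500.0, 350.0, 420.0])      # monoculture carrying capacities
A = np.array([[0.0,  0.4, -0.1],         # negative A_ij = facilitation of i by j
              [0.7,  0.0,  0.3],
              [0.2, -0.2,  0.0]])
sol = solve_ivp(glv, (0, 50), y0=[10.0, 10.0, 10.0], args=(r, K, A))
print("Predicted mixture biomass at t = 50:", round(sol.y[:, -1].sum(), 1))
```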

  10. Benchmarking Water Quality from Wastewater to Drinking Waters Using Reduced Transcriptome of Human Cells.

    PubMed

    Xia, Pu; Zhang, Xiaowei; Zhang, Hanxin; Wang, Pingping; Tian, Mingming; Yu, Hongxia

    2017-08-15

    One of the major challenges in environmental science is monitoring and assessing the risk of complex environmental mixtures. In vitro bioassays with limited key toxicological end points have been shown to be suitable to evaluate mixtures of organic pollutants in wastewater and recycled water. Omics approaches such as transcriptomics can monitor biological effects at the genome scale. However, few studies have applied an omics approach in the assessment of mixtures of organic micropollutants. Here, an omics approach was developed for profiling the bioactivity of 10 water samples, ranging from wastewater to drinking water, in human cells by a reduced human transcriptome (RHT) approach and dose-response modeling. Transcriptional expression of 1200 selected genes was measured by an Ampliseq technology in two cell lines, HepG2 and MCF7, that were exposed to eight serial dilutions of each sample. Concentration-effect models were used to identify differentially expressed genes (DEGs) and to calculate effect concentrations (ECs) of DEGs, which could be ranked to investigate low-dose response. Furthermore, molecular pathways disrupted by different samples were evaluated by Gene Ontology (GO) enrichment analysis. The ability of RHT to represent bioactivity in both HepG2 and MCF7 cells was shown to be comparable to the results of previous in vitro bioassays. Finally, the relative potencies of the mixtures indicated by RHT analysis were consistent with the chemical profiles of the samples. RHT analysis with human cells provides an efficient and cost-effective approach to benchmarking mixtures of micropollutants and may offer novel insight into the assessment of mixture toxicity in water.

  11. Anti-Inflammatory Effects of a Mixture of Lactic Acid Bacteria and Sodium Butyrate in Atopic Dermatitis Murine Model.

    PubMed

    Kim, Jeong A; Kim, Sung-Hak; Kim, In Sung; Yu, Da Yoon; Kim, Sung Chan; Lee, Seung Ho; Lee, Sang Suk; Yun, Cheol-Heui; Choi, In Soon; Cho, Kwang Keun

    2018-03-20

    Atopic dermatitis is a chronic and recurrent inflammatory skin disease. Recently, probiotics have been shown to suppress allergic symptoms through immunomodulatory responses. In the present study, combinatorial effects on allergic symptoms were identified in BALB/c mice fed with a mixture of four species of probiotics, Bifidobacterium lactis, Lactobacillus casei, Lactobacillus rhamnosus, and Lactobacillus plantarum, and sodium butyrate. Following sensitization with whey protein, the mice were challenged and divided into two groups: (1) mice administered with phosphate-buffered saline as a control and (2) mice administered with the probiotic mixture and sodium butyrate. Allergic symptoms were assessed by measuring ear thicknesses, serum histamine and IL-10 concentrations, and the quantities of leaked Evans blue. T cell differentiation was determined by analyzing the T cell groups in the mesenteric lymph nodes (MLNs) and spleen. To examine changes in the total gut microbiota, total fecal microflora was isolated, species identification was performed by DNA sequencing using Illumina MiSeq, and changes in intestinal beneficial bacteria were analyzed using quantitative polymerase chain reaction. Treatment with the probiotic mixture and sodium butyrate reduced ear thicknesses, the quantity of leaked Evans blue, and serum histamine values, while increasing serum IL-10 values. In the mouse model, the probiotic mixture and sodium butyrate increased Th1 and Treg cell differentiation in MLN and spleen tissues; the ratio of Firmicutes/Bacteroidetes, which is associated with reduction in allergic reactions; and microorganisms that lead to cell differentiation into Treg. These results suggest that the probiotic mixture and sodium butyrate can prevent and alleviate allergic symptoms.

  12. Diversifying mechanisms in the on-farm evolution of crop mixtures.

    PubMed

    Thomas, Mathieu; Thépot, Stéphanie; Galic, Nathalie; Jouanne-Pin, Sophie; Remoué, Carine; Goldringer, Isabelle

    2015-06-01

    While modern agriculture relies on genetic homogeneity, diversifying practices associated with seed exchange and seed recycling may allow crops to adapt to their environment. This socio-genetic model is an original experimental evolution design referred to as on-farm dynamic management of crop diversity. Investigating such a model can help in understanding how evolutionary mechanisms shape crop diversity subjected to diverse agro-environments. We studied a French farmer-led initiative where a mixture of four wheat landraces called 'Mélange de Touselles' (MDT) was created and circulated within a farmers' network. The 15 sampled MDT subpopulations were simultaneously subjected to diverse environments (e.g. altitude, rainfall) and diverse farmers' practices (e.g. field size, sowing and harvesting date). Twenty-one space-time samples of 80 individuals each were genotyped using 17 microsatellite markers and characterized for their heading date in a 'common-garden' experiment. Gene polymorphism was studied using four markers located in earliness genes. An original network-based approach was developed to depict the particular and complex genetic structure of the landraces composing the mixture. Rapid differentiation among populations within the mixture was detected, larger at the phenotypic and gene levels than at the neutral genetic level, indicating potential divergent selection. We identified two interacting selection processes: variation in the mixture component frequencies, and evolution of within-variety diversity, that shaped the standing variability available within the mixture. These results confirmed that diversifying practices and environments maintain genetic diversity and allow for crop evolution in the context of global change. Including concrete measurements of farmers' practices is critical to disentangle crop evolution processes. © 2015 John Wiley & Sons Ltd.

  13. Identification of cortex in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    VanMeter, John W.; Sandon, Peter A.

    1992-06-01

    The overall goal of the work described here is to make available to the neurosurgeon in the operating room an on-line, three-dimensional, anatomically labeled model of the patient's brain, based on pre-operative magnetic resonance (MR) images. A stereotactic operating microscope is currently in experimental use, which allows structures that have been manually identified in MR images to be made available on-line. We have been working to enhance this system by combining image processing techniques applied to the MR data with an anatomically labeled 3-D brain model developed from the Talairach and Tournoux atlas. Here we describe the process of identifying cerebral cortex in the patient MR images. MR images of brain tissue are reasonably well described by material mixture models, which identify each pixel as corresponding to one of a small number of materials, or as being a composite of two materials. Our classification algorithm consists of three steps. First, we apply hierarchical, adaptive grayscale adjustments to correct for nonlinearities in the MR sensor. The goal of this preprocessing step, based on the material mixture model, is to make the grayscale distribution of each tissue type constant across the entire image. Next, we perform an initial classification of all tissue types according to gray level. We have used a sum-of-Gaussians approximation of the histogram to perform this classification. Finally, we identify pixels corresponding to cortex by taking into account the spatial patterns characteristic of this tissue. For this purpose, we use a set of matched filters to identify image locations having the appropriate configuration of gray matter (cortex), cerebrospinal fluid and white matter, as determined by the previous classification step.
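
    The initial gray-level classification step, the sum-of-Gaussians fit to the intensity histogram, can be sketched with a 1-D Gaussian mixture as below; the synthetic intensities and the three-class assumption are illustrative, and the subsequent matched-filter cortex step is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_tissues(intensities, n_tissues=3, seed=0):
    """Fit a sum-of-Gaussians (1-D Gaussian mixture) to pixel gray levels and
    assign each pixel to the most probable tissue class."""
    x = np.asarray(intensities, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_tissues, random_state=seed).fit(x)
    labels = gmm.predict(x)
    # order classes by mean intensity (e.g., CSF < gray matter < white matter in T1)
    order = np.argsort(gmm.means_.ravel())
    remap = np.empty_like(order)
    remap[order] = np.arange(n_tissues)
    return remap[labels], np.sort(gmm.means_.ravel())

# Synthetic T1-like intensities for three tissue classes, purely illustrative
rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(60, 8, 4000),    # CSF-like
                         rng.normal(110, 10, 6000),  # gray-matter-like
                         rng.normal(150, 9, 5000)])  # white-matter-like
labels, means = classify_tissues(pixels)
print("estimated class means:", np.round(means, 1))
```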

  14. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    PubMed

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
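
    The paper's incremental DPMM and time-sensitive tDPMM are custom algorithms, but the core idea of letting a Dirichlet process prior choose the number of trajectory clusters can be illustrated with an off-the-shelf truncated variational approximation; the fixed-length trajectory feature vectors below are hypothetical.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical trajectory features (e.g., resampled positions flattened to
# fixed-length vectors); purely illustrative synthetic data
rng = np.random.default_rng(7)
trajectories = np.vstack([rng.normal(m, 0.3, (40, 10)) for m in (0.0, 2.0, 5.0)])

# Truncated Dirichlet-process mixture: the truncation level is generous and
# the model decides how many components actually receive data.
dpmm = BayesianGaussianMixture(
    n_components=15,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.5,
    random_state=0).fit(trajectories)
labels = dpmm.predict(trajectories)
print("clusters actually used:", np.unique(labels).size)
```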

  15. Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-07-01

    What is the "best" model? The answer to this question lies in part in the eyes of the beholder; nevertheless, a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, M_k, k = 1, …, K, and to help identify which model is most supported by the observed data, Ỹ = (ỹ_1, …, ỹ_n). Here, we introduce a new and robust estimator of the model evidence, p(Ỹ|M_k), which acts as the normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p(Ỹ|M_k) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p(Ỹ|M_k) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost, outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and considerably simplifies scientific inquiry through hypothesis testing and model selection.
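
    A simplified sketch of the underlying idea: fit a Gaussian mixture to posterior samples and use it to estimate the evidence. Note that the paper's GAME estimator uses bridge sampling, whereas this sketch uses plain importance sampling with the fitted mixture as the proposal; the toy conjugate-normal model and all numbers are purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def evidence_via_gmm_importance(posterior_samples, log_like, log_prior,
                                n_components=3, n_draws=5000, seed=0):
    """Estimate log p(Y) by importance sampling with a Gaussian mixture
    proposal fitted to MCMC posterior samples (simpler than bridge sampling)."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(posterior_samples)
    theta, _ = gmm.sample(n_draws)
    log_q = gmm.score_samples(theta)
    log_w = log_like(theta) + log_prior(theta) - log_q
    m = log_w.max()                       # log-mean-exp for numerical stability
    return m + np.log(np.mean(np.exp(log_w - m)))

# Toy conjugate model: y ~ N(theta, 1), theta ~ N(0, 1); posterior is known
rng = np.random.default_rng(1)
y = rng.normal(0.3, 1.0, 50)
n = len(y)
post = rng.normal(y.mean() * n / (n + 1), 1 / np.sqrt(n + 1), (5000, 1))
ll = lambda th: np.array([np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (y - t) ** 2)
                          for t in th[:, 0]])
lp = lambda th: -0.5 * np.log(2 * np.pi) - 0.5 * th[:, 0] ** 2
print("log evidence estimate:", round(evidence_via_gmm_importance(post, ll, lp), 2))
```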

  16. Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC

    ERIC Educational Resources Information Center

    Depaoli, Sarah

    2012-01-01

    Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…

  17. Peptide Identification by Database Search of Mixture Tandem Mass Spectra*

    PubMed Central

    Wang, Jian; Bourne, Philip E.; Bandeira, Nuno

    2011-01-01

    In high-throughput proteomics the development of computational methods and novel experimental strategies often rely on each other. In certain areas, mass spectrometry methods for data acquisition are ahead of computational methods to interpret the resulting tandem mass spectra. Particularly, although there are numerous situations in which a mixture tandem mass spectrum can contain fragment ions from two or more peptides, nearly all database search tools still make the assumption that each tandem mass spectrum comes from one peptide. Common examples include mixture spectra from co-eluting peptides in complex samples, spectra generated from data-independent acquisition methods, and spectra from peptides with complex post-translational modifications. We propose a new database search tool (MixDB) that is able to identify mixture tandem mass spectra from more than one peptide. We show that peptides can be reliably identified with up to 95% accuracy from mixture spectra while considering only 0.01% of all possible peptide pairs (four orders of magnitude speedup). Comparison with current database search methods indicates that our approach has better or comparable sensitivity and precision at identifying single-peptide spectra while simultaneously being able to identify 38% more peptides from mixture spectra at significantly higher precision. PMID:21862760

  18. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  19. Activity induced phase transition in mixtures of active and passive agents

    NASA Astrophysics Data System (ADS)

    Sinha Mahapatra, Pallab; Kulkarni, Ajinkya

    2017-11-01

    Collective behaviors of self-propelled agents are ubiquitous in nature and produce interesting patterns. The objective of this study is to investigate the phase transition in mixtures of active and inert agents suspended in a liquid. A modified version of the Vicsek model has been used (see Ref.), in which the particles are modeled as soft disks with finite mass, confined in a square domain. The particles are required to align their local motion with their immediate neighborhood, similar to the Vicsek model. We identified the transition from disorganized thermal-like motion to an organized vortical motion. We analyzed the nature of the transition by using different order parameters. Furthermore, the switching between the phases has been investigated via artificial nucleation of randomly picked active agents spanning the entire domain. Finally, the driving mechanism of this phase transition has been explained via the average dissipation and the mean square displacement (MSD) of the agents.

  20. A Bayesian mixture model for chromatin interaction data.

    PubMed

    Niu, Liang; Lin, Shili

    2015-02-01

    Chromatin interactions mediated by a particular protein are of interest for studying gene regulation, especially the regulation of genes that are associated with, or known to be causative of, a disease. A recent molecular technique, Chromatin interaction analysis by paired-end tag sequencing (ChIA-PET), that uses chromatin immunoprecipitation (ChIP) and high throughput paired-end sequencing, is able to detect such chromatin interactions genomewide. However, ChIA-PET may generate noise (i.e., pairings of DNA fragments by random chance) in addition to true signal (i.e., pairings of DNA fragments by interactions). In this paper, we propose MC_DIST based on a mixture modeling framework to identify true chromatin interactions from ChIA-PET count data (counts of DNA fragment pairs). The model is cast into a Bayesian framework to take into account the dependency among the data and the available information on protein binding sites and gene promoters to reduce false positives. A simulation study showed that MC_DIST outperforms the previously proposed hypergeometric model in terms of both power and type I error rate. A real data study showed that MC_DIST may identify potential chromatin interactions between protein binding sites and gene promoters that may be missed by the hypergeometric model. An R package implementing the MC_DIST model is available at http://www.stat.osu.edu/~statgen/SOFTWARE/MDM.
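
    The core mixture idea, stripped of the Bayesian dependency structure and the binding-site/promoter information that MC_DIST uses, can be illustrated with a two-component count mixture over fragment-pair counts; the mixing weight and Poisson rates below are hypothetical.

```python
import numpy as np
from scipy.stats import poisson

def signal_posterior(counts, pi_signal, lam_noise, lam_signal):
    """Posterior probability that each fragment-pair count reflects a true
    chromatin interaction under a two-component Poisson mixture
    (noise vs. signal); a simplified stand-in for the MC_DIST model."""
    counts = np.asarray(counts)
    p_signal = pi_signal * poisson.pmf(counts, lam_signal)
    p_noise = (1 - pi_signal) * poisson.pmf(counts, lam_noise)
    return p_signal / (p_signal + p_noise)

# Hypothetical ChIA-PET counts for five candidate interactions
print(np.round(signal_posterior([2, 3, 8, 15, 40],
                                pi_signal=0.1, lam_noise=2.0, lam_signal=20.0), 3))
```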

  1. Regulatory assessment of chemical mixtures: Requirements, current approaches and future perspectives.

    PubMed

    Kienzler, Aude; Bopp, Stephanie K; van der Linden, Sander; Berggren, Elisabet; Worth, Andrew

    2016-10-01

    This paper reviews regulatory requirements and recent case studies to illustrate how the risk assessment (RA) of chemical mixtures is conducted, considering both the effects on human health and on the environment. A broad range of chemicals, regulations and RA methodologies are covered, in order to identify mixtures of concern, gaps in the regulatory framework, data needs, and further work to be carried out. The current and potential future use of novel tools (Adverse Outcome Pathways, in silico tools, toxicokinetic modelling, etc.) in the RA of combined effects was also reviewed. The assumptions made in the RA, predictive model specifications and the choice of toxic reference values can greatly influence the assessment outcome, and should therefore be specifically justified. Novel tools could support mixture RA mainly by providing a better understanding of the underlying mechanisms of combined effects. Nevertheless, their use is currently limited because of a lack of guidance, data, and expertise. More guidance is needed to facilitate their application. As far as the authors are aware, no prospective RA concerning chemicals related to various regulatory sectors has been performed to date, even though numerous chemicals are registered under several regulatory frameworks. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Lifetime Segmented Assimilation Trajectories and Health Outcomes in Latino and Other Community Residents

    PubMed Central

    Marsiglia, Flavio F.; Kulis, Stephen; Kellison, Joshua G.

    2010-01-01

    Objectives. Under an ecodevelopmental framework, we examined lifetime segmented assimilation trajectories (diverging assimilation pathways influenced by prior life conditions) and related them to quality-of-life indicators in a diverse sample of 258 men in the Phoenix, AZ, metropolitan area. Methods. We used a growth mixture model analysis of lifetime changes in socioeconomic status and acculturation to identify distinct lifetime segmented assimilation trajectory groups, which we compared on life satisfaction, exercise, and dietary behaviors. We hypothesized that lifetime assimilation change toward mainstream American culture (upward assimilation) would be associated with favorable health outcomes, and downward assimilation change with unfavorable health outcomes. Results. A growth mixture model latent class analysis identified 4 distinct assimilation trajectory groups. In partial support of the study hypotheses, the extreme upward assimilation trajectory group (the most successful of the assimilation pathways) exhibited the highest life satisfaction and the lowest frequency of unhealthy food consumption. Conclusions. Upward segmented assimilation is associated in adulthood with certain positive health outcomes. This may be the first study to model upward and downward lifetime segmented assimilation trajectories, and to associate these with life satisfaction, exercise, and dietary behaviors. PMID:20167890

  3. Accounting for non-independent detection when estimating abundance of organisms with a Bayesian approach

    USGS Publications Warehouse

    Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth

    2011-01-01

    Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, affecting the non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
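
    The site-level likelihoods of the two observation models being compared can be sketched by marginalizing over the latent abundance N; the repeat counts, abundance rate, detection probability, and correlation below are hypothetical, not the manatee survey estimates.

```python
import numpy as np
from scipy.stats import poisson, binom, betabinom

def nmix_site_loglik(counts, lam, p, rho=None, n_max=200):
    """Log-likelihood of repeated counts at one site under an N-mixture model:
    N ~ Poisson(lam); counts_j | N ~ Binomial(N, p) for independent detections,
    or Beta-Binomial with intra-class correlation rho for correlated detections."""
    counts = np.asarray(counts)
    n = np.arange(counts.max(), n_max + 1)          # possible latent abundances
    if rho is None:
        obs = np.prod([binom.pmf(c, n, p) for c in counts], axis=0)
    else:
        a = p * (1 - rho) / rho                     # beta-binomial with mean p
        b = (1 - p) * (1 - rho) / rho               # and correlation rho
        obs = np.prod([betabinom.pmf(c, n, a, b) for c in counts], axis=0)
    return np.log(np.sum(poisson.pmf(n, lam) * obs))

# Hypothetical aerial-survey-like data: 3 repeat counts at one site
counts = [12, 30, 8]
print("binomial mixture logL:     ", round(nmix_site_loglik(counts, lam=40, p=0.5), 3))
print("beta-binomial mixture logL:", round(nmix_site_loglik(counts, lam=40, p=0.5, rho=0.3), 3))
```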

  4. Patterning ecological risk of pesticide contamination at the river basin scale.

    PubMed

    Faggiano, Leslie; de Zwart, Dick; García-Berthou, Emili; Lek, Sovan; Gevrey, Muriel

    2010-05-01

    Ecological risk assessment was conducted to determine the risk posed by pesticide mixtures to the Adour-Garonne river basin (south-western France). The objectives of this study were to assess the general state of this basin with regard to pesticide contamination using a risk assessment procedure and to detect patterns in toxic mixture assemblages through a self-organizing map (SOM) methodology in order to identify the locations at risk. Exposure assessment, risk assessment with species sensitivity distribution, and mixture toxicity rules were used to compute six relative risk predictors for different toxic modes of action: the multi-substance potentially affected fraction of species depending on the toxic mode of action of compounds found in the mixture (msPAF CA(TMoA) values). Those predictors computed for the 131 sampling sites assessed in this study were then patterned through the SOM learning process. Four clusters of sampling sites exhibiting similar toxic assemblages were identified. In the first cluster, which comprised 83% of the sampling sites, the risk posed by pesticide mixtures to aquatic species was weak (mean msPAF value for those sites < 0.0036%), while in another cluster the risk was significant (mean msPAF < 1.09%). GIS mapping highlighted an interesting spatial pattern in the distribution of sampling sites for each cluster, with a significant and highly localized risk in the French department of Lot-et-Garonne. The combined use of the SOM methodology, mixture toxicity modelling and a clear geo-referenced representation of results not only revealed the general state of the Adour-Garonne basin with regard to contamination by pesticides but also made it possible to analyze the spatial pattern of toxic mixture assemblages in order to prioritize the locations at risk and to detect the group of compounds causing the greatest risk at the basin scale. Copyright 2010 Elsevier B.V. All rights reserved.
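
    A sketch of how msPAF-type predictors are commonly assembled: concentration addition within a toxic mode of action (TMoA), then response addition across TMoA groups. The SSD means, standard deviations, and concentrations below are invented for illustration, and the real workflow involves exposure estimation and many more compounds.

```python
import numpy as np
from scipy.stats import norm

def paf_ca_within_tmoa(concs, mean_log_ec50s, sigma_log_ec50):
    """Concentration addition within one TMoA: sum hazard units, then read the
    affected fraction off a log-normal species sensitivity distribution."""
    hazard_units = np.sum(np.asarray(concs) / 10.0 ** np.asarray(mean_log_ec50s))
    return norm.cdf(np.log10(hazard_units) / sigma_log_ec50)

def mspaf(groups):
    """Response (independent) addition across modes of action:
    msPAF = 1 - prod_g (1 - PAF_g)."""
    pafs = [paf_ca_within_tmoa(*g) for g in groups]
    return 1.0 - np.prod([1.0 - p for p in pafs])

# Hypothetical sample: two photosynthesis inhibitors and one AChE inhibitor
# (concentrations in ug/L, SSD means as log10 ug/L; purely illustrative)
photosynthesis = ([0.4, 1.2], [1.8, 2.1], 0.7)   # concs, mean log10 EC50s, sigma
ache = ([0.05], [1.0], 0.6)
print("msPAF =", round(mspaf([photosynthesis, ache]), 4))
```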

  5. Effects of variation in chain length on ternary polymer electrolyte - Ionic liquid mixture - A molecular dynamics simulation study

    NASA Astrophysics Data System (ADS)

    Raju, S. G.; Hariharan, Krishnan S.; Park, Da-Hye; Kang, HyoRang; Kolake, Subramanya Mayya

    2015-10-01

    Molecular dynamics (MD) simulations of ternary polymer electrolyte - ionic liquid mixtures are conducted using an all-atom model. N-alkyl-N-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide ([CnMPy][TFSI], n = 1, 3, 6, 9) and polyethylene oxide (PEO) are used. Microscopic structure, energetics and dynamics of the ionic liquid (IL) in these ternary mixtures are studied. Properties of the four pure ILs are also calculated and compared to those in the ternary mixtures. The interaction between the pyrrolidinium cation and TFSI is stronger, and there is a larger propensity for ion-pair formation, in the ternary mixtures. Unlike the case in imidazolium ILs, the near-neighbor structural correlation between TFSI anions decreases with increasing chain length on the cation in both pure ILs and ternary mixtures. Using spatial density maps, regions where PEO and TFSI interact with the pyrrolidinium cation are identified. Oxygens of PEO are above and below the pyrrolidinium ring and away from the bulky alkyl groups, whereas TFSI is present close to the nitrogen atom of CnMPy. In pure ILs, the diffusion coefficient (D) of C3MPy is larger than that of TFSI, but the D values of C9MPy and C6MPy are larger than that of TFSI. The reasons for these alkyl-chain-dependent phenomena are explored.

  6. A competitive binding model predicts the response of mammalian olfactory receptors to mixtures

    NASA Astrophysics Data System (ADS)

    Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay

    Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system, we need methods to predict responses to complex mixtures from single odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions where only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts responses of 15 (of 18 tested) receptors to within 10-30% of the observed values, for mixtures with 2, 3, and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions lead to additional nonlinearities observed in mixture responses, such as suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
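
    A minimal sketch of a competitive-binding mixture response of the kind described above (a single shared binding site with Hill-type activation); the EC50s, maximal responses, and concentrations are hypothetical, not the fitted receptor parameters from the study.

```python
import numpy as np

def single_odor_response(c, ec50, e_max, n=1.0):
    """Hill-type dose-response of one receptor to one odorant."""
    x = (c / ec50) ** n
    return e_max * x / (1.0 + x)

def competitive_mixture_response(concs, ec50s, e_maxs, n=1.0):
    """Competitive binding: only one odor molecule occupies the receptor at a
    time, so all odorants share the denominator (competition for occupancy)."""
    x = (np.asarray(concs) / np.asarray(ec50s)) ** n
    return np.sum(np.asarray(e_maxs) * x) / (1.0 + np.sum(x))

# Hypothetical receptor and a binary mixture (concentrations in uM)
ec50s, e_maxs = [12.0, 40.0], [1.0, 0.6]
print("odor A alone :", round(single_odor_response(10.0, 12.0, 1.0), 3))
print("odor B alone :", round(single_odor_response(30.0, 40.0, 0.6), 3))
print("A + B mixture:", round(competitive_mixture_response([10.0, 30.0], ec50s, e_maxs), 3))
```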

  7. Toward the Rational Use of Exposure Information in Mixtures Toxicology

    EPA Science Inventory

    Of all the disciplines of toxicology, perhaps none is as dependent on exposure information as Mixtures Toxicology. Identifying real world mixtures and replicating them in the laboratory (or in silico) is critical to understanding their risks. Complex mixtures such as cigarett...

  8. Two subgroups of antipsychotic-naive, first-episode schizophrenia patients identified with a Gaussian mixture model on cognition and electrophysiology

    PubMed Central

    Bak, N; Ebdrup, B H; Oranje, B; Fagerlund, B; Jensen, M H; Düring, S W; Nielsen, M Ø; Glenthøj, B Y; Hansen, L K

    2017-01-01

    Deficits in information processing and cognition are among the most robust findings in schizophrenia patients. Previous efforts to translate group-level deficits into clinically relevant and individualized information have, however, been unsuccessful, which is possibly explained by biologically different disease subgroups. We applied machine learning algorithms on measures of electrophysiology and cognition to identify potential subgroups of schizophrenia. Next, we explored subgroup differences regarding treatment response. Sixty-six antipsychotic-naive first-episode schizophrenia patients and sixty-five healthy controls underwent extensive electrophysiological and neurocognitive test batteries. Patients were assessed on the Positive and Negative Syndrome Scale (PANSS) before and after 6 weeks of monotherapy with the relatively selective D2 receptor antagonist, amisulpride (280.3±159 mg per day). A reduced principal component space based on 19 electrophysiological variables and 26 cognitive variables was used as input for a Gaussian mixture model to identify subgroups of patients. With support vector machines, we explored the relation between PANSS subscores and the identified subgroups. We identified two statistically distinct subgroups of patients. We found no significant baseline psychopathological differences between these subgroups, but the effect of treatment in the groups was predicted with an accuracy of 74.3% (P=0.003). In conclusion, electrophysiology and cognition data may be used to classify subgroups of schizophrenia patients. The two distinct subgroups, which we identified, were psychopathologically inseparable before treatment, yet their response to dopaminergic blockade was predicted with significant accuracy. This proof of principle encourages further endeavors to apply data-driven, multivariate and multimodal models to facilitate progress from symptom-based psychiatry toward individualized treatment regimens. PMID:28398342
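
    The subgrouping step (dimension reduction followed by a Gaussian mixture model) can be sketched as below on synthetic data; the variable counts, the five-component PCA, and the BIC-based choice of the number of components are assumptions for illustration rather than the study's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 66 patients x 45 electrophysiological and cognitive
# variables (the study used 19 + 26 measures); purely illustrative
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (30, 45)), rng.normal(0.8, 1.2, (36, 45))])

# Reduce to a principal-component space, fit Gaussian mixtures, and pick the
# number of subgroups by BIC as one reasonable model-selection criterion.
Z = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(X))
models = {k: GaussianMixture(k, covariance_type="full", random_state=0).fit(Z)
          for k in range(1, 5)}
best_k = min(models, key=lambda k: models[k].bic(Z))
subgroup = models[best_k].predict(Z)
print("chosen number of subgroups:", best_k, "sizes:", np.bincount(subgroup))
```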

  9. Notre Dame Geothermal Ionic Liquids Research: Ionic Liquids for Utilization of Geothermal Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brennecke, Joan F.

    The goal of this project was to develop ionic liquids for two geothermal energy related applications. The first goal was to design ionic liquids as high temperature heat transfer fluids. We identified appropriate compounds based on both experiments and molecular simulations. We synthesized the new ILs, and measured their thermal stability, storage density, viscosity, and thermal conductivity. We found that the most promising compounds for this application are aminopyridinium bis(trifluoromethylsulfonyl)imide based ILs. We also performed some measurements of thermal stability of IL mixtures and used molecular simulations to better understand the thermal conductivity of nanofluids (i.e., mixtures of ILs and nanoparticles). We found that the mixtures do not follow ideal mixture theories and that the addition of nanoparticles to ILs may well have a beneficial influence on the thermal and transport properties of IL-based heat transfer fluids. The second goal was to use ionic liquids in geothermally driven absorption refrigeration systems. We performed copious thermodynamic measurements and modeling of ionic liquid/water systems, including modeling of the absorption refrigeration systems and the resulting coefficients of performance. We explored some IL/organic solvent mixtures as candidates for this application, both with experimentation and molecular simulations. We found that the COPs of all of the IL/water systems were higher than that of the conventional LiBr/H2O system. Thus, IL/water systems appear very attractive for absorption refrigeration applications.

  10. Concentration of atomic hydrogen in a dielectric barrier discharge measured by two-photon absorption fluorescence

    NASA Astrophysics Data System (ADS)

    Dvořák, P.; Talába, M.; Obrusník, A.; Kratzer, J.; Dědina, J.

    2017-08-01

    Two-photon absorption laser-induced fluorescence (TALIF) was utilized for measuring the concentration of atomic hydrogen in a volume dielectric barrier discharge (DBD) ignited in mixtures of Ar, H2 and O2 at atmospheric pressure. The method was calibrated by TALIF of krypton diluted in argon at atmospheric pressure, proving that three-body collisions had a negligible effect on quenching of excited krypton atoms. The diagnostic study was complemented with a 3D numerical model of the gas flow and a zero-dimensional model of the chemistry in order to better understand the reaction kinetics and identify the key pathways leading to the production and destruction of atomic hydrogen. It was determined that the density of atomic hydrogen in Ar-H2 mixtures was on the order of 10²¹ m⁻³ and decreased when oxygen was added into the gas mixture. Spatially resolved measurements and simulations revealed a sharply bordered region with low atomic hydrogen concentration when oxygen was added to the gas mixture. At substoichiometric oxygen/hydrogen ratios, this H-poor region is confined to an area close to the gas inlet and it is shown that the size of this region is not only influenced by the chemistry but also by the gas flow patterns. Experimentally, it was observed that a decrease in H2 concentration in the feeding Ar-H2 mixture led to an increase in H production in the DBD.

  11. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
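
    To make the finite-mixture idea behind GPT models concrete, here is a toy density in which a one-parameter processing tree (a detection probability d) fixes the mixture weights over two latent states, each contributing a Gaussian response-time component. The parameter names and Gaussian settings are illustrative assumptions, not the paper's model of semantic categorization.

    ```python
    # Toy GPT-style density: tree parameter d sets the mixture weights; each latent state
    # contributes a parameterized Gaussian component for the continuous variable (RT).
    import numpy as np
    from scipy.stats import norm

    def gpt_density(rt, d, mu_detect=600.0, mu_guess=900.0, sigma=120.0):
        """Mixture density of response times implied by a one-parameter tree (illustrative)."""
        w_detect, w_guess = d, 1.0 - d          # branch probabilities from the tree structure
        return w_detect * norm.pdf(rt, mu_detect, sigma) + w_guess * norm.pdf(rt, mu_guess, sigma)

    rts = np.array([550.0, 700.0, 980.0])
    print(gpt_density(rts, d=0.7))
    ```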

  12. Inferring Short-Range Linkage Information from Sequencing Chromatograms

    PubMed Central

    Beggel, Bastian; Neumann-Fraune, Maria; Kaiser, Rolf; Verheyen, Jens; Lengauer, Thomas

    2013-01-01

    Direct Sanger sequencing of viral genome populations yields multiple ambiguous sequence positions. It is not straightforward to derive linkage information from sequencing chromatograms, which in turn hampers the correct interpretation of the sequence data. We present a method for determining the variants existing in a viral quasispecies in the case of two nearby ambiguous sequence positions by exploiting the effect of sequence context-dependent incorporation of dideoxynucleotides. The computational model was trained on data from sequencing chromatograms of clonal variants and was evaluated on two test sets of in vitro mixtures. The approach achieved high accuracies in identifying the mixture components of 97.4% on a test set in which the positions to be analyzed are only one base apart from each other, and of 84.5% on a test set in which the ambiguous positions are separated by three bases. In silico experiments suggest two major limitations of our approach in terms of accuracy. First, due to a basic limitation of Sanger sequencing, it is not possible to reliably detect minor variants with a relative frequency of no more than 10%. Second, the model cannot distinguish between mixtures of two or four clonal variants, if one of two sets of linear constraints is fulfilled. Furthermore, the approach requires repetitive sequencing of all variants that might be present in the mixture to be analyzed. Nevertheless, the effectiveness of our method on the two in vitro test sets shows that short-range linkage information of two ambiguous sequence positions can be inferred from Sanger sequencing chromatograms without any further assumptions on the mixture composition. Additionally, our model provides new insights into the established and widely used Sanger sequencing technology. The source code of our method is made available at http://bioinf.mpi-inf.mpg.de/publications/beggel/linkageinformation.zip. PMID:24376502

  13. Computational Modeling of Seismic Wave Propagation Velocity-Saturation Effects in Porous Rocks

    NASA Astrophysics Data System (ADS)

    Deeks, J.; Lumley, D. E.

    2011-12-01

    Compressional and shear velocities of seismic waves propagating in porous rocks vary as a function of the fluid mixture and its distribution in pore space. Although it has been possible to place theoretical upper and lower bounds on the velocity variation with fluid saturation, predicting the actual velocity response of a given rock with fluid type and saturation remains an unsolved problem. In particular, we are interested in predicting the velocity-saturation response to various mixtures of fluids with pressure and temperature, as a function of the spatial distribution of the fluid mixture and the seismic wavelength. This effect is often termed "patchy saturation" in the rock physics community. The ability to accurately predict seismic velocities for various fluid mixtures and spatial distributions in the pore space of a rock is useful for fluid detection, hydrocarbon exploration and recovery, CO2 sequestration and monitoring of many subsurface fluid-flow processes. We create digital rock models with various fluid mixtures, saturations and spatial distributions. We use finite difference modeling to propagate elastic waves of varying frequency content through these digital rock and fluid models to simulate a given lab or field experiment. The resulting waveforms can be analyzed to determine seismic traveltimes, velocities, amplitudes, attenuation and other wave phenomena for variable rock models of fluid saturation and spatial fluid distribution, and variable wavefield spectral content. We show that we can reproduce most of the published effects of velocity-saturation variation, including validating the Voigt and Reuss theoretical bounds, as well as the Hill "patchy saturation" curve. We also reproduce what has been previously identified as Biot dispersion, but which in our models is in fact often seen to be wave multi-pathing and broadband spectral effects. Furthermore, we find that in addition to the dominant seismic wavelength and average fluid patch size, the smoothness of the fluid patches is a critical factor in determining the velocity-saturation response; this is a result that we have not seen discussed in the literature. Most importantly, we can reproduce all of these effects using full elastic wavefield scattering, without the need to resort to more complicated squirt-flow or poroelastic models. This is important because the physical properties and parameters we need to model full elastic wave scattering, and predict a velocity-saturation curve, are often readily available for projects we undertake; this is not the case for poroelastic or squirt-flow models. We can predict this velocity-saturation curve for a specific rock type, fluid mixture distribution and wavefield spectrum.
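
    The Voigt and Reuss bounds and the Hill average mentioned in this abstract can be written down directly. The sketch below mixes two end-member (gas- and water-saturated) rock moduli as a function of water saturation; the end-member moduli and densities are illustrative placeholders, not values from the study.

    ```python
    # Sketch: Voigt and Reuss bounds and the Hill average for the P-wave modulus versus
    # water saturation, mixing the two end-member saturated-rock moduli.
    import numpy as np

    M_gas, M_wat = 9.0e9, 14.0e9        # P-wave moduli of gas- and water-saturated rock [Pa] (illustrative)
    rho_gas, rho_wat = 2.10e3, 2.25e3   # corresponding bulk densities [kg/m^3] (illustrative)

    Sw = np.linspace(0.0, 1.0, 11)      # water saturation
    M_voigt = Sw * M_wat + (1 - Sw) * M_gas              # upper (stiff / patchy) bound
    M_reuss = 1.0 / (Sw / M_wat + (1 - Sw) / M_gas)      # lower (uniform mixing) bound
    M_hill  = 0.5 * (M_voigt + M_reuss)                  # Hill average ("patchy saturation" curve)
    rho     = Sw * rho_wat + (1 - Sw) * rho_gas

    for name, M in [("Voigt", M_voigt), ("Reuss", M_reuss), ("Hill", M_hill)]:
        Vp = np.sqrt(M / rho)
        print(name, np.round(Vp[::5], 1))                # P-velocities at Sw = 0, 0.5, 1
    ```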

  14. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return for the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present the application of the normal mixture distributions model in risk analysis, where we apply it to evaluate the value at risk (VaR) and conditional value at risk (CVaR) with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating value at risk (VaR) and conditional value at risk (CVaR), as it can capture the stylized facts of non-normality and leptokurtosis in the returns distribution.
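
    A short sketch of how VaR and CVaR can be computed once a two-component normal mixture has been fitted to returns: invert the mixture CDF numerically for VaR, and estimate CVaR as the mean loss beyond VaR. The mixture parameters below are illustrative, not the FBMKLCI estimates.

    ```python
    # Sketch: VaR and CVaR of returns modelled by a two-component normal mixture.
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    w     = np.array([0.8, 0.2])         # mixture weights (illustrative)
    mu    = np.array([0.01, -0.03])      # component means of monthly returns
    sigma = np.array([0.04, 0.10])       # component standard deviations
    alpha = 0.05                         # tail probability

    mix_cdf = lambda x: np.sum(w * norm.cdf(x, mu, sigma))
    var_alpha = -brentq(lambda x: mix_cdf(x) - alpha, -1.0, 1.0)   # VaR as a positive loss

    # CVaR by Monte Carlo: mean loss beyond the VaR threshold
    rng = np.random.default_rng(0)
    comp = rng.choice(2, size=200_000, p=w)
    r = rng.normal(mu[comp], sigma[comp])
    cvar_alpha = -r[r <= -var_alpha].mean()

    print(f"VaR_5%  = {var_alpha:.4f}")
    print(f"CVaR_5% = {cvar_alpha:.4f}")
    ```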

  15. Massively parallel sequencing-enabled mixture analysis of mitochondrial DNA samples.

    PubMed

    Churchill, Jennifer D; Stoljarova, Monika; King, Jonathan L; Budowle, Bruce

    2018-02-22

    The mitochondrial genome has a number of characteristics that provide useful information to forensic investigations. Massively parallel sequencing (MPS) technologies offer improvements to the quantitative analysis of the mitochondrial genome, specifically the interpretation of mixed mitochondrial samples. Two-person mixtures with nuclear DNA ratios of 1:1, 5:1, 10:1, and 20:1 of individuals from different and similar phylogenetic backgrounds and three-person mixtures with nuclear DNA ratios of 1:1:1 and 5:1:1 were prepared using the Precision ID mtDNA Whole Genome Panel and Ion Chef, and sequenced on the Ion PGM or Ion S5 sequencer (Thermo Fisher Scientific, Waltham, MA, USA). These data were used to evaluate whether and to what degree MPS mixtures could be deconvolved. Analysis was effective in identifying the major contributor in each instance, while SNPs from the minor contributor's haplotype only were identified in the 1:1, 5:1, and 10:1 two-person mixtures. While the major contributor was identified from the 5:1:1 mixture, analysis of the three-person mixtures was more complex, and the mixed haplotypes could not be completely parsed. These results indicate that mixed mitochondrial DNA samples may be interpreted with the use of MPS technologies.

  16. Differentiation of Ecuadorian National and CCN-51 cocoa beans and their mixtures by computer vision.

    PubMed

    Jimenez, Juan C; Amores, Freddy M; Solórzano, Eddyn G; Rodríguez, Gladys A; La Mantia, Alessandro; Blasi, Paolo; Loor, Rey G

    2018-05-01

    Ecuador exports two major types of cocoa beans, the highly regarded and lucrative National, known for its fine aroma, and the CCN-51 clone type, used in bulk for mass chocolate products. In order to discourage exportation of National cocoa adulterated with CCN-51, a fast and objective methodology for distinguishing between the two types of cocoa beans is needed. This study reports a methodology based on computer vision, which makes it possible to recognize these beans and determine the percentage of their mixture. The methodology was challenged with 336 samples of National cocoa and 127 of CCN-51. By excluding the samples with a low fermentation level and white beans, the model discriminated with a precision higher than 98%. The model was also able to identify and quantify adulterations in 75 export batches of National cocoa and separate out poorly fermented beans. A scientifically reliable methodology able to discriminate between Ecuadorian National and CCN-51 cocoa beans and their mixtures was successfully developed. © 2017 Society of Chemical Industry.

  17. Development of a Rubber-Based Product Using a Mixture Experiment: A Challenging Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaya, Yahya; Piepel, Gregory F.; Caniyilmaz, Erdal

    2013-07-01

    Many products used in daily life are made by blending two or more components. The properties of such products typically depend on the relative proportions of the components. Experimental design, modeling, and data analysis methods for mixture experiments provide for efficiently determining the component proportions that will yield a product with desired properties. This article presents a case study of the work performed to develop a new rubber formulation for an o-ring (a circular gasket) with requirements specified on 10 product properties. Each step of the study is discussed, including: 1) identifying the objective of the study and requirements for properties of the o-ring, 2) selecting the components to vary and specifying the component constraints, 3) constructing a mixture experiment design, 4) measuring the responses and assessing the data, 5) developing property-composition models, 6) selecting the new product formulation, and 7) confirming the selected formulation in manufacturing. The case study includes some challenging and new aspects, which are discussed in the article.
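
    Step 5 above (developing property-composition models) is typically done with a Scheffé polynomial in the component proportions. The sketch below fits a Scheffé quadratic model to a small simplex-lattice design; the design points and responses are illustrative, not the o-ring data.

    ```python
    # Sketch: fitting a Scheffé quadratic mixture model
    #   y = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j   (no intercept, proportions sum to 1)
    import numpy as np
    from itertools import combinations
    from sklearn.linear_model import LinearRegression

    X = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
                  [0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5],
                  [1/3, 1/3, 1/3]])                                  # simplex-lattice proportions
    y = np.array([12.0, 9.5, 15.0, 11.2, 14.1, 13.0, 12.6])          # measured property (illustrative)

    def scheffe_terms(X):
        """Linear blending terms plus all pairwise interaction terms."""
        pairs = [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
        return np.column_stack([X] + pairs)

    model = LinearRegression(fit_intercept=False).fit(scheffe_terms(X), y)
    print(np.round(model.coef_, 2))
    print(model.predict(scheffe_terms(np.array([[0.4, 0.3, 0.3]]))))  # predicted property of a candidate blend
    ```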

  18. Acidolysis of p-coumaric acid with omega-3 oils and antioxidant activity of phenolipid products in in vitro and biological model systems.

    PubMed

    Wang, Jiankang; Shahidi, Fereidoon

    2014-01-15

    Lipase-catalyzed acidolysis of p-coumaric acid with seal blubber oil (SBO) and menhaden oil (MHO) was carried out, followed by identification of major phenolipids in the resultant acidolysis mixture using high-performance liquid chromatography/mass spectrometry. Separation of phenolipid components from the resultant acidolysis mixture was achieved using flash column chromatography. The antioxidant activities of the phenolipids were examined in in vitro assays and biological model systems. The major phenolipids identified from acidolysis mixtures with both SBO and MHO included eight phenolic monoacylglycerols and eight phenolic diacylglycerols. Phenolipids derived from SBO and MHO generally showed good antioxidant potential in the systems tested. The prepared phenolipids exhibited high scavenging capacity toward 1,1-diphenyl-2-picrylhydrazyl (DPPH) and peroxyl radicals and displayed reducing power, strong inhibitory effect on bleaching of β-carotene, human low-density lipoprotein (LDL) cholesterol oxidation, as well as radical-induced DNA cleavage, thus suggesting that phenolipids derived from omega-3 oils may be used as potential stable products for health promotion and disease risk reduction.

  19. A Matter of Classes: Stratifying Health Care Populations to Produce Better Estimates of Inpatient Costs

    PubMed Central

    Rein, David B

    2005-01-01

    Objective: To stratify traditional risk-adjustment models by health severity classes in a way that is empirically based, is accessible to policy makers, and improves predictions of inpatient costs. Data Sources: Secondary data created from the administrative claims from all 829,356 children aged 21 years and under enrolled in Georgia Medicaid in 1999. Study Design: A finite mixture model was used to assign child Medicaid patients to health severity classes. These class assignments were then used to stratify both portions of a traditional two-part risk-adjustment model predicting inpatient Medicaid expenditures. Traditional model results were compared with the stratified model using actuarial statistics. Principal Findings: The finite mixture model identified four classes of children: a majority healthy class and three illness classes with increasing levels of severity. Stratifying the traditional two-part risk-adjustment model by health severity classes improved its R² from 0.17 to 0.25. The majority of additional predictive power resulted from stratifying the second part of the two-part model. Further, the preference for the stratified model was unaffected by months of patient enrollment time. Conclusions: Stratifying health care populations based on measures of health severity is a powerful method to achieve more accurate cost predictions. Insurers who ignore the predictive advances of sample stratification in setting risk-adjusted premiums may create strong financial incentives for adverse selection. Finite mixture models provide an empirically based, replicable methodology for stratification that should be accessible to most health care financial managers. PMID:16033501
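
    A compact sketch of the class-stratified two-part idea: a finite (Gaussian) mixture assigns severity classes, then each class gets its own two-part model (probability of any inpatient cost, and log-cost given use). The data, features, and class count here are synthetic placeholders, not the Georgia Medicaid claims model.

    ```python
    # Sketch: finite-mixture stratification followed by a per-class two-part cost model.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.linear_model import LogisticRegression, LinearRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 6))                                            # claims-derived severity features (synthetic)
    cost = np.exp(rng.normal(6, 1, size=5000)) * (rng.random(5000) < 0.15)    # mostly zeros, as with inpatient costs

    classes = GaussianMixture(n_components=4, random_state=0).fit_predict(X)  # severity classes

    for k in np.unique(classes):
        idx = classes == k
        any_use = (cost[idx] > 0).astype(int)
        part1 = LogisticRegression(max_iter=1000).fit(X[idx], any_use)        # Part 1: P(cost > 0)
        users = idx & (cost > 0)
        part2 = LinearRegression().fit(X[users], np.log(cost[users]))         # Part 2: E[log cost | cost > 0]
        print(f"class {k}: n={idx.sum()}, share with inpatient cost={any_use.mean():.2f}")
    ```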

  20. Mixture toxicity of six sulfonamides and their two transformation products to green algae Scenedesmus vacuolatus and duckweed Lemna minor.

    PubMed

    Białk-Bielińska, Anna; Caban, Magda; Pieczyńska, Aleksandra; Stepnowski, Piotr; Stolte, Stefan

    2017-04-01

    Since humans and ecosystems are continually exposed to a very complex and permanently changing mixture of chemicals, there is increasing concern in the general public about the potential adverse effects they may cause. Among all "emerging pollutants", pharmaceuticals in particular have raised great environmental concern. For these reasons the aim of our study was to evaluate the mixture toxicity of six antimicrobial sulfonamides (SAs) and their two most commonly identified degradation products - sulfanilic acid (SNA) and sulfanilamide (SN) - to limnic green algae Scenedesmus vacuolatus and duckweed Lemna minor. The ecotoxicological data for the single toxicity of SNA and SN towards selected organisms are presented. The concept of Concentration Addition (CA) was applied to estimate the effects, and less than additive effects were observed. In general terms, it seems sufficiently precautionary for the aquatic environment to consider the toxicity of a sulfonamide mixture as additive. The Concentration Addition model proves to be a reasonable worst-case estimation. Such a comparative study on the mixture toxicity of sulfonamides and their transformation products has been presented for the first time. Copyright © 2017 Elsevier Ltd. All rights reserved.
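
    The Concentration Addition prediction used above has a simple closed form: for mixture fractions p_i and single-substance EC50_i, EC50_mix = (sum_i p_i / EC50_i)^-1, and the toxic-unit sum of an exposure indicates whether more than 50% effect is expected. The EC50 values below are illustrative, not the measured sulfonamide data.

    ```python
    # Sketch: Concentration Addition (CA) prediction of a mixture EC50 and toxic units.
    import numpy as np

    p    = np.array([0.25, 0.25, 0.25, 0.25])      # mixture composition (concentration fractions)
    ec50 = np.array([1.2, 3.5, 0.8, 10.0])         # single-substance EC50s [mg/L] (illustrative)

    ec50_mix_ca = 1.0 / np.sum(p / ec50)
    print(f"CA-predicted mixture EC50 = {ec50_mix_ca:.2f} mg/L")

    # Toxic units of an observed exposure: TU > 1 means CA predicts more than 50% effect.
    conc = np.array([0.3, 0.9, 0.2, 2.5])          # measured mixture concentrations [mg/L]
    print(f"toxic units = {np.sum(conc / ec50):.2f}")
    ```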

  1. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.

    2018-03-01

    Two-step approximate models of chemical kinetics of detonation combustion of (i) one hydrocarbon fuel C_nH_m (for example, methane, propane, cyclohexane, etc.) and (ii) multi-fuel gaseous mixtures (∑ a_iC_niH_mi) (for example, mixtures of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures, when the reaction products contain carbon molecules. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The models can also be used for calculating the thermodynamic parameters of the mixture in a state of chemical equilibrium.

  2. ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics

    PubMed Central

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.

    2014-01-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156
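
    To illustrate the ODE constrained mixture idea, the sketch below lets two subpopulations share a simple activation ODE but differ in one kinetic rate; a population snapshot is then modelled as a mixture of the two subpopulation responses plus measurement noise. The ODE, rates, weight, and noise level are illustrative assumptions, not the NGF/Erk1/2 pathway model.

    ```python
    # Sketch: mixture likelihood of single-cell measurements constrained by an ODE model.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.stats import norm

    def signal(t, k):
        """Simple activation ODE dx/dt = k*(1 - x), x(0) = 0, evaluated at times t."""
        sol = solve_ivp(lambda _, x: k * (1.0 - x), (0.0, t[-1]), [0.0], t_eval=t)
        return sol.y[0]

    t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
    mu_fast, mu_slow = signal(t, k=0.20), signal(t, k=0.03)   # subpopulation mean responses
    w, sigma = 0.6, 0.05                                       # mixture weight and measurement noise

    def mixture_loglik(y, ti):
        """Log-likelihood of single-cell measurements y at time index ti under the mixture."""
        return np.sum(np.log(w * norm.pdf(y, mu_fast[ti], sigma)
                             + (1 - w) * norm.pdf(y, mu_slow[ti], sigma)))

    rng = np.random.default_rng(0)
    cells = np.where(rng.random(200) < w, mu_fast[3], mu_slow[3]) + rng.normal(0, sigma, 200)
    print(mixture_loglik(cells, ti=3))
    ```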

  3. Applicability study of classical and contemporary models for effective complex permittivity of metal powders.

    PubMed

    Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien

    2012-01-01

    Microwave thermal processing of metal powders has recently been a topic of a substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of complex permittivity of mixtures and discuss the use of these models for determining effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through the original measurements), and for a tungsten/Teflon mixture (from literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder in the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
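
    Two of the classical mixture rules compared in this record, Maxwell Garnett and Lichtenecker, are straightforward to evaluate. The sketch below applies both to complex permittivities; the host and inclusion permittivities and the volume fractions are illustrative values, not the measured titanium/stearic acid or tungsten/Teflon data.

    ```python
    # Sketch: Maxwell Garnett and Lichtenecker effective-permittivity mixture rules.
    import numpy as np

    def maxwell_garnett(eps_host, eps_incl, f):
        """Maxwell Garnett effective permittivity for volume fraction f of inclusions."""
        num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
        den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
        return eps_host * num / den

    def lichtenecker(eps_host, eps_incl, f):
        """Lichtenecker logarithmic mixing: ln(eps_eff) = f ln(eps_incl) + (1-f) ln(eps_host)."""
        return np.exp(f * np.log(eps_incl) + (1 - f) * np.log(eps_host))

    eps_host = 2.1 - 0.01j       # low-loss dielectric matrix (illustrative)
    eps_incl = 30.0 - 40.0j      # lossy metal-like inclusion (illustrative)
    for f in (0.05, 0.15, 0.30):
        print(f, np.round(maxwell_garnett(eps_host, eps_incl, f), 3),
                 np.round(lichtenecker(eps_host, eps_incl, f), 3))
    ```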

  4. Negative Cognitive Style Trajectories in the Transition to Adolescence

    ERIC Educational Resources Information Center

    Mezulis, Amy; Funasaki, Kristyn; Hyde, Janet Shibley

    2011-01-01

    The development of negative cognitive style was examined in a longitudinal study of 366 community youth. Cognitive style and depressive symptoms were evaluated at ages 11, 13, and 15. Latent growth mixture modeling identified three unique trajectory patterns of negative cognitive style. The "normative" group (71% of the sample) displayed the least…

  5. Optimizing the experimental design using the house mouse (Mus musculus L.) as a model for determining grain feeding preferences

    USDA-ARS?s Scientific Manuscript database

    Technical Abstract: BACKGROUND: There is little research evaluating flavor preferences among wheat varieties. We previously demonstrated that mice exert very strong preferences when given binary mixtures of wheat varieties. We plan to utilize mice to identify varieties and genes associated with pref...

  6. The Chinese High School Student's Stress in the School and Academic Achievement

    ERIC Educational Resources Information Center

    Liu, Yangyang; Lu, Zuhong

    2011-01-01

    In a sample of 466 Chinese high school students, we examined the relationships between Chinese high school students' stress in the school and their academic achievements. Regression mixture modelling identified two different classes of the effects of Chinese high school students' stress on their academic achievements. One class contained 87% of…

  7. Predictors of Latent Trajectory Classes of Physical Dating Violence Victimization

    ERIC Educational Resources Information Center

    Brooks-Russell, Ashley; Foshee, Vangie A.; Ennett, Susan T.

    2013-01-01

    This study identified classes of developmental trajectories of physical dating violence victimization from grades 8 to 12 and examined theoretically-based risk factors that distinguished among trajectory classes. Data were from a multi-wave longitudinal study spanning 8th through 12th grade (n = 2,566; 51.9 % female). Growth mixture models were…

  8. Relations among Chronic Peer Group Rejection, Maladaptive Behavioral Dispositions, and Early Adolescents' Peer Perceptions

    ERIC Educational Resources Information Center

    Ladd, Gary W.; Ettekal, Idean; Kochenderfer-Ladd, Becky; Rudolph, Karen D.; Andrews, Rebecca K.

    2014-01-01

    Adolescents' perceptions of peers' relational characteristics (e.g., support, trustworthiness) were examined for subtypes of youth who evidenced chronic maladaptive behavior, chronic peer group rejection, or combinations of these risk factors. Growth mixture modeling was used to identify subgroups of participants within a normative…

  9. Dissecting a complex chemical stress: chemogenomic profiling of plant hydrolysates

    PubMed Central

    Skerker, Jeffrey M; Leon, Dacia; Price, Morgan N; Mar, Jordan S; Tarjan, Daniel R; Wetmore, Kelly M; Deutschbauer, Adam M; Baumohl, Jason K; Bauer, Stefan; Ibáñez, Ana B; Mitchell, Valerie D; Wu, Cindy H; Hu, Ping; Hazen, Terry; Arkin, Adam P

    2013-01-01

    The efficient production of biofuels from cellulosic feedstocks will require the efficient fermentation of the sugars in hydrolyzed plant material. Unfortunately, plant hydrolysates also contain many compounds that inhibit microbial growth and fermentation. We used DNA-barcoded mutant libraries to identify genes that are important for hydrolysate tolerance in both Zymomonas mobilis (44 genes) and Saccharomyces cerevisiae (99 genes). Overexpression of a Z. mobilis tolerance gene of unknown function (ZMO1875) improved its specific ethanol productivity 2.4-fold in the presence of miscanthus hydrolysate. However, a mixture of 37 hydrolysate-derived inhibitors was not sufficient to explain the fitness profile of plant hydrolysate. To deconstruct the fitness profile of hydrolysate, we profiled the 37 inhibitors against a library of Z. mobilis mutants and we modeled fitness in hydrolysate as a mixture of fitness in its components. By examining outliers in this model, we identified methylglyoxal as a previously unknown component of hydrolysate. Our work provides a general strategy to dissect how microbes respond to a complex chemical stress and should enable further engineering of hydrolysate tolerance. PMID:23774757
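
    The "fitness in hydrolysate as a mixture of fitness in its components" step can be sketched as a regression with residual screening: regress each gene's hydrolysate fitness on its fitness across the single-inhibitor profiles and flag genes with large residuals as candidates affected by unmodelled components. The data below are synthetic; the real study used barcoded Z. mobilis mutant fitness profiles.

    ```python
    # Sketch: model hydrolysate fitness as a weighted combination of component fitness
    # profiles, then look for outlier genes (large negative residuals).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_genes, n_inhibitors = 2000, 37
    F_single = rng.normal(size=(n_genes, n_inhibitors))            # fitness per gene x inhibitor (synthetic)
    weights = rng.dirichlet(np.ones(n_inhibitors))
    f_hydrolysate = F_single @ weights + rng.normal(0, 0.1, n_genes)
    f_hydrolysate[:20] -= 1.5                                       # genes hit by an "unknown" compound

    model = LinearRegression().fit(F_single, f_hydrolysate)
    residuals = f_hydrolysate - model.predict(F_single)
    outliers = np.argsort(residuals)[:20]                           # most negative residuals
    print(outliers)
    ```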

  10. Identification of anxiety sensitivity classes and clinical cut-scores in a sample of adult smokers: results from a factor mixture model.

    PubMed

    Allan, Nicholas P; Raines, Amanda M; Capron, Daniel W; Norr, Aaron M; Zvolensky, Michael J; Schmidt, Norman B

    2014-10-01

    Anxiety sensitivity (AS), a multidimensional construct, has been implicated in the development and maintenance of anxiety and related disorders. Recent evidence suggests that AS is a dimensional-categorical construct within individuals. Factor mixture modeling was conducted in a sample of 579 adult smokers (M age=36.87 years, SD=13.47) to examine the underlying structure. Participants completed the Anxiety Sensitivity Index-3 and were also given a Structured Clinical Interview for DSM-IV-TR. Three classes of individuals emerged, a high AS (5.2% of the sample), a moderate AS (19.0%), and a normative AS class (75.8%). A cut-score of 23 to identify high AS individuals, and a cut-score of 17 to identify moderate-to-high AS individuals were supported in this study. In addition, the odds of having a concurrent anxiety disorder (controlling for other Axis I disorders) were the highest in the high AS class and the lowest in the normative AS class. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClanahan, Richard; De Leon, Phillip L.

    The majority of state-of-the-art speaker recognition systems (SR) utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probabilities and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls’ Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off reduction in computation with a corresponding degradation of equal error rate (EER). As an example, we also reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
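
    A rough sketch of the short-listing idea behind a tree-structured UBM: group the UBM components into a small set of layer-1 nodes, score each frame against the nodes first, and evaluate full component posteriors only under the best-matching nodes. Runnalls' mixture reduction is replaced here by a plain k-means grouping of component means, so this is an illustration of the computation trade-off, not the paper's method.

    ```python
    # Sketch: two-layer component short-listing for UBM posterior/statistics computation.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    features = rng.normal(size=(5000, 12))                     # stand-in acoustic features
    ubm = GaussianMixture(n_components=64, covariance_type="diag", random_state=0).fit(features)

    tree = KMeans(n_clusters=8, random_state=0, n_init=10).fit(ubm.means_)   # layer-1 nodes
    children = [np.where(tree.labels_ == k)[0] for k in range(8)]

    def shortlist_posteriors(x, top_nodes=2):
        """Evaluate component likelihoods only under the closest layer-1 nodes."""
        node_order = np.argsort(np.linalg.norm(tree.cluster_centers_ - x, axis=1))[:top_nodes]
        idx = np.concatenate([children[k] for k in node_order])
        logp = np.array([multivariate_normal.logpdf(x, ubm.means_[i], np.diag(ubm.covariances_[i]))
                         + np.log(ubm.weights_[i]) for i in idx])
        post = np.exp(logp - logp.max())
        return idx, post / post.sum()                          # shortlisted components + posteriors

    idx, post = shortlist_posteriors(features[0])
    print(len(idx), "of 64 components evaluated")
    ```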

  12. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    DOE PAGES

    McClanahan, Richard; De Leon, Phillip L.

    2014-08-20

    The majority of state-of-the-art speaker recognition systems (SR) utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probabilities and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls’ Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off reduction in computation with a corresponding degradation of equal error rate (EER). As an example, we also reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.

  13. Gene expression profiles in rainbow trout, Onchorynchus mykiss, exposed to a simple chemical mixture.

    PubMed

    Hook, Sharon E; Skillman, Ann D; Gopalan, Banu; Small, Jack A; Schultz, Irvin R

    2008-03-01

    Among proposed uses for microarrays in environmental toxicology is the identification of key contributors to toxicity within a mixture. However, it remains uncertain whether the transcriptomic profiles resulting from exposure to a mixture have patterns of altered gene expression that contain identifiable contributions from each toxicant component. We exposed isogenic rainbow trout, Onchorynchus mykiss, to sublethal levels of ethynylestradiol, 2,2',4,4'-tetrabromodiphenyl ether, and chromium VI, or to a mixture of all three toxicants. Fluorescently labeled complementary DNAs (cDNAs) were generated and hybridized against a commercially available Salmonid array spotted with 16,000 cDNAs. Data were analyzed using analysis of variance (p<0.05) with a Benjamini-Hochberg multiple test correction (Genespring [Agilent] software package) to identify up- and downregulated genes. Gene clustering patterns that can be used as "expression signatures" were determined using hierarchical cluster analysis. The gene ontology terms associated with significantly altered genes were also used to identify functional groups that were associated with toxicant exposure. A cross-ontological analytics approach was used to assign functional annotations to genes with "unknown" function. Our analysis indicates that transcriptomic profiles resulting from the mixture exposure resemble those of the individual contaminant exposures, but are not a simple additive list. However, patterns of altered genes representative of each component of the mixture are clearly discernible, and the functional classes of genes altered represent the individual components of the mixture. These findings indicate that the use of microarrays to identify transcriptomic profiles may aid in the identification of key stressors within a chemical mixture, ultimately improving environmental assessment.

  14. General baseline toxicity QSAR for nonpolar, polar and ionisable chemicals and their mixtures in the bioluminescence inhibition assay with Aliivibrio fischeri.

    PubMed

    Escher, Beate I; Baumer, Andreas; Bittermann, Kai; Henneberger, Luise; König, Maria; Kühnert, Christin; Klüver, Nils

    2017-03-22

    The Microtox assay, a bioluminescence inhibition assay with the marine bacterium Aliivibrio fischeri, is one of the most popular bioassays for assessing the cytotoxicity of organic chemicals, mixtures and environmental samples. Most environmental chemicals act as baseline toxicants in this short-term screening assay, which is typically run with only 30 min of exposure duration. Numerous Quantitative Structure-Activity Relationships (QSARs) exist for the Microtox assay for nonpolar and polar narcosis. However, typical water pollutants, which have highly diverse structures covering a wide range of hydrophobicity and speciation from neutral to anionic and cationic, are often outside the applicability domain of these QSARs. To include all types of environmentally relevant organic pollutants we developed a general baseline toxicity QSAR using liposome-water distribution ratios as descriptors. Previous limitations in availability of experimental liposome-water partition constants were overcome by reliable prediction models based on polyparameter linear free energy relationships for neutral chemicals and the COSMOmic model for charged chemicals. With this QSAR and targeted mixture experiments we could demonstrate that ionisable chemicals fall in the applicability domain. Most investigated water pollutants acted as baseline toxicants in this bioassay, with the few outliers identified as uncouplers or reactive toxicants. The main limitation of the Microtox assay is that chemicals with a high melting point and/or high hydrophobicity were outside of the applicability domain because of their low water solubility. We quantitatively derived a solubility cut-off but also demonstrated with mixture experiments that chemicals inactive on their own can contribute to mixture toxicity, which is highly relevant for complex environmental mixtures, where these chemicals may be present at concentrations below the solubility cut-off.

  15. Mechanism-based classification of PAH mixtures to predict carcinogenic potential

    DOE PAGES

    Tilton, Susan C.; Siddens, Lisbeth K.; Krueger, Sharon K.; ...

    2015-04-22

    We have previously shown that relative potency factors and DNA adduct measurements are inadequate for predicting carcinogenicity of certain polycyclic aromatic hydrocarbons (PAHs) and PAH mixtures, particularly those that function through alternate pathways or exhibit greater promotional activity compared to benzo[a]pyrene (BaP). Therefore, we developed a pathway-based approach for classification of tumor outcome after dermal exposure to PAH/mixtures. FVB/N mice were exposed to dibenzo[def,p]chrysene (DBC), BaP or environmental PAH mixtures (Mix 1-3) following a two-stage initiation/promotion skin tumor protocol. Resulting tumor incidence could be categorized by carcinogenic potency as DBC>>BaP=Mix2=Mix3>Mix1=Control, based on statistical significance. Gene expression profiles measured in skin of mice collected 12 h post-initiation were compared to tumor outcome for identification of short-term bioactivity profiles. A Bayesian integration model was utilized to identify biological pathways predictive of PAH carcinogenic potential during initiation. Integration of probability matrices from four enriched pathways (p<0.05) for DNA damage, apoptosis, response to chemical stimulus and interferon gamma signaling resulted in the highest classification accuracy with leave-one-out cross validation. This pathway-driven approach was successfully utilized to distinguish early regulatory events during initiation prognostic for tumor outcome and provides proof-of-concept for using short-term initiation studies to classify carcinogenic potential of environmental PAH mixtures. As a result, these data further provide a ‘source-to-outcome’ model that could be used to predict PAH interactions during tumorigenesis and provide an example of how mode-of-action-based risk assessment could be employed for environmental PAH mixtures.

  16. Mechanism-Based Classification of PAH Mixtures to Predict Carcinogenic Potential.

    PubMed

    Tilton, Susan C; Siddens, Lisbeth K; Krueger, Sharon K; Larkin, Andrew J; Löhr, Christiane V; Williams, David E; Baird, William M; Waters, Katrina M

    2015-07-01

    We have previously shown that relative potency factors and DNA adduct measurements are inadequate for predicting carcinogenicity of certain polycyclic aromatic hydrocarbons (PAHs) and PAH mixtures, particularly those that function through alternate pathways or exhibit greater promotional activity compared to benzo[a]pyrene (BaP). Therefore, we developed a pathway-based approach for classification of tumor outcome after dermal exposure to PAH/mixtures. FVB/N mice were exposed to dibenzo[def,p]chrysene (DBC), BaP, or environmental PAH mixtures (Mix 1-3) following a 2-stage initiation/promotion skin tumor protocol. Resulting tumor incidence could be categorized by carcinogenic potency as DBC > BaP = Mix2 = Mix3 > Mix1 = Control, based on statistical significance. Gene expression profiles measured in skin of mice collected 12 h post-initiation were compared with tumor outcome for identification of short-term bioactivity profiles. A Bayesian integration model was utilized to identify biological pathways predictive of PAH carcinogenic potential during initiation. Integration of probability matrices from four enriched pathways (P < .05) for DNA damage, apoptosis, response to chemical stimulus, and interferon gamma signaling resulted in the highest classification accuracy with leave-one-out cross validation. This pathway-driven approach was successfully utilized to distinguish early regulatory events during initiation prognostic for tumor outcome and provides proof-of-concept for using short-term initiation studies to classify carcinogenic potential of environmental PAH mixtures. These data further provide a 'source-to-outcome' model that could be used to predict PAH interactions during tumorigenesis and provide an example of how mode-of-action-based risk assessment could be employed for environmental PAH mixtures. © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Estimation and Model Selection for Finite Mixtures of Latent Interaction Models

    ERIC Educational Resources Information Center

    Hsu, Jui-Chen

    2011-01-01

    Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to handle if unobserved population heterogeneity exists in the endogenous latent variables of the nonlinear structural equation models. The current study estimates a mixture of latent interaction…

  18. Scale Mixture Models with Applications to Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Qin, Zhaohui S.; Damien, Paul; Walker, Stephen

    2003-11-01

    Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.

  19. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    PubMed

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

    In our earlier work, we have demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we developed a QSPR model of an excess thermodynamic property of binary mixtures, i.e., excess molar volume (V(E)). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V(E)), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
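
    A sketch of the general workflow this record describes: build mixture descriptors from single-component descriptors via simple mixing rules (here a mole-fraction-weighted mean and an absolute-difference term) and fit a neural network to an excess property. The descriptors, mixing rules, and target are synthetic stand-ins, not the published descriptor set or model.

    ```python
    # Sketch: mixture descriptors from mixing rules + neural-network QSPR for an excess property.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_mix, n_desc = 300, 5
    d1 = rng.normal(size=(n_mix, n_desc))          # descriptors of component 1 (synthetic)
    d2 = rng.normal(size=(n_mix, n_desc))          # descriptors of component 2 (synthetic)
    x1 = rng.uniform(0.1, 0.9, size=(n_mix, 1))    # mole fraction of component 1

    X_mix = np.hstack([x1 * d1 + (1 - x1) * d2,    # centroid-type mixing rule
                       np.abs(d1 - d2)])           # interaction-type (difference) rule
    V_E = (X_mix[:, 0] * X_mix[:, 5]) + rng.normal(0, 0.05, n_mix)   # synthetic excess molar volume

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    print(cross_val_score(net, X_mix, V_E, cv=5, scoring="r2").round(2))
    ```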

  20. QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.

    PubMed

    Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng

    2018-05-01

    Antibiotics and pesticides may exist as a mixture in the real environment. The combined effect of a mixture can be either additive or non-additive (synergism and antagonism). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques yielded a coefficient of determination of 0.9366 and a root mean square error of 0.1345 for the QSAR model, which predicted the toxicities of the 45 mixtures showing additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  2. Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution

    NASA Astrophysics Data System (ADS)

    Baldacchino, Tara; Worden, Keith; Rowson, Jennifer

    2017-02-01

    A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.

  3. Development and validation of a metal mixture bioavailability model (MMBM) to predict chronic toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia.

    PubMed

    Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C

    2017-01-01

    Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, we developed in the present study a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7d) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; Dissolved Organic Carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmental (i.e. realistic) metal concentration ratios. Statistical analysis of mixture effects revealed that observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) in 85% of the treatments with less than 20% error. Moreover, the MMBM predicted chronic toxicity of the ternary Ni-Zn-Pb mixture at least as accurately as the toxicity of the individual metal treatments (RMSE_Mix = 16; RMSE_Zn only = 18; RMSE_Ni only = 17; RMSE_Pb only = 23). Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on metal mixture toxicity during chronic exposure and could be used in metal risk assessment frameworks. Copyright © 2016 Elsevier Ltd. All rights reserved.
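
    For reference, the two standard mixture reference models contrasted in this record can be evaluated directly from single-substance concentration-response curves: independent action multiplies the single-substance "non-effects", while concentration addition sums toxic units. The log-logistic parameters below are illustrative, not the fitted daphnid bioavailability models.

    ```python
    # Sketch: independent action (IA) versus concentration addition (CA) for a ternary mixture.
    import numpy as np

    def effect(c, ec50, slope):
        """Single-substance log-logistic concentration-response (fraction of effect, 0-1)."""
        return 1.0 / (1.0 + (ec50 / np.maximum(c, 1e-12)) ** slope)

    ec50  = np.array([20.0, 150.0, 40.0])   # single-metal effect concentrations (illustrative, ug/L)
    slope = np.array([2.0, 1.5, 2.5])
    c     = np.array([8.0, 60.0, 15.0])     # exposure concentrations in the mixture

    # Independent action: combined effect from the product of single-substance "non-effects".
    E_ia = 1.0 - np.prod(1.0 - effect(c, ec50, slope))

    # Concentration addition: sum of toxic units; TU = 1 corresponds to the mixture EC50.
    tu = np.sum(c / ec50)

    print(f"IA-predicted effect = {E_ia:.2f}, CA toxic units = {tu:.2f}")
    ```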

  4. Antagonistic interactions of an arsenic-containing mixture in a multiple organ carcinogenicity bioassay.

    PubMed

    Pott, W A; Benjamin, S A; Yang, R S

    1998-11-27

    Inorganic arsenic (As), 1,2-dichloroethane (DCE), vinyl chloride (VC) and trichloroethylene (TCE) are frequently identified as groundwater contaminants near hazardous waste disposal sites. While the carcinogenicity of each of these chemicals has been extensively studied individually, little information exists regarding their carcinogenic potential in combination. Therefore, we investigated the carcinogenic promoting potential of chemical mixtures containing arsenic, DCE, VC and TCE following multiple initiator administration in a multiple organ carcinogenicity bioassay (N. Ito, T. Shirai, S. Fukushima, Medium-term bioassay for carcinogens using multiorgan models, in: N. Ito, H. Sugano (Eds.), Modification of Tumor Development in Rodents, Prog. Exp. Tumor Res., 33, 41-57, Basel, Karger, 1991). Our results reveal a dose-responsive antagonistic effect of this four-chemical mixture on the development of preneoplastic hepatic lesions (altered hepatocellular foci and glutathione S-transferase pi positive foci) as well as bronchioalveolar hyperplasia and adenoma formation.

  5. Rasch Mixture Models for DIF Detection

    PubMed Central

    Strobl, Carolin; Zeileis, Achim

    2014-01-01

    Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a new score distribution for the Rasch mixture model is introduced here. It ensures the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study, and its application is illustrated in a study of verbal aggression. PMID:29795819

  6. Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data

    ERIC Educational Resources Information Center

    Kim, Su-Young; Kim, Jee-Seon

    2012-01-01

    This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…

  7. Mixture Modeling: Applications in Educational Psychology

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  8. Evaluation of DNA mixtures from database search.

    PubMed

    Chung, Yuk-Ka; Hu, Yue-Qing; Fung, Wing K

    2010-03-01

    With the aim of bridging the gap between DNA mixture analysis and DNA database search, a novel approach is proposed to evaluate the forensic evidence of DNA mixtures when the suspect is identified by the search of a database of DNA profiles. General formulae are developed for the calculation of the likelihood ratio for a two-person mixture under general situations including multiple matches and imperfect evidence. The influence of the prior probabilities on the weight of evidence under the scenario of multiple matches is demonstrated by a numerical example based on Hong Kong data. Our approach is shown to be capable of presenting the forensic evidence of DNA mixtures in a comprehensive way when the suspect is identified through database search.

  9. A Decision Mixture Model-Based Method for Inshore Ship Detection Using High-Resolution Remote Sensing Images

    PubMed Central

    Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun

    2017-01-01

    With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex, gray information and texture features between docked ships and their connected dock regions are indistinguishable, most of the popular detection methods are limited by their calculation efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve the robustness regarding the diversity of ships, a deformable part model (DPM) was employed to train a key part sub-model and a whole ship sub-model. Furthermore, to improve the identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. PMID:28640236

  10. A Decision Mixture Model-Based Method for Inshore Ship Detection Using High-Resolution Remote Sensing Images.

    PubMed

    Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun

    2017-06-22

    With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex, gray information and texture features between docked ships and their connected dock regions are indistinguishable, most of the popular detection methods are limited by their calculation efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve the robustness regarding the diversity of ships, a deformable part model (DPM) was employed to train a key part sub-model and a whole ship sub-model. Furthermore, to improve the identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency.

  11. Hidden drivers of low-dose pharmaceutical pollutant mixtures revealed by the novel GSA-QHTS screening method

    PubMed Central

    Rodea-Palomares, Ismael; Gonzalez-Pleiter, Miguel; Gonzalo, Soledad; Rosal, Roberto; Leganes, Francisco; Sabater, Sergi; Casellas, Maria; Muñoz-Carpena, Rafael; Fernández-Piñas, Francisca

    2016-01-01

    The ecological impacts of emerging pollutants such as pharmaceuticals are not well understood. The lack of experimental approaches for the identification of pollutant effects in realistic settings (that is, low doses, complex mixtures, and variable environmental conditions) supports the widespread perception that these effects are often unpredictable. To address this, we developed a novel screening method (GSA-QHTS) that couples the computational power of global sensitivity analysis (GSA) with the experimental efficiency of quantitative high-throughput screening (QHTS). We present a case study where GSA-QHTS allowed for the identification of the main pharmaceutical pollutants (and their interactions), driving biological effects of low-dose complex mixtures at the microbial population level. The QHTS experiments involved the integrated analysis of nearly 2700 observations from an array of 180 unique low-dose mixtures, representing the most complex and data-rich experimental mixture effect assessment of main pharmaceutical pollutants to date. An ecological scaling-up experiment confirmed that this subset of pollutants also affects typical freshwater microbial community assemblages. Contrary to our expectations and challenging established scientific opinion, the bioactivity of the mixtures was not predicted by the null mixture models, and the main drivers that were identified by GSA-QHTS were overlooked by the current effect assessment scheme. Our results suggest that current chemical effect assessment methods overlook a substantial number of ecologically dangerous chemical pollutants and introduce a new operational framework for their systematic identification. PMID:27617294

  12. Risk assessment of occupational exposure to heavy metal mixtures: a study protocol.

    PubMed

    Omrane, Fatma; Gargouri, Imed; Khadhraoui, Moncef; Elleuch, Boubaker; Zmirou-Navier, Denis

    2018-03-05

    Sfax is a very industrialized city located in the southern region of Tunisia where heavy metals (HMs) pollution is now well established. The health of its residents, mainly those engaged in industrial metals-based activities, is under threat. Indeed, such workers are exposed to a variety of HMs mixtures, and this exposure has cumulative properties. Whereas current HMs exposure assessment is mainly carried out using direct air monitoring approaches, the present study aims to assess health risks associated with chronic occupational exposure to HMs in industry, using a modeling approach that will be validated later on. To this end, two questionnaires were used. The first was an identification/descriptive questionnaire aimed at identifying, for each company, the specific activities, materials used, manufactured products and number of employees exposed. The second related to the job-task of the exposed persons, workplace characteristics (dimensions, ventilation, etc.), type of metals and emission configuration in space and time. Indoor air HMs concentrations were predicted based on the mathematical models generally used to estimate occupational exposure to volatile substances (such as solvents). Later on, and in order to validate the adopted model, air monitoring will be carried out, as well as some biological monitoring aimed at assessing HMs excretion in the urine of workers volunteering to participate. Lastly, an interaction-based hazard index (HI_int) and a decision support tool will be used to carry out the cumulative risk assessment for HMs mixtures. One hundred sixty-one persons working in the 5 participating companies have been identified. Of these, 110 are directly engaged with HMs in the course of the manufacturing process. This model-based prediction of occupational exposure represents an alternative tool that is both time-saving and cost-effective in comparison with direct air monitoring approaches. Following validation of the different models according to job processes, via comparison with direct measurements and exploration of correlations with biological monitoring, these estimates will allow a cumulative risk characterization.
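
    For orientation, the conventional additive hazard index for a metal mixture is the sum of each metal's exposure concentration divided by its occupational exposure limit; the interaction-based HI_int used in this protocol further weights the result by pairwise interaction factors. A minimal sketch of the additive index only, with purely hypothetical exposures and limits:

      # hypothetical 8-h time-weighted-average exposures and limits (mg/m3)
      exposure = {"Pb": 0.02, "Cd": 0.001, "Ni": 0.05}
      limit = {"Pb": 0.05, "Cd": 0.002, "Ni": 0.10}

      hazard_quotients = {m: exposure[m] / limit[m] for m in exposure}
      hazard_index = sum(hazard_quotients.values())  # HI > 1 flags potential concern

      print(hazard_quotients, round(hazard_index, 2))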

  13. Local Solutions in the Estimation of Growth Mixture Models

    ERIC Educational Resources Information Center

    Hipp, John R.; Bauer, Daniel J.

    2006-01-01

    Finite mixture models are well known to have poorly behaved likelihood functions featuring singularities and multiple optima. Growth mixture models may suffer from fewer of these problems, potentially benefiting from the structure imposed on the estimated class means and covariances by the specified growth model. As demonstrated here, however,…

  14. The generalized van der Waals theory of pure fluids and mixtures: Annual report for September 1985 to November 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandler, S.I.

    1986-01-01

    The objective of the work is to use the generalized van der Waals theory, as derived earlier (''The Generalized van der Waals Partition Function I. Basic Theory'' by S.I. Sandler, Fluid Phase Equilibria 19, 233 (1985)) to: (1) understand the molecular level assumptions inherent in current thermodynamic models; (2) use theory and computer simulation studies to test these assumptions; and (3) develop new, improved thermodynamic models based on better molecular level assumptions. From such a fundamental study, thermodynamic models will be developed that will be applicable to mixtures of molecules of widely different size and functionality, as occurs in the processing of heavy oils, coal liquids and other synthetic fuels. An important aspect of our work is to reduce our fundamental theoretical developments to engineering practice through extensive testing and evaluation with experimental data on real mixtures. During the first year of this project important progress was made in the areas specified in the original proposal, as well as several subsidiary areas identified as the work progressed. Some of this work has been written up and submitted for publication. Manuscripts acknowledging DOE support, together with a very brief description, are listed herein.

  15. Poromechanics of compressible charged porous media using the theory of mixtures.

    PubMed

    Huyghe, J M; Molenaar, M M; Baajens, F P T

    2007-10-01

    Osmotic, electrostatic, and/or hydrational swellings are essential mechanisms in the deformation behavior of porous media, such as biological tissues, synthetic hydrogels, and clay-rich rocks. Present theories are restricted to incompressible constituents. This assumption typically fails for bone, in which electrokinetic effects are closely coupled to deformation. An electrochemomechanical formulation of quasistatic finite deformation of compressible charged porous media is derived from the theory of mixtures. The model consists of a compressible charged porous solid saturated with a compressible ionic solution. Four constituents following different kinematic paths are identified: a charged solid and three streaming constituents carrying either a positive, negative, or no electrical charge, which are the cations, anions, and fluid, respectively. The finite deformation model is reduced to infinitesimal theory. In the limiting case without ionic effects, the presented model is consistent with Biot's theory. Viscous drag compression is computed under closed circuit and open circuit conditions. Viscous drag compression is shown to be independent of the storage modulus. A compressible version of the electrochemomechanical theory is formulated. Using material parameter values for bone, the theory predicts a substantial influence of density changes on a viscous drag compression simulation. In the context of quasistatic deformations, conflicts between poromechanics and mixture theory are only semantic in nature.

  16. Automatic detection of key innovations, rate shifts, and diversity-dependence on phylogenetic trees.

    PubMed

    Rabosky, Daniel L

    2014-01-01

    A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes.

  17. Automatic Detection of Key Innovations, Rate Shifts, and Diversity-Dependence on Phylogenetic Trees

    PubMed Central

    Rabosky, Daniel L.

    2014-01-01

    A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes. PMID:24586858

  18. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.

    PubMed

    Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten

    2017-10-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
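
    As a small illustration of the modeling choice discussed here (not the authors' collapsed Gibbs sampler), the von Mises-Fisher log-density on the unit hypersphere has a Bessel-function normalizer; scipy's exponentially scaled Bessel function keeps it numerically stable for large concentrations. The vectors and concentration below are arbitrary stand-ins for standardized fMRI time series.

      import numpy as np
      from scipy.special import ive  # exponentially scaled modified Bessel function

      def vmf_logpdf(x, mu, kappa):
          """log p(x | mu, kappa) for unit vectors x, mu in R^d."""
          d = x.shape[-1]
          nu = d / 2.0 - 1.0
          # log I_nu(kappa) = log(ive(nu, kappa)) + kappa
          log_norm = nu * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi) \
                     - (np.log(ive(nu, kappa)) + kappa)
          return log_norm + kappa * (x @ mu)

      rng = np.random.default_rng(0)
      x = rng.normal(size=100); x /= np.linalg.norm(x)    # standardized "time series"
      mu = rng.normal(size=100); mu /= np.linalg.norm(mu)  # cluster mean direction
      print(vmf_logpdf(x, mu, kappa=50.0))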

  19. Understanding the hydrodynamics of the Congo River

    NASA Astrophysics Data System (ADS)

    O'Loughlin, Fiachra; Bates, Paul

    2014-05-01

    We present the results of the first hydrodynamic model of the middle reach of the Congo Basin, which helps our understanding of the behaviour of the second largest river in the world. In data-sparse areas, hydrodynamic models utilizing a mixture of limited in-situ measurements and remotely sensed datasets can be used to understand and identify key features that control large river systems. Unlike previous hydrodynamic models for the Congo Basin, which concentrated on only a small area, we look at the entire length of the Congo's middle reach and its six main tributaries (Kasai, Ubangai, Sangha, Ruki, Lulonga and Lomami). This corresponds to a drainage area of approximately two and a half million square kilometres and over 5000 kilometres of river channels, and incorporates some of the largest and most important wetlands in the world. The hydrodynamic model is driven by a mixture of in-situ and modelled discharges. In situ measurements are available at five locations. Two were obtained from the Global Runoff Data Centre (GRDC) at Kinshasa and Bangui, and data for Kisangani, Ouesso and Lediba were obtained from local agencies in the Democratic Republic of the Congo and the Republic of Congo. Using the gauging station at Kinshasa as the downstream boundary, the remaining in-situ measurements account for 61 percent of the discharge and represent 72 percent of the total drainage area. Modelled discharges are used to account for the missing discharge and corresponding area. Calibration and validation of the model were undertaken using a mixture of in-situ measurements, discharge and water level at Kinshasa, and water surface heights along the main reach obtained from both laser and radar altimeters. Through the hydrodynamic model we will investigate: how important constraints, identified by a previous study, are to the behaviour of the Congo; what impacts the wetlands have on the Congo Basin; and how the wetlands and main channel interact with each other. Our results will provide new insight into the behaviour of the middle reach of the Congo Basin which otherwise would not be possible without extensive field work.

  20. Widom Lines in Binary Mixtures of Supercritical Fluids.

    PubMed

    Raju, Muralikrishna; Banuti, Daniel T; Ma, Peter C; Ihme, Matthias

    2017-06-08

    Recent experiments on pure fluids have identified distinct liquid-like and gas-like regimes even under supercritical conditions. The supercritical liquid-gas transition is marked by maxima in response functions that define a line emanating from the critical point, referred to as the Widom line. However, the structure of analogous state transitions in mixtures of supercritical fluids has not been determined, and it is not clear whether a Widom line can be identified for binary mixtures. Here, we present the first evidence for the existence of multiple Widom lines in binary mixtures from molecular dynamics simulations. By considering mixtures of noble gases, we show that, depending on the phase behavior, mixtures transition from a liquid-like to a gas-like regime via distinctly different pathways, leading to phase relationships of surprising complexity and variety. Specifically, we show that miscible binary mixtures have behavior analogous to a pure fluid and the supercritical state space is characterized by a single liquid-gas transition. In contrast, immiscible binary mixtures undergo a phase separation in which the clusters transition separately at different temperatures, resulting in multiple distinct Widom lines. The presence of this unique transition behavior emphasizes the complexity of the supercritical state to be expected in high-order mixtures of practical relevance.

  1. Environmentally relevant chemical mixtures of concern in waters of United States tributaries to the Great Lakes

    USGS Publications Warehouse

    Elliott, Sarah M.; Brigham, Mark E.; Kiesling, Richard L.; Schoenfuss, Heiko L.; Jorgenson, Zachary G.

    2018-01-01

    The North American Great Lakes are a vital natural resource that provide fish and wildlife habitat, as well as drinking water and waste assimilation services for millions of people. Tributaries to the Great Lakes receive chemical inputs from various point and nonpoint sources, and thus are expected to have complex mixtures of chemicals. However, our understanding of the co‐occurrence of specific chemicals in complex mixtures is limited. To better understand the occurrence of specific chemical mixtures in the US Great Lakes Basin, surface water from 24 US tributaries to the Laurentian Great Lakes was collected and analyzed for diverse suites of organic chemicals, primarily focused on chemicals of concern (e.g., pharmaceuticals, personal care products, fragrances). A total of 181 samples and 21 chemical classes were assessed for mixture compositions. Basin-wide, 1664 mixtures occurred in at least 25% of sites. The most complex mixtures identified comprised 9 chemical classes and occurred in 58% of sampled tributaries. Pharmaceuticals typically occurred in complex mixtures, reflecting pharmaceutical‐use patterns and wastewater facility outfall influences. Fewer mixtures were identified at lake or lake‐influenced sites than at riverine sites. As mixture complexity increased, the probability of a specific mixture occurring more often than by chance greatly increased, highlighting the importance of understanding source contributions to the environment. This empirically based analysis of mixture composition and occurrence may be used to focus future sampling efforts or mixture toxicity assessments.

  2. Cluster kinetics model for mixtures of glassformers

    NASA Astrophysics Data System (ADS)

    Brenskelle, Lisa A.; McCoy, Benjamin J.

    2007-10-01

    For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.

  3. Own and Friends' Smoking Attitudes and Social Preference as Early Predictors of Adolescent Smoking

    ERIC Educational Resources Information Center

    Otten, Roy; Wanner, Brigitte; Vitaro, Frank; Engels, Rutger C. M. E.

    2008-01-01

    This study examined the role of friends' attitudes in adolescent smoking (N = 203). Growth mixture modeling was used to identify three trajectories of smoking behavior from ages 12 to 14 years: a "low-rate" group, an "increasing-rate" group, and a "high-rate" group. Adolescents' own and their friends' attitudes at age…

  4. The Developmental Trajectories of Depressive Symptoms in Early Adolescence: An Examination of School-Related Factors

    ERIC Educational Resources Information Center

    Wu, Pei-Chen

    2017-01-01

    This study investigated the heterogeneity of depressive symptom trajectories and the roles of school-related factors in predicting the membership of different trajectories in a sample of early adolescents in Taiwan. In all, 870 junior high school students were followed for 3 years. Using growth mixture modeling, the study identified four distinct…

  5. Formulation of 3D Printed Tablet for Rapid Drug Release by Fused Deposition Modeling: Screening Polymers for Drug Release, Drug-Polymer Miscibility and Printability.

    PubMed

    Solanki, Nayan G; Tahsin, Md; Shah, Ankita V; Serajuddin, Abu T M

    2018-01-01

    The primary aim of this study was to identify pharmaceutically acceptable amorphous polymers for producing 3D printed tablets of a model drug, haloperidol, for rapid release by fused deposition modeling. Filaments for 3D printing were prepared by hot melt extrusion at 150°C with 10% and 20% w/w of haloperidol using Kollidon ® VA64, Kollicoat ® IR, Affinisol ™ 15 cP, and HPMCAS either individually or as binary blends (Kollidon ® VA64 + Affinisol ™ 15 cP, 1:1; Kollidon ® VA64 + HPMCAS, 1:1). Dissolution of crushed extrudates was studied at pH 2 and 6.8, and formulations demonstrating rapid dissolution rates were then analyzed for drug-polymer, polymer-polymer and drug-polymer-polymer miscibility by film casting. Polymer-polymer (1:1) and drug-polymer-polymer (1:5:5 and 2:5:5) mixtures were found to be miscible. Tablets with 100% and 60% infill were printed using a MakerBot printer at 210°C, and dissolution tests of tablets were conducted at pH 2 and 6.8. Extruded filaments of Kollidon ® VA64-Affinisol ™ 15 cP mixtures were flexible and had optimum mechanical strength for 3D printing. Tablets containing 10% drug with 60% and 100% infill showed complete drug release at pH 2 in 45 and 120 min, respectively. Relatively high dissolution rates were also observed at pH 6.8. The 1:1-mixture of Kollidon ® VA64 and Affinisol ™ 15 cP was thus identified as a suitable polymer system for 3D printing and rapid drug release. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  6. Protein construct storage: Bayesian variable selection and prediction with mixtures.

    PubMed

    Clyde, M A; Parmigiani, G

    1998-07-01

    Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.

  7. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    PubMed

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan and select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from MBF will suggest domain adaptation, which is changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.

  8. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing

    PubMed Central

    Leong, Siow Hoo

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan and select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from MBF will suggest domain adaptation, which is changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index. PMID:28686634

  9. Evaluating differential effects using regression interactions and regression mixture models

    PubMed Central

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results to those obtained using an interaction term in linear regression. The research questions that each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
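
    To make the contrast concrete, a two-class regression mixture can be fit with a few EM iterations (class responsibilities, then weighted least squares per class), whereas the interaction approach adds a single product term to one regression. The following is a self-contained numerical sketch with arbitrary simulated values, not the procedure used in the paper.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 600
      x = rng.normal(size=n)
      z = rng.random(n) < 0.4                              # latent class membership
      y = np.where(z, 1.0 + 2.0 * x, 1.0 - 0.5 * x) + rng.normal(scale=0.5, size=n)

      X = np.column_stack([np.ones(n), x])
      beta = np.array([[0.0, 1.0], [0.0, -1.0]])           # initial coefficients per class
      sigma = np.array([1.0, 1.0])
      pi = np.array([0.5, 0.5])

      for _ in range(100):
          # E-step: responsibilities under normal regression densities
          dens = np.stack([pi[k] / sigma[k] *
                           np.exp(-0.5 * ((y - X @ beta[k]) / sigma[k]) ** 2)
                           for k in range(2)], axis=1)
          r = dens / dens.sum(axis=1, keepdims=True)
          # M-step: weighted least squares and variance update per class
          for k in range(2):
              w = r[:, k]
              Xw = X * w[:, None]
              beta[k] = np.linalg.solve(X.T @ Xw, Xw.T @ y)
              resid = y - X @ beta[k]
              sigma[k] = np.sqrt((w * resid ** 2).sum() / w.sum())
          pi = r.mean(axis=0)

      print(np.round(beta, 2), np.round(pi, 2))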

  10. The Use of Growth Mixture Modeling for Studying Resilience to Major Life Stressors in Adulthood and Old Age: Lessons for Class Size and Identification and Model Selection.

    PubMed

    Infurna, Frank J; Grimm, Kevin J

    2017-12-15

    Growth mixture modeling (GMM) combines latent growth curve and mixture modeling approaches and is typically used to identify discrete trajectories following major life stressors (MLS). However, GMM is often applied to data that do not meet the statistical assumptions of the model (e.g., within-class normality), and researchers often do not test additional model constraints (e.g., homogeneity of variance across classes), which can lead to incorrect conclusions regarding the number and nature of the trajectories. We evaluate how these methodological assumptions influence trajectory size and identification in the study of resilience to MLS. We use data on changes in subjective well-being and depressive symptoms following spousal loss from the HILDA and HRS. Findings drastically differ when constraining the variances to be homogeneous versus heterogeneous across trajectories, with overextraction being more common when constraining the variances to be homogeneous across trajectories. When the data are non-normally distributed, assuming normality increases the number of latent classes extracted. Our findings show that the assumptions typically underlying GMM are not tenable, influencing trajectory size and identification and, most importantly, misinforming conceptual models of resilience. The discussion focuses on how GMM can be leveraged to effectively examine trajectories of adaptation following MLS and avenues for future research. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Nonlinear Structured Growth Mixture Models in M"plus" and OpenMx

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…

  12. The Potential of Growth Mixture Modelling

    ERIC Educational Resources Information Center

    Muthen, Bengt

    2006-01-01

    The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…

  13. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    PubMed

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
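
    The equivalence can be checked numerically for Poisson components: reweighting each component's mixing probability by its probability of a positive count turns the truncated mixture into an identical mixture of truncated densities. A small sketch of that mapping, with arbitrarily chosen component values:

      import numpy as np
      from scipy.stats import poisson

      lam = np.array([0.8, 3.5])          # component means
      p = np.array([0.6, 0.4])            # mixing weights of the untruncated mixture
      k = np.arange(1, 15)                # positive counts only

      # truncated mixture: mix first, then condition on k >= 1
      f_mix = (p[:, None] * poisson.pmf(k, lam[:, None])).sum(axis=0)
      trunc_mixture = f_mix / (1.0 - (p * poisson.pmf(0, lam)).sum())

      # mixture of truncated densities with reweighted mixing distribution q
      q = p * (1.0 - poisson.pmf(0, lam))
      q /= q.sum()
      mixture_trunc = (q[:, None] * poisson.pmf(k, lam[:, None])
                       / (1.0 - poisson.pmf(0, lam))[:, None]).sum(axis=0)

      print(np.allclose(trunc_mixture, mixture_trunc))   # True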

  14. Development of PBPK Models for Gasoline in Adult and ...

    EPA Pesticide Factsheets

    Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of

  15. Mixture-mixture design for the fingerprint optimization of chromatographic mobile phases and extraction solutions for Camellia sinensis.

    PubMed

    Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S

    2007-07-09

    A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
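
    For reference, the Scheffé special cubic form for a single three-component mixture and the product form of the composite model described above can be sketched as follows (the coefficient symbols are generic, not those fitted in the study):

      y = \sum_{i=1}^{3} \beta_i x_i + \sum_{i<j} \beta_{ij} x_i x_j + \beta_{123}\, x_1 x_2 x_3,
      \qquad
      y_{\mathrm{composite}} = \Big(\sum_i \beta_i x_i + \cdots\Big)\Big(\sum_k \gamma_k w_k + \cdots\Big),

    where x and w are the proportions in the extraction and mobile-phase mixtures, respectively, each summing to one.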

  16. Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.

    2017-10-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) a one-fuel gaseous mixture (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air) are developed; the models for multi-fuel mixtures are proposed for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of the detonation wave in stoichiometric and fuel-rich methane-air mixtures and a stoichiometric methane-oxygen mixture. The dominant detonation cell size determined in the calculations is in good agreement with all known experimental data.

  17. Chemistry of the outer planets: Investigations of the chemical nature of the atmosphere of Titan

    NASA Technical Reports Server (NTRS)

    Scattergood, Thomas W.

    1985-01-01

    It is clear from the experiments that a variety of complex organic molecules can be produced by lightning in a Titan-like gas mixture. The dominant products were found to be acetylene and hydrogen cyanide, with smaller amounts of many other species. Any aerosol produced by lightning-initiated processes will consist of a complex mixture of organic compounds, many of which should easily be identified by pyrolytic gas chromatography. Work will continue to expand the database of molecules produced by lightning and other processes in order to assist in the design of appropriate analytical instruments for the upcoming Saturn/Titan mission and any other planetary probes.

  18. Disentangling incentives effects of insurance coverage from adverse selection in the case of drug expenditure: a finite mixture approach.

    PubMed

    Munkin, Murat K; Trivedi, Pravin K

    2010-09-01

    This paper takes a finite mixture approach to model heterogeneity in incentive and selection effects of drug coverage on total drug expenditure among the Medicare elderly US population. Evidence is found that the positive drug expenditures of the elderly population can be decomposed into two groups different in the identified selection effects and interpreted as relatively healthy with lower average expenditures and relatively unhealthy with higher average expenditures, accounting for approximately 25 and 75% of the population, respectively. Adverse selection into drug insurance appears to be strong for the higher expenditure component and weak for the lower expenditure group. Copyright (c) 2010 John Wiley & Sons, Ltd.

  19. Prediction of molecular separation of polar-apolar mixtures on heterogeneous metal-organic frameworks: HKUST-1.

    PubMed

    Van Assche, Tom R C; Duerinck, Tim; Van der Perre, Stijn; Baron, Gino V; Denayer, Joeri F M

    2014-07-08

    Due to the combination of metal ions and organic linkers and the presence of different types of cages and channels, metal-organic frameworks often possess a large structural and chemical heterogeneity, complicating their adsorption behavior, especially for polar-apolar adsorbate mixtures. By allocating isotherms to individual subunits in the structure, the ideal adsorbed solution theory (IAST) can be adjusted to cope with this heterogeneity. The binary adsorption of methanol and n-hexane on HKUST-1 is analyzed using this segregated IAST (SIAST) approach and offers a significant improvement over the standard IAST model predictions. It identifies the various HKUST-1 cages to have a pronounced polar or apolar adsorptive behavior.

  20. Semi-supervised anomaly detection - towards model-independent searches of new physics

    NASA Astrophysics Data System (ADS)

    Kuusela, Mikael; Vatanen, Tommi; Malmi, Eric; Raiko, Tapani; Aaltonen, Timo; Nagai, Yoshikazu

    2012-06-01

    Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an algorithm based on semi-supervised anomaly detection techniques, which does not require a MC training sample for the signal data. We first model the background using a multivariate Gaussian mixture model. We then search for deviations from this model by fitting to the observations a mixture of the background model and a number of additional Gaussians. This allows us to perform pattern recognition of any anomalous excess over the background. We show by a comparison to neural network classifiers that such an approach is considerably more robust against misspecification of the signal MC than supervised classification. In cases where there is an unexpected signal, a neural network might fail to correctly identify it, while anomaly detection does not suffer from such a limitation. On the other hand, when there are no systematic errors in the training data, both methods perform comparably.
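
    A rough sketch of the fixed-background idea using scikit-learn (not the authors' implementation): fit a Gaussian mixture to background-only events, then fit a larger mixture to the observed events starting from the background components, so that the extra components can absorb an anomalous excess. Component counts and the toy data below are illustrative.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(3)
      background = rng.normal(0.0, 1.0, size=(5000, 2))
      # observations: background plus a small unexpected cluster ("signal")
      observed = np.vstack([rng.normal(0.0, 1.0, size=(4800, 2)),
                            rng.normal(3.0, 0.3, size=(200, 2))])

      # 1. background-only model
      bg = GaussianMixture(n_components=3, random_state=0).fit(background)

      # 2. background plus extra components, initialized at the background solution
      extra = 2
      means_init = np.vstack([bg.means_, rng.normal(0.0, 2.0, size=(extra, 2))])
      full = GaussianMixture(n_components=3 + extra, means_init=means_init,
                             random_state=0).fit(observed)

      # events assigned to the extra components are candidate anomalies
      labels = full.predict(observed)
      print(np.sum(labels >= 3), "events assigned to the extra components")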

  1. Validation and refinement of mixture volumetric material properties identified in superpave monitoring project II : phase II.

    DOT National Transportation Integrated Search

    2015-02-01

    This study was initiated to validate and refine mixture volumetric material properties identified in the : Superpave Monitoring Project II. It has been found that differences in performance are primarily controlled : by differences in gradation and r...

  2. Latent Subgroup Analysis of a Randomized Clinical Trial Through a Semiparametric Accelerated Failure Time Mixture Model

    PubMed Central

    Altstein, L.; Li, G.

    2012-01-01

    Summary This paper studies a semiparametric accelerated failure time mixture model for estimation of a biological treatment effect on a latent subgroup of interest with a time-to-event outcome in randomized clinical trials. Latency is induced because membership is observable in one arm of the trial and unidentified in the other. This method is useful in randomized clinical trials with all-or-none noncompliance when patients in the control arm have no access to active treatment and in, for example, oncology trials when a biopsy used to identify the latent subgroup is performed only on subjects randomized to active treatment. We derive a computational method to estimate model parameters by iterating between an expectation step and a weighted Buckley-James optimization step. The bootstrap method is used for variance estimation, and the performance of our method is corroborated in simulation. We illustrate our method through an analysis of a multicenter selective lymphadenectomy trial for melanoma. PMID:23383608

  3. Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis

    DTIC Science & Technology

    2005-07-25

    analysis. Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery ... spatial resolution permits different materials to be present in the area covered by a single pixel. The linear mixture model says that a pixel reflectance ... in r. In the linear mixture model, r is considered as the linear mixture of m1, m2, …, mP as r = Mα + n (1), where n is included to account for
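
    Under the linear mixture model of Eq. (1), per-pixel abundances can be recovered by constrained least squares; a minimal sketch with a nonnegativity constraint follows (the endmember matrix and pixel values are made up for illustration).

      import numpy as np
      from scipy.optimize import nnls  # nonnegative least squares

      # M: columns are endmember spectra (bands x endmembers), r: observed pixel spectrum
      M = np.array([[0.10, 0.80, 0.30],
                    [0.20, 0.70, 0.35],
                    [0.60, 0.20, 0.40],
                    [0.70, 0.10, 0.45]])
      true_alpha = np.array([0.5, 0.3, 0.2])
      r = M @ true_alpha + 0.01 * np.random.default_rng(0).normal(size=4)

      alpha, residual = nnls(M, r)      # solves min ||M a - r||_2 subject to a >= 0
      alpha /= alpha.sum()              # crude renormalization toward sum-to-one
      print(np.round(alpha, 3), residual)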

  4. Microstructure and hydrogen bonding in water-acetonitrile mixtures.

    PubMed

    Mountain, Raymond D

    2010-12-16

    The connection of hydrogen bonding between water and acetonitrile in determining the microheterogeneity of the liquid mixture is examined using NPT molecular dynamics simulations. Mixtures for six, rigid, three-site models for acetonitrile and one water model (SPC/E) were simulated to determine the amount of water-acetonitrile hydrogen bonding. Only one of the six acetonitrile models (TraPPE-UA) was able to reproduce both the liquid density and the experimental estimates of hydrogen bonding derived from Raman scattering of the CN stretch band or from NMR quadrupole relaxation measurements. A simple modification of the acetonitrile model parameters for the models that provided poor estimates produced hydrogen-bonding results consistent with experiments for two of the models. Of these, only one of the modified models also accurately determined the density of the mixtures. The self-diffusion coefficient of liquid acetonitrile provided a final winnowing of the modified model and the successful, unmodified model. The unmodified model is provisionally recommended for simulations of water-acetonitrile mixtures.

  5. Identification of Allelic Imbalance with a Statistical Model for Subtle Genomic Mosaicism

    PubMed Central

    Xia, Rui; Vattathil, Selina; Scheet, Paul

    2014-01-01

    Genetic heterogeneity in a mixed sample of tumor and normal DNA can confound characterization of the tumor genome. Numerous computational methods have been proposed to detect aberrations in DNA samples from tumor and normal tissue mixtures. Most of these require tumor purities to be at least 10–15%. Here, we present a statistical model to capture information, contained in the individual's germline haplotypes, about expected patterns in the B allele frequencies from SNP microarrays while fully modeling their magnitude, the first such model for SNP microarray data. Our model consists of a pair of hidden Markov models—one for the germline and one for the tumor genome—which, conditional on the observed array data and patterns of population haplotype variation, have a dependence structure induced by the relative imbalance of an individual's inherited haplotypes. Together, these hidden Markov models offer a powerful approach for dealing with mixtures of DNA where the main component represents the germline, thus suggesting natural applications for the characterization of primary clones when stromal contamination is extremely high, and for identifying lesions in rare subclones of a tumor when tumor purity is sufficient to characterize the primary lesions. Our joint model for germline haplotypes and acquired DNA aberration is flexible, allowing a large number of chromosomal alterations, including balanced and imbalanced losses and gains, copy-neutral loss-of-heterozygosity (LOH) and tetraploidy. We found our model (which we term J-LOH) to be superior for localizing rare aberrations in a simulated 3% mixture sample. More generally, our model provides a framework for full integration of the germline and tumor genomes to deal more effectively with missing or uncertain features, and thus extract maximal information from difficult scenarios where existing methods fail. PMID:25166618

  6. Differential correlation for sequencing data.

    PubMed

    Siska, Charlotte; Kechris, Katerina

    2017-01-19

    Several methods have been developed to identify differential correlation (DC) between pairs of molecular features from -omics studies. Most DC methods have only been tested with microarrays and other platforms producing continuous and Gaussian-like data. Sequencing data is in the form of counts, often modeled with a negative binomial distribution making it difficult to apply standard correlation metrics. We have developed an R package for identifying DC called Discordant which uses mixture models for correlations between features and the Expectation Maximization (EM) algorithm for fitting parameters of the mixture model. Several correlation metrics for sequencing data are provided and tested using simulations. Other extensions in the Discordant package include additional modeling for different types of differential correlation, and faster implementation, using a subsampling routine to reduce run-time and address the assumption of independence between molecular feature pairs. With simulations and breast cancer miRNA-Seq and RNA-Seq data, we find that Spearman's correlation has the best performance among the tested correlation methods for identifying differential correlation. Application of Spearman's correlation in the Discordant method demonstrated the most power in ROC curves and sensitivity/specificity plots, and improved ability to identify experimentally validated breast cancer miRNA. We also considered including additional types of differential correlation, which showed a slight reduction in power due to the additional parameters that need to be estimated, but more versatility in applications. Finally, subsampling within the EM algorithm considerably decreased run-time with negligible effect on performance. A new method and R package called Discordant is presented for identifying differential correlation with sequencing data. Based on comparisons with different correlation metrics, this study suggests Spearman's correlation is appropriate for sequencing data, but other correlation metrics are available to the user depending on the application and data type. The Discordant method can also be extended to investigate additional DC types and subsampling with the EM algorithm is now available for reduced run-time. These extensions to the R package make Discordant more robust and versatile for multiple -omics studies.
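
    As a toy illustration of the quantities such a method works from (this is not the Discordant package itself), pairwise Spearman correlations can be computed per group and Fisher z-transformed before any mixture modeling of correlation differences. The counts below are simulated stand-ins for sequencing data.

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(7)
      # hypothetical counts for two features across samples in two groups
      group1 = rng.poisson(lam=[5, 5], size=(40, 2))
      group2 = rng.poisson(lam=[5, 5], size=(40, 2))
      group2[:, 1] += group2[:, 0]      # induce correlation only in group 2

      rho1, _ = spearmanr(group1[:, 0], group1[:, 1])
      rho2, _ = spearmanr(group2[:, 0], group2[:, 1])

      fisher_z = lambda r: np.arctanh(r)   # variance-stabilizing transform
      print(round(rho1, 2), round(rho2, 2), round(fisher_z(rho2) - fisher_z(rho1), 2))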

  7. General mixture item response models with different item response structures: Exposition with an application to Likert scales.

    PubMed

    Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong

    2018-01-10

    This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.

  8. Applications of the Simple Multi-Fluid Model to Correlations of the Vapor-Liquid Equilibrium of Refrigerant Mixtures Containing Carbon Dioxide

    NASA Astrophysics Data System (ADS)

    Akasaka, Ryo

    This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. Therefore, the model can be applied to mixtures where experimental data is limited. Vapor-liquid equilibrium (VLE) of the following seven mixtures has been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2-tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations in calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for the design and simulation of heat pumps and refrigeration systems using these mixtures as working fluids.
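
    In the general multi-fluid framework referred to here, the reduced Helmholtz energy of a mixture is written as an ideal combination of the pure-fluid equations of state plus a departure term; as a sketch of the common form (not of the specific three-parameter model of this study):

      \alpha(\delta,\tau,\bar{x}) = \sum_i x_i \left[ \alpha_i^{0}(\rho,T) + \ln x_i \right]
                                    + \sum_i x_i\, \alpha_i^{r}(\delta,\tau)
                                    + \Delta\alpha^{r}(\delta,\tau,\bar{x}),

    where \delta = \rho/\rho_r(\bar{x}) and \tau = T_r(\bar{x})/T are defined by composition-dependent reducing functions, and the few adjustable parameters enter through the reducing functions and the departure term \Delta\alpha^{r}.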

  9. Use of a glimpsing model to understand the performance of listeners with and without hearing loss in spatialized speech mixtures

    PubMed Central

    Best, Virginia; Mason, Christine R.; Swaminathan, Jayaganesh; Roverud, Elin; Kidd, Gerald

    2017-01-01

    In many situations, listeners with sensorineural hearing loss demonstrate reduced spatial release from masking compared to listeners with normal hearing. This deficit is particularly evident in the “symmetric masker” paradigm in which competing talkers are located to either side of a central target talker. However, there is some evidence that reduced target audibility (rather than a spatial deficit per se) under conditions of spatial separation may contribute to the observed deficit. In this study a simple “glimpsing” model (applied separately to each ear) was used to isolate the target information that is potentially available in binaural speech mixtures. Intelligibility of these glimpsed stimuli was then measured directly. Differences between normally hearing and hearing-impaired listeners observed in the natural binaural condition persisted for the glimpsed condition, despite the fact that the task no longer required segregation or spatial processing. This result is consistent with the idea that the performance of listeners with hearing loss in the spatialized mixture was limited by their ability to identify the target speech based on sparse glimpses, possibly as a result of some of those glimpses being inaudible. PMID:28147587
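
    A rough monaural sketch of the glimpsing idea (the study applies it separately to each ear with its own local criterion): keep only the time-frequency units where the target level exceeds the masker level by some margin. The signals, window settings, and the 3 dB criterion below are illustrative, not the study's values.

      import numpy as np
      from scipy.signal import stft, istft

      fs = 16000
      rng = np.random.default_rng(0)
      target = rng.normal(size=fs)          # stand-ins for target and masker waveforms
      masker = rng.normal(size=fs)

      _, _, T = stft(target, fs=fs, nperseg=512)
      _, _, M = stft(masker, fs=fs, nperseg=512)

      local_criterion_db = 3.0
      snr_db = 20.0 * np.log10(np.abs(T) + 1e-12) - 20.0 * np.log10(np.abs(M) + 1e-12)
      mask = snr_db > local_criterion_db    # "glimpsed" time-frequency units

      _, glimpsed = istft(T * mask, fs=fs, nperseg=512)
      print(f"{mask.mean():.1%} of time-frequency units retained")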

  10. Different Approaches to Covariate Inclusion in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Li, Tongyun; Jiao, Hong; Macready, George B.

    2016-01-01

    The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…

  11. A compressibility based model for predicting the tensile strength of directly compressed pharmaceutical powder mixtures.

    PubMed

    Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J

    2017-10-05

    A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Optimized mixed Markov models for motif identification

    PubMed Central

    Huang, Weichun; Umbach, David M; Ohler, Uwe; Li, Leping

    2006-01-01

    Background Identifying functional elements, such as transcriptional factor binding sites, is a fundamental step in reconstructing gene regulatory networks and remains a challenging issue, largely due to limited availability of training samples. Results We introduce a novel and flexible model, the Optimized Mixture Markov model (OMiMa), and related methods to allow adjustment of model complexity for different motifs. In comparison with other leading methods, OMiMa can incorporate more than the NNSplice's pairwise dependencies; OMiMa avoids model over-fitting better than the Permuted Variable Length Markov Model (PVLMM); and OMiMa requires smaller training samples than the Maximum Entropy Model (MEM). Testing on both simulated and actual data (regulatory cis-elements and splice sites), we found OMiMa's performance superior to the other leading methods in terms of prediction accuracy, required size of training data or computational time. Our OMiMa system, to our knowledge, is the only motif finding tool that incorporates automatic selection of the best model. OMiMa is freely available at [1]. Conclusion Our optimized mixture of Markov models represents an alternative to the existing methods for modeling dependent structures within a biological motif. Our model is conceptually simple and effective, and can improve prediction accuracy and/or computational speed over other leading methods. PMID:16749929
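
    For orientation, the simplest ingredient of such models is a first-order Markov score for a candidate site: transition probabilities estimated from aligned training sites and summed in log space for a new sequence. OMiMa itself optimizes a mixture over model orders, which this hedged sketch (with toy motifs) does not attempt.

      import numpy as np

      BASES = "ACGT"
      IDX = {b: i for i, b in enumerate(BASES)}

      def train_first_order(sites):
          """Estimate initial and transition log-probabilities with add-one smoothing."""
          init = np.ones(4)
          trans = np.ones((4, 4))
          for s in sites:
              init[IDX[s[0]]] += 1
              for a, b in zip(s, s[1:]):
                  trans[IDX[a], IDX[b]] += 1
          return np.log(init / init.sum()), np.log(trans / trans.sum(axis=1, keepdims=True))

      def score(seq, log_init, log_trans):
          ll = log_init[IDX[seq[0]]]
          for a, b in zip(seq, seq[1:]):
              ll += log_trans[IDX[a], IDX[b]]
          return ll

      training_sites = ["ACGTAG", "ACGTTG", "ACGAAG", "TCGTAG"]   # toy aligned motifs
      log_init, log_trans = train_first_order(training_sites)
      print(score("ACGTAG", log_init, log_trans), score("GGGGGG", log_init, log_trans))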

  13. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    PubMed

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.

  14. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    PubMed Central

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulated data were generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under the mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability. PMID:26941699

  15. Solubility modeling of refrigerant/lubricant mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michels, H.H.; Sienel, T.H.

    1996-12-31

    A general model for predicting the solubility properties of refrigerant/lubricant mixtures has been developed based on applicable theory for the excess Gibbs energy of non-ideal solutions. In our approach, flexible thermodynamic forms are chosen to describe the properties of both the gas and liquid phases of refrigerant/lubricant mixtures. After an extensive study of models for describing non-ideal liquid effects, the Wohl-suffix equations, which have been extensively utilized in the analysis of hydrocarbon mixtures, have been developed into a general form applicable to mixtures where one component is a POE lubricant. In the present study we have analyzed several POEs where structural and thermophysical property data were available. Data were also collected from several sources on the solubility of refrigerant/lubricant binary pairs. We have developed a computer code (NISC), based on the Wohl model, that predicts dew point or bubble point conditions over a wide range of composition and temperature. Our present analysis covers mixtures containing up to three refrigerant molecules and one lubricant. The present code can be used to analyze the properties of R-410a and R-407c in mixtures with a POE lubricant. Comparisons with other models, such as the Wilson or modified Wilson equations, indicate that the Wohl-suffix equations yield more reliable predictions for HFC/POE mixtures.

  16. A Process View on Implementing an Antibullying Curriculum: How Teachers Differ and What Explains the Variation

    ERIC Educational Resources Information Center

    Haataja, Anne; Ahtola, Annarilla; Poskiparta, Elisa; Salmivalli, Christina

    2015-01-01

    The present study provides a person-centered view on teachers' adherence to the KiVa antibullying curriculum over a school year. Factor mixture modeling was used to examine how teachers (N = 282) differed in their implementation profiles and multinomial logistic regression was used to identify factors related to these profiles. On the basis of…

  17. Model-Based Clustering of Regression Time Series Data via APECM -- An AECM Algorithm Sung to an Even Faster Beat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Wei-Chen; Maitra, Ranjan

    2011-01-01

    We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results on our simulation experiments show improved performance in both fewer numbers of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
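
    The APECM algorithm itself is not reproduced here; as a rough, hedged sketch of the underlying idea of EM for a finite mixture of regressions (ignoring the autoregressive structure and the data-augmentation storage trick the paper adds), one might write:

      import numpy as np
      from scipy.stats import norm

      def em_mixture_regression(X, y, K=2, n_iter=200, seed=0):
          """Plain EM for a K-component mixture of linear regressions with Gaussian noise."""
          rng = np.random.default_rng(seed)
          n, p = X.shape
          resp = rng.dirichlet(np.ones(K), size=n)           # random initial responsibilities
          betas, sig, pis = np.zeros((K, p)), np.ones(K), np.full(K, 1.0 / K)
          for _ in range(n_iter):
              for k in range(K):                             # M-step: weighted least squares per component
                  sw = np.sqrt(resp[:, k])
                  betas[k], *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
                  r = y - X @ betas[k]
                  sig[k] = np.sqrt(np.sum(resp[:, k] * r**2) / resp[:, k].sum())
                  pis[k] = resp[:, k].mean()
              dens = np.column_stack([pis[k] * norm.pdf(y, X @ betas[k], sig[k]) for k in range(K)])
              resp = dens / dens.sum(axis=1, keepdims=True)  # E-step
          return betas, sig, pis

      # toy usage: two regimes with different intercept/slope
      X = np.column_stack([np.ones(200), np.linspace(0, 1, 200)])
      y = np.where(np.arange(200) < 100, X @ [0.0, 1.0], X @ [3.0, -1.0])
      y = y + 0.1 * np.random.default_rng(1).normal(size=200)
      print(em_mixture_regression(X, y)[0])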

  18. Determination of the combustion behavior for pure components and mixtures using a 20-liter sphere

    NASA Astrophysics Data System (ADS)

    Mashuga, Chad Victor

    1999-11-01

    The safest method to prevent fires and explosions of flammable vapors is to prevent the existence of flammable mixtures in the first place. This methodology requires detailed knowledge of the flammability region as a function of the fuel, oxygen, and nitrogen concentrations. A triangular flammability diagram is the most useful tool to display the flammability region, and to determine if a flammable mixture is present during plant operations. An automated apparatus for assessing the flammability region and for determining the potential effect of confined fuel-air explosions is described. Data derived from the apparatus included the limits of combustion, the maximum combustion pressure, and the deflagration index, or KG. Accurate measurement of these parameters can be influenced by numerous experimental conditions, including igniter energy, humidity, and gas composition. Gas humidity had a substantial effect on the deflagration index, but had little effect on the maximum combustion pressure. Small changes in gas composition had a greater effect on the deflagration index than on the maximum combustion pressure. Both the deflagration indices and the maximum combustion pressure proved insensitive to the range of igniter energies examined. Estimation of flammability limits using a calculated adiabatic flame temperature (CAFT) method is demonstrated. The CAFT model is compared with the extensive experimental data from this work for methane, ethylene, and a 50/50 mixture of methane and ethylene. The CAFT model agrees well with the methane and ethylene data throughout the flammability zone when a 1200 K threshold temperature is used. Deviations between the method and the experimental data occur in the fuel-rich region. For the 50/50 fuel mixture the CAFT deviates only in the fuel-rich region; including carbonaceous soot as one of the equilibrium products improved the fit. Determination of burning velocities from a spherical flame model utilizing the extensive pressure-time data was also completed. The burning velocities determined compare well with those reported by other investigators using this method. The data collected for the methane/ethylene mixture were used to evaluate mixing rules for the flammability limits, maximum combustion pressure, deflagration index, and burning velocity. These rules attempt to predict the behavior of fuel mixtures from pure-component data. Le Chatelier's law and averaging both work well for predicting the flammability boundary in the fuel-lean region and for mixtures of inerted fuel and air. Both methods underestimate the flammability boundary in the fuel-rich region. For a mixture of methane and ethylene, we were unable to identify mixing rules for estimating the maximum combustion pressure and the burning velocity from pure-component data. Averaging the deflagration indices for fuel-air mixtures did provide an adequate estimation of the mixture behavior. Le Chatelier's method overestimated the maximum deflagration index in air but provided a satisfactory estimation in the extreme fuel-lean and fuel-rich regions.
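
    Le Chatelier's mixing rule mentioned above has a simple closed form: the lower flammability limit of a fuel blend is the harmonic mean of the pure-component limits weighted by the fuel-basis mole fractions. A minimal Python sketch follows (the LFL values are approximate handbook figures, not the data measured in this work):

      def le_chatelier_lfl(fuel_fractions, pure_lfls):
          """Lower flammability limit (vol %) of a fuel blend via Le Chatelier's rule:
          LFL_mix = 1 / sum_i (y_i / LFL_i), with y_i the fuel-basis mole fractions (summing to 1)."""
          return 1.0 / sum(y / lfl for y, lfl in zip(fuel_fractions, pure_lfls))

      # 50/50 methane/ethylene blend, using approximate pure-component LFLs of 5.0 and 2.7 vol %
      print(round(le_chatelier_lfl([0.5, 0.5], [5.0, 2.7]), 2))   # about 3.5 vol %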

  19. Spatial clustering of metal and metalloid mixtures in unregulated water sources on the Navajo Nation - Arizona, New Mexico, and Utah, USA.

    PubMed

    Hoover, Joseph H; Coker, Eric; Barney, Yolanda; Shuey, Chris; Lewis, Johnnye

    2018-08-15

    Contaminant mixtures are identified regularly in public and private drinking water supplies throughout the United States; however, the complex and often correlated nature of mixtures makes identification of relevant combinations challenging. This study employed a Bayesian clustering method to identify subgroups of water sources with similar metal and metalloid profiles. Additionally, a spatial scan statistic assessed spatial clustering of these subgroups, and a human health metric was applied to investigate the potential for human toxicity. These methods were applied to a dataset comprising metal and metalloid measurements from unregulated water sources located on the Navajo Nation, in the southwest United States. Results indicated distinct subgroups of water sources with similar contaminant profiles, and some of these subgroups were spatially clustered. Several profiles had metal and metalloid concentrations with potential for human toxicity, including arsenic, uranium, lead, manganese, and selenium. This approach may be useful for identifying mixtures in water sources, evaluating their spatial clustering, and informing toxicological research investigating mixtures. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  20. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    USGS Publications Warehouse

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
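
    For readers unfamiliar with the model being fitted, the Poisson binomial N-mixture likelihood can be written down directly; the sketch below evaluates it by truncating the sum over latent abundance at K (a maximum-likelihood formulation offered for intuition only, not the Bayesian hierarchical fit evaluated in the paper, and the toy counts are placeholders):

      import numpy as np
      from scipy.stats import poisson, binom
      from scipy.optimize import minimize

      def nmix_negloglik(params, counts, K=200):
          """Negative log-likelihood of the Poisson binomial N-mixture model.
          counts: (n_sites, n_visits) array of replicated counts; params = (log lambda, logit p)."""
          lam = np.exp(params[0])
          p = 1.0 / (1.0 + np.exp(-params[1]))
          Ns = np.arange(K + 1)
          ll = 0.0
          for site_counts in counts:
              terms = poisson.pmf(Ns, lam)                  # prior over latent abundance N at the site
              for y in site_counts:
                  terms = terms * binom.pmf(y, Ns, p)       # detection model for each replicate visit
              ll += np.log(terms.sum())
          return -ll

      counts = np.array([[2, 3, 1], [0, 1, 0], [4, 2, 3]])  # toy data: 3 sites, 3 visits
      fit = minimize(nmix_negloglik, x0=[1.0, 0.0], args=(counts,), method="Nelder-Mead")
      print(np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1])))   # estimated lambda and p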

  1. Process Dissociation and Mixture Signal Detection Theory

    ERIC Educational Resources Information Center

    DeCarlo, Lawrence T.

    2008-01-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely…

  2. Investigating Approaches to Estimating Covariate Effects in Growth Mixture Modeling: A Simulation Study

    ERIC Educational Resources Information Center

    Li, Ming; Harring, Jeffrey R.

    2017-01-01

    Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…

  3. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  4. Computational analysis of the Phanerochaete chrysosporium v2.0 genome database and mass spectrometry identification of peptides in ligninolytic cultures reveal complex mixtures of secreted proteins

    Treesearch

    Amber Vanden Wymelenberg; Patrick Minges; Grzegorz Sabat; Diego Martinez; Andrea Aerts; Asaf Salamov; Igor Grigoriev; Harris Shapiro; Nik Putnam; Paula Belinky; Carlos Dosoretz; Jill Gaskell; Phil Kersten; Dan Cullen

    2006-01-01

    The white-rot basidiomycete Phanerochaete chrysosporium employs extracellular enzymes to completely degrade the major polymers of wood: cellulose, hemicellulose, and lignin. Analysis of a total of 10,048 v2.1 gene models predicts 769 secreted proteins, a substantial increase over the 268 models identified in the earlier database (v1.0). Within the v2.1 ‘computational...

  5. Biochemometrics for Natural Products Research: Comparison of Data Analysis Approaches and Application to Identification of Bioactive Compounds.

    PubMed

    Kellogg, Joshua J; Todd, Daniel A; Egan, Joseph M; Raja, Huzefa A; Oberlies, Nicholas H; Kvalheim, Olav M; Cech, Nadja B

    2016-02-26

    A central challenge of natural products research is assigning bioactive compounds from complex mixtures. The gold standard approach to address this challenge, bioassay-guided fractionation, is often biased toward abundant, rather than bioactive, mixture components. This study evaluated the combination of bioassay-guided fractionation with untargeted metabolite profiling to improve active component identification early in the fractionation process. Key to this methodology was statistical modeling of the integrated biological and chemical data sets (biochemometric analysis). Three data analysis approaches for biochemometric analysis were compared, namely, partial least-squares loading vectors, S-plots, and the selectivity ratio. Extracts from the endophytic fungi Alternaria sp. and Pyrenochaeta sp. with antimicrobial activity against Staphylococcus aureus served as test cases. Biochemometric analysis incorporating the selectivity ratio performed best in identifying bioactive ions from these extracts early in the fractionation process, yielding altersetin (3, MIC 0.23 μg/mL) and macrosphelide A (4, MIC 75 μg/mL) as antibacterial constituents from Alternaria sp. and Pyrenochaeta sp., respectively. This study demonstrates the potential of biochemometrics coupled with bioassay-guided fractionation to identify bioactive mixture components. A benefit of this approach is the ability to integrate multiple stages of fractionation and bioassay data into a single analysis.

  6. Approximation of the breast height diameter distribution of two-cohort stands by mixture models I Parameter estimation

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2013-01-01

    This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures for estimating the parameters of mixture distributions and analysed a variety of mixture models to approximate empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...

  7. Detection of mastitis in dairy cattle by use of mixture models for repeated somatic cell scores: a Bayesian approach via Gibbs sampling.

    PubMed

    Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B

    2003-11-01

    The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
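
    A heavily stripped-down version of such a sampler helps make the machinery concrete. The Python sketch below runs Gibbs sampling for a two-component normal mixture with a common variance and no random effects (so it omits the heteroscedastic residuals and the genetic and permanent-environment terms that are central to the paper); priors, starting values, and the toy data are arbitrary illustrative choices:

      import numpy as np

      def gibbs_normal_mixture(y, n_iter=2000, tau2=100.0, a0=2.0, b0=1.0, seed=42):
          """Gibbs sampler for a two-component normal mixture with shared variance.
          Priors: mu_k ~ N(0, tau2), sigma^2 ~ InvGamma(a0, b0), mixing proportion ~ Beta(1, 1)."""
          rng = np.random.default_rng(seed)
          n = len(y)
          mu = np.array([y.min(), y.max()], dtype=float)     # crude starting values for the two means
          sigma2, pi = y.var(), 0.5
          draws = []
          for _ in range(n_iter):
              d0 = (1 - pi) * np.exp(-0.5 * (y - mu[0]) ** 2 / sigma2)
              d1 = pi * np.exp(-0.5 * (y - mu[1]) ** 2 / sigma2)
              z = rng.random(n) < d1 / (d0 + d1)             # component memberships ("healthy" vs "diseased")
              for k, members in enumerate([~z, z]):          # conjugate normal update of each mean
                  prec = members.sum() / sigma2 + 1.0 / tau2
                  mu[k] = rng.normal((y[members].sum() / sigma2) / prec, np.sqrt(1.0 / prec))
              resid = y - np.where(z, mu[1], mu[0])          # conjugate inverse-gamma update of sigma^2
              sigma2 = 1.0 / rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * np.sum(resid ** 2)))
              pi = rng.beta(1 + z.sum(), 1 + n - z.sum())    # conjugate beta update of the mixing proportion
              draws.append((mu.copy(), sigma2, pi))
          return draws

      y = np.concatenate([np.random.default_rng(0).normal(3, 1, 300),
                          np.random.default_rng(1).normal(6, 1, 100)])
      samples = gibbs_normal_mixture(y)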

  8. Microsiemens or Milligrams: Measures of Ionic Mixtures ...

    EPA Pesticide Factsheets

    In December of 2016, EPA released the Draft Field-Based Methods for Developing Aquatic Life Criteria for Specific Conductivity for public comment. Once final, states and authorized tribes may use these methods to derive field-based ecoregional Aquatic Life Ambient Water Quality Criteria (AWQC) for specific conductivity (SC) in flowing waters. The methods provide flexible approaches for developing science-based SC criteria that reflect ecoregional or state-specific factors. The concentration of a dissolved salt mixture can be measured in a number of ways, including measurement of total dissolved solids, freezing point depression, refractive index, density, or the sum of the concentrations of individually measured ions. For the draft method, SC was selected as the measure because SC is a measure of all ions in the mixture; the measurement technology is fast, inexpensive, and accurate; and it measures only dissolved ions. When developing water quality criteria for major ions, some stakeholders may prefer to identify the ionic constituents as a measure of exposure instead of SC. A field-based method was used to derive example chronic and acute water quality criteria for SC and for two anions (bicarbonate plus sulfate, [HCO3−] + [SO42−], in mg/L) that represent common ion mixtures in streams. These two anions are sufficient to model the ion mixture and SC (R2 = 0.94). Using [HCO3−] + [SO42−] does not imply that these two anions are the

  9. Testing job typologies and identifying at-risk subpopulations using factor mixture models.

    PubMed

    Keller, Anita C; Igic, Ivana; Meier, Laurenz L; Semmer, Norbert K; Schaubroeck, John M; Brunner, Beatrice; Elfering, Achim

    2017-10-01

    Research in occupational health psychology has tended to focus on the effects of single job characteristics or various job characteristics combined into 1 factor. However, such a variable-centered approach does not account for the clustering of job attributes among groups of employees. We addressed this issue by using a person-centered approach to (a) investigate the occurrence of different empirical constellations of perceived job stressors and resources and (b) validate the meaningfulness of profiles by analyzing their association with employee well-being and performance. We applied factor mixture modeling to identify profiles in 4 large samples consisting of employees in Switzerland (Studies 1 and 2) and the United States (Studies 3 and 4). We identified 2 profiles that spanned the 4 samples, with 1 reflecting a combination of relatively low stressors and high resources (P1) and the other relatively high stressors and low resources (P3). The profiles differed mainly in terms of their organizational and social aspects. Employees in P1 reported significantly higher mean levels of job satisfaction, performance, and general health, and lower means in exhaustion compared with P3. Additional analyses showed differential relationships between job attributes and outcomes depending on profile membership. These findings may benefit organizational interventions as they show that perceived work stressors and resources more strongly influence satisfaction and well-being in particular profiles. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach.

    Treesearch

    Rafal Podlaski; Francis Roesch

    2014-01-01

    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...

  11. A general mixture model and its application to coastal sandbar migration simulation

    NASA Astrophysics Data System (ADS)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles separately: a modified k-ɛ model is used to describe the fluid turbulence, while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for the numerical solution. The model is validated against suspended sediment motion in steady open channel flows, in both equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity, and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones within a vertical 2D framework, coupled with a VOF method for describing the water-air free surface and a topography-change model. The bed-load transport rate and the suspended-load entrainment rate are both determined by the seabed shear stress, which is obtained from the boundary-layer-resolving mixture model. The simulation results indicate that, under small-amplitude regular waves, erosion occurs on the sandbar slope facing against the direction of wave propagation, while deposition dominates on the slope facing the direction of wave propagation, indicating an onshore migration tendency. The results also show that the suspended load makes a substantial contribution to the topography change in the surf zone, which has often been neglected in previous research.

  12. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio, E-mail: drpqam@cid.csic.es

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are exposed daily to mixtures of chemicals disrupting thyroid gland function (thyroid gland function disruptors, TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for potential additive or synergistic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures. The exception was the mixture of MMI (a TPO inhibitor) and KClO4 (an NIS inhibitor) dosed at a fixed ratio of EC10, which provided similar CA and RA predictions, so it was difficult to reach a conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition could be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergistic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to better predict the effect of mixtures of goitrogens.
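
    The two reference models named here have simple textbook forms, sketched below in Python; the inputs are generic placeholders rather than the MMI/KClO4 data of the study. Concentration addition gives the mixture concentration expected to produce effect level x from the components' individual ECx values and their mixture fractions, while response (independent) addition combines the fractional effects the components would produce alone:

      import numpy as np

      def ca_mixture_ecx(fractions, ecx_values):
          """Concentration addition: ECx of the mixture = 1 / sum_i (p_i / ECx_i),
          with p_i the relative fractions of the components in the mixture (summing to 1)."""
          return 1.0 / np.sum(np.asarray(fractions) / np.asarray(ecx_values))

      def ra_combined_effect(individual_effects):
          """Response (independent) addition: E_mix = 1 - prod_i (1 - E_i),
          with E_i the fractional effect each component would cause alone at its dose in the mixture."""
          return 1.0 - np.prod(1.0 - np.asarray(individual_effects))

      print(ca_mixture_ecx([0.5, 0.5], [1.0, 4.0]))      # mixture ECx under concentration addition
      print(ra_combined_effect([0.10, 0.10]))            # 0.19 under response addition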

  13. Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong

    2018-03-01

    The traffic safety research has developed spatiotemporal models to explore the variations in the spatial pattern of crash risk over time. Many studies observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research for the comparison of different temporal treatments and their interaction with spatial component. This study developed four spatiotemporal models with varying complexity due to the different temporal treatments such as (I) linear time trend; (II) quadratic time trend; (III) Autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction which allows greater flexibility compared to the traditional linear space-time interaction. The mixture component allows the accommodation of global space-time interaction as well as the departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of mixture models based on the diverse criteria pertaining to goodness-of-fit, cross-validation and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification which was evidently more complex due to the addition of information borrowed from neighboring years, but this addition of parameters allowed significant advantage at posterior deviance which subsequently benefited overall fit to crash data. The Base models were also developed to study the comparison between the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models due to the advantages of much lower deviance. For cross-validation comparison of predictive accuracy, linear time trend model was adjudged the best as it recorded the highest value of log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data for model development. Under each criterion, observed crash counts were compared with three types of data containing Bayesian estimated, normal predicted, and model replicated ones. The linear model again performed the best in most scenarios except one case of using model replicated data and two cases involving prediction without including random effects. These phenomena indicated the mediocre performance of linear trend when random effects were excluded for evaluation. This might be due to the flexible mixture space-time interaction which can efficiently absorb the residual variability escaping from the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models as the mixture ones generated more precise estimated crash counts across all four models, suggesting that the advantages associated with mixture component at model fit were transferable to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of random effect models which validates the importance of incorporating the correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    PubMed

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  15. Exploring the plasma chemistry in microwave chemical vapor deposition of diamond from C/H/O gas mixtures.

    PubMed

    Kelly, Mark W; Richley, James C; Western, Colin M; Ashfold, Michael N R; Mankelevich, Yuri A

    2012-09-27

    Microwave (MW)-activated CH4/CO2/H2 gas mixtures operating under conditions relevant to diamond chemical vapor deposition (i.e., X(C/Σ) = X(elem,C)/(X(elem,C) + X(elem,O)) ≈ 0.5, H2 mole fraction = 0.3, pressure p = 150 Torr, and input power P = 1 kW) have been explored in detail by a combination of spatially resolved absorption measurements (of CH, C2(a), and OH radicals and H(n = 2) atoms) within the hot plasma region and companion 2-dimensional modeling of the plasma. CO and H2 are identified as the dominant species in the plasma core. The lower thermal conductivity of such a mixture (cf. the H2-rich plasmas used in most diamond chemical vapor deposition) accounts for the finding that CH4/CO2/H2 plasmas can yield similar maximal gas temperatures and diamond growth rates at lower input powers than traditional CH4/H2 plasmas. The plasma chemistry and composition are seen to switch upon changing from oxygen-rich (X(C/Σ) < 0.5) to carbon-rich (X(C/Σ) > 0.5) source gas mixtures and, by comparing CH4/CO2/H2 (X(C/Σ) = 0.5) and CO/H2 plasmas, to be sensitive to the choice of source gas (by virtue of the different prevailing gas activation mechanisms), in contrast to C/H process gas mixtures. CH3 radicals are identified as the most abundant C1Hx (x = 0-3) species near the growing diamond surface within the process window for successful diamond growth (X(C/Σ) ≈ 0.5-0.54) identified by Bachmann et al. (Diamond Relat. Mater. 1991, 1, 1). This, and the findings of maximal gas temperatures (T(gas) ≈ 2800-3000 K) and H atom mole fractions (X(H) ≈ 5-10%) similar to those found in MW-activated C/H plasmas, points to the prevalence of similar CH3-radical-based diamond growth mechanisms in both C/H and C/H/O plasmas.

  16. Mimosa: Mixture Model of Co-expression to Detect Modulators of Regulatory Interaction

    NASA Astrophysics Data System (ADS)

    Hansen, Matthew; Everett, Logan; Singh, Larry; Hannenhalli, Sridhar

    Functionally related genes tend to be correlated in their expression patterns across multiple conditions and/or tissue types. Thus co-expression networks are often used to investigate functional groups of genes. In particular, when one of the genes is a transcription factor (TF), the co-expression-based interaction is interpreted, with caution, as a direct regulatory interaction. However, any particular TF, and more importantly, any particular regulatory interaction, is likely to be active only in a subset of experimental conditions. Moreover, the subset of expression samples where the regulatory interaction holds may be marked by the presence or absence of a modifier gene, such as an enzyme that post-translationally modifies the TF. Such subtlety of regulatory interactions is overlooked when one computes an overall expression correlation. Here we present a novel mixture modeling approach in which a TF-gene pair is presumed to be significantly correlated (with unknown coefficient) in an (unknown) subset of expression samples. The parameters of the model are estimated using a maximum likelihood approach. The estimated mixture of expression samples is then mined to identify genes potentially modulating the TF-gene interaction. We have validated our approach using synthetic data and on three biological cases in cow and in yeast. While limited in some ways, as discussed, the work represents a novel approach to mine expression data and detect potential modulators of regulatory interactions.

  17. Contaminant source identification using semi-supervised machine learning

    NASA Astrophysics Data System (ADS)

    Vesselinov, Velimir V.; Alexandrov, Boian S.; O'Malley, Daniel

    2018-05-01

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).
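
    The full NMFk workflow (repeated NMF runs plus semi-supervised clustering to select the number of sources) is beyond a short example, but the core factorization step can be sketched with scikit-learn; the synthetic data, the choice of three sources, and the initialization below are illustrative assumptions only:

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      true_sources = rng.uniform(size=(3, 8))          # 3 hypothetical source signatures, 8 constituents
      mixing = rng.dirichlet(np.ones(3), size=50)      # unknown mixing ratios for 50 observations
      X = mixing @ true_sources                        # observed (non-negative) geochemical mixtures

      model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
      W = model.fit_transform(X)                       # estimated per-observation mixing contributions
      H = model.components_                            # estimated source signatures
      print(np.linalg.norm(X - W @ H))                 # reconstruction error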

  18. Contaminant source identification using semi-supervised machine learning

    DOE PAGES

    Vesselinov, Velimir Valentinov; Alexandrov, Boian S.; O’Malley, Dan

    2017-11-08

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. Finally, the NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).

  19. Contaminant source identification using semi-supervised machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesselinov, Velimir Valentinov; Alexandrov, Boian S.; O’Malley, Dan

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. Finally, the NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).

  20. Recovery of isopropyl alcohol from waste solvent of a semiconductor plant.

    PubMed

    Lin, Sheng H; Wang, Chuen S

    2004-01-30

    An important waste solvent generated in the semiconductor manufacturing process was characterized by a high isopropyl alcohol (IPA) concentration over 65%, other organic pollutants, and strong color. Because of these characteristics, IPA recovery was deemed a logical choice for tackling this waste solvent. In the present work, an integrated method consisting of air stripping in conjunction with condensation and packed activated carbon fiber (ACF) adsorption was developed for dealing with this waste solvent. Air stripping with proper stripping temperature control was employed to remove IPA from the waste solvent, and the IPA vapor in the gas mixture was condensed out in a side condenser. The residual IPA remaining in the gas mixture exiting the side condenser was efficiently removed in a packed ACF column. Air stripping with condensation was able to recover up to 93% of the total IPA in the initial waste solvent. The residual IPA in the gas mixture, representing less than 3% of the initial IPA, was efficiently captured in the packed ACF column. Experimental tests were conducted to examine the performance of each unit and to identify the optimum operating conditions. Theoretical modeling of the experimental IPA breakthrough curves was also undertaken using a macroscopic model. The verified breakthrough model significantly facilitates the adsorption column design. The recovered IPA was found to be of high purity and could be considered for reuse. Copyright 2003 Elsevier B.V.

  1. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
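
    To make the exponential-mixture fit concrete, a small EM routine for a two-component exponential mixture (complete, uncensored durations assumed, unlike general survival settings) and the corresponding survival function might look as follows; this is a generic sketch with toy data, not the urn-model derivation or the query data set of the paper:

      import numpy as np

      def em_exponential_mixture(t, n_iter=500):
          """EM for S(t) = w*exp(-lam1*t) + (1-w)*exp(-lam2*t) fitted to complete durations t."""
          t = np.asarray(t, dtype=float)
          w, lam = 0.5, np.array([2.0 / t.mean(), 0.5 / t.mean()])   # deliberately separated starting rates
          for _ in range(n_iter):
              d1 = w * lam[0] * np.exp(-lam[0] * t)
              d2 = (1 - w) * lam[1] * np.exp(-lam[1] * t)
              r = d1 / (d1 + d2)                                     # responsibility of component 1
              w = r.mean()
              lam[0] = r.sum() / np.sum(r * t)
              lam[1] = (1 - r).sum() / np.sum((1 - r) * t)
          return w, lam

      def survival(t, w, lam):
          return w * np.exp(-lam[0] * t) + (1 - w) * np.exp(-lam[1] * t)

      rng = np.random.default_rng(0)
      data = np.concatenate([rng.exponential(1.0, 700), rng.exponential(10.0, 300)])
      w_hat, lam_hat = em_exponential_mixture(data)
      print(w_hat, lam_hat, survival(5.0, w_hat, lam_hat))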

  2. Structure-reactivity modeling using mixture-based representation of chemical reactions.

    PubMed

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

    We describe a novel approach to reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating the product and reactant descriptors or taking the difference between the descriptors of products and reactants. This reaction representation does not require explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimation of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control applicability domain approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
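
    The encoding described above reduces to simple vector algebra once per-molecule descriptors are available. A hedged Python sketch follows, using plain sums of per-molecule vectors as stand-ins for the SiRMS mixture descriptors of the paper; the toy descriptor values are placeholders:

      import numpy as np

      def reaction_vector(reactant_descriptors, product_descriptors, mode="concat"):
          """Mixture-based reaction representation: combine the reactant-mixture and product-mixture
          descriptor vectors either by concatenation or by the product-minus-reactant difference."""
          r = np.sum(np.asarray(reactant_descriptors, dtype=float), axis=0)
          p = np.sum(np.asarray(product_descriptors, dtype=float), axis=0)
          return np.concatenate([r, p]) if mode == "concat" else p - r

      # two reactant molecules and two product molecules, each with a 4-element descriptor vector
      reactants = [[1, 0, 2, 1], [0, 1, 1, 0]]
      products = [[1, 1, 1, 1], [0, 0, 2, 0]]
      print(reaction_vector(reactants, products, mode="concat"))
      print(reaction_vector(reactants, products, mode="diff"))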

  3. Mixis and Diagnôsis: Aristotle and the "Chemistry" of the Sublunary World.

    PubMed

    Viano, Cristina

    2015-08-01

    In On Generation and Corruption 1.10, Aristotle introduces the new idea of "chemical mixture" (mixis) to explain the constitution of those homogeneous substances from which all things in the sublunary world are comprised. In a mixture, the ingredients interact with one another to give rise to a new substance, qualitatively different, yet preserving the original ingredients in potentia, so that they can be separated again. In Book IV of the Meteorologica, Aristotle further suggests that bodies may be "diagnosed" according to certain passive properties, such as the fusibility of metals. While his theory of mixture has often led historians of science to identify Aristotle as one of the precursors of chemical science, his ideas have also been criticised as archaic, and implicated in a qualitative conception of the cosmos that delayed progress towards quantifying natural phenomena. In this paper, I take up the defence of Aristotle's theory by showing that his concept of mixture is not an obstacle to the development of natural science and chemistry, but, on the contrary, opens the way by offering an advanced model of qualitative analysis which does not exclude the possibility of quantitative development.

  4. Use of Mixture Designs to Investigate Contribution of Minor Sex Pheromone Components to Trap Catch of the Carpenterworm Moth, Chilecomadia valdiviana.

    PubMed

    Lapointe, Stephen L; Barros-Parada, Wilson; Fuentes-Contreras, Eduardo; Herrera, Heidy; Kinsho, Takeshi; Miyake, Yuki; Niedz, Randall P; Bergmann, Jan

    2017-12-01

    Field experiments were carried out to study responses of male moths of the carpenterworm, Chilecomadia valdiviana (Lepidoptera: Cossidae), a pest of tree and fruit crops in Chile, to five compounds previously identified from the pheromone glands of females. Previously, attraction of males to the major component, (7Z,10Z)-7,10-hexadecadienal, was clearly demonstrated while the role of the minor components was uncertain due to the use of an experimental design that left large portions of the design space unexplored. We used mixture designs to study the potential contributions to trap catch of the four minor pheromone components produced by C. valdiviana. After systematically exploring the design space described by the five pheromone components, we concluded that the major pheromone component alone is responsible for attraction of male moths in this species. The need for appropriate experimental designs to address the problem of assessing responses to mixtures of semiochemicals in chemical ecology is described. We present an analysis of mixture designs and response surface modeling and an explanation of why this approach is superior to commonly used, but statistically inappropriate, designs.

  5. An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol

    2016-01-01

    The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…

  6. A Systematic Investigation of Within-Subject and Between-Subject Covariance Structures in Growth Mixture Models

    ERIC Educational Resources Information Center

    Liu, Junhui

    2012-01-01

    The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…

  7. Effects of three veterinary antibiotics and their binary mixtures on two green alga species.

    PubMed

    Carusso, S; Juárez, A B; Moretton, J; Magdaleno, A

    2018-03-01

    The individual and combined toxicities of chlortetracycline (CTC), oxytetracycline (OTC) and enrofloxacin (ENF) have been examined in two green algae representative of the freshwater environment, the international standard strain Pseudokirchneriella subcapitata and the native strain Ankistrodesmus fusiformis. The toxicities of the three antibiotics and their mixtures were similar in both strains, although low concentrations of ENF and CTC + ENF were more toxic to A. fusiformis than to the standard strain. The toxicological interactions of the binary mixtures were predicted using the two classical models of additivity, Concentration Addition (CA) and Independent Action (IA), and compared to the experimentally determined toxicities over a range of concentrations between 0.1 and 10 mg L-1. The CA model predicted the inhibition of algal growth for the three mixtures in P. subcapitata, and for the CTC + OTC and CTC + ENF mixtures in A. fusiformis. However, this model underestimated the experimental results obtained for the OTC + ENF mixture in A. fusiformis. The IA model did not predict the experimental toxicological effects of the three mixtures in either strain. The sum of the toxic units (TU) for the mixtures was calculated. According to these values, the binary mixtures CTC + ENF and OTC + ENF showed an additive effect, and the CTC + OTC mixture showed antagonism in P. subcapitata, whereas the three mixtures showed synergistic effects in A. fusiformis. Although A. fusiformis was isolated from a polluted river, it showed sensitivity similar to that of P. subcapitata when exposed to binary mixtures of antibiotics. Copyright © 2017 Elsevier Ltd. All rights reserved.
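
    The toxic-unit bookkeeping used above is a one-line calculation; as a hedged sketch with placeholder numbers (not the measured EC50s of this study), the sum of toxic units and its usual interpretation would be:

      def toxic_unit_sum(concentrations, ec50s):
          """Sum of toxic units, TU = sum_i c_i / EC50_i. Evaluated at the observed mixture EC50,
          TU near 1 suggests additivity, TU well below 1 synergism, and TU well above 1 antagonism."""
          return sum(c / e for c, e in zip(concentrations, ec50s))

      # hypothetical binary mixture at its observed EC50: 0.4 mg/L of drug A (EC50 1.0) plus 0.3 mg/L of drug B (EC50 0.6)
      print(round(toxic_unit_sum([0.4, 0.3], [1.0, 0.6]), 2))   # 0.9, i.e. roughly additive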

  8. X-RAY IRRADIATION OF H2O + CO ICE MIXTURES WITH SYNCHROTRON LIGHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiménez-Escobar, A.; Ciaravella, A.; Micela, G.

    2016-03-20

    We irradiated a (4:1) mixture of water and carbon monoxide with soft X-rays of energies up to 1.2 keV. The experiments were performed using the spherical grating monochromator beamline at National Synchrotron Radiation Research Center in Taiwan. Both monochromatic (300 and 900 eV) and broader energy fluxes (250–1200 eV) were employed. During the irradiation, the H2O + CO mixture was ionized, excited, and fragmented, producing a number of reactive species. The composition of the ice has been monitored throughout both the irradiation and warm-up phases. We identified several products, which can be related through a plausible chemical reaction scheme. Such chemistry is initiated by the injection of energetic photoelectrons that produce multiple ionization events generating a secondary electron cascade. The results have been discussed in light of a model for protoplanetary disks around young solar-type stars.

  9. Effects of Radiative Emission and Absorption on the Propagation and Extinction of Premixed Gas Flames

    NASA Technical Reports Server (NTRS)

    Ju, Yiguang; Masuya, Goro; Ronney, Paul D.

    1998-01-01

    Premixed gas flames in mixtures of CH4, O2, N2, and CO2 were studied numerically using detailed chemical and radiative emission-absorption models to establish the conditions for which radiatively induced extinction limits may exist independent of the system dimensions. It was found that reabsorption of emitted radiation led to substantially higher burning velocities and wider extinction limits than calculations using optically thin radiation models, particularly when CO2, a strong absorber, is present in the unburned gas. Two heat loss mechanisms that lead to flammability limits even with reabsorption were identified. One is that for dry hydrocarbon-air mixtures, because of the differences in the absorption spectra of H2O and CO2, most of the radiation from product H2O that is emitted in the upstream direction cannot be absorbed by the reactants. The second is that the emission spectrum of CO2 is broader at flame temperatures than at ambient temperature: thus, some radiation emitted near the flame front cannot be absorbed by the reactants even when they are seeded with CO2. Via both mechanisms, some net upstream heat loss due to radiation will always occur, leading to extinction of sufficiently weak mixtures. Downstream loss has practically no influence. Comparison with experiment demonstrates the importance of reabsorption in CO2-diluted mixtures. It is concluded that fundamental flammability limits can exist due to radiative heat loss, but these limits are strongly dependent on the emission-absorption spectra of the reactant and product gases and their temperature dependence and cannot be predicted using gray-gas or optically thin model parameters. Applications to practical flames at high pressure, in large combustion chambers, and with exhaust-gas or flue-gas recirculation are discussed.

  10. ION COMPOSITION ELUCIDATION (ICE): A HIGH RESOLUTION MASS SPECTROMETRIC TECHNIQUE FOR IDENTIFYING COMPOUNDS IN COMPLEX MIXTURES

    EPA Science Inventory

    When tentatively identifying compounds in complex mixtures using mass spectral libraries, multiple matches or no plausible matches due to a high level of chemical noise or interferences can occur. Worse yet, most analytes are not in the libraries. In each case, Ion Composition El...

  11. Criteria for Remote Sensing Detection of Sulfate Cemented Soils on Mars

    NASA Technical Reports Server (NTRS)

    Cooper, Christopher D.; Mustard, John F.

    2000-01-01

    Spectral measurements of loose and cemented mixtures of palagonitic soil and sulfates were made to determine whether cemented soils could be identified on Mars. Cemented MgSO4 mixtures exhibit an enhanced 9 micron sulfate fundamental compared to gypsum mixtures due to more diffuse and pervasive cementing.

  12. Integrated Disinfection By-Products Research: Assessing Reproductive and Developmental Risks Posed by Complex Disinfection By-Product Mixtures

    EPA Science Inventory

    This article presents a toxicologically-based risk assessment strategy for identifying the individual components or fractions of a complex mixture that are associated with its toxicity. The strategy relies on conventional component-based mixtures risk approaches such as dose addi...

  13. General Blending Models for Data From Mixture Experiments

    PubMed Central

    Brown, L.; Donev, A. N.; Bissett, A. C.

    2015-01-01

    We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
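
    For orientation, the Scheffé canonical quadratic polynomial that these blending models unify and extend can be written, for q mixture components with proportions x_i, as the following standard textbook form (not the generalized Scheffé-Becker family introduced in the paper):

      \mathrm{E}(y) \;=\; \sum_{i=1}^{q} \beta_i x_i \;+\; \sum_{i<j} \beta_{ij}\, x_i x_j,
      \qquad \sum_{i=1}^{q} x_i = 1, \quad x_i \ge 0 .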

  14. Recognition of the Component Odors in Mixtures

    PubMed Central

    Fletcher, Dane B; Hettinger, Thomas P

    2017-01-01

    Abstract Natural olfactory stimuli are volatile-chemical mixtures in which relative perceptual saliencies determine which odor-components are identified. Odor identification also depends on rapid selective adaptation, as shown for 4 odor stimuli in an earlier experimental simulation of natural conditions. Adapt-test pairs of mixtures of water-soluble, distinct odor stimuli with chemical features in common were studied. Identification decreased for adapted components but increased for unadapted mixture-suppressed components, showing compound identities were retained, not degraded to individual molecular features. Four additional odor stimuli, 1 with 2 perceptible odor notes, and an added “water-adapted” control tested whether this finding would generalize to other 4-compound sets. Selective adaptation of mixtures of the compounds (odors): 3 mM benzaldehyde (cherry), 5 mM maltol (caramel), 1 mM guaiacol (smoke), and 4 mM methyl anthranilate (grape-smoke) again reciprocally unmasked odors of mixture-suppressed components in 2-, 3-, and 4-component mixtures with 2 exceptions. The cherry note of “benzaldehyde” (itself) and the shared note of “methyl anthranilate and guaiacol” (together) were more readily identified. The pervasive mixture-component dominance and dynamic perceptual salience may be mediated through peripheral adaptation and central mutual inhibition of neural responses. Originating in individual olfactory receptor variants, it limits odor identification and provides analytic properties for momentary recognition of a few remaining mixture-components. PMID:28641388

  15. Competency criteria and the class inclusion task: modeling judgments and justifications.

    PubMed

    Thomas, H; Horton, J J

    1997-11-01

    Preschool-age children's class inclusion task responses were modeled as mixtures of different probability distributions. The main idea is that different response strategies correspond to different probability distributions: a child is assigned cognitive strategy s if the posterior probability p(s) = P(child uses strategy s | observed score X = x) is the largest among the candidate strategies. The general approach is widely applicable to many settings. Both judgment and justification questions were asked. The judgment response strategies identified were subclass comparison, guessing, and inclusion logic. Children's justifications lagged their judgments in development. Although justification responses may be useful, C. J. Brainerd was largely correct: if a single response variable is to be selected, a judgments variable is likely the preferable one. But the process must be modeled to identify cognitive strategies, as B. Hodkin has demonstrated.
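
    The paper's specific score distributions are not given in this record, so the following Python sketch is purely illustrative: it assumes each strategy implies a binomial distribution over the number of correct judgments and assigns a child the strategy with the highest posterior probability. The strategy success rates, prior weights, and item count are hypothetical.

```python
# Illustrative MAP strategy classification in a finite mixture of binomials.
# Success probabilities and priors below are hypothetical, not from the paper.
from scipy.stats import binom

N_ITEMS = 6  # number of class-inclusion judgment items (assumed)
strategies = {
    "subclass comparison": 0.15,  # tends to answer incorrectly
    "guessing": 0.50,             # chance-level responding
    "inclusion logic": 0.95,      # tends to answer correctly
}
priors = {"subclass comparison": 0.4, "guessing": 0.3, "inclusion logic": 0.3}

def classify(x):
    """Return P(strategy | X = x) and the MAP strategy for an observed score x."""
    joint = {s: priors[s] * binom.pmf(x, N_ITEMS, p) for s, p in strategies.items()}
    total = sum(joint.values())
    posterior = {s: v / total for s, v in joint.items()}
    return posterior, max(posterior, key=posterior.get)

posterior, best = classify(x=5)
print(posterior, "->", best)   # a score of 5/6 points toward inclusion logic
```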

  16. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
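
    As a rough, hedged illustration of combining a per-dimension power transformation with mixture-based clustering (the paper itself estimates the Box-Cox parameters jointly with a multivariate t mixture inside EM, which is not reproduced here), one might sketch the idea as follows, with synthetic skewed data:

```python
# Sketch: per-dimension Box-Cox transformation followed by mixture clustering.
# This is a simplified stand-in for the paper's joint t-mixture/Box-Cox EM.
import numpy as np
from scipy.stats import boxcox
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two skewed, positive-valued clusters (synthetic data for illustration only).
x = np.vstack([rng.lognormal(0.0, 0.4, size=(200, 2)),
               rng.lognormal(1.2, 0.4, size=(200, 2))])

# Estimate a Box-Cox lambda per dimension to reduce skewness.
transformed = np.column_stack([boxcox(x[:, j])[0] for j in range(x.shape[1])])

# Cluster the transformed data; BIC can guide the number of components.
models = [GaussianMixture(k, random_state=0).fit(transformed) for k in (1, 2, 3)]
best = min(models, key=lambda m: m.bic(transformed))
print("selected components:", best.n_components)
labels = best.predict(transformed)
```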

  17. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.

  18. Mechanistic explanation of time-dependent cross-phenomenon based on quorum sensing: A case study of the mixture of sulfonamide and quorum sensing inhibitor to bioluminescence of Aliivibrio fischeri.

    PubMed

    Sun, Haoyu; Pan, Yongzheng; Gu, Yue; Lin, Zhifen

    2018-07-15

    Cross-phenomenon, in which the concentration-response curve (CRC) for a mixture crosses the CRC for the reference model, has been identified in many studies and is expressed as a heterogeneous pattern of joint toxic action. However, a mechanistic explanation of the cross-phenomenon has thus far been largely lacking. In this study, a time-dependent cross-phenomenon was observed: the cross-concentration range between the CRC for the mixture of sulfamethoxypyridazine (SMP) and (Z-)-4-Bromo-5-(bromomethylene)-2(5H)-furanone (C30) to the bioluminescence of Aliivibrio fischeri (A. fischeri) and the CRC for the independent action model with 95% confidence bands shifted over time from the low-concentration to the higher-concentration region, indicating that the joint toxic action of the mixture changed with increases in both concentration and time. Through investigating the time-dependent hormetic effects of SMP and C30 (by measuring the expression of protein mRNA, simulating the bioluminescent reaction and analyzing the toxic action), the underlying mechanism was as follows: SMP and C30 acted on the quorum sensing (QS) system of A. fischeri, which induced low-concentration stimulatory effects and high-concentration inhibitory effects; in the low-concentration region, the stimulatory effects of SMP and C30 caused the mixture to produce a synergistic stimulation of bioluminescence; thus, the joint toxic action exhibited antagonism. In the high-concentration region, the inhibitory effects of SMP and C30 in the mixture caused a double block in the loop circuit of the QS system; thus, the joint toxic action exhibited synergism. Over time, these stimulatory and inhibitory effects of SMP and C30 changed with the variation of the QS system at different growth phases, resulting in the time-dependent cross-phenomenon. This study proposes a QS-based mechanism for the time-dependent cross-phenomenon, which may provide new insight into its mechanistic investigation, benefitting the environmental risk assessment of mixtures. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Shallow cells in directional solidification

    NASA Technical Reports Server (NTRS)

    Merchant, G. J.; Davis, S. H.

    1989-01-01

    The existing theory on two-dimensional transitions (appropriate to thin parallel-plate geometries) is presented in such a way that it is possible to identify easily conditions for the onset of shallow cells. Conditions are given under which succinonitrile-acetone mixtures should undergo supercritical bifurcation in experimentally accessible ranges. These results suggest a means for the quantitative test of the Mullins and Sekerka (1964) model and its weakly nonlinear extensions.

  20. Classes of Trajectory in Mobile Phone Dependency and the Effects of Negative Parenting on Them during Early Adolescence

    ERIC Educational Resources Information Center

    Seo, Mijung; Choi, Eunsil

    2018-01-01

    The aim of this study was to identify the classes of trajectory in mobile phone dependency using growth mixture modeling among Korean early adolescents from elementary school to the middle school transition. The effects of negative parenting on determining the classes were also examined. The participants were 2,378 early adolescents in the Korean…

  1. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    PubMed Central

    Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space and thus enable prediction of the entire response surface. A new strategy for adding multiple interior points for symmetric mixture models is also proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
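
    To make the D-optimality criterion concrete, the sketch below (a minimal illustration, not the authors' construction) builds the model matrix of Scheffé's quadratic model for a three-component mixture and compares det(X'X) for a boundary-only minimal design versus the same design augmented with an interior point; the candidate points are assumed for illustration.

```python
# Sketch: comparing det(X'X) for Scheffe quadratic mixture designs (q = 3).
# Design points are illustrative; the paper's construction is not reproduced.
import numpy as np

def scheffe_quadratic(points):
    """Model matrix with terms x1, x2, x3, x1*x2, x1*x3, x2*x3."""
    x = np.asarray(points, dtype=float)
    return np.column_stack([x[:, 0], x[:, 1], x[:, 2],
                            x[:, 0] * x[:, 1], x[:, 0] * x[:, 2], x[:, 1] * x[:, 2]])

# Minimal boundary design: 3 vertices + 3 edge midpoints (6 points, 6 parameters).
boundary = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
            (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
# Same design augmented with the overall centroid as an interior point.
augmented = boundary + [(1/3, 1/3, 1/3)]

for name, pts in [("boundary-only", boundary), ("with interior point", augmented)]:
    X = scheffe_quadratic(pts)
    d_crit = np.linalg.det(X.T @ X)
    print(f"{name}: n = {len(pts)}, det(X'X) = {d_crit:.4g}")
```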

  2. Chemical characterization and ecotoxicity of three soil foaming agents used in mechanized tunneling.

    PubMed

    Baderna, Diego; Lomazzi, Eleonora; Passoni, Alice; Pogliaghi, Alberto; Petoumenou, Maria Ifigeneia; Bagnati, Renzo; Lodi, Marco; Viarengo, Aldo; Sforzini, Susanna; Benfenati, Emilio; Fanelli, Roberto

    2015-10-15

    The construction of tunnels in rock with mechanized drills produces several tons of rocky debris that are today recycled as construction material or as soil replacement for covering rocky areas. The lack of accurate information about the environmental impact of these excavated rocks and foaming agents added during the excavation process has aroused increasing concern for ecosystems and human health. The present study proposes an integrated approach to the assessment of the potential environmental impact of three foaming agents containing different anionic surfactants and other polymers currently on the market and used in tunnel boring machines. The strategy includes chemical characterization with high resolution mass spectrometry techniques to identify the components of each product, the use of in silico tools to perform a similarity comparison among these compounds and some pollutants already listed in regulatory frameworks to identify possible threshold concentrations of contamination, and the application of a battery of ecotoxicological assays to investigate the impact of each foaming mixture on model organisms of soil (higher plants and Eisenia andrei) and water communities (Daphnia magna). The study identified eleven compounds not listed on the material safety data sheets for which we have identified possible concentrations of contamination based on existing regulatory references. The bioassays allowed us to determine the no observed adverse effect concentrations (NOAECs) of the three mixtures, which were subsequently used as threshold concentrations for the product in its entirety. The technical mixtures used in this study have different degrees of toxicity, and the predicted environmental concentrations based on the conditions of use are lower than the NOAEC for soils but higher than the NOAEC for water, posing a potential risk to the waters due to the levels of foaming agents in the muck. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. [Identification of alkylbenzenes being formed in the model reaction of ribose with lysine].

    PubMed

    Biller, Elzbieta

    2012-01-01

    Alkylbenzenes were identified while studying volatile compounds in model experiments that simulated the broiling of meat (the reaction of ribose with lysine). They belong to food contaminants and may originate from detergents, petroleum, and geochemical samples, but they are also produced in Maillard reactions. The aim of the study was to identify the alkylbenzenes formed in the model reaction of ribose with lysine. Aqueous solutions of ribose and lysine (each at a concentration of 0.1 mol/dm3) were mixed in equal volumes of 10 cm3 + 10 cm3. The pH of the mixtures was adjusted to 5.6 using a citrate-phosphate buffer, thereby simulating the pH of meat. The mixtures were heated in a gastronomic roaster for 0, 5, 10, 15, 30, 45, and 60 minutes, respectively, at a temperature of 185 +/- 5 degrees C. After the reactions, the profiles of volatile compounds in the mixtures, including alkylbenzenes, were analyzed by GC-MS. The compounds were identified by comparing each mass spectrum (MS) with spectra from known MS libraries, calculating linear retention indexes (LRI), and seeking similar LRI values of analogous compounds in the literature. Amounts of volatiles were calculated relative to the amount of internal standard (IS) [-], dividing the peak area of each compound by the area of the IS. The kinds and amounts of alkylbenzenes depended on the duration of the reaction. At most, 16 different alkylbenzenes were formed. Most of these compounds could be identified only with a probability of 85-90% using MS alone, because of the lack of information in the literature. Multi-dimensional GCxGC-MS or other chromatographic methods therefore seem advisable to characterize these compounds better. The identification of compounds formed during the broiling of meat is very important, because many of the substances that arise are considered unhealthy and undesirable food contaminants. Thus, these compounds should be routinely investigated in food products.

  4. New approach in direct-simulation of gas mixtures

    NASA Technical Reports Server (NTRS)

    Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren

    1991-01-01

    Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.

  5. Bayesian 2-Stage Space-Time Mixture Modeling With Spatial Misalignment of the Exposure in Small Area Health Data.

    PubMed

    Lawson, Andrew B; Choi, Jungsoon; Cai, Bo; Hossain, Monir; Kirby, Russell S; Liu, Jihong

    2012-09-01

    We develop a new Bayesian two-stage space-time mixture model to investigate the effects of air pollution on asthma. The two-stage mixture model proposed allows for the identification of temporal latent structure as well as the estimation of the effects of covariates on health outcomes. In the paper, we also consider spatial misalignment of exposure and health data. A simulation study is conducted to assess the performance of the 2-stage mixture model. We apply our statistical framework to a county-level ambulatory care asthma data set in the US state of Georgia for the years 1999-2008.

  6. Some comments on thermodynamic consistency for equilibrium mixture equations of state

    DOE PAGES

    Grove, John W.

    2018-03-28

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.
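
    As a small, hedged illustration of such closures, restricted to ideal-gas components (far simpler than the general equations of state considered in the paper), the sketch below checks that pressure/temperature equilibrium and volume/temperature (Dalton) equilibrium yield the same mass-fraction-averaged mixture pressure when each component obeys p = rho*R*T. The composition and state values are arbitrary.

```python
# Sketch: mass-fraction-averaged mixture of two ideal gases under two
# equilibrium closures. For ideal gases both closures give the same pressure.
R_UNIVERSAL = 8.314462618  # J/(mol K)

# Hypothetical binary mixture: mass fractions and molar masses (kg/mol).
w = [0.7, 0.3]
M = [0.028, 0.044]                 # e.g. N2 and CO2
R = [R_UNIVERSAL / m for m in M]   # specific gas constants

rho, T = 1.2, 300.0                # mixture density (kg/m^3) and temperature (K)

# Volume/temperature (Dalton) equilibrium: partial pressures at the mixture volume.
p_dalton = sum(wi * rho * Ri * T for wi, Ri in zip(w, R))

# Pressure/temperature equilibrium: the common pressure p satisfies
# 1/rho = sum_i w_i v_i = (sum_i w_i R_i) * T / p for ideal-gas components.
p_pt = rho * T * sum(wi * Ri for wi, Ri in zip(w, R))

print(p_dalton, p_pt)  # identical for ideal-gas components
```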

  7. Facile hyphenation of gas chromatography and a microcantilever array sensor for enhanced selectivity.

    PubMed

    Chapman, Peter J; Vogt, Frank; Dutta, Pampa; Datskos, Panos G; Devault, Gerald L; Sepaniak, Michael J

    2007-01-01

    The very simple coupling of a standard, packed-column gas chromatograph with a microcantilever array (MCA) is demonstrated for enhanced selectivity and potential analyte identification in the analysis of volatile organic compounds (VOCs). The cantilevers in MCAs are differentially coated on one side with responsive phases (RPs) and produce bending responses of the cantilevers due to analyte-induced surface stresses. Generally, individual components are difficult to elucidate when introduced to MCA systems as mixtures, although pattern recognition techniques are helpful in identifying single components, binary mixtures, or composite responses of distinct mixtures (e.g., fragrances). In the present work, simple test VOC mixtures composed of acetone, ethanol, and trichloroethylene (TCE) in pentane and methanol and acetonitrile in pentane are first separated using a standard gas chromatograph and then introduced into a MCA flow cell. Significant amounts of response diversity to the analytes in the mixtures are demonstrated across the RP-coated cantilevers of the array. Principal component analysis is used to demonstrate that only three components of a four-component VOC mixture could be identified without mixture separation. Calibration studies are performed, demonstrating a good linear response over 2 orders of magnitude for each component in the primary study mixture. Studies of operational parameters including column temperature, column flow rate, and array cell temperature are conducted. Reproducibility studies of VOC peak areas and peak heights are also carried out showing RSDs of less than 4 and 3%, respectively, for intra-assay studies. Of practical significance is the facile manner by which the hyphenation of a mature separation technique and the burgeoning sensing approach is accomplished, and the potential to use pattern recognition techniques with MCAs as a new type of detector for chromatography with analyte-identifying capabilities.
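
    The record mentions principal component analysis of the array's differential bending responses; a minimal sketch of that step (with an entirely synthetic response matrix standing in for the cantilever data) might look like this:

```python
# Sketch: PCA of microcantilever-array response patterns (synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_cantilevers = 8
# Hypothetical mean response pattern (bending signal) per analyte.
patterns = {"acetone": rng.normal(0, 1, n_cantilevers),
            "ethanol": rng.normal(0, 1, n_cantilevers),
            "TCE": rng.normal(0, 1, n_cantilevers)}

# Replicate injections with measurement noise.
X, labels = [], []
for name, p in patterns.items():
    for _ in range(10):
        X.append(p + rng.normal(0, 0.2, n_cantilevers))
        labels.append(name)
X = np.array(X)

scores = PCA(n_components=2).fit_transform(X)
# Well-separated score clusters indicate that the array's response diversity
# is sufficient to distinguish the analytes.
print(scores[:3], labels[:3])
```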

  8. Robust Bayesian clustering.

    PubMed

    Archambeau, Cédric; Verleysen, Michel

    2007-01-01

    A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.

  9. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed Central

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-01-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544

  10. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.
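
    As a compact illustration of the LOD statistic described above (a generic sketch, not the authors' modified mixture model), one can compute the log10 likelihood ratio of a two-component normal mixture, weighted by genotype probabilities, against a single normal distribution; the phenotypes and genotype probabilities below are synthetic.

```python
# Sketch: LOD score at one putative QTL location for a backcross-type design.
# Genotype probabilities and phenotypes are synthetic; this is the standard
# normal-mixture LOD, not the paper's robustified version.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 200
geno = rng.integers(0, 2, n)                  # true (unobserved) QTL genotype
y = rng.normal(0.0 + 0.8 * geno, 1.0)         # phenotype with a QTL effect
p_g1 = np.clip(0.9 * geno + 0.1, 0.05, 0.95)  # P(genotype = 1 | markers), assumed

def negloglik_mixture(theta):
    mu0, mu1, log_sd = theta
    sd = np.exp(log_sd)
    lik = (1 - p_g1) * norm.pdf(y, mu0, sd) + p_g1 * norm.pdf(y, mu1, sd)
    return -np.sum(np.log(lik))

def negloglik_single(theta):
    mu, log_sd = theta
    return -np.sum(norm.logpdf(y, mu, np.exp(log_sd)))

fit_mix = minimize(negloglik_mixture, x0=[0.0, 0.5, 0.0], method="Nelder-Mead")
fit_null = minimize(negloglik_single, x0=[0.0, 0.0], method="Nelder-Mead")
lod = (fit_null.fun - fit_mix.fun) / np.log(10)  # log10 likelihood ratio
print("LOD =", round(lod, 2))
```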

  11. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    PubMed

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space, and a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.

  12. Exchangeable Ions Are Responsible for the In Vitro Antibacterial Properties of Natural Clay Mixtures

    PubMed Central

    Otto, Caitlin C.; Haydel, Shelley E.

    2013-01-01

    We have identified a natural clay mixture that exhibits in vitro antibacterial activity against a broad spectrum of bacterial pathogens. We collected four samples from the same source and demonstrated through antibacterial susceptibility testing that these clay mixtures have markedly different antibacterial activity against Escherichia coli and methicillin-resistant Staphylococcus aureus (MRSA). Here, we used X-ray diffraction (XRD), inductively coupled plasma optical emission spectroscopy (ICP-OES), and inductively coupled plasma mass spectrometry (ICP-MS) to characterize the mineralogical and chemical features of the four clay mixture samples. XRD analyses of the clay mixtures revealed minor mineralogical differences between the four samples. However, ICP analyses demonstrated that the concentrations of many elements, in particular Fe, Co, Cu, Ni, and Zn, vary greatly across the four clay mixture leachates. Supplementing a non-antibacterial leachate, which contained lower concentrations of Fe, Co, Ni, Cu, and Zn, to final ion concentrations and a pH equivalent to those of the antibacterial leachate generated antibacterial activity against E. coli and MRSA, confirming the role of these ions in the antibacterial clay mixture leachates. Speciation modeling revealed increased concentrations of soluble Cu2+ and Fe2+ in the antibacterial leachates, compared to the non-antibacterial leachates, suggesting these ionic species specifically are modulating the antibacterial activity of the leachates. Finally, linear regression analyses comparing the log10 reduction in bacterial viability to the concentration of individual ion species revealed positive correlations of Zn2+ and Cu2+ with antibacterial activity, a negative correlation with Fe3+, and no correlation with pH. Together, these analyses further indicate that the concentrations of specific ionic species (Fe2+, Cu2+, and Zn2+) are responsible for antibacterial activity and that killing activity is not solely attributed to pH. PMID:23691149
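
    The regression step mentioned at the end of this record is straightforward to illustrate; the sketch below regresses a synthetic log10 viability reduction on ion concentrations (the values and effect sizes are invented, not the study's measurements).

```python
# Sketch: linear regression of log10 reduction in viability on ion concentrations.
# All values are synthetic; this only illustrates the analysis step.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 12                                   # leachate samples (synthetic)
ions = rng.uniform(0, 5, size=(n, 3))    # columns: Zn2+, Cu2+, Fe3+ (mM, assumed)
# Positive effect of Zn2+/Cu2+, negative effect of Fe3+, plus noise.
log10_reduction = (0.6 * ions[:, 0] + 0.4 * ions[:, 1] - 0.3 * ions[:, 2]
                   + rng.normal(0, 0.2, n))

fit = LinearRegression().fit(ions, log10_reduction)
for name, coef in zip(["Zn2+", "Cu2+", "Fe3+"], fit.coef_):
    print(f"{name}: slope = {coef:+.2f}")
print(f"R^2 = {fit.score(ions, log10_reduction):.2f}")
```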

  13. Transitional Life Events and Trajectories of Cigarette and Alcohol Use During Emerging Adulthood: Latent Class Analysis and Growth Mixture Modeling

    PubMed Central

    Huh, Jimi; Huang, Zhaoqing; Liao, Yue; Pentz, Maryann; Chou, Chih-Ping

    2013-01-01

    Objective: Emerging adulthood (ages 18–25 years) has been associated with elevated substance use. Transitional life events (TLEs) during emerging adulthood in relation to substance use are usually examined separately, rather than as a constellation. The purposes of this study were (a) to explore distinct subgroups experiencing various TLEs during emerging adulthood, (b) to identify heterogeneous trajectories of cigarette and alcohol use during emerging adulthood, and (c) to examine the association of TLEs with cigarette and alcohol use trajectories. Method: Five waves of longitudinal data (mean age range: 19.5–26.0 years) were used from a community-based drug prevention program (n = 946, 49.9% female). Distinct subgroups of emerging adults who experienced various TLEs were identified using latent class analysis. Cigarette and alcohol use were examined using a latent growth mixture model. Results: A three-class model fit the data best in identifying TLE subgroups (new family, college attenders [NFCA]; uncommitted relationships, college attenders [URCA]; hibernators [HBN]). Three-trajectory models fit the data best for cigarette and alcohol use during emerging adulthood. The TLE categories were significantly associated with the cigarette (p < .05) and alcohol use groups (p < .001); specifically, the URCA and HBN groups were significantly more likely to be classified as accelerating cigarette users, relative to NFCA (ps < .05). The NFCA and HBN groups were significantly more likely to be classified as accelerating alcohol users, relative to URCA (ps < .01). Conclusions: To characterize an “at-risk” emerging adult group for cigarette and alcohol use over time, a range of life events during emerging adulthood should be considered. Interventions tailored to young adulthood may benefit from targeting the absence of these life events typifying “independence” as a potential marker for underlying substance use problems and provide supplemental screening methods to identify young adults with similar issues. PMID:23948532

  14. Decoding with limited neural data: a mixture of time-warped trajectory models for directional reaches.

    PubMed

    Corbett, Elaine A; Perreault, Eric J; Körding, Konrad P

    2012-06-01

    Neuroprosthetic devices promise to allow paralyzed patients to perform the necessary functions of everyday life. However, to allow patients to use such tools it is necessary to decode their intent from neural signals such as electromyograms (EMGs). Because these signals are noisy, state of the art decoders integrate information over time. One systematic way of doing this is by taking into account the natural evolution of the state of the body--by using a so-called trajectory model. Here we use two insights about movements to enhance our trajectory model: (1) at any given time, there is a small set of likely movement targets, potentially identified by gaze; (2) reaches are produced at varying speeds. We decoded natural reaching movements using EMGs of muscles that might be available from an individual with spinal cord injury. Target estimates found from tracking eye movements were incorporated into the trajectory model, while a mixture model accounted for the inherent uncertainty in these estimates. Warping the trajectory model in time using a continuous estimate of the reach speed enabled accurate decoding of faster reaches. We found that the choice of richer trajectory models, such as those incorporating target or speed, improves decoding particularly when there is a small number of EMGs available.

  15. Single- and mixture toxicity of three organic UV-filters, ethylhexyl methoxycinnamate, octocrylene, and avobenzone on Daphnia magna.

    PubMed

    Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun

    2017-03-01

    In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate the combined toxicity of these substances when they occur in a mixture. The effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; the concentration combinations of the three UV-filters in a mixture were determined by the fractions of the components based on EC25 values predicted by the concentration addition (CA) model. The interactions between the UV-filters were also assessed by the model deviation ratio (MDR) using observed and predicted toxicity values obtained from the mixture-exposure tests and the CA model. The results indicated that observed ECxmix values (e.g., EC10mix, EC25mix, or EC50mix) obtained from mixture-exposure tests were higher than the predicted ECxmix values calculated by the CA model. MDR values were also less than a factor of 1.0 in mixtures of the three UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. Our findings will provide important information for hazard or risk assessment of organic UV-filters when they exist together in the aquatic environment. To better understand mixture toxicity and the interaction of components in a mixture, further studies of various combinations of mixture components are also required. Copyright © 2016 Elsevier Inc. All rights reserved.
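
    A minimal numeric sketch of the concentration addition prediction and the model deviation ratio described above, with hypothetical single-compound EC50 values and mixture fractions rather than the study's data, could be:

```python
# Sketch: concentration addition (CA) prediction and model deviation ratio (MDR).
# EC values and mixture fractions below are hypothetical.

def ca_ecx_mix(fractions, ecx_single):
    """CA-predicted mixture ECx: 1 / sum_i (p_i / ECx_i)."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ecx_single))

# Fractions of the three UV-filters in the mixture (must sum to 1).
p = [0.5, 0.3, 0.2]
ec50_single = [0.8, 1.5, 3.0]           # hypothetical single-compound EC50s (mg/L)

ec50_pred = ca_ecx_mix(p, ec50_single)  # CA prediction
ec50_obs = 1.9                          # hypothetical observed mixture EC50 (mg/L)

mdr = ec50_pred / ec50_obs              # MDR < 1 here suggests antagonism
print(f"predicted EC50mix = {ec50_pred:.2f}, MDR = {mdr:.2f}")
```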

  16. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    NASA Astrophysics Data System (ADS)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  17. [Simulation model for estimating the cancer care infrastructure required by the public health system].

    PubMed

    Gomes Junior, Saint Clair Santos; Almeida, Rosimary Terezinha

    2009-02-01

    To develop a simulation model using public data to estimate the cancer care infrastructure required by the public health system in the state of São Paulo, Brazil. Public data from the Unified Health System database regarding cancer surgery, chemotherapy, and radiation therapy, from January 2002-January 2004, were used to estimate the number of cancer cases in the state. The percentages recorded for each therapy in the Hospital Cancer Registry of Brazil were combined with the data collected from the database to estimate the need for services. Mixture models were used to identify subgroups of cancer cases with regard to the length of time that chemotherapy and radiation therapy were required. A simulation model was used to estimate the infrastructure required, taking these parameters into account. The model indicated the need for surgery in 52.5% of the cases, radiation therapy in 42.7%, and chemotherapy in 48.5%. The mixture models identified two subgroups for radiation therapy and four subgroups for chemotherapy with regard to mean usage time for each. These parameters allowed the following infrastructure needs to be estimated: 147 operating rooms, 2 653 operating beds, 297 chemotherapy chairs, and 102 radiation therapy devices. These estimates suggest the need for a 1.2-fold increase in the number of chemotherapy services and a 2.4-fold increase in the number of radiation therapy services when compared with the parameters currently used by the public health system. A simulation model, such as the one used in the present study, permits better distribution of health care resources because it is based on specific, local needs.

  18. Cumulative toxicity of neonicotinoid insecticide mixtures to Chironomus dilutus under acute exposure scenarios.

    PubMed

    Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten

    2017-11-01

    Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because of the similar modes of action of neonicotinoids, concentration-additive cumulative mixture toxicity was the predicted model. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity. Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, the development of better toxicity models for neonicotinoid mixture exposures, and the consideration of mixture effects when setting water quality guidelines for this class of pesticides. Environ Toxicol Chem 2017;36:3091-3101. © 2017 SETAC.

  19. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  20. The impact of catchment source group classification on the accuracy of sediment fingerprinting outputs.

    PubMed

    Pulley, Simon; Foster, Ian; Collins, Adrian L

    2017-06-01

    The objective classification of sediment source groups is at present an under-investigated aspect of source tracing studies, which has the potential to statistically improve discrimination between sediment sources and reduce uncertainty. This paper investigates this potential using three different source group classification schemes. The first classification scheme was simple surface and subsurface groupings (Scheme 1). The tracer signatures were then used in a two-step cluster analysis to identify the sediment source groupings naturally defined by the tracer signatures (Scheme 2). The cluster source groups were then modified by splitting each one into a surface and subsurface component to suit catchment management goals (Scheme 3). The schemes were tested using artificial mixtures of sediment source samples. Controlled corruptions were made to some of the mixtures to mimic the potential causes of tracer non-conservatism present when using tracers in natural fluvial environments. It was determined how accurately the known proportions of sediment sources in the mixtures were identified after unmixing modelling using the three classification schemes. The cluster analysis derived source groups (2) significantly increased tracer variability ratios (inter-/intra-source group variability) (up to 2122%, median 194%) compared to the surface and subsurface groupings (1). As a result, the composition of the artificial mixtures was identified an average of 9.8% more accurately on the 0-100% contribution scale. It was found that the cluster groups could be reclassified into a surface and subsurface component (3) with no significant increase in composite uncertainty (a 0.1% increase over Scheme 2). The far smaller effects of simulated tracer non-conservatism for the cluster analysis based schemes (2 and 3) were primarily attributed to the increased inter-group variability producing a far larger sediment source signal than the non-conservatism noise (1). Modified cluster analysis based classification methods have the potential to reduce composite uncertainty significantly in future source tracing studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
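
    The unmixing step referenced above is, in its simplest form, a constrained least-squares problem: find non-negative source proportions summing to one that best reproduce the mixture's tracer signature. The sketch below illustrates that generic step with made-up tracer data; it is not the study's mixing model or tracer set.

```python
# Sketch: generic sediment-source unmixing by non-negative least squares with
# a sum-to-one constraint enforced via an appended, heavily weighted row.
# Tracer concentrations are synthetic.
import numpy as np
from scipy.optimize import nnls

# Columns = source groups, rows = tracer properties (mean source signatures).
sources = np.array([[12.0,  3.0,  7.0],
                    [ 0.8,  2.5,  1.1],
                    [45.0, 20.0, 33.0],
                    [ 5.0,  9.0,  6.5]])
true_props = np.array([0.6, 0.1, 0.3])
mixture = sources @ true_props            # tracer signature of the mixture

weight = 1e3                              # large weight to enforce sum(p) = 1
A = np.vstack([sources, weight * np.ones(sources.shape[1])])
b = np.append(mixture, weight * 1.0)

props, _ = nnls(A, b)
print("estimated proportions:", np.round(props, 3))  # ~ [0.6, 0.1, 0.3]
```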

  1. Thermal - Hydraulic Behavior of Unsaturated Bentonite and Sand-Bentonite Material as Seal for Nuclear Waste Repository: Numerical Simulation of Column Experiments

    NASA Astrophysics Data System (ADS)

    Ballarini, E.; Graupner, B.; Bauer, S.

    2015-12-01

    For deep geological repositories of high-level radioactive waste (HLRW), bentonite and sand-bentonite mixtures are investigated as buffer materials to form a sealing layer. This sealing layer surrounds the canisters and experiences an initial drying due to the heat produced by HLRW and a successive re-saturation with fluid from the host rock. These complex thermal, hydraulic and mechanical processes interact and were investigated in laboratory column experiments using MX-80 clay pellets as well as a mixture of 35% sand and 65% bentonite. The aim of this study is both to understand the individual processes taking place in the buffer materials and to identify the key physical parameters that determine the material behavior under heating and hydrating conditions. To this end, detailed and process-oriented numerical modelling was applied to the experiments, simulating heat transport, multiphase flow and mechanical effects from swelling. For both columns, the same set of parameters was assigned to the experimental set-up (i.e. insulation, heater and hydration system), while the parameters of the buffer material were adapted during model calibration. A good fit between model results and data was achieved for temperature, relative humidity, water intake and swelling pressure, thus explaining the material behavior. The key variables identified by the model are the permeability and relative permeability, the water retention curve and the thermal conductivity of the buffer material. The different hydraulic and thermal behavior of the two buffer materials observed in the laboratory was well reproduced by the numerical model.

  2. Numerical simulation of asphalt mixtures fracture using continuum models

    NASA Astrophysics Data System (ADS)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.

  3. Introduction to the special section on mixture modeling in personality assessment.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  4. Predicting the shock compression response of heterogeneous powder mixtures

    NASA Astrophysics Data System (ADS)

    Fredenburg, D. A.; Thadhani, N. N.

    2013-06-01

    A model framework for predicting the dynamic shock-compression response of heterogeneous powder mixtures using readily obtained measurements from quasi-static tests is presented. Low-strain-rate compression data are first analyzed to determine the region of the bulk response over which particle rearrangement does not contribute to compaction. This region is then fit to determine the densification modulus of the mixture, σD, a newly defined parameter describing the resistance of the mixture to yielding. The measured densification modulus, reflective of the diverse yielding phenomena that occur at the meso-scale, is implemented into a rate-independent formulation of the P-α model, which is combined with an isobaric equation of state to predict the low and high stress dynamic compression response of heterogeneous powder mixtures. The framework is applied to two metal + metal-oxide (thermite) powder mixtures, and good agreement between the model and experiment is obtained for all mixtures at stresses near and above those required to reach full density. At lower stresses, rate-dependencies of the constituents, and specifically those of the matrix constituent, determine the ability of the model to predict the measured response in the incomplete compaction regime.

  5. D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.

    PubMed

    Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W

    2005-12-01

    Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.

  6. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    NASA Astrophysics Data System (ADS)

    Konishi, C.

    2014-12-01

    A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e., gravel, sand, and clay). The well-known sand-clay (bimodal) mixture model treats the clay content as the volume fraction of the small particle, with the rest of the volume considered as that of the large particle. This simple approach is commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of another large particle; i.e., only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model and suggests the necessity of considering the gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of only the clay content. Sand can be either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosity of the two cases (one in which sand is the smaller particle and one in which it is the larger particle) can be modeled independently from the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as a weighted average of the two cases, using the volume fractions of gravel and clay as weights. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation. Furthermore, elastic properties are obtainable from the general Hashin-Shtrikman-Walpole bounds. The predictions of this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: A part of this study was accomplished with a subsidy of the River Environment Fund of Japan.
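
    A minimal sketch of the permeability step described above (effective porosity and effective grain size fed into a Kozeny-Carman relation) is given below; the component grain sizes, porosities, effective-grain-size rule, and the Kozeny-Carman constant are illustrative assumptions, not the author's calibrated values.

```python
# Sketch: Kozeny-Carman permeability from effective porosity and grain size.
# Component properties below are illustrative assumptions.

def kozeny_carman(phi_eff, d_eff, c=180.0):
    """Permeability (m^2) from effective porosity and effective grain size (m)."""
    return (phi_eff ** 3) * (d_eff ** 2) / (c * (1.0 - phi_eff) ** 2)

# Volume fractions of gravel, sand, and clay (sum to 1).
f_gravel, f_sand, f_clay = 0.05, 0.80, 0.15
# Representative grain sizes (m), illustrative.
d = {"gravel": 1e-2, "sand": 2e-4, "clay": 2e-6}

# Effective grain size: here a volume-fraction-weighted harmonic mean, a common
# choice when the finer grains dominate the pore geometry (assumption).
d_eff = 1.0 / (f_gravel / d["gravel"] + f_sand / d["sand"] + f_clay / d["clay"])

phi_total = 0.35                       # total porosity (assumed)
phi_eff = phi_total * (1.0 - f_clay)   # crude: clay-associated porosity treated
                                       # as non-effective, per the abstract's idea

print(f"d_eff = {d_eff:.2e} m, k = {kozeny_carman(phi_eff, d_eff):.2e} m^2")
```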

  7. Encoding the local connectivity patterns of fMRI for cognitive task and state classification.

    PubMed

    Onal Ertugrul, Itir; Ozay, Mete; Yarman Vural, Fatos T

    2018-06-15

    In this work, we propose a novel framework to encode the local connectivity patterns of brain, using Fisher vectors (FV), vector of locally aggregated descriptors (VLAD) and bag-of-words (BoW) methods. We first obtain local descriptors, called mesh arc descriptors (MADs) from fMRI data, by forming local meshes around anatomical regions, and estimating their relationship within a neighborhood. Then, we extract a dictionary of relationships, called brain connectivity dictionary by fitting a generative Gaussian mixture model (GMM) to a set of MADs, and selecting codewords at the mean of each component of the mixture. Codewords represent connectivity patterns among anatomical regions. We also encode MADs by VLAD and BoW methods using k-Means clustering. We classify cognitive tasks using the Human Connectome Project (HCP) task fMRI dataset and cognitive states using the Emotional Memory Retrieval (EMR). We train support vector machines (SVMs) using the encoded MADs. Results demonstrate that, FV encoding of MADs can be successfully employed for classification of cognitive tasks, and outperform VLAD and BoW representations. Moreover, we identify the significant Gaussians in mixture models by computing energy of their corresponding FV parts, and analyze their effect on classification accuracy. Finally, we suggest a new method to visualize the codewords of the learned brain connectivity dictionary.
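
    A compact, hedged sketch of the dictionary-building step described above (fitting a GMM to a pool of local descriptors and taking the component means as codewords, plus a simple BoW histogram via k-means) might look like the following; the descriptors are random stand-ins for MADs, and the dictionary size is arbitrary.

```python
# Sketch: brain-connectivity "dictionary" from a GMM over local descriptors,
# plus a bag-of-words (BoW) encoding via k-means. Descriptors are synthetic
# stand-ins for mesh arc descriptors (MADs).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
mads = rng.normal(size=(5000, 16))       # pooled local descriptors (synthetic)

# Dictionary: component means of a GMM act as connectivity codewords.
gmm = GaussianMixture(n_components=32, covariance_type="diag",
                      random_state=0).fit(mads)
codewords = gmm.means_                   # shape (32, 16)

# BoW encoding of one subject's descriptors: histogram of nearest codewords.
kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(mads)
subject_mads = rng.normal(size=(300, 16))
assignments = kmeans.predict(subject_mads)
bow = np.bincount(assignments, minlength=32).astype(float)
bow /= bow.sum()                         # normalized BoW feature vector
print(codewords.shape, bow.shape)
```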

  8. A numerical study of granular dam-break flow

    NASA Astrophysics Data System (ADS)

    Pophet, N.; Rébillout, L.; Ozeren, Y.; Altinakar, M.

    2017-12-01

    Accurate prediction of granular flow behavior is essential to optimize mitigation measures for hazardous natural granular flows such as landslides, debris flows and tailings-dam break flows. So far, most successful models for these types of flows focus on either pure granular flows or flows of saturated grain-fluid mixtures by employing a constant friction model or more complex rheological models. These saturated models often produce non-physical results when they are applied to simulate flows of partially saturated mixtures. Therefore, more advanced models are needed. A numerical model was developed for granular flow employing constant-friction and μ(I) rheologies (Jop et al., J. Fluid Mech. 2005), coupled with a groundwater flow model for seepage flow. The granular flow is simulated by solving a mixture model using the Finite Volume Method (FVM). The Volume-of-Fluid (VOF) technique is used to capture the free surface motion. The constant friction and μ(I) rheological models are incorporated in the mixture model. The seepage flow is modeled by solving the Richards equation. A framework is developed to couple these two solvers in OpenFOAM. The model was validated and tested by reproducing laboratory experiments of partially and fully channelized dam-break flows of dry and initially saturated granular material. To obtain appropriate parameters for the rheological models, a series of simulations with different sets of rheological parameters is performed. The simulation results obtained from the constant friction and μ(I) rheological models are compared with laboratory experiments in terms of the granular free-surface interface, front position and velocity field during the flows. The numerical predictions indicate that the proposed model is promising in predicting the dynamics of the flow and deposition process. The proposed model may provide more reliable insight than the previously assumed saturated mixture model when saturated and partially saturated portions of the granular mixture co-exist.
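
    For reference, the μ(I) friction law cited above (Jop et al.) is commonly written as μ(I) = μ_s + (μ_2 - μ_s)/(1 + I_0/I), with the inertial number I = γ̇ d / sqrt(P/ρ_s); the sketch below evaluates it with illustrative parameter values, which are assumptions rather than those used in the study.

```python
# Sketch: mu(I) granular rheology (Jop-Forterre-Pouliquen form).
# Parameter values are illustrative assumptions.
import math

def inertial_number(shear_rate, d, p, rho_s):
    """I = gamma_dot * d / sqrt(p / rho_s)."""
    return shear_rate * d / math.sqrt(p / rho_s)

def mu_of_I(I, mu_s=0.38, mu_2=0.64, I0=0.279):
    """Friction coefficient interpolating between mu_s (quasi-static) and mu_2."""
    return mu_s + (mu_2 - mu_s) / (1.0 + I0 / I)

I = inertial_number(shear_rate=10.0, d=1e-3, p=500.0, rho_s=2500.0)
tau_over_p = mu_of_I(I)   # shear stress is tau = mu(I) * p in this local law
print(f"I = {I:.3f}, mu(I) = {tau_over_p:.3f}")
```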

  9. Comparative Characterization of Crofelemer Samples Using Data Mining and Machine Learning Approaches With Analytical Stability Data Sets.

    PubMed

    Nariya, Maulik K; Kim, Jae Hyun; Xiong, Jian; Kleindl, Peter A; Hewarathna, Asha; Fisher, Adam C; Joshi, Sangeeta B; Schöneich, Christian; Forrest, M Laird; Middaugh, C Russell; Volkin, David B; Deeds, Eric J

    2017-11-01

    There is growing interest in generating physicochemical and biological analytical data sets to compare complex mixture drugs, for example, products from different manufacturers. In this work, we compare various crofelemer samples prepared from a single lot by filtration with varying molecular weight cutoffs combined with incubation for different times at different temperatures. The 2 preceding articles describe experimental data sets generated from analytical characterization of fractionated and degraded crofelemer samples. In this work, we use data mining techniques such as principal component analysis and mutual information scores to help visualize the data and determine discriminatory regions within these large data sets. The mutual information score identifies chemical signatures that differentiate crofelemer samples. These signatures, in many cases, would likely be missed by traditional data analysis tools. We also found that supervised learning classifiers robustly discriminate samples with around 99% classification accuracy, indicating that mathematical models of these physicochemical data sets are capable of identifying even subtle differences in crofelemer samples. Data mining and machine learning techniques can thus identify fingerprint-type attributes of complex mixture drugs that may be used for comparative characterization of products. Copyright © 2017 American Pharmacists Association®. All rights reserved.
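    As an illustration of this kind of analysis (not the authors' pipeline), a minimal sketch of ranking discriminatory features by mutual information with the sample label and projecting samples with PCA; the array names are placeholders for the assembled analytical data sets:

```python
# Sketch of flagging "fingerprint" regions: score each measured attribute by its
# mutual information with the sample label, and use PCA for a 2D overview of the
# samples. X (samples x attributes) and y (sample labels) are assumed inputs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif

def discriminatory_features(X, y, top_k=20, seed=0):
    mi = mutual_info_classif(X, y, random_state=seed)   # one score per feature
    ranked = np.argsort(mi)[::-1][:top_k]                # highest-MI features first
    return ranked, mi[ranked]

def pca_overview(X, n_components=2):
    """Low-dimensional projection for visualizing sample groupings."""
    pca = PCA(n_components=n_components)
    return pca.fit_transform(X), pca.explained_variance_ratio_

# Usage sketch:
# idx, scores = discriminatory_features(X, y)
# coords, explained = pca_overview(X)
```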

  10. Mixture theory-based poroelasticity as a model of interstitial tissue growth

    PubMed Central

    Cowin, Stephen C.; Cardoso, Luis

    2011-01-01

    This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues. PMID:22184481

  11. Mixture theory-based poroelasticity as a model of interstitial tissue growth.

    PubMed

    Cowin, Stephen C; Cardoso, Luis

    2012-01-01

    This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues.

  12. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed

    Chen, D G; Pounds, J G

    1998-12-01

    The linear logistical isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper, a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where parameters Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of the binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium.

  13. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed Central

    Chen, D G; Pounds, J G

    1998-01-01

    The linear logistical isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper, a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where parameters Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of the binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium. PMID:9860894

  14. Probabilistic Elastic Part Model: A Pose-Invariant Representation for Real-World Face Verification.

    PubMed

    Li, Haoxiang; Hua, Gang

    2018-04-01

    Pose variation remains a major challenge for real-world face recognition. We approach this problem through a probabilistic elastic part model. We extract local descriptors (e.g., LBP or SIFT) from densely sampled multi-scale image patches. By augmenting each descriptor with its location, a Gaussian mixture model (GMM) is trained to capture the spatial-appearance distribution of the face parts of all face images in the training corpus, namely the probabilistic elastic part (PEP) model. Each mixture component of the GMM is confined to be a spherical Gaussian to balance the influence of the appearance and the location terms, which naturally defines a part. Given one or multiple face images of the same subject, the PEP-model builds its PEP representation by sequentially concatenating descriptors identified by each Gaussian component in a maximum likelihood sense. We further propose a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces/face tracks, which consistently improves face verification accuracy. Our experiments show that we achieve state-of-the-art face verification accuracy with the proposed representations on the Labeled Faces in the Wild (LFW) dataset, the YouTube video face database, and the CMU MultiPIE dataset.
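    A minimal sketch of the core idea (location-augmented descriptors, a spherical GMM, and per-component selection of the maximum-likelihood descriptor), assuming dense local descriptors have already been extracted; this is an illustration, not the authors' implementation:

```python
# Sketch of a probabilistic-elastic-part-style representation. Rows of
# aug_descriptors are [descriptor, x, y]; the GMM components play the role of
# "parts", and each part contributes the descriptor it explains best.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_pep_gmm(aug_descriptors, n_parts=64, seed=0):
    return GaussianMixture(n_components=n_parts, covariance_type="spherical",
                           random_state=seed).fit(aug_descriptors)

def pep_representation(aug_descriptors, gmm):
    """Concatenate, per spherical Gaussian part, its maximum-likelihood descriptor."""
    X = np.atleast_2d(aug_descriptors)
    n, d = X.shape
    parts = []
    for k in range(gmm.n_components):
        mu, var = gmm.means_[k], gmm.covariances_[k]   # spherical: scalar variance
        sq = ((X - mu) ** 2).sum(axis=1)
        loglik = -0.5 * (d * np.log(2 * np.pi * var) + sq / var)
        parts.append(X[np.argmax(loglik)])             # best-explained descriptor
    return np.concatenate(parts)
```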

  15. Patterns of Crime and Drug Use Trajectories in Relation to Treatment Initiation and 5-Year Outcomes: An Application of Growth Mixture Modeling across Three Data Sets

    ERIC Educational Resources Information Center

    Prendergast, Michael; Huang, David; Hser, Yih-Ing

    2008-01-01

    Drug abusers vary considerably in their drug use and criminal behavior over time, and these trajectories are likely to influence drug treatment participation and treatment outcomes. Drawing on longitudinal natural history data from three samples of adult male drug users, we identify four groups with distinctive drug use and crime trajectories…

  16. Identification of isopropylbiphenyl, alkyl diphenylmethanes, diisopropylnaphthalene, linear alkyl benzenes and other polychlorinated biphenyl replacement compounds in effluents, sediments and fish in the Fox River System, Wisconsin

    USGS Publications Warehouse

    Peterman, Paul H.; Delfino, Joseph J.

    1990-01-01

    Five polychlorinated biphenyl replacement dye solvents and a diluent present in carbonless copy paper were identified by gas chromatography/mass spectrometry in the following matrices: effluents from a de-inking–recycling paper mill and a municipal wastewater treatment plant receiving wastewaters from a carbonless copy paper manufacturing plant; sediments; and fish collected near both discharges in the Fox River System, Wisconsin. An isopropylbiphenyl dye solvent mixture included mono-, di- and triisopropylbiphenyls. Also identified were two dye solvent mixtures marketed under the trade name Santosol. Santosol 100 comprised ethyl-diphenylmethanes (DPMs), benzyl-ethyl-DPMs, and dibenzyl-ethyl-DPMs. Similarly, Santosol 150 comprised dimethyl-DPMs, benzyl-dimethyl-DPMs, and dibenzyl-dimethyl-DPMs. Diisopropylnaphthalenes, widely used as a dye solvent in Japan, were identified for the first time in the US environment. sec-Butylbiphenyls and di-sec-butylbiphenyls, likely constituents of a sec-butylbiphenyl dye solvent mixture, were tentatively identified. Linear alkyl benzenes (C10 to C13-LABs) constituted the Alkylate 215 diluent mixture. Although known to occur as minor constituents in linear alkyl sulfonate detergents, LAB residues have not been previously attributed to commercial use of LABs.

  17. Identification of a pheromone that increases anxiety in rats

    PubMed Central

    Inagaki, Hideaki; Kiyokawa, Yasushi; Tamogami, Shigeyuki; Watanabe, Hidenori; Takeuchi, Yukari; Mori, Yuji

    2014-01-01

    Chemical communication plays an important role in the social lives of various mammalian species. Some of these chemicals are called pheromones. Rats release a specific odor into the air when stressed. This stress-related odor increases the anxiety levels of other rats; therefore, it is possible that the anxiety-causing molecules are present in the stress-related odorants. Here, we have tried to identify the responsible molecules by using the acoustic startle reflex as a bioassay system to detect anxiogenic activity. After successive fractionation of the stress-related odor, we detected 4-methylpentanal and hexanal in the final fraction that still possessed anxiogenic properties. Using synthetic molecules, we found that minute amounts of the binary mixture, but not either molecule separately, increased anxiety in rats. Furthermore, we determined that the mixture increased a specific type of anxiety and evoked anxiety-related behavioral responses in an experimental model that was different from the acoustic startle reflex. Analyses of neural mechanisms proposed that the neural circuit related to anxiety was only activated when the two molecules were simultaneously perceived by two olfactory systems. We concluded that the mixture is a pheromone that increases anxiety in rats. To our knowledge, this is the first study identifying a rat pheromone. Our results could aid further research on rat pheromones, which would enhance our understanding of chemical communication in mammals. PMID:25512532

  18. Identifying Complex Mixtures in the Environment with Cheminformatics and Non-targeted High Resolution Mass Spectrometry (SETAC NA Focused Topic Meeting : Risk Assessment of Chemical Mixtures)

    EPA Science Inventory

    Non-target high resolution mass spectrometry techniques combined with advanced cheminformatics offer huge potential for exploring complex mixtures in our environment – yet also offers plenty of challenges. Peak inventories of several non-target studies from within Europe reveal t...

  19. ANALYSES OF THE INTERACTIONS WITHIN BINARY MIXTURES OF CARCINOGENIC PAHS USING MORPHOLOGICAL CELL TRANSFORMATION OF C3H10T1/2CL8 CELLS

    EPA Science Inventory

    ANALYSES OF THE INTERACTIONS WITHIN BINARY MIXTURES OF CARCINOGENIC PAHS USING MORPHOLOGICAL CELL TRANSFORMATION OF C3H10T1/2CL8 CELLS.

    Studies of defined mixtures of carcinogenic polycyclic aromatic hydrocarbons (PAH) have identified three major categories of interacti...

  20. NGMIX: Gaussian mixture models for 2D images

    NASA Astrophysics Data System (ADS)

    Sheldon, Erin

    2015-08-01

    NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
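    The analytic convolution that motivates this design can be sketched independently of the NGMIX API: the convolution of two Gaussian mixtures is itself a Gaussian mixture over all component pairs, with weights multiplied and means and covariances added. A toy 2D illustration:

```python
# Toy illustration of analytic PSF convolution for Gaussian-mixture models:
# each (galaxy, PSF) component pair contributes one Gaussian to the result.
import numpy as np

def convolve_mixtures(gal, psf):
    """gal, psf: lists of (weight, mean(2,), cov(2,2)) tuples.
    Returns the convolved mixture in the same format."""
    out = []
    for wg, mg, cg in gal:
        for wp, mp, cp in psf:
            out.append((wg * wp, mg + mp, cg + cp))
    return out

# Usage sketch: a single-Gaussian "galaxy" convolved with a two-Gaussian "PSF"
gal = [(1.0, np.zeros(2), np.diag([4.0, 2.0]))]
psf = [(0.8, np.zeros(2), np.eye(2) * 0.5), (0.2, np.zeros(2), np.eye(2) * 2.0)]
model = convolve_mixtures(gal, psf)   # 2 components, with weights 0.8 and 0.2
```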

  1. Heterogeneity in the Relationship of Substance Use to Risky Sexual Behavior Among Justice-Involved Youth: A Regression Mixture Modeling Approach.

    PubMed

    Schmiege, Sarah J; Bryan, Angela D

    2016-04-01

    Justice-involved adolescents engage in high levels of risky sexual behavior and substance use, and understanding potential relationships among these constructs is important for effective HIV/STI prevention. A regression mixture modeling approach was used to determine whether subgroups could be identified based on the regression of two indicators of sexual risk (condom use and frequency of intercourse) on three measures of substance use (alcohol, marijuana and hard drugs). Three classes were observed among n = 596 adolescents on probation: none of the substances predicted outcomes for approximately 18 % of the sample; alcohol and marijuana use were predictive for approximately 59 % of the sample, and marijuana use and hard drug use were predictive in approximately 23 % of the sample. Demographic, individual difference, and additional sexual and substance use risk variables were examined in relation to class membership. Findings are discussed in terms of understanding profiles of risk behavior among at-risk youth.
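    For readers unfamiliar with the technique, a toy EM sketch of a two-class Gaussian regression mixture (class-specific regression coefficients estimated jointly with class membership); this illustrates the general idea only and is not the estimation routine used in the study:

```python
# Toy EM for a regression mixture: responses y regressed on predictors X with
# class-specific coefficients, residual scales, and mixing proportions.
import numpy as np
from scipy.stats import norm

def regression_mixture_em(X, y, n_classes=2, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])                   # add intercept
    beta = rng.normal(size=(n_classes, p + 1))
    sigma = np.full(n_classes, y.std() + 1e-6)
    pi = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: responsibilities from class-specific Gaussian residual densities
        dens = np.array([pi[k] * norm.pdf(y, Xd @ beta[k], sigma[k])
                         for k in range(n_classes)]).T      # (n, K)
        resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: weighted least squares per class, then scales and proportions
        for k in range(n_classes):
            w = resp[:, k]
            sw = np.sqrt(w)
            beta[k], *_ = np.linalg.lstsq(sw[:, None] * Xd, sw * y, rcond=None)
            r = y - Xd @ beta[k]
            sigma[k] = np.sqrt((w * r ** 2).sum() / w.sum())
        pi = resp.mean(axis=0)
    return beta, sigma, pi, resp
```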

  2. Interactions of oversulfated chondroitin sulfate (OSCS) from different sources with unfractionated heparin.

    PubMed

    Gray, Angel; Litinas, Evangelos; Jeske, Walter; Fareed, Jawed; Hoppensteadt, Debra

    2012-01-01

    In 2008, oversulfated chondroitin sulfate (OSCS) was identified as the main contaminant in recalled heparin. Oversulfated chondroitin sulfate can be prepared from bovine (B), porcine (P), shark (Sh), or skate (S) origin and may produce changes in the antithrombotic, bleeding, and hemodynamic profile of heparins. This study examines the interactions of various OSCSs on heparin in animal models of thrombosis and bleeding, as well as on the anticoagulant and antiprotease effects in in vitro assays. Mixtures of 70% unfractionated heparin (UFH) with 30% OSCS from different sources were tested. In the in vitro activated partial thromboplastin time (aPTT) assay, all contaminant mixtures showed a decrease in clotting times. In addition, a significant increase in bleeding time compared to the control (UFH/saline) was observed. In the thrombosis model, no significant differences were observed. The OSCSs significantly increased anti-Xa activity in ex vivo blood samples. These results indicate that various sources of OSCS affect the hemostatic properties of heparin.

  3. A non-ideal model for predicting the effect of dissolved salt on the flash point of solvent mixtures.

    PubMed

    Liaw, Horng-Jang; Wang, Tzu-Ai

    2007-03-06

    Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Liquids with dissolved salts arise, for example, in salt-distillation processes for separating close-boiling or azeotropic systems. The addition of salts to a liquid may reduce its fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with the experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in terms of prediction of the flash points of these mixtures. The experimental results confirm markedly larger increases in liquid flash point upon addition of inorganic salts than upon addition of equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessment of the fire and explosion hazard of solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.

  4. Zero inflation in ordinal data: Incorporating susceptibility to response through the use of a mixture model

    PubMed Central

    Kelley, Mary E.; Anderson, Stewart J.

    2008-01-01

    Summary The aim of the paper is to produce a methodology that will allow users of ordinal scale data to more accurately model the distribution of ordinal outcomes in which some subjects are susceptible to exhibiting the response and some are not (i.e., the dependent variable exhibits zero inflation). This situation occurs with ordinal scales in which there is an anchor that represents the absence of the symptom or activity, such as “none”, “never” or “normal”, and is particularly common when measuring abnormal behavior, symptoms, and side effects. Due to the unusually large number of zeros, traditional statistical tests of association can be non-informative. We propose a mixture model for ordinal data with a built-in probability of non-response that allows modeling of the range (e.g., severity) of the scale, while simultaneously modeling the presence/absence of the symptom. Simulations show that the model is well behaved and a likelihood ratio test can be used to choose between the zero-inflated and the traditional proportional odds model. The model, however, does have minor restrictions on the nature of the covariates that must be satisfied in order for the model to be identifiable. The method is particularly relevant for public health research such as large epidemiological surveys where more careful documentation of the reasons for response may be difficult. PMID:18351711
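    A minimal sketch of the probability model (a susceptibility weight mixed with a proportional-odds model), with illustrative parameter values; the paper's exact parameterization and covariate structure may differ:

```python
# Zero-inflated ordinal sketch: with probability (1 - p_susceptible) a subject
# is a structural zero; otherwise the response follows a cumulative-logit
# (proportional-odds) model.
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def zi_ordinal_pmf(cutpoints, eta, p_susceptible):
    """Return P(Y = k) for k = 0..K, given K increasing cutpoints, a linear
    predictor eta, and the probability of being susceptible."""
    cum = expit(np.asarray(cutpoints) - eta)          # P(Y <= k | susceptible)
    cum = np.append(cum, 1.0)
    probs = np.diff(np.concatenate(([0.0], cum)))     # category probabilities
    probs *= p_susceptible                            # scale by susceptibility
    probs[0] += 1.0 - p_susceptible                   # structural zeros inflate "0"
    return probs

# Example: a 4-point scale (0 = "none"), 70% of subjects susceptible
print(zi_ordinal_pmf(cutpoints=[-1.0, 0.5, 2.0], eta=0.3, p_susceptible=0.7))
```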

  5. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teng, S.; Tebby, C.

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where the concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, the mixtures showed no other interactions, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  6. Determination of Failure Point of Asphalt-Mixture Fatigue-Test Results Using the Flow Number Method

    NASA Astrophysics Data System (ADS)

    Wulan, C. E. P.; Setyawan, A.; Pramesti, F. P.

    2018-03-01

    The failure point of the results of fatigue tests of asphalt mixtures performed in controlled stress mode is difficult to determine. However, several methods from empirical studies are available to solve this problem. The objectives of this study are to determine the fatigue failure point of the results of indirect tensile fatigue tests using the Flow Number Method and to determine the best Flow Number model for the asphalt mixtures tested. In order to achieve these goals, the best of three asphalt mixtures was first selected based on its Marshall properties. Next, the Indirect Tensile Fatigue Test was performed on the chosen asphalt mixture. The stress-controlled fatigue tests were conducted at a temperature of 20°C and frequency of 10 Hz, with the application of three loads: 500, 600, and 700 kPa. The last step was the application of the Flow Number methods, namely the Three-Stages Model, FNest Model, Francken Model, and Stepwise Method, to the results of the fatigue tests to determine the failure point of the specimen. The chosen asphalt mixture is an EVA (ethylene vinyl acetate) polymer-modified asphalt mixture with 6.5% OBC (Optimum Bitumen Content). Furthermore, the results of this study show that the failure points of the EVA-modified asphalt mixture under loads of 500, 600, and 700 kPa are 6621, 4841, and 611 for the Three-Stages Model; 4271, 3266, and 537 for the FNest Model; 3401, 2431, and 421 for the Francken Model, and 6901, 6841, and 1291 for the Stepwise Method, respectively. These different results show that the bigger the loading, the smaller the number of cycles to failure. However, the best FN results are shown by the Three-Stages Model and the Stepwise Method, which exhibit extreme increases after the constant development of accumulated strain.

  7. Model Selection Methods for Mixture Dichotomous IRT Models

    ERIC Educational Resources Information Center

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

    This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information coefficient (AIC), Bayesian information coefficient (BIC), deviance information coefficient (DIC), pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…

  8. Absorption of visible radiation in atmosphere containing mixtures of absorbing and nonabsorbing particles

    NASA Technical Reports Server (NTRS)

    Ackerman, T. P.; Toon, O. B.

    1981-01-01

    The presence of a strongly absorbing material, tentatively identified as graphitic carbon, or 'soot', is indicated by measurements of the single-scattering albedo of tropospheric aerosols. Theoretical calculations based on models of the ways in which soot may mix with other aerosol materials can account for single-scattering albedo values of 0.6 in urban regions (requiring a minimum of 20% soot by volume) and 0.8 in rural settings (1-5% soot by volume); however, the same values can be produced by similar amounts of the iron oxide magnetite. Magnetite is shown to be indistinguishable from soot by optical measurements performed on bulk samples, and calculations for various soot mixtures indicate the difficulty of determining aerosol composition by optical scattering techniques.

  9. Exploring innovative techniques for identifying geochemical elements as fingerprints of sediment sources in an agricultural catchment of Argentina affected by soil erosion.

    PubMed

    Torres Astorga, Romina; de Los Santos Villalobos, Sergio; Velasco, Hugo; Domínguez-Quintero, Olgioly; Pereira Cardoso, Renan; Meigikos Dos Anjos, Roberto; Diawara, Yacouba; Dercon, Gerd; Mabit, Lionel

    2018-05-15

    Identification of hot spots of land degradation is strongly related to the selection of soil tracers for sediment pathways. This research proposes the complementary and integrated application of two analytical techniques to select the most suitable fingerprint tracers for identifying the main sources of sediments in an agricultural catchment located in Central Argentina with erosive loess soils. Diffuse reflectance Fourier transform mid-infrared (DRIFT-MIR) spectroscopy and energy-dispersive X-ray fluorescence (EDXRF) were used for a suitable fingerprint selection. To use DRIFT-MIR spectroscopy as a fingerprinting technique, calibration through quantitative parameters is needed to link and correlate DRIFT-MIR spectra with soil tracers. EDXRF was used in this context for determining the concentrations of geochemical elements in soil samples. The selected tracers were confirmed using two artificial mixtures composed of known proportions of soil collected in different sites with distinctive soil uses. These fingerprint elements were used as parameters to build a predictive model with the whole set of DRIFT-MIR spectra. Fingerprint elements such as phosphorus, iron, calcium, barium, and titanium were identified for obtaining a suitable reconstruction of the source proportions in the artificial mixtures. Mid-infrared spectra produced successful prediction models (R² = 0.91) for Fe content and moderately useful predictions (R² = 0.72) for Ti content. For Ca, P, and Ba, the R² values were 0.44, 0.58, and 0.59, respectively.
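    A minimal sketch of the spectra-to-element calibration step (predicting an EDXRF-determined element concentration from DRIFT-MIR spectra with PLS regression); the array names are placeholders, not the study's data:

```python
# Sketch of a PLS calibration linking mid-infrared spectra to an element
# concentration (e.g., Fe), with a held-out R^2 as a quick quality check.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def pls_calibration(spectra, concentration, n_components=8, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(spectra, concentration,
                                              test_size=0.3, random_state=seed)
    pls = PLSRegression(n_components=n_components).fit(X_tr, y_tr)
    y_hat = pls.predict(X_te).ravel()
    return pls, r2_score(y_te, y_hat)

# Usage sketch (hypothetical arrays):
# model, r2 = pls_calibration(drift_mir_spectra, fe_from_edxrf)
```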

  10. Mixture models for estimating the size of a closed population when capture rates vary among individuals

    USGS Publications Warehouse

    Dorazio, R.M.; Royle, J. Andrew

    2003-01-01

    We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
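    For illustration, a sketch of the likelihood behind a beta-binomial mixture estimator of closed population size (individual capture probabilities drawn from a Beta distribution, with undetected individuals contributing the zero-capture probability); the data and starting values below are hypothetical:

```python
# Beta-binomial mixture likelihood for closed-population size N: each of N
# individuals has a Beta(a, b) capture probability, so its number of captures
# in T occasions is beta-binomial; the N - n unobserved individuals contribute
# the probability of zero captures.
import numpy as np
from scipy.special import gammaln, betaln
from scipy.optimize import minimize

def log_betabinom(y, T, a, b):
    """log P(y captures out of T occasions) under a Beta(a, b) mixture."""
    return (gammaln(T + 1) - gammaln(y + 1) - gammaln(T - y + 1)
            + betaln(y + a, T - y + b) - betaln(a, b))

def neg_loglik(params, counts, T):
    """counts: capture frequencies (all > 0) of the n observed individuals."""
    log_N_minus_n, log_a, log_b = params
    a, b = np.exp(log_a), np.exp(log_b)
    n = len(counts)
    N = n + np.exp(log_N_minus_n)
    ll = (gammaln(N + 1) - gammaln(N - n + 1)          # choose which are seen
          + (N - n) * log_betabinom(0, T, a, b)        # unobserved individuals
          + log_betabinom(np.asarray(counts), T, a, b).sum())
    return -ll

# Usage sketch with hypothetical data (T = 6 occasions):
# counts = np.array([1, 1, 2, 1, 3, 1, 2, 4, 1, 1])
# fit = minimize(neg_loglik, x0=[np.log(10.0), 0.0, 0.0], args=(counts, 6))
# N_hat = len(counts) + np.exp(fit.x[0])
```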

  11. An isotherm-based thermodynamic model of multicomponent aqueous solutions, applicable over the entire concentration range.

    PubMed

    Dutcher, Cari S; Ge, Xinlei; Wexler, Anthony S; Clegg, Simon L

    2013-04-18

    In previous studies (Dutcher et al. J. Phys. Chem. C 2011, 115, 16474-16487; 2012, 116, 1850-1864), we derived equations for the Gibbs energy, solvent and solute activities, and solute concentrations in multicomponent liquid mixtures, based upon expressions for adsorption isotherms that include arbitrary numbers of hydration layers on each solute. In this work, the long-range electrostatic interactions that dominate in dilute solutions are added to the Gibbs energy expression, thus extending the range of concentrations for which the model can be used from pure liquid solute(s) to infinite dilution in the solvent, water. An equation for the conversion of the reference state for solute activity coefficients to infinite dilution in water has been derived. A number of simplifications are identified, notably the equivalence of the sorption site parameters r and the stoichiometric coefficients of the solutes, resulting in a reduction in the number of model parameters. Solute concentrations in mixtures conform to a modified Zdanovskii-Stokes-Robinson mixing rule, and solute activity coefficients to a modified McKay-Perring relation, when the effects of the long-range (Debye-Hückel) term in the equations are taken into account. Practical applications of the equations to osmotic and activity coefficients of pure aqueous electrolyte solutions and mixtures show both satisfactory accuracy from low to high concentrations, together with a thermodynamically reasonable extrapolation (beyond the range of measurements) to extreme concentration and to the pure liquid solute(s).

  12. Quantitative characterization of crude oils and fuels in mineral substrates using reflectance spectroscopy: Implications for remote sensing

    NASA Astrophysics Data System (ADS)

    Scafutto, Rebecca Del'Papa Moreira; Souza Filho, Carlos Roberto de

    2016-08-01

    The near and shortwave infrared spectral reflectance properties of several mineral substrates impregnated with crude oils (°APIs 19.2, 27.5 and 43.2), diesel, gasoline and ethanol were measured and assembled in a spectral library. These data were examined using Principal Component Analysis (PCA) and Partial Least Squares (PLS) Regression. Unique and characteristic absorption features were identified in the mixtures, in addition to variations of the spectral signatures related to the compositional differences of the crude oils and fuels. These features were used for qualitative and quantitative determination of the contaminant impregnated in the substrates. Specific wavelengths, where key absorption bands occur, were used for the individual characterization of oils and fuels. The intensity of these features can be correlated to the abundance of the contaminant in the mixtures. Grain size and composition of the impregnated substrate directly influence the variation of the spectral signatures. PCA models applied to the spectral library proved able to differentiate the type and density of the hydrocarbons. The calibration models generated by PLS are robust, of high quality and can also be used to predict the concentration of oils and fuels in mixtures with mineral substrates. Such data and models are employable as a reference for classifying unknown samples of contaminated substrates. The results of this study have important implications for onshore exploration and environmental monitoring of oil and fuel leaks using proximal and far-range multispectral, hyperspectral and ultraspectral remote sensing.

  13. Sentinel Node Biopsy for the Head and Neck Using Contrast-Enhanced Ultrasonography Combined with Indocyanine Green Fluorescence in Animal Models: A Feasibility Study.

    PubMed

    Kogashiwa, Yasunao; Sakurai, Hiroyuki; Akimoto, Yoshihiro; Sato, Dai; Ikeda, Tetsuya; Matsumoto, Yoshifumi; Moro, Yorihisa; Kimura, Toru; Hamanoue, Yasuhiro; Nakamura, Takehiro; Yamauchi, Koichi; Saito, Koichiro; Sugasawa, Masashi; Kohno, Naoyuki

    2015-01-01

    Sentinel node navigation surgery is gaining popularity in oral cancer. We assessed application of sentinel lymph node navigation surgery to pharyngeal and laryngeal cancers by evaluating the combination of contrast-enhanced ultrasonography and indocyanine green fluorescence in animal models. This was a prospective, nonrandomized, experimental study in rabbit and swine animal models. A mixture of indocyanine green and Sonazoid was used as the tracer. The tracer mixture was injected into the tongue, larynx, or pharynx. The sentinel lymph nodes were identified transcutaneously by infra-red camera and contrast-enhanced ultrasonography. Detection time and extraction time of the sentinel lymph nodes were measured. The safety of the tracer mixture in terms of mucosal reaction was evaluated macroscopically and microscopically. Sentinel lymph nodes were detected transcutaneously by contrast-enhanced ultrasonography alone. The number of sentinel lymph nodes detected was one or two. Despite observation of contrast enhancement of Sonazoid for at least 90 minutes, the number of sentinel lymph nodes detected did not change. The average extraction time of sentinel lymph nodes was 4.8 minutes. Indocyanine green fluorescence offered visual information during lymph node biopsy. The safety of the tracer was confirmed by absence of laryngeal edema both macro and microscopically. The combination method of indocyanine green fluorescence and contrast-enhanced ultrasonography for detecting sentinel lymph nodes during surgery for head and neck cancer seems promising, especially for pharyngeal and laryngeal cancer. Further clinical studies to confirm this are warranted.

  14. Leveraging Genomic Annotations and Pleiotropic Enrichment for Improved Replication Rates in Schizophrenia GWAS

    PubMed Central

    Wang, Yunpeng; Thompson, Wesley K.; Schork, Andrew J.; Holland, Dominic; Chen, Chi-Hua; Bettella, Francesco; Desikan, Rahul S.; Li, Wen; Witoelar, Aree; Zuber, Verena; Devor, Anna; Nöthen, Markus M.; Rietschel, Marcella; Chen, Qiang; Werge, Thomas; Cichon, Sven; Weinberger, Daniel R.; Djurovic, Srdjan; O’Donovan, Michael; Visscher, Peter M.; Andreassen, Ole A.; Dale, Anders M.

    2016-01-01

    Most of the genetic architecture of schizophrenia (SCZ) has not yet been identified. Here, we apply a novel statistical algorithm called Covariate-Modulated Mixture Modeling (CM3), which incorporates auxiliary information (heterozygosity, total linkage disequilibrium, genomic annotations, pleiotropy) for each single nucleotide polymorphism (SNP) to enable more accurate estimation of replication probabilities, conditional on the observed test statistic (“z-score”) of the SNP. We use a multiple logistic regression on z-scores to combine the auxiliary information and derive a “relative enrichment score” for each SNP. For each stratum of these relative enrichment scores, we obtain nonparametric estimates of posterior expected test statistics and replication probabilities as a function of discovery z-scores, using a resampling-based approach that repeatedly and randomly partitions meta-analysis sub-studies into training and replication samples. We fit a scale mixture of two Gaussians model to each stratum, obtaining parameter estimates that minimize the sum of squared differences of the scale-mixture model with the stratified nonparametric estimates. We apply this approach to the recent genome-wide association study (GWAS) of SCZ (n = 82,315), obtaining a good fit between the model-based and observed effect sizes and replication probabilities. We observed that SNPs with low enrichment scores replicate with a lower probability than SNPs with high enrichment scores even when both are genome-wide significant (p < 5 × 10⁻⁸). There were 693 and 219 independent loci with model-based replication rates ≥80% and ≥90%, respectively. Compared to analyses not incorporating relative enrichment scores, CM3 increased out-of-sample yield for SNPs that replicate at a given rate. This demonstrates that replication probabilities can be more accurately estimated using prior enrichment information with CM3. PMID:26808560
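    For readers unfamiliar with the model class, a toy EM sketch of a scale mixture of two zero-mean Gaussians fitted to z-scores (a "null plus non-null" decomposition); note that the paper fits this model by least squares against stratified nonparametric estimates rather than by EM:

```python
# Toy EM for a two-component scale mixture of zero-mean Gaussians applied to
# GWAS-style z-scores; initial values are arbitrary placeholders.
import numpy as np
from scipy.stats import norm

def two_gaussian_scale_mixture(z, n_iter=500):
    z = np.asarray(z, dtype=float)
    pi0, s0, s1 = 0.9, 1.0, 3.0            # null weight, null scale, non-null scale
    for _ in range(n_iter):
        d0 = pi0 * norm.pdf(z, scale=s0)
        d1 = (1.0 - pi0) * norm.pdf(z, scale=s1)
        r1 = d1 / (d0 + d1)                # responsibility of the wider component
        pi0 = 1.0 - r1.mean()
        s0 = np.sqrt(np.sum((1.0 - r1) * z ** 2) / np.sum(1.0 - r1))
        s1 = np.sqrt(np.sum(r1 * z ** 2) / np.sum(r1))
        s1 = max(s1, s0 + 1e-6)            # keep the components ordered
    return pi0, s0, s1
```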

  15. Testing and Improving Theories of Radiative Transfer for Determining the Mineralogy of Planetary Surfaces

    NASA Astrophysics Data System (ADS)

    Gudmundsson, E.; Ehlmann, B. L.; Mustard, J. F.; Hiroi, T.; Poulet, F.

    2012-12-01

    Two radiative transfer theories, the Hapke and Shkuratov models, have been used to estimate the mineralogic composition of laboratory mixtures of anhydrous mafic minerals from reflected near-infrared light, accurately modeling abundances to within 10%. For this project, we tested the efficacy of the Hapke model for determining the composition of mixtures (weight fraction, particle diameter) containing hydrous minerals, including phyllosilicates. Modal mineral abundances for some binary mixtures were modeled to +/-10% of actual values, but other mixtures showed higher inaccuracies (up to 25%). Consequently, a sensitivity analysis of selected input and model parameters was performed. We first examined the shape of the model's error function (RMS error between modeled and measured spectra) over a large range of endmember weight fractions and particle diameters and found that there was a single global minimum for each mixture (rather than local minima). The minimum was sensitive to modeled particle diameter but comparatively insensitive to modeled endmember weight fraction. Derivation of the endmembers' k optical constant spectra using the Hapke model showed differences with the Shkuratov-derived optical constants originally used. Model runs with different sets of optical constants suggest that slight differences in the optical constants used significantly affect the accuracy of model predictions. Even for mixtures where abundance was modeled correctly, particle diameter agreed inconsistently with sieved particle sizes and varied greatly for individual mixtures within a suite. Particle diameter was highly sensitive to the optical constants, possibly indicating that changes in modeled path length (proportional to particle diameter) compensate for changes in the k optical constant. Alternatively, it may not be appropriate to model path length and particle diameter with the same proportionality for all materials. Across mixtures, RMS error increased in proportion to the fraction of the darker endmember. Analyses are ongoing and further studies will investigate the effect of sample hydration, permitted variability in particle size, assumed photometric functions and use of different wavelength ranges on model results. Such studies will advance understanding of how to best apply radiative transfer modeling to geologically complex planetary surfaces. Corresponding authors: eyjolfur88@gmail.com, ehlmann@caltech.edu

  16. Applying mixture toxicity modelling to predict bacterial bioluminescence inhibition by non-specifically acting pharmaceuticals and specifically acting antibiotics.

    PubMed

    Neale, Peta A; Leusch, Frederic D L; Escher, Beate I

    2017-04-01

    Pharmaceuticals and antibiotics co-occur in the aquatic environment but mixture studies to date have mainly focused on pharmaceuticals alone or antibiotics alone, although differences in mode of action may lead to different effects in mixtures. In this study we used the Bacterial Luminescence Toxicity Screen (BLT-Screen) after acute (0.5 h) and chronic (16 h) exposure to evaluate how non-specifically acting pharmaceuticals and specifically acting antibiotics act together in mixtures. Three models were applied to predict mixture toxicity including concentration addition, independent action and the two-step prediction (TSP) model, which groups similarly acting chemicals together using concentration addition, followed by independent action to combine the two groups. All non-antibiotic pharmaceuticals had similar EC50 values at both 0.5 and 16 h, indicating together with a QSAR (Quantitative Structure-Activity Relationship) analysis that they act as baseline toxicants. In contrast, the antibiotics' EC50 values decreased by up to three orders of magnitude after 16 h, which can be explained by their specific effect on bacteria. Equipotent mixtures of non-antibiotic pharmaceuticals only, antibiotics only and both non-antibiotic pharmaceuticals and antibiotics were prepared based on the single chemical results. The mixture toxicity models were all in close agreement with the experimental results, with predicted EC50 values within a factor of two of the experimental results. This suggests that concentration addition can be applied to bacterial assays to model the mixture effects of environmental samples containing both specifically and non-specifically acting chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.
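    A minimal sketch of the two-step prediction idea (concentration addition within groups of similarly acting chemicals, independent action across groups), assuming each chemical is summarized by a log-logistic concentration-response curve; the groupings and parameter values are placeholders, not the study's data:

```python
# Two-step prediction (TSP) sketch: CA within each mode-of-action group,
# then IA to combine the group effects.
import numpy as np
from scipy.optimize import brentq

def effect(c, ec50, slope):
    """Single-chemical log-logistic effect, scaled 0..1 (for context)."""
    return 1.0 / (1.0 + (ec50 / np.maximum(c, 1e-30)) ** slope)

def ca_group_effect(concs, ec50s, slopes):
    """CA: solve for the effect level x at which the summed toxic units equal 1."""
    concs, ec50s, slopes = map(np.asarray, (concs, ec50s, slopes))
    def toxic_units_minus_one(x):
        # ECx obtained by inverting effect(): c = EC50 * (x / (1 - x))**(1/slope)
        ecx = ec50s * (x / (1.0 - x)) ** (1.0 / slopes)
        return np.sum(concs / ecx) - 1.0
    return brentq(toxic_units_minus_one, 1e-9, 1.0 - 1e-9)

def tsp_effect(groups):
    """groups: list of (concs, ec50s, slopes); CA within, IA across groups."""
    group_effects = [ca_group_effect(*g) for g in groups]
    return 1.0 - np.prod([1.0 - e for e in group_effects])   # independent action

# Usage sketch with two hypothetical mode-of-action groups:
# groups = [([0.1, 0.2], [1.0, 2.0], [1.5, 2.0]),   # e.g., baseline toxicants
#           ([0.05], [0.3], [1.2])]                  # e.g., antibiotics
# print(tsp_effect(groups))
```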

  17. Density-based clustering analyses to identify heterogeneous cellular sub-populations

    NASA Astrophysics Data System (ADS)

    Heaster, Tiffany M.; Walsh, Alex J.; Landman, Bennett A.; Skala, Melissa C.

    2017-02-01

    Autofluorescence microscopy of NAD(P)H and FAD provides functional metabolic measurements at the single-cell level. Here, density-based clustering algorithms were applied to metabolic autofluorescence measurements to identify cell-level heterogeneity in tumor cell cultures. The performance of the density-based clustering algorithm, DENCLUE, was tested in samples with known heterogeneity (co-cultures of breast carcinoma lines). DENCLUE was found to better represent the distribution of cell clusters compared to Gaussian mixture modeling. Overall, DENCLUE is a promising approach to quantify cell-level heterogeneity, and could be used to understand single cell population dynamics in cancer progression and treatment.

  18. A novel expert system for objective masticatory efficiency assessment

    PubMed Central

    2018-01-01

    Most of the tools and diagnosis models of Masticatory Efficiency (ME) are either not well documented or severely limited to simple image processing approaches. This study presents a novel expert system for ME assessment based on automatic recognition of mixture patterns of masticated two-coloured chewing gums using a combination of computational intelligence and image processing techniques. The hypotheses tested were that the proposed system could accurately relate specimens to the number of chewing cycles, and that it could identify differences between the mixture patterns of edentulous individuals prior to and after complete denture treatment. This study enrolled 80 fully-dentate adults (41 females and 39 males, 25 ± 5 years of age) as the reference population, and 40 edentulous adults (21 females and 19 males, 72 ± 8.9 years of age) for the testing group. The system was calibrated using the features extracted from 400 samples covering 0, 10, 15, and 20 chewing cycles. The calibrated system was used to automatically analyse and classify a set of 160 specimens retrieved from individuals in the testing group in two appointments. The ME was then computed as the predicted number of chewing strokes that a healthy reference individual would need to achieve a similar degree of mixture, measured against the real number of cycles applied to the specimen. The trained classifier obtained a Matthews correlation coefficient score of 0.97. ME measurements showed almost perfect agreement considering pre- and post-treatment appointments separately (κ ≥ 0.95). A Wilcoxon signed-rank test showed that complete denture treatment for edentulous patients elicited a statistically significant increase in the ME measurements (Z = -2.31, p < 0.01). We conclude that the proposed expert system proved able to reliably and accurately identify mixture patterns and provided useful ME measurements. PMID:29385165

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grove, John W.

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provides a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.

  20. Comparison of Bovine Bone-Autogenic Bone Mixture Versus Platelet-Rich Fibrin for Maxillary Sinus Grafting: Histologic and Histomorphologic Study.

    PubMed

    Ocak, Hakan; Kutuk, Nukhet; Demetoglu, Umut; Balcıoglu, Esra; Ozdamar, Saim; Alkan, Alper

    2017-06-01

    Numerous grafting materials have been used to augment the maxillary sinus floor for long-term stability and success of implant-supported prostheses. To enhance bone formation, adjunctive blood-borne growth factor sources have gained popularity in recent years. The present study compared the use of platelet-rich fibrin (PRF) and a bovine-autogenous bone mixture for maxillary sinus floor elevation. A split-face model was used to apply 2 different filling materials for maxillary sinus floor elevation in 22 healthy adult sheep. In group 1, a bovine-autogenous bone mixture was used; in group 2, PRF was used. The animals were killed at 3, 6, and 9 months. Histologic and histomorphologic examinations revealed new bone formation in group 1 at the third and sixth months. In group 2, new bone formation was observed only at the sixth month, and residual PRF remnants were identified. At the ninth month, host bone and new bone could not be distinguished from each other in group 1, and bone formation was found to be proceeding in group 2. PRF remnants still existed at the ninth month. In conclusion, a bovine-autogenous bone mixture is superior to PRF as a grafting material in sinus-lifting procedures.

  1. Estimating and modeling the cure fraction in population-based cancer survival analysis.

    PubMed

    Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W

    2007-07-01

    In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are 2 main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the 2 types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
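    For reference, the two model types have standard forms in the relative-survival (population-based) setting; the notation below summarizes those standard forms rather than reproducing the paper's equations:

```latex
% Standard cure-model forms: S*(t) is the expected (background) survival,
% R(t) the relative survival, \pi the cure fraction, S_u(t) the survival of
% the uncured, and F_z(t) a proper distribution function. All-cause survival
% is S(t) = S^{*}(t)\, R(t).
R_{\text{mixture}}(t) = \pi + (1-\pi)\, S_u(t),
\qquad
R_{\text{non-mixture}}(t) = \pi^{\, F_z(t)} .
```

    In the non-mixture form the relative survival tends to the cure fraction π as F_z(t) approaches 1.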

  2. Process dissociation and mixture signal detection theory.

    PubMed

    DeCarlo, Lawrence T

    2008-11-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely analyzed study. The results suggest that a process other than recollection may be involved in the process dissociation procedure.

  3. Statistical-thermodynamic model for light scattering from eye lens protein mixtures

    NASA Astrophysics Data System (ADS)

    Bell, Michael M.; Ross, David S.; Bautista, Maurino P.; Shahmohamad, Hossein; Langner, Andreas; Hamilton, John F.; Lahnovych, Carrie N.; Thurston, George M.

    2017-02-01

    We model light-scattering cross sections of concentrated aqueous mixtures of the bovine eye lens proteins γB- and α-crystallin by adapting a statistical-thermodynamic model of mixtures of spheres with short-range attractions. The model reproduces measured static light scattering cross sections, or Rayleigh ratios, of γB-α mixtures from dilute concentrations where light scattering intensity depends on molecular weights and virial coefficients, to realistically high concentration protein mixtures like those of the lens. The model relates γB-γB and γB-α attraction strengths and the γB-α size ratio to the free energy curvatures that set light scattering efficiency in tandem with protein refractive index increments. The model includes (i) hard-sphere α-α interactions, which create short-range order and transparency at high protein concentrations, (ii) short-range attractive plus hard-core γ-γ interactions, which produce intense light scattering and liquid-liquid phase separation in aqueous γ-crystallin solutions, and (iii) short-range attractive plus hard-core γ-α interactions, which strongly influence highly non-additive light scattering and phase separation in concentrated γ-α mixtures. The model reveals a new lens transparency mechanism, that prominent equilibrium composition fluctuations can be perpendicular to the refractive index gradient. The model reproduces the concave-up dependence of the Rayleigh ratio on α/γ composition at high concentrations, its concave-down nature at intermediate concentrations, non-monotonic dependence of light scattering on γ-α attraction strength, and more intricate, temperature-dependent features. We analytically compute the mixed virial series for light scattering efficiency through third order for the sticky-sphere mixture, and find that the full model represents the available light scattering data at concentrations several times those where the second and third mixed virial contributions fail. The model indicates that increased γ-γ attraction can raise γ-α mixture light scattering far more than it does for solutions of γ-crystallin alone, and can produce marked turbidity tens of degrees celsius above liquid-liquid separation.

  4. Toxicity interactions between manganese (Mn) and lead (Pb) or cadmium (Cd) in a model organism the nematode C. elegans.

    PubMed

    Lu, Cailing; Svoboda, Kurt R; Lenz, Kade A; Pattison, Claire; Ma, Hongbo

    2018-06-01

    Manganese (Mn) is considered an emerging metal contaminant in the environment. However, its potential interactions with accompanying toxic metals and the associated mixture effects are largely unknown. Here, we investigated the toxicity interactions between Mn and two commonly co-occurring toxic metals, Pb and Cd, in a model organism, the nematode Caenorhabditis elegans. The acute lethal toxicity of mixtures of Mn+Pb and Mn+Cd was first assessed using a toxic unit model. Multiple toxicity endpoints including reproduction, lifespan, stress response, and neurotoxicity were then examined to evaluate the mixture effects at sublethal concentrations. Stress response was assessed using a daf-16::GFP transgenic strain that expresses GFP under the control of the DAF-16 promoter. Neurotoxicity was assessed using a dat-1::GFP transgenic strain that expresses GFP in dopaminergic neurons. The mixture of Mn+Pb induced a more-than-additive (synergistic) lethal toxicity in the worm whereas the mixture of Mn+Cd induced a less-than-additive (antagonistic) toxicity. Mixture effects on sublethal toxicity showed more complex patterns and were dependent on the toxicity endpoints as well as the modes of toxic action of the metals. The mixture of Mn+Pb induced additive effects on both reproduction and lifespan, whereas the mixture of Mn+Cd induced additive effects on lifespan but not reproduction. Both mixtures seemed to induce additive effects on stress response and neurotoxicity, although a quantitative assessment was not possible due to the single concentrations used in mixture tests. Our findings demonstrate the complexity of metal interactions and the associated mixture effects. Assessment of metal mixture toxicity should take into consideration the unique properties of individual metals, their potential toxicity mechanisms, and the toxicity endpoints examined.

  5. Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity

    NASA Astrophysics Data System (ADS)

    Chen, Hsieh; Panagiotopoulos, Athanassios Z.

    2018-01-01

    We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.

  6. Mixture IRT Model with a Higher-Order Structure for Latent Traits

    ERIC Educational Resources Information Center

    Huang, Hung-Yu

    2017-01-01

    Mixture item response theory (IRT) models have been suggested as an efficient method of detecting the different response patterns derived from latent classes when developing a test. In testing situations, multiple latent traits measured by a battery of tests can exhibit a higher-order structure, and mixtures of latent classes may occur on…

  7. Dehydration of Methylcyclohexanol Isomers in the Undergraduate Organic Laboratory and Product Analysis by Gas Chromatography-Mass Spectroscopy (GC-MS)

    ERIC Educational Resources Information Center

    Clennan, Malgorzata M.; Clennan, Edward L.

    2011-01-01

    Dehydrations of "cis"- and "trans"-2-methylcyclohexanol mixtures were carried out with 60% sulfuric acid at 78-80 [degrees]C as a function of time and the products were identified by gas chromatography-mass spectroscopy (GC-MS) analysis. The compounds identified in the reaction mixtures include alkenes, 1-, 3-, and 4-methylcyclohexenes and…

  8. Beta Regression Finite Mixture Models of Polarization and Priming

    ERIC Educational Resources Information Center

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  9. Predicting mixture toxicity of seven phenolic compounds with similar and dissimilar action mechanisms to Vibrio qinghaiensis sp.nov.Q67.

    PubMed

    Huang, Wei Ying; Liu, Fei; Liu, Shu Shen; Ge, Hui Lin; Chen, Hong Han

    2011-09-01

    The predictions of mixture toxicity for chemicals are commonly based on two models: concentration addition (CA) and independent action (IA). Whether the CA and IA can predict mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms was studied. The mixture toxicity was predicted on the basis of the concentration-response data of individual compounds. Test mixtures at different concentration ratios and concentration levels were designed using two methods. The results showed that the Weibull function fit well with the concentration-response data of all the components and their mixtures, with all relative coefficients (Rs) greater than 0.99 and root mean squared errors (RMSEs) less than 0.04. The predicted values from CA and IA models conformed to observed values of the mixtures. Therefore, it can be concluded that both CA and IA can predict reliable results for the mixture toxicity of the phenolic compounds with similar and dissimilar action mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.
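
    The two reference models named here have standard closed forms: concentration addition (CA) treats the mixture dose as a sum of toxic units against iso-effective concentrations, while independent action (IA) multiplies the probabilities of non-response. The sketch below uses Weibull concentration-response curves with made-up parameters and mixture ratios, not the fitted values from the study.

```python
import numpy as np
from scipy.optimize import brentq

# Weibull concentration-response curve: E(c) = 1 - exp(-exp(a + b*log10(c))).
def effect(c, a, b):
    return 1.0 - np.exp(-np.exp(a + b * np.log10(c)))

def inverse_effect(x, a, b):
    """Concentration producing effect level x for one component (ECx)."""
    return 10.0 ** ((np.log(-np.log(1.0 - x)) - a) / b)

# Hypothetical single-compound parameters (a, b) and concentration fractions p_i.
params = [(0.5, 1.2), (-0.3, 0.9), (0.1, 1.5)]
p = np.array([0.5, 0.3, 0.2])

def ca_effect(c_total):
    """Concentration addition: solve sum(p_i*c_total / ECx_i) = 1 for effect x."""
    f = lambda x: sum(pi * c_total / inverse_effect(x, a, b)
                      for pi, (a, b) in zip(p, params)) - 1.0
    return brentq(f, 1e-9, 1 - 1e-9)

def ia_effect(c_total):
    """Independent action: E = 1 - prod(1 - E_i(p_i * c_total))."""
    return 1.0 - np.prod([1.0 - effect(pi * c_total, a, b)
                          for pi, (a, b) in zip(p, params)])

print(ca_effect(1.0), ia_effect(1.0))
```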

  10. Mixture optimization for mixed gas Joule-Thomson cycle

    NASA Astrophysics Data System (ADS)

    Detlor, J.; Pfotenhauer, J.; Nellis, G.

    2017-12-01

    An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters, and expands on prior research by exploring higher heat rejection temperatures and lower pressure ratios. A mixture optimization model has been developed which determines an optimal three-component mixture by maximizing the minimum value of the isothermal enthalpy change, ΔhT, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed-gas JT system with load temperatures down to 110 K and supply temperatures above room temperature, for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the percentage of the heat exchanger that operates in the two-phase region, in order to begin the process of selecting a mixture for experimental investigation.
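
    The selection criterion described (maximize, over candidate compositions, the minimum isothermal enthalpy difference between the high- and low-pressure streams across the working temperature range) can be sketched as a small grid search. The enthalpy function below is a placeholder; a real implementation would query an equation-of-state package such as REFPROP or CoolProp for the mixture enthalpy.

```python
import itertools
import numpy as np

# Sketch of the max-min selection criterion for a three-component JT mixture.
# h(T, P, x) is a stand-in; a real tool would call a property package.

def h(T, P, x):
    # Hypothetical enthalpy surface (kJ/kg) for illustration only.
    return 2.0 * T - 0.01 * P * (1.0 + x[0] - 0.5 * x[1] + 0.2 * x[2]) * T

def min_delta_hT(x, T_range, P_low, P_high):
    """Minimum isothermal enthalpy difference over the temperature range."""
    return min(h(T, P_low, x) - h(T, P_high, x) for T in T_range)

T_range = np.linspace(110.0, 300.0, 20)     # load to supply temperature (K)
P_low, P_high = 100.0, 300.0                # kPa, pressure ratio 3:1

best = max(
    (frac for frac in itertools.product(np.linspace(0.1, 0.8, 8), repeat=3)
     if abs(sum(frac) - 1.0) < 1e-9),        # keep compositions summing to 1
    key=lambda x: min_delta_hT(x, T_range, P_low, P_high),
)
print("optimal composition:", best)
```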

  11. Existence, uniqueness and positivity of solutions for BGK models for mixtures

    NASA Astrophysics Data System (ADS)

    Klingenberg, C.; Pirner, M.

    2018-01-01

    We consider kinetic models for a multi-component gas mixture without chemical reactions. In the literature, one can find two types of BGK models for describing gas mixtures. One type has a sum of BGK-type interaction terms in the relaxation operator, for example the model described by Klingenberg, Pirner and Puppo [20], which contains well-known models of physicists and engineers, such as those of Hamel [16] and of Gross and Krook [15], as special cases. The other type contains only one collision term on the right-hand side, for example the well-known model of Andries, Aoki and Perthame [1]. For each of these two models, [20] and [1], we prove existence, uniqueness and positivity of solutions in the first part of the paper. In the second part, we use the first model [20] to determine an unknown function in the energy exchange of the macroscopic equations for gas mixtures described by Dellacherie [11].
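
    For readers unfamiliar with the two model families being compared, a schematic form of the relaxation operators is given below. This is written from memory of the cited literature and is only indicative: the notation, and in particular any density factors in the collision frequencies, may differ from [20] and [1].

```latex
% Schematic BGK operators for a two-species mixture (k = 1, 2, j \neq k).
% Type 1 (sum of relaxation terms, in the spirit of [20]):
\partial_t f_k + v \cdot \nabla_x f_k
  = \nu_{kk}\,\bigl(M_{kk} - f_k\bigr) + \nu_{kj}\,\bigl(M_{kj} - f_k\bigr)
% Type 2 (single relaxation term, in the spirit of [1]):
\partial_t f_k + v \cdot \nabla_x f_k
  = \nu_k\,\bigl(M_k - f_k\bigr)
% The Maxwellians M_{kk}, M_{kj}, M_k carry mixture-dependent velocities and
% temperatures chosen so that momentum and energy exchange between species
% is reproduced.
```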

  12. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moges, Edom; Demissie, Yonas; Li, Hong-Yi

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are witnessed through diagnostic measures. In contrast, the diagnostics and aggregated performance measures indicate that the French Broad has a homogeneous catchment response, making the single model adequate to capture the response.
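
    The HME combination rule itself is compact: a gating function turns the indicator variable into weights, and the prediction is a weighted sum of the expert model outputs. Below is a generic two-expert sketch, not the authors' HBV implementation; the experts and gating indicator are hypothetical stand-ins.

```python
import numpy as np

# Generic two-expert mixture-of-experts combination, for illustration only.
# The experts and the gating indicator z stand in for the HBV model structures
# and catchment indicator variable used in the study.

def gate(z, w0, w1):
    """Logistic gating function mapping indicator z to the expert-1 weight."""
    return 1.0 / (1.0 + np.exp(-(w0 + w1 * z)))

def expert_fast(x):   # hypothetical quick-runoff expert
    return 0.8 * x

def expert_slow(x):   # hypothetical baseflow expert
    return 0.2 * x + 1.0

def hme_predict(x, z, w0=-1.0, w1=2.0):
    g = gate(z, w0, w1)
    return g * expert_fast(x) + (1.0 - g) * expert_slow(x)

print(hme_predict(x=10.0, z=1.5))
```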

  13. Metabolic Engineering for Substrate Co-utilization

    NASA Astrophysics Data System (ADS)

    Gawand, Pratish

    Production of biofuels and bio-based chemicals is being increasingly pursued by chemical industry to reduce its dependence on petroleum. Lignocellulosic biomass (LCB) is an abundant source of sugars that can be used for producing biofuels and bio-based chemicals using fermentation. Hydrolysis of LCB results in a mixture of sugars mainly composed of glucose and xylose. Fermentation of such a sugar mixture presents multiple technical challenges at industrial scale. Most industrial microorganisms utilize sugars in a sequential manner due to the regulatory phenomenon of carbon catabolite repression (CCR). Due to sequential utilization of sugars, the LCB-based fermentation processes suffer low productivities and complicated operation. Performance of fermentation processes can be improved by metabolic engineering of microorganisms to obtain superior characteristics such as high product yield. With increased computational power and availability of complete genomes of microorganisms, use of model-based metabolic engineering is now a common practice. The problem of sequential sugar utilization, however, is a regulatory problem, and metabolic models have never been used to solve such regulatory problems. The focus of this thesis is to use model-guided metabolic engineering to construct industrial strains capable of co-utilizing sugars. First, we develop a novel bilevel optimization algorithm SimUp, that uses metabolic models to identify reaction deletion strategies to force co-utilization of two sugars. We then use SimUp to identify reaction deletion strategies to force glucose-xylose co-utilization in Escherichia coli. To validate SimUp predictions, we construct three mutants with multiple gene knockouts and test them for glucose-xylose utilization characteristics. Two mutants, designated as LMSE2 and LMSE5, are shown to co-utilize glucose and xylose in agreement with SimUp predictions. To understand the molecular mechanism involved in glucose-xylose co-utilization of the mutant LMSE2, the mutant is subjected to targeted and whole genome sequencing. Finally, we use the mutant LMSE2 to produce D-ribose from a mixture of glucose and xylose by overexpressing an endogenous phosphatase. The methods developed in this thesis are anticipated to provide a novel approach to solve sugar co-utilization problem in industrial microorganisms, and provide insights into microbial response to forced co-utilization of sugars.

  14. Solving Coupled Gross-Pitaevskii Equations on a Cluster of PlayStation 3 Computers

    NASA Astrophysics Data System (ADS)

    Edwards, Mark; Heward, Jeffrey; Clark, C. W.

    2009-05-01

    At Georgia Southern University we have constructed an 8+1-node cluster of Sony PlayStation 3 (PS3) computers with the intention of using this computing resource to solve problems related to the behavior of ultra-cold atoms in general, with a particular emphasis on studying Bose-Bose and Bose-Fermi mixtures confined in optical lattices. As a first project that uses this computing resource, we have implemented a parallel solver of the coupled time-dependent, one-dimensional Gross-Pitaevskii (TDGP) equations. These equations govern the behavior of dual-species bosonic mixtures. We chose the split-operator/FFT method to solve the coupled 1D TDGP equations. The fast Fourier transform component of this solver can be readily parallelized on the PS3 CPU known as the Cell Broadband Engine (CellBE). Each CellBE chip contains a single 64-bit PowerPC Processor Element known as the PPE and eight "Synergistic Processor Elements" known as SPEs. We report on this algorithm and compare its performance to a non-parallel solver as applied to modeling evaporative cooling in dual-species bosonic mixtures.
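
    The split-operator/FFT scheme mentioned here alternates half-steps with the potential and interaction terms in position space with a full kinetic step applied in Fourier space. A serial, dimensionless sketch for a single species pair is given below; the trap potential and coupling constants g11, g22, g12 are illustrative placeholders, and the actual PS3 implementation distributed the FFT work across the SPEs.

```python
import numpy as np

# Serial sketch of one split-operator/FFT step for coupled 1D Gross-Pitaevskii
# equations in dimensionless units. Couplings and trap are illustrative.

N, L, dt = 512, 20.0, 1e-3
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2                      # harmonic trap for both species
g11, g22, g12 = 1.0, 0.9, 0.5       # hypothetical intra-/inter-species couplings

def step(psi1, psi2):
    """One Strang-split step: half potential, full kinetic (FFT), half potential."""
    def half_potential(p1, p2):
        n1, n2 = np.abs(p1)**2, np.abs(p2)**2
        p1 = p1 * np.exp(-0.5j * dt * (V + g11 * n1 + g12 * n2))
        p2 = p2 * np.exp(-0.5j * dt * (V + g22 * n2 + g12 * n1))
        return p1, p2

    psi1, psi2 = half_potential(psi1, psi2)
    kin = np.exp(-0.5j * dt * k**2)          # exp(-i * (k^2 / 2) * dt)
    psi1 = np.fft.ifft(kin * np.fft.fft(psi1))
    psi2 = np.fft.ifft(kin * np.fft.fft(psi2))
    return half_potential(psi1, psi2)

psi1 = np.exp(-x**2).astype(complex)
psi2 = np.exp(-(x - 1.0)**2).astype(complex)
psi1, psi2 = step(psi1, psi2)
```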

  15. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models.

    PubMed

    Teng, S; Tebby, C; Barcellini-Couget, S; De Sousa, G; Brochot, C; Rahmani, R; Pery, A R R

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    NASA Astrophysics Data System (ADS)

    Sardet, Laure; Patilea, Valentin

    When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem to be quite well-adapted to capture the skewness, the long tails as well as the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of the quantiles with simulated nonnegative data and the quantiles of the individual claims distribution in a non-life insurance application.
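
    The pipeline described (fit a small parametric mixture, push the data through its CDF into the unit interval, smooth there with a beta kernel, then back-transform with the change-of-variables Jacobian) can be sketched as follows. The two-component lognormal mixture, the Chen-style beta kernel, and the fixed bandwidth are my reading of the method, not the authors' exact choices.

```python
import numpy as np
from scipy import stats

# Sketch of the mixture-CDF transform + beta-kernel smoothing idea.
# Two-component lognormal mixture and fixed bandwidth are illustrative choices.

rng = np.random.default_rng(0)
claims = np.concatenate([rng.lognormal(7.0, 0.5, 800),
                         rng.lognormal(9.0, 0.8, 200)])   # synthetic claims

# Step 1: parsimonious parametric mixture (weights/params assumed already fitted).
w, mu, sigma = [0.8, 0.2], [7.0, 9.0], [0.5, 0.8]
def mix_cdf(x):
    return sum(wi * stats.lognorm.cdf(x, si, scale=np.exp(mi))
               for wi, mi, si in zip(w, mu, sigma))
def mix_pdf(x):
    return sum(wi * stats.lognorm.pdf(x, si, scale=np.exp(mi))
               for wi, mi, si in zip(w, mu, sigma))

# Step 2: transform to the unit interval and smooth with a beta kernel
# (Chen-type kernel with bandwidth b), then back-transform with the Jacobian.
u = mix_cdf(claims)
b = 0.05
def density(x):
    v = mix_cdf(x)
    kernel = stats.beta.pdf(u, v / b + 1.0, (1.0 - v) / b + 1.0).mean()
    return kernel * mix_pdf(x)     # change of variables back to the claim scale

print(density(np.exp(7.5)))
```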

  17. Mathematical Model of Nonstationary Separation Processes Proceeding in the Cascade of Gas Centrifuges in the Process of Separation of Multicomponent Isotope Mixtures

    NASA Astrophysics Data System (ADS)

    Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.

    2017-03-01

    We have developed and realized on software a mathematical model of the nonstationary separation processes proceeding in the cascades of gas centrifuges in the process of separation of multicomponent isotope mixtures. With the use of this model the parameters of the separation process of germanium isotopes have been calculated. It has been shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters in the process of separation of multicomponent isotope mixtures.

  18. Three Boundary Conditions for Computing the Fixed-Point Property in Binary Mixture Data.

    PubMed

    van Maanen, Leendert; Couto, Joaquina; Lebreton, Mael

    2016-01-01

    The notion of "mixtures" has become pervasive in behavioral and cognitive sciences, due to the success of dual-process theories of cognition. However, providing support for such dual-process theories is not trivial, as it crucially requires properties in the data that are specific to a mixture of cognitive processes. In theory, one such property could be the fixed-point property of binary mixture data, applied, for instance, to response times. In that case, the fixed-point property entails that response time distributions obtained in an experiment in which the mixture proportion is manipulated would have a common density point. In the current article, we discuss the application of the fixed-point property and identify three boundary conditions under which the fixed-point property will not be interpretable. In Boundary condition 1, a finding in support of the fixed-point property will be moot because of a lack of difference between conditions. Boundary condition 2 refers to the case in which the extreme conditions are so different that a mixture may display bimodality. In this case, a mixture hypothesis is clearly supported, yet the fixed-point may not be found. In Boundary condition 3 the fixed-point may also not be present, yet a mixture might still exist but is occluded due to additional changes in behavior. Finding the fixed-point property provides strong support for a dual-process account, yet the boundary conditions that we identify should be considered before making inferences about underlying psychological processes.
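
    The fixed-point property follows directly from the binary mixture form p(t) = π f_A(t) + (1 − π) f_B(t): wherever f_A(t) = f_B(t), the mixture density equals that common value for every π, so densities obtained under different mixture proportions cross at the same point. A quick numerical illustration, with Gaussians standing in for the two latent response-time distributions:

```python
import numpy as np
from scipy import stats

# Numerical illustration of the fixed-point property of binary mixtures.
# f_A and f_B are stand-ins for the two latent response-time distributions.

t = np.linspace(0.0, 2.0, 2001)
f_A = stats.norm.pdf(t, loc=0.7, scale=0.15)
f_B = stats.norm.pdf(t, loc=1.1, scale=0.20)

mixtures = {pi: pi * f_A + (1 - pi) * f_B for pi in (0.2, 0.5, 0.8)}

# All mixture densities coincide where f_A(t) == f_B(t).
mask = (t > 0.7) & (t < 1.1)                 # search between the two modes
idx = np.argmin(np.abs(f_A - f_B)[mask])
crossing = t[mask][idx]
print("common density point near t =", round(crossing, 3))
print({pi: round(m[mask][idx], 3) for pi, m in mixtures.items()})
```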

  19. Theoretical models for ice mixtures in outer solar system bodies

    NASA Astrophysics Data System (ADS)

    Escribano, R. M.; Gómez, P. C.; Molpeceres, G.; Timón, V.; Moreno, M. A.; Maté, B.

    2016-12-01

    In a recent work [1], we have measured the optical constants and band strengths of methane/ethane ice mixtures in the near- and mid-infrared ranges. We present here recent calculations on models for these and other ice mixtures containing water. Methane and ethane are constituents of planetary ices in our solar system. Methane has been detected in outer solar system bodies like Titan, Pluto, Charon, Triton, or other TNO's [2]. Ethane has also been identified in some of those objects [2]. The motivation of this work has been to provide new laboratory data and theoretical models that may contribute to the understanding of those systems, in the new era of TNO's knowledge opened up by the New Horizons mission [3,4]. The models are designed to cover a range of mixtures of molecular species that match the composition and density of some of the systems in outer solar systems bodies. The calculations include several steps: first, amorphous samples are generated, via a Metropolis Montecarlo procedure (see Figure, left); next, the amorphous structures are relaxed to reach a minimum in the potential energy surface; at this point, the harmonic vibrational spectrum is predicted. Finally, the relaxed structures are processed by ab initio molecular dynamics simulations with the final aim of obtaining an anharmonic prediction of the spectra, which includes the near-infrared region (see Figure, right). Both the harmonic and anharmonic spectra are compared to experimental measurements in the mid- and near-infrared regions. All calculations are carried out by means of Materials Studio software, using the Density Functional Theory method, with GGA-PBE functionals and Grimme D2 dispersion correction. Acknowledgements This research has been supported by the Spanish MINECO, Projects FIS2013-48087-C2-1-P. G.M. acknowledges MINECO PhD grant BES-2014-069355. We are grateful to V. J. Herrero and I. Tanarro for discussions. References [1] G. Molpeceres et al., Astrophys. J, accepted (2016). [2] D.P. Cruikshank et al., Icarus, 246, 82-92, 2015. [3] A. Stern et al., Science, 350, 260-292, 2015. [4] W.M. Grundy et al. Science, 351, 1283, 2016. Figure caption Left: Representation of an amorphous mixture with 1 methane and 3 water molecules; right: spectra predicted in the near-IR from a molecular dynamics calculation.

  20. A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    PubMed Central

    2011-01-01

    Background Bioinformatics data analysis is often using linear mixture model representing samples as additive mixture of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of extracted components to be retained for classification analysis remains an open issue. Results The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of: 96.2 (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of: 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness constrained factorization on a sample-by-sample basis. As opposed to that, existing methods factorize complete dataset simultaneously. The sample model is composed of a reference sample representing control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific and not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to particular component is based on thresholds estimated from each sample directly. Due to the locality of decomposition, the strength of the expression of each feature across the samples can vary. Yet, they will still be allocated to the related disease and/or control specific component. Since label information is not used in the selection process, case and control specific components can be used for classification. That is not the case with standard factorization methods. Moreover, the component selected by proposed method as disease specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers. As opposed to standard matrix factorization methods this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from disease and control specific components on a sample-by-sample basis. This yields selected components with reduced complexity and generally, it increases prediction accuracy. PMID:22208882

  1. Closed-form solutions in stress-driven two-phase integral elasticity for bending of functionally graded nano-beams

    NASA Astrophysics Data System (ADS)

    Barretta, Raffaele; Fabbrocino, Francesco; Luciano, Raimondo; Sciarra, Francesco Marotti de

    2018-03-01

    Strain-driven and stress-driven integral elasticity models are formulated for the analysis of the structural behaviour of functionally graded nano-beams. An innovative stress-driven two-phase constitutive mixture defined by a convex combination of local and nonlocal phases is presented. The analysis reveals that the Eringen strain-driven fully nonlocal model cannot be used in Structural Mechanics since it is ill-posed, and that local-nonlocal mixtures based on the Eringen integral model only partially resolve the ill-posedness of the model. In fact, a singular behaviour of continuous nano-structures appears if the local fraction tends to vanish, so that the ill-posedness of the Eringen integral model is not eliminated. On the contrary, local-nonlocal mixtures based on the stress-driven theory are mathematically and mechanically appropriate for nanosystems. Exact solutions of inflected functionally graded nanobeams of technical interest are established by adopting the new local-nonlocal mixture stress-driven integral relation. Effectiveness of the new nonlocal approach is tested by comparing the contributed results with the ones corresponding to the mixture Eringen theory.

  2. A modified procedure for mixture-model clustering of regional geochemical data

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David B.; Horton, John D.

    2014-01-01

    A modified procedure is proposed for mixture-model clustering of regional-scale geochemical data. The key modification is the robust principal component transformation of the isometric log-ratio transforms of the element concentrations. This principal component transformation and the associated dimension reduction are applied before the data are clustered. The principal advantage of this modification is that it significantly improves the stability of the clustering. The principal disadvantage is that it requires subjective selection of the number of clusters and the number of principal components. To evaluate the efficacy of this modified procedure, it is applied to soil geochemical data that comprise 959 samples from the state of Colorado (USA) for which the concentrations of 44 elements are measured. The distributions of element concentrations that are derived from the mixture model and from the field samples are similar, indicating that the mixture model is a suitable representation of the transformed geochemical data. Each cluster and the associated distributions of the element concentrations are related to specific geologic and anthropogenic features. In this way, mixture model clustering facilitates interpretation of the regional geochemical data.
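
    The modified procedure (log-ratio transform of the compositional data, a robust principal component reduction, then mixture-model clustering) maps onto standard tooling. Below is a compressed sketch that substitutes a centred log-ratio transform and an ordinary PCA for the paper's isometric log-ratio transform and robust PCA, and scikit-learn's GaussianMixture for the authors' mixture model; the concentration data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Sketch of the clustering pipeline: log-ratio transform -> PCA -> GMM.
# clr + plain PCA stand in for the paper's ilr + robust PCA; data are synthetic.

rng = np.random.default_rng(42)
conc = rng.lognormal(mean=0.0, sigma=1.0, size=(959, 44))   # element concentrations

clr = np.log(conc) - np.log(conc).mean(axis=1, keepdims=True)  # clr transform

scores = PCA(n_components=5).fit_transform(clr)    # dimension reduction first
labels = GaussianMixture(n_components=6, covariance_type="full",
                         random_state=0).fit_predict(scores)
print(np.bincount(labels))                          # cluster sizes
```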

  3. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and pharmaceutical dosage form by handling the UV spectral data. A 3-factor 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
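
    Of the five chemometric variants listed, the PLS-1 baseline is the simplest to reproduce. The sketch below runs it on synthetic UV spectra: the wavelength grid, pure-component spectra, and calibration/validation concentrations are all made up, and serve only to show the 15-mixture calibration / 10-mixture validation workflow.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# PLS-1 sketch for a ternary mixture calibration from UV spectra.
# Spectra and concentration designs are synthetic stand-ins.

rng = np.random.default_rng(1)
wavelengths = np.linspace(200, 400, 101)
pure = np.stack([np.exp(-((wavelengths - c) / 25.0) ** 2)   # hypothetical pure
                 for c in (240, 280, 330)])                 # component spectra

C_train = rng.uniform(2, 10, size=(15, 3))      # 15 calibration mixtures
C_test = rng.uniform(2, 10, size=(10, 3))       # 10 validation mixtures
A_train = C_train @ pure + rng.normal(0, 0.01, (15, 101))   # Beer-Lambert + noise
A_test = C_test @ pure + rng.normal(0, 0.01, (10, 101))

# One PLS-1 model per analyte (AML, VAL, HCT), as in the classical approach.
for i, name in enumerate(["AML", "VAL", "HCT"]):
    pls = PLSRegression(n_components=3).fit(A_train, C_train[:, i])
    rmsep = np.sqrt(np.mean((pls.predict(A_test).ravel() - C_test[:, i]) ** 2))
    print(name, "RMSEP:", round(float(rmsep), 3))
```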

  4. Flash-point prediction for binary partially miscible mixtures of flammable solvents.

    PubMed

    Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng

    2008-05-30

    Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.

  5. An Approach for Peptide Identification by De Novo Sequencing of Mixture Spectra.

    PubMed

    Liu, Yi; Ma, Bin; Zhang, Kaizhong; Lajoie, Gilles

    2017-01-01

    Mixture spectra occur quite frequently in a typical wet-lab mass spectrometry experiment, which result from the concurrent fragmentation of multiple precursors. The ability to efficiently and confidently identify mixture spectra is essential to alleviate the existent bottleneck of low mass spectra identification rate. However, most of the traditional computational methods are not suitable for interpreting mixture spectra, because they still take the assumption that the acquired spectra come from the fragmentation of a single precursor. In this manuscript, we formulate the mixture spectra de novo sequencing problem mathematically, and propose a dynamic programming algorithm for the problem. Additionally, we use both simulated and real mixture spectra data sets to verify the merits of the proposed algorithm.

  6. Nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates of typical desert vegetation in western China.

    PubMed

    Ji, Cuicui; Jia, Yonghong; Gao, Zhihai; Wei, Huaidong; Li, Xiaosong

    2017-01-01

    Desert vegetation plays significant roles in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates are investigated through comparing the performance of linear and nonlinear spectral mixture models with different endmembers applied to field spectral measurements of two types of typical desert vegetation, namely, Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates, and in particular, shadow endmembers cannot be neglected. (2) For both the Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. In consideration of the computational complexity and accuracy requirements, the Linear Spectral Mixture Model (LSMM) could be adopted for Nitraria shrubs plots, but this will result in significant errors for the Haloxylon plots since the nonlinear spectral mixture effects were more obvious for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Therefore, no matter for Nitraria shrubs or Haloxylon, the non-linear spectral mixing effects between the photosynthetic / non-photosynthetic vegetation and the bare soil do exist, and its strength is dependent on the three-dimensional structure of the vegetation canopy. The choice of linear or nonlinear spectral mixture models is up to the consideration of computational complexity and the accuracy requirement.
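
    The linear spectral mixture model referenced here writes each pixel reflectance as an endmember-fraction weighted sum, which can be inverted with constrained least squares. In the sketch below the endmember spectra are synthetic, and the sum-to-one constraint is enforced only approximately through a heavily weighted penalty row before non-negative least squares, which is one common shortcut rather than the authors' exact solver.

```python
import numpy as np
from scipy.optimize import nnls

# Linear spectral mixture model sketch: r = E @ f + noise, with non-negative
# fractions that sum (approximately) to one. Endmember spectra are synthetic.

bands = 50
rng = np.random.default_rng(3)
E = np.column_stack([np.linspace(0.1, 0.6, bands),       # photosynthetic veg.
                     np.linspace(0.3, 0.35, bands),      # non-photosynthetic veg.
                     np.linspace(0.2, 0.5, bands) ** 2,  # bare soil
                     np.full(bands, 0.02)])              # shadow endmember

true_f = np.array([0.35, 0.15, 0.40, 0.10])
r = E @ true_f + rng.normal(0, 0.005, bands)

# Append a heavily weighted row to softly enforce sum-to-one before NNLS.
w = 100.0
E_aug = np.vstack([E, w * np.ones(4)])
r_aug = np.append(r, w * 1.0)
f_hat, _ = nnls(E_aug, r_aug)
print(np.round(f_hat, 3))
```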

  7. Nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates of typical desert vegetation in western China

    PubMed Central

    Jia, Yonghong; Gao, Zhihai; Wei, Huaidong

    2017-01-01

    Desert vegetation plays significant roles in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates are investigated through comparing the performance of linear and nonlinear spectral mixture models with different endmembers applied to field spectral measurements of two types of typical desert vegetation, namely, Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates, and in particular, shadow endmembers cannot be neglected. (2) For both the Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. In consideration of the computational complexity and accuracy requirements, the Linear Spectral Mixture Model (LSMM) could be adopted for Nitraria shrubs plots, but this will result in significant errors for the Haloxylon plots since the nonlinear spectral mixture effects were more obvious for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Therefore, no matter for Nitraria shrubs or Haloxylon, the non-linear spectral mixing effects between the photosynthetic / non-photosynthetic vegetation and the bare soil do exist, and its strength is dependent on the three-dimensional structure of the vegetation canopy. The choice of linear or nonlinear spectral mixture models is up to the consideration of computational complexity and the accuracy requirement. PMID:29240777

  8. High-performance liquid chromatography/high-resolution multiple stage tandem mass spectrometry using negative-ion-mode hydroxide-doped electrospray ionization for the characterization of lignin degradation products.

    PubMed

    Owen, Benjamin C; Haupert, Laura J; Jarrell, Tiffany M; Marcum, Christopher L; Parsell, Trenton H; Abu-Omar, Mahdi M; Bozell, Joseph J; Black, Stuart K; Kenttämaa, Hilkka I

    2012-07-17

    In the search for a replacement for fossil fuel and the valuable chemicals currently obtained from crude oil, lignocellulosic biomass has become a promising candidate as an alternative biorenewable source for crude oil. Hence, many research efforts focus on the extraction, degradation, and catalytic transformation of lignin, hemicellulose, and cellulose. Unfortunately, these processes result in the production of very complex mixtures. Further, while methods have been developed for the analysis of mixtures of oligosaccharides, this is not true for the complex mixtures generated upon degradation of lignin. For example, high-performance liquid chromatography/multiple stage tandem mass spectrometry (HPLC/MS(n)), a tool proven to be invaluable in the analysis of complex mixtures derived from many other biopolymers, such as proteins and DNA, has not been implemented for lignin degradation products. In this study, we have developed an HPLC separation method for lignin degradation products that is amenable to negative-ion-mode electrospray ionization (ESI doped with NaOH), the best method identified thus far for ionization of lignin-related model compounds without fragmentation. The separated and ionized compounds are then analyzed by MS(3) experiments to obtain detailed structural information while simultaneously performing high-resolution measurements to determine their elemental compositions in the two parts of a commercial linear quadrupole ion trap/Fourier-transform ion cyclotron resonance mass spectrometer. A lignin degradation product mixture was analyzed using this method, and molecular structures were proposed for some components. This methodology significantly improves the ability to analyze complex product mixtures that result from degraded lignin.

  9. Exploring 2.5-Year Trajectories of Functional Decline in Older Adults by Applying a Growth Mixture Model and Frequency of Outings as a Predictor: A 2010-2013 JAGES Longitudinal Study.

    PubMed

    Saito, Junko; Kondo, Naoki; Saito, Masashige; Takagi, Daisuke; Tani, Yukako; Haseda, Maho; Tabuchi, Takahiro; Kondo, Katsunori

    2018-06-23

    We explored the distinct trajectories of functional decline among older adults in Japan, and evaluated whether the frequency of outings, an important indicator of social activity, predicts the identified trajectories. We analyzed data on 2,364 adults aged 65 years or older from the Japan Aichi Gerontological Evaluation Study. Participants were initially independent and later developed functional disability during a 31-month follow-up period. We used the level of long-term care needs certified in the public health insurance system as a proxy of functional ability and linked the fully tracked data of changes in the care levels to the baseline data. A low frequency of outings was defined as leaving one's home less than once per week at baseline. We applied a growth mixture model to identify trajectories in functional decline by sex and then examined the association between the frequency of outings and the identified trajectories using multinomial logistic regression analysis. Three distinct trajectories were identified: "slowly declining" (64.3% of men and 79.7% of women), "persistently disabled" (4.5% and 3.7%, respectively), and "rapidly declining" (31.3% and 16.6%, respectively). Men with fewer outings had 2.14 times greater odds (95% confidence interval, 1.03-4.41) of being persistently disabled. The association between outing frequency and functional decline trajectory was less clear statistically among women. While the majority of older adults showed a slow functional decline, some showed persistent moderate disability. Providing more opportunities to go out or assistance in that regard may be important for preventing persistent disability, and such needs might be greater among men.

  10. A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.

    PubMed

    Carreau, Julie; Bengio, Yoshua

    2009-07-01

    In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y , with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.

  11. Neurotoxicological and statistical analyses of a mixture of five organophosphorus pesticides using a ray design.

    PubMed

    Moser, V C; Casey, M; Hamm, A; Carter, W H; Simmons, J E; Gennings, C

    2005-07-01

    Environmental exposures generally involve chemical mixtures instead of single chemicals. Statistical models such as the fixed-ratio ray design, wherein the mixing ratio (proportions) of the chemicals is fixed across increasing mixture doses, allows for the detection and characterization of interactions among the chemicals. In this study, we tested for interaction(s) in a mixture of five organophosphorus (OP) pesticides (chlorpyrifos, diazinon, dimethoate, acephate, and malathion). The ratio of the five pesticides (full ray) reflected the relative dietary exposure estimates of the general population as projected by the US EPA Dietary Exposure Evaluation Model (DEEM). A second mixture was tested using the same dose levels of all pesticides, but excluding malathion (reduced ray). The experimental approach first required characterization of dose-response curves for the individual OPs to build a dose-additivity model. A series of behavioral measures were evaluated in adult male Long-Evans rats at the time of peak effect following a single oral dose, and then tissues were collected for measurement of cholinesterase (ChE) activity. Neurochemical (blood and brain cholinesterase [ChE] activity) and behavioral (motor activity, gait score, tail-pinch response score) endpoints were evaluated statistically for evidence of additivity. The additivity model constructed from the single chemical data was used to predict the effects of the pesticide mixture along the full ray (10-450 mg/kg) and the reduced ray (1.75-78.8 mg/kg). The experimental mixture data were also modeled and statistically compared to the additivity models. Analysis of the 5-OP mixture (the full ray) revealed significant deviation from additivity for all endpoints except tail-pinch response. Greater-than-additive responses (synergism) were observed at the lower doses of the 5-OP mixture, which contained non-effective dose levels of each of the components. The predicted effective doses (ED20, ED50) were about half that predicted by additivity, and for brain ChE and motor activity, there was a threshold shift in the dose-response curves. For the brain ChE and motor activity, there was no difference between the full (5-OP mixture) and reduced (4-OP mixture) rays, indicating that malathion did not influence the non-additivity. While the reduced ray for blood ChE showed greater deviation from additivity without malathion in the mixture, the non-additivity observed for the gait score was reversed when malathion was removed. Thus, greater-than-additive interactions were detected for both the full and reduced ray mixtures, and the role of malathion in the interactions varied depending on the endpoint. In all cases, the deviations from additivity occurred at the lower end of the dose-response curves.

  12. Self-organising mixture autoregressive model for non-stationary time series modelling.

    PubMed

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  13. Numerical study of underwater dispersion of dilute and dense sediment-water mixtures

    NASA Astrophysics Data System (ADS)

    Chan, Ziying; Dao, Ho-Minh; Tan, Danielle S.

    2018-05-01

    As part of the nodule-harvesting process, sediment tailings are released underwater. Due to the long period of clouding in the water during the settling process, this presents a significant environmental and ecological concern. One possible solution is to release a mixture of sediment tailings and seawater, with the aim of reducing the settling duration as well as the amount of spreading. In this paper, we present some results of numerical simulations using the smoothed particle hydrodynamics (SPH) method to model the release of a fixed volume of pre-mixed sediment-water mixture into a larger body of quiescent water. Both the sediment-water mixture and the “clean” water are modeled as two different fluids, with concentration-dependent bulk properties of the sediment-water mixture adjusted according to the initial solids concentration. This numerical model was validated in a previous study, which indicated significant differences in the dispersion and settling process between dilute and dense mixtures, and that a dense mixture may be preferable. For this study, we investigate a wider range of volumetric concentration with the aim of determining the optimum volumetric concentration, as well as its overall effectiveness compared to the original process (100% sediment).

  14. Space-time variation of respiratory cancers in South Carolina: a flexible multivariate mixture modeling approach to risk estimation.

    PubMed

    Carroll, Rachel; Lawson, Andrew B; Kirby, Russell S; Faes, Christel; Aregay, Mehreteab; Watjou, Kevin

    2017-01-01

    Many types of cancer have an underlying spatiotemporal distribution. Spatiotemporal mixture modeling can offer a flexible approach to risk estimation via the inclusion of latent variables. In this article, we examine the application and benefits of using four different spatiotemporal mixture modeling methods in the modeling of cancer of the lung and bronchus as well as "other" respiratory cancer incidences in the state of South Carolina. Of the methods tested, no single method outperforms the other methods; which method is best depends on the cancer under consideration. The lung and bronchus cancer incidence outcome is best described by the univariate modeling formulation, whereas the "other" respiratory cancer incidence outcome is best described by the multivariate modeling formulation. Spatiotemporal multivariate mixture methods can aid in the modeling of cancers with small and sparse incidences when including information from a related, more common type of cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Exposure to pesticide mixtures and DNA damage among rice field workers.

    PubMed

    Varona-Uribe, Marcela Eugenia; Torres-Rey, Carlos H; Díaz-Criollo, Sonia; Palma-Parra, Ruth Marien; Narváez, Diana María; Carmona, Sandra Patricia; Briceño, Leonardo; Idrovo, Alvaro J

    2016-01-01

    This study describes the use of pesticides mixtures and their potential association with comet assay results in 223 rice field workers in Colombia. Thirty-one pesticides were quantified in blood, serum, and urine (15 organochlorines, 10 organophosphorus, 5 carbamates, and ethylenethiourea), and the comet assay was performed. Twenty-four (77.42%) pesticides were present in the workers. The use of the maximum-likelihood factor analysis identified 8 different mixtures. Afterwards, robust regressions were used to explore associations between the factors identified and the comet assay. Two groups of mixtures--α-benzene hexachloride (α-BHC), hexachlorobenzene (HCB), and β-BHC (β: 1.21, 95% confidence interval [CI]: 0.33-2.10) and pirimiphos-methyl, malathion, bromophos-methyl, and bromophos-ethyl (β: 11.97, 95% CI: 2.34-21.60)--were associated with a higher percentage of DNA damage and comet tail length, respectively. The findings suggest that exposure to pesticides varies greatly among rice field workers.

  16. Impact of chemical proportions on the acute neurotoxicity of a mixture of seven carbamates in preweanling and adult rats.

    PubMed

    Moser, Virginia C; Padilla, Stephanie; Simmons, Jane Ellen; Haber, Lynne T; Hertzberg, Richard C

    2012-09-01

    Statistical design and environmental relevance are important aspects of studies of chemical mixtures, such as pesticides. We used a dose-additivity model to test experimentally the default assumptions of dose additivity for two mixtures of seven N-methylcarbamates (carbaryl, carbofuran, formetanate, methomyl, methiocarb, oxamyl, and propoxur). The best-fitting models were selected for the single-chemical dose-response data and used to develop a combined prediction model, which was then compared with the experimental mixture data. We evaluated behavioral (motor activity) and cholinesterase (ChE)-inhibitory (brain, red blood cells) outcomes at the time of peak acute effects following oral gavage in adult and preweanling (17 days old) Long-Evans male rats. The mixtures varied only in their mixing ratios. In the relative potency mixture, proportions of each carbamate were set at equitoxic component doses. A California environmental mixture was based on the 2005 sales of each carbamate in California. In adult rats, the relative potency mixture showed dose additivity for red blood cell ChE and motor activity, and brain ChE inhibition showed a modest greater-than additive (synergistic) response, but only at a middle dose. In rat pups, the relative potency mixture was either dose-additive (brain ChE inhibition, motor activity) or slightly less-than additive (red blood cell ChE inhibition). On the other hand, at both ages, the environmental mixture showed greater-than additive responses on all three endpoints, with significant deviations from predicted at most to all doses tested. Thus, we observed different interactive properties for different mixing ratios of these chemicals. These approaches for studying pesticide mixtures can improve evaluations of potential toxicity under varying experimental conditions that may mimic human exposures.

  17. Continuous flow immobilized enzyme reactor-tandem mass spectrometry for screening of AChE inhibitors in complex mixtures.

    PubMed

    Forsberg, Erica M; Green, James R A; Brennan, John D

    2011-07-01

    A method is described for identifying bioactive compounds in complex mixtures based on the use of capillary-scale monolithic enzyme-reactor columns for rapid screening of enzyme activity. A two-channel nanoLC system was used to continuously infuse substrate coupled with automated injections of substrate/small molecule mixtures, optionally containing the chromogenic Ellman reagent, through sol-gel derived acetylcholinesterase (AChE) doped monolithic columns. This is the first report of AChE encapsulated in monolithic silica for use as an immobilized enzyme reactor (IMER), and the first use of such IMERs for mixture screening. AChE IMER columns were optimized to allow rapid functional screening of compound mixtures based on changes in the product absorbance or the ratio of mass spectrometric peaks for product and substrate ions in the eluent. The assay had robust performance and produced a Z' factor of 0.77 in the presence of 2% (v/v) DMSO. A series of 52 mixtures consisting of 1040 compounds from the Canadian Compound Collection of bioactives was screened and two known inhibitors, physostigmine and 9-aminoacridine, were identified from active mixtures by manual deconvolution. The activity of the compounds was confirmed using the enzyme reactor format, which allowed determination of both IC(50) and K(I) values. Screening results were found to correlate well with a recently published fluorescence-based microarray screening assay for AChE inhibitors.
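
    The Z' factor quoted above (0.77) is a standard assay-quality statistic computed from the means and standard deviations of positive and negative controls. The snippet below shows the formula with invented control readings, purely to make the arithmetic concrete.

```python
import numpy as np

# Z' factor for assay quality:
# Z' = 1 - 3 * (sd_pos + sd_neg) / |mean_pos - mean_neg|.
# Control readings below are invented, only to illustrate the calculation.

pos = np.array([0.95, 0.92, 0.97, 0.94, 0.96])   # uninhibited enzyme activity
neg = np.array([0.08, 0.10, 0.07, 0.09, 0.11])   # fully inhibited controls

z_prime = 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())
print(round(z_prime, 2))   # values above ~0.5 indicate a robust screening assay
```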

  18. MUSIC-Expected maximization gaussian mixture methodology for clustering and detection of task-related neuronal firing rates.

    PubMed

    Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A

    2017-01-15

    Researchers often rely on simple methods to identify involvement of neurons in a particular motor task. The historical approach has been to inspect large groups of neurons and subjectively separate neurons into groups based on the expertise of the investigator. In cases where neuron populations are small it is reasonable to inspect these neuronal recordings and their firing rates carefully to avoid data omissions. In this paper, a new methodology is presented for automatic objective classification of neurons recorded in association with behavioral tasks into groups. By identifying characteristics of neurons in a particular group, the investigator can then identify functional classes of neurons based on their relationship to the task. The methodology is based on integration of a multiple signal classification (MUSIC) algorithm to extract relevant features from the firing rate and an expectation-maximization Gaussian mixture algorithm (EM-GMM) to cluster the extracted features. The methodology is capable of identifying and clustering similar firing rate profiles automatically based on specific signal features. An empirical wavelet transform (EWT) was used to validate the features found in the MUSIC pseudospectrum and the resulting signal features captured by the methodology. Additionally, this methodology was used to inspect behavioral elements of neurons to physiologically validate the model. This methodology was tested using a set of data collected from awake behaving non-human primates. Copyright © 2016 Elsevier B.V. All rights reserved.
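
    The clustering stage of this pipeline (an EM-fitted Gaussian mixture over features extracted from each neuron's firing-rate profile, with the number of clusters chosen by an information criterion) is straightforward to sketch. Here random feature vectors stand in for the MUSIC pseudospectrum descriptors, and BIC is used as the selection criterion, which may differ from the authors' exact choice.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# EM-GMM clustering sketch for per-neuron feature vectors. Random features
# stand in for MUSIC pseudospectrum descriptors of firing-rate profiles.

rng = np.random.default_rng(7)
features = np.vstack([rng.normal(loc=m, scale=0.3, size=(40, 4))
                      for m in (0.0, 1.5, 3.0)])      # three latent response types

best_k, best_bic, best_model = None, np.inf, None
for k in range(1, 7):                                 # pick k by BIC
    gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(features)
    if gmm.bic(features) < best_bic:
        best_k, best_bic, best_model = k, gmm.bic(features), gmm
print("clusters:", best_k, "sizes:", np.bincount(best_model.predict(features)))
```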

  19. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    ERIC Educational Resources Information Center

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  20. Neural processing, perception, and behavioral responses to natural chemical stimuli by fish and crustaceans.

    PubMed

    Derby, Charles D; Sorensen, Peter W

    2008-07-01

    This manuscript reviews the chemical ecology of two of the major aquatic animal models, fish and crustaceans, in the study of chemoreception. By necessity, it is restricted in scope, with most emphasis placed on teleost fish and decapod crustaceans. First, we describe the nature of the chemical world perceived by fish and crustaceans, giving examples of the abilities of these animals to analyze complex natural odors. Fish and crustaceans share the same environments and have evolved some similar chemosensory features: the ability to detect and discern mixtures of small metabolites in highly variable backgrounds and to use this information to identify food, mates, predators, and habitat. Next, we give examples of the molecular nature of some of these natural products, including a description of methodologies used to identify them. Both fish and crustaceans use their olfactory and gustatory systems to detect amino acids, amines, and nucleotides, among many other compounds, while fish olfactory systems also detect mixtures of sex steroids and prostaglandins with high specificity and sensitivity. Third, we discuss the importance of plasticity in chemical sensing by fish and crustaceans. Finally, we conclude with a description of how natural chemical stimuli are processed by chemosensory systems. In both fishes and crustaceans, the olfactory system is especially adept at mixture discrimination, while gustation is well suited to facilitate precise localization and ingestion of food. The behaviors of both fish and crustaceans can be defined by the chemical worlds in which they live and the abilities of their nervous systems to detect and identify specific features in their domains. An understanding of these worlds and the sensory systems that provide the animals with information about them provides insight into the chemical ecology of these species.

  1. Dielectric relaxation and hydrogen bonding interaction in xylitol-water mixtures using time domain reflectometry

    NASA Astrophysics Data System (ADS)

    Rander, D. N.; Joshi, Y. S.; Kanse, K. S.; Kumbharkhane, A. C.

    2016-01-01

    The measurements of complex dielectric permittivity of xylitol-water mixtures have been carried out in the frequency range of 10 MHz-30 GHz using a time domain reflectometry technique. Measurements have been done at six temperatures from 0 to 25 °C and at different weight fractions of xylitol (0 < W_X ≤ 0.7) in water. There are different models to explain the dielectric relaxation behaviour of binary mixtures, such as the Debye, Cole-Cole or Cole-Davidson models. We have observed that the dielectric relaxation behaviour of binary mixtures of xylitol-water can be well described by the Cole-Davidson model, which has an asymmetric distribution of relaxation times. The dielectric parameters such as the static dielectric constant and relaxation time for the mixtures have been evaluated. The molecular interaction between xylitol and water molecules is discussed using the Kirkwood correlation factor (g_eff) and a thermodynamic parameter.
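
    The Cole-Davidson form mentioned here is ε*(ω) = ε_∞ + (ε_s − ε_∞)/(1 + iωτ)^β, reducing to the Debye model when β = 1. A short fitting sketch against synthetic reflectometry-style data follows; the parameter values are illustrative, not the xylitol-water results.

```python
import numpy as np
from scipy.optimize import curve_fit

# Cole-Davidson fit sketch:
# eps*(w) = eps_inf + (eps_s - eps_inf) / (1 + 1j*w*tau)**beta.
# Parameter values and the synthetic "measurement" are illustrative only.

def cole_davidson(omega, eps_s, eps_inf, tau, beta):
    return eps_inf + (eps_s - eps_inf) / (1.0 + 1j * omega * tau) ** beta

f = np.logspace(7, 10.5, 200)                 # 10 MHz - ~30 GHz
omega = 2 * np.pi * f
rng = np.random.default_rng(0)
true = cole_davidson(omega, 70.0, 4.0, 20e-12, 0.85)
meas = true + rng.normal(0, 0.2, f.size) + 1j * rng.normal(0, 0.2, f.size)

def stacked(omega, eps_s, eps_inf, tau, beta):
    e = cole_davidson(omega, eps_s, eps_inf, tau, beta)
    return np.concatenate([e.real, e.imag])   # fit real and imaginary parts jointly

popt, _ = curve_fit(stacked, omega, np.concatenate([meas.real, meas.imag]),
                    p0=[60.0, 3.0, 10e-12, 0.9])
print(dict(zip(["eps_s", "eps_inf", "tau", "beta"], popt)))
```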

  2. Estimating the incidence of rotavirus infection in children from India and Malawi from serial anti-rotavirus IgA titres.

    PubMed

    Bennett, Aisleen; Nagelkerke, Nico; Heinsbroek, Ellen; Premkumar, Prasanna S; Wnęk, Małgorzata; Kang, Gagandeep; French, Neil; Cunliffe, Nigel A; Bar-Zeev, Naor; Lopman, Ben; Iturriza-Gomara, Miren

    2017-01-01

    Accurate estimates of rotavirus incidence in infants are crucial given disparities in rotavirus vaccine effectiveness from low-income settings. Sero-surveys are a pragmatic means of estimating incidence however serological data is prone to misclassification. This study used mixture models to estimate incidence of rotavirus infection from anti-rotavirus immunoglobulin A (IgA) titres in infants from Vellore, India, and Karonga, Malawi. IgA titres were measured using serum samples collected at 6 month intervals for 36 months from 373 infants from Vellore and 12 months from 66 infants from Karonga. Mixture models (two component Gaussian mixture distributions) were fit to the difference in titres between time points to estimate risk of sero-positivity and derive incidence estimates. A peak incidence of 1.05(95% confidence interval [CI]: 0.64, 1.64) infections per child-year was observed in the first 6 months of life in Vellore. This declined incrementally with each subsequent time interval. Contrastingly in Karonga incidence was greatest in the second 6 months of life (1.41 infections per child year [95% CI: 0.79, 2.29]). This study demonstrates that infants from Vellore experience peak rotavirus incidence earlier than those from Karonga. Identifying such differences in transmission patterns is important in informing vaccine strategy, particularly where vaccine effectiveness is modest.
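
    The core sero-incidence step (fit a two-component Gaussian mixture to the between-visit change in IgA titre and read the higher-mean component as evidence of infection) can be sketched with scikit-learn. The titre changes below are simulated, and the conversion from a per-interval infection probability to infections per child-year is shown only schematically, not as the study's exact estimator.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch of sero-incidence estimation from titre changes using a two-component
# Gaussian mixture. Simulated log-titre differences; not the study data.

rng = np.random.default_rng(5)
delta = np.concatenate([rng.normal(0.0, 0.3, 300),     # no seroconversion
                        rng.normal(1.8, 0.5, 80)])     # boosted titres (infection)

gmm = GaussianMixture(n_components=2, random_state=0).fit(delta.reshape(-1, 1))
responder = int(np.argmax(gmm.means_.ravel()))          # higher-mean component
p_infected = gmm.predict_proba(delta.reshape(-1, 1))[:, responder]

interval_years = 0.5                                    # 6-month sampling interval
incidence = p_infected.mean() / interval_years          # schematic per-child-year rate
print(round(incidence, 2))
```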

  3. Estimating the incidence of rotavirus infection in children from India and Malawi from serial anti-rotavirus IgA titres

    PubMed Central

    Nagelkerke, Nico; Heinsbroek, Ellen; Premkumar, Prasanna S.; Wnęk, Małgorzata; Kang, Gagandeep; French, Neil; Cunliffe, Nigel A.; Bar-Zeev, Naor

    2017-01-01

    Accurate estimates of rotavirus incidence in infants are crucial given disparities in rotavirus vaccine effectiveness from low-income settings. Sero-surveys are a pragmatic means of estimating incidence; however, serological data are prone to misclassification. This study used mixture models to estimate the incidence of rotavirus infection from anti-rotavirus immunoglobulin A (IgA) titres in infants from Vellore, India, and Karonga, Malawi. IgA titres were measured using serum samples collected at 6-month intervals for 36 months from 373 infants from Vellore and for 12 months from 66 infants from Karonga. Mixture models (two-component Gaussian mixture distributions) were fit to the difference in titres between time points to estimate the risk of sero-positivity and derive incidence estimates. A peak incidence of 1.05 (95% confidence interval [CI]: 0.64, 1.64) infections per child-year was observed in the first 6 months of life in Vellore. This declined incrementally with each subsequent time interval. In contrast, in Karonga incidence was greatest in the second 6 months of life (1.41 infections per child-year [95% CI: 0.79, 2.29]). This study demonstrates that infants from Vellore experience peak rotavirus incidence earlier than those from Karonga. Identifying such differences in transmission patterns is important in informing vaccine strategy, particularly where vaccine effectiveness is modest. PMID:29287122

  4. Modeling Carbon-Black/Polymer Composite Sensors

    PubMed Central

    Lei, Hua; Pitt, William G.; McGrath, Lucas K.; Ho, Clifford K.

    2012-01-01

    Conductive polymer composite sensors have shown great potential in identifying gaseous analytes. To more thoroughly understand the physical and chemical mechanisms of this type of sensor, a mathematical model was developed by combining two sub-models, a conductivity model and a thermodynamic model, which together relate the vapor concentration of the analyte(s) to the change in the sensor signals. In this work, 64 chemiresistors representing eight different carbon concentrations (8–60 vol% carbon) were constructed by depositing thin films of a carbon-black/polyisobutylene composite onto concentric spiral platinum electrodes on a silicon chip. The responses of the sensors were measured in dry air and at various vapor pressures of toluene and trichloroethylene. Three parameters in the conductivity model were determined by fitting the experimental data. It was shown that by applying this model, the sensor responses can be adequately predicted for given vapor pressures; furthermore, the analyte vapor concentrations can be estimated based on the sensor responses. This model will guide the improvement of the design and fabrication of conductive polymer composite sensors for detecting and identifying mixtures of organic vapors. PMID:22518071

  5. Catalytic combustion of hydrogen-air mixtures in stagnation flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ikeda, H.; Libby, P.A.; Williams, F.A.

    1993-04-01

    The interaction between heterogeneous and homogeneous reactions arising when a mixture of hydrogen and air impinges on a platinum plate at elevated temperature is studied. A reasonably complete description of the kinetic mechanism for homogeneous reactions is employed along with a simplified model for heterogeneous reactions. Four regimes are identified depending on the temperature of the plate, on the rate of strain imposed on the flow adjacent to the plate and on the composition and temperature of the reactant stream: (1) surface reaction alone; (2) surface reaction inhibiting homogeneous reaction; (3) homogeneous reaction inhibiting surface reaction; and (4) homogeneous reaction alone. These regimes are related to those found earlier for other chemical systems and form the basis of future experimental investigation of the chemical system considered in the present study.

  6. Broad Feshbach resonance in the 6Li-40K mixture.

    PubMed

    Tiecke, T G; Goosen, M R; Ludewig, A; Gensemer, S D; Kraft, S; Kokkelmans, S J J M F; Walraven, J T M

    2010-02-05

    We study the widths of interspecies Feshbach resonances in a mixture of the fermionic quantum gases 6Li and 40K. We develop a model to calculate the width and position of all available Feshbach resonances for a system. Using the model, we select the optimal resonance to study the 6Li/40K mixture. Experimentally, we obtain the asymmetric Fano line shape of the interspecies elastic cross section by measuring the distillation rate of 6Li atoms from a potassium-rich 6Li/40K mixture as a function of magnetic field. This provides us with the first experimental determination of the width of a resonance in this mixture, ΔB = 1.5(5) G. Our results offer good perspectives for the observation of universal crossover physics using this mass-imbalanced fermionic mixture.

  7. Linked gas chromatograph-thermal energy analyzer/ion trap mass spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alcaraz, A.; Martin, W.H.; Andresen, B.D.

    1991-05-01

    The capability of comparing a nitrogen chromatogram generated from a gas chromatograph (GC, Varian model 3400) linked to a thermal energy analyzer (TEA, Thermedics Inc. Model 610) with a total ion chromatogram (from a Finnigan-MAT Ion Trap Mass Spectrometer, ITMS) has provided a new means to screen for and identify trace levels of nitrogen-containing compounds in complex mixtures. Prior to the work described here, it had not been possible to acquire TEA and MS data simultaneously. What was needed was a viable GC-TEA/ITMS interface to combine the capabilities of both instruments.

  8. Uncertainty quantification and experimental design based on unsupervised machine learning identification of contaminant sources and groundwater types using hydrogeochemical data

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.

    2017-12-01

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical species. Numerous geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. As a result, these types of model analyses are typically extremely challenging. Here, we demonstrate a new contaminant source identification approach that performs decomposition of the observation mixtures based on Nonnegative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. We also demonstrate how NMFk can be extended to perform uncertainty quantification and experimental design related to real-world site characterization. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios). The NMFk algorithm has been extensively tested on synthetic datasets; NMFk analyses have been actively performed on real-world data collected at the Los Alamos National Laboratory (LANL) groundwater sites related to Chromium and RDX contamination.
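    The decomposition step at the heart of this approach (leaving aside NMFk's estimation of the number of sources and its clustering of repeated factorizations) can be illustrated with an ordinary non-negative matrix factorization. The sketch below is an assumption-laden stand-in, not the NMFk package: it fixes the number of sources at three and uses scikit-learn's NMF on synthetic well-water chemistry.

```python
# Hedged sketch of blind source separation of geochemical mixtures with NMF:
# observations (wells x species) are factored into non-negative mixing weights
# and non-negative end-member (source) signatures. Data are synthetic.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
true_sources = rng.uniform(0.1, 5.0, size=(3, 8))        # 3 groundwater types, 8 species
mixing = rng.dirichlet(np.ones(3), size=40)               # 40 wells, unknown mixing ratios
noise = np.abs(rng.normal(0.0, 0.02, size=(40, 8)))       # keep the matrix non-negative
observations = mixing @ true_sources + noise

model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(observations)    # estimated mixing ratios per well
H = model.components_                    # estimated source (end-member) signatures
print("reconstruction error:", round(model.reconstruction_err_, 4))
```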

  9. Unsupervised Gaussian Mixture-Model With Expectation Maximization for Detecting Glaucomatous Progression in Standard Automated Perimetry Visual Fields.

    PubMed

    Yousefi, Siamak; Balasubramanian, Madhusudhanan; Goldbaum, Michael H; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher

    2016-05-01

    To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-models (VIM) for detecting glaucomatous progression along visual field (VF) defect patterns (GEM-progression of patterns (POP) and VIM-POP). To compare GEM-POP and VIM-POP with other methods. GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD), and visual field index (VFI). Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. GEM-POP was significantly more sensitive to PGON than PoPLR and linear regression of MD and VFI in our sample, while providing localized progression information. Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning.

  10. Nanomechanical characterization of heterogeneous and hierarchical biomaterials and tissues using nanoindentation: the role of finite mixture models.

    PubMed

    Zadpoor, Amir A

    2015-03-01

    Mechanical characterization of biological tissues and biomaterials at the nano-scale is often performed using nanoindentation experiments. The different constituents of the characterized materials will then appear in the histogram that shows the probability of measuring a certain range of mechanical properties. An objective technique is needed to separate the probability distributions that are mixed together in such a histogram. In this paper, finite mixture models (FMMs) are proposed as a tool capable of performing such types of analysis. Finite Gaussian mixture models assume that the measured probability distribution is a weighted combination of a finite number of Gaussian distributions with separate mean and standard deviation values. Dedicated optimization algorithms are available for fitting such a weighted mixture model to experimental data. Moreover, certain objective criteria are available to determine the optimum number of Gaussian distributions. In this paper, FMMs are used for interpreting the probability distribution functions representing the distributions of the elastic moduli of osteoarthritic human cartilage and co-polymeric microspheres. For the cartilage experiments, FMMs indicate that at least three mixture components are needed for describing the measured histogram. While the mechanical properties of the softer mixture components, often assumed to be associated with glycosaminoglycans, were found to be more or less constant regardless of whether two or three mixture components were used, those of the second mixture component (i.e., the collagen network) changed considerably depending on the number of mixture components. Regarding the co-polymeric microspheres, the optimum number of mixture components estimated by the FMM theory, i.e., 3, nicely matches the number of co-polymeric components used in the structure of the polymer. The computer programs used for the presented analyses are made freely available online for other researchers to use. Copyright © 2014 Elsevier B.V. All rights reserved.
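    The model-selection idea described above (fit candidate mixtures and let an objective criterion pick the number of components) is easy to sketch. The example below uses scikit-learn's Gaussian mixtures and BIC on simulated indentation moduli; the data, the candidate range of one to five components, and the choice of BIC as the criterion are illustrative assumptions rather than the paper's exact procedure.

```python
# Hedged sketch: choose the number of Gaussian components in a finite mixture of
# nanoindentation moduli by minimizing the Bayesian information criterion (BIC).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
moduli = np.concatenate([rng.normal(0.5, 0.1, 200),   # softer constituent
                         rng.normal(2.0, 0.4, 150),   # intermediate constituent
                         rng.normal(6.0, 1.0, 50)])   # stiffest constituent
X = moduli.reshape(-1, 1)

fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)}
bic = {k: m.bic(X) for k, m in fits.items()}
best_k = min(bic, key=bic.get)
print("BIC by number of components:", {k: round(v, 1) for k, v in bic.items()})
print("selected:", best_k, "component means:", fits[best_k].means_.ravel().round(2))
```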

  11. Assessment of the Risks of Mixtures of Major Use Veterinary Antibiotics in European Surface Waters.

    PubMed

    Guo, Jiahua; Selby, Katherine; Boxall, Alistair B A

    2016-08-02

    Effects of single veterinary antibiotics on a range of aquatic organisms have been explored in many studies. In reality, surface waters will be exposed to mixtures of these substances. In this study, we present an approach for establishing risks of antibiotic mixtures to surface waters and illustrate this by assessing risks of mixtures of three major-use antibiotics (trimethoprim, tylosin, and lincomycin) to algal and cyanobacterial species in European surface waters. Ecotoxicity tests were initially performed to assess the combined effects of the antibiotics on the cyanobacterium Anabaena flos-aquae. The results were used to evaluate two mixture prediction models: concentration addition (CA) and independent action (IA). The CA model performed best at predicting the toxicity of the mixture, with the experimental 96-h EC50 for the antibiotic mixture being 0.248 μmol/L compared to the CA-predicted EC50 of 0.21 μmol/L. The CA model was therefore used alongside predictions of exposure for different European scenarios and estimations of hazards obtained from species sensitivity distributions to estimate risks of mixtures of the three antibiotics. Risk quotients for the different scenarios ranged from 0.066 to 385, indicating that the combination of three substances could be causing adverse impacts on algal communities in European surface waters. This could have important implications for primary production and nutrient cycling. Tylosin contributed most to the risk, followed by lincomycin and trimethoprim. While we have explored only three antibiotics, the combined experimental and modeling approach could readily be applied to the wider range of antibiotics that are in use.
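    For reference, the concentration-addition calculation used here predicts a mixture EC50 from the single-substance EC50s and the proportions in which the components are mixed. The helper below is a generic sketch of that formula; the example fractions and EC50 values are placeholders, not the study's measured values.

```python
# Hedged sketch of the concentration addition (CA) prediction for a mixture EC50:
# EC50_mix = 1 / sum_i(p_i / EC50_i), with p_i the fraction of component i.
def ca_ec50(fractions, ec50s):
    """Mixture EC50 under concentration addition (same units as the inputs)."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("fractions must sum to 1")
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

# Three hypothetical components in equal molar proportions (umol/L)
print(round(ca_ec50([1 / 3, 1 / 3, 1 / 3], [0.9, 0.3, 0.15]), 3))
```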

  12. Moving target detection method based on improved Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

    The Gaussian mixture model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model. According to the gray-level convergence of each pixel, the number of Gaussian distributions used to learn and update the background model is chosen adaptively. A morphological reconstruction method is adopted to eliminate shadows. Experiments showed that the proposed method not only has good robustness and detection performance, but also good adaptability. Even in special cases, such as large changes in gray level, the proposed method performs well.
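    For orientation, the general pipeline (a per-pixel Gaussian mixture background model followed by a morphological clean-up of the foreground mask) can be sketched with OpenCV's MOG2 subtractor, which already adapts the number of Gaussians per pixel. This is a generic illustration under assumed settings, not the paper's algorithm: the input file name and parameters are placeholders, and a simple morphological opening stands in for the paper's morphological reconstruction.

```python
# Hedged sketch: Gaussian-mixture background subtraction (OpenCV MOG2) plus a
# morphological opening to suppress noise and shadow fragments.
import cv2

cap = cv2.VideoCapture("traffic.mp4")                       # placeholder input
mog2 = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = mog2.apply(frame)                                     # 255 = foreground, 127 = shadow
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop the shadow label
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)        # clean small artifacts
    # contours / connected components on `mask` would give the moving targets
cap.release()
```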

  13. Mesoscale Modeling of LX-17 Under Isentropic Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springer, H K; Willey, T M; Friedman, G

    Mesoscale simulations of LX-17 incorporating different equilibrium mixture models were used to investigate the unreacted equation-of-state (UEOS) of TATB. Candidate TATB UEOS were calculated using the equilibrium mixture models and benchmarked with mesoscale simulations of isentropic compression experiments (ICE). X-ray computed tomography (XRCT) data provided the basis for initializing the simulations with realistic microstructural details. Three equilibrium mixture models were used in this study. The single constituent with conservation equations (SCCE) model was based on a mass-fraction weighted specific volume and the conservation of mass, momentum, and energy. The single constituent equation-of-state (SCEOS) model was based on a mass-fraction weighted specific volume and the equation-of-state of the constituents. The kinetic energy averaging (KEA) model was based on a mass-fraction weighted particle velocity mixture rule and the conservation equations. The SCEOS model yielded the stiffest TATB EOS (0.121μ + 0.4958μ² + 2.0473μ³) and, when incorporated in mesoscale simulations of the ICE, demonstrated the best agreement with VISAR velocity data for both specimen thicknesses. The SCCE model yielded a relatively more compliant EOS (0.1999μ − 0.6967μ² + 4.9546μ³) and the KEA model yielded the most compliant EOS (0.1999μ − 0.6967μ² + 4.9546μ³) of all the equilibrium mixture models. Mesoscale simulations with the lower density TATB adiabatic EOS data demonstrated the least agreement with VISAR velocity data.

  14. Zebrafish seizure model identifies p,p′-DDE as the dominant contaminant of fetal California sea lions that accounts for synergistic activity with domoic acid.

    PubMed

    Tiedeken, Jessica A; Ramsdell, John S

    2010-04-01

    Fetal poisoning of California sea lions (CSLs; Zalophus californianus) has been associated with exposure to the algal toxin domoic acid. These same sea lions accumulate a mixture of persistent environmental contaminants including pesticides and industrial products such as polychlorinated biphenyls (PCBs) and polybrominated diphenyl ethers (PBDEs). Developmental exposure to the pesticide dichlorodiphenyltrichloroethane (DDT) and its stable metabolite 1,1-bis-(4-chlorophenyl)-2,2-dichloroethene (p,p′-DDE) has been shown to enhance domoic acid-induced seizures in zebrafish; however, the contribution of other co-occurring contaminants is unknown. We formulated a mixture of contaminants to include PCBs, PBDEs, hexachlorocyclohexane (HCH), and chlordane at levels matching those reported for fetal CSL blubber to determine the impact of co-occurring persistent contaminants with p,p′-DDE on chemically induced seizures in zebrafish as a model for the CSLs. Embryos were exposed (6-30 hr postfertilization) to p,p′-DDE in the presence or absence of a defined contaminant mixture prior to neurodevelopment via either bath exposure or embryo yolk sac microinjection. After brain maturation (7 days postfertilization), fish were exposed to a chemical convulsant, either pentylenetetrazole or domoic acid; resulting seizure behavior was then monitored and analyzed for changes, using cameras and behavioral tracking software. Induced seizure behavior did not differ significantly between subjects with embryonic exposure to a contaminant mixture and those exposed to p,p′-DDE only. These studies demonstrate that p,p′-DDE (in the absence of the PCBs, HCH, chlordane, and PBDEs that co-occur in fetal sea lions) accounts for the synergistic activity that leads to greater sensitivity to domoic acid seizures.

  15. Selective Encaging of N2O in N2O-N2 Binary Gas Hydrates via Hydrate-Based Gas Separation.

    PubMed

    Yang, Youjeong; Shin, Donghoon; Choi, Seunghyun; Woo, Yesol; Lee, Jong-Won; Kim, Dongseon; Shin, Hee-Young; Cha, Minjun; Yoon, Ji-Ho

    2017-03-21

    The crystal structure and guest inclusion behaviors of nitrous oxide-nitrogen (N2O-N2) binary gas hydrates formed from N2O/N2 gas mixtures are determined through spectroscopic analysis. Powder X-ray diffraction results indicate that the crystal structure of all the N2O-N2 binary gas hydrates is identified as the structure I (sI) hydrate. Raman spectra for the N2O-N2 binary gas hydrate formed from N2O/N2 (80/20, 60/40, 40/60 mol %) gas mixtures reveal that N2O molecules occupy both large and small cages of the sI hydrate. In contrast, there is a single Raman band of N2O molecules for the N2O-N2 binary gas hydrate formed from the N2O/N2 (20/80 mol %) gas mixture, indicating that N2O molecules are trapped in only large cages of the sI hydrate. From temperature-dependent Raman spectra and the Predictive Soave-Redlich-Kwong (PSRK) model calculation, we confirm the self-preservation of N2O-N2 binary gas hydrates in the temperature range of 210-270 K. Both the experimental measurements and the PSRK model calculations demonstrate the preferential occupation of N2O molecules rather than N2 molecules in the hydrate cages, leading to a possible process for separating N2O from gas mixtures via hydrate formation. The phase equilibrium conditions, pseudo-pressure-composition (P-x) diagram, and gas storage capacity of N2O-N2 binary gas hydrates are discussed in detail.

  16. Latent Transition Analysis with a Mixture Item Response Theory Measurement Model

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian

    2010-01-01

    A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…

  17. Activities of mixtures of soil-applied herbicides with different molecular targets.

    PubMed

    Kaushik, Shalini; Streibig, Jens Carl; Cedergreen, Nina

    2006-11-01

    The joint action of soil-applied herbicide mixtures with similar or different modes of action has been assessed by using the additive dose model (ADM). The herbicides chlorsulfuron, metsulfuron-methyl, pendimethalin and pretilachlor, applied either singly or in binary mixtures, were used on rice (Oryza sativa L.). The growth (shoot) response curves were described by a logistic dose-response model. The ED50 values and their corresponding standard errors obtained from the response curves were used to test statistically if the shape of the isoboles differed from the reference model (ADM). Results showed that mixtures of herbicides with similar molecular targets, i.e. chlorsulfuron and metsulfuron (acetolactate synthase (ALS) inhibitors), and with different molecular targets, i.e. pendimethalin (microtubule assembly inhibitor) and pretilachlor (very long chain fatty acids (VLCFAs) inhibitor), followed the ADM. Mixing herbicides with different molecular targets gave different results depending on whether pretilachlor or pendimethalin was involved. In general, mixtures of pretilachlor and sulfonylureas showed synergistic interactions, whereas mixtures of pendimethalin and sulfonylureas exhibited either antagonistic or additive activities. Hence, there is a large potential for both increasing the specificity of herbicides by using mixtures and lowering the total dose for weed control, while at the same time delaying the development of herbicide resistance by using mixtures with different molecular targets. Copyright (c) 2006 Society of Chemical Industry.

  18. Low Mach number fluctuating hydrodynamics for electrolytes

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; Bell, John B.; Donev, Aleksandar; Garcia, Alejandro L.

    2016-11-01

    We formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids [A. Donev et al., Phys. Fluids 27, 037103 (2015), 10.1063/1.4913571], we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. We demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second order in the deterministic setting and for length scales much greater than the Debye length gives results consistent with an electroneutral approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.

  19. An EM-based semi-parametric mixture model approach to the regression analysis of competing-risks data.

    PubMed

    Ng, S K; McLachlan, G J

    2003-04-15

    We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
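    To make the two-part structure above concrete, the display below writes out one standard way of specifying such a competing-risks mixture: a multinomial logistic model for the probability of each failure type and a proportional hazards model for the hazard conditional on that type, with the baseline hazards left unspecified in the semi-parametric version. The notation is ours and is a schematic restatement, not a transcription of the paper's equations.

```latex
% Schematic of a mixture regression model for competing risks (notation ours):
% a logistic model for the failure-type probabilities and proportional hazards
% for each conditional failure-time distribution, with \lambda_{0j} unspecified.
\begin{align}
  \pi_j(x) = \Pr(\text{failure type } j \mid x)
    &= \frac{\exp(\beta_j^{\top} x)}{\sum_{l=1}^{g} \exp(\beta_l^{\top} x)}, \\
  \lambda_j(t \mid x) &= \lambda_{0j}(t)\, \exp(\gamma_j^{\top} x), \\
  f(t \mid x) &= \sum_{j=1}^{g} \pi_j(x)\, f_j(t \mid x).
\end{align}
```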

  20. Verifying Sediment Fingerprinting Results with Known Mixtures

    NASA Astrophysics Data System (ADS)

    Gellis, A.; Gorman-Sanisaca, L.; Cashman, M. J.

    2017-12-01

    Sediment fingerprinting is a widely used approach to determine the specific sources of fluvial sediment within a watershed. It relies on the principle that potential sediment sources can be identified using a set of chemical tracers (or fingerprints), and comparison of these source fingerprints with fluvial (target) sediment allows for source apportionment of the fluvial sediment. There are numerous source classifications, fingerprints, and statistical approaches used in the literature to apportion sources of sediment. However, few of these studies have sought to test the method by creating controls on the ratio of sources in the target sediment. Without a controlled environment for inputs and outputs, such verification of results is ambiguous. Here, we generated artificial mixtures of source sediment from an agricultural/forested watershed in Virginia, USA (Smith Creek, 246 km²) to verify the apportionment results. Target samples were established from known mixtures of the four major sediment sources in the watershed (forest, pasture, cropland, and streambanks). The target samples were sieved to less than 63 microns and analyzed for elemental and isotopic chemistry. The target samples and source samples were run through the Sediment Source Assessment Tool (Sed_SAT) to verify if the statistical operations provided the correct apportionment. Sed_SAT uses a multivariate parametric approach to identify the minimum suite of fingerprints that discriminate the source areas and applies these fingerprints through an unmixing model to apportion sediment. The results of this sediment fingerprinting verification experiment will be presented in this session.
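    The apportionment step being verified here can be sketched as a small constrained unmixing problem: find non-negative source proportions whose mean tracer signatures best reproduce the target sample. The example below is a generic stand-in using non-negative least squares with a soft sum-to-one constraint; it is not Sed_SAT (which also screens tracers and propagates uncertainty), and every number in it is a placeholder.

```python
# Hedged sketch of sediment unmixing: solve for non-negative source proportions
# (forest, pasture, cropland, streambank) that reproduce a target tracer vector.
# In practice each tracer row would be standardized so tracers contribute comparably.
import numpy as np
from scipy.optimize import nnls

# rows = tracers, columns = sources; all values are illustrative
S = np.array([[12.0,  9.0, 15.0, 22.0],
              [ 0.8,  1.4,  1.1,  0.4],
              [310., 280., 350., 410.],
              [ 5.2,  6.8,  4.1,  3.0]])
target = np.array([17.5, 0.8, 365.0, 4.0])

w = 1e3                                    # weight of the soft sum-to-one constraint
A = np.vstack([S, w * np.ones(S.shape[1])])
b = np.append(target, w * 1.0)
proportions, _ = nnls(A, b)
print(dict(zip(["forest", "pasture", "cropland", "streambank"], proportions.round(3))))
```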

  1. Proportioning and performance evaluation of self-consolidating concrete

    NASA Astrophysics Data System (ADS)

    Wang, Xuhao

    A well-proportioned self-consolidating concrete (SCC) mixture can be achieved by controlling the aggregate system, paste quality, and paste quantity. The work presented in this dissertation involves an effort to study and improve particle packing of the concrete system and reduce the paste quantity while maintaining concrete quality and performance. This dissertation is composed of four papers resulting from the study: (1) Assessing Particle Packing Based Self-Consolidating Concrete Mix Design; (2) Using Paste-To-Voids Volume Ratio to Evaluate the Performance of Self-Consolidating Concrete Mixtures; (3) Image Analysis Applications on Assessing Static Stability and Flowability of Self-Consolidating Concrete, and (4) Using Ultrasonic Wave Propagation to Monitor Stiffening Process of Self-Consolidating Concrete. Tests were conducted on a large matrix of SCC mixtures that were designed for cast-in-place bridge construction. The mixtures were made with different aggregate types, sizes, and different cementitious materials. In Paper 1, a modified particle-packing based mix design method, originally proposed by Brouwers (2005), was applied to the design of self-consolidating concrete (SCC) mixes. Using this method, a large matrix of SCC mixes was designed to have a particle distribution modulus (q) ranging from 0.23 to 0.29. Fresh properties (such as flowability, passing ability, segregation resistance, yield stress, viscosity, set time and formwork pressure) and hardened properties (such as compressive strength, surface resistance, shrinkage, and air structure) of these concrete mixes were experimentally evaluated. In Paper 2, a concept that is based on paste-to-voids volume ratio (Vpaste/Vvoids) was employed to assess the performance of SCC mixtures. The relationship between excess paste theory and Vpaste/Vvoids was investigated. The workability, flow properties, compressive strength, shrinkage, and surface resistivity of SCC mixtures were determined at various ages. Statistical analyses, including response surface models and Tukey Honestly Significant Difference (HSD) tests, were conducted to relate the mix design parameters to the concrete performance. The work discussed in Paper 3 applied a digital image processing (DIP) method associated with a MATLAB algorithm to evaluate cross sectional images of self-consolidating concrete (SCC). Parameters, such as inter-particle spacing between coarse aggregate particles and average mortar to aggregate ratio defined as average mortar thickness index (MTI), were derived from the DIP method and applied to evaluate the static stability and develop statistical models to predict flowability of SCC mixtures. The last paper investigated technologies available to monitor changing properties of a fresh mixture, particularly for use with self-consolidating concrete (SCC). A number of techniques were used to monitor setting time, stiffening and formwork pressure of SCC mixtures. These included longitudinal (P-wave) ultrasonic wave propagation, penetrometer based setting time, semi-adiabatic calorimetry, and formwork pressure. The first study demonstrated that the concrete mixes designed using the modified Brouwers mix design algorithm and particle packing concept had a potential to reduce up to 20% SCMs content compared to existing SCC mix proportioning methods and still maintain good performance. The second paper concluded that slump flow of the SCC mixtures increased with Vpaste/Vvoids at a given viscosity of mortar.
Compressive strength increases with increasing Vpaste/Vvoids up to a point (~150%), after which the strength becomes independent of Vpaste/Vvoids and even decreases slightly. Vpaste/Vvoids has little effect on the shrinkage of the mixtures, although SCC mixtures tend to have higher shrinkage than CC for a given Vpaste/Vvoids. Vpaste/Vvoids has little effect on the surface resistivity of SCC mixtures; the paste quality tends to have the dominant effect. Statistical analysis is an efficient tool for identifying the significance of the factors influencing concrete performance. In the third paper, the proposed DIP method and MATLAB algorithm were successfully used to derive inter-particle spacing and MTI and to quantitatively evaluate the static stability of hardened SCC samples. These parameters can be applied to overcome the limitations and challenges of existing theoretical frameworks and to construct statistical models, associated with rheological parameters, that predict the flowability of SCC mixtures. The outcome of this study is of practical value as an efficient and useful tool for designing SCC mixture proportions. Of the several concrete performance measurement techniques compared in the last paper, the P-wave test and calorimetric measurements can be used efficiently to monitor the stiffening and setting of SCC mixtures.

  2. Modeling Math Growth Trajectory--An Application of Conventional Growth Curve Model and Growth Mixture Model to ECLS K-5 Data

    ERIC Educational Resources Information Center

    Lu, Yi

    2016-01-01

    To model students' math growth trajectory, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of conventional growth curve model show gender differences on math IRT scores. When holding socio-economic…

  3. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…

  4. Numerical modeling and analytical modeling of cryogenic carbon capture in a de-sublimating heat exchanger

    NASA Astrophysics Data System (ADS)

    Yu, Zhitao; Miller, Franklin; Pfotenhauer, John M.

    2017-12-01

    Both a numerical and an analytical model of the heat and mass transfer processes in a CO2/N2 gas-mixture de-sublimating cross-flow finned duct heat exchanger system are developed to predict the heat transferred from the gas mixture to liquid nitrogen and the de-sublimation rate of CO2 in the mixture. The gas-mixture outlet temperature, liquid nitrogen outlet temperature, CO2 mole fraction, temperature distribution, and de-sublimation rate of CO2 through the whole heat exchanger were computed using both the numerical and analytical models. The numerical model is built using EES [1] (engineering equation solver). According to the simulations, a cross-flow finned duct heat exchanger can be designed and fabricated to validate the models. The performance of the heat exchanger is evaluated as a function of dimensionless variables, such as the ratio of the mass flow rate of liquid nitrogen to the mass flow rate of the inlet flue gas.

  5. Mixture modelling for cluster analysis.

    PubMed

    McLachlan, G J; Chang, S U

    2004-10-01

    Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.

  6. Identification of informative features for predicting proinflammatory potentials of engine exhausts.

    PubMed

    Wang, Chia-Chi; Lin, Ying-Chi; Lin, Yuan-Chung; Jhang, Syu-Ruei; Tung, Chun-Wei

    2017-08-18

    The immunotoxicity of engine exhausts is of high concern to human health due to the increasing prevalence of immune-related diseases. However, the evaluation of immunotoxicity of engine exhausts is currently based on expensive and time-consuming experiments. It is desirable to develop efficient methods for immunotoxicity assessment. To accelerate the development of safe alternative fuels, this study proposed a computational method for identifying informative features for predicting proinflammatory potentials of engine exhausts. A principal component regression (PCR) algorithm was applied to develop prediction models. The informative features were identified by a sequential backward feature elimination (SBFE) algorithm. A total of 19 informative chemical and biological features were successfully identified by SBFE algorithm. The informative features were utilized to develop a computational method named FS-CBM for predicting proinflammatory potentials of engine exhausts. FS-CBM model achieved a high performance with correlation coefficient values of 0.997 and 0.943 obtained from training and independent test sets, respectively. The FS-CBM model was developed for predicting proinflammatory potentials of engine exhausts with a large improvement on prediction performance compared with our previous CBM model. The proposed method could be further applied to construct models for bioactivities of mixtures.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Pasquini, Benedetta; Cooley, Scott K.

    In recent years, multivariate optimization has played an increasing role in analytical method development. ICH guidelines recommend using statistical design of experiments to identify the design space, in which multivariate combinations of composition variables and process variables have been demonstrated to provide quality results. Considering a microemulsion electrokinetic chromatography method (MEEKC), the performance of the electrophoretic run depends on the proportions of mixture components (MCs) of the microemulsion and on the values of process variables (PVs). In the present work, for the first time in the literature, a mixture-process variable (MPV) approach was applied to optimize a MEEKC method for the analysis of coenzyme Q10 (Q10), ascorbic acid (AA), and folic acid (FA) contained in nutraceuticals. The MCs (buffer, surfactant-cosurfactant, oil) and the PVs (voltage, buffer concentration, buffer pH) were simultaneously changed according to a MPV experimental design. A 62-run MPV design was generated using the I-optimality criterion, assuming a 46-term MPV model allowing for special-cubic blending of the MCs, quadratic effects of the PVs, and some MC-PV interactions. The obtained data were used to develop MPV models that express the performance of an electrophoretic run (measured as peak efficiencies of Q10, AA, and FA) in terms of the MCs and PVs. Contour and perturbation plots were drawn for each of the responses. Finally, the MPV models and criteria for the peak efficiencies were used to develop the design space and an optimal subregion (i.e., the settings of the mixture MCs and PVs that satisfy the respective criteria), as well as a unique optimal combination of MCs and PVs.

  8. Antagonistic Effects of a Mixture of Low-Dose Nonylphenol and Di-N-Butyl Phthalate (Monobutyl Phthalate) on the Sertoli Cells and Serum Reproductive Hormones in Prepubertal Male Rats In Vitro and In Vivo

    PubMed Central

    Xiang, Zou; Qian, Weiping; Han, Xiaodong; Li, Dongmei

    2014-01-01

    The estrogenic chemical nonylphenol (NP) and the antiandrogenic agent di-n-butyl phthalate (DBP) are regarded as widespread environmental endocrine disruptors (EDCs) which, at high doses in some species of laboratory animals, such as mice and rats, have adverse effects on male reproduction and development. Given the ubiquitous coexistence of various classes of EDCs in the environment, their combined effects warrant clarification. In this study, we attempted to determine the mixture effects of NP and DBP on the testicular Sertoli cells and reproductive endocrine hormones in serum in male rats based on quantitative data analysis by a mathematical model. In the in vitro experiment, monobutyl phthalate (MBP), the active metabolite of DBP, was used instead of DBP. Sertoli cells were isolated from 9-day-old Sprague-Dawley rats followed by treatment with NP and MBP, singly or combined. Cell viability, apoptosis, necrosis, membrane integrity, and inhibin-B concentration were tested. In the in vivo experiment, rats were gavaged on postnatal days 23–35 with a single or combined NP and DBP treatment. Serum reproductive hormone levels were recorded. Next, the Bliss Independence model was employed to analyze the quantitative data obtained from the in vitro and in vivo investigations. Antagonism was identified as the mixture effect of NP and DBP (MBP). In this study, we demonstrate the potential of the Bliss Independence model for the prediction of interactions between estrogenic and antiandrogenic agents. PMID:24676355
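    The Bliss Independence comparison used above is simple to state: for effects expressed as fractions between 0 and 1, two independently acting agents are expected to give a combined effect of E_A + E_B − E_A·E_B, and an observed mixture effect below that expectation indicates antagonism. The sketch below illustrates this rule with made-up effect fractions; it is a generic restatement, not the study's analysis code.

```python
# Hedged sketch of a Bliss Independence comparison (effects as fractions, 0-1).
def bliss_expected(e_a, e_b):
    """Expected combined effect of two independently acting agents."""
    return e_a + e_b - e_a * e_b

def classify(e_observed, e_a, e_b, tol=0.02):
    expected = bliss_expected(e_a, e_b)
    if e_observed < expected - tol:
        return "antagonism"
    if e_observed > expected + tol:
        return "synergy"
    return "approximately independent (additive)"

# Placeholder numbers: agent A alone 30% effect, agent B alone 25%, mixture 38%
print(classify(e_observed=0.38, e_a=0.30, e_b=0.25))   # -> antagonism
```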

  9. Establishment method of a mixture model and its practical application for transmission gears in an engineering vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping

    2012-09-01

    Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to the complexity of the load. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, its PDF cannot be well represented by just a single-peak function. For the purpose of representing the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method to establish a mixture model. Based on linear regression models and correlation coefficients, the proposed method can be used to automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are chosen as criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated by the field testing data of a wheel loader. Meanwhile, the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for the description of the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces the statistical error and enhances the fitting accuracy, and the load spectra compiled by this method can better reflect the actual load characteristics of the gear component.

  10. Children Exposed to Intimate Partner Violence: Identifying Differential Effects of Family Environment on Children’s Trauma and Psychopathology Symptoms through Regression Mixture Models

    PubMed Central

    McDonald, Shelby Elaine; Shin, Sunny; Corona, Rosalie; Maternick, Anna; Graham-Bermann, Sandra A.; Ascione, Frank R.; Williams, James Herbert

    2016-01-01

    The majority of analytic approaches aimed at understanding the influence of environmental context on children’s socioemotional adjustment assume comparable effects of contextual risk and protective factors for all children. Using self-reported data from 289 maternal caregiver-child dyads, we examined the degree to which there are differential effects of severity of intimate partner violence (IPV) exposure, yearly household income, and number of children in the family on posttraumatic stress symptoms (PTS) and psychopathology symptoms (i.e., internalizing and externalizing problems) among school-age children between the ages of 7 to 12 years. A regression mixture model identified three latent classes that were primarily distinguished by differential effects of IPV exposure severity on PTS and psychopathology symptoms: (1) asymptomatic with low sensitivity to environmental factors (66% of children), (2) maladjusted with moderate sensitivity (24%), and (3) highly maladjusted with high sensitivity (10%). Children with mothers who had higher levels of education were more likely to be in the maladjusted with moderate sensitivity group than the asymptomatic with low sensitivity group. Latino children were less likely to be in both maladjusted groups compared to the asymptomatic group. Overall, the findings suggest differential effects of family environmental factors on PTS and psychopathology symptoms among children exposed to IPV. Implications for research and practice are discussed. PMID:27337691

  11. Children exposed to intimate partner violence: Identifying differential effects of family environment on children's trauma and psychopathology symptoms through regression mixture models.

    PubMed

    McDonald, Shelby Elaine; Shin, Sunny; Corona, Rosalie; Maternick, Anna; Graham-Bermann, Sandra A; Ascione, Frank R; Herbert Williams, James

    2016-08-01

    The majority of analytic approaches aimed at understanding the influence of environmental context on children's socioemotional adjustment assume comparable effects of contextual risk and protective factors for all children. Using self-reported data from 289 maternal caregiver-child dyads, we examined the degree to which there are differential effects of severity of intimate partner violence (IPV) exposure, yearly household income, and number of children in the family on posttraumatic stress symptoms (PTS) and psychopathology symptoms (i.e., internalizing and externalizing problems) among school-age children between the ages of 7-12 years. A regression mixture model identified three latent classes that were primarily distinguished by differential effects of IPV exposure severity on PTS and psychopathology symptoms: (1) asymptomatic with low sensitivity to environmental factors (66% of children), (2) maladjusted with moderate sensitivity (24%), and (3) highly maladjusted with high sensitivity (10%). Children with mothers who had higher levels of education were more likely to be in the maladjusted with moderate sensitivity group than the asymptomatic with low sensitivity group. Latino children were less likely to be in both maladjusted groups compared to the asymptomatic group. Overall, the findings suggest differential effects of family environmental factors on PTS and psychopathology symptoms among children exposed to IPV. Implications for research and practice are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Chemistry and kinetics of I2 loss in urine distillate and humidity condensate

    NASA Technical Reports Server (NTRS)

    Atwater, James E.; Wheeler, Richard R., Jr.; Olivadoti, J. T.; Sauer, Richard L.

    1992-01-01

    Time-resolved molecular absorption spectrophotometry of iodinated ersatz humidity condensates and iodinated ersatz urine distillates across the UV and visible spectral regions is used to investigate the chemistry and kinetics of I2 loss in urine distillate and humidity condensate. Single contaminant systems at equivalent concentrations are also employed to study rates of iodine loss. Pseudo-first-order rate constants are identified for ersatz contaminant model mixtures and for individual reactive constituents. The second-order bimolecular reaction of elemental iodine with formic acid, producing carbon dioxide and iodide anion, is identified as the primary mechanism underlying the decay of residual I2 in ersatz humidity condensate.
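    The kinetics described above can be summarized compactly: the I2-formic acid reaction is second order overall, and when formic acid is present in large excess the I2 decay appears first order with an effective rate constant proportional to the formic acid concentration. The display below is our schematic restatement of that relationship, not an equation taken from the paper.

```latex
% Schematic rate laws (notation ours): second-order loss of I2 to formic acid,
% reducing to pseudo-first-order decay when formic acid is in excess.
\begin{align}
  -\frac{d[\mathrm{I_2}]}{dt} &= k_2\,[\mathrm{I_2}]\,[\mathrm{HCOOH}], \\
  [\mathrm{I_2}](t) &\approx [\mathrm{I_2}]_0\, e^{-k' t},
  \qquad k' = k_2\,[\mathrm{HCOOH}]_0 \quad (\text{formic acid in excess}).
\end{align}
```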

  13. Compact determination of hydrogen isotopes

    DOE PAGES

    Robinson, David

    2017-04-06

    Scanning calorimetry of a confined, reversible hydrogen sorbent material has been previously proposed as a method to determine compositions of unknown mixtures of diatomic hydrogen isotopologues and helium. Application of this concept could result in greater process knowledge during the handling of these gases. Previously published studies have focused on mixtures that do not include tritium. This paper focuses on modeling to predict the effect of tritium in mixtures of the isotopologues on a calorimetry scan. Furthermore, the model predicts that tritium can be measured with a sensitivity comparable to that observed for hydrogen-deuterium mixtures, and that under some conditions, it may be possible to determine the atomic fractions of all three isotopes in a gas mixture.

  14. Lattice model for water-solute mixtures.

    PubMed

    Furlan, A P; Almarza, N G; Barbosa, M C

    2016-10-14

    A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of the pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low density liquid, and high density liquid phases. In the case of the solute, the regions of coexistence are substantially reduced when compared with both the water and the standard ALG models. A numerical procedure has been developed in order to obtain series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as a function of the solute fraction, have been studied for different interaction parameters of the model. Our model is able to reproduce qualitatively well the excess volume and enthalpy for different aqueous solutions. For the hydrophilic case, we show that the model is able to reproduce the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.

  15. Approximation of the breast height diameter distribution of two-cohort stands by mixture models. III. Kernel density estimators vs mixture models

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...

  16. The nonlinear model for emergence of stable conditions in gas mixture in force field

    NASA Astrophysics Data System (ADS)

    Kalutskov, Oleg; Uvarova, Liudmila

    2016-06-01

    The case of an M-component liquid evaporating from a straight cylindrical capillary into an N-component gas mixture in the presence of external forces was reviewed. The gas mixture is assumed to be non-ideal. Stable states can form in the gas phase during evaporation for certain model parameter values because of the nonlinearity of the initial mass-transfer equations. The critical concentrations of the resulting gas mixture components (the component concentrations at which the stable states occur in the mixture) were determined mathematically for the case of a single-component fluid evaporating into a two-component atmosphere. It was concluded that this equilibrium concentration ratio of the mixture components can be achieved by the influence of an external force on the mass-transfer processes. This is one way to create stable gas clusters that can be used effectively in modern nanotechnology.

  17. A general mixture theory. I. Mixtures of spherical molecules

    NASA Astrophysics Data System (ADS)

    Hamad, Esam Z.

    1996-08-01

    We present a new general theory for obtaining mixture properties from the pure species equations of state. The theory addresses the dependence of the mixture equation of state on composition and on the unlike interactions. The density expansion of the mixture equation gives the exact composition dependence of all virial coefficients. The theory introduces multiple-index parameters that can be calculated from binary unlike interaction parameters. In this first part of the work, details are presented for the first and second levels of approximation for spherical molecules. The second-order model is simple and very accurate. It predicts the compressibility factor of additive hard spheres within simulation uncertainty (equimolar with size ratio of three). For nonadditive hard spheres, comparison with compressibility factor simulation data over a wide range of density, composition, and nonadditivity parameter gave an average error of 2%. For mixtures of Lennard-Jones molecules, the model predictions are better than the Weeks-Chandler-Andersen perturbation theory.

  18. Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀.

    PubMed

    Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan

    2017-09-01

    Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null hypothesis H₀ (when their distribution is uniform) or from the alternative hypothesis H₁ (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H₀, but it also estimates the probability that each specific p value originates from H₀. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H₀ than from H₁. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
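    Read literally, the mixture described above can be written as a two-component density on the significant p values, from which the per-value posterior probability of H₀ follows by Bayes' rule. The display below is our schematic restatement under the usual α = .05 threshold, with f₁ standing in for whatever simple parametric alternative density is adopted; it is not the paper's exact parameterization or prior structure.

```latex
% Schematic two-component mixture on significant p values (notation ours):
% uniform on (0, alpha) under H_0, a parametric density f_1 under H_1.
\begin{align}
  f(p) &= \varphi_0 \, \frac{1}{\alpha}\,\mathbf{1}\{0 < p \le \alpha\}
          \;+\; (1 - \varphi_0)\, f_1(p), \qquad \alpha = .05, \\
  \Pr(H_0 \mid p) &= \frac{\varphi_0 / \alpha}{f(p)}.
\end{align}
```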

  19. Identifying common donors in DNA mixtures, with applications to database searches.

    PubMed

    Slooten, K

    2017-01-01

    Several methods exist to compute the likelihood ratio LR(M, g) evaluating the possible contribution of a person of interest with genotype g to a mixed trace M. In this paper we generalize this LR to a likelihood ratio LR(M1, M2) involving two possibly mixed traces M1 and M2, where the question is whether there is a donor in common to both traces. If one of the traces is in fact a single genotype, this likelihood ratio reduces to the usual LR(M, g). We explain how our method is conceptually a logical consequence of the fact that LR calculations of the form LR(M, g) can be equivalently regarded as a probabilistic deconvolution of the mixture. Based on simulated data, and using a semi-continuous mixture evaluation model, we derive ROC curves of our method applied to various types of mixtures. From these data we conclude that searches for a common donor are often feasible in the sense that a very small false positive rate can be combined with a high probability of detecting a common donor if there is one. We also show how database searches comparing all traces to each other can be carried out efficiently, as illustrated by the application of the method to the mixed traces in the Dutch DNA database. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
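    One natural way to write such a common-donor likelihood ratio, assuming the non-shared contributors of the two traces are independent, is to average the single-genotype likelihood ratios over the population genotype distribution, which makes the reduction to LR(M, g) for a single-profile trace immediate. The display below is our schematic restatement under that independence assumption, not a quotation of the paper's formula.

```latex
% Schematic common-donor LR (notation ours), averaging single-genotype LRs over
% the population genotype probabilities P(g); it reduces to LR(M, g) when one
% trace is a single genotype.
\begin{equation}
  \mathrm{LR}(M_1, M_2)
  = \frac{\Pr(M_1, M_2 \mid \text{common donor})}
         {\Pr(M_1, M_2 \mid \text{no common donor})}
  = \sum_{g} P(g)\,\mathrm{LR}(M_1, g)\,\mathrm{LR}(M_2, g).
\end{equation}
```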

  20. The National Environmental Respiratory Center (NERC) experiment in multi-pollutant air quality health research: IV. Vascular effects of repeated inhalation exposure to a mixture of five inorganic gases.

    PubMed

    Mauderly, J L; Kracko, D; Brower, J; Doyle-Eisele, M; McDonald, J D; Lund, A K; Seilkop, S K

    2014-09-01

    An experiment was conducted to test the hypothesis that a mixture of five inorganic gases could reproduce certain central vascular effects of repeated inhalation exposure of apolipoprotein E-deficient mice to diesel or gasoline engine exhaust. The hypothesis resulted from preceding multiple additive regression tree (MART) analysis of a composition-concentration-response database of mice exposed by inhalation to the exhausts and other complex mixtures. The five gases were the predictors most important to MART models best fitting the vascular responses. Mice on a high-fat diet were exposed 6 h/d, 7 d/week for 50 d to clean air or a mixture containing 30.6 ppm CO, 20.5 ppm NO, 1.4 ppm NO₂, 0.5 ppm SO₂, and 2.0 ppm NH₃ in air. The gas concentrations were below the maxima in the preceding studies but within the range of exhaust exposure levels that caused significant effects. Five indicators of stress and pro-atherosclerotic responses were measured in aortic tissue. The exposure increased all five response indicators, with the magnitude of effect and statistical significance varying among the indicators and depending on inclusion or exclusion of an apparent outlying control. With the outlier excluded, three responses approximated predicted values and two fell below predictions. The results generally supported evidence that the five gases drove the effects of exhaust, and thus supported the potential of the MART approach for identifying putative causal components of complex mixtures.

  1. Thermodynamics of concentrated electrolyte mixtures and the prediction of mineral solubilities to high temperatures for mixtures in the system Na-K-Mg-Cl-SO₄-OH-H₂O

    NASA Astrophysics Data System (ADS)

    Pabalan, Roberto T.; Pitzer, Kenneth S.

    1987-09-01

    Mineral solubilities in binary and ternary electrolyte mixtures in the system Na-K-Mg-Cl-SO₄-OH-H₂O are calculated to high temperatures using available thermodynamic data for solids and for aqueous electrolyte solutions. Activity and osmotic coefficients are derived from the ion-interaction model of Pitzer (1973, 1979) and co-workers, the parameters of which are evaluated from experimentally determined solution properties or from solubility data in binary and ternary mixtures. Excellent to good agreement with experimental solubilities for binary and ternary mixtures indicates that the model can be successfully used to predict mineral-solution equilibria to high temperatures. Although there are currently no theoretical forms for the temperature dependencies of the various model parameters, the solubility data in ternary mixtures can be adequately represented by constant values of the mixing term θ_ij and values of ψ_ijk which are either constant or have a simple temperature dependence. Since no additional parameters are needed to describe the thermodynamic properties of more complex electrolyte mixtures, the calculations can be extended to equilibrium studies relevant to natural systems. Examples of predicted solubilities are given for the quaternary system NaCl-KCl-MgCl₂-H₂O.

  2. Lattice Boltzmann scheme for mixture modeling: analysis of the continuum diffusion regimes recovering Maxwell-Stefan model and incompressible Navier-Stokes equations.

    PubMed

    Asinari, Pietro

    2009-11-01

    A finite difference lattice Boltzmann scheme for homogeneous mixture modeling, which recovers the Maxwell-Stefan diffusion model in the continuum limit, without the restriction of the mixture-averaged diffusion approximation, was recently proposed [P. Asinari, Phys. Rev. E 77, 056706 (2008)]. The theoretical basis is the Bhatnagar-Gross-Krook-type kinetic model for gas mixtures [P. Andries, K. Aoki, and B. Perthame, J. Stat. Phys. 106, 993 (2002)]. In the present paper, the recovered macroscopic equations in the continuum limit are systematically investigated by varying the ratio between the characteristic diffusion speed and the characteristic barycentric speed. It turns out that the diffusion speed must be at least one order of magnitude (in terms of Knudsen number) smaller than the barycentric speed in order to recover the Navier-Stokes equations for mixtures in the incompressible limit. Some further numerical tests are also reported. In particular, (1) the solvent and dilute test cases are considered, because they are limiting cases in which the Maxwell-Stefan model reduces automatically to Fickian cases. Moreover, (2) some tests based on the Stefan diffusion tube are reported for proving the complete capabilities of the proposed scheme in solving Maxwell-Stefan diffusion problems. The proposed scheme agrees well with the expected theoretical results.

  3. Support vector regression and artificial neural network models for stability indicating analysis of mebeverine hydrochloride and sulpiride mixtures in pharmaceutical preparation: A comparative study

    NASA Astrophysics Data System (ADS)

    Naguib, Ibrahim A.; Darwish, Hany W.

    2012-02-01

    A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, outlining the underlying algorithm for each and indicating their inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (genetic algorithm, GA). To ground the comparison in a practical setting, the methods are used for the stability-indicating quantitative analysis of mebeverine hydrochloride and sulpiride in binary mixtures as a case study, in the presence of their reported impurities and degradation products (summing up to 6 components), in raw materials and pharmaceutical dosage form via handling of the UV spectral data. For proper analysis, a 6-factor, 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR (without GA) and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results illustrate the problem of nonlinearity and how models like SVR and ANN can handle it. The methods demonstrate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the 6-component mixtures, using cheap and easy-to-handle instruments like the UV spectrophotometer.
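
    The following sketch illustrates the general kind of multivariate calibration described above, not the authors' procedure or data: it simulates overlapped UV spectra of two-component mixtures and fits one linear support vector regression per analyte with scikit-learn. The spectra, concentration ranges, and SVR parameters are invented for illustration.

        import numpy as np
        from sklearn.multioutput import MultiOutputRegressor
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        wl = np.linspace(200, 400, 101)                     # wavelength grid (nm)

        # hypothetical pure-component spectra (Gaussian bands), deliberately overlapped
        s1 = np.exp(-((wl - 260) / 20.0) ** 2)
        s2 = np.exp(-((wl - 280) / 25.0) ** 2)

        # training mixtures: absorbance = c1*s1 + c2*s2 + noise (Beer-Lambert additivity)
        C_train = rng.uniform(0.1, 1.0, size=(25, 2))
        X_train = C_train @ np.vstack([s1, s2]) + rng.normal(0, 0.01, (25, wl.size))

        # one linear-kernel SVR per analyte concentration
        model = MultiOutputRegressor(SVR(kernel="linear", C=10.0, epsilon=0.01))
        model.fit(X_train, C_train)

        # independent test set to check prediction ability
        C_test = rng.uniform(0.1, 1.0, size=(5, 2))
        X_test = C_test @ np.vstack([s1, s2]) + rng.normal(0, 0.01, (5, wl.size))
        print(np.abs(model.predict(X_test) - C_test).mean())   # mean absolute error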

  4. A Mixtures-of-Trees Framework for Multi-Label Classification

    PubMed Central

    Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos

    2015-01-01

    We propose a new probabilistic approach for multi-label classification that aims to represent the class posterior distribution P(Y|X). Our approach uses a mixture of tree-structured Bayesian networks, which can leverage the computational advantages of conditional tree-structured models and the abilities of mixtures to compensate for tree-structured restrictions. We develop algorithms for learning the model from data and for performing multi-label predictions using the learned model. Experiments on multiple datasets demonstrate that our approach outperforms several state-of-the-art multi-label classification methods. PMID:25927011

  5. Liquid class predictor for liquid handling of complex mixtures

    DOEpatents

    Seglke, Brent W [San Ramon, CA; Lekin, Timothy P [Livermore, CA

    2008-12-09

    A method of establishing liquid classes of complex mixtures for liquid handling equipment. The mixtures are composed of components and the equipment has equipment parameters. The first step comprises preparing a response curve for the components. The next step comprises using the response curve to prepare a response indicator for the mixtures. The next step comprises deriving a model that relates the components and the mixtures to establish the liquid classes.

  6. A comparison between the clinical significance and growth mixture modelling early change methods at predicting negative outcomes.

    PubMed

    Flood, Nicola; Page, Andrew; Hooke, Geoff

    2018-05-03

    Routine outcome monitoring benefits treatment by identifying patients at risk of no change or deterioration. The present study compared two methods of identifying early change and their ability to predict negative outcomes on self-report symptom and wellbeing measures. A total of 1467 voluntary day patients participated in a 10-day group Cognitive Behaviour Therapy (CBT) program and completed the symptom and wellbeing measures daily. Early change, as defined by (a) the clinical significance method and (b) longitudinal modelling, was compared on each measure. Early change, as defined by the simpler clinical significance method, was superior to longitudinal modelling at predicting negative outcomes. The longitudinal modelling method failed to detect a group of deteriorated patients, and agreement between the early change methods and the final unchanged outcome was higher for the clinical significance method. Therapists could use the clinical significance early change method during treatment to alert them to patients at risk for negative outcomes, which in turn could allow them to prevent those negative outcomes from occurring.

  7. A Concentration Addition Model to Assess Activation of the Pregnane X Receptor (PXR) by Pesticide Mixtures Found in the French Diet

    PubMed Central

    de Sousa, Georges; Nawaz, Ahmad; Cravedi, Jean-Pierre; Rahmani, Roger

    2014-01-01

    French consumers are exposed to mixtures of pesticide residues in part through food consumption. As a xenosensor, the pregnane X receptor (hPXR) is activated by numerous pesticides, the combined effect of which is currently unknown. We examined the activation of hPXR by seven pesticide mixtures most likely found in the French diet and their individual components. The mixture's effect was estimated using the concentration addition (CA) model. PXR transactivation was measured by monitoring luciferase activity in hPXR/HepG2 cells and CYP3A4 expression in human hepatocytes. The three mixtures with the highest potency were evaluated using the CA model, at equimolar concentrations and at their relative proportion in the diet. The seven mixtures significantly activated hPXR and induced the expression of CYP3A4 in human hepatocytes. Of the 14 pesticides which constitute the three most active mixtures, four were found to be strong hPXR agonists, four medium, and six weak. Depending on the mixture and pesticide proportions, additive, greater than additive or less than additive effects between compounds were demonstrated. Predictions of the combined effects were obtained with both real-life and equimolar proportions at low concentrations. When present in a mixture, pesticides act mostly additively to activate hPXR. Modulation of hPXR activation and of the induction of its target genes may represent a risk factor that exacerbates the physiological response of the hPXR signaling pathways and may explain some adverse effects in humans. PMID:25028461
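
    For readers unfamiliar with the concentration addition (CA) model named above, a minimal sketch follows: under CA, the mixture EC50 predicted from component EC50 values and mixture fractions satisfies 1/EC50_mix = Σ p_i/EC50_i. The numbers below are invented and are not taken from the study.

        def ca_mixture_ec50(fractions, ec50s):
            """Concentration-addition prediction of a mixture EC50.

            fractions: molar fraction of each component in the mixture (must sum to 1)
            ec50s:     EC50 of each component alone, in the same concentration units
            """
            assert abs(sum(fractions) - 1.0) < 1e-9
            return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

        # hypothetical equimolar ternary mixture of a strong, a medium, and a weak agonist
        print(ca_mixture_ec50([1/3, 1/3, 1/3], [2.0, 10.0, 50.0]))   # units are illustrative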

  8. Three Boundary Conditions for Computing the Fixed-Point Property in Binary Mixture Data

    PubMed Central

    Couto, Joaquina; Lebreton, Mael

    2016-01-01

    The notion of “mixtures” has become pervasive in behavioral and cognitive sciences, due to the success of dual-process theories of cognition. However, providing support for such dual-process theories is not trivial, as it crucially requires properties in the data that are specific to a mixture of cognitive processes. In theory, one such property could be the fixed-point property of binary mixture data, applied, for instance, to response times. In that case, the fixed-point property entails that response time distributions obtained in an experiment in which the mixture proportion is manipulated would have a common density point. In the current article, we discuss the application of the fixed-point property and identify three boundary conditions under which the fixed-point property will not be interpretable. In Boundary condition 1, a finding in support of the fixed-point will be moot because of a lack of difference between conditions. Boundary condition 2 refers to the case in which the extreme conditions are so different that a mixture may display bimodality. In this case, a mixture hypothesis is clearly supported, yet the fixed-point may not be found. In Boundary condition 3 the fixed-point may also not be present, yet a mixture might still exist but is occluded due to additional changes in behavior. Finding the fixed-point property provides strong support for a dual-process account, yet the boundary conditions that we identify should be considered before making inferences about underlying psychological processes. PMID:27893868
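
    A small simulated sketch (assumed Gaussian components; not data or code from the article) can make the fixed-point property concrete: binary mixture densities built with different mixing proportions all pass through the point where the two component densities are equal.

        import numpy as np
        from scipy.stats import norm

        x = np.linspace(0.2, 1.8, 2001)           # hypothetical response-time axis (s)
        fast = norm(loc=0.6, scale=0.10)          # assumed "fast process" component
        slow = norm(loc=1.0, scale=0.15)          # assumed "slow process" component

        # mixture densities for three conditions that differ only in the mixing proportion
        densities = [p * fast.pdf(x) + (1 - p) * slow.pdf(x) for p in (0.2, 0.5, 0.8)]

        # the fixed point lies where the component densities cross; there the mixture
        # density is the same whatever the mixing proportion
        mask = (x > 0.6) & (x < 1.0)              # search between the two component means
        cross = x[mask][np.argmin(np.abs(fast.pdf(x[mask]) - slow.pdf(x[mask])))]
        print(cross, [float(np.interp(cross, x, d)) for d in densities])   # three equal values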

  9. Lagrange thermodynamic potential and intrinsic variables for He-3 He-4 dilute solutions

    NASA Technical Reports Server (NTRS)

    Jackson, H. W.

    1983-01-01

    For a two-fluid model of dilute solutions of He-3 in liquid He-4, a thermodynamic potential is constructed that provides a Lagrangian for deriving equations of motion by a variational procedure. This Lagrangian is defined for uniform velocity fields as a (negative) Legendre transform of total internal energy, and its primary independent variables, together with their thermodynamic conjugates, are identified. Here, similarities between relations in classical physics and quantum statistical mechanics serve as a guide for developing an alternate expression for this function that reveals its character as the difference between apparent kinetic energy and intrinsic internal energy. When the He-3 concentration in the mixtures tends to zero, this expression reduces to Zilsel's formula for the Lagrangian for pure liquid He-4. An investigation of properties of the intrinsic internal energy leads to the introduction of intrinsic chemical potentials along with other intrinsic variables for the mixtures. Explicit formulas for these variables are derived for a noninteracting elementary excitation model of the fluid. Using these formulas and others also derived from quantum statistical mechanics, another equivalent expression for the Lagrangian is generated.

  10. A Computational Algorithm for Functional Clustering of Proteome Dynamics During Development

    PubMed Central

    Wang, Yaqun; Wang, Ningtao; Hao, Han; Guo, Yunqian; Zhen, Yan; Shi, Jisen; Wu, Rongling

    2014-01-01

    Phenotypic traits, such as seed development, are a consequence of complex biochemical interactions among genes, proteins and metabolites, but the underlying mechanisms that operate in a coordinated and sequential manner remain elusive. Here, we address this issue by developing a computational algorithm to monitor proteome changes during the course of trait development. The algorithm is built within the mixture-model framework in which each mixture component is modeled by a specific group of proteins that display a similar temporal pattern of expression in trait development. A nonparametric approach based on Legendre orthogonal polynomials was used to fit dynamic changes of protein expression, increasing the power and flexibility of protein clustering. By analyzing a dataset of proteomic dynamics during early embryogenesis of the Chinese fir, the algorithm has successfully identified several distinct types of proteins that coordinate with each other to determine seed development in this forest tree, which is commercially and environmentally important to China. The algorithm will find immediate applications in the characterization of the mechanistic underpinnings of any other biological processes in which protein abundance plays a key role. PMID:24955031
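
    The sketch below loosely illustrates the two ingredients named above, a Legendre-polynomial fit of temporal expression profiles followed by mixture-based clustering of the fitted coefficients; it is not the authors' algorithm, and all data, polynomial degrees, and cluster counts are invented.

        import numpy as np
        from numpy.polynomial import legendre
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        t = np.linspace(-1, 1, 12)                    # 12 time points rescaled to [-1, 1]

        # two invented expression patterns: steadily rising vs. transient peak
        rising = 1.0 + 0.8 * t
        peaked = 1.0 + 0.9 * (1 - t ** 2)
        profiles = np.vstack([rising + rng.normal(0, 0.1, t.size) for _ in range(30)] +
                             [peaked + rng.normal(0, 0.1, t.size) for _ in range(30)])

        # smooth each protein's trajectory with degree-3 Legendre coefficients
        coeffs = np.array([legendre.legfit(t, y, deg=3) for y in profiles])

        # cluster proteins by their fitted coefficients (the mixture components)
        gm = GaussianMixture(n_components=2, random_state=0).fit(coeffs)
        print(np.bincount(gm.predict(coeffs)))        # sizes of the recovered clusters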

  11. Transient Catalytic Combustor Model With Detailed Gas and Surface Chemistry

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Dietrich, Daniel L.; Mellish, Benjamin P.; Miller, Fletcher J.; Tien, James S.

    2005-01-01

    In this work, we numerically investigate the transient combustion of a premixed gas mixture in a narrow, perfectly-insulated, catalytic channel which can represent an interior channel of a catalytic monolith. The model assumes a quasi-steady gas-phase and a transient, thermally thin solid phase. The gas phase is one-dimensional, but it does account for heat and mass transfer in a direction perpendicular to the flow via appropriate heat and mass transfer coefficients. The model neglects axial conduction in both the gas and in the solid. The model includes both detailed gas-phase reactions and catalytic surface reactions. The reactants modeled so far include lean mixtures of dry CO and CO/H2 mixtures, with pure oxygen as the oxidizer. The results include transient computations of light-off and system response to inlet condition variations. In some cases, the model predicts two different steady-state solutions depending on whether the channel is initially hot or cold. Additionally, the model suggests that the catalytic ignition of CO/O2 mixtures is extremely sensitive to small variations of inlet equivalence ratios and parts per million levels of H2.

  12. Development of a semicircular bend (SCB) test method for performance testing of Nebraska asphalt mixtures.

    DOT National Transportation Integrated Search

    2015-12-01

    Granted that most distresses in asphalt (flexible) concrete (AC) pavements are directly related to fracture, it becomes clear that identifying and characterizing fracture properties of AC mixtures is a critical step towards a better pavement design...

  13. Research and Guidance on Drinking Water Contaminant Mixtures

    EPA Science Inventory

    Accurate assessment of potential human health risk(s) from multiple-route exposures to multiple chemicals in drinking water is needed because of widespread daily exposure to this complex mixture. Hundreds of chemicals have been identified in drinking water with the mix of chemic...

  14. THE GENOTOXICITY OF PRIORITY POLYCYCLIC AROMATIC HYDROCARBONS IN COMPLEX MIXTURES

    EPA Science Inventory

    Risk assessment of complex environmental samples suffers from difficulty in identifying toxic components, inadequacy of available toxicity data, and a paucity of knowledge about the behavior of geno(toxic) substances in complex mixtures. Lack of information about the behavior of ...

  15. Spurious Latent Classes in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Alexeev, Natalia; Templin, Jonathan; Cohen, Allan S.

    2011-01-01

    Mixture Rasch models have been used to study a number of psychometric issues such as goodness of fit, response strategy differences, strategy shifts, and multidimensionality. Although these models offer the potential for improving understanding of the latent variables being measured, under some conditions overextraction of latent classes may…

  16. Individual and binary toxicity of anatase and rutile nanoparticles towards Ceriodaphnia dubia.

    PubMed

    Iswarya, V; Bhuvaneshwari, M; Chandrasekaran, N; Mukherjee, Amitava

    2016-09-01

    Increasing usage of engineered nanoparticles, especially titanium dioxide (TiO2), in various commercial products has necessitated their toxicity evaluation and risk assessment, especially in the aquatic ecosystem. In the present study, a comprehensive toxicity assessment of anatase and rutile NPs (individual as well as a binary mixture) has been carried out in a freshwater matrix on Ceriodaphnia dubia under different irradiation conditions, viz. visible and UV-A. Anatase and rutile NPs produced an LC50 of about 37.04 and 48 mg/L, respectively, under visible irradiation. However, lower LC50 values of about 22.56 (anatase) and 23.76 (rutile) mg/L were noted under UV-A irradiation. A toxic unit (TU) approach was followed to determine the concentrations of binary mixtures of anatase and rutile. The binary mixture resulted in an antagonistic and additive effect under visible and UV-A irradiation, respectively. Of the two different modeling approaches used in the study, the Marking-Dawson model was found to be more appropriate than the Abbott model for the toxicity evaluation of binary mixtures. The agglomeration of NPs played a significant role in the induction of antagonistic and additive effects by the mixture based on the irradiation applied. TEM and zeta potential analysis confirmed the surface interactions between anatase and rutile NPs in the mixture. Maximum uptake was noticed at 0.25 total TU of the binary mixture under visible irradiation and 1 TU of anatase NPs for UV-A irradiation. Individual NPs showed higher uptake under UV-A than under visible irradiation. In contrast, the binary mixture showed a difference in the uptake pattern based on the type of irradiation applied. Copyright © 2016 Elsevier B.V. All rights reserved.
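
    As a small illustration of the toxic unit (TU) approach mentioned above, each component's concentration is expressed as a fraction of its own LC50 and the fractions are summed; the exposure concentrations below are invented, while the LC50 values are those quoted for visible-light conditions.

        def toxic_units(concentrations, lc50s):
            """Per-component toxic units and their sum (1 TU ~ joint LC50 under additivity)."""
            tus = [c / lc for c, lc in zip(concentrations, lc50s)]
            return tus, sum(tus)

        # invented exposure concentrations; LC50s are the visible-light values quoted above
        tus, total = toxic_units([18.5, 24.0], [37.04, 48.0])   # mg/L
        print(tus, total)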

  17. Rasch Mixture Models for DIF Detection: A Comparison of Old and New Score Specifications

    ERIC Educational Resources Information Center

    Frick, Hannah; Strobl, Carolin; Zeileis, Achim

    2015-01-01

    Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch…

  18. Mathematical modeling of erythrocyte chimerism informs genetic intervention strategies for sickle cell disease.

    PubMed

    Altrock, Philipp M; Brendel, Christian; Renella, Raffaele; Orkin, Stuart H; Williams, David A; Michor, Franziska

    2016-09-01

    Recent advances in gene therapy and genome-engineering technologies offer the opportunity to correct sickle cell disease (SCD), a heritable disorder caused by a point mutation in the β-globin gene. The developmental switch from fetal γ-globin to adult β-globin is governed in part by the transcription factor (TF) BCL11A. This TF has been proposed as a therapeutic target for reactivation of γ-globin and concomitant reduction of β-sickle globin. In this and other approaches, genetic alteration of a portion of the hematopoietic stem cell (HSC) compartment leads to a mixture of sickling and corrected red blood cells (RBCs) in the periphery. To reverse the sickling phenotype, a certain proportion of corrected RBCs is necessary; the degree of HSC alteration required to achieve a desired fraction of corrected RBCs remains unknown. To address this issue, we developed a mathematical model describing aging and survival of sickle-susceptible and normal RBCs; the former can have a selective survival advantage leading to their overrepresentation. We identified the level of bone marrow chimerism required for successful stem cell-based gene therapies in SCD. Our findings were further informed using an experimental mouse model, where we transplanted mixtures of Berkeley SCD and normal murine bone marrow cells to establish chimeric grafts in murine hosts. Our integrative theoretical and experimental approach identifies the target frequency of HSC alterations required for effective treatment of sickling syndromes in humans. Our work replaces episodic observations of such target frequencies with a mathematical modeling framework that covers a large and continuous spectrum of chimerism conditions. Am. J. Hematol. 91:931-937, 2016. © 2016 Wiley Periodicals, Inc.

  19. Modeling biofiltration of VOC mixtures under steady-state conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baltzis, B.C.; Wojdyla, S.M.; Zarook, S.M.

    1997-06-01

    Treatment of air streams contaminated with binary volatile organic compound (VOC) mixtures in classical biofilters under steady-state conditions of operation was described with a general mathematical model. The model accounts for potential kinetic interactions among the pollutants, effects of oxygen availability on biodegradation, and biomass diversification in the filter bed. While the effects of oxygen were always taken into account, two distinct cases were considered for the experimental model validation. The first involves kinetic interactions, but no biomass differentiation, used for describing data from biofiltration of benzene/toluene mixtures. The second case assumes that each pollutant is treated by a different type of biomass. Each biomass type is assumed to form separate patches of biofilm on the solid packing material, thus kinetic interference does not occur. This model was used for describing biofiltration of ethanol/butanol mixtures. Experiments were performed with classical biofilters packed with mixtures of peat moss and perlite (2:3, volume:volume). The model equations were solved through the use of computer codes based on the fourth-order Runge-Kutta technique for the gas-phase mass balances and the method of orthogonal collocation for the concentration profiles in the biofilms. Good agreement between model predictions and experimental data was found in almost all cases. Oxygen was found to be extremely important in the case of polar VOCs (ethanol/butanol).

  20. Modeling the soil water retention curves of soil-gravel mixtures with regression method on the Loess Plateau of China.

    PubMed

    Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an

    2013-01-01

    Soil water retention parameters are critical to quantify flow and solute transport in vadose zone, while the presence of rock fragments remarkably increases their variability. Therefore a novel method for determining water retention parameters of soil-gravel mixtures is required. The procedure to generate such a model is based firstly on the determination of the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on the integration of this relationship with former analytical equations of water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine WRCs of soil-gravel mixtures obtained with a clay loam soil mixed with shale clasts or pebbles in three size groups with various gravel contents. Data showed that the effective saturation of the soil-gravel mixtures with the same kind of gravels within one size group had a linear relation with gravel contents, and had a power relation with the bulk density of samples at any pressure head. Revised formulas for water retention properties of the soil-gravel mixtures are proposed to establish the water retention curved surface models of the power-linear functions and power functions. The analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable by using either the measured data of separate gravel size group or those of all the three gravel size groups having a large size range. Furthermore, the regression parameters of the curved surfaces for the soil-gravel mixtures with a large range of gravel content could be determined from the water retention data of the soil-gravel mixtures with two representative gravel contents or bulk densities. Such revised water retention models are potentially applicable in regional or large scale field investigations of significantly heterogeneous media, where various gravel sizes and different gravel contents are present.

  1. Modeling the Soil Water Retention Curves of Soil-Gravel Mixtures with Regression Method on the Loess Plateau of China

    PubMed Central

    Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an

    2013-01-01

    Soil water retention parameters are critical to quantify flow and solute transport in vadose zone, while the presence of rock fragments remarkably increases their variability. Therefore a novel method for determining water retention parameters of soil-gravel mixtures is required. The procedure to generate such a model is based firstly on the determination of the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on the integration of this relationship with former analytical equations of water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine WRCs of soil-gravel mixtures obtained with a clay loam soil mixed with shale clasts or pebbles in three size groups with various gravel contents. Data showed that the effective saturation of the soil-gravel mixtures with the same kind of gravels within one size group had a linear relation with gravel contents, and had a power relation with the bulk density of samples at any pressure head. Revised formulas for water retention properties of the soil-gravel mixtures are proposed to establish the water retention curved surface models of the power-linear functions and power functions. The analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable by using either the measured data of separate gravel size group or those of all the three gravel size groups having a large size range. Furthermore, the regression parameters of the curved surfaces for the soil-gravel mixtures with a large range of gravel content could be determined from the water retention data of the soil-gravel mixtures with two representative gravel contents or bulk densities. Such revised water retention models are potentially applicable in regional or large scale field investigations of significantly heterogeneous media, where various gravel sizes and different gravel contents are present. PMID:23555040

  2. Phenomenological Modeling and Laboratory Simulation of Long-Term Aging of Asphalt Mixtures

    NASA Astrophysics Data System (ADS)

    Elwardany, Michael Dawoud

    The accurate characterization of asphalt mixture properties as a function of pavement service life is becoming more important as more powerful pavement design and performance prediction methods are implemented. Oxidative aging is a major distress mechanism of asphalt pavements. Aging increases the stiffness and brittleness of the material, which leads to a high cracking potential. Thus, an improved understanding of the aging phenomenon and its effect on asphalt binder chemical and rheological properties will allow for the prediction of mixture properties as a function of pavement service life. Many researchers have conducted laboratory binder thin-film aging studies; however, this approach does not allow for studying the physicochemical effects of mineral fillers on age hardening rates in asphalt mixtures. Moreover, the aging phenomenon in the field is governed by the kinetics of binder oxidation, oxygen diffusion through the mastic phase, and oxygen percolation throughout the air-void structure. In this study, laboratory aging trials were conducted on mixtures prepared using component materials of several field projects throughout the USA and Canada. Laboratory-aged materials were compared against field cores sampled at different ages. Results suggested that oven aging of loose mixture at 95°C is the most promising laboratory long-term aging method. Additionally, an empirical model was developed in order to account for the effect of mineral fillers on age hardening rates in asphalt mixtures. Kinetics modeling was used to predict field aging levels throughout pavement thickness and to determine the required laboratory aging duration to match field aging. Kinetics model outputs are calibrated using measured data from the field to account for the effects of oxygen diffusion and percolation. Finally, the calibrated model was validated using an independent set of field sections. This work is expected to provide a basis for improved asphalt mixture and pavement design procedures in order to save taxpayers' money.

  3. Effect of the Key Mixture Parameters on Shrinkage of Reactive Powder Concrete

    PubMed Central

    Zubair, Ahmed

    2014-01-01

    Reactive powder concrete (RPC) mixtures are reported to have excellent mechanical and durability characteristics. However, such concrete mixtures having high amount of cementitious materials may have high early shrinkage causing cracking of concrete. In the present work, an attempt has been made to study the simultaneous effects of three key mixture parameters on shrinkage of the RPC mixtures. Considering three different levels of the three key mixture factors, a total of 27 mixtures of RPC were prepared according to a 3³ factorial experiment design. The specimens belonging to all 27 mixtures were monitored for shrinkage at different ages over a total period of 90 days. The test results were plotted to observe the variation of shrinkage with time and to see the effects of the key mixture factors. The experimental data pertaining to 90-day shrinkage were used to conduct analysis of variance to identify significance of each factor and to obtain an empirical equation correlating the shrinkage of RPC with the three key mixture factors. The rate of development of shrinkage at early ages was higher. The water to binder ratio was found to be the most prominent factor followed by cement content with the least effect of silica fume content. PMID:25050395

  4. Effect of the key mixture parameters on shrinkage of reactive powder concrete.

    PubMed

    Ahmad, Shamsad; Zubair, Ahmed; Maslehuddin, Mohammed

    2014-01-01

    Reactive powder concrete (RPC) mixtures are reported to have excellent mechanical and durability characteristics. However, such concrete mixtures having high amount of cementitious materials may have high early shrinkage causing cracking of concrete. In the present work, an attempt has been made to study the simultaneous effects of three key mixture parameters on shrinkage of the RPC mixtures. Considering three different levels of the three key mixture factors, a total of 27 mixtures of RPC were prepared according to a 3³ factorial experiment design. The specimens belonging to all 27 mixtures were monitored for shrinkage at different ages over a total period of 90 days. The test results were plotted to observe the variation of shrinkage with time and to see the effects of the key mixture factors. The experimental data pertaining to 90-day shrinkage were used to conduct analysis of variance to identify significance of each factor and to obtain an empirical equation correlating the shrinkage of RPC with the three key mixture factors. The rate of development of shrinkage at early ages was higher. The water to binder ratio was found to be the most prominent factor followed by cement content with the least effect of silica fume content.

  5. Kinetics of methane production from the codigestion of switchgrass and Spirulina platensis algae.

    PubMed

    El-Mashad, Hamed M

    2013-03-01

    Anaerobic batch digestion of four feedstocks was conducted at 35 and 50 °C: switchgrass; Spirulina platensis algae; and two mixtures of both switchgrass and S. platensis. Mixture 1 was composed of 87% switchgrass (based on volatile solids) and 13% S. platensis. Mixture 2 was composed of 67% switchgrass and 33% S. platensis. The kinetics of methane production from these feedstocks was studied using four first-order models: exponential, Gompertz, Fitzhugh, and Cone. The methane yields after 40 days of digestion at 35 °C were 355, 127, 143 and 198 ml/g VS, respectively, for S. platensis, switchgrass, and Mixtures 1 and 2, while the yields at 50 °C were 358, 167, 198, and 236 ml/g VS, respectively. Based on Akaike's information criterion, the Cone model best described the experimental data. The Cone model was validated with experimental data collected from the digestion of a third mixture that was composed of 83% switchgrass and 17% S. platensis. Published by Elsevier Ltd.
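
    A generic sketch of this kind of kinetic fitting (invented yield curve, not the study's data or code) is shown below: the simple first-order model B(t) = B0·(1 − exp(−k·t)) is fitted with SciPy, and a Cone-type function is included to indicate how the alternative models could be fitted in the same way.

        import numpy as np
        from scipy.optimize import curve_fit

        def first_order(t, b0, k):
            """Cumulative methane yield (ml/g VS) under simple first-order kinetics."""
            return b0 * (1.0 - np.exp(-k * t))

        def cone(t, b0, k, n):
            """Cone model (not fitted here; requires t > 0), shown for comparison."""
            return b0 / (1.0 + (k * t) ** (-n))

        # invented digestion times (days) and cumulative yields, for illustration only
        t = np.array([2, 5, 10, 15, 20, 30, 40], dtype=float)
        y = np.array([60, 120, 180, 210, 225, 235, 238], dtype=float)

        (b0, k), _ = curve_fit(first_order, t, y, p0=[240.0, 0.1])
        print(b0, k)   # estimated ultimate yield and first-order rate constant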

  6. Advanced stability indicating chemometric methods for quantitation of amlodipine and atorvastatin in their quinary mixture with acidic degradation products

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2016-02-01

    Two advanced, accurate and precise chemometric methods are developed for the simultaneous determination of amlodipine besylate (AML) and atorvastatin calcium (ATV) in the presence of their acidic degradation products in tablet dosage forms. The first method was Partial Least Squares (PLS-1) and the second was Artificial Neural Networks (ANN). PLS was compared to ANN models with and without a variable selection procedure (genetic algorithm, GA). For proper analysis, a 5-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the interfering species. Fifteen mixtures were used as the calibration set and the other ten mixtures were used as the validation set to validate the prediction ability of the suggested models. The proposed methods were successfully applied to the analysis of pharmaceutical tablets containing AML and ATV. The methods demonstrated the ability of the mentioned models to resolve the highly overlapped spectra of the quinary mixture, using inexpensive and easy-to-handle instruments like the UV-VIS spectrophotometer.
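
    As a minimal illustration of PLS calibration on overlapped spectra (synthetic data, not the study's), the sketch below builds mixtures of five hypothetical absorbing species and regresses one analyte's concentration on the spectra with scikit-learn's PLSRegression; all spectra and settings are assumptions.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(7)
        wl = np.linspace(200, 400, 201)                       # wavelength grid (nm)

        # five hypothetical, strongly overlapped component spectra (Gaussian bands)
        S = np.vstack([np.exp(-((wl - c) / 18.0) ** 2) for c in (240, 255, 270, 285, 300)])

        # 25-mixture calibration set and 10-mixture validation set
        C_cal = rng.uniform(0.1, 1.0, (25, 5))
        C_val = rng.uniform(0.1, 1.0, (10, 5))
        X_cal = C_cal @ S + rng.normal(0, 0.005, (25, wl.size))
        X_val = C_val @ S + rng.normal(0, 0.005, (10, wl.size))

        # PLS-1: one model per analyte; only the first analyte is shown here
        pls = PLSRegression(n_components=5).fit(X_cal, C_cal[:, 0])
        pred = pls.predict(X_val).ravel()
        print(np.abs(pred - C_val[:, 0]).mean())              # mean absolute prediction error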

  7. Thermal conductivity of disperse insulation materials and their mixtures

    NASA Astrophysics Data System (ADS)

    Geža, V.; Jakovičs, A.; Gendelis, S.; Usiļonoks, I.; Timofejevs, J.

    2017-10-01

    Development of new, more efficient thermal insulation materials is key to reducing heat losses and, with them, greenhouse gas emissions. Two innovative materials developed at Thermeko LLC are Izoprok and Izopearl. This research is devoted to an experimental study of the thermal insulation properties of both materials as well as their mixture. Results show that a mixture of 40% Izoprok and 60% Izopearl has lower thermal conductivity than either pure material. In this work, the dependence of the materials' thermal conductivity on temperature is also measured. A novel modelling approach is used to model the spatial distribution of the disperse insulation material. A computational fluid dynamics approach is also used to estimate the role of different heat transfer phenomena in such a porous mixture. Modelling results show that thermal convection plays a small role in heat transfer despite the large fraction of air within the material pores.

  8. A comparative study of mixture cure models with covariate

    NASA Astrophysics Data System (ADS)

    Leng, Oh Yit; Khalid, Zarina Mohd

    2017-05-01

    In survival analysis, the survival time is assumed to follow a non-negative distribution, such as the exponential, Weibull, or log-normal distribution. In some cases, the survival time is influenced by observed factors, and omitting these factors may cause inaccurate estimation of the survival function. Therefore, a survival model which incorporates the influences of observed factors is more appropriate in such cases. These observed factors are included in the survival model as covariates. In addition, there are cases where a group of individuals is cured, that is, never experiences the event of interest. Ignoring the cure fraction may lead to overestimation of the survival function. Thus, a mixture cure model is more suitable for modelling survival data in the presence of a cure fraction. In this study, three mixture cure survival models are used to analyse survival data with a covariate and a cure fraction. The first model includes the covariate in the parameterization of the susceptible individuals' survival function, the second model allows the cure fraction to depend on the covariate, and the third model incorporates the covariate in both the cure fraction and the survival function of susceptible individuals. This study aims to compare the performance of these models via a simulation approach. Survival data with varying sample sizes and cure fractions are therefore simulated, with the survival time assumed to follow the Weibull distribution. The simulated data are then modelled using the three mixture cure survival models. The results show that the three mixture cure models are more appropriate for modelling survival data in the presence of a cure fraction and an observed factor.
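
    A brief sketch of the mixture cure formulation compared in this study is given below, assuming a Weibull survival function for susceptible individuals and a logistic link for the cure fraction; the population survival function is S(t) = π(x) + (1 − π(x))·S_u(t), and every parameter value and the simulation itself are invented for illustration.

        import numpy as np

        def cure_fraction(x, b0=-0.5, b1=1.0):
            """Logistic model for the probability of being cured given a covariate x."""
            return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

        def mixture_cure_survival(t, x, shape=1.5, scale=2.0):
            """S(t) = pi(x) + (1 - pi(x)) * Weibull survival of the susceptible group."""
            pi = cure_fraction(x)
            return pi + (1.0 - pi) * np.exp(-(t / scale) ** shape)

        # simulate data: cured subjects never fail and are censored at the end of follow-up
        rng = np.random.default_rng(11)
        x = rng.normal(size=500)
        cured = rng.random(500) < cure_fraction(x)
        t_event = 2.0 * rng.weibull(1.5, 500)                 # scale * standard Weibull(shape)
        follow_up = 10.0
        time = np.where(cured, follow_up, np.minimum(t_event, follow_up))
        event = (~cured) & (t_event <= follow_up)
        print(event.mean(), float(mixture_cure_survival(5.0, 0.0)))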

  9. A BGK model for reactive mixtures of polyatomic gases with continuous internal energy

    NASA Astrophysics Data System (ADS)

    Bisi, M.; Monaco, R.; Soares, A. J.

    2018-03-01

    In this paper we derive a BGK relaxation model for a mixture of polyatomic gases with a continuous structure of internal energies. The emphasis of the paper is on the case of a quaternary mixture undergoing a reversible chemical reaction of bimolecular type. For such a mixture we prove an H-theorem and characterize the equilibrium solutions with the related mass action law of chemical kinetics. Further, a Chapman-Enskog asymptotic analysis is performed in view of computing the first-order non-equilibrium corrections to the distribution functions and investigating the transport properties of the reactive mixture. The chemical reaction rate is explicitly derived at the first order and the balance equations for the constituent number densities are derived at the Euler level.

  10. Metal-Polycyclic Aromatic Hydrocarbon Mixture Toxicity in Hyalella azteca. 1. Response Surfaces and Isoboles To Measure Non-additive Mixture Toxicity and Ecological Risk.

    PubMed

    Gauthier, Patrick T; Norwood, Warren P; Prepas, Ellie E; Pyle, Greg G

    2015-10-06

    Mixtures of metals and polycyclic aromatic hydrocarbons (PAHs) occur ubiquitously in aquatic environments, yet relatively little is known regarding their potential to produce non-additive toxicity (i.e., antagonism or potentiation). A review of the lethality of metal-PAH mixtures in aquatic biota revealed that more-than-additive lethality is as common as strictly additive effects. Approaches to ecological risk assessment do not consider non-additive toxicity of metal-PAH mixtures. Forty-eight-hour water-only binary mixture toxicity experiments were conducted to determine the additive toxic nature of mixtures of Cu, Cd, V, or Ni with phenanthrene (PHE) or phenanthrenequinone (PHQ) using the aquatic amphipod Hyalella azteca. In cases where more-than-additive toxicity was observed, we calculated the possible mortality rates at Canada's environmental water quality guideline concentrations. We used a three-dimensional response surface isobole model-based approach to compare the observed co-toxicity in juvenile amphipods to predicted outcomes based on concentration addition or effects addition mixtures models. More-than-additive lethality was observed for all Cu-PHE, Cu-PHQ, and several Cd-PHE, Cd-PHQ, and Ni-PHE mixtures. Our analysis predicts Cu-PHE, Cu-PHQ, Cd-PHE, and Cd-PHQ mixtures at the Canadian Water Quality Guideline concentrations would produce 7.5%, 3.7%, 4.4% and 1.4% mortality, respectively.

  11. The simultaneous mass and energy evaporation (SM2E) model.

    PubMed

    Choudhary, Rehan; Klauda, Jeffery B

    2016-01-01

    In this article, the Simultaneous Mass and Energy Evaporation (SM2E) model is presented. The SM2E model is based on theoretical models for mass and energy transfer. The theoretical models systematically under- or over-predicted at the various flow conditions: laminar, transition, and turbulent. These models were harmonized with experimental measurements to eliminate systematic under- or over-prediction; a total of 113 measured evaporation rates were used. The SM2E model can be used to estimate evaporation rates for pure liquids as well as liquid mixtures at laminar, transition, and turbulent flow conditions. However, due to limited availability of evaporation data, the model has so far only been tested against data for pure liquids and binary mixtures. The model can take evaporative cooling into account, and when the temperature of the evaporating liquid or liquid mixture is known (e.g., isothermal evaporation), the SM2E model reduces to a mass-transfer-only model.

  12. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    NASA Astrophysics Data System (ADS)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of ²³⁵U/²³⁸U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst; isotopic compositions may therefore be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of the data, taking the respective slope as an estimate of the isotope ratio. The finite mixture models are parameterised by:
    • the number of different ratios,
    • the number of data points belonging to each ratio group, and
    • the ratios (i.e. slopes) of each group.
    Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control parameters of the algorithm, i.e. the maximum number of ratios and the minimum relative group size of data points belonging to each ratio have to be defined. Computation of the models can be done with statistical software; in this study, Leisch and Grün's flexmix package [2] for the statistical open-source software R was applied. A code example is available in the electronic supplementary material of Kappel et al. [1]. In order to demonstrate the usefulness of finite mixture models in fields dealing with the computation of multiple isotope ratios in mixed samples, a transparent example based on simulated data is presented and problems regarding small group sizes are illustrated. In addition, the application of finite mixture models to isotope ratio data measured in uranium oxide particles is shown. The results indicate that finite mixture models perform well in computing isotope ratios relative to traditional estimation procedures and can be recommended for a more objective and straightforward calculation of isotope ratios in geochemistry than is current practice.
    [1] S. Kappel, S. Boulyga, L. Dorta, D. Günther, B. Hattendorf, D. Koffler, G. Laaha, F. Leisch and T. Prohaska: Evaluation Strategies for Isotope Ratio Measurements of Single Particles by LA-MC-ICPMS, Analytical and Bioanalytical Chemistry, 2013, accepted for publication on 2012-12-18 (doi: 10.1007/s00216-012-6674-3).
    [2] B. Grün and F. Leisch: Fitting finite mixtures of generalized linear regressions in R, Computational Statistics & Data Analysis, 51(11), 5247-5252, 2007 (doi: 10.1016/j.csda.2006.08.014).
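
    The fragment below is a bare-bones sketch of the underlying idea in Python rather than the flexmix/R code referenced above: an EM fit of a finite mixture of zero-intercept regression lines, where each component slope estimates one isotope ratio. The component count, noise level, and toy signal data are invented.

        import numpy as np

        def em_mixture_of_slopes(x, y, k=2, n_iter=200):
            """EM fit of y ~ slope_j * x as a k-component mixture of regressions through the origin."""
            ratios = y / x
            slopes = np.quantile(ratios, np.linspace(0.1, 0.9, k))   # spread initial slopes over the data
            weights = np.full(k, 1.0 / k)
            sigma = np.std(y - np.median(ratios) * x)
            for _ in range(n_iter):
                # E-step: posterior probability that each point belongs to each ratio group
                resid = y[:, None] - x[:, None] * slopes[None, :]
                log_r = np.log(weights)[None, :] - 0.5 * (resid / sigma) ** 2
                log_r -= log_r.max(axis=1, keepdims=True)
                r = np.exp(log_r)
                r /= r.sum(axis=1, keepdims=True)
                # M-step: weighted least-squares slope per group, mixing weights, common noise scale
                slopes = (r * x[:, None] * y[:, None]).sum(axis=0) / (r * x[:, None] ** 2).sum(axis=0)
                weights = r.mean(axis=0)
                resid = y[:, None] - x[:, None] * slopes[None, :]
                sigma = np.sqrt((r * resid ** 2).sum() / len(x))
            return slopes, weights

        # toy transient-signal data: 70% of points from ratio 0.0072, 30% from ratio 0.05
        rng = np.random.default_rng(1)
        x = rng.uniform(1.0, 10.0, 400)                        # e.g. major-isotope intensity
        true_ratio = np.where(rng.random(400) < 0.7, 0.0072, 0.05)
        y = true_ratio * x + rng.normal(0, 0.01, 400)          # e.g. minor-isotope intensity
        print(em_mixture_of_slopes(x, y, k=2))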

  13. An analysis of lethal and sublethal interactions among type I and type II pyrethroid pesticide mixtures using standard Hyalella azteca water column toxicity tests.

    PubMed

    Hoffmann, Krista Callinan; Deanovic, Linda; Werner, Inge; Stillway, Marie; Fong, Stephanie; Teh, Swee

    2016-10-01

    A novel 2-tiered analytical approach was used to characterize and quantify interactions between type I and type II pyrethroids in Hyalella azteca using standardized water column toxicity tests. Bifenthrin, permethrin, cyfluthrin, and lambda-cyhalothrin were tested in all possible binary combinations across 6 experiments. All mixtures were analyzed for 4-d lethality, and 2 of the 6 mixtures (permethrin-bifenthrin and permethrin-cyfluthrin) were tested for subchronic 10-d lethality and sublethal effects on swimming motility and growth. Mixtures were initially analyzed for interactions using regression analyses, and subsequently compared with the additive models of concentration addition and independent action to further characterize mixture responses. Negative interactions (antagonistic) were significant in 2 of the 6 mixtures tested, namely cyfluthrin-bifenthrin and cyfluthrin-permethrin, but only on the acute 4-d lethality endpoint. In both cases mixture responses fell between the additive models of concentration addition and independent action. All other mixtures were additive across 4-d lethality, and bifenthrin-permethrin and cyfluthrin-permethrin were also additive in terms of subchronic 10-d lethality and sublethal responses. Environ Toxicol Chem 2016;35:2542-2549. © 2016 SETAC.

  14. Heat transfer during condensation of steam from steam-gas mixtures in the passive safety systems of nuclear power plants

    NASA Astrophysics Data System (ADS)

    Portnova, N. M.; Smirnov, Yu B.

    2017-11-01

    A theoretical model for calculating heat transfer during condensation of multicomponent vapor-gas mixtures on vertical surfaces, based on film theory and the heat and mass transfer analogy, is proposed. Calculations were performed for the conditions implemented in experimental studies of heat transfer during condensation of steam-gas mixtures in the passive safety systems of PWR-type reactors of different designs. Calculated values of heat transfer coefficients were obtained for condensation of steam-air, steam-air-helium and steam-air-hydrogen mixtures at pressures of 0.2 to 0.6 MPa and of a steam-nitrogen mixture at pressures of 0.4 to 2.6 MPa. The composition of mixtures and the vapor-to-surface temperature difference were varied within wide limits. Tube length ranged from 0.65 to 9.79 m. The condensation of all steam-gas mixtures took place in a laminar-wave flow mode of the condensate film with turbulent free convection in the diffusion boundary layer. The heat transfer coefficients obtained by calculation using the proposed model are in good agreement with the considered experimental data for both the binary and ternary mixtures.

  15. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    PubMed Central

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-01-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements. PMID:27112127
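
    The bounding strategy described above, optimizing an unknown composition feature subject to measurement constraints, can be illustrated with a toy linear program; the species, the single "measurement", and the property of interest below are invented and far simpler than the heparan sulfate case.

        import numpy as np
        from scipy.optimize import linprog

        # toy mixture of four hypothetical chain species carrying 0, 1, 2, or 3 sulfates
        sulfates = np.array([0.0, 1.0, 2.0, 3.0])

        # constraints on the unknown species fractions x: they sum to 1,
        # and the measured average sulfate level per chain is 1.6
        A_eq = np.vstack([np.ones(4), sulfates])
        b_eq = np.array([1.0, 1.6])

        # property of interest: the fraction of chains carrying at least two sulfates
        c = np.array([0.0, 0.0, 1.0, 1.0])

        lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)     # minimise the property
        hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)    # maximise the property
        print(lo.fun, -hi.fun)   # bounds on the property consistent with the measurements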

  16. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate.

    PubMed

    Pradines, Joël R; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-26

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  17. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    NASA Astrophysics Data System (ADS)

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  18. Device and method for determining freezing points

    NASA Technical Reports Server (NTRS)

    Mathiprakasam, Balakrishnan (Inventor)

    1986-01-01

    A freezing point method and device (10) are disclosed. The method and device pertain to an inflection-point technique for determining the freezing points of mixtures. In both the method and the device (10), the mixture is cooled to a point below its anticipated freezing point and then warmed at a substantially linear rate. During the warming process, the rate of increase of the mixture's temperature is monitored by, for example, a thermocouple (28), with the thermocouple output signal being amplified and differentiated by a differentiator (42). The rate-of-temperature-increase data are analyzed and the peak rate of increase is identified. In the preferred device (10), a computer (22) is used to analyze the rate-of-temperature-increase data following the warming process. Once the maximum rate of increase is identified, the corresponding temperature of the mixture is located and taken as substantially equal to the freezing point of the mixture. In a preferred device (10), the computer (22), in addition to collecting the temperature and rate-of-change-of-temperature data, controls a programmable power supply (14) to provide a predetermined amount of cooling and warming current to thermoelectric modules (56).
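
    A rough sketch of the inflection-point idea follows, using a synthetic warming curve rather than the patented device logic: differentiate the temperature trace recorded during linear warming and read off the temperature at which the warming rate peaks.

      # Illustrative only: synthetic warming curve with a step-like recovery
      # around the phase change; the freezing point in this example is near -10 C.
      import numpy as np

      t = np.linspace(0.0, 600.0, 2001)           # time, s
      T = -30.0 + 0.08 * t - 8.0 / (1.0 + np.exp((t - 300.0) / 20.0))

      dTdt = np.gradient(T, t)                    # numerical warming rate
      i_peak = np.argmax(dTdt)                    # maximum rate of temperature rise
      print(f"estimated freezing point ~ {T[i_peak]:.1f} C")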

  19. Artificial neural network and classical least-squares methods for neurotransmitter mixture analysis.

    PubMed

    Schulze, H G; Greek, L S; Gorzalka, B B; Bree, A V; Blades, M W; Turner, R F

    1995-02-01

    Identification of individual components in biological mixtures can be a difficult problem regardless of the analytical method employed. In this work, Raman spectroscopy was chosen as a prototype analytical method because of its inherent versatility and applicability to aqueous media, making it useful for the study of biological samples. Artificial neural networks (ANNs) and the classical least-squares (CLS) method were used to identify and quantify the Raman spectra of small-molecule neurotransmitters and mixtures of such molecules. The transfer functions used by a network, as well as the architecture of a network, played an important role in the ability of the network to identify the Raman spectra of individual neurotransmitters and of neurotransmitter mixtures. Specifically, networks using sigmoid and hyperbolic tangent transfer functions generalized better from the mixtures in the training data set to those in the testing data sets than networks using sine functions. Networks with connections that permit local processing of inputs generally performed better than other networks on all the testing data sets, and better than the CLS method of curve fitting on novel spectra of some neurotransmitters. The CLS method was found to perform well on noisy, shifted, and difference spectra.
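
    The classical least-squares step can be sketched as follows: a measured mixture spectrum is modeled as a non-negative linear combination of pure-component reference spectra, and the fitted coefficients estimate relative concentrations. The Gaussian "bands" below stand in for real Raman reference spectra; all values are made up.

      import numpy as np
      from scipy.optimize import nnls

      wavenumber = np.linspace(400, 1800, 700)

      def band(center, width):
          """Synthetic Gaussian band standing in for a Raman feature."""
          return np.exp(-0.5 * ((wavenumber - center) / width) ** 2)

      # hypothetical reference spectra for three components
      K = np.column_stack([
          band(750, 15) + 0.5 * band(1240, 20),    # component A
          band(980, 12) + 0.8 * band(1440, 25),    # component B
          band(1100, 18) + 0.3 * band(1600, 22),   # component C
      ])

      true_conc = np.array([0.2, 0.5, 0.3])
      mixture = K @ true_conc + 0.01 * np.random.default_rng(0).normal(size=wavenumber.size)

      conc, _ = nnls(K, mixture)                   # non-negative least squares
      print("estimated concentrations:", np.round(conc, 3))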

  20. A novel property of DNA - as a bioflotation reagent in mineral processing.

    PubMed

    Vasanthakumar, Balasubramanian; Ravishankar, Honnavar; Subramanian, Sankaran

    2012-01-01

    Environmental concerns regarding the use of certain chemicals in the froth flotation of minerals have led investigators to explore biological entities as potential substitutes for the reagents in vogue. Despite the fact that several microorganisms have been used for the separation of a variety of mineral systems, a detailed characterization of the biochemical molecules involved therein has not been reported so far. In this investigation, the selective flotation of sphalerite from a sphalerite-galena mineral mixture has been achieved using the cellular components of Bacillus species. The key constituent primarily responsible for the flotation of sphalerite has been identified as DNA, which functions as a bio-collector. Furthermore, using reconstitution studies, the obligatory need for the presence of non-DNA components as bio-depressants for galena has been demonstrated. A probable model involving these entities in the selective flotation of sphalerite from the mineral mixture has been discussed.

  1. A Novel Property of DNA – As a Bioflotation Reagent in Mineral Processing

    PubMed Central

    Vasanthakumar, Balasubramanian; Ravishankar, Honnavar; Subramanian, Sankaran

    2012-01-01

    Environmental concerns regarding the use of certain chemicals in the froth flotation of minerals have led investigators to explore biological entities as potential substitutes for the reagents in vogue. Despite the fact that several microorganisms have been used for the separation of a variety of mineral systems, a detailed characterization of the biochemical molecules involved therein has not been reported so far. In this investigation, the selective flotation of sphalerite from a sphalerite-galena mineral mixture has been achieved using the cellular components of Bacillus species. The key constituent primarily responsible for the flotation of sphalerite has been identified as DNA, which functions as a bio-collector. Furthermore, using reconstitution studies, the obligatory need for the presence of non-DNA components as bio-depressants for galena has been demonstrated. A probable model involving these entities in the selective flotation of sphalerite from the mineral mixture has been discussed. PMID:22768298

  2. Chloramination of Concentrated Drinking Water for Disinfection Byproduct Mixtures Creation- Indianapolis

    EPA Science Inventory

    Complex mixtures of disinfection by-products (DBPs) are formed when the disinfectant oxidizes constituents (e.g., natural organic matter (NOM) and organic pollutants) found in the source water. Since 1974, over 600 DBPs have been identified in drinking water. Despite intense iden...

  3. Nature and prevalence of non-additive toxic effects in industrially relevant mixtures of organic chemicals.

    PubMed

    Parvez, Shahid; Venkataraman, Chandra; Mukherji, Suparna

    2009-06-01

    The concentration addition (CA) and independent action (IA) models are widely used for predicting mixture toxicity from the mixture composition and the dose-response profiles of the individual components. However, predictions based on these models may be inaccurate due to interaction among mixture components. In this work, the nature and prevalence of non-additive effects were explored for binary, ternary, and quaternary mixtures composed of hydrophobic organic compounds (HOCs). The toxicity of each individual component and mixture was determined using the Vibrio fischeri bioluminescence inhibition assay. For each combination of chemicals specified by the 2^n factorial design, the percent deviation of the predicted toxic effect from the measured value was used to characterize mixtures as synergistic (positive deviation) or antagonistic (negative deviation). An arbitrary classification scheme was proposed based on the magnitude of deviation (d): additive (d ≤ 10%, class-I), and moderately (10% < d ≤ 30%, class-II), highly (30% < d ≤ 50%, class-III), and very highly (d > 50%, class-IV) antagonistic/synergistic. Naphthalene, n-butanol, o-xylene, catechol, and p-cresol led to synergism in mixtures, while 1,2,4-trimethylbenzene and 1,3-dimethylnaphthalene contributed to antagonism. Most of the mixtures showed additive or antagonistic effects. Synergism was prominent in some of the mixtures, such as pulp and paper, textile dyes, and a mixture composed of polynuclear aromatic hydrocarbons. The organic chemical industry mixture showed the highest abundance of antagonism and the least synergism. Mixture toxicity was found to depend on the partition coefficient, the molecular connectivity index, and the relative concentration of the components.
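
    A hedged sketch of the concentration-addition prediction and of the deviation-based classification described above is given below, assuming Hill-type dose-response curves for the single chemicals; all parameter values are hypothetical and not from the study.

      import numpy as np
      from scipy.optimize import brentq

      def inv_effect(E, ec50, h):
          """Concentration of a single chemical producing effect E (Hill model)."""
          return ec50 * (E / (1.0 - E)) ** (1.0 / h)

      def ca_prediction(conc, ec50, h):
          """Effect level at which the toxic units of the mixture sum to one."""
          f = lambda E: sum(c / inv_effect(E, e, s)
                            for c, e, s in zip(conc, ec50, h)) - 1.0
          return brentq(f, 1e-6, 1.0 - 1e-6)

      def classify(measured, predicted):
          d = 100.0 * (measured - predicted) / predicted     # percent deviation
          kind = "synergistic" if d > 0 else "antagonistic"
          mag = abs(d)
          cls = ("additive (class-I)" if mag <= 10 else
                 "moderately %s (class-II)" % kind if mag <= 30 else
                 "highly %s (class-III)" % kind if mag <= 50 else
                 "very highly %s (class-IV)" % kind)
          return d, cls

      pred = ca_prediction(conc=[2.0, 5.0], ec50=[6.0, 20.0], h=[1.2, 0.9])
      print(classify(measured=0.35, predicted=pred))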

  4. Petroleum Diesel Fuel and Linseed Oil Mixtures as Engine Fuels

    NASA Astrophysics Data System (ADS)

    Markov, V. A.; Kamaltdinov, V. G.; Savastenko, A. A.

    2018-01-01

    The use of alternative biofuels in automotive diesel engines is a topical problem, and the exhaust-gas toxicity of such engines operating on biofuel has been insufficiently studied. The aim of the study is to determine exhaust-gas toxicity indicators when mixtures of petroleum diesel fuel and linseed oil are used as fuel for automotive diesel engines. Physical and chemical properties of linseed oil and of its mixtures with petroleum diesel fuel are considered. Experimental studies of the D-245.12C diesel engine are carried out on mixtures of diesel fuel and corn oil of different compositions. The possibility of improving exhaust toxicity indicators by using these mixtures as automotive engine fuel is shown.

  5. Gaseous emissions from the combustion of a waste mixture containing a high concentration of N2O.

    PubMed

    Dong, Changqing; Yang, Yongping; Zhang, Junjiao; Lu, Xuefeng

    2009-01-01

    This paper is focused on reducing the emissions from the combustion of a waste mixture containing a high concentration of N2O. A rate model and an equilibrium model were used to predict gaseous emissions from the combustion of the mixture. The influences of temperature and methane were considered, and the experimental research was carried out in a tubular reactor and a pilot combustion furnace. The results showed that, for the waste mixture, the combustion temperature should be in the range of 950-1100 degrees C and the gas residence time should be 2 s or longer to reduce emissions.

  6. Mixtures of charged colloid and neutral polymer: Influence of electrostatic interactions on demixing and interfacial tension

    NASA Astrophysics Data System (ADS)

    Denton, Alan R.; Schmidt, Matthias

    2005-06-01

    The equilibrium phase behavior of a binary mixture of charged colloids and neutral, nonadsorbing polymers is studied within free-volume theory. A model mixture of charged hard-sphere macroions and ideal, coarse-grained, effective-sphere polymers is mapped first onto a binary hard-sphere mixture with nonadditive diameters and then onto an effective Asakura-Oosawa model [S. Asakura and F. Oosawa, J. Chem. Phys. 22, 1255 (1954)]. The effective model is defined by a single dimensionless parameter—the ratio of the polymer diameter to the effective colloid diameter. For high salt-to-counterion concentration ratios, a free-volume approximation for the free energy is used to compute the fluid phase diagram, which describes demixing into colloid-rich (liquid) and colloid-poor (vapor) phases. Increasing the range of electrostatic interactions shifts the demixing binodal toward higher polymer concentration, stabilizing the mixture. The enhanced stability is attributed to a weakening of polymer depletion-induced attraction between electrostatically repelling macroions. Comparison with predictions of density-functional theory reveals a corresponding increase in the liquid-vapor interfacial tension. The predicted trends in phase stability are consistent with observed behavior of protein-polysaccharide mixtures in food colloids.
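
    As background to the effective Asakura-Oosawa (AO) mapping mentioned above, a short sketch of the standard AO depletion pair potential follows; it is not taken from the paper, and the size ratio q and polymer density used here are illustrative only.

      import numpy as np

      def ao_potential(r, sigma_c, q, rho_p):
          """Standard AO depletion potential (in k_B T) between two hard-sphere
          colloids of diameter sigma_c, induced by ideal polymers of diameter
          q * sigma_c at number density rho_p; r is center-to-center separation."""
          R = 0.5 * sigma_c * (1.0 + q)                 # depletion-sphere radius
          v_overlap = np.where(
              (r >= sigma_c) & (r <= 2.0 * R),
              (4.0 * np.pi / 3.0) * R**3
              * (1.0 - 0.75 * r / R + r**3 / (16.0 * R**3)),
              0.0,
          )
          return np.where(r < sigma_c, np.inf, -rho_p * v_overlap)

      r = np.linspace(1.0, 1.6, 7)                      # in units of sigma_c
      print(ao_potential(r, sigma_c=1.0, q=0.5, rho_p=0.4))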

  7. Assessment of combined antiandrogenic effects of binary parabens mixtures in a yeast-based reporter assay.

    PubMed

    Ma, Dehua; Chen, Lujun; Zhu, Xiaobiao; Li, Feifei; Liu, Cong; Liu, Rui

    2014-05-01

    To date, toxicological studies of endocrine disrupting chemicals (EDCs) have typically focused on single chemical exposures and their associated effects. However, exposure to EDC mixtures in the environment is common. Antiandrogens represent a group of EDCs that draw increasing attention due to the demasculinization and sexual disruption they cause in aquatic organisms. Although there are a number of in vivo and in vitro studies investigating the combined effects of antiandrogen mixtures, these studies are mainly on selected model compounds such as flutamide, procymidone, and vinclozolin. The aim of the present study is to investigate the combined antiandrogenic effects of parabens, which are widely used antiandrogens in industrial and domestic commodities. A yeast-based human androgen receptor (hAR) assay (YAS) was applied to assess the antiandrogenic activities of n-propylparaben (nPrP), iso-propylparaben (iPrP), methylparaben (MeP), and 4-n-pentylphenol (PeP), as well as binary mixtures of nPrP with each of the other three antiandrogens. All four compounds exhibited antiandrogenic activity via the hAR. A linear interaction model was applied to quantitatively analyze the interaction between nPrP and each of the other three antiandrogens. The isoboles method was modified to show how the combined effects varied as the concentrations of the mixed antiandrogens were changed. Graphs were constructed to show isoeffective curves of the three binary mixtures based on the fitted linear interaction model and to evaluate whether the interaction of the mixed antiandrogens was synergistic or antagonistic. The combined effect of equimolar combinations of the three mixtures was also considered with the nonlinear isoboles method. The main-effect and interaction-effect parameters in the linear interaction models of the three mixtures were different from zero. The results showed that any two antiandrogens in their binary mixtures tended to exert equal antiandrogenic activity in the linear concentration ranges. The antiandrogenicity of a binary mixture as a function of the concentration of nPrP was fitted to a sigmoidal model when the concentrations of the other antiandrogens (iPrP, MeP, and PeP) in the mixture were lower than the AR saturation concentrations. Some concave isoboles above the additivity line appeared in all three mixtures. There were synergistic effects in the binary mixture of nPrP and MeP at low concentrations in the linear concentration ranges. Interestingly, when the antiandrogen concentrations approached saturation, the interactions between the chemicals were antagonistic for all three mixtures tested. When the toxicity of the three mixtures was assessed using nonlinear isoboles, only antagonism was observed for equimolar combinations of nPrP and iPrP as the concentrations were increased from the no-observed-effect concentration (NOEC) to the 80% effective concentration. In addition, the interactions changed from synergistic to antagonistic as effective concentrations increased in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP. The combined effects of the three binary antiandrogen mixtures in the linear ranges were successfully evaluated by curve fitting and isoboles. The combined effects of specific binary mixtures varied depending on the concentrations of the chemicals in the mixtures. At low concentrations in the linear concentration ranges, there was a synergistic interaction in the binary mixture of nPrP and MeP. The interaction tended to be antagonistic as the antiandrogens approached saturation concentrations in mixtures of nPrP with each of the other three antiandrogens. A synergistic interaction was also found in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP, at low concentrations with the nonlinear isoboles method. The mixture activities of the binary antiandrogens tended towards antagonism at high concentrations and synergism at low concentrations.
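
    A minimal sketch of the isobole idea used above, with made-up numbers: for a chosen effect level, the Loewe-additivity isobole is the straight line c1/ECx1 + c2/ECx2 = 1, and an isoeffective combination lying above that line (interaction index greater than 1) indicates antagonism, while one below it indicates synergism.

      def interaction_index(c1, c2, ecx1, ecx2):
          """Sum of toxic units of an isoeffective combination (1 = additive)."""
          return c1 / ecx1 + c2 / ecx2

      # single-compound concentrations giving 50% response (hypothetical values)
      ecx_nPrP, ecx_MeP = 8.0, 120.0
      # an observed combination producing the same 50% response (hypothetical)
      idx = interaction_index(c1=2.0, c2=45.0, ecx1=ecx_nPrP, ecx2=ecx_MeP)
      print(f"interaction index = {idx:.2f}",
            "antagonistic" if idx > 1 else "synergistic" if idx < 1 else "additive")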

  8. Development and Implementation of Metrics for Identifying Military Impulse Noise

    DTIC Science & Technology

    2010-09-01

    [Indexed excerpt; only fragments of the report text are available. Abbreviations include: ...False Negative Rate; FP, False Positive; FPR, False Positive Rate; FtC, Fort Carson, CO; GIS, Geographic Information System; GMM, Gaussian mixture model. Figure 8 ("Plot of typical neuron activation", number of data points mapped to bin vs. bin number) is referenced. The signal metrics and the waveform itself were saved and transmitted to the home base, and there is also a provision to download the entire recorded waveform.]
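
    The GMM named in this record can be illustrated with a short sketch: fit a two-component Gaussian mixture to a few scalar waveform metrics and flag the high-level component as impulse events. The features and data below are synthetic, not from the report.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      # synthetic metrics: [peak level (dB), kurtosis]; two underlying event types
      background = rng.normal([70.0, 3.0], [4.0, 0.5], size=(300, 2))
      impulses = rng.normal([105.0, 12.0], [5.0, 2.0], size=(60, 2))
      X = np.vstack([background, impulses])

      gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
      labels = gmm.fit_predict(X)
      # take the component with the higher mean peak level as "impulse"
      impulse_comp = int(np.argmax(gmm.means_[:, 0]))
      print("flagged as impulse:", int(np.sum(labels == impulse_comp)))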

  9. State of the art and future needs in S.I. engine combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maly, R.R.

    1994-12-31

    The paper briefly reviews the state of the art in SI engine combustion by addressing its main features: mixture formation, ignition, homogeneous combustion, pollutant formation, knock, and engine modeling. Necessary links between fundamental and practical work are clarified and discussed along with advanced diagnostics and simulation tools. The needs for further work are identified, the most important being the integration of all fundamental and practical resources to meet R&D requirements for future engines.

  10. Lagged kernel machine regression for identifying time windows of susceptibility to exposures of complex mixtures.

    PubMed

    Liu, Shelley H; Bobb, Jennifer F; Lee, Kyu Ha; Gennings, Chris; Claus Henn, Birgit; Bellinger, David; Austin, Christine; Schnaas, Lourdes; Tellez-Rojo, Martha M; Hu, Howard; Wright, Robert O; Arora, Manish; Coull, Brent A

    2018-07-01

    The impact of neurotoxic chemical mixtures on children's health is a critical public health concern. It is well known that during early life, toxic exposures may impact cognitive function during critical time intervals of increased vulnerability, known as windows of susceptibility. Knowledge of time windows of susceptibility can help inform treatment and prevention strategies, as chemical mixtures may affect a developmental process that is operating at a specific life phase. There are several statistical challenges in estimating the health effects of time-varying exposures to multi-pollutant mixtures, such as multi-collinearity among the exposures both within and across time points, and complex exposure-response relationships. To address these concerns, we develop a flexible statistical method, called lagged kernel machine regression (LKMR). LKMR identifies critical exposure windows of chemical mixtures and accounts for complex non-linear and non-additive effects of the mixture at any given exposure window. Specifically, LKMR estimates how the effects of a mixture of exposures change with the exposure time window using a Bayesian formulation of a grouped, fused lasso penalty within a kernel machine regression (KMR) framework. A simulation study demonstrates the performance of LKMR under realistic exposure-response scenarios, and demonstrates large gains over approaches that consider each time window separately, particularly when serial correlation among the time-varying exposures is high. Furthermore, LKMR demonstrates gains over another approach that inputs all time-specific chemical concentrations together into a single KMR. We apply LKMR to estimate associations between neurodevelopment and metal mixtures in Early Life Exposures in Mexico and Neurotoxicology, a prospective cohort study of child health in Mexico City.
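
    For contrast with LKMR, a much-simplified sketch of the baseline the abstract compares against (a separate kernel machine per exposure window) is shown below; LKMR itself additionally shares information across windows through the grouped, fused lasso penalty, which this sketch does not implement. Data and parameters are synthetic.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n, n_metals, n_windows = 200, 3, 4
      # exposures: subjects x windows x metals, serially correlated across windows
      X = rng.normal(size=(n, n_windows, n_metals))
      X = 0.6 * X + 0.4 * X[:, [0], :]
      # outcome depends non-linearly on the mixture only in window 2
      y = np.tanh(X[:, 2, 0] * X[:, 2, 1]) - 0.5 * X[:, 2, 2] + 0.3 * rng.normal(size=n)

      for w in range(n_windows):
          model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
          score = cross_val_score(model, X[:, w, :], y, cv=5, scoring="r2").mean()
          print(f"window {w}: cross-validated R^2 = {score:.2f}")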

  11. Modeling Grade IV Gas Emboli using a Limited Failure Population Model with Random Effects

    NASA Technical Reports Server (NTRS)

    Thompson, Laura A.; Conkin, Johnny; Chhikara, Raj S.; Powell, Michael R.

    2002-01-01

    Venous gas emboli (VGE) (gas bubbles in venous blood) are associated with an increased risk of decompression sickness (DCS) in hypobaric environments. A high grade of VGE can be a precursor to serious DCS. In this paper, we model time to Grade IV VGE considering a subset of individuals assumed to be immune from experiencing VGE. Our data contain monitoring test results from subjects undergoing up to 13 denitrogenation test procedures prior to exposure to a hypobaric environment. The onset time of Grade IV VGE is recorded as contained within certain time intervals. We fit a parametric (lognormal) mixture survival model to the interval- and right-censored data to account for the possibility of a subset of "cured" individuals who are immune to the event. Our model contains random subject effects to account for correlations between repeated measurements on a single individual. Model assessments and cross-validation indicate that this limited failure population mixture model is an improvement over a model that does not account for the potential of a fraction of cured individuals. We also evaluated some alternative mixture models. Predictions from the best fitted mixture model indicate that the actual process is reasonably approximated by a limited failure population model.
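
    A simplified, hedged sketch of a limited-failure-population (cure-fraction) lognormal model for interval- and right-censored onset times follows; the random subject effects used in the paper are omitted, and the data are synthetic.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(2)
      n, true_cure, mu, sigma = 300, 0.3, 3.0, 0.5
      cured = rng.random(n) < true_cure
      t = np.exp(mu + sigma * rng.normal(size=n))          # latent onset times
      follow_up = 40.0
      observed = (~cured) & (t < follow_up)
      left = np.where(observed, np.floor(t / 5.0) * 5.0, follow_up)   # interval grid
      right = left + 5.0

      def neg_log_lik(params):
          m, log_s, logit_p = params
          s, p = np.exp(log_s), 1.0 / (1.0 + np.exp(-logit_p))        # p = cure fraction
          F = lambda x: norm.cdf((np.log(np.maximum(x, 1e-12)) - m) / s)
          lik = np.where(observed,
                         (1.0 - p) * (F(right) - F(left)),            # event in (L, R]
                         p + (1.0 - p) * (1.0 - F(left)))             # right-censored
          return -np.sum(np.log(np.maximum(lik, 1e-300)))

      fit = minimize(neg_log_lik, x0=[np.log(20.0), 0.0, 0.0], method="Nelder-Mead")
      m_hat, s_hat = fit.x[0], np.exp(fit.x[1])
      p_hat = 1.0 / (1.0 + np.exp(-fit.x[2]))
      print(f"mu={m_hat:.2f}  sigma={s_hat:.2f}  cure fraction={p_hat:.2f}")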

  12. A globally accurate theory for a class of binary mixture models

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana G.; Stell, G.

    The self-consistent Ornstein-Zernike approximation results for the 3D Ising model are used to obtain phase diagrams for binary mixtures described by decorated models, yielding the plait point, binodals, and closed-loop coexistence curves for the models proposed by Widom, Clark, Neece, and Wheeler. The results are in good agreement with series expansions and experiments.

  13. Estimating abundance while accounting for rarity, correlated behavior, and other sources of variation in counts

    USGS Publications Warehouse

    Dorazio, Robert M.; Martin, Julien; Edwards, Holly H.

    2013-01-01

    The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.

  14. Estimating abundance while accounting for rarity, correlated behavior, and other sources of variation in counts.

    PubMed

    Dorazio, Robert M; Martin, Julien; Edwards, Holly H

    2013-07-01

    The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
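
    To make the model structure concrete, here is a hedged sketch of simulating counts from the kind of extension described above: a hurdle (zero-inflated) abundance component plus beta-binomial detection that adds extra-binomial variation among repeat surveys. Parameter values are illustrative only, not estimates from the manatee study.

      import numpy as np

      rng = np.random.default_rng(3)
      n_sites, n_surveys = 150, 4
      psi, lam = 0.4, 6.0                     # occupancy prob., mean abundance if occupied
      mean_det, overdisp = 0.5, 10.0          # mean detection prob., beta "sample size"

      occupied = rng.random(n_sites) < psi
      # zero-truncated Poisson abundance at occupied sites (simple rejection step)
      N = np.zeros(n_sites, dtype=int)
      for i in np.flatnonzero(occupied):
          draw = 0
          while draw == 0:
              draw = rng.poisson(lam)
          N[i] = draw

      # beta-binomial detection: survey-specific detection probabilities
      a, b = mean_det * overdisp, (1.0 - mean_det) * overdisp
      p = rng.beta(a, b, size=(n_sites, n_surveys))
      counts = rng.binomial(N[:, None], p)

      print("sites with all-zero counts:", int(np.sum(counts.sum(axis=1) == 0)))
      print("max count observed:", counts.max())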

  15. Bayesian Finite Mixtures for Nonlinear Modeling of Educational Data.

    ERIC Educational Resources Information Center

    Tirri, Henry; And Others

    A Bayesian approach for finding latent classes in data is discussed. The approach uses finite mixture models to describe the underlying structure in the data and demonstrates that the possibility of using full joint probability models raises interesting new prospects for exploratory data analysis. The concepts and methods discussed are illustrated…
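
    A rough stand-in for the latent-class idea: fit finite Gaussian mixtures with different numbers of classes to synthetic data and compare them by BIC. (The cited work uses full Bayesian joint probability models rather than BIC; this sketch only illustrates the mixture-based clustering step.)

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(4)
      # synthetic two-dimensional "score" data drawn from three latent classes
      scores = np.concatenate([
          rng.normal(45, 6, size=(120, 2)),
          rng.normal(65, 5, size=(150, 2)),
          rng.normal(85, 4, size=(80, 2)),
      ])

      bics = {k: GaussianMixture(n_components=k, random_state=0).fit(scores).bic(scores)
              for k in range(1, 6)}
      best_k = min(bics, key=bics.get)
      print("BIC by number of classes:", {k: round(v) for k, v in bics.items()})
      print("selected number of latent classes:", best_k)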

  16. Distinguishing Continuous and Discrete Approaches to Multilevel Mixture IRT Models: A Model Comparison Perspective

    ERIC Educational Resources Information Center

    Zhu, Xiaoshu

    2013-01-01

    The current study introduced a general modeling framework, multilevel mixture IRT (MMIRT) which detects and describes characteristics of population heterogeneity, while accommodating the hierarchical data structure. In addition to introducing both continuous and discrete approaches to MMIRT, the main focus of the current study was to distinguish…

  17. Mixture Distribution Latent State-Trait Analysis: Basic Ideas and Applications

    ERIC Educational Resources Information Center

    Courvoisier, Delphine S.; Eid, Michael; Nussbeck, Fridtjof W.

    2007-01-01

    Extensions of latent state-trait models for continuous observed variables to mixture latent state-trait models with and without covariates of change are presented that can separate individuals differing in their occasion-specific variability. An empirical application to the repeated measurement of mood states (N = 501) revealed that a model with 2…

  18. Kinetic model for the vibrational energy exchange in flowing molecular gas mixtures. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Offenhaeuser, F.

    1987-01-01

    The present study is concerned with the development of a computational model for the description of vibrational energy exchange in flowing gas mixtures, taking into account a given number of energy levels for each vibrational degree of freedom. An arbitrary number of energy levels can be selected; the presented model uses values in the range from 10 to approximately 40. The distribution of energy over these levels can differ from the equilibrium distribution. The kinetic model developed can be employed for arbitrary gaseous mixtures with an arbitrary number of vibrational degrees of freedom for each type of gas. The application of the model to CO2-H2O-N2-O2-He mixtures is discussed. The relations obtained can be used to study the suitability of radiative transitions involving the CO2 molecule for laser applications. The computational results provided by the model are found to agree very well with experimental data obtained for a CO2 laser. Possibilities for the activation of 16-micron and 14-micron lasers are considered.

  19. MODEL OF ADDITIVE EFFECTS OF MIXTURES OF NARCOTIC CHEMICALS

    EPA Science Inventory

    Biological effects data with single chemicals are far more abundant than with mixtures. Yet, environmental exposures to chemical mixtures, for example near hazardous waste sites or nonpoint sources, are very common and using test data from single chemicals to approximate effects o...

  20. Thermodynamic properties of model CdTe/CdSe mixtures

    DOE PAGES

    van Swol, Frank; Zhou, Xiaowang W.; Challa, Sivakumar R.; ...

    2015-02-20

    We report on the thermodynamic properties of binary compound mixtures of model group II–VI semiconductors. We use the recently introduced Stillinger–Weber Hamiltonian to model binary mixtures of CdTe and CdSe. We use molecular dynamics simulations to calculate the volume and enthalpy of mixing as a function of mole fraction. The lattice parameter of the mixture closely follows Vegard's law: a linear relation. This implies that the excess volume is a cubic function of mole fraction. A connection is made with hard-sphere models of mixed fcc and zincblende structures. We found that the potential energy exhibits a positive deviation from ideal solution behaviour; the excess enthalpy is nearly independent of the temperatures studied (300 and 533 K) and is well described by a simple cubic function of the mole fraction. Using a regular solution approach (combining non-ideal behaviour for the enthalpy with ideal solution behaviour for the entropy of mixing), we arrive at the Gibbs free energy of the mixture. The Gibbs free energy results indicate that CdTe and CdSe mixtures exhibit phase separation. The upper consolute temperature is found to be 335 K. Finally, we provide the surface energy as a function of composition; it roughly follows ideal solution theory, but with a negative deviation (negative excess surface energy). This indicates that alloying increases stability, even for nano-particles.
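
    The regular-solution construction can be sketched as follows: combine a cubic excess enthalpy with the ideal entropy of mixing and find the upper consolute temperature as the lowest temperature at which the Gibbs free energy of mixing is convex for all compositions. The enthalpy coefficients below are made up for illustration, not the fitted CdTe/CdSe values.

      import numpy as np

      R = 8.314                                   # J / (mol K)

      def h_excess(x):
          """Illustrative cubic excess enthalpy (J/mol), zero at x = 0 and x = 1."""
          return x * (1.0 - x) * (5200.0 + 800.0 * (1.0 - 2.0 * x))

      def g_mix(x, T):
          """Gibbs free energy of mixing per mole under the regular-solution model."""
          return h_excess(x) + R * T * (x * np.log(x) + (1.0 - x) * np.log(1.0 - x))

      x = np.linspace(1e-4, 1.0 - 1e-4, 2001)
      for T in np.arange(250.0, 420.0, 1.0):
          curvature = np.gradient(np.gradient(g_mix(x, T), x), x)
          if np.all(curvature[5:-5] > 0.0):       # convex everywhere: fully miscible
              print(f"upper consolute temperature ~ {T:.0f} K")
              break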
