Structure-reactivity modeling using mixture-based representation of chemical reactions.
Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre
2017-09-01
We describe a novel approach to reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either the concatenation of product and reactant descriptors or the difference between the descriptors of products and reactants. This reaction representation does not require explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimation of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control applicability domain approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
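To make the representation concrete, here is a minimal sketch of building the concatenation- and difference-based reaction vectors from per-molecule descriptors. The descriptor function and feature length are placeholder assumptions, not the authors' SiRMS implementation; summing per-molecule vectors is one common convention for encoding a mixture.

import numpy as np

def mixture_vector(molecules, descriptor_fn, n_features):
    # Encode a mixture (all reactants, or all products) as the sum of
    # per-molecule descriptor vectors.
    vec = np.zeros(n_features)
    for mol in molecules:
        vec += descriptor_fn(mol)
    return vec

def reaction_vector(reactants, products, descriptor_fn, n_features, mode="diff"):
    r = mixture_vector(reactants, descriptor_fn, n_features)
    p = mixture_vector(products, descriptor_fn, n_features)
    if mode == "diff":
        return p - r                      # difference variant
    return np.concatenate([r, p])         # concatenated variant

Either vector can then be fed to any standard regression learner to model, e.g., E2 rate constants.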
Different Approaches to Covariate Inclusion in the Mixture Rasch Model
ERIC Educational Resources Information Center
Li, Tongyun; Jiao, Hong; Macready, George B.
2016-01-01
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.
Toribo, S.G.; Gray, B.R.; Liang, S.
2011-01-01
The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
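For reference, the basic N-mixture model for site i and replicate visit j can be written as follows; the priors shown are illustrative choices for a Bayesian fit, not necessarily those used in the study.

\[ N_i \sim \mathrm{Poisson}(\lambda), \qquad y_{ij} \mid N_i \sim \mathrm{Binomial}(N_i,\, p), \qquad \lambda \sim \mathrm{Gamma}(a, b), \quad p \sim \mathrm{Beta}(1, 1) \]

Pseudo-replication undermines the closure assumption behind the binomial layer (the same N_i for every visit j), which is exactly the robustness question the simulations probe.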
Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry
Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna
2015-01-01
Mixture modeling of mass spectra is an approach with many potential applications including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite highlighting potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach had so far not been developed to a stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and demonstrate the improvements in peak detection efficiency obtained by Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
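A minimal sketch of the partition-then-fit idea using scikit-learn; the even splitting rule and fixed component counts are simplifications of the paper's automated partitioning, and the intensity-weighted resampling is one assumed way to turn a spectrum into a sample for EM fitting.

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_spectrum_gmm(mz, intensity, n_fragments=10, k_per_fragment=5):
    # Split the spectrum into fragments, fit a small GMM to each, then
    # pool component parameters into one mixture for the whole spectrum.
    rng = np.random.default_rng(0)
    bounds = np.linspace(0, len(mz), n_fragments + 1, dtype=int)
    means, sigmas, weights = [], [], []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        seg_mz, seg_int = mz[lo:hi], intensity[lo:hi]
        if seg_int.sum() <= 0:
            continue
        # Resample m/z proportionally to (non-negative) intensity so the
        # GMM sees the signal as a sample from a density.
        sample = rng.choice(seg_mz, size=2000, p=seg_int / seg_int.sum())
        gmm = GaussianMixture(n_components=k_per_fragment).fit(sample.reshape(-1, 1))
        means.extend(gmm.means_.ravel())
        sigmas.extend(np.sqrt(gmm.covariances_).ravel())
        # Rescale fragment weights by the fragment's share of total signal.
        weights.extend(gmm.weights_ * seg_int.sum() / intensity.sum())
    return np.array(means), np.array(sigmas), np.array(weights)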
MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS
Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...
A Mixtures-of-Trees Framework for Multi-Label Classification
Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos
2015-01-01
We propose a new probabilistic approach for multi-label classification that aims to represent the class posterior distribution P(Y|X). Our approach uses a mixture of tree-structured Bayesian networks, which can leverage the computational advantages of conditional tree-structured models and the abilities of mixtures to compensate for tree-structured restrictions. We develop algorithms for learning the model from data and for performing multi-label predictions using the learned model. Experiments on multiple datasets demonstrate that our approach outperforms several state-of-the-art multi-label classification methods. PMID:25927011
Ng, S K; McLachlan, G J
2003-04-15
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is by maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
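In this formulation, with failure types g = 1, ..., G and covariate vector x, the two pieces referred to above take the standard form (notation ours):

\[ \pi_g(x) = \frac{\exp(\beta_{g0} + \beta_g^{\top} x)}{\sum_{l=1}^{G} \exp(\beta_{l0} + \beta_l^{\top} x)}, \qquad h_g(t \mid x) = h_{g0}(t)\, \exp(\gamma_g^{\top} x) \]

where \pi_g(x) is the probability that failure is of type g (the logistic part) and h_g(t | x) is the conditional hazard for type g (the proportional hazards part); the semi-parametric method leaves the baseline hazards h_{g0}(t) unspecified.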
Mixture Modeling: Applications in Educational Psychology
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Hodis, Flaviu A.
2016-01-01
Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…
Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth
2011-01-01
Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, affecting the non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
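A compact way to write the extension (our notation): for site i with latent abundance N_i and visit j,

\[ N_i \sim \mathrm{Poisson}(\lambda), \qquad y_{ij} \mid N_i \sim \mathrm{BetaBinomial}(N_i,\, \alpha, \beta), \qquad p = \frac{\alpha}{\alpha + \beta}, \quad \rho = \frac{1}{\alpha + \beta + 1} \]

so that p is the mean detection probability and \rho captures the correlation among individual detections; setting \rho = 0 recovers the standard binomial mixture model.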
QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.
Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng
2018-05-01
Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and the mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model (coefficient of determination 0.9366, root mean square error 0.1345) predicted the 45 mixture toxicities, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.
Bayesian Finite Mixtures for Nonlinear Modeling of Educational Data.
ERIC Educational Resources Information Center
Tirri, Henry; And Others
A Bayesian approach for finding latent classes in data is discussed. The approach uses finite mixture models to describe the underlying structure in the data and demonstrates that the possibility of using full joint probability models raises interesting new prospects for exploratory data analysis. The concepts and methods discussed are illustrated…
ERIC Educational Resources Information Center
Zhu, Xiaoshu
2013-01-01
The current study introduced a general modeling framework, multilevel mixture IRT (MMIRT) which detects and describes characteristics of population heterogeneity, while accommodating the hierarchical data structure. In addition to introducing both continuous and discrete approaches to MMIRT, the main focus of the current study was to distinguish…
ERIC Educational Resources Information Center
Li, Ming; Harring, Jeffrey R.
2017-01-01
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…
Approaches to developing alternative and predictive toxicology based on PBPK/PD and QSAR modeling.
Yang, R S; Thomas, R S; Gustafson, D L; Campain, J; Benjamin, S A; Verhaar, H J; Mumtaz, M M
1998-01-01
Systematic toxicity testing, using conventional toxicology methodologies, of single chemicals and chemical mixtures is highly impractical because of the immense numbers of chemicals and chemical mixtures involved and the limited scientific resources. Therefore, the development of unconventional, efficient, and predictive toxicology methods is imperative. Using carcinogenicity as an end point, we present approaches for developing predictive tools for toxicologic evaluation of chemicals and chemical mixtures relevant to environmental contamination. Central to the approaches presented is the integration of physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) and quantitative structure-activity relationship (QSAR) modeling with focused mechanistically based experimental toxicology. In this development, molecular and cellular biomarkers critical to the carcinogenesis process are evaluated quantitatively between different chemicals and/or chemical mixtures. Examples presented include the integration of PBPK/PD and QSAR modeling with a time-course medium-term liver foci assay, molecular biology and cell proliferation studies, Fourier transform infrared spectroscopic analyses of DNA changes, and cancer modeling to assess and attempt to predict the carcinogenicity of the series of 12 chlorobenzene isomers. Also presented is an ongoing effort to develop and apply a similar approach to chemical mixtures using in vitro cell culture (Syrian hamster embryo cell transformation assay and human keratinocytes) methodologies and in vivo studies. The promise and pitfalls of these developments are elaborated. When successfully applied, these approaches may greatly reduce the animal usage, personnel, resources, and time required to evaluate the carcinogenicity of chemicals and chemical mixtures. PMID:9860897
Evaluating differential effects using regression interactions and regression mixture models
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results to those obtained using an interaction term in linear regression. The research questions that each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
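The contrast can be summarized in two equations (our notation). A regression interaction conditions the slope on an observed moderator z, while a regression mixture lets the slope vary across K latent classes:

\[ y = \beta_0 + \beta_1 x + \beta_2 z + \beta_3 x z + \varepsilon \qquad \text{vs.} \qquad y \mid (\text{class } k) = \beta_{0k} + \beta_{1k} x + \varepsilon_k, \quad \Pr(\text{class } k) = \pi_k \]

The interaction model tests a specific, a-priori moderator; the mixture searches for unobserved groups of respondents whose predictor-outcome relationships differ.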
A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.
Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.
1997-03-01
There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
Rafal Podlaski; Francis Roesch
2014-01-01
In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...
Toumi, Héla; Boumaiza, Moncef; Millet, Maurice; Radetski, Claudemir Marcos; Camara, Baba Issa; Felten, Vincent; Masfaraud, Jean-François; Férard, Jean-François
2018-04-19
We studied the combined acute effect (i.e., after 48 h) of deltamethrin (a pyrethroid insecticide) and malathion (an organophosphate insecticide) on Daphnia magna. Two approaches were used to examine the potential interaction effects of eight mixtures of deltamethrin and malathion: (i) calculation of the mixture toxicity index (MTI) and safety factor index (SFI) and (ii) response surface methodology coupled with an isobole-based statistical model (using a generalized linear model). According to the MTI and SFI calculations, one tested mixture was found to be additive while the two other tested mixtures were found to be non-additive (MTI) or antagonistic (SFI); these differences in index responses are only due to differences in the terminology associated with the two indexes. Through the surface response approach and isobologram analysis, we concluded that binary mixtures of deltamethrin and malathion have a significant antagonistic effect on D. magna immobilization after 48 h of exposure. The index approaches and the surface response approach with isobologram analysis are complementary. Calculation of the mixture toxicity index and safety factor index identifies the type of interaction for each tested mixture individually, while the surface response approach with isobologram analysis integrates all the data, providing a global outcome about the type of interactive effect. Only the surface response approach and isobologram analysis allowed the statistical assessment of the ecotoxicological interaction. Nevertheless, we recommend the use of both approaches (i) to identify the combined effects of contaminants and (ii) to improve risk assessment and environmental management.
Modeling of Complex Mixtures: JP-8 Toxicokinetics
2008-10-01
…generic tissue compartments in which we have combined diffusion limitation and deep tissue (global tissue model). We also applied a QSAR approach for… …necessary, to apply to the interaction of specific compounds with specific tissues. We have also applied a QSAR approach for estimating blood and tissue…
Subject terms: jet fuel, JP-8, PBPK modeling, complex mixtures, nonane, decane, naphthalene, QSAR, alternative fuels.
Mixture modelling for cluster analysis.
McLachlan, G J; Chang, S U
2004-10-01
Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
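A minimal illustration of this outright-clustering step with scikit-learn (an illustrative stand-in, not the authors' software): fit a g-component normal mixture, then assign each observation to the component with the highest estimated posterior probability.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                # stand-in for continuous data
g = 3                                        # pre-specified number of clusters

gmm = GaussianMixture(n_components=g, covariance_type="full").fit(X)
posterior = gmm.predict_proba(X)             # estimated posterior membership probabilities
labels = posterior.argmax(axis=1)            # i-th cluster = points assigned to component i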
Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.
Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun
2015-03-13
In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.
Marciano, Michael A; Adelman, Jonathan D
2017-03-01
The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. In addition, of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important, and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove to be especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare classification performances of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in the field of contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state of the art model versus seconds required by a machine learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
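The machine-learning framing amounts to ordinary supervised classification on features computed from simulated mixtures. A schematic sketch under stated assumptions: the features and labels below are synthetic stand-ins, and the random forest is one of several candidate algorithms, not necessarily PACE's top performer.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
y = rng.integers(1, 5, size=1000)                           # true contributor counts, 1-4
X = rng.normal(loc=y[:, None], scale=1.0, size=(1000, 12))  # hypothetical per-profile features

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)                  # classification accuracy across folds
print(f"mean CV accuracy: {scores.mean():.3f}")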
Lo, Kenneth
2011-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
Thermal conductivity of disperse insulation materials and their mixtures
NASA Astrophysics Data System (ADS)
Geža, V.; Jakovičs, A.; Gendelis, S.; Usiļonoks, I.; Timofejevs, J.
2017-10-01
Development of new, more efficient thermal insulation materials is a key to reducing heat losses and the associated greenhouse gas emissions. Two innovative materials developed at Thermeko LLC are Izoprok and Izopearl. This research is devoted to an experimental study of the thermal insulation properties of both materials as well as their mixture. Results show that a mixture of 40% Izoprok and 60% Izopearl has a lower thermal conductivity than either pure material. In this work, the temperature dependence of the materials' thermal conductivity is also measured. A novel modelling approach is used to model the spatial distribution of the disperse insulation material. A computational fluid dynamics approach is also used to estimate the role of different heat transfer phenomena in such a porous mixture. Modelling results show that thermal convection plays a small role in heat transfer despite the large fraction of air within the material's pores.
Carroll, Rachel; Lawson, Andrew B; Kirby, Russell S; Faes, Christel; Aregay, Mehreteab; Watjou, Kevin
2017-01-01
Many types of cancer have an underlying spatiotemporal distribution. Spatiotemporal mixture modeling can offer a flexible approach to risk estimation via the inclusion of latent variables. In this article, we examine the application and benefits of using four different spatiotemporal mixture modeling methods in the modeling of cancer of the lung and bronchus as well as "other" respiratory cancer incidences in the state of South Carolina. Of the methods tested, no single method outperforms the other methods; which method is best depends on the cancer under consideration. The lung and bronchus cancer incidence outcome is best described by the univariate modeling formulation, whereas the "other" respiratory cancer incidence outcome is best described by the multivariate modeling formulation. Spatiotemporal multivariate mixture methods can aid in the modeling of cancers with small and sparse incidences when including information from a related, more common type of cancer. Copyright © 2016 Elsevier Inc. All rights reserved.
Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models
ERIC Educational Resources Information Center
Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George
2012-01-01
Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…
A statistical approach to optimizing concrete mixture design.
Ahmad, Shamsad; Alghamdi, Saeid A
2014-01-01
A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.
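The ANOVA-plus-polynomial-regression workflow can be sketched as follows; the factor names follow the abstract, while the data file and column names are hypothetical placeholders for the 81 measured specimens.

import pandas as pd
import statsmodels.formula.api as smf

# One row per specimen: wc = w/cm ratio, cm = cementitious content (kg/m3),
# fa = fine/total aggregate ratio, fc = compressive strength (MPa).
df = pd.read_csv("concrete_trials.csv")   # hypothetical file name

# Quadratic response-surface model in the three design factors.
model = smf.ols("fc ~ wc + cm + fa + I(wc**2) + I(cm**2) + I(fa**2)"
                " + wc:cm + wc:fa + cm:fa", data=df).fit()
print(model.summary())                    # term-by-term significance, as in ANOVA

The fitted polynomial can then be optimized, e.g., over a grid of feasible factor levels, to select mixture proportions that meet a target strength.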
Buckley, Barbara; Farraj, Aimen
2015-09-01
Air pollution consists of a complex mixture of particulate and gaseous components. Individual criteria and other hazardous air pollutants have been linked to adverse respiratory and cardiovascular health outcomes. However, assessing risk of air pollutant mixtures is difficult since components are present in different combinations and concentrations in ambient air. Recent mechanistic studies have limited utility because of the inability to link measured changes to adverse outcomes that are relevant to risk assessment. New approaches are needed to address this challenge. The purpose of this manuscript is to describe a conceptual model, based on the adverse outcome pathway approach, which connects initiating events at the cellular and molecular level to population-wide impacts. This may facilitate hazard assessment of air pollution mixtures. In the case reports presented here, airway hyperresponsiveness and endothelial dysfunction are measurable endpoints that serve to integrate the effects of individual criteria air pollutants found in inhaled mixtures. This approach incorporates information from experimental and observational studies into a sequential series of higher order effects. The proposed model has the potential to facilitate multipollutant risk assessment by providing a framework that can be used to converge the effects of air pollutants in light of common underlying mechanisms. This approach may provide a ready-to-use tool to facilitate evaluation of health effects resulting from exposure to air pollution mixtures. Published by Elsevier Ireland Ltd.
Haddad, S; Tardif, R; Viau, C; Krishnan, K
1999-09-05
The biological hazard index (BHI) is defined as the biological level tolerable for exposure to a mixture, and is calculated by an equation similar to the conventional hazard index. The BHI calculation, at the present time, is advocated for use in situations where toxicokinetic interactions do not occur among mixture constituents. The objective of this study was to develop an approach for calculating an interactions-based BHI for chemical mixtures. The approach consisted of simulating the concentration of the exposure indicator in the biological matrix of choice (e.g. venous blood) for each component of the mixture to which workers are exposed, and then comparing these to the established BEI values for calculating the BHI. The simulation of biomarker concentrations was performed using a physiologically-based toxicokinetic (PBTK) model which accounted for the mechanism of interactions among all mixture components (e.g. competitive inhibition). The usefulness of the present approach is illustrated by calculating the BHI for varying ambient concentrations of a mixture of three chemicals (toluene (5-40 ppm), m-xylene (10-50 ppm), and ethylbenzene (10-50 ppm)). The results show that the interactions-based BHI can be greater or smaller than that calculated on the basis of the additivity principle, particularly at high exposure concentrations. At lower exposure concentrations (e.g. 20 ppm each of toluene, m-xylene and ethylbenzene), the BHI values obtained using the conventional methodology are similar to those from the interactions-based methodology, confirming that the consequences of competitive inhibition are negligible at lower concentrations. The advantage of the PBTK model-based methodology developed in this study relates to the fact that the concentrations of individual chemicals in mixtures that will not result in a significant BHI (i.e. BHI > 1) can be determined by iterative simulation.
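The additive form of the index being generalized is (standard definition):

\[ \mathrm{BHI} = \sum_{i=1}^{n} \frac{C_i^{\mathrm{bio}}}{\mathrm{BEI}_i} \]

where C_i^bio is the biomarker concentration attributable to mixture component i — here simulated with the interaction-aware PBTK model rather than assumed proportional to exposure — and a mixture exposure is considered acceptable when BHI ≤ 1.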
Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario
2015-01-01
Bivariate linear and generalized linear random-effects models are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities are quite different, such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to model between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.
A study of finite mixture model: Bayesian approach on financial time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-07-01
Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model combines several distributions to model a statistical distribution, while the Bayesian method provides a way to fit the mixture model. The Bayesian method is widely used because it has asymptotic properties which provide remarkable results, and it also shows a consistency characteristic, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results showed a negative relationship between rubber prices and stock market prices for all selected countries.
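Selecting the number of components k by information criterion can be sketched with scikit-learn (an illustrative frequentist stand-in for the paper's Bayesian fit; the data are synthetic):

import numpy as np
from sklearn.mixture import GaussianMixture

x = np.random.default_rng(0).standard_t(df=5, size=(1000, 1))  # stand-in for price data
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
       for k in range(1, 7)}
best_k = min(bic, key=bic.get)    # lowest BIC indicates the preferred number of components
print(best_k, bic)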
Dunn, Nicholas J H; Noid, W G
2016-05-28
This work investigates the promise of a "bottom-up" extended ensemble framework for developing coarse-grained (CG) models that provide predictive accuracy and transferability for describing both structural and thermodynamic properties. We employ a force-matching variational principle to determine system-independent, i.e., transferable, interaction potentials that optimally model the interactions in five distinct heptane-toluene mixtures. Similarly, we employ a self-consistent pressure-matching approach to determine a system-specific pressure correction for each mixture. The resulting CG potentials accurately reproduce the site-site rdfs, the volume fluctuations, and the pressure equations of state that are determined by all-atom (AA) models for the five mixtures. Furthermore, we demonstrate that these CG potentials provide similar accuracy for additional heptane-toluene mixtures that were not included in their parameterization. Surprisingly, the extended ensemble approach improves not only the transferability but also the accuracy of the calculated potentials. Additionally, we observe that the required pressure corrections strongly correlate with the intermolecular cohesion of the system-specific CG potentials. Moreover, this cohesion correlates with the relative "structure" within the corresponding mapped AA ensemble. Finally, the appendix demonstrates that the self-consistent pressure-matching approach corresponds to minimizing an appropriate relative entropy.
Exploiting the functional and taxonomic structure of genomic data by probabilistic topic modeling.
Chen, Xin; Hu, Xiaohua; Lim, Tze Y; Shen, Xiajiong; Park, E K; Rosen, Gail L
2012-01-01
In this paper, we present a method that enables both the homology-based approach and the composition-based approach to further study the functional core (i.e., the microbial core and the gene core, correspondingly). In the proposed method, the identification of major functionality groups is achieved by generative topic modeling, which is able to extract useful information from unlabeled data. We first show that a generative topic model can be used to model the taxon abundance information obtained by the homology-based approach and study the microbial core. The model considers each sample as a “document,” which has a mixture of functional groups, while each functional group (also known as a “latent topic”) is a weighted mixture of species. Therefore, estimating the generative topic model for taxon abundance data will uncover the distribution over latent functions (latent topics) in each sample. Second, we show that a generative topic model can also be used to study the genome-level composition of “N-mer” features (DNA subreads obtained by composition-based approaches). The model considers each genome as a mixture of latent genetic patterns (latent topics), while each functional pattern is a weighted mixture of the “N-mer” features; thus the existence of core genomes can be indicated by a set of common N-mer features. After studying the mutual information between latent topics and gene regions, we provide an explanation of the functional roles of the uncovered latent genetic patterns. The experimental results demonstrate the effectiveness of the proposed method.
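The sample-as-document analogy maps directly onto off-the-shelf latent Dirichlet allocation. A sketch with scikit-learn on a synthetic taxon-abundance matrix (the sizes and counts are stand-ins, not the paper's data):

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

counts = np.random.default_rng(0).poisson(2.0, size=(50, 300))  # 50 samples x 300 taxa
lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(counts)

theta = lda.transform(counts)  # per-sample mixture over latent functional groups
phi = lda.components_          # per-group (unnormalized) weighting over taxa

The same call works for the genome-level case by replacing the taxon-count matrix with a genome-by-N-mer count matrix.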
Modeling the surface tension of complex, reactive organic-inorganic mixtures
NASA Astrophysics Data System (ADS)
Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. F.
2013-01-01
Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as cloud condensation nuclei (CCN) ability. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well-described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling fits and goodness-of-fit (χ²) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
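Schematically, the weighted Szyszkowski-Langmuir model expresses the mixture surface tension as a baseline depressed by concentration-weighted single-organic terms (our paraphrase; the exact weighting convention should be taken from Henning et al. and the original paper):

\[ \sigma_{\mathrm{mix}} = \sigma_0 - \sum_i \frac{C_i}{\sum_j C_j}\, a_i T \ln\left(1 + b_i C_i\right) \]

where C_i is the bulk concentration of organic i and a_i, b_i are its fitted S-L parameters; in the implicit method (2), a_i and b_i come from measurements of organic i in salt solution, so no explicit salt term is needed.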
DOT National Transportation Integrated Search
2018-01-01
This report explores the application of a discrete computational model for predicting the fracture behavior of asphalt mixtures at low temperatures based on the results of simple laboratory experiments. In this discrete element model, coarse aggregat...
Mixture experiment methods in the development and optimization of microemulsion formulations.
Furlanetto, S; Cirri, M; Piepel, G; Mennini, N; Mura, P
2011-06-25
Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, allowing for improvement of their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly-water-soluble hypoglycaemic agent) as a model drug. First, the area of stable microemulsion (ME) formations was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfacant (a mixture of Tween 20 and Transcutol, 1:1, v/v), 5% oil (Labrafac Hydro) and 17% aqueous phase (water). The stable region of MEs was identified using mixture experiment methods for the first time. Copyright © 2011 Elsevier B.V. All rights reserved.
Mixture models for detecting differentially expressed genes in microarrays.
Jones, Liat Ben-Tovim; Bean, Richard; McLachlan, Geoffrey J; Zhu, Justin Xi
2006-10-01
An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
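The mixture device is the standard two-component model for a per-gene test statistic z (our notation):

\[ f(z) = \pi_0 f_0(z) + (1 - \pi_0) f_1(z), \qquad \mathrm{localFDR}(z) = \frac{\pi_0 f_0(z)}{f(z)} \]

where \pi_0 is the prior probability that a gene is not differentially expressed, f_0 the null density, and f_1 the alternative; genes whose local FDR falls below a chosen threshold are flagged, and the same quantities support decision rules that also account for the false negative rate.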
A simple approach to polymer mixture miscibility.
Higgins, Julia S; Lipson, Jane E G; White, Ronald P
2010-03-13
Polymeric mixtures are important materials, but the control and understanding of mixing behaviour poses problems. The original Flory-Huggins theoretical approach, using a lattice model to compute the statistical thermodynamics, provides the basic understanding of the thermodynamic processes involved but is deficient in describing most real systems, and has little or no predictive capability. We have developed an approach using a lattice integral equation theory, and in this paper we demonstrate that this not only describes well the literature data on polymer mixtures but allows new insights into the behaviour of polymers and their mixtures. The characteristic parameters obtained by fitting the data have been successfully shown to be transferable from one dataset to another, to be able to correctly predict behaviour outside the experimental range of the original data and to allow meaningful comparisons to be made between different polymer mixtures.
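For context, the lattice free energy of mixing that the Flory-Huggins approach computes is, per lattice site,

\[ \frac{\Delta G_{\mathrm{mix}}}{k_B T} = \frac{\phi_A}{N_A} \ln \phi_A + \frac{\phi_B}{N_B} \ln \phi_B + \chi\, \phi_A \phi_B \]

with volume fractions \phi, chain lengths N, and a single interaction parameter \chi; treating \chi as a bare, composition-independent constant is one deficiency the lattice integral equation theory, with its transferable characteristic parameters, is designed to overcome.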
Bayesian Variable Selection for Hierarchical Gene-Environment and Gene-Gene Interactions
Liu, Changlu; Ma, Jianzhong; Amos, Christopher I.
2014-01-01
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene by gene interactions, and gene by environment interactions in the same model. Our approach incorporates the natural hierarchical structure between the main effects and interaction effects into a mixture model, such that our methods tend to remove the irrelevant interaction effects more effectively, resulting in more robust and parsimonious models. We consider both strong and weak hierarchical models. For a strong hierarchical model, both of the main effects of interacting factors must be present for the interactions to be considered in the model development, while for a weak hierarchical model, only one of the two main effects is required to be present for the interaction to be evaluated. Our simulation results show that the proposed strong and weak hierarchical mixture models work well in controlling false positive rates and, in most of the simulated scenarios, provide a more powerful approach for identifying predisposing effects and interactions in gene-environment interaction studies than the naive model that does not impose this hierarchical constraint. We illustrated our approach using data for lung cancer and cutaneous melanoma. PMID:25154630
Mazel, Vincent; Busignies, Virginie; Duca, Stéphane; Leclerc, Bernard; Tchoreloff, Pierre
2011-05-30
In the pharmaceutical industry, tablets are obtained by the compaction of two or more components which have different physical properties and compaction behaviours. Therefore, it could be interesting to predict the physical properties of the mixture using the single-component results. In this paper, we have focused on the prediction of the compressibility of binary mixtures using the Kawakita model. Microcrystalline cellulose (MCC) and L-alanine were compacted alone and mixed at different weight fractions. The volume reduction, as a function of the compaction pressure, was acquired during the compaction process ("in-die") and after elastic recovery ("out-of-die"). For the pure components, the Kawakita model is well suited to the description of the volume reduction. For binary mixtures, an original approach for the prediction of the volume reduction without using the effective Kawakita parameters was proposed and tested. The good agreement between experimental and predicted data proved that this model was efficient to predict the volume reduction of MCC and L-alanine mixtures during compaction experiments. Copyright © 2011 Elsevier B.V. All rights reserved.
Spatio-temporal Bayesian model selection for disease mapping
Carroll, R; Lawson, AB; Faes, C; Kirby, RS; Aregay, M; Watjou, K
2016-01-01
Spatio-temporal analysis of small area health data often involves choosing a fixed set of predictors prior to the final model fit. In this paper, we propose a spatio-temporal approach of Bayesian model selection to implement model selection for certain areas of the study region as well as certain years in the study time line. Here, we examine the usefulness of this approach by way of a large-scale simulation study accompanied by a case study. Our results suggest that a special case of the model selection methods, a mixture model allowing a weight parameter to indicate if the appropriate linear predictor is spatial, spatio-temporal, or a mixture of the two, offers the best option to fitting these spatio-temporal models. In addition, the case study illustrates the effectiveness of this mixture model within the model selection setting by easily accommodating lifestyle, socio-economic, and physical environmental variables to select a predominantly spatio-temporal linear predictor. PMID:28070156
Introduction to the special section on mixture modeling in personality assessment.
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.
Modeling the surface tension of complex, reactive organic-inorganic mixtures
NASA Astrophysics Data System (ADS)
Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye
2013-11-01
Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here, which uses the Henning model with experimental surface tension data obtained for each organic species in the presence of salt. We recommend method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
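For readers unfamiliar with the Szyszkowski-Langmuir form, the sketch below shows one plausible concentration-weighted S-L mixture calculation in the spirit of a Henning-style weighted model. The weighting scheme and all parameter values are assumptions for illustration, not the fitted values from this study.

```python
import numpy as np

SIGMA_W = 72.0  # surface tension of pure water near 298 K, mN/m

def weighted_sl(concs, params, sigma_w=SIGMA_W):
    """Concentration-weighted Szyszkowski-Langmuir mixture model (assumed form).

    concs  : dict solute -> molar concentration
    params : dict solute -> (a, b) single-solute S-L parameters,
             where sigma = sigma_w - a*ln(1 + C/b) for one solute.
    Each solute's depression is weighted by its fraction of the total
    organic concentration, evaluated at the total concentration.
    """
    C_tot = sum(concs.values())
    sigma = sigma_w
    for name, C in concs.items():
        a, b = params[name]
        sigma -= (C / C_tot) * a * np.log(1.0 + C_tot / b)
    return sigma

# Illustrative (not fitted) parameters for two of the organics studied
params = {"methylglyoxal": (10.0, 0.5), "oxalic_acid": (4.0, 1.0)}
print(weighted_sl({"methylglyoxal": 0.2, "oxalic_acid": 0.1}, params))
```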
Rasch Mixture Models for DIF Detection
Strobl, Carolin; Zeileis, Achim
2014-01-01
Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a new score distribution for the Rasch mixture model is introduced here. It ensures the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study, and its application is illustrated in a study of verbal aggression. PMID:29795819
Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity
NASA Astrophysics Data System (ADS)
Chen, Hsieh; Panagiotopoulos, Athanassios Z.
2018-01-01
We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.
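A minimal sketch of a mole-fraction combining rule for mixture permittivity follows; the linear form and the permittivity values are illustrative assumptions, since the abstract does not state the exact rule.

```python
def mixture_permittivity(moles, eps_pure):
    """eps_mix = sum_i x_i * eps_i, with x_i the salt mole fractions.

    moles    : dict salt -> moles of salt in the mixture
    eps_pure : dict salt -> measured permittivity of the pure electrolyte
               solution at the same total ionic strength
    Illustrative linear combining rule; the paper's exact rule may differ.
    """
    n_tot = sum(moles.values())
    return sum((n / n_tot) * eps_pure[s] for s, n in moles.items())

# NaCl-CaCl2 brine at a fixed ionic strength (illustrative permittivities)
print(mixture_permittivity({"NaCl": 0.8, "CaCl2": 0.2},
                           {"NaCl": 68.0, "CaCl2": 62.0}))
```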
Optimum Tolerance Design Using Component-Amount and Mixture-Amount Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Ozler, Cenk; Sehirlioglu, Ali Kemal
2013-08-01
One type of tolerance design problem involves optimizing component and assembly tolerances to minimize the total cost (sum of manufacturing cost and quality loss). Previous literature recommended using traditional response surface (RS) designs and models to solve this type of tolerance design problem. In this article, component-amount (CA) and mixture-amount (MA) approaches are proposed as more appropriate for solving this type of tolerance design problem. The advantages of the CA and MA approaches over the RS approach are discussed. Reasons for choosing between the CA and MA approaches are also discussed. The CA and MA approaches (experimental design, response modeling,more » and optimization) are illustrated using real examples.« less
NASA Astrophysics Data System (ADS)
Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco
2017-04-01
Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This process is repeated until a threshold in the objective function is met or insufficient changes are produced in successive iterations.
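The circle-parameterized mixing step can be sketched as follows. A random linear operator A stands in for the groundwater solver (an assumption for illustration); for a linear operator the response is exactly sinusoidal in the mixing angle, so the interpolation around the circle is exact, whereas for a real nonlinear solver it is an approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200                                       # field dimension (conductivity cells)
F1, F2 = rng.standard_normal(m), rng.standard_normal(m)  # two conditional fields
A = rng.standard_normal((5, m)) / np.sqrt(m)  # stand-in linear forward model
h_obs = rng.standard_normal(5)                # synthetic observed heads

def mix(theta):
    # Weights (cos, sin) lie on the unit circle, preserving the covariance.
    return np.cos(theta) * F1 + np.sin(theta) * F2

# Run the forward model at n equally spaced angles...
n = 8
thetas = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
H = np.array([A @ mix(t) for t in thetas])    # solutions at the sample angles

# ...then interpolate each observation's response around the circle.
fine = np.linspace(0.0, 2 * np.pi, 3600, endpoint=False)
basis = np.column_stack([np.cos(thetas), np.sin(thetas)])
coef, *_ = np.linalg.lstsq(basis, H, rcond=None)            # (2, n_obs)
H_fine = np.column_stack([np.cos(fine), np.sin(fine)]) @ coef

obj = ((H_fine - h_obs) ** 2).sum(axis=1)     # misfit at every candidate angle
theta_best = fine[np.argmin(obj)]
print(theta_best, obj.min())                  # best mixture of the two fields
```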
Patil, M P; Sonolikar, R L
2008-10-01
This paper presents a detailed computational fluid dynamics (CFD) based approach for modeling thermal destruction of hazardous wastes in a circulating fluidized bed (CFB) incinerator. The model is based on an Euler-Lagrangian approach in which the gas phase (continuous phase) is treated in an Eulerian reference frame, whereas the waste particulate (dispersed phase) is treated in a Lagrangian reference frame. The reaction chemistry has been modeled through a mixture fraction/PDF approach. The conservation equations for mass, momentum, energy, mixture fraction and other closure equations have been solved using the general purpose CFD code FLUENT 4.5. A finite volume method on a structured grid has been used for solution of the governing equations. The model provides detailed information on the hydrodynamics (gas velocity, particulate trajectories), gas composition (CO, CO2, O2) and temperature inside the riser. The model also allows different operating scenarios to be examined in an efficient manner.
NASA Astrophysics Data System (ADS)
Parikh, H. M.; Carlton, A. G.; Zhang, H.; Kamens, R.; Vizuete, W.
2011-12-01
Secondary organic aerosol (SOA) is simulated for 6 outdoor smog chamber experiments using a SOA model based on a kinetic chemical mechanism in conjunction with a volatility basis set (VBS) approach. The experiments include toluene, a non-SOA-forming hydrocarbon mixture, diesel exhaust or meat cooking emissions and NOx, and are performed under varying conditions of relative humidity. SOA formation from toluene is modeled using a condensed kinetic aromatic mechanism that includes partitioning of lumped semi-volatile products in the particle organic phase and incorporates particle aqueous-phase chemistry to describe uptake of glyoxal and methylglyoxal. Modeling using the kinetic mechanism alone, along with primary organic aerosol (POA) from diesel exhaust (DE)/meat cooking (MC), fails to simulate the rapid SOA formation at the beginning hours of the experiments. Inclusion of a VBS approach with the kinetic mechanism, to characterize the emissions and chemistry of the complex mixture of intermediate volatility organic compounds (IVOCs) from DE/MC, substantially improves SOA predictions when compared with observed data. The VBS model includes photochemical aging of IVOCs and evaporation of POA after dilution. The relative contribution of SOA mass from DE/MC is as high as 95% in the morning, but substantially decreases after mid-afternoon. For high humidity experiments, the aqueous-phase SOA fraction dominates the total SOA mass at the end of the day (approximately 50%). In summary, the combined kinetic and VBS approach provides a new and improved framework to semi-explicitly model SOA from VOC precursors while also handling complex emission mixtures comprising hundreds of individual chemical species.
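The equilibrium-partitioning core of a VBS calculation can be sketched in a few lines using the standard relation that the particle-phase fraction of bin i is 1/(1 + C*_i/C_OA). The bin structure, concentrations, and seed value below are illustrative, not the chamber conditions of this study.

```python
import numpy as np

def partition_vbs(C_sat, C_total, C_seed=1e-3, tol=1e-10):
    """Iteratively solve equilibrium partitioning for a volatility basis set.

    C_sat   : saturation concentrations C* of the volatility bins (ug/m3)
    C_total : total (gas + particle) concentration in each bin (ug/m3)
    Returns the particle-phase concentration in each bin.
    """
    C_sat, C_total = np.asarray(C_sat), np.asarray(C_total)
    C_OA = C_seed + C_total.sum() / 2.0        # initial guess
    for _ in range(1000):
        xi = 1.0 / (1.0 + C_sat / C_OA)        # particle fraction per bin
        C_OA_new = C_seed + (C_total * xi).sum()
        if abs(C_OA_new - C_OA) < tol:
            break
        C_OA = C_OA_new
    return C_total * xi

# Four illustrative bins with C* = 1, 10, 100, 1000 ug/m3
print(partition_vbs([1.0, 10.0, 100.0, 1000.0], [2.0, 4.0, 6.0, 8.0]))
```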
Xia, Pu; Zhang, Xiaowei; Zhang, Hanxin; Wang, Pingping; Tian, Mingming; Yu, Hongxia
2017-08-15
One of the major challenges in environmental science is monitoring and assessing the risk of complex environmental mixtures. In vitro bioassays with limited key toxicological end points have been shown to be suitable to evaluate mixtures of organic pollutants in wastewater and recycled water. Omics approaches such as transcriptomics can monitor biological effects at the genome scale. However, few studies have applied omics approaches in the assessment of mixtures of organic micropollutants. Here, an omics approach was developed for profiling bioactivity of 10 water samples ranging from wastewater to drinking water in human cells by a reduced human transcriptome (RHT) approach and dose-response modeling. Transcriptional expression of 1200 selected genes was measured by Ampliseq technology in two cell lines, HepG2 and MCF7, that were exposed to eight serial dilutions of each sample. Concentration-effect models were used to identify differentially expressed genes (DEGs) and to calculate effect concentrations (ECs) of DEGs, which could be ranked to investigate low dose response. Furthermore, molecular pathways disrupted by different samples were evaluated by Gene Ontology (GO) enrichment analysis. The ability of RHT to represent bioactivity in both HepG2 and MCF7 was shown to be comparable to the results of previous in vitro bioassays. Finally, the relative potencies of the mixtures indicated by RHT analysis were consistent with the chemical profiles of the samples. RHT analysis with human cells provides an efficient and cost-effective approach to benchmarking mixtures of micropollutants and may offer novel insight into the assessment of mixture toxicity in water.
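The concentration-effect step can be illustrated with a standard four-parameter Hill fit; the paper's exact concentration-effect models are not specified in the abstract, so the Hill form and the data below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ec50, n):
    """Four-parameter Hill concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** n)

# Illustrative fold-change data for one gene over 8 dilutions (not from the paper)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])  # rel. enrichment
fc = np.array([1.0, 1.05, 1.1, 1.4, 2.0, 2.6, 2.9, 3.0])

popt, _ = curve_fit(hill, conc, fc, p0=[1.0, 3.0, 1.0, 1.0], maxfev=10000)
print(f"EC50 = {popt[2]:.3g}")  # effect concentration used for potency ranking
```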
Assessment of the Risks of Mixtures of Major Use Veterinary Antibiotics in European Surface Waters.
Guo, Jiahua; Selby, Katherine; Boxall, Alistair B A
2016-08-02
Effects of single veterinary antibiotics on a range of aquatic organisms have been explored in many studies. In reality, surface waters will be exposed to mixtures of these substances. In this study, we present an approach for establishing risks of antibiotic mixtures to surface waters and illustrate this by assessing risks of mixtures of three major use antibiotics (trimethoprim, tylosin, and lincomycin) to algal and cyanobacterial species in European surface waters. Ecotoxicity tests were initially performed to assess the combined effects of the antibiotics to the cyanobacteria Anabaena flos-aquae. The results were used to evaluate two mixture prediction models: concentration addition (CA) and independent action (IA). The CA model performed best at predicting the toxicity of the mixture with the experimental 96 h EC50 for the antibiotic mixture being 0.248 μmol/L compared to the CA predicted EC50 of 0.21 μmol/L. The CA model was therefore used alongside predictions of exposure for different European scenarios and estimations of hazards obtained from species sensitivity distributions to estimate risks of mixtures of the three antibiotics. Risk quotients for the different scenarios ranged from 0.066 to 385 indicating that the combination of three substances could be causing adverse impacts on algal communities in European surface waters. This could have important implications for primary production and nutrient cycling. Tylosin contributed most to the risk followed by lincomycin and trimethoprim. While we have explored only three antibiotics, the combined experimental and modeling approach could readily be applied to the wider range of antibiotics that are in use.
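The CA prediction itself is a one-line harmonic-mean calculation. The mixture fractions and single-substance EC50s below are hypothetical, since the abstract reports only the resulting mixture EC50s.

```python
def ca_ec50(fractions, ec50s):
    """Concentration-addition EC50 for a mixture:
    EC50_mix = (sum_i p_i / EC50_i)**-1,
    where p_i is the fraction of component i in the mixture (by concentration).
    """
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

# Hypothetical equal-fraction mixture of three antibiotics (EC50s in umol/L)
print(ca_ec50([1 / 3, 1 / 3, 1 / 3], [0.9, 0.4, 0.15]))
```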
Gauthier, Patrick T; Norwood, Warren P; Prepas, Ellie E; Pyle, Greg G
2015-10-06
Mixtures of metals and polycyclic aromatic hydrocarbons (PAHs) occur ubiquitously in aquatic environments, yet relatively little is known regarding their potential to produce non-additive toxicity (i.e., antagonism or potentiation). A review of the lethality of metal-PAH mixtures in aquatic biota revealed that more-than-additive lethality is as common as strictly additive effects. Approaches to ecological risk assessment do not consider non-additive toxicity of metal-PAH mixtures. Forty-eight-hour water-only binary mixture toxicity experiments were conducted to determine the additive toxic nature of mixtures of Cu, Cd, V, or Ni with phenanthrene (PHE) or phenanthrenequinone (PHQ) using the aquatic amphipod Hyalella azteca. In cases where more-than-additive toxicity was observed, we calculated the possible mortality rates at Canada's environmental water quality guideline concentrations. We used a three-dimensional response surface isobole model-based approach to compare the observed co-toxicity in juvenile amphipods to predicted outcomes based on concentration addition or effects addition mixtures models. More-than-additive lethality was observed for all Cu-PHE, Cu-PHQ, and several Cd-PHE, Cd-PHQ, and Ni-PHE mixtures. Our analysis predicts Cu-PHE, Cu-PHQ, Cd-PHE, and Cd-PHQ mixtures at the Canadian Water Quality Guideline concentrations would produce 7.5%, 3.7%, 4.4% and 1.4% mortality, respectively.
A Twin Factor Mixture Modeling Approach to Childhood Temperament: Differential Heritability
ERIC Educational Resources Information Center
Scott, Brandon G.; Lemery-Chalfant, Kathryn; Clifford, Sierra; Tein, Jenn-Yun; Stoll, Ryan; Goldsmith, H.Hill
2016-01-01
Twin factor mixture modeling was used to identify temperament profiles while simultaneously estimating a latent factor model for each profile with a sample of 787 twin pairs (M[subscript age] = 7.4 years, SD = 0.84; 49% female; 88.3% Caucasian), using mother- and father-reported temperament. A four-profile, one-factor model fit the data well.…
M. M. Clark; T. H. Fletcher; R. R. Linn
2010-01-01
The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixtureâ fraction model relying on thermodynamic...
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-01-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
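The LOD comparison the abstract describes can be sketched as follows, with plugged-in parameter values standing in for the maximum-likelihood estimates that an interval-mapping EM step would produce.

```python
import numpy as np
from scipy.stats import norm

def lod_score(y, p_qq, mu0, mu1, sigma):
    """LOD at a putative locus: normal mixture vs. a single normal.

    y    : phenotypes
    p_qq : per-individual probability of genotype QQ given flanking markers
    Mixture likelihood: prod_i [p_i*N(mu1, sigma) + (1-p_i)*N(mu0, sigma)].
    Parameters are plugged in here; in practice they are MLEs from EM.
    """
    f_mix = p_qq * norm.pdf(y, mu1, sigma) + (1 - p_qq) * norm.pdf(y, mu0, sigma)
    f_null = norm.pdf(y, y.mean(), y.std())   # single-normal null fit
    return np.sum(np.log10(f_mix) - np.log10(f_null))

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(1, 1, 50)])
p = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])  # informative markers
print(lod_score(y, p, mu0=0.0, mu1=1.0, sigma=1.0))
```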
Cure modeling in real-time prediction: How much does it help?
Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F
2017-08-01
Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016, BMC Medical Research Methodology, 16) proposed a prediction method based on parametric cure-mixture modeling, intended to cover situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
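The cure-mixture survival function itself is simple: a cured fraction never experiences the event, so survival plateaus instead of decaying to zero. A sketch with illustrative parameters (not the RTOG 0129 estimates):

```python
import numpy as np

def cure_mixture_survival(t, pi_cure, shape, scale):
    """Weibull cure-mixture survival:
    S(t) = pi + (1 - pi) * exp(-(t / scale)**shape),
    so S(t) -> pi_cure as t grows, reflecting the cured fraction."""
    return pi_cure + (1.0 - pi_cure) * np.exp(-(t / scale) ** shape)

t = np.array([6.0, 12.0, 24.0, 60.0, 120.0])   # months
print(cure_mixture_survival(t, pi_cure=0.4, shape=1.2, scale=30.0))
```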
3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van
2014-09-15
Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regard to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
A stochastic evolutionary model generating a mixture of exponential distributions
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2016-02-01
Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
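The exponential-mixture survival function used to match the query data has a direct implementation; the weights and rates below are illustrative, not the fitted values from the study.

```python
import numpy as np

def exp_mixture_survival(t, weights, rates):
    """Survival function of a mixture of exponentials:
    S(t) = sum_k w_k * exp(-lambda_k * t), with sum_k w_k = 1."""
    t = np.atleast_1d(t)[:, None]
    return (np.asarray(weights) * np.exp(-np.asarray(rates) * t)).sum(axis=1)

# Two subpopulations of queries: short-lived and persistent (illustrative)
print(exp_mixture_survival([1, 6, 12, 60], weights=[0.7, 0.3], rates=[0.5, 0.02]))
```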
The main objectives of this study were to: (1) determine whether dissimilar antiandrogenic compounds display additive effects when present in combination and (2) to assess the ability of modelling approaches to accurately predict these mixture effects based on data from single ch...
ERIC Educational Resources Information Center
Koss, Kalsea J.; George, Melissa R. W.; Davies, Patrick T.; Cicchetti, Dante; Cummings, E. Mark; Sturge-Apple, Melissa L.
2013-01-01
Examining children's physiological functioning is an important direction for understanding the links between interparental conflict and child adjustment. Utilizing growth mixture modeling, the present study examined children's cortisol reactivity patterns in response to a marital dispute. Analyses revealed three different patterns of cortisol…
Functional mixture regression.
Yao, Fang; Fu, Yuejiao; Lee, Thomas C M
2011-04-01
In functional linear models (FLMs), the relationship between the scalar response and the functional predictor process is often assumed to be identical for all subjects. Motivated by both practical and methodological considerations, we relax this assumption and propose a new class of functional regression models that allow the regression structure to vary for different groups of subjects. By projecting the predictor process onto its eigenspace, the new functional regression model is simplified to a framework that is similar to classical mixture regression models. This leads to the proposed approach named as functional mixture regression (FMR). The estimation of FMR can be readily carried out using existing software implemented for functional principal component analysis and mixture regression. The practical necessity and performance of FMR are illustrated through applications to a longevity analysis of female medflies and a human growth study. Theoretical investigations concerning the consistent estimation and prediction properties of FMR along with simulation experiments illustrating its empirical properties are presented in the supplementary material available at Biostatistics online. Corresponding results demonstrate that the proposed approach could potentially achieve substantial gains over traditional FLMs.
A mixture model-based approach to the clustering of microarray expression data.
McLachlan, G J; Bean, R W; Peel, D
2002-03-01
This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic used in conjunction with a threshold on the size of a cluster allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to reduce effectively the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes are able to be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/
A Generalized Mixture Framework for Multi-label Classification
Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos
2015-01-01
We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd|X) using a product of posterior distributions over components of the output space. Our approach captures different input–output and output–output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods. PMID:26613069
Modeling the phase behavior of H2S+n-alkane binary mixtures using the SAFT-VR+D approach.
dos Ramos, M Carolina; Goff, Kimberly D; Zhao, Honggang; McCabe, Clare
2008-08-07
A statistical associating fluid theory for potentials of variable range has recently been developed to model dipolar fluids (SAFT-VR+D) [Zhao and McCabe, J. Chem. Phys. 2006, 125, 104504]. The SAFT-VR+D equation explicitly accounts for dipolar interactions and their effect on the thermodynamics and structure of a fluid by using the generalized mean spherical approximation (GMSA) to describe a reference fluid of dipolar square-well segments. In this work, we apply the SAFT-VR+D approach to real mixtures of dipolar fluids. In particular, we examine the high-pressure phase diagram of hydrogen sulfide + n-alkane binary mixtures. Hydrogen sulfide is modeled as an associating spherical molecule with four off-center sites to mimic hydrogen bonding and an embedded dipole moment (μ) to describe the polarity of H2S. The n-alkane molecules are modeled as spherical segments tangentially bonded together to form chains of length m, as in the original SAFT-VR approach. By using simple Lorentz-Berthelot combining rules, the theoretical predictions from the SAFT-VR+D equation are found to be in excellent overall agreement with experimental data. In particular, the theory is able to accurately describe the different types of phase behavior observed for these mixtures as the molecular weight of the alkane is varied: type III phase behavior, according to the classification scheme of Scott and van Konynenburg, for the H2S + methane system; type IIA (with the presence of azeotropy) for the H2S + ethane and H2S + propane mixtures; and type I phase behavior for mixtures of H2S and longer n-alkanes up to n-decane. The theory is also able to predict in a qualitative manner the solubility of hydrogen sulfide in heavy n-alkanes.
Prediction of the properties of anhydrite construction mixtures based on a neural network approach
NASA Astrophysics Data System (ADS)
Fedorchuk, Y. M.; Zamyatin, N. V.; Smirnov, G. V.; Rusina, O. N.; Sadenova, M. A.
2017-08-01
The article considers the application of a neural network modeling approach to predicting the properties of anhydrite mixtures from their components, as part of managing the technological processes for producing construction products based on fluoranhydrite.
Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.
Shireman, Emilie; Steinley, Douglas; Brusco, Michael J
2017-02-01
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
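Two of the compared initialization strategies are directly available in scikit-learn; a minimal comparison on synthetic data is sketched below (the study's five techniques and four evaluation measures are broader than this).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1.0, (200, 2)),    # component 1
               rng.normal(4, 1.5, (200, 2))])   # component 2

# n_init restarts EM from several starting points and keeps the best fit,
# mitigating convergence to locally optimal solutions.
for init in ("kmeans", "random"):
    gm = GaussianMixture(n_components=2, init_params=init,
                         n_init=10, random_state=0).fit(X)
    print(f"{init:>7}: per-sample log-likelihood bound = {gm.lower_bound_:.4f}")
```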
Individual and binary toxicity of anatase and rutile nanoparticles towards Ceriodaphnia dubia.
Iswarya, V; Bhuvaneshwari, M; Chandrasekaran, N; Mukherjee, Amitava
2016-09-01
Increasing usage of engineered nanoparticles, especially titanium dioxide (TiO2), in various commercial products has necessitated their toxicity evaluation and risk assessment, especially in the aquatic ecosystem. In the present study, a comprehensive toxicity assessment of anatase and rutile NPs (individually as well as in a binary mixture) was carried out in a freshwater matrix on Ceriodaphnia dubia under different irradiation conditions, viz. visible and UV-A. Anatase and rutile NPs produced LC50 values of about 37.04 and 48 mg/L, respectively, under visible irradiation. However, lower LC50 values of about 22.56 (anatase) and 23.76 (rutile) mg/L were noted under UV-A irradiation. A toxic unit (TU) approach was followed to determine the concentrations of binary mixtures of anatase and rutile. The binary mixture resulted in an antagonistic and additive effect under visible and UV-A irradiation, respectively. Of the two modeling approaches used in the study, the Marking-Dawson model was noted to be more appropriate than the Abbott model for the toxicity evaluation of binary mixtures. The agglomeration of NPs played a significant role in the induction of antagonistic and additive effects by the mixture, depending on the irradiation applied. TEM and zeta potential analysis confirmed the surface interactions between anatase and rutile NPs in the mixture. Maximum uptake was noticed at 0.25 total TU of the binary mixture under visible irradiation and 1 TU of anatase NPs under UV-A irradiation. Individual NPs showed the highest uptake under UV-A rather than visible irradiation. In contrast, the binary mixture showed a difference in uptake pattern based on the type of irradiation. Copyright © 2016 Elsevier B.V. All rights reserved.
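The toxic-unit bookkeeping is straightforward. Using the visible-light LC50s reported above, a half-LC50 dose of each NP gives a mixture of one total TU:

```python
def toxic_units(concs, lc50s):
    """Toxic-unit (TU) approach: TU_i = C_i / LC50_i; mixture TU = sum_i TU_i.
    A total TU of 1 corresponds to the expected mixture LC50 under
    concentration addition."""
    return sum(c / l for c, l in zip(concs, lc50s))

# Visible-irradiation LC50s from the study: anatase 37.04, rutile 48 mg/L
print(toxic_units(concs=[18.5, 24.0], lc50s=[37.04, 48.0]))  # ~1 total TU
```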
Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong
2018-01-10
This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.
Modeling the chemistry of complex petroleum mixtures.
Quann, R J
1998-01-01
Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
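The core idea, molecules as incrementable vectors of structural-group counts, can be sketched with a counter; the group names here are illustrative, not Quann's actual group set.

```python
from collections import Counter

# A molecule (or lump of close isomers) as a vector of structural-group counts
toluene = Counter({"aromatic_ring": 1, "CH3_branch": 1})
xylene = Counter({"aromatic_ring": 1, "CH3_branch": 2})

def apply_reaction(molecule, delta):
    """A reaction is an increment vector on the structural groups,
    e.g. methylation adds one CH3 branch to the representation."""
    product = molecule + Counter()   # copy, keeping positive counts
    product.update(delta)            # add the reaction's group increments
    return +product                  # drop any zero/negative counts

methylation = Counter({"CH3_branch": 1})
print(apply_reaction(toluene, methylation) == xylene)   # True
```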
Mixed-up trees: the structure of phylogenetic mixtures.
Matsen, Frederick A; Mossel, Elchanan; Steel, Mike
2008-05-01
In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how "common" non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning "mixed branch repulsion" on trees larger than quartet trees under the CFN model.
New approach in direct-simulation of gas mixtures
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren
1991-01-01
Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.
ERIC Educational Resources Information Center
Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.
2011-01-01
Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…
Solubility modeling of refrigerant/lubricant mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michels, H.H.; Sienel, T.H.
1996-12-31
A general model for predicting the solubility properties of refrigerant/lubricant mixtures has been developed based on applicable theory for the excess Gibbs energy of non-ideal solutions. In our approach, flexible thermodynamic forms are chosen to describe the properties of both the gas and liquid phases of refrigerant/lubricant mixtures. After an extensive study of models for describing non-ideal liquid effects, the Wohl-suffix equations, which have been extensively utilized in the analysis of hydrocarbon mixtures, have been developed into a general form applicable to mixtures where one component is a POE lubricant. In the present study we have analyzed several POEs where structural and thermophysical property data were available. Data were also collected from several sources on the solubility of refrigerant/lubricant binary pairs. We have developed a computer code (NISC), based on the Wohl model, that predicts dew point or bubble point conditions over a wide range of composition and temperature. Our present analysis covers mixtures containing up to three refrigerant molecules and one lubricant. The present code can be used to analyze the properties of R-410a and R-407c in mixtures with a POE lubricant. Comparisons with other models, such as the Wilson or modified Wilson equations, indicate that the Wohl-suffix equations yield more reliable predictions for HFC/POE mixtures.
Systematic Proteomic Approach to Characterize the Impacts of ...
Chemical interactions have posed a major challenge in toxicity characterization and human health risk assessment of environmental mixtures. To characterize the impacts of chemical interactions on protein and cytotoxicity responses to environmental mixtures, we established a systems biology approach integrating proteomics, bioinformatics, statistics, and computational toxicology to measure expression or phosphorylation levels of 21 critical toxicity pathway regulators and 445 downstream proteins in human BEAS-2B cells treated with 4 concentrations of nickel, 2 concentrations each of cadmium and chromium, as well as 12 defined binary and 8 defined ternary mixtures of these metals in vitro. Multivariate statistical analysis and mathematical modeling of the metal-mediated proteomic response patterns showed a high correlation between changes in protein expression or phosphorylation and cellular toxic responses to both individual metals and metal mixtures. Of the identified correlated proteins, only a small set of proteins including HIF-1a is likely to be responsible for selective cytotoxic responses to different metals and metal mixtures. Furthermore, support vector machine learning was utilized to computationally predict protein responses to uncharacterized metal mixtures using experimentally generated protein response profiles corresponding to known metal mixtures. This study provides a novel proteomic approach for characterization and prediction of toxicities of ...
NASA Astrophysics Data System (ADS)
Larabi, Mohamed Aziz; Mutschler, Dimitri; Mojtabi, Abdelkader
2016-06-01
Our present work focuses on the coupling between thermal diffusion and convection in order to improve the thermal gravitational separation of mixture components. The separation phenomenon was studied in a porous medium contained in vertical columns. We performed analytical and numerical simulations to corroborate the experimental measurements of the thermal diffusion coefficients of the ternary mixture n-dodecane, isobutylbenzene, and tetralin obtained in microgravity on the International Space Station. Our approach corroborates the existing data published in the literature. The authors show that it is possible to quantify and to optimize the species separation for ternary mixtures. The authors checked, for ternary mixtures, the validity of the "forgotten effect hypothesis" established for binary mixtures by Furry, Jones, and Onsager. Two complete and different analytical resolution methods were used in order to describe the separation in terms of Lewis numbers, separation ratios, cross-diffusion coefficients, and the Rayleigh number. The analytical model is based on the parallel flow approximation. In order to validate this model, a numerical simulation was performed using the finite element method. From our new approach to vertical separation columns, new relations for mass fraction gradients and the optimal Rayleigh number for each component of the ternary mixture were obtained.
Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb
2014-10-01
Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models with expected tariff scores calculated mathematically. Linear regression is reported for comparison purposes. Impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approaches improve fit over the entire range of EQ-5D. Mean absolute error is 10% and 5% lower compared with the linear model, respectively. Root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These differences lead to changes in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model. This substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range. © The Author(s) 2013.
Estimating demographic parameters using a combination of known-fate and open N-mixture models
Schmidt, Joshua H.; Johnson, Devin S.; Lindberg, Mark S.; Adams, Layne G.
2015-01-01
Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark–resight data sets. We provide implementations in both the BUGS language and an R package.
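A simplified sketch of the joint likelihood follows: a known-fate binomial term for the marked animals combined with a Dail-Madsen-style open N-mixture term for the counts, marginalized by a forward pass over latent abundance, with the two components sharing the survival parameter. This is an illustration under stated simplifications (one site, truncated abundance, synthetic data), not the authors' BUGS/R implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import binom, poisson

K = 60  # truncation point for the latent abundance sums

def transition_matrix(phi, gamma):
    """P[N, N'] where N' = Binomial(N, phi) survivors + Poisson(gamma) recruits."""
    Ns = np.arange(K + 1)
    surv = binom.pmf(Ns[None, :], Ns[:, None], phi)  # surv[N, s]
    rec = poisson.pmf(Ns, gamma)                     # rec[g]
    P = np.zeros((K + 1, K + 1))
    for n2 in range(K + 1):
        # Convolution: P(N'=n2 | N) = sum_s surv[N, s] * rec[n2 - s]
        P[:, n2] = (surv[:, :n2 + 1] * rec[n2::-1]).sum(axis=1)
    return P

def nmix_loglik(counts, lam, phi, gamma, p):
    """Marginal log-likelihood of an open N-mixture model (forward algorithm)."""
    Ns = np.arange(K + 1)
    alpha = poisson.pmf(Ns, lam) * binom.pmf(counts[0], Ns, p)
    P = transition_matrix(phi, gamma)
    for y in counts[1:]:
        alpha = (alpha @ P) * binom.pmf(y, Ns, p)
    return np.log(alpha.sum())

def joint_negloglik(theta, counts, kf_alive, kf_risk):
    """Known-fate and N-mixture components share the survival parameter phi."""
    lam, gamma = np.exp(theta[0]), np.exp(theta[2])
    phi, p = expit(theta[1]), expit(theta[3])
    ll_kf = binom.logpmf(kf_alive, kf_risk, phi).sum()  # marked animals
    return -(ll_kf + nmix_loglik(counts, lam, phi, gamma, p))

counts = np.array([12, 10, 11, 9])   # yearly pack counts (synthetic)
kf_alive = np.array([8, 7, 9])       # collared wolves surviving each year
kf_risk = np.array([10, 9, 10])      # collared wolves at risk each year
res = minimize(joint_negloglik, x0=[np.log(15.0), 0.5, 0.0, 0.0],
               args=(counts, kf_alive, kf_risk), method="Nelder-Mead")
print("lambda =", np.exp(res.x[0]).round(1), " phi =", expit(res.x[1]).round(2))
```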
NASA Astrophysics Data System (ADS)
Arshadi, Amir
Image-based simulation of complex materials is a very important tool for understanding their mechanical behavior and an effective aid in the successful design of composite materials. In this thesis, an image-based multi-scale finite element approach is developed to predict the mechanical properties of asphalt mixtures. In this approach, the "up-scaling" and homogenization of each scale to the next is critically designed to improve accuracy. In addition to this multi-scale efficiency, this study introduces an approach for considering particle contacts at each of the scales in which mineral particles exist. One of the most important pavement distresses, which seriously affects pavement performance, is fatigue cracking. As this cracking generally takes place in the binder phase of the asphalt mixture, the binder fatigue behavior is assumed to be one of the main factors influencing overall pavement fatigue performance. It is also known that aggregate gradation, mixture volumetric properties, and filler type and concentration can affect damage initiation and progression in asphalt mixtures. This study was conducted to develop a tool to characterize the damage properties of asphalt mixtures at all scales. In the present study, the viscoelastic continuum damage model is implemented in the finite element software ABAQUS via the user material subroutine (UMAT) in order to simulate the state of damage in the binder phase under repeated uniaxial sinusoidal loading. The inputs are based on experimentally derived measurements of the binder properties. For the mastic and mortar scales, artificial two-dimensional images were generated and used to characterize the properties of those scales. Finally, scanned 2D images of asphalt mixtures are used to study asphalt mixture fatigue behavior under loading. To validate the proposed model, indirect tensile fatigue tests were conducted on asphalt mixture samples and the results compared with the simulation. The comparison shows that the model developed in this study is capable of predicting the effect of asphalt binder properties and aggregate micro-structure on the mechanical behavior of asphalt concrete under loading.
NASA Astrophysics Data System (ADS)
Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal
2017-11-01
Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels. The modeling approach developed in this work attempts to overcome specific limitations of the two existing approaches. This approach combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
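The two-step recipe can be sketched directly: a dislocation-density-based hardening law for each phase, then an iso-strain rule of mixtures. The specific hardening form below (Bergström/Estrin-Mecking type) and all parameter values are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

def phase_flow_stress(eps, sigma0, alpha=0.33, M=3.0, mu=8.0e4, b=2.5e-10,
                      k=10.0, L=1.0e-6):
    """Dislocation-density-based hardening (Bergstrom/Estrin-Mecking type):

        sigma = sigma0 + alpha*M*mu*sqrt(b) * sqrt((1 - exp(-M*k*eps)) / (k*L))

    sigma0 and mu in MPa; b (Burgers vector) and L (mean free path) in m;
    k is a dimensionless recovery rate. All values are illustrative.
    """
    rho_term = (1.0 - np.exp(-M * k * eps)) / (k * L)  # evolving dislocation density
    return sigma0 + alpha * M * mu * np.sqrt(b) * np.sqrt(rho_term)

eps = np.linspace(0.0, 0.10, 6)
sigma_f = phase_flow_stress(eps, sigma0=300.0)           # ferrite
sigma_m = phase_flow_stress(eps, sigma0=1100.0, L=3e-7)  # martensite: finer paths
f_m = 0.25                                               # martensite volume fraction
sigma_dp = f_m * sigma_m + (1.0 - f_m) * sigma_f         # iso-strain rule of mixtures
print(np.round(sigma_dp, 0))
```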
Patil, Ravindra B; Krishnamoorthy, P; Sethuraman, Shriram
2015-01-01
This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using the radio-frequency (RF) ultrasound signal. The approach was evaluated on ultrasound RF data acquired with a prototype ultrasound system from an artery-mimicking flow phantom. The effectiveness of the proposed algorithm is demonstrated by comparison with existing wall-tracking algorithms. The experimental results show that the proposed method provides a 20% reduction in error relative to existing approaches for tracking arterial wall movement. This approach, coupled with an ultrasound system, can be used to estimate the arterial compliance parameters required for screening of cardiovascular disorders.
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed across a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework using an empirical Bayes approach. Current methods of implementing this approach either have limitations due to the minimal assumptions they make or, with more specific assumptions, are computationally intensive. By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
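A minimal Python sketch of such a two-component normal mixture fitted to z-scores by EM, with a theoretical N(0,1) null component; the parameterization and starting values are assumptions for illustration:

    import numpy as np
    from scipy.stats import norm

    def posterior_null(z, n_iter=200):
        # f(z) = pi0*N(0,1) + (1-pi0)*N(mu, sigma^2); the N(0,1) component is
        # the theoretical null. Returns tau_i = P(gene i is null | z_i).
        pi0, mu, sigma = 0.9, 2.0, 1.0
        for _ in range(n_iter):
            f0 = pi0 * norm.pdf(z)
            f1 = (1.0 - pi0) * norm.pdf(z, mu, sigma)
            tau = f0 / (f0 + f1)            # E-step: posterior null probability
            pi0 = tau.mean()                # M-step updates
            w = 1.0 - tau
            mu = np.sum(w * z) / w.sum()
            sigma = max(np.sqrt(np.sum(w * (z - mu) ** 2) / w.sum()), 1e-3)
        return tau

    rng = np.random.default_rng(0)
    z = np.concatenate([rng.normal(0, 1, 900), rng.normal(2.5, 1, 100)])
    tau = posterior_null(z)
    print("estimated pi0 = %.3f, genes flagged non-null = %d"
          % (tau.mean(), np.sum(tau < 0.2)))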
ERIC Educational Resources Information Center
Lubke, Gitta; Tueller, Stephen
2010-01-01
Taxometric procedures such as MAXEIG and factor mixture modeling (FMM) are used in latent class clustering, but they have very different sets of strengths and weaknesses. Taxometric procedures, popular in psychiatric and psychopathology applications, do not rely on distributional assumptions. Their sole purpose is to detect the presence of latent…
Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S
2007-07-09
A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
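For reference, the candidate blends of a simplex-centroid design are easy to enumerate: one point per nonempty subset of the components, with equal proportions within the subset. A short sketch (the three-component case matches the solvent mixtures above):

    from itertools import combinations

    def simplex_centroid(q):
        # All 2**q - 1 blends of a simplex-centroid design in q components:
        # pure components, 1:1 binaries, ..., the overall centroid.
        pts = []
        for k in range(1, q + 1):
            for subset in combinations(range(q), k):
                x = [0.0] * q
                for j in subset:
                    x[j] = 1.0 / k
                pts.append(tuple(x))
        return pts

    for p in simplex_centroid(3):   # e.g. an extraction mixture of 3 solvents
        print(p)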
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2014-03-01
Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in a pharmaceutical dosage form using UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
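A minimal numerical illustration of the zero-truncated setting, assuming a homogeneous (unmixed) Poisson for brevity: fit the truncated likelihood and form the Horvitz-Thompson population-size estimate from the implied probability of a zero count. The count data are illustrative:

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import poisson

    # Zero-truncated Poisson fit and Horvitz-Thompson population-size estimate.
    counts = np.array([1]*60 + [2]*25 + [3]*10 + [4]*5)   # observed (y >= 1)

    def neg_loglik(lam):
        # truncated density: P(Y=y | Y>=1) = pois(y; lam) / (1 - exp(-lam))
        return -np.sum(poisson.logpmf(counts, lam) - np.log1p(-np.exp(-lam)))

    lam_hat = minimize_scalar(neg_loglik, bounds=(1e-6, 10), method="bounded").x
    p0 = np.exp(-lam_hat)                   # estimated P(Y = 0)
    N_hat = len(counts) / (1 - p0)          # Horvitz-Thompson estimator
    print("lambda = %.3f, estimated population size = %.1f" % (lam_hat, N_hat))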
A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.
Chen, D G; Pounds, J G
1998-12-01
The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where Ymin and Ymax represent the minimal and maximal observed toxic responses. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text]. In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium.
Sacristan, C J; Dupont, T; Sicot, O; Leclaire, P; Verdière, K; Panneton, R; Gong, X L
2016-10-01
The acoustic properties of an air-saturated, macroscopically inhomogeneous aluminum foam in the equivalent-fluid approximation are studied. A reference sample, built by forcing a highly compressible melamine foam of conical shape inside a rigid tube of constant diameter, is studied first. In this process, a radial compression varying with depth is applied. With the help of an assumption on the compressed pore geometry, the properties of the reference sample can be modeled everywhere in the thickness, and the classical transfer matrix method can be used as a theoretical reference. In the mixture approach, the material is viewed as a mixture of two known materials placed in a patchwork configuration, with the proportion of each varying with depth; the properties are derived from a mixing law. For the reference sample, the classical transfer matrix method is used to validate the experimental results, and these results are in turn used to validate the mixture approach. The mixture approach is then used to characterize a porous aluminum foam for which only the properties of the external faces are known. The required porosity profile is obtained from a simulated annealing optimization process.
Moorkanikkara, Srinivas Nageswaran; Blankschtein, Daniel
2010-12-21
How does one design a surfactant mixture using a set of available surfactants such that it exhibits a desired adsorption kinetics behavior? The traditional approach used to address this design problem involves conducting trial-and-error experiments with specific surfactant mixtures. This approach is typically time-consuming and resource-intensive and becomes increasingly challenging when the number of surfactants that can be mixed increases. In this article, we propose a new theoretical framework to identify a surfactant mixture that most closely meets a desired adsorption kinetics behavior. Specifically, the new theoretical framework involves (a) formulating the surfactant mixture design problem as an optimization problem using an adsorption kinetics model and (b) solving the optimization problem using a commercial optimization package. The proposed framework aims to identify the surfactant mixture that most closely satisfies the desired adsorption kinetics behavior subject to the predictive capabilities of the chosen adsorption kinetics model. Experiments can then be conducted at the identified surfactant mixture condition to validate the predictions. We demonstrate the reliability and effectiveness of the proposed theoretical framework through a realistic case study by identifying a nonionic surfactant mixture consisting of up to four alkyl poly(ethylene oxide) surfactants (C(10)E(4), C(12)E(5), C(12)E(6), and C(10)E(8)) such that it most closely exhibits a desired dynamic surface tension (DST) profile. Specifically, we use the Mulqueen-Stebe-Blankschtein (MSB) adsorption kinetics model (Mulqueen, M.; Stebe, K. J.; Blankschtein, D. Langmuir 2001, 17, 5196-5207) to formulate the optimization problem as well as the SNOPT commercial optimization solver to identify a surfactant mixture consisting of these four surfactants that most closely exhibits the desired DST profile. Finally, we compare the experimental DST profile measured at the surfactant mixture condition identified by the new theoretical framework with the desired DST profile and find good agreement between the two profiles.
Dey, Dipesh K; Guha, Saumyen
2007-02-15
Phospholipid fatty acids (PLFAs) as biomarkers are well established in the literature. A general method based on least-squares approximation (LSA) was developed for estimating community structure from the PLFA signature of a mixed population when the biomarker PLFA signatures of the component species are known. Fatty acid methyl ester (FAME) standards were used as species analogs, and mixtures of the standards as representatives of the mixed population. The PLFA/FAME signatures were analyzed by gas chromatographic separation followed by flame ionization detection (GC-FID). The PLFAs in the signature were quantified as relative weight percent of the total PLFA. The PLFA signatures were analyzed by the models to predict the community structure of the mixture. The LSA model results were compared with the existing "functional group" approach. Both successfully predicted the community structure of mixed populations containing completely unrelated species with uncommon PLFAs. For even the slightest intersection in the PLFA signatures of component species, the LSA model produced better results. This was mainly due to the inability of the "functional group" approach to distinguish the relative amounts of a common PLFA coming from more than one species. The performance of the LSA model was influenced by errors in the chromatographic analyses. Suppression (or enhancement) of a component's PLFA signature in chromatographic analysis of the mixture led to underestimation (or overestimation) of the component's proportion in the mixture. In mixtures of closely related species with common PLFAs, the errors in the common components were adjusted across the species by the model.
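A hedged sketch of the LSA step, using nonnegative least squares so that estimated proportions cannot go negative (the paper's exact constraint handling may differ); all signature values are illustrative:

    import numpy as np
    from scipy.optimize import nnls

    # Columns of A: known PLFA signatures (relative wt%) of three component
    # species; b: PLFA signature measured for the mixed population.
    A = np.array([[40.,  5.,  0.],
                  [30., 10.,  5.],
                  [20., 60., 15.],
                  [10., 25., 80.]])
    b = np.array([20., 17., 33., 30.])

    x, residual = nnls(A, b)        # least squares subject to x >= 0
    x /= x.sum()                    # normalize to community proportions
    print("estimated community structure:", np.round(x, 3),
          "residual:", round(residual, 3))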
On an interface of the online system for a stochastic analysis of the varied information flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorshenin, Andrey K.; Kuzmin, Victor Yu.
The article describes a possible approach to the construction of an interface for an online asynchronous system that allows researchers to analyse varied information flows. The implemented stochastic methods are based on mixture models and the method of moving separation of mixtures. The general ideas of the system's functionality are demonstrated with an example involving the moments of a finite normal mixture.
NASA Astrophysics Data System (ADS)
Barretta, Raffaele; Fabbrocino, Francesco; Luciano, Raimondo; Sciarra, Francesco Marotti de
2018-03-01
Strain-driven and stress-driven integral elasticity models are formulated for the analysis of the structural behaviour of functionally graded nano-beams. An innovative stress-driven two-phase constitutive mixture, defined by a convex combination of local and nonlocal phases, is presented. The analysis reveals that the Eringen strain-driven fully nonlocal model cannot be used in structural mechanics since it is ill-posed, and local-nonlocal mixtures based on the Eringen integral model only partially resolve this ill-posedness: a singular behaviour of continuous nano-structures appears as the local fraction tends to vanish, so the ill-posedness of the Eringen integral model is not eliminated. On the contrary, local-nonlocal mixtures based on the stress-driven theory are mathematically and mechanically appropriate for nanosystems. Exact solutions for inflected functionally graded nanobeams of technical interest are established by adopting the new local-nonlocal mixture stress-driven integral relation. The effectiveness of the new nonlocal approach is tested by comparing the contributed results with those corresponding to the mixture Eringen theory.
NASA Astrophysics Data System (ADS)
Budi Astuti, Ani; Iriawan, Nur; Irhamah; Kuswanto, Heri; Sasiarini, Laksmi
2017-10-01
Bayesian statistics offers an approach that is very flexible with respect to both sample size and data distribution. The Bayesian Mixture Model (BMM) is a Bayesian approach for multimodal data. Diabetes Mellitus (DM) is commonly known in the Indonesian community as "sweet pee." It is a chronic non-communicable disease that is very dangerous to humans because of the complications it causes. A 2013 WHO report ranked DM sixth among the leading causes of death worldwide, and in Indonesia the prevalence of DM continues to increase over time. This research studied the patterns in DM data and built BMM models through simulation studies, where the simulated data were based on the blood sugar levels of DM patients at RSUD Saiful Anwar Malang. The results demonstrated that the DM data follow a normal mixture distribution, and the BMM models successfully accommodated the real structure of the DM data in line with the data-driven concept.
Archambeau, Cédric; Verleysen, Michel
2007-01-01
A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.
Additive and synergistic antiandrogenic activities of mixtures of azol fungicides and vinclozolin.
Christen, Verena; Crettaz, Pierre; Fent, Karl
2014-09-15
Many pesticides, including pyrethroids and azole fungicides, are suspected of having endocrine disrupting properties. At present, the joint activity of compound mixtures is only marginally known. Here we tested the hypothesis that the antiandrogenic activity of mixtures of azole fungicides can be predicted by the concentration addition (CA) model. The antiandrogenic activity was assessed in MDA-kb2 cells. After assessing the activities of single compounds, mixtures of azole fungicides and vinclozolin were investigated. Interactions were analyzed by direct comparison between experimental and estimated dose-response curves assuming CA, followed by analysis with the isobole method and the toxic unit approach. The antiandrogenic activity of the pyrethroids deltamethrin, cypermethrin, fenvalerate and permethrin was weak, while the azole fungicides tebuconazole, propiconazole, epoxiconazole, econazole and vinclozolin exhibited strong antiandrogenic activity. Ten binary and one ternary mixture combinations of five antiandrogenic fungicides were assessed at equi-effective concentrations of EC25 and EC50. Isoboles indicated that about 50% of the binary mixtures were additive and 50% synergistic. Synergism was indicated even more frequently by the toxic unit approach. Our data lead to the conclusion that interactions in mixtures follow the CA model; however, a surprisingly high percentage of synergistic interactions occurred. Therefore, the mixture activity of antiandrogenic azole fungicides is at least additive, and mixtures should be considered for additive antiandrogenic activity in hazard and risk assessment. Our evaluation provides an appropriate "proof of concept," but whether it translates equally to in vivo effects should be investigated further.
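For readers unfamiliar with the two analysis tools mentioned, a small numeric sketch of the concentration addition prediction and the toxic unit sum; all EC50 values are illustrative:

    import numpy as np

    # Concentration addition (CA): for a mixture with fractions p_i of
    # compounds with known individual EC50s, the predicted mixture EC50 solves
    # sum_i p_i * EC50_mix / EC50_i = 1  =>  EC50_mix = 1 / sum_i (p_i / EC50_i)
    ec50 = np.array([2.0, 8.0])      # uM, illustrative single-compound EC50s
    p = np.array([0.5, 0.5])         # mixture ratio
    ec50_mix = 1.0 / np.sum(p / ec50)
    print("CA-predicted mixture EC50: %.2f uM" % ec50_mix)

    # Toxic units: TU_i = c_i / EC50_i at the observed mixture EC50.
    # sum(TU) ~ 1 indicates additivity; < 1 indicates synergism (the mixture
    # is more potent than the CA prediction).
    c_observed = p * 2.4             # hypothetical observed mixture EC50, split by ratio
    print("sum of toxic units: %.2f" % np.sum(c_observed / ec50))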
Spainhour, John Christian G; Janech, Michael G; Schwacke, John H; Velez, Juan Carlos Q; Ramakrishnan, Viswanathan
2014-01-01
Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry coupled with stable isotope standards (SIS) has been used to quantify native peptides. This MALDI-TOF quantification approach has difficulty with samples containing peptides whose ion currents appear in overlapping spectra. In these overlapping spectra the currents sum together, which modifies the peak heights and makes the usual SIS estimation problematic. An approach using Gaussian mixtures based on known physical constants to model the isotopic cluster of a known compound is proposed here. The characteristics of this approach are examined for single and overlapping compounds, and the approach is compared to two commonly used SIS quantification methods for single compounds, namely the peak intensity method and the Riemann-sum area under the curve (AUC) method. For studying the characteristics of the Gaussian mixture method, Angiotensin II, Angiotensin-2-10, and Angiotensin-1-9 and their associated SIS peptides were used. The findings suggest that the Gaussian mixture method has characteristics similar to the two comparison methods when estimating the quantity of isolated isotopic clusters of single compounds. All three methods were tested using MALDI-TOF mass spectra collected for peptides of the renin-angiotensin system. The Gaussian mixture method accurately estimated the native-to-labeled ratio of several isolated angiotensin peptides (5.2% error in ratio estimation), with estimation errors similar to those of the peak intensity and Riemann-sum AUC methods (5.9% and 7.7%, respectively). For overlapping angiotensin peptides, where the other two methods are not applicable, the estimation error of the Gaussian mixture method was 6.8%, which is within the acceptable range. In summary, for single compounds the Gaussian mixture method is equivalent or marginally superior to the existing methods of peptide quantification, and it is capable of quantifying overlapping (convolved) peptides within an acceptable margin of error.
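A simplified sketch of the idea: fit a sum of Gaussians with fixed isotopic spacing to a synthetic cluster. The actual method constrains the peak heights through known physical constants, whereas here they are left free, and the spacing value is approximate:

    import numpy as np
    from scipy.optimize import curve_fit

    SPACING = 1.00286   # approximate mass difference between isotopic peaks (Da)

    def cluster(mz, m0, h0, h1, h2, sigma):
        # Isotopic cluster as a Gaussian mixture with fixed isotope spacing;
        # three peaks with free heights (a simplification of the real method).
        heights = [h0, h1, h2]
        return sum(h * np.exp(-0.5 * ((mz - (m0 + k * SPACING)) / sigma) ** 2)
                   for k, h in enumerate(heights))

    rng = np.random.default_rng(4)
    mz = np.linspace(1045, 1052, 700)
    true = cluster(mz, 1046.5, 100.0, 55.0, 18.0, 0.05)
    signal = true + rng.normal(0, 1.0, mz.size)     # synthetic noisy spectrum

    popt, _ = curve_fit(cluster, mz, signal, p0=[1046.4, 80., 40., 10., 0.06])
    print("fitted monoisotopic m/z = %.3f, heights = %s"
          % (popt[0], np.round(popt[1:4], 1)))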
Duarte, Adam; Adams, Michael J.; Peterson, James T.
2018-01-01
Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.
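A minimal sketch of fitting a Poisson N-mixture model by maximizing the marginal likelihood, with the latent site abundances summed out up to a bound K; the simulation settings are illustrative:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom, poisson

    rng = np.random.default_rng(1)
    R, J, lam, p = 150, 4, 5.0, 0.4            # sites, visits, abundance, detection
    N = rng.poisson(lam, R)                    # latent site abundances
    y = rng.binomial(N[:, None], p, (R, J))    # replicated counts

    def neg_loglik(theta, K=60):
        lam_, p_ = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))
        k = np.arange(K + 1)
        ll = 0.0
        for i in range(R):   # marginalize the latent N_i over 0..K
            site = poisson.pmf(k, lam_) * np.prod(
                binom.pmf(y[i][:, None], k, p_), axis=0)
            ll += np.log(site.sum())
        return -ll

    fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
    print("lambda-hat = %.2f, p-hat = %.2f"
          % (np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1]))))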
Self-organising mixture autoregressive model for non-stationary time series modelling.
Ni, He; Yin, Hujun
2008-12-01
Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a globally non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure is introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial and benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. sunspot numbers and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
Santos, Radleigh G; Appel, Jon R; Giulianotti, Marc A; Edwards, Bruce S; Sklar, Larry A; Houghten, Richard A; Pinilla, Clemencia
2013-05-30
In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays.
Background: Air pollution consists of a complex mixture of particulate and gaseous components. Individual criteria and other hazardous air pollutants have been linked to adverse respiratory and cardiovascular health outcomes. However, assessing risk of air pollutant mixtures is d...
Item selection via Bayesian IRT models.
Arima, Serena
2015-02-10
With reference to a questionnaire that aimed to assess the quality of life of dysarthric speakers, we investigate the usefulness of a model-based procedure for reducing the number of items. We propose a mixed cumulative logit model, known in the psychometrics literature as the graded response model: responses to different items are modelled as a function of individual latent traits and of item characteristics, such as their difficulty and their discrimination power. We jointly model the discrimination and difficulty parameters using a k-component mixture of normal distributions. Mixture components correspond to disjoint groups of items; items that belong to the same group can be considered equivalent in terms of both difficulty and discrimination power. According to decision criteria, we select a subset of items such that the reduced questionnaire provides the same information as the complete questionnaire. The model is estimated using a Bayesian approach, and the choice of the number of mixture components is justified according to information criteria. We illustrate the proposed approach using data collected for 104 dysarthric patients by local health authorities in Lecce and in Milan.
Jasper, Micah N; Martin, Sheppard A; Oshiro, Wendy M; Ford, Jermaine; Bushnell, Philip J; El-Masri, Hisham
2016-03-15
People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate the performance of our PBPK model and chemical lumping method. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course toxicokinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 nontarget chemicals. The same biologically based lumping approach can be used to simplify any complex mixture with tens, hundreds, or thousands of constituents.
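The lumping step can be illustrated with a generic clustering sketch: grouping constituents by PBPK-relevant parameters so that each lump can be simulated as one pseudo-chemical. The descriptors and their values below are hypothetical, and k-means stands in for whatever lumping criterion the study actually used:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(6)
    # Hypothetical PBPK-relevant descriptors for 99 non-target constituents:
    # log blood:air partition coefficient, log Vmax, log Km (illustrative).
    params = np.column_stack([rng.normal(1.0, 0.8, 99),
                              rng.normal(0.0, 1.0, 99),
                              rng.normal(0.5, 0.7, 99)])

    n_lumps = 5
    labels = KMeans(n_clusters=n_lumps, n_init=10, random_state=0).fit_predict(params)
    for k in range(n_lumps):
        members = np.where(labels == k)[0]
        # each lump would be simulated as one pseudo-chemical with averaged parameters
        print("lump %d: %2d chemicals, mean params %s"
              % (k, members.size, np.round(params[members].mean(axis=0), 2)))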
Romine, Jason G.; Perry, Russell W.; Johnston, Samuel V.; Fitzer, Christopher W.; Pagliughi, Stephen W.; Blake, Aaron R.
2013-01-01
Mixture models proved valuable as a means to differentiate between salmonid smolts and predators that consumed salmonid smolts. However, successful application of this method requires that telemetered fishes and their predators exhibit measurable differences in movement behavior. Our approach is flexible, allows inclusion of multiple track statistics and improves upon rule-based manual classification methods.
ERIC Educational Resources Information Center
Choi, Youn-Jeng; Alexeev, Natalia; Cohen, Allan S.
2015-01-01
The purpose of this study was to explore what may be contributing to differences in performance in mathematics on the Trends in International Mathematics and Science Study 2007. This was done by using a mixture item response theory modeling approach to first detect latent classes in the data and then to examine differences in performance on items…
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
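A stripped-down sketch of such a Gibbs sampler, without random effects and with a known, common residual variance across components, alternating between sampling udder-status labels and the probability of group membership Pm; data and priors are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic somatic-cell-score-like data: "healthy" and "diseased" components
    x = np.concatenate([rng.normal(3.0, 1.0, 700), rng.normal(6.0, 1.0, 300)])
    n = x.size

    mu = np.array([2.0, 7.0]); Pm = 0.5; sigma = 1.0   # sigma treated as known
    for sweep in range(2000):
        # 1) sample component labels given the current parameters
        w1 = Pm * np.exp(-0.5 * ((x - mu[1]) / sigma) ** 2)
        w0 = (1.0 - Pm) * np.exp(-0.5 * ((x - mu[0]) / sigma) ** 2)
        z = rng.random(n) < w1 / (w0 + w1)             # True = "diseased"
        # 2) sample Pm | z (Beta(1,1) prior) and the component means (flat prior)
        Pm = rng.beta(1 + z.sum(), 1 + n - z.sum())
        for c, idx in enumerate([~z, z]):
            m = idx.sum()
            if m:
                mu[c] = rng.normal(x[idx].mean(), sigma / np.sqrt(m))

    print("posterior draw: Pm = %.2f, means = %s" % (Pm, np.round(mu, 2)))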
Mixture theory-based poroelasticity as a model of interstitial tissue growth.
Cowin, Stephen C; Cardoso, Luis
2012-01-01
This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues.
Using a multinomial tree model for detecting mixtures in perceptual detection
Chechile, Richard A.
2014-01-01
In the area of memory research there have been two rival approaches to memory measurement: signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures of the quality of the memory representation, and both provide corrections for response bias. In recent years a strong case has been advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as an MPT model. The Chechile (2004) 6P memory measurement model is modified to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to existing data from a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. The topic of optimal parameter estimation on an individual-observer basis is also explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based only on the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the individual estimate is discussed as a statistical effect analogous to the improvement over the individual MLE demonstrated by the James-Stein shrinkage estimator in the case of the multiple-group normal model.
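The estimator comparison can be reproduced in miniature with a binomial detection parameter standing in for the PD model parameters: under a uniform prior, the posterior mean shrinks toward 0.5 and achieves lower RMSE than the MLE at these (illustrative) settings:

    import numpy as np

    rng = np.random.default_rng(7)
    theta_true, n_trials, n_sims = 0.3, 20, 20000   # true parameter, trials, replications

    k = rng.binomial(n_trials, theta_true, n_sims)
    mle = k / n_trials                              # maximum likelihood estimate
    post_mean = (k + 1) / (n_trials + 2)            # posterior mean, Beta(1,1) prior

    for name, est in [("MLE", mle), ("posterior mean", post_mean)]:
        print("%-15s RMSE = %.4f" % (name, np.sqrt(np.mean((est - theta_true) ** 2))))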
Experimental designs and risk assessment in combination toxicology: panel discussion.
Henschler, D; Bolt, H M; Jonker, D; Pieters, M N; Groten, J P
1996-01-01
Advancing our knowledge on the toxicology of combined exposures to chemicals and implementation of this knowledge in guidelines for health risk assessment of such combined exposures are necessities dictated by the simple fact that humans are continuously exposed to a multitude of chemicals. A prerequisite for successful research and fruitful discussions on the toxicology of combined exposures (mixtures of chemicals) is the use of defined terminology implemented by an authoritative international body such as, for example, the International Union of Pure and Applied Chemistry (IUPAC) Toxicology Committee. The extreme complexity of mixture toxicology calls for new research methodologies to study interactive effects, taking into account limited resources. Of these methodologies, statistical designs and mathematical modelling of toxicokinetics and toxicodynamics seem to be most promising. Emphasis should be placed on low-dose modelling and experimental validation. The scientifically sound so-called bottom-up approach should be supplemented with more pragmatic approaches, focusing on selection of the most hazardous chemicals in a mixture and careful consideration of the mode of action and possible interactive effects of these chemicals. Pragmatic approaches may be of particular importance to study and evaluate complex mixtures; after identification of the 'top ten' (most risky) chemicals in the mixture they can be examined and evaluated as a defined (simple) chemical mixture. In setting exposure limits for individual chemicals, the use of an additional safety factor to compensate for potential increased risk due to simultaneous exposure to other chemicals, has no clear scientific justification. The use of such an additional factor is a political rather than a scientific choice.
Mixture of autoregressive modeling orders and its implication on single trial EEG classification
Atyabi, Adham; Shic, Frederick; Naples, Adam
2016-01-01
Autoregressive (AR) models are among the most commonly utilized feature types in electroencephalogram (EEG) studies because they offer better resolution, smoother spectra, and applicability to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders poorly represent the signal, while higher orders increase noise. Conventional methods for estimating the modeling order include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond to the operator's thoughts more quickly and correctly. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resulting AR mixtures is assessed against several conventional approaches utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods across all datasets.
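A hedged sketch of the ensemble idea, equally weighting one-step forecasts from least-squares AR fits of several candidate orders; the paper's evolutionary and ensemble weighting schemes are more elaborate, and the toy signal below merely stands in for EEG:

    import numpy as np

    def fit_ar(x, p):
        # Least-squares AR(p) fit: x_t ~ sum_k a_k * x_{t-1-k}
        X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
        return np.linalg.lstsq(X, x[p:], rcond=None)[0]

    def one_step_forecast(x, a):
        p = len(a)
        return float(np.dot(a, x[-1: -p - 1: -1]))

    rng = np.random.default_rng(3)
    x = rng.standard_normal(512).cumsum() * 0.1 + np.sin(np.arange(512) / 8.0)

    orders = [2, 4, 6, 8, 10]                 # candidate modeling orders
    forecasts = [one_step_forecast(x, fit_ar(x, p)) for p in orders]
    print("per-order forecasts:", np.round(forecasts, 3))
    print("equally weighted mixture forecast:", round(float(np.mean(forecasts)), 3))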
Mixture Hidden Markov Models in Finance Research
NASA Astrophysics Data System (ADS)
Dias, José G.; Vermunt, Jeroen K.; Ramos, Sofia
Finite mixture models have proven to be a powerful framework whenever unobserved heterogeneity cannot be ignored. We introduce in finance research the Mixture Hidden Markov Model (MHMM) that takes into account time and space heterogeneity simultaneously. This approach is flexible in the sense that it can deal with the specific features of financial time series data, such as asymmetry, kurtosis, and unobserved heterogeneity. This methodology is applied to model simultaneously 12 time series of Asian stock markets indexes. Because we selected a heterogeneous sample of countries including both developed and emerging countries, we expect that heterogeneity in market returns due to country idiosyncrasies will show up in the results. The best fitting model was the one with two clusters at country level with different dynamics between the two regimes.
Prevalence Incidence Mixture Models
The R package and webtool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data of the kind commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling and for stratified sampling (the two approaches of superpopulation and a finite population are supported for target populations). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.
Vakanski, A; Ferguson, J M; Lee, S
2016-12-01
The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient's physician with recommendations for improvement. The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons.
A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.
Carreau, Julie; Bengio, Yoshua
2009-07-01
In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y, with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
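A sketch of the hybrid Pareto density: a Gaussian body stitched to a generalized Pareto tail above a threshold, matched for continuity at the junction and renormalized numerically. The actual hybrid Pareto ties the junction point and GPD scale to the tail index through smoothness conditions, which are omitted here:

    import numpy as np
    from scipy.stats import genpareto, norm

    def hybrid_pareto_pdf(y, mu=0.0, s=1.0, xi=0.3, u=1.5):
        # Gaussian body below u; GPD tail above u, scaled so the density is
        # continuous at u; the whole curve renormalized numerically.
        beta = s                               # GPD scale: assumed, not derived
        def unnorm(v):
            tail = genpareto.pdf(v - u, xi, scale=beta) * norm.pdf(u, mu, s) * beta
            return np.where(v <= u, norm.pdf(v, mu, s), tail)
        grid = np.linspace(mu - 10 * s, u + 200 * beta, 200001)
        Z = np.trapz(unnorm(grid), grid)       # numerical normalizing constant
        return unnorm(np.asarray(y, dtype=float)) / Z

    print(hybrid_pareto_pdf([0.0, 1.5, 5.0, 20.0]))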
Larval aquatic insect responses to cadmium and zinc in experimental streams
Mebane, Christopher A.; Schmidt, Travis S.; Balistrieri, Laurie S.
2017-01-01
To evaluate the risks of metal mixture effects to natural stream communities under ecologically relevant conditions, the authors conducted 30-d tests with benthic macroinvertebrates exposed to cadmium (Cd) and zinc (Zn) in experimental streams. The simultaneous exposures were with Cd and Zn singly and with Cd+Zn mixtures at environmentally relevant ratios. The tests produced concentration–response patterns that for individual taxa were interpreted in the same manner as classic single-species toxicity tests and for community metrics such as taxa richness and mayfly (Ephemeroptera) abundance were interpreted in the same manner as with stream survey data. Effect concentrations from the experimental stream exposures were usually 2 to 3 orders of magnitude lower than those from classic single-species tests. Relative to a response addition model, which assumes that the joint toxicity of the mixtures can be predicted from the product of their responses to individual toxicants, the Cd+Zn mixtures generally showed slightly less than additive toxicity. The authors applied a modeling approach called Tox to explore the mixture toxicity results and to relate the experimental stream results to field data. The approach predicts the accumulation of toxicants (hydrogen, Cd, and Zn) on organisms using a 2-pKa bidentate model that defines interactions between dissolved cations and biological receptors (biotic ligands) and relates that accumulation through a logistic equation to biological response. The Tox modeling was able to predict Cd+Zn mixture responses from the single-metal exposures as well as responses from field data. The similarity of response patterns between the 30-d experimental stream tests and field data supports the environmental relevance of testing aquatic insects in experimental streams.
NASA Astrophysics Data System (ADS)
Sardet, Laure; Patilea, Valentin
When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem well adapted to capture the skewness, the long tails and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically with two or three components. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.
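The three-step procedure can be sketched as follows, with a two-component lognormal mixture whose parameters are assumed already fitted and a Chen-type beta kernel; the bandwidth is an assumed rule-of-thumb value rather than the paper's rule:

    import numpy as np
    from scipy.stats import beta, lognorm

    rng = np.random.default_rng(5)
    claims = np.concatenate([lognorm.rvs(0.5, scale=1000, size=800, random_state=rng),
                             lognorm.rvs(0.8, scale=8000, size=200, random_state=rng)])

    # Step 1: parsimonious two-component mixture. The parameters below are
    # assumed to have been fitted already, to keep the sketch short.
    w = [0.8, 0.2]; shape = [0.5, 0.8]; scale = [1000.0, 8000.0]
    F = lambda y: sum(wk * lognorm.cdf(y, sk, scale=sc)
                      for wk, sk, sc in zip(w, shape, scale))
    f = lambda y: sum(wk * lognorm.pdf(y, sk, scale=sc)
                      for wk, sk, sc in zip(w, shape, scale))

    # Step 2: transform to the unit interval, smooth with a Chen-type beta kernel.
    u = F(claims)
    b = 0.05                                 # bandwidth: assumed rule-of-thumb value
    g = lambda t: np.mean(beta.pdf(u, t / b + 1.0, (1.0 - t) / b + 1.0))

    # Step 3: back-transform: f_hat(y) = g(F(y)) * f(y)
    y0 = 5000.0
    print("claims density estimate at %.0f: %.3e" % (y0, g(F(y0)) * f(y0)))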
Constraints based analysis of extended cybernetic models.
Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M
2015-11-01
The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Thresholding functional connectomes by means of mixture modeling.
Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F
2018-05-01
Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered as a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. However, estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations remains an open research problem. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose an alternative thresholding strategy based on the model fit, using pseudo-false-discovery rates derived from the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to the alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project. The sparse connectomes obtained from mixture modeling are further discussed in the light of the previous knowledge of the functional architecture of the visual system in humans. We also demonstrate that with use of our method, we are able to extract similar information on the group level as can be achieved with permutation testing even though these two methods are not equivalent. We demonstrate that with both of these methods, we obtain functional decoupling between the two hemispheres in the higher order areas of the visual cortex during visual stimulation as compared to the resting state, which is in line with previous studies suggesting lateralization in visual processing. However, as opposed to permutation testing, our approach does not require inference at the cohort level and can be used for creating sparse connectomes at the level of a single subject. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
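A minimal sketch of the mixture-based thresholding idea, using a two-component Gaussian mixture and a local false-discovery-rate rule; the simulated edge weights and the 0.05 cutoff are illustrative, not the authors' exact pipeline.

```python
# Mixture-based edge thresholding on stand-in partial-correlation values.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Simulated connectome edges: many null edges near 0, a few real connections.
z = np.concatenate([rng.normal(0.0, 0.1, 900), rng.normal(0.5, 0.1, 100)])

gm = GaussianMixture(n_components=2, random_state=0).fit(z.reshape(-1, 1))
null = int(np.argmin(np.abs(gm.means_.ravel())))    # component nearest zero
pi0 = gm.weights_[null]
m0 = gm.means_.ravel()[null]
s0 = np.sqrt(gm.covariances_.ravel()[null])

# Local false-discovery rate: share of the total density due to the null.
f_total = np.exp(gm.score_samples(z.reshape(-1, 1)))
lfdr = pi0 * norm.pdf(z, m0, s0) / f_total
edges_kept = z[lfdr < 0.05]                         # sparse connectome edges
```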
Ground-Based Aerosol Measurements
Atmospheric particulate matter (PM) is a complex chemical mixture of liquid and solid particles suspended in air (Seinfeld and Pandis 2016). Measurements of this complex mixture form the basis of our knowledge regarding particle formation, source-receptor relationships, data to test and verify complex air quality models, and how PM impacts human health, visibility, global warming, and ecological systems (EPA 2009). Historically, PM samples have been collected on filters or other substrates with subsequent chemical analysis in the laboratory and this is still the major approach for routine networks (Chow 2005; Solomon et al. 2014) as well as in research studies. In this approach, air, at a specified flow rate and time period, is typically drawn through an inlet, usually a size selective inlet, and then drawn through filters.
Muscat Galea, Charlene; Didion, David; Clicq, David; Mangelings, Debby; Vander Heyden, Yvan
2017-12-01
A supercritical chromatographic method for the separation of a drug and its impurities has been developed and optimized applying an experimental design approach and chromatogram simulations. Stationary phase screening was followed by optimization of the modifier and injection solvent composition. A design-of-experiment (DoE) approach was then used to optimize column temperature, back-pressure and the gradient slope simultaneously. Regression models for the retention times and peak widths of all mixture components were built. The factor levels for different grid points were then used to predict the retention times and peak widths of the mixture components using the regression models and the best separation for the worst separated peak pair in the experimental domain was identified. A plot of the minimal resolutions was used to help identifying the factor levels leading to the highest resolution between consecutive peaks. The effects of the DoE factors were visualized in a way that is familiar to the analytical chemist, i.e. by simulating the resulting chromatogram. The mixture of an active ingredient and seven impurities was separated in less than eight minutes. The approach discussed in this paper demonstrates how SFC methods can be developed and optimized efficiently using simple concepts and tools. Copyright © 2017 Elsevier B.V. All rights reserved.
Autonomous detection of crowd anomalies in multiple-camera surveillance feeds
NASA Astrophysics Data System (ADS)
Nordlöf, Jonas; Andersson, Maria
2016-10-01
A novel approach for autonomous detection of anomalies in crowded environments is presented in this paper. The proposed model uses a Gaussian mixture probability hypothesis density (GM-PHD) filter as a feature extractor in conjunction with different Gaussian mixture hidden Markov models (GM-HMMs). Results, based on both simulated and recorded data, indicate that this method can track and detect anomalies on-line in individual crowds through multiple camera feeds in a crowded environment.
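A hedged sketch of the second stage only: scoring feature windows with a Gaussian HMM via the hmmlearn package. The three-dimensional features and threshold comparison are illustrative stand-ins for the GM-PHD tracker outputs, which are not reproduced here.

```python
# HMM-based anomaly scoring on stand-in crowd features (not the GM-PHD stage).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)
normal_feats = rng.normal(0, 1, size=(500, 3))   # e.g. crowd size, speed, spread
model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(normal_feats)                          # train on normal behaviour only

def anomaly_score(window):
    """Negative average log-likelihood; high values flag anomalous windows."""
    return -model.score(window) / len(window)

test_window = rng.normal(3, 1, size=(20, 3))     # shifted, anomalous behaviour
print(anomaly_score(test_window) > anomaly_score(normal_feats[:20]))  # expected True
```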
Santos, Radleigh G.; Appel, Jon R.; Giulianotti, Marc A.; Edwards, Bruce S.; Sklar, Larry A.; Houghten, Richard A.; Pinilla, Clemencia
2014-01-01
In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays. PMID:23722730
Inferring Short-Range Linkage Information from Sequencing Chromatograms
Beggel, Bastian; Neumann-Fraune, Maria; Kaiser, Rolf; Verheyen, Jens; Lengauer, Thomas
2013-01-01
Direct Sanger sequencing of viral genome populations yields multiple ambiguous sequence positions. It is not straightforward to derive linkage information from sequencing chromatograms, which in turn hampers the correct interpretation of the sequence data. We present a method for determining the variants existing in a viral quasispecies in the case of two nearby ambiguous sequence positions by exploiting the effect of sequence context-dependent incorporation of dideoxynucleotides. The computational model was trained on data from sequencing chromatograms of clonal variants and was evaluated on two test sets of in vitro mixtures. The approach achieved high accuracy in identifying the mixture components: 97.4% on a test set in which the positions to be analyzed are only one base apart from each other, and 84.5% on a test set in which the ambiguous positions are separated by three bases. In silico experiments suggest two major limitations of our approach in terms of accuracy. First, due to a basic limitation of Sanger sequencing, it is not possible to reliably detect minor variants with a relative frequency of 10% or less. Second, the model cannot distinguish between mixtures of two or four clonal variants if one of two sets of linear constraints is fulfilled. Furthermore, the approach requires repetitive sequencing of all variants that might be present in the mixture to be analyzed. Nevertheless, the effectiveness of our method on the two in vitro test sets shows that short-range linkage information of two ambiguous sequence positions can be inferred from Sanger sequencing chromatograms without any further assumptions on the mixture composition. Additionally, our model provides new insights into the established and widely used Sanger sequencing technology. The source code of our method is made available at http://bioinf.mpi-inf.mpg.de/publications/beggel/linkageinformation.zip. PMID:24376502
Metal mixture modeling evaluation project: 2. Comparison of four modeling approaches.
Farley, Kevin J; Meyer, Joseph S; Balistrieri, Laurie S; De Schamphelaere, Karel A C; Iwasaki, Yuichi; Janssen, Colin R; Kamo, Masashi; Lofts, Stephen; Mebane, Christopher A; Naito, Wataru; Ryan, Adam C; Santore, Robert C; Tipping, Edward
2015-04-01
As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the US Geological Survey (USA), HDR|HydroQual (USA), and the Centre for Ecology and Hydrology (United Kingdom) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME workshop in Brussels, Belgium (May 2012), is provided in the present study. Overall, the models were found to be similar in structure (free ion activities computed by the Windermere humic aqueous model [WHAM]; specific or nonspecific binding of metals/cations in or on the organism; specification of metal potency factors or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single vs multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong interrelationships among the model parameters (binding constants, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed. © 2014 SETAC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20,000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten
2017-11-01
Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because of the similar modes of action of neonicotinoids, concentration-additive cumulative mixture toxicity was the predicted model. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity. Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, the development of better toxicity models for neonicotinoid mixture exposures, and the consideration of mixture effects when setting water quality guidelines for this class of pesticides. Environ Toxicol Chem 2017;36:3091-3101. © 2017 SETAC.
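The toxic-unit bookkeeping that underlies the concentration-additive benchmark can be illustrated with the LC50s reported above; the mixture concentrations in the example are hypothetical, not the study's test concentrations.

```python
# Toxic-unit (TU) check against concentration addition, using the reported
# single-compound LC50s; mixture concentrations below are hypothetical.
LC50 = {"imidacloprid": 4.63, "clothianidin": 5.93, "thiamethoxam": 55.34}  # ug/L

def toxic_units(concentrations_ug_per_l):
    """Sum of c_i / LC50_i; TU = 1 is the concentration-additive 50% benchmark."""
    return sum(c / LC50[name] for name, c in concentrations_ug_per_l.items())

binary = {"imidacloprid": 2.3, "clothianidin": 3.0}
print(toxic_units(binary))   # ~1.0, so CA predicts about 50% mortality;
# observed mortality clearly above that prediction indicates synergism.
```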
Lamont, Andrea E.; Vermunt, Jeroen K.; Van Horn, M. Lee
2016-01-01
Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we test the effects of violating an implicit assumption often made in these models, namely that independent variables in the model are not directly related to latent classes. Results indicated that the major risk of failing to model the relationship between predictor and latent class was an increase in the probability of selecting additional latent classes and biased class proportions. Additionally, this study tests whether regression mixture models can detect a piecewise relationship between a predictor and outcome. Results suggest that these models are able to detect piecewise relations, but only when the relationship between the latent class and the predictor is included in model estimation. We illustrate the implications of making this assumption through a re-analysis of applied data examining heterogeneity in the effects of family resources on academic achievement. We compare previous results (which assumed no relation between independent variables and latent class) to the model where this assumption is lifted. Implications and analytic suggestions for conducting regression mixture models based on these findings are noted. PMID:26881956
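A didactic EM sketch for a two-component regression mixture follows (ignoring the latent-class covariate issue studied above); the data and starting values are synthetic.

```python
# Minimal EM for a two-component regression mixture on simulated data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 400
x = rng.uniform(0, 1, n)
cls = rng.random(n) < 0.5                      # latent class labels
y = np.where(cls, 1.0 + 2.0 * x, 1.0 - 1.0 * x) + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), x])
beta = np.array([[0.5, 1.0], [1.5, -0.5]])     # initial coefficients per class
sigma, pi = np.array([0.5, 0.5]), np.array([0.5, 0.5])

for _ in range(100):
    # E-step: responsibilities from residual likelihoods.
    dens = np.stack([pi[k] * norm.pdf(y, X @ beta[k], sigma[k]) for k in range(2)])
    r = dens / dens.sum(axis=0)
    # M-step: weighted least squares and variance update per component.
    for k in range(2):
        w = r[k]
        W = X * w[:, None]
        beta[k] = np.linalg.solve(X.T @ W, W.T @ y)
        sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
    pi = r.mean(axis=1)

print(beta)   # should approach [[1, 2], [1, -1]] up to label switching
```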
Bayesian kernel machine regression for estimating the health effects of multi-pollutant mixtures.
Bobb, Jennifer F; Valeri, Linda; Claus Henn, Birgit; Christiani, David C; Wright, Robert O; Mazumdar, Maitreyi; Godleski, John J; Coull, Brent A
2015-07-01
Because humans are invariably exposed to complex chemical mixtures, estimating the health effects of multi-pollutant exposures is of critical concern in environmental epidemiology, and to regulatory agencies such as the U.S. Environmental Protection Agency. However, most health effects studies focus on single agents or consider simple two-way interaction models, in part because we lack the statistical methodology to more realistically capture the complexity of mixed exposures. We introduce Bayesian kernel machine regression (BKMR) as a new approach to study mixtures, in which the health outcome is regressed on a flexible function of the mixture (e.g. air pollution or toxic waste) components that is specified using a kernel function. In high-dimensional settings, a novel hierarchical variable selection approach is incorporated to identify important mixture components and account for the correlated structure of the mixture. Simulation studies demonstrate the success of BKMR in estimating the exposure-response function and in identifying the individual components of the mixture responsible for health effects. We demonstrate the features of the method through epidemiology and toxicology applications. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
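BKMR itself is Bayesian, with the hierarchical variable selection described above; the following is only a minimal frequentist analogue showing the kernel idea of regressing the outcome on a flexible function of the mixture, with simulated exposures.

```python
# Frequentist stand-in for the kernel-machine idea behind BKMR.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(4)
Z = rng.normal(size=(300, 4))                    # four simulated pollutants
y = np.sin(Z[:, 0]) + 0.5 * Z[:, 1] * Z[:, 2] + rng.normal(0, 0.2, 300)

# Gaussian-kernel regression of outcome on the full mixture.
h = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(Z, y)

# Exposure-response profile for pollutant 0, others held at their medians.
grid = np.tile(np.median(Z, axis=0), (50, 1))
grid[:, 0] = np.linspace(-2, 2, 50)
profile = h.predict(grid)                        # estimated h(z0 | others fixed)
```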
Fiero, Mallorie H; Hsu, Chiu-Hsieh; Bell, Melanie L
2017-11-20
We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial. Copyright © 2017 John Wiley & Sons, Ltd.
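The k-scaling idea can be sketched as follows; a single-draw, single-level mean imputation stands in for the paper's multilevel multiple imputation, so this is illustrative only.

```python
# Sketch of the k-scaled ("delta-adjusted") sensitivity analysis.
import numpy as np

rng = np.random.default_rng(5)
n = 200
arm = rng.integers(0, 2, n)                      # cluster structure omitted here
y = 1.0 + 0.5 * arm + rng.normal(0, 1, n)        # true treatment effect is 0.5
dropout = rng.random(n) < 0.3
y_obs = np.where(dropout, np.nan, y)

for k in [0.8, 1.0, 1.2, 1.5]:
    y_imp = y_obs.copy()
    for a in (0, 1):                             # impute arm means, scaled by k
        m = (arm == a) & dropout
        y_imp[m] = k * np.nanmean(y_obs[arm == a])
    effect = y_imp[arm == 1].mean() - y_imp[arm == 0].mean()
    print(k, round(effect, 3))                   # find the k where inference flips
```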
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Chong
We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties. In particular, the estimator is consistent as the sample size increases to infinity, which implies that maximum likelihood estimation is asymptotically unbiased. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
Direct Importance Estimation with Gaussian Mixture Models
NASA Astrophysics Data System (ADS)
Yamada, Makoto; Sugiyama, Masashi
The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
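The target quantity can be illustrated with a plug-in ratio of two fitted GMMs; note this is simpler than GM-KLIEP, which fits the importance model directly rather than estimating the two densities separately.

```python
# Plug-in density-ratio estimate w(x) = p_nu(x) / p_de(x) from two GMMs.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
x_de = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], 1000)  # denominator
x_nu = rng.multivariate_normal([1, 1], [[1, 0.8], [0.8, 1]], 1000)  # numerator

gm_nu = GaussianMixture(2, covariance_type="full", random_state=0).fit(x_nu)
gm_de = GaussianMixture(2, covariance_type="full", random_state=0).fit(x_de)

def importance(x):
    """Estimated ratio of the two densities at points x."""
    return np.exp(gm_nu.score_samples(x) - gm_de.score_samples(x))

print(importance(np.array([[1.0, 1.0], [-1.0, -1.0]])))  # high, then low
```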
Lebrun, Jérémie D; Uher, Emmanuelle; Fechner, Lise C
2017-12-01
Metals are usually present as mixtures at low concentrations in aquatic ecosystems. However, the toxicity and sub-lethal effects of metal mixtures on organisms are still poorly addressed in environmental risk assessment. Here we investigated the biochemical and behavioural responses of Gammarus fossarum to Cu, Cd, Ni, Pb and Zn tested individually or in mixture (M2X) at concentrations twice the levels of the environmental quality standards (EQSs) from the European Water Framework Directive. The same metal mixture was also tested at concentrations equivalent to the EQSs (M1X), thus in a regulatory context, as EQSs are intended to protect aquatic biota. For each exposure condition, mortality, locomotion, respiration and enzymatic activities involved in digestive metabolism and moult were monitored over a 120-h exposure period. Multi-metric variations were summarized by the integrated biomarker response index (IBR). Mono-metallic exposures shed light on biological alterations that occur in gammarids at environmental exposure levels and that depend on the metal considered and on gender. As regards mixtures, biomarkers were altered for both M2X and M1X. However, no additive or synergistic effect of metals was observed compared with mono-metallic exposures. Indeed, bioaccumulation data highlighted competitive interactions between metals in M2X, which subsequently decreased their internalisation and toxicity. IBR values indicated that the health of gammarids was more impacted by M1X than by M2X, because of reduced competition and enhanced uptake of metals in the mixture at lower, EQS-like concentrations. Models using bioconcentration data obtained from mono-metallic exposures generated successful predictions of global toxicity for both M1X and M2X. We conclude that sub-lethal effects of mixtures identified by the multi-biomarker approach can lead to disturbances in the population dynamics of gammarids. Although IBR-based models offer promising lines of enquiry for predicting metal mixture toxicity, further studies are needed to confirm their predictive quality on larger ranges of metallic combinations before their use in field conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.
Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry
2016-09-01
Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms, they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway, and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Binbing Yu; Tiwari, Ram C; Feuer, Eric J
2011-06-01
Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of the patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches to modelling competing-risk survival data, namely the cause-specific hazards approach and the mixture model approach, have been used. In this article, we first show the connection and the differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race and American Joint Committee on Cancer stage.
Huang, Yangxin; Lu, Xiaosun; Chen, Jiaqing; Liang, Juan; Zangmeister, Miriam
2017-10-27
Longitudinal and time-to-event data are often observed together. Finite mixture models are currently used to analyze nonlinear heterogeneous longitudinal data; by relaxing the homogeneity restriction of nonlinear mixed-effects (NLME) models, they can cluster individuals into one of the pre-specified classes with class membership probabilities. This clustering may have clinical significance and be associated with clinically important time-to-event data. This article develops a joint modeling approach linking a finite mixture of NLME models for the longitudinal data and a proportional hazards Cox model for the time-to-event data through individual latent class indicators, under a Bayesian framework. The proposed joint models and method are applied to a real AIDS clinical trial data set, followed by simulation studies to assess the performance of the proposed joint model and of a naive two-step model in which the finite mixture model and the Cox model are fitted separately.
Siebers, Nina; Kruse, Jens; Eckhardt, Kai-Uwe; Hu, Yongfeng; Leinweber, Peter
2012-07-01
Cadmium (Cd) has a high toxicity, and resolving its speciation in soil is challenging but essential for estimating the environmental risk. In this study partial least-squares (PLS) regression was tested for its capability to deconvolute Cd L3-edge X-ray absorption near-edge structure (XANES) spectra of multi-compound mixtures. For this, a library of Cd reference compound spectra and a spectrum of a soil sample were acquired. A good coefficient of determination (R²) for Cd compounds in mixtures was obtained for the PLS model using binary and ternary mixtures of various Cd reference compounds, proving the validity of this approach. In order to describe complex systems like soil, multi-compound mixtures of a variety of Cd compounds must be included in the PLS model. The obtained PLS regression model was then applied to a highly Cd-contaminated soil, revealing Cd3(PO4)2 (36.1%), Cd(NO3)2·4H2O (24.5%), Cd(OH)2 (21.7%), CdCO3 (17.1%) and CdCl2 (0.4%). These preliminary results proved that PLS regression is a promising approach for a direct determination of Cd speciation in the solid phase of a soil sample.
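The deconvolution step can be sketched with synthetic spectra; Gaussian peaks stand in for the measured Cd reference spectra, and the mixture fractions are invented for illustration.

```python
# Sketch of PLS deconvolution of mixture spectra with synthetic references.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
energy = np.linspace(0, 1, 200)
refs = np.stack([np.exp(-0.5 * ((energy - c) / 0.05) ** 2)
                 for c in (0.3, 0.5, 0.7)])               # 3 reference "spectra"

# Training set: random mixtures of the references plus measurement noise.
frac = rng.dirichlet(np.ones(3), size=100)
spectra = frac @ refs + rng.normal(0, 0.01, (100, 200))

pls = PLSRegression(n_components=3).fit(spectra, frac)
unknown = np.array([0.4, 0.4, 0.2]) @ refs                # stand-in "soil" spectrum
print(pls.predict(unknown.reshape(1, -1)))                # ~[0.4, 0.4, 0.2]
```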
Mimosa: Mixture Model of Co-expression to Detect Modulators of Regulatory Interaction
NASA Astrophysics Data System (ADS)
Hansen, Matthew; Everett, Logan; Singh, Larry; Hannenhalli, Sridhar
Functionally related genes tend to be correlated in their expression patterns across multiple conditions and/or tissue-types. Thus co-expression networks are often used to investigate functional groups of genes. In particular, when one of the genes is a transcription factor (TF), the co-expression-based interaction is interpreted, with caution, as a direct regulatory interaction. However, any particular TF, and more importantly, any particular regulatory interaction, is likely to be active only in a subset of experimental conditions. Moreover, the subset of expression samples where the regulatory interaction holds may be marked by presence or absence of a modifier gene, such as an enzyme that post-translationally modifies the TF. Such subtlety of regulatory interactions is overlooked when one computes an overall expression correlation. Here we present a novel mixture modeling approach where a TF-Gene pair is presumed to be significantly correlated (with unknown coefficient) in an (unknown) subset of expression samples. The parameters of the model are estimated using a Maximum Likelihood approach. The estimated mixture of expression samples is then mined to identify genes potentially modulating the TF-Gene interaction. We have validated our approach using synthetic data and three biological cases in cow and yeast. While limited in some ways, as discussed, the work represents a novel approach to mine expression data and detect potential modulators of regulatory interactions.
A general mixture model and its application to coastal sandbar migration simulation
NASA Astrophysics Data System (ADS)
Liang, Lixin; Yu, Xiping
2017-04-01
A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solutions. The model is validated against suspended sediment motion in steady open channel flows, in both equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. The VOF method for the description of the water-air free surface and a topography change model are coupled in. The bed load transport rate and the suspended load entrainment rate are both determined by the seabed shear stress, which is obtained from the boundary-layer-resolving mixture model. The simulation results indicate that, under small-amplitude regular waves, erosion occurs on the sandbar slope facing against the wave propagation direction, while deposition dominates on the slope facing the direction of wave propagation, indicating an onshore migration tendency. The computations also show that the suspended load makes a substantial contribution to the topography change in the surf zone, a contribution often neglected in previous research.
Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L
2018-02-01
Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques are enabling visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is proposed for the clustering of data from such visualizations. The methodology is theoretically justified, and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.
Hoffmann, Krista Callinan; Deanovic, Linda; Werner, Inge; Stillway, Marie; Fong, Stephanie; Teh, Swee
2016-10-01
A novel 2-tiered analytical approach was used to characterize and quantify interactions between type I and type II pyrethroids in Hyalella azteca using standardized water column toxicity tests. Bifenthrin, permethrin, cyfluthrin, and lambda-cyhalothrin were tested in all possible binary combinations across 6 experiments. All mixtures were analyzed for 4-d lethality, and 2 of the 6 mixtures (permethrin-bifenthrin and permethrin-cyfluthrin) were tested for subchronic 10-d lethality and sublethal effects on swimming motility and growth. Mixtures were initially analyzed for interactions using regression analyses and subsequently compared with the additive models of concentration addition and independent action to further characterize mixture responses. Negative (antagonistic) interactions were significant in 2 of the 6 mixtures tested, cyfluthrin-bifenthrin and cyfluthrin-permethrin, but only for the acute 4-d lethality endpoint. In both cases mixture responses fell between the additive models of concentration addition and independent action. All other mixtures were additive across 4-d lethality, and bifenthrin-permethrin and cyfluthrin-permethrin were also additive in terms of subchronic 10-d lethality and sublethal responses. Environ Toxicol Chem 2016;35:2542-2549. © 2016 SETAC.
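The two additive benchmarks can be contrasted numerically; the Hill curves and doses below are hypothetical, not the H. azteca data.

```python
# Concentration addition (CA) vs independent action (IA) for a binary mixture.
import numpy as np

def effect(c, ec50, slope=1.0):
    """Fractional response (0-1) from a Hill curve; parameters illustrative."""
    return c**slope / (c**slope + ec50**slope)

ec50 = np.array([2.0, 8.0])
c = np.array([1.0, 4.0])               # component doses in the mixture

# CA: the mixture acts like a dilution of a single compound; summed toxic
# units c/EC50 equal 1 here, so CA predicts exactly a 50% response.
tu = np.sum(c / ec50)
# IA: component responses combine like probabilities of independent events.
ia = 1.0 - np.prod(1.0 - effect(c, ec50))
print(tu, ia)                          # TU = 1.0 (CA: 50%); IA: ~56% here
```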
Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan
2016-01-01
Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements. PMID:27112127
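The bounds component of this approach can be sketched with linear programming; the constraint matrix below is a hypothetical two-measurement system, not the heparan sulfate data.

```python
# Bounds on one component of a composition given linear measurement constraints.
import numpy as np
from scipy.optimize import linprog

# x = abundances of 4 candidate species; two measurements fix linear combinations.
A_eq = np.array([[1, 1, 1, 1],        # abundances sum to 1
                 [0, 1, 2, 3]])       # e.g. a measured mean sulfation level
b_eq = np.array([1.0, 1.4])

bounds = [(0, 1)] * 4
lo = linprog(c=[0, 0, 1, 0], A_eq=A_eq, b_eq=b_eq, bounds=bounds)   # min x[2]
hi = linprog(c=[0, 0, -1, 0], A_eq=A_eq, b_eq=b_eq, bounds=bounds)  # max x[2]
print(lo.fun, -hi.fun)   # attainable range for species 3 given the data;
# a narrow range means the measurements characterize that species well.
```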
Mollenhauer, Robert; Brewer, Shannon K.
2017-01-01
Failure to account for variable detection across survey conditions hinders progress in stream ecology and can lead to erroneous stream fish management and conservation decisions. Beyond confounding long-term stream fish population trends, reliable abundance estimates across a wide range of survey conditions are fundamental to establishing species-environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and catch per unit effort (CPUE) remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture-recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys. Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture-recapture estimates due to the dependency among data sets in the hierarchical framework. We demonstrate the application of this contemporary population estimation method to address a longstanding stream fish management issue. We also detail the advantages and trade-offs of hierarchical population estimation methods relative to CPUE and estimation methods that model each site separately.
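For a removal design with constant detection, the multinomial N-mixture likelihood with a Poisson abundance prior factorizes into independent Poisson pass counts, which a short sketch can exploit; the counts, constant-detection assumption, and starting values below are made up for illustration.

```python
# Removal-design N-mixture sketch: pass j has cell probability
# pi_j = p * (1 - p)^(j - 1); with N ~ Poisson(lam), pass counts are
# independent Poisson(lam * pi_j). Data are invented.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

y = np.array([[14, 6, 3],      # removal counts per pass, one row per site
              [9, 5, 1],
              [20, 8, 4]])

def nll(theta):
    lam = np.exp(theta[0])                      # keep abundance positive
    p = 1 / (1 + np.exp(-theta[1]))             # keep detection in (0, 1)
    pi = p * (1 - p) ** np.arange(y.shape[1])
    return -poisson.logpmf(y, lam * pi).sum()

fit = minimize(nll, x0=[np.log(20), 0.0])
lam_hat = np.exp(fit.x[0])
p_hat = 1 / (1 + np.exp(-fit.x[1]))
print(lam_hat, p_hat)          # per-site abundance and per-pass detection
```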
Prospective aquatic risk assessment for chemical mixtures in agricultural landscapes.
Holmes, Christopher M; Brown, Colin D; Hamer, Mick; Jones, Russell; Maltby, Lorraine; Posthuma, Leo; Silberhorn, Eric; Teeter, Jerold Scott; Warne, Michael St J; Weltje, Lennart
2018-03-01
Environmental risk assessment of chemical mixtures is challenging because of the multitude of possible combinations that may occur. Aquatic risk from chemical mixtures in an agricultural landscape was evaluated prospectively in 2 exposure scenario case studies: at field scale for a program of 13 plant-protection products applied annually for 20 yr and at a watershed scale for a mixed land-use scenario over 30 yr with 12 plant-protection products and 2 veterinary pharmaceuticals used for beef cattle. Risk quotients were calculated from regulatory exposure models with typical real-world use patterns and regulatory acceptable concentrations for individual chemicals. The results could differentiate situations when there was concern associated with single chemicals from those when concern was associated with a mixture (based on concentration addition) with no single chemical triggering concern. Potential mixture risk was identified on 0.02 to 7.07% of the total days modeled, depending on the scenario, the taxa, and whether considering acute or chronic risk. Taxa at risk were influenced by receiving water body characteristics along with chemical use profiles and associated properties. The present study demonstrates that a scenario-based approach can be used to determine whether mixtures of chemicals pose risks over and above any identified using existing approaches for single chemicals, how often and to what magnitude, and ultimately which mixtures (and dominant chemicals) cause greatest concern. Environ Toxicol Chem 2018;37:674-689. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
Heggeseth, Brianna C; Jewell, Nicholas P
2013-07-20
Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence; that is, given the mixture component label, the outcomes within a subject are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.
A Mixture Modeling Framework for Differential Analysis of High-Throughput Data
Taslim, Cenny; Lin, Shili
2014-01-01
The inventions of microarray and next-generation sequencing technologies have revolutionized research in genomics; these platforms have generated massive amounts of data on gene expression, methylation, and protein-DNA interactions. A common theme among a number of biological problems using high-throughput technologies is differential analysis. Despite the common theme, different data types have their own unique features, creating a "moving target" scenario. As such, methods specifically designed for one data type may not lead to satisfactory results when applied to another data type. To meet this challenge, so that not only currently existing data types but also data from future problems, platforms, or experiments can be analyzed, we propose a mixture modeling framework that is flexible enough to automatically adapt to any moving target. More specifically, the approach considers several classes of mixture models and essentially provides a model-based procedure whose model is adaptive to the particular data being analyzed. We demonstrate the utility of the methodology by applying it to three types of real data: gene expression, methylation, and ChIP-seq. We also carried out simulations to gauge the performance and showed that the approach can be more efficient than any individual model without inflating the type I error. PMID:25057284
An introduction to mixture item response theory models.
De Ayala, R J; Santiago, S Y
2017-02-01
Mixture item response theory (IRT) allows one to address situations that involve a mixture of latent subpopulations that are qualitatively different but within which a measurement model based on a continuous latent variable holds. In this modeling framework, one can characterize students by both their location on a continuous latent variable as well as by their latent class membership. For example, in a study of risky youth behavior this approach would make it possible to estimate an individual's propensity to engage in risky youth behavior (i.e., on a continuous scale) and to use these estimates to identify youth who might be at the greatest risk given their class membership. Mixture IRT can be used with binary response data (e.g., true/false, agree/disagree, endorsement/not endorsement, correct/incorrect, presence/absence of a behavior), Likert response scales, partial correct scoring, nominal scales, or rating scales. In the following, we present mixture IRT modeling and two examples of its use. Data needed to reproduce analyses in this article are available as supplemental online materials at http://dx.doi.org/10.1016/j.jsp.2016.01.002. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Sediment unmixing using detrital geochronology
Sharman, Glenn R.; Johnstone, Samuel
2017-01-01
Sediment mixing within sediment routing systems can exert a strong influence on the preservation of provenance signals that yield insight into the influence of environmental forcings (e.g., tectonism, climate) on the earth’s surface. Here we discuss two approaches to unmixing detrital geochronologic data in an effort to characterize complex changes in the sedimentary record. First we summarize ‘top-down’ mixing, which has been successfully employed in the past to characterize the different fractions of prescribed source distributions (‘parents’) that characterize a derived sample or set of samples (‘daughters’). Second we propose the use of ‘bottom-up’ methods, previously used primarily for grain size distributions, to model parent distributions and the abundances of these parents within a set of daughters. We demonstrate the utility of both top-down and bottom-up approaches to unmixing detrital geochronologic data within a well-constrained sediment routing system in central California. Use of a variety of goodness-of-fit metrics in top-down modeling reveals the importance of considering the range of allowable mixtures over any single best-fit mixture calculation. Bottom-up modeling of 12 daughter samples from beaches and submarine canyons yields modeled parent distributions that are remarkably similar to those expected from the geologic context of the sediment-routing system. In general, mixture modeling has potential to supplement more widely applied approaches in comparing detrital geochronologic data by casting differences between samples as differing proportions of geologically meaningful end-member provenance categories.
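A 'top-down' unmixing sketch under a simple L2 misfit follows; the distributions are synthetic stand-ins for detrital age spectra, and the kernel width is arbitrary.

```python
# 'Top-down' unmixing: find the parent weight that best reproduces a daughter
# age distribution; synthetic data with a known mixing fraction of 0.7.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
ages = np.linspace(0, 300, 600)

def kde(samples):
    """Crude Gaussian-kernel age distribution on the common grid."""
    return np.exp(-0.5 * ((ages[:, None] - samples) / 10) ** 2).sum(1) / len(samples)

parents = [kde(rng.normal(100, 15, 200)), kde(rng.normal(220, 20, 200))]
daughter = 0.3 * parents[0] + 0.7 * parents[1]          # known truth here

def misfit(w2):
    mix = (1 - w2) * parents[0] + w2 * parents[1]
    return np.sum((mix - daughter) ** 2)                # L2; other metrics work

best = minimize(misfit, x0=0.5, bounds=[(0, 1)])
print(best.x)                                            # ~0.7
```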
NASA Astrophysics Data System (ADS)
Choiri, S.; Ainurofiq, A.
2018-03-01
Drug release from a montmorillonite (MMT) matrix is a complex mechanism controlled by the swelling of MMT and the interaction between drug and MMT. The aim of this research was to identify a suitable model of the drug release mechanism from MMT and from its binary mixture with a hydrophilic polymer in a controlled-release formulation, based on a compartmental modelling approach. Theophylline was used as a model drug and incorporated, by a kneading method, into MMT and into a binary mixture with hydroxypropyl methylcellulose (HPMC) as the hydrophilic polymer. The dissolution test was performed, and the modelling of drug release was assisted by the WinSAAM software. Two models were proposed, based on the swelling capability and the basal spacing of the MMT compartments. Model evaluation was carried out with goodness-of-fit and statistical parameters, and the models were validated by a cross-validation technique. Drug release from the MMT matrix was regulated by a burst release of unloaded drug, the swelling ability and basal spacing of the MMT compartment, and the equilibrium between the basal-spacing and swelling compartments. Furthermore, the addition of HPMC to the MMT system altered the presence of the swelling compartment and the equilibrium between the swelling and basal-spacing compartment systems. In addition, the hydrophilic polymer reduced the burst release of unloaded drug.
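As a sketch of the compartmental fitting step (done in the paper with WinSAAM), the snippet below fits a simplified release curve, an immediate burst fraction plus first-order release, to illustrative dissolution data with SciPy. The paper's actual models additionally separate swelling and basal-spacing compartments, which this simplified form collapses into one.

```python
import numpy as np
from scipy.optimize import curve_fit

def release(t, F, k):
    # cumulative fraction released: burst F, then first-order from the rest
    return F + (1 - F) * (1 - np.exp(-k * t))

t = np.array([0.25, 0.5, 1, 2, 4, 6, 8])                    # time, h
q = np.array([0.18, 0.25, 0.35, 0.50, 0.68, 0.78, 0.84])    # fraction released

(F, k), _ = curve_fit(release, t, q, p0=[0.1, 0.2], bounds=(0, [1, 5]))
print(f"burst fraction = {F:.2f}, rate constant = {k:.2f} 1/h")
```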
Modern Methods for Modeling Change in Obesity Research in Nursing.
Sereika, Susan M; Zheng, Yaguang; Hu, Lu; Burke, Lora E
2017-08-01
Persons receiving treatment for weight loss often demonstrate heterogeneity in lifestyle behaviors and health outcomes over time. Traditional repeated measures approaches focus on the estimation and testing of an average temporal pattern, ignoring the interindividual variability about the trajectory. An alternate person-centered approach, group-based trajectory modeling, can be used to identify distinct latent classes of individuals following similar trajectories of behavior or outcome change as a function of age or time and can be expanded to include time-invariant and time-dependent covariates and outcomes. Another latent class method, growth mixture modeling, builds on group-based trajectory modeling to investigate heterogeneity within the distinct trajectory classes. In this applied methodologic study, group-based trajectory modeling for analyzing changes in behaviors or outcomes is described and contrasted with growth mixture modeling. An illustration of group-based trajectory modeling is provided using calorie intake data from a single-group, single-center prospective study for weight loss in adults who are either overweight or obese.
Barillot, Romain; Louarn, Gaëtan; Escobar-Gutiérrez, Abraham J; Huynh, Pierre; Combes, Didier
2011-10-01
Most studies dealing with light partitioning in intercropping systems have used statistical models based on the turbid medium approach, thus assuming homogeneous canopies. However, these models could not be directly validated although spatial heterogeneities could arise in such canopies. The aim of the present study was to assess the ability of the turbid medium approach to accurately estimate light partitioning within grass-legume mixed canopies. Three contrasted mixtures of wheat-pea, tall fescue-alfalfa and tall fescue-clover were sown according to various patterns and densities. Three-dimensional plant mock-ups were derived from magnetic digitizations carried out at different stages of development. The benchmarks for light interception efficiency (LIE) estimates were provided by the combination of a light projective model and plant mock-ups, which also provided the inputs of a turbid medium model (SIRASCA), i.e. leaf area index and inclination. SIRASCA was set to gradually account for vertical heterogeneity of the foliage, i.e. the canopy was described as one, two or ten horizontal layers of leaves. Mixtures exhibited various and heterogeneous profiles of foliar distribution, leaf inclination and component species height. Nevertheless, most of the LIE was satisfactorily predicted by SIRASCA. Biased estimations were, however, observed for (1) grass species and (2) tall fescue-alfalfa mixtures grown at high density. Most of the discrepancies were due to vertical heterogeneities and were corrected by increasing the vertical description of canopies although, in practice, this would require time-consuming measurements. The turbid medium analogy could be successfully used in a wide range of canopies. However, a more detailed description of the canopy is required for mixtures exhibiting vertical stratifications and inter-/intra-species foliage overlapping. Architectural models remain a relevant tool for studying light partitioning in intercropping systems that exhibit strong vertical heterogeneities. Moreover, these models offer the possibility to integrate the effects of microclimate variations on plant growth.
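A minimal sketch of the turbid-medium calculation underlying a model such as SIRASCA: light is attenuated through stacked horizontal leaf layers as exp(-k*LAI), and each species' share of a layer's interception is proportional to its k*LAI. The extinction coefficients and leaf area values below are illustrative assumptions.

```python
import numpy as np

k = {'grass': 0.5, 'legume': 0.8}           # extinction coefficients
# leaf area index per layer (top to bottom), per species
lai = {'grass': np.array([0.3, 0.6, 0.9]),
       'legume': np.array([0.6, 0.4, 0.2])}

I = 1.0                                      # light arriving above the canopy
intercepted = {s: 0.0 for s in k}
for layer in range(3):
    klai = {s: k[s] * lai[s][layer] for s in k}
    total = sum(klai.values())
    absorbed = I * (1 - np.exp(-total))      # light intercepted by this layer
    for s in k:                              # partition by each species' k*LAI
        intercepted[s] += absorbed * klai[s] / total
    I -= absorbed                            # light reaching the next layer

print(intercepted)   # light interception per species
```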
Chemical mixtures in potable water in the U.S.
Ryker, Sarah J.
2014-01-01
In recent years, regulators have devoted increasing attention to health risks from exposure to multiple chemicals. In 1996, the US Congress directed the US Environmental Protection Agency (EPA) to study mixtures of chemicals in drinking water, with a particular focus on potential interactions affecting chemicals' joint toxicity. The task is complicated by the number of possible mixtures in drinking water and lack of toxicological data for combinations of chemicals. As one step toward risk assessment and regulation of mixtures, the EPA and the Agency for Toxic Substances and Disease Registry (ATSDR) have proposed to estimate mixtures' toxicity based on the interactions of individual component chemicals. This approach permits the use of existing toxicological data on individual chemicals, but still requires additional information on interactions between chemicals and environmental data on the public's exposure to combinations of chemicals. Large compilations of water-quality data have recently become available from federal and state agencies. This chapter demonstrates the use of these environmental data, in combination with the available toxicological data, to explore scenarios for mixture toxicity and develop priorities for future research and regulation. Occurrence data on binary and ternary mixtures of arsenic, cadmium, and manganese are used to parameterize the EPA and ATSDR models for each drinking water source in the dataset. The models' outputs are then mapped at county scale to illustrate the implications of the proposed models for risk assessment and rulemaking. For example, according to the EPA's interaction model, the levels of arsenic and cadmium found in US groundwater are unlikely to have synergistic cardiovascular effects in most areas of the country, but the same mixture's potential for synergistic neurological effects merits further study. Similar analysis could, in future, be used to explore the implications of alternative risk models for the toxicity and interaction of complex mixtures, and to identify the communities with the highest and lowest expected value for regulation of chemical mixtures.
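The screening logic behind such component-based assessments can be sketched as a hazard quotient per chemical, an additive hazard index, and an interaction-weighted variant. All concentrations, reference values, and interaction factors below are illustrative stand-ins, not the EPA/ATSDR values used in the chapter.

```python
# hazard quotient = concentration / reference level; values are illustrative
conc = {'arsenic': 8.0, 'cadmium': 2.0, 'manganese': 150.0}   # ug/L
ref  = {'arsenic': 10.0, 'cadmium': 5.0, 'manganese': 300.0}  # ug/L

hq = {c: conc[c] / ref[c] for c in conc}
hi = sum(hq.values())
print(f"additive hazard index = {hi:.2f}")   # > 1 flags potential concern

# toy interaction weights (>1 synergism, <1 antagonism) applied per chemical
interaction = {'arsenic': 1.2, 'cadmium': 1.1, 'manganese': 0.9}
hi_int = sum(hq[c] * interaction[c] for c in conc)
print(f"interaction-weighted hazard index = {hi_int:.2f}")
```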
Moser, Virginia C; Padilla, Stephanie; Simmons, Jane Ellen; Haber, Lynne T; Hertzberg, Richard C
2012-09-01
Statistical design and environmental relevance are important aspects of studies of chemical mixtures, such as pesticides. We used a dose-additivity model to test experimentally the default assumption of dose additivity for two mixtures of seven N-methylcarbamates (carbaryl, carbofuran, formetanate, methomyl, methiocarb, oxamyl, and propoxur). The best-fitting models were selected for the single-chemical dose-response data and used to develop a combined prediction model, which was then compared with the experimental mixture data. We evaluated behavioral (motor activity) and cholinesterase (ChE)-inhibitory (brain, red blood cells) outcomes at the time of peak acute effects following oral gavage in adult and preweanling (17 days old) Long-Evans male rats. The mixtures varied only in their mixing ratios. In the relative potency mixture, proportions of each carbamate were set at equitoxic component doses. A California environmental mixture was based on the 2005 sales of each carbamate in California. In adult rats, the relative potency mixture showed dose additivity for red blood cell ChE and motor activity, while brain ChE inhibition showed a modestly greater-than-additive (synergistic) response, but only at a middle dose. In rat pups, the relative potency mixture was either dose-additive (brain ChE inhibition, motor activity) or slightly less-than-additive (red blood cell ChE inhibition). On the other hand, at both ages, the environmental mixture showed greater-than-additive responses on all three endpoints, with significant deviations from predicted responses at most or all doses tested. Thus, we observed different interactive properties for different mixing ratios of these chemicals. These approaches for studying pesticide mixtures can improve evaluations of potential toxicity under varying experimental conditions that may mimic human exposures.
Wang, Cheng; He, Lidong; Li, Da-Wei; Bruschweiler-Li, Lei; Marshall, Alan G; Brüschweiler, Rafael
2017-10-06
Metabolite identification in metabolomics samples is a key step that critically impacts downstream analysis. We recently introduced the SUMMIT NMR/mass spectrometry (MS) hybrid approach for the identification of the molecular structure of unknown metabolites based on the combination of NMR, MS, and combinatorial cheminformatics. Here, we demonstrate the feasibility of the approach for an untargeted analysis of both a model mixture and E. coli cell lysate based on 2D/3D NMR experiments in combination with Fourier transform ion cyclotron resonance MS and MS/MS data. For 19 of the 25 model metabolites, SUMMIT yielded complete structures that matched those in the mixture independent of database information. Of those, seven top-ranked structures matched those in the mixture, and four of those were further validated by positive ion MS/MS. For five metabolites, not part of the 19 metabolites, correct molecular structural motifs could be identified. For E. coli, SUMMIT MS/NMR identified 20 previously known metabolites with three or more 1H spins independent of database information. Moreover, for 15 unknown metabolites, molecular structural fragments were determined consistent with their spin systems and chemical shifts. By providing structural information for entire metabolites or molecular fragments, SUMMIT MS/NMR greatly assists the targeted or untargeted analysis of complex mixtures of unknown compounds.
Mazurek, Monica A
2002-12-01
This article describes a chemical characterization approach for complex organic compound mixtures associated with fine atmospheric particles of diameters less than 2.5 μm (PM2.5). It relates molecular- and bulk-level chemical characteristics of the complex mixture to atmospheric chemistry and to emission sources. Overall, the analytical approach describes the organic complex mixtures in terms of a chemical mass balance (CMB). Here, the complex mixture is related to a bulk elemental measurement (total carbon) and is broken down systematically into functional groups and molecular compositions. The CMB and molecular-level information can be used to understand the sources of the atmospheric fine particles through conversion of chromatographic data and by incorporation into receptor-based CMB models. Once described and quantified within a mass balance framework, the chemical profiles for aerosol organic matter can be applied to existing air quality issues. Examples include understanding health effects of PM2.5 and defining and controlling key sources of anthropogenic fine particles. Overall, the organic aerosol compositional data provide chemical information needed for effective PM2.5 management.
Cojocaru, C; Khayet, M; Zakrzewska-Trznadel, G; Jaworska, A
2009-08-15
The factorial design of experiments and desirability function approach has been applied for multi-response optimization of a pervaporation separation process. Two organic aqueous solutions were considered as model mixtures: water/acetonitrile and water/ethanol. Two responses were employed in the multi-response optimization of pervaporation: total permeate flux and organic selectivity. The effects of three experimental factors (feed temperature, initial concentration of the organic compound in the feed solution, and downstream pressure) on the pervaporation responses were investigated. The experiments were performed according to a 2^3 full factorial experimental design. The factorial models were obtained from the experimental design and validated statistically by analysis of variance (ANOVA). The spatial representations of the response functions were drawn together with the corresponding contour line plots. The factorial models were used to develop the overall desirability function. In addition, overlap contour plots were presented to identify the desirability zone and to determine the optimum point. The optimal operating conditions were found to be, in the case of the water/acetonitrile mixture, a feed temperature of 55 degrees C, an initial concentration of 6.58% and a downstream pressure of 13.99 kPa, while for the water/ethanol mixture they were a feed temperature of 55 degrees C, an initial concentration of 4.53% and a downstream pressure of 9.57 kPa. Under such optimum conditions, an improvement of both the total permeate flux and the selectivity was observed experimentally.
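A minimal sketch of the desirability-function step: each response predicted by a fitted factorial model is mapped to [0, 1] and the overall desirability (the geometric mean) is maximized over the factor space. The stand-in linear models, ranges, and targets below are illustrative assumptions, not the fitted models from the study.

```python
import numpy as np

def d_larger_is_better(y, lo, hi, w=1.0):
    # Derringer-type desirability for a larger-is-better response
    return np.clip((y - lo) / (hi - lo), 0, 1) ** w

def flux(T, c, p):          # stand-in linear factorial model
    return 0.5 + 0.02 * T - 0.03 * c - 0.01 * p

def selectivity(T, c, p):   # stand-in linear factorial model
    return 2.0 + 0.01 * T + 0.05 * c - 0.02 * p

# grid search over the factor space for the maximum overall desirability
best = max(((T, c, p)
            for T in np.linspace(40, 55, 16)       # feed temperature, C
            for c in np.linspace(2, 8, 13)         # initial concentration, %
            for p in np.linspace(5, 15, 21)),      # downstream pressure, kPa
           key=lambda x: np.sqrt(d_larger_is_better(flux(*x), 0.5, 2.0) *
                                 d_larger_is_better(selectivity(*x), 2.0, 4.0)))
print("optimum (T, c, p):", best)
```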
Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.
Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi
2013-12-01
Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Commonly, the data generated by mass spectrometry have many missing values, which result when a compound is absent from a sample or is present at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare power and estimation of a mixture model to an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, while estimates were unbiased with the mixture model except when all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrated this approach through application to glycomics data of serum samples from women with ovarian cancer and matched controls.
Thermodynamic properties of model CdTe/CdSe mixtures
van Swol, Frank; Zhou, Xiaowang W.; Challa, Sivakumar R.; ...
2015-02-20
We report on the thermodynamic properties of binary compound mixtures of model group II–VI semiconductors. We use the recently introduced Stillinger–Weber Hamiltonian to model binary mixtures of CdTe and CdSe. We use molecular dynamics simulations to calculate the volume and enthalpy of mixing as a function of mole fraction. The lattice parameter of the mixture closely follows Vegard's law: a linear relation. This implies that the excess volume is a cubic function of mole fraction. A connection is made with hard sphere models of mixed fcc and zincblende structures. We found that the potential energy exhibits a positive deviation from ideal solution behaviour; the excess enthalpy is nearly independent of the temperatures studied (300 and 533 K) and is well described by a simple cubic function of the mole fraction. Using a regular solution approach (combining non-ideal behaviour for the enthalpy with ideal solution behaviour for the entropy of mixing), we arrive at the Gibbs free energy of the mixture. The Gibbs free energy results indicate that CdTe and CdSe mixtures exhibit phase separation. The upper consolute temperature is found to be 335 K. Finally, we provide the surface energy as a function of composition. It roughly follows ideal solution theory, but with a negative deviation (negative excess surface energy). This indicates that alloying increases the stability, even for nano-particles.
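The regular-solution construction used here can be sketched in a few lines: combine a non-ideal mixing enthalpy with the ideal entropy of mixing to get G_mix(x, T), and check for the double-well shape that signals phase separation. The symmetric enthalpy Omega*x*(1-x) below is an illustrative stand-in for the paper's fitted cubic excess enthalpy, with Omega chosen so that the consolute temperature lands near the reported 335 K.

```python
import numpy as np

R = 8.314          # J/(mol K)
Omega = 5600.0     # J/mol, illustrative interaction parameter

def g_mix(x, T):
    # ideal entropy of mixing (positive) and symmetric excess enthalpy
    s_ideal = -R * (x * np.log(x) + (1 - x) * np.log(1 - x))
    h_excess = Omega * x * (1 - x)
    return h_excess - T * s_ideal

# for the symmetric regular solution the upper consolute temperature is
# Tc = Omega / (2R); above Tc, G_mix is convex and the mixture is stable
print("Tc =", Omega / (2 * R), "K")   # ~337 K

x = np.linspace(1e-3, 1 - 1e-3, 501)
curvature = np.gradient(np.gradient(g_mix(x, 300.0), x), x)
print("miscibility gap at 300 K:", bool(np.any(curvature < 0)))
```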
Merrill, E A; Gearhart, J M; Sterner, T R; Robinson, P J
2008-07-01
n-Decane is considered a major component of various fuels and industrial solvents. These hydrocarbon products are complex mixtures of hundreds of components, including straight-chain alkanes, branched chain alkanes, cycloalkanes, diaromatics, and naphthalenes. Human exposures to the jet fuel, JP-8, or to industrial solvents in vapor, aerosol, and liquid forms all have the potential to produce health effects, including immune suppression and/or neurological deficits. A physiologically based pharmacokinetic (PBPK) model has previously been developed for n-decane, in which partition coefficients (PC), fitted to 4-h exposure kinetic data, were used in preference to measured values. The greatest discrepancy between fitted and measured values was for fat, where PC values were changed from 250-328 (measured) to 25 (fitted). Such a large change in a critical parameter, without any physiological basis, greatly impedes the model's extrapolative abilities, as well as its applicability for assessing the interactions of n-decane or similar alkanes with other compounds in a mixture model. Due to these limitations, the model was revised. Our approach emphasized the use of experimentally determined PCs because many tissues had not approached steady-state concentrations by the end of the 4-h exposures. Diffusion limitation was used to describe n-decane kinetics for the brain, perirenal fat, skin, and liver. Flow limitation was used to describe the remaining rapidly and slowly perfused tissues. As expected from the high lipophilicity of this semivolatile compound (log K(ow) = 5.25), sensitivity analyses showed that parameters describing fat uptake were next to blood:air partitioning and pulmonary ventilation as critical in determining overall systemic circulation and uptake in other tissues. In our revised model, partitioning into fat took multiple days to reach steady state, which differed considerably from the previous model that assumed steady-state conditions in fat at 4 h post dosing with 1200 ppm. Due to these improvements, and particularly the reconciliation between measured and fitted partition coefficients, especially fat, we have greater confidence in using the proposed model for dose, species, and route of exposure extrapolations and as a harmonized model approach for other hydrocarbon components of mixtures.
A Twin Factor Mixture Modeling Approach to Childhood Temperament: Differential Heritability
Scott, Brandon G.; Lemery-Chalfant, Kathryn; Clifford, Sierra; Tein, Jenn-Yun; Stoll, Ryan; Goldsmith, H. Hill
2016-01-01
Twin factor mixture modeling was used to identify temperament profiles, while simultaneously estimating a latent factor model for each profile, with a sample of 787 twin pairs (mean age = 7.4 years; SD = 0.84; 49% female; 88.3% Caucasian), using mother- and father-reported temperament. A 4-profile, 1-factor model fit the data well. Profiles included ‘Regulated, Typical Reactive’, ‘Well-regulated, Positive Reactive’, ‘Regulated, Surgent’, and ‘Dysregulated, Negative Reactive.’ All profiles were heritable, with heritability lower and shared environment also contributing to membership in the ‘Regulated, Typical Reactive’ and ‘Dysregulated, Negative Reactive’ profiles. PMID:27291568
A numerical model for boiling heat transfer coefficient of zeotropic mixtures
NASA Astrophysics Data System (ADS)
Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo
2017-12-01
Zeotropic mixtures never have the same liquid and vapor composition in liquid-vapor equilibrium. Also, the bubble and dew points are separated; this gap is called the glide temperature (Tglide). These characteristics have made such mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles, where zeotropic working fluids improve performance by an order of magnitude. Optimization of JT cycles has gained substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design in those cycles is a critical point; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and applied locally to a fully developed, constant-wall-temperature, two-phase annular flow in a duct. Numerical results have been obtained using this model, taking into account the continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with available experimental data published by Barraza et al. (2016) and show good agreement.
Computational Aspects of N-Mixture Models
Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S
2015-01-01
The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. PMID:25314629
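A minimal sketch of the single-site N-mixture likelihood and of the truncation issue the authors warn about: the infinite sum over latent abundance N is cut off at a bound K, and too small a K can bias the abundance estimate. Counts, starting values, and the choice of optimizer are illustrative.

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

counts = np.array([3, 5, 2, 4])        # repeated counts at one site

def neg_log_lik(theta, K):
    # L(lambda, p) = sum_{N=max(n)}^{K} Pois(N; lambda) * prod_t Binom(n_t; N, p)
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    N = np.arange(counts.max(), K + 1)
    lik = np.sum(poisson.pmf(N, lam) *
                 np.prod(binom.pmf(counts[:, None], N, p), axis=0))
    return -np.log(lik + 1e-300)

for K in (10, 50, 500):                # compare abundance estimates across bounds
    fit = minimize(neg_log_lik, x0=[np.log(10), 0.0], args=(K,),
                   method='Nelder-Mead')
    print(f"K={K:4d}  lambda_hat={np.exp(fit.x[0]):.1f}")
```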
Bleka, Øyvind; Storvik, Geir; Gill, Peter
2016-03-01
We have released a software named EuroForMix to analyze STR DNA profiles in a user-friendly graphical user interface. The software implements a model to explain the allelic peak height on a continuous scale in order to carry out weight-of-evidence calculations for profiles which could be from a mixture of contributors. Through a properly parameterized model we are able to do inference on mixture proportions, the peak height properties, stutter proportion and degradation. In addition, EuroForMix includes models for allele drop-out, allele drop-in and sub-population structure. EuroForMix supports two inference approaches for likelihood ratio calculations. The first approach uses maximum likelihood estimation of the unknown parameters. The second approach is Bayesian based which requires prior distributions to be specified for the parameters involved. The user may specify any number of known and unknown contributors in the model, however we find that there is a practical computing time limit which restricts the model to a maximum of four unknown contributors. EuroForMix is the first freely open source, continuous model (accommodating peak height, stutter, drop-in, drop-out, population substructure and degradation), to be reported in the literature. It therefore serves an important purpose to act as an unrestricted platform to compare different solutions that are available. The implementation of the continuous model used in the software showed close to identical results to the R-package DNAmixtures, which requires a HUGIN Expert license to be used. An additional feature in EuroForMix is the ability for the user to adapt the Bayesian inference framework by incorporating their own prior information. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Konishi, C.
2014-12-01
A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e. gravel, sand, and clay). A well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle and the rest of the volume as that of the large particle. This simple approach has been commonly accepted and validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of another large particle: only a few percent of gravel particles increases the permeability of a sample significantly. This observation cannot be explained by the bimodal mixture model and suggests the necessity of a gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of using only the clay content. Sand can be either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosities of the two cases (sand as the smaller particle, and sand as the larger particle) can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as the average of the two cases weighted by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation. Furthermore, elastic properties are obtainable from the general Hashin-Shtrikman-Walpole bounds. The predictions of this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: part of this study was accomplished with a subsidy from the River Environment Fund of Japan.
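The final prediction step described above can be sketched as follows: form an effective porosity (total porosity minus the clay-associated, non-effective part) and an effective grain size (here a volume-weighted harmonic mean, one common choice), then apply the Kozeny-Carman equation. All fractions, grain sizes, and porosities are illustrative assumptions.

```python
frac = {'gravel': 0.05, 'sand': 0.75, 'clay': 0.20}   # volume fractions
d    = {'gravel': 1e-2, 'sand': 2e-4, 'clay': 1e-6}   # grain sizes (m)

phi_total = 0.35
phi_eff = phi_total - 0.9 * frac['clay']   # clay porosity treated as non-effective

# effective grain size: volume-weighted harmonic mean of component sizes
d_eff = 1.0 / sum(frac[c] / d[c] for c in frac)

# Kozeny-Carman: k = (d^2 / 180) * phi^3 / (1 - phi)^2, in m^2
k = (d_eff ** 2 / 180.0) * phi_eff ** 3 / (1 - phi_eff) ** 2
print(f"d_eff = {d_eff:.2e} m, permeability = {k:.2e} m^2")
```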
Thermodynamics of Thomas-Fermi screened Coulomb systems
NASA Technical Reports Server (NTRS)
Firey, B.; Ashcroft, N. W.
1977-01-01
We obtain, in closed analytic form, estimates of the thermodynamic properties of classical fluids with pair potentials of Yukawa type, with special reference to dense, fully ionized plasmas with Thomas-Fermi or Debye-Hueckel screening. We further generalize the hard-sphere perturbative approach used for similarly screened two-component mixtures, and demonstrate phase separation in this simple model of a liquid mixture of metallic helium and hydrogen.
Modular analysis of the probabilistic genetic interaction network.
Hou, Lin; Wang, Lin; Qian, Minping; Li, Dong; Tang, Chao; Zhu, Yunping; Deng, Minghua; Li, Fangting
2011-03-15
Epistatic Miniarray Profiles (EMAP) has enabled the mapping of large-scale genetic interaction networks; however, the quantitative information gained from EMAP cannot be fully exploited since the data are usually interpreted as a discrete network based on an arbitrary hard threshold. To address such limitations, we adopted a mixture modeling procedure to construct a probabilistic genetic interaction network and then implemented a Bayesian approach to identify densely interacting modules in the probabilistic network. Mixture modeling has been demonstrated as an effective soft-threshold technique of EMAP measures. The Bayesian approach was applied to an EMAP dataset studying the early secretory pathway in Saccharomyces cerevisiae. Twenty-seven modules were identified, and 14 of those were enriched by gold standard functional gene sets. We also conducted a detailed comparison with state-of-the-art algorithms, hierarchical cluster and Markov clustering. The experimental results show that the Bayesian approach outperforms others in efficiently recovering biologically significant modules.
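A minimal sketch of the soft-threshold idea: rather than calling an interaction when an EMAP score crosses a hard cutoff, fit a two-component mixture and use each pair's posterior probability of the interacting component as a probabilistic edge weight. The Gaussian components and simulated scores are illustrative simplifications of the paper's mixture model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(0.0, 1.0, 9000),    # non-interacting pairs
                         rng.normal(-3.0, 1.0, 1000)])  # negative interactions
X = scores.reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
interacting = int(np.argmin(gm.means_.ravel()))     # component with lower mean
edge_weight = gm.predict_proba(X)[:, interacting]   # probabilistic network edges
print("pairs with weight > 0.9:", int((edge_weight > 0.9).sum()))
```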
Objective determination of image end-members in spectral mixture analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.
1993-01-01
Spectral mixture analysis has been shown to be a powerful, multifaceted tool for analysis of multi- and hyper-spectral data. Applications of AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members are selected from an image cube (image end-members) that best account for its spectral variance within a constrained, linear least squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial and error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.
Mixture risk assessment: a case study of Monsanto experiences.
Nair, R S; Dudek, B R; Grothe, D R; Johannsen, F R; Lamb, I C; Martens, M A; Sherman, J H; Stevens, M W
1996-01-01
Monsanto employs several pragmatic approaches for evaluating the toxicity of mixtures. These approaches are similar to those recommended by many national and international agencies. When conducting hazard and risk assessments, priority is always given to using data collected directly on the mixture of concern. To provide an example of the first tier of evaluation, actual data on acute respiratory irritation studies on mixtures were evaluated to determine whether the principle of additivity was applicable to the mixture evaluated. If actual data on the mixture are unavailable, extrapolation across similar mixtures is considered. Because many formulations are quite similar in composition, the toxicity data from one mixture can be extended to a closely related mixture in a scientifically justifiable manner. An example of a family of products where such extrapolations have been made is presented to exemplify this second approach. Lastly, if data on similar mixtures are unavailable, data on component fractions are used to predict the toxicity of the mixture. In this third approach, process knowledge and scientific judgement are used to determine how the known toxicological properties of the individual fractions affect toxicity of the mixture. Three examples of plant effluents where toxicological data on fractions were used to predict the toxicity of the mixture are discussed. The results of the analysis are used to discuss the predictive value of each of the above mentioned toxicological approaches for evaluating chemical mixtures.
Gray, Wayne D; Sims, Chris R; Fu, Wai-Tat; Schoelles, Michael J
2006-07-01
The soft constraints hypothesis (SCH) is a rational analysis approach that holds that the mixture of perceptual-motor and cognitive resources allocated for interactive behavior is adjusted based on temporal cost-benefit tradeoffs. Alternative approaches maintain that cognitive resources are in some sense protected or conserved, in that greater amounts of perceptual-motor effort will be expended to conserve lesser amounts of cognitive effort. One alternative, the minimum memory hypothesis (MMH), holds that people favor strategies that minimize the use of memory. SCH is compared with MMH across 3 experiments and with predictions of an Ideal Performer Model that uses ACT-R's memory system in a reinforcement learning approach that maximizes expected utility by minimizing time. Model and data support the SCH view of resource allocation; at the under 1000-ms level of analysis, mixtures of cognitive and perceptual-motor resources are adjusted based on their cost-benefit tradeoffs for interactive behavior.
Application of Project Portfolio Management
NASA Astrophysics Data System (ADS)
Pankowska, Malgorzata
The main goal of this chapter is to present the application of the project portfolio management approach to support the development of e-Municipality and public administration information systems. The models of how people publish and utilize information on the web have been transformed continually. Instead of simply viewing static web pages, users publish their own content through blogs and photo- and video-sharing sites. The ICT (Information and Communication Technology) projects for municipalities analysed in this chapter cover a mixture of static web pages, e-Government information systems, and Wikis. Thus, the project portfolio management approach is proposed for managing these mixtures of ICT projects.
Organic synthesis in experimental impact shocks
NASA Technical Reports Server (NTRS)
McKay, C. P.; Borucki, W. J.
1997-01-01
Laboratory simulations of shocks created with a high-energy laser demonstrate that the efficacy of organic production depends on the molecular, not just the elemental composition of the shocked gas. In a methane-rich mixture that simulates a low-temperature equilibrium mixture of cometary material, hydrogen cyanide and acetylene were produced with yields of 5 x 10(17) molecules per joule. Repeated shocking of the methane-rich mixture produced amine groups, suggesting the possible synthesis of amino acids. No organic molecules were produced in a carbon dioxide-rich mixture, which is at odds with thermodynamic equilibrium approaches to shock chemistry and has implications for the modeling of shock-produced organic molecules on early Earth.
Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel
2017-05-01
Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users.
Li, Min; Zhang, Lu; Yao, Xiaolong; Jiang, Xingyu
2017-01-01
The emerging membrane introduction mass spectrometry technique has been successfully used to detect benzene, toluene, ethyl benzene and xylene (BTEX), while overlapped spectra have unfortunately hindered its further application to the analysis of mixtures. Multivariate calibration, an efficient method to analyze mixtures, has been widely applied. In this paper, we compared univariate and multivariate analyses for quantification of the individual components of mixture samples. The results showed that the univariate analysis creates poor models with regression coefficients of 0.912, 0.867, 0.440 and 0.351 for BTEX, respectively. For multivariate analysis, a comparison to the partial-least squares (PLS) model shows that the orthogonal partial-least squares (OPLS) regression exhibits an optimal performance with regression coefficients of 0.995, 0.999, 0.980 and 0.976, favorable calibration parameters (RMSEC and RMSECV) and a favorable validation parameter (RMSEP). Furthermore, the OPLS exhibits a good recovery of 73.86 - 122.20% and relative standard deviation (RSD) of the repeatability of 1.14 - 4.87%. Thus, MIMS coupled with the OPLS regression provides an optimal approach for a quantitative BTEX mixture analysis in monitoring and predicting water pollution.
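As a sketch of the multivariate-calibration step, the snippet below cross-validates a PLS model on synthetic overlapped "spectra" from a four-component mixture; scikit-learn provides PLSRegression but not OPLS, so plain PLS stands in here (OPLS additionally removes variation orthogonal to the response before regression). All data are simulated.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n_samples, n_channels, n_analytes = 60, 200, 4
pure = np.abs(rng.normal(size=(n_analytes, n_channels)))  # pure-component spectra
Y = rng.uniform(0, 1, size=(n_samples, n_analytes))       # concentrations
X = Y @ pure + rng.normal(0, 0.05, size=(n_samples, n_channels))  # mixtures

pls = PLSRegression(n_components=6)
Y_cv = cross_val_predict(pls, X, Y, cv=5)        # cross-validated predictions
rmsecv = np.sqrt(((Y - Y_cv) ** 2).mean(axis=0))
print("RMSECV per analyte:", np.round(rmsecv, 3))
```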
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-06-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distribution to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
A quantitative approach to combine sources in stable isotope mixing models
Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...
Ma, Dehua; Chen, Lujun; Zhu, Xiaobiao; Li, Feifei; Liu, Cong; Liu, Rui
2014-05-01
To date, toxicological studies of endocrine disrupting chemicals (EDCs) have typically focused on single chemical exposures and associated effects. However, exposure to EDC mixtures in the environment is common. Antiandrogens represent a group of EDCs which draw increasing attention due to their resultant demasculinization and sexual disruption of aquatic organisms. Although there are a number of in vivo and in vitro studies investigating the combined effects of antiandrogen mixtures, these studies mainly concern selected model compounds such as flutamide, procymidone, and vinclozolin. The aim of the present study is to investigate the combined antiandrogenic effects of parabens, which are widely used antiandrogens in industrial and domestic commodities. A yeast-based human androgen receptor (hAR) assay (YAS) was applied to assess the antiandrogenic activities of n-propylparaben (nPrP), iso-propylparaben (iPrP), methylparaben (MeP), and 4-n-pentylphenol (PeP), as well as binary mixtures of nPrP with each of the other three antiandrogens. All four compounds exhibited antiandrogenic activity via the hAR. A linear interaction model was applied to quantitatively analyze the interaction between nPrP and each of the other three antiandrogens. The isoboles method was modified to show the variation of combined effects as the concentrations of the mixed antiandrogens were changed. Graphs were constructed to show isoeffective curves of the three binary mixtures based on the fitted linear interaction model and to evaluate the interaction of the mixed antiandrogens (synergism or antagonism). The combined effect of equimolar combinations of the three mixtures was also considered with the nonlinear isoboles method. The main effect parameters and interaction effect parameters in the linear interaction models of the three mixtures were different from zero. The results showed that any two antiandrogens in their binary mixtures tended to exert equal antiandrogenic activity in the linear concentration ranges. The antiandrogenicity of the binary mixture and the concentration of nPrP fitted a sigmoidal model if the concentrations of the other antiandrogens (iPrP, MeP, and PeP) in the mixture were lower than the AR saturation concentrations. Some concave isoboles above the additivity line appeared in all three mixtures. There were some synergistic effects of the binary mixture of nPrP and MeP at low concentrations in the linear concentration ranges. Interestingly, when the antiandrogen concentrations approached saturation, the interactions between chemicals were antagonistic for all three mixtures tested. When the toxicity of the three mixtures was assessed using nonlinear isoboles, only antagonism was observed for equimolar combinations of nPrP and iPrP as the concentrations were increased from the no-observed-effect concentration (NOEC) to the 80% effective concentration. In addition, the interactions changed from synergistic to antagonistic as effective concentrations increased in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP. The combined effects of the three binary antiandrogen mixtures in the linear ranges were successfully evaluated by curve fitting and isoboles. The combined effects of specific binary mixtures varied depending on the concentrations of the chemicals in the mixtures. At low concentrations in the linear concentration ranges, synergistic interaction existed in the binary mixture of nPrP and MeP. The interaction tended to be antagonistic as the antiandrogens approached saturation concentrations in mixtures of nPrP with each of the other three antiandrogens. The synergistic interaction was also found in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP, at low concentrations with the nonlinear isoboles method. The mixture activities of binary antiandrogens had a tendency towards antagonism at high concentrations and synergism at low concentrations.
Weibull mixture regression for marginal inference in zero-heavy continuous outcomes.
Gebregziabher, Mulugeta; Voronca, Delia; Teklehaimanot, Abeba; Santa Ana, Elizabeth J
2017-06-01
Continuous outcomes with a preponderance of zero values are ubiquitous in data that arise from biomedical studies, for example studies of addictive disorders. This is known to lead to violation of standard assumptions in parametric inference and enhances the risk of misleading conclusions unless managed properly. Two-part models are commonly used to deal with this problem. However, standard two-part models have limitations with respect to obtaining parameter estimates that have a marginal interpretation of covariate effects, which is important in many biomedical applications. Recently, marginalized two-part models have been proposed, but their development is limited to log-normal and log-skew-normal distributions. Thus, in this paper, we propose a finite mixture approach, with Weibull mixture regression as a special case, to deal with the problem. We use an extensive simulation study to assess the performance of the proposed model in finite samples and to make comparisons with other families of models via statistical information and mean squared error criteria. We demonstrate its application on real data from a randomized controlled trial of addictive disorders. Our results show that a two-component Weibull mixture model is preferred for modeling zero-heavy continuous data when the non-zero part is simulated from a Weibull or similar distribution such as the Gamma or truncated Gaussian.
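A minimal sketch of a likelihood for zero-heavy continuous outcomes of the kind discussed here: a point mass at zero with probability pi plus a Weibull density for the positive part. The paper's model is a Weibull mixture regression with covariates and marginalized effects; this collapses it to a single Weibull with no covariates for brevity.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(4)
# simulate: 30% structural zeros, positives ~ Weibull(shape=1.5, scale=2)
y = np.where(rng.uniform(size=500) < 0.3, 0.0,
             weibull_min.rvs(c=1.5, scale=2.0, size=500, random_state=5))

def neg_log_lik(theta):
    pi = 1 / (1 + np.exp(-theta[0]))            # zero-mass probability
    c, scale = np.exp(theta[1]), np.exp(theta[2])
    ll_zero = np.log(pi) * (y == 0).sum()
    ll_pos = np.sum(np.log1p(-pi) +
                    weibull_min.logpdf(y[y > 0], c, scale=scale))
    return -(ll_zero + ll_pos)

fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], method='Nelder-Mead')
pi_hat = 1 / (1 + np.exp(-fit.x[0]))
print(f"pi = {pi_hat:.2f}, shape = {np.exp(fit.x[1]):.2f}, "
      f"scale = {np.exp(fit.x[2]):.2f}")
```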
Neelon, Brian; Gelfand, Alan E.; Miranda, Marie Lynn
2013-01-01
Researchers in the health and social sciences often wish to examine joint spatial patterns for two or more related outcomes. Examples include infant birth weight and gestational length, psychosocial and behavioral indices, and educational test scores from different cognitive domains. We propose a multivariate spatial mixture model for the joint analysis of continuous individual-level outcomes that are referenced to areal units. The responses are modeled as a finite mixture of multivariate normals, which accommodates a wide range of marginal response distributions and allows investigators to examine covariate effects within subpopulations of interest. The model has a hierarchical structure built at the individual level (i.e., individuals are nested within areal units), and thus incorporates both individual- and areal-level predictors as well as spatial random effects for each mixture component. Conditional autoregressive (CAR) priors on the random effects provide spatial smoothing and allow the shape of the multivariate distribution to vary flexibly across geographic regions. We adopt a Bayesian modeling approach and develop an efficient Markov chain Monte Carlo model fitting algorithm that relies primarily on closed-form full conditionals. We use the model to explore geographic patterns in end-of-grade math and reading test scores among school-age children in North Carolina. PMID:26401059
Mapping behavioral landscapes for animal movement: a finite mixture modeling approach
Tracey, Jeff A.; Zhu, Jun; Boydston, Erin E.; Lyren, Lisa M.; Fisher, Robert N.; Crooks, Kevin R.
2013-01-01
Because of its role in many ecological processes, movement of animals in response to landscape features is an important subject in ecology and conservation biology. In this paper, we develop models of animal movement in relation to objects or fields in a landscape. We take a finite mixture modeling approach in which the component densities are conceptually related to different choices for movement in response to a landscape feature, and the mixing proportions are related to the probability of selecting each response as a function of one or more covariates. We combine particle swarm optimization and an Expectation-Maximization (EM) algorithm to obtain maximum likelihood estimates of the model parameters. We use this approach to analyze data for movement of three bobcats in relation to urban areas in southern California, USA. A behavioral interpretation of the models revealed similarities and differences in bobcat movement response to urbanization. All three bobcats avoided urbanization by moving either parallel to urban boundaries or toward less urban areas as the proportion of urban land cover in the surrounding area increased. However, one bobcat, a male with a dispersal-like large-scale movement pattern, avoided urbanization at lower densities and responded strictly by moving parallel to the urban edge. The other two bobcats, which were both residents and occupied similar geographic areas, avoided urban areas using a combination of movements parallel to the urban edge and movement toward areas of less urbanization. However, the resident female appeared to exhibit greater repulsion at lower levels of urbanization than the resident male, consistent with empirical observations of bobcats in southern California. Using the parameterized finite mixture models, we mapped behavioral states to geographic space, creating a representation of a behavioral landscape. This approach can provide guidance for conservation planning based on analysis of animal movement data using statistical models, thereby linking connectivity evaluations to empirical data.
A comparative study of mixture cure models with covariate
NASA Astrophysics Data System (ADS)
Leng, Oh Yit; Khalid, Zarina Mohd
2017-05-01
In survival analysis, the survival time is assumed to follow a non-negative distribution, such as the exponential, Weibull, or log-normal distribution. In some cases, the survival time is influenced by observed factors, and omitting them may cause inaccurate estimation of the survival function. Therefore, a survival model which incorporates the influences of observed factors is more appropriate in such cases. These observed factors are included in the survival model as covariates. Besides that, there are cases where a group of individuals is cured, that is, never experiences the event of interest. Ignoring the cure fraction may lead to overestimation of the survival function. Thus, a mixture cure model is more suitable for modelling survival data with the presence of a cure fraction. In this study, three mixture cure survival models are used to analyse survival data with a covariate and a cure fraction. The first model includes the covariate in the parameterization of the survival function of susceptible individuals, the second model allows the cure fraction to depend on the covariate, and the third model incorporates the covariate in both the cure fraction and the survival function of susceptible individuals. This study aims to compare the performance of these models via a simulation approach. Therefore, survival data with varying sample sizes and cure fractions are simulated, with the survival time assumed to follow the Weibull distribution. The simulated data are then modelled using the three mixture cure survival models. The results show that the three mixture cure models are more appropriate for modelling survival data in the presence of a cure fraction and an observed factor.
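The mixture cure structure shared by the three compared models can be sketched directly: S(t|x) = pi(x) + (1 - pi(x)) * S_u(t|x), with a logistic link for the cure fraction and a Weibull survival for susceptibles, so the population survival plateaus at pi(x) rather than dropping to zero. All coefficients below are illustrative assumptions.

```python
import numpy as np

def cure_fraction(x, b0=-1.0, b1=0.8):
    # logistic link for the cured proportion pi(x)
    return 1 / (1 + np.exp(-(b0 + b1 * x)))

def s_susceptible(t, x, shape=1.5, scale0=5.0, g1=-0.3):
    # Weibull survival for susceptibles; covariate shifts the scale
    scale = scale0 * np.exp(g1 * x)
    return np.exp(-(t / scale) ** shape)

def s_population(t, x):
    pi = cure_fraction(x)
    return pi + (1 - pi) * s_susceptible(t, x)

t = np.linspace(0, 20, 5)
print(np.round(s_population(t, x=1.0), 3))   # plateaus at pi(x), not 0
```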
ERIC Educational Resources Information Center
Connell, Arin M.; Dishion, Thomas J.; Deater-Deckard, Kirby
2006-01-01
This 4-year study of 698 young adolescents examined the covariates of early onset substance use from Grade 6 through Grade 9. The youth were randomly assigned to a family-centered Adolescent Transitions Program (ATP) condition. Variable-centered (zero-inflated Poisson growth model) and person-centered (latent growth mixture model) approaches were…
Near-infrared reflectance spectra of mixtures of kaolin-group minerals: Use in clay mineral studies
Crowley, James K.; Vergo, Norma
1988-01-01
Near-infrared (NIR) reflectance spectra for mixtures of ordered kaolinite and ordered dickite have been found to simulate the spectral response of disordered kaolinite. The amount of octahedral vacancy disorder in nine disordered kaolinite samples was estimated by comparing the sample spectra to the spectra of reference mixtures. The resulting estimates are consistent with previously published estimates of vacancy disorder for similar kaolin minerals that were modeled from calculated X-ray diffraction patterns. The ordered kaolinite and dickite samples used in the reference mixtures were carefully selected to avoid undesirable particle size effects that could bias the spectral results. NIR spectra were also recorded for laboratory mixtures of ordered kaolinite and halloysite to assess whether the spectra could be potentially useful for determining mineral proportions in natural physical mixtures of these two clays. Although the kaolinite-halloysite proportions could only be roughly estimated from the mixture spectra, the halloysite component was evident even when halloysite was present in only minor amounts. A similar approach using NIR spectra for laboratory mixtures may have applications in other studies of natural clay mixtures.
Krishnamurthy, Krish
2013-12-01
The intrinsic quantitative nature of NMR is increasingly exploited in areas ranging from complex mixture analysis (as in metabolomics and reaction monitoring) to quality assurance/control. Complex NMR spectra are more common than not, and therefore, extraction of quantitative information generally involves significant prior knowledge and/or operator interaction to characterize resonances of interest. Moreover, in most NMR-based metabolomic experiments, the signals from metabolites are normally present as a mixture of overlapping resonances, making quantification difficult. Time-domain Bayesian approaches have been reported to be better than conventional frequency-domain analysis at identifying subtle changes in signal amplitude. We discuss an approach that exploits Bayesian analysis to achieve a complete reduction to amplitude frequency table (CRAFT) in an automated and time-efficient fashion - thus converting the time-domain FID to a frequency-amplitude table. CRAFT uses a two-step approach to FID analysis. First, the FID is digitally filtered and downsampled to several sub FIDs, and secondly, these sub FIDs are then modeled as sums of decaying sinusoids using the Bayesian approach. CRAFT tables can be used for further data mining of quantitative information using fingerprint chemical shifts of compounds of interest and/or statistical analysis of modulation of chemical quantity in a biological study (metabolomics) or process study (reaction monitoring) or quality assurance/control. The basic principles behind this approach as well as results to evaluate the effectiveness of this approach in mixture analysis are presented.
Fish Consumption Advisories: Toward a Unified, Scientifically Credible Approach
A model is proposed for fish consumption advisories based on consensus-derived risk assessment values for common contaminants in fish and the latest risk assessment methods. The model accounts in part for the expected toxicity of mixtures of chemicals, the underlying uncertainties...
Feder, Paul I; Ma, Zhenxu J; Bull, Richard J; Teuschler, Linda K; Rice, Glenn
2009-01-01
In chemical mixtures risk assessment, the use of dose-response data developed for one mixture to estimate risk posed by a second mixture depends on whether the two mixtures are sufficiently similar. While evaluations of similarity may be made using qualitative judgments, this article uses nonparametric statistical methods based on the "bootstrap" resampling technique to address the question of similarity among mixtures of chemical disinfectant by-products (DBP) in drinking water. The bootstrap resampling technique is a general-purpose, computer-intensive approach to statistical inference that substitutes empirical sampling for theoretically based parametric mathematical modeling. Nonparametric, bootstrap-based inference involves fewer assumptions than parametric normal theory based inference. The bootstrap procedure is appropriate, at least in an asymptotic sense, whether or not the parametric, distributional assumptions hold, even approximately. The statistical analysis procedures in this article are initially illustrated with data from 5 water treatment plants (Schenck et al., 2009), and then extended using data developed from a study of 35 drinking-water utilities (U.S. EPA/AMWA, 1989), which permits inclusion of a greater number of water constituents and increased structure in the statistical models.
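A minimal sketch of the bootstrap logic: resample concentrations of one DBP constituent from two plants and form a percentile interval for the difference in means; an interval excluding zero argues against treating the two mixtures as similar for that constituent. The data are simulated, not from the cited utility studies.

```python
import numpy as np

rng = np.random.default_rng(6)
plant_a = rng.lognormal(mean=2.0, sigma=0.4, size=40)   # constituent, ug/L
plant_b = rng.lognormal(mean=2.3, sigma=0.4, size=40)

B = 5000
diffs = np.empty(B)
for b in range(B):
    # resample each plant with replacement and record the mean difference
    diffs[b] = (rng.choice(plant_a, plant_a.size).mean() -
                rng.choice(plant_b, plant_b.size).mean())

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% bootstrap CI for mean difference: ({lo:.2f}, {hi:.2f})")
```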
A mixture model for robust registration in Kinect sensor
NASA Astrophysics Data System (ADS)
Peng, Li; Zhou, Huabing; Zhu, Shengguo
2018-03-01
The Microsoft Kinect sensor has been widely used in many applications, but it suffers from low registration precision between the color image and the depth image. In this paper, we present a robust method to improve the registration precision using a mixture model that can handle multiple images with a nonparametric model. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). The estimation is performed by the EM algorithm, which obtains good estimates by also estimating the variance of the prior model. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.
NASA Astrophysics Data System (ADS)
Natarajan, Jayaprakash
Coal derived synthetic gas (syngas) fuel is a promising solution for today's increasing demand for clean and reliable power. Syngas fuels are primarily mixtures of H2 and CO, often with large amounts of diluents such as N2, CO2, and H2O. The specific composition depends upon the fuel source and gasification technique. This requires gas turbine designers to develop fuel flexible combustors capable of operating with high conversion efficiency while maintaining low emissions for a wide range of syngas fuel mixtures. Design tools often used in combustor development require data on various fundamental gas combustion properties. For example, laminar flame speed is often an input as it has a significant impact upon the size and static stability of the combustor. Moreover, it serves as a good validation parameter for leading kinetic models used for detailed combustion simulations. Thus the primary objective of this thesis is measurement of laminar flame speeds of syngas fuel mixtures at conditions relevant to ground-power gas turbines. To accomplish this goal, two flame speed measurement approaches were developed: a Bunsen flame approach modified to use the reaction zone area in order to reduce the influence of flame curvature on the measured flame speed, and a stagnation flame approach employing a rounded bluff body. The modified Bunsen flame approach was validated against stretch-corrected approaches over a range of fuels and test conditions; the agreement is very good (less than 10% difference). Using the two measurement approaches, extensive flame speed data were obtained for lean syngas mixtures at a range of conditions: (1) 5 to 100% H2 in the H2/CO fuel mixture; (2) 300-700 K preheat temperature; (3) 1 to 15 atm pressure; and (4) 0-70% dilution with CO2 or N2. The second objective of this thesis is to use the flame speed data to validate leading kinetic mechanisms for syngas combustion. Comparisons of the experimental flame speeds to those predicted using detailed numerical simulations of strained and unstrained laminar flames indicate that all the current kinetic mechanisms tend to overpredict the increase in flame speed with preheat temperature for medium and high H2 content fuel mixtures. A sensitivity analysis that includes reported uncertainties in rate constants reveals that errors in the rate constants of the reactions involving HO2 seem to be the most likely cause of the observed higher preheat temperature dependence of the flame speeds. To enhance the accuracy of the current models, a more detailed sensitivity analysis based on temperature dependent reaction rate parameters should be considered, as the problem appears to lie in the intermediate temperature range (~800-1200 K).
Review: Modelling chemical kinetics and convective heating in giant planet entries
NASA Astrophysics Data System (ADS)
Reynier, Philippe; D'Ammando, Giuliano; Bruno, Domenico
2018-01-01
A review of the existing chemical kinetics models for H2 / He mixtures and related transport and thermodynamic properties is presented as a pre-requisite towards the development of innovative models based on the state-to-state approach. A survey of the available results obtained during the mission preparation and post-flight analyses of the Galileo mission has been undertaken and a computational matrix has been derived. Different chemical kinetics schemes for hydrogen/helium mixtures have been applied to numerical simulations of the selected points along the entry trajectory. First, a reacting scheme, based on literature data, has been set up for computing the flow-field around the probe at high altitude and comparisons with existing numerical predictions are performed. Then, a macroscopic model derived from a state-to-state model has been constructed and incorporated into a CFD code. Comparisons with existing numerical results from the literature have been performed as well as cross-check comparisons between the predictions provided by the different models in order to evaluate the potential of innovative chemical kinetics models based on the state-to-state approach.
Using statistical equivalence testing logic and mixed model theory, an approach has been developed that extends the work of Stork et al. (JABES, 2008) to define sufficient similarity in dose-response for chemical mixtures containing the same chemicals in different ratios ...
DOT National Transportation Integrated Search
2001-06-01
A mechanistic approach to fatigue characterization of asphalt-aggregate mixtures is presented in this volume. This approach is founded on a uniaxial viscoelastic correspondence principle, which is applied to evaluate damage growth and healing in cy...
Adapting cultural mixture modeling for continuous measures of knowledge and memory fluency.
Tan, Yin-Yin Sarah; Mueller, Shane T
2016-09-01
Previous research (e.g., cultural consensus theory (Romney, Weller, & Batchelder, American Anthropologist, 88, 313-338, 1986); cultural mixture modeling (Mueller & Veinott, 2008)) has used overt response patterns (i.e., responses to questionnaires and surveys) to identify whether a group shares a single coherent attitude or belief set. Yet many domains in social science have focused on implicit attitudes that are not apparent in overt responses but still may be detected via response time patterns. We propose a method for modeling response times as a mixture of Gaussians, adapting the strong-consensus model of cultural mixture modeling to model this implicit measure of knowledge strength. We report the results of two behavioral experiments and one simulation experiment that establish the usefulness of the approach, as well as some of the boundary conditions under which distinct groups of shared agreement might be recovered, even when the group identity is not known. The results reveal that the ability to recover and identify shared-belief groups depends on (1) the level of noise in the measurement, (2) the differential signals for strong versus weak attitudes, and (3) the similarity between group attitudes. Consequently, the method shows promise for identifying latent groups among a population whose overt attitudes do not differ, but whose implicit or covert attitudes or knowledge may differ.
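The core estimation step, stripped of the consensus structure, is an ordinary finite-mixture fit; a sketch in Python with simulated log response times (all group parameters hypothetical):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical log response times: a faster "strong-knowledge" group
# and a slower "weak-knowledge" group.
rt = np.concatenate([rng.normal(6.2, 0.15, 120),   # ~ 500 ms
                     rng.normal(6.9, 0.25, 80)])   # ~ 1000 ms

gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(rt.reshape(-1, 1))
labels = gmm.predict(rt.reshape(-1, 1))  # recovered latent groups
print(gmm.means_.ravel(), gmm.weights_)
```

The boundary conditions identified in the paper (measurement noise, strong-versus-weak signal separation, between-group similarity) map directly onto how well such a fit recovers the two components.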
Speech Enhancement Using Gaussian Scale Mixture Models
Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.
2011-01-01
This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided higher signal-to-noise ratio (SNR) and those reconstructed from the estimated log-spectra produced lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
Chemometric Data Analysis for Deconvolution of Overlapped Ion Mobility Profiles
NASA Astrophysics Data System (ADS)
Zekavat, Behrooz; Solouki, Touradj
2012-11-01
We present the details of a data analysis approach for deconvolution of ion mobility (IM) overlapped or unresolved species. This approach takes advantage of ion fragmentation variations as a function of the IM arrival time. The data analysis involves the use of an in-house developed data preprocessing platform for the conversion of the original post-IM/collision-induced dissociation mass spectrometry (post-IM/CID MS) data to a Matlab compatible format for chemometric analysis. We show that principal component analysis (PCA) can be used to examine the post-IM/CID MS profiles for the presence of mobility-overlapped species. Subsequently, using an interactive self-modeling mixture analysis technique, we show how to calculate the total IM spectrum (TIMS) and CID mass spectrum for each component of the IM overlapped mixtures. Moreover, we show that PCA and IM deconvolution techniques provide complementary results to evaluate the validity of the calculated TIMS profiles. We use two binary mixtures with overlapping IM profiles, including (1) a mixture of two non-isobaric peptides (neurotensin (RRPYIL) and a hexapeptide (WHWLQL)), and (2) an isobaric sugar isomer mixture of raffinose and maltotriose, to demonstrate the applicability of the IM deconvolution approach.
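A reduced sketch of the PCA screening step, with simulated stand-ins for the post-IM/CID data (IM profiles, fragment spectra, and noise levels are all hypothetical):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Rows = IM arrival-time bins, columns = m/z channels. Two species with
# overlapping IM profiles but distinct CID fragment spectra.
t = np.linspace(0, 10, 200)
prof1 = np.exp(-(t - 4.5) ** 2 / 0.8)
prof2 = np.exp(-(t - 5.5) ** 2 / 0.8)
spec1, spec2 = rng.random(300), rng.random(300)
X = np.outer(prof1, spec1) + np.outer(prof2, spec2)
X += rng.normal(0.0, 0.01, X.shape)              # instrument noise

pca = PCA(n_components=4).fit(X)
# A pure IM peak is effectively rank one; a second non-noise component
# flags mobility-overlapped species, as in the approach described above.
print(pca.explained_variance_ratio_)
```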
Bayesian Hierarchical Grouping: perceptual grouping as mixture estimation
Froyen, Vicky; Feldman, Jacob; Singh, Manish
2015-01-01
We propose a novel framework for perceptual grouping based on the idea of mixture models, called Bayesian Hierarchical Grouping (BHG). In BHG we assume that the configuration of image elements is generated by a mixture of distinct objects, each of which generates image elements according to some generative assumptions. Grouping, in this framework, means estimating the number and the parameters of the mixture components that generated the image, including estimating which image elements are “owned” by which objects. We present a tractable implementation of the framework, based on the hierarchical clustering approach of Heller and Ghahramani (2005). We illustrate it with examples drawn from a number of classical perceptual grouping problems, including dot clustering, contour integration, and part decomposition. Our approach yields an intuitive hierarchical representation of image elements, giving an explicit decomposition of the image into mixture components, along with estimates of the probability of various candidate decompositions. We show that BHG accounts well for a diverse range of empirical data drawn from the literature. Because BHG provides a principled quantification of the plausibility of grouping interpretations over a wide range of grouping problems, we argue that it provides an appealing unifying account of the elusive Gestalt notion of Prägnanz. PMID:26322548
Discrete Velocity Models for Polyatomic Molecules Without Nonphysical Collision Invariants
NASA Astrophysics Data System (ADS)
Bernhoff, Niclas
2018-05-01
An important aspect of constructing discrete velocity models (DVMs) for the Boltzmann equation is to obtain the right number of collision invariants. Unlike for the Boltzmann equation, for DVMs there can appear extra collision invariants, so-called spurious collision invariants, in addition to the physical ones. A DVM with only physical collision invariants, and hence without spurious ones, is called normal. The construction of such normal DVMs has been studied extensively in the literature for single species, but also for binary mixtures and, recently, for multicomponent mixtures. In this paper, we address ways of constructing normal DVMs for polyatomic molecules (here represented by assigning each molecule an internal energy, to account for non-translational energies, which can change during collisions), under the assumption that the set of allowed internal energies is finite. We present general algorithms for constructing such models, but we also give concrete examples of such constructions. This approach can also be combined with similar constructions for multicomponent mixtures to obtain multicomponent mixtures with polyatomic molecules, which is also briefly outlined. Chemical reactions can then be added as well.
A continuum theory for multicomponent chromatography modeling.
Pfister, David; Morbidelli, Massimo; Nicoud, Roger-Marc
2016-05-13
A continuum theory is proposed for modeling multicomponent chromatographic systems under linear conditions. The model is based on the description of complex mixtures, possibly involving tens or hundreds of solutes, by a continuum. The present approach is shown to be very efficient when dealing with a large number of similar components that present close elution behaviors and whose individual analytical characterization is impossible. Moreover, approximating complex mixtures by continuous distributions of solutes reduces the required number of model parameters to the few specific to the characterization of the selected continuous distributions. Therefore, within the framework of the continuum theory, the simulation of large multicomponent systems is simplified and the computational effectiveness of the chromatographic model is dramatically improved. Copyright © 2016 Elsevier B.V. All rights reserved.
Wan, Wai-Yin; Chan, Jennifer S K
2009-08-01
For time series of count data, correlated measurements, clustering, and excessive zeros occur simultaneously in biomedical applications. Ignoring such effects might contribute to misleading treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally proposed for modelling positive continuous data and was later extended to handle count data. These models are motivated by evaluating the trend development of new tumour counts for bladder cancer patients as well as by identifying useful covariates which affect the count level. The models are implemented using a Bayesian method with Markov chain Monte Carlo (MCMC) algorithms and are assessed using the deviance information criterion (DIC).
Phenomenological Modeling and Laboratory Simulation of Long-Term Aging of Asphalt Mixtures
NASA Astrophysics Data System (ADS)
Elwardany, Michael Dawoud
The accurate characterization of asphalt mixture properties as a function of pavement service life is becoming more important as more powerful pavement design and performance prediction methods are implemented. Oxidative aging is a major distress mechanism of asphalt pavements. Aging increases the stiffness and brittleness of the material, which leads to a high cracking potential. Thus, an improved understanding of the aging phenomenon and its effect on asphalt binder chemical and rheological properties will allow for the prediction of mixture properties as a function of pavement service life. Many researchers have conducted laboratory binder thin-film aging studies; however, this approach does not allow for studying the physicochemical effects of mineral fillers on age hardening rates in asphalt mixtures. Moreover, the aging phenomenon in the field is governed by the kinetics of binder oxidation, oxygen diffusion through the mastic phase, and oxygen percolation throughout the air void structure. In this study, laboratory aging trials were conducted on mixtures prepared using component materials of several field projects throughout the USA and Canada. Laboratory aged materials were compared against field cores sampled at different ages. Results suggested that oven aging of loose mixture at 95°C is the most promising laboratory long-term aging method. Additionally, an empirical model was developed to account for the effect of mineral fillers on age hardening rates in asphalt mixtures. Kinetics modeling was used to predict field aging levels throughout the pavement thickness and to determine the required laboratory aging duration to match field aging. Kinetics model outputs are calibrated using measured data from the field to account for the effects of oxygen diffusion and percolation. Finally, the calibrated model was validated using an independent set of field sections. This work is expected to provide a basis for improved asphalt mixture and pavement design procedures in order to save taxpayers' money.
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; and d) a unique U.S. asset for science product validation and verification.
An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes
ERIC Educational Resources Information Center
Kaplan, David
2008-01-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…
Accommodating Missing Data in Mixture Models for Classification by Opinion-Changing Behavior.
ERIC Educational Resources Information Center
Hill, Jennifer L.
2001-01-01
Explored the assumptions implicit in models reflecting three different approaches to missing survey response data using opinion data collected from Swiss citizens at four time points over nearly 2 years. Results suggest that the latently ignorable model has the least restrictive structural assumptions. Discusses the idea of "durable…
Mechanism-Based Classification of PAH Mixtures to Predict Carcinogenic Potential.
Tilton, Susan C; Siddens, Lisbeth K; Krueger, Sharon K; Larkin, Andrew J; Löhr, Christiane V; Williams, David E; Baird, William M; Waters, Katrina M
2015-07-01
We have previously shown that relative potency factors and DNA adduct measurements are inadequate for predicting carcinogenicity of certain polycyclic aromatic hydrocarbons (PAHs) and PAH mixtures, particularly those that function through alternate pathways or exhibit greater promotional activity compared to benzo[a]pyrene (BaP). Therefore, we developed a pathway-based approach for classification of tumor outcome after dermal exposure to PAH/mixtures. FVB/N mice were exposed to dibenzo[def,p]chrysene (DBC), BaP, or environmental PAH mixtures (Mix 1-3) following a 2-stage initiation/promotion skin tumor protocol. Resulting tumor incidence could be categorized by carcinogenic potency as DBC > BaP = Mix2 = Mix3 > Mix1 = Control, based on statistical significance. Gene expression profiles measured in skin of mice collected 12 h post-initiation were compared with tumor outcome for identification of short-term bioactivity profiles. A Bayesian integration model was utilized to identify biological pathways predictive of PAH carcinogenic potential during initiation. Integration of probability matrices from four enriched pathways (P < .05) for DNA damage, apoptosis, response to chemical stimulus, and interferon gamma signaling resulted in the highest classification accuracy with leave-one-out cross validation. This pathway-driven approach was successfully utilized to distinguish early regulatory events during initiation prognostic for tumor outcome and provides proof-of-concept for using short-term initiation studies to classify carcinogenic potential of environmental PAH mixtures. These data further provide a 'source-to-outcome' model that could be used to predict PAH interactions during tumorigenesis and provide an example of how mode-of-action-based risk assessment could be employed for environmental PAH mixtures. © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Mathematical modeling of a single stage ultrasonically assisted distillation process.
Mahdi, Taha; Ahmad, Arshad; Ripin, Adnan; Abdullah, Tuan Amran Tuan; Nasef, Mohamed M; Ali, Mohamad W
2015-05-01
The ability of sonication to facilitate the separation of azeotropic mixtures presents a promising approach for developing distillation systems that are more intensified and efficient than conventional ones. To expedite the much-needed development, a mathematical model of the system based on conservation principles, vapor-liquid equilibrium, and sonochemistry was developed in this study. The model, founded on a single-stage vapor-liquid equilibrium system enhanced with ultrasonic waves, was coded in the MATLAB simulator and validated with experimental data for an ethanol-ethyl acetate mixture. The effects of both ultrasonic frequency and intensity on the relative volatility and azeotropic point were examined, and the optimal conditions were obtained using a genetic algorithm. The experimental data validated the model with reasonable accuracy. The results of this study revealed that the azeotropic point of the mixture can be totally eliminated with the right combination of sonication parameters, which can be utilized in design efforts towards establishing a workable ultrasonically intensified distillation system. Copyright © 2014 Elsevier B.V. All rights reserved.
Large-eddy simulation of turbulent cavitating flow in a micro channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Hickel, Stefan; Schmidt, Steffen J.
2014-08-15
Large-eddy simulations (LES) of cavitating flow of a Diesel-fuel-like fluid in a generic throttle geometry are presented. Two-phase regions are modeled by a parameter-free thermodynamic equilibrium mixture model, and compressibility of the liquid and the liquid-vapor mixture is taken into account. The Adaptive Local Deconvolution Method (ALDM), adapted for cavitating flows, is employed for discretizing the convective terms of the Navier-Stokes equations for the homogeneous mixture. ALDM is a finite-volume-based implicit LES approach that merges physically motivated turbulence modeling and numerical discretization. Validation of the numerical method is performed for a cavitating turbulent mixing layer. Comparisons with experimental data of the throttle flow at two different operating conditions are presented. The LES with the employed cavitation modeling predicts relevant flow and cavitation features accurately within the uncertainty range of the experiment. The turbulence structure of the flow is further analyzed with an emphasis on the interaction between cavitation and coherent motion, and on the statistically averaged-flow evolution.
Picker, K M; Bikane, F
2001-08-01
The aim of the study is to use the 3D modeling technique of compaction cycles for the analysis of binary and ternary mixtures. Three materials with very different deformation and densification characteristics [cellulose acetate (CAC), dicalcium phosphate dihydrate (EM), and theophylline monohydrate (TM)] were tableted at graded maximum relative densities (ρrel,max) on an eccentric tableting machine. Following that, graded binary mixtures of CAC and EM were compacted. Finally, the same ratios of CAC and EM were tableted in a ternary mixture with 20 vol% TM. All compaction cycles were analyzed using different data analysis methods: three-dimensional modeling, conventional determination of the slope of the Heckel function, determination of the elastic recovery during decompression, and calculations according to the pressure-time function. The results show that the 3D modeling technique obtains the information in one step instead of three different approaches, which is an advantage for formulation development. The results also show that this model makes it possible to distinguish the compaction properties of mixtures and the interaction of the components in the tablet better than 2D models. Furthermore, the information from 3D modeling is more precise, since elasticity is included in the slope K of the in-die Heckel plot, and plastic deformation due to pressure is included in the parameters β and γ of the pressure-time function. The influence of time and pressure on the displacement can now be differentiated.
Rodea-Palomares, Ismael; González-Pleiter, Miguel; Martín-Betancor, Keila; Rosal, Roberto; Fernández-Piñas, Francisca
2015-01-01
Understanding the effects of exposure to chemical mixtures is a common goal of pharmacology and ecotoxicology. In risk assessment-oriented ecotoxicology, defining the scope of application of additivity models has received utmost attention in the last 20 years, since they potentially allow one to predict the effect of any chemical mixture relying on individual chemical information only. The gold standard for additivity in ecotoxicology has proven to be Loewe additivity, which originated the so-called Concentration Addition (CA) additivity model. In pharmacology, the search for interactions or deviations from additivity (synergism and antagonism) has similarly captured the attention of researchers over the last 20 years and has resulted in the definition and application of the Combination Index (CI) Theorem. CI is based on Loewe additivity, but focused on the identification and quantification of synergism and antagonism. Despite additive models demonstrating surprisingly good predictive power in chemical mixture risk assessment, concerns still exist due to the occurrence of unpredictable synergism or antagonism in certain experimental situations. In the present work, we summarize the parallel histories of development of the CA, Independent Action (IA), and CI models. We also summarize the applicability of these concepts in ecotoxicology and how their information may be integrated, as well as the possibility of predicting synergism. Inside the box, the main question remaining is whether it is worthwhile to consider departures from additivity in mixture risk assessment and how to predict interactions among certain mixture components. Outside the box, the main question is whether the results observed under the experimental constraints imposed by fractional approaches are a faithful reflection of what would be expected from chemical mixtures in real-world circumstances. PMID:29051468
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
Larval aquatic insect responses to cadmium and zinc in experimental streams.
Mebane, Christopher A; Schmidt, Travis S; Balistrieri, Laurie S
2017-03-01
To evaluate the risks of metal mixture effects to natural stream communities under ecologically relevant conditions, the authors conducted 30-d tests with benthic macroinvertebrates exposed to cadmium (Cd) and zinc (Zn) in experimental streams. The simultaneous exposures were with Cd and Zn singly and with Cd+Zn mixtures at environmentally relevant ratios. The tests produced concentration-response patterns that were interpreted, for individual taxa, in the same manner as classic single-species toxicity tests and, for community metrics such as taxa richness and mayfly (Ephemeroptera) abundance, in the same manner as stream survey data. Effect concentrations from the experimental stream exposures were usually 2 to 3 orders of magnitude lower than those from classic single-species tests. Relative to a response addition model, which assumes that the joint toxicity of the mixtures can be predicted from the product of their responses to individual toxicants, the Cd+Zn mixtures generally showed slightly less than additive toxicity. The authors applied a modeling approach called Tox to explore the mixture toxicity results and to relate the experimental stream results to field data. The approach predicts the accumulation of toxicants (hydrogen, Cd, and Zn) on organisms using a 2-pKa bidentate model that defines interactions between dissolved cations and biological receptors (biotic ligands) and relates that accumulation through a logistic equation to biological response. The Tox modeling was able to predict Cd+Zn mixture responses from the single-metal exposures as well as responses from field data. The similarity of response patterns between the 30-d experimental stream tests and field data supports the environmental relevance of testing aquatic insects in experimental streams. Environ Toxicol Chem 2017;36:749-762. Published 2016 Wiley Periodicals Inc. on behalf of SETAC. This article is a US government work and, as such, is in the public domain in the United States of America.
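The response addition benchmark mentioned above has a simple closed form: the predicted joint effect is one minus the product of the proportions unaffected by each metal alone. A worked sketch with hypothetical effect levels:

```python
def response_addition(effect_cd, effect_zn):
    """Response addition: the joint proportion unaffected is the
    product of the single-metal proportions unaffected."""
    return 1.0 - (1.0 - effect_cd) * (1.0 - effect_zn)

# Hypothetical single-metal effects at the mixture's component doses.
e_cd, e_zn = 0.30, 0.40
print(response_addition(e_cd, e_zn))  # 0.58 predicted joint effect

# An observed mixture effect below 0.58 (say 0.50) would match the
# "slightly less than additive" pattern reported for the Cd+Zn rays.
```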
Robinson, David B.; Luo, Weifang; Cai, Trevor Y.; ...
2015-09-26
Gaseous mixtures of diatomic hydrogen isotopologues and helium are often encountered in the nuclear energy industry and in analytical chemistry. Compositions of stored mixtures can vary due to interactions with storage and handling materials. When tritium is present, it decays to form ions and helium-3, both of which can lead to further compositional variation. Monitoring of composition is typically achieved by mass spectrometry, a method that is bulky and energy-intensive. Mass spectrometers disperse sample material through vacuum pumps, which is especially troublesome if tritium is present. Moreover, our ultimate goal is to create a compact, fast, low-power sensor that can determine composition with minimal gas consumption and waste generation, as a complement to mass spectrometry that can be instantiated more widely. We propose calorimetry of metal hydrides as an approach to this, due to the strong isotope effect on gas absorption, and demonstrate the sensitivity of measured heat flow to atomic composition of the gas. Peak shifts are discernible when mole fractions change by at least 1%. A mass flow restriction results in a unique dependence of the measurement on helium concentration. We present a mathematical model as a first step toward prediction of the peak shapes and positions. The model includes a useful method to compute estimates of phase diagrams for palladium in the presence of arbitrary mixtures of hydrogen isotopologues. As a result, we expect that this approach can be used to deduce unknown atomic compositions from measured calorimetric data over a useful range of partial pressures of each component.
NASA Astrophysics Data System (ADS)
Zohdi, T. I.
2017-07-01
A key part of emerging advanced additive manufacturing methods is the deposition of specialized particulate mixtures of materials on substrates. For example, in many cases these materials are polydisperse powder mixtures whereby one set of particles is chosen with the objective to electrically, thermally or mechanically functionalize the overall mixture material and another set of finer-scale particles serves as an interstitial filler/binder. Often, achieving controllable, precise, deposition is difficult or impossible using mechanical means alone. It is for this reason that electromagnetically-driven methods are being pursued in industry, whereby the particles are ionized and an electromagnetic field is used to guide them into place. The goal of this work is to develop a model and simulation framework to investigate the behavior of a deposition as a function of an applied electric field. The approach develops a modular discrete-element type method for the simulation of the particle dynamics, which provides researchers with a framework to construct computational tools for this growing industry.
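A toy discrete-element update for a single ionized particle steered by an applied field; the equation of motion and every parameter value here are illustrative assumptions, not the paper's model:

```python
import numpy as np

# m dv/dt = q E + m g - c v : electrostatic force, gravity, Stokes-like
# drag, integrated with explicit Euler until the particle lands.
dt = 1e-4                            # s
g = np.array([0.0, -9.81])           # m/s^2
E = np.array([2.0e3, 0.0])           # V/m, lateral steering field
q_over_m, drag = 1.0e-3, 50.0        # C/kg and 1/s (hypothetical)

pos = np.array([0.0, 0.01])          # released 1 cm above the substrate
vel = np.zeros(2)
while pos[1] > 0.0:                  # substrate at y = 0
    acc = q_over_m * E + g - drag * vel
    vel = vel + dt * acc
    pos = pos + dt * vel
print("landing x-offset (m):", pos[0])
```

Sweeping the field strength E in such a sketch is the simplest analogue of studying deposition behavior as a function of the applied electric field.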
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-01-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distribution to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474
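To illustrate just the co-eluting-peak step, the sketch below separates two overlapping peaks by least-squares fitting of a two-Gaussian mixture; the NEB baseline correction and the EMG-type peak shapes from the paper are omitted for brevity:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(t, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian peaks - a simplified stand-in for the
    mixture of peak-shape distributions used in the paper."""
    return (a1 * np.exp(-(t - m1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(t - m2) ** 2 / (2 * s2 ** 2)))

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 500)                       # retention-time axis
y = two_gauss(t, 100, 4.6, 0.3, 60, 5.4, 0.3) + rng.normal(0, 1, t.size)

popt, _ = curve_fit(two_gauss, t, y, p0=[80, 4.5, 0.5, 80, 5.5, 0.5])
print(popt)  # recovered amplitudes, centers, and widths
```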
NASA Technical Reports Server (NTRS)
Roberts, Dar A.; Green, Robert O.; Sabol, Donald E.; Adams, John B.
1993-01-01
Imaging spectrometry offers a new way of deriving ecological information about vegetation communities from remote sensing. Applications include derivation of canopy chemistry, measurement of column atmospheric water vapor and liquid water, improved detectability of materials, more accurate estimation of green vegetation cover, and discrimination of spectrally distinct green leaf, non-photosynthetic vegetation (NPV: litter, wood, bark, etc.) and shade spectra associated with different vegetation communities. Much of our emphasis has been on interpreting Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data as spectral mixtures. Two approaches have been used: simple models, where the data are treated as a mixture of 3 to 4 laboratory/field measured spectra, known as reference endmembers (EMs), applied uniformly to the whole image, and more complex models where both the number and the types of EMs vary on a per-pixel basis. Where simple models are applied, materials such as NPV, which are spectrally similar to soils, can be discriminated on the basis of residual spectra. One key aspect is that the data are calibrated to reflectance and modeled as mixtures of reference EMs, permitting temporal comparison of EM fractions independent of scene location or data type. In previous studies the calibration was performed using a modified empirical line calibration, assuming a uniform atmosphere across the scene. In this study, a Modtran-based calibration approach was used to map liquid water and atmospheric water vapor and retrieve surface reflectance from three AVIRIS scenes acquired in 1992 over the Jasper Ridge Biological Preserve. The data were acquired on June 2nd, September 4th, and October 6th. Reflectance images were analyzed as spectral mixtures of reference EMs using a simple 4-EM model. Atmospheric water vapor derived from Modtran was compared to elevation and community type. Liquid water was compared to the abundance of NPV, shade, and green vegetation (GV) for selected sites to determine whether a relationship existed, and under what conditions the relationship broke down. Temporal trends in endmember fractions, liquid water, and atmospheric water vapor were also investigated. The combination of spectral mixture analysis and the Modtran-based atmospheric/liquid water models was used to develop a unique vegetation community description.
A Novel Approach for Evaluating Carbamate Mixtures for Dose Additivity
Two mathematical approaches were used to test the hypothesis of dose-addition for a binary and a seven-chemical mixture of N-methyl carbamates, toxicologically similar chemicals that inhibit cholinesterase (ChE). In the more novel approach, mixture data were not included in the ana...
Root, Katharina; Wittwer, Yves; Barylyuk, Konstantin; Anders, Ulrike; Zenobi, Renato
2017-09-01
Native ESI-MS is increasingly used for quantitative analysis of biomolecular interactions. In such analyses, peak intensity ratios measured in mass spectra are treated as abundance ratios of the respective molecules in solution. While signal intensities of similar-size analytes, such as a protein and its complex with a small molecule, can be directly compared, significant distortions of the peak ratio due to unequal signal response of analytes impede the application of this approach for large oligomeric biomolecular complexes. We use a model system based on concatenated maltose binding protein units (MBPn, n = 1, 2, 3) to systematically study the behavior of protein mixtures in ESI-MS. The MBP concatamers differ from each other only by their mass while the chemical composition and other properties remain identical. We used native ESI-MS to analyze model mixtures of MBP oligomers, including equimolar mixtures of two proteins, as well as binary mixtures containing different fractions of the individual components. Pronounced deviation from a linear dependence of the signal intensity with concentration was observed for all binary mixtures investigated. While equimolar mixtures showed linear signal dependence at low concentrations, distinct ion suppression was observed above 20 μM. We systematically studied factors that are most often used in the literature to explain the origin of suppression effects. Implications of this effect for quantifying protein-protein binding affinity by native ESI-MS are discussed in general and demonstrated for an example of an anti-MBP antibody with its ligand, MBP.
A Semi-Parametric Bayesian Mixture Modeling Approach for the Analysis of Judge Mediated Data
ERIC Educational Resources Information Center
Muckle, Timothy Joseph
2010-01-01
Existing methods for the analysis of ordinal-level data arising from judge ratings, such as the Multi-Facet Rasch model (MFRM, or the so-called Facets model) have been widely used in assessment in order to render fair examinee ability estimates in situations where the judges vary in their behavior or severity. However, this model makes certain…
Son, Heesook; Friedmann, Erika; Thomas, Sue A
2012-01-01
Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine informativeness of missing data with pattern mixture models and to use a missing data pattern in analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires the assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of the presence of informative missingness to and control for the effects of missingness on outcomes. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.
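A minimal Python analogue of that workflow (the study itself used SPSS), assuming hypothetical long-format data in which each subject's missingness pattern enters the linear mixed model as a fixed effect:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Hypothetical repeated measures: 60 subjects x 4 time points, with a
# binary indicator for the missingness pattern (completer vs. dropout).
n, t = 60, 4
df = pd.DataFrame({"subject": np.repeat(np.arange(n), t),
                   "time": np.tile(np.arange(t), n)})
df["dropout"] = (df["subject"] % 3 == 0).astype(int)
df["y"] = 20 - 0.8 * df["time"] + 2.0 * df["dropout"] \
          + rng.normal(0, 2, len(df))
df.loc[(df["dropout"] == 1) & (df["time"] > 1), "y"] = np.nan

# A significant time-by-pattern interaction indicates informative
# missingness; the pattern terms also adjust the LMM estimates.
model = smf.mixedlm("y ~ time * dropout", df.dropna(), groups="subject")
print(model.fit().summary())
```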
A Four Step Approach to Evaluate Mixtures for Consistency with Dose Addition
We developed a four step approach for evaluating chemical mixture data for consistency with dose addition for use in environmental health risk assessment. Following the concepts in the U.S. EPA mixture risk guidance (EPA 2000a,b), toxicological interaction for a defined mixture (...
Modeling of non-thermal plasma in flammable gas mixtures
NASA Astrophysics Data System (ADS)
Napartovich, A. P.; Kochetov, I. V.; Leonov, S. B.
2008-07-01
An idea of using plasma-assisted methods of fuel ignition is based on non-equilibrium generation of chemically active species that speed up the combustion process. It is believed that the gain in energy consumed for combustion acceleration by plasmas is due to the non-equilibrium nature of discharge plasma, which allows radicals to be produced in above-equilibrium amounts. Evidently, the size of the effect is strongly dependent on the initial temperature, pressure, and composition of the mixture. Of particular interest is a comparison between thermal ignition of a fuel-air mixture and non-thermal plasma initiation of the combustion. Mechanisms of thermal ignition in various fuel-air mixtures have been studied for years, and a number of different mechanisms are known that provide agreement with experiments at various conditions. The problem is how to reconcile the thermal chemistry approach with an essentially non-equilibrium plasma description. The electric discharge produces well above-equilibrium amounts of chemically active species: atoms, radicals, and ions. The point is that, despite excess concentrations of a number of species, the total concentration of these species is far below the concentrations of the initial gas mixture components. Therefore, rate coefficients for reactions of these discharge-produced species with other gas mixture components are well known quantities controlled by the translational temperature, which can be calculated from the energy balance equation taking into account numerous processes initiated by the plasma. A numerical model was developed combining the traditional approach of thermal combustion chemistry with an advanced description of the plasma kinetics based on solution of the electron Boltzmann equation. This approach allows us to describe self-consistently a strongly non-equilibrium electric discharge in a chemically unstable (ignited) gas. Equations of pseudo-one-dimensional gas dynamics were solved in parallel with a system of thermal chemistry equations, kinetic equations for charged particles (electrons, positive and negative ions), and the electric circuit equation. The electric circuit comprises the power supply, a ballast resistor connected in series with the discharge, and a capacitor. Rate coefficients for electron-assisted reactions were calculated by solving the two-term spherical harmonic expansion of the Boltzmann equation. Such an approach allows us to describe the influence of thermal chemistry reactions (burning) on the discharge characteristics. Results of a comparison between the discharge and thermal ignition effects for mixtures of hydrogen or ethylene with dry air will be reported. Effects of acceleration of ignition by discharge plasma will be analyzed. In particular, the role of singlet oxygen, produced efficiently in the discharge, in speeding up ignition will be discussed.
Defining an additivity framework for mixture research in inducible whole-cell biosensors
NASA Astrophysics Data System (ADS)
Martin-Betancor, K.; Ritz, C.; Fernández-Piñas, F.; Leganés, F.; Rodea-Palomares, I.
2015-11-01
A novel additivity framework for mixture effect modelling in the context of whole-cell inducible biosensors has been mathematically developed and implemented in R. The proposed method is a multivariate extension of the effective dose (EDp) concept. Specifically, the extension accounts for differential maximal effects among analytes and for response inhibition beyond the maximum permissive concentrations. This allows a multivariate extension of Loewe additivity, enabling direct application in a biphasic dose-response framework. The proposed additivity definition was validated, and its applicability illustrated, by studying the response of the cyanobacterial biosensor Synechococcus elongatus PCC 7942 pBG2120 to binary mixtures of Zn, Cu, Cd, Ag, Co and Hg. The novel method allowed, for the first time, complete dose-response profiles of an inducible whole-cell biosensor to mixtures to be modelled. In addition, the approach allowed identification and quantification of departures from additivity (interactions) among analytes. The biosensor was found to respond in a near-additive way to heavy metal mixtures except when Hg, Co and Ag were present, in which case strong interactions occurred. The method is a useful contribution to the whole-cell biosensor discipline and related areas, allowing appropriate assessment of mixture effects in non-monotonic dose-response frameworks.
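In the monotonic special case, the Loewe/Concentration Addition backbone that such frameworks extend reduces to a one-line computation for the mixture's effective dose. A worked sketch with hypothetical mixture fractions and single-analyte effective doses:

```python
def ca_mixture_ed(fractions, single_eds):
    """Concentration Addition (Loewe additivity) prediction of the
    mixture dose producing effect level p:

        ED_mix = 1 / sum_i(p_i / ED_i)

    fractions: mixture proportions p_i (summing to 1).
    single_eds: effective doses ED_i of each analyte alone at the
    same effect level, in the same units.
    """
    return 1.0 / sum(p / ed for p, ed in zip(fractions, single_eds))

# Hypothetical 50:50 Zn/Cu mixture with single-metal ED50s of 2.0 and
# 0.5 uM: CA predicts a mixture ED50 of 0.8 uM.
print(ca_mixture_ed([0.5, 0.5], [2.0, 0.5]))  # 0.8
```

The multivariate extension described above generalizes exactly this relation to biphasic responses and analyte-specific maximal effects.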
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moges, Edom; Demissie, Yonas; Li, Hong-Yi
2016-04-01
In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are evidenced through diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad has a homogeneous catchment response, making a single model adequate to capture the response.
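A toy sketch of the gating idea behind HME: two hypothetical expert structures are blended by a logistic gate on an indicator variable (here an invented wetness index), so the dominant process can shift smoothly between regimes:

```python
import numpy as np

def expert_fast(precip):            # quick-response runoff component
    return 0.8 * precip

def expert_slow(precip):            # baseflow-like component
    return 0.2 * precip + 1.0

def gate(indicator):                # logistic gate on the indicator
    return 1.0 / (1.0 + np.exp(-4.0 * (indicator - 0.5)))

precip, wetness = 10.0, 0.7         # hypothetical forcing and state
w = gate(wetness)
runoff = w * expert_fast(precip) + (1.0 - w) * expert_slow(precip)
print(w, runoff)
```

In the actual HME framework the gate parameters and expert structures are estimated jointly; this sketch only shows how an indicator variable arbitrates between local experts.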
A mixture toxicity approach to predict the toxicity of Ag decorated ZnO nanomaterials.
Azevedo, S L; Holz, T; Rodrigues, J; Monteiro, T; Costa, F M; Soares, A M V M; Loureiro, S
2017-02-01
Nanotechnology is a rising field and nanomaterials can now be found in a vast variety of products with different chemical compositions, sizes and shapes. New nanostructures combining different nanomaterials are being developed due to their enhanced characteristics when compared to the nanomaterials alone. In the present study, the toxicity of a nanostructure composed of a ZnO nanomaterial with Ag nanomaterials on its surface (designated the ZnO/Ag nanostructure) was assessed using the model organism Daphnia magna, and its toxicity was predicted based on the toxicity of the single components (Zn and Ag). To that end, ZnO and Ag nanomaterials as single components, along with their mixture prepared in the laboratory, were compared in terms of toxicity with the ZnO/Ag nanostructure. Toxicity was assessed by immobilization and reproduction tests. A mixture toxicity approach was carried out using the conceptual model of Concentration Addition as a starting point. The laboratory mixture of both nanomaterials showed toxicity that was dependent on the doses of ZnO and Ag used (immobilization) or followed a synergistic pattern (reproduction). The toxicity prediction for the ZnO/Ag nanostructure, based on the percentage of individual components, showed an increase in toxicity compared to what was expected (immobilization) and a dependence on the concentration used (reproduction). This study demonstrates that the toxicity of the prepared mixture of ZnO and Ag and of the ZnO/Ag nanostructure cannot be predicted based on the toxicity of their components, highlighting the importance of taking into account the interaction between nanomaterials when assessing hazard and risk. Copyright © 2016 Elsevier B.V. All rights reserved.
Moser, V C; Casey, M; Hamm, A; Carter, W H; Simmons, J E; Gennings, C
2005-07-01
Environmental exposures generally involve chemical mixtures instead of single chemicals. Statistical models such as the fixed-ratio ray design, wherein the mixing ratio (proportions) of the chemicals is fixed across increasing mixture doses, allow for the detection and characterization of interactions among the chemicals. In this study, we tested for interaction(s) in a mixture of five organophosphorus (OP) pesticides (chlorpyrifos, diazinon, dimethoate, acephate, and malathion). The ratio of the five pesticides (full ray) reflected the relative dietary exposure estimates of the general population as projected by the US EPA Dietary Exposure Evaluation Model (DEEM). A second mixture was tested using the same dose levels of all pesticides, but excluding malathion (reduced ray). The experimental approach first required characterization of dose-response curves for the individual OPs to build a dose-additivity model. A series of behavioral measures were evaluated in adult male Long-Evans rats at the time of peak effect following a single oral dose, and then tissues were collected for measurement of cholinesterase (ChE) activity. Neurochemical (blood and brain ChE activity) and behavioral (motor activity, gait score, tail-pinch response score) endpoints were evaluated statistically for evidence of additivity. The additivity model constructed from the single-chemical data was used to predict the effects of the pesticide mixture along the full ray (10-450 mg/kg) and the reduced ray (1.75-78.8 mg/kg). The experimental mixture data were also modeled and statistically compared to the additivity models. Analysis of the 5-OP mixture (the full ray) revealed significant deviation from additivity for all endpoints except tail-pinch response. Greater-than-additive responses (synergism) were observed at the lower doses of the 5-OP mixture, which contained non-effective dose levels of each of the components. The predicted effective doses (ED20, ED50) were about half those predicted by additivity, and for brain ChE and motor activity, there was a threshold shift in the dose-response curves. For brain ChE and motor activity, there was no difference between the full (5-OP mixture) and reduced (4-OP mixture) rays, indicating that malathion did not influence the non-additivity. While the reduced ray for blood ChE showed greater deviation from additivity without malathion in the mixture, the non-additivity observed for the gait score was reversed when malathion was removed. Thus, greater-than-additive interactions were detected for both the full and reduced ray mixtures, and the role of malathion in the interactions varied depending on the endpoint. In all cases, the deviations from additivity occurred at the lower end of the dose-response curves.
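The fixed-ratio ray design itself is easy to state in code: component doses are the product of a fixed proportion vector and an increasing total dose. The proportions below are illustrative, not the DEEM-derived values used in the study:

```python
import numpy as np

props = np.array([0.10, 0.15, 0.20, 0.25, 0.30])      # 5 OPs, sum to 1
totals = np.array([10.0, 30.0, 100.0, 200.0, 450.0])  # mg/kg, full ray

doses = np.outer(totals, props)  # row = mixture dose, col = component
print(doses)
```

Dose-additivity is then tested by comparing responses observed along the ray with responses predicted from the single-chemical dose-response models at these component doses.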
Nielsen, J D; Dean, C B
2008-09-01
A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
Priol, Pauline; Mazerolle, Marc J; Imbeau, Louis; Drapeau, Pierre; Trudeau, Caroline; Ramière, Jessica
2014-06-01
Dynamic N-mixture models have been recently developed to estimate demographic parameters of unmarked individuals while accounting for imperfect detection. We propose an application of the Dail and Madsen (2011: Biometrics, 67, 577-587) dynamic N-mixture model in a manipulative experiment using a before-after control-impact design (BACI). Specifically, we tested the hypothesis of cavity limitation of a cavity specialist species, the northern flying squirrel, using nest box supplementation on half of 56 trapping sites. Our main purpose was to evaluate the impact of an increase in cavity availability on flying squirrel population dynamics in deciduous stands in northwestern Québec with the dynamic N-mixture model. We compared abundance estimates from this recent approach with those from classic capture-mark-recapture models and generalized linear models. We compared apparent survival estimates with those from Cormack-Jolly-Seber (CJS) models. Average recruitment rate was 6 individuals per site after 4 years. Nevertheless, we found no effect of cavity supplementation on apparent survival and recruitment rates of flying squirrels. Contrary to our expectations, initial abundance was not affected by conifer basal area (food availability) and was negatively affected by snag basal area (cavity availability). Northern flying squirrel population dynamics are not influenced by cavity availability at our deciduous sites. Consequently, we suggest that this species should not be considered an indicator of old forest attributes in our study area, especially in view of apparent wide population fluctuations across years. Abundance estimates from N-mixture models were similar to those from capture-mark-recapture models, although the latter had greater precision. Generalized linear mixed models produced lower abundance estimates, but revealed the same relationship between abundance and snag basal area. Apparent survival estimates from N-mixture models were higher and less precise than those from CJS models. However, N-mixture models can be particularly useful to evaluate management effects on animal populations, especially for species that are difficult to detect in situations where individuals cannot be uniquely identified. They also allow investigating the effects of covariates at the site level, when low recapture rates would require restricting classic CMR analyses to a subset of sites with the most captures.
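For readers unfamiliar with the machinery, a minimal sketch of the static N-mixture likelihood of Royle (2004) for a single site follows; the Dail and Madsen dynamic model extends this with recruitment and survival terms between primary periods. All numbers here are illustrative, not estimates from the squirrel data:

    import numpy as np
    from scipy.stats import binom, poisson

    def site_likelihood(counts, lam, p, n_max=200):
        # Static N-mixture likelihood for one site: marginalize the latent
        # abundance N over a truncated Poisson prior. counts are repeated
        # visit counts; p is the per-individual detection probability.
        n = np.arange(max(counts), n_max + 1)
        prior = poisson.pmf(n, lam)
        det = np.ones_like(prior)
        for y in counts:
            det *= binom.pmf(y, n, p)
        return np.sum(prior * det)

    print(site_likelihood([3, 5, 2], lam=8.0, p=0.4))

Maximizing the product of such terms over sites (with covariates entering lam and p through link functions) yields the abundance estimates compared above with capture-mark-recapture results.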
A phase field approach for multicellular aggregate fusion in biofabrication.
Yang, Xiaofeng; Sun, Yi; Wang, Qi
2013-07-01
We present a modeling and computational approach to study fusion of multicellular aggregates during tissue and organ fabrication, which forms the foundation for the scaffold-less biofabrication of tissues and organs known as bioprinting. In this phase field method, multicellular aggregates are modeled as mixtures of multiphase complex fluids whose phase mixing or separation is governed by interphase force interactions, mimicking the cell-cell interaction in the multicellular aggregates, and by intermediate-range interactions mediated by the surrounding hydrogel. The material transport in the mixture is dictated by hydrodynamics as well as by forces due to the interphase interactions. In a multicellular aggregate system with a fixed number of cells and a fixed amount of hydrogel medium, the effects of cell differentiation, proliferation, and death are neglected in the current model (although they can be readily included), and the interaction between different components is dictated by the interaction energy between cell and cell as well as between cell and medium particles, respectively. The modeling approach is applicable to transient simulations of fusion of cellular aggregate systems at the time and length scales appropriate to biofabrication. Numerical experiments are presented to demonstrate fusion and cell sorting during the tissue and organ maturation processes in biofabrication.
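As a generic illustration of the phase-field idea (not the paper's multiphase, hydrodynamically coupled model), a one-dimensional Cahn-Hilliard sketch shows how an initially mixed binary system separates into two phases; c is the local composition (order parameter), and the two phases correspond to c near -1 and +1. Grid size, mobility, and gamma are arbitrary choices:

    import numpy as np

    # 1D Cahn-Hilliard sketch: c_t = Laplacian(c**3 - c - gamma * Laplacian(c)),
    # periodic boundaries, explicit Euler with a small dt for stability.
    n, dx, dt, gamma = 128, 1.0, 0.01, 1.0
    rng = np.random.default_rng(0)
    c = 0.05 * rng.standard_normal(n)  # near-symmetric initial mixture

    def lap(u):
        return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

    for _ in range(20000):
        mu = c**3 - c - gamma * lap(c)   # chemical potential
        c += dt * lap(mu)

    # After coarsening, c approaches -1 and +1 in separated domains.
    print("phase extremes:", round(c.min(), 3), round(c.max(), 3))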
Dennison, James E; Andersen, Melvin E; Yang, Raymond S H
2003-09-01
Gasoline consists of a few toxicologically significant components and a large number of other hydrocarbons in a complex mixture. By using an integrated, physiologically based pharmacokinetic (PBPK) modeling and lumping approach, we have developed a method for characterizing the pharmacokinetics (PKs) of gasoline in rats. The PBPK model tracks selected target components (benzene, toluene, ethylbenzene, o-xylene [BTEX], and n-hexane) and a lumped chemical group representing all nontarget components, with competitive metabolic inhibition between all target compounds and the lumped chemical. PK data were acquired by performing gas uptake PK studies with male F344 rats in a closed chamber. Chamber air samples were analyzed every 10-20 min by gas chromatography/flame ionization detection, and all nontarget chemicals were co-integrated. A four-compartment PBPK model with metabolic interactions was constructed using the BTEX, n-hexane, and lumped chemical data. Target chemical kinetic parameters were refined by studies with either the single chemical alone or with all five chemicals together. o-Xylene, at high concentrations, decreased alveolar ventilation, consistent with respiratory irritation. A six-chemical interaction model with the lumped chemical group was used to estimate lumped chemical partitioning and metabolic parameters for a winter blend of gasoline with methyl t-butyl ether and a summer blend without any oxygenate. Computer simulation results from this model matched well with experimental data from the single chemical, the five-chemical mixture, and the two blends of gasoline. The PBPK model analysis indicated that metabolism of individual components was inhibited up to 27% during the 6-h gas uptake experiments of gasoline exposures.
Yan, Luchun; Liu, Jiemin; Jiang, Shen; Wu, Chuandong; Gao, Kewei
2017-07-13
The olfactory evaluation function (e.g., odor intensity rating) of the e-nose remains one of the most challenging issues in research on odor pollution monitoring. Odor is normally produced by a set of stimuli, and odor interactions among constituents significantly influence their mixture's odor intensity. This study investigated the odor interaction principle in odor mixtures of aldehydes and of esters. A modified vector model (MVM) was then proposed, and it successfully demonstrated the similarity of the odor interaction pattern among odorants of the same type. Based on this regular interaction pattern, and unlike a determined empirical model fit only for a specific odor mixture in conventional approaches, the MVM distinctly simplified the odor intensity prediction of odor mixtures. Furthermore, the MVM also provided a way of directly converting constituents' chemical concentrations to their mixture's odor intensity. By combining the MVM with the usual data-processing algorithm of an e-nose, a new e-nose system was established for odor intensity rating. Compared with instrumental analysis and human assessors, it exhibited good accuracy in both quantitative analysis (Pearson correlation coefficient was 0.999 for individual aldehydes (n = 12), 0.996 for their binary mixtures (n = 36) and 0.990 for their ternary mixtures (n = 60)) and odor intensity assessment (Pearson correlation coefficient was 0.980 for individual aldehydes (n = 15), 0.973 for their binary mixtures (n = 24), and 0.888 for their ternary mixtures (n = 25)). Thus, the observed regular interaction pattern is considered an important foundation for accelerating extensive application of olfactory evaluation in odor pollution monitoring.
Míguez, J M; Piñeiro, M M; Algaba, J; Mendiboure, B; Torré, J P; Blas, F J
2015-11-05
The high-pressure phase diagrams of the tetrahydrofuran(1) + carbon dioxide(2), + methane(2), and + water(2) mixtures are examined using the SAFT-VR approach. The carbon dioxide molecule is modeled as two tangentially bonded spherical segments, water is modeled as a spherical segment with four associating sites to represent the hydrogen bonding, methane is represented as an isolated sphere, and tetrahydrofuran is represented as a chain of m tangentially bonded spherical segments. Dispersive interactions are modeled using the square-well intermolecular potential. In addition, two different molecular model mixtures are developed to take into account the subtle balance between water-tetrahydrofuran hydrogen-bonding interactions. The polar and quadrupolar interactions present in water, tetrahydrofuran, and carbon dioxide are treated in an effective way via square-well potentials of variable range. The optimized intermolecular parameters are taken from the works of Giner et al. (Fluid Phase Equil. 2007, 255, 200), Galindo and Blas (J. Phys. Chem. B 2002, 106, 4503), Patel et al. (Ind. Eng. Chem. Res. 2003, 42, 3809), and Clark et al. (Mol. Phys. 2006, 104, 3561) for tetrahydrofuran, carbon dioxide, methane, and water, respectively. The phase diagrams of the binary mixtures exhibit different types of phase behavior according to the classification of van Konynenburg and Scott: types I, III, and VI for the tetrahydrofuran(1) + carbon dioxide(2), + methane(2), and + water(2) binary mixtures, respectively. This last type is characterized by the presence of a Bancroft point, positive azeotropy, and the so-called closed-loop curves that represent regions of liquid-liquid immiscibility in the phase diagram. The system exhibits lower critical solution temperatures (LCSTs), which denote the lower limit of immiscibility, together with upper critical solution temperatures (UCSTs). This behavior is explained in terms of competition between the incompatibility with the alkyl parts of the tetrahydrofuran ring chain and the hydrogen bonding between water and the ether group. A minimum number of unlike interaction parameters are fitted to give the optimal representation of the most representative features of the binary phase diagrams. In the particular case of tetrahydrofuran(1) + water(2), two sets of intermolecular potential model parameters are proposed to describe accurately either the hypercritical point associated with the closed-loop liquid-liquid immiscibility region or the location of the mixture lower- and upper-critical end points. The theory is not only able to predict the type of phase behavior of each mixture, but also provides a reasonably good description of the global phase behavior whenever experimental data are available.
Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.
Ouyang, Yicun; Yin, Hujun
2018-05-01
Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently for one-step prediction, that is, predicting one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either use a one-step model recursively or treat the multi-step task as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
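The contrast between the iterative and independent (direct) strategies can be sketched with plain linear AR models; this toy example is not the VLM architecture itself, and the series, orders, and horizons are made up:

    import numpy as np

    def embed(x, p, h):
        # Build (p lagged inputs, target h steps ahead) pairs from a series.
        X = np.column_stack([x[i:len(x) - p - h + i + 1] for i in range(p)])
        y = x[p + h - 1:]
        return X, y

    def fit_ar(x, p, h):
        X, y = embed(x, p, h)
        X1 = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return coef

    rng = np.random.default_rng(1)
    x = np.sin(0.1 * np.arange(500)) + 0.1 * rng.standard_normal(500)
    p, H = 5, 10

    # Direct strategy: one model per horizon, preserving horizon-specific
    # dependence within the prediction window.
    hist = x[-p:]
    direct = []
    for h in range(1, H + 1):
        c = fit_ar(x, p, h)
        direct.append(c[0] + c[1:] @ hist)

    # Iterative strategy: recurse a one-step model, feeding predictions back in,
    # so errors can accumulate across the horizon.
    c1 = fit_ar(x, p, 1)
    buf = list(x[-p:])
    iterative = []
    for _ in range(H):
        yhat = c1[0] + c1[1:] @ np.array(buf[-p:])
        iterative.append(yhat)
        buf.append(yhat)

    print(np.round(direct, 3))
    print(np.round(iterative, 3))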
NASA Astrophysics Data System (ADS)
Miftahurrohmah, Brina; Iriawan, Nur; Fithriasari, Kartika
2017-06-01
Stocks are financial instruments traded in the capital market that carry a high level of risk. Their risk is reflected in the uncertainty of the returns that investors must accept in the future. The higher the risk to be faced, the higher the return to be gained. Therefore, measurements need to be made of the risk. Value at Risk (VaR), the most popular risk measurement method, is frequently unreliable when the pattern of returns is not unimodal Normal. The calculation of risk using the VaR method with the Normal Mixture Autoregressive (MNAR) approach has been considered previously. This paper proposes a VaR method coupled with the Mixture Laplace Autoregressive (MLAR) model, implemented for analysing the three largest-capitalization Islamic stock returns in JII, namely PT. Astra International Tbk (ASII), PT. Telekomunikasi Indonesia Tbk (TLMK), and PT. Unilever Indonesia Tbk (UNVR). Parameter estimation is performed by employing Bayesian Markov Chain Monte Carlo (MCMC) approaches.
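A minimal Monte Carlo sketch of VaR under a two-component Laplace mixture of returns follows; the weights, locations, and scales are hypothetical, and the paper's MLAR model adds autoregressive structure estimated by Bayesian MCMC rather than the i.i.d. draws used here:

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical 2-component Laplace mixture for daily returns: a calm
    # regime and a rarer stressed regime with a heavier tail.
    weights = np.array([0.8, 0.2])
    locs = np.array([0.0005, -0.002])
    scales = np.array([0.005, 0.02])

    n = 200_000
    comp = rng.choice(2, size=n, p=weights)
    returns = rng.laplace(locs[comp], scales[comp])

    # One-day VaR as the loss quantile of the simulated return distribution.
    for level in (0.95, 0.99):
        var = -np.quantile(returns, 1.0 - level)
        print(f"VaR {level:.0%}: {var:.4f}")

A unimodal Normal fitted to the same data would understate the 99% quantile because it misses the heavy-tailed stressed component, which is the motivation for the mixture form.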
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
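For the single-Gaussian prior, the closed-form posterior can be sketched as follows, with a toy difference-plus-convolution operator standing in for the linearized convolutional forward model; all sizes, covariances, and the wavelet are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 60                               # model parameters (e.g., log-impedance samples)
    wavelet = np.array([0.3, 1.0, 0.3])  # toy seismic wavelet

    # Linearized convolutional forward operator: d = G m + noise, where the
    # reflectivity is approximated by differences of log-impedance.
    G = np.zeros((n - 1, n))
    for i in range(n - 1):
        G[i, i], G[i, i + 1] = -1.0, 1.0
    G = np.apply_along_axis(lambda r: np.convolve(r, wavelet, mode="same"), 1, G)

    idx = np.arange(n)
    mu = np.full(n, 9.0)                 # prior mean from interpolated well logs
    C_m = 0.04 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)
    C_d = 1e-4 * np.eye(n - 1)

    m_true = mu + np.linalg.cholesky(C_m) @ rng.standard_normal(n)
    d = G @ m_true + rng.multivariate_normal(np.zeros(n - 1), C_d)

    # Closed-form Gaussian posterior mean and covariance.
    K = C_m @ G.T @ np.linalg.inv(G @ C_m @ G.T + C_d)
    m_post = mu + K @ (d - G @ mu)
    C_post = C_m - K @ G @ C_m

    print("rmse prior    :", np.sqrt(np.mean((mu - m_true) ** 2)))
    print("rmse posterior:", np.sqrt(np.mean((m_post - m_true) ** 2)))

The Gaussian-mixture prior has no such closed form, which is why the paper resorts to Gibbs sampling over the mixture components and the continuous properties.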
Quantifying structural states of soft mudrocks
NASA Astrophysics Data System (ADS)
Li, B.; Wong, R. C. K.
2016-05-01
In this paper, a cm model is proposed to quantify structural states of soft mudrocks, which are dependent on clay fractions and porosities. Physical properties of natural and reconstituted soft mudrock samples are used to derive two parameters in the cm model. With the cm model, a simplified homogenization approach is proposed to estimate geomechanical properties and fabric orientation distributions of soft mudrocks based on the mixture theory. Soft mudrocks are treated as a mixture of nonclay minerals and clay-water composites. Nonclay minerals have a high stiffness and serve as a structural framework of mudrocks when they have a high volume fraction. Clay-water composites occupy the void space among nonclay minerals and serve as an in-fill matrix. With the increase of the volume fraction of clay-water composites, there is a transition in the structural state from the state of framework supported to the state of matrix supported. The decreases in shear strength and pore size as well as increases in compressibility and anisotropy in fabric are quantitatively related to such transition. The new homogenization approach based on the proposed cm model yields better performance evaluation than common effective medium modeling approaches because the interactions among nonclay minerals and clay-water composites are considered. With wireline logging data, the cm model is applied to quantify the structural states of Colorado shale formations at different depths in the Cold Lake area, Alberta, Canada. Key geomechanical parameters are estimated based on the proposed homogenization approach and the critical intervals with low-strength shale formations are identified.
NASA Astrophysics Data System (ADS)
Dunne, Lawrence J.; Manos, George
2018-03-01
Although crucial for designing separation processes, little is known experimentally about multi-component adsorption isotherms in comparison with pure single components. Very few binary mixture adsorption isotherms are to be found in the literature, and information about isotherms over a wide range of gas-phase compositions, mechanical pressures and temperatures is lacking. Here, we present a quasi-one-dimensional statistical mechanical model of binary mixture adsorption in metal-organic frameworks (MOFs) treated exactly by a transfer matrix method in the osmotic ensemble. The experimental parameter space may be very complex, and investigations into multi-component mixture adsorption may be guided by theoretical insights. The approach successfully models breathing structural transitions induced by adsorption, giving a good account of the shape of adsorption isotherms of CO2 and CH4 adsorption in MIL-53(Al). Binary mixture isotherms and co-adsorption-phase diagrams are also calculated and found to give a good description of the experimental trends in these properties; the wide range of model parameters that reproduces this behaviour suggests that it is generic to MOFs. Finally, a study is made of the influence of mechanical pressure on the shape of CO2 and CH4 adsorption isotherms in MIL-53(Al). Quite modest mechanical pressures can induce significant changes to isotherm shapes in MOFs, with implications for binary mixture separation processes. This article is part of the theme issue `Modern theoretical chemistry'.
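A generic flavor of the transfer matrix calculation can be given for a quasi-one-dimensional lattice gas of two adsorbing species; this sketch uses made-up energies and chemical potentials and omits the breathing-framework degrees of freedom of the osmotic-ensemble MOF model:

    import numpy as np

    # Quasi-1D lattice gas with site states: 0 = empty, 1 = gas A, 2 = gas B.
    # On a ring, the grand partition function is Tr T^N, so the largest
    # eigenvalue of T gives the grand potential per site.
    beta = 1.0
    mu = np.array([0.0, -0.5, -0.8])      # hypothetical chemical potentials (state 0 unused)
    e_pair = np.array([[0.0,  0.0,  0.0],  # hypothetical nearest-neighbour energies
                       [0.0, -0.3, -0.1],
                       [0.0, -0.1, -0.4]])

    def lam_max(mu_a):
        m = np.array([0.0, mu_a, mu[2]])
        # Each site's chemical potential is split across its two bonds.
        T = np.exp(-beta * (e_pair - 0.5 * (m[:, None] + m[None, :])))
        return np.max(np.linalg.eigvalsh(T))  # T is symmetric here

    # Mean occupancy of species A per site: d ln(lambda_max) / d(beta mu_A),
    # evaluated by central finite difference.
    h = 1e-5
    occ_A = (np.log(lam_max(mu[1] + h)) - np.log(lam_max(mu[1] - h))) / (beta * 2 * h)
    print("mean occupancy of A per site:", occ_A)

Sweeping the two fugacities and re-evaluating the occupancies traces out binary co-adsorption isotherms, the quantity of interest in the paper.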
Spatiotemporal multivariate mixture models for Bayesian model selection in disease mapping.
Lawson, A B; Carroll, R; Faes, C; Kirby, R S; Aregay, M; Watjou, K
2017-12-01
It is often the case that researchers wish to simultaneously explore the behavior of and estimate overall risk for multiple, related diseases with varying rarity while accounting for potential spatial and/or temporal correlation. In this paper, we propose a flexible class of multivariate spatio-temporal mixture models to fill this role. Further, these models offer flexibility with the potential for model selection as well as the ability to accommodate lifestyle, socio-economic, and physical environmental variables with spatial, temporal, or both structures. Here, we explore the capability of this approach via a large-scale simulation study and examine a motivating data example involving three cancers in South Carolina. The results, which focus on four model variants, suggest that all models recover the simulation ground truth and display improved model fit over two baseline Knorr-Held spatio-temporal interaction model variants in a real data application.
Kumar, Keshav
2018-03-01
Excitation-emission matrix fluorescence (EEMF) and total synchronous fluorescence spectroscopy (TSFS) are two fluorescence techniques commonly used for the analysis of multifluorophoric mixtures. These two fluorescence techniques are conceptually different and provide certain advantages over each other. Manual analysis of such large volumes of highly correlated EEMF and TSFS data to develop a calibration model is difficult. Partial least squares (PLS) analysis can analyze the large volume of EEMF and TSFS data sets by finding important factors that maximize the correlation between the spectral and concentration information for each fluorophore. However, application of PLS analysis on entire data sets often does not provide a robust calibration model and requires a suitable pre-processing step. The present work evaluates the application of genetic algorithm (GA) analysis prior to PLS analysis on EEMF and TSFS data sets towards improving the precision and accuracy of the calibration model. The GA essentially combines the advantages provided by stochastic methods with those provided by deterministic approaches and can find the set of EEMF and TSFS variables that correlate well with the concentration of each of the fluorophores present in the multifluorophoric mixtures. The utility of the GA-assisted PLS analysis is successfully validated using (i) EEMF data sets acquired for dilute aqueous mixtures of four biomolecules and (ii) TSFS data sets acquired for dilute aqueous mixtures of four carcinogenic polycyclic aromatic hydrocarbons (PAHs). In the present work, it is shown that by using the GA it is possible to significantly improve the accuracy and precision of the PLS calibration model developed for both EEMF and TSFS data sets. Hence, GA should be considered a useful pre-processing technique while developing EEMF and TSFS calibration models.
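A compact sketch of GA-driven variable selection wrapped around PLS, using scikit-learn on synthetic data, is shown below; the population size, mutation rate, and the toy data set are arbitrary choices for illustration, not the paper's settings:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a fluorescence data set: 40 mixtures x 150
    # variables, where only a hidden subset of variables carries the signal.
    n, p = 40, 150
    X = rng.standard_normal((n, p))
    true_vars = rng.choice(p, 12, replace=False)
    y = X[:, true_vars] @ rng.uniform(0.5, 1.5, 12) + 0.1 * rng.standard_normal(n)

    def fitness(mask):
        # Cross-validated R^2 of a 2-component PLS model on selected variables.
        if mask.sum() < 3:
            return -np.inf
        return cross_val_score(PLSRegression(n_components=2),
                               X[:, mask], y, cv=5, scoring="r2").mean()

    pop = rng.random((30, p)) < 0.2        # initial population of variable masks
    for _ in range(40):
        scores = np.array([fitness(m) for m in pop])
        pop = pop[np.argsort(scores)[::-1]]
        elite = pop[:10]
        children = []
        for _ in range(20):                # uniform crossover + bit-flip mutation
            a, b = elite[rng.integers(10)], elite[rng.integers(10)]
            child = np.where(rng.random(p) < 0.5, a, b)
            children.append(child ^ (rng.random(p) < 0.01))
        pop = np.vstack([elite, np.array(children)])

    scores = np.array([fitness(m) for m in pop])
    best = pop[np.argmax(scores)]
    print("selected", int(best.sum()), "variables; CV R^2 =", round(scores.max(), 3))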
Assessment of bitterness intensity and suppression effects using an Electronic Tongue
NASA Astrophysics Data System (ADS)
Legin, A.; Rudnitskaya, A.; Kirsanov, D.; Frolova, Yu.; Clapham, D.; Caricofe, R.
2009-05-01
Quantification of bitterness intensity and effectiveness of bitterness suppression of a novel active pharmacological ingredient (API) being developed by GSK was performed using an Electronic Tongue (ET) based on potentiometric chemical sensors. Calibration of the ET was performed with solutions of quinine hydrochloride in the concentration range 0.4-360 mg L-1. An MLR calibration model was developed for predicting bitterness intensity expressed as "equivalent quinine concentration" of a series of solutions of quinine, Bitrex and the API. Additionally, the effectiveness of sucralose, a mixture of aspartame and acesulfame K, and grape juice in masking the bitter taste of the API was assessed using two approaches. PCA models were produced and distances between compound-containing solutions and corresponding placebos were calculated. The other approach consisted of calculating the "equivalent quinine concentration" using a calibration model with respect to quinine concentration. According to both methods, the most effective taste masking was produced by grape juice, followed by the mixture of aspartame and acesulfame K.
Munkin, Murat K; Trivedi, Pravin K
2010-09-01
This paper takes a finite mixture approach to model heterogeneity in incentive and selection effects of drug coverage on total drug expenditure among the Medicare elderly US population. Evidence is found that the elderly population with positive drug expenditures can be decomposed into two groups that differ in the identified selection effects: a relatively healthy group with lower average expenditures and a relatively unhealthy group with higher average expenditures, accounting for approximately 25% and 75% of the population, respectively. Adverse selection into drug insurance appears to be strong for the higher-expenditure component and weak for the lower-expenditure group. Copyright (c) 2010 John Wiley & Sons, Ltd.
Hyperspectral target detection using heavy-tailed distributions
NASA Astrophysics Data System (ADS)
Willis, Chris J.
2009-09-01
One promising approach to target detection in hyperspectral imagery exploits a statistical mixture model to represent scene content at a pixel level. The process then goes on to look for pixels which are rare, when judged against the model, and marks them as anomalies. It is assumed that military targets will themselves be rare and therefore likely to be detected amongst these anomalies. For the typical assumption of multivariate Gaussianity for the mixture components, the presence of the anomalous pixels within the training data will have a deleterious effect on the quality of the model. In particular, the derivation process itself is adversely affected by the attempt to accommodate the anomalies within the mixture components. This will bias the statistics of at least some of the components away from their true values and towards the anomalies. In many cases this will result in a reduction in the detection performance and an increased false alarm rate. This paper considers the use of heavy-tailed statistical distributions within the mixture model. Such distributions are better able to account for anomalies in the training data within the tails of their distributions, and the balance of the pixels within their central masses. This means that an improved model of the majority of the pixels in the scene may be produced, ultimately leading to a better anomaly detection result. The anomaly detection techniques are examined using both synthetic data and hyperspectral imagery with injected anomalous pixels. A range of results is presented for the baseline Gaussian mixture model and for models accommodating heavy-tailed distributions, for different parameterizations of the algorithms. These include scene understanding results, anomalous pixel maps at given significance levels and Receiver Operating Characteristic curves.
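The baseline Gaussian mixture anomaly detector can be sketched with scikit-learn as below; note that scikit-learn has no Student-t mixture, so the heavy-tailed variants discussed in the paper would require a separate implementation. The scene data here are synthetic:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Synthetic "scene": two background materials in a 5-band spectral space,
    # plus a few injected anomalous pixels far from both backgrounds.
    bg1 = rng.multivariate_normal(np.zeros(5), np.eye(5), 4000)
    bg2 = rng.multivariate_normal(np.full(5, 3.0), 0.5 * np.eye(5), 4000)
    anomalies = rng.uniform(8.0, 10.0, (20, 5))
    pixels = np.vstack([bg1, bg2, anomalies])

    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    gmm.fit(pixels)

    # Pixels with log-likelihood below a low percentile are flagged as rare.
    loglik = gmm.score_samples(pixels)
    threshold = np.percentile(loglik, 0.5)
    flagged = np.where(loglik < threshold)[0]
    print("flagged", len(flagged), "pixels; true anomalies start at index",
          len(pixels) - 20)

Because the anomalies are included in the training data here, exactly as in the paper's setting, a heavy-tailed component model would absorb them in its tails and distort the fitted background statistics less than the Gaussian components do.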
NASA Astrophysics Data System (ADS)
Rao, R. R.
2015-12-01
Aerosol radiative forcing estimates with high certainty are required in climate change studies. The approach in estimating the aerosol radiative forcing by using the chemical composition of aerosols is not effective as the chemical composition data with radiative properties are not widely available. In this study we look into the approach where ground based spectral radiation flux measurements along with an RT model is used to estimate radiative forcing. Measurements of spectral flux were made using an ASD spectroradiometer with 350 - 1050 nm wavelength range and 3nm resolution for around 54 clear-sky days during which AOD range was around 0.1 to 0.7. Simultaneous measurements of black carbon were also made using Aethalometer (Magee Scientific) which ranged from around 1.5 ug/m3 to 8 ug/m3. All the measurements were made in the campus of Indian Institute of Science which is in the heart of Bangalore city. The primary study involved in understanding the sensitivity of spectral flux to change in the mass concentration of individual aerosol species (Optical properties of Aerosols and Clouds -OPAC classified aerosol species) using the SBDART RT model. This made us clearly distinguish the region of influence of different aerosol species on the spectral flux. Following this, a new technique has been introduced to estimate an optically equivalent mixture of aerosol species for the given location. The new method involves an iterative process where the mixture of aerosol species are changed in OPAC model and RT model is run as long as the mixture which mimics the measured spectral flux within 2-3% deviation from measured spectral flux is obtained. Using the optically equivalent aerosol mixture and RT model aerosol radiative forcing is estimated. The new method is limited to clear sky scenes and its accuracy to derive an optically equivalent aerosol mixture reduces when diffuse component of flux increases. Our analysis also showed that direct component of spectral flux is more sensitive to different aerosol species than total spectral flux which was also supported by our observed data.
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
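The sensitivity at issue can be illustrated with the standard identity for the expected number of clusters among n observations under a Dirichlet process with precision α, E[K_n] = sum over i = 1..n of α/(α + i - 1):

    import numpy as np

    def expected_clusters(alpha, n):
        # E[number of clusters] among n observations under a Dirichlet
        # process with precision alpha.
        i = np.arange(1, n + 1)
        return np.sum(alpha / (alpha + i - 1))

    for alpha in (0.1, 1.0, 5.0):
        print(alpha, round(expected_clusters(alpha, 100), 2))

Across this modest range of α the prior expectation runs from roughly one cluster to well over a dozen, which is why the choice of prior for α matters for inferences about the level of clustering.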
NASA Astrophysics Data System (ADS)
Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin
2015-03-01
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. Rotational positioning of cells can occur, leading to discordance in spot counts. To address counting errors arising from overlapping spots, a Gaussian mixture model (GMM) based classification method is proposed in this study. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features of this classification method. Using a random forest classifier, the results show that the proposed method is able to detect closely overlapping spots that cannot be separated by existing image segmentation based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
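A minimal sketch of the model-selection idea follows, fitting GMMs of increasing order to synthetic spot coordinates and reading off AIC/BIC; the full method goes on to feed such criteria into a random forest classifier. The spot geometry below is invented:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)

    # Synthetic FISH spot coordinates: three spots, two closely overlapping.
    centers = np.array([[5.0, 5.0], [5.8, 5.4], [12.0, 11.0]])
    pts = np.vstack([c + 0.4 * rng.standard_normal((150, 2)) for c in centers])

    # Fit GMMs with increasing component counts; the information criteria
    # discriminate the number of (possibly overlapping) spots.
    for k in range(1, 6):
        g = GaussianMixture(n_components=k, random_state=0).fit(pts)
        print(k, "AIC=%.1f" % g.aic(pts), "BIC=%.1f" % g.bic(pts))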
Meijer, Erik; Rohwedder, Susann; Wansbeek, Tom
2012-01-01
Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but they are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the methods to a Swedish data set. Our results show that register earnings data perform poorly if there is a (small) probability of a mismatch. Survey earnings data are more reliable, despite their measurement error. Predictors that combine both and take conditional class probabilities into account outperform all other predictors.
Monakhova, Yulia B; Mushtakova, Svetlana P
2017-05-01
A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and the ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries, generally between 95% and 105%, were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on the analysis of vitamins and caffeine in energy drinks and of aromatic hydrocarbons in motor fuel, with errors within 10%. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.
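A toy version of an ICA-based quantification pipeline, using scikit-learn's FastICA on synthetic Beer-Lambert mixtures, is sketched below; the spectra, concentrations, and noise levels are invented for illustration and the real workflow differs in detail:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    wavelengths = np.linspace(250, 500, 300)

    def band(center, width):
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    # Two hypothetical pure-component spectra and a set of calibration mixtures.
    S = np.vstack([band(300, 15) + 0.4 * band(380, 20), band(340, 18)])
    C = rng.uniform(0.1, 1.0, (20, 2))                  # known concentrations
    X = C @ S + 0.002 * rng.standard_normal((20, 300))  # Beer-Lambert mixtures

    ica = FastICA(n_components=2, random_state=0)
    scores = ica.fit_transform(X)   # sample-wise scores (up to scale/sign)

    # Quantitative step: regress the known concentrations on the ICA scores,
    # then predict an "unknown" mixture without a reference solution for it.
    A = np.column_stack([np.ones(20), scores])
    coef, *_ = np.linalg.lstsq(A, C, rcond=None)
    unknown = np.array([0.3, 0.7]) @ S
    s_u = ica.transform(unknown[None, :])
    print("estimated concentrations:", np.hstack([1.0, s_u.ravel()]) @ coef)

The regression step resolves the scale and sign ambiguity inherent in ICA components, which is what allows absolute concentrations to be reported.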
1995-12-01
Environmental Science and Technology, 26:1404-1410 (July 1992). 4. Atlas, Ronald M. and Richard Bartha. Microbial Ecology: Fundamentals and Applications. The report discusses the impact of physical factors on microbial activity, citing research by Atlas and Bartha observing that low temperatures inhibit microbial activity. Atlas and Bartha (4:393-394) explain that a typical petroleum mixture includes aliphatics, alicyclics, aromatics and other organics.
The presentation describes the advantages and challenges of working with whole mixtures, discusses an exploratory approach for evaluating sufficient similarity, and outlines the challenges of applying such approaches to other environmental mixtures.
An a priori DNS study of the shadow-position mixing model
Zhao, Xin-Yu; Bhagatwala, Ankit; Chen, Jacqueline H.; ...
2016-01-15
The modeling of mixing by molecular diffusion is a central aspect of transported probability density function (tPDF) methods. In this paper, the newly proposed shadow position mixing model (SPMM) is examined, using a DNS database for a temporally evolving di-methyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that the composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates that are derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found for the locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model. A higher-Reynolds-number test case is anticipated to be more appropriate to evaluate the mixing models, and stand-alone transported PDF simulations are required to more fully enforce localness and to assess model performance.
Quantile regression in the presence of monotone missingness with sensitivity analysis
Liu, Minzhao; Daniels, Michael J.; Perri, Michael G.
2016-01-01
In this paper, we develop methods for longitudinal quantile regression when there is monotone missingness. In particular, we propose pattern mixture models with a constraint that provides a straightforward interpretation of the marginal quantile regression parameters. Our approach allows sensitivity analysis which is an essential component in inference for incomplete data. To facilitate computation of the likelihood, we propose a novel way to obtain analytic forms for the required integrals. We conduct simulations to examine the robustness of our approach to modeling assumptions and compare its performance to competing approaches. The model is applied to data from a recent clinical trial on weight management. PMID:26041008
Haxhimali, Tomorr; Rudd, Robert E; Cabot, William H; Graziani, Frank R
2015-11-01
We present molecular dynamics (MD) calculations of shear viscosity for asymmetric mixed plasma for thermodynamic conditions relevant to astrophysical and inertial confinement fusion plasmas. Specifically, we consider mixtures of deuterium and argon at temperatures of 100-500 eV and a number density of 10^{25} ions/cc. The motion of 30,000-120,000 ions is simulated in which the ions interact via the Yukawa (screened Coulomb) potential. The electric field of the electrons is included in this effective interaction; the electrons are not simulated explicitly. Shear viscosity is calculated using the Green-Kubo approach with an integral of the shear stress autocorrelation function, a quantity calculated in the equilibrium MD simulations. We systematically study different mixtures through a series of simulations with increasing fraction of the minority high-Z element (Ar) in the D-Ar plasma mixture. In the more weakly coupled plasmas, at 500 eV and low Ar fractions, results from MD compare very well with Chapman-Enskog kinetic results. In the more strongly coupled plasmas, the kinetic theory does not agree well with the MD results. We develop a simple model that interpolates between classical kinetic theories at weak coupling and the Murillo Yukawa viscosity model at higher coupling. This hybrid kinetics-MD viscosity model agrees well with the MD results over the conditions simulated, ranging from moderately weakly coupled to moderately strongly coupled asymmetric plasma mixtures.
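The Green-Kubo step can be sketched on a toy stress series with a known autocorrelation time; an Ornstein-Uhlenbeck surrogate stands in for real MD output, and the V/(kB T) prefactor is set to 1 for simplicity:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for the shear-stress time series sigma_xy(t) from an
    # equilibrium run: an OU process with relaxation time tau.
    dt, n, tau, amp = 0.01, 200_000, 0.5, 1.0
    s = np.empty(n)
    s[0] = 0.0
    for i in range(1, n):
        s[i] = s[i - 1] * (1 - dt / tau) \
               + amp * np.sqrt(2 * dt / tau) * rng.standard_normal()

    def autocorr(x, nlag):
        x = x - x.mean()
        return np.array([np.mean(x[:len(x) - k] * x[k:]) for k in range(nlag)])

    # Green-Kubo: eta = V/(kB T) * integral of <sigma_xy(0) sigma_xy(t)> dt,
    # evaluated here by the trapezoidal rule over the estimated ACF.
    V_over_kT = 1.0                       # hypothetical prefactor
    acf = autocorr(s, 500)
    eta = V_over_kT * dt * (0.5 * acf[0] + acf[1:-1].sum() + 0.5 * acf[-1])
    print("viscosity estimate:", eta, "(analytic for this toy process:",
          amp**2 * tau, ")")

In practice the hard part is exactly what the abstract highlights: at strong coupling the stress autocorrelation decays slowly and noisily, so long simulations are needed before the integral converges.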
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results on our simulation experiments show improved performance in both fewer numbers of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
Space-time latent component modeling of geo-referenced health data.
Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun
2010-08-30
Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Dardick, William R.; Mislevy, Robert J.
2016-01-01
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
State of research: environmental pathways and food chain transfer.
Vaughan, B E
1984-01-01
Data on the chemistry of biologically active components of petroleum, synthetic fuel oils, certain metal elements and pesticides provide valuable generic information needed for predicting the long-term fate of buried waste constituents and their likelihood of entering food chains. Components of such complex mixtures partition between solid and solution phases, influencing their mobility, volatility and susceptibility to microbial transformation. Estimating health hazards from indirect exposures to organic chemicals involves an ecosystem's approach to understanding the unique behavior of complex mixtures. Metabolism by microbial organisms fundamentally alters these complex mixtures as they move through food chains. Pathway modeling of organic chemicals must consider the nature and magnitude of food chain transfers to predict biological risk where metabolites may become more toxic than the parent compound. To obtain predictions, major areas are identified where data acquisition is essential to extend our radiological modeling experience to the field of organic chemical contamination. PMID:6428875
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
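The mixed-Poisson construction is easy to demonstrate by simulation; the sketch below compares a plain Poisson with gamma-mixed (negative binomial) and inverse-Gaussian-mixed counts, with arbitrary parameters chosen only to show the overdispersion:

    import numpy as np

    rng = np.random.default_rng(0)
    n, mean, shape = 100_000, 5.0, 2.0

    # Gamma-mixed Poisson (negative binomial): the rate varies between subjects.
    rates_gamma = rng.gamma(shape, mean / shape, n)
    nb_counts = rng.poisson(rates_gamma)

    # Inverse-Gaussian-mixed Poisson: the alternative considered in the paper.
    rates_ig = rng.wald(mean, 3.0 * mean, n)
    ig_counts = rng.poisson(rates_ig)

    for name, y in [("Poisson", rng.poisson(mean, n)),
                    ("gamma-Poisson", nb_counts),
                    ("invGauss-Poisson", ig_counts)]:
        print(f"{name:18s} mean={y.mean():.2f} var={y.var():.2f}")

All three samples share the same mean, but the mixed versions show variance well above the mean, which is the overdispersion signature the mixture distributions are meant to capture.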
De Lisi, Rosario; Milioto, Stefania; Muratore, Nicola
2009-01-01
The thermodynamics of conventional surfactants, block copolymers and their mixtures in water is described in the light of the enthalpy function. The two methodologies used to determine the enthalpy of micellization of pure surfactants and block copolymers, i.e. the van’t Hoff approach and isothermal calorimetry, are described, and the van’t Hoff method is critically discussed. The aqueous copolymer+surfactant mixtures are analyzed by means of isothermal titration calorimetry and the enthalpy of transfer of the copolymer from water to the aqueous surfactant solutions. Thermodynamic models are presented to show how straightforward molecular insights can be extracted from the bulk properties. PMID:19742173
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mack, J H; Dibble, R W; Buchholz, B A
2004-01-16
Despite the rapid combustion typically experienced in Homogeneous Charge Compression Ignition (HCCI), components in fuel mixtures do not ignite in unison or burn equally. In our experiments and modeling of blends of diethyl ether (DEE) and ethanol (EtOH), the DEE led combustion and proceeded further toward completion, as indicated by 14C isotope tracing. A numerical model of HCCI combustion of DEE and EtOH mixtures supports the isotopic findings. Although both approaches lacked information on incompletely combusted intermediates plentiful in HCCI emissions, the numerical model and 14C tracing data agreed within the limitations of the single zone model. Despite the fact that DEE is more reactive than EtOH in HCCI engines, they are sufficiently similar that we did not observe a large elongation of energy release or significant reduction in inlet temperature required for light-off, both desired effects for the combustion event. This finding suggests that, in general, HCCI combustion of fuel blends may have preferential combustion of some of the blend components.
gpICA: A Novel Nonlinear ICA Algorithm Using Geometric Linearization
NASA Astrophysics Data System (ADS)
Nguyen, Thang Viet; Patra, Jagdish Chandra; Emmanuel, Sabu
2006-12-01
A new geometric approach for nonlinear independent component analysis (ICA) is presented in this paper. The nonlinear environment is modeled by the popular post-nonlinear (PNL) scheme. To eliminate the nonlinearity in the observed signals, a novel linearizing method named geometric post-nonlinear ICA (gpICA) is introduced. Thereafter, a basic linear ICA is applied to these linearized signals to estimate the unknown sources. The proposed method is motivated by the fact that in a multidimensional space, a nonlinear mixture is represented by a nonlinear surface while a linear mixture is represented by a plane, a special form of the surface. Therefore, by geometrically transforming the surface representing a nonlinear mixture into a plane, the mixture can be linearized. Through simulations on different data sets, the superior performance of the gpICA algorithm is shown with respect to other algorithms.
A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment
NASA Astrophysics Data System (ADS)
Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.
2017-12-01
Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Despite the fact that many studies have been carried out on this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. In order to present a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of sediment mixture to estimate the PSDs of the entrained sediment and post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time, and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparing with the size-dependent mobilities predicted by the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a modified critical particle size of incipient motion by accounting for the mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
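A simplified sketch of the threshold-split idea follows, using a sharp threshold size on an assumed lognormal pre-entrainment PSD; the actual approach modifies the critical size for mixed-size effects, and all numbers below are hypothetical:

    import numpy as np
    from scipy.stats import lognorm

    # Pre-entrainment bed PSD assumed lognormal (hypothetical median/geo. std).
    d50, sigma_g = 0.5e-3, 2.0            # metres; geometric standard deviation
    dist = lognorm(s=np.log(sigma_g), scale=d50)

    d_threshold = 0.7e-3                  # threshold size of incipient motion

    # Fraction of the bed mobilized = mass fraction below the threshold size.
    mobile_fraction = dist.cdf(d_threshold)
    print("mobilized fraction:", round(mobile_fraction, 3))

    # Size-dependent mobility across classes: in this sharp-threshold sketch,
    # each class is fully mobile below the threshold and immobile above it,
    # giving the entrained (truncated) PSD after renormalization.
    edges = np.geomspace(0.05e-3, 5e-3, 9)
    class_mass = np.diff(dist.cdf(edges))
    entrained = np.where(edges[1:] <= d_threshold, class_mass, 0.0)
    entrained /= entrained.sum()
    print("entrained PSD by class:", np.round(entrained, 3))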
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faria, Melissa; CESAM & Departamento de Biologia, Universidade de Aveiro, 3810-193 Aveiro; Pavlichenko, Vasiliy
Aquatic organisms, such as bivalves, employ ATP binding cassette (ABC) transporters for efflux of potentially toxic chemicals. Anthropogenic water contaminants can, as chemosensitizers, disrupt efflux transporter function enabling other, putatively toxic compounds to enter the organism. Applying rapid amplification of cDNA ends (RACE) PCR we identified complete cDNAs encoding ABCB1- and ABCC1-type transporter homologs from zebra mussel providing the molecular basis for expression of both transporter types in zebra mussel gills. Further, efflux activities of both transporter types in gills were indicated with dye accumulation assays where efflux of the dye calcein-am was sensitive to both ABCB1- (reversin 205, verapamil) and ABCC1- (MK571) type specific inhibitors. The assumption that different inhibitors targeted different efflux pump types was confirmed when comparing measured effects of binary inhibitor compound mixtures in dye accumulation assays with predictions from mixture effect models. Effects by the MK571/reversin 205 mixture corresponded better with independent action, whereas reversin 205/verapamil joint effects were better predicted by the concentration addition model indicating different and equal targets, respectively. The binary mixture approach was further applied to identify the efflux pump type targeted by environmentally relevant chemosensitizing compounds. Pentachlorophenol and musk ketone, which were selected after a pre-screen of twelve compounds that previously had been identified as chemosensitizers, showed mixture effects that corresponded better with concentration addition when combined with reversine 205 but with independent action predictions when combined with MK571 indicating targeting of an ABCB1-type efflux pump by these compounds. - Highlights: • Sequences and function of ABC efflux transporters in bivalve gills were explored. • Full length Dreissena polymorpha abcb1 and abcc1 cDNA sequences were identified. • A mixture effect design with inhibitors was applied in transporter activity assays. • ABCB1- and ABCC-type efflux activities were distinguished in native gill tissue. • Inhibitory action of environmental chemicals targeted ABCB1-type efflux activity.
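The mixture-model logic used to assign inhibitor targets can be sketched with the two reference models: for hypothetical Hill-type single-inhibitor curves, concentration addition (same target) and independent action (different targets) give different predictions for the same binary mixture, and the better-fitting model identifies the shared or distinct target. The EC50s, slopes, and concentrations below are invented:

    import numpy as np
    from scipy.optimize import brentq

    # Hypothetical Hill curves for two inhibitors; effects are fractions in [0, 1].
    ec50 = np.array([1.0, 4.0])
    hill = np.array([1.2, 1.8])

    def effect_single(c, i):
        return c**hill[i] / (c**hill[i] + ec50[i]**hill[i])

    def ia_effect(conc):
        # Independent action: E = 1 - prod_i (1 - E_i(c_i)).
        return 1.0 - np.prod([1.0 - effect_single(conc[i], i) for i in range(2)])

    def ca_effect(conc):
        # Concentration addition: solve sum_i c_i / EC_{E,i} = 1 for E.
        def gap(E):
            ec_e = ec50 * (E / (1.0 - E)) ** (1.0 / hill)
            return np.sum(conc / ec_e) - 1.0
        return brentq(gap, 1e-9, 1.0 - 1e-9)

    mix = np.array([0.5, 2.0])
    print("IA prediction:", round(ia_effect(mix), 3))
    print("CA prediction:", round(ca_effect(mix), 3))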
A semi-parametric within-subject mixture approach to the analyses of responses and response times.
Molenaar, Dylan; Bolsinova, Maria; Vermunt, Jeroen K
2018-05-01
In item response theory, modelling the item response times in addition to the item responses may improve the detection of possible between- and within-subject differences in the process that resulted in the responses. For instance, if respondents rely on rapid guessing on some items but not on all, the joint distribution of the responses and response times will be a multivariate within-subject mixture distribution. Suitable parametric methods to detect these within-subject differences have been proposed. In these approaches, a distribution needs to be assumed for the within-class response times. In this paper, it is demonstrated that these parametric within-subject approaches may produce false positives and biased parameter estimates if the assumption concerning the response time distribution is violated. A semi-parametric approach is proposed which resorts to categorized response times. This approach is shown to produce hardly any false positives or parameter bias. In addition, the semi-parametric approach results in approximately the same power as the parametric approach. © 2017 The British Psychological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bansal, Artee; Asthagiri, D.; Cox, Kenneth R.
A mixture of solvent particles with short-range, directional interactions and solute particles with short-range, isotropic interactions that can bond multiple times is of fundamental interest in understanding liquids and colloidal mixtures. Because of multi-body correlations, predicting the structure and thermodynamics of such systems remains a challenge. Earlier Marshall and Chapman [J. Chem. Phys. 139, 104904 (2013)] developed a theory wherein association effects due to interactions multiply the partition function for clustering of particles in a reference hard-sphere system. The multi-body effects are incorporated in the clustering process, which in their work was obtained in the absence of the bulk medium. The bulk solvent effects were then modeled approximately within a second order perturbation approach. However, their approach is inadequate at high densities and for large association strengths. Based on the idea that the clustering of solvent in a defined coordination volume around the solute is related to occupancy statistics in that defined coordination volume, we develop an approach to incorporate the complete information about hard-sphere clustering in a bulk solvent at the density of interest. The occupancy probabilities are obtained from enhanced sampling simulations but we also develop a concise parametric form to model these probabilities using the quasichemical theory of solutions. We show that incorporating the complete reference information results in an approach that can predict the bonding state and thermodynamics of the colloidal solute for a wide range of system conditions.
Mixture model based joint-MAP reconstruction of attenuation and activity maps in TOF-PET
NASA Astrophysics Data System (ADS)
Hemmati, H.; Kamali-Asl, A.; Ghafarian, P.; Ay, M. R.
2018-06-01
A challenge in obtaining quantitative positron emission tomography (PET) images is providing an accurate and patient-specific photon attenuation correction. In PET/MR scanners, the nature of MR signals and hardware limitations have made extraction of the attenuation map a real challenge. Up to a constant factor, the activity and attenuation maps on a TOF-PET system can be determined from emission data by the maximum likelihood reconstruction of attenuation and activity (MLAA) approach. The aim of the present study is to constrain the joint estimation of activity and attenuation for PET systems using a mixture model prior based on the attenuation map histogram. This novel prior enforces non-negativity, and its hyperparameters can be estimated using a mixture decomposition step from the current estimate of the attenuation map. The proposed method can also help solve the scaling problem and is capable of assigning predefined regional attenuation coefficients, with some degree of confidence, to the attenuation map, similar to segmentation-based attenuation correction approaches. The performance of the algorithm was studied with numerical and Monte Carlo simulations and a phantom experiment, and was compared with the MLAA algorithm with and without the smoothing prior. The results demonstrate that the proposed algorithm is capable of producing cross-talk-free activity and attenuation images from emission data. The proposed approach has the potential to be a practical and competitive method for joint reconstruction of activity and attenuation maps from emission data on PET/MR and can be integrated with other methods.
Wang, J C; Liu, W C; Chatzisarantis, N L; Lim, C B
2010-06-01
The purpose of the current study was to examine the influence of perceived motivational climate on achievement goals in physical education using a structural equation mixture modeling (SEMM) analysis. Within one analysis, we identified groups of students with homogenous profiles in perceptions of motivational climate and examined the relationships between motivational climate, 2 x 2 achievement goals, and affect, concurrently. The findings of the current study showed that there were at least two distinct groups of students with differing perceptions of motivational climate: one group of students had much higher perceptions in both climates compared with the other group. Regardless of their grouping, the relationships between motivational climate, achievement goals, and enjoyment seemed to be invariant. Mastery climate predicted the adoption of mastery-approach and mastery-avoidance goals; performance climate was related to performance-approach and performance-avoidance goals. Mastery-approach goal had a strong positive effect while performance-avoidance had a small negative effect on enjoyment. Overall, it was concluded that only perception of a mastery motivational climate in physical education may foster intrinsic interest in physical education through adoption of mastery-approach goals.
CFD Modeling of Helium Pressurant Effects on Cryogenic Tank Pressure Rise Rates in Normal Gravity
NASA Technical Reports Server (NTRS)
Grayson, Gary; Lopez, Alfredo; Chandler, Frank; Hastings, Leon; Hedayat, Ali; Brethour, James
2007-01-01
A recently developed computational fluid dynamics modeling capability for cryogenic tanks is used to simulate both self-pressurization from external heating and also depressurization from thermodynamic vent operation. Axisymmetric models using a modified version of the commercially available FLOW-3D software are used to simulate actual physical tests. The models assume an incompressible liquid phase with density that is a function of temperature only. A fully compressible formulation is used for the ullage gas mixture that contains both condensable vapor and a noncondensable gas component. The tests, conducted at the NASA Marshall Space Flight Center, include both liquid hydrogen and nitrogen in tanks with ullage gas mixtures of each liquid's vapor and helium. Pressure and temperature predictions from the model are compared to sensor measurements from the tests and a good agreement is achieved. This further establishes the accuracy of the developed FLOW-3D based modeling approach for cryogenic systems.
Kellogg, Joshua J; Todd, Daniel A; Egan, Joseph M; Raja, Huzefa A; Oberlies, Nicholas H; Kvalheim, Olav M; Cech, Nadja B
2016-02-26
A central challenge of natural products research is assigning bioactive compounds from complex mixtures. The gold standard approach to address this challenge, bioassay-guided fractionation, is often biased toward abundant, rather than bioactive, mixture components. This study evaluated the combination of bioassay-guided fractionation with untargeted metabolite profiling to improve active component identification early in the fractionation process. Key to this methodology was statistical modeling of the integrated biological and chemical data sets (biochemometric analysis). Three data analysis approaches for biochemometric analysis were compared, namely, partial least-squares loading vectors, S-plots, and the selectivity ratio. Extracts from the endophytic fungi Alternaria sp. and Pyrenochaeta sp. with antimicrobial activity against Staphylococcus aureus served as test cases. Biochemometric analysis incorporating the selectivity ratio performed best in identifying bioactive ions from these extracts early in the fractionation process, yielding altersetin (3, MIC 0.23 μg/mL) and macrosphelide A (4, MIC 75 μg/mL) as antibacterial constituents from Alternaria sp. and Pyrenochaeta sp., respectively. This study demonstrates the potential of biochemometrics coupled with bioassay-guided fractionation to identify bioactive mixture components. A benefit of this approach is the ability to integrate multiple stages of fractionation and bioassay data into a single analysis.
Gaussian mixtures on tensor fields for segmentation: applications to medical imaging.
de Luis-García, Rodrigo; Westin, Carl-Fredrik; Alberola-López, Carlos
2011-01-01
In this paper, we introduce a new approach for tensor field segmentation based on the definition of mixtures of Gaussians on tensors as a statistical model. Working over the well-known Geodesic Active Regions segmentation framework, this scheme presents several interesting advantages. First, it yields a more flexible model than the use of a single Gaussian distribution, which enables the method to better adapt to the complexity of the data. Second, it can work directly on tensor-valued images or, through a parallel scheme that processes independently the intensity and the local structure tensor, on scalar textured images. Two different applications have been considered to show the suitability of the proposed method for medical imaging segmentation. First, we address DT-MRI segmentation on a dataset of 32 volumes, showing a successful segmentation of the corpus callosum and favourable comparisons with related approaches in the literature. Second, the segmentation of bones from hand radiographs is studied, and a complete automatic-semiautomatic approach has been developed that makes use of anatomical prior knowledge to produce accurate segmentation results. Copyright © 2010 Elsevier Ltd. All rights reserved.
Suwannarangsee, Surisa; Bunterngsook, Benjarat; Arnthong, Jantima; Paemanee, Atchara; Thamchaipenet, Arinthip; Eurwilaichitr, Lily; Laosiripojana, Navadol; Champreda, Verawat
2012-09-01
Synergistic enzyme system for the hydrolysis of alkali-pretreated rice straw was optimised based on the synergy of crude fungal enzyme extracts with a commercial cellulase (Celluclast™). Among 13 enzyme extracts, the enzyme preparation from Aspergillus aculeatus BCC 199 exhibited the highest level of synergy with Celluclast™. This synergy was based on the complementary cellulolytic and hemicellulolytic activities of the BCC 199 enzyme extract. A mixture design was used to optimise the ternary enzyme complex based on the synergistic enzyme mixture with Bacillus subtilis expansin. Using the full cubic model, the optimal formulation of the enzyme mixture was predicted to be Celluclast™:BCC 199:expansin = 41.4:37.0:21.6 (by percentage), which produced 769 mg reducing sugar/g biomass using 2.82 FPU/g enzymes. This work demonstrated the use of a systematic approach for the design and optimisation of a synergistic enzyme mixture of fungal enzymes and expansin for lignocellulosic degradation. Copyright © 2012 Elsevier Ltd. All rights reserved.
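Mixture-design optimisation of this kind can be reproduced in outline with standard regression tools. The R sketch below fits a Scheffé full cubic model to simulated three-component blend data and searches a simplex grid for the optimal formulation; the data and response surface are placeholders, not the study's measurements.

```r
# Sketch: Scheffe full-cubic mixture model on simulated ternary blend data,
# then a grid search over the simplex for the best-predicted blend.
set.seed(1)
n <- 30
p <- matrix(rexp(3 * n), ncol = 3); p <- p / rowSums(p)   # random simplex points
colnames(p) <- c("x1", "x2", "x3")
d <- as.data.frame(p)
d$y <- with(d, 700*x1 + 500*x2 + 300*x3 + 600*x1*x2*x3) + rnorm(n, 0, 20)

# Scheffe parameterization: no intercept because x1 + x2 + x3 = 1
fit <- lm(y ~ 0 + x1 + x2 + x3 + x1:x2 + x1:x3 + x2:x3 +
            I(x1*x2*(x1 - x2)) + I(x1*x3*(x1 - x3)) + I(x2*x3*(x2 - x3)) +
            x1:x2:x3, data = d)

# Evaluate the fitted surface on a simplex grid and report the optimum blend
g <- expand.grid(x1 = seq(0, 1, 0.02), x2 = seq(0, 1, 0.02))
g <- subset(g, x1 + x2 <= 1); g$x3 <- 1 - g$x1 - g$x2
g$yhat <- predict(fit, g)
g[which.max(g$yhat), ]
```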
A Bayesian mixture model for missing data in marine mammal growth analysis
Shotwell, Mary E.; McFee, Wayne E.; Slate, Elizabeth H.
2016-01-01
Much of what is known about bottlenose dolphin (Tursiops truncatus) anatomy and physiology is based on necropsies from stranding events. Measurements of total body length, total body mass, and age are used to estimate growth. It is more feasible to retrieve and transport smaller animals for total body mass measurement than larger animals, introducing a systematic bias in sampling. Adverse weather events, volunteer availability, and other unforeseen circumstances also contribute to incomplete measurement. We have developed a Bayesian mixture model to describe growth in detected stranded animals using data from both those that are fully measured and those not fully measured. Our approach uses a shared random effect to link the missingness mechanism (i.e. full/partial measurement) to distinct growth curves in the fully and partially measured populations, thereby enabling borrowing of strength for estimation. We use simulation to compare our model to complete case analysis and two common multiple imputation methods according to model mean square error. Results indicate that our mixture model provides better fit both when the two populations are present and when they are not. The feasibility and utility of our new method is demonstrated by application to South Carolina strandings data. PMID:28503080
Transport Properties for Combustion Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, N.J.; Bastein, L.; Price, P.N.
This review examines current approximations and approaches that underlie the evaluation of transport properties for combustion modeling applications. Discussed in the review are: the intermolecular potential and its descriptive molecular parameters; various approaches to evaluating collision integrals; supporting data required for the evaluation of transport properties; commonly used computer programs for predicting transport properties; the quality of experimental measurements and their importance for validating or rejecting approximations to property estimation; the interpretation of corresponding states; combination rules that yield pair molecular potential parameters for unlike species from like species parameters; and mixture approximations. The insensitivity of transport properties to intermolecular forces is noted, especially the non-uniqueness of the supporting potential parameters. Viscosity experiments of pure substances and binary mixtures measured post 1970 are used to evaluate a number of approximations; the intermediate temperature range 1 < T* < 10, where T* = kT/ε, is emphasized since this is where rich data sets are available. When suitable potential parameters are used, errors in transport property predictions for pure substances and binary mixtures are less than 5%, when they are calculated using the approaches of Kee et al.; Mason, Kestin, and Uribe; Paul and Warnatz; or Ern and Giovangigli. Recommendations stemming from the review include (1) revisiting the supporting data required by the various computational approaches, and updating the data sets with accurate potential parameters, dipole moments, and polarizabilities; (2) characterizing the range of parameter space over which the fit to experimental data is good, rather than the current practice of reporting only the parameter set that best fits the data; (3) looking for improved combining rules, since existing rules were found to under-predict the viscosity in most cases; (4) performing more transport property measurements for mixtures that include radical species, an important but neglected area; (5) using the TRANLIB approach for treating polar molecules; and (6) performing more accurate measurements of the molecular parameters used to evaluate the molecular heat capacity, since it affects thermal conductivity, which is important in predicting flame development.
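As a concrete instance of the collision-integral machinery reviewed above, the sketch below implements the textbook first-order Chapman-Enskog viscosity for a pure gas in R, using the Neufeld et al. (1972) correlation for the reduced collision integral Ω(2,2)*. The Lennard-Jones parameters quoted for N2 are common literature values used only for illustration; this is a sketch of the standard formula, not of any specific program discussed in the review.

```r
# Sketch: first-order Chapman-Enskog viscosity of a pure gas with the
# Neufeld et al. (1972) correlation for the (2,2)* collision integral.
omega22 <- function(Tstar)
  1.16145 * Tstar^-0.14874 + 0.52487 * exp(-0.77320 * Tstar) +
  2.16178 * exp(-2.43787 * Tstar)

# eta in Pa.s; M in g/mol, T in K, sigma in Angstrom, eps_k = epsilon/k in K
ce_viscosity <- function(T, M, sigma, eps_k) {
  Tstar <- T / eps_k                      # reduced temperature T* = kT/epsilon
  26.69e-7 * sqrt(M * T) / (sigma^2 * omega22(Tstar))
}

# Illustrative N2 parameters from the literature; returns ~1.8e-5 Pa.s at 300 K
ce_viscosity(300, M = 28.014, sigma = 3.798, eps_k = 71.4)
```

The non-uniqueness noted in the review shows up directly here: rather different (sigma, eps_k) pairs can reproduce the same viscosity over a narrow temperature window.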
Investigation of Profiles of Risk Factors for Adolescent Psychopathology: A Person-Centered Approach
ERIC Educational Resources Information Center
Parra, Gilbert R.; DuBois, David L.; Sher, Kenneth J.
2006-01-01
Latent variable mixture modeling was used to identify subgroups of adolescents with distinct profiles of risk factors from individual, family, peer, and broader contextual domains. Data were drawn from the National Longitudinal Study of Adolescent Health. Four-class models provided the most theoretically meaningful solutions for both 7th (n = 907;…
Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V
2014-11-30
We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. Whereas the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
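The two mixtures contrasted above differ only in how zeros arise. A minimal base-R sketch of the covariate-free likelihoods makes this concrete (the functional predictor and penalized-reconstruction machinery is omitted, and the toy counts and starting values are invented):

```r
# Sketch: ZIP vs hurdle log-likelihoods for a count vector y, to show the
# two mixture structures; no covariates, no functional predictor.
zip_ll <- function(y, pi0, lam) {
  # mixture of a point mass at zero (prob pi0) and a standard Poisson
  sum(log(ifelse(y == 0,
                 pi0 + (1 - pi0) * dpois(0, lam),
                 (1 - pi0) * dpois(y, lam))))
}
hurdle_ll <- function(y, p0, lam) {
  # point mass at zero vs a zero-truncated Poisson for the positive counts
  sum(log(ifelse(y == 0,
                 p0,
                 (1 - p0) * dpois(y, lam) / (1 - dpois(0, lam)))))
}

y <- c(0, 0, 0, 1, 2, 0, 4, 0, 3, 0)   # toy counts with excess zeros
# ML fit of the ZIP on transformed scales (logit zero-prob, log rate)
optim(c(0, 0), function(th) -zip_ll(y, plogis(th[1]), exp(th[2])))$par
```

Note that in the ZIP the Poisson component also contributes zeros, whereas in the hurdle all zeros come from the point mass; that is exactly the distinction drawn in the abstract.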
A Four-step Approach for Evaluation of Dose Additivity
A four step approach was developed for evaluating toxicity data on a chemical mixture for consistency with dose addition. Following the concepts in the U.S. EPA mixture guidance (EPA 2000), toxicologic interaction for a defined mixture (all components known) is departure from a c...
NASA Astrophysics Data System (ADS)
Pizio, O.; Sokołowski, S.; Sokołowska, Z.
2014-05-01
We investigate microscopic structure, adsorption, and electric properties of a mixture that consists of amphiphilic molecules and charged hard spheres in contact with uncharged or charged solid surfaces. The amphiphilic molecules are modeled as spheres composed of attractive and repulsive parts. The electrolyte component of the mixture is considered in the framework of the restricted primitive model (RPM). The system is studied using a density functional theory that combines fundamental measure theory for hard sphere mixtures, a weighted density approach for inhomogeneous charged hard spheres, and a mean-field approximation to describe anisotropic interactions. Our principal focus is on exploring the effects brought by the presence of ions on the distribution of amphiphilic particles at the wall, as well as the effects of amphiphilic molecules on the electric double layer formed at the solid surface. In particular, we have found that under certain thermodynamic conditions a long-range translational and orientational order can develop. The presence of amphiphiles changes the shape of the differential capacitance from symmetric or non-symmetric bell-like to camel-like. Moreover, for some systems the potential of zero charge is non-zero, in contrast to the RPM at a charged surface.
Application of Biologically-Based Lumping To Investigate the ...
People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. However, investigators have often considered complex mixtures as one lumped entity. Valuable information can be obtained from these experiments, though this simplification provides little insight into the impact of a mixture's chemical composition on toxicologically-relevant metabolic interactions that may occur among its constituents. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically-based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate performance of our PBPK model. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course kinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for the 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 non-target chemicals. Application of this biologic
DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.
Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei
2018-01-01
Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifier (UMI). Despite the technology advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. Particularly, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available on www.pitt.edu/∼wec47/singlecell.html. Contact: wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
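To make the model-based clustering idea concrete, here is a compact EM for a plain multinomial mixture over UMI count vectors, a simplified relative of DIMM-SC (the Dirichlet prior that DIMM-SC places on the expression profiles is omitted, and the toy matrix is random). This is an illustrative sketch, not the package's implementation.

```r
# Sketch: EM for a K-component multinomial mixture over UMI counts.
# X is a cells x genes count matrix.
mn_mix_em <- function(X, K, iters = 50) {
  n <- nrow(X)
  pi_k <- rep(1 / K, K)
  theta <- matrix(rgamma(K * ncol(X), 1), K)   # K x genes profiles
  theta <- theta / rowSums(theta)
  for (it in 1:iters) {
    # E-step: log responsibilities from the multinomial log-likelihood
    logr <- X %*% t(log(theta)) + rep(log(pi_k), each = n)   # n x K
    r <- exp(logr - apply(logr, 1, max)); r <- r / rowSums(r)
    # M-step: mixing weights and per-cluster gene profiles
    pi_k <- colMeans(r)
    theta <- t(r) %*% X + 1e-8
    theta <- theta / rowSums(theta)
  }
  list(cluster = max.col(r), weights = pi_k, profiles = theta)
}

X <- matrix(rpois(200 * 50, 2), 200, 50)   # toy UMI matrix (200 cells, 50 genes)
mn_mix_em(X, K = 3)$weights
```

The responsibilities r are exactly the per-cell clustering uncertainties that the abstract highlights as an advantage of model-based approaches.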
Risk assessment of occupational exposure to heavy metal mixtures: a study protocol.
Omrane, Fatma; Gargouri, Imed; Khadhraoui, Moncef; Elleuch, Boubaker; Zmirou-Navier, Denis
2018-03-05
Sfax is a very industrialized city located in the southern region of Tunisia where heavy metals (HMs) pollution is now an established matter of fact. The health of its residents, mainly those engaged in industrial metals-based activities, is under threat. Indeed, such workers are being exposed to a variety of HM mixtures, and this exposure has cumulative properties. Whereas current HM exposure assessment is mainly carried out using direct air monitoring approaches, the present study aims to assess health risks associated with chronic occupational exposure to HMs in industry, using a modeling approach that will be validated later on. To this end, two questionnaires were used. The first was an identification/descriptive questionnaire aimed at identifying, for each company: the specific activities, materials used, manufactured products and number of employees exposed. The second related to the job-task of the exposed persons, workplace characteristics (dimensions, ventilation, etc.), type of metals and emission configuration in space and time. Indoor air HM concentrations were predicted based on the mathematical models generally used to estimate occupational exposure to volatile substances (such as solvents). Later on, and in order to validate the adopted model, air monitoring will be carried out, as well as some biological monitoring aimed at assessing HM excretion in the urine of workers volunteering to participate. Lastly, an interaction-based hazard index (HI_int) and a decision support tool will be used to predict the cumulative risk assessment for HM mixtures. One hundred sixty-one persons working in the 5 participating companies have been identified. Of these, 110 are directly engaged with HMs in the course of the manufacturing process. This model-based prediction of occupational exposure represents an alternative tool that is both time-saving and cost-effective in comparison with direct air monitoring approaches. Following validation of the different models according to job processes, via comparison with direct measurements and exploration of correlations with biological monitoring, these estimates will allow a cumulative risk characterization.
A Just-in-Time Learning based Monitoring and Classification Method for Hyper/Hypocalcemia Diagnosis.
Peng, Xin; Tang, Yang; He, Wangli; Du, Wenli; Qian, Feng
2017-01-20
This study focuses on the classification and pathological status monitoring of hyper/hypocalcemia in the calcium regulatory system. By utilizing the Independent Component Analysis (ICA) mixture model, samples from healthy patients are collected, diagnosed, and subsequently classified according to their underlying behaviors, characteristics, and mechanisms. A Just-in-Time Learning (JITL) approach is then employed to estimate the diseased status dynamically. Within JITL, a novel similarity index based on the ICA mixture model is proposed in this paper to identify relevant datasets and improve online model quality. The validity and effectiveness of the proposed approach have been demonstrated by applying it to the calcium regulatory system under various hypocalcemic and hypercalcemic diseased conditions.
Saleem, Muhammad; Sharif, Kashif; Fahmi, Aliya
2018-04-27
Applications of the Pareto distribution are common in reliability, survival and financial studies. In this paper, a Pareto mixture distribution is considered to model a heterogeneous population comprising two subgroups. Each of the two subgroups is characterized by the same functional form with unknown, distinct shape and scale parameters. Bayes estimators have been derived under flat and conjugate priors using the squared error loss function. Standard errors have also been derived for the Bayes estimators. An interesting feature of this study is the preparation of the components of the Fisher information matrix.
USDA-ARS?s Scientific Manuscript database
A methodology is presented to characterize complex protein assembly pathways by fluorescence correlation spectroscopy. We have derived the total autocorrelation function describing the behavior of mixtures of labeled and unlabeled protein under equilibrium conditions. Our modeling approach allows us...
MODELING A MIXTURE: PBPK/PD APPROACHES FOR PREDICTING CHEMICAL INTERACTIONS.
Since environmental chemical exposures generally involve multiple chemicals, there are both regulatory and scientific drivers to develop methods to predict outcomes of these exposures. Even using efficient statistical and experimental designs, it is not possible to test in vivo a...
Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues
NASA Astrophysics Data System (ADS)
Adams, W. H.; Iyengar, Giridharan; Lin, Ching-Yung; Naphade, Milind Ramesh; Neti, Chalapathy; Nock, Harriet J.; Smith, John R.
2003-12-01
We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon. Novel concepts are then mapped in terms of the concepts in the lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely, audio, video, and text. Concept representations are modeled using Gaussian mixture models (GMM), hidden Markov models (HMM), and support vector machines (SVM). Models such as Bayesian networks and SVMs are used in a late-fusion approach to model concepts that are not explicitly modeled in terms of features. Our experiments indicate promise in the proposed classification and fusion methodologies: our proposed fusion scheme achieves more than 10% relative improvement over the best unimodal concept detector.
High-Throughput Analysis of Ovarian Cycle Disruption by Mixtures of Aromatase Inhibitors
Golbamaki-Bakhtyari, Nazanin; Kovarich, Simona; Tebby, Cleo; Gabb, Henry A.; Lemazurier, Emmanuel
2017-01-01
Background: Combining computational toxicology with ExpoCast exposure estimates and ToxCast™ assay data gives us access to predictions of human health risks stemming from exposures to chemical mixtures. Objectives: We explored, through mathematical modeling and simulations, the size of potential effects of random mixtures of aromatase inhibitors on the dynamics of women's menstrual cycles. Methods: We simulated random exposures to millions of potential mixtures of 86 aromatase inhibitors. A pharmacokinetic model of intake and disposition of the chemicals predicted their internal concentration as a function of time (up to 2 y). A ToxCast™ aromatase assay provided concentration–inhibition relationships for each chemical. The resulting total aromatase inhibition was input to a mathematical model of the hormonal hypothalamus–pituitary–ovarian control of ovulation in women. Results: Above 10% inhibition of estradiol synthesis by aromatase inhibitors, noticeable (eventually reversible) effects on ovulation were predicted. Exposures to individual chemicals never led to such effects. In our best estimate, ∼10% of the combined exposures simulated had mild to catastrophic impacts on ovulation. A lower bound on that figure, obtained using an optimistic exposure scenario, was 0.3%. Conclusions: These results demonstrate the possibility to predict large-scale mixture effects for endocrine disrupters with a predictive toxicology approach that is suitable for high-throughput ranking and risk assessment. The size of the effects predicted is consistent with an increased risk of infertility in women from everyday exposures to our chemical environment. https://doi.org/10.1289/EHP742 PMID:28886606
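The per-chemical concentration-inhibition curves can be combined into a total inhibition in several ways; the sketch below assumes unit Hill slopes and a shared target, so that concentrations add in toxic units (a concentration-addition-style aggregation). All IC50 values and internal concentrations are invented, and this is not the authors' pipeline, only an illustration of the aggregation step.

```r
# Sketch: combining per-chemical concentration-inhibition relationships into
# a total enzyme inhibition, assuming unit Hill slopes and a common target.
total_inhibition <- function(C, IC50) {
  s <- sum(C / IC50)   # summed toxic units across the mixture
  s / (1 + s)          # fractional inhibition of the enzyme
}

C    <- c(0.05, 0.40, 0.01)   # internal concentrations (uM), hypothetical
IC50 <- c(1.2, 8.0, 0.3)      # per-chemical IC50 (uM), hypothetical
total_inhibition(C, IC50)
```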
Haxhimali, Tomorr; Rudd, Robert E.; Cabot, William H.; ...
2015-11-24
We present molecular dynamics (MD) calculations of shear viscosity for asymmetric mixed plasma for thermodynamic conditions relevant to astrophysical and inertial confinement fusion plasmas. Specifically, we consider mixtures of deuterium and argon at temperatures of 100–500 eV and a number density of 10^25 ions/cc. The motion of 30 000–120 000 ions is simulated in which the ions interact via the Yukawa (screened Coulomb) potential. The electric field of the electrons is included in this effective interaction; the electrons are not simulated explicitly. Shear viscosity is calculated using the Green-Kubo approach with an integral of the shear stress autocorrelation function, a quantity calculated in the equilibrium MD simulations. We systematically study different mixtures through a series of simulations with increasing fraction of the minority high-Z element (Ar) in the D-Ar plasma mixture. In the more weakly coupled plasmas, at 500 eV and low Ar fractions, results from MD compare very well with Chapman-Enskog kinetic results. In the more strongly coupled plasmas, the kinetic theory does not agree well with the MD results. Here, we develop a simple model that interpolates between classical kinetic theories at weak coupling and the Murillo Yukawa viscosity model at higher coupling. Finally, this hybrid kinetics-MD viscosity model agrees well with the MD results over the conditions simulated, ranging from moderately weakly coupled to moderately strongly coupled asymmetric plasma mixtures.
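The Green-Kubo step above reduces to integrating the stress autocorrelation, eta = V/(kB*T) times the time integral of <Pxy(0)Pxy(t)>. A minimal R sketch of that post-processing, assuming a scalar off-diagonal stress trace sampled at interval dt; the synthetic AR(1) series merely stands in for MD output, and V, T and the noise scale are arbitrary:

```r
# Sketch: Green-Kubo shear viscosity from an off-diagonal stress time series.
green_kubo_eta <- function(pxy, dt, V, T, kB = 1.380649e-23) {
  n <- length(pxy)
  lagmax <- n %/% 4
  acf_vals <- sapply(0:lagmax, function(k)
    mean(pxy[1:(n - k)] * pxy[(1 + k):n]))     # stress autocorrelation
  # trapezoidal-rule time integral of the autocorrelation
  integral <- dt * (sum(acf_vals) - 0.5 * (acf_vals[1] + acf_vals[lagmax + 1]))
  V / (kB * T) * integral
}

pxy <- as.numeric(arima.sim(list(ar = 0.9), 2e4)) * 1e5   # toy correlated "stress" (Pa)
green_kubo_eta(pxy, dt = 1e-15, V = 1e-26, T = 300)
```

In practice the integral is truncated or fitted once the autocorrelation decays into noise, which is why the lag window matters more than the formula itself.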
Dose addition is the most frequently-used component-based approach for predicting dose response for a mixture of toxicologically-similar chemicals and for statistical evaluation of whether the mixture response is consistent with dose additivity and therefore predictable from the ...
NASA Astrophysics Data System (ADS)
Xie, M.; Agus, S. S.; Schanz, T.; Kolditz, O.
2004-12-01
This paper presents an upscaling concept of swelling/shrinking processes of a compacted bentonite/sand mixture, which also applies to swelling of porous media in general. A constitutive approach for highly compacted bentonite/sand mixture is developed accordingly. The concept is based on the diffuse double layer theory and connects microstructural properties of the bentonite as well as chemical properties of the pore fluid with swelling potential. Main factors influencing the swelling potential of bentonite, i.e. variation of water content, dry density, chemical composition of pore fluid, as well as the microstructures and the amount of swelling minerals, are taken into account. According to the proposed model, porosity is divided into interparticle and interlayer porosity. Swelling is the potential of interlayer porosity increase, which reveals itself as volume change in the case of free expansion, or as swelling pressure in the case of constrained swelling. The constitutive equations for swelling/shrinking are implemented in the software GeoSys/RockFlow as a new chemo-hydro-mechanical model, which is able to simulate isothermal multiphase flow in bentonite. Details of the mathematical and numerical multiphase flow formulations, as well as the code implementation, are described. The proposed model is verified using experimental data of tests on a highly compacted bentonite/sand mixture. Comparison of the 1D modelling results with the experimental data demonstrates the capability of the proposed model to satisfactorily predict free swelling of the material under investigation.
Gainer, Amy; Cousins, Mark; Hogan, Natacha; Siciliano, Steven D
2018-05-05
Although petroleum hydrocarbons (PHCs) released to the environment typically occur as mixtures, PHC remediation guidelines often reflect individual substance toxicity. It is well documented that groups of aliphatic PHCs act via the same mechanism of action, nonpolar narcosis, and that, theoretically, concentration addition mixture toxicity principles apply. To assess this theory, ten standardized acute and chronic soil invertebrate toxicity tests on a range of organisms (Eisenia fetida, Lumbricus terrestris, Enchytraeus crypticus, Folsomia candida, Oppia nitens and Hypoaspis aculeifer) were conducted with a refined PHC binary mixture. Reference models for concentration addition and independent action were applied to the mixture toxicity data with consideration of synergism, antagonism and dose-level toxicity. Both concentration addition and independent action, without further interactions, provided the best fit with observed response to the mixture. Individual fraction effective concentration values were predicted from optimized, fitted reference models. Concentration addition provided a better estimate than independent action of individual fraction effective concentrations, based on comparison with available literature and species trends observed in toxic responses to the mixture. Interspecies differences in standardized laboratory soil invertebrate species responses to PHC-contaminated soil were reflected in distinct traits. Diets that included soil, large body size, permeable cuticle, low lipid content, lack of ability to molt and no maternal transfer were traits linked to a sensitive survival response to PHC-contaminated soil in laboratory tests. Traits linked to a sensitive reproduction response in the organisms tested were long life spans with small clutch sizes. By deriving single-fraction toxicity endpoints that account for mixtures, we reduce the resources and time required to conduct site-specific risk assessments for the protection of the soil organism exposure pathway. This article is protected by copyright. All rights reserved.
Cubarsi, R; Carrió, M M; Villaverde, A
2005-09-01
The in vivo proteolytic digestion of bacterial inclusion bodies (IBs) and the kinetic analysis of the resulting protein fragments is an interesting approach to investigate the molecular organization of these unconventional protein aggregates. In this work, we describe a set of mathematical instruments useful for such analysis and interpretation of observed data. These methods combine numerical estimation of digestion rate and approximation of its high-order derivatives, modelling of fragmentation events from a mixture of Poisson processes associated with differentiated protein species, differential equations techniques in order to estimate the mixture parameters, an iterative predictor-corrector algorithm for describing the flow diagram along the cascade process, as well as least squares procedures with minimum variance estimates. The models are formulated and compared with data, and successively refined to better match experimental observations. By applying such procedures as well as newer improved algorithms of formerly developed equations, it has been possible to model, for two kinds of bacterially produced aggregation prone recombinant proteins, their cascade digestion process that has revealed intriguing features of the IB-forming polypeptides.
A hydrodynamic model for granular material flows including segregation effects
NASA Astrophysics Data System (ADS)
Gilberg, Dominik; Klar, Axel; Steiner, Konrad
2017-06-01
The simulation of granular flows including segregation effects in large industrial processes using particle methods is accurate, but very time-consuming. To overcome the long computation times a macroscopic model is a natural choice. Therefore, we couple a mixture theory based segregation model to a hydrodynamic model of Navier-Stokes-type, describing the flow behavior of the granular material. The granular flow model is a hybrid model derived from kinetic theory and a soil mechanical approach to cover the regime of fast dilute flow, as well as slow dense flow, where the density of the granular material is close to the maximum packing density. Originally, the segregation model was formulated by Thornton and Gray for idealized avalanches. It is modified and adapted to be in the preferred form for the coupling. In the final coupled model the segregation process depends on the local state of the granular system. On the other hand, the granular system changes as differently mixed regions of the granular material differ, e.g., in the packing density. The modeling process focuses on dry granular material flows of two particle types differing only in size, but it can be easily extended to arbitrary granular mixtures of different particle size and density. To solve the coupled system a finite volume approach is used. To test the model the rotational mixing of small and large particles in a tumbler is simulated.
2015-01-01
The present study investigated the possibilities and limitations of implementing a genome-wide transcription-based approach that takes into account genetic and environmental variation to better understand the response of natural populations to stressors. When exposing two different Daphnia pulex genotypes (a cadmium-sensitive and a cadmium-tolerant one) to cadmium, the toxic cyanobacteria Microcystis aeruginosa, and their mixture, we found that observations at the transcriptomic level do not always explain observations at a higher level (growth, reproduction). For example, although cadmium elicited an adverse effect at the organismal level, almost no genes were differentially expressed after cadmium exposure. In addition, we identified oxidative stress and polyunsaturated fatty acid metabolism-related pathways, as well as trypsin and neurexin IV gene-families as candidates for the underlying causes of genotypic differences in tolerance to Microcystis. Furthermore, the whole-genome transcriptomic data of a stressor mixture allowed a better understanding of mixture responses by evaluating interactions between two stressors at the gene-expression level against the independent action baseline model. This approach has indicated that ubiquinone pathway and the MAPK serine-threonine protein kinase and collagens gene-families were enriched with genes showing an interactive effect in expression response to exposure to the mixture of the stressors, while transcription and translation-related pathways and gene-families were mostly related with genotypic differences in interactive responses to this mixture. Collectively, our results indicate that the methods we employed may improve further characterization of the possibilities and limitations of transcriptomics approaches in the adverse outcome pathway framework and in predictions of multistressor effects on natural populations. PMID:24552364
A robust quantitative near infrared modeling approach for blend monitoring.
Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A
2018-01-30
This study demonstrates a material-sparing near-infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch size blend run. Both models demonstrated similar performance. The small-scale method strategy significantly reduces the total resources expended to develop near-infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
Inadequacy representation of flamelet-based RANS model for turbulent non-premixed flame
NASA Astrophysics Data System (ADS)
Lee, Myoungkyu; Oliver, Todd; Moser, Robert
2017-11-01
Stochastic representations for model inadequacy in RANS-based models of non-premixed jet flames are developed and explored. Flamelet-based RANS models are attractive for engineering applications relative to higher-fidelity methods because of their low computational costs. However, the various assumptions inherent in such models introduce errors that can significantly affect the accuracy of computed quantities of interest. In this work, we develop an approach to represent the model inadequacy of the flamelet-based RANS model. In particular, we pose a physics-based, stochastic PDE for the triple correlation of the mixture fraction. This additional uncertain state variable is then used to construct perturbations of the PDF for the instantaneous mixture fraction, which is used to obtain an uncertain perturbation of the flame temperature. A hydrogen-air non-premixed jet flame is used to demonstrate the representation of the inadequacy of the flamelet-based RANS model. This work was supported by the DARPA EQUiPS (Enabling Quantification of Uncertainty in Physical Systems) program.
Aquatic exposures of chemical mixtures in urban environments: Approaches to impact assessment.
de Zwart, Dick; Adams, William; Galay Burgos, Malyka; Hollender, Juliane; Junghans, Marion; Merrington, Graham; Muir, Derek; Parkerton, Thomas; De Schamphelaere, Karel A C; Whale, Graham; Williams, Richard
2018-03-01
Urban regions of the world are expanding rapidly, placing additional stress on water resources. Urban water bodies serve many purposes, from washing and sources of drinking water to transport and conduits for storm drainage and effluent discharge. These water bodies receive chemical emissions arising from single or multiple point sources and from diffuse sources, which can be continuous, intermittent, or seasonal. Thus, aquatic organisms in these water bodies are exposed to temporally and compositionally variable mixtures. We have delineated source-specific signatures of these mixtures for diffuse urban runoff and urban point source exposure scenarios to support risk assessment and management of these mixtures. The first step in a tiered approach to assessing chemical exposure has been developed based on the event mean concentration concept, with chemical concentrations in runoff defined by volumes of water leaving each surface and the chemical exposure mixture profiles for different urban scenarios. Although generalizations can be made about the chemical composition of urban sources and event mean exposure predictions for initial prioritization, such modeling needs to be complemented with biological monitoring data. It is highly unlikely that the current paradigm of routine regulatory chemical monitoring alone will provide a realistic appraisal of urban aquatic chemical mixture exposures. Future consideration is also needed of the role of nonchemical stressors in such highly modified urban water bodies. Environ Toxicol Chem 2018;37:703-714. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
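The event mean concentration (EMC) underlying this first tier is simply a volume-weighted mean. A tiny R sketch, with invented sample concentrations and runoff volumes for two urban surfaces:

```r
# Sketch: event mean concentration (EMC) as the volume-weighted mean of
# flow-paced samples, then the blended exposure at a combined outfall.
emc <- function(conc, vol) sum(conc * vol) / sum(vol)

roof <- emc(conc = c(2.0, 1.5, 1.0), vol = c(10, 40, 25))  # ug/L per sample
road <- emc(conc = c(8.0, 6.5, 5.0), vol = c(5, 30, 15))

# runoff from the two surfaces mixed in proportion to their event volumes
emc(conc = c(roof, road), vol = c(75, 50))
```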
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelanti, Marica, E-mail: marica.pelanti@ensta-paristech.fr; Shyue, Keh-Ming, E-mail: shyue@ntu.edu.tw
2014-02-15
We model liquid–gas flows with cavitation by a variant of the six-equation single-velocity two-phase model with stiff mechanical relaxation of Saurel–Petitpas–Berry (Saurel et al., 2009) [9]. In our approach we employ phasic total energy equations instead of the phasic internal energy equations of the classical six-equation system. This alternative formulation allows us to easily design a simple numerical method that ensures consistency with mixture total energy conservation at the discrete level and agreement of the relaxed pressure at equilibrium with the correct mixture equation of state. Temperature and Gibbs free energy exchange terms are included in the equations as relaxation terms to model heat and mass transfer and hence liquid–vapor transition. The algorithm uses a high-resolution wave propagation method for the numerical approximation of the homogeneous hyperbolic portion of the model. In two dimensions a fully-discretized scheme based on a hybrid HLLC/Roe Riemann solver is employed. Thermo-chemical terms are handled numerically via a stiff relaxation solver that forces thermodynamic equilibrium at liquid–vapor interfaces under metastable conditions. We present numerical results of sample tests in one and two space dimensions that show the ability of the proposed model to describe cavitation mechanisms and evaporation wave dynamics.
Finite mixture models for the computation of isotope ratios in mixed isotopic samples
NASA Astrophysics Data System (ADS)
Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas
2013-04-01
Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models fit several linear models (regression lines) to subgroups of the data, taking the respective slope as the estimate of the isotope ratio. The finite mixture models are parameterised by: • the number of different ratios; • the number of points belonging to each ratio group; • the ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only sets a few control parameters of the algorithm, i.e. the maximum number of ratios and the minimum relative group size of points belonging to each ratio. Computation of the models can be done with statistical software. In this study Leisch and Grün's flexmix package [2] for the statistical open-source software R was applied. A code example is available in the electronic supplementary material of Kappel et al. [1]. In order to demonstrate the usefulness of finite mixture models in fields dealing with the computation of multiple isotope ratios in mixed samples, a transparent example based on simulated data is presented and problems regarding small group sizes are illustrated. In addition, the application of finite mixture models to isotope ratio data measured in uranium oxide particles is shown. The results indicate that finite mixture models perform well in computing isotope ratios relative to traditional estimation procedures and can be recommended for more objective and straightforward calculation of isotope ratios in geochemistry than is current practice. [1] S. Kappel, S. Boulyga, L. Dorta, D. Günther, B. Hattendorf, D. Koffler, G. Laaha, F. Leisch and T. Prohaska: Evaluation Strategies for Isotope Ratio Measurements of Single Particles by LA-MC-ICPMS, Analytical and Bioanalytical Chemistry, 2013, accepted for publication on 2012-12-18 (doi: 10.1007/s00216-012-6674-3). [2] B. Grün and F. Leisch: Fitting finite mixtures of generalized linear regressions in R. Computational Statistics & Data Analysis, 51(11), 5247-5252, 2007 (doi:10.1016/j.csda.2006.08.014).
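In the spirit of the flexmix-based evaluation described above, the following R sketch fits a two-component mixture of regressions through the origin to simulated two-ratio signal data. The true ratios, noise level and sample size are invented; the actual particle data and full evaluation are in [1].

```r
# Sketch: isotope ratios as slopes of a finite mixture of regressions
# through the origin, fitted with the flexmix package [2].
library(flexmix)

set.seed(42)
x <- runif(200, 0, 100)                                  # e.g. major-isotope signal
ratio <- sample(c(0.0072, 0.034), 200, replace = TRUE)   # two true ratios (invented)
y <- ratio * x + rnorm(200, sd = 0.05)                   # e.g. minor-isotope signal
d <- data.frame(x, y)

m <- flexmix(y ~ 0 + x, data = d, k = 2)   # no intercept: lines through the origin
parameters(m)      # per-component slope estimates ~ isotope ratios
table(clusters(m)) # number of data points assigned to each ratio group
```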
Dayan, Michael; Hurtado Rúa, Sandra M.; Monohan, Elizabeth; Fujimoto, Kyoko; Pandya, Sneha; LoCastro, Eve M.; Vartanian, Tim; Nguyen, Thanh D.; Raj, Ashish; Gauthier, Susan A.
2017-01-01
A novel lesion-mask free method based on a gamma mixture model was applied to myelin water fraction (MWF) maps to estimate the association between cortical thickness and myelin content, and how it differs between relapsing-remitting (RRMS) and secondary-progressive multiple sclerosis (SPMS) groups (135 and 23 patients, respectively). It was compared to an approach based on lesion masks. The gamma mixture distribution of whole brain, white matter (WM) MWF was characterized with three variables: the mode (most frequent value) m1 of the gamma component shown to relate to lesions, the mode m2 of the component shown to be associated with normal-appearing (NA) WM, and the mixing ratio (λ) between the two distributions. The lesion-mask approach relied on the mean MWF within lesions and within NAWM. A multivariate regression analysis was carried out to find the best predictors of cortical thickness for each group and for each approach. The gamma-mixture method was shown to outperform the lesion-mask approach in terms of adjusted R2, both for the RRMS and SPMS groups. The predictors of the final gamma-mixture models were found to be m1 (β = 1.56, p < 0.005), λ (β = −0.30, p < 0.0005) and age (β = −0.0031, p < 0.005) for the RRMS group (adjusted R2 = 0.16), and m2 (β = 4.72, p < 0.0005) for the SPMS group (adjusted R2 = 0.45). Further, a DICE coefficient analysis demonstrated that the lesion mask had more overlap with an ROI associated with m1 than with an ROI associated with m2 (p < 0.00001), and vice versa for the NAWM mask (p < 0.00001). These results suggest that during the relapsing phase, focal WM damage is associated with cortical thinning, yet in SPMS patients, global WM deterioration has a much stronger influence on secondary degeneration. Through these findings, we demonstrate the potential contribution of myelin loss on neuronal degeneration at different disease stages and the usefulness of our statistical reduction technique, which is not affected by the typical bias associated with approaches based on lesion masks. PMID:28603479
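One way to reproduce the histogram-decomposition step is with an off-the-shelf gamma-mixture EM. The R sketch below uses mixtools::gammamixEM on simulated MWF values and reads off the two modes (m1, m2) and the mixing ratio. The shape/scale values are invented, and the row layout and parameterization of gamma.pars should be checked against your mixtools version; this is not the authors' code.

```r
# Sketch: two-component gamma mixture on simulated MWF values, extracting
# the modes and mixing proportions used as predictors in the abstract.
library(mixtools)

set.seed(7)
mwf <- c(rgamma(1500, shape = 2, scale = 0.02),   # lesion-like component
         rgamma(3500, shape = 9, scale = 0.015))  # NAWM-like component

fit <- gammamixEM(mwf, k = 2)
alpha <- fit$gamma.pars[1, ]   # row 1: shape parameters (check parameterization)
beta  <- fit$gamma.pars[2, ]   # row 2: scale parameters (check parameterization)
modes  <- (alpha - 1) * beta   # gamma mode = (shape - 1) * scale, for shape > 1
lambda <- fit$lambda           # mixing proportions
rbind(modes, lambda)
```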
Schmiege, Sarah J; Bryan, Angela D
2016-04-01
Justice-involved adolescents engage in high levels of risky sexual behavior and substance use, and understanding potential relationships among these constructs is important for effective HIV/STI prevention. A regression mixture modeling approach was used to determine whether subgroups could be identified based on the regression of two indicators of sexual risk (condom use and frequency of intercourse) on three measures of substance use (alcohol, marijuana and hard drugs). Three classes were observed among n = 596 adolescents on probation: none of the substances predicted outcomes for approximately 18 % of the sample; alcohol and marijuana use were predictive for approximately 59 % of the sample, and marijuana use and hard drug use were predictive in approximately 23 % of the sample. Demographic, individual difference, and additional sexual and substance use risk variables were examined in relation to class membership. Findings are discussed in terms of understanding profiles of risk behavior among at-risk youth.
Additive effects in high-voltage layered-oxide cells: A statistics of mixtures approach
Sahore, Ritu; Peebles, Cameron; Abraham, Daniel P.; ...
2017-07-20
Li 1.03(Ni 0.5Mn 0.3Co 0.2) 0.97O 2 (NMC)-based coin cells containing the electrolyte additives vinylene carbonate (VC) and tris(trimethylsilyl)phosphite (TMSPi) in the range of 0-2 wt% were cycled between 3.0 and 4.4 V. The changes in capacity at rates of C/10 and C/1 and resistance at 60% state of charge were found to follow linear-with-time kinetic rate laws. Further, the C/10 capacity and resistance data were amenable to modeling by a statistics of mixtures approach. Applying physical meaning to the terms in the empirical models indicated that the interactions between the electrolyte and additives were not simple. For example, theremore » were strong, synergistic interactions between VC and TMSPi affecting C/10 capacity loss, as expected, but there were other, more subtle interactions between the electrolyte components. In conclusion, the interactions between these components controlled the C/10 capacity decline and resistance increase.« less
Modeling avian abundance from replicated counts using binomial mixture models
Kery, Marc; Royle, J. Andrew; Schmid, Hans
2005-01-01
Abundance estimation in ecology is usually accomplished by capture–recapture, removal, or distance sampling methods. These may be hard to implement at large spatial scales. In contrast, binomial mixture models enable abundance estimation without individual identification, based simply on temporally and spatially replicated counts. Here, we evaluate mixture models using data from the national breeding bird monitoring program in Switzerland, where some 250 1-km² quadrats are surveyed using the territory mapping method three times during each breeding season. We chose eight species with contrasting distribution (wide–narrow), abundance (high–low), and detectability (easy–difficult). Abundance was modeled as a random effect with a Poisson or negative binomial distribution, with mean affected by forest cover, elevation, and route length. Detectability was a logit-linear function of survey date, survey date-by-elevation, and sampling effort (time per transect unit). Resulting covariate effects and parameter estimates were consistent with expectations. Detectability per territory (for three surveys) ranged from 0.66 to 0.94 (mean 0.84) for easy species, and from 0.16 to 0.83 (mean 0.53) for difficult species, depended on survey effort for two easy and all four difficult species, and changed seasonally for three easy and three difficult species. Abundance was positively related to route length in three high-abundance and one low-abundance (one easy and three difficult) species, and increased with forest cover in five forest species, decreased for two nonforest species, and was unaffected for a generalist species. Abundance estimates under the most parsimonious mixture models were between 1.1 and 8.9 (median 1.8) times greater than estimates based on territory mapping; hence, three surveys were insufficient to detect all territories for each species. We conclude that binomial mixture models are an important new approach for estimating abundance corrected for detectability when only repeated-count data are available. Future developments envisioned include estimation of trend, occupancy, and total regional abundance.
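Binomial N-mixture models of this kind can be fitted in R with the unmarked package. A minimal sketch on simulated data mirrors the structure above (abundance depending on forest cover, detection on survey date); all covariate effects, sample sizes and names are invented for illustration.

```r
# Sketch: binomial N-mixture model for temporally replicated counts with
# the unmarked package; abundance ~ forest cover, detection ~ survey date.
library(unmarked)

set.seed(3)
nsites <- 100; nvisits <- 3
forest <- runif(nsites)                           # site covariate
N <- rpois(nsites, exp(0.5 + 1.2 * forest))       # latent abundance per site
date <- matrix(runif(nsites * nvisits), nsites)   # observation covariate
p <- plogis(0.2 - 1.0 * date)                     # per-visit detection probability
y <- matrix(rbinom(nsites * nvisits, rep(N, nvisits), p), nsites)  # counts

umf <- unmarkedFramePCount(y = y, siteCovs = data.frame(forest),
                           obsCovs = list(date = date))
fm <- pcount(~ date ~ forest, data = umf, K = 50)  # detection ~ date, abundance ~ forest
summary(fm)
```

The upper bound K on the latent abundance must comfortably exceed plausible site abundances, which is the usual practical check when fitting these models.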
Determining of migraine prognosis using latent growth mixture models.
Tasdelen, Bahar; Ozge, Aynur; Kaleagasi, Hakan; Erdogan, Semra; Mengi, Tufan
2011-04-01
This paper presents a retrospective study to classify patients into treatment subtypes according to baseline and longitudinally observed values, considering heterogeneity in migraine prognosis. In classical prospective clinical studies, participants are classified with respect to baseline status and followed within a certain time period. However, the latent growth mixture model is the most suitable method, as it accounts for population heterogeneity and is not affected by drop-outs if they are missing at random. Hence, we planned this comprehensive study to identify prognostic factors in migraine. The study is based on 10 years of computer-based follow-up data from the Mersin University Headache Outpatient Department. The developmental trajectories within subgroups were described for the severity, frequency, and duration of headache separately, and the probabilities of each subgroup were estimated by using latent growth mixture models. SAS PROC TRAJ procedures, a semiparametric and group-based mixture modeling approach, were applied to define the developmental trajectories. While the three-group model for the severity (mild, moderate, severe) and frequency (low, medium, high) of headache appeared to be appropriate, the four-group model for the duration (low, medium, high, extremely high) was more suitable. The severity of headache increased in the patients with nausea, vomiting, photophobia and phonophobia. The frequency of headache was especially related with increasing age and unilateral pain. Nausea and photophobia were also related with headache duration. Nausea, vomiting and photophobia were the most significant factors to identify developmental trajectories. The remission time was not the same for the severity, frequency, and duration of headache.
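A comparable group-based trajectory analysis can be run in R with the lcmm package, an alternative to SAS PROC TRAJ. The sketch below simulates long-format severity scores and fits one- and three-class linear trajectory models; the variable names, class count and data are invented for illustration, not taken from the study.

```r
# Sketch: latent-class trajectory (growth mixture) model with lcmm::hlme,
# initializing the 3-class model from the 1-class fit.
library(lcmm)

set.seed(11)
d <- data.frame(id = rep(1:150, each = 6), time = rep(0:5, 150))
grp <- rep(sample(1:3, 150, replace = TRUE), each = 6)          # hidden classes
d$severity <- c(2, 5, 8)[grp] + c(0.1, -0.2, 0.5)[grp] * d$time + rnorm(900)

m1 <- hlme(severity ~ time, random = ~ 1, subject = "id", ng = 1, data = d)
m3 <- hlme(severity ~ time, mixture = ~ time, random = ~ 1,
           subject = "id", ng = 3, data = d, B = m1)   # class-specific trajectories
summary(m3)
postprob(m3)   # class sizes and posterior classification quality
```

As in the paper's analysis, models with different numbers of classes would be compared (e.g. by BIC) before interpreting the trajectories.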
PLUME-MoM 1.0: A new integral model of volcanic plumes based on the method of moments
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.
2015-08-01
In this paper a new integral mathematical model for volcanic plumes, named PLUME-MoM, is presented. The model describes the steady-state dynamics of a plume in a 3-D coordinate system, accounting for continuous variability in particle size distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes ranging from a few microns up to several centimeters and more. A proper description of such a multi-particle nature is crucial when quantifying changes in grain-size distribution along the plume and, therefore, for better characterization of source conditions of ash dispersal models. The new model is based on the method of moments, which allows for a description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of parameters of the continuous size distribution of the particles. This is achieved by formulation of fundamental transport equations for the multi-particle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number, as well as of the mass distribution expressed in terms of the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on the discretization of the mixture in N discrete phases, shows that the new model allows for the same results to be obtained with a significantly lower computational cost (particularly when a large number of discrete phases is adopted). Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables the investigation of the response of four key output variables (mean and standard deviation of the grain-size distribution at the top of the plume, plume height and amount of mass lost by the plume during the ascent) to changes in the main input parameters (mean and standard deviation) characterizing the pyroclastic mixture at the base of the plume. Results show that, for the range of parameters investigated and without considering interparticle processes such as aggregation or comminution, the grain-size distribution at the top of the plume is remarkably similar to that at the base and that the plume height is only weakly affected by the parameters of the grain distribution. The adopted approach can be potentially extended to the consideration of key particle-particle effects occurring in the plume including particle aggregation and fragmentation.
Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data
Yang, Yan; Simpson, Douglas
2010-01-01
Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
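A minimal sketch of the EM idea for the simplest member of this model family, a zero-inflated Poisson without covariates or correlation; the parameter values and data below are simulated for illustration only.

```python
# EM for a zero-inflated Poisson: with probability pi an observation
# is a structural zero, otherwise it is Poisson(lam).
import numpy as np

rng = np.random.default_rng(2)
pi_true, lam_true = 0.3, 2.5
z = rng.random(5000) < pi_true
y = np.where(z, 0, rng.poisson(lam_true, 5000))

pi_hat, lam_hat = 0.5, 1.0
for _ in range(200):
    # E-step: posterior probability that each observed zero is structural
    p0 = pi_hat + (1 - pi_hat) * np.exp(-lam_hat)
    w = np.where(y == 0, pi_hat / p0, 0.0)
    # M-step: closed-form updates for the mixing weight and the rate
    pi_hat = w.mean()
    lam_hat = y.sum() / (1 - w).sum()
print(f"pi_hat={pi_hat:.3f}, lam_hat={lam_hat:.3f}")
```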
Lubrication model for evaporation of binary sessile drops
NASA Astrophysics Data System (ADS)
Williams, Adam; Sáenz, Pedro; Karapetsas, George; Matar, Omar; Sefiane, Khellil; Valluri, Prashant
2017-11-01
Evaporation of a binary-mixture sessile drop from a solid substrate is a highly dynamic and complex process, with flow driven by both thermal and solutal Marangoni stresses. Experiments on ethanol/water drops have identified chaotic regimes on both the surface and the interior of the droplet, while mixture composition has also been seen to govern drop wettability. Using a lubrication-type approach, we present a finite element model for the evaporation of an axisymmetric binary drop deposited on a heated substrate. We consider a thin drop with a moving contact line, also taking into account the commonly ignored effects of inertia, which drive interfacial instability. We derive evolution equations for the film height, temperature, and concentration fields, considering that the mixture comprises two ideally mixed volatile components with a surface tension linearly dependent on both temperature and concentration. Mixture properties such as viscosity also vary locally with concentration. We explore the parameter space to examine the resultant effects on wetting and evaporation, where we find qualitative agreement with experiments in both areas. This enables us to understand the nature of the instabilities that spontaneously emerge over the drop lifetime. EPSRC - EP/K00963X/1.
Feature maps driven no-reference image quality prediction of authentically distorted images
NASA Astrophysics Data System (ADS)
Ghadiyaram, Deepti; Bovik, Alan C.
2015-03-01
Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
Mixture modeling of multi-component data sets with application to ion-probe zircon ages
NASA Astrophysics Data System (ADS)
Sambridge, M. S.; Compston, W.
1994-12-01
A method is presented for detecting multiple components in a population of analytical observations for zircon and other ages. The procedure uses an approach known as mixture modeling to estimate the most likely ages, proportions, and number of distinct components in a given data set. Particular attention is paid to estimating errors in the estimated ages and proportions. At each stage of the procedure several alternative numerical approaches are suggested, each with its own advantages in terms of efficiency and accuracy. The methodology is tested on synthetic data sets simulating two or more mixed populations of zircon ages, for which the true ages and proportions of each population are known and compare well with the results of the new procedure. Two examples are presented of its use with sets of SHRIMP 238U-206Pb zircon ages from Palaeozoic rocks. A published data set for altered zircons from a bentonite at Meishucun, South China, previously treated as a single-component population after screening for gross alteration effects, can be resolved into two components by the new procedure and their ages, proportions, and standard errors estimated. The older component, at 530 +/- 5 Ma (2 sigma), is our best current estimate for the age of the bentonite. Mixture modeling of a data set for unaltered zircons from a tonalite elsewhere defines the magmatic 238U-206Pb age at high precision (2 sigma +/- 1.5 Ma), but one-quarter of the 41 analyses detect hidden and significantly older cores.
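The core of such a procedure can be sketched with an off-the-shelf Gaussian mixture fit, selecting the number of age components by BIC. Unlike the published method, this toy version ignores the analytical error attached to each age, and the two-component data set is synthetic.

```python
# Detect hidden components in a synthetic zircon-age population and
# report component ages and proportions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
ages = np.concatenate([rng.normal(530, 5, 70),   # e.g. a ~530 Ma component
                       rng.normal(560, 5, 30)])[:, None]

fits = {k: GaussianMixture(n_components=k, n_init=10, random_state=0).fit(ages)
        for k in (1, 2, 3)}
best_k = min(fits, key=lambda k: fits[k].bic(ages))   # BIC model selection
best = fits[best_k]
print("components:", best_k)
print("ages (Ma):", best.means_.ravel().round(1))
print("proportions:", best.weights_.round(2))
```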
Boussinesq approximation of the Cahn-Hilliard-Navier-Stokes equations.
Vorobev, Anatoliy
2010-11-01
We use the Cahn-Hilliard approach to model the slow dissolution dynamics of binary mixtures. An important peculiarity of the Cahn-Hilliard-Navier-Stokes equations is the necessity to use the full continuity equation even for a binary mixture of two incompressible liquids due to dependence of mixture density on concentration. The quasicompressibility of the governing equations brings a short time-scale (quasiacoustic) process that may not affect the slow dynamics but may significantly complicate the numerical treatment. Using the multiple-scale method we separate the physical processes occurring on different time scales and, ultimately, derive the equations with the filtered-out quasiacoustics. The derived equations represent the Boussinesq approximation of the Cahn-Hilliard-Navier-Stokes equations. This approximation can be further employed as a universal theoretical model for an analysis of slow thermodynamic and hydrodynamic evolution of the multiphase systems with strongly evolving and diffusing interfacial boundaries, i.e., for the processes involving dissolution/nucleation, evaporation/condensation, solidification/melting, polymerization, etc.
Hallquist, Michael N; Wright, Aidan G C
2014-01-01
Over the past 75 years, the study of personality and personality disorders has been informed considerably by an impressive array of psychometric instruments. Many of these tests draw on the perspective that personality features can be conceptualized in terms of latent traits that vary dimensionally across the population. A purely trait-oriented approach to personality, however, might overlook heterogeneity that is related to similarities among subgroups of people. This article describes how factor mixture modeling (FMM), which incorporates both categories and dimensions, can be used to represent person-oriented and trait-oriented variability in the latent structure of personality. We provide an overview of different forms of FMM that vary in the degree to which they emphasize trait- versus person-oriented variability. We also provide practical guidelines for applying FMM to personality data, and we illustrate model fitting and interpretation using an empirical analysis of general personality dysfunction.
Wang, Yunpeng; Thompson, Wesley K.; Schork, Andrew J.; Holland, Dominic; Chen, Chi-Hua; Bettella, Francesco; Desikan, Rahul S.; Li, Wen; Witoelar, Aree; Zuber, Verena; Devor, Anna; Nöthen, Markus M.; Rietschel, Marcella; Chen, Qiang; Werge, Thomas; Cichon, Sven; Weinberger, Daniel R.; Djurovic, Srdjan; O’Donovan, Michael; Visscher, Peter M.; Andreassen, Ole A.; Dale, Anders M.
2016-01-01
Most of the genetic architecture of schizophrenia (SCZ) has not yet been identified. Here, we apply a novel statistical algorithm called Covariate-Modulated Mixture Modeling (CM3), which incorporates auxiliary information (heterozygosity, total linkage disequilibrium, genomic annotations, pleiotropy) for each single nucleotide polymorphism (SNP) to enable more accurate estimation of replication probabilities, conditional on the observed test statistic (“z-score”) of the SNP. We use multiple logistic regression on z-scores to combine the auxiliary information into a “relative enrichment score” for each SNP. For each stratum of these relative enrichment scores, we obtain nonparametric estimates of posterior expected test statistics and replication probabilities as a function of discovery z-scores, using a resampling-based approach that repeatedly and randomly partitions meta-analysis sub-studies into training and replication samples. We fit a scale mixture of two Gaussians model to each stratum, obtaining parameter estimates that minimize the sum of squared differences between the scale-mixture model and the stratified nonparametric estimates. We apply this approach to the recent genome-wide association study (GWAS) of SCZ (n = 82,315), obtaining a good fit between the model-based and observed effect sizes and replication probabilities. We observed that SNPs with low enrichment scores replicate with a lower probability than SNPs with high enrichment scores, even when both are genome-wide significant (p < 5×10⁻⁸). There were 693 and 219 independent loci with model-based replication rates ≥80% and ≥90%, respectively. Compared to analyses not incorporating relative enrichment scores, CM3 increased the out-of-sample yield for SNPs that replicate at a given rate. This demonstrates that replication probabilities can be more accurately estimated using prior enrichment information with CM3. PMID:26808560
A discrete mathematical model applied to genetic regulation and metabolic networks.
Asenjo, A J; Ramirez, P; Rapaport, I; Aracena, J; Goles, E; Andrews, B A
2007-03-01
This paper describes the use of a discrete mathematical model to represent the basic mechanisms of regulation of the bacterium E. coli in batch fermentation. The specific phenomena studied were the changes in metabolism and genetic regulation when the bacteria use three different carbon substrates (glucose, glycerol, and acetate). The model correctly predicts the behavior of E. coli vis-à-vis substrate mixtures: in a mixture of glucose, glycerol, and acetate, it prefers glucose, then glycerol, and finally acetate. The model included 67 nodes; 28 were genes, 20 enzymes, and 19 regulators/biochemical compounds. The model represents both the genetic regulation and metabolic networks in an integrated form, which is how they function biologically; this is one of the first attempts to include both of these networks in one model. Previously, discrete mathematical models were used only to describe genetic regulation networks. The study of the network dynamics generated 8 (2³) fixed points, one for each nutrient configuration (substrate mixture) in the medium. The fixed points of the discrete model reflect the phenotypes described; gene expression and the patterns of the metabolic fluxes generated are described accurately. The activation of the gene regulation network depends basically on the presence of glucose and glycerol. The model predicts the behavior when mixed carbon sources are utilized as well as when there is no carbon source present. Fictitious jokers (Joker1, Joker2, and Repressor SdhC) had to be created to control 12 genes whose regulation mechanism is unknown, since glycerol and glucose do not act directly on the genes. The approach presented in this paper is particularly useful for investigating potential unknown gene regulation mechanisms; such a novel approach can also be used to describe other gene regulation situations, such as the comparison between non-recombinant and recombinant yeast strains producing recombinant proteins, presently under investigation in our group.
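The mechanics of finding fixed points in a synchronous discrete (Boolean) network can be shown at toy scale; the three nodes and update rules below are hypothetical stand-ins, not the published 67-node E. coli model.

```python
# Enumerate all states of a tiny Boolean network and keep those that
# map to themselves under a synchronous update: the fixed points.
from itertools import product

def step(state):
    glc, enz, rep = state                 # glucose signal, enzyme, repressor
    return (glc,                          # external input held fixed
            glc and not rep,              # enzyme induced unless repressed
            not glc)                      # repressor active without glucose

fixed_points = [s for s in product([False, True], repeat=3) if step(s) == s]
print(fixed_points)   # each fixed point corresponds to a steady phenotype
```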
Poisson Mixture Regression Models for Heart Disease Prediction.
Mufudza, Chipo; Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is concluded that heart disease prediction can be done effectively by identifying the major risks component-wise using Poisson mixture regression models.
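The computational core of such models is EM for a Poisson mixture. The sketch below fits a two-component mixture to simulated counts; a full regression version would model each component rate as exp(x'beta) and fit weighted Poisson GLMs in the M-step. All values are illustrative.

```python
# EM for a two-component Poisson mixture over counts mimicking
# low- and high-risk groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = np.concatenate([rng.poisson(1.0, 700), rng.poisson(6.0, 300)])

w, lam = np.array([0.5, 0.5]), np.array([0.5, 5.0])
for _ in range(200):
    # E-step: responsibility of each component for each count
    r = w * stats.poisson.pmf(y[:, None], lam)
    r /= r.sum(1, keepdims=True)
    # M-step: update mixing weights and component rates
    w = r.mean(0)
    lam = (r * y[:, None]).sum(0) / r.sum(0)
print("weights:", w.round(2), "rates:", lam.round(2))
```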
Generation of a mixture model ground-motion prediction equation for Northern Chile
NASA Astrophysics Data System (ADS)
Haendel, A.; Kuehn, N. M.; Scherbaum, F.
2012-12-01
In probabilistic seismic hazard analysis (PSHA), empirically derived ground motion prediction equations (GMPEs) are usually applied to estimate the ground motion at a site of interest as a function of source-, path-, and site-related predictor variables. Because GMPEs are derived from limited datasets, they are not expected to give entirely accurate estimates or to reflect the whole range of possible future ground motion, thus giving rise to epistemic uncertainty in the hazard estimates. This is especially true for regions without an indigenous GMPE, where foreign models have to be applied. The choice of appropriate GMPEs can then dominate the overall uncertainty in hazard assessments. In order to quantify this uncertainty, the set of ground motion models used in a modern PSHA has to capture (in SSHAC language) the center, body, and range of the possible ground motion at the site of interest. This was traditionally done within a logic tree framework in which existing (or only slightly modified) GMPEs occupy the branches of the tree and the branch weights describe the degree-of-belief of the analyst in their applicability. This approach invites the problem of combining GMPEs of very different quality and hence of potentially overestimating epistemic uncertainty. Some recent hazard analyses have therefore resorted to using a small number of high-quality GMPEs as backbone models, from which the full distribution of GMPEs for the logic tree (to capture the full range of possible ground motion uncertainty) is subsequently generated by scaling (in a general sense). In the present study, a new approach is proposed to determine an optimized backbone model as a weighted combination of the components of a mixture model. Each GMPE is assumed to reflect the generation mechanism (e.g., in terms of stress drop, propagation properties, etc.) for at least a fraction of the possible ground motions in the area of interest. Combining different models into a mixture model, which is learned from observed ground motion data in the region of interest, then transfers information from other regions to the region where the observations were recorded in a data-driven way. The backbone model is learned by comparing the model predictions to observations from the target region: for each observation and each model, the likelihood of the observation given a certain GMPE is calculated. Mixture weights can then be assigned using the expectation-maximization (EM) algorithm or Bayesian inference. The new method is used to generate a backbone reference model for Northern Chile, an area for which no dedicated GMPE exists. Strong-motion recordings from the target area are used to learn the backbone model from a set of 10 GMPEs developed for different subduction zones of the world. Mixture models are formed separately for interface and intraslab events. The ability of the resulting backbone models to describe ground motions in Northern Chile is then compared to the predictive performance of their constituent models.
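The weight-learning step can be sketched compactly: with the candidate GMPEs fixed, only the mixture weights are updated by EM from a matrix of per-observation likelihoods. The likelihood matrix below is fabricated; in practice each entry would come from a GMPE's predictive density for a recorded ground motion.

```python
# EM updates for mixture weights over K fixed component models, given
# L[n, k] = likelihood of observation n under model k.
import numpy as np

L = np.array([[0.8, 0.3, 0.1],
              [0.6, 0.5, 0.2],
              [0.1, 0.7, 0.3],
              [0.2, 0.6, 0.1],
              [0.7, 0.2, 0.2]])

w = np.full(3, 1 / 3)                           # start from equal weights
for _ in range(500):
    r = w * L / (w * L).sum(1, keepdims=True)   # E-step: responsibilities
    w = r.mean(0)                               # M-step: new mixture weights
print("mixture weights:", w.round(3))
```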
Bioethanol production optimization: a thermodynamic analysis.
Alvarez, Víctor H; Rivera, Elmer Ccopa; Costa, Aline C; Filho, Rubens Maciel; Wolf Maciel, Maria Regina; Aznar, Martín
2008-03-01
In this work, the phase equilibrium of binary mixtures for bioethanol production by continuous extractive process was studied. The process is composed of four interlinked units: fermentor, centrifuge, cell treatment unit, and flash vessel (ethanol-congener separation unit). A proposal for modeling the vapor-liquid equilibrium in binary mixtures found in the flash vessel has been considered. This approach uses the Predictive Soave-Redlich-Kwong equation of state, with original and modified molecular parameters. The congeners considered were acetic acid, acetaldehyde, furfural, methanol, and 1-pentanol. The results show that the introduction of new molecular parameters r and q in the UNIFAC model gives more accurate predictions for the concentration of the congener in the gas phase for binary and ternary systems.
Narukawa, Masaki; Nohara, Katsuhito
2018-04-01
This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
Isopycnic Phases and Structures in H2O/CO2/Ethoxylated Alcohol Surfactant Mixtures
NASA Technical Reports Server (NTRS)
Paulaitis, Michael E.; Zielinski, Richard G.; Kaler, Eric W.
1996-01-01
Ternary mixtures of H2O and CO2 with ethoxylated alcohol (CiEj) surfactants can form three coexisting liquid phases at conditions where two of the phases have the same density (isopycnic phases). Isopycnic phase behavior has been observed for mixtures containing the surfactants C8E5, C10E6, and C12E6, but not for those containing either C4E1 or C8E3. Pressure-temperature (PT) projections for this isopycnic three-phase equilibrium were determined for H2O/CO2/C8E5 and H2O/CO2/C10E6 mixtures at temperatures from approximately 25 to 33 C and pressures between 90 and 350 bar. As a preliminary to measuring the microstructure in isopycnic three-component mixtures, phase behavior and small-angle neutron scattering (SANS) experiments were performed on mixtures of D2O/CO2/n-hexaethylene glycol monododecyl ether (C12E6) as a function of temperature (25-31 C), pressure (63.1-90.7 bar), and CO2 composition (0-3.9 wt%). Parameters extracted from model fits of the SANS spectra indicate that, while micellar structure remains essentially unchanged, critical concentration fluctuations increase as the phase boundary and plait point are approached.
Bayesian inference for psychology, part IV: parameter estimation and Bayes factors.
Rouder, Jeffrey N; Haaf, Julia M; Vandekerckhove, Joachim
2018-02-01
In the psychological literature, there are two seemingly different approaches to inference: that from estimation of posterior intervals and that from Bayes factors. We provide an overview of each method and show that a salient difference is the choice of models. The two approaches as commonly practiced can be unified with a certain model specification, now popular in the statistics literature, called spike-and-slab priors. A spike-and-slab prior is a mixture of a null model, the spike, with an effect model, the slab. The estimate of the effect size here is a function of the Bayes factor, showing that estimation and model comparison can be unified. The salient difference is that common Bayes factor approaches provide for privileged consideration of theoretically useful parameter values, such as the value corresponding to the null hypothesis, while estimation approaches do not. Both approaches, either privileging the null or not, are useful depending on the goals of the analyst.
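For a normal mean with known sigma, both the Bayes factor and the model-averaged effect estimate under a spike-and-slab prior reduce to two marginal likelihoods of the sample mean, as the sketch below illustrates with made-up numbers.

```python
# Spike-and-slab inference for a normal mean: the spike is a point
# null at zero, the slab puts effect ~ N(0, tau^2); sigma is known.
import numpy as np
from scipy import stats

n, sigma, tau = 30, 1.0, 0.5
xbar = 0.35                                   # observed sample mean

m_spike = stats.norm.pdf(xbar, 0, sigma / np.sqrt(n))
m_slab = stats.norm.pdf(xbar, 0, np.sqrt(sigma**2 / n + tau**2))
bf10 = m_slab / m_spike                       # evidence for an effect
p_slab = bf10 / (1 + bf10)                    # with 50:50 prior odds

shrink = tau**2 / (tau**2 + sigma**2 / n)     # posterior mean factor (slab)
effect = p_slab * shrink * xbar               # model-averaged estimate
print(f"BF10={bf10:.2f}, P(slab|data)={p_slab:.2f}, effect={effect:.3f}")
```

This makes concrete the point above: the model-averaged effect estimate is an explicit function of the Bayes factor.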
Sediment unmixing using detrital geochronology
NASA Astrophysics Data System (ADS)
Sharman, Glenn R.; Johnstone, Samuel A.
2017-11-01
Sediment mixing within sediment routing systems can exert a strong influence on the preservation of provenance signals that yield insight into the effect of environmental forcing (e.g., tectonism, climate) on the Earth's surface. Here, we discuss two approaches to unmixing detrital geochronologic data in an effort to characterize complex changes in the sedimentary record. First, we summarize 'top-down' mixing, which has been successfully employed in the past to estimate the fractions of prescribed source distributions ('parents') that make up a derived sample or set of samples ('daughters'). Second, we propose the use of 'bottom-up' methods, previously used primarily for grain size distributions, to model parent distributions and the abundances of these parents within a set of daughters. We demonstrate the utility of both top-down and bottom-up approaches to unmixing detrital geochronologic data within a well-constrained sediment routing system in central California. Use of a variety of goodness-of-fit metrics in top-down modeling reveals the importance of considering the full range of allowable mixtures, rather than relying on any single best-fit mixture calculation. Bottom-up modeling of 12 daughter samples from beaches and submarine canyons yields modeled parent distributions that are remarkably similar to those expected from the geologic context of the sediment-routing system. In general, mixture modeling has the potential to supplement more widely applied approaches to comparing detrital geochronologic data by casting differences between samples as differing proportions of geologically meaningful end-member provenance categories.
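A minimal version of the top-down step is a non-negative least-squares fit of a daughter distribution against binned parent distributions; the parent and daughter spectra below are fabricated Gaussians standing in for kernel density estimates of detrital age data.

```python
# 'Top-down' unmixing: recover mixing proportions of two parent age
# distributions that best explain a daughter sample.
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

ages = np.linspace(0, 300, 400)                 # age grid (Ma)
p1 = norm.pdf(ages, 80, 10)                     # parent 1 spectrum
p2 = norm.pdf(ages, 200, 25)                    # parent 2 spectrum
daughter = 0.7 * p1 + 0.3 * p2                  # true mix is 70:30

A = np.column_stack([p1, p2])
phi, resid = nnls(A, daughter)                  # non-negative least squares
phi /= phi.sum()                                # normalize to proportions
print("mixing proportions:", phi.round(2))
```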
Model selection for integrated pest management with stochasticity.
Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel
2018-04-07
In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model. Copyright © 2017 Elsevier Ltd. All rights reserved.
Scale Reliability Evaluation with Heterogeneous Populations
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling approach for scale reliability evaluation in heterogeneous populations is discussed. The method can be used for point and interval estimation of reliability of multicomponent measuring instruments in populations representing mixtures of an unknown number of latent classes or subpopulations. The procedure is helpful also…
Glyph-based analysis of multimodal directional distributions in vector field ensembles
NASA Astrophysics Data System (ADS)
Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger
2015-04-01
Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.
Vidal, T; Gigot, C; de Vallavieille-Pope, C; Huber, L; Saint-Jean, S
2018-06-08
Growing cultivars that differ in disease resistance level together (cultivar mixtures) can reduce the propagation of diseases. Although architectural characteristics of cultivars are little considered in mixture design, they could have an effect on disease, in particular through spore dispersal by rain splash, which occurs over short distances. The objective of this work was to assess the impact of the plant height of wheat cultivars in mixtures on splash dispersal of Zymoseptoria tritici, which causes septoria tritici leaf blotch. We used a modelling approach involving an explicit description of canopy architecture and splash dispersal processes. The dispersal model computed raindrop interception by a virtual canopy as well as the production, transport and interception of splash droplets carrying inoculum. We designed 3-D virtual canopies composed of susceptible and resistant plants, according to field measurements at the flowering stage. In numerical experiments, we tested different heights of virtual cultivars making up binary mixtures to assess the influence of this architectural trait on dispersal patterns of spore-carrying droplets. Inoculum interception decreased exponentially with height relative to the main inoculum source (the lower diseased leaves of susceptible plants), and little inoculum was intercepted more than 40 cm above the inoculum source. Consequently, tall plants intercepted less inoculum than smaller ones: plants with twice the standard height intercepted 33% less inoculum than standard-height plants, and when the height of susceptible plants was doubled, inoculum interception by resistant leaves was 40% higher. This physical barrier to spore-carrying droplet trajectories reduced inoculum interception by tall susceptible plants and was modulated by plant height differences between the cultivars of a binary mixture. These results suggest that mixture effects on spore dispersal could be modulated by an adequate choice of architectural characteristics of cultivars. In particular, even small differences in plant height could reduce spore dispersal.
Nagai, Takashi; De Schamphelaere, Karel A C
2016-11-01
The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-d fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.
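The two reference models can be sketched in a few lines, assuming log-logistic single-metal concentration-response curves on free-ion activity; the EC50s, slopes, and mixture concentrations below are invented, whereas the paper estimates them from single-metal assays.

```python
# Concentration addition (CA) vs. independent action (IA) predictions
# for a hypothetical binary metal mixture.
import numpy as np
from scipy.optimize import brentq

ec50 = np.array([1.0, 3.0])        # free-ion activity units (assumed)
slope = np.array([2.0, 1.5])
c = np.array([0.6, 1.2])           # mixture concentrations (assumed)

def effect(ci, ec50i, si):         # single-metal fractional inhibition
    return 1 / (1 + (ec50i / ci) ** si)

def ecx(x, ec50i, si):             # concentration producing effect x
    return ec50i * (x / (1 - x)) ** (1 / si)

# IA: independent probabilities of affecting growth
e_ia = 1 - np.prod(1 - effect(c, ec50, slope))
# CA: effect level x at which the summed toxic units equal 1
e_ca = brentq(lambda x: (c / ecx(x, ec50, slope)).sum() - 1, 1e-6, 1 - 1e-6)
print(f"IA predicted effect: {e_ia:.2f}, CA predicted effect: {e_ca:.2f}")
```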
Continuum approaches for describing solid-gas and solid-liquid flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diamond, P.; Harvey, J.; Levine, H.
Two-phase continuum models have been used to describe the multiphase flow properties of solid-gas and solid-liquid mixtures. The approach is limited in that it requires many fitting functions and parameters to be determined empirically, and it does not provide natural explanations for some of the qualitative behavior of solid-fluid flow. In this report, we explore a more recent single-phase continuum model proposed by Jenkins and Savage to describe granular flow. Jenkins and McTigue have proposed a modified model to describe the flow of dense suspensions, and hence many of our results can be straightforwardly extended to this flow regime as well. The solid-fluid mixture is treated as a homogeneous, compressible fluid in which the particle fluctuations about the mean flow are described in terms of an effective temperature. The particle collisions are treated as inelastic. After an introduction in which we briefly comment on the present status of the field, we describe the details of the single-phase continuum model and analyze the microscopic and macroscopic flow conditions required for the approach to be valid. We then derive numerous qualitative predictions which can be empirically verified in small-scale experiments: The flow profiles are computed for simple boundary conditions, plane Couette flow and channel flow. Segregation effects when there are two (or more) particle sizes are considered. The acoustic dispersion relation is derived and shown to predict that granular flow is supersonic. We point out that the analysis of flow instabilities is complicated by the finite compressibility of the solid-fluid mixture. For example, the large compressibility leads to interchange (Rayleigh-Taylor instabilities) in addition to the usual angular momentum interchange in standard (cylindrical) Couette flow. We conclude by describing some of the advantages and limitations of experimental techniques that might be used to test predictions for solid-fluid flow. 19 refs.
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
Theoretical Thermodynamics of Mixtures at High Pressures
NASA Technical Reports Server (NTRS)
Hubbard, W. B.
1985-01-01
The development of an understanding of the chemistry of mixtures of metallic hydrogen and abundant, higher-Z material such as oxygen, carbon, etc., is important for understanding fundamental processes of energy release, differentiation, and development of atmospheric abundances in the Jovian planets. It provides a significant theoretical base for the interpretation of atmospheric elemental abundances to be provided by atmospheric entry probes in coming years. Significant differences are found when non-perturbative approaches such as Thomas-Fermi-Dirac (TFD) theory are used. Mapping of the phase diagrams of such binary mixtures in the pressure range from approx. 10 Mbar to approx. 1000 Mbar, using results from three-dimensional TFD calculations, is undertaken. Derivation of a general and flexible thermodynamic model for such binary mixtures in the relevant pressure range was facilitated by the following breakthrough: there exists an accurate and fairly simple thermodynamic representation of a liquid two-component plasma (TCP) in which the Helmholtz free energy is represented as a suitable linear combination of terms dependent only on density and terms which depend only on the ion coupling parameter. It is found that the crystal energies of mixtures of H-He, H-C, and H-O can be satisfactorily reproduced by the same type of model, except that an effective, density-dependent ionic charge must be used in place of the actual total ionic charge.
Campbell, Kieran R; Yau, Christopher
2017-03-15
Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference, based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of genes driving the bifurcation process. We additionally propose an Empirical-Bayes-like extension that deals with the high levels of zero-inflation in single-cell RNA-seq data and quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses.
Efficient implicit LES method for the simulation of turbulent cavitating flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Schmidt, Steffen J.; Hickel, Stefan
2016-07-01
We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test-cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.
Johnson, B. Thomas
1989-01-01
Traditional single-species toxicity tests and multiple-component laboratory-scaled microcosm assays were combined to assess the toxicological hazard of diesel oil, a model complex mixture, to a model aquatic environment. The immediate impact of diesel oil dosed on a freshwater community was studied in a model pond microcosm over 14 days: a 7-day dosage and a 7-day recovery period. A multicomponent laboratory microcosm was designed to monitor the biological effects of diesel oil (1.0 mg litre⁻¹) on four components: water, sediment (soil + microbiota), plants (aquatic macrophytes and algae), and animals (zooplanktonic and zoobenthic invertebrates). To determine the sensitivity of each part of the community to diesel oil contamination and how this model community recovered when the oil dissipated, limnological, toxicological, and microbiological variables were considered. Our model revealed these significant occurrences during the spill period: first, a community production and respiration perturbation, characterized in the water column by a decrease in dissolved oxygen and redox potential and a concomitant increase in alkalinity and conductivity; second, marked changes in the microbiota of sediments that included bacterial heterotrophic dominance and a high heterotrophic index (0.6), increased bacterial productivity, and marked increases in the numbers of saprophytic bacteria (10×) and bacterial oil degraders (1000×); and third, column water that was acutely toxic (100% mortality) to two model taxa: Selenastrum capricornutum and Daphnia magna. Following the simulated clean-up procedure to remove the oil slick, the recovery period of this freshwater microcosm was characterized by a return to control values. This experimental design emphasized monitoring toxicological responses in an aquatic microcosm; hence, we proposed the term ‘toxicosm’ to describe this approach to aquatic toxicological hazard evaluation. The value of the toxicosm as a toxicological tool for screening aquatic contaminants was demonstrated using diesel oil as a model complex mixture.
NASA Astrophysics Data System (ADS)
Haxhimali, Tomorr; Rudd, Robert; Cabot, William; Graziani, Frank
2015-11-01
We present molecular dynamics (MD) calculations of shear viscosity for asymmetric mixed plasmas under thermodynamic conditions relevant to astrophysical and Inertial Confinement Fusion plasmas. Specifically, we consider mixtures of deuterium and argon at temperatures of 100-500 eV and a number density of 10²⁵ ions/cc. The motion of 30,000-120,000 ions is simulated, with the ions interacting via the Yukawa (screened Coulomb) potential; the electric field of the electrons is included in this effective interaction. Shear viscosity is calculated using the Green-Kubo approach, as the integral of the shear stress autocorrelation function, a quantity computed in the equilibrium MD simulations. We study different mixtures with an increasing fraction of the minority high-Z element (Ar) in the D-Ar plasma mixture. In the more weakly coupled plasmas, at 500 eV and low Ar fractions, results from MD compare very well with Chapman-Enskog kinetic results. We introduce a model that interpolates between a screened-plasma kinetic theory at weak coupling and the Murillo Yukawa viscosity model at higher coupling. This hybrid kinetics-MD viscosity model agrees well with the MD results over the conditions simulated. This work was performed under the auspices of the US Dept. of Energy by Lawrence Livermore National Security, LLC under Contract DE-AC52-07NA27344.
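The Green-Kubo step reduces to integrating the stress autocorrelation function; the sketch below uses a synthetic exponentially decaying autocorrelation in arbitrary units in place of the quantity measured in the MD runs.

```python
# Green-Kubo shear viscosity: eta = V/(kB*T) * integral of
# <P_xy(0) P_xy(t)> dt, shown here with a stand-in ACF.
import numpy as np

dt, n = 0.01, 5000
t = np.arange(n) * dt
acf = 0.2 * np.exp(-t / 0.5)               # stand-in <P_xy(0)P_xy(t)>
V_over_kBT = 1.0                           # volume / (kB * temperature)

eta_t = V_over_kBT * np.cumsum(acf) * dt   # running Green-Kubo integral
print(f"viscosity estimate (plateau value): {eta_t[-1]:.4f}")
# analytic check for this ACF: amplitude * decay time = 0.2 * 0.5 = 0.1
```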
Solubility enhancement of miconazole nitrate: binary and ternary mixture approach.
Rai, Vineet Kumar; Dwivedi, Harinath; Yadav, Narayan Prasad; Chanotiya, Chandan Singh; Saraf, Shubhini A
2014-08-01
Enhancement of the aqueous solubility of very slightly soluble miconazole nitrate (MN) is required to widen its application from topical formulations to oral/mucoadhesive formulations. The aim of the present investigation was to enhance the aqueous solubility of MN using binary and ternary mixture approaches. Binary mixtures such as solvent deposition, inclusion complexation, and solid dispersion were adopted to enhance solubility using different polymers, namely lactose, beta-cyclodextrin (β-CD), and polyethylene glycol 6000 (PEG 6000), respectively. The batches of binary mixtures with the highest solubility enhancement potential were further combined to form a ternary mixture by a simple kneading method. Drug-polymer interaction and mixture morphology were studied using Fourier transform infrared spectroscopy and scanning electron microscopy, respectively, along with saturation solubility studies and drug release. An excellent solubility enhancement of MN, up to 72-fold for the binary mixtures and 316-fold for the ternary mixture, was observed. Up to 99.5% of the drug was released in 2 h from the mixtures of MN and polymers. Results revealed that solubility enhancement by binary mixtures is achieved through surface modification and increased wettability of MN. The tremendous increase in the solubility of MN by the ternary mixture could possibly be due to the blending of the water-soluble polymers lactose and PEG 6000 with β-CD, which was found to enhance the solubilizing nature of β-CD. Owing to the excellent potential of ternary mixtures in enhancing MN solubility from 110.4 μg/ml to 57640.0 μg/ml, the ternary mixture approach could prove promising in the development of oral/mucoadhesive formulations.
NASA Astrophysics Data System (ADS)
Colucci, Simone; de'Michieli Vitturi, Mattia; Landi, Patrizia
2016-04-01
It is well known that nucleation and growth of crystals play a fundamental role in controlling magma ascent dynamics and eruptive behavior. Size and shape distributions of crystal populations can affect mixture viscosity, potentially causing transitions between effusive and explosive eruptions. Furthermore, volcanic samples are usually characterized in terms of their Crystal Size Distribution (CSD), which provides valuable insight into the physical processes that led to the observed distributions. For example, a large average size can be representative of slow magma ascent, and a bimodal CSD may indicate two events of nucleation, determined by two degassing events within the conduit. The Method of Moments (MoM), well established in the field of chemical engineering, represents a mesoscopic modeling approach that rigorously tracks polydispersity by considering the evolution in time and space of integral parameters characterizing the distribution, the moments, by solving their transport differential-integral equations. One important advantage of this approach is that the moments of the distribution correspond to quantities that have meaningful physical interpretations and are directly measurable in natural eruptive products, as well as in experimental samples. For example, when the CSD is defined by the number of particles of size D per unit volume of the magmatic mixture, the zeroth moment gives the total number of crystals, the third moment gives the crystal volume fraction in the magmatic mixture, and ratios between successive moments provide different ways to evaluate average crystal length. Tracking these quantities, instead of volume fraction only, will allow the use of more accurate viscosity models in numerical codes for magma ascent. Here we adopted, for the first time, a quadrature-based method of moments to track the temporal evolution of the CSD in a magmatic mixture, and we verified and calibrated the model against experimental data. We also show how the equations and the tool developed can be integrated into a magma ascent numerical model, with application to eruptive events that occurred at Stromboli volcano (Italy).
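The moment quantities named above are straightforward to compute once a CSD is specified; the sketch below assumes a lognormal number density purely for illustration.

```python
# Moments m_k of a crystal size distribution n(D) (number per unit
# volume per unit size): m0 = total number density, (pi/6)*m3 = volume
# fraction for spherical crystals, m1/m0 and m4/m3 = mean sizes.
import numpy as np
from scipy.integrate import quad
from scipy.stats import lognorm

n_tot, shape, scale = 1e12, 0.5, 50e-6        # crystals/m^3, lognormal params
n = lambda D: n_tot * lognorm.pdf(D, shape, scale=scale)

m = [quad(lambda D, k=k: D**k * n(D), 0, 1e-3)[0] for k in range(5)]
print(f"number density m0 = {m[0]:.3e} /m^3")
print(f"volume fraction (pi/6)*m3 = {np.pi / 6 * m[3]:.3e}")
print(f"mean sizes m1/m0 = {m[1]/m[0]:.2e} m, m4/m3 = {m[4]/m[3]:.2e} m")
```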
Using Big Data Analytics to Address Mixtures Exposure
The assessment of chemical mixtures is a complex issue for regulators and health scientists. We propose that assessing chemical co-occurrence patterns and prevalence rates is a relatively simple yet powerful approach in characterizing environmental mixtures and mixtures exposure...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnham, A K; Weese, R K; Adrzejewski, W J
Accelerated aging tests play an important role in assessing the lifetime of manufactured products. There are two basic approaches to lifetime qualification. One tests a product to failure over a range of accelerated conditions to calibrate a model, which is then used to calculate the failure time under conditions of use. A second approach is to test a component to a lifetime-equivalent dose (thermal or radiation) to see if it still functions to specification. Both methods have their advantages and limitations. A disadvantage of the second method is that one does not know how close one is to incipient failure. This limitation can be mitigated by testing to some higher level of dose as a safety margin, but having a predictive model of failure via the first approach provides an additional measure of confidence. Even so, proper calibration of a failure model is non-trivial, and the extrapolated failure predictions are only as good as the model and the quality of the calibration. This paper outlines results for predicting the potential failure point of a system involving a mixture of two energetic materials, HMX (the nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) and CP (2-(5-cyanotetrazolato) pentaammine cobalt (III) perchlorate). Global chemical kinetic models for the two materials individually and as a mixture are developed and calibrated from a variety of experiments. These include traditional thermal analysis experiments run on time scales from hours to a couple of days, detonator aging experiments with exposures up to 50 months, and sealed-tube aging experiments for up to 5 years. Decomposition kinetics are determined for HMX and CP separately and together. For high levels of thermal stress, the two materials decompose faster as a mixture than individually. This effect is observed both in high-temperature thermal analysis experiments and in long-term thermal aging experiments. An Arrhenius plot of the 10% level of HMX decomposition by itself from a diverse set of experiments is linear from 120 to 260 C, with an apparent activation energy of 165 kJ/mol. Similar but less extensive thermal analysis data for the mixture suggest a slightly lower activation energy for the mixture, and an analogous extrapolation is consistent with the amount of gas observed in the long-term detonator aging experiments, which is about 30 times greater than expected from HMX by itself after 50 months at 100 C. Even with this acceleration, however, it would take ~10,000 years to achieve 10% decomposition at ~30 C. Correspondingly, negligible decomposition is predicted by this kinetic model for a few decades of aging at temperatures slightly above ambient. This prediction is consistent with additional sealed-tube aging experiments at 100-120 C, which are estimated to have an effective thermal dose greater than that from decades of exposure to temperatures slightly above ambient.
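The extrapolation logic is plain Arrhenius scaling of the time to a fixed decomposition level; in the sketch below the activation energy is the 165 kJ/mol reported above, while the anchor point (10% decomposition in 1 day at 120 C) is a hypothetical value chosen only so the numbers land near the abstract's ~10,000-year figure.

```python
# Arrhenius extrapolation of time-to-10%-decomposition:
# t10(T2) = t10(T1) * exp(Ea/R * (1/T2 - 1/T1)).
import numpy as np

R, Ea = 8.314, 165e3                  # J/(mol*K), J/mol (reported Ea)
T1, t10_T1 = 120 + 273.15, 1.0        # anchor: 1 day at 120 C (assumed)

for T2_C in (100, 60, 30):
    T2 = T2_C + 273.15
    t10 = t10_T1 * np.exp(Ea / R * (1 / T2 - 1 / T1))
    print(f"{T2_C:>4} C: t10 ~ {t10:.3g} days ({t10 / 365.25:.3g} years)")
```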
Mixture toxicology in the 21st century: Pathway-based concepts and tools
The past decade has witnessed notable evolution of approaches focused on predicting chemical hazards and risks in the absence of empirical data from resource-intensive in vivo toxicity tests. In silico models, in vitro high-throughput toxicity assays, and short-term in vivo tests...
Kim, Seongho; Carruthers, Nicholas; Lee, Joohyoung; Chinni, Sreenivasa; Stemmer, Paul
2016-12-01
Stable isotope labeling by amino acids in cell culture (SILAC) is a practical and powerful approach for quantitative proteomic analysis. A key advantage of SILAC is the ability to detect the isotopically labeled peptides simultaneously in a single instrument run, guaranteeing relative quantitation for a large number of peptides without introducing any variation caused by separate experiments. However, only a few approaches are available for assessing protein ratios, and none of the existing algorithms pays considerable attention to proteins having only one peptide hit. We introduce new quantitative approaches to SILAC protein-level summaries using classification-based methodologies, such as Gaussian mixture models with EM algorithms and their Bayesian counterparts, as well as K-means clustering. In addition, a new approach is developed using a Gaussian mixture model and a stochastic, metaheuristic global optimization algorithm, particle swarm optimization (PSO), to avoid premature convergence or becoming stuck in a local optimum. Our simulation studies show that the newly developed PSO-based method performs the best among the alternatives in terms of F1 score, and the proposed methods further demonstrate the ability to detect potential markers in real SILAC experimental data. The developed approach is applicable regardless of how many peptide hits a protein has, rescuing many proteins that would otherwise be discarded. Furthermore, no additional correction for multiple comparisons is necessary for the developed methods, enabling direct interpretation of the analysis outcomes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
Chemical characterization of complex mixtures and assessment of stability over time of the characterized chemicals is crucial both to characterize exposure and to use data from one mixture as a surrogate for other similar mixtures. The chemical composition of test mixtures can va...
1996-01-01
We developed and evaluated a total toxic units modeling approach for predicting mean toxicity as measured in laboratory tests for Great Lakes sediments containing complex mixtures of environmental contaminants (e.g., polychlorinated biphenyls, polycyclic aromatic hydrocarbons, pesticides, chlorinated dioxins, and metals). The approach incorporates equilibrium partitioning and organic carbon control of bioavailability for organic contaminants and acid volatile sulfide (AVS) control for metals, and includes toxic equivalency for planar organic chemicals. A toxic unit is defined as the ratio of the estimated pore-water concentration of a contaminant to the chronic toxicity of that contaminant, as estimated by U.S. Environmental Protection Agency Ambient Water Quality Criteria (AWQC). The toxic unit models we developed assume complete additivity of contaminant effects, are completely mechanistic in form, and were evaluated without any a posteriori modification of either the models or the data from which the models were developed and against which they were tested. A linear relationship between mean toxicity and total toxic units, which included toxicity attributable to both iron and un-ionized ammonia, accounted for about 88% of the observed variability in mean toxicity; a quadratic relationship accounted for almost 94%. Exclusion of either the bioavailability components (i.e., equilibrium partitioning control of organic contaminants and AVS control of metals) or iron from the model substantially decreased its ability to predict mean toxicity. A model based solely on un-ionized ammonia accounted for about 47% of the variability in mean toxicity. We found the toxic unit approach to be a viable method for assessing and ranking the relative potential toxicity of contaminated sediments.
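The toxic unit arithmetic described above reduces to a few lines; the concentrations and criteria in this sketch are invented for illustration and are not from the study.

```python
# Sketch of the total-toxic-units calculation: each toxic unit is the
# estimated pore-water concentration divided by the chronic water-quality
# criterion, and units are summed assuming additivity. All numbers below
# are hypothetical.
pore_water = {"Cu": 4.0, "Zn": 30.0, "PAH_total": 12.0}          # ug/L, hypothetical
chronic_criterion = {"Cu": 9.0, "Zn": 120.0, "PAH_total": 25.0}  # ug/L, hypothetical

toxic_units = {c: pore_water[c] / chronic_criterion[c] for c in pore_water}
total_tu = sum(toxic_units.values())
print(toxic_units, "total =", round(total_tu, 3))
```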
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 that allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols, such as multiple-observer sampling, removal sampling, and capture-recapture, produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss the analysis of stratified versions of classical models such as Mb and Mh, as well as other classes of models that can only be described within the multinomial N-mixture framework.
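As an illustration of how a sampling protocol induces a multinomial observation model, the following sketch marginalizes the latent abundance for removal sampling with a constant per-pass detection probability; all parameter values are invented.

```python
# Sketch: removal sampling induces multinomial cell probabilities
# pi_j = p * (1 - p)**(j - 1) for first capture on pass j, with
# never-captured individuals in the residual cell. Marginalizing the
# latent abundance N ~ Poisson(lambda) gives the integrated likelihood.
import numpy as np
from scipy.stats import poisson
from scipy.special import gammaln

def removal_log_likelihood(y, lam, p, n_max=300):
    """y: counts removed on each pass; latent N summed out up to n_max."""
    J = len(y)
    pi = np.array([p * (1 - p) ** j for j in range(J)])
    p0 = 1.0 - pi.sum()                    # probability of never being caught
    caught = sum(y)
    n = np.arange(caught, n_max + 1)
    # multinomial coefficient: N! / (y_1! ... y_J! (N - caught)!)
    log_coef = gammaln(n + 1) - sum(gammaln(np.array(y) + 1)) - gammaln(n - caught + 1)
    log_cells = np.dot(y, np.log(pi)) + (n - caught) * np.log(p0)
    terms = poisson.logpmf(n, lam) + log_coef + log_cells
    m = terms.max()
    return m + np.log(np.exp(terms - m).sum())

print(removal_log_likelihood([30, 18, 10], lam=100.0, p=0.3))
```

The extra structure in the cell probabilities is what carries the direct information about the observation process that the chapter credits for the gain in precision.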
Knežević, Varja; Tunić, Tanja; Gajić, Pero; Marjan, Patricija; Savić, Danko; Tenji, Dina; Teodorović, Ivana
2016-11-01
Recovery after exposure to the herbicides atrazine, isoproturon, and trifluralin, and to their binary and ternary mixtures, was studied under laboratory conditions using a slightly adapted standard protocol for Lemna minor. The objectives of the present study were (1) to compare empirical to predicted toxicity of selected herbicide mixtures; (2) to assess L. minor recovery potential after exposure to selected individual herbicides and their mixtures; and (3) to suggest an appropriate recovery potential assessment approach and endpoint in a modified laboratory growth inhibition test. The deviation of empirical from predicted toxicity was highest in binary mixtures of dissimilarly acting herbicides. The concentration addition model slightly underestimated mixture effects, indicating potential synergistic interactions between photosynthetic inhibitors (atrazine and isoproturon) and a cell mitosis inhibitor (trifluralin). Recovery after exposure to the binary mixture of atrazine and isoproturon was fast and concentration-independent: no significant differences between relative growth rates (RGRs) in any of the mixtures (IC10Mix, IC25Mix, and IC50Mix) versus the control level were recorded in the last interval of the recovery phase. The recovery of the plants exposed to binary and ternary mixtures of dissimilarly acting herbicides was strictly concentration-dependent. Only plants exposed to IC10Mix, regardless of the herbicides, recovered RGRs close to the control level in the last interval of the recovery phase. The inhibition of the RGRs in the last interval of the recovery phase compared with the control level is a proposed endpoint that could inform on reversibility of the effects and indicate possible mixture effects on plant population recovery potential.
Advances in Mixed Signal Processing for Regional and Teleseismic Arrays
2006-08-15
Figure 1: Mixture of signals from two earthquakes, from south of Africa and the Philippines, observed at the USAEDS long-period seismic array in Korea. Correct... window where the detector will miss valid signals. ... Approaches to detecting signals on arrays all focus on the basic model that expresses the observed... possible use in detecting infrasound signals. The approach is based on orthogonality properties of the eigenvectors of the spectral matrix under a...
Computational Modeling of Supercritical and Transcritical Flows
2017-01-09
Acentric factor. I. Introduction: Liquid rocket and gas turbine engines operate at high pressures. For gas turbines, the combustor pressure can be 60-100... In such an approach, the liquid-gas interface is tracked [4]. We note that an overwhelming majority of the computational studies have similarly focused on purely... A standard approach for supercritical flows is to treat the multicomponent mixture of species as a dense fluid using a real-gas equation of state such...
NASA Astrophysics Data System (ADS)
Filatov, I. E.; Uvarin, V. V.; Kuznetsov, D. L.
2018-05-01
The efficiency of removal of volatile organic impurities from air by a pulsed corona discharge is investigated using model mixtures. Based on the method of competing reactions, an approach to estimating the qualitative and quantitative parameters of the employed electrophysical technique is proposed. The concept of a "toluene coefficient" characterizing the relative reactivity of a component compared to toluene is introduced. It is proposed that the energy efficiency of the electrophysical method be estimated using the concept of the diversified yield of the removal process. Such an approach makes it possible to substantially speed up the determination of the energy parameters of impurity removal and can also serve as a criterion for estimating the effectiveness of various methods in which a nonequilibrium plasma is used to clean air of volatile impurities.
Diversifying mechanisms in the on-farm evolution of crop mixtures.
Thomas, Mathieu; Thépot, Stéphanie; Galic, Nathalie; Jouanne-Pin, Sophie; Remoué, Carine; Goldringer, Isabelle
2015-06-01
While modern agriculture relies on genetic homogeneity, diversifying practices associated with seed exchange and seed recycling may allow crops to adapt to their environment. This socio-genetic model is an original experimental evolution design referred to as on-farm dynamic management of crop diversity. Investigating such a model can help in understanding how evolutionary mechanisms shape crop diversity under diverse agro-environments. We studied a French farmer-led initiative where a mixture of four wheat landraces called 'Mélange de Touselles' (MDT) was created and circulated within a farmers' network. The 15 sampled MDT subpopulations were simultaneously subjected to diverse environments (e.g. altitude, rainfall) and diverse farmers' practices (e.g. field size, sowing and harvesting date). Twenty-one space-time samples of 80 individuals each were genotyped using 17 microsatellite markers and characterized for their heading date in a 'common-garden' experiment. Gene polymorphism was studied using four markers located in earliness genes. An original network-based approach was developed to depict the particular and complex genetic structure of the landraces composing the mixture. Rapid differentiation among populations within the mixture was detected, larger at the phenotypic and gene levels than at the neutral genetic level, indicating potential divergent selection. We identified two interacting selection processes, variation in the mixture component frequencies and evolution of within-variety diversity, that shaped the standing variability available within the mixture. These results confirmed that diversifying practices and environments maintain genetic diversity and allow for crop evolution in the context of global change. Including concrete measurements of farmers' practices is critical to disentangle crop evolution processes. © 2015 John Wiley & Sons Ltd.
Kong, Fanhui; Chen, Yeh-Fong
2016-07-01
By examining the outcome trajectories of patients who dropped out for different reasons in schizophrenia trials, we note that although patients recruited under the same protocol have comparable baseline characteristics, they may respond differently even to the same treatment. Some patients show consistent improvement while others have only temporary relief. This creates different patient subpopulations characterized by their response and dropout patterns. At the same time, those who continue to improve seem to be more likely to complete the study, while those who experience only temporary relief have a higher chance of dropping out. This phenomenon appears to be quite general in schizophrenia clinical trials. Such simultaneous inhomogeneity in both patient response and dropout patterns creates a scenario of missing not at random and therefore results in biases when statistical methods based on the missing-at-random assumption are used to test treatment efficacy. In this paper, we propose to use the latent class growth mixture model, a special case of the latent mixture model, to conduct statistical analyses in this situation. This model allows us to take the inhomogeneity among subpopulations into consideration to make more accurate inferences about the treatment effect at any visit time. Compared with conventional statistical methods such as the mixed-effects model for repeated measures, we demonstrate through simulations that the proposed latent mixture model approach gives better control of the Type I error rate in testing the treatment effect. Published 2016. This article is a U.S. Government work and is in the public domain in the USA. Copyright © 2016 John Wiley & Sons, Ltd.
A New LES/PDF Method for Computational Modeling of Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Turkeri, Hasret; Muradoglu, Metin; Pope, Stephen B.
2013-11-01
A new LES/PDF method is developed for computational modeling of turbulent reacting flows. The open source package, OpenFOAM, is adopted as the LES solver and combined with the particle-based Monte Carlo method to solve the LES/PDF model equations. The dynamic Smagorinsky model is employed to account for the subgrid-scale motions. The LES solver is first validated for the Sandia Flame D using a steady flamelet method in which the chemical compositions, density and temperature fields are parameterized by the mean mixture fraction and its variance. In this approach, the modeled transport equations for the mean mixture fraction and the square of the mixture fraction are solved and the variance is then computed from its definition. The results are found to be in good agreement with the experimental data. Then the LES solver is combined with the particle-based Monte Carlo algorithm to form a complete solver for the LES/PDF model equations. The in situ adaptive tabulation (ISAT) algorithm is incorporated into the LES/PDF method for efficient implementation of detailed chemical kinetics. The LES/PDF method is also applied to the Sandia Flame D using the GRI-Mech 3.0 chemical mechanism and the results are compared with the experimental data and the earlier PDF simulations. The Scientific and Technical Research Council of Turkey (TUBITAK), Grant No. 111M067.
Reschke, Thomas; Zherikova, Kseniya V; Verevkin, Sergey P; Held, Christoph
2016-03-01
Benzoic acid is a model compound for drug substances in pharmaceutical research. Process design requires information about the thermodynamic phase behavior of benzoic acid and its mixtures with water and organic solvents. This work addresses phase equilibria that determine stability and solubility. Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) was used to model the phase behavior of aqueous and organic solutions containing benzoic acid and chlorobenzoic acids. Absolute vapor pressures of benzoic acid and 2-, 3-, and 4-chlorobenzoic acid from the literature and from our own measurements were used to determine pure-component PC-SAFT parameters. Two binary interaction parameters between water and benzoic acid were used to model vapor-liquid and liquid-liquid equilibria of water/benzoic acid mixtures between 280 and 413 K. The PC-SAFT parameters and one binary interaction parameter were used to model the aqueous solubility of the chlorobenzoic acids. Additionally, the solubility of benzoic acid in organic solvents was predicted without using binary parameters. All results showed that the pure-component parameters for benzoic acid and the chlorobenzoic acids allowed satisfactory modeling of the phase equilibria. The modeling approach established in this work is a further step toward screening solubility and predicting the whole phase region of mixtures containing pharmaceuticals. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Sresht, Vishnu; Lewandowski, Eric P; Blankschtein, Daniel; Jusufi, Arben
2017-08-22
A molecular modeling approach is presented with a focus on quantitative predictions of the surface tension of aqueous surfactant solutions. The approach combines classical Molecular Dynamics (MD) simulations with a molecular-thermodynamic theory (MTT) [Y. J. Nikas, S. Puvvada, D. Blankschtein, Langmuir 1992, 8, 2680]. The MD component is used to calculate thermodynamic and molecular parameters that are needed in the MTT model to determine the surface tension isotherm. The MD/MTT approach provides the important link between the surfactant bulk concentration, the experimental control parameter, and the surfactant surface concentration, the MD control parameter. We demonstrate the capability of the MD/MTT modeling approach on nonionic alkyl polyethylene glycol surfactants at the air-water interface and observe reasonable agreement of the predicted surface tensions and the experimental surface tension data over a wide range of surfactant concentrations below the critical micelle concentration. Our modeling approach can be extended to ionic surfactants and their mixtures with both ionic and nonionic surfactants at liquid-liquid interfaces.
NASA Astrophysics Data System (ADS)
Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal
2017-04-01
The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signature of a biotracer as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on linear mixing assumptions for the CSSI signatures of sources to the sediment without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of FA concentrations in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land-use sources. Soil samples from the land-use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using MixSIAR, a Bayesian isotopic mixing model, both with and without concentration dependence. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions based on the aggregated FA concentrations of the sources biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the effect of source FA concentration on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable even after equilibrium. Inclusion of the FA concentrations of the sources in the IMM formulation should therefore be standard procedure for accurate estimation of source contributions. The post-model correction approach that dominates CSSI fingerprinting causes bias, especially if the FA concentrations of the sources differ substantially.
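In generic notation (an illustration, not MixSIAR's internal parameterization), the concentration-dependent mixing assumption weights each source's isotopic signature by both its soil proportion and its tracer concentration:

$$\delta_{\mathrm{mix}} = \frac{\sum_i p_i \, C_i \, \delta_i}{\sum_i p_i \, C_i},$$

where $p_i$ is the proportional contribution of source $i$, $C_i$ its FA concentration, and $\delta_i$ its δ13C value. Setting all $C_i$ equal recovers the concentration-independent linear mixing model, which is why ignoring concentration differences biases the apportionment when sources differ strongly in FA content.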
Statistical processing of large image sequences.
Khellah, F; Fieguth, P; Murray, M J; Allen, M
2005-01-01
The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
A Novel Approach to Turbulence Stimulation for Ship-Model Testing
2010-05-11
surface. These holes were either part of the manufacturing of the plate or unused holes drilled for probes. To fill these holes, an epoxy-based... mixture was used, which was applied over a hole and the surrounding surfaces. Once the epoxy had cured, the model was wet sanded with several different... The Suboff model is a generic submarine form developed by using two separate parabolic formulae for the bow and stern sections (Stettler, 2009). The 2-D...
Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin
2018-01-01
The joint toxicity of chemical mixtures has emerged as a popular topic, particularly regarding the additive and potentially synergistic actions of environmental mixtures. We investigated the 24-h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and the 96-h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models, which make different assumptions about the mode of toxic action in toxicodynamic processes, through single and binary metal mixture tests. Results showed that the CA and IA models had varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the mode of toxic action may depend on the combinations and concentrations of the tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, no interactions were observed for the Cd-Pb mixtures, and slight antagonistic interactions were observed for the Cu-Zn mixtures. These results illustrate that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
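A minimal numerical sketch of the two reference models follows; the log-logistic dose-response parameters are invented, not the paper's fitted values, and expressing the CA mixture in Cu-equivalents on a shared curve is a simplification.

```python
# Sketch of the two reference models for a binary metal mixture.
# Dose-response curves and parameters are illustrative, not fitted values.
import numpy as np

def effect(c, ec50, slope):
    """Log-logistic effect (fraction dying) of a single metal."""
    return 1.0 / (1.0 + (ec50 / np.maximum(c, 1e-12)) ** slope)

ec50 = {"Cu": 50.0, "Cd": 200.0}   # ug/L, hypothetical
slope = {"Cu": 2.0, "Cd": 1.5}
conc = {"Cu": 30.0, "Cd": 120.0}   # tested mixture, hypothetical

# Independent action: effects combine like independent risks.
e_single = {m: effect(conc[m], ec50[m], slope[m]) for m in conc}
e_ia = 1.0 - np.prod([1.0 - e for e in e_single.values()])

# Concentration addition: sum toxic units, then read the effect off a
# shared curve by expressing the mixture in Cu-equivalents (simplified).
tu = sum(conc[m] / ec50[m] for m in conc)
e_ca = effect(tu * ec50["Cu"], ec50["Cu"], slope["Cu"])

print(f"IA predicted effect: {e_ia:.2f}, CA predicted effect: {e_ca:.2f}")
```

Deviations of observed mortality above or below both predictions are what the abstract labels synergistic or antagonistic interactions.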
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie
2013-01-01
Humans are concomitantly exposed to numerous chemicals, and an infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment, the mathematical prediction of mixture effects using knowledge of single chemicals is therefore desirable. We investigated the pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured the effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single-chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency-adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose-response curve. Regarding effects on progesterone and estradiol, some chemicals had stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that no single chemical alone was responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency-adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906
Mixtures and their risk assessment in toxicology.
Mumtaz, Moiz M; Hansen, Hugh; Pohl, Hana R
2011-01-01
For communities generally, and for persons living in the vicinity of waste sites specifically, potential exposures to chemical mixtures are genuine concerns. Such concerns often arise from perceptions of a site's higher-than-anticipated toxicity due to synergistic interactions among chemicals. This chapter outlines some historical approaches to mixtures risk assessment, as well as ATSDR's current approach to toxicity risk assessment. ATSDR's joint toxicity assessment guidance for chemical mixtures addresses interactions among components of chemical mixtures. The guidance recommends a series of steps that include simple calculations for a systematic analysis of data, leading to conclusions regarding any hazards chemical mixtures might pose. These conclusions can, in turn, lead to recommendations such as targeted research to fill data gaps, development of new methods using current science, and health education to raise awareness among residents and health care providers. The chapter also provides examples of future trends in chemical mixtures assessment.
Luo, Zhi; Marson, Domenico; Ong, Quy K; Loiudice, Anna; Kohlbrecher, Joachim; Radulescu, Aurel; Krause-Heuer, Anwen; Darwish, Tamim; Balog, Sandor; Buonsanti, Raffaella; Svergun, Dmitri I; Posocco, Paola; Stellacci, Francesco
2018-04-09
The ligand shell (LS) determines a number of nanoparticles' properties. Nanoparticles' cores can be accurately characterized; yet the structure of the LS, when composed of a mixture of molecules, can be described only qualitatively (e.g., patchy, Janus, and random). Here we show that a quantitative description of the LS morphology of monodisperse nanoparticles can be obtained using small-angle neutron scattering (SANS), measured at multiple contrasts achieved by either ligand or solvent deuteration. Three-dimensional models of the nanoparticles' core and LS are generated using an ab initio reconstruction method. Characteristic length scales extracted from the models are compared with simulations. We also characterize the evolution of the LS upon thermal annealing, and investigate the LS morphology of mixed-ligand copper and silver nanoparticles as well as gold nanoparticles coated with ternary mixtures. Our results suggest that SANS combined with multiphase modeling is a versatile approach for the characterization of nanoparticles' LS.
A flamelet model for transcritical LOx/GCH4 flames
NASA Astrophysics Data System (ADS)
Müller, Hagen; Pfitzner, Michael
2017-03-01
This work presents a numerical framework to efficiently simulate methane combustion at supercritical pressures. A LES flamelet approach is adapted to account for real-gas thermodynamics effects which are a prominent feature of flames at near-critical injection conditions. The thermodynamics model is based on the Peng-Robinson equation of state (PR-EoS) in conjunction with a novel volume-translation method to correct deficiencies in the transcritical regime. The resulting formulation is more accurate than standard cubic EoSs without deteriorating their good computational performance. To consistently account for pressure and strain fluctuations in the flamelet model, an additional enthalpy equation is solved along with the transport equations for mixture fraction and mixture fraction variance. The method is validated against available experimental data for a laboratory scale LOx/GCH4 flame at conditions that resemble those in liquid-propellant rocket engines. The LES result is in good agreement with the measured OH* radiation.
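For reference, the Peng-Robinson equation of state on which the thermodynamics model is built has the standard form (the volume-translation correction mentioned above shifts the molar volume and is omitted here):

$$p = \frac{RT}{v - b} - \frac{a\,\alpha(T)}{v^2 + 2bv - b^2}, \qquad \alpha(T) = \Bigl[1 + \kappa\bigl(1 - \sqrt{T/T_c}\bigr)\Bigr]^2,$$

with $a$ and $b$ fixed by the critical temperature and pressure, $\kappa$ a polynomial in the acentric factor, and mixtures entering through mixing rules for $a$ and $b$.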
NASA Astrophysics Data System (ADS)
Bala, N.; Napiah, M.; Kamaruddin, I.; Danlami, N.
2018-04-01
In this study, modelling and optimization of polyethylene, polypropylene and nanosilica contents for nanocomposite-modified asphalt mixtures was examined to obtain the optimum quantities for a longer fatigue life. Response Surface Methodology (RSM) was applied for the optimization, based on a Box-Behnken design (BBD). Interaction effects of the independent variables, polymers and nanosilica, on fatigue life were evaluated. The results indicate that the individual effects of polymer and nanosilica content are both important; however, the nanosilica content has a more significant effect on fatigue life resistance. Also, the mean error obtained from the optimization results is less than 5% for all responses, indicating that the predicted values are in agreement with the experimental results. Furthermore, it was concluded that RSM-based optimization is a very effective approach for designing asphalt mixtures with high-performance properties.
ERIC Educational Resources Information Center
Henson, James M.; Reise, Steven P.; Kim, Kevin H.
2007-01-01
The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) [times] 3 (exogenous latent mean difference) [times] 3 (endogenous latent mean difference) [times] 3 (correlation between factors) [times] 3 (mixture proportions) factorial design. In addition, the efficacy of several…
A model for predicting thermal properties of asphalt mixtures from their constituents
NASA Astrophysics Data System (ADS)
Keller, Merlin; Roche, Alexis; Lavielle, Marc
Numerous theoretical and experimental approaches have been developed to predict the effective thermal conductivity of composite materials such as polymers, foams, epoxies, soils and concrete, but none of these models has been applied to asphalt concrete. This study attempts to develop a model to predict the thermal conductivity of asphalt concrete from its constituents, which would benefit the asphalt industry by reducing costs and saving time on laboratory testing: laboratory testing would no longer be required if a pavement mix with the desired thermal properties could be created at the design stage by selecting the correct constituents. This thesis investigated six existing predictive models for applicability to asphalt mixtures, and four standard mathematical techniques were used to develop a regression model to predict the effective thermal conductivity. The effective thermal conductivities of 81 asphalt specimens were used as the response variables, and the thermal conductivities and volume fractions of their constituents were used as the predictors. The statistical analyses showed that the measured thermal conductivities of the mixtures are affected by the bitumen and aggregate content, but not by the air content. Conversely, the predictions of some of the investigated models are highly sensitive to air voids, but not to bitumen and/or aggregate content. Additionally, comparison of the experimental with the analytical data showed that none of the existing models gave satisfactory results; on the other hand, two regression models (Exponential 1* and Linear 3*) are promising for asphalt concrete.
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
Multiple imputation of missing covariates for the Cox proportional hazards cure model
Beesley, Lauren J; Bartlett, Jonathan W; Wolf, Gregory T; Taylor, Jeremy M G
2016-01-01
We explore several approaches for imputing partially observed covariates when the outcome of interest is a censored event time and when there is an underlying subset of the population that will never experience the event of interest. We call these subjects “cured,” and we consider the case where the data are modeled using a Cox proportional hazards (CPH) mixture cure model. We study covariate imputation approaches using fully conditional specification (FCS). We derive the exact conditional distribution and suggest a sampling scheme for imputing partially observed covariates in the CPH cure model setting. We also propose several approximations to the exact distribution that are simpler and more convenient to use for imputation. A simulation study demonstrates that the proposed imputation approaches outperform existing imputation approaches for survival data without a cure fraction in terms of bias in estimating CPH cure model parameters. We apply our multiple imputation techniques to a study of patients with head and neck cancer. PMID:27439726
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
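A minimal EM fit of a two-component normal mixture of the kind described (synthetic data, naive initialization, and a fixed iteration count; not the authors' dataset):

```python
# Minimal EM for a two-component normal mixture. Synthetic data;
# initialization and stopping rule are deliberately simplistic.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(2, 1.0, 200)])

w, mu, sd = 0.5, np.array([-2.0, 3.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibility of component 1 for each observation
    d1, d2 = norm.pdf(x, mu[0], sd[0]), norm.pdf(x, mu[1], sd[1])
    r = w * d1 / (w * d1 + (1 - w) * d2)
    # M-step: weighted maximum-likelihood updates
    w = r.mean()
    mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
    sd = np.sqrt([np.average((x - mu[0])**2, weights=r),
                  np.average((x - mu[1])**2, weights=1 - r)])

print(f"weights ({w:.2f}, {1 - w:.2f}), means {mu.round(2)}, sds {sd.round(2)}")
```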
DOT National Transportation Integrated Search
2015-03-01
Mixture proportioning is routinely a matter of using a recipe based on a previously produced concrete, rather than adjusting the : proportions based on the needs of the mixture and the locally available materials. As budgets grow tighter and increasi...
Vali, Alireza; Abla, Adib A; Lawton, Michael T; Saloner, David; Rayz, Vitaliy L
2017-01-04
In vivo measurement of blood velocity fields and flow descriptors remains challenging due to image artifacts and the limited resolution of current imaging methods; however, in vivo imaging data can be used to inform and validate patient-specific computational fluid dynamics (CFD) models. Image-based CFD can be particularly useful for planning surgical interventions in complicated cases such as fusiform aneurysms of the basilar artery, where it is crucial to alter pathological hemodynamics while preserving flow to the distal vasculature. In this study, patient-specific CFD modeling was conducted for two basilar aneurysm patients considered for surgical treatment. In addition to velocity fields, transport of contrast agent was simulated for the preoperative and postoperative conditions using two approaches. The transport of a virtual contrast passively following the flow streamlines was simulated to predict post-surgical flow regions prone to thrombus deposition. In addition, the transport of a mixture of blood with an iodine-based contrast agent was modeled to compare and verify the CFD results with X-ray angiograms. The CFD-predicted patterns of contrast flow were qualitatively compared to in vivo X-ray angiograms acquired before and after the intervention. The results suggest that the mixture modeling approach, which accounts for the flow rates and properties of the contrast injection, is in better agreement with the X-ray angiography data. The virtual contrast modeling assessed residence time based on flow patterns unaffected by the injection procedure, making it better suited for predicting thrombus deposition, which is not limited to the peri-procedural state. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kienzler, Aude; Bopp, Stephanie K; van der Linden, Sander; Berggren, Elisabet; Worth, Andrew
2016-10-01
This paper reviews regulatory requirements and recent case studies to illustrate how the risk assessment (RA) of chemical mixtures is conducted, considering both the effects on human health and on the environment. A broad range of chemicals, regulations and RA methodologies are covered, in order to identify mixtures of concern, gaps in the regulatory framework, data needs, and further work to be carried out. The current and potential future use of novel tools (Adverse Outcome Pathways, in silico tools, toxicokinetic modelling, etc.) in the RA of combined effects was also reviewed. The assumptions made in the RA, the predictive model specifications and the choice of toxic reference values can greatly influence the assessment outcome, and should therefore be specifically justified. Novel tools could support mixture RA mainly by providing a better understanding of the underlying mechanisms of combined effects. Nevertheless, their use is currently limited because of a lack of guidance, data, and expertise, and more guidance is needed to facilitate their application. As far as the authors are aware, no prospective RA concerning chemicals related to various regulatory sectors has been performed to date, even though numerous chemicals are registered under several regulatory frameworks. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Jesenska, Sona; Liess, Mathias; Schäfer, Ralf; Beketov, Mikhail; Blaha, Ludek
2013-04-01
Species sensitivity distribution (SSD) is a statistical method broadly used in the ecotoxicological risk assessment of chemicals. Originally it was used for prospective risk assessment of single substances, but it is now becoming more important also in the retrospective risk assessment of mixtures, including at the catchment scale. In the present work, SSD predictions (impacts of mixtures consisting of 25 pesticides; data from several catchments in Germany, France and Finland) were compared with SPEAR-pesticides, a bioindicator index based on biological traits responsive to the effects of pesticides and post-contamination recovery. The results showed statistically significant correlations (Pearson's R, p<0.01) between SSD (predicted msPAF values) and values of SPEAR-pesticides (based on field biomonitoring observations). Comparisons of the thresholds established for the SSD and SPEAR approaches (SPEAR-pesticides = 45%, i.e. the LOEC level, and msPAF = 0.05 for SSD, i.e. HC5) showed that use of chronic toxicity data significantly improved the agreement between the two methods, but the SPEAR-pesticides index was still more sensitive. Taken together, the validation study shows good potential of SSD models in predicting the real impacts of micropollutant mixtures on natural communities of aquatic biota.
Chemical Mixture Risk Assessment Additivity-Based Approaches
This PowerPoint presentation covers additivity-based chemical mixture risk assessment methods. Basic concepts, theory and example calculations are included. Several slides discuss the use of "common adverse outcomes" in analyzing phthalate mixtures.
Weybright, Elizabeth H; Caldwell, Linda L; Ram, Nilam; Smith, Edward A; Wegner, Lisa
2016-06-01
Considerable heterogeneity exists in adolescent substance use development. To prevent use most effectively, distinct trajectories of use must be identified, along with their differential associations with predictors of use such as leisure experience. The current study used a person-centered approach to identify distinct substance use trajectories and how leisure is associated with trajectory classes. Data came from a larger efficacy trial of 2,249 South African high school students who reported substance use at any time across 8 waves. Growth mixture modeling was used to identify developmental trajectories of substance use and the influence of healthy leisure. Results identified three increasing trajectories and one stable substance use trajectory, and subjective healthy leisure served to protect against use. This study is the first of its kind to focus on a sample of South African adolescents and serves to develop a richer understanding of substance use development and the role of healthy leisure. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
A general mixture equation of state for double bonding carboxylic acids with ≥2 association sites
NASA Astrophysics Data System (ADS)
Marshall, Bennett D.
2018-05-01
In this paper, we obtain the first general multi-component solution to Wertheim's thermodynamic perturbation theory for the case in which molecules can participate in cyclic double bonds. In contrast to previous authors, we do not restrict double-bonding molecules to a 2-site association scheme. Each molecule in a multi-component mixture can have an arbitrary number of donor and acceptor association sites. The one restriction in the theory is that molecules can have at most one pair of double-bonding sites. We also incorporate the effect of hydrogen bond cooperativity in cyclic double bonds. We then apply this new association theory to 2-site and 3-site models for carboxylic acids within the polar perturbed-chain statistical associating fluid theory equation of state. We demonstrate the accuracy of the approach by comparison to both pure-component and multi-component phase equilibria data. It is shown that the 3-site association model gives a substantially different hydrogen-bonding structure than the 2-site approach, and that inclusion of hydrogen bond cooperativity has a substantial effect on the liquid-phase hydrogen-bonding structure.
Computational study of sheath structure in oxygen containing plasmas at medium pressures
NASA Astrophysics Data System (ADS)
Hrach, Rudolf; Novak, Stanislav; Ibehej, Tomas; Hrachova, Vera
2016-09-01
Plasma mixtures containing active species are used in many plasma-assisted material treatment technologies. The analysis of such systems is rather difficult, as both physical and chemical processes affect plasma properties. A combination of experimental and computational approaches is best suited, especially at higher pressures and/or in chemically active plasmas. The first part of our study of argon-oxygen mixtures was based on experimental results obtained in the positive column of a DC glow discharge. The plasma was analysed by the macroscopic kinetic approach, which is based on the set of chemical reactions in the discharge; the result of this model is the time evolution of the number densities of each species. In the second part of the contribution, a detailed analysis of the processes taking place during the interaction of oxygen-containing plasma with immersed substrates was performed, with the results of the first model serving as input parameters. The method used was the particle simulation technique applied to multicomponent plasma. The sheath structure and the fluxes of charged particles to the substrates were analysed as functions of plasma pressure, plasma composition and surface geometry.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Saumyadip; Abraham, John
2012-07-01
The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds-averaged simulations and large eddy simulations of non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus of this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and conserved scalars is evaluated. For unimodal distributions, it is observed that functions that use two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to approximate the actual PDF reasonably well. As the distribution becomes multimodal, higher-moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function that remains unchanged in the presence of heat release; we show that this assumption is not accurate.
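For reference, the presumed β-PDF mentioned above is fully determined by the first two moments of the mixture fraction:

$$\widetilde{P}(Z) = \frac{Z^{a-1}(1-Z)^{b-1}}{B(a,b)}, \qquad a = \widetilde{Z}\left(\frac{\widetilde{Z}\,(1-\widetilde{Z})}{\widetilde{Z''^2}} - 1\right), \qquad b = a\,\frac{1-\widetilde{Z}}{\widetilde{Z}},$$

where $\widetilde{Z}$ is the (Favre) mean mixture fraction and $\widetilde{Z''^2}$ its variance. Once the true PDF becomes multimodal, no choice of $a$ and $b$ can track its shape, which is consistent with the DNS comparison reported above.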
Predicting herbicide mixture effects on multiple algal species using mixture toxicity models.
Nagai, Takashi
2017-10-01
The validity of applying the mixture toxicity models concentration addition and independent action to a species sensitivity distribution (SSD) for calculation of a multisubstance potentially affected fraction was examined in laboratory experiments. Toxicity assays of herbicide mixtures using 5 species of periphytic algae were conducted. Two mixture experiments were designed: a mixture of 5 herbicides with similar modes of action and a mixture of 5 herbicides with dissimilar modes of action, corresponding to the assumptions of the concentration addition and independent action models, respectively. Experimentally obtained mixture effects on the 5 algal species were converted to the fraction of affected species (>50% effect on growth rate). The predictive ability of the concentration addition and independent action models applied directly to the SSD depended on the mode of action of the chemicals. That is, prediction was better for the concentration addition model than the independent action model for the mixture of herbicides with similar modes of action. In contrast, prediction was better for the independent action model than the concentration addition model for the mixture of herbicides with dissimilar modes of action. Thus, the concentration addition and independent action models could be applied to the SSD in the same manner as for a single-species effect. The present study validating the application of the concentration addition and independent action models to the SSD supports the usefulness of the multisubstance potentially affected fraction as an index of ecological risk. Environ Toxicol Chem 2017;36:2624-2630. © 2017 SETAC.
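A compact sketch of how the two models yield a multisubstance potentially affected fraction (msPAF) under log-normal SSDs; all SSD parameters and exposures are invented, and the pooled-sd treatment in the CA branch is a simplification.

```python
# Sketch of msPAF under the two mixture rules, assuming log-normal SSDs.
# SSD means/sds (log10 scale) and exposures are invented for illustration.
import numpy as np
from scipy.stats import norm

ssd = {  # chemical: (mean log10 EC50, sd log10 EC50), hypothetical
    "herbA": (1.2, 0.6), "herbB": (0.8, 0.5), "herbC": (1.5, 0.7),
}
exposure = {"herbA": 3.0, "herbB": 1.0, "herbC": 5.0}  # ug/L, hypothetical

# Independent action: combine single-substance PAFs as independent risks.
paf = {c: norm.cdf((np.log10(exposure[c]) - m) / s) for c, (m, s) in ssd.items()}
mspaf_ia = 1.0 - np.prod([1.0 - v for v in paf.values()])

# Concentration addition (similar modes of action): sum hazard units
# HU = exposure / 10**mean, then evaluate one pooled SSD (simplified:
# uses the average sd across chemicals).
hu = sum(exposure[c] / 10 ** ssd[c][0] for c in ssd)
mspaf_ca = norm.cdf(np.log10(hu) / np.mean([s for _, s in ssd.values()]))

print(f"msPAF(IA) = {mspaf_ia:.3f}, msPAF(CA) = {mspaf_ca:.3f}")
```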
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support catchment management decisions. As the questions asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the mixing model run with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and goodness-of-fit (GOF) values above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for the source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on the end-member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
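The numerical core of such a mixing model can be sketched as a constrained least-squares problem; the tracer matrix below is invented, and real applications add measurement-error weighting and conservativeness tests.

```python
# Sketch: estimate source proportions from tracer concentrations by
# constrained least squares (proportions non-negative, summing to one).
# The tracer matrix is invented for illustration.
import numpy as np
from scipy.optimize import minimize

# rows = tracers (e.g. Sr, Rb, Fe), columns = sources
A = np.array([[120.0, 80.0, 60.0],
              [ 15.0, 40.0, 25.0],
              [  3.1,  1.2,  2.4]])
mix = A @ np.array([0.5, 0.3, 0.2])          # synthetic "sediment" sample

def loss(p):
    return np.sum((A @ p - mix) ** 2)

res = minimize(loss, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1}])
print("estimated proportions:", res.x.round(3))
```

Grain-size sorting changes the effective rows of A between sources and sediment, which is exactly why the second experiment above stresses the correction procedures.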
Stavenes Andersen, Ingrid; Voie, Oyvind Albert; Fonnum, Frode; Mariussen, Espen
2009-11-01
Regulatory limit values for toxicants are in general determined by the toxicology of the single compounds; however, little is known about their combined effects. Methyl mercury (MeHg), polychlorinated biphenyls (PCBs), and brominated flame retardants (BFRs) are dominant contaminants in the environment and in food. MeHg is a well-known neurotoxicant, especially affecting the developing brain. There is increasing evidence that PCBs and BFRs also have neurotoxic effects. An enhanced effect of these toxicants, due to either synergistic or additive interactions, would constitute a risk to fetal development. Here we studied the combined effects of MeHg with PCB or BFR on the reuptake of glutamate in synaptosomes. To draw the optimal conclusion regarding the type of interaction, we analyzed the data using two mathematical models, the Loewe model of additivity and the Bliss model of independent action. Binary and ternary mixtures in different proportions were made. The toxicants had primarily additive effects, as shown with both models, although tendencies towards synergism were observed. MeHg was by far the most potent inhibitor of uptake, with an EC(50) value of 0.33 microM. A reconstituted mixture from a relevant fish sample was made in order to elucidate which chemical was responsible for the observed effect. Some interaction was observed between PCB and MeHg, but in general MeHg seemed to explain the observed effect. We also show that mixture effects should not be assessed by effect addition.
Minică, Camelia C; Dolan, Conor V; Hottenga, Jouke-Jan; Willemsen, Gonneke; Vink, Jacqueline M; Boomsma, Dorret I
2013-05-01
When phenotypic, but no genotypic, data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost statistical power when included in such studies. Here, using simulations, we compared the performance of two statistical approaches suitable for modeling imputed genotype data: the mixture approach, which involves the full distribution of the imputed genotypes, and the dosage approach, where the mean of the conditional distribution features as the imputed genotype. Simulations were run varying sibship size, the size of the phenotypic correlations among siblings, imputation accuracy and the minor allele frequency of the causal SNP. Furthermore, as imputing sibling data and extending the model to include sibships of size two or greater requires modeling the familial covariance matrix, we inquired whether model misspecification affects power. Finally, the results obtained via simulations were empirically verified in two datasets, one with a continuous phenotype (height) and one with a dichotomous phenotype (smoking initiation). Across the settings considered, the mixture and the dosage approach are equally powerful and both produce unbiased parameter estimates. In addition, the likelihood-ratio test in the linear mixed model appears to be robust to the considered misspecification in the background covariance structure, given low to moderate phenotypic correlations among siblings. Empirical results show that the inclusion of imputed sibling genotypes in association analysis does not always result in a larger test statistic; the test statistic may actually drop in value due to small effect sizes. That is, if the power benefit is small, i.e., the change in the distribution of the test statistic under the alternative is relatively small, the probability of obtaining a smaller test statistic is greater. As genetic effects are typically hypothesized to be small, in practice the decision on whether family-based imputation should be used as a means to increase power should be informed by prior power calculations and by consideration of the background correlation.
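The difference between the two approaches is easy to state: the dosage approach carries only the mean of the conditional genotype distribution into the association model, whereas the mixture approach keeps all weighted genotype components in the likelihood. A toy illustration of the dosage computation (the posterior probabilities are invented):

```python
# Dosage vs. mixture handling of an imputed genotype. The dosage approach
# collapses the conditional genotype distribution to its mean; the mixture
# approach would instead keep all three components, weighting the
# phenotype likelihood f(y | g) by P(g | family data).
probs = {0: 0.10, 1: 0.65, 2: 0.25}   # P(genotype = g | family data), hypothetical
dosage = sum(g * p for g, p in probs.items())
print(f"expected allele count (dosage) = {dosage:.2f}")
```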
Chen, Jiacong; Liu, Jingyong; He, Yao; Huang, Limao; Sun, Shuiyu; Sun, Jian; Chang, KenLin; Kuo, Jiahong; Huang, Shaosong; Ning, Xunan
2017-02-01
Artificial neural network (ANN) modeling was applied to thermal data obtained by non-isothermal thermogravimetric analysis (TGA) from room temperature to 1000 °C at three different heating rates in air to predict the TG curves of sewage sludge (SS) and coffee grounds (CG) mixtures. A good agreement between experimental and predicted data verified the accuracy of the ANN approach. The results of co-combustion showed that there were interactions between SS and CG, and the impacts were mostly positive. With the addition of CG, the mass loss rate and the reactivity of SS were increased while charring was reduced. Measured activation energies (Ea) determined by the Kissinger-Akahira-Sunose (KAS) and Ozawa-Flynn-Wall (OFW) methods deviated by <5%. The average value of Ea (166.8 kJ/mol by KAS and 168.8 kJ/mol by OFW, respectively) was lowest when the fraction of CG in the mixture was 40%. Copyright © 2016 Elsevier Ltd. All rights reserved.
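For reference, the two isoconversional estimators rest on the standard approximations (g(α) is the integral form of the reaction model, β the heating rate, A the pre-exponential factor):

$$\mathrm{KAS:}\quad \ln\frac{\beta}{T^2} = \ln\frac{A\,R}{E_a\,g(\alpha)} - \frac{E_a}{R\,T}, \qquad \mathrm{OFW:}\quad \ln\beta = \ln\frac{A\,E_a}{R\,g(\alpha)} - 5.331 - 1.052\,\frac{E_a}{R\,T},$$

so $E_a$ follows from the slope of $\ln(\beta/T^2)$ (or $\ln\beta$) against $1/T$ across the three heating rates at a fixed conversion $\alpha$.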
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a set of five wavelengths. The algorithm of this calibration model, which has a simple mathematical form, is briefly described. This approach is a powerful mathematical tool for optimal chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves reducing the multivariate linear regression functions to a univariate data set. Validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT, and the results were compared with those obtained by a classical HPLC method. The proposed multivariate chromatographic calibration was observed to give better results than classical HPLC.
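A toy sketch of the calibration step (all numbers invented, and the single-analyte setup is a simplification of the binary-mixture case): one linear regression of peak area on concentration per wavelength, with the unknown predicted per wavelength and averaged.

```python
# Sketch of per-wavelength linear calibration: area = slope * conc + intercept
# at each of five wavelengths; an unknown is predicted per wavelength and the
# estimates are averaged. All numbers are invented.
import numpy as np

conc = np.array([4.0, 8.0, 12.0, 16.0, 20.0])          # standards, ug/mL
areas = np.array([[ 52, 105, 160, 210, 262],
                  [ 40,  82, 121, 163, 201],
                  [ 66, 130, 198, 262, 330],
                  [ 31,  60,  92, 121, 152],
                  [ 48,  95, 145, 190, 240]], float)    # one row per wavelength

fits = [np.polyfit(conc, a, 1) for a in areas]          # (slope, intercept) pairs

unknown = np.array([140.0, 108.0, 176.0, 81.0, 127.0])  # sample peak areas
estimates = [(u - b) / m for (m, b), u in zip(fits, unknown)]
print("per-wavelength estimates:", np.round(estimates, 2),
      "mean:", round(float(np.mean(estimates)), 2))
```

Averaging across wavelengths is what damps the instrumental fluctuations the abstract refers to.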
Effect of inorganic salts on the volatility of organic acids.
Häkkinen, Silja A K; McNeill, V Faye; Riipinen, Ilona
2014-12-02
Particulate-phase reactions between organic and inorganic compounds may significantly alter aerosol chemical properties, for example by suppressing particle volatility. Here, chemical processing upon drying of aerosols composed of organic (acetic, oxalic, succinic, or citric) acid/monovalent inorganic salt mixtures was assessed by measuring the evaporation of the organic acid molecules from the mixture using a novel approach that combines a chemical ionization mass spectrometer with a heated flow tube inlet (TPD-CIMS) and kinetic model calculations. For reference, the volatility, i.e., the saturation vapor pressure and vaporization enthalpy, of the pure succinic and oxalic acids was also determined and found to be in agreement with the previous literature. Comparison between the kinetic model and the experimental data suggests significant particle-phase processing forming low-volatility material such as organic salts. The results were similar for both ammonium sulfate and sodium chloride mixtures, and relatively more processing was observed at low initial aerosol organic molar fractions. The magnitude of low-volatility organic material formation over an atmospherically relevant pH range indicates that the observed phenomenon is not only significant under laboratory conditions but is also of direct atmospheric relevance.
Evaluation of Student Performance through a Multidimensional Finite Mixture IRT Model.
Bacci, Silvia; Bartolucci, Francesco; Grilli, Leonardo; Rampichini, Carla
2017-01-01
In the Italian academic system, a student can enroll for an exam immediately after the end of the teaching period or can postpone it; in the second case the exam result is missing. We propose an approach for the evaluation of student performance throughout the course of study that also accounts for nonattempted exams. The approach is based on an item response theory model that includes two discrete latent variables representing student performance and priority in selecting the exams to take. We explicitly account for nonignorable missing observations, as the indicators of attempted exams also contribute to measuring performance (within-item multidimensionality). The model also allows for individual covariates in its structural part.
High/variable mixture ratio O2/H2 engine
NASA Technical Reports Server (NTRS)
Adams, A.; Parsley, R. C.
1988-01-01
Vehicle/engine analysis studies have identified the High/Dual Mixture Ratio O2/H2 Engine cycle as a leading candidate for an advanced Single Stage to Orbit (SSTO) propulsion system. This cycle is designed to allow operation at a higher-than-normal O/F ratio of 12 during liftoff and then transition to a more optimal O/F ratio of 6 at altitude. While operation at high mixture ratios lowers specific impulse, the resultant high propellant bulk density and high power density combine to minimize the influence of atmospheric drag and low altitude gravitational forces. Transition to a lower mixture ratio at altitude then provides improved specific impulse relative to a single mixture ratio engine that must select a mixture ratio that is balanced for both low and high altitude operation. This combination of increased altitude specific impulse and high propellant bulk density more than offsets the compromised low altitude performance and results in an overall mission benefit. Two areas of technical concern relative to the execution of this dual mixture ratio cycle concept are addressed. First, the actions required to transition from high to low mixture ratio are examined, including an assessment of the main chamber environment as the main chamber mixture ratio passes through stoichiometric. Second, two approaches to meeting the requirement for high turbine power at the high mixture ratio condition are examined. One approach uses high turbine temperature to produce the power and requires cooled turbines. The other approach incorporates an oxidizer-rich preburner to increase turbine work capability via increased turbine mass flow.
Mixtures of GAMs for habitat suitability analysis with overdispersed presence / absence data
Pleydell, David R.J.; Chrétien, Stéphane
2009-01-01
A new approach to species distribution modelling based on unsupervised classification via a finite mixture of GAMs incorporating habitat suitability curves is proposed. A tailored EM algorithm is outlined for computing maximum likelihood estimates. Several submodels incorporating various parameter constraints are explored. Simulation studies confirm that, under certain constraints, the habitat suitability curves are recovered with good precision. The method is also applied to a set of real data concerning the presence/absence of observable small mammal indices collected on the Tibetan plateau. The resulting classification was found to correspond to species-level differences in habitat preference described in previous ecological work. PMID:20401331
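The sketch below illustrates the kind of EM algorithm the abstract describes, for a two-component mixture of logistic response curves fitted to simulated presence/absence data; polynomial terms stand in for the GAM smoothers, and all settings are invented assumptions rather than the authors' implementation.

```python
# EM for a two-component mixture of logistic regression curves (simplified
# stand-in for a finite mixture of GAMs; data are simulated).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
x = rng.uniform(-2, 2, n)
z = rng.integers(0, 2, n)                              # latent habitat class
logit = np.where(z == 0, 2.0 * x, -2.0 * x + 1.0)      # two suitability curves
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))      # presence/absence

X = np.column_stack([x, x ** 2])                       # crude smooth basis
K = 2
r = rng.dirichlet(np.ones(K), n)                       # random initial responsibilities
pi = np.full(K, 1.0 / K)
for _ in range(50):
    # M-step: weighted logistic fit per component, update mixing weights.
    models = [LogisticRegression().fit(X, y, sample_weight=r[:, k] + 1e-9)
              for k in range(K)]
    pi = r.mean(axis=0)
    # E-step: responsibilities from the current component curves.
    p = np.stack([m.predict_proba(X)[:, 1] for m in models], axis=1)
    lik = np.where(y[:, None] == 1, p, 1.0 - p) * pi
    r = lik / lik.sum(axis=1, keepdims=True)

print("mixing weights:", pi.round(2))
```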
Stochastic blockmodeling of the modules and core of the Caenorhabditis elegans connectome.
Pavlovic, Dragana M; Vértes, Petra E; Bullmore, Edward T; Schafer, William R; Nichols, Thomas E
2014-01-01
Recently, there has been much interest in the community structure or mesoscale organization of complex networks. This structure is characterised either as a set of sparsely inter-connected modules or as a highly connected core with a sparsely connected periphery. However, it is often difficult to disambiguate these two types of mesoscale structure or, indeed, to summarise the full network in terms of the relationships between its mesoscale constituents. Here, we estimate a community structure with a stochastic blockmodel approach, the Erdős-Rényi Mixture Model, and compare it to the much more widely used deterministic methods, such as the Louvain and Spectral algorithms. We used the Caenorhabditis elegans (C. elegans) nervous system (connectome) as a model system in which biological knowledge about each node or neuron can be used to validate the functional relevance of the communities obtained. The deterministic algorithms derived communities with 4-5 modules, defined by sparse inter-connectivity between all modules. In contrast, the stochastic Erdős-Rényi Mixture Model estimated a community structure with 9 blocks or groups, which comprised a similar set of modules but also included a clearly defined core made of 2 small groups. We show that the "core-in-modules" decomposition of the worm brain network, estimated by the Erdős-Rényi Mixture Model, is more compatible with prior biological knowledge about the C. elegans nervous system than the purely modular decomposition defined deterministically. We also show that the blockmodel can be used both to generate stochastic realisations (simulations) of the biological connectome, and to compress the network into a small number of super-nodes and their connectivity. We expect that the Erdős-Rényi Mixture Model may be useful for investigating the complex community structures in other (nervous) systems.
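A minimal stochastic blockmodel sketch of the "core-in-modules" idea: a network is sampled from a block probability matrix with three modules plus a small dense core, the partition is recovered, and the block densities are re-estimated. Spectral clustering here is a stand-in for the variational estimation used with the Erdős-Rényi Mixture Model, and all block probabilities are invented.

```python
# Sample an SBM (an Erdos-Renyi mixture), recover groups, estimate block densities.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(3)
sizes = [40, 40, 40, 12]                       # three modules + a small core
B = np.full((4, 4), 0.02)                      # sparse between-group wiring
np.fill_diagonal(B, 0.25)
B[3, :] = B[:, 3] = 0.30                       # the core connects to everything
labels = np.repeat(np.arange(4), sizes)
n = labels.size

P = B[np.ix_(labels, labels)]                  # per-pair edge probabilities
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, 1); A = A + A.T                 # undirected, no self-loops

est = SpectralClustering(n_clusters=4, affinity="precomputed",
                         random_state=0).fit_predict(A)

# Block-density MLE given the estimated partition: mean of A within each block pair.
for a in range(4):
    for b in range(4):
        mask = np.outer(est == a, est == b)
        if a == b:
            mask &= ~np.eye(n, dtype=bool)
        print(f"B[{a},{b}] ~ {A[mask].mean():.2f}", end="  ")
    print()
```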
Application of the QSPR approach to the boiling points of azeotropes.
Katritzky, Alan R; Stoyanova-Slavova, Iva B; Tämm, Kaido; Tamm, Tarmo; Karelson, Mati
2011-04-21
CODESSA Pro derivative descriptors were calculated for a data set of 426 azeotropic mixtures by the centroid approximation and the weighted-contribution-factor approximation. The two approximations produced almost identical four-descriptor QSPR models relating the structural characteristics of the individual components of azeotropes to the azeotropic boiling points. These models were supported by internal and external validations. The descriptors contributing to the QSPR models are directly related to the three components of the enthalpy (heat) of vaporization.
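For orientation, a four-descriptor QSPR model of this kind is just a validated multiple linear regression; the sketch below uses random placeholder descriptors (not the CODESSA Pro descriptors) to show the fit-and-validate pattern.

```python
# Four-descriptor QSPR skeleton: OLS fit with cross-validated Q2.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 426
X = rng.normal(size=(n, 4))                      # 4 mixture descriptors (placeholders)
Tb = 350 + X @ np.array([12.0, -7.0, 4.0, 9.0]) + rng.normal(0, 3, n)

model = LinearRegression().fit(X, Tb)
r2_fit = model.score(X, Tb)
q2_cv = cross_val_score(model, X, Tb, cv=5, scoring="r2").mean()
print(f"R2 = {r2_fit:.3f}, cross-validated Q2 = {q2_cv:.3f}")
```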
Establishment of a center of excellence for applied mathematical and statistical research
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Gray, H. L.
1983-01-01
The state of the art was assessed with regard to efforts in support of the crop production estimation problem, and alternative generic proportion estimation techniques were investigated. Topics covered include modeling the greenness profile (Badhwar's model), parameter estimation using mixture models such as CLASSY, and minimum distance estimation as an alternative to maximum likelihood estimation. Approaches to the problem of obtaining proportion estimates when the underlying distributions are asymmetric are examined, including the properties of the Weibull distribution.
Habib, Basant A; AbouGhaly, Mohamed H H
2016-06-01
This study aims to illustrate the applicability of combined mixture-process variable (MPV) design and modeling for the optimization of nanovesicular systems. The D-optimal experimental plan studied the influence of three mixture components (MCs) and two process variables (PVs) on lercanidipine transfersomes. The MCs were phosphatidylcholine (A), sodium glycocholate (B) and lercanidipine hydrochloride (C), while the PVs were the glycerol amount in the hydration mixture (D) and the sonication time (E). The studied responses were Y1: particle size, Y2: zeta potential and Y3: entrapment efficiency percent (EE%). Polynomial equations were used to study the influence of the MCs and PVs on each response. Response surface methodology and multiple response optimization were applied to optimize the formulation with the goals of minimizing Y1 and maximizing Y2 and Y3. The obtained polynomial models had prediction R² values of 0.645, 0.947 and 0.795 for Y1, Y2 and Y3, respectively. Contour, Piepel's response trace, perturbation, and interaction plots were drawn to represent the responses. The optimized formulation, A: 265 mg, B: 10 mg, C: 40 mg, D: zero g and E: 120 s, had a desirability of 0.9526. The actual response values for the optimized formulation were within the two-sided 95% prediction intervals and were close to the predicted values, with a maximum percent deviation of 6.2%. This indicates the validity of combined MPV design and modeling for the optimization of transfersomal formulations as an example of nanovesicular systems.
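The multiple-response optimization step can be pictured as a Derringer-Suits desirability search over the process variables, as in the sketch below; the quadratic response functions, goal ranges and grid are invented placeholders, not the fitted models from the study.

```python
# Desirability optimization over two process variables, given fitted responses.
import numpy as np

def y1(d, e): return 180 - 0.3 * e + 8 * d + 0.001 * e**2       # particle size, nm
def y2(d, e): return -20 - 0.02 * e + 1.5 * d                   # zeta potential, mV
def y3(d, e): return 60 + 0.1 * e - 3 * d - 0.0004 * e**2       # EE%

def desir(y, lo, hi):
    # Linear desirability rising from 0 at lo to 1 at hi.
    return np.clip((y - lo) / (hi - lo), 0, 1)

D_grid, E_grid = np.meshgrid(np.linspace(0, 2, 41), np.linspace(0, 180, 91))
d1 = desir(-y1(D_grid, E_grid), -220, -120)     # smaller size is better
d2 = desir(-y2(D_grid, E_grid), 15, 25)         # larger |zeta| is better
d3 = desir(y3(D_grid, E_grid), 40, 75)          # higher EE% is better
D = (d1 * d2 * d3) ** (1 / 3)                   # overall desirability (geometric mean)

i = np.unravel_index(np.argmax(D), D.shape)
print(f"best: glycerol={D_grid[i]:.2f} g, sonication={E_grid[i]:.0f} s, D={D[i]:.3f}")
```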
Ha, Dong -Gwang; Kim, Jang -Joo; Baldo, Marc A.
2016-04-29
Mixed host compositions that combine charge transport materials with luminescent dyes offer superior control over exciton formation and charge transport in organic light emitting devices (OLEDs). Two approaches are typically used to optimize the fraction of charge transport materials in a mixed host composition: either an empirical percolative model, or a hopping transport model. We show that these two commonly employed models are linked by an analytic expression which relates the localization length to the percolation threshold and critical exponent. The relation is confirmed both numerically and experimentally through measurements of the relative conductivity of Tris(4-carbazoyl-9-ylphenyl)amine (TCTA):1,3-bis(3,5-dipyrid-3-yl-phenyl)benzene (BmPyPb) mixtures with different concentrations, where TCTA acts as a hole conductor and BmPyPb as a hole insulator. Furthermore, the analytic relation may allow the rational design of mixed layers of small molecules for high-performance OLEDs.
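The following numerical comparison illustrates (but does not reproduce) the link the abstract reports: with suitably matched parameters, the percolative power law and the exponential hopping expression trace nearly the same conductivity-versus-fraction curve. All parameter values are arbitrary assumptions.

```python
# Compare the two standard mixed-host conductivity pictures numerically.
import numpy as np

a = 1.0                                # intermolecular spacing (arbitrary units)
xi = 0.3 * a                           # localization length
x = np.linspace(0.05, 1.0, 200)        # conductor (e.g., TCTA) fraction

# Hopping picture: the average hop distance grows as x**(-1/3).
sigma_hop = np.exp(-2 * a * x ** (-1 / 3) / xi)
sigma_hop /= sigma_hop[-1]             # normalize to the neat-film value

# Percolation picture: sigma ~ ((x - xc)/(1 - xc))**t above the threshold xc.
xc, t = 0.12, 2.0
sigma_perc = np.where(x > xc, ((x - xc) / (1 - xc)) ** t, 0.0)

# With these parameters the curves nearly coincide over a wide range of x,
# a numerical face of the analytic link between xi, xc and t.
err = np.max(np.abs(sigma_hop - sigma_perc)[x > 0.2])
print(f"max |difference| for x > 0.2: {err:.3f}")
```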
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paricaud, P.
2015-07-28
A simple modification of the Boublík-Mansoori-Carnahan-Starling-Leland equation of state is proposed for application to the metastable disordered region. The new model has a positive pole at the jamming limit and can accurately describe the molecular simulation data of pure hard spheres in the stable fluid region and along the metastable branch. The new model has also been applied to binary mixtures of hard spheres, and an excellent description of the fluid and metastable branches can be obtained by adjusting the jamming packing fraction. The new model for hard sphere mixtures can be used as the repulsive term of equations of state for real fluids. In this case, the modified equations of state give predictions of thermodynamic properties very similar to those of the original models, and one can remove the multiple liquid density roots observed for some versions of the Statistical Associating Fluid Theory (SAFT) at low temperature without any modification of the dispersion term.
Cancer heterogeneity and multilayer spatial evolutionary games.
Świerniak, Andrzej; Krześlak, Michał
2016-10-13
Evolutionary game theory (EGT) has been widely used to simulate tumour processes. In almost all studies on EGT models, the analysis is limited to two or three phenotypes. Our model contains four main phenotypes. Moreover, in the standard approach only the heterogeneity of populations is studied, while cancer cells remain homogeneous. The multilayer approach proposed in this paper enables the study of heterogeneity at the level of single cells. In the extended model presented in this paper we consider four strategies (phenotypes) that can arise by mutations. We propose multilayer spatial evolutionary games (MSEG) played on multiple 2D lattices corresponding to the possible phenotypes. This enables simulation and investigation of heterogeneity at the player level in addition to the population level. Moreover, it allows the modelling of interactions between arbitrarily many phenotypes resulting from mixtures of the basic traits. Different equilibrium points and scenarios (monomorphic and polymorphic populations) have been achieved depending on the model parameters and the type of game played. However, there is a possibility of a stable quadromorphic population in MSEG games for the same set of parameters as for the mean-field game. The model assumes the existence of four possible phenotypes (strategies) in the population of cells that make up the tumour. Various parameters and relations between cells lead to a complex analysis of this model and give diverse results. One of them is the possibility of stable coexistence of different tumour cells within the population, representing an almost arbitrary mixture of the basic phenotypes. This article was reviewed by Tomasz Lipniacki, Urszula Ledzewicz and Jacek Banasiak.
A clustering package for nucleotide sequences using Laplacian Eigenmaps and Gaussian Mixture Model.
Bruneau, Marine; Mottet, Thierry; Moulin, Serge; Kerbiriou, Maël; Chouly, Franz; Chretien, Stéphane; Guyeux, Christophe
2018-02-01
In this article, a new Python package for nucleotide sequence clustering is proposed. This package, freely available on-line, implements a Laplacian eigenmap embedding and a Gaussian mixture model for DNA clustering. It takes nucleotide sequences as input, and produces the optimal number of clusters along with a relevant visualization. Although we did not optimise for computational speed, our method still performs reasonably well in practice. Our focus was mainly on data analytics and accuracy, and as a result our approach outperforms the state of the art, even in the case of divergent sequences. Furthermore, a priori knowledge of the number of clusters is not required. For the sake of illustration, the method is applied to a set of 100 DNA sequences taken from the mitochondrially encoded NADH dehydrogenase 3 (ND3) gene, extracted from a collection of Platyhelminthes and Nematoda species. The resulting clusters are tightly consistent with the phylogenetic tree computed using a maximum likelihood approach on the gene alignment. They are also coherent with the NCBI taxonomy. Further test results based on synthesized data are then provided, showing that the proposed approach is better able to recover the clusters than the most widely used software, namely Cd-hit-est and BLASTClust.
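A sketch of the same pipeline shape using scikit-learn rather than the authors' package: k-mer profiles of toy sequences are embedded with a spectral (Laplacian eigenmap) method and clustered with a Gaussian mixture, with the number of components chosen by BIC. The sequences, k-mer featurization and all settings are invented for illustration.

```python
# Spectral embedding + GMM clustering of toy DNA sequences, k chosen by BIC.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

def kmer_profile(seq, k=3):
    idx = {a: i for i, a in enumerate("ACGT")}
    v = np.zeros(4 ** k)
    for i in range(len(seq) - k + 1):
        code = 0
        for c in seq[i:i + k]:
            code = code * 4 + idx[c]
        v[code] += 1
    return v / max(v.sum(), 1)

def mutate(s, rate=0.05):
    return "".join(c if rng.random() > rate else rng.choice(list("ACGT")) for c in s)

# Two toy "gene families": mutated copies of two random ancestral sequences.
anc = ["".join(rng.choice(list("ACGT"), 300)) for _ in range(2)]
seqs = [mutate(anc[i % 2]) for i in range(60)]

X = np.array([kmer_profile(s) for s in seqs])
emb = SpectralEmbedding(n_components=3, affinity="rbf").fit_transform(X)

bics = {k: GaussianMixture(k, random_state=0).fit(emb).bic(emb) for k in range(1, 6)}
best = min(bics, key=bics.get)
labels = GaussianMixture(best, random_state=0).fit_predict(emb)
print("chosen k:", best, "labels:", labels)
```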
Scalable clustering algorithms for continuous environmental flow cytometry.
Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill
2016-02-01
Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach in cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how available algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of the data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for the classification of large-scale, high-frequency flow cytometry data. Source code is available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop.
Spectral mixture analyses of hyperspectral data acquired using a tethered balloon
Chen, Xuexia; Vierling, Lee
2006-01-01
Tethered balloon remote sensing platforms can be used to study radiometric issues in terrestrial ecosystems by effectively bridging the spatial gap between measurements made on the ground and those acquired via airplane or satellite. In this study, the Short Wave Aerostat-Mounted Imager (SWAMI) tethered balloon-mounted platform was utilized to evaluate linear and nonlinear spectral mixture analysis (SMA) for a grassland-conifer forest ecotone during the summer of 2003. Hyperspectral measurements of a 74-m diameter ground instantaneous field of view (GIFOV) attained by the SWAMI were studied. Hyperspectral spectra of four common endmembers (bare soil, grass, tree, and shadow) were collected in situ, and images captured via video camera were interpreted into accurate areal ground cover fractions for evaluating the mixture models. The comparison between the SWAMI spectrum and the spectrum derived by combining in situ spectral data with video-derived areal fractions indicated that nonlinear effects occurred in the near infrared (NIR) region, while nonlinear influences were minimal in the visible region. The evaluation of hyperspectral and multispectral mixture models indicated that nonlinear mixture model-derived areal fractions were sensitive to the model input data, while the linear mixture model performed more stably. Areal fractions of bare soil were overestimated in all models due to the increased radiance of bare soil resulting from side scattering of NIR radiation by adjacent grass and trees. Unmixing errors occurred mainly due to multiple scattering as well as close endmember spectral correlation. In addition, though an apparent endmember assemblage could be derived using linear approaches to yield low residual error, the tree and shade endmember fractions calculated using this technique were erroneous, and therefore separate treatment of endmembers subject to high amounts of multiple scattering (i.e., shadows and trees) must be done with caution. Including the short wave infrared (SWIR) region in the hyperspectral and multispectral endmember data significantly reduced the Pearson correlation coefficient values among endmember spectra. Therefore, combination of visible, NIR, and SWIR information is likely to further improve the utility of SMA in understanding ecosystem structure and function and may help narrow uncertainties when utilizing remotely sensed data to extrapolate trace gas flux measurements from the canopy scale to the landscape scale.
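A minimal sketch of the linear SMA step described above: non-negative endmember fractions that sum to one are solved for with NNLS on an augmented system. The four synthetic endmember spectra and fractions are placeholders, not the SWAMI data.

```python
# Linear spectral unmixing with non-negativity and sum-to-one constraints.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)
n_bands = 50
E = np.abs(rng.normal(0.3, 0.15, (n_bands, 4)))     # soil, grass, tree, shadow spectra
f_true = np.array([0.2, 0.4, 0.3, 0.1])             # true areal fractions
pixel = E @ f_true + rng.normal(0, 0.005, n_bands)  # observed mixed-pixel spectrum

# Sum-to-one constraint: append a heavily weighted row of ones.
w = 100.0
A = np.vstack([E, w * np.ones((1, 4))])
b = np.append(pixel, w * 1.0)
f_hat, _ = nnls(A, b)
print("estimated fractions:", f_hat.round(3), "sum:", f_hat.sum().round(3))
```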
Estimating occupancy and abundance using aerial images with imperfect detection
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.
2017-01-01
Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data on sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated the detection probability of sea otters to be 0.76, the same as for the visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
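The core binomial-Poisson N-mixture likelihood used for such repeated counts can be written in a few lines; the sketch below recovers abundance and detection probability from simulated replicate images by a simple grid search (all settings are illustrative, not the study's fit).

```python
# N-mixture likelihood: y_j | N ~ Binomial(N, p), N ~ Poisson(lambda).
import numpy as np
from scipy.stats import binom, poisson

rng = np.random.default_rng(7)
S, J = 150, 4                                  # sites, replicate images per site
lam_true, p_true = 6.0, 0.76
N = rng.poisson(lam_true, S)
y = rng.binomial(N[:, None], p_true, (S, J))

def loglik(lam, p, y, n_max=40):
    n = np.arange(n_max + 1)
    prior = poisson.pmf(n, lam)                          # P(N = n)
    # P(y_1..J | N = n) for every site and every candidate n, marginalized over n.
    like = np.prod(binom.pmf(y[:, :, None], n[None, None, :], p), axis=1)
    return np.log(like @ prior + 1e-300).sum()

grid = [(l, p) for l in np.linspace(4, 9, 11) for p in np.linspace(0.5, 0.95, 10)]
lam_hat, p_hat = max(grid, key=lambda t: loglik(t[0], t[1], y))
print(f"lambda_hat = {lam_hat:.2f}, p_hat = {p_hat:.2f}")
```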
A hybrid formulation for the numerical simulation of condensed phase explosives
NASA Astrophysics Data System (ADS)
Michael, L.; Nikiforakis, N.
2016-07-01
In this article we present a new formulation and an associated numerical algorithm, for the simulation of combustion and transition to detonation of condensed-phase commercial- and military-grade explosives, which are confined by (or in general interacting with one or more) compliant inert materials. Examples include confined rate-stick problems and interaction of shock waves with gas cavities or solid particles in explosives. This formulation is based on an augmented Euler approach to account for the mixture of the explosive and its products, and a multi-phase diffuse interface approach to solve for the immiscible interaction between the mixture and the inert materials, so it is in essence a hybrid (augmented Euler and multi-phase) model. As such, it has many of the desirable features of the two approaches and, critically for our applications of interest, it provides the accurate recovery of temperature fields across all components. Moreover, it conveys a lot more physical information than augmented Euler, without the complexity of full multi-phase Baer-Nunziato-type models or the lack of robustness of augmented Euler models in the presence of more than two components. The model can sustain large density differences across material interfaces without the presence of spurious oscillations in velocity and pressure, and it can accommodate realistic equations of state and arbitrary (pressure- or temperature-based) reaction-rate laws. Under certain conditions, we show that the formulation reduces to well-known augmented Euler or multi-phase models, which have been extensively validated and used in practice. The full hybrid model and its reduced forms are validated against problems with exact (or independently-verified numerical) solutions and evaluated for robustness for rate-stick and shock-induced cavity collapse case-studies.
Evaluation of DNA mixtures from database search.
Chung, Yuk-Ka; Hu, Yue-Qing; Fung, Wing K
2010-03-01
With the aim of bridging the gap between DNA mixture analysis and DNA database search, a novel approach is proposed to evaluate the forensic evidence of DNA mixtures when the suspect is identified by the search of a database of DNA profiles. General formulae are developed for the calculation of the likelihood ratio for a two-person mixture under general situations including multiple matches and imperfect evidence. The influence of the prior probabilities on the weight of evidence under the scenario of multiple matches is demonstrated by a numerical example based on Hong Kong data. Our approach is shown to be capable of presenting the forensic evidence of DNA mixtures in a comprehensive way when the suspect is identified through database search.
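As a hedged illustration of mixture weight-of-evidence (using the classical combined probability of inclusion rather than the paper's more general likelihood-ratio formulae), the toy example below assumes Hardy-Weinberg equilibrium and invented allele frequencies for a three-locus, two-person mixture.

```python
# Combined probability of inclusion (CPI) for a DNA mixture (toy example).
import numpy as np

# For each locus: population frequencies of the alleles observed in the mixture.
mixture_allele_freqs = [
    np.array([0.12, 0.08, 0.20]),        # locus 1 shows three alleles
    np.array([0.10, 0.25]),              # locus 2 shows two alleles
    np.array([0.05, 0.15, 0.11, 0.09]),  # locus 3 shows four alleles
]

cpi = 1.0
for freqs in mixture_allele_freqs:
    # P(a random person's two alleles are both among the mixture's alleles).
    cpi *= freqs.sum() ** 2

print(f"CPI = {cpi:.3e}; LR-style weight of evidence 1/CPI = {1/cpi:.3e}")
```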
Application of hierarchical Bayesian unmixing models in river sediment source apportionment
NASA Astrophysics Data System (ADS)
Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Kuzyk, Zou Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice
2016-04-01
Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between the ecological 'prey' and 'consumer' concepts and river basin sediment 'sources' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined the advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with the mixture hierarchy, i.e. factors that might exert an influence on the model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound-specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling process, (3) deriving and using informative priors in the sediment fingerprinting context and (4) transparency of the process and replication of model results by other users.
NASA Astrophysics Data System (ADS)
Nascimento, Luis Alberto Herrmann do
This dissertation presents the implementation and validation of the viscoelastic continuum damage (VECD) model for asphalt mixture and pavement analysis in Brazil. It proposes a simulated damage-to-fatigue cracked area transfer function for the layered viscoelastic continuum damage (LVECD) program framework and defines the model framework's fatigue cracking prediction error for asphalt pavement reliability-based design solutions in Brazil. The research is divided into three main steps: (i) implementation of the simplified viscoelastic continuum damage (S-VECD) model in Brazil (Petrobras) for asphalt mixture characterization; (ii) validation of the LVECD model approach for pavement analysis based on field performance observations, and definition of a local simulated damage-to-cracked area transfer function for the Fundao Project's pavement test sections in Rio de Janeiro, RJ; and (iii) validation of the Fundao project local transfer function for use throughout Brazil for asphalt pavement fatigue cracking predictions, based on field performance observations of the National MEPDG Project's pavement test sections, thereby validating the proposed framework's prediction capability. For the first step, the S-VECD test protocol, which uses a controlled-on-specimen strain mode of loading, was successfully implemented at Petrobras and used to characterize Brazilian asphalt mixtures that are composed of a wide range of asphalt binders. This research verified that the S-VECD model coupled with the GR failure criterion is accurate for fatigue life predictions of Brazilian asphalt mixtures, even when very different asphalt binders are used. The applicability of the load amplitude sweep (LAS) test for the fatigue characterization of the asphalt binders was also checked, and the effects of different asphalt binders on the fatigue damage properties of the asphalt mixtures were investigated. The LAS test results, modeled according to VECD theory, presented a strong correlation with the asphalt mixtures' fatigue performance. In the second step, the S-VECD test protocol was used to characterize the asphalt mixtures used in the 27 selected Fundao project test sections subjected to real traffic loading. The asphalt mixture properties, pavement structure data, traffic loading, and climate were then input into the LVECD program for pavement fatigue cracking performance simulations. The simulation results showed good agreement with the field-observed distresses. A damage shift approach, based on the initial simulated damage growth rate, was then introduced in order to obtain a unique relationship between the LVECD-simulated shifted damage and the pavement-observed fatigue cracked areas. This correlation was fitted to a power-form function and defined as the averaged reduced damage-to-cracked area transfer function. The last step consisted of using the averaged reduced damage-to-cracked area transfer function developed in the Fundao project to predict pavement fatigue cracking in 17 National MEPDG project test sections. The procedures for the material characterization and pavement data gathering adopted in this step are similar to those used for the Fundao project simulations.
This research verified that the transfer function defined for the Fundao project sections can be used for the fatigue performance predictions of a wide range of pavements all over Brazil, as the predicted and observed cracked areas for the National MEPDG pavements presented good agreement, following the same trends found for the Fundao project pavement sites. Based on the prediction errors determined for all 44 pavement test sections (Fundao and National MEPDG test sections), the proposed framework's prediction capability was determined so that reliability-based solutions can be applied for flexible pavement design. It was concluded that the proposed LVECD program framework has very good fatigue cracking prediction capability.
Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC
ERIC Educational Resources Information Center
Depaoli, Sarah
2012-01-01
Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…
ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J
2014-07-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
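A toy illustration of the ODE-constrained mixture idea described above: two subpopulations share the same mechanistic ODE but differ in a rate constant, and single-cell endpoint measurements form a two-component mixture whose means come from the ODE solution. Model, parameter names and data are all invented assumptions, not the authors' pathway model.

```python
# ODE-constrained mixture toy: shared mechanism, subpopulation-specific rate k.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize
from scipy.stats import norm

def endpoint(k, T=10.0):
    # dx/dt = k*(1 - x), x(0) = 0: the mechanistic model shared by all cells.
    sol = solve_ivp(lambda t, x: k * (1 - x), (0, T), [0.0], rtol=1e-8)
    return sol.y[0, -1]

rng = np.random.default_rng(8)
k1_true, k2_true, w_true = 0.05, 0.5, 0.3
comp = rng.random(500) < w_true
mu = np.where(comp, endpoint(k1_true), endpoint(k2_true))
data = mu + rng.normal(0, 0.03, 500)            # single-cell measurements

def negloglik(theta):
    k1, k2 = np.exp(theta[0]), np.exp(theta[1])
    w, s = 1 / (1 + np.exp(-theta[2])), np.exp(theta[3])
    lik = w * norm.pdf(data, endpoint(k1), s) + (1 - w) * norm.pdf(data, endpoint(k2), s)
    return -np.log(lik + 1e-300).sum()

fit = minimize(negloglik, x0=[np.log(0.1), np.log(0.3), 0.0, np.log(0.05)],
               method="Nelder-Mead")
k1, k2 = np.exp(fit.x[0]), np.exp(fit.x[1])
print(f"estimated rates: {min(k1, k2):.3f}, {max(k1, k2):.3f}")
```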
Brown, Colin D.; de Zwart, Dick; Diamond, Jerome; Dyer, Scott D.; Holmes, Christopher M.; Marshall, Stuart; Burton, G. Allen
2018-01-01
Ecological risk assessment increasingly focuses on risks from chemical mixtures and multiple stressors because ecosystems are commonly exposed to a plethora of contaminants and nonchemical stressors. To simplify the task of assessing potential mixture effects, we explored 3 land use–related chemical emission scenarios. We applied a tiered methodology to judge the implications of the emissions of chemicals from agricultural practices, domestic discharges, and urban runoff in a quantitative model. The results showed land use–dependent mixture exposures, clearly discriminating downstream effects of land uses, with unique chemical “signatures” regarding composition, concentration, and temporal patterns. Associated risks were characterized in relation to the land‐use scenarios. Comparisons to measured environmental concentrations and predicted impacts showed relatively good similarity. The results suggest that the land uses imply exceedances of regulatory protective environmental quality standards, varying over time in relation to rain events and associated flow and dilution variation. Higher‐tier analyses using ecotoxicological effect criteria confirmed that species assemblages may be affected by exposures exceeding no‐effect levels and that mixture exposure could be associated with predicted species loss under certain situations. The model outcomes can inform various types of prioritization to support risk management, including a ranking across land uses as a whole, a ranking on characteristics of exposure times and frequencies, and various rankings of the relative role of individual chemicals. Though all results are based on in silico assessments, the prospective land use–based approach applied in the present study yields useful insights for simplifying and assessing potential ecological risks of chemical mixtures and can therefore be useful for catchment‐management decisions. Environ Toxicol Chem 2018;37:715–728. PMID:28845901
A mixed model framework for teratology studies.
Braeken, Johan; Tuerlinckx, Francis
2009-10-01
A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.
Lakshmi, Ks; Lakshmi, S
2010-01-01
Two chemometric methods were developed for the simultaneous determination of telmisartan and hydrochlorothiazide. The chemometric methods applied were principal component regression (PCR) and partial least squares (PLS-1). These approaches were successfully applied to quantify the two drugs in the mixture using the information included in the UV absorption spectra of appropriate solutions in the range of 200-350 nm with intervals of Δλ = 1 nm. The calibration of the PCR and PLS-1 models was evaluated by internal validation (prediction of the compounds in the designed training set) and by external validation over laboratory-prepared mixtures and pharmaceutical preparations. The PCR and PLS-1 methods require neither any separation step nor any prior graphical treatment of the overlapping spectra of the two drugs in a mixture. The results of the PCR and PLS-1 methods were compared with each other and good agreement was found.
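The two chemometric routes map directly onto scikit-learn building blocks, as sketched below on synthetic overlapped UV spectra of two analytes; the Gaussian band shapes and concentrations are simulated, not the paper's measurements.

```python
# PCR (PCA + OLS) and PLS-1 on synthetic overlapped two-component spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(9)
wl = np.linspace(200, 350, 151)
band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
s_tel, s_hct = band(260, 25), band(270, 20)           # overlapping pure spectra

C = rng.uniform(2, 20, (30, 2))                       # training concentrations
A = np.outer(C[:, 0], s_tel) + np.outer(C[:, 1], s_hct) \
    + rng.normal(0, 0.002, (30, 151))                 # training absorbance spectra

pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(A, C)
pls = PLSRegression(n_components=3).fit(A, C)

mix = 8.0 * s_tel + 12.5 * s_hct + rng.normal(0, 0.002, 151)
print("PCR:", pcr.predict(mix[None, :]).round(2))
print("PLS:", pls.predict(mix[None, :]).round(2))
```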
Wang, Fei; Syeda-Mahmood, Tanveer; Vemuri, Baba C; Beymer, David; Rangarajan, Anand
2009-01-01
In this paper, we propose a generalized group-wise non-rigid registration strategy for multiple unlabeled point-sets of unequal cardinality, with no bias toward any of the given point-sets. To quantify the divergence between the probability distributions--specifically Mixture of Gaussians--estimated from the given point sets, we use a recently developed information-theoretic measure called Jensen-Renyi (JR) divergence. We evaluate a closed-form JR divergence between multiple probabilistic representations for the general case where the mixture models differ in variance and the number of components. We derive the analytic gradient of the divergence measure with respect to the non-rigid registration parameters, and apply it to numerical optimization of the group-wise registration, leading to a computationally efficient and accurate algorithm. We validate our approach on synthetic data, and evaluate it on 3D cardiac shapes.
NASA Astrophysics Data System (ADS)
Lyubimova, T. P.; Zubova, N. A.
2017-06-01
This paper presents the results of a numerical simulation of the Soret-induced convection of a ternary mixture in a rectangular cavity elongated in the horizontal direction in a gravity field. The cavity has rigid impermeable boundaries. It is heated from below and undergoes translational, linearly polarized vibrations of finite amplitude and frequency in the horizontal direction. The problem is solved by a finite difference method in the framework of a full unsteady non-linear approach. A procedure of diagonalization of the molecular diffusion coefficient matrix is applied, which makes it possible to eliminate the cross-diffusion components in the equations and to reduce the number of governing parameters. The calculations are performed for a model ternary mixture with positive separation ratios of the components. Data on the effect of vibration on the temporal evolution of the instantaneous and averaged fields, and on the integral characteristics of the flow and of heat and mass transfer at different levels of gravity, are obtained.
A competitive binding model predicts the response of mammalian olfactory receptors to mixtures
NASA Astrophysics Data System (ADS)
Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay
Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system, we need methods to predict responses to complex mixtures from single-odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions in which only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odorant responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts the responses of 15 (of 18 tested) receptors to within 10-30% of the observed values, for mixtures with 2, 3 and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions lead to additional nonlinearities observed in mixture responses, such as suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
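The competitive-binding mixture response has a simple closed form, sketched below with invented binding constants and efficacies (not the measured receptor parameters): because all odorants compete for the same site, the mixture response is sub-additive relative to the single-odorant responses.

```python
# Competitive binding: one odor molecule occupies the receptor at a time.
import numpy as np

def mixture_response(conc, K, F, Fmax=1.0):
    """Response to an odorant mixture under competitive receptor binding.

    conc : odorant concentrations, K : per-odorant binding constants,
    F    : per-odorant maximal (efficacy-weighted) responses.
    """
    x = conc / K
    return Fmax * np.sum(F * x) / (1.0 + np.sum(x))

K = np.array([1.0, 5.0, 0.5])          # binding constants of three odorants
F = np.array([1.0, 0.6, 0.2])          # efficacies (partial agonists < 1)

single = [mixture_response(np.eye(3)[i] * 2.0, K, F) for i in range(3)]
mix = mixture_response(np.full(3, 2.0), K, F)
print("single-odorant responses:", np.round(single, 3))
print("mixture response:", round(mix, 3), "(sub-additive: competition for the site)")
```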
Structure-aware depth super-resolution using Gaussian mixture model
NASA Astrophysics Data System (ADS)
Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon
2015-03-01
This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is used as a cue for depth super-resolution, under the assumption that pixels with similar colors likely belong to similar depths. This assumption can induce texture transfer from the color image into the depth map and edge-blurring artifacts at the depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model, in which an estimated depth map is treated as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
Particle Filtering for Model-Based Anomaly Detection in Sensor Networks
NASA Technical Reports Server (NTRS)
Solano, Wanda; Banerjee, Bikramjit; Kraemer, Landon
2012-01-01
A novel technique has been developed for anomaly detection in rocket engine test stand (RETS) data. The objective was to develop a system that postprocesses a CSV file containing the sensor readings and activities (time series) from a rocket engine test and detects any anomalies that might have occurred during the test. The output consists of the names of the sensors that show anomalous behavior, and the start and end time of each anomaly. In order to reduce the involvement of domain experts significantly, several data-driven approaches have been proposed in which models are automatically acquired from the data, thus bypassing the cost and effort of building system models. Many supervised learning methods can efficiently learn operational and fault models, given large amounts of both nominal and fault data. However, for domains such as RETS data, the amount of anomalous data that is actually available is relatively small, making most supervised learning methods rather ineffective and, in general, of limited success in anomaly detection. The fundamental problem with existing approaches is that they assume that the data are iid, i.e., independent and identically distributed, which is violated in typical RETS data. None of these techniques naturally exploits the temporal information inherent in time series data from sensor networks. There are correlations among the sensor readings, not only at the same time, but also across time. However, these approaches have not explicitly identified and exploited such correlations. Given these limitations of model-free methods, there has been renewed interest in model-based methods, specifically graphical methods that explicitly reason temporally. The Gaussian Mixture Model (GMM) in a Linear Dynamic System approach assumes that the multi-dimensional test data are a mixture of multivariate Gaussians, and fits a given number of Gaussian clusters with the help of the well-known Expectation Maximization (EM) algorithm. The parameters thus learned are used for calculating the joint distribution of the observations. However, this GMM assumption is essentially an approximation and signals the potential viability of non-parametric density estimators. This is the key idea underlying the new approach.
Detecting interaction in chemical mixtures can be complicated by differences in the shapes of the dose-response curves of the individual components (e.g. mixtures of full and partial agonists with differing response maxima). We present an analysis scheme where flexible single che...
Dissecting psychiatric spectrum disorders by generative embedding☆☆☆
Brodersen, Kay H.; Deserno, Lorenz; Schlagenhauf, Florian; Lin, Zhihao; Penny, Will D.; Buhmann, Joachim M.; Stephan, Klaas E.
2013-01-01
This proof-of-concept study examines the feasibility of defining subgroups in psychiatric spectrum disorders by generative embedding, using dynamical system models which infer neuronal circuit mechanisms from neuroimaging data. To this end, we re-analysed an fMRI dataset of 41 patients diagnosed with schizophrenia and 42 healthy controls performing a numerical n-back working-memory task. In our generative-embedding approach, we used parameter estimates from a dynamic causal model (DCM) of a visual–parietal–prefrontal network to define a model-based feature space for the subsequent application of supervised and unsupervised learning techniques. First, using a linear support vector machine for classification, we were able to predict individual diagnostic labels significantly more accurately (78%) from DCM-based effective connectivity estimates than from functional connectivity between (62%) or local activity within the same regions (55%). Second, an unsupervised approach based on variational Bayesian Gaussian mixture modelling provided evidence for two clusters which mapped onto patients and controls with nearly the same accuracy (71%) as the supervised approach. Finally, when restricting the analysis only to the patients, Gaussian mixture modelling suggested the existence of three patient subgroups, each of which was characterised by a different architecture of the visual–parietal–prefrontal working-memory network. Critically, even though this analysis did not have access to information about the patients' clinical symptoms, the three neurophysiologically defined subgroups mapped onto three clinically distinct subgroups, distinguished by significant differences in negative symptom severity, as assessed on the Positive and Negative Syndrome Scale (PANSS). In summary, this study provides a concrete example of how psychiatric spectrum diseases may be split into subgroups that are defined in terms of neurophysiological mechanisms specified by a generative model of network dynamics such as DCM. The results corroborate our previous findings in stroke patients that generative embedding, compared to analyses of more conventional measures such as functional connectivity or regional activity, can significantly enhance both the interpretability and performance of computational approaches to clinical classification. PMID:24363992
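The generative-embedding pipeline shape (model-based features, then supervised classification and variational mixture clustering) can be sketched with scikit-learn as below; the random feature vectors stand in for DCM connectivity estimates, and group sizes and effect sizes are invented.

```python
# Model-based feature space -> linear SVM (supervised) + variational GMM (unsupervised).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(10)
n_per, d = 41, 6                                   # subjects per group, DCM parameters
controls = rng.normal(0.0, 1.0, (n_per + 1, d))    # 42 controls
patients = rng.normal(0.8, 1.0, (n_per, d))        # 41 patients, shifted connectivity
X = np.vstack([controls, patients])
y = np.r_[np.zeros(n_per + 1), np.ones(n_per)]

acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"supervised classification accuracy: {acc:.2f}")

# Unsupervised: the variational mixture prunes unneeded components automatically.
vb = BayesianGaussianMixture(n_components=6, weight_concentration_prior=0.1,
                             random_state=0).fit(X)
print("effective clusters:", np.sum(vb.weights_ > 0.05),
      "weights:", vb.weights_.round(2))
```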
Pulley, S; Collins, A L
2018-09-01
The mitigation of diffuse sediment pollution requires reliable provenance information so that measures can be targeted. Sediment source fingerprinting represents one approach for supporting these needs, but recent methodological developments have resulted in increasing complexity of data processing methods, rendering the approach less accessible to non-specialists. A comprehensive new software programme (SIFT; SedIment Fingerprinting Tool) has therefore been developed which guides the user through critical data analysis decisions and automates all calculations. Multiple source group configurations and composite fingerprints are identified and tested using multiple methods of uncertainty analysis. This aims to explore the sediment provenance information provided by the tracers more comprehensively than a single model, and allows model configurations with high uncertainties to be rejected. This paper provides an overview of its application to an agricultural catchment in the UK to determine if the approach used can provide a reduction in uncertainty and an increase in precision. Five source group classifications were used: three were formed using a k-means cluster analysis with 2, 3 and 4 clusters, and two were a priori groups based upon catchment geology. Three different composite fingerprints were used for each classification, and bi-plots, range tests, tracer variability ratios and virtual mixtures were used to test the reliability of each model configuration. Some model configurations performed poorly when apportioning the composition of virtual mixtures, and different model configurations could produce different sediment provenance results despite using composite fingerprints able to discriminate robustly between the source groups. Despite this uncertainty, dominant sediment sources were identified, and those in close proximity to each sediment sampling location were found to be of greatest importance. This new software, by integrating recent methodological developments in tracer data processing, guides users through the key steps. Critically, by applying multiple model configurations and uncertainty assessment, it delivers more robust solutions for informing catchment management of the sediment problem than many previously used approaches.
Maximum workplace concentration values and carcinogenicity classification for mixtures.
Bartsch, R; Forderkunz, S; Reuter, U; Sterzl-Eckert, H; Greim, H
1998-01-01
In Germany, the Commission for the Investigation of Health Hazards of Chemical Compounds in the Work Area (MAK Commission) generally sets maximum workplace concentration values (i.e., a proposed occupational exposure level [OEL]) for single substances, not for mixtures. For mixtures containing substances with genotoxic and carcinogenic potential, the commission considered it scientifically inappropriate to establish a safe threshold. This approach is currently under discussion. Carcinogenic mixtures are categorized according to either the carcinogenicity of the mixture or the classification of the carcinogenic substances included. In regulating exposure to mixtures, an approach similar to that used by the American Conference of Governmental Industrial Hygienists is proposed: for components with the same target organ and mode of action or interfering metabolism, synergistic effects must be expected and the respective OELs must be lowered. However, if there is proof that the components act independently, the OELs of the individual compounds are not considered to be modified. In the view of the commission, calculating OELs for solvent mixtures according to their liquid-phase composition is not justified, and the setting of scientifically based OELs for complex mixtures is not possible. PMID:9860883
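The additivity rule referred to above (for components sharing a target organ and mode of action) is conventionally evaluated as a summed exposure index, as in the sketch below; the concentrations and OELs are invented for illustration, not regulatory values.

```python
# Additive mixture exposure index: acceptable only if sum(C_i / OEL_i) <= 1.
solvents = {
    # name: (measured air concentration, OEL), both in mg/m3 (illustrative values)
    "toluene":      (40.0, 190.0),
    "ethylbenzene": (30.0,  88.0),
    "xylene":       (80.0, 220.0),
}

index = sum(c / oel for c, oel in solvents.values())
print(f"mixture exposure index = {index:.2f} ->",
      "acceptable" if index <= 1 else "OEL exceeded for the mixture")
```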
Segmentation and intensity estimation of microarray images using a gamma-t mixture model.
Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J
2007-02-15
We present a new approach to the analysis of images from complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data, but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal, so it allows for nonzero correlation between the R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity simultaneously, and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the use of the bivariate t distribution for the foreground intensity provides a model that is less sensitive to extreme observations; (5) as a consequence of the aforementioned properties, it allows segmentation to be undertaken for a wide range of spot shapes, including doughnut, sickle-shape and artifacts. We apply our method for gridding, segmentation and estimation to real cDNA microarray images and artificial data. Our method provides better segmentation results in spot shapes, as well as better intensity estimation, than the Spot and spotSegmentation R packages. It detected blank spots as well as bright artifacts in the real data, and estimated spot intensities with high accuracy for the synthetic data. The algorithms were implemented in Matlab; the Matlab codes implementing both the gridding and segmentation/estimation are available upon request.
NASA Astrophysics Data System (ADS)
Torres Astorga, Romina; Velasco, Hugo; Dercon, Gerd; Mabit, Lionel
2017-04-01
Soil erosion and the associated sediment transport and deposition processes are key environmental problems in Central Argentinian watersheds. Several land use practices, such as intensive grazing and crop cultivation, are considered likely to significantly increase land degradation and soil/sediment erosion processes. Characterized by highly erodible soils, the subcatchment Estancia Grande (12.3 km²), located 23 km northeast of San Luis, has been investigated using sediment source fingerprinting techniques to identify critical hot spots of land degradation. The authors created 4 artificial mixtures using known quantities of the most representative sediment sources of the studied catchment. The first mixture was made using four rotation crop soil sources. The second and third mixtures were created using different proportions of 4 different soil sources, including soils from a feedlot, a rotation crop, a walnut forest and a grazing soil. The last tested mixture contained the same sources as the third mixture but with the addition of a fifth soil source (i.e. a native bank soil). The Energy-Dispersive X-Ray Fluorescence (EDXRF) analytical technique was used to reconstruct the source sediment proportions of the original mixtures. Besides using traditional methods of fingerprint selection, such as the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA), the authors used the actual source proportions in the mixtures and selected, from the subset of tracers that passed the statistical tests, specific elemental tracers that were in agreement with the expected mixture contents. The selection process ended with testing in a mixing model all possible combinations of the reduced number of tracers obtained. Alkaline earth metals, especially strontium (Sr) and barium (Ba), were identified as the most effective fingerprints and provided a reduced Mean Absolute Error (MAE) of approximately 2% when reconstructing the 4 artificial mixtures. This study demonstrates that the EDXRF fingerprinting approach performed very well in reconstructing the original mixtures, especially in identifying and quantifying the contribution of the 4 rotation crop soil sources in the first mixture.
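For readers unfamiliar with mixing models, the sketch below shows the core un-mixing step under simplified assumptions: hypothetical tracer concentrations for candidate sources, with source proportions recovered by least squares under non-negativity and sum-to-one constraints (the study's actual workflow additionally selects tracers and scores fits by MAE).

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical tracer concentrations: rows = tracers (e.g. Sr, Ba),
# columns = candidate soil sources
S = np.array([[120.0,  95.0, 140.0,  80.0],   # Sr (mg/kg)
              [410.0, 350.0, 500.0, 300.0]])  # Ba (mg/kg)
m = np.array([110.0, 390.0])                  # measured mixture sediment

def unmix(S, m):
    """Recover source proportions p >= 0 with sum(p) = 1, minimizing misfit."""
    n = S.shape[1]
    objective = lambda p: np.sum((S @ p - m) ** 2)
    constraint = {"type": "eq", "fun": lambda p: p.sum() - 1.0}
    res = minimize(objective, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=[constraint])
    return res.x

print("estimated source proportions:", np.round(unmix(S, m), 3))
```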
NASA Astrophysics Data System (ADS)
Boettcher, Philipp Andreas
Accidental ignition of flammable gases is a critical safety concern in many industrial applications. Particularly in the aviation industry, the main areas of concern on an aircraft are the fuel tank and adjoining regions, where spilled fuel has a high likelihood of creating a flammable mixture. To this end, a fundamental understanding of the ignition phenomenon is necessary in order to develop more accurate test methods and standards as a means of designing safer air vehicles. The focus of this work is thermal ignition, particularly auto-ignition with emphasis on the effect of heating rate, hot surface ignition and flame propagation, and puffing flames. Combustion of hydrocarbon fuels is traditionally separated into slow reaction, cool flame, and ignition regimes based on pressure and temperature. Standard tests, such as the ASTM E659, are used to determine the lowest temperature required to ignite a specific fuel mixed with air at atmospheric pressure. It is expected that the initial pressure and the rate at which the mixture is heated also influence the limiting temperature and the type of combustion. This study investigates the effect of heating rate, between 4 and 15 K/min, and initial pressure, in the range of 25 to 100 kPa, on ignition of n-hexane-air mixtures. Mixtures with equivalence ratios ranging from 0.6 to 1.2 were investigated. The problem is also modeled computationally using an extension of Semenov's classical auto-ignition theory with a detailed chemical mechanism. Experiments and simulations both show that in the same reactor either a slow reaction or an ignition event can take place depending on the heating rate. Analysis of the detailed chemistry demonstrates that a mixture which approaches the ignition region slowly undergoes a significant modification of its composition. This change in composition induces a progressive shift of the explosion limit until the mixture is no longer flammable. A mixture that approaches the ignition region sufficiently rapidly undergoes only a moderate amount of thermal decomposition and explodes quite violently. This behavior can also be captured and analyzed using a one-step reaction model, where the heat release is in competition with the depletion of reactants. Hot surface ignition is examined using a glow plug or heated nickel element in a series of premixed n-hexane-air mixtures. High-speed schlieren photography, a thermocouple, and a fast response pressure transducer are used to record flame characteristics such as ignition temperature, flame speed, pressure rises, and combustion mode. The ignition event is captured by considering the dominant balance of diffusion and chemical reaction that occurs near a hot surface. Experiments and models show a dependence of ignition temperature on mixture composition, initial pressure, and hot surface size. The mixtures exhibit the known lower flammability limit, below which the maximum temperature of the hot surface was insufficient to ignite the mixture. Away from the lower flammability limit, the ignition temperature drops to an almost constant value over a wide range of equivalence ratios (0.7 to 2.8), with large variations as the upper flammability limit is approached. Variations in the initial pressure and equivalence ratio also give rise to different modes of combustion: single flame, re-ignition, and puffing flames. These results are successfully compared to computational results obtained using a flamelet model and a detailed chemical mechanism for n-heptane.
These different regimes can be delineated by considering the competition between inertia, i.e., flame propagation, and buoyancy, which can be expressed through the Richardson number. In experiments of hot surface ignition and subsequent flame propagation, a 10 Hz puffing flame instability is visible in mixtures that are stagnant and premixed prior to the ignition sequence. By varying the size of the hot surface, power input, and combustion vessel volume, we determined that the instability is a function of the interaction of the flame with the fluid flow induced by the combustion products rather than the initial plume established by the hot surface. The phenomenon is accurately reproduced in numerical simulations, and a detailed flow field analysis revealed a competition between the inflow velocity at the base of the flame and the flame propagation speed. The increasing inflow velocity, which exceeds the flame propagation speed, is ultimately responsible for creating a puff. The puff is then accelerated upward, allowing for the creation of the subsequent instabilities. The frequency of the puffing is proportional to the gravitational acceleration and inversely proportional to the flame speed. We propose a relation describing the dependence of the frequency on gravitational acceleration, hot surface diameter, and flame speed. This relation shows good agreement for lean and rich n-hexane-air as well as lean hydrogen-air flames.
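The heating-rate competition described above can be illustrated with a non-dimensional one-step Semenov-type model; the sketch below uses made-up constants (Q, BETA, THETA and the two ramp rates are not fitted to n-hexane), and with these values the slow ramp should consume the reactant without a temperature spike while the fast ramp produces thermal runaway.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Non-dimensional one-step Semenov model: Arrhenius heat release competes
# with reactant depletion while the vessel temperature Tw(t) ramps linearly.
Q, BETA, THETA = 5.0, 0.1, 10.0   # heat release, loss coeff., activation temp.

def rhs(t, s, ramp):
    T, Y = s
    Tw = 1.0 + ramp * t              # linearly heated vessel wall
    w = Y * np.exp(-THETA / T)       # one-step reaction rate
    return [Q * w - BETA * (T - Tw), -w]

for ramp in (1e-4, 1e-2):            # slow vs. fast heating to the same Tw
    t_end = 1.6 / ramp
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, 1.0], args=(ramp,),
                    method="LSODA", max_step=t_end / 2000)
    print(f"ramp={ramp:g}: peak T={sol.y[0].max():.2f}, "
          f"fuel left at end={sol.y[1, -1]:.3f}")
```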
Accounting for Heaping in Retrospectively Reported Event Data – A Mixture-Model Approach
Bar, Haim Y.; Lillard, Dean R.
2012-01-01
When event data are retrospectively reported, more temporally distal events tend to get “heaped” on even multiples of reporting units. Heaping may introduce a type of attenuation bias because it causes researchers to mismatch time-varying right-hand-side variables. We develop a model-based approach to estimate the extent of heaping in the data and how it affects regression parameter estimates. We use smoking cessation data as a motivating example, but our method is general. It facilitates the use of retrospective data from the multitude of cross-sectional and longitudinal studies worldwide that already collect, or could potentially collect, event data. PMID:22733577
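A stripped-down version of the idea can be written as a two-channel likelihood: reports are exact (rounded to the nearest unit) with some probability and heaped (rounded to the nearest multiple of 5) otherwise. The sketch below assumes the latent duration distribution is known, which the authors' full model does not; all numbers are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
true_t = rng.gamma(4.0, 3.0, 2000)             # latent event times (years ago)
heaped = rng.random(2000) < 0.4                # 40% of reports heap
y = np.where(heaped, 5 * np.round(true_t / 5), np.round(true_t))

G = stats.gamma(4.0, scale=3.0).cdf            # assumed-known duration cdf

def exact_prob(y):                             # report rounded to nearest unit
    return G(y + 0.5) - G(y - 0.5)

def heap_prob(y):                              # report rounded to nearest 5
    return np.where(y % 5 == 0, G(y + 2.5) - G(y - 2.5), 0.0)

def nll(p):                                    # two-channel mixture likelihood
    return -np.log((1 - p) * exact_prob(y) + p * heap_prob(y) + 1e-300).sum()

res = minimize_scalar(nll, bounds=(0.0, 1.0), method="bounded")
print(f"estimated heaping share: {res.x:.2f}")   # should be near 0.4
```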
Site remedy and closure decisions are made from a mixture of site data, literature values, and model results. Often assessment of this information is difficult for State Agency case managers because conventional approaches to site characterization do not yield a clear and strai...
A Mixture-Model Approach to Linking ADHD to Adolescent Onset of Illicit Drug Use
ERIC Educational Resources Information Center
Malone, Patrick S.; Van Eck, Kathryn; Flory, Kate; Lamis, Dorian A.
2010-01-01
Prior research findings have been mixed as to whether attention-deficit/hyperactivity disorder (ADHD) is related to illicit drug use independent of conduct problems (CP). With the current study, the authors add to this literature by investigating the association between trajectories of ADHD symptoms across childhood and adolescence and onset of…
ERIC Educational Resources Information Center
Georgiades, Stelios; Szatmari, Peter; Boyle, Michael; Hanna, Steven; Duku, Eric; Zwaigenbaum, Lonnie; Bryson, Susan; Fombonne, Eric; Volden, Joanne; Mirenda, Pat; Smith, Isabel; Roberts, Wendy; Vaillancourt, Tracy; Waddell, Charlotte; Bennett, Teresa; Thompson, Ann
2013-01-01
Background: Autism spectrum disorder (ASD) is characterized by notable phenotypic heterogeneity, which is often viewed as an obstacle to the study of its etiology, diagnosis, treatment, and prognosis. On the basis of empirical evidence, instead of three binary categories, the upcoming DSM-5 will use two dimensions--social…
ERIC Educational Resources Information Center
Eagle, Rose F.; Romanczyk, Raymond G.; Lenzenweger, Mark F.
2010-01-01
The heterogeneity found in autism and related disorders (i.e., "autism spectrum disorders") is widely acknowledged. Even within a specific disorder, such as Autistic Disorder, the range in abilities and clinical presentation is broad. The heterogeneity observed has prompted many researchers to propose subtypes beyond the commonly used DSM-IV-TR…
Vallotton, Nathalie; Price, Paul S
2016-05-17
This paper uses the maximum cumulative ratio (MCR) as part of a tiered approach to evaluate and prioritize the risk of acute ecological effects from combined exposures to the plant protection products (PPPs) measured in 3,099 surface water samples taken from across the United States. Assessments of the reported mixtures performed on a substance-by-substance basis and using a Tier One cumulative assessment based on the lowest acute ecotoxicity benchmark gave the same findings for 92.3% of the mixtures. These mixtures either did not indicate a potential risk for acute effects or included one or more individual PPPs that had concentrations in excess of their benchmarks. A Tier Two assessment using a trophic level approach was applied to evaluate the remaining 7.7% of the mixtures. This assessment reduced the number of mixtures of concern by eliminating the combination of endpoints from multiple trophic levels, identified invertebrates and nonvascular plants as the most susceptible nontarget organisms, and indicated that only a very limited number of PPPs drove the potential concerns. The combination of the measures of cumulative risk and the MCR enabled the identification of a small subset of mixtures where a potential risk would be missed in substance-by-substance assessments.
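The quantities involved are simple to compute; the sketch below, with made-up concentrations and benchmarks, shows per-substance hazard quotients, the Tier One hazard index, and the MCR (the cumulative hazard index divided by the largest single-substance hazard quotient).

```python
import numpy as np

# hypothetical measured concentrations and acute benchmarks (ug/L)
conc      = np.array([0.8, 0.1, 2.0, 0.05])
benchmark = np.array([4.0, 1.0, 25.0, 0.5])

hq  = conc / benchmark        # per-substance hazard quotients
hi  = hq.sum()                # Tier One cumulative hazard index
mcr = hi / hq.max()           # maximum cumulative ratio

print(f"HI={hi:.2f}, MCR={mcr:.2f}")
# MCR near 1: one substance drives the risk; MCR near n: risk is shared
```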
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
The normal mixture distribution model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distribution model. First, we present the application of the normal mixture distribution model in empirical finance, where we fit our real data. Second, we present the application of the normal mixture distribution model in risk analysis, where we apply it to evaluate the value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating value at risk (VaR) and conditional value at risk (CVaR), capturing the stylized facts of non-normality and leptokurtosis in the return distribution.
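For a fitted two-component normal mixture, VaR is the quantile of the mixture cdf and CVaR follows from a closed-form lower-tail expectation. A minimal sketch with invented regime parameters (weights, means, standard deviations) follows; returns are in fractional units, so both risk measures come out negative.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

# illustrative two-component normal mixture for monthly returns:
# a "calm" and a "turbulent" regime (parameters are made up)
w  = np.array([0.85, 0.15])
mu = np.array([0.010, -0.020])
sd = np.array([0.040, 0.120])

def cdf(x):
    return np.sum(w * stats.norm.cdf(x, mu, sd))

def var_cvar(alpha):
    q = brentq(lambda x: cdf(x) - alpha, -1.0, 1.0)   # VaR quantile
    z = (q - mu) / sd
    # closed-form lower-tail expectation of a normal mixture:
    # E[X 1{X<=q}] = sum_i w_i (mu_i Phi(z_i) - sd_i phi(z_i))
    tail_mean = np.sum(w * (mu * stats.norm.cdf(z) - sd * stats.norm.pdf(z)))
    return q, tail_mean / alpha                       # CVaR = E[X | X <= VaR]

for a in (0.05, 0.01):
    v, c = var_cvar(a)
    print(f"alpha={a}: VaR={v:.4f}, CVaR={c:.4f}")
```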
Considering the cumulative risk of mixtures of chemicals – A challenge for policy makers
2012-01-01
Background: The current paradigm for the assessment of the health risk of chemical substances focuses primarily on the effects of individual substances for determining the doses of toxicological concern in order to inform appropriately the regulatory process. These policy instruments place varying requirements on health and safety data of chemicals in the environment. REACH focuses on safety of individual substances; yet all the other facets of public health policy that relate to chemical stressors put emphasis on the effects of combined exposure to mixtures of chemical and physical agents. This emphasis brings about methodological problems linked to the complexity of the respective exposure pathways; the effect (more complex than simple additivity) of mixtures (the so-called 'cocktail effect'); dose extrapolation, i.e. the extrapolation of the validity of dose-response data to dose ranges that extend beyond the levels used for the derivation of the original dose-response relationship; the integrated use of toxicity data across species (including human clinical, epidemiological and biomonitoring data); and variation in inter-individual susceptibility associated with both genetic and environmental factors.
Methods: In this paper we give an overview of the main methodologies available today to estimate the human health risk of environmental chemical mixtures, ranging from dose addition to independent action, and from ignoring interactions among the mixture constituents to modelling their biological fate taking into account the biochemical interactions affecting both internal exposure and the toxic potency of the mixture.
Results: We discuss their applicability, possible options available to policy makers and the difficulties and potential pitfalls in implementing these methodologies in the frame of the currently existing policy framework in the European Union. Finally, we suggest a pragmatic solution for policy/regulatory action that would facilitate the evaluation of the health effects of chemical mixtures in the environment and consumer products.
Conclusions: One universally applicable methodology does not yet exist. Therefore, a pragmatic, tiered approach to regulatory risk assessment of chemical mixtures is suggested, encompassing (a) the use of dose addition to calculate a hazard index that takes into account interactions among mixture components; and (b) the use of the connectivity approach in data-rich situations to integrate mechanistic knowledge at different scales of biological organization. PMID:22759500
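As a small illustration of the two reference models mentioned above, the sketch below compares dose addition (via toxic units, assuming identically shaped Hill curves with slope 1) and independent action (effects combining like independent probabilities) for a hypothetical binary mixture.

```python
import numpy as np

def hill(c, ec50, n=1.0):
    """Fractional effect of a single substance (Hill curve)."""
    return c**n / (c**n + ec50**n)

ec50 = np.array([2.0, 10.0])       # hypothetical potencies
dose = np.array([1.0, 5.0])        # mixture composition (each at half its EC50)

# Independent action: effects combine like independent probabilities
ia = 1.0 - np.prod(1.0 - hill(dose, ec50))

# Dose addition: sum the toxic units, then evaluate the common (n=1) Hill
# curve at the toxic-unit-equivalent dose
tu = np.sum(dose / ec50)
da = tu / (tu + 1.0)

print(f"IA prediction: {ia:.3f}, DA prediction: {da:.3f}")
```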
Ssekandi, W; Mulumba, J W; Colangelo, P; Nankya, R; Fadda, C; Karungi, J; Otim, M; De Santis, P; Jarvis, D I
The bean fly (Ophiomyia spp.) is considered the most economically damaging field insect pest of common beans in Uganda. Despite the use of existing pest management approaches, reported damage has remained high. Forty-eight traditional and improved common bean varieties currently grown in farmers' fields were evaluated for resistance against bean fly. Data on bean fly incidence, severity and root damage from bean stem maggot were collected. A generalized linear mixed model (GLMM) revealed significant resistance to bean fly in the Ugandan traditional varieties. A popular resistant traditional variety and a popular susceptible commercial variety were selected from the 48 varieties and evaluated in pure and mixed stands. The incidence of bean fly infestation on both varieties in mixtures with different arrangements (systematic random versus rows), and different proportions within each of the two arrangements, was measured and analysed using GLMMs. The proportion of the resistant variety in a mixture and the arrangement type significantly decreased bean fly damage compared to pure stands, with the highest decrease in damage registered in the systematic random mixture with at least 50% of the resistant variety. The highest reduction in root damage, evident 21 days after planting, was found in systematic random mixtures with at least 50% of the resistant variety. Smallholder farmers in East Africa and elsewhere in the world have local preferences for growing bean varieties in genetic mixtures. These mixtures can be enhanced by the use of resistant varieties to reduce bean fly damage on susceptible popular varieties.
Hierarchical Analytical Approaches for Unraveling the Composition of Proprietary Mixtures
The compositions of commercial mixtures, including pesticide inert ingredients, aircraft deicers, aqueous film-forming foam (AFFF) formulations and, by analogy, fracking fluids, are proprietary. Quantitative analytical methodologies can only be developed for mixture components once their identities are known. Because proprietary mixtures may contain volatile and non-volatile components, a hierarchy of analytical methods is often required for the full identification of all proprietary mixture components.
MixGF: spectral probabilities for mixture spectra from more than one peptide.
Wang, Jian; Bourne, Philip E; Bandeira, Nuno
2014-12-01
In large-scale proteomic experiments, multiple peptide precursors are often cofragmented simultaneously in the same mixture tandem mass (MS/MS) spectrum. These spectra tend to elude current computational tools because of the ubiquitous assumption that each spectrum is generated from only one peptide. Therefore, tools that consider multiple peptide matches to each MS/MS spectrum can potentially improve the relatively low spectrum identification rate often observed in proteomics experiments. More importantly, data independent acquisition protocols promoting the cofragmentation of multiple precursors are emerging as alternative methods that can greatly improve the throughput of peptide identifications but their success also depends on the availability of algorithms to identify multiple peptides from each MS/MS spectrum. Here we address a fundamental question in the identification of mixture MS/MS spectra: determining the statistical significance of multiple peptides matched to a given MS/MS spectrum. We propose the MixGF generating function model to rigorously compute the statistical significance of peptide identifications for mixture spectra and show that this approach improves the sensitivity of current mixture spectra database search tools by ≈30-390%. Analysis of multiple data sets with MixGF reveals that in complex biological samples the number of identified mixture spectra can be as high as 20% of all the identified spectra and the number of unique peptides identified only in mixture spectra can be up to 35.4% of those identified in single-peptide spectra. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.
Rodea-Palomares, Ismael; Gonzalez-Pleiter, Miguel; Gonzalo, Soledad; Rosal, Roberto; Leganes, Francisco; Sabater, Sergi; Casellas, Maria; Muñoz-Carpena, Rafael; Fernández-Piñas, Francisca
2016-01-01
The ecological impacts of emerging pollutants such as pharmaceuticals are not well understood. The lack of experimental approaches for the identification of pollutant effects in realistic settings (that is, low doses, complex mixtures, and variable environmental conditions) supports the widespread perception that these effects are often unpredictable. To address this, we developed a novel screening method (GSA-QHTS) that couples the computational power of global sensitivity analysis (GSA) with the experimental efficiency of quantitative high-throughput screening (QHTS). We present a case study where GSA-QHTS allowed for the identification of the main pharmaceutical pollutants (and their interactions), driving biological effects of low-dose complex mixtures at the microbial population level. The QHTS experiments involved the integrated analysis of nearly 2700 observations from an array of 180 unique low-dose mixtures, representing the most complex and data-rich experimental mixture effect assessment of main pharmaceutical pollutants to date. An ecological scaling-up experiment confirmed that this subset of pollutants also affects typical freshwater microbial community assemblages. Contrary to our expectations and challenging established scientific opinion, the bioactivity of the mixtures was not predicted by the null mixture models, and the main drivers that were identified by GSA-QHTS were overlooked by the current effect assessment scheme. Our results suggest that current chemical effect assessment methods overlook a substantial number of ecologically dangerous chemical pollutants and introduce a new operational framework for their systematic identification. PMID:27617294
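GSA itself is easy to demonstrate: the sketch below estimates total-order Sobol indices with the Jansen pick-freeze estimator for a hypothetical five-pollutant mixture whose response depends on two components and their interaction (the response function and sample sizes are invented, and this is not the GSA-QHTS pipeline itself).

```python
import numpy as np

rng = np.random.default_rng(0)

def response(X):
    """Hypothetical mixture bioassay response: two actives plus an interaction."""
    return 1.2 * X[:, 0] + 0.4 * X[:, 2] + 1.5 * X[:, 0] * X[:, 2]

n, d = 100_000, 5                        # samples, number of pollutants
A = rng.random((n, d))
B = rng.random((n, d))
fA = response(A)
var = fA.var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # resample only factor i
    # Jansen estimator for the total-order Sobol index of factor i
    total = 0.5 * np.mean((fA - response(ABi)) ** 2) / var
    print(f"pollutant {i}: total-order Sobol index = {total:.2f}")
```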
Multi-task feature learning by using trace norm regularization
NASA Astrophysics Data System (ADS)
Jiangmei, Zhang; Binfeng, Yu; Haibo, Ji; Wang, Kunpeng
2017-11-01
Multi-task learning can exploit the correlation among multiple related machine learning problems to improve performance. This paper considers applying the multi-task learning method to learn a single task. We propose a new learning approach, which employs a mixture-of-experts model to divide a learning task into several related sub-tasks, and then uses trace norm regularization to extract a common feature representation of these sub-tasks. A nonlinear extension of this approach using kernels is also provided. Experiments conducted on both simulated and real data sets demonstrate the advantage of the proposed approach.
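A minimal sketch of the trace-norm part of such an approach is shown below: several related linear sub-tasks share a low-rank coefficient matrix, recovered by proximal gradient descent with singular-value soft-thresholding (the data, step size, and regularization weight are illustrative, and the mixture-of-experts task splitting is not included).

```python
import numpy as np

def prox_trace(W, tau):
    """Prox of tau * ||W||_* : soft-threshold the singular values of W."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
d, T, n = 20, 5, 200                       # features, sub-tasks, samples/task
W_true = rng.normal(size=(d, 2)) @ rng.normal(size=(2, T))   # rank-2 sharing
X = [rng.normal(size=(n, d)) for _ in range(T)]
Y = [X[t] @ W_true[:, t] + 0.1 * rng.normal(size=n) for t in range(T)]

W, step, lam = np.zeros((d, T)), 0.5, 1.0
for _ in range(500):                       # proximal gradient iterations
    G = np.column_stack([X[t].T @ (X[t] @ W[:, t] - Y[t]) / n
                         for t in range(T)])
    W = prox_trace(W - step * G, step * lam)

print("rank of learned coefficient matrix:", np.linalg.matrix_rank(W, tol=1e-6))
```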
NASA Astrophysics Data System (ADS)
Fomin, P. A.
2018-03-01
Two-step approximate models of the chemical kinetics of detonation combustion of (i) one hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane etc.) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures in which the reaction products contain carbon molecules. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The models can be used for calculating the thermodynamic parameters of the mixture in a state of chemical equilibrium.
ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.
2014-01-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156
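The core idea, mixture components whose means are constrained to ODE solutions, can be sketched compactly: for first-order decay x' = -kx the ODE solves to exp(-kt), and a two-subpopulation mixture likelihood is maximized directly. All rates, weights, and noise levels below are invented, and the authors' actual framework is far more general.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)
tp = np.repeat([0.5, 1.0, 2.0, 4.0], 250)          # snapshot sampling times
k_true, w_true = np.array([0.3, 1.5]), 0.6         # two subpopulation rates
z = rng.random(tp.size) < w_true                   # latent subpopulation label
y = (np.exp(-np.where(z, k_true[0], k_true[1]) * tp)
     + 0.03 * rng.normal(size=tp.size))

def nll(theta):
    k1, k2, w, sig = theta
    if not (0 < w < 1 and sig > 0 and k1 > 0 and k2 > 0):
        return np.inf
    # mixture density with ODE-defined component means x_j(t) = exp(-k_j t)
    p = (w * stats.norm.pdf(y, np.exp(-k1 * tp), sig)
         + (1 - w) * stats.norm.pdf(y, np.exp(-k2 * tp), sig))
    return -np.log(p + 1e-300).sum()

res = minimize(nll, [0.5, 1.0, 0.5, 0.1], method="Nelder-Mead")
print("k1, k2, weight, sigma ->", np.round(res.x, 3))
```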
Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien
2012-01-01
Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder in the full temperature range, input data on effective complex permittivity obtained from direct measurement have, up to now, no substitute.
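As one concrete example of the classical models compared, the Maxwell Garnett mixing rule for spherical inclusions in a host matrix can be written in a few lines; the permittivity values below are invented placeholders, not measurements from the paper.

```python
import numpy as np

def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett effective permittivity for spherical inclusions
    (eps_i, volume fraction f) embedded in a host matrix (eps_m)."""
    num = eps_i - eps_m
    den = eps_i + 2.0 * eps_m
    return eps_m * (1.0 + 3.0 * f * num / (den - f * num))

# hypothetical values: lossy metal-like inclusions in a dielectric host
eps_host = 2.5 + 0.01j
eps_incl = -40.0 + 12.0j
for f in (0.05, 0.15, 0.30):
    print(f"f={f:.2f}: eps_eff = {maxwell_garnett(eps_host, eps_incl, f):.3f}")
```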
Tian, Dayong; Lin, Zhifen; Yin, Daqiang; Zhang, Yalei; Kong, Deyang
2012-02-01
Environmental contaminants are usually encountered as mixtures, and many of these mixtures yield synergistic or antagonistic effects attributable to an intracellular chemical reaction, posing a potential threat to ecological systems. However, how the atomic charges of individual chemicals determine their intracellular chemical reactions, and in turn the joint effects of mixtures containing reactive toxicants, is not well understood. To address this issue, the joint effects between cyanogenic toxicants and aldehydes on Photobacterium phosphoreum were observed in the present study. Their toxicological joint effects differed from one another. This difference is inherently related to two atomic charges of the individual chemicals: the oxygen charge of -CHO in aldehyde toxicants (O(aldehyde toxicant)) and the carbon-atom charge of the carbon chain in the cyanogenic toxicant (C(cyanogenic toxicant)). Based on these two atomic charges, the following QSAR (quantitative structure-activity relationship) model was proposed: when (O(aldehyde toxicant) - C(cyanogenic toxicant)) > -0.125, the joint effect of equitoxic binary mixtures at median inhibition (TU, the sum of toxic units) can be calculated as TU = 1.00 ± 0.20; when (O(aldehyde toxicant) - C(cyanogenic toxicant)) ≤ -0.125, the joint effect can be calculated using TU = -27.6 × O(aldehyde toxicant) - 5.22 × C(cyanogenic toxicant) - 6.97 (n = 40, r = 0.887, SE = 0.195, F = 140, p < 0.001, q²(LOO) = 0.748; SE is the standard error of the regression, F is the F-test statistic). The result provides insight into the relationship between atomic charges and joint effects for mixtures containing cyanogenic toxicants and aldehydes. This demonstrates that the essence of the joint effects resulting from intracellular chemical reactions depends on the atomic charges of the individual chemicals. The present study provides a possible approach for the development of QSAR models for mixtures containing reactive toxicants based on atomic charges. Copyright © 2011 SETAC.
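The reported model is easy to apply as a piecewise rule; the sketch below encodes it directly, with the two atomic-charge inputs as hypothetical values.

```python
def predicted_tu(o_ald, c_cyan):
    """Piecewise QSAR for the joint effect (sum of toxic units, TU) of a
    binary aldehyde / cyanogenic-toxicant mixture, per the reported model."""
    if o_ald - c_cyan > -0.125:
        return 1.0                     # additive joint effect (TU ~ 1.00±0.20)
    return -27.6 * o_ald - 5.22 * c_cyan - 6.97

# hypothetical atomic charges (in units of e)
print(predicted_tu(-0.30, -0.10))   # difference -0.20 <= -0.125 -> regression
print(predicted_tu(-0.20, -0.15))   # difference -0.05 > -0.125 -> TU = 1
```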
Experimentally Derived Mechanical and Flow Properties of Fine-grained Soil Mixtures
NASA Astrophysics Data System (ADS)
Schneider, J.; Peets, C. S.; Flemings, P. B.; Day-Stirrat, R. J.; Germaine, J. T.
2009-12-01
As silt content in mudrocks increases, compressibility linearly decreases and permeability exponentially increases. We prepared mixtures of natural Boston Blue Clay (BBC) and synthetic silt in the ratios of 100:0, 86:14, 68:32, and 50:50, respectively. To recreate natural conditions yet remove variability and soil disturbance, we resedimented all mixtures to a total stress of 100 kPa. We then loaded them to approximately 2.3 MPa in a CRS (constant-rate-of-strain) uniaxial consolidation device. The analyses show that the higher the silt content in the mixture, the stiffer the material is. The compression index as well as the liquid and plastic limits linearly decrease with increasing silt content. Vertical permeability increases exponentially with porosity as well as with silt content. Fabric alignment determined through High Resolution X-ray Texture Goniometry (HRXTG), expressed as maximum pole density (m.r.d.), decreases with silt content at a given stress. However, this relationship is not linear; instead, there are two clusters: the mixtures with higher clay contents (100:0, 86:14) have m.r.d. around 3.9 and the mixtures with higher silt contents (68:32, 50:50) have m.r.d. around 2.5. Specific surface area (SSA) measurements show a positive correlation with the total clay content. The amount of silt added to the clay reduces specific surface area, grain orientation, and fabric alignment; thus, it affects compression and fluid flow behavior on the micro- and macroscale. Our results are comparable with previous studies of kaolinite/silt mixtures (Konrad & Samson [2000], Wagg & Konrad [1990]). We are studying this behavior to understand how fine-grained rocks consolidate. This problem is important to practical and fundamental programs. For example, these sediments can potentially act as either a tight gas reservoir or a seal for hydrocarbons or geologic storage of CO2. This study also provides a systematic approach for developing models of permeability and compressibility behavior needed as inputs for basin modeling.
Modeling and analysis of personal exposures to VOC mixtures using copulas
Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart
2014-01-01
Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs) with aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction, which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often, a single compound dominated the mixture; however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration. Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10⁻³ for about 10% of RIOPA participants. Factors affecting the likelihood of high concentration mixtures included city, participant ethnicity, and house air exchange rates. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy, and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that poorly fit lognormal distributions and that represent the greatest risks. PMID:24333991
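The copula construction itself is straightforward to sketch: sample dependent uniforms from a copula, then push them through the marginal quantile functions. The toy below uses a Gaussian copula with invented lognormal marginals for a two-VOC mixture (the study found Gumbel and t copulas fit best, and its marginals were fitted to RIOPA data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical lognormal marginals for two VOCs
marg = [stats.lognorm(s=1.0, scale=2.0), stats.lognorm(s=0.8, scale=5.0)]
rho = 0.7                                  # invented dependence parameter

# Gaussian copula: correlated normals -> uniforms -> marginal quantiles
L = np.linalg.cholesky([[1.0, rho], [rho, 1.0]])
u = stats.norm.cdf(L @ rng.normal(size=(2, 10_000)))
x = np.vstack([m.ppf(ui) for m, ui in zip(marg, u)])

total = x.sum(axis=0)                      # summed mixture concentration
print("95th percentile of summed exposure:",
      np.round(np.percentile(total, 95), 2))
```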
Zhao, Rui; Catalano, Paul; DeGruttola, Victor G.; Michor, Franziska
2017-01-01
The dynamics of tumor burden, secreted proteins, or other biomarkers over time are often used to evaluate the effectiveness of therapy and to predict outcomes for patients. Many methods have been proposed to investigate longitudinal trends to better characterize patients and to understand disease progression. However, most approaches assume a homogeneous patient population and a uniform response trajectory over time and across patients. Here, we present a mixture piecewise linear Bayesian hierarchical model, which takes into account both population heterogeneity and nonlinear relationships between biomarkers and time. Simulation results show that our method was able to classify subjects according to their patterns of treatment response with greater than 80% accuracy in the three scenarios tested. We then applied our model to a large randomized controlled phase III clinical trial of multiple myeloma patients. Analysis results suggest that the longitudinal tumor burden trajectories in multiple myeloma patients are heterogeneous and nonlinear, even among patients assigned to the same treatment cohort. In addition, between cohorts, there are distinct differences in terms of the regression parameters and the distributions among categories in the mixture. Those results imply that longitudinal data from clinical trials may harbor unobserved subgroups and nonlinear relationships; accounting for both may be important for analyzing longitudinal data. PMID:28723910
A Diffuse Interface Model with Immiscibility Preservation
Tiwari, Arpit; Freund, Jonathan B.; Pantano, Carlos
2013-01-01
A new, simple, and computationally efficient interface capturing scheme based on a diffuse interface approach is presented for simulation of compressible multiphase flows. Multi-fluid interfaces are represented using field variables (interface functions) with associated transport equations that are augmented, with respect to an established formulation, to enforce a selected interface thickness. The resulting interface region can be set just thick enough to be resolved by the underlying mesh and numerical method, yet thin enough to provide an efficient model for dynamics of well-resolved scales. A key advance in the present method is that the interface regularization is asymptotically compatible with the thermodynamic mixture laws of the mixture model upon which it is constructed. It incorporates first-order pressure and velocity non-equilibrium effects while preserving interface conditions for equilibrium flows, even within the thin diffused mixture region. We first quantify the improved convergence of this formulation in some widely used one-dimensional configurations, then show that it enables fundamentally better simulations of bubble dynamics. Demonstrations include both a spherical bubble collapse, which is shown to maintain excellent symmetry despite the Cartesian mesh, and a jetting bubble collapse adjacent to a wall. Comparisons show that without the new formulation the jet is suppressed by numerical diffusion leading to qualitatively incorrect results. PMID:24058207
Modeling sports highlights using a time-series clustering framework and model interpretation
NASA Astrophysics Data System (ADS)
Radhakrishnan, Regunathan; Otsuka, Isao; Xiong, Ziyou; Divakaran, Ajay
2005-01-01
In our past work on sports highlights extraction, we have shown the utility of detecting audience reaction using an audio classification framework. The audio classes in the framework were chosen based on intuition. In this paper, we present a systematic way of identifying the key audio classes for sports highlights extraction using a time-series clustering framework. We treat the low-level audio features as a time series and model the highlight segments as "unusual" events in a background of a "usual" process. The set of audio classes to characterize the sports domain is then identified by analyzing the consistent patterns in each of the clusters output from the time-series clustering framework. The distribution of features from the training data so obtained for each of the key audio classes is parameterized by a Minimum Description Length Gaussian Mixture Model (MDL-GMM). We also interpret the meaning of each of the mixture components of the MDL-GMM for the key audio class (the "highlight" class) that is correlated with highlight moments. Our results show that the "highlight" class is a mixture of audience cheering and the commentator's excited speech. Furthermore, we show that the precision-recall performance for highlights extraction based on this "highlight" class is better than that of our previous approach, which used only audience cheering as the key highlight class.
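The MDL criterion used here is closely related to BIC; a minimal sketch of selecting the number of GMM components by such a criterion, on invented two-cluster data rather than audio features, is shown below.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (300, 2)),    # invented "usual" cluster
               rng.normal(4.0, 1.0, (200, 2))])   # invented "unusual" cluster

# MDL/BIC-style selection of the number of mixture components
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 6)}
print("components selected:", min(bic, key=bic.get))
```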
Heterojunctions of model CdTe/CdSe mixtures
van Swol, Frank; Zhou, Xiaowang W.; Challa, Sivakumar R.; ...
2015-03-18
We report on the strain behavior of compound mixtures of model group II-VI semiconductors. We use the Stillinger-Weber Hamiltonian that we recently introduced, specifically developed to model binary mixtures of group II-VI compounds such as CdTe and CdSe. We also employ molecular dynamics simulations to examine the behavior of thin sheets of material, bilayers of CdTe and CdSe. The lattice mismatch between the two compounds leads to a strong bending of the entire sheet, with about a 0.5 to 1° deflection between neighboring planes. To further analyze bilayer bending, we introduce a simple one-dimensional model and use energy minimization to find the angle of deflection. The analysis is equivalent to a least-squares straight line fit. We consider the effects of bilayers which are asymmetric with respect to the thickness of the CdTe and CdSe parts. We thus learn that the bending can be subdivided into four kinds depending on the compressive/tensile nature of each outer plane of the sheet. We use this approach to directly compare our findings with experimental results on the bending of CdTe/CdSe rods. To reduce the effects of the lattice mismatch we explore diffuse interfaces, where we mix (i.e. alloy) Te and Se, and estimate the strain response.
Bayesian hierarchical functional data analysis via contaminated informative priors.
Scarpa, Bruno; Dunson, David B
2009-09-01
A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.
Wojcik, Pawel Jerzy; Pereira, Luís; Martins, Rodrigo; Fortunato, Elvira
2014-01-13
An efficient mathematical strategy in the field of solution-processed electrochromic (EC) films is outlined as a combination of experimental work, modeling, and information extraction from massive computational data via statistical software. Design of Experiment (DOE) was used for statistical multivariate analysis and prediction of mixtures through a multiple regression model, as well as the optimization of a five-component sol-gel precursor subjected to complex constraints. This approach significantly reduces the number of experiments to be realized, from 162 in the full factorial (L=3) and 72 in the extreme vertices (D=2) approach down to only 30 runs, while still maintaining a high accuracy of the analysis. By carrying out a finite number of experiments, the empirical modeling in this study shows reasonably good prediction ability in terms of overall EC performance. An optimized ink formulation was employed in a prototype passive EC matrix fabricated to test this optically active material system together with a solid-state electrolyte for prospective application in EC displays. Coupling DOE with chromogenic material formulation shows the potential to maximize the capabilities of these systems and ensures increased productivity in many potential solution-processed electrochemical applications.
Lemieux, Sébastien
2006-08-25
The identification of differentially expressed genes (DEGs) from Affymetrix GeneChip arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied to the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental designs with or without replication. On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes with fewer than 10% false positives. This accuracy was achieved both using the three replicates per condition available in the dataset and using only one replicate per condition. The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
A New Self-Consistent Field Model of Polymer/Nanoparticle Mixture
NASA Astrophysics Data System (ADS)
Chen, Kang; Li, Hui-Shu; Zhang, Bo-Kai; Li, Jian; Tian, Wen-De
2016-02-01
Field-theoretical methods are efficient in predicting the assembled structures of polymeric systems. However, it is challenging to generalize these methods to study polymer/nanoparticle mixtures due to their multi-scale nature. Here, we develop a new field-based model which unifies the nanoparticle description with the polymer field within the self-consistent field theory. Instead of being an “ensemble-averaged” continuous distribution, the particle density in the final morphology can represent individual particles located at preferred positions. The discreteness of the particle density allows our model to properly address the polymer-particle interface and the excluded-volume interaction. We use this model to study the simplest system of nanoparticles immersed in a dense homopolymer solution. The flexibility of tuning the interfacial details allows our model to capture rich phenomena such as bridging aggregation and depletion attraction. Insights are obtained on the enthalpic and/or entropic origin of the structural variation due to the competition between depletion and interfacial interaction. This approach is readily extendable to the study of more complex polymer-based nanocomposites or biology-related systems, such as dendrimer/drug encapsulation and membrane/particle assembly.
Modeling Concept Dependencies for Event Detection
2014-04-04
Gaussian Mixture Model (GMM). Jiang et al. [8] provide a summary of experiments for TRECVID MED 2010. They employ low-level features such as SIFT and... event detection literature. Ballan et al. [2] present a method to introduce temporal information for video event detection with a BoW (bag-of-words) approach. Zhou et al. [24] study video event detection by encoding a video with a set of bag of SIFT feature vectors and describe the distribution with a
Estimation and Model Selection for Finite Mixtures of Latent Interaction Models
ERIC Educational Resources Information Center
Hsu, Jui-Chen
2011-01-01
Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to proceed when unobserved population heterogeneity exists in the endogenous latent variables of nonlinear structural equation models. The current study estimates a mixture of latent interaction…
Breakdown and Limit of Continuum Diffusion Velocity for Binary Gas Mixtures from Direct Simulation
NASA Astrophysics Data System (ADS)
Martin, Robert Scott; Najmabadi, Farrokh
2011-05-01
This work investigates the breakdown of the continuum relations for diffusion velocity in inert binary gas mixtures. Values of the relative diffusion velocities for components of a gas mixture may be calculated using Chapman-Enskog theory and arise not only from concentration gradients, but also from pressure and temperature gradients in the flow, as described by Hirschfelder. Because Chapman-Enskog theory employs a linear perturbation around equilibrium, it is expected to break down when the velocity distribution deviates significantly from equilibrium. This breakdown of the overall flow has long been an area of interest in rarefied gas dynamics. By comparing the continuum values to results from Bird's DS2V Monte Carlo code, we propose a new limit on the continuum approach specific to binary gases. To remove the confounding influence of an inconsistent molecular model, we also present the application of the variable soft sphere (VSS) model used in DS2V to the continuum diffusion velocity calculation. By fitting sample asymptotic curves to the breakdown, we propose a limit, Vmax, that is a fraction of an analytically derived limit set by the kinetic temperature of the mixture. With an expected deviation of only 2% between the physical values and continuum calculations within ±Vmax/4, we suggest this as a conservative estimate of the range of applicability of the continuum theory.
Scale Mixture Models with Applications to Bayesian Inference
NASA Astrophysics Data System (ADS)
Qin, Zhaohui S.; Damien, Paul; Walker, Stephen
2003-11-01
Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
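A classical example of this representation: a standard normal is a scale mixture of uniforms with a gamma mixing distribution, X | V = v ~ Uniform(-sqrt(v), sqrt(v)) with V ~ Gamma(3/2, rate 1/2). The sketch below verifies this by simulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Normal as a scale mixture of uniforms: V ~ Gamma(3/2, rate 1/2) and
# X | V = v ~ Uniform(-sqrt(v), +sqrt(v)) together give X ~ N(0, 1).
v = rng.gamma(shape=1.5, scale=2.0, size=200_000)   # scale = 1 / rate
x = rng.uniform(-np.sqrt(v), np.sqrt(v))
print(stats.kstest(x, "norm"))   # should not reject normality
```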
Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J
2010-09-17
In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, the excess molar volume (V^E). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V^E) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V^E), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri
2017-12-01
Bayesian mixture modeling requires identifying the most appropriate number of mixture components so that the resulting mixture model fits the data, following a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept and the Markov Chain Monte Carlo (MCMC) concept and has been used by researchers to solve the problem of identifying the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The development of the RJMCMC algorithm needs to be tailored to the case under study. The purpose of this study is to assess the performance of an RJMCMC algorithm developed to identify the number of mixture components, when that number is not known with certainty, in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model, where the number of components in the Indonesian microarray data is not known in advance.
A review of toxicity and mechanisms of individual and mixtures of heavy metals in the environment.
Wu, Xiangyang; Cobbina, Samuel J; Mao, Guanghua; Xu, Hai; Zhang, Zhen; Yang, Liuqing
2016-05-01
The rationale for the study was to review the literature on the toxicity and corresponding mechanisms associated with lead (Pb), mercury (Hg), cadmium (Cd), and arsenic (As), individually and as mixtures, in the environment. Heavy metals are ubiquitous and generally persist in the environment, enabling them to biomagnify in the food chain. Living systems most often interact with a cocktail of heavy metals in the environment. Heavy metal exposure of biological systems may lead to oxidative stress, which may induce DNA damage, protein modification, lipid peroxidation, and others. In this review, the major mechanism associated with the toxicities of individual metals was the generation of reactive oxygen species (ROS). Additionally, toxicities were expressed through depletion of glutathione and bonding to sulfhydryl groups of proteins. Interestingly, a metal like Pb becomes toxic to organisms through the depletion of antioxidants, while Cd indirectly generates ROS through its ability to replace iron and copper. ROS generated through exposure to arsenic were associated with many modes of action, and heavy metal mixtures were found to have varied effects on organisms. Many models based on concentration addition (CA) and independent action (IA) have been introduced to help predict the toxicities and mechanisms associated with metal mixtures. An integrated model which combines CA and IA was further proposed for evaluating the toxicities of non-interactive mixtures. In cases where there are molecular interactions, the toxicogenomic approach was used to predict toxicities. High-throughput toxicogenomics combines studies in genetics, genome-scale expression, cell and tissue expression, metabolite profiling, and bioinformatics.
Bayesian analysis of Jolly-Seber type models
Matechou, Eleni; Nicholls, Geoff K.; Morgan, Byron J. T.; Collazo, Jaime A.; Lyons, James E.
2016-01-01
We propose the use of finite mixtures of continuous distributions in modelling the process by which new individuals, that arrive in groups, become part of a wildlife population. We demonstrate this approach using a data set of migrating semipalmated sandpipers (Calidris pusilla) for which we extend existing stopover models to allow for individuals to have different behaviour in terms of their stopover duration at the site. We demonstrate the use of reversible jump MCMC methods to derive posterior distributions for the model parameters and the models simultaneously. The algorithm moves between models with different numbers of arrival groups as well as between models with different numbers of behavioural groups. The approach is shown to provide new ecological insights about the stopover behaviour of semipalmated sandpipers but is generally applicable to any population in which animals arrive in groups and potentially exhibit heterogeneity in terms of one or more other processes.
Van Assche, Tom R C; Duerinck, Tim; Van der Perre, Stijn; Baron, Gino V; Denayer, Joeri F M
2014-07-08
Due to the combination of metal ions and organic linkers and the presence of different types of cages and channels, metal-organic frameworks often possess a large structural and chemical heterogeneity, complicating their adsorption behavior, especially for polar-apolar adsorbate mixtures. By allocating isotherms to individual subunits in the structure, the ideal adsorbed solution theory (IAST) can be adjusted to cope with this heterogeneity. The binary adsorption of methanol and n-hexane on HKUST-1 is analyzed using this segregated IAST (SIAST) approach, which offers a significant improvement over the standard IAST model predictions. It identifies the various HKUST-1 cages as having pronounced polar or apolar adsorptive behavior.
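Standard binary IAST is compact enough to sketch: equate the reduced grand potentials of the two components at the adsorbed-phase composition and back out the mixture loading. The Langmuir parameters below are invented, and the segregated (per-cage) extension of the paper is not implemented.

```python
import numpy as np
from scipy.optimize import brentq

# Pure-component Langmuir parameters (hypothetical): q_sat [mmol/g], b [1/kPa]
q = np.array([5.0, 3.0])
b = np.array([0.10, 0.50])

def iast_binary(P, y1):
    """Binary IAST with Langmuir isotherms: the reduced grand potentials of
    both components must match at the adsorbed-phase composition x1."""
    psi = lambda i, p0: q[i] * np.log1p(b[i] * p0)    # integral of n(P)/P dP
    f = lambda x1: psi(0, P * y1 / x1) - psi(1, P * (1 - y1) / (1 - x1))
    x1 = brentq(f, 1e-9, 1 - 1e-9)
    p0 = np.array([P * y1 / x1, P * (1 - y1) / (1 - x1)])
    n0 = q * b * p0 / (1 + b * p0)                    # pure loadings at p0
    nt = 1.0 / (x1 / n0[0] + (1 - x1) / n0[1])        # total mixture loading
    return x1, nt * np.array([x1, 1 - x1])

x1, n = iast_binary(P=100.0, y1=0.5)
print(f"adsorbed mole fraction x1={x1:.3f}, loadings={np.round(n, 3)}")
```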
Identification and Control of Aircrafts using Multiple Models and Adaptive Critics
NASA Technical Reports Server (NTRS)
Principe, Jose C.
2007-01-01
We compared two possible implementations of local linear models for control: one approach is based on a self-organizing map (SOM) to cluster the dynamics, followed by a set of linear models operating at each cluster. Therefore the gating function is hard (a single local model will represent the regional dynamics). This simplifies the controller design since there is a one-to-one mapping between controllers and local models. The second approach uses a soft gate with a probabilistic framework based on a Gaussian Mixture Model (also called a dynamic mixture of experts). In this approach several models may be active at a given time; we can expect a smaller number of models, but the controller design is more involved, with potentially better noise rejection characteristics. Our experiments showed that the SOM provides the best overall performance at high SNRs, but the performance degrades faster than with the GMM for the same noise conditions. The SOM approach required about an order of magnitude more models than the GMM, so in terms of implementation cost, the GMM is preferable. The design of the SOM is straightforward, while the design of the GMM controllers, although still reasonable, is more involved and needs more care in the selection of the parameters. Either one of these locally linear approaches outperforms global nonlinear controllers based on neural networks, such as the time delay neural network (TDNN). Therefore, in essence the local model approach warrants practical implementations. To call the attention of the control community to this design methodology, we successfully extended the multiple model approach to PID controllers (still today the most widely used control scheme in industry), and wrote a paper on this subject. The echo state network (ESN) is a recurrent neural network with the special characteristic that only the output parameters are trained. The recurrent connections are preset according to the problem domain and are fixed. In a nutshell, the states of the reservoir of recurrent processing elements implement a projection space, where the desired response is optimally projected. This architecture trades a large increase in the dimension of the recurrent layer for training efficiency. However, the power of recurrent neural networks can be brought to bear on practical difficult problems. Our goal was to implement an adaptive critic architecture implementing Bellman's approach to optimal control. However, we could only characterize the ESN performance as a critic in value function evaluation, which is just one of the pieces of the overall adaptive critic controller. The results were very convincing, and the simplicity of the implementation was unparalleled.
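A minimal sketch of the soft-gated alternative described above, not the report's actual implementation: a GMM clusters the operating region, a weighted linear model is fitted per component, and predictions blend the local models with the GMM responsibilities (the piecewise system and all settings are invented).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, (2000, 1))                    # operating point
y = np.where(x[:, 0] < 0, 1.0 + 0.5 * x[:, 0],           # regime 1 dynamics
             -1.0 + 2.0 * x[:, 0])                       # regime 2 dynamics
y = y + 0.05 * rng.normal(size=len(y))

gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
R = gmm.predict_proba(x)                                 # soft gate weights
X1 = np.column_stack([np.ones(len(x)), x])               # add intercept
w = np.sqrt(R)                                           # weighted least squares
coefs = [np.linalg.lstsq(X1 * w[:, k:k + 1], y * w[:, k], rcond=None)[0]
         for k in range(2)]
yhat = sum(R[:, k] * (X1 @ coefs[k]) for k in range(2))  # blended prediction
print(f"soft-gated local linear RMSE: {np.sqrt(np.mean((yhat - y) ** 2)):.3f}")
```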
Infurna, Frank J; Grimm, Kevin J
2017-12-15
Growth mixture modeling (GMM) combines latent growth curve and mixture modeling approaches and is typically used to identify discrete trajectories following major life stressors (MLS). However, GMM is often applied to data that do not meet the statistical assumptions of the model (e.g., within-class normality), and researchers often do not test additional model constraints (e.g., homogeneity of variance across classes), which can lead to incorrect conclusions regarding the number and nature of the trajectories. We evaluate how these methodological assumptions influence trajectory size and identification in the study of resilience to MLS. We use data on changes in subjective well-being and depressive symptoms following spousal loss from the HILDA and HRS. Findings differ drastically when the variances are constrained to be homogeneous versus heterogeneous across trajectories, with overextraction being more common under the homogeneity constraint. When the data are non-normally distributed, assuming normality increases the extraction of latent classes. Our findings show that the assumptions typically underlying GMM are not tenable, influencing trajectory size and identification and, most importantly, misinforming conceptual models of resilience. The discussion focuses on how GMM can be leveraged to effectively examine trajectories of adaptation following MLS and on avenues for future research. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Evaluating Mixture Modeling for Clustering: Recommendations and Cautions
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2011-01-01
This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magidson,…
Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution
NASA Astrophysics Data System (ADS)
Baldacchino, Tara; Worden, Keith; Rowson, Jennifer
2017-02-01
A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piecewise continuous processes is introduced. The mixture of experts is a powerful model which probabilistically splits the input space, allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge, this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased estimates of the regression parameters, and robustness to overfitting/overly complex models.
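The heavier tails do the work here. A quick numeric illustration of the underlying point, not the paper's variational method: with a few gross outliers, the Gaussian maximum-likelihood location estimate is dragged away, while a Student-t fit (degrees of freedom fixed at an assumed value of 3) stays near the bulk of the data.

```python
# Gaussian vs. Student-t location estimation on contaminated 1-D data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = np.r_[rng.normal(0.0, 1.0, 95), np.full(5, 25.0)]   # 5% gross outliers

gauss_loc = data.mean()                      # Gaussian MLE of location
t_loc = stats.t.fit(data, fdf=3.0)[1]        # t MLE with df held fixed at 3

print(f"Gaussian location:  {gauss_loc:.2f}")   # dragged toward the outliers
print(f"Student-t location: {t_loc:.2f}")       # stays near 0
```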
Adaptive Gaussian mixture models for pre-screening in GPR data
NASA Astrophysics Data System (ADS)
Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.
2011-06-01
Due to the large amount of data generated by vehicle-mounted ground penetrating radar (GPR) antenna arrays, advanced feature extraction and classification can only be performed on a small subset of data during real-time operation. As a result, most GPR-based landmine detection systems implement "pre-screening" algorithms to process all of the data generated by the antenna array and identify locations with anomalous signatures for more advanced processing. These pre-screening algorithms must be computationally efficient and obtain a high probability of detection, but can permit a false alarm rate higher than the total system requirement. Many approaches to pre-screening have previously been proposed, including linear prediction coefficients, the LMS algorithm, and CFAR-based approaches. Similar pre-screening techniques have also been developed in the field of video processing to identify anomalous behavior or anomalous objects. One such algorithm, an online k-means approximation to an adaptive Gaussian mixture model (GMM), is particularly well suited to pre-screening in GPR data due to its computational efficiency, its non-linear nature, and the relevance of the logic underlying the algorithm to GPR processing. In this work we explore the application of an adaptive GMM-based approach for anomaly detection from the video processing literature to pre-screening in GPR data. Results with the ARA Nemesis landmine detection system demonstrate significant pre-screening performance improvements compared to alternative approaches, and indicate that the proposed algorithm is a complementary technique to existing methods.
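The following is a hedged one-dimensional sketch of an online k-means-style adaptive GMM update in the spirit of Stauffer-Grimson background modeling; the number of components, learning rate, and match threshold are illustrative choices, not the values used with the Nemesis system.

```python
# Per-sample adaptive-GMM update: match the sample to its nearest component,
# adapt that component online, or replace the weakest one and flag an anomaly.
import numpy as np

class AdaptiveGMM1D:
    def __init__(self, k=3, alpha=0.05, match_sigmas=2.5):
        self.w = np.ones(k) / k
        self.mu = np.linspace(-1.0, 1.0, k)
        self.var = np.ones(k)
        self.alpha, self.match = alpha, match_sigmas

    def update(self, x):
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        hit = np.argmin(d)
        anomalous = d[hit] > self.match
        if anomalous:
            # Replace the least-weighted component with a new one at x.
            hit = np.argmin(self.w)
            self.mu[hit], self.var[hit], self.w[hit] = x, 1.0, self.alpha
        else:
            # Online k-means-style update of the matched component only.
            self.mu[hit] += self.alpha * (x - self.mu[hit])
            self.var[hit] += self.alpha * ((x - self.mu[hit]) ** 2 - self.var[hit])
        self.w = (1 - self.alpha) * self.w
        self.w[hit] += self.alpha
        self.w /= self.w.sum()
        return anomalous           # flag passed on to the pre-screener

model = AdaptiveGMM1D()
stream = np.r_[np.random.default_rng(1).normal(0, 1, 500), 8.0]
flags = [model.update(x) for x in stream]
print(sum(flags), flags[-1])       # the injected outlier is flagged
```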
Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C
2017-01-01
Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, in the present study we developed a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7-d) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; dissolved organic carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmental (i.e., realistic) metal concentration ratios. Statistical analysis of mixture effects revealed that observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) in 85% of the treatments with less than 20% error. Moreover, the MMBM predicted chronic toxicity of the ternary Ni-Zn-Pb mixture at least as accurately as the toxicity of the individual metal treatments (RMSE: mixture = 16; Zn only = 18; Ni only = 17; Pb only = 23). Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on metal mixture toxicity during chronic exposure and could be used in metal risk assessment frameworks. Copyright © 2016 Elsevier Ltd. All rights reserved.
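For reference, the two mixture reference models compared above reduce to simple formulas. The sketch below implements independent action (response multiplication) and concentration addition (toxic units summed over log-logistic dose-response curves); the concentrations, EC50s, and slopes are illustrative placeholders, not values from the study.

```python
# IA vs. CA reference-model predictions for a mixture. Effects are
# fractional responses in (0, 1), e.g. from single-metal bioavailability models.
import numpy as np
from scipy.optimize import brentq

def independent_action(effects):
    # IA: response multiplication, assuming dissimilar modes of action.
    return 1.0 - np.prod(1.0 - np.asarray(effects))

def concentration_addition(conc, ec50, slope):
    # CA: find the mixture effect x such that sum_i c_i / EC_x,i = 1,
    # with log-logistic single-chemical dose-response curves.
    def ecx(x, ec50_i, slope_i):
        return ec50_i * (x / (1.0 - x)) ** (1.0 / slope_i)
    def toxic_units(x):
        return sum(c / ecx(x, e, s) for c, e, s in zip(conc, ec50, slope)) - 1.0
    return brentq(toxic_units, 1e-9, 1 - 1e-9)

print(independent_action([0.2, 0.3, 0.1]))                   # ~0.496
print(concentration_addition([1.0, 2.0], [4.0, 10.0], [2.0, 1.5]))
```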
This speaker abstract is part of a session proposal for the 2018 Society of Toxicology annual meeting. I am proposing to speak about the use of new approach methods and data, such as AOPs, in mixtures risk assessment. I have developed an innovative approach called AOP footprint...
A narrow-band k-distribution model with single mixture gas assumption for radiative flows
NASA Astrophysics Data System (ADS)
Jo, Sung Min; Kim, Jae Won; Kwon, Oh Joon
2018-06-01
In the present study, narrow-band k-distribution (NBK) model parameters for mixtures of H2O, CO2, and CO are proposed, utilizing line-by-line (LBL) calculations with a single mixture gas assumption. For the application of the NBK model to radiative flows, a radiative transfer equation (RTE) solver based on a finite-volume method on unstructured meshes was developed. The NBK model and the RTE solver were verified by solving two benchmark problems: the spectral radiance distribution emitted from one-dimensional slabs, and the radiative heat transfer in a truncated conical enclosure. Comparison with available data showed that the results are accurate and physically reliable. To examine the applicability of the methods to realistic multi-dimensional problems under non-isothermal and non-homogeneous conditions, radiation in an axisymmetric combustion chamber was analyzed, and the infrared signature emitted from an aircraft exhaust plume was then predicted. For modeling the plume flow involving radiative cooling, a flow-radiation coupled procedure was devised in a loosely coupled manner by adopting a Navier-Stokes flow solver based on unstructured meshes. The predicted radiative cooling for the combustion chamber was shown to be physically more accurate than other predictions, and as accurate as that obtained by the LBL calculations. The infrared signature of the aircraft exhaust plume was also obtained accurately, equivalent to the LBL calculations, with a much improved numerical efficiency using the present narrow-band approach.
Doganay-Knapp, Kirsten; Orland, Annika; König, Gabriele M; Knöss, Werner
2018-04-01
Herbal substances and preparations thereof play an important role in healthcare systems worldwide. Due to the variety of these products regarding origin, composition and processing procedures, appropriate methodologies for quality assessment need to be considered. A majority of herbal substances are administered as multicomponent mixtures, especially in the fields of Traditional Chinese Medicine and ayurvedic medicine, but also in finished medicinal products. Quality assessment of complex mixtures of herbal substances with conventional methods is challenging. Thus, the emphasis of the present work was on the development of complementary methods to elucidate the composition of mixtures of herbal substances and finished herbal medicinal products. An indispensable prerequisite for the safe and effective use of herbal medicines is the unequivocal authentication of the medicinal plants used therein. In this context, we investigated the potential of three different PCR-related methods for the characterization and authentication of herbal substances. A multiplex PCR assay and a quantitative PCR (qPCR) assay were established to analyze defined mixtures of the herbal substances Quercus cortex, Juglandis folium, Aristolochiae herba, Matricariae flos and Salviae miltiorrhizae radix et rhizoma, as well as a finished herbal medicinal product. Furthermore, a standard cloning approach using universal primers targeting the ITS region was established in order to allow the investigation of herbal mixtures with unknown content. The cloning approach had some limitations regarding the detection/recovery of the components in defined mixtures of herbal substances, but the complementary use of two sets of universal primer pairs increased the detection of components from the mixture. While the multiplex PCR did not retrace all components in the defined mixtures of herbal substances, the established qPCR resulted in simultaneous and specific detection of the five target sequences in all defined mixtures. These data indicate that, for authentication purposes, the parallel use of complementary PCR-related methods is highly recommendable for the analysis of herbal mixtures. Copyright © 2018 Elsevier GmbH. All rights reserved.
Environmental health risk assessments of chemical mixtures that rely on component approaches often begin by grouping the chemicals of concern according to toxicological similarity. Approaches that assume dose addition typically are used for groups of similarly-acting chemicals an...
Risk assessments for mixtures: technical methods commonly used in the United States
A brief (20 minute) talk on the technical approaches used by EPA and other US agencies to assess risks posed by combined exposures to one or more chemicals. The talk systematically reviews the methodologies (whole-mixture and component-based approaches) that are or have been used ...
EPA announced the availability of the final report, Considerations for Developing a Dosimetry-Based Cumulative Risk Assessment Approach for Mixtures of Environmental Contaminants. This report describes a process that can be used to determine the potential value of develop...
The determination of viscosity of liquid mixtures - Comparison of approaches
NASA Astrophysics Data System (ADS)
Schmirler, Michal; Netřebská, Hana; Kolínský, Jan
2017-09-01
Research on flow field parameters for the non-stationary flow of non-Newtonian fluids, carried out at the Institute of Fluid Mechanics and Thermodynamics of CTU, showed the need for a method to determine the resulting viscosity of a mixture of several liquids. There are several sources for determining the viscosity of mixtures: theoretical relations can be found in the literature, or technical tables based on experimentally measured data can be used. This article focuses on comparing these approaches with an experiment. The experiment was performed with a Rheotest RN 4.1 rotational viscometer produced by RHEOTEST Medingen, using a solution of glycerol and water. The research showed large differences among the results of the different approaches for determining the viscosity of liquid mixtures. The outcome of this paper is the identification of the viscosity calculation method that agrees most closely with the experimental data.
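As a reminder of what the "theoretical relations in the literature" look like, the sketch below evaluates three common mixing rules for a binary liquid. The glycerol/water property values are rough handbook numbers, not the paper's measurements, and the Grunberg-Nissan interaction parameter G12 is left at an assumed zero.

```python
# Common literature mixing rules for the viscosity of a binary liquid mixture.
import numpy as np

def arrhenius(x, mu):                   # ln(mu_mix) = sum_i x_i ln(mu_i)
    return float(np.exp(np.sum(x * np.log(mu))))

def kendall_monroe(x, mu):              # cube-root rule
    return float(np.sum(x * mu ** (1 / 3)) ** 3)

def grunberg_nissan(x, mu, G12=0.0):    # Arrhenius rule plus interaction term
    return float(np.exp(np.sum(x * np.log(mu)) + x[0] * x[1] * G12))

x = np.array([0.5, 0.5])                # mole fractions: glycerol / water
mu = np.array([1.412, 0.001])           # Pa*s at ~20 degC (approx. handbook values)
for rule in (arrhenius, kendall_monroe, grunberg_nissan):
    print(rule.__name__, rule(x, mu))
```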
Cao, Siqin; Sheong, Fu Kit; Huang, Xuhui
2015-08-07
The reference interaction site model (RISM) has recently become a popular approach in the study of thermodynamic and structural properties of the solvent around macromolecules. On the other hand, it has been widely suggested that water density depletion exists around large hydrophobic solutes (>1 nm), and this may pose a great challenge to the RISM theory. In this paper, we develop a new analytical theory, the Reference Interaction Site Model with Hydrophobicity induced density Inhomogeneity (RISM-HI), to compute the solvent radial distribution function (RDF) around a large hydrophobic solute in water as well as in its mixtures with other polyatomic organic solvents. To achieve this, we have explicitly considered the density inhomogeneity at the solute-solvent interface using the framework of the Yvon-Born-Green hierarchy, and the RISM theory is used to obtain the solute-solvent pair correlation. In order to efficiently solve the relevant equations while maintaining reasonable accuracy, we have also developed a new closure called the D2 closure. With this new theory, the solvent RDFs around a large hydrophobic particle in water and in different water-acetonitrile mixtures can be computed, and they agree well with the results of molecular dynamics simulations. Furthermore, we show that our RISM-HI theory can also efficiently compute the solvation free energy of solutes with a wide range of hydrophobicities in various water-acetonitrile solvent mixtures with reasonable accuracy. We anticipate that our theory could be widely applied to compute the thermodynamic and structural properties of the solvation of hydrophobic solutes.
Vaj, Claudia; Barmaz, Stefania; Sørensen, Peter Borgen; Spurgeon, David; Vighi, Marco
2011-11-01
Mixture toxicity is a real-world problem and as such requires risk assessment solutions that can be applied within different geographic regions, across different spatial scales, and in situations where the quantity of data available for the assessment varies. Moreover, the need for site-specific procedures for assessing ecotoxicological risk for non-target species in non-target ecosystems also has to be recognised. The work presented in the paper addresses the real-world effects of pesticide mixtures on natural communities. Initially, the location of risk hotspots is theoretically estimated through exposure modelling and the use of available toxicity data to predict potential community effects. The concept of Concentration Addition (CA) is applied to describe responses resulting from exposure to multiple pesticides. The developed and refined exposure models are georeferenced (GIS-based) and include environmental and physico-chemical parameters, and site-specific information on pesticide usage and land use. As a test of the risk assessment framework, the procedures have been applied to a suitable study area, notably the River Meolo basin (Northern Italy), a catchment characterised by intensive agriculture, as well as to a comparative area for some assessments. Within the studied areas, the risks posed by individual chemicals and complex mixtures have been assessed for aquatic and terrestrial aboveground and belowground communities. Results from ecological surveys have been used to validate these risk assessment model predictions. The value and limitations of the approaches are described, and the possibilities for larger-scale applications in risk assessment are also discussed. Copyright © 2011 Elsevier Inc. All rights reserved.
Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression
NASA Astrophysics Data System (ADS)
Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli
2018-06-01
Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and that the uncertainty spread was appropriate. Different HMM states were taken to represent different climate conditions, which lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
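The GMR step can be written down compactly: fit a joint Gaussian mixture over covariates and predictand, then condition each component on the observed covariates. The sketch below does this on synthetic data; the HMM-state conditioning and the streamflow covariates of the actual method are omitted, and `gmr_predict` is a hypothetical helper name.

```python
# Gaussian Mixture Regression: fit a joint GMM over (x, y), then form the
# conditional mixture p(y | x) and return its mean. Synthetic 1-D example.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
x = rng.uniform(0, 4, size=(500, 1))
y = np.sin(x) + 0.1 * rng.normal(size=(500, 1))
gmm = GaussianMixture(n_components=4, random_state=0).fit(np.hstack([x, y]))

def gmr_predict(x_new, dx=1):
    means, covs, pis = gmm.means_, gmm.covariances_, gmm.weights_
    w, mu_c = [], []
    for k in range(gmm.n_components):
        mx, my = means[k, :dx], means[k, dx:]
        Sxx, Sxy = covs[k][:dx, :dx], covs[k][:dx, dx:]
        # Mixing weight: component responsibility given x only.
        w.append(pis[k] * multivariate_normal.pdf(x_new, mx, Sxx))
        # Conditional mean of component k: mu_y + S_yx S_xx^-1 (x - mu_x).
        mu_c.append(my + Sxy.T @ np.linalg.solve(Sxx, x_new - mx))
    w = np.array(w) / np.sum(w)
    return sum(wk * mk for wk, mk in zip(w, mu_c))

print(gmr_predict(np.array([1.0])))   # ~ sin(1.0)
```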
Large eddy simulation of turbulent cavitating flows
NASA Astrophysics Data System (ADS)
Gnanaskandan, A.; Mahesh, K.
2015-12-01
Large eddy simulation (LES) is employed to study two turbulent cavitating flows: over a cylinder and over a wedge. A homogeneous mixture model is used to treat the mixture of water and water vapor as a compressible fluid. The governing equations are solved using a novel predictor-corrector method. The subgrid terms are modeled using the dynamic Smagorinsky model. Cavitating flow over a cylinder at Reynolds number Re = 3900 and cavitation number σ = 1.0 is simulated, and the wake characteristics are compared to single-phase results at the same Reynolds number. It is observed that cavitation suppresses turbulence in the near wake and delays three-dimensional breakdown of the vortices. Next, cavitating flow over a wedge at Re = 200,000 and σ = 2.0 is presented. The mean void fraction profiles obtained are compared to experiment, and good agreement is obtained. Cavity auto-oscillation is observed, wherein the sheet cavity periodically breaks up into a cloud cavity. The results suggest that LES is an attractive approach for predicting turbulent cavitating flows.
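The homogeneous mixture treatment itself is simple: each cell carries a vapor void fraction, the two phases share one velocity and pressure, and the transported fluid properties are void-fraction-weighted averages. A minimal sketch, with illustrative property values:

```python
# Void-fraction-weighted mixture properties for a water / water-vapor cell.
rho_l, rho_v = 998.0, 0.017      # kg/m^3, liquid water and vapor (approx.)
mu_l, mu_v = 1.0e-3, 9.7e-6      # Pa*s (approx.)

def mixture_properties(alpha):
    """alpha = vapor void fraction in a computational cell."""
    rho_m = alpha * rho_v + (1.0 - alpha) * rho_l
    mu_m = alpha * mu_v + (1.0 - alpha) * mu_l
    return rho_m, mu_m

print(mixture_properties(0.5))
```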
Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee
2015-01-01
Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequences of constraining the residual variances for class enumeration (finding the true number of latent classes) and for parameter estimates, under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to severely distort parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to clearly report what assumptions were made. PMID:26139512
Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data
ERIC Educational Resources Information Center
Kim, Su-Young; Kim, Jee-Seon
2012-01-01
This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…
Semi-supervised anomaly detection - towards model-independent searches of new physics
NASA Astrophysics Data System (ADS)
Kuusela, Mikael; Vatanen, Tommi; Malmi, Eric; Raiko, Tapani; Aaltonen, Timo; Nagai, Yoshikazu
2012-06-01
Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an algorithm based on semi-supervised anomaly detection techniques which does not require an MC training sample for the signal data. We first model the background using a multivariate Gaussian mixture model. We then search for deviations from this model by fitting to the observations a mixture of the background model and a number of additional Gaussians. This allows us to perform pattern recognition of any anomalous excess over the background. We show by comparison to neural network classifiers that such an approach is considerably more robust against misspecification of the signal MC than supervised classification. In cases where there is an unexpected signal, a neural network might fail to identify it correctly, while anomaly detection does not suffer from such a limitation. On the other hand, when there are no systematic errors in the training data, both methods perform comparably.
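A hedged one-dimensional sketch of the same idea: freeze a GMM fitted to background-only data, then run EM on the observations fitting only the weight and parameters of a single extra Gaussian describing any excess. The component counts and data are synthetic stand-ins for the multivariate physics setting.

```python
# Semi-supervised anomaly detection: frozen background GMM + one free Gaussian.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
bg_train = rng.normal(0, 1, 2000)                              # background only
obs = np.r_[rng.normal(0, 1, 1800), rng.normal(4, 0.3, 200)]   # data with excess

bg = GaussianMixture(n_components=2, random_state=0).fit(bg_train[:, None])
bgp = np.exp(bg.score_samples(obs[:, None]))                   # frozen density

# EM over p(x) = (1 - w) * bg(x) + w * N(x; mu, sigma); background stays fixed.
w, mu, sigma = 0.1, obs.mean(), obs.std()
for _ in range(200):
    r = w * norm.pdf(obs, mu, sigma)
    r = r / (r + (1 - w) * bgp)            # responsibility of the signal component
    w = r.mean()
    mu = np.sum(r * obs) / np.sum(r)
    sigma = np.sqrt(np.sum(r * (obs - mu) ** 2) / np.sum(r))

print(f"signal fraction ~ {w:.2f}, location ~ {mu:.2f}")   # ~0.10, ~4.0
```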
Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction
NASA Astrophysics Data System (ADS)
Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob
2018-04-01
Data analytics helps basketball teams to create tactics. However, manual data collection and analysis are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but can also generate new trajectory samples, making it a useful tool for helping coaches and players decide when and where to shoot. Its structure is particularly suitable for dealing with time series problems. A BLSTM receives forward and backward information at the same time, while stacking multiple BLSTMs further increases the learning ability of the model. Combined with the BLSTMs, an MDN is used to generate a multi-modal distribution of outputs, so the proposed model can, in principle, represent arbitrary conditional probability distributions of the output variables. We tested our model in two experiments on three-pointer datasets from NBA SportVu data. In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.
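The MDN head is the easiest piece to show in isolation: the network's final activations are split into mixture weights, means, and scales, trained with the mixture negative log-likelihood, and sampled ancestrally to generate new trajectories. A minimal numpy sketch, with random stand-ins for the BLSTM features; shapes and the single-target setup are illustrative, far simpler than the paper's model.

```python
# Mixture density network output layer: parameter split, NLL loss, sampling.
import numpy as np

def mdn_split(z, k):
    """Split raw network outputs z (n, 3k) into mixture parameters."""
    logits, mu, log_sigma = z[:, :k], z[:, k:2 * k], z[:, 2 * k:]
    pi = np.exp(logits - logits.max(1, keepdims=True))
    pi /= pi.sum(1, keepdims=True)          # softmax -> mixture weights
    return pi, mu, np.exp(log_sigma)        # exp -> positive scales

def mdn_nll(z, y, k):
    """Negative log-likelihood of targets y (n, 1) under the mixture."""
    pi, mu, sigma = mdn_split(z, k)
    comp = pi * np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return -np.mean(np.log(comp.sum(axis=1) + 1e-12))

def mdn_sample(z, k, rng):
    """Ancestral sampling: pick a component per row, then draw from it."""
    pi, mu, sigma = mdn_split(z, k)
    idx = [rng.choice(k, p=p) for p in pi]
    return np.array([rng.normal(mu[i, j], sigma[i, j]) for i, j in enumerate(idx)])

rng = np.random.default_rng(0)
z = rng.normal(size=(5, 9))                 # pretend BLSTM features, k = 3
print(mdn_nll(z, rng.normal(size=(5, 1)), 3), mdn_sample(z, 3, rng))
```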
Analysis of the improvement of selenite retention in smectite by adding alumina nanoparticles.
Mayordomo, Natalia; Alonso, Ursula; Missana, Tiziana
2016-12-01
Smectite clay is used as a barrier for hazardous waste retention and confinement. It is a powerful material for retaining cations, but less effective for retaining anionic species like selenite. This study shows that the addition of a small percentage of γ-Al2O3 nanoparticles to smectite significantly improves selenite sorption. γ-Al2O3 nanoparticles provide a high surface area and positively charged surface sites over a wide range of pH, since their point of zero charge is at pH 8-9. An addition of 20 wt% of γ-Al2O3 to smectite is sufficient to approach the sorption capacity of pure alumina. To analyze the sorption behavior of the smectite/oxide mixtures, a non-electrostatic surface complexation model was considered, accounting for the surface complexation of HSeO3^- and SeO3^2-, anion competition, and the formation of surface ternary complexes with the major cations present in the solution. Selenite sorption in mixtures was satisfactorily described with the surface parameters and complexation constants defined for the pure systems, accounting only for the mixture weight fractions. Sorption in mixtures was additive despite the particle heteroaggregation observed in previous stability studies carried out on smectite/γ-Al2O3 mixtures. Copyright © 2016 Elsevier B.V. All rights reserved.
Grant, Evan H. Campbell; Zipkin, Elise; Scott, Sillett T.; Chandler, Richard; Royle, J. Andrew
2014-01-01
Wildlife populations consist of individuals that contribute disproportionately to growth and viability. Understanding a population's spatial and temporal dynamics requires estimates of abundance and demographic rates that account for this heterogeneity. Estimating these quantities can be difficult, requiring years of intensive data collection. Often, this is accomplished through the capture and recapture of individual animals, which is generally only feasible at a limited number of locations. In contrast, N-mixture models allow for the estimation of abundance, and of spatial variation in abundance, from count data alone. We extend recently developed multistate, open-population N-mixture models, which can additionally estimate demographic rates based on an organism's life history characteristics. In our extension, we develop an approach to account for the case where not all individuals can be assigned to a state during sampling. Using only state-specific count data, we show how our model can be used to estimate local population abundance, as well as density-dependent recruitment rates and state-specific survival. We apply our model to a population of black-throated blue warblers (Setophaga caerulescens) that have been surveyed for 25 years on their breeding grounds at the Hubbard Brook Experimental Forest in New Hampshire, USA. The intensive data collection efforts allow us to compare our estimates to estimates derived from capture–recapture data. Our model performed well in estimating population abundance and density-dependent rates of annual recruitment/immigration. Estimates of local carrying capacity and per capita recruitment of yearlings were consistent with those published in other studies. However, our model moderately underestimated the annual survival probability of yearling and adult females and severely underestimated survival probabilities for both of these male stages. The most accurate and precise estimates will necessarily require some amount of intensive data collection effort (such as capture–recapture). Integrated population models that combine data from both intensive and extensive sources are likely to be the most efficient approach for estimating demographic rates at large spatial and temporal scales.
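For readers unfamiliar with the base machinery, the sketch below fits the simplest single-season N-mixture model (Poisson abundance, binomial detection) by marginalizing the latent abundance over a grid; the multistate, open-population extension described above layers state and survival structure on top of this likelihood. Data are simulated, and the truncation bound n_max is an assumed choice.

```python
# Single-season N-mixture model (Royle 2004) fit by maximum likelihood.
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

rng = np.random.default_rng(0)
lam_true, p_true, n_sites, n_visits = 4.0, 0.5, 150, 4
N = rng.poisson(lam_true, n_sites)                         # latent abundance
counts = rng.binomial(N[:, None], p_true, (n_sites, n_visits))

def neg_log_lik(theta, counts, n_max=60):
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    n_grid = np.arange(n_max + 1)
    prior = poisson.pmf(n_grid, lam)                       # P(N = n)
    ll = 0.0
    for y in counts:                                       # marginalize N per site
        like_n = np.prod(binom.pmf(y[:, None], n_grid, p), axis=0)
        ll += np.log(np.sum(prior * like_n) + 1e-300)
    return -ll

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
print(np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1])))       # ~ (4.0, 0.5)
```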
Statistical mechanics of homogeneous partly pinned fluid systems.
Krakoviack, Vincent
2010-12-01
Homogeneous partly pinned fluid systems are simple models of a fluid confined in a disordered porous matrix, obtained by arresting randomly chosen particles in a one-component bulk fluid or in one of the two components of a binary mixture. In this paper, their configurational properties are investigated. It is shown that a peculiar complementarity exists between the mobile and immobile phases, which originates from the fact that the solid is prepared in the presence of, and in equilibrium with, the adsorbed fluid. Simple identities follow, which connect different types of configurational averages, relative either to the fluid-matrix system or to the bulk fluid from which it is prepared. Crucial simplifications result for the computation of important structural quantities, both in computer simulations and in theoretical approaches. Finally, possible applications of the model in the field of dynamics in confinement or in strongly asymmetric mixtures are suggested.
NASA Astrophysics Data System (ADS)
Sánchez, Clara I.; Hornero, Roberto; Mayo, Agustín; García, María
2009-02-01
Diabetic retinopathy is one of the leading causes of blindness and vision defects in developed countries. Early detection and diagnosis are crucial to avoid visual complications. Microaneurysms are the first ocular signs of the presence of this disease, and their detection is of paramount importance for the development of a computer-aided diagnosis technique permitting a prompt diagnosis. However, the detection of microaneurysms in retinal images is a difficult task due to the wide variability that these images usually present in screening programs. We propose a statistical approach based on mixture model-based clustering and logistic regression which is robust to changes in the appearance of retinal fundus images. The method is evaluated on the public database proposed by the Retinopathy Online Challenge in order to obtain an objective performance measure and to allow a comparative study with other proposed algorithms.
Koss, Kalsea J.; George, Melissa R. W.; Davies, Patrick T.; Cicchetti, Dante; Cummings, E. Mark; Sturge-Apple, Melissa L.
2013-01-01
Examining children’s physiological functioning is an important direction for understanding the links between interparental conflict and child adjustment. Utilizing growth mixture modeling, the present study examined children’s cortisol reactivity patterns in response to a marital dispute. Analyses revealed three different patterns of cortisol responses, consistent with both a sensitization and an attenuation hypothesis. Child-rearing disagreements and perceived threat were associated with children exhibiting a rising cortisol pattern, whereas destructive conflict was related to children displaying a flat pattern. Physiologically rising patterns were also linked with emotional insecurity and with internalizing and externalizing behaviors. Results supported a sensitization pattern of responses as maladaptive for children in the context of marital conflict, with evidence also linking an attenuation pattern with risk. The present study supports children’s adrenocortical functioning as one mechanism through which interparental conflict is related to children’s coping responses and psychological adjustment. PMID:22545835
Accelerated Gaussian mixture model and its application on image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Jianhui; Zhang, Yuanyuan; Ding, Yihua; Long, Chengjiang; Yuan, Zhiyong; Zhang, Dengyi
2013-03-01
Gaussian mixture model (GMM) has been widely used for image segmentation in recent years due to its superior adaptability and simplicity of implementation. However, the traditional GMM has the disadvantage of high computational complexity. In this paper an accelerated GMM is designed, adopting the following approaches: a lookup table for the Gaussian probability matrix is established to avoid repetitive probability calculations on all pixels; a blocking detection method is employed on each block of pixels to further decrease the complexity; and the structure of the lookup table is changed from 3D to 1D, with a simpler data type, to reduce the space requirement. The accelerated GMM is applied to image segmentation with the help of the Otsu method, which decides the threshold value automatically. Our algorithm has been tested by segmenting flames and faces from a set of real pictures, and the experimental results demonstrate its efficiency in both segmentation precision and computational cost.
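The lookup-table idea is easy to show: because 8-bit pixels take only 256 values, every component's weighted Gaussian density can be tabulated once, and per-pixel segmentation becomes pure indexing. A minimal sketch with illustrative parameters; the paper's blocking step and Otsu thresholding are omitted.

```python
# Lookup-table acceleration of GMM-based segmentation for 8-bit images.
import numpy as np

means = np.array([60.0, 170.0])          # e.g. background vs. flame intensity
sigmas = np.array([20.0, 25.0])
weights = np.array([0.6, 0.4])

# 1-D lookup table: 256 intensity levels x K components, built once.
levels = np.arange(256.0)
table = (weights * np.exp(-0.5 * ((levels[:, None] - means) / sigmas) ** 2)
         / (sigmas * np.sqrt(2 * np.pi)))
labels_lut = table.argmax(axis=1).astype(np.uint8)   # hard label per level

def segment(image_u8):
    # Per-pixel work is a single table index -- no exponentials.
    return labels_lut[image_u8]

img = np.random.default_rng(0).integers(0, 256, (480, 640), dtype=np.uint8)
mask = segment(img)
print(mask.mean())   # fraction of pixels assigned to the second component
```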
Mixture Model and MDSDCA for Textual Data
NASA Astrophysics Data System (ADS)
Allouti, Faryel; Nadif, Mohamed; Hoai An, Le Thi; Otjacques, Benoît
E-mailing has become an essential component of cooperation in business. Consequently, the large number of messages manually produced or automatically generated can rapidly cause information overload for users. Many research projects have examined this issue, but surprisingly few have tackled the problem of the files attached to e-mails, which in many cases contain a substantial part of the semantics of the message. This paper considers this specific topic and focuses on the problem of clustering and visualizing attached files. Relying on the multinomial mixture model, we used the Classification EM algorithm (CEM) to cluster the set of files, and MDSDCA to visualize the obtained classes of documents. Like the Multidimensional Scaling method, the MDSDCA algorithm, based on the Difference of Convex functions approach, aims to optimize the stress criterion. As MDSDCA is iterative, we propose an initialization approach to avoid starting with random values. Experiments on simulated and real textual data are reported.
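A compact sketch of the clustering step: Classification EM for a multinomial mixture, where the C-step hard-assigns each document to its most probable class before the M-step re-estimates parameters on the resulting partition. The document-term counts are synthetic, and the Laplace smoothing constant is an assumed choice, not a detail from the paper.

```python
# Classification EM (CEM) for a multinomial mixture over word counts.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.multinomial(50, [0.6, 0.2, 0.1, 0.1], size=40),
               rng.multinomial(50, [0.1, 0.1, 0.2, 0.6], size=40)])
K, (n, d) = 2, X.shape
theta = rng.dirichlet(np.ones(d), K)          # per-class word distributions
pi = np.ones(K) / K

for _ in range(25):
    log_post = np.log(pi) + X @ np.log(theta).T   # E-step (up to a constant)
    z = log_post.argmax(axis=1)                   # C-step: hard assignment
    for k in range(K):                            # M-step on the partition
        counts = X[z == k].sum(axis=0) + 1.0      # Laplace smoothing
        theta[k] = counts / counts.sum()
        pi[k] = max((z == k).mean(), 1e-12)

print(np.bincount(z, minlength=K))   # recovered class sizes, ~[40, 40]
```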
ERIC Educational Resources Information Center
Quirk, Matthew; Grimm, Ryan; Furlong, Michael J.; Nylund-Gibson, Karen; Swami, Sruthi
2016-01-01
This study utilized latent class analysis (LCA) to identify 5 discernible profiles of Latino children's (N = 1,253) social-emotional, physical, and cognitive school readiness at the time of kindergarten entry. In addition, a growth mixture modeling (GMM) approach was used to identify 3 unique literacy achievement trajectories, across Grades 2-5,…