Sample records for mixture modeling framework

  1. Introduction to the special section on mixture modeling in personality assessment.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  2. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols, such as multiple-observer sampling, removal sampling, and capture-recapture, produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as Mb and Mh, as well as other classes of models that can only be described within the multinomial N-mixture framework.
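
    A minimal Python sketch of the removal-sampling case (the chapter itself works in BUGS and the unmarked package; all parameter values here are illustrative): conditional on site abundance N ~ Poisson(lambda) with per-pass detection probability p, the pass counts are multinomial with cell probabilities p, (1-p)p, (1-p)^2*p, and the likelihood marginalizes over the unknown N.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln
    from scipy.stats import poisson

    rng = np.random.default_rng(1)
    n_sites, n_passes = 150, 3
    lam_true, p_true = 4.0, 0.4

    # simulate a removal protocol: each pass detects a Binomial(p) share of
    # the individuals still present
    N = rng.poisson(lam_true, n_sites)
    Y = np.zeros((n_sites, n_passes), dtype=int)
    remaining = N.copy()
    for j in range(n_passes):
        Y[:, j] = rng.binomial(remaining, p_true)
        remaining -= Y[:, j]

    def negloglik(theta, Y, n_max=60):
        lam, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))
        cells = p * (1.0 - p) ** np.arange(Y.shape[1])  # removal cell probabilities
        ll = 0.0
        for y in Y:
            t = y.sum()
            Ns = np.arange(t, n_max)                    # possible abundances N >= t
            logmult = (gammaln(Ns + 1) - gammaln(Ns - t + 1) - gammaln(y + 1).sum()
                       + y @ np.log(cells) + (Ns - t) * np.log(1.0 - cells.sum()))
            ll += np.log(np.exp(logmult + poisson.logpmf(Ns, lam)).sum())
        return -ll

    res = minimize(negloglik, [0.0, 0.0], args=(Y,), method="Nelder-Mead")
    print("lambda_hat=%.2f, p_hat=%.2f"
          % (np.exp(res.x[0]), 1.0 / (1.0 + np.exp(-res.x[1]))))
    ```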

  3. Scale Mixture Models with Applications to Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Qin, Zhaohui S.; Damien, Paul; Walker, Stephen

    2003-11-01

    Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
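
    A quick Monte Carlo check of the representation this approach builds on (a standard result in the scale-mixtures literature; the parameter values below are arbitrary): a normal distribution arises as a scale mixture of uniforms when the squared half-width follows a Gamma(3/2, scale 2) distribution.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    mu, sigma, n = 1.0, 2.0, 200_000

    # X | V ~ Uniform(mu - sigma*sqrt(V), mu + sigma*sqrt(V)), V ~ Gamma(3/2, scale=2)
    V = rng.gamma(shape=1.5, scale=2.0, size=n)
    X = rng.uniform(mu - sigma * np.sqrt(V), mu + sigma * np.sqrt(V))

    print("sample mean/std:", X.mean().round(3), X.std().round(3))  # ~ (1.0, 2.0)
    print("KS p-value vs N(mu, sigma):",
          stats.kstest(X, "norm", args=(mu, sigma)).pvalue.round(3))
    ```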

  4. New theoretical framework for designing nonionic surfactant mixtures that exhibit a desired adsorption kinetics behavior.

    PubMed

    Moorkanikkara, Srinivas Nageswaran; Blankschtein, Daniel

    2010-12-21

    How does one design a surfactant mixture using a set of available surfactants such that it exhibits a desired adsorption kinetics behavior? The traditional approach used to address this design problem involves conducting trial-and-error experiments with specific surfactant mixtures. This approach is typically time-consuming and resource-intensive and becomes increasingly challenging when the number of surfactants that can be mixed increases. In this article, we propose a new theoretical framework to identify a surfactant mixture that most closely meets a desired adsorption kinetics behavior. Specifically, the new theoretical framework involves (a) formulating the surfactant mixture design problem as an optimization problem using an adsorption kinetics model and (b) solving the optimization problem using a commercial optimization package. The proposed framework aims to identify the surfactant mixture that most closely satisfies the desired adsorption kinetics behavior subject to the predictive capabilities of the chosen adsorption kinetics model. Experiments can then be conducted at the identified surfactant mixture condition to validate the predictions. We demonstrate the reliability and effectiveness of the proposed theoretical framework through a realistic case study by identifying a nonionic surfactant mixture consisting of up to four alkyl poly(ethylene oxide) surfactants (C(10)E(4), C(12)E(5), C(12)E(6), and C(10)E(8)) such that it most closely exhibits a desired dynamic surface tension (DST) profile. Specifically, we use the Mulqueen-Stebe-Blankschtein (MSB) adsorption kinetics model (Mulqueen, M.; Stebe, K. J.; Blankschtein, D. Langmuir 2001, 17, 5196-5207) to formulate the optimization problem as well as the SNOPT commercial optimization solver to identify a surfactant mixture consisting of these four surfactants that most closely exhibits the desired DST profile. Finally, we compare the experimental DST profile measured at the surfactant mixture condition identified by the new theoretical framework with the desired DST profile and find good agreement between the two profiles.
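
    A hedged sketch of steps (a) and (b) of the framework: the MSB model is not reproduced here, so predicted_dst below is a purely hypothetical surrogate for an adsorption-kinetics model, and SciPy's SLSQP stands in for the commercial SNOPT solver; only the optimization formulation mirrors the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(0.1, 100.0, 50)  # time grid, s

    def predicted_dst(x, t):
        """Hypothetical stand-in for an adsorption-kinetics model: maps a
        4-surfactant composition x to a dynamic surface tension curve (mN/m)."""
        rates = np.array([0.5, 1.0, 2.0, 4.0])         # invented kinetics
        gamma_eq = np.array([38.0, 34.0, 32.0, 36.0])  # invented plateau tensions
        w = x / x.sum()
        return 72.0 - (72.0 - w @ gamma_eq) * (1.0 - np.exp(-(w @ rates) * t))

    target = predicted_dst(np.array([0.1, 0.2, 0.3, 0.4]), t)  # desired DST profile

    # least-squares mismatch to the target profile; the minimizer need not be unique
    res = minimize(lambda x: np.sum((predicted_dst(x, t) - target) ** 2),
                   x0=np.full(4, 0.25), bounds=[(1e-6, 1.0)] * 4,
                   constraints={"type": "eq", "fun": lambda x: x.sum() - 1.0},
                   method="SLSQP")
    print("composition minimizing the DST mismatch:", res.x.round(3))
    ```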

  5. Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data

    ERIC Educational Resources Information Center

    Kim, Su-Young; Kim, Jee-Seon

    2012-01-01

    This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…

  6. The Potential of Growth Mixture Modelling

    ERIC Educational Resources Information Center

    Muthen, Bengt

    2006-01-01

    The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…

  7. A Generalized Mixture Framework for Multi-label Classification

    PubMed Central

    Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos

    2015-01-01

    We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd|X) using a product of posterior distributions over components of the output space. Our approach captures different input–output and output–output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods. PMID:26613069
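
    The paper's mixtures-of-experts architecture is not available off the shelf, but a simplified cousin, a uniformly weighted ensemble of classifier chains whose predicted probabilities are averaged, can be sketched with scikit-learn on synthetic multi-label data (the paper instead learns data-dependent mixture weights):

    ```python
    import numpy as np
    from sklearn.datasets import make_multilabel_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.multioutput import ClassifierChain

    X, Y = make_multilabel_classification(n_samples=500, n_classes=5,
                                          n_labels=3, random_state=0)
    Xtr, Xte, Ytr, Yte = train_test_split(X, Y, random_state=0)

    # each chain decomposes P(Y1..Yd | X) along a different random label order
    chains = [ClassifierChain(LogisticRegression(max_iter=1000),
                              order="random", random_state=k).fit(Xtr, Ytr)
              for k in range(10)]
    proba = np.mean([c.predict_proba(Xte) for c in chains], axis=0)
    pred = (proba > 0.5).astype(int)
    print("micro-F1:", round(f1_score(Yte, pred, average="micro"), 3))
    ```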

  8. Defining an additivity framework for mixture research in inducible whole-cell biosensors

    NASA Astrophysics Data System (ADS)

    Martin-Betancor, K.; Ritz, C.; Fernández-Piñas, F.; Leganés, F.; Rodea-Palomares, I.

    2015-11-01

    A novel additivity framework for mixture effect modelling in the context of whole-cell inducible biosensors has been mathematically developed and implemented in R. The proposed method is a multivariate extension of the effective dose (EDp) concept. Specifically, the extension accounts for differential maximal effects among analytes and response inhibition beyond the maximum permissive concentrations. This allows a multivariate extension of Loewe additivity, enabling direct application in a biphasic dose-response framework. The proposed additivity definition was validated, and its applicability illustrated, by studying the response of the cyanobacterial biosensor Synechococcus elongatus PCC 7942 pBG2120 to binary mixtures of Zn, Cu, Cd, Ag, Co and Hg. The novel method allowed, for the first time, complete dose-response profiles of an inducible whole-cell biosensor to be modelled for mixtures. In addition, the approach also allowed identification and quantification of departures from additivity (interactions) among analytes. The biosensor was found to respond in a near-additive way to heavy metal mixtures except when Hg, Co and Ag were present, in which case strong interactions occurred. The method is a useful contribution for the whole-cell biosensor discipline and related areas, allowing appropriate assessment of mixture effects in non-monotonic dose-response frameworks.
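
    For orientation, a sketch of the classical (monotonic) Loewe-additivity prediction that the paper generalizes: given Hill dose-response curves for two analytes (parameters invented here), the additive mixture effect E solves d_A/ED_A(E) + d_B/ED_B(E) = 1.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    ec50 = {"A": 1.0, "B": 5.0}  # invented single-analyte parameters
    hill = {"A": 1.2, "B": 2.0}

    def ed(effect, ec50_i, h):
        """Dose of a single analyte producing a fractional effect in (0, 1)."""
        return ec50_i * (effect / (1.0 - effect)) ** (1.0 / h)

    def loewe_effect(dose_a, dose_b):
        # Loewe additivity: the mixture effect E satisfies
        #   dose_a / ED_A(E) + dose_b / ED_B(E) = 1
        f = lambda e: (dose_a / ed(e, ec50["A"], hill["A"])
                       + dose_b / ed(e, ec50["B"], hill["B"]) - 1.0)
        return brentq(f, 1e-9, 1.0 - 1e-9)

    print("predicted additive effect:", round(loewe_effect(0.5, 2.5), 3))
    ```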

  9. General mixture item response models with different item response structures: Exposition with an application to Likert scales.

    PubMed

    Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong

    2018-01-10

    This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.

  10. Predicting the shock compression response of heterogeneous powder mixtures

    NASA Astrophysics Data System (ADS)

    Fredenburg, D. A.; Thadhani, N. N.

    2013-06-01

    A model framework for predicting the dynamic shock-compression response of heterogeneous powder mixtures using readily obtained measurements from quasi-static tests is presented. Low-strain-rate compression data are first analyzed to determine the region of the bulk response over which particle rearrangement does not contribute to compaction. This region is then fit to determine the densification modulus of the mixture, σD, a newly defined parameter describing the resistance of the mixture to yielding. The measured densification modulus, reflective of the diverse yielding phenomena that occur at the meso-scale, is implemented into a rate-independent formulation of the P-α model, which is combined with an isobaric equation of state to predict the low- and high-stress dynamic compression response of heterogeneous powder mixtures. The framework is applied to two metal + metal-oxide (thermite) powder mixtures, and good agreement between the model and experiment is obtained for all mixtures at stresses near and above those required to reach full density. At lower stresses, rate dependencies of the constituents, and specifically those of the matrix constituent, determine the ability of the model to predict the measured response in the incomplete compaction regime.

  11. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    ERIC Educational Resources Information Center

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…
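
    A single-class sketch of the article's functional core, the linear-linear spline with an unknown knot, fit by nonlinear least squares on simulated data (the article embeds this form in a growth mixture SEM with latent classes, which this sketch omits):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    t = np.tile(np.arange(10.0), 30)  # 30 subjects, 10 measurement waves
    knot, b0, b1, b2 = 4.0, 1.0, 2.0, 0.5
    y = (b0 + b1 * np.minimum(t, knot) + b2 * np.maximum(t - knot, 0.0)
         + rng.normal(0.0, 0.5, t.size))

    def resid(theta):
        k, a, s1, s2 = theta  # knot, intercept, pre-knot slope, post-knot slope
        return a + s1 * np.minimum(t, k) + s2 * np.maximum(t - k, 0.0) - y

    fit = least_squares(resid, x0=[3.0, 0.0, 1.0, 1.0])
    print("knot, intercept, slope1, slope2:", fit.x.round(2))
    ```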

  12. The utility of estimating population-level trajectories of terminal wellbeing decline within a growth mixture modelling framework.

    PubMed

    Burns, R A; Byles, J; Magliano, D J; Mitchell, P; Anstey, K J

    2015-03-01

    Mortality-related decline has been identified across multiple domains of human functioning, including mental health and wellbeing. The current study utilised a growth mixture modelling framework to establish whether a single population-level trajectory best describes mortality-related changes in both wellbeing and mental health, or whether subpopulations report quite different mortality-related changes. Participants were older-aged (M = 69.59 years; SD = 8.08 years) deceased females (N = 1,862) from the Dynamic Analyses to Optimise Ageing (DYNOPTA) project. Growth mixture models analysed participants' responses on measures of mental health and wellbeing for up to 16 years from death. Multi-level models confirmed overall terminal decline and terminal drop in both mental health and wellbeing. However, modelling data from the same participants within a latent class growth mixture framework indicated that most participants reported stability in mental health (90.3 %) and wellbeing (89.0 %) in the years preceding death. Whilst confirming other population-level analyses which support terminal decline and drop hypotheses in both mental health and wellbeing, we subsequently identified that most of this effect is driven by a small but significant minority of the population. Instead, most individuals report stable levels of mental health and wellbeing in the years preceding death.
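
    A crude two-stage stand-in for the latent-class growth idea (the study fits proper growth mixture models; here each simulated trajectory is reduced to OLS growth coefficients, which are then clustered, and all numbers are invented):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    years = np.arange(-16.0, 0.0)  # years before death

    # 90% stable wellbeing trajectories, 10% declining towards death
    stable = (70.0 + rng.normal(0, 3, (900, 1))
              + rng.normal(0, 2, (900, years.size)))
    decline = 40.0 - 2.0 * years + rng.normal(0, 2, (100, years.size))
    Y = np.vstack([stable, decline])

    coef = np.array([np.polyfit(years, y, 1) for y in Y])  # [slope, intercept]
    gm = GaussianMixture(n_components=2, random_state=0).fit(coef)
    labels = gm.predict(coef)
    print("class sizes:", np.bincount(labels),
          "mean slopes:", gm.means_[:, 0].round(2))
    ```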

  13. Distinguishing Continuous and Discrete Approaches to Multilevel Mixture IRT Models: A Model Comparison Perspective

    ERIC Educational Resources Information Center

    Zhu, Xiaoshu

    2013-01-01

    The current study introduced a general modeling framework, multilevel mixture IRT (MMIRT) which detects and describes characteristics of population heterogeneity, while accommodating the hierarchical data structure. In addition to introducing both continuous and discrete approaches to MMIRT, the main focus of the current study was to distinguish…

  14. Modeling sports highlights using a time-series clustering framework and model interpretation

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, Regunathan; Otsuka, Isao; Xiong, Ziyou; Divakaran, Ajay

    2005-01-01

    In our past work on sports highlights extraction, we have shown the utility of detecting audience reaction using an audio classification framework. The audio classes in the framework were chosen based on intuition. In this paper, we present a systematic way of identifying the key audio classes for sports highlights extraction using a time series clustering framework. We treat the low-level audio features as a time series and model the highlight segments as "unusual" events in a background of a "usual" process. The set of audio classes to characterize the sports domain is then identified by analyzing the consistent patterns in each of the clusters output from the time series clustering framework. The distribution of features from the training data so obtained for each of the key audio classes is parameterized by a Minimum Description Length Gaussian Mixture Model (MDL-GMM). We also interpret the meaning of each of the mixture components of the MDL-GMM for the key audio class (the "highlight" class) that is correlated with highlight moments. Our results show that the "highlight" class is a mixture of audience cheering and commentator's excited speech. Furthermore, we show that the precision-recall performance for highlights extraction based on this "highlight" class is better than that of our previous approach, which uses only audience cheering as the key highlight class.
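
    A minimal sketch of the model-selection step on synthetic features, with BIC standing in for the MDL criterion (the two agree up to constants for this purpose; the real system fits MDL-GMMs to low-level audio features):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (300, 2)),   # "usual" background
                   rng.normal(4.0, 0.5, (150, 2))])  # "unusual" highlight-like blob

    models = [GaussianMixture(k, random_state=0).fit(X) for k in range(1, 7)]
    bic = [m.bic(X) for m in models]
    print("BIC per k:", np.round(bic, 1))
    print("selected number of components:", int(np.argmin(bic)) + 1)
    ```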

  15. BiomeNet: A Bayesian Model for Inference of Metabolic Divergence among Microbial Communities

    PubMed Central

    Chipman, Hugh; Gu, Hong; Bielawski, Joseph P.

    2014-01-01

    Metagenomics yields enormous numbers of microbial sequences that can be assigned a metabolic function. Using such data to infer community-level metabolic divergence is hindered by the lack of a suitable statistical framework. Here, we describe a novel hierarchical Bayesian model, called BiomeNet (Bayesian inference of metabolic networks), for inferring differential prevalence of metabolic subnetworks among microbial communities. To infer the structure of community-level metabolic interactions, BiomeNet applies a mixed-membership modelling framework to enzyme abundance information. The basic idea is that the mixture components of the model (metabolic reactions, subnetworks, and networks) are shared across all groups (microbiome samples), but the mixture proportions vary from group to group. Through this framework, the model can capture nested structures within the data. BiomeNet is unique in modeling each metagenome sample as a mixture of complex metabolic systems (metabosystems). The metabosystems are composed of mixtures of tightly connected metabolic subnetworks. BiomeNet differs from other unsupervised methods by allowing researchers to discriminate groups of samples through the metabolic patterns it discovers in the data, and by providing a framework for interpreting them. We describe a collapsed Gibbs sampler for inference of the mixture weights under BiomeNet, and we use simulation to validate the inference algorithm. Application of BiomeNet to human gut metagenomes revealed a metabosystem with greater prevalence among inflammatory bowel disease (IBD) patients. Based on the discriminatory subnetworks for this metabosystem, we inferred that the community is likely to be closely associated with the human gut epithelium, to resist dietary interventions, and to interfere with human uptake of an antioxidant connected to IBD. Because this metabosystem has a greater capacity to exploit host-associated glycans, we speculate that IBD-associated communities might arise from opportunistic growth of bacteria that can circumvent the host's nutrient-based mechanism for bacterial partner selection. PMID:25412107
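
    BiomeNet itself is a bespoke hierarchical model with a collapsed Gibbs sampler, but its shared-components/varying-proportions idea can be illustrated with an off-the-shelf mixed-membership model (LDA) applied to a toy enzyme-abundance matrix (all counts simulated):

    ```python
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(0)
    # toy "enzyme abundance" table: 20 metagenome samples x 50 reaction counts
    X = rng.poisson(rng.gamma(1.0, 2.0, size=(20, 50)))

    # components ("subnetworks") are shared across samples; mixture
    # proportions vary sample by sample
    lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
    theta = lda.transform(X)  # per-sample component proportions
    print("sample 0 proportions:", theta[0].round(2))
    ```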

  16. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    NASA Astrophysics Data System (ADS)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  17. Bayesian 2-Stage Space-Time Mixture Modeling With Spatial Misalignment of the Exposure in Small Area Health Data.

    PubMed

    Lawson, Andrew B; Choi, Jungsoon; Cai, Bo; Hossain, Monir; Kirby, Russell S; Liu, Jihong

    2012-09-01

    We develop a new Bayesian two-stage space-time mixture model to investigate the effects of air pollution on asthma. The proposed two-stage mixture model allows for the identification of temporal latent structure as well as the estimation of the effects of covariates on health outcomes. In the paper, we also consider spatial misalignment of exposure and health data. A simulation study is conducted to assess the performance of the two-stage mixture model. We apply our statistical framework to a county-level ambulatory care asthma data set in the US state of Georgia for the years 1999-2008.

  18. Robust group-wise rigid registration of point sets using t-mixture model

    NASA Astrophysics Data System (ADS)

    Ravikumar, Nishant; Gooya, Ali; Frangi, Alejandro F.; Taylor, Zeike A.

    2016-03-01

    We present a probabilistic framework for robust, group-wise rigid alignment of point sets using a mixture of Student's t-distributions, suited especially to cases where the point sets are of varying lengths, are corrupted by an unknown degree of outliers, or contain missing data. Medical images (in particular magnetic resonance (MR) images), their segmentations, and consequently the point sets generated from these are highly susceptible to corruption by outliers. This poses a problem for robust correspondence estimation and accurate alignment of shapes, which are necessary for training statistical shape models (SSMs). To address these issues, this study proposes a t-mixture model (TMM) to approximate the underlying joint probability density of a group of similar shapes and align them to a common reference frame. The heavy-tailed nature of t-distributions provides a more robust registration framework in comparison to state-of-the-art algorithms. Significant reduction in alignment errors is achieved in the presence of outliers using the proposed TMM-based group-wise rigid registration method, in comparison to its Gaussian mixture model (GMM) counterparts. The proposed TMM framework is compared with a group-wise variant of the well-known Coherent Point Drift (CPD) algorithm and two other group-wise methods using GMMs, using both synthetic and real data sets. Rigid alignment errors for groups of shapes are quantified using the Hausdorff distance (HD) and quadratic surface distance (QSD) metrics.

  19. Modeling of active transmembrane transport in a mixture theory framework.

    PubMed

    Ateshian, Gerard A; Morrison, Barclay; Hung, Clark T

    2010-05-01

    This study formulates governing equations for active transport across semi-permeable membranes within the framework of the theory of mixtures. In mixture theory, which models the interactions of any number of fluid and solid constituents, a supply term appears in the conservation of linear momentum to describe momentum exchanges among the constituents. In past applications, this momentum supply was used to model frictional interactions only, thereby describing passive transport processes. In this study, it is shown that active transport processes, which impart momentum to solutes or solvent, may also be incorporated in this term. By projecting the equation of conservation of linear momentum along the normal to the membrane, a jump condition is formulated for the mechano-electrochemical potential of fluid constituents which is generally applicable to nonequilibrium processes involving active transport. The resulting relations are simple and easy to use, and address an important need in the membrane transport literature.

  20. A Mixtures-of-Trees Framework for Multi-Label Classification

    PubMed Central

    Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos

    2015-01-01

    We propose a new probabilistic approach for multi-label classification that aims to represent the class posterior distribution P(Y|X). Our approach uses a mixture of tree-structured Bayesian networks, which can leverage the computational advantages of conditional tree-structured models and the abilities of mixtures to compensate for tree-structured restrictions. We develop algorithms for learning the model from data and for performing multi-label predictions using the learned model. Experiments on multiple datasets demonstrate that our approach outperforms several state-of-the-art multi-label classification methods. PMID:25927011

  1. A competitive binding model predicts the response of mammalian olfactory receptors to mixtures

    NASA Astrophysics Data System (ADS)

    Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay

    Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system, we need methods to predict responses to complex mixtures from single-odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions in which only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts the responses of 15 (of 18 tested) receptors to within 10-30% of the observed values, for mixtures with 2, 3, and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions capture additional nonlinearities observed in mixture responses, such as suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
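
    The heart of the model is a one-site competitive-binding response; a sketch with invented binding constants and efficacies (the paper fits the analogous parameters per receptor from single-odorant dose-response data):

    ```python
    import numpy as np

    def receptor_response(conc, K, e, n=1.0):
        """Competitive binding: only one odorant molecule occupies the receptor
        at a time, so r = sum_i e_i*(c_i/K_i)^n / (1 + sum_i (c_i/K_i)^n)."""
        x = (np.asarray(conc) / np.asarray(K)) ** n
        return float(np.sum(np.asarray(e) * x) / (1.0 + np.sum(x)))

    K = [1.0, 10.0, 100.0]  # invented per-odorant half-occupancy concentrations
    e = [1.0, 0.6, 0.2]     # invented per-odorant maximal efficacies
    print("mixture response:", round(receptor_response([5.0, 5.0, 5.0], K, e), 3))
    ```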

  2. The Cusp Catastrophe Model as Cross-Sectional and Longitudinal Mixture Structural Equation Models

    PubMed Central

    Chow, Sy-Miin; Witkiewitz, Katie; Grasman, Raoul P. P. P.; Maisto, Stephen A.

    2015-01-01

    Catastrophe theory (Thom, 1972, 1993) is the study of the many ways in which continuous changes in a system’s parameters can result in discontinuous changes in one or several outcome variables of interest. Catastrophe theory–inspired models have been used to represent a variety of change phenomena in the realm of social and behavioral sciences. Despite their promise, widespread applications of catastrophe models have been impeded, in part, by difficulties in performing model fitting and model comparison procedures. We propose a new modeling framework for testing one kind of catastrophe model — the cusp catastrophe model — as a mixture structural equation model (MSEM) when cross-sectional data are available; or alternatively, as an MSEM with regime-switching (MSEM-RS) when longitudinal panel data are available. The proposed models and the advantages offered by this alternative modeling framework are illustrated using two empirical examples and a simulation study. PMID:25822209
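
    For intuition, a small numerical sketch of the deterministic cusp itself, separate from the MSEM machinery: equilibria are the real roots of V'(x) = 0 for the potential V(x) = x^4/4 - beta*x^2/2 - alpha*x, and their count switches between one and three across the bifurcation set 27*alpha^2 = 4*beta^3.

    ```python
    import numpy as np

    def cusp_equilibria(alpha, beta):
        """Real roots of x^3 - beta*x - alpha = 0 (stationary points of V)."""
        roots = np.roots([1.0, 0.0, -beta, -alpha])
        return np.sort(roots[np.isclose(roots.imag, 0.0)].real)

    print(cusp_equilibria(alpha=0.1, beta=1.5))  # bistable region: 3 equilibria
    print(cusp_equilibria(alpha=2.0, beta=0.5))  # outside the cusp: 1 equilibrium
    ```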

  3. Computational modeling of electrically-driven deposition of ionized polydisperse particulate powder mixtures in advanced manufacturing processes

    NASA Astrophysics Data System (ADS)

    Zohdi, T. I.

    2017-07-01

    A key part of emerging advanced additive manufacturing methods is the deposition of specialized particulate mixtures of materials on substrates. For example, in many cases these materials are polydisperse powder mixtures whereby one set of particles is chosen with the objective to electrically, thermally or mechanically functionalize the overall mixture material and another set of finer-scale particles serves as an interstitial filler/binder. Often, achieving controllable, precise, deposition is difficult or impossible using mechanical means alone. It is for this reason that electromagnetically-driven methods are being pursued in industry, whereby the particles are ionized and an electromagnetic field is used to guide them into place. The goal of this work is to develop a model and simulation framework to investigate the behavior of a deposition as a function of an applied electric field. The approach develops a modular discrete-element type method for the simulation of the particle dynamics, which provides researchers with a framework to construct computational tools for this growing industry.
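
    A toy discrete-element-style integrator in the spirit of the paper's framework: ionized particles with Stokes drag are steered to a substrate by a uniform electric field via semi-implicit Euler steps (all material and field parameters are invented, and particle-particle interactions are omitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, dt, steps = 100, 1e-4, 2000
    r = rng.uniform([0, 0, 1e-3], [1e-3, 1e-3, 2e-3], (n, 3))  # positions, m
    v = np.zeros((n, 3))                                       # velocities, m/s
    m = 1e-12                             # particle mass, kg (invented)
    q = rng.normal(1e-15, 2e-16, (n, 1))  # ionization charges, C (invented)
    E = np.array([0.0, 0.0, -1e5])        # field toward the substrate, V/m
    c_drag = 1e-9                         # Stokes drag coefficient, kg/s

    for _ in range(steps):
        F = q * E - c_drag * v            # electrostatic force plus drag
        v += dt * F / m
        r += dt * v
        landed = r[:, 2] <= 0.0           # freeze particles on the substrate
        r[landed, 2] = 0.0
        v[landed] = 0.0

    print("fraction deposited:", np.mean(r[:, 2] == 0.0))
    ```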

  4. A Bayesian framework based on a Gaussian mixture model and radial-basis-function Fisher discriminant analysis (BayGmmKda V1.1) for spatial prediction of floods

    NASA Astrophysics Data System (ADS)

    Tien Bui, Dieu; Hoang, Nhat-Duc

    2017-09-01

    In this study, a probabilistic model, named BayGmmKda, is proposed for flood susceptibility assessment in a study area in central Vietnam. The new model is a Bayesian framework constructed from a combination of a Gaussian mixture model (GMM), radial-basis-function Fisher discriminant analysis (RBFDA), and a geographic information system (GIS) database. In the Bayesian framework, the GMM is used for modeling the data distribution of flood-influencing factors in the GIS database, whereas RBFDA is utilized to construct a latent variable that aims at enhancing the model performance. As a result, the posterior probabilistic output of the BayGmmKda model is used as a flood susceptibility index. Experimental results showed that the proposed hybrid framework is superior to other benchmark models, including the adaptive neuro-fuzzy inference system and the support vector machine. To facilitate the model implementation, a software program for BayGmmKda has been developed in MATLAB. The BayGmmKda program can accurately establish a flood susceptibility map for the study region. Accordingly, local authorities can overlay this susceptibility map onto various land-use maps for the purpose of land-use planning or management.
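
    A stripped-down sketch of the GMM stage on synthetic data: one GMM per class (flood / non-flood), with Bayes' rule turning class likelihoods into a susceptibility score (the actual BayGmmKda adds the RBF Fisher-discriminant latent variable and a GIS database, both omitted here):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.mixture import GaussianMixture

    X, y = make_classification(n_samples=600, n_features=5, random_state=0)
    gmms = {c: GaussianMixture(2, random_state=0).fit(X[y == c]) for c in (0, 1)}
    prior = {c: np.mean(y == c) for c in (0, 1)}

    # log p(x | class) + log p(class), then normalize for a posterior score
    lp = np.column_stack([gmms[c].score_samples(X) + np.log(prior[c])
                          for c in (0, 1)])
    lp -= lp.max(axis=1, keepdims=True)
    susceptibility = np.exp(lp[:, 1]) / np.exp(lp).sum(axis=1)
    print("training accuracy:", np.mean((susceptibility > 0.5) == y).round(3))
    ```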

  5. Correlation of foodstuffs with ethanol-water mixtures with regard to the solubility of migrants from food contact materials.

    PubMed

    Seiler, Annika; Bach, Aurélie; Driffield, Malcolm; Paseiro Losada, Perfecto; Mercea, Peter; Tosa, Valer; Franz, Roland

    2014-01-01

    Today most foods are available in a packed form. During storage, chemical substances may migrate from food packaging materials into food and may therefore be a potential source of consumer exposure. To protect the consumer, standard migration tests are laid down in Regulation (EU) No. 10/2011. When those migration tests are used and additional conservative conventions are applied, the estimated exposure carries large uncertainties, including a certain margin of safety. The research project FACET was therefore initiated within the 7th Framework Programme of the European Commission with the aim of developing a probabilistic migration modelling framework that allows one (1) to calculate migration into foods under real conditions of use and (2) to deliver realistic concentration estimates for consumer exposure modelling for complex packaging materials (including multi-material multilayer structures). Within the FACET project, the aim of this work was a comprehensive, systematic study of the solubility behaviour of foodstuffs for potentially migrating organic chemicals. A rapid and convenient method was therefore established to obtain partition coefficients between polymer and food, KP/F. With this method, approximately 700 time-dependent kinetic experiments from spiked polyethylene films were performed using model migrants, foods and ethanol-water mixtures. The partition coefficients of migrants between polymer and food were compared with those between polymer and ethanol-water mixtures to investigate whether food groups with common migration behaviour could be allocated to certain ethanol-water mixtures. These studies confirmed that the solubility of a migrant depends mainly on the fat content of the food and on the ethanol concentration of the ethanol-water mixture. Dissolution properties of generic food groups for migrants can therefore be assigned to those of ethanol-water mixtures. All foodstuffs (including dry foods), when allocated to FACET model food group codes, can be classified into a reduced number of food categories, each represented by a corresponding ethanol-water equivalency.
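
    Partition coefficients of this kind feed directly into a simple equilibrium mass balance; a sketch with invented numbers, using masses as a proxy for volumes and assuming migration runs to equilibrium:

    ```python
    def equilibrium_food_concentration(c_p0, k_pf, m_p, m_f):
        """Mass balance c_p0*m_p = c_p*m_p + c_f*m_f with partitioning
        c_p = K_PF * c_f gives the equilibrium concentration in the food."""
        return c_p0 * m_p / (k_pf * m_p + m_f)

    # 100 mg/kg migrant in a 2 g film against 200 g of food simulant, K_PF = 50
    print(equilibrium_food_concentration(100.0, 50.0, 2.0, 200.0), "mg/kg in food")
    ```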

  6. A mixed model framework for teratology studies.

    PubMed

    Braeken, Johan; Tuerlinckx, Francis

    2009-10-01

    A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.

  7. Bayesian Hierarchical Grouping: perceptual grouping as mixture estimation

    PubMed Central

    Froyen, Vicky; Feldman, Jacob; Singh, Manish

    2015-01-01

    We propose a novel framework for perceptual grouping based on the idea of mixture models, called Bayesian Hierarchical Grouping (BHG). In BHG we assume that the configuration of image elements is generated by a mixture of distinct objects, each of which generates image elements according to some generative assumptions. Grouping, in this framework, means estimating the number and the parameters of the mixture components that generated the image, including estimating which image elements are “owned” by which objects. We present a tractable implementation of the framework, based on the hierarchical clustering approach of Heller and Ghahramani (2005). We illustrate it with examples drawn from a number of classical perceptual grouping problems, including dot clustering, contour integration, and part decomposition. Our approach yields an intuitive hierarchical representation of image elements, giving an explicit decomposition of the image into mixture components, along with estimates of the probability of various candidate decompositions. We show that BHG accounts well for a diverse range of empirical data drawn from the literature. Because BHG provides a principled quantification of the plausibility of grouping interpretations over a wide range of grouping problems, we argue that it provides an appealing unifying account of the elusive Gestalt notion of Prägnanz. PMID:26322548
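
    BHG's hierarchical machinery is not in any standard library, but its core move, treating grouping as mixture estimation with the number of components inferred rather than fixed, can be sketched for the dot-clustering case with a Dirichlet-process-style mixture (synthetic dots, illustrative prior):

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    dots = np.vstack([rng.normal(m, 0.3, (60, 2))
                      for m in ([0, 0], [3, 0], [1.5, 2.5])])

    # over-specify the number of components and let the prior prune them
    bgm = BayesianGaussianMixture(n_components=10,
                                  weight_concentration_prior=0.1,
                                  random_state=0).fit(dots)
    print("effective number of groups:", int(np.sum(bgm.weights_ > 0.02)))
    ```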

  8. Comparison of detailed and reduced kinetics mechanisms of silane oxidation in the basis of detonation wave structure problem

    NASA Astrophysics Data System (ADS)

    Fedorov, A. V.; Tropin, D. A.; Fomin, P. A.

    2018-03-01

    The paper deals with the structure of detonation waves in a silane-air mixture within the framework of a mathematical model of nonequilibrium gas dynamics. A detailed kinetic scheme of silane oxidation, as well as a newly developed reduced kinetic model of detonation combustion of silane, is used. On this basis, the detonation wave (DW) structure in a stoichiometric silane-air mixture and the dependences of the Chapman-Jouguet parameters of the mixture on the stoichiometric ratio between the fuel (silane) and the oxidizer (air) were obtained.

  9. Mixture models for detecting differentially expressed genes in microarrays.

    PubMed

    Jones, Liat Ben-Tovim; Bean, Richard; McLachlan, Geoffrey J; Zhu, Justin Xi

    2006-10-01

    An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
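
    A compact sketch of the two-component mixture idea on simulated z-scores: with f(z) = pi0*f0(z) + (1 - pi0)*f1(z), each gene's local FDR is pi0*f0(z)/f(z); both components are fitted as Gaussians here for simplicity, whereas the paper's treatment is more general:

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    z = np.concatenate([rng.normal(0, 1, 9000),    # null genes
                        rng.normal(3, 1, 1000)])   # differentially expressed

    gm = GaussianMixture(2, random_state=0).fit(z.reshape(-1, 1))
    null = int(np.argmin(np.abs(gm.means_.ravel())))  # component nearest zero
    pi0 = gm.weights_[null]
    f0 = norm.pdf(z, gm.means_[null, 0], np.sqrt(gm.covariances_[null, 0, 0]))
    f = np.exp(gm.score_samples(z.reshape(-1, 1)))
    lfdr = np.clip(pi0 * f0 / f, 0.0, 1.0)
    print("genes called at local FDR < 0.2:", int(np.sum(lfdr < 0.2)))
    ```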

  10. Dynamic Infinite Mixed-Membership Stochastic Blockmodel.

    PubMed

    Fan, Xuhui; Cao, Longbing; Xu, Richard Yi Da

    2015-09-01

    Directional and pairwise measurements are often used to model interactions in a social network setting. The mixed-membership stochastic blockmodel (MMSB) was a seminal work in this area, and its capabilities have been extended in various ways. However, models such as MMSB face particular challenges in modeling dynamic networks, for example, when the number of communities is unknown. Accordingly, this paper proposes a dynamic infinite mixed-membership stochastic blockmodel, a generalized framework that extends the existing work to potentially infinite communities inside a network in dynamic settings (i.e., networks observed over time). Additional model parameters are introduced to reflect the degree of persistence among one's memberships at consecutive time stamps. Under this framework, two specific models, namely the mixture time-variant and mixture time-invariant models, are proposed to depict two different time correlation structures. Two effective posterior sampling strategies are presented, and their results are reported on synthetic and real-world data.

  11. Statistical mechanics of binary mixture adsorption in metal-organic frameworks in the osmotic ensemble

    NASA Astrophysics Data System (ADS)

    Dunne, Lawrence J.; Manos, George

    2018-03-01

    Although crucial for designing separation processes, little is known experimentally about multi-component adsorption isotherms in comparison with pure single components. Very few binary mixture adsorption isotherms are to be found in the literature, and information about isotherms over a wide range of gas-phase compositions, mechanical pressures and temperatures is lacking. Here, we present a quasi-one-dimensional statistical mechanical model of binary mixture adsorption in metal-organic frameworks (MOFs), treated exactly by a transfer matrix method in the osmotic ensemble. The experimental parameter space may be very complex, and investigations into multi-component mixture adsorption may be guided by theoretical insights. The approach successfully models breathing structural transitions induced by adsorption, giving a good account of the shape of adsorption isotherms of CO2 and CH4 adsorption in MIL-53(Al). Binary mixture isotherms and co-adsorption phase diagrams are also calculated and found to give a good description of the experimental trends in these properties; the wide range of model parameters that reproduces this behaviour suggests that it is generic to MOFs. Finally, a study is made of the influence of mechanical pressure on the shape of CO2 and CH4 adsorption isotherms in MIL-53(Al). Quite modest mechanical pressures can induce significant changes to isotherm shapes in MOFs, with implications for binary mixture separation processes. This article is part of the theme issue 'Modern theoretical chemistry'.
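
    A toy analogue of the transfer-matrix calculation, using a plain 1D lattice gas rather than the paper's MOF model with breathing transitions (parameters in kT units, chosen arbitrarily):

    ```python
    import numpy as np

    def coverage(mu, eps):
        """Mean site occupancy of a 1D lattice gas from its 2x2 transfer
        matrix; mu = adsorbate chemical potential, eps = nearest-neighbour
        pair energy, both in units of kT."""
        def log_lam(m):
            z = np.exp(m)
            T = np.array([[1.0, np.sqrt(z)],
                          [np.sqrt(z), z * np.exp(-eps)]])
            return np.log(np.linalg.eigvalsh(T).max())
        h = 1e-6  # numerical derivative: coverage = d ln(lambda_max) / d mu
        return (log_lam(mu + h) - log_lam(mu - h)) / (2.0 * h)

    for mu in (-2.0, -1.0, 0.0, 1.0, 2.0):
        print(f"mu = {mu:+.1f} kT -> coverage = {coverage(mu, eps=-1.0):.3f}")
    ```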

  13. EFFECTS-BASED CUMULATIVE RISK ASSESSMENT IN A LOW-INCOME URBAN COMMUNITY NEAR A SUPERFUND SITE

    EPA Science Inventory

    We will introduce into the cumulative risk assessment framework novel methods for non-cancer risk assessment, techniques for dose-response modeling that extend insights from chemical mixtures frameworks to non-chemical stressors, multilevel statistical methods used to address ...

  14. Methods of Sparse Modeling and Dimensionality Reduction to Deal with Big Data

    DTIC Science & Technology

    2015-04-01

    Our framework consists of two separate phases: (a) first find an initial space in an unsupervised manner; then (b) utilize label information for supervised learning. Contributions include (1) a model that can learn thousands of topics from a large set of documents and infer the topic mixture of each document, and (2) a method of supervised dimension reduction.

  15. Extending the Distributed Lag Model framework to handle chemical mixtures.

    PubMed

    Bello, Ghalib A; Arora, Manish; Austin, Christine; Horton, Megan K; Wright, Robert O; Gennings, Chris

    2017-07-01

    Distributed Lag Models (DLMs) are used in environmental health studies to analyze the time-delayed effect of an exposure on an outcome of interest. Given the increasing need for analytical tools for evaluation of the effects of exposure to multi-pollutant mixtures, this study attempts to extend the classical DLM framework to accommodate and evaluate multiple longitudinally observed exposures. We introduce two techniques for quantifying the time-varying mixture effect of multiple exposures on an outcome of interest. Lagged WQS, the first technique, is based on Weighted Quantile Sum (WQS) regression, a penalized regression method that estimates mixture effects using a weighted index. We also introduce Tree-based DLMs, a nonparametric alternative for assessment of lagged mixture effects. This technique is based on the Random Forest (RF) algorithm, a nonparametric, tree-based estimation technique that has shown excellent performance in a wide variety of domains. In a simulation study, we tested the feasibility of these techniques and evaluated their performance in comparison to standard methodology. Both methods exhibited relatively robust performance, accurately capturing pre-defined non-linear functional relationships in different simulation settings. Further, we applied these techniques to data on perinatal exposure to environmental metal toxicants, with the goal of evaluating the effects of exposure on neurodevelopment. Our methods identified critical neurodevelopmental windows showing significant sensitivity to metal mixtures.
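
    A loose analogue of the Tree-based DLM idea on simulated data: lagged exposure matrices feed a random forest, and per-lag variable importances flag the critical window (Lagged WQS and the paper's actual method carry more structure; everything below is illustrative):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n, lags = 400, 6
    X1 = rng.normal(size=(n, lags))  # exposure 1 at lags 0..5
    X2 = rng.normal(size=(n, lags))  # exposure 2 at lags 0..5
    y = 1.5 * X1[:, 2] + 0.8 * X2[:, 2] + rng.normal(0, 0.5, n)  # window at lag 2

    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(np.hstack([X1, X2]), y)
    imp = rf.feature_importances_.reshape(2, lags)
    print("importance by exposure x lag (peaks at lag 2):")
    print(imp.round(2))
    ```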

  16. Computational Analyses of Pressurization in Cryogenic Tanks

    NASA Technical Reports Server (NTRS)

    Ahuja, Vineet; Hosangadi, Ashvin; Mattick, Stephen; Lee, Chun P.; Field, Robert E.; Ryan, Harry

    2008-01-01

    A) Advanced gas/liquid framework with real-fluids property routines: I. A multi-fluid formulation was developed in the preconditioned CRUNCH CFD® code in which a mixture of liquid and gases can be specified: a) various options for equation-of-state specification are available (from simplified ideal fluid mixtures to real-fluid EOS such as SRK or BWR models); b) vaporization of liquids, driven by the pressure relative to the vapor pressure, is modeled, and combustion of the vapors is allowed; c) extensive validation has been undertaken. II. Work is ongoing on primary break-up models and surface-tension effects for more rigorous phase-change modeling and interfacial dynamics. B) The framework has been applied to run-time tanks at ground test facilities. C) The framework has been used for J-2 upper-stage tank modeling: 1) NASA MSFC tank pressurization: hydrogen and oxygen tank pre-press, repress and draining are being modeled at NASA MSFC. 2) NASA Ames tank safety effort: liquid hydrogen and oxygen are separated by a baffle in the J-2 tank; we are modeling the pressure rise and possible combustion if a hole develops in the baffle and liquid hydrogen leaks into the oxygen tank. Tank pressure rise rates are simulated and the risk of combustion evaluated.

  17. Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data

    PubMed Central

    Yang, Yan; Simpson, Douglas

    2010-01-01

    Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
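
    A self-contained sketch of one member of this model class, the zero-inflated Poisson, fitted by direct maximum likelihood on simulated data (the paper's unified framework covers many more inflated-mixture variants, along with EM estimation, standard errors, and GEE extensions):

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import poisson

    rng = np.random.default_rng(0)
    n = 1000
    structural_zero = rng.random(n) < 0.3           # 30% excess zeros
    y = np.where(structural_zero, 0, rng.poisson(2.5, n))

    def nll(theta):
        pi = 1.0 / (1.0 + np.exp(-theta[0]))        # zero-inflation probability
        lam = np.exp(theta[1])                      # Poisson mean
        lp_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
        lp_pos = np.log(1.0 - pi) + poisson.logpmf(y, lam)
        return -np.sum(np.where(y == 0, lp_zero, lp_pos))

    res = minimize(nll, x0=[0.0, 0.0])
    print("pi_hat=%.2f, lambda_hat=%.2f"
          % (1.0 / (1.0 + np.exp(-res.x[0])), np.exp(res.x[1])))
    ```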

  18. Mixture Hidden Markov Models in Finance Research

    NASA Astrophysics Data System (ADS)

    Dias, José G.; Vermunt, Jeroen K.; Ramos, Sofia

    Finite mixture models have proven to be a powerful framework whenever unobserved heterogeneity cannot be ignored. We introduce in finance research the Mixture Hidden Markov Model (MHMM) that takes into account time and space heterogeneity simultaneously. This approach is flexible in the sense that it can deal with the specific features of financial time series data, such as asymmetry, kurtosis, and unobserved heterogeneity. This methodology is applied to model simultaneously 12 time series of Asian stock markets indexes. Because we selected a heterogeneous sample of countries including both developed and emerging countries, we expect that heterogeneity in market returns due to country idiosyncrasies will show up in the results. The best fitting model was the one with two clusters at country level with different dynamics between the two regimes.

  19. A Mixture Modeling Framework for Differential Analysis of High-Throughput Data

    PubMed Central

    Taslim, Cenny; Lin, Shili

    2014-01-01

    The inventions of microarray and next-generation sequencing technologies have revolutionized research in genomics; these platforms have led to massive amounts of data in gene expression, methylation, and protein-DNA interactions. A common theme among a number of biological problems using high-throughput technologies is differential analysis. Despite the common theme, different data types have their own unique features, creating a "moving target" scenario. As such, methods specifically designed for one data type may not lead to satisfactory results when applied to another data type. To meet this challenge, so that not only currently existing data types but also data from future problems, platforms, or experiments can be analyzed, we propose a mixture modeling framework that is flexible enough to automatically adapt to any moving target. More specifically, the approach considers several classes of mixture models and essentially provides a model-based procedure whose model is adaptive to the particular data being analyzed. We demonstrate the utility of the methodology by applying it to three types of real data: gene expression, methylation, and ChIP-seq. We also carried out simulations to gauge the performance and showed that the approach can be more efficient than any individual model without inflating the type I error. PMID:25057284

  20. Implementation and Validation of the Viscoelastic Continuum Damage Theory for Asphalt Mixture and Pavement Analysis in Brazil

    NASA Astrophysics Data System (ADS)

    Nascimento, Luis Alberto Herrmann do

    This dissertation presents the implementation and validation of the viscoelastic continuum damage (VECD) model for asphalt mixture and pavement analysis in Brazil. It proposes a simulated damage-to-fatigue cracked area transfer function for the layered viscoelastic continuum damage (LVECD) program framework and defines the model framework's fatigue cracking prediction error for asphalt pavement reliability-based design solutions in Brazil. The research is divided into three main steps: (i) implementation of the simplified viscoelastic continuum damage (S-VECD) model in Brazil (Petrobras) for asphalt mixture characterization, (ii) validation of the LVECD model approach for pavement analysis based on field performance observations, and defining a local simulated damage-to-cracked area transfer function for the Fundao Project's pavement test sections in Rio de Janeiro, RJ, and (iii) validation of the Fundao project local transfer function to be used throughout Brazil for asphalt pavement fatigue cracking predictions, based on field performance observations of the National MEPDG Project's pavement test sections, thereby validating the proposed framework's prediction capability. For the first step, the S-VECD test protocol, which uses controlled-on-specimen strain mode-of-loading, was successfully implemented at the Petrobras and used to characterize Brazilian asphalt mixtures that are composed of a wide range of asphalt binders. This research verified that the S-VECD model coupled with the GR failure criterion is accurate for fatigue life predictions of Brazilian asphalt mixtures, even when very different asphalt binders are used. Also, the applicability of the load amplitude sweep (LAS) test for the fatigue characterization of the asphalt binders was checked, and the effects of different asphalt binders on the fatigue damage properties of the asphalt mixtures was investigated. The LAS test results, modeled according to VECD theory, presented a strong correlation with the asphalt mixtures' fatigue performance. In the second step, the S-VECD test protocol was used to characterize the asphalt mixtures used in the 27 selected Fundao project test sections and subjected to real traffic loading. Thus, the asphalt mixture properties, pavement structure data, traffic loading, and climate were input into the LVECD program for pavement fatigue cracking performance simulations. The simulation results showed good agreement with the field-observed distresses. Then, a damage shift approach, based on the initial simulated damage growth rate, was introduced in order to obtain a unique relationship between the LVECD-simulated shifted damage and the pavement-observed fatigue cracked areas. This correlation was fitted to a power form function and defined as the averaged reduced damage-to-cracked area transfer function. The last step consisted of using the averaged reduced damage-to-cracked area transfer function that was developed in the Fundao project to predict pavement fatigue cracking in 17 National MEPDG project test sections. The procedures for the material characterization and pavement data gathering adopted in this step are similar to those used for the Fundao project simulations. 
This research verified that the transfer function defined for the Fundao project sections can be used for the fatigue performance predictions of a wide range of pavements all over Brazil, as the predicted and observed cracked areas for the National MEPDG pavements presented good agreement, following the same trends found for the Fundao project pavement sites. Based on the prediction errors determined for all 44 pavement test sections (Fundao and National MEPDG test sections), the proposed framework's prediction capability was determined so that reliability-based solutions can be applied for flexible pavement design. It was concluded that the proposed LVECD program framework has very good fatigue cracking prediction capability.

  1. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    PubMed Central

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability. PMID:26941699

  3. Bayesian stable isotope mixing models

    EPA Science Inventory

    In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...

  4. Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.

    PubMed

    Gajewski, Byron J; Mayo, Matthew S

    2006-08-15

    A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians: the first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that the simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
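
    The conjugacy the authors exploit makes the posterior easy to compute: a mixture-of-Betas prior updated with binomial data is again a mixture of Betas, with component weights rescaled by Beta-Binomial marginal likelihoods. The sketch below uses hypothetical priors and a deliberately simplified design criterion (it fixes the assumed observed response rate rather than averaging over a predictive distribution) to scan for the smallest n meeting a posterior-probability threshold.

```python
# Minimal sketch (not the authors' exact design): posterior updating of a
# two-component Beta mixture prior under a binomial likelihood, and the
# smallest n for which an assumed observed response rate makes
# P(theta > target | data) exceed a desired level. All numbers are hypothetical.
import numpy as np
from scipy.stats import beta, betabinom

def posterior_mixture(weights, priors, x, n):
    """Update a mixture-of-Beta prior with x successes in n trials."""
    # Component k's marginal likelihood is Beta-Binomial(n, a_k, b_k) at x.
    marg = np.array([betabinom.pmf(x, n, a, b) for (a, b) in priors])
    w = np.array(weights) * marg
    w /= w.sum()
    post = [(a + x, b + n - x) for (a, b) in priors]
    return w, post

def prob_exceeds(weights, comps, target):
    return sum(w * beta.sf(target, a, b) for w, (a, b) in zip(weights, comps))

# Pessimistic Beta(2, 8) and optimistic Beta(8, 2) clinicians, equal weight.
prior_w, prior = [0.5, 0.5], [(2, 8), (8, 2)]
target, level, assumed_rate = 0.30, 0.90, 0.45
for n in range(10, 200):
    x = round(assumed_rate * n)
    w, post = posterior_mixture(prior_w, prior, x, n)
    if prob_exceeds(w, post, target) >= level:
        print(f"smallest n = {n}")
        break
```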

  5. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375

  6. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
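
    Off-the-shelf libraries do not implement a multivariate-t mixture with an embedded Box-Cox transformation, so the following is only a simplified analogue: each coordinate is Box-Cox-transformed up front and a Gaussian mixture is then fitted, whereas the paper estimates the transformation jointly within EM and uses heavy-tailed t components. Data are synthetic.

```python
# Simplified analogue of the transform-plus-mixture idea described above.
import numpy as np
from scipy.stats import boxcox
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two right-skewed 2-D clusters (lognormal), positive-valued as Box-Cox requires.
X = np.vstack([rng.lognormal(0.0, 0.4, (200, 2)),
               rng.lognormal(1.2, 0.4, (200, 2))])

# Per-coordinate maximum-likelihood Box-Cox transform to reduce skewness.
Xt = np.column_stack([boxcox(X[:, j])[0] for j in range(X.shape[1])])

gmm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(Xt)
labels = gmm.predict(Xt)
print("cluster sizes:", np.bincount(labels))
```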

  7. Finite-deformation phase-field chemomechanics for multiphase, multicomponent solids

    NASA Astrophysics Data System (ADS)

    Svendsen, Bob; Shanthraj, Pratheek; Raabe, Dierk

    2018-03-01

    The purpose of this work is the development of a framework for the formulation of geometrically non-linear inelastic chemomechanical models for a mixture of multiple chemical components diffusing among multiple transforming solid phases. The focus here is on general model formulation. No specific model or application is pursued in this work. To this end, basic balance and constitutive relations from non-equilibrium thermodynamics and continuum mixture theory are combined with a phase-field-based description of multicomponent solid phases and their interfaces. Solid phase modeling is based in particular on a chemomechanical free energy and stress relaxation via the evolution of phase-specific concentration fields, order-parameter fields (e.g., related to chemical ordering, structural ordering, or defects), and local internal variables. At the mixture level, differences or contrasts in phase composition and phase local deformation in phase interface regions are treated as mixture internal variables. In this context, various phase interface models are considered. In the equilibrium limit, phase contrasts in composition and local deformation in the phase interface region are determined via bulk energy minimization. On the chemical side, the equilibrium limit of the current model formulation reduces to a multicomponent, multiphase, generalization of existing two-phase binary alloy interface equilibrium conditions (e.g., KKS). On the mechanical side, the equilibrium limit of one interface model considered represents a multiphase generalization of Reuss-Sachs conditions from mechanical homogenization theory. Analogously, other interface models considered represent generalizations of interface equilibrium conditions consistent with laminate and sharp-interface theory. In the last part of the work, selected existing models are formulated within the current framework as special cases and discussed in detail.

  8. Multinomial N-mixture models improve the applicability of electrofishing for developing population estimates of stream-dwelling Smallmouth Bass

    USGS Publications Warehouse

    Mollenhauer, Robert; Brewer, Shannon K.

    2017-01-01

    Failure to account for variable detection across survey conditions constrains progressive stream ecology and can lead to erroneous stream fish management and conservation decisions. In addition to variable detection’s confounding long-term stream fish population trends, reliable abundance estimates across a wide range of survey conditions are fundamental to establishing species–environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and CPUE remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates that were related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys. Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the hierarchical framework. We demonstrate the application of this contemporary population estimation method to address a longstanding stream fish management issue. We also detail the advantages and trade-offs of hierarchical population estimation methods relative to CPUE and estimation methods that model each site separately.
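
    As a minimal illustration of the multinomial N-mixture structure for removal sampling: with site abundances N_i ~ Poisson(λ) and pass-specific cell probabilities π_j = p(1−p)^(j−1), the marginal pass counts are independent Poisson(λπ_j), so the integrated likelihood has a simple closed form. The sketch below simulates data and recovers λ and p by maximum likelihood; a real analysis would add covariates on both abundance and detection.

```python
# Minimal sketch of a removal-sampling multinomial N-mixture likelihood
# with Poisson abundance. Synthetic data; constant lambda and p only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(1)
lam_true, p_true, J, n_sites = 40.0, 0.35, 3, 55
pi = p_true * (1 - p_true) ** np.arange(J)
y = rng.poisson(lam_true * pi, size=(n_sites, J))  # removal counts per pass

def nll(theta):
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))  # log / logit scale
    cell = p * (1 - p) ** np.arange(J)
    return -poisson.logpmf(y, lam * cell).sum()

fit = minimize(nll, x0=[np.log(10.0), 0.0], method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
print(f"lambda_hat = {lam_hat:.1f}, p_hat = {p_hat:.2f}")
```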

  9. Pharmacokinetic Modeling of JP-8 Jet Fuel Components: II. A Conceptual Framework

    DTIC Science & Technology

    2003-12-01

    example, a single type of (simple) binary interaction between 300 components would require the specification of some 10^5 interaction coefficients. One...individual substances, via binary mechanisms, is enough to predict the interactions present in the mixture. Secondly, complex mixtures can often be...approximated as pseudo-binary systems, consisting of the compound of interest plus a single interacting complex vehicle with well-defined, composite

  10. Joint model-based clustering of nonlinear longitudinal trajectories and associated time-to-event data analysis, linked by latent class membership: with application to AIDS clinical studies.

    PubMed

    Huang, Yangxin; Lu, Xiaosun; Chen, Jiaqing; Liang, Juan; Zangmeister, Miriam

    2017-10-27

    Longitudinal and time-to-event data are often observed together. Finite mixture models are currently used to analyze nonlinear heterogeneous longitudinal data, which, by releasing the homogeneity restriction of nonlinear mixed-effects (NLME) models, can cluster individuals into one of the pre-specified classes with class membership probabilities. This clustering may have clinical significance, and be associated with clinically important time-to-event data. This article develops a joint modeling approach that links a finite mixture of NLME models for longitudinal data and a proportional hazards Cox model for time-to-event data through individual latent class indicators, under a Bayesian framework. The proposed joint models and method are applied to a real AIDS clinical trial data set, followed by simulation studies to assess the performance of the proposed joint model and a naive two-step model, in which the finite mixture model and the Cox model are fitted separately.

  11. Damage/fault diagnosis in an operating wind turbine under uncertainty via a vibration response Gaussian mixture random coefficient model based framework

    NASA Astrophysics Data System (ADS)

    Avendaño-Valencia, Luis David; Fassois, Spilios D.

    2017-07-01

    The study focuses on vibration response based health monitoring for an operating wind turbine, which features time-dependent dynamics under environmental and operational uncertainty. A Gaussian Mixture Model Random Coefficient (GMM-RC) model based Structural Health Monitoring framework postulated in a companion paper is adopted and assessed. The assessment is based on vibration response signals obtained from a simulated offshore 5 MW wind turbine. The non-stationarity in the vibration signals originates from the inertial properties, which continually evolve due to blade rotation, as well as from the wind characteristics, while uncertainty is introduced by random variations of the wind speed within the range of 10-20 m/s. Monte Carlo simulations are performed using six distinct structural states, including the healthy state and five types of damage/fault in the tower, the blades, and the transmission, with each one of them characterized by four distinct levels. Random vibration response modeling and damage diagnosis are illustrated, along with pertinent comparisons with state-of-the-art diagnosis methods. The results demonstrate consistently good performance of the GMM-RC model based framework, offering significant performance improvements over state-of-the-art methods. Most damage types and levels are shown to be properly diagnosed using a single vibration sensor.

  12. Regulatory assessment of chemical mixtures: Requirements, current approaches and future perspectives.

    PubMed

    Kienzler, Aude; Bopp, Stephanie K; van der Linden, Sander; Berggren, Elisabet; Worth, Andrew

    2016-10-01

    This paper reviews regulatory requirements and recent case studies to illustrate how the risk assessment (RA) of chemical mixtures is conducted, considering both the effects on human health and on the environment. A broad range of chemicals, regulations and RA methodologies are covered, in order to identify mixtures of concern, gaps in the regulatory framework, data needs, and further work to be carried out. The current and potential future use of novel tools (Adverse Outcome Pathways, in silico tools, toxicokinetic modelling, etc.) in the RA of combined effects was also reviewed. The assumptions made in the RA, predictive model specifications and the choice of toxic reference values can greatly influence the assessment outcome, and should therefore be specifically justified. Novel tools could support mixture RA mainly by providing a better understanding of the underlying mechanisms of combined effects. Nevertheless, their use is currently limited because of a lack of guidance, data, and expertise. More guidance is needed to facilitate their application. As far as the authors are aware, no prospective RA concerning chemicals related to various regulatory sectors has been performed to date, even though numerous chemicals are registered under several regulatory frameworks. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Prediction of molecular separation of polar-apolar mixtures on heterogeneous metal-organic frameworks: HKUST-1.

    PubMed

    Van Assche, Tom R C; Duerinck, Tim; Van der Perre, Stijn; Baron, Gino V; Denayer, Joeri F M

    2014-07-08

    Due to the combination of metal ions and organic linkers and the presence of different types of cages and channels, metal-organic frameworks often possess a large structural and chemical heterogeneity, complicating their adsorption behavior, especially for polar-apolar adsorbate mixtures. By allocating isotherms to individual subunits in the structure, the ideal adsorbed solution theory (IAST) can be adjusted to cope with this heterogeneity. The binary adsorption of methanol and n-hexane on HKUST-1 is analyzed using this segregated IAST (SIAST) approach and offers a significant improvement over the standard IAST model predictions. It identifies the various HKUST-1 cages to have a pronounced polar or apolar adsorptive behavior.
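
    For reference, a standard IAST prediction for a binary mixture with single-site Langmuir pure-component isotherms looks like the sketch below; the paper's segregated SIAST variant additionally assigns separate isotherms to the structural subunits of HKUST-1 and sums their contributions. All parameter values here are hypothetical.

```python
# Illustrative standard IAST calculation for a binary mixture with single-site
# Langmuir isotherms (not the segregated SIAST of the paper). Hypothetical
# parameters; units are nominal (kPa, mol/kg).
import numpy as np
from scipy.optimize import brentq

def psi(P, qm, K):
    """Reduced spreading pressure for a Langmuir isotherm."""
    return qm * np.log1p(K * P)

def iast_binary(p1, p2, qm1, K1, qm2, K2):
    # Find the adsorbed-phase fraction x1 that equalizes spreading pressures.
    f = lambda x1: psi(p1 / x1, qm1, K1) - psi(p2 / (1 - x1), qm2, K2)
    x1 = brentq(f, 1e-9, 1 - 1e-9)
    q1_0 = qm1 * K1 * (p1 / x1) / (1 + K1 * p1 / x1)              # pure loadings
    q2_0 = qm2 * K2 * (p2 / (1 - x1)) / (1 + K2 * p2 / (1 - x1))  # at reference P
    qT = 1.0 / (x1 / q1_0 + (1 - x1) / q2_0)
    return x1 * qT, (1 - x1) * qT

q_methanol, q_hexane = iast_binary(p1=5.0, p2=5.0, qm1=10.0, K1=0.8, qm2=6.0, K2=0.3)
print(f"loadings: {q_methanol:.2f}, {q_hexane:.2f} mol/kg")
```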

  14. Assessment of two-phase flow on the chemical alteration and sealing of leakage pathways in cemented wellbores

    DOE PAGES

    Iyer, Jaisree; Walsh, Stuart D. C.; Hao, Yue; ...

    2018-01-08

    Wellbore leakage tops the list of perceived risks to the long-term geologic storage of CO2, because wells provide a direct path between the CO2 storage reservoir and the atmosphere. In this paper, we have coupled a two-phase flow model with our original framework that combined models for reactive transport of carbonated brine, geochemistry of reacting cement, and geomechanics to predict the permeability evolution of cement fractures. Additionally, this makes the framework suitable for field conditions in geological storage sites, permitting simulation of contact between cement and mixtures of brine and supercritical CO2. Due to lack of conclusive experimental data, we tried both linear and Corey relative permeability models to simulate flow of the two phases in cement fractures. The model also includes two options to account for the inconsistent experimental observations regarding cement reactivity with two-phase CO2-brine mixtures. One option assumes that the reactive surface area is independent of the brine saturation and the second option assumes that the reactive surface area is proportional to the brine saturation. We have applied the model to predict the extent of cement alteration, the conditions under which fractures seal, the time it takes to seal a fracture, and the leakage rates of CO2 and brine when damage zones in the wellbore are exposed to two-phase CO2-brine mixtures. Initial brine residence time and the initial fracture aperture are critical parameters that affect the fracture sealing behavior. We also evaluated the importance of the model assumptions regarding relative permeability and cement reactivity. These results illustrate the need to understand how mixtures of carbon dioxide and brine flow through fractures and react with cement to make reasonable predictions regarding well integrity. For example, a reduction in the cement reactivity with the two-phase CO2-brine mixture can not only significantly increase the sealing time for fractures but may also prevent fracture sealing.
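
    The two relative-permeability closures the authors compare have simple standard forms; the sketch below writes them out with placeholder residual saturations and Corey exponents, since the paper reports no fitted values.

```python
# The two relative-permeability closures mentioned above, in minimal form.
# Residual saturations and Corey exponents are hypothetical placeholders.
import numpy as np

def kr_linear(Sw):
    """Linear model: each phase's relative permeability equals its saturation."""
    return Sw, 1.0 - Sw                      # (brine, CO2)

def kr_corey(Sw, Swr=0.2, Sgr=0.05, nw=4.0, ng=2.0):
    """Corey model based on the effective (mobile) brine saturation."""
    Se = np.clip((Sw - Swr) / (1.0 - Swr - Sgr), 0.0, 1.0)
    return Se ** nw, (1.0 - Se) ** ng

for Sw in (0.3, 0.6, 0.9):
    print(Sw, kr_linear(Sw), kr_corey(Sw))
```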

  15. Assessment of two-phase flow on the chemical alteration and sealing of leakage pathways in cemented wellbores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iyer, Jaisree; Walsh, Stuart D. C.; Hao, Yue

    Wellbore leakage tops the list of perceived risks to the long-term geologic storage of CO2, because wells provide a direct path between the CO2 storage reservoir and the atmosphere. In this paper, we have coupled a two-phase flow model with our original framework that combined models for reactive transport of carbonated brine, geochemistry of reacting cement, and geomechanics to predict the permeability evolution of cement fractures. Additionally, this makes the framework suitable for field conditions in geological storage sites, permitting simulation of contact between cement and mixtures of brine and supercritical CO2. Due to lack of conclusive experimental data, we tried both linear and Corey relative permeability models to simulate flow of the two phases in cement fractures. The model also includes two options to account for the inconsistent experimental observations regarding cement reactivity with two-phase CO2-brine mixtures. One option assumes that the reactive surface area is independent of the brine saturation and the second option assumes that the reactive surface area is proportional to the brine saturation. We have applied the model to predict the extent of cement alteration, the conditions under which fractures seal, the time it takes to seal a fracture, and the leakage rates of CO2 and brine when damage zones in the wellbore are exposed to two-phase CO2-brine mixtures. Initial brine residence time and the initial fracture aperture are critical parameters that affect the fracture sealing behavior. We also evaluated the importance of the model assumptions regarding relative permeability and cement reactivity. These results illustrate the need to understand how mixtures of carbon dioxide and brine flow through fractures and react with cement to make reasonable predictions regarding well integrity. For example, a reduction in the cement reactivity with the two-phase CO2-brine mixture can not only significantly increase the sealing time for fractures but may also prevent fracture sealing.

  16. Development and validation of a metal mixture bioavailability model (MMBM) to predict chronic toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia.

    PubMed

    Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C

    2017-01-01

    Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, we developed in the present study a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7d) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; Dissolved Organic Carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmental (i.e. realistic) metal concentration ratios. Statistical analysis of mixture effects revealed that observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) in 85% of the treatments with less than 20% error. Moreover, the MMBM predicted chronic toxicity of the ternary Ni-Zn-Pb mixture at least as accurately as the toxicity of the individual metal treatments (RMSE: mixture = 16; Zn-only = 18; Ni-only = 17; Pb-only = 23). Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on metal mixture toxicity during chronic exposure and could be used in metal risk assessment frameworks. Copyright © 2016 Elsevier Ltd. All rights reserved.
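
    The two reference models against which the interactions were judged are straightforward to compute for log-logistic concentration-response curves, as sketched below. The curves and concentrations are hypothetical; the MMBM itself additionally passes each metal through its chronic bioavailability model before applying IA.

```python
# Minimal sketch of the two reference models for mixture toxicity, using
# hypothetical log-logistic concentration-response curves for Ni, Zn and Pb.
import numpy as np
from scipy.optimize import brentq

ec50 = {"Ni": 30.0, "Zn": 150.0, "Pb": 60.0}   # hypothetical, ug/L
slope = {"Ni": 2.0, "Zn": 1.5, "Pb": 2.5}
conc = {"Ni": 15.0, "Zn": 90.0, "Pb": 20.0}    # exposure concentrations

def effect(c, e50, b):
    return 1.0 / (1.0 + (e50 / c) ** b)

# Independent action: multiply the unaffected fractions across metals.
e_ia = 1.0 - np.prod([1.0 - effect(conc[m], ec50[m], slope[m]) for m in conc])

# Concentration addition: find the effect level x with sum_i c_i / EC_x,i = 1,
# where EC_x,i = EC50_i * (x / (1 - x))**(1 / slope_i) for log-logistic curves.
g = lambda x: sum(conc[m] / (ec50[m] * (x / (1 - x)) ** (1 / slope[m]))
                  for m in conc) - 1.0
e_ca = brentq(g, 1e-9, 1 - 1e-9)
print(f"IA: {e_ia:.2f}, CA: {e_ca:.2f}")
```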

  17. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    PubMed Central

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-01-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements. PMID:27112127

  18. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate.

    PubMed

    Pradines, Joël R; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-26

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  19. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    NASA Astrophysics Data System (ADS)

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  20. An introduction to mixture item response theory models.

    PubMed

    De Ayala, R J; Santiago, S Y

    2017-02-01

    Mixture item response theory (IRT) allows one to address situations that involve a mixture of latent subpopulations that are qualitatively different but within which a measurement model based on a continuous latent variable holds. In this modeling framework, one can characterize students by both their location on a continuous latent variable as well as by their latent class membership. For example, in a study of risky youth behavior this approach would make it possible to estimate an individual's propensity to engage in risky youth behavior (i.e., on a continuous scale) and to use these estimates to identify youth who might be at the greatest risk given their class membership. Mixture IRT can be used with binary response data (e.g., true/false, agree/disagree, endorsement/not endorsement, correct/incorrect, presence/absence of a behavior), Likert response scales, partial correct scoring, nominal scales, or rating scales. In the following, we present mixture IRT modeling and two examples of its use. Data needed to reproduce analyses in this article are available as supplemental online materials at http://dx.doi.org/10.1016/j.jsp.2016.01.002. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  1. Ab Initio Studies of Shock-Induced Chemical Reactions of Inter-Metallics

    NASA Astrophysics Data System (ADS)

    Zaharieva, Roussislava; Hanagud, Sathya

    2009-06-01

    Shock-induced and shock-assisted chemical reactions of intermetallic mixtures are studied by many researchers, using both experimental and theoretical techniques. The theoretical studies are primarily at continuum scales. The model frameworks include mixture theories and meso-scale models of grains of porous mixtures. The reaction models vary from equilibrium thermodynamic models to several non-equilibrium thermodynamic models. The shock effects are primarily studied using appropriate conservation equations and numerical techniques to integrate the equations. All these models require material constants from experiments and estimates of transition states. Thus, the objective of this paper is to present studies based on ab initio techniques. The ab initio studies, to date, use ab initio molecular dynamics. This paper presents a study that uses shock pressures and associated temperatures as starting variables. The intermetallic mixtures are then modeled as slabs. The required shock stresses are created by straining the lattice. Then, ab initio binding energy calculations are used to examine the stability of the reactions. Binding energies are obtained for different strain components superimposed on uniform compression and finite temperatures. Then, vibrational frequencies and nudged elastic band techniques are used to study reactivity and transition states. Examples include Ni and Al.

  2. Large-scale monitoring of shorebird populations using count data and N-mixture models: Black Oystercatcher (Haematopus bachmani) surveys by land and sea

    USGS Publications Warehouse

    Lyons, James E.; Royle, J. Andrew; Thomas, Susan M.; Elliott-Smith, Elise; Evenson, Joseph R.; Kelly, Elizabeth G.; Milner, Ruth L.; Nysewander, David R.; Andres, Brad A.

    2012-01-01

    Large-scale monitoring of bird populations is often based on count data collected across spatial scales that may include multiple physiographic regions and habitat types. Monitoring at large spatial scales may require multiple survey platforms (e.g., from boats and land when monitoring coastal species) and multiple survey methods. It becomes especially important to explicitly account for detection probability when analyzing count data that have been collected using multiple survey platforms or methods. We evaluated a new analytical framework, N-mixture models, to estimate actual abundance while accounting for multiple detection biases. During May 2006, we made repeated counts of Black Oystercatchers (Haematopus bachmani) from boats in the Puget Sound area of Washington (n = 55 sites) and from land along the coast of Oregon (n = 56 sites). We used a Bayesian analysis of N-mixture models to (1) assess detection probability as a function of environmental and survey covariates and (2) estimate total Black Oystercatcher abundance during the breeding season in the two regions. Probability of detecting individuals during boat-based surveys was 0.75 (95% credible interval: 0.42–0.91) and was not influenced by tidal stage. Detection probability from surveys conducted on foot was 0.68 (0.39–0.90); the latter was not influenced by fog, wind, or number of observers but was ~35% lower during rain. The estimated population size was 321 birds (262–511) in Washington and 311 (276–382) in Oregon. N-mixture models provide a flexible framework for modeling count data and covariates in large-scale bird monitoring programs designed to understand population change.
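
    For intuition, the repeated-count (binomial) N-mixture likelihood underlying this analysis can be written down and maximized directly by truncating the sum over the unobserved abundance N at a large K. The sketch below does this for simulated data with constant λ and p; the study itself used a Bayesian fit with survey covariates on detection.

```python
# Minimal maximum-likelihood sketch of a binomial N-mixture model for
# repeated counts. Synthetic data; no covariates.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, binom

rng = np.random.default_rng(2)
lam_true, p_true, n_sites, J, K = 6.0, 0.7, 55, 3, 60
N = rng.poisson(lam_true, n_sites)
y = rng.binomial(N[:, None], p_true, (n_sites, J))  # repeated counts

def nll(theta):
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    Ns = np.arange(K + 1)
    logprior = poisson.logpmf(Ns, lam)          # Poisson prior over abundance
    ll = 0.0
    for i in range(n_sites):
        # Binomial likelihood of the J counts for every candidate N, summed out.
        logobs = binom.logpmf(y[i][:, None], Ns[None, :], p).sum(axis=0)
        ll += np.logaddexp.reduce(logprior + logobs)
    return -ll

fit = minimize(nll, [np.log(3.0), 0.0], method="Nelder-Mead")
print(np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1])))  # approx (6.0, 0.7)
```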

  3. A numerical study of granular dam-break flow

    NASA Astrophysics Data System (ADS)

    Pophet, N.; Rébillout, L.; Ozeren, Y.; Altinakar, M.

    2017-12-01

    Accurate prediction of granular flow behavior is essential to optimize mitigation measures for hazardous natural granular flows such as landslides, debris flows and tailings-dam break flows. So far, most successful models for these types of flows focus on either pure granular flows or flows of saturated grain-fluid mixtures by employing a constant friction model or more complex rheological models. These saturated models often produce non-physical results when they are applied to simulate flows of partially saturated mixtures. Therefore, more advanced models are needed. A numerical model was developed for granular flow employing a constant friction model and the μ(I) rheology (Jop et al., J. Fluid Mech. 2005), coupled with a groundwater flow model for seepage flow. The granular flow is simulated by solving a mixture model using the Finite Volume Method (FVM). The Volume-of-Fluid (VOF) technique is used to capture the free surface motion. The constant friction and μ(I) rheological models are incorporated in the mixture model. The seepage flow is modeled by solving the Richards equation. A framework is developed to couple these two solvers in OpenFOAM. The model was validated and tested by reproducing laboratory experiments of partially and fully channelized dam-break flows of dry and initially saturated granular material. To obtain appropriate parameters for the rheological models, a series of simulations with different sets of rheological parameters is performed. The simulation results obtained from the constant friction and μ(I) rheological models are compared with laboratory experiments in terms of the granular free-surface interface, front position and velocity field during the flows. The numerical predictions indicate that the proposed model is promising in predicting the dynamics of the flow and deposition process. The proposed model may provide more reliable insight than previous models that assume a saturated mixture, when saturated and partially saturated portions of a granular mixture co-exist.
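
    The μ(I) rheology cited above has the standard closed form μ(I) = μ_s + (μ_2 − μ_s)/(I_0/I + 1), with inertial number I = γ̇d/√(P/ρ_s). The sketch below evaluates it with the glass-bead parameter values commonly quoted alongside this law; they are illustrative, not this paper's calibration.

```python
# The mu(I) granular friction law in its standard form (Jop et al.).
# Default parameters are the commonly quoted glass-bead values.
import numpy as np

def mu_I(I, mu_s=np.tan(np.radians(20.9)), mu_2=np.tan(np.radians(32.76)), I0=0.279):
    return mu_s + (mu_2 - mu_s) / (I0 / I + 1.0)

def inertial_number(gamma_dot, d, P, rho_s):
    """I = shear rate * grain diameter / sqrt(pressure / grain density)."""
    return gamma_dot * d / np.sqrt(P / rho_s)

I = inertial_number(gamma_dot=50.0, d=5e-4, P=500.0, rho_s=2500.0)
print(f"I = {I:.3f}, mu(I) = {mu_I(I):.3f}")
```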

  4. A Physically Based Framework for Modelling the Organic Fractionation of Sea Spray Aerosol from Bubble Film Langmuir Equilibria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrows, Susannah M.; Ogunro, O.; Frossard, Amanda

    2014-12-19

    The presence of a large fraction of organic matter in primary sea spray aerosol (SSA) can strongly affect its cloud condensation nuclei activity and interactions with marine clouds. Global climate models require new parameterizations of the SSA composition in order to improve the representation of these processes. Existing proposals for such a parameterization use remotely-sensed chlorophyll-a concentrations as a proxy for the biogenic contribution to the aerosol. However, both observations and theoretical considerations suggest that existing relationships with chlorophyll-a, derived from observations at only a few locations, may not be representative for all ocean regions. We introduce a novel framework for parameterizing the fractionation of marine organic matter into SSA based on a competitive Langmuir adsorption equilibrium at bubble surfaces. Marine organic matter is partitioned into classes with differing molecular weights, surface excesses, and Langmuir adsorption parameters. The classes include a lipid-like mixture associated with labile dissolved organic carbon (DOC), a polysaccharide-like mixture associated primarily with semi-labile DOC, a protein-like mixture with concentrations intermediate between lipids and polysaccharides, a processed mixture associated with recalcitrant surface DOC, and a deep abyssal humic-like mixture. Box model calculations have been performed for several cases of organic adsorption to illustrate the underlying concepts. We then apply the framework to output from a global marine biogeochemistry model, by partitioning total dissolved organic carbon into several classes of macromolecule. Each class is represented by model compounds with physical and chemical properties based on existing laboratory data. This allows us to globally map the predicted organic mass fraction of the nascent submicron sea spray aerosol. Predicted relationships between chlorophyll-a and organic fraction are similar to existing empirical parameterizations, but can vary between biologically productive and non-productive regions, and seasonally within a given region. Major uncertainties include the bubble film thickness at bursting and the variability of organic surfactant activity in the ocean, which is poorly constrained. In addition, marine colloids and cooperative adsorption of polysaccharides may make important contributions to the aerosol, but are not included here. This organic fractionation framework is an initial step towards a closer linking of ocean biogeochemistry and aerosol chemical composition in Earth system models. Future work should focus on improving constraints on model parameters through new laboratory experiments or through empirical fitting to observed relationships in the real ocean and atmosphere, as well as on atmospheric implications of the variable composition of organic matter in sea spray.
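
    At the core of the framework is the competitive Langmuir isotherm, under which the fractional bubble-film coverage of class i is θ_i = K_i C_i / (1 + Σ_j K_j C_j). The sketch below evaluates it for the five organic classes named in the abstract; the adsorption constants and concentrations are placeholders, not the paper's calibration.

```python
# Competitive Langmuir equilibrium at the bubble film, evaluated for the five
# organic classes named above. K and C values are hypothetical placeholders.
import numpy as np

classes = ["lipid-like", "protein-like", "polysaccharide-like", "processed", "humic-like"]
K = np.array([50.0, 5.0, 1.0, 0.5, 0.1])      # Langmuir constants (m3/mol), hypothetical
C = np.array([2e-3, 8e-3, 3e-2, 5e-2, 1e-2])  # bulk concentrations (mol/m3), hypothetical

theta = K * C / (1.0 + np.sum(K * C))         # fractional coverage of each class
for name, th in zip(classes, theta):
    print(f"{name:>20s}: {th:.3f}")
print(f"{'uncovered':>20s}: {1 - theta.sum():.3f}")
```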

  5. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising.

    PubMed

    Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung

    2017-01-01

    Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework's simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications.
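
    A toy version of the denoising comparison can be set up in a few lines: mix a low-frequency ASSR-like component and a pulsatile artifact into several channels, then unmix with FastICA. This only illustrates the separation idea; the paper's simulation uses a neural mass model, a realistic 3-layer head model, 32 channels, and four ICA variants.

```python
# Toy artifact-response separation: two simulated sources mixed onto 8
# channels, recovered with FastICA. Signal parameters are invented.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
t = np.arange(0, 1.0, 1 / 5000)                  # 1 s at 5 kHz
assr = np.sin(2 * np.pi * 40 * t)                # 40 Hz steady-state response
artifact = (np.sin(2 * np.pi * 900 * t) > 0.95).astype(float)  # pulsatile train
S = np.c_[assr, artifact] + 0.05 * rng.standard_normal((t.size, 2))
A = rng.uniform(0.5, 1.5, (8, 2))                # random mixing onto 8 channels
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                     # recovered sources (up to scale/order)
corr = np.corrcoef(S.T, S_hat.T)[:2, 2:]         # match against true sources
print(np.round(np.abs(corr), 2))
```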

  6. Functional mixture regression.

    PubMed

    Yao, Fang; Fu, Yuejiao; Lee, Thomas C M

    2011-04-01

    In functional linear models (FLMs), the relationship between the scalar response and the functional predictor process is often assumed to be identical for all subjects. Motivated by both practical and methodological considerations, we relax this assumption and propose a new class of functional regression models that allow the regression structure to vary for different groups of subjects. By projecting the predictor process onto its eigenspace, the new functional regression model is simplified to a framework that is similar to classical mixture regression models. This leads to the proposed approach named as functional mixture regression (FMR). The estimation of FMR can be readily carried out using existing software implemented for functional principal component analysis and mixture regression. The practical necessity and performance of FMR are illustrated through applications to a longevity analysis of female medflies and a human growth study. Theoretical investigations concerning the consistent estimation and prediction properties of FMR along with simulation experiments illustrating its empirical properties are presented in the supplementary material available at Biostatistics online. Corresponding results demonstrate that the proposed approach could potentially achieve substantial gains over traditional FLMs.

  7. Metal-organic frameworks for adsorption and separation of noble gases

    DOEpatents

    Allendorf, Mark D.; Greathouse, Jeffery A.; Staiger, Chad

    2017-05-30

    A method including exposing a gas mixture comprising a noble gas to a metal organic framework (MOF), including an organic electron donor and an adsorbent bed operable to adsorb a noble gas from a mixture of gases, the adsorbent bed including a metal organic framework (MOF) including an organic electron donor.

  8. Research on Bayes matting algorithm based on Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Quan, Wei; Jiang, Shan; Han, Cheng; Zhang, Chao; Jiang, Zhengang

    2015-12-01

    The digital matting problem is a classical problem in imaging. It aims at separating non-rectangular foreground objects from a background image and compositing them with a new background image. Accurate matting determines the quality of the composited image. A Bayesian matting algorithm based on a Gaussian mixture model is proposed to solve this matting problem. Firstly, the traditional Bayesian framework is improved by introducing a Gaussian mixture model. Then, a weighting factor is added in order to suppress noise in the composited images. Finally, the result is further improved by regulating the user's input. This algorithm is applied to matting tasks on classical images, and the results are compared to the traditional Bayesian method. It is shown that our algorithm performs better on fine details such as hair, eliminates noise well, and is effective for objects of interest with intricate boundaries.

  9. A general mixture model and its application to coastal sandbar migration simulation

    NASA Astrophysics Data System (ADS)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, which is derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solutions. The model is validated against suspended sediment motion in steady open channel flows, both in equilibrium and non-equilibrium states, and in oscillatory flows as well. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. The VOF method for describing the water-air free surface and a topography change model are coupled. The bed load transport rate and suspended load entrainment rate are determined by the seabed shear stress, which is obtained from the boundary-layer-resolved mixture model. The simulation results indicated that, under small-amplitude regular waves, erosion occurred on the sandbar slope facing against the wave propagation direction, while deposition dominated on the slope facing the direction of wave propagation, indicating an onshore migration tendency. The computed results also show that the suspended load makes a substantial contribution to topography change in the surf zone, which has often been neglected in previous research.

  10. Bottom-up coarse-grained models with predictive accuracy and transferability for both structural and thermodynamic properties of heptane-toluene mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, Nicholas J. H.; Noid, W. G., E-mail: wnoid@chem.psu.edu

    This work investigates the promise of a “bottom-up” extended ensemble framework for developing coarse-grained (CG) models that provide predictive accuracy and transferability for describing both structural and thermodynamic properties. We employ a force-matching variational principle to determine system-independent, i.e., transferable, interaction potentials that optimally model the interactions in five distinct heptane-toluene mixtures. Similarly, we employ a self-consistent pressure-matching approach to determine a system-specific pressure correction for each mixture. The resulting CG potentials accurately reproduce the site-site rdfs, the volume fluctuations, and the pressure equations of state that are determined by all-atom (AA) models for the five mixtures. Furthermore, we demonstrate that these CG potentials provide similar accuracy for additional heptane-toluene mixtures that were not included in their parameterization. Surprisingly, the extended ensemble approach improves not only the transferability but also the accuracy of the calculated potentials. Additionally, we observe that the required pressure corrections strongly correlate with the intermolecular cohesion of the system-specific CG potentials. Moreover, this cohesion correlates with the relative “structure” within the corresponding mapped AA ensemble. Finally, the appendix demonstrates that the self-consistent pressure-matching approach corresponds to minimizing an appropriate relative entropy.

  11. Bottom-up coarse-grained models with predictive accuracy and transferability for both structural and thermodynamic properties of heptane-toluene mixtures.

    PubMed

    Dunn, Nicholas J H; Noid, W G

    2016-05-28

    This work investigates the promise of a "bottom-up" extended ensemble framework for developing coarse-grained (CG) models that provide predictive accuracy and transferability for describing both structural and thermodynamic properties. We employ a force-matching variational principle to determine system-independent, i.e., transferable, interaction potentials that optimally model the interactions in five distinct heptane-toluene mixtures. Similarly, we employ a self-consistent pressure-matching approach to determine a system-specific pressure correction for each mixture. The resulting CG potentials accurately reproduce the site-site rdfs, the volume fluctuations, and the pressure equations of state that are determined by all-atom (AA) models for the five mixtures. Furthermore, we demonstrate that these CG potentials provide similar accuracy for additional heptane-toluene mixtures that were not included in their parameterization. Surprisingly, the extended ensemble approach improves not only the transferability but also the accuracy of the calculated potentials. Additionally, we observe that the required pressure corrections strongly correlate with the intermolecular cohesion of the system-specific CG potentials. Moreover, this cohesion correlates with the relative "structure" within the corresponding mapped AA ensemble. Finally, the appendix demonstrates that the self-consistent pressure-matching approach corresponds to minimizing an appropriate relative entropy.

  12. Purification of metal-organic framework materials

    DOEpatents

    Farha, Omar K.; Hupp, Joseph T.

    2012-12-04

    A method of purification of a solid mixture of a metal-organic framework (MOF) material and an unwanted second material by disposing the solid mixture in a liquid separation medium having a density that lies between those of the wanted MOF material and the unwanted material, whereby the solid mixture separates by density differences into a fraction of wanted MOF material and another fraction of unwanted material.

  13. Purification of metal-organic framework materials

    DOEpatents

    Farha, Omar K.; Hupp, Joseph T.

    2015-06-30

    A method of purification of a solid mixture of a metal-organic framework (MOF) material and an unwanted second material by disposing the solid mixture in a liquid separation medium having a density that lies between those of the wanted MOF material and the unwanted material, whereby the solid mixture separates by density differences into a fraction of wanted MOF material and another fraction of unwanted material.

  14. Latent log-linear models for handwritten digit classification.

    PubMed

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  15. A 3-Component Mixture of Rayleigh Distributions: Properties and Estimation in Bayesian Framework

    PubMed Central

    Aslam, Muhammad; Tahir, Muhammad; Hussain, Zawar; Al-Zahrani, Bander

    2015-01-01

    To study the lifetimes of certain engineering processes, a lifetime model which can accommodate the nature of such processes is desired. The mixture models of underlying lifetime distributions are intuitively more appropriate and appealing for modeling the heterogeneous nature of a process as compared to simple models. This paper is about studying a 3-component mixture of Rayleigh distributions in a Bayesian perspective. The censored sampling environment is considered due to its popularity in reliability theory and survival analysis. The expressions for the Bayes estimators and their posterior risks are derived under different scenarios. For the case that no or little prior information is available, elicitation of hyperparameters is given. To examine, numerically, the performance of the Bayes estimators using non-informative and informative priors under different loss functions, we have simulated their statistical properties for different sample sizes and test termination times. In addition, to highlight the practical significance, an illustrative example based on real-life engineering data is also given. PMID:25993475
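
    Because the squared Rayleigh scale is conjugate to an inverse-gamma prior, a Gibbs sampler for a 3-component mixture is short to write. The sketch below handles the uncensored case only (the paper's treatment of censoring, loss functions, and posterior risks is not reproduced), with simulated data and hypothetical hyperparameters.

```python
# Minimal Gibbs sampler sketch for a 3-component Rayleigh mixture with
# conjugate inverse-gamma priors on the squared scales. Uncensored data only.
import numpy as np
from scipy.stats import invgamma, rayleigh, dirichlet

rng = np.random.default_rng(4)
x = np.concatenate([rayleigh.rvs(scale=s, size=n, random_state=rng)
                    for s, n in [(1.0, 150), (3.0, 150), (6.0, 150)]])
K, n_iter, a0, b0, alpha0 = 3, 2000, 2.0, 2.0, np.ones(3)
sig2, w = np.array([0.5, 4.0, 25.0]), np.ones(K) / K

draws = []
for it in range(n_iter):
    # 1. sample component labels from their full conditionals
    logp = (np.log(w) + np.log(x[:, None]) - np.log(sig2)
            - x[:, None] ** 2 / (2 * sig2))
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = (p.cumsum(axis=1) > rng.random((x.size, 1))).argmax(axis=1)
    # 2. sample squared scales from conjugate inverse-gamma posteriors
    for k in range(K):
        xk = x[z == k]
        sig2[k] = invgamma.rvs(a0 + xk.size, scale=b0 + 0.5 * (xk ** 2).sum(),
                               random_state=rng)
    # 3. sample mixture weights from a Dirichlet posterior
    w = dirichlet.rvs(alpha0 + np.bincount(z, minlength=K), random_state=rng)[0]
    if it >= 500:                                # discard burn-in
        draws.append(np.sqrt(np.sort(sig2)))     # sort to handle label switching
print("posterior mean scales:", np.mean(draws, axis=0))  # approx (1, 3, 6)
```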

  16. Modeling the chemistry of complex petroleum mixtures.

    PubMed Central

    Quann, R J

    1998-01-01

    Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903

  17. Bayesian Variable Selection for Hierarchical Gene-Environment and Gene-Gene Interactions

    PubMed Central

    Liu, Changlu; Ma, Jianzhong; Amos, Christopher I.

    2014-01-01

    We propose a Bayesian hierarchical mixture model framework that allows us to investigate the genetic and environmental effects, gene by gene interactions and gene by environment interactions in the same model. Our approach incorporates the natural hierarchical structure between the main effects and interaction effects into a mixture model, such that our methods tend to remove the irrelevant interaction effects more effectively, resulting in more robust and parsimonious models. We consider both strong and weak hierarchical models. For a strong hierarchical model, both of the main effects between interacting factors must be present for the interactions to be considered in the model development, while for a weak hierarchical model, only one of the two main effects is required to be present for the interaction to be evaluated. Our simulation results show that, in most of the scenarios simulated, the proposed strong and weak hierarchical mixture models work well in controlling false positive rates and provide a powerful approach for identifying the predisposing effects and interactions in gene-environment interaction studies, in comparison with the naive model that does not impose this hierarchical constraint. We illustrated our approach using data for lung cancer and cutaneous melanoma. PMID:25154630

  18. Analyzing gene expression time-courses based on multi-resolution shape mixture model.

    PubMed

    Li, Ying; He, Ye; Zhang, Yu

    2016-11-01

    Biological processes are dynamic molecular processes that unfold over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biological systems and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to discover significant gene groups. Combining multi-resolution fractal features with a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which captures patterns of change in gene expression over time at different resolutions. The proposed multi-resolution shape mixture model algorithm is a probabilistic framework that offers a more natural and robust way of clustering time-course gene expression. We assessed its performance on yeast time-course gene expression profiles against several popular clustering methods for gene expression profiles. The gene groups identified by the different methods were evaluated by enrichment analysis of biological pathways and of known protein-protein interactions from experimental evidence. The gene groups identified by our algorithm have stronger biological significance. In summary, a novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed; it provides a new perspective and an alternative tool for the visualization and analysis of time-course gene expression profiles. The R and MATLAB programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.
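
    A minimal sketch of the general recipe (wavelet features followed by mixture clustering) on synthetic profiles; this is not the authors' algorithm, and it assumes the PyWavelets and scikit-learn packages:

```python
# Extract multi-resolution wavelet features from each expression profile
# and cluster them with a Gaussian mixture model (illustrative only).
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16)    # 16 time points per profile
# 60 synthetic profiles drawn from two shape classes.
profiles = np.vstack(
    [np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(16) for _ in range(30)]
    + [np.cos(2 * np.pi * t) + 0.3 * rng.standard_normal(16) for _ in range(30)])

def wavelet_features(x, wavelet="db2", level=2):
    # Concatenate approximation and detail coefficients across resolutions.
    return np.concatenate(pywt.wavedec(x, wavelet, level=level))

X = np.array([wavelet_features(p) for p in profiles])
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
print(np.bincount(labels))   # roughly 30 / 30 if the shapes are recovered
```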

  19. A framework for the use of single-chemical transcriptomics data in predicting the hazards associated with complex mixtures of polycyclic aromatic hydrocarbons.

    PubMed

    Labib, Sarah; Williams, Andrew; Kuo, Byron; Yauk, Carole L; White, Paul A; Halappanavar, Sabina

    2017-07-01

    The assumption of additivity applied in the risk assessment of environmental mixtures containing carcinogenic polycyclic aromatic hydrocarbons (PAHs) was investigated using transcriptomics. Muta™Mouse mice were gavaged for 28 days with three doses of eight individual PAHs, two defined mixtures of PAHs, or coal tar, an environmentally ubiquitous complex mixture of PAHs. Microarrays were used to identify differentially expressed genes (DEGs) in lung tissue collected 3 days post-exposure. Cancer-related pathways perturbed by the individual PAHs or PAH mixtures were identified, and dose-response modeling of the DEGs was conducted to calculate gene/pathway benchmark doses (BMDs). Individual PAH-induced pathway perturbations (the median gene expression changes for all genes in a pathway relative to controls) and pathway BMDs were applied to models of additivity [i.e., concentration addition (CA), generalized concentration addition (GCA), and independent action (IA)] to generate predicted pathway-specific dose-response curves for each PAH mixture. The predicted and observed pathway dose-response curves were compared to assess the sensitivity of the different additivity models. The transcriptomics-based additivity calculation showed that IA accurately predicted the pathway perturbations induced by all mixtures of PAHs. CA did not support the additivity assumption for the defined mixtures; however, GCA improved the CA predictions. Moreover, pathway BMDs derived for coal tar were comparable to BMDs derived from previously published coal tar-induced mouse lung tumor incidence data. These results suggest that, in the absence of tumor incidence data, individual chemical-induced transcriptomic changes associated with cancer can be used to investigate the assumption of additivity and to predict the carcinogenic potential of a mixture.
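
    For reference, a generic sketch of the two classical additivity predictions named above, independent action and concentration addition, in their textbook forms (not the study's transcriptomic BMD pipeline):

```python
import numpy as np

def independent_action(effects):
    """IA: combined fractional effect of components acting independently,
    E_mix = 1 - prod_i (1 - E_i)."""
    effects = np.asarray(effects)
    return 1.0 - np.prod(1.0 - effects)

def concentration_addition(fractions, ec_x):
    """CA: effective mixture concentration at effect level x, given each
    component's fraction in the mixture and its individual ECx:
    ECx_mix = 1 / sum_i (p_i / ECx_i)."""
    fractions, ec_x = np.asarray(fractions), np.asarray(ec_x)
    return 1.0 / np.sum(fractions / ec_x)

print(independent_action([0.10, 0.20, 0.05]))            # ~0.316
print(concentration_addition([0.5, 0.5], [10.0, 40.0]))  # 16.0
```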

  20. Diffraction of a Shock Wave on a Wedge in a Dusty Gas

    NASA Astrophysics Data System (ADS)

    Surov, V. S.

    2017-09-01

    Within the framework of one- and multi-velocity dusty-gas models, the author investigates, on a curvilinear grid, the flow arising when a shock wave is reflected from a wedge-shaped surface in an air-droplet mixture, using the Godunov method with a linearized Riemann solver.

  1. Physiologically based pharmacokinetic modeling of tea catechin mixture in rats and humans.

    PubMed

    Law, Francis C P; Yao, Meicun; Bi, Hui-Chang; Lam, Stephen

    2017-06-01

    Although green tea (Camellia sinensis) (GT) contains a large number of polyphenolic compounds with anti-oxidative and anti-proliferative activities, little is known of the pharmacokinetics and tissue dose of tea catechins (TCs) as a chemical mixture in humans. The objectives of this study were to develop and validate a physiologically based pharmacokinetic (PBPK) model of a tea catechin mixture (TCM) in rats and humans, and to predict an integrated or total concentration of TCM in the plasma of humans after consuming GT or Polyphenon E (PE). To this end, a PBPK model of epigallocatechin gallate (EGCg) consisting of 13 first-order, blood-flow-limited tissue compartments was first developed in rats. The rat model was scaled up to humans by replacing its physiological parameters, pharmacokinetic parameters, and tissue/blood partition coefficients (PCs) with human-specific values. Both the rat and human EGCg models were then extrapolated to other TCs by substituting the physicochemical parameters, pharmacokinetic parameters, and PCs with catechin-specific values. Finally, a PBPK model of TCM was constructed by linking three rat (or human) tea catechin models together, without including a description of pharmacokinetic interactions between the TCs. The mixture PBPK model accurately predicted the pharmacokinetic behaviors of three individual TCs in the plasma of rats and humans after GT or PE consumption. The model-predicted total TCM concentration in plasma was linearly related to the dose consumed by humans. The mixture PBPK model is able to translate an external dose of TCM into internal target tissue doses for future safety assessment and dose-response analysis studies in humans. The modeling framework described in this paper is also applicable to the bioactive chemicals in other plant-based health products.
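
    A minimal sketch of the building block referred to above, a single flow-limited (perfusion-limited) tissue compartment; the parameter values are illustrative, not the paper's:

```python
# One flow-limited PBPK tissue compartment:
#   dC_t/dt = (Q_t / V_t) * (C_arterial - C_t / P_t)
import numpy as np
from scipy.integrate import solve_ivp

Q_t, V_t, P_t = 1.2, 0.5, 4.0   # tissue blood flow (L/h), tissue volume (L),
                                # tissue:blood partition coefficient (made up)

def tissue_ode(t, y, c_arterial):
    c_t = y[0]
    return [Q_t / V_t * (c_arterial - c_t / P_t)]

sol = solve_ivp(tissue_ode, (0.0, 10.0), [0.0], args=(1.0,))
print(sol.y[0, -1])   # approaches P_t * C_arterial = 4.0 at steady state
```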

  2. Concept-oriented indexing of video databases: toward semantic sensitive retrieval and browsing.

    PubMed

    Fan, Jianping; Luo, Hangzai; Elmagarmid, Ahmed K

    2004-07-01

    Digital video now plays an important role in medical education, health care, telemedicine, and other medical applications. Several content-based video retrieval (CBVR) systems have been proposed in the past, but they still suffer from the following challenging problems: the semantic gap, semantic video concept modeling, semantic video classification, and concept-oriented video database indexing and access. In this paper, we propose a novel framework to make advances toward the final goal of solving these problems. Specifically, the framework includes: 1) a semantic-sensitive video content representation framework that uses principal video shots to enhance the quality of features; 2) semantic video concept interpretation using a flexible mixture model to bridge the semantic gap; 3) a novel semantic video-classifier training framework that integrates feature selection, parameter estimation, and model selection seamlessly in a single algorithm; and 4) a concept-oriented video database organization technique based on a domain-dependent concept hierarchy to enable semantic-sensitive video retrieval and browsing.

  3. Relative resolution: A hybrid formalism for fluid mixtures.

    PubMed

    Chaimovich, Aviel; Peter, Christine; Kremer, Kurt

    2015-12-28

    We show here that molecular resolution is inherently hybrid in terms of relative separation. While nearest neighbors are characterized by a fine-grained (geometrically detailed) model, other neighbors are characterized by a coarse-grained (isotropically simplified) model. We notably present an analytical expression for relating the two models via energy conservation. This hybrid framework is correspondingly capable of retrieving the structural and thermal behavior of various multi-component and multi-phase fluids across state space.

  5. A framework for evaluating mixture analysis algorithms

    NASA Astrophysics Data System (ADS)

    Dasaratha, Sridhar; Vignesh, T. S.; Shanmukh, Sarat; Yarra, Malathi; Botonjic-Sehic, Edita; Grassi, James; Boudries, Hacene; Freeman, Ivan; Lee, Young K.; Sutherland, Scott

    2010-04-01

    In recent years, several sensing devices capable of identifying unknown chemical and biological substances have been commercialized. The success of these devices in analyzing real-world samples depends on the ability of the on-board identification algorithm to de-convolve the spectra of substances that are mixtures. To develop effective de-convolution algorithms, it is critical to characterize the relationship between the spectral features of a substance and its probability of detection within a mixture, as these features may be similar to or overlap with those of other substances in the mixture and in the library. While it has been recognized that these aspects pose challenges to mixture analysis, a systematic effort to quantify spectral characteristics and their impact is generally lacking. In this paper, we propose metrics that can be used to quantify these spectral features. Some of these metrics, such as a modification of the variance inflation factor, are derived from classical statistical measures used in regression diagnostics. We demonstrate that these metrics can be correlated with the accuracy of a substance's identification in a mixture. We also develop a framework for characterizing mixture analysis algorithms using these metrics. Experimental results are then provided to show the application of this framework to the evaluation of various algorithms, including one that has been developed for a commercial device. The illustration is based on synthetic mixtures created from pure-component Raman spectra measured on a portable device.
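
    As a sketch of one such metric (on hypothetical spectra), the variance inflation factor of a library spectrum measures how well the other library members explain it; a high value flags the overlap that makes a component hard to identify within a mixture:

```python
import numpy as np

def vif(library, i):
    """VIF_i = 1 / (1 - R_i^2) from regressing spectrum i on the others."""
    y = library[:, i]
    X = np.delete(library, i, axis=1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(0)
lib = rng.random((200, 5))            # 200 wavenumbers x 5 pure spectra
lib[:, 4] = lib[:, 0] + 0.05 * rng.random(200)   # nearly collinear pair
print(vif(lib, 4) > vif(lib, 1))      # True: overlap inflates the VIF
```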

  6. Conditional Density Estimation with HMM Based Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang

    Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry the implicit assumption that the probability density is Gaussian, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the input-output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, the model can be applied not only to regression but to classification as well. We applied this model to denoising ECG data. The proposed method has the potential to be applied to other time series, such as stock market return predictions.
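
    Numerically, the idea reduces to a mixture whose weights are the HMM's state probabilities and whose components are per-state predictive densities; a minimal sketch with made-up numbers (not the paper's IOHMM-SVM implementation):

```python
import numpy as np
from scipy.stats import norm

state_probs = np.array([0.7, 0.3])     # P(state | inputs), from the HMM
state_means = np.array([0.01, -0.02])  # per-state regression predictions
state_sigmas = np.array([0.05, 0.12])  # per-state noise scales

def conditional_density(y):
    # Mixture of per-state Gaussians, weighted by state probabilities.
    return np.sum(state_probs * norm.pdf(y, state_means, state_sigmas))

ys = np.linspace(-0.4, 0.4, 5)
print([round(conditional_density(y), 3) for y in ys])
```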

  7. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    PubMed

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
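
    As a concrete illustration of the model class discussed above, a compact EM fit of a two-component mixture of linear regressions on synthetic data (purely illustrative; in practice one would use a dedicated package together with the selection criteria mentioned in the abstract):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(0, 1, n)
z = rng.random(n) < 0.5                                  # latent cluster
y = np.where(z, 1.0 + 2.0 * x, 3.0 - 1.0 * x) + 0.2 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])
beta = rng.standard_normal((2, 2))       # [intercept, slope] per cluster
sigma = np.ones(2)
pi = np.array([0.5, 0.5])

for _ in range(100):
    # E-step: responsibility of each cluster for each observation.
    dens = np.stack([pi[k] * norm.pdf(y, X @ beta[k], sigma[k])
                     for k in range(2)])
    r = dens / dens.sum(axis=0)
    # M-step: weighted least squares per cluster.
    for k in range(2):
        w = r[k]
        Xw = X * w[:, None]
        beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
    pi = r.mean(axis=1)

print(np.round(beta, 2), np.round(pi, 2))   # recovers the two regressions
```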

  8. Production of single-walled carbon nanotube grids

    DOEpatents

    Hauge, Robert H; Xu, Ya-Qiong; Pheasant, Sean

    2013-12-03

    A method of forming a nanotube grid includes placing a plurality of catalyst nanoparticles on a grid framework, contacting the catalyst nanoparticles with a gas mixture that includes hydrogen and a carbon source in a reaction chamber, forming an activated gas from the gas mixture, heating the grid framework and activated gas, and controlling a growth time to generate a single-wall carbon nanotube array radially about the grid framework. A filter membrane may be produced by this method.

  9. Mixture Distributions for Modeling Lead Time Demand in Coordinated Supply Chains

    DTIC Science & Technology

    2014-04-30

    [The abstract for this record did not survive extraction; only fragments of its reference list remain, citing articles in the International Journal of Production Economics on backorder price discounts and on cost-minimizing supply strategies in stochastic inventory-theoretic settings.]

  10. Conceptual model for assessing criteria air pollutants in a multipollutant context: A modified adverse outcome pathway approach.

    PubMed

    Buckley, Barbara; Farraj, Aimen

    2015-09-01

    Air pollution consists of a complex mixture of particulate and gaseous components. Individual criteria and other hazardous air pollutants have been linked to adverse respiratory and cardiovascular health outcomes. However, assessing risk of air pollutant mixtures is difficult since components are present in different combinations and concentrations in ambient air. Recent mechanistic studies have limited utility because of the inability to link measured changes to adverse outcomes that are relevant to risk assessment. New approaches are needed to address this challenge. The purpose of this manuscript is to describe a conceptual model, based on the adverse outcome pathway approach, which connects initiating events at the cellular and molecular level to population-wide impacts. This may facilitate hazard assessment of air pollution mixtures. In the case reports presented here, airway hyperresponsiveness and endothelial dysfunction are measurable endpoints that serve to integrate the effects of individual criteria air pollutants found in inhaled mixtures. This approach incorporates information from experimental and observational studies into a sequential series of higher order effects. The proposed model has the potential to facilitate multipollutant risk assessment by providing a framework that can be used to converge the effects of air pollutants in light of common underlying mechanisms. This approach may provide a ready-to-use tool to facilitate evaluation of health effects resulting from exposure to air pollution mixtures. Published by Elsevier Ireland Ltd.

  11. Mixture toxicity revisited from a toxicogenomic perspective.

    PubMed

    Altenburger, Rolf; Scholz, Stefan; Schmitt-Jansen, Mechthild; Busch, Wibke; Escher, Beate I

    2012-03-06

    The advent of new genomic techniques has raised expectations that central questions of mixture toxicology, such as the mechanisms of low-dose interactions, can now be answered. This review provides an overview of experimental studies from the past decade that address diagnostic and/or mechanistic questions regarding the combined effects of chemical mixtures using toxicogenomic techniques. From 2002 to 2011, 41 studies were published with a focus on mixture toxicity assessment. Primarily, multiplexed quantification of gene transcripts was performed, though metabolomic and proteomic analyses of joint exposures have also been undertaken. It is now standard to explicitly state criteria for selecting concentrations and to provide insight into data transformation and statistical treatment with respect to minimizing sources of undue variability. Bioinformatic analysis of toxicogenomic data, by contrast, is still a field with diverse and rapidly evolving tools. The reported combined-effect assessments are discussed in the light of established toxicological dose-response and mixture toxicity models. Receptor-based assays seem to be the most advanced toward establishing quantitative relationships between exposure and biological responses. Often, transcriptomic responses are discussed based on the presence or absence of signals, where the interpretation may remain ambiguous due to methodological problems. The majority of mixture studies are designed to compare the recorded mixture outcome only against responses for the individual components. This stands in stark contrast to our existing understanding of joint biological activity at the levels of chemical target interactions and apical combined effects. By joining established mixture effect models with toxicokinetic and toxicodynamic thinking, we suggest a conceptual framework that may help to overcome the current limitation of providing mainly anecdotal evidence on mixture effects. To achieve this, we suggest (i) designing studies to establish quantitative relationships between the dose and time dependency of responses and (ii) adopting mixture toxicity models; moreover, (iii) utilizing novel bioinformatic tools and (iv) applying stress response concepts could help translate multiple responses into hypotheses on the relationships between general stress and specific toxicity reactions of organisms.

  12. A Bayesian mixture model for chromatin interaction data.

    PubMed

    Niu, Liang; Lin, Shili

    2015-02-01

    Chromatin interactions mediated by a particular protein are of interest for studying gene regulation, especially the regulation of genes that are associated with, or known to be causative of, a disease. A recent molecular technique, chromatin interaction analysis by paired-end tag sequencing (ChIA-PET), which uses chromatin immunoprecipitation (ChIP) and high-throughput paired-end sequencing, is able to detect such chromatin interactions genome-wide. However, ChIA-PET may generate noise (i.e., pairings of DNA fragments by random chance) in addition to true signal (i.e., pairings of DNA fragments by interactions). In this paper, we propose MC_DIST, based on a mixture modeling framework, to identify true chromatin interactions from ChIA-PET count data (counts of DNA fragment pairs). The model is cast into a Bayesian framework to take into account the dependency among the data and the available information on protein binding sites and gene promoters to reduce false positives. A simulation study showed that MC_DIST outperforms the previously proposed hypergeometric model in terms of both power and type I error rate. A real data study showed that MC_DIST may identify potential chromatin interactions between protein binding sites and gene promoters that are missed by the hypergeometric model. An R package implementing the MC_DIST model is available at http://www.stat.osu.edu/~statgen/SOFTWARE/MDM.
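
    A simplified sketch of the core separation idea, a two-component count mixture splitting noise from signal via EM; the actual MC_DIST model is Bayesian and additionally uses binding-site and promoter information, which is omitted here:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
counts = np.concatenate([rng.poisson(1.0, 900),    # random pairings (noise)
                         rng.poisson(8.0, 100)])   # true interactions

pi, lam = 0.5, np.array([0.5, 5.0])
for _ in range(200):
    d0 = (1 - pi) * poisson.pmf(counts, lam[0])
    d1 = pi * poisson.pmf(counts, lam[1])
    r = d1 / (d0 + d1)                  # P(true interaction | count)
    pi = r.mean()
    lam = np.array([np.average(counts, weights=1 - r),
                    np.average(counts, weights=r)])

print(round(pi, 2), np.round(lam, 2))   # ~0.10 and rates near (1, 8)
```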

  13. Pore-scale modeling of phase change in porous media

    NASA Astrophysics Data System (ADS)

    Juanes, Ruben; Cueto-Felgueroso, Luis; Fu, Xiaojing

    2017-11-01

    One of the main open challenges in pore-scale modeling is the direct simulation of flows involving multicomponent mixtures with complex phase behavior. Reservoir fluid mixtures are often described through cubic equations of state, which makes diffuse interface, or phase field theories, particularly appealing as a modeling framework. What is still unclear is whether equation-of-state-driven diffuse-interface models can adequately describe processes where surface tension and wetting phenomena play an important role. Here we present a diffuse interface model of single-component, two-phase flow (a van der Waals fluid) in a porous medium under different wetting conditions. We propose a simplified Darcy-Korteweg model that is appropriate to describe flow in a Hele-Shaw cell or a micromodel, with a gap-averaged velocity. We study the ability of the diffuse-interface model to capture capillary pressure and the dynamics of vaporization/condensation fronts, and show that the model reproduces pressure fluctuations that emerge from abrupt interface displacements (Haines jumps) and from the break-up of wetting films.

  14. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays.

    PubMed

    McLachlan, G J; Bean, R W; Jones, L Ben-Tovim

    2006-07-01

    An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations owing to the minimal assumptions they make or, under more specific assumptions, are computationally intensive. By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
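
    A bare-bones sketch of the z-score mixture idea with a fixed N(0,1) null component (an illustrative EM, not the authors' exact implementation):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
z = np.concatenate([rng.standard_normal(1800),       # null genes
                    rng.normal(2.5, 1.0, 200)])      # differentially expressed

pi0, mu1, sd1 = 0.8, 1.0, 1.0
for _ in range(200):
    f0 = pi0 * norm.pdf(z)                   # null density, fixed N(0,1)
    f1 = (1 - pi0) * norm.pdf(z, mu1, sd1)
    tau = f0 / (f0 + f1)                     # posterior P(null | z)
    pi0 = tau.mean()
    w = 1 - tau
    mu1 = np.average(z, weights=w)
    sd1 = np.sqrt(np.average((z - mu1) ** 2, weights=w))

print(round(pi0, 2))                         # close to the true 0.9
```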

  15. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising

    PubMed Central

    Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung

    2017-01-01

    Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes, in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials, from the cochlear level up to the auditory cortex, were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response mixture were then generated by solving the electromagnetic forward problem. As an application, the framework's simulated 32-channel datasets were used to compare the performance of four commonly used independent component analysis (ICA) algorithms: Infomax, extended Infomax, JADE, and FastICA. As expected, two major components were detectable in the simulated datasets: a low-frequency component at the modulation frequency and a pulsatile high-frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that Infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications. PMID:28350887
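
    A generic sketch of ICA-based artifact removal on simulated channels, using scikit-learn's FastICA; the mixing matrix and signals are made up, and the paper's neural-mass and head-model pipeline is omitted:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 2000)
assr = np.sin(2 * np.pi * 40 * t)                  # 40 Hz steady-state response
artifact = np.sign(np.sin(2 * np.pi * 500 * t))    # pulsatile CI stimulation
S = np.column_stack([assr, artifact])
A = rng.random((8, 2))                             # mixing onto 8 scalp channels
X = S @ A.T + 0.05 * rng.standard_normal((2000, 8))

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(X)                  # unmixed sources

# Zero out the component most correlated with the artifact, then reconstruct.
idx = np.argmax([abs(np.corrcoef(c, artifact)[0, 1]) for c in components.T])
components[:, idx] = 0.0
X_denoised = ica.inverse_transform(components)
print(X_denoised.shape)
```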

  16. Exclusion probabilities and likelihood ratios with applications to mixtures.

    PubMed

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework, including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered, and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.

  17. Functional linear models for zero-inflated count data with application to modeling hospitalizations in patients on dialysis.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V

    2014-11-30

    We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. Whereas the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
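
    A small sketch contrasting the two mixture formulations named above, as plain probability mass functions without the functional-predictor machinery:

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson: degenerate at zero with prob pi, else Poisson."""
    return pi * (k == 0) + (1 - pi) * poisson.pmf(k, lam)

def hurdle_pmf(k, p0, lam):
    """Hurdle: P(0) = p0; positive counts follow a zero-truncated Poisson."""
    trunc = poisson.pmf(k, lam) / (1 - poisson.pmf(0, lam))
    return np.where(k == 0, p0, (1 - p0) * trunc)

k = np.arange(5)
print(np.round(zip_pmf(k, 0.4, 2.0), 3))
print(np.round(hurdle_pmf(k, 0.4, 2.0), 3))
```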

  18. The structure and properties of a simple model mixture of amphiphilic molecules and ions at a solid surface

    NASA Astrophysics Data System (ADS)

    Pizio, O.; Sokołowski, S.; Sokołowska, Z.

    2014-05-01

    We investigate the microscopic structure, adsorption, and electric properties of a mixture that consists of amphiphilic molecules and charged hard spheres in contact with uncharged or charged solid surfaces. The amphiphilic molecules are modeled as spheres composed of attractive and repulsive parts. The electrolyte component of the mixture is considered in the framework of the restricted primitive model (RPM). The system is studied using a density functional theory that combines fundamental measure theory for hard-sphere mixtures, a weighted density approach for inhomogeneous charged hard spheres, and a mean-field approximation to describe anisotropic interactions. Our principal focus is on exploring the effects brought about by the presence of ions on the distribution of amphiphilic particles at the wall, as well as the effects of amphiphilic molecules on the electric double layer formed at the solid surface. In particular, we have found that under certain thermodynamic conditions a long-range translational and orientational order can develop. The presence of amphiphiles changes the shape of the differential capacitance from a symmetric or non-symmetric bell-like shape to a camel-like one. Moreover, for some systems the value of the potential of zero charge is non-zero, in contrast to the RPM at a charged surface.

  19. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
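
    A toy numeric sketch of the GPT idea, with a hypothetical two-branch tree whose branch probability weights two Gaussian response-time components:

```python
import numpy as np
from scipy.stats import norm

theta = 0.6                    # probability of the first processing branch
mu = np.array([0.5, 0.9])      # per-state response-time means (made up)
sd = np.array([0.1, 0.2])      # per-state response-time SDs (made up)

def gpt_density(rt):
    # Finite mixture with tree-derived weights over continuous components.
    weights = np.array([theta, 1 - theta])
    return np.sum(weights * norm.pdf(rt, mu, sd))

print(round(gpt_density(0.55), 3))
```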

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Troy Michael; Kress, Joel David; Bhat, Kabekode Ghanasham

    Year 1 Objectives (August 2016 – December 2016) – The original Independence model is a set of parameters regressed sequentially from numerous data sets within the Aspen Plus modeling framework. The immediate goal for the basic data model is to collect and evaluate the data sets relevant to the thermodynamic submodels (pure-substance heat capacity, solvent-mixture heat capacity, loaded-solvent heat capacities, and volatility data). These data inform the thermodynamic parameters involved both in vapor-liquid equilibrium and in the chemical equilibrium of the liquid phase.

  1. Estimating abundance in the presence of species uncertainty

    USGS Publications Warehouse

    Chambert, Thierry A.; Hossack, Blake R.; Fishback, LeeAnn; Davenport, Jon M.

    2016-01-01

    1. N-mixture models have become a popular method for estimating the abundance of free-ranging animals that are not marked or identified individually. These models have been used on count data for single species that can be identified with certainty. However, co-occurring species often look similar during one or more life stages, making it difficult to assign species for all recorded captures. This uncertainty creates problems for estimating species-specific abundance, and it can often limit the life stages to which we can make inference. 2. We present a new extension of N-mixture models that accounts for species uncertainty. In addition to estimating site-specific abundances and detection probabilities, this model allows estimating the probability of correct assignment of species identity. We implement this hierarchical model in a Bayesian framework and provide all code for running the model in BUGS-language programs. 3. We present an application of the model to count data from two sympatric freshwater fishes, the brook stickleback (Culaea inconstans) and the ninespine stickleback (Pungitius pungitius), and illustrate the implementation of covariate effects (habitat characteristics). In addition, we used a simulation study to validate the model and illustrate potential sample size issues. We also compared, for both real and simulated data, estimates provided by our model to those obtained by a simple N-mixture model when captures of unknown species identification were discarded. In the latter case, abundance estimates appeared highly biased and very imprecise, while our new model provided unbiased estimates with higher precision. 4. This extension of the N-mixture model should be useful for a wide variety of studies and taxa, as species uncertainty is a common issue. It should notably help improve the investigation of abundance and vital rate characteristics of organisms' early life stages, which are sometimes more difficult to identify than adults.
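
    For orientation, a minimal sketch of the standard binomial N-mixture site likelihood that this extension builds on (single species, no identification uncertainty; values are illustrative):

```python
import numpy as np
from scipy.stats import binom, poisson

def site_likelihood(counts, lam, p, n_max=100):
    """Marginal likelihood of repeated counts at one site: sum over latent
    abundance N of Poisson(N | lam) * prod_j Binomial(y_j | N, p)."""
    n = np.arange(max(counts), n_max + 1)
    prior = poisson.pmf(n, lam)
    detect = np.prod([binom.pmf(y, n, p) for y in counts], axis=0)
    return np.sum(prior * detect)

# Three repeat visits to one site, expected abundance 6, detection 0.7.
print(site_likelihood([3, 5, 4], lam=6.0, p=0.7))
```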

  2. Molecular identification of organic compounds in atmospheric complex mixtures and relationship to atmospheric chemistry and sources.

    PubMed

    Mazurek, Monica A

    2002-12-01

    This article describes a chemical characterization approach for complex organic compound mixtures associated with fine atmospheric particles of diameters less than 2.5 μm (PM2.5). It relates molecular- and bulk-level chemical characteristics of the complex mixture to atmospheric chemistry and to emission sources. Overall, the analytical approach describes the organic complex mixtures in terms of a chemical mass balance (CMB). Here, the complex mixture is related to a bulk elemental measurement (total carbon) and is broken down systematically into functional groups and molecular compositions. The CMB and molecular-level information can be used to understand the sources of the atmospheric fine particles through conversion of chromatographic data and by incorporation into receptor-based CMB models. Once described and quantified within a mass balance framework, the chemical profiles for aerosol organic matter can be applied to existing air quality issues. Examples include understanding health effects of PM2.5 and defining and controlling key sources of anthropogenic fine particles. Overall, the organic aerosol compositional data provide chemical information needed for effective PM2.5 management.

  3. Generation of two-dimensional binary mixtures in complex plasmas

    NASA Astrophysics Data System (ADS)

    Wieben, Frank; Block, Dietmar

    2016-10-01

    Complex plasmas are an excellent model system for strong coupling phenomena. Under certain conditions the dust particles immersed in the plasma form crystals which can be analyzed in terms of structure and dynamics. Previous experiments focussed mostly on monodisperse particle systems, whereas dusty plasmas in nature and technology are polydisperse. Thus, a first and important step towards experiments in polydisperse systems is the study of binary mixtures. Recent experiments on binary mixtures under microgravity conditions observed a phase separation of particle species with different radii, even for small size disparities. This contradicts several numerical studies of 2D binary mixtures. Therefore, dedicated experiments are required to gain more insight into the physics of polydisperse systems. In this contribution, first ground-based experiments on two-dimensional binary mixtures are presented. Particular attention is paid to the requirements for the generation of such systems, which involve consideration of the temporal evolution of the particle properties. Furthermore, the structure of these two-component crystals is analyzed and compared to simulations. This work was supported by the Deutsche Forschungsgemeinschaft DFG in the framework of the SFB TR24 Greifswald Kiel, Project A3b.

  4. Quantifying structural states of soft mudrocks

    NASA Astrophysics Data System (ADS)

    Li, B.; Wong, R. C. K.

    2016-05-01

    In this paper, a cm model is proposed to quantify the structural states of soft mudrocks, which are dependent on clay fractions and porosities. Physical properties of natural and reconstituted soft mudrock samples are used to derive the two parameters of the cm model. With the cm model, a simplified homogenization approach is proposed to estimate geomechanical properties and fabric orientation distributions of soft mudrocks based on mixture theory. Soft mudrocks are treated as a mixture of nonclay minerals and clay-water composites. Nonclay minerals have a high stiffness and serve as the structural framework of mudrocks when they have a high volume fraction. Clay-water composites occupy the void space among the nonclay minerals and serve as an in-fill matrix. With an increase in the volume fraction of clay-water composites, there is a transition in the structural state from framework-supported to matrix-supported. The decreases in shear strength and pore size, as well as the increases in compressibility and fabric anisotropy, are quantitatively related to this transition. The new homogenization approach based on the proposed cm model yields better performance evaluation than common effective-medium modeling approaches because the interactions among nonclay minerals and clay-water composites are considered. With wireline logging data, the cm model is applied to quantify the structural states of Colorado shale formations at different depths in the Cold Lake area, Alberta, Canada. Key geomechanical parameters are estimated based on the proposed homogenization approach, and the critical intervals with low-strength shale formations are identified.

  5. Baldovin-Stella stochastic volatility process and Wiener process mixtures

    NASA Astrophysics Data System (ADS)

    Peirano, P. P.; Challet, D.

    2012-08-01

    Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a powerful and consistent way to build a model describing the time evolution of a financial index. We first make the model fully explicit by using Student distributions instead of power-law-truncated Lévy distributions, show that its analytic tractability extends to the larger class of symmetric generalized hyperbolic distributions, and provide a full computation of their multivariate characteristic functions; more generally, we show that the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The basic Baldovin and Stella model, while mimicking well volatility relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect or some time-reversal asymmetries. We discuss how to modify the dynamics of this process in order to reproduce real data more accurately.

  6. A flamelet model for transcritical LOx/GCH4 flames

    NASA Astrophysics Data System (ADS)

    Müller, Hagen; Pfitzner, Michael

    2017-03-01

    This work presents a numerical framework to efficiently simulate methane combustion at supercritical pressures. An LES flamelet approach is adapted to account for real-gas thermodynamic effects, which are a prominent feature of flames at near-critical injection conditions. The thermodynamics model is based on the Peng-Robinson equation of state (PR-EoS) in conjunction with a novel volume-translation method to correct deficiencies in the transcritical regime. The resulting formulation is more accurate than standard cubic EoSs without deteriorating their good computational performance. To consistently account for pressure and strain fluctuations in the flamelet model, an additional enthalpy equation is solved along with the transport equations for mixture fraction and mixture fraction variance. The method is validated against available experimental data for a laboratory-scale LOx/GCH4 flame at conditions that resemble those in liquid-propellant rocket engines. The LES results are in good agreement with the measured OH* radiation.
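
    For reference, the Peng-Robinson pressure-volume-temperature relation referred to above has the standard form, with species-dependent constants a and b and a temperature-dependent attraction correction α(T):

```latex
p = \frac{RT}{v - b} - \frac{a\,\alpha(T)}{v^{2} + 2bv - b^{2}}
```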

  7. Growth Modeling with Non-Ignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial

    PubMed Central

    Muthén, Bengt; Asparouhov, Tihomir; Hunter, Aimee; Leuchter, Andrew

    2011-01-01

    This paper uses a general latent variable framework to study a series of models for non-ignorable missingness due to dropout. Non-ignorable missing data modeling acknowledges that missingness may depend not only on covariates and observed outcomes at previous time points, as with the standard missing at random (MAR) assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework using the Mplus program. A flexible new model uses an extended pattern-mixture approach where missingness is a function of latent dropout classes in combination with growth mixture modeling using latent trajectory classes. A new selection model not only allows an influence of the outcomes on missingness, but allows this influence to vary across latent trajectory classes. Recommendations are given for choosing models. The missing data models are applied to longitudinal data from STAR*D, the largest antidepressant clinical trial in the U.S. to date. Despite the importance of this trial, STAR*D growth model analyses using non-ignorable missing data techniques have not been explored until now. The STAR*D data are shown to feature distinct trajectory classes, including a low class corresponding to substantial improvement in depression, a minority class with a U-shaped curve corresponding to transient improvement, and a high class corresponding to no improvement. The analyses provide a new way to assess drug efficacy in the presence of dropout. PMID:21381817

  8. Xenon Recovery at Room Temperature using Metal-Organic Frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elsaidi, Sameh K.; Ongari, Daniele; Xu, Wenqian

    2017-07-24

    Xenon is known to be a very efficient anesthetic gas, but its cost prohibits wider use in the medical industry and other potential applications. It has been shown that Xe recovery and recycling from the anesthetic gas mixture can significantly reduce its cost as an anesthetic. The current technology uses a series of adsorbent columns followed by low-temperature distillation to recover Xe, which is expensive to use in medical facilities. Herein, we propose a much more efficient and simpler system to recover and recycle Xe from a simulated exhaled anesthetic gas mixture at room temperature using metal-organic frameworks. Among the MOFs tested, PCN-12 exhibits unprecedented performance, with high Xe capacity and high Xe/O2, Xe/N2, and Xe/CO2 selectivity at room temperature. In-situ synchrotron measurements suggest that Xe occupies the small pockets of PCN-12 rather than the unsaturated metal centers (UMCs). Computational modeling of adsorption further supports our experimental observation of the Xe binding sites in PCN-12.

  10. Constraints based analysis of extended cybernetic models.

    PubMed

    Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M

    2015-11-01

    The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints-based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose, and we discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints-based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Behavioural and biochemical responses to metals tested alone or in mixture (Cd-Cu-Ni-Pb-Zn) in Gammarus fossarum: From a multi-biomarker approach to modelling metal mixture toxicity.

    PubMed

    Lebrun, Jérémie D; Uher, Emmanuelle; Fechner, Lise C

    2017-12-01

    Metals are usually present as mixtures at low concentrations in aquatic ecosystems. However, the toxicity and sub-lethal effects of metal mixtures on organisms are still poorly addressed in environmental risk assessment. Here we investigated the biochemical and behavioural responses of Gammarus fossarum to Cu, Cd, Ni, Pb and Zn, tested individually or in a mixture (M2X) at concentrations twice the environmental quality standards (EQSs) of the European Water Framework Directive. The same metal mixture was also tested at concentrations equal to the EQSs (M1X), thus in a regulatory context, as EQSs are intended to protect aquatic biota. For each exposure condition, mortality, locomotion, respiration and enzymatic activities involved in digestive metabolism and moult were monitored over a 120-h exposure period. Multi-metric variations were summarized by the integrated biomarker response index (IBR). Mono-metallic exposures shed light on the biological alterations occurring in gammarids at environmental exposure levels, depending on the metal considered and on gender. As regards mixtures, biomarkers were altered under both M2X and M1X. However, no additive or synergistic effect of metals was observed compared with the mono-metallic exposures. Indeed, bioaccumulation data highlighted competitive interactions between metals in M2X, which subsequently decreased their internalisation and toxicity. IBR values indicated that the health of gammarids was more impacted by M1X than by M2X, because of reduced competition and enhanced uptake of metals in the mixture at the lower, EQS-like concentrations. Models using bioconcentration data obtained from mono-metallic exposures generated successful predictions of global toxicity for both M1X and M2X. We conclude that sub-lethal effects of mixtures identified by the multi-biomarker approach can lead to disturbances in the population dynamics of gammarids. Although IBR-based models offer promising lines of enquiry for predicting metal mixture toxicity, further studies are needed to confirm their predictive quality over larger ranges of metallic combinations before their use in field conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Selective gas capture via kinetic trapping

    DOE PAGES

    Kundu, Joyjit; Pascal, Tod; Prendergast, David; ...

    2016-07-13

    Conventional approaches to the capture of CO2 by metal-organic frameworks focus on equilibrium conditions, and frameworks that contain little CO2 in equilibrium are often rejected as carbon-capture materials. Here we use a statistical mechanical model, parameterized by quantum mechanical data, to suggest that metal-organic frameworks can be used to separate CO2 from a typical flue gas mixture when used under nonequilibrium conditions. The origin of this selectivity is an emergent gas-separation mechanism that results from the acquisition by different gas types of different mobilities within a crowded framework. The resulting distribution of gas types within the framework is in general spatially and dynamically heterogeneous. Our results suggest that relaxing the requirement of equilibrium can substantially increase the parameter space of conditions and materials for which selective gas capture can be effected.

  13. Using Epidemiological Principles to Explain Fungicide Resistance Management Tactics: Why do Mixtures Outperform Alternations?

    PubMed

    Elderfield, James A D; Lopez-Ruiz, Francisco J; van den Bosch, Frank; Cunniffe, Nik J

    2018-07-01

    Whether fungicide resistance management is optimized by spraying chemicals with different modes of action as a mixture (i.e., simultaneously) or in alternation (i.e., sequentially) has been studied by experimenters and modelers for decades. However, results have been inconclusive. We use previously parameterized and validated mathematical models of wheat Septoria leaf blotch and grapevine powdery mildew to test which tactic provides better resistance management, using the total yield before resistance causes disease control to become economically ineffective ("lifetime yield") to measure effectiveness. We focus on tactics involving the combination of a low-risk and a high-risk fungicide, and the case in which resistance to the high-risk chemical is complete (i.e., in which there is no partial resistance). Lifetime yield is then optimized by spraying as much low-risk fungicide as is permitted, combined with slightly more high-risk fungicide than needed for acceptable initial disease control, applying these fungicides as a mixture. That mixture rather than alternation gives better performance is invariant to model parameterization and structure, as well as the pathosystem in question. However, if comparison focuses on other metrics, e.g., lifetime yield at full label dose, either mixture or alternation can be optimal. Our work shows how epidemiological principles can explain the evolution of fungicide resistance, and also highlights a theoretical framework to address the question of whether mixture or alternation provides better resistance management. It also demonstrates that precisely how spray tactics are compared must be given careful consideration. Copyright © 2018 The Author(s). This is an open access article distributed under the CC BY 4.0 International license.

  14. Community detection for networks with unipartite and bipartite structure

    NASA Astrophysics Data System (ADS)

    Chang, Chang; Tang, Chao

    2014-09-01

    Finding community structures in networks is important in network science, technology, and applications. To date, most algorithms that aim to find community structures focus either on unipartite or on bipartite networks. A unipartite network consists of one set of nodes, and a bipartite network consists of two nonoverlapping sets of nodes with only links joining the nodes in different sets. However, a third type of network exists, defined here as the mixture network. Just like a bipartite network, a mixture network also consists of two sets of nodes, but some nodes may simultaneously belong to both sets, which breaks the nonoverlapping restriction of a bipartite network. The mixture network can be considered as a general case, with unipartite and bipartite networks viewed as its limiting cases. A mixture network can represent not only all unipartite and bipartite networks, but also a wide range of real-world networks that cannot be properly represented as either unipartite or bipartite networks in fields such as biology and social science. Based on this observation, we first propose a probabilistic model that can find modules in unipartite, bipartite, and mixture networks in a unified framework, based on the link community model for a unipartite undirected network [B. Ball et al., Phys. Rev. E 84, 036103 (2011)]. We test our algorithm on synthetic networks (with both overlapping and nonoverlapping communities) and apply it to two real-world networks: the Southern Women bipartite network and a human transcriptional regulatory mixture network. The results suggest that our model performs well for all three types of networks, is competitive with other algorithms for unipartite or bipartite networks, and is applicable to real-world networks.

  15. Leveraging constraints and biotelemetry data to pinpoint repetitively used spatial features

    USGS Publications Warehouse

    Brost, Brian M.; Hooten, Mevin B.; Small, Robert J.

    2016-01-01

    Satellite telemetry devices collect valuable information concerning the sites visited by animals, including the location of central places like dens, nests, rookeries, or haul‐outs. Existing methods for estimating the location of central places from telemetry data require user‐specified thresholds and ignore common nuances like measurement error. We present a fully model‐based approach for locating central places from telemetry data that accounts for multiple sources of uncertainty and uses all of the available locational data. Our general framework consists of an observation model to account for large telemetry measurement error and animal movement, and a highly flexible mixture model specified using a Dirichlet process to identify the location of central places. We also quantify temporal patterns in central place use by incorporating ancillary behavioral data into the model; however, our framework is also suitable when no such behavioral data exist. We apply the model to a simulated data set as proof of concept. We then illustrate our framework by analyzing an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that exhibits fidelity to terrestrial haul‐out sites.
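
    A rough sketch of the clustering ingredient alone, fitting a truncated Dirichlet-process Gaussian mixture (scikit-learn's approximation) to simulated fixes around three hypothetical haul-out sites; the paper's measurement-error and movement components are omitted:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(6)
haulouts = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])  # hypothetical sites
fixes = np.vstack([site + 0.3 * rng.standard_normal((150, 2))
                   for site in haulouts])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0).fit(fixes)

active = dpgmm.weights_ > 0.05        # clusters the DP actually used
print(np.round(dpgmm.means_[active], 1))   # recovered haul-out locations
```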

  16. Non-lambertian reflectance modeling and shape recovery of faces using tensor splines.

    PubMed

    Kumar, Ritwik; Barmpoutis, Angelos; Banerjee, Arunava; Vemuri, Baba C

    2011-03-01

    Modeling illumination effects and pose variations of a face is of fundamental importance in the field of facial image analysis. Most of the conventional techniques that simultaneously address both of these problems work with the Lambertian assumption and thus fall short of accurately capturing the complex intensity variation that the facial images exhibit or recovering their 3D shape in the presence of specularities and cast shadows. In this paper, we present a novel Tensor-Spline-based framework for facial image analysis. We show that, using this framework, the facial apparent BRDF field can be accurately estimated while seamlessly accounting for cast shadows and specularities. Further, using local neighborhood information, the same framework can be exploited to recover the 3D shape of the face (to handle pose variation). We quantitatively validate the accuracy of the Tensor Spline model using a more general model based on the mixture of single-lobed spherical functions. We demonstrate the effectiveness of our technique by presenting extensive experimental results for face relighting, 3D shape recovery, and face recognition using the Extended Yale B and CMU PIE benchmark data sets.

  17. Dictionary-based fiber orientation estimation with improved spatial consistency.

    PubMed

    Ye, Chuyang; Prince, Jerry L

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs in the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs could be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure the consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we have qualitatively and quantitatively evaluated the reproducibility of the proposed method. Results demonstrate that FORNI+ produces FOs with better quality compared with competing methods. Copyright © 2017 Elsevier B.V. All rights reserved.
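
    The three-term objective described above can be sketched in a few lines; the formulation below is a toy rendering with invented names and weights, not the actual FORNI+ objective.

    ```python
    import numpy as np

    def forni_like_objective(f, fo, y, D, atom_dirs, neighbor_fos,
                             lam_smooth=1.0, lam_l1=0.1):
        """Toy FORNI+-style objective over mixture fractions f and an FO fo.

        y            : observed diffusion signal, shape (n_grad,)
        D            : dictionary of atom signals, shape (n_grad, n_atoms)
        atom_dirs    : unit direction of each atom, shape (3, n_atoms)
        neighbor_fos : FOs of neighboring voxels, shape (n_nbr, 3)
        """
        fidelity = np.sum((y - D @ f) ** 2)                 # data fidelity
        # pairwise FO dissimilarity; 1-|cos| treats antipodal FOs as identical
        smoothness = np.sum(1.0 - np.abs(neighbor_fos @ fo))
        # weighted l1: fractions on atoms far from the current FO cost more,
        # coupling the FO variable to the dictionary representation
        weights = 1.0 - np.abs(fo @ atom_dirs)
        return (fidelity + lam_smooth * smoothness
                + lam_l1 * np.sum(weights * np.abs(f)))
    ```

    As in the paper, such an objective would be minimized by alternating between the FOs and the mixture fractions.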

  18. Testing the accuracy of correlations for multicomponent mass transport of adsorbed gases in metal-organic frameworks: diffusion of H2/CH4 mixtures in CuBTC.

    PubMed

    Keskin, Seda; Liu, Jinchen; Johnson, J Karl; Sholl, David S

    2008-08-05

    Mass transport of chemical mixtures in nanoporous materials is important in applications such as membrane separations, but measuring diffusion of mixtures experimentally is challenging. Methods that can predict multicomponent diffusion coefficients from single-component data can be extremely useful if these methods are known to be accurate. We present the first test of a method of this kind for molecules adsorbed in a metal-organic framework (MOF). Specifically, we examine the method proposed by Skoulidas, Sholl, and Krishna (SSK) (Langmuir, 2003, 19, 7977) by comparing predictions made with this method to molecular simulations of mixture transport of H2/CH4 mixtures in CuBTC. These calculations provide the first direct information on mixture transport of any species in a MOF. The predictions of the SSK approach are in good agreement with our direct simulations of binary diffusion, suggesting that this approach may be a powerful one for examining multicomponent diffusion in MOFs. We also use our molecular simulation data to test the ideal adsorbed solution theory method for predicting binary adsorption isotherms and a method for predicting mixture self-diffusion coefficients.
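
    For Langmuir fits of the pure-component isotherms, the ideal adsorbed solution theory (IAST) prediction tested above reduces to a one-dimensional root find on the adsorbed-phase composition. The sketch below is a generic binary IAST calculation; the parameter values are invented, not fitted to H2/CH4 in CuBTC.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def iast_binary_langmuir(P, y1, qs, b):
        """Mixture loadings from pure-component Langmuir parameters (qs_i, b_i)."""
        y = np.array([y1, 1.0 - y1])
        pi = lambda i, P0: qs[i] * np.log1p(b[i] * P0)  # reduced spreading pressure

        # equate spreading pressures at hypothetical pure pressures P0_i = P*y_i/x_i
        g = lambda x1: pi(0, P * y[0] / x1) - pi(1, P * y[1] / (1.0 - x1))
        x = np.array([brentq(g, 1e-9, 1.0 - 1e-9), 0.0])
        x[1] = 1.0 - x[0]

        P0 = P * y / x
        q_pure = qs * b * P0 / (1.0 + b * P0)   # pure loadings at P0_i
        q_total = 1.0 / np.sum(x / q_pure)      # IAST total mixture loading
        return x * q_total                      # component loadings

    # illustrative parameters only (qs in mol/kg, b in 1/Pa)
    print(iast_binary_langmuir(P=1e5, y1=0.5,
                               qs=np.array([5.0, 10.0]),
                               b=np.array([1e-6, 1e-4])))
    ```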

  19. Hierarchical kernel mixture models for the prediction of AIDS disease progression using HIV structural gp120 profiles

    PubMed Central

    2010-01-01

    Changes to the glycosylation profile on HIV gp120 can influence viral pathogenesis and alter AIDS disease progression. Characterization of glycosylation differences at the sequence level is inadequate, as the placement of carbohydrates is structurally complex. However, no structural framework is available to date for the study of HIV disease progression. In this study, we propose a novel machine-learning based framework for the prediction of AIDS disease progression in three stages (RP, SP, and LTNP) using the HIV structural gp120 profile. This new intelligent framework proves to be accurate and provides an important benchmark for predicting AIDS disease progression computationally. The model is trained using a novel HIV gp120 glycosylation structural profile to detect possible stages of AIDS disease progression for the target sequences of HIV+ individuals. The performance of the proposed model was compared to seven existing machine-learning models on the newly proposed gp120-Benchmark_1 dataset in terms of error rate (MSE), accuracy (CCI), stability (STD), and complexity (TBM). The novel framework showed better predictive performance with 67.82% CCI, 30.21 MSE, 0.8 STD, and 2.62 TBM on the three stages of AIDS disease progression of 50 HIV+ individuals. This framework is an invaluable bioinformatics tool that will be useful to the clinical assessment of viral pathogenesis. PMID:21143806

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Ting; Feng, Xuhui; Elsaidi, Sameh K.

    Herein, we demonstrate that a prototypical metal organic framework, zeolitic imidazolate framework-8 (ZIF-8), in membrane form, can effectively separate Kr/Xe gas mixtures at industrially relevant compositions. The best membranes separated Kr/Xe mixtures with average Kr permeances as high as 1.5 × 10^-8 ± 0.2 mol/(m^2 s Pa) and average separation selectivities of 14.2 ± 1.9 for molar feed compositions corresponding to the Kr/Xe ratio typically encountered in air. Molecular sieving, competitive adsorption, and differences in diffusivities were identified as the prevailing separation mechanisms. These membranes potentially represent a less-energy-intensive alternative to cryogenic distillation, which is the benchmark technology used to separate this challenging gas mixture. To the best of our knowledge, this is the first example of any metal organic framework membrane composition displaying separation ability for Kr/Xe gas mixtures.

  1. Vibration effect on the Soret-induced convection of ternary mixture in a rectangular cavity heated from below

    NASA Astrophysics Data System (ADS)

    Lyubimova, T. P.; Zubova, N. A.

    2017-06-01

    This paper presents the results of numerical simulation of the Soret-induced convection of a ternary mixture in a rectangular cavity elongated in the horizontal direction in a gravity field. The cavity has rigid impermeable boundaries. It is heated from below and undergoes translational, linearly polarized vibrations of finite amplitude and frequency in the horizontal direction. The problem is solved by a finite difference method in the framework of a fully unsteady nonlinear approach. A diagonalization procedure for the molecular diffusion coefficient matrix is applied, which eliminates the cross-diffusion terms in the equations and reduces the number of governing parameters. The calculations are performed for a model ternary mixture with positive separation ratios of the components. Data are obtained on the effect of vibration on the temporal evolution of instantaneous and average fields and on the integral characteristics of the flow and of heat and mass transfer at different levels of gravity.

  2. How is genetic testing evaluated? A systematic review of the literature.

    PubMed

    Pitini, Erica; De Vito, Corrado; Marzuillo, Carolina; D'Andrea, Elvira; Rosso, Annalisa; Federici, Antonio; Di Maria, Emilio; Villari, Paolo

    2018-05-01

    Given the rapid development of genetic tests, an assessment of their benefits, risks, and limitations is crucial for public health practice. We performed a systematic review aimed at identifying and comparing the existing evaluation frameworks for genetic tests. We searched PUBMED, SCOPUS, ISI Web of Knowledge, Google Scholar, Google, and gray literature sources for any documents describing such frameworks. We identified 29 evaluation frameworks published between 2000 and 2017, mostly based on the ACCE Framework (n = 13 models), or on the HTA process (n = 6), or both (n = 2). Others refer to the Wilson and Jungner screening criteria (n = 3) or to a mixture of different criteria (n = 5). Due to the widespread use of the ACCE Framework, the most frequently used evaluation criteria are analytic and clinical validity, clinical utility and ethical, legal and social implications. Less attention is given to the context of implementation. An economic dimension is always considered, but not in great detail. Consideration of delivery models, organizational aspects, and consumer viewpoint is often lacking. A deeper analysis of such context-related evaluation dimensions may strengthen a comprehensive evaluation of genetic tests and support the decision-making process.

  3. Molecular simulation investigation into the performance of Cu-BTC metal-organic frameworks for carbon dioxide-methane separations.

    PubMed

    Gutiérrez-Sevillano, Juan José; Caro-Pérez, Alejandro; Dubbeldam, David; Calero, Sofía

    2011-12-07

    We report a molecular simulation study of Cu-BTC metal-organic frameworks as carbon dioxide-methane separation devices. For this study we have computed adsorption and diffusion of methane and carbon dioxide in the structure, both as pure components and as mixtures, over the full range of bulk gas compositions. From the single-component isotherms, mixture adsorption is predicted using the ideal adsorbed solution theory. These predictions are in very good agreement with our computed mixture isotherms and with previously reported data. Adsorption and diffusion selectivities and preferential sitings are also discussed, with the aim of providing new molecular-level information for all studied systems.

  4. Gas adsorption and gas mixture separations using mixed-ligand MOF material

    DOEpatents

    Hupp, Joseph T [Northfield, IL; Mulfort, Karen L [Chicago, IL; Snurr, Randall Q [Evanston, IL; Bae, Youn-Sang [Evanston, IL

    2011-01-04

    A method of separating a mixture of carbon dioxide and hydrocarbon gas using a mixed-ligand, metal-organic framework (MOF) material having metal ions coordinated to carboxylate ligands and pyridyl ligands.

  5. Linking stressors and ecological responses

    USGS Publications Warehouse

    Gentile, J.H.; Solomon, K.R.; Butcher, J.B.; Harrass, M.; Landis, W.G.; Power, M.; Rattner, B.A.; Warren-Hicks, W.J.; Wenger, R.; Foran, Jeffery A.; Ferenc, Susan A.

    1999-01-01

    To characterize risk, it is necessary to quantify the linkages and interactions between chemical, physical and biological stressors and endpoints in the conceptual framework for ecological risk assessment (ERA). This can present challenges in a multiple stressor analysis, and it will not always be possible to develop a quantitative stressor-response profile. This review commences with a conceptual representation of the problem of developing a linkage analysis for multiple stressors and responses. The remainder of the review surveys a variety of mathematical and statistical methods (e.g., ranking methods, matrix models, multivariate dose-response for mixtures, indices, visualization, simulation modeling and decision-oriented methods) for accomplishing the linkage analysis for multiple stressors. Describing the relationships between multiple stressors and ecological effects is a critical component of 'effects assessment' in the ecological risk assessment framework.

  6. A multiscale quasi-continuum theory to determine thermodynamic properties of fluid mixtures in nanochannels

    NASA Astrophysics Data System (ADS)

    Motevaselian, Mohammad Hossein; Mashayak, Sikandar Y.; Aluru, Narayana R.

    2015-11-01

    We present an empirical potential-based quasi-continuum theory (EQT) that seamlessly integrates interatomic potentials into a continuum framework such as the Nernst-Planck equation. EQT is a simple and fast approach that provides accurate predictions of the potential of mean force (PMF) and density distribution of confined fluids at multiple length scales, ranging from a few angstroms up to macroscopic dimensions. The EQT potentials can be used to construct the excess free energy functional in classical density functional theory (cDFT). The combination of EQT and cDFT (EQT-cDFT) allows one to predict the thermodynamic properties of confined fluids. Recently, the EQT-cDFT framework was developed for single-component Lennard-Jones (LJ) fluids confined in slit-like graphene channels. In this work, we extend the framework to confined LJ fluid mixtures and demonstrate it by simulating a mixture of methane and hydrogen molecules inside slit-like graphene channels. We show that the EQT-cDFT predictions for the structure of the confined fluid mixture compare well with molecular dynamics (MD) simulations. In addition, our results show that graphene nanochannels exhibit selective adsorption of methane over hydrogen.

  7. A Combined Kinetic and Volatility Basis Set Approach to Model Secondary Organic Aerosol from Toluene and Diesel Exhaust/Meat Cooking Mixtures

    NASA Astrophysics Data System (ADS)

    Parikh, H. M.; Carlton, A. G.; Zhang, H.; Kamens, R.; Vizuete, W.

    2011-12-01

    Secondary organic aerosol (SOA) is simulated for six outdoor smog chamber experiments using a SOA model based on a kinetic chemical mechanism in conjunction with a volatility basis set (VBS) approach. The experiments include toluene, a non-SOA-forming hydrocarbon mixture, diesel exhaust or meat cooking emissions, and NOx, and are performed under varying conditions of relative humidity. SOA formation from toluene is modeled using a condensed kinetic aromatic mechanism that includes partitioning of lumped semi-volatile products into the particle organic phase and incorporates particle aqueous-phase chemistry to describe uptake of glyoxal and methylglyoxal. Modeling with the kinetic mechanism alone, along with primary organic aerosol (POA) from diesel exhaust (DE) or meat cooking (MC), fails to simulate the rapid SOA formation during the early hours of the experiments. Including a VBS approach with the kinetic mechanism, to characterize the emissions and chemistry of the complex mixture of intermediate-volatility organic compounds (IVOCs) from DE/MC, substantially improves SOA predictions when compared with observed data. The VBS model includes photochemical aging of IVOCs and evaporation of POA after dilution. The relative contribution of SOA mass from DE/MC is as high as 95% in the morning but decreases substantially after mid-afternoon. For high-humidity experiments, the aqueous-phase SOA fraction dominates the total SOA mass at the end of the day (approximately 50%). In summary, the combined kinetic and VBS approach provides a new and improved framework for semi-explicitly modeling SOA from VOC precursors while handling complex emission mixtures composed of hundreds of individual chemical species.

  8. Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model

    NASA Astrophysics Data System (ADS)

    Li, X. L.; Zhao, Q. H.; Li, Y.

    2017-09-01

    Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image, and the image domain is divided into many sub-regions using a Voronoi tessellation. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, each pixel intensity is assumed to follow a Gamma mixture model whose parameters correspond to the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster. The regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in the region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation results are obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of segmentation results on simulated and real SAR images.
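
    The regional dissimilarity measure defined above is easy to express directly: it is the summed negative log-density of a sub-region's pixels under one cluster's Gamma mixture. A minimal sketch follows (parameter names are invented; the fuzzy memberships and MRF term of the full algorithm are omitted):

    ```python
    import numpy as np
    from scipy.stats import gamma

    def regional_dissimilarity(pixels, weights, shapes, scales):
        """Sum of negative log Gamma-mixture densities over one Voronoi sub-region."""
        dens = sum(w * gamma.pdf(pixels, a=k, scale=s)
                   for w, k, s in zip(weights, shapes, scales))
        return -np.sum(np.log(dens + 1e-300))

    # assign a sub-region to the cluster with the smallest regional dissimilarity
    region = np.random.default_rng(0).gamma(shape=2.0, scale=3.0, size=400)
    cluster_a = dict(weights=[0.7, 0.3], shapes=[2.0, 5.0], scales=[3.0, 1.0])
    cluster_b = dict(weights=[0.5, 0.5], shapes=[8.0, 9.0], scales=[2.0, 4.0])
    print(regional_dissimilarity(region, **cluster_a) <
          regional_dissimilarity(region, **cluster_b))   # True: region fits A
    ```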

  9. Integrated presentation of ecological risk from multiple stressors

    NASA Astrophysics Data System (ADS)

    Goussen, Benoit; Price, Oliver R.; Rendal, Cecilie; Ashauer, Roman

    2016-10-01

    Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by constructing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic.

  10. Integrated presentation of ecological risk from multiple stressors.

    PubMed

    Goussen, Benoit; Price, Oliver R; Rendal, Cecilie; Ashauer, Roman

    2016-10-26

    Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by constructing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic.

  11. Multi-atlas segmentation for abdominal organs with Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Burke, Ryan P.; Xu, Zhoubing; Lee, Christopher P.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Abramson, Richard G.; Landman, Bennett A.

    2015-03-01

    Abdominal organ segmentation with clinically acquired computed tomography (CT) is drawing increasing interest in the medical imaging community. Gaussian mixture models (GMM) have been used extensively in medical segmentation, most notably in the brain for cerebrospinal fluid / gray matter / white matter differentiation. Because abdominal CT images exhibit strong localized intensity characteristics, GMM have recently been incorporated into multi-stage abdominal segmentation algorithms. In the context of variable abdominal anatomy and rich algorithms, it is difficult to assess the marginal contribution of GMM. Herein, we characterize the efficacy of an a posteriori framework that integrates organ-wise GMM intensity likelihoods with spatial priors from multiple target-specific registered labels. In our study, we first manually labeled 100 CT images. We then assigned 40 images as training data for constructing target-specific spatial priors and intensity likelihoods. The remaining 60 images were evaluated as test targets for segmenting 12 abdominal organs. The overlap between the true and automatic segmentations was measured by the Dice similarity coefficient (DSC). A median improvement of 145% was achieved by integrating the GMM intensity likelihood with the target-specific spatial prior. The proposed framework opens opportunities for abdominal organ segmentation by efficiently using both the spatial and appearance information from the atlases, and creates a benchmark for large-scale automatic abdominal segmentation.
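
    A minimal sketch of the a posteriori fusion step: per-organ GMM intensity likelihoods multiplied by a registered spatial prior, followed by a MAP label. The inputs (fitted `GaussianMixture` objects and a prior from multi-atlas registration) are assumed to exist already; the names are illustrative.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def map_organ_labels(intensities, spatial_prior, organ_gmms):
        """intensities: (n_voxels,); spatial_prior: (n_voxels, n_organs);
        organ_gmms: one fitted GaussianMixture per organ."""
        x = intensities.reshape(-1, 1)
        # p(intensity | organ) from each organ-wise mixture model
        likelihood = np.column_stack(
            [np.exp(g.score_samples(x)) for g in organ_gmms])
        posterior = spatial_prior * likelihood          # a posteriori fusion
        posterior /= posterior.sum(axis=1, keepdims=True) + 1e-300
        return posterior.argmax(axis=1)                 # MAP organ per voxel
    ```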

  12. On the Theory of Reactive Mixtures for Modeling Biological Growth

    PubMed Central

    Ateshian, Gerard A.

    2013-01-01

    Mixture theory, which can combine continuum theories for the motion and deformation of solids and fluids with general principles of chemistry, is well suited for modeling the complex responses of biological tissues, including tissue growth and remodeling, tissue engineering, mechanobiology of cells and a variety of other active processes. A comprehensive presentation of the equations of reactive mixtures of charged solid and fluid constituents is lacking in the biomechanics literature. This study provides the conservation laws and entropy inequality, as well as interface jump conditions, for reactive mixtures consisting of a constrained solid mixture and multiple fluid constituents. The constituents are intrinsically incompressible and may carry an electrical charge. The interface jump condition on the mass flux of individual constituents is shown to define a surface growth equation, which predicts deposition or removal of material points from the solid matrix, complementing the description of volume growth described by the conservation of mass. A formulation is proposed for the reference configuration of a body whose material point set varies with time. State variables are defined which can account for solid matrix volume growth and remodeling. Constitutive constraints are provided on the stresses and momentum supplies of the various constituents, as well as the interface jump conditions for the electrochemical potential of the fluids. Simplifications appropriate for biological tissues are also proposed, which help reduce the governing equations into a more practical format. It is shown that explicit mechanisms of growth-induced residual stresses can be predicted in this framework. PMID:17206407

  13. Myocardium Segmentation From DE MRI Using Multicomponent Gaussian Mixture Model and Coupled Level Set.

    PubMed

    Liu, Jie; Zhuang, Xiahai; Wu, Lianming; An, Dongaolei; Xu, Jianrong; Peters, Terry; Gu, Lixu

    2017-11-01

    Objective: In this paper, we propose a fully automatic framework for myocardium segmentation of delayed-enhancement (DE) MRI images without relying on prior patient-specific information. Methods: We employ a multicomponent Gaussian mixture model to deal with the intensity heterogeneity of myocardium caused by the infarcts. To differentiate the myocardium from other tissues with similar intensities, while at the same time maintain spatial continuity, we introduce a coupled level set (CLS) to regularize the posterior probability. The CLS, as a spatial regularization, can be adapted to the image characteristics dynamically. We also introduce an image intensity gradient based term into the CLS, adding an extra force to the posterior probability based framework, to improve the accuracy of myocardium boundary delineation. The prebuilt atlases are propagated to the target image to initialize the framework. Results: The proposed method was tested on datasets of 22 clinical cases, and achieved Dice similarity coefficients of 87.43 ± 5.62% (endocardium), 90.53 ± 3.20% (epicardium) and 73.58 ± 5.58% (myocardium), which have outperformed three variants of the classic segmentation methods. Conclusion: The results can provide a benchmark for the myocardial segmentation in the literature. Significance: DE MRI provides an important tool to assess the viability of myocardium. The accurate segmentation of myocardium, which is a prerequisite for further quantitative analysis of myocardial infarction (MI) region, can provide important support for the diagnosis and treatment management for MI patients.

  14. From micro-correlations to macro-correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: iddo.eliazar@intel.com

    2016-11-15

    Random vectors with a symmetric correlation structure share a common value of pair-wise correlation between their different components. The symmetric correlation structure appears in a multitude of settings, e.g. mixture models. In a mixture model the components of the random vector are drawn independently from a general probability distribution that is determined by an underlying parameter, and the parameter itself is randomized. In this paper we study the overall correlation of high-dimensional random vectors with a symmetric correlation structure. Considering such a random vector, and terming its pair-wise correlation “micro-correlation”, we use an asymptotic analysis to derive the random vector’s “macro-correlation”: a score that takes values in the unit interval, and that quantifies the random vector’s overall correlation. The method of obtaining macro-correlations from micro-correlations is then applied to a diverse collection of frameworks that demonstrate the method’s wide applicability.

  15. Modelling and calculation of flotation process in one-dimensional formulation

    NASA Astrophysics Data System (ADS)

    Amanbaev, Tulegen; Tilleuov, Gamidulla; Tulegenova, Bibigul

    2016-08-01

    Within the framework of the assumptions of multiphase media mechanics, a mathematical model of the flotation process in a dispersed mixture of liquid, solid, and gas phases is constructed, taking into account the degree of mineralization of the bubble surfaces. Application of the model is demonstrated using the example of one-dimensional stationary flotation, and it is shown that the equations describing bubble ascent are singularly perturbed (stiff). The effects of bubble size and concentration and of the volumetric content of dispersed particles on the flotation process are analyzed.

  16. Intensification process of air-hydrogen mixture burning in the variable cross section channel by means of the air jet

    NASA Astrophysics Data System (ADS)

    Zamuraev, V. P.; Kalinina, A. P.

    2018-03-01

    The paper presents the results of numerical modeling of transonic region formation in a flat channel. Hydrogen flows into the channel through holes in the wall. A jet of compressed air is localized downstream of the holes. The transonic region is produced by the burning of the heterogeneous hydrogen-air mixture, considered in the framework of simplified chemical kinetics. An interesting feature of the obtained regime is that the Mach number distribution is qualitatively similar to the case of pulse-periodic energy sources. This mode is a favorable prerequisite for effective fuel combustion when fuel is injected into the expanding part of the channel.

  17. Automatic detection of key innovations, rate shifts, and diversity-dependence on phylogenetic trees.

    PubMed

    Rabosky, Daniel L

    2014-01-01

    A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes.

  18. Automatic Detection of Key Innovations, Rate Shifts, and Diversity-Dependence on Phylogenetic Trees

    PubMed Central

    Rabosky, Daniel L.

    2014-01-01

    A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes. PMID:24586858

  19. Cumulative Effects of In Utero Administration of Mixtures of Reproductive Toxicants that Disrupt Common Target Tissues via Diverse Mechanisms of Toxicity

    PubMed Central

    Rider, Cynthia V.; Furr, Johnathan R.; Wilson, Vickie S.; Gray, L. Earl

    2010-01-01

    Although risk assessments are typically conducted on a chemical-by-chemical basis, the 1996 Food Quality Protection Act required the US Environmental Protection Agency to consider cumulative risk of chemicals that act via a common mechanism of toxicity. To this end, we are conducting studies with mixtures of chemicals to elucidate mechanisms of joint action at the systemic level, with the end goal of providing a framework for assessing the cumulative effects of reproductive toxicants. Previous mixture studies conducted with antiandrogenic chemicals are reviewed briefly and two new studies are described in detail. In all binary mixture studies, rats were dosed during pregnancy with chemicals, singly or in pairs, at dosage levels equivalent to approximately one half of the ED50 for hypospadias or epididymal agenesis. The binary mixtures included: androgen receptor (AR) antagonists (vinclozolin plus procymidone), phthalate esters (DBP plus BBP and DEHP plus DBP), a phthalate ester plus an AR antagonist (DBP plus procymidone), a mixed-mechanism androgen signaling disruptor (linuron) plus BBP, and two chemicals which disrupt epididymal differentiation through entirely different toxicity pathways: DBP (AR pathway) plus 2,3,7,8-TCDD (AhR pathway). We also conducted multi-component mixture studies combining several “antiandrogens”. In the first study, seven chemicals (four pesticides and three phthalates) that elicit antiandrogenic effects at two different sites in the androgen signaling pathway (i.e., AR antagonism or inhibition of androgen synthesis) were combined. In the second study, three additional phthalates were added to make a ten-chemical mixture. In both the binary mixture studies and the multi-component mixture studies, chemicals that targeted male reproductive tract development displayed cumulative effects that exceeded predictions based upon a response addition model and most often were in accordance with predictions based upon dose addition models. In summary, our results indicate that compounds that act by disparate mechanisms of toxicity to disrupt the dynamic interactions among the interconnected signaling pathways in differentiating tissues produce cumulative, dose-additive effects, regardless of the mechanism or mode of action of the individual mixture component. PMID:20487044
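
    The two prediction models compared above can be contrasted numerically. In the sketch below, two hypothetical similarly acting chemicals follow Hill dose-response curves; dose addition solves for the effect level at which the toxic units sum to one, while response addition multiplies the probabilities of no response. The parameters are invented, but the qualitative outcome mirrors the paper's finding that dose addition predicts the larger joint effect at sub-ED50 doses.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def hill(d, ed50, n=2.0):
        return d**n / (ed50**n + d**n)              # fraction affected at dose d

    def inv_hill(e, ed50, n=2.0):
        return ed50 * (e / (1.0 - e)) ** (1.0 / n)  # dose giving effect e

    def dose_addition(doses, ed50s, n=2.0):
        """Effect E at which toxic units sum to one: sum_i d_i / ED_i(E) = 1."""
        f = lambda e: sum(d / inv_hill(e, m, n) for d, m in zip(doses, ed50s)) - 1.0
        return brentq(f, 1e-9, 1.0 - 1e-9)

    def response_addition(doses, ed50s, n=2.0):
        """Independent action: combined probability of response."""
        return 1.0 - np.prod([1.0 - hill(d, m, n) for d, m in zip(doses, ed50s)])

    doses, ed50s = [0.5, 0.5], [1.0, 1.0]           # each chemical at half its ED50
    print(dose_addition(doses, ed50s))              # 0.50
    print(response_addition(doses, ed50s))          # 0.36
    ```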

  20. Characterization of low-temperature properties of plant-produced rap mixtures in the Northeast

    NASA Astrophysics Data System (ADS)

    Medeiros, Marcelo S., Junior

    The dissertation outlined herein results from a Federal Highway Administration sponsored project intended to investigate the impacts of high percentages of RAP material on the performance of pavements under cold climate conditions. It comprises two main sections, incorporated into the body of this dissertation as Part I and Part II. In Part I, a reduced testing framework for the analysis of HMA mixes is proposed, replacing IDT creep compliance and strength testing with dynamic modulus and fatigue tests performed on an AMPT device. A continuum damage model that incorporates the nonlinear constitutive behavior of HMA mixtures was also successfully implemented and validated. Mixtures with percentages of reclaimed material (RAP) ranging from 0 to 40% were used in this research effort in order to verify the applicability of the proposed methodology to RAP mixtures. Part II is concerned with evaluating the effects of various binder grades on the properties of plant-produced mixtures with various percentages of RAP. The effects of RAP on the mechanical and rheological properties of mixtures and extracted binders were studied in order to identify some of the deficiencies in current production methodologies. The results of this dissertation will help practitioners to identify optimal RAP usage from a material property perspective. It also establishes guidelines and best practices for the use of higher RAP percentages in HMA.

  1. Hidden drivers of low-dose pharmaceutical pollutant mixtures revealed by the novel GSA-QHTS screening method

    PubMed Central

    Rodea-Palomares, Ismael; Gonzalez-Pleiter, Miguel; Gonzalo, Soledad; Rosal, Roberto; Leganes, Francisco; Sabater, Sergi; Casellas, Maria; Muñoz-Carpena, Rafael; Fernández-Piñas, Francisca

    2016-01-01

    The ecological impacts of emerging pollutants such as pharmaceuticals are not well understood. The lack of experimental approaches for the identification of pollutant effects in realistic settings (that is, low doses, complex mixtures, and variable environmental conditions) supports the widespread perception that these effects are often unpredictable. To address this, we developed a novel screening method (GSA-QHTS) that couples the computational power of global sensitivity analysis (GSA) with the experimental efficiency of quantitative high-throughput screening (QHTS). We present a case study where GSA-QHTS allowed for the identification of the main pharmaceutical pollutants (and their interactions), driving biological effects of low-dose complex mixtures at the microbial population level. The QHTS experiments involved the integrated analysis of nearly 2700 observations from an array of 180 unique low-dose mixtures, representing the most complex and data-rich experimental mixture effect assessment of main pharmaceutical pollutants to date. An ecological scaling-up experiment confirmed that this subset of pollutants also affects typical freshwater microbial community assemblages. Contrary to our expectations and challenging established scientific opinion, the bioactivity of the mixtures was not predicted by the null mixture models, and the main drivers that were identified by GSA-QHTS were overlooked by the current effect assessment scheme. Our results suggest that current chemical effect assessment methods overlook a substantial number of ecologically dangerous chemical pollutants and introduce a new operational framework for their systematic identification. PMID:27617294
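
    The GSA half of the screening method can be illustrated with the SALib package: sample mixture compositions with a Saltelli design, evaluate a response, and estimate Sobol sensitivity indices to rank drivers and interactions. The five-component response surface below is a toy stand-in for the QHTS bioactivity measurements, and the compound names are placeholders, not the study's actual drivers.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 5,
        "names": ["drug_A", "drug_B", "drug_C", "drug_D", "drug_E"],
        "bounds": [[0.0, 1.0]] * 5,   # normalized low-dose concentration ranges
    }

    def bioactivity(X):               # toy response with one interaction term
        x = X.T
        return 0.9 * x[0] + 0.1 * x[1] + 0.5 * x[0] * x[3] + 0.05 * x[2]

    X = saltelli.sample(problem, 1024)            # Saltelli design
    Si = sobol.analyze(problem, bioactivity(X))   # Sobol indices
    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name}: first-order={s1:5.2f}  total={st:5.2f}")
    ```

    Large gaps between total and first-order indices flag interaction-driven components, which is precisely the kind of hidden driver the GSA-QHTS screen is designed to expose.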

  2. Methods for compressible multiphase flows and their applications

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choe, Y.; Kim, H.; Min, D.; Kim, C.

    2018-06-01

    This paper presents an efficient and robust numerical framework to deal with multiphase real-fluid flows and their broad spectrum of engineering applications. A homogeneous mixture model incorporated with a real-fluid equation of state and a phase change model is considered to calculate complex multiphase problems. As robust and accurate numerical methods to handle multiphase shocks and phase interfaces over a wide range of flow speeds, the AUSMPW+_N and RoeM_N schemes with a system preconditioning method are presented. These methods are assessed by extensive validation problems with various types of equation of state and phase change models. Representative realistic multiphase phenomena, including the flow inside a thermal vapor compressor, pressurization in a cryogenic tank, and unsteady cavitating flow around a wedge, are then investigated as application problems. With appropriate physical modeling followed by robust and accurate numerical treatments, compressible multiphase flow physics such as phase changes, shock discontinuities, and their interactions are well captured, confirming the suitability of the proposed numerical framework to wide engineering applications.

  3. Model-Based Clustering of Regression Time Series Data via APECM -- An AECM Algorithm Sung to an Even Faster Beat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Wei-Chen; Maitra, Ranjan

    2011-01-01

    We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results on our simulation experiments show improved performance in both fewer numbers of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
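
    Folding the AR(p) structure into the design matrix (lagged responses appended as columns) reduces the model to a mixture of Gaussian linear regressions, for which a plain EM loop is easy to write. The sketch below is that generic EM, not APECM itself, which reorganizes the E- and CM-steps around an extra data-augmentation storage step.

    ```python
    import numpy as np

    def mixture_regression_em(X, y, K, n_iter=100, seed=0):
        """EM for a K-component mixture of Gaussian linear regressions."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        beta = rng.normal(size=(K, d))
        sigma2, pi = np.ones(K), np.full(K, 1.0 / K)
        for _ in range(n_iter):
            # E-step: responsibilities from component regression densities
            resid = y[:, None] - X @ beta.T                         # (n, K)
            logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * sigma2)
                    - 0.5 * resid**2 / sigma2)
            r = np.exp(logp - logp.max(axis=1, keepdims=True))
            r /= r.sum(axis=1, keepdims=True)
            # M-step: weighted least squares for each component
            for k in range(K):
                XtW = X.T * r[:, k]
                beta[k] = np.linalg.solve(XtW @ X, XtW @ y)
                sigma2[k] = np.sum(r[:, k] * (y - X @ beta[k])**2) / r[:, k].sum()
            pi = r.mean(axis=0)
        return beta, sigma2, pi, r
    ```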

  4. Bayesian mixture analysis for metagenomic community profiling.

    PubMed

    Morfopoulou, Sofia; Plagnol, Vincent

    2015-09-15

    Deep sequencing of clinical samples is now an established tool for the detection of infectious pathogens, with direct medical applications. The large amount of data generated produces an opportunity to detect species even at very low levels, provided that computational tools can effectively profile the relevant metagenomic communities. Data interpretation is complicated by the fact that short sequencing reads can match multiple organisms and by the lack of completeness of existing databases, in particular for viral pathogens. Here we present metaMix, a Bayesian mixture model framework for resolving complex metagenomic mixtures. We show that the use of parallel Markov chain Monte Carlo sampling for the exploration of the species space enables the identification of the set of species most likely to contribute to the mixture. We demonstrate the greater accuracy of metaMix compared with relevant methods, particularly for profiling complex communities consisting of several related species. We designed metaMix specifically for the analysis of deep transcriptome sequencing datasets, with a focus on viral pathogen detection; however, the principles are generally applicable to all types of metagenomic mixtures. metaMix is implemented as a user-friendly R package, freely available on CRAN: http://cran.r-project.org/web/packages/metaMix. Contact: sofia.morfopoulou.10@ucl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
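
    The identifiability idea behind such mixture profiling, that ambiguous reads are resolved by pooling information across all reads, can be shown with a small maximum-likelihood EM. This is a deliberately stripped-down stand-in: metaMix is Bayesian, explores the species set with parallel MCMC, and works from alignment probabilities rather than the 0/1 compatibilities assumed here.

    ```python
    import numpy as np

    def species_em(compat, n_iter=100):
        """compat[r, s] = 1 if read r aligns to species s; returns proportions."""
        n_reads, n_species = compat.shape
        w = np.full(n_species, 1.0 / n_species)
        for _ in range(n_iter):
            # E-step: posterior assignment of each read among compatible species
            num = compat * w
            post = num / num.sum(axis=1, keepdims=True)
            # M-step: proportions follow expected read counts
            w = post.sum(axis=0) / n_reads
        return w

    # one read unique to species 0, two reads ambiguous between species 0 and 1
    compat = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 0]], float)
    print(species_em(compat))   # mass concentrates on species 0
    ```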

  5. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    NASA Astrophysics Data System (ADS)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Kuzyk, Zou Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure, and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between the ecological 'prey' and 'consumer' concepts and river basin sediment 'sources' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined the advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with the mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested, and distributed catchment sampling programmes. Illustrative examples using geochemical and compound-specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling process, (3) deriving and using informative priors in a sediment fingerprinting context, and (4) transparency of the process and replication of model results by other users.
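
    For contrast with the hierarchical Bayesian treatment, the simplest deterministic version of source apportionment is a least-squares fit on the probability simplex; the tracer values below are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def unmix(sources, mixture):
        """sources: (n_sources, n_tracers) signatures; mixture: (n_tracers,).

        Deterministic sketch only: MixSIAR instead places priors on the
        proportions and propagates tracer uncertainty hierarchically."""
        n = sources.shape[0]
        loss = lambda p: np.sum((p @ sources - mixture) ** 2)
        res = minimize(loss, np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
                       constraints=[{"type": "eq",
                                     "fun": lambda p: p.sum() - 1.0}])
        return res.x

    sources = np.array([[10.0, 1.0], [2.0, 8.0], [5.0, 5.0]])  # 3 sources, 2 tracers
    mixture = 0.6 * sources[0] + 0.3 * sources[1] + 0.1 * sources[2]
    print(unmix(sources, mixture))   # ~ [0.6, 0.3, 0.1]
    ```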

  6. Where's the reef: the role of framework in the Holocene

    USGS Publications Warehouse

    Hubbard, D.K.; Burke, R.B.; Gill, I.P.

    1998-01-01

    Holocene reef models generally emphasize the role of in-place and interlocking framework in the creation of a rigid structure that rises above its surroundings. By extension, a number of ancient biohermal deposits have been disqualified as 'true reefs' owing to their lack of recognizable framework. Fifty-four cores from several eastern Caribbean sites clearly demonstrate that in-place and interlocking framework is not common in these reefs, which are composed of varying mixtures of recognizable coral (primary framework), loose sediment/rubble, and secondary framework made up mostly of coralgal fragments bound together by submarine cementation and biological encrustation. Recovery of primary and secondary framework ranged from 22% (avg.) in branching-coral facies to 33% in intervals dominated by head corals. Accretion rate decreases as expected with water depth. However, the recovery of recognizable coral generally increased with water depth, inversely to presumed coral-growth rates. This pattern reflects a spectrum in the relative importance of coral growth (primary construction), bioerosion, hydromechanical breakdown, and the transport of sediment and detritus. The relative importance of each is controlled by the physical-oceanographic conditions at the site of reef development and will dictate both the architecture and the character of its internal fabric. We do not propose that framework reefs do not exist, as they most assuredly do. However, the fact that so many modern reefs are not dominated by in-place and interlocking framework suggests that its use as the primary determinant of ancient reefs may be unreasonable. We therefore propose the abandonment of framework-based models in favor of those that treat framework generation, physical/biological degradation, sedimentation, and encrustation as equal partners in the development of modern and ancient reefs alike.

  7. Integrated fate modeling for exposure assessment of produced water on the Sable Island Bank (Scotian shelf, Canada).

    PubMed

    Berry, Jody A; Wells, Peter G

    2004-10-01

    Produced water is the largest waste discharge from the production phase of oil and gas wells. Produced water is a mixture of reservoir formation water and production chemicals from the separation process. This creates a chemical mixture that has several components of toxic concern, ranging from heavy metals to soluble hydrocarbons. Analysis of potential environmental effects from produced water in the Sable Island Bank region (NS, Canada) was conducted using an integrated modeling approach according to the ecological risk assessment framework. A hydrodynamic dispersion model was used to describe the wastewater plume. A second fugacity-based model was used to describe the likely plume partitioning in the local environmental media of water, suspended sediment, biota, and sediment. Results from the integrated modeling showed that the soluble benzene and naphthalene components reach chronic no-effect concentration levels at a distance of 1.0 m from the discharge point. The partition modeling indicated that low persistence was expected because of advection forces caused by tidal currents for the Sable Island Bank system. The exposure assessment for the two soluble hydrocarbon components suggests that the risks of adverse environmental effects from produced water on Sable Island Bank are low.
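
    The fugacity-based partitioning step can be illustrated with a Level I calculation: at equilibrium every compartment shares one fugacity, f = M / Σ(V_i · Z_i), and concentrations follow as C_i = Z_i · f. All volumes and Z values below are invented placeholders, not Sable Island Bank inputs.

    ```python
    import numpy as np

    # Level I fugacity sketch for a benzene-like solute
    names = ["water", "susp. sediment", "biota", "sediment"]
    V = np.array([1e9, 1e3, 1e2, 1e6])      # compartment volumes, m^3
    Z = np.array([2.0, 40.0, 80.0, 40.0])   # fugacity capacities, mol/(m^3*Pa)

    M = 1000.0                              # total moles discharged
    f = M / np.sum(V * Z)                   # common equilibrium fugacity, Pa
    for name, v, z in zip(names, V, Z):
        print(f"{name}: C = {z * f:.3e} mol/m^3, "
              f"mass fraction = {v * z * f / M:.3%}")
    ```

    Advective losses, which the study found dominate on the bank because of tidal currents, would enter at Level II/III with flow terms.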

  8. A model of icebergs and sea ice in a joint continuum framework

    NASA Astrophysics Data System (ADS)

    Vaňková, Irena; Holland, David M.

    2017-04-01

    The ice mélange, a mixture of sea ice and icebergs, often present in front of tidewater glaciers in Greenland or ice shelves in Antarctica, can have a profound effect on the dynamics of the ice-ocean system. The current inability to numerically model the ice mélange motivates a new modeling approach proposed here. A continuum sea-ice model is taken as a starting point and icebergs are represented as thick and compact pieces of sea ice held together by large tensile and shear strength selectively introduced into the sea ice rheology. In order to modify the rheology correctly, a semi-Lagrangian time stepping scheme is introduced and at each time step a Lagrangian grid is constructed such that iceberg shape is preserved exactly. With the proposed treatment, sea ice and icebergs are considered a single fluid with spatially varying rheological properties, mutual interactions are thus automatically included without the need of further parametrization. An important advantage of the presented framework for an ice mélange model is its potential to be easily included in existing climate models.

  9. A Model of Icebergs and Sea Ice in a Joint Continuum Framework

    NASA Astrophysics Data System (ADS)

    Vaňková, Irena; Holland, David M.

    2017-11-01

    The ice mélange, a mixture of sea ice and icebergs, often present in front of outlet glaciers in Greenland or ice shelves in Antarctica, can have a profound effect on the dynamics of the ice-ocean system. The current inability to numerically model the ice mélange motivates a new modeling approach proposed here. A continuum sea-ice model is taken as a starting point and icebergs are represented as thick and compact pieces of sea ice held together by large tensile and shear strength, selectively introduced into the sea-ice rheology. In order to modify the rheology correctly, an iceberg tracking procedure is implemented within a semi-Lagrangian time-stepping scheme, designed to exactly preserve iceberg shape through time. With the proposed treatment, sea ice and icebergs are considered a single fluid with spatially varying rheological properties. Mutual interactions are thus automatically included without the need for further parametrization. An important advantage of the presented framework for an ice mélange model is its potential to be easily included within sea-ice components of existing climate models.

  10. Reference interaction site model with hydrophobicity induced density inhomogeneity: An analytical theory to compute solvation properties of large hydrophobic solutes in the mixture of polyatomic solvent molecules.

    PubMed

    Cao, Siqin; Sheong, Fu Kit; Huang, Xuhui

    2015-08-07

    Reference interaction site model (RISM) theory has recently become a popular approach in the study of thermodynamical and structural properties of the solvent around macromolecules. On the other hand, it has been widely suggested that water density depletion exists around large hydrophobic solutes (>1 nm), and this may pose a great challenge to RISM theory. In this paper, we develop a new analytical theory, the Reference Interaction Site Model with Hydrophobicity induced density Inhomogeneity (RISM-HI), to compute the solvent radial distribution function (RDF) around large hydrophobic solutes in water as well as in its mixtures with other polyatomic organic solvents. To achieve this, we have explicitly considered the density inhomogeneity at the solute-solvent interface using the framework of the Yvon-Born-Green hierarchy, and RISM theory is used to obtain the solute-solvent pair correlation. In order to solve the relevant equations efficiently while maintaining reasonable accuracy, we have also developed a new closure called the D2 closure. With this new theory, the solvent RDFs around a large hydrophobic particle in water and in different water-acetonitrile mixtures can be computed, and they agree well with the results of molecular dynamics simulations. Furthermore, we show that our RISM-HI theory can also efficiently compute the solvation free energy of solutes with a wide range of hydrophobicity in various water-acetonitrile solvent mixtures with reasonable accuracy. We anticipate that our theory could be widely applied to compute the thermodynamic and structural properties for the solvation of hydrophobic solutes.

  11. Assessing, mapping and validating site-specific ecotoxicological risk for pesticide mixtures: a case study for small scale hot spots in aquatic and terrestrial environments.

    PubMed

    Vaj, Claudia; Barmaz, Stefania; Sørensen, Peter Borgen; Spurgeon, David; Vighi, Marco

    2011-11-01

    Mixture toxicity is a real-world problem and as such requires risk assessment solutions that can be applied within different geographic regions, across different spatial scales, and in situations where the quantity of data available for the assessment varies. Moreover, the need for site-specific procedures for assessing ecotoxicological risk for non-target species in non-target ecosystems also has to be recognised. The work presented in the paper addresses the real-world effects of pesticide mixtures on natural communities. Initially, the location of risk hotspots is theoretically estimated through exposure modelling and the use of available toxicity data to predict potential community effects. The concept of Concentration Addition (CA) is applied to describe responses resulting from exposure to multiple pesticides. The developed and refined exposure models are georeferenced (GIS-based) and include environmental and physico-chemical parameters, and site-specific information on pesticide usage and land use. As a test of the risk assessment framework, the procedures have been applied to suitable study areas, notably the River Meolo basin (Northern Italy), a catchment characterised by intensive agriculture, as well as a comparative area for some assessments. Within the studied areas, the risks from individual chemicals and complex mixtures have been assessed for aquatic and terrestrial aboveground and belowground communities. Results from ecological surveys have been used to validate these risk assessment model predictions. The value and limitations of the approaches are described, and the possibilities for larger-scale applications in risk assessment are also discussed. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Assessment of wastewater treatment plant effluent on fish reproduction utilizing the adverse outcome pathway conceptual framework

    EPA Science Inventory

    Wastewater treatment plant (WWTP) effluents are a known contributor of chemical mixture inputs into the environment. Whole effluent testing guidelines were developed to screen these complex mixtures for acute toxicity. However, efficient and cost-effective approaches for screenin...

  13. Influence of Dense Inert Additives (W and Pb) on Detonation Conditions and Regime of Condensed Explosives

    NASA Astrophysics Data System (ADS)

    Imkhovik, Nikolay A.

    2010-10-01

    Results of experimental and theoretical studies of the unusual detonation properties of mixtures of high explosives (HEs) with high-density inert additives W and Pb were analyzed and systematized. Typical examples of the nonideal detonation of composite explosives for which the measured detonation pressure is substantially lower and the detonation velocity is higher than the values calculated within the framework of the hydrodynamic model, with the specific heat ratio for the detonation products of ∼6-8, are presented. Mechanisms of formation of anomalous pressure and mass velocity profiles, which explain the correlation between the Chapman-Jouguet pressure for HE-W and HE-Pb mixtures, the velocity of the free surface of duralumin target, and the depth of the dent imprinted in steel witness plates, are described.

  14. Fast gas heating and radial distribution of active species in nanosecond capillary discharge in pure nitrogen and N2:O2 mixtures

    NASA Astrophysics Data System (ADS)

    Lepikhin, N. D.; Popov, N. A.; Starikovskaia, S. M.

    2018-05-01

    Fast gas heating is studied experimentally and numerically using a pulsed nanosecond capillary discharge in pure nitrogen and N2:O2 mixtures under conditions of high specific deposited energy (up to 1 eV/molecule) and high reduced electric fields (100–300 Td). Deposited energy, electric field and gas temperature are measured as functions of time. The radial distribution of active species is analyzed experimentally. The roles of processes involving the excited nitrogen species N2(B) ≡ N2(B³Πg, W³Δu, B′³Σu⁻), N2(A³Σu⁺) and N(²D) in heat release are analyzed using numerical modeling in the framework of a 1D axial approximation.

  15. Integrated presentation of ecological risk from multiple stressors

    PubMed Central

    Goussen, Benoit; Price, Oliver R.; Rendal, Cecilie; Ashauer, Roman

    2016-01-01

    Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by constructing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic. PMID:27782171

  16. Continuum theory of fibrous tissue damage mechanics using bond kinetics: application to cartilage tissue engineering.

    PubMed

    Nims, Robert J; Durney, Krista M; Cigan, Alexander D; Dusséaux, Antoine; Hung, Clark T; Ateshian, Gerard A

    2016-02-06

    This study presents a damage mechanics framework that employs observable state variables to describe damage in isotropic or anisotropic fibrous tissues. In this mixture theory framework, damage is tracked by the mass fraction of bonds that have broken. Anisotropic damage is subsumed in the assumption that multiple bond species may coexist in a material, each having its own damage behaviour. This approach recovers the classical damage mechanics formulation for isotropic materials, but does not appeal to a tensorial damage measure for anisotropic materials. In contrast with the classical approach, the use of observable state variables for damage allows direct comparison of model predictions to experimental damage measures, such as biochemical assays or Raman spectroscopy. Investigations of damage in discrete fibre distributions demonstrate that the resilience to damage increases with the number of fibre bundles; idealizing fibrous tissues using continuous fibre distribution models precludes the modelling of damage. This damage framework was used to test and validate the hypothesis that growth of cartilage constructs can lead to damage of the synthesized collagen matrix due to excessive swelling caused by synthesized glycosaminoglycans. Therefore, alternative strategies must be implemented in tissue engineering studies to prevent collagen damage during the growth process.
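
    A minimal scalar sketch of the central idea, damage tracked as the mass fraction of broken bonds, under an assumed linear bond-breakage rule; the paper's constitutive model is far richer, so this only illustrates how a broken-bond fraction scales down load-bearing capacity.

```python
import numpy as np

# Scalar sketch: the effective stress of an elastic fibre bundle is
# scaled by the surviving bond fraction. The linear breakage rule and
# all parameter values are illustrative assumptions, not the paper's
# constitutive model.

E = 10.0          # fibre-bundle modulus (arbitrary units, assumed)
eps_fail = 0.05   # strain at which bonds start to break (assumed)
eps_max = 0.15    # strain at which all bonds are broken (assumed)

def broken_fraction(strain):
    """Mass fraction of broken bonds as a function of applied strain."""
    return np.clip((strain - eps_fail) / (eps_max - eps_fail), 0.0, 1.0)

for strain in [0.02, 0.08, 0.20]:
    d = broken_fraction(strain)
    stress = (1.0 - d) * E * strain   # surviving bonds carry the load
    print(f"strain={strain:.2f}  broken fraction={d:.2f}  stress={stress:.3f}")
```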

  17. Continuum theory of fibrous tissue damage mechanics using bond kinetics: application to cartilage tissue engineering

    PubMed Central

    Nims, Robert J.; Durney, Krista M.; Cigan, Alexander D.; Hung, Clark T.; Ateshian, Gerard A.

    2016-01-01

    This study presents a damage mechanics framework that employs observable state variables to describe damage in isotropic or anisotropic fibrous tissues. In this mixture theory framework, damage is tracked by the mass fraction of bonds that have broken. Anisotropic damage is subsumed in the assumption that multiple bond species may coexist in a material, each having its own damage behaviour. This approach recovers the classical damage mechanics formulation for isotropic materials, but does not appeal to a tensorial damage measure for anisotropic materials. In contrast with the classical approach, the use of observable state variables for damage allows direct comparison of model predictions to experimental damage measures, such as biochemical assays or Raman spectroscopy. Investigations of damage in discrete fibre distributions demonstrate that the resilience to damage increases with the number of fibre bundles; idealizing fibrous tissues using continuous fibre distribution models precludes the modelling of damage. This damage framework was used to test and validate the hypothesis that growth of cartilage constructs can lead to damage of the synthesized collagen matrix due to excessive swelling caused by synthesized glycosaminoglycans. Therefore, alternative strategies must be implemented in tissue engineering studies to prevent collagen damage during the growth process. PMID:26855751

  18. MAFsnp: A Multi-Sample Accurate and Flexible SNP Caller Using Next-Generation Sequencing Data

    PubMed Central

    Hu, Jiyuan; Li, Tengfei; Xiu, Zidi; Zhang, Hong

    2015-01-01

    Most existing statistical methods developed for calling single nucleotide polymorphisms (SNPs) using next-generation sequencing (NGS) data are based on Bayesian frameworks, and there does not exist any SNP caller that produces p-values for calling SNPs in a frequentist framework. To fill this gap, we develop a new method, MAFsnp, a Multiple-sample based Accurate and Flexible algorithm for calling SNPs with NGS data. MAFsnp is based on an estimated likelihood ratio test (eLRT) statistic. In practical situations, the parameter of interest lies very close to the boundary of the parameter space, so standard large-sample theory is not suitable for evaluating the finite-sample distribution of the eLRT statistic. Observing that the distribution of the test statistic is a mixture of zero and a continuous part, we propose to model the test statistic with a novel two-parameter mixture distribution. Once the parameters in the mixture distribution are estimated, p-values can be easily calculated for detecting SNPs, and the multiple-testing corrected p-values can be used to control the false discovery rate (FDR) at any pre-specified level. With simulated data, MAFsnp is shown to have much better control of FDR than the existing SNP callers. Through application to two real datasets, MAFsnp is also shown to outperform the existing SNP callers in terms of calling accuracy. An R package “MAFsnp” implementing the new SNP caller is freely available at http://homepage.fudan.edu.cn/zhangh/softwares/. PMID:26309201
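
    A hedged sketch of how a p-value follows from such a zero-inflated null: the weight at zero and the scaled chi-square continuous part below are illustrative stand-ins for the paper's estimated two-parameter mixture.

```python
from scipy import stats

# P-value under a null that mixes a point mass at zero with a
# continuous part. Both w and the scaled chi-square tail are assumed
# placeholders, not the distribution fitted by MAFsnp.

w = 0.6        # estimated mass at zero (assumed)
scale = 1.3    # scale of the continuous part (assumed)

def mixture_pvalue(t):
    """P(T >= t) under the zero-inflated null; t is the observed eLRT."""
    if t <= 0.0:
        return 1.0
    return (1.0 - w) * stats.chi2.sf(t / scale, df=1)

print(mixture_pvalue(6.5))
```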

  19. Human Health Risk Assessment of Pharmaceuticals in Water: Issues and Challenges Ahead

    PubMed Central

    Kumar, Arun; Chang, Biao; Xagoraraki, Irene

    2010-01-01

    This study identified existing issues related to quantitative pharmaceutical risk assessment (QPhRA, hereafter) for pharmaceuticals in water and proposed possible solutions by analyzing methodologies and findings of different published QPhRA studies. Retrospective site-specific QPhRA studies from different parts of the world (U.S.A., United Kingdom, Europe, India, etc.) were reviewed in a structured manner to understand different assumptions, outcomes obtained and issues, identified/addressed/raised by the different QPhRA studies. Till date, most of the published studies have concluded that there is no appreciable risk to human health during environmental exposures of pharmaceuticals; however, attention is still required to following identified issues: (1) Use of measured versus predicted pharmaceutical concentration, (2) Identification of pharmaceuticals-of-concern and compounds needing special considerations, (3) Use of source water versus finished drinking water-related exposure scenarios, (4) Selection of representative exposure routes, (5) Valuation of uncertainty factors, and (6) Risk assessment for mixture of chemicals. To close the existing data and methodology gaps, this study proposed possible ways to address and/or incorporation these considerations within the QPhRA framework; however, more research work is still required to address issues, such as incorporation of short-term to long-term extrapolation and mixture effects in the QPhRA framework. Specifically, this study proposed a development of a new “mixture effects-related uncertainty factor” for mixture of chemicals (i.e., mixUFcomposite), similar to an uncertainty factor of a single chemical, within the QPhRA framework. In addition to all five traditionally used uncertainty factors, this uncertainty factor is also proposed to include concentration effects due to presence of different range of concentration levels of pharmaceuticals in a mixture. However, further work is required to determine values of all six uncertainty factors and incorporate them to use during estimation of point-of-departure values within the QPhRA framework. PMID:21139869

  20. Transformation and model choice for RNA-seq co-expression analysis.

    PubMed

    Rau, Andrea; Maugis-Rabusseau, Cathy

    2018-05-01

    Although a large number of clustering algorithms have been proposed to identify groups of co-expressed genes from microarray data, the question of whether and how such methods may be applied to RNA sequencing (RNA-seq) data remains unaddressed. In this work, we investigate the use of data transformations in conjunction with Gaussian mixture models for RNA-seq co-expression analyses, as well as a penalized model selection criterion to select both an appropriate transformation and the number of clusters present in the data. This approach has the advantage of accounting for per-cluster correlation structures among samples, which can be strong in RNA-seq data. In addition, it provides a rigorous statistical framework for parameter estimation, an objective assessment of data transformations and the number of clusters, and the possibility of performing diagnostic checks on the quality and homogeneity of the identified clusters. We analyze four varied RNA-seq data sets to illustrate the use of transformations and model selection in conjunction with Gaussian mixture models. Finally, we propose a Bioconductor package, coseq (co-expression of RNA-seq data), to facilitate implementation and visualization of the recommended RNA-seq co-expression analyses.
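
    coseq itself is an R/Bioconductor package; the sketch below only illustrates the general recipe (transform the counts, fit Gaussian mixtures over a range of K, select K by a penalized criterion) in Python, with simulated counts, an assumed log transform, and BIC standing in for the paper's criterion.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated per-gene counts across 8 samples; everything here is a
# stand-in for coseq's own transformations and selection criterion.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=rng.gamma(2.0, 20.0, size=(500, 1)), size=(500, 8))

z = np.log1p(counts.astype(float))   # assumed variance-stabilizing transform

fits = {k: GaussianMixture(n_components=k, random_state=0).fit(z)
        for k in range(2, 8)}
best_k = min(fits, key=lambda k: fits[k].bic(z))  # penalized criterion
labels = fits[best_k].predict(z)                  # cluster assignments
print("selected number of clusters:", best_k)
```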

  1. Conceptual design of distillation-based hybrid separation processes.

    PubMed

    Skiborowski, Mirko; Harwardt, Andreas; Marquardt, Wolfgang

    2013-01-01

    Hybrid separation processes combine different separation principles and constitute a promising design option for the separation of complex mixtures. Particularly, the integration of distillation with other unit operations can significantly improve the separation of close-boiling or azeotropic mixtures. Although the design of single-unit operations is well understood and supported by computational methods, the optimal design of flowsheets of hybrid separation processes is still a challenging task. The large number of operational and design degrees of freedom requires a systematic and optimization-based design approach. To this end, a structured approach, the so-called process synthesis framework, is proposed. This article reviews available computational methods for the conceptual design of distillation-based hybrid processes for the separation of liquid mixtures. Open problems are identified that must be addressed to finally establish a structured process synthesis framework for such processes.

  2. Identification of Allelic Imbalance with a Statistical Model for Subtle Genomic Mosaicism

    PubMed Central

    Xia, Rui; Vattathil, Selina; Scheet, Paul

    2014-01-01

    Genetic heterogeneity in a mixed sample of tumor and normal DNA can confound characterization of the tumor genome. Numerous computational methods have been proposed to detect aberrations in DNA samples from tumor and normal tissue mixtures. Most of these require tumor purities to be at least 10–15%. Here, we present a statistical model to capture information, contained in the individual's germline haplotypes, about expected patterns in the B allele frequencies from SNP microarrays while fully modeling their magnitude, the first such model for SNP microarray data. Our model consists of a pair of hidden Markov models—one for the germline and one for the tumor genome—which, conditional on the observed array data and patterns of population haplotype variation, have a dependence structure induced by the relative imbalance of an individual's inherited haplotypes. Together, these hidden Markov models offer a powerful approach for dealing with mixtures of DNA where the main component represents the germline, thus suggesting natural applications for the characterization of primary clones when stromal contamination is extremely high, and for identifying lesions in rare subclones of a tumor when tumor purity is sufficient to characterize the primary lesions. Our joint model for germline haplotypes and acquired DNA aberration is flexible, allowing a large number of chromosomal alterations, including balanced and imbalanced losses and gains, copy-neutral loss-of-heterozygosity (LOH) and tetraploidy. We found our model (which we term J-LOH) to be superior for localizing rare aberrations in a simulated 3% mixture sample. More generally, our model provides a framework for full integration of the germline and tumor genomes to deal more effectively with missing or uncertain features, and thus extract maximal information from difficult scenarios where existing methods fail. PMID:25166618

  3. A general class of multinomial mixture models for anuran calling survey data

    USGS Publications Warehouse

    Royle, J. Andrew; Link, W.A.

    2005-01-01

    We propose a general framework for modeling anuran abundance using data collected from commonly used calling surveys. The data generated from calling surveys are indices of calling intensity (vocalization of males) that do not have a precise link to actual population size and are sensitive to factors that influence anuran behavior. We formulate a model for calling-index data in terms of the maximum potential calling index that could be observed at a site (the 'latent abundance class'), given its underlying breeding population, and we focus attention on estimating the distribution of this latent abundance class. A critical consideration in estimating the latent structure is imperfect detection, which causes the observed abundance index to be less than or equal to the latent abundance class. We specify a multinomial sampling model for the observed abundance index that is conditional on the latent abundance class. Estimation of the latent abundance class distribution is based on the marginal likelihood of the index data, having integrated over the latent class distribution. We apply the proposed modeling framework to data collected as part of the North American Amphibian Monitoring Program (NAAMP).
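
    A simplified sketch of the marginal likelihood idea, with a binomial observation model standing in for the paper's multinomial one, and assumed latent-class weights: the likelihood of an observed index sums over the latent abundance classes that could have produced it.

```python
import numpy as np
from scipy import stats

# The latent abundance class K has prior weights g, and the observed
# calling index Y <= K arises through imperfect detection. All values
# are illustrative; the paper uses a multinomial observation model.

g = np.array([0.3, 0.4, 0.2, 0.1])   # P(K = 0..3), assumed
p = 0.7                               # detection probability (assumed)

def marginal_lik(y):
    """P(Y = y) = sum_k g(k) * Binomial(y | k, p)."""
    ks = np.arange(len(g))
    return float(np.sum(g * stats.binom.pmf(y, ks, p)))

print([round(marginal_lik(y), 4) for y in range(4)])
```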

  4. Metal-organic frameworks for Xe/Kr separation

    DOEpatents

    Ryan, Patrick J.; Farha, Omar K.; Broadbelt, Linda J.; Snurr, Randall Q.; Bae, Youn-Sang

    2014-07-22

    Metal-organic framework (MOF) materials are provided and are selectively adsorbent to xenon (Xe) over another noble gas such as krypton (Kr) and/or argon (Ar) as a result of having framework voids (pores) sized to this end. MOF materials having pores that are capable of accommodating a Xe atom but have a small enough pore size to receive no more than one Xe atom are desired to preferentially adsorb Xe over Kr in a multi-component (Xe--Kr mixture) adsorption method. The MOF material has 20% or more, preferably 40% or more, of the total pore volume in a pore size range of 0.45-0.75 nm which can selectively adsorb Xe over Kr in a multi-component Xe--Kr mixture over a pressure range of 0.01 to 1.0 MPa.

  5. Metal-organic frameworks for Xe/Kr separation

    DOEpatents

    Ryan, Patrick J.; Farha, Omar K.; Broadbelt, Linda J.; Snurr, Randall Q.; Bae, Youn-Sang

    2013-08-27

    Metal-organic framework (MOF) materials are provided and are selectively adsorbent to xenon (Xe) over another noble gas such as krypton (Kr) and/or argon (Ar) as a result of having framework voids (pores) sized to this end. MOF materials having pores that are capable of accommodating a Xe atom but have a small enough pore size to receive no more than one Xe atom are desired to preferentially adsorb Xe over Kr in a multi-component (Xe--Kr mixture) adsorption method. The MOF material has 20% or more, preferably 40% or more, of the total pore volume in a pore size range of 0.45-0.75 nm which can selectively adsorb Xe over Kr in a multi-component Xe--Kr mixture over a pressure range of 0.01 to 1.0 MPa.

  6. Evaluation of the impact of H2O, O2, and SO2 on postcombustion CO2 capture in metal-organic frameworks.

    PubMed

    Yu, Jiamei; Ma, Yuguang; Balbuena, Perla B

    2012-05-29

    Molecular modeling methods are used to estimate the influence of the impurity species water, O(2), and SO(2), present in flue gas mixtures, on postcombustion CO(2) capture using a metal-organic framework, HKUST-1, as a model sorbent material. Coordinated and uncoordinated water effects on CO(2) capture are analyzed. An increase of CO(2) adsorption is observed in both cases, which can be attributed to the enhanced binding energy between CO(2) and HKUST-1 due to the introduction of a small amount of water. Density functional theory calculations indicate that the binding energy between CO(2) and HKUST-1 with coordinated water is ~1 kcal/mol higher than that without coordinated water. It is found that the improvement of CO(2)/N(2) selectivity induced by coordinated water may mainly be attributed to the increased CO(2) adsorption on the hydrated HKUST-1. On the other hand, the enhanced selectivity induced by uncoordinated water in the flue gas mixture can be explained on the basis of the competition for adsorption sites between water and CO(2) (N(2)). At low pressures, a significant CO(2)/N(2) selectivity increase is due to the increase of CO(2) adsorption and decrease of N(2) adsorption as a consequence of competition for adsorption sites between water and N(2). However, with more water molecules adsorbed at higher pressures, the competition between water and CO(2) leads to a decrease of the CO(2) adsorption capacity. Therefore, high-pressure operation should be avoided when using HKUST-1 sorbents for CO(2) capture. In addition, the effects of O(2) and SO(2) on CO(2) capture in HKUST-1 are investigated: the CO(2)/N(2) selectivity does not change much even with relatively high concentrations of O(2) in the flue gas (up to 8%). A slightly lower CO(2)/N(2) selectivity is observed for a CO(2)/N(2)/H(2)O/SO(2) mixture compared with a CO(2)/N(2)/H(2)O mixture, especially at high pressures, due to the strong SO(2) binding with HKUST-1.

  7. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
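
    A schematic of the alternation described above, with a linear forward model standing in for the nonlinear DOT operator and the per-class variance weighting omitted for brevity: each iteration classifies pixels with a Gaussian mixture and then re-solves a Tikhonov-regularized step that pulls each pixel toward its class mean.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy two-class phantom reconstructed through a random linear operator;
# everything here is an illustrative stand-in for the DOT problem.
rng = np.random.default_rng(1)
n_pix, n_meas = 100, 60
A = rng.normal(size=(n_meas, n_pix))
x_true = np.where(rng.random(n_pix) < 0.3, 2.0, 0.5)
y = A @ x_true + 0.01 * rng.normal(size=n_meas)

x = np.linalg.lstsq(A, y, rcond=None)[0]   # unregularized start
lam = 5.0                                  # regularization weight (assumed)
for _ in range(10):
    # classification step: assign each pixel a class-dependent prior mean
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x[:, None])
    mu = gmm.means_[gmm.predict(x[:, None])].ravel()
    # reconstruction step: minimize ||A x - y||^2 + lam ||x - mu||^2
    x = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ y + lam * mu)

print("recovered class means:", np.round(np.sort(gmm.means_.ravel()), 2))
```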

  8. Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.

    PubMed

    Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H

    2016-11-01

    Depth sensor based 3D human motion estimation hardware such as Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and the self-occluding actions performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage position data obtained from Kinect and a marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Processes is their cubic learning complexity when dealing with a large database, due to the inverse of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high quality postures even under severe self-occlusion situations, which is beneficial for real-time applications such as motion-based gaming and sport training.
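
    A toy sketch of the local-mixture idea on 1D data: partition the input space with k-means, fit one GP per region, and blend the predictions of nearby local models; the kernel choice and the distance-based weights are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + 0.05 * rng.normal(size=300)

# Partition the state space and fit one small GP per region.
k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
gps = [GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-3)
       .fit(X[km.labels_ == j], y[km.labels_ == j]) for j in range(k)]

def predict(x_new):
    """Blend local GP predictions with proximity-based weights (assumed)."""
    d = np.linalg.norm(km.cluster_centers_ - x_new, axis=1)
    w = np.exp(-d**2)
    w /= w.sum()
    return sum(w[j] * gps[j].predict(x_new[None, :])[0] for j in range(k))

print(round(predict(np.array([1.0])), 3))   # roughly sin(1) ~ 0.84
```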

  9. EuroForMix: An open source software based on a continuous model to evaluate STR DNA profiles from a mixture of contributors with artefacts.

    PubMed

    Bleka, Øyvind; Storvik, Geir; Gill, Peter

    2016-03-01

    We have released a software package named EuroForMix to analyze STR DNA profiles in a user-friendly graphical user interface. The software implements a model to explain the allelic peak height on a continuous scale in order to carry out weight-of-evidence calculations for profiles which could be from a mixture of contributors. Through a properly parameterized model we are able to do inference on mixture proportions, the peak height properties, stutter proportion and degradation. In addition, EuroForMix includes models for allele drop-out, allele drop-in and sub-population structure. EuroForMix supports two inference approaches for likelihood ratio calculations. The first approach uses maximum likelihood estimation of the unknown parameters. The second approach is Bayesian, which requires prior distributions to be specified for the parameters involved. The user may specify any number of known and unknown contributors in the model; however, we find that there is a practical computing time limit which restricts the model to a maximum of four unknown contributors. EuroForMix is the first freely open source, continuous model (accommodating peak height, stutter, drop-in, drop-out, population substructure and degradation) to be reported in the literature. It therefore serves the important purpose of acting as an unrestricted platform for comparing the different solutions that are available. The implementation of the continuous model used in the software showed close to identical results to the R package DNAmixtures, which requires a HUGIN Expert license to be used. An additional feature of EuroForMix is the ability for the user to adapt the Bayesian inference framework by incorporating their own prior information. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Non-parametric directionality analysis - Extension for removal of a single common predictor and application to time series.

    PubMed

    Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob

    2016-08-01

    The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions. It is applied to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel non-parametric approach to the estimation of directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. A corrected formulation for marginal inference derived from two-part mixed models for longitudinal semi-continuous data

    PubMed Central

    Su, Li; Farewell, Vernon T

    2013-01-01

    For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. PMID:24201470
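
    A minimal sketch of the quantity at stake: for a logistic part and a log-normal positive part, and ignoring the random effects that are the source of the subtlety the paper addresses, the marginal mean factorizes as P(Y > 0) times E[Y | Y > 0]. Coefficients are assumed for illustration.

```python
import numpy as np
from scipy.special import expit

# E[Y | x] = expit(x'a) * exp(x'b + s^2/2) for a logistic zero part and
# a log-normal positive part. All coefficients are illustrative; with
# random effects the marginalization is more delicate, which is the
# paper's point.

a = np.array([-0.5, 1.0])   # logistic coefficients (assumed)
b = np.array([0.2, 0.8])    # log-scale linear coefficients (assumed)
sigma = 0.6                 # residual SD on the log scale (assumed)

def marginal_mean(x):
    p_pos = expit(x @ a)                       # P(Y > 0 | x)
    mean_pos = np.exp(x @ b + 0.5 * sigma**2)  # E[Y | Y > 0, x]
    return p_pos * mean_pos

print(round(marginal_mean(np.array([1.0, 0.5])), 3))
```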

  12. Climate change adaptation frameworks: an evaluation of plans for coastal Suffolk, UK

    NASA Astrophysics Data System (ADS)

    Armstrong, J.; Wilby, R.; Nicholls, R. J.

    2015-11-01

    This paper asserts that three principal frameworks for climate change adaptation can be recognised in the literature: scenario-led (SL), vulnerability-led (VL) and decision-centric (DC) frameworks. A criterion is developed to differentiate these frameworks in recent adaptation projects. The criterion features six key hallmarks as follows: (1) use of climate model information; (2) analysis of metrics/units; (3) socio-economic knowledge; (4) stakeholder engagement; (5) adaptation of implementation mechanisms; (6) tier of adaptation implementation. The paper then tests the validity of this approach using adaptation projects on the Suffolk coast, UK. Fourteen adaptation plans were identified in an online survey. They were analysed in relation to the hallmarks outlined above and assigned to an adaptation framework. The results show that while some adaptation plans are primarily SL, VL or DC, the majority are hybrid, showing a mixture of DC/VL and DC/SL characteristics. Interestingly, the SL/VL combination is not observed, perhaps because the DC framework is intermediate and attempts to overcome weaknesses of both SL and VL approaches. The majority (57 %) of adaptation projects generated a risk assessment or advice notes. Further development of this type of framework analysis would allow better guidance on approaches for organisations when implementing climate change adaptation initiatives, and other similar proactive long-term planning.

  13. Climate change adaptation frameworks: an evaluation of plans for coastal Suffolk, UK

    NASA Astrophysics Data System (ADS)

    Armstrong, J.; Wilby, R.; Nicholls, R. J.

    2015-06-01

    This paper asserts that three principal frameworks for climate change adaptation can be recognised in the literature: Scenario-Led (SL), Vulnerability-Led (VL) and Decision-Centric (DC) frameworks. A criterion is developed to differentiate these frameworks in recent adaptation projects. The criterion features six key hallmarks as follows: (1) use of climate model information; (2) analysis metrics/units; (3) socio-economic knowledge; (4) stakeholder engagement; (5) adaptation implementation mechanisms; (6) tier of adaptation implementation. The paper then tests the validity of this approach using adaptation projects on the Suffolk coast, UK. Fourteen adaptation plans were identified in an online survey. They were analysed in relation to the hallmarks outlined above and assigned to an adaptation framework. The results show that while some adaptation plans are primarily SL, VL or DC, the majority are hybrid, showing a mixture of DC/VL and DC/SL characteristics. Interestingly, the SL/VL combination is not observed, perhaps because the DC framework is intermediate and attempts to overcome weaknesses of both SL and VL approaches. The majority (57 %) of adaptation projects generated a risk assessment or advice notes. Further development of this type of framework analysis would allow better guidance on approaches for organisations when implementing climate change adaptation initiatives, and other similar proactive long-term planning.

  14. Finite Element Implementation of Mechanochemical Phenomena in Neutral Deformable Porous Media Under Finite Deformation

    PubMed Central

    Ateshian, Gerard A.; Albro, Michael B.; Maas, Steve; Weiss, Jeffrey A.

    2011-01-01

    Biological soft tissues and cells may be subjected to mechanical as well as chemical (osmotic) loading under their natural physiological environment or various experimental conditions. The interaction of mechanical and chemical effects may be very significant under some of these conditions, yet the highly nonlinear nature of the set of governing equations describing these mechanisms poses a challenge for the modeling of such phenomena. This study formulated and implemented a finite element algorithm for analyzing mechanochemical events in neutral deformable porous media under finite deformation. The algorithm employed the framework of mixture theory to model the porous permeable solid matrix and interstitial fluid, where the fluid consists of a mixture of solvent and solute. A special emphasis was placed on solute-solid matrix interactions, such as solute exclusion from a fraction of the matrix pore space (solubility) and frictional momentum exchange that produces solute hindrance and pumping under certain dynamic loading conditions. The finite element formulation implemented full coupling of mechanical and chemical effects, providing a framework where material properties and response functions may depend on solid matrix strain as well as solute concentration. The implementation was validated using selected canonical problems for which analytical or alternative numerical solutions exist. This finite element code includes a number of unique features that enhance the modeling of mechanochemical phenomena in biological tissues. The code is available in the public domain, open source finite element program FEBio (http://mrl.sci.utah.edu/software). PMID:21950898

  15. Cumulative reproductive effects of in utero administration of mixtures of antiandrogens in male SD rats: synergy or additivity?

    EPA Science Inventory

    In 1996 the USEPA was charged under the FQPA to consider the cumulative effects of chemicals in their risk assessments. Our studies were conducted to provide a framework for assessing the cumulative effects of antiandrogens. Toxicants were administered individually or as mixtures...

  16. CUMULATIVE EFFECTS OF IN UTERO ADMINISTRATION OF A MIXTURE OF SEVEN ANTIANDROGENS ON MALE RAT REPRODUCTIVE DEVELOPMENT

    EPA Science Inventory

    Although risk assessments are typically conducted on a chemical-by-chemical basis, the 1996 FQPA requires the USEPA to consider cumulative risk from chemicals that act via a common mechanism of action. To this end, we are conducting studies with mixtures to provide a framework fo...

  17. Cumulative Effects of in Utero Administration of Mixtures of "Antiandrogens" on Male Rat Reproductive Development

    EPA Science Inventory

    Although risk assessments are typically conducted on a chemical-by-chemical basis, the 1996 FQPA required the EPA to consider cumulative risk of chemicals that act via a common mechanism of toxicity. To this end, we are conducting studies with mixtures to provide a framework for ...

  18. Encoding the local connectivity patterns of fMRI for cognitive task and state classification.

    PubMed

    Onal Ertugrul, Itir; Ozay, Mete; Yarman Vural, Fatos T

    2018-06-15

    In this work, we propose a novel framework to encode the local connectivity patterns of the brain, using Fisher vectors (FV), vector of locally aggregated descriptors (VLAD) and bag-of-words (BoW) methods. We first obtain local descriptors, called mesh arc descriptors (MADs), from fMRI data by forming local meshes around anatomical regions and estimating their relationship within a neighborhood. Then, we extract a dictionary of relationships, called a brain connectivity dictionary, by fitting a generative Gaussian mixture model (GMM) to a set of MADs and selecting codewords at the mean of each component of the mixture. Codewords represent connectivity patterns among anatomical regions. We also encode MADs by VLAD and BoW methods using k-means clustering. We classify cognitive tasks using the Human Connectome Project (HCP) task fMRI dataset and cognitive states using the Emotional Memory Retrieval (EMR) dataset. We train support vector machines (SVMs) using the encoded MADs. Results demonstrate that FV encoding of MADs can be successfully employed for classification of cognitive tasks, and outperforms VLAD and BoW representations. Moreover, we identify the significant Gaussians in the mixture models by computing the energy of their corresponding FV parts, and analyze their effect on classification accuracy. Finally, we suggest a new method to visualize the codewords of the learned brain connectivity dictionary.
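
    A sketch of the two simpler encodings, BoW and VLAD, over stand-in descriptors: k-means codewords yield a bag-of-words histogram, while VLAD aggregates residuals of descriptors from their nearest codeword. The paper's FV encoding additionally uses GMM posteriors and gradients, which are omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans

# Random vectors stand in for the MADs of the paper.
rng = np.random.default_rng(3)
descriptors = rng.normal(size=(200, 16))

k = 8
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)
assign = km.predict(descriptors)

# BoW: normalized histogram of codeword assignments.
bow = np.bincount(assign, minlength=k).astype(float)
bow /= bow.sum()

# VLAD: sum of residuals from each codeword, then L2-normalized.
vlad = np.zeros((k, descriptors.shape[1]))
for j in range(k):
    members = descriptors[assign == j]
    if len(members):
        vlad[j] = (members - km.cluster_centers_[j]).sum(axis=0)
vlad = vlad.ravel()
vlad /= np.linalg.norm(vlad) + 1e-12

print(bow.shape, vlad.shape)   # (8,) (128,)
```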

  19. Coordinated Hard Sphere Mixture (CHaSM): A simplified model for oxide and silicate melts at mantle pressures and temperatures

    NASA Astrophysics Data System (ADS)

    Wolf, Aaron S.; Asimow, Paul D.; Stevenson, David J.

    2015-08-01

    We develop a new model to understand and predict the behavior of oxide and silicate melts at extreme temperatures and pressures, including deep mantle conditions like those in the early Earth magma ocean. The Coordinated Hard Sphere Mixture (CHaSM) is based on an extension of the hard sphere mixture model, accounting for the range of coordination states available to each cation in the liquid. By utilizing approximate analytic expressions for the hard sphere model, this method is capable of predicting complex liquid structure and thermodynamics while remaining computationally efficient, requiring only minutes of calculation time on standard desktop computers. This modeling framework is applied to the MgO system, where model parameters are trained on a collection of crystal polymorphs, producing realistic predictions of coordination evolution and the equation of state of MgO melt over a wide range of pressures and temperatures. We find that the typical coordination number of the Mg cation evolves continuously upward from 5.25 at 0 GPa to 8.5 at 250 GPa. The results produced by CHaSM are evaluated by comparison with predictions from published first-principles molecular dynamics calculations, indicating that CHaSM is accurately capturing the dominant physics controlling the behavior of oxide melts at high pressure. Finally, we present a simple quantitative model to explain the universality of the increasing Grüneisen parameter trend for liquids, which directly reflects their progressive evolution toward more compact solid-like structures upon compression. This general behavior is opposite that of solid materials, and produces steep adiabatic thermal profiles for silicate melts, thus playing a crucial role in magma ocean evolution.
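
    The appeal of a hard-sphere backbone is that its thermodynamics is analytic. A standard example, illustrating the analytic ingredient rather than the CHaSM model itself, is the Carnahan-Starling compressibility factor of the one-component hard-sphere fluid, Z = (1 + η + η² − η³)/(1 − η)³, with packing fraction η.

```python
# Carnahan-Starling equation of state for a hard-sphere fluid; a cheap,
# closed-form building block of the kind CHaSM extends with cation
# coordination states.

def carnahan_starling_Z(eta):
    """Compressibility factor Z = P/(rho*kT) at packing fraction eta."""
    return (1 + eta + eta**2 - eta**3) / (1 - eta)**3

for eta in (0.1, 0.3, 0.45):
    print(f"eta={eta:.2f}  Z={carnahan_starling_Z(eta):.3f}")
```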

  20. Colorometric detection of water using MOF-polymer films and composites

    DOEpatents

    Allendorf, Mark D.; Talin, Albert Alec

    2016-05-24

    A method including exposing a mixture of a porous metal organic framework (MOF) and a polymer to a predetermined molecular species, wherein the MOF has an open metal site for the predetermined molecular species and the polymer has a porosity for the predetermined molecular species; and detecting a color change of the MOF in the presence of the predetermined molecular species. A method including combining a porous metal organic framework (MOF) and a polymer, wherein the MOF has an open metal site for a predetermined molecular species and the polymer has a porosity for the predetermined molecular species. An article of manufacture including a mixture of a porous metal organic framework (MOF) and a polymer, wherein the MOF has an open metal site for a predetermined molecular species and the polymer has a porosity for the predetermined molecular species.

  1. A smooth mixture of Tobits model for healthcare expenditure.

    PubMed

    Keane, Michael; Stavrunova, Olena

    2011-09-01

    This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of the model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females. Copyright © 2011 John Wiley & Sons, Ltd.
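
    A sketch of the component being mixed: the log-likelihood of a Type-I Tobit censored at zero, evaluated on simulated data. The paper's model additionally weights several such components by covariate-dependent mixture probabilities, which is not shown here.

```python
import numpy as np
from scipy import stats

def tobit_loglik(beta, sigma, X, y):
    """Type-I Tobit log-likelihood with left-censoring at zero."""
    mu = X @ beta
    zero = y <= 0.0
    ll = np.sum(stats.norm.logcdf(-mu[zero] / sigma))            # P(Y* <= 0)
    ll += np.sum(stats.norm.logpdf(y[~zero], mu[~zero], sigma))  # density part
    return ll

# Simulated data with assumed coefficients, for illustration only.
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y_star = X @ np.array([0.5, 1.0]) + rng.normal(size=200)
y = np.maximum(y_star, 0.0)                                      # censoring

print(round(tobit_loglik(np.array([0.5, 1.0]), 1.0, X, y), 2))
```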

  2. Effect of open metal sites on adsorption of polar and nonpolar molecules in metal-organic framework Cu-BTC.

    PubMed

    Karra, Jagadeswara R; Walton, Krista S

    2008-08-19

    Atomistic grand canonical Monte Carlo simulations were performed in this work to investigate the role of open copper sites of Cu-BTC in affecting the separation of carbon monoxide from binary mixtures containing methane, nitrogen, or hydrogen. Mixtures containing 5%, 50%, or 95% CO were examined. The simulations show that electrostatic interactions between the CO dipole and the partial charges on the metal-organic framework (MOF) atoms dominate the adsorption mechanism. The binary simulations show that Cu-BTC is quite selective for CO over hydrogen and nitrogen for all three mixture compositions at 298 K. The removal of CO from a 5% mixture with methane is slightly enhanced by the electrostatic interactions of CO with the copper sites. However, the pore space of Cu-BTC is large enough to accommodate both molecules at their pure-component loadings, and in general, Cu-BTC exhibits no significant selectivity for CO over methane for the equimolar and 95% mixtures. On the basis of the pure-component and low-concentration behavior of CO, the results indicate that MOFs with open metal sites have the potential for enhancing adsorption separations of molecules of differing polarities, but the pore size relative to the sorbate size will also play a significant role.

  3. A Multiscale Virtual Fabrication and Lattice Modeling Approach for the Fatigue Performance Prediction of Asphalt Concrete

    NASA Astrophysics Data System (ADS)

    Dehghan Banadaki, Arash

    Predicting the ultimate performance of asphalt concrete under realistic loading conditions is the key to developing better-performing materials, designing long-lasting pavements, and performing reliable lifecycle analyses for pavements. The fatigue performance of asphalt concrete depends on the mechanical properties of the constituent materials, namely asphalt binder and aggregate. The link between performance and mechanical properties is extremely complex, and experimental techniques are often used to try to characterize the performance of hot mix asphalt. However, given the seemingly uncountable number of mixture designs and loading conditions, it is simply not economical to try to understand and characterize the material behavior solely by experimentation. It is well known that analytical and computational modeling methods can be combined with experimental techniques to reduce the costs associated with understanding and characterizing the mechanical behavior of the constituent materials. This study aims to develop a multiscale micromechanical lattice-based model to predict cracking in asphalt concrete using component material properties. The proposed algorithm, while capturing different phenomena at different scales, also minimizes the need for laboratory experiments. The developed methodology builds on a previously developed lattice model and the viscoelastic continuum damage model to link the component material properties to the mixture fatigue performance. The resulting lattice model is applied to predict the dynamic modulus mastercurves for different scales. A framework for capturing the so-called structuralization effects is introduced that significantly improves the accuracy of the modulus prediction. Furthermore, air voids are added to the model to help capture this important micromechanical feature, which affects the fatigue performance of asphalt concrete as well as the modulus value. The effects of rate dependency are captured by implementing a viscoelastic fracture criterion. Finally, an efficient cyclic loading framework is developed to evaluate the damage accumulation in the material caused by long-sustained cyclic loads.

  4. EFFECTS OF MIXTURES OF PHTHALATES, PESTICIDES AND TCDD ON SEXUAL DIFFERENTIATON IN RATS: A RISK FRAMEWORK BASED UPON DISRUPTION OF COMMON DEVELOPING SYSTEMS

    EPA Science Inventory

    Since humans are exposed to more than one chemical at a time, concern has arisen about the effects of mixtures of chemicals on human reproduction and development. We are conducting studies to determine the 1) classes of chemicals that disrupt sexual differentiation via different ...

  5. CUMULATIVE EFFECTS OF ADMINISTRATION OF MIXTURES OF “ANTIANDROGENS” IN RATS: A NEW FRAMEWORK BASED UPON COMMON SYSTEMS RATHER THAN COMMON MECHANISMS

    EPA Science Inventory

    Since humans and wildlife are exposed to more than one chemical at a time, concern has arisen about the effects of complex mixtures on reproduction and development. To date, different regulatory groups have not yet developed consistent approaches to conducting assessments of the ...

  6. DEEP ATTRACTOR NETWORK FOR SINGLE-MICROPHONE SPEAKER SEPARATION.

    PubMed

    Chen, Zhuo; Luo, Yi; Mesgarani, Nima

    2017-03-01

    Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and the unknown number of sources in the mixture. We propose a novel deep learning framework for single-channel speech separation by creating attractor points in the high-dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model differs from prior works in that it implements end-to-end training and does not depend on the number of sources in the mixture. Two strategies are explored at test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real time. We evaluated our system on the Wall Street Journal dataset and show a 5.49% improvement over the previous state-of-the-art methods.
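
    A numpy sketch of the attractor step under assumed shapes: attractors are assignment-weighted centroids of the embeddings, and masks follow from each bin's similarity to the attractors. At test time, K-means centroids or fixed attractors replace the ideal assignments used here.

```python
import numpy as np

rng = np.random.default_rng(5)
n_bins, emb_dim, n_src = 1000, 20, 2
V = rng.normal(size=(n_bins, emb_dim))   # embeddings (from the network)
Y = rng.random(size=(n_bins, n_src))     # ideal assignments (training time)
Y /= Y.sum(axis=1, keepdims=True)

# Attractors: assignment-weighted centroids of the embeddings.
A = (Y.T @ V) / Y.sum(axis=0)[:, None]   # shape (n_src, emb_dim)

# Masks: softmax over each bin's similarity to the attractors.
logits = V @ A.T
masks = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(masks.shape, masks.sum(axis=1)[:3])   # rows sum to 1
```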

  7. S3DB core: a framework for RDF generation and management in bioinformatics infrastructures

    PubMed Central

    2010-01-01

    Background Biomedical research is set to greatly benefit from the use of semantic web technologies in the design of computational infrastructure. However, beyond well-defined research initiatives, substantial issues of data heterogeneity, source distribution, and privacy currently stand in the way of the personalization of Medicine. Results A computational framework for bioinformatic infrastructure was designed to deal with the heterogeneous data sources and the sensitive mixture of public and private data that characterizes the biomedical domain. This framework consists of a logical model built with semantic web tools, coupled with a Markov process that propagates user operator states. An accompanying open source prototype was developed to meet a series of applications that range from collaborative multi-institution data acquisition efforts to data analysis applications that need to quickly traverse complex data structures. This report describes the two abstractions underlying the S3DB-based infrastructure, logical and numerical, and discusses its generality beyond the immediate confines of existing implementations. Conclusions The emergence of the "web as a computer" requires a formal model for the different functionalities involved in reading and writing to it. The S3DB core model proposed was found to address the design criteria of biomedical computational infrastructure, such as those supporting large scale multi-investigator research, clinical trials, and molecular epidemiology. PMID:20646315

  8. Semi-NLO production of Higgs bosons in the framework of kt-factorization using KMR unintegrated parton distributions

    NASA Astrophysics Data System (ADS)

    Modarres, M.; Masouminia, M. R.; Aminzadeh Nik, R.; Hosseinkhani, H.; Olanj, N.

    2018-01-01

    The cross-section for the production of the Standard Model Higgs boson has been calculated using a mixture of LO and NLO partonic diagrams and the unintegrated parton distribution functions (UPDF) of Kimber-Martin-Ryskin (KMR) within the kt-factorization framework. The UPDF are prepared using the phenomenological libraries of Martin-Motylinski-Harland Lang-Thorne (MMHT 2014). The results are compared against the existing experimental data from the CMS and ATLAS collaborations and against available pQCD calculations. It is shown that, while the present calculation is in agreement with the experimental data, it is also comparable with the pQCD results. It is also concluded that the K-factor approximation is comparable with the semi-NLO kt-factorization predictions.

  9. Observation of quantum criticality with ultracold atoms in optical lattices

    NASA Astrophysics Data System (ADS)

    Zhang, Xibo

    As biological problems are becoming more complex and data are growing at a rate much faster than that of computer hardware, new and faster algorithms are required. This dissertation investigates computational problems arising in two fields, comparative genomics and epigenomics, and employs a variety of computational techniques to address them. One fundamental question in the study of chromosome evolution is whether rearrangement breakpoints happen at random positions or along certain hotspots. We investigate the breakpoint reuse phenomenon, and present analyses that support the more recently proposed fragile breakage model over the conventional random breakage model of chromosome evolution. The identification of syntenic regions between chromosomes forms the basis for studies of genome architectures, comparative genomics, and evolutionary genomics. Previous synteny block reconstruction algorithms could not be scaled to the large number of mammalian genomes being sequenced; neither did they address the issue of generating non-overlapping synteny blocks suitable for analyzing rearrangements and the evolutionary history of large-scale duplications prevalent in plant genomes. We present a new unified synteny block generation algorithm based on the A-Bruijn graph framework that overcomes these shortcomings. In epigenome sequencing, a sample may contain a mixture of epigenomes, and there is a need to resolve the distinct methylation patterns from the mixture. Many sequencing applications, such as haplotype inference for diploid or polyploid genomes and metagenomic sequencing, share a similar objective: to infer a set of distinct assemblies from reads that are sequenced from a heterogeneous sample and subsequently aligned to a reference genome. We model the problem from both combinatorial and statistical angles. First, we describe a theoretical framework. A linear-time algorithm is then given to resolve a minimum number of assemblies that are consistent with all reads, substantially improving on previous algorithms. An efficient algorithm is also described to determine a set of assemblies that is consistent with a maximum subset of the reads, a previously untreated problem. We then prove that allowing nested reads or permitting mismatches between reads and their assemblies renders these problems NP-hard. Second, we describe a mixture model-based approach, and apply the model to the detection of allele-specific methylations.
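
    For the non-nested, mismatch-free case, the minimum-assembly problem has the flavour of classic interval partitioning: the minimum number of mutually consistent assemblies equals the maximum read overlap, and a greedy sweep attains it. A sketch with illustrative coordinates, not the dissertation's actual algorithm:

```python
import heapq

# Reads are half-open (start, end) intervals aligned to a reference;
# reads touching only at an endpoint are treated as compatible.

def assign_assemblies(reads):
    reads = sorted(reads)                  # sweep by start coordinate
    heap = []                              # (end, assembly_id) of open rows
    assignment, n_rows = [], 0
    for start, end in reads:
        if heap and heap[0][0] <= start:   # reuse a row that has ended
            _, row = heapq.heappop(heap)
        else:                              # otherwise open a new assembly
            row, n_rows = n_rows, n_rows + 1
        assignment.append(row)
        heapq.heappush(heap, (end, row))
    return n_rows, assignment

reads = [(0, 5), (2, 8), (6, 10), (7, 12), (11, 15)]
print(assign_assemblies(reads))            # (3, [0, 1, 0, 2, 1])
```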

  10. A corrected formulation for marginal inference derived from two-part mixed models for longitudinal semi-continuous data.

    PubMed

    Tom, Brian Dm; Su, Li; Farewell, Vernon T

    2016-10-01

    For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. © The Author(s) 2013.

  11. Effect of clay content and mineralogy on frictional sliding behavior of simulated gouges: binary and ternary mixtures of quartz, illite, and montmorillonite

    USGS Publications Warehouse

    Tembe, Sheryl; Lockner, David A.; Wong, Teng-Fong

    2010-01-01

    We investigated the frictional sliding behavior of simulated quartz-clay gouges under stress conditions relevant to seismogenic depths. Conventional triaxial compression tests were conducted at 40 MPa effective normal stress on saturated saw cut samples containing binary and ternary mixtures of quartz, montmorillonite, and illite. In all cases, frictional strengths of mixtures fall between the end-members of pure quartz (strongest) and clay (weakest). The overall trend was a decrease in strength with increasing clay content. In the illite/quartz mixture the trend was nearly linear, while in the montmorillonite mixtures a sigmoidal trend with three strength regimes was noted. Microstructural observations were performed on the deformed samples to characterize the geometric attributes of shear localization within the gouge layers. Two micromechanical models were used to analyze the critical clay fractions for the two-regime transitions on the basis of clay porosity and packing of the quartz grains. The transition from regime 1 (high strength) to 2 (intermediate strength) is associated with the shift from a stress-supporting framework of quartz grains to a clay matrix embedded with disperse quartz grains, manifested by the development of P-foliation and reduction in Riedel shear angle. The transition from regime 2 (intermediate strength) to 3 (low strength) is attributed to the development of shear localization in the clay matrix, occurring only when the neighboring layers of quartz grains are separated by a critical clay thickness. Our mixture data relating strength degradation to clay content agree well with strengths of natural shear zone materials obtained from scientific deep drilling projects.

  12. SIMSWASTE-AD - A modelling framework for the environmental assessment of agricultural waste management strategies: Anaerobic digestion.

    PubMed

    Pardo, Guillermo; Moral, Raúl; Del Prado, Agustín

    2017-01-01

    On-farm anaerobic digestion (AD) has been promoted for its improved environmental performance, a claim based on a number of life cycle assessments (LCA). However, the influence of site-specific conditions and practices on AD performance is rarely captured in LCA studies, and the effects on C and N cycles are often overlooked. In this paper, a new model for AD (SIMSWASTE-AD) is described in full and tested against a selection of available measured data. Good agreement between modelled and measured values was obtained, reflecting the model's capability to predict biogas production (r² = 0.84) and N mineralization (r² = 0.85) under a range of substrate mixtures and operational conditions. SIMSWASTE-AD was also used to simulate C and N flows and GHG emissions for a set of scenarios exploring different AD technology levels, feedstock mixtures and climate conditions. Post-digestion emissions and their relationship with AD performance are stressed as crucial factors both for reducing net GHG emissions (-75%) and for enhancing the fertilizer potential of digestate (15%). Gas-tight digestate storage with residual biogas collection is highly recommended (especially in temperate to warm climates), as are operational conditions that improve the efficiency of VS degradation (e.g., the thermophilic range and longer hydraulic retention times). Beyond the manure management stage, SIMSWASTE-AD also aims to help account for potential effects of AD on other stages by providing the C and nutrient flows. While primarily designed to be applied within the SIMSDAIRY modelling framework, it can also interact with other models implemented in integrated approaches. Such system-scope assessments are essential for stakeholders and policy makers in order to develop effective strategies for reducing GHG emissions and other environmental impacts in the agriculture sector.

  13. Multiple model cardinalized probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  14. Hyper-Spectral Image Analysis With Partially Latent Regression and Spatial Markov Dependencies

    NASA Astrophysics Data System (ADS)

    Deleforge, Antoine; Forbes, Florence; Ba, Sileye; Horaud, Radu

    2015-09-01

    Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves solving inverse problems, which can be addressed within machine learning, with the advantage that, once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially-constrained and partially-latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model comprises two key features. Firstly, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. Secondly, spatial constraints are introduced in the model through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database of remotely sensed observations collected from Mars by the Mars Express orbiter demonstrate the effectiveness of the proposed model.
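
    The central mechanism of GLLiM-style prediction, a gate-weighted blend of local affine maps obtained from a joint Gaussian mixture, can be sketched in a few lines. This is a generic one-dimensional illustration on synthetic data, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(x) + 0.1 * rng.normal(size=x.shape)

    # Fit a joint GMM on (x, y); each component acts as a locally linear patch.
    gmm = GaussianMixture(n_components=5, covariance_type='full', random_state=0)
    gmm.fit(np.hstack([x, y]))

    def conditional_mean(gmm, xq):
        """E[y | x] under a 2-D joint GMM (x in column 0, y in column 1)."""
        mx, my = gmm.means_[:, 0], gmm.means_[:, 1]
        sxx, sxy = gmm.covariances_[:, 0, 0], gmm.covariances_[:, 0, 1]
        # Gating weights: responsibility of each component given x alone.
        w = gmm.weights_ * norm.pdf(xq[:, None], loc=mx, scale=np.sqrt(sxx))
        w /= w.sum(axis=1, keepdims=True)
        # Local affine predictions, blended by the gates.
        local = my + (sxy / sxx) * (xq[:, None] - mx)
        return (w * local).sum(axis=1)

    xq = np.linspace(-2.5, 2.5, 6)
    print(np.round(conditional_mean(gmm, xq), 2))   # tracks sin(xq)
    ```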

  15. Methods for synthesizing microporous crystals and microporous crystal membranes

    DOEpatents

    Dutta, Prabir; Severance, Michael; Sun, Chenhu

    2017-02-07

    A method of making a microporous crystal material, comprising: a. forming a mixture comprising NaOH, water, and one or more of an aluminum source, a silicon source, and a phosphate source, whereupon the mixture forms a gel; b. heating the gel for a first time period, whereupon a first volume of water is removed from the gel and microporous crystal nuclei form, the nuclei having a framework; c. (if a membrane is to be formed) applying the gel to a solid support seeded with microporous crystals having a framework that is the same as the framework of the nuclei; and d. heating the gel for a second time period, during which a second volume of water is added to the gel; wherein the rate of addition of the second volume of water is between about 0.5 and about 2.0 fold the rate of removal of the first volume of water.

  16. Enceladus and Europa: How Does Hydrothermal Activity Begin at the Surface?

    NASA Technical Reports Server (NTRS)

    Matson, D. L.; Castillo-Rogez, J. C.; Johnson, T. V.; Lunine, J. I.; Davies, A. G.

    2011-01-01

    The question of how the surface hydrothermal activity (e.g., eruptive plumes and heat flow) is initiated can be addressed within the framework of our "Perrier Ocean" model. This model delivers the necessary heat and chemicals to support the heat flow and plumes observed by Cassini in Enceladus' South Polar Region. The model employs closed-loop circulation of water from a sub-surface ocean. The ocean is the main reservoir of heat and chemicals, including dissolved gases. As ocean water moves up toward the surface, pressure is reduced and gases exsolve, forming bubbles. This bubbly mixture is less dense than the icy crust, and the buoyant ocean-water mixture rises toward the surface. Near the surface, heat and chemicals, including some volatiles, are delivered to the chambers in which plumes form and also to shallow reservoirs that keep the surface ice "warm". (Plume operations, per se, are as described by Schmidt et al. and Postberg et al. and are adopted by us.) After transferring heat, the water cools, bubbles contract and dissolve, and the mixture is now relatively dense. It descends through cracks in the crust and returns to the ocean. Once the closed-loop circulation has started it is self-sustaining. Loss of water via the erupting plumes is negligible compared to the amount needed to maintain the heat flow. We note that the activity described herein for the "Perrier Ocean" model could, a priori, apply to all small icy bodies that sheltered an interior ocean at some point in their history.

  17. Modeling structured population dynamics using data from unmarked individuals

    USGS Publications Warehouse

    Grant, Evan H. Campbell; Zipkin, Elise; Thorson, James T.; See, Kevin; Lynch, Heather J.; Kanno, Yoichiro; Chandler, Richard; Letcher, Benjamin H.; Royle, J. Andrew

    2014-01-01

    The study of population dynamics requires unbiased, precise estimates of abundance and vital rates that account for the demographic structure inherent in all wildlife and plant populations. Traditionally, these estimates have only been available through approaches that rely on intensive mark–recapture data. We extended recently developed N-mixture models to demonstrate how demographic parameters and abundance can be estimated for structured populations using only stage-structured count data. Our modeling framework can be used to make reliable inferences on abundance as well as recruitment, immigration, stage-specific survival, and detection rates during sampling. We present a range of simulations to illustrate the data requirements, including the number of years and locations necessary for accurate and precise parameter estimates. We apply our modeling framework to a population of northern dusky salamanders (Desmognathus fuscus) in the mid-Atlantic region (USA) and find that the population is unexpectedly declining. Our approach represents a valuable advance in the estimation of population dynamics using multistate data from unmarked individuals and should additionally be useful in the development of integrated models that combine data from intensive (e.g., mark–recapture) and extensive (e.g., counts) data sources.
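
    The estimation machinery underlying N-mixture models can be sketched compactly: replicated counts y[i,j] ~ Binomial(N_i, p) with latent abundance N_i ~ Poisson(λ), and a marginal likelihood obtained by summing the latent N out. The toy sketch below fits only λ and p on simulated data; the paper's stage-structured model adds recruitment, survival, and immigration on top of this core. Names and values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom, poisson

    rng = np.random.default_rng(0)
    n_sites, n_visits, lam_true, p_true = 150, 3, 4.0, 0.4
    N = rng.poisson(lam_true, n_sites)                         # latent abundances
    y = rng.binomial(N[:, None], p_true, (n_sites, n_visits))  # replicated counts

    def neg_log_lik(theta, y, n_max=60):
        lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
        Ns = np.arange(n_max + 1)
        log_pN = poisson.logpmf(Ns, lam)
        ll = 0.0
        # Marginal likelihood: sum the latent abundance out at each site.
        for yi in y:
            log_py = binom.logpmf(yi[:, None], Ns[None, :], p).sum(axis=0)
            ll += np.logaddexp.reduce(log_pN + log_py)
        return -ll

    fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y,), method='Nelder-Mead')
    lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
    print(f"lambda = {lam_hat:.2f} (true 4.0), p = {p_hat:.2f} (true 0.4)")
    ```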

  18. PAINeT: An object-oriented software package for simulations of flow-field, transport coefficients and flux terms in non-equilibrium gas mixture flows

    NASA Astrophysics Data System (ADS)

    Istomin, V. A.

    2018-05-01

    The software package Planet Atmosphere Investigator of Non-equilibrium Thermodynamics (PAINeT) has been developed for studying the non-equilibrium effects associated with electronic excitation, chemical reactions and ionization. These studies are necessary for modeling processes in shock tubes, in high-enthalpy flows, in nozzles or jet engines, in combustion and explosion processes, and in modern plasma-chemical and laser technologies. The advantages and possibilities of the package implementation are stated. Within the framework of the package, calculations based on kinetic theory approximations (one-temperature and state-to-state approaches) are carried out, and the limits of applicability of a simplified description of shock-heated air flows, and of any other mixtures chosen by the user, are given. Using kinetic theory algorithms, a numerical calculation of the heat fluxes and relaxation terms can be performed, which is necessary for further comparison of engineering simulation with experimental data. The influence of state-to-state distributions over electronic energy levels on the coefficients of thermal conductivity and diffusion, and on the heat fluxes and diffusion velocities of the components of various gas mixtures behind shock waves, is studied. Using the software package, the accuracy of different approximations of the kinetic theory of gases is estimated. As an example, a state-resolved atomic ionized mixture of N/N+/O/O+/e- is considered. It is shown that state-resolved diffusion coefficients of neutral and ionized species vary from level to level. Comparing results of engineering applications with those given by PAINeT, recommendations for adequate model selection are proposed.

  19. Removal of volatile organic compounds using amphiphilic cyclodextrin-coated polypropylene.

    PubMed

    Lumholdt, Ludmilla; Fourmentin, Sophie; Nielsen, Thorbjørn T; Larsen, Kim L

    2014-01-01

    Polypropylene nonwovens were functionalised using a self-assembled, amphiphilic cyclodextrin coating, and their potential for water purification by removal of pollutants was studied. As benzene is one of the problematic compounds in the Water Framework Directive, six volatile organic compounds (benzene and five benzene-based substances) were chosen as model compounds. The compounds were tested as a mixture in order to represent a more realistic situation, since wastewater is a complex mixture containing multiple pollutants. The volatile organic compounds are known to form stable inclusion complexes with cyclodextrins. Six different amphiphilic cyclodextrin derivatives were synthesised in order to elucidate whether or not the uptake abilities of the coating depend on the structure of the derivative. Headspace gas chromatography was used for quantification of the uptake, exploiting the volatile nature of benzene and its derivatives. The capacity was shown to increase beyond the expected stoichiometries of guest-host complexes, with ratios of up to 16:1.

  20. Thermal transitions, pseudogap behavior, and BCS-BEC crossover in Fermi-Fermi mixtures

    NASA Astrophysics Data System (ADS)

    Karmakar, Madhuparna

    2018-03-01

    We study the mass-imbalanced Fermi-Fermi mixture within the framework of a two-dimensional lattice fermion model. Based on the thermodynamic and species-dependent quasiparticle behavior, we map out the finite-temperature phase diagram of this system and show that, unlike the balanced Fermi superfluid, there are now two different pseudogap regimes, PG-I and PG-II. While within the PG-I regime both fermionic species are pseudogapped, PG-II corresponds to the regime where the pseudogap feature survives only in the light species. We believe that the single-particle spectral features that we discuss in this paper are observable through species-resolved radio-frequency spectroscopy and momentum-resolved photoemission spectroscopy measurements on systems such as the 6Li-40K mixture. We further investigate the interplay between the population and mass imbalances and report that, at a fixed population imbalance, the BCS-BEC crossover in a Fermi-Fermi mixture requires a critical interaction (Uc) for the realization of the uniform superfluid state. The effect of mass imbalance on the exotic Fulde-Ferrell-Larkin-Ovchinnikov superfluid phase is probed in detail in terms of the thermodynamic and quasiparticle behavior of this phase. It is observed that, in spite of the s-wave symmetry of the pairing field, a nodal superfluid gap is realized in the Larkin-Ovchinnikov regime. Our results on the various thermal scales and regimes are expected to serve as benchmarks for experimental observations on the 6Li-40K mixture.

  1. Discovery of optimal zeolites for challenging separations and chemical transformations using predictive materials modeling

    NASA Astrophysics Data System (ADS)

    Bai, Peng; Jeon, Mi Young; Ren, Limin; Knight, Chris; Deem, Michael W.; Tsapatsis, Michael; Siepmann, J. Ilja

    2015-01-01

    Zeolites play numerous important roles in modern petroleum refineries and have the potential to advance the production of fuels and chemical feedstocks from renewable resources. The performance of a zeolite as a separation medium and catalyst depends on its framework structure. To date, 213 framework types have been synthesized and >330,000 thermodynamically accessible zeolite structures have been predicted. Hence, identification of optimal zeolites for a given application from the large pool of candidate structures is attractive for accelerating the pace of materials discovery. Here we identify, through a large-scale, multi-step computational screening process, promising zeolite structures for two energy-related applications: the purification of ethanol from fermentation broths and the hydroisomerization of alkanes with 18-30 carbon atoms encountered in petroleum refining. These results demonstrate that predictive modelling and data-driven science can now be applied to solve some of the most challenging separation problems involving highly non-ideal mixtures and highly articulated compounds.

  2. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moges, Edom; Demissie, Yonas; Li, Hong-Yi

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach outperforms the single model for the Guadalupe catchment, where multiple dominant processes are evidenced by the diagnostic measures. For the French Broad catchment, in contrast, the diagnostics and aggregated performance measures indicate a homogeneous catchment response, making the single model adequate to capture it.
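
    The integration mechanism can be made concrete with a minimal mixture-of-experts sketch: two linear experts combined by a logistic gate driven by an indicator variable, fitted with EM. It is a generic illustration on synthetic data with hypothetical names, not the HME configuration used in the study.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 400
    u = rng.uniform(0, 1, n)                    # indicator variable driving the gate
    x = rng.uniform(-2, 2, n)
    regime = (u > 0.5).astype(int)              # two latent "dominant processes"
    y = np.where(regime == 0, 1.5 * x, -0.5 * x + 2) + 0.2 * rng.normal(size=n)

    X = np.column_stack([np.ones(n), x])        # expert design matrix
    beta = rng.normal(size=(2, 2))              # (intercept, slope) per expert
    sigma2, gate = 1.0, LogisticRegression()
    g = np.full((n, 2), 0.5)                    # initial gating probabilities

    for _ in range(50):
        # E-step: responsibilities from gate priors and expert likelihoods.
        resid = y[:, None] - X @ beta.T
        log_r = np.log(g) - 0.5 * resid**2 / sigma2
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step, experts: weighted least squares per expert.
        for k in range(2):
            Wk = r[:, k]
            beta[k] = np.linalg.solve(X.T @ (Wk[:, None] * X), X.T @ (Wk * y))
        sigma2 = np.sum(r * (y[:, None] - X @ beta.T) ** 2) / n
        # M-step, gate: logistic regression on the indicator, fit to soft labels.
        gate.fit(np.vstack([u[:, None], u[:, None]]),
                 np.repeat([0, 1], n),
                 sample_weight=np.concatenate([r[:, 0], r[:, 1]]))
        g = gate.predict_proba(u[:, None])

    print(np.round(beta, 2))   # recovers the two regimes' (intercept, slope)
    ```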

  3. Lifetime Segmented Assimilation Trajectories and Health Outcomes in Latino and Other Community Residents

    PubMed Central

    Marsiglia, Flavio F.; Kulis, Stephen; Kellison, Joshua G.

    2010-01-01

    Objectives. Under an ecodevelopmental framework, we examined lifetime segmented assimilation trajectories (diverging assimilation pathways influenced by prior life conditions) and related them to quality-of-life indicators in a diverse sample of 258 men in the Phoenix, AZ, metropolitan area. Methods. We used growth mixture model analysis of lifetime changes in socioeconomic status and acculturation to identify distinct lifetime segmented assimilation trajectory groups, which we compared on life satisfaction, exercise, and dietary behaviors. We hypothesized that lifetime assimilation change toward mainstream American culture (upward assimilation) would be associated with favorable health outcomes, and downward assimilation change with unfavorable health outcomes. Results. A growth mixture model latent class analysis identified 4 distinct assimilation trajectory groups. In partial support of the study hypotheses, the extreme upward assimilation trajectory group (the most successful of the assimilation pathways) exhibited the highest life satisfaction and the lowest frequency of unhealthy food consumption. Conclusions. Upward segmented assimilation is associated in adulthood with certain positive health outcomes. This may be the first study to model upward and downward lifetime segmented assimilation trajectories, and to associate these with life satisfaction, exercise, and dietary behaviors. PMID:20167890

  4. A structured framework for assessing sensitivity to missing data assumptions in longitudinal clinical trials.

    PubMed

    Mallinckrodt, C H; Lin, Q; Molenberghs, M

    2013-01-01

    The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern-mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that, after dropout, the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework.

  5. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.

    PubMed

    Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means, with MAP-DP convergence typically achieved on the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross-validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
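
    The core idea, letting the data determine the number of clusters through a Dirichlet process prior, can be tried directly in scikit-learn. Note that BayesianGaussianMixture is a variational relative of the approach described here, not the MAP-DP algorithm itself; the data and threshold below are illustrative.

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    # Three well-separated clusters; K is not told to the model.
    data = np.vstack([rng.normal(mu, 0.4, size=(100, 2))
                      for mu in ([0, 0], [4, 0], [2, 3])])

    dpgmm = BayesianGaussianMixture(
        n_components=10,   # an upper bound, not the answer
        weight_concentration_prior_type='dirichlet_process',
        max_iter=500,
        random_state=0,
    ).fit(data)

    # Components with non-negligible weight are the "discovered" clusters.
    print(np.round(dpgmm.weights_, 2))
    print("effective K:", int(np.sum(dpgmm.weights_ > 0.05)))
    ```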

  6. Protein construct storage: Bayesian variable selection and prediction with mixtures.

    PubMed

    Clyde, M A; Parmigiani, G

    1998-07-01

    Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
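
    A minimal way to see how model averaging differs from picking a single model: enumerate variable subsets, weight each fitted linear model by a BIC-approximated posterior probability, and accumulate inclusion probabilities. This generic sketch on synthetic data mirrors the spirit, though not the specifics, of the Bayesian variable selection used for the protein storage study; all names are hypothetical.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    n, p = 60, 4
    X = rng.normal(size=(n, p))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)  # x1, x3 matter

    def fit_subset(cols):
        """OLS on an intercept plus the given columns; return BIC and the model."""
        Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
        bic = n * np.log(rss / n) + Xs.shape[1] * np.log(n)
        return bic, (cols, beta)

    models = [fit_subset(c) for r in range(p + 1) for c in combinations(range(p), r)]
    bics = np.array([m[0] for m in models])
    # BIC approximates -2 log marginal likelihood: convert to posterior weights.
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()

    # Posterior inclusion probability of each variable under model averaging.
    incl = np.zeros(p)
    for wi, (_, (cols, _)) in zip(w, models):
        for j in cols:
            incl[j] += wi
    print(np.round(incl, 2))   # high for variables 0 and 2, low elsewhere
    ```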

  7. Hydrodynamics of bacterial colonies: A model

    NASA Astrophysics Data System (ADS)

    Lega, J.; Passot, T.

    2003-03-01

    We propose a hydrodynamic model for the evolution of bacterial colonies growing on soft agar plates. This model consists of reaction-diffusion equations for the concentrations of nutrients, water, and bacteria, coupled to a single hydrodynamic equation for the velocity field of the bacteria-water mixture. It captures the dynamics inside the colony as well as on its boundary and allows us to identify a mechanism for collective motion towards fresh nutrients, which, in its modeling aspects, is similar to classical chemotaxis. As shown in numerical simulations, our model reproduces both usual colony shapes and typical hydrodynamic motions, such as the whirls and jets recently observed in wet colonies of Bacillus subtilis. The approach presented here could be extended to different experimental situations and provides a general framework for the use of advection-reaction-diffusion equations in modeling bacterial colonies.

  8. Gaussian mixtures on tensor fields for segmentation: applications to medical imaging.

    PubMed

    de Luis-García, Rodrigo; Westin, Carl-Fredrik; Alberola-López, Carlos

    2011-01-01

    In this paper, we introduce a new approach for tensor field segmentation based on the definition of mixtures of Gaussians on tensors as a statistical model. Working over the well-known Geodesic Active Regions segmentation framework, this scheme presents several interesting advantages. First, it yields a more flexible model than the use of a single Gaussian distribution, which enables the method to better adapt to the complexity of the data. Second, it can work directly on tensor-valued images or, through a parallel scheme that processes independently the intensity and the local structure tensor, on scalar textured images. Two different applications have been considered to show the suitability of the proposed method for medical imaging segmentation. First, we address DT-MRI segmentation on a dataset of 32 volumes, showing a successful segmentation of the corpus callosum and favourable comparisons with related approaches in the literature. Second, the segmentation of bones from hand radiographs is studied, and a complete automatic-semiautomatic approach has been developed that makes use of anatomical prior knowledge to produce accurate segmentation results.

  9. A Computational Algorithm for Functional Clustering of Proteome Dynamics During Development

    PubMed Central

    Wang, Yaqun; Wang, Ningtao; Hao, Han; Guo, Yunqian; Zhen, Yan; Shi, Jisen; Wu, Rongling

    2014-01-01

    Phenotypic traits, such as seed development, are a consequence of complex biochemical interactions among genes, proteins and metabolites, but the underlying mechanisms that operate in a coordinated and sequential manner remain elusive. Here, we address this issue by developing a computational algorithm to monitor proteome changes during the course of trait development. The algorithm is built within the mixture-model framework, in which each mixture component is modeled by a specific group of proteins that display a similar temporal pattern of expression during trait development. A nonparametric approach based on Legendre orthogonal polynomials was used to fit dynamic changes of protein expression, increasing the power and flexibility of protein clustering. By analyzing a dataset of proteomic dynamics during early embryogenesis of the Chinese fir, the algorithm successfully identified several distinct types of proteins that coordinate with each other to determine seed development in this forest tree, which is of commercial and environmental importance to China. The algorithm will find immediate application in characterizing the mechanistic underpinnings of any other biological process in which protein abundance plays a key role. PMID:24955031
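
    The representation at the heart of this approach, compressing each temporal profile into a handful of Legendre coefficients and clustering in that coefficient space, can be sketched as follows. The sketch uses a plain Gaussian mixture on the coefficients of synthetic profiles; the paper's algorithm embeds the basis inside a dedicated mixture likelihood.

    ```python
    import numpy as np
    from numpy.polynomial import legendre
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    t = np.linspace(-1, 1, 12)      # 12 developmental time points
    n_per = 40
    # Two hypothetical expression archetypes: rising vs. transient peak.
    curves = np.vstack(
        [1.0 + 1.5 * t + 0.1 * rng.normal(size=(n_per, t.size)),
         1.0 - 1.8 * t**2 + 0.1 * rng.normal(size=(n_per, t.size))]
    )

    # Project each profile onto a degree-3 Legendre basis (least squares).
    coefs = np.array([legendre.legfit(t, c, deg=3) for c in curves])

    # Cluster proteins in the low-dimensional coefficient space.
    gm = GaussianMixture(n_components=2, random_state=0).fit(coefs)
    labels = gm.predict(coefs)
    print(labels[:n_per].mean(), labels[n_per:].mean())  # ~0 vs ~1 (or swapped)

    # Reconstruct each cluster's mean temporal pattern from mean coefficients.
    for k in range(2):
        print(np.round(legendre.legval(t, gm.means_[k]), 2))
    ```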

  10. NIRP Core Software Suite v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitener, Dustin Heath; Folz, Wesley; Vo, Duong

    The NIRP Core Software Suite is a core set of code that supports multiple applications. It includes miscellaneous base code for data objects, mathematical equations, and user interface components; the framework also includes several fully-developed software applications that exist as stand-alone tools to complement other applications. The stand-alone tools are described below. Analyst Manager: An application to manage contact information for people (analysts) that use the software products. This information is often included in generated reports and may be used to identify the owners of calculations. Radionuclide Viewer: An application for viewing the DCFPAK radiological data; complements the Mixture Manager tool. Mixture Manager: An application to create and manage radionuclide mixtures that are commonly used in other applications. High Explosive Manager: An application to manage explosives and their properties. Chart Viewer: An application to view charts of data (e.g. meteorology charts). Other applications may use this framework to create charts specific to their data needs.

  11. Recent Advances on Bioethanol Dehydration using Zeolite Membrane

    NASA Astrophysics Data System (ADS)

    Makertihartha, I. G. B. N.; Dharmawijaya, P. T.; Wenten, I. G.

    2017-07-01

    Renewable energy has gained increasing attention throughout the world. Bioethanol has the potential to replace existing fossil fuel usage without much modification of existing facilities. Bioethanol, which is generally produced via fermentation, has a low ethanol concentration, whereas fuel-grade ethanol requires low water content to avoid engine stall. The dehydration step has therefore become increasingly important in fuel-grade ethanol production. Among all dehydration processes, pervaporation is considered the most promising technology. Zeolites possess high potential for the pervaporation of bioethanol into fuel-grade ethanol. A zeolite membrane can either remove the organic component (ethanol) from the aqueous mixture or remove water from it, depending on the framework used. A hydrophilic zeolite membrane, e.g. LTA, can easily remove water from the mixture, leaving a high ethanol concentration. On the other hand, a hydrophobic zeolite membrane, e.g. silicalite-1, can remove ethanol from aqueous solution. This review presents the concept of bioethanol dehydration using zeolite membranes. Special attention is given to the performance of the selected pathway in relation to framework selection.

  12. Unifying framework for multimodal brain MRI segmentation based on Hidden Markov Chains.

    PubMed

    Bricq, S; Collet, Ch; Armspach, J P

    2008-12-01

    In the frame of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images including partial volume effect, bias field correction, and information given by a probabilistic atlas. The proposed method takes into account neighborhood information using a Hidden Markov Chain (HMC) model. Due to the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is included to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Furthermore, atlas priors were incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered as a complementary sensor, and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images, for which the ground truth is available. Comparison with other commonly used techniques demonstrates the accuracy and the robustness of this new Markovian segmentation scheme.

  13. Optimal bioprocess design through a gene regulatory network - growth kinetic hybrid model: Towards Replacing Monod kinetics.

    PubMed

    Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios

    2018-05-02

    Currently, design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation and/or with the use of empirical, unstructured growth kinetics models. Although elaborate systems biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through application of a novel experimental-modelling gene regulatory network - growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional kinetics expression patterns of the promoters. The GRN model informed the formulation of the growth kinetics model, replacing the empirical and unstructured Monod kinetics. The GRN-GK framework's predictive capability, and its potential as a systematic optimal bioprocess design tool, was demonstrated by effectively predicting bioprocess performance in agreement with experimental values, whereas four commonly used models deviated significantly from the experimental values. Significantly, a fed-batch biodegradation process was designed and optimised through model-based control of TOL Pr promoter expression, resulting in 61% and 60% enhanced pollutant removal and biomass formation, respectively, compared to the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach for model-driven strain modification for synthetic biology and metabolic engineering applications.
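
    For reference, the empirical Monod kinetics that the GRN-GK framework aims to replace tie the specific growth rate to a single limiting substrate, mu = mu_max*S/(Ks + S). Below is a minimal batch-growth simulation under illustrative (not paper-derived) parameter values.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    mu_max, Ks, Y = 0.6, 0.3, 0.5   # 1/h, g/L, g biomass per g substrate (illustrative)

    def monod(t, state):
        X, S = state                   # biomass and substrate concentrations
        mu = mu_max * S / (Ks + S)     # specific growth rate (Monod)
        return [mu * X, -mu * X / Y]   # dX/dt, dS/dt

    sol = solve_ivp(monod, (0, 24), [0.05, 5.0], t_eval=np.linspace(0, 24, 7))
    for t, X, S in zip(sol.t, *sol.y):
        print(f"t={t:4.1f} h  X={X:5.2f} g/L  S={S:5.2f} g/L")
    ```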

  14. Simultaneous Force Regression and Movement Classification of Fingers via Surface EMG within a Unified Bayesian Framework.

    PubMed

    Baldacchino, Tara; Jacobs, William R; Anderson, Sean R; Worden, Keith; Rowson, Jennifer

    2018-01-01

    This contribution presents a novel methodology for myoelectric-based control using surface electromyographic (sEMG) signals recorded during finger movements. A multivariate Bayesian mixture of experts (MoE) model is introduced which provides a powerful method for modeling force regression at the fingertips, while also performing finger movement classification as a by-product of the modeling algorithm. Bayesian inference of the model allows uncertainties to be naturally incorporated into the model structure. This method is tested using data from the publicly released NinaPro database, which consists of sEMG recordings of 6 degree-of-freedom force activations for 40 intact subjects. The results demonstrate that the MoE model achieves similar performance to the benchmark set by the authors of NinaPro for finger force regression. Additionally, inherent to the Bayesian framework is the inclusion of uncertainty in the model parameters, naturally providing confidence bounds on the force regression predictions. Furthermore, the integrated clustering step allows a detailed investigation into the classification of finger movements without incurring any extra computational effort. Subsequently, a systematic approach to assessing the importance of the number of electrodes needed for accurate control is performed via sensitivity analysis techniques. A slight degradation in regression performance is observed for a reduced number of electrodes, while classification performance is unaffected.

  15. Financial Data Analysis by means of Coupled Continuous-Time Random Walk in Rachev-Rüschendorf Model

    NASA Astrophysics Data System (ADS)

    Jurlewicz, A.; Wyłomańska, A.; Żebrowski, P.

    2008-09-01

    We adapt the continuous-time random walk formalism to describe asset price evolution. We expand the idea proposed by Rachev and Rüschendorf, who analyzed the binomial pricing model in discrete time with randomization of the number of price changes. As a result, in the framework of the proposed model we obtain a mixture of Gaussian and generalized arcsine laws as the limiting distribution of log-returns. Moreover, we derive a European call option price that extends the Black-Scholes formula. We apply the obtained theoretical results to model actual financial data and try to show that the continuous-time random walk offers alternative tools to deal with several complex issues of financial markets.

  16. Using hierarchical Bayesian multi-species mixture models to estimate tandem hoop-net based habitat associations and detection probabilities of fishes in reservoirs

    USGS Publications Warehouse

    Stewart, David R.; Long, James M.

    2015-01-01

    Species distribution models are useful tools to evaluate habitat relationships of fishes. We used hierarchical Bayesian multispecies mixture models to evaluate the relationships of both detection and abundance with habitat of reservoir fishes caught using tandem hoop nets. A total of 7,212 fish from 12 species were captured, and the majority of the catch was composed of Channel Catfish Ictalurus punctatus (46%), Bluegill Lepomis macrochirus (25%), and White Crappie Pomoxis annularis (14%). Detection estimates ranged from 8% to 69%, and modeling results suggested that fishes were primarily influenced by reservoir size and context, water clarity and temperature, and land-use types. Species were differentially abundant within and among habitat types: some fishes were found to be more abundant in turbid, less impacted (e.g., by urbanization and agriculture) reservoirs with longer shoreline lengths, whereas other species were found more often in clear, nutrient-rich impoundments that had generally shorter shoreline length and were surrounded by a higher percentage of agricultural land. Our results demonstrated that habitat and reservoir characteristics may differentially benefit species and assemblage structure. This study provides a useful framework for evaluating capture efficiency for not only hoop nets but other gear types used to sample fishes in reservoirs.

  17. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
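
    The two-level model can be sketched in one dimension: a measurement x from a given source follows x ~ N(theta, W), and the source mean theta follows a between-source distribution, here a GMM. The "same source" vs. "different sources" likelihood ratio is then a ratio of marginal densities, computable in closed form. The sketch below is univariate with illustrative parameters, not the paper's multivariate method.

    ```python
    import numpy as np
    from scipy.stats import norm, multivariate_normal

    # Between-source distribution of the source mean: a 2-component GMM.
    w = np.array([0.4, 0.6])   # component weights
    m = np.array([0.0, 3.0])   # component means
    B = np.array([0.5, 0.8])   # between-source variances
    W = 0.1                    # within-source variance (shared by all sources)

    def marginal(x):
        """p(x): the source mean integrated out of N(x; theta, W)."""
        return np.sum(w * norm.pdf(x, loc=m, scale=np.sqrt(B + W)))

    def joint_same_source(x, y):
        """p(x, y | same source): x and y share one latent theta."""
        total = 0.0
        for wk, mk, bk in zip(w, m, B):
            # A common theta induces covariance bk between the two measurements.
            cov = np.array([[bk + W, bk], [bk, bk + W]])
            total += wk * multivariate_normal.pdf([x, y], mean=[mk, mk], cov=cov)
        return total

    def likelihood_ratio(x, y):
        return joint_same_source(x, y) / (marginal(x) * marginal(y))

    print(f"close pair  : LR = {likelihood_ratio(2.9, 3.1):.2f}")   # supports same source
    print(f"distant pair: LR = {likelihood_ratio(0.1, 3.0):.4f}")  # supports different sources
    ```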

  18. Confined wetting of FoCa clay powder/pellet mixtures: Experimentation and numerical modeling

    NASA Astrophysics Data System (ADS)

    Maugis, Pascal; Imbert, Christophe

    Potential geological nuclear waste repositories must be properly sealed to prevent contamination of the biosphere by radionuclides. In the framework of the RESEAL project, the performance of a bentonite shaft seal is currently being studied at Mol (Belgium). This paper focuses on the hydro-mechanical behavior of centimetric, unsaturated samples of the backfilling material - a mixture of FoCa clay powder and pellets - during oedometer tests. The hydro-mechanical response of the samples is observed experimentally and then compared to numerical simulations performed with our Cast3M finite element code. The generalized Darcy law and the Barcelona Basic Model, both widely used in engineered-barrier modeling, formed the physical basis of the numerical model and the interpretation. Vertical swelling pressure and water intake were measured throughout the test. Although water intake increases monotonically, the swelling pressure evolution is marked by a peak, then a local minimum, before increasing again to an asymptotic value. This unexpected behavior is explained by yielding rather than by heterogeneity, and it is satisfactorily reproduced by the model after parameter calibration. Several samples with heights ranging from 5 to 12 cm show the same hydro-mechanical response, apart from a dilatation of the time scale. The value of characterizing centimetric samples for predicting the efficiency of a metre-scale seal is discussed.

  19. Study of biological communities subject to imperfect detection: Bias and precision of community N-mixture abundance models in small-sample situations

    USGS Publications Warehouse

    Yamaura, Yuichi; Kery, Marc; Royle, Andy

    2016-01-01

    Community N-mixture abundance models for replicated counts provide a powerful and novel framework for drawing inferences related to species abundance within communities subject to imperfect detection. To assess the performance of these models, and to compare them to related community occupancy models in situations with marginal information, we used simulation to examine the effects of mean abundance (λ: 0.1, 0.5, 1, 5), detection probability (p: 0.1, 0.2, 0.5), and number of sampling sites (n_site: 10, 20, 40) and visits (n_visit: 2, 3, 4) on the bias and precision of species-level parameters (mean abundance and covariate effect) and a community-level parameter (species richness). Bias and imprecision of estimates decreased when any of the four variables (λ, p, n_site, n_visit) increased. Detection probability p was most important for the estimates of mean abundance, while λ was most influential for covariate effect and species richness estimates. For all parameters, increasing n_site was more beneficial than increasing n_visit. Minimal conditions for obtaining adequate performance of community abundance models were n_site ≥ 20, p ≥ 0.2, and λ ≥ 0.5. At lower abundance, the performance of community abundance and community occupancy models as species richness estimators was comparable. We then used additive partitioning analysis to show that raw species counts can overestimate β diversity both for species richness and for the Shannon index, while community abundance models yielded better estimates. Community N-mixture abundance models thus have great potential for use in community ecology and conservation applications, provided that replicated counts are available.

  1. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    USGS Publications Warehouse

    Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.

    2012-01-01

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.

  2. A framework for fast probabilistic centroid-moment-tensor determination—inversion of regional static displacement measurements

    NASA Astrophysics Data System (ADS)

    Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot

    2014-03-01

    The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs), a class of neural networks which output the parameters of a Gaussian mixture model. By combining multiple networks into 'committees', we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 Mw 7.2 El Mayor-Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location and depth, and focal mechanism. We investigate the extent to which moment tensor point sources can be constrained with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
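
    A single MDN of the kind described can be written compactly in PyTorch: a small network outputs the weights, means, and scales of a one-dimensional Gaussian mixture and is trained by minimizing the negative log-likelihood. This is a minimal sketch on a toy multi-valued inverse problem, not the committee architecture or seismic parameterization of the paper.

    ```python
    import torch
    import torch.nn as nn

    class MDN(nn.Module):
        """Minimal mixture density network: maps x to a K-component GMM over y."""
        def __init__(self, n_hidden=32, n_components=5):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(1, n_hidden), nn.Tanh())
            self.pi = nn.Linear(n_hidden, n_components)         # mixture logits
            self.mu = nn.Linear(n_hidden, n_components)         # component means
            self.log_sigma = nn.Linear(n_hidden, n_components)  # log scales

        def forward(self, x):
            h = self.body(x)
            return self.pi(h), self.mu(h), self.log_sigma(h)

    def mdn_nll(pi_logits, mu, log_sigma, y):
        """Negative log-likelihood of y under the predicted Gaussian mixture."""
        log_pi = torch.log_softmax(pi_logits, dim=1)
        comp = torch.distributions.Normal(mu, log_sigma.exp())
        log_prob = log_pi + comp.log_prob(y)           # shape (batch, K)
        return -torch.logsumexp(log_prob, dim=1).mean()

    # Toy multi-valued inverse problem: learn p(y | x) where x = sin(y) + noise.
    torch.manual_seed(0)
    y = torch.rand(2000, 1) * 6 - 3
    x = torch.sin(y) + 0.1 * torch.randn_like(y)

    model = MDN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for step in range(2000):
        opt.zero_grad()
        loss = mdn_nll(*model(x), y)
        loss.backward()
        opt.step()
    print(f"final NLL: {loss.item():.3f}")
    ```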

  3. Stability of smectic phases in hard-rod mixtures

    NASA Astrophysics Data System (ADS)

    Martínez-Ratón, Yuri; Velasco, Enrique; Mederos, Luis

    2005-09-01

    Using density-functional theory, we have analyzed the phase behavior of binary mixtures of hard rods of different lengths and diameters. Previous studies have shown a strong tendency of smectic phases of these mixtures to segregate and, in some circumstances, to form microsegregated phases. Our focus in the present work is on the formation of columnar phases, which some studies, under some approximations, have shown to become thermodynamically stable prior to crystallization. Specifically, we focus on the relative stability between smectic and columnar phases, a question not fully addressed in previous work. Our analysis is based on two complementary perspectives: on the one hand, an extended Onsager theory, which includes the full orientational degrees of freedom but with spatial and orientational correlations being treated in an approximate manner; on the other hand, we formulate a Zwanzig approximation of fundamental-measure theory on hard parallelepipeds, whereby orientations are restricted to be only along three mutually orthogonal axes, but correlations are faithfully represented. In the latter case, novel, complete phase diagrams containing regions of stability of liquid-crystalline phases are calculated. Our findings indicate that the restricted-orientation approximation enhances the stability of columnar phases so as to preempt smectic order completely while, in the framework of the extended Onsager model, with full orientational degrees of freedom taken into account, columnar phases may preempt a large region of smectic stability in some mixtures, but some smectic order still persists.

  4. Turbulent flame spreading mechanisms after spark ignition

    NASA Astrophysics Data System (ADS)

    Subramanian, V.; Domingo, Pascale; Vervisch, Luc

    2009-12-01

    Numerical simulation of forced ignition is performed in the framework of Large-Eddy Simulation (LES) combined with a tabulated detailed chemistry approach. The objective is to reproduce the flame properties observed in a recent experimental work reporting the probability of ignition in a laboratory-scale burner operating with a methane/air non-premixed mixture [1]. The smallest scales of chemical phenomena, which are unresolved by the LES grid, are approximated with a flamelet model combined with presumed probability density functions, to account for the unresolved part of turbulent fluctuations of species and temperature. One-dimensional flamelets are simulated using GRI-3.0 [2] and tabulated under a set of parameters describing the local mixing and progress of reaction. A non-reacting case was simulated first to study the unsteady velocity and mixture fields. The time-averaged velocity and mixture fraction, and their respective turbulent fluctuations, are compared against the experimental measurements in order to estimate the prediction capabilities of LES. The time history of the axial and radial components of velocity and of the mixture fraction is accumulated and analysed for different burner regimes. Based on this information, spark ignition is mimicked at selected ignition spots, and the dynamics of kernel development are analyzed and compared against the experimental observations. The possible link between the success or failure of ignition and the flow conditions (in terms of velocity and composition) at the sparking time is then explored.

  5. Systematic description of the effect of particle shape on the strength properties of granular media

    NASA Astrophysics Data System (ADS)

    Azéma, Emilien; Estrada, Nicolas; Preechawuttipong, Itthichai; Delenne, Jean-Yves; Radjai, Farhang

    2017-06-01

    In this paper, we explore numerically the effect of particle shape on the mechanical behavior of sheared granular packings. In the framework of the Contact Dynamics (CD) method, we model angular shapes as irregular polyhedral particles, non-convex shapes as regular aggregates of four overlapping spheres, elongated shapes as rounded-cap rectangles, and platy shapes as square plates. Binary granular mixtures consisting of disks and elongated particles are also considered. For each of the above situations, the number of faces of the polyhedral particles, the overlap of the spheres, and the aspect ratio of the elongated and platy particles are systematically varied from spheres to very angular, non-convex, elongated and platy shapes. The level of homogeneity of the binary mixtures varies from homogeneous to fully segregated packings. Our numerical results suggest that the effects of shape parameters are nonlinear and counterintuitive. We show that the shear strength increases as the shape deviates from spherical; for angular shapes, however, it first increases up to a maximum value and then saturates to a constant value as the particles become more angular. For mixtures of two shapes, the strength increases with the proportion of elongated particles but, surprisingly, is independent of the level of homogeneity of the mixture. A detailed analysis of the contact network topology evidences that the various contact types contribute differently to stress transmission at the micro-scale.

  6. A charge-polarized porous metal-organic framework for gas chromatographic separation of alcohols from water.

    PubMed

    Sun, Jian-Ke; Ji, Min; Chen, Cheng; Wang, Wu-Gen; Wang, Peng; Chen, Rui-Ping; Zhang, Jie

    2013-02-25

    A bipyridinium ligand with a charge separated skeleton has been introduced into a metal-organic framework to yield a porous material with charge-polarized pore space, which exhibits selective adsorption for polar guest molecules and can be further used in gas chromatography for the separation of alcohol-water mixtures.

  7. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early control of heart disease can be achieved with efficient prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed here for two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, as indicated by its lower Bayesian information criterion (BIC) value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise within the available clusters. It is concluded that heart disease prediction can be done effectively by identifying the major risks component-wise using Poisson mixture regression models.

  8. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early control of heart disease can be achieved with efficient prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed here for two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, as indicated by its lower Bayesian information criterion (BIC) value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise within the available clusters. It is concluded that heart disease prediction can be done effectively by identifying the major risks component-wise using Poisson mixture regression models. PMID:27999611
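
    The model class behind the two entries above can be made concrete with a short expectation-maximization (EM) fit. The sketch below fits a two-component, intercept-only Poisson mixture; the regression (covariate) and zero-inflation parts of the paper's models are deliberately omitted, and the data and initial values are invented.

```python
import numpy as np
from scipy.stats import poisson

def fit_poisson_mixture(y, n_iter=200):
    """EM for a two-component Poisson mixture: the E-step computes component
    responsibilities, the M-step re-estimates weights and rates."""
    lam = np.array([y.mean() * 0.5, y.mean() * 1.5])   # initial rates
    w = np.array([0.5, 0.5])                           # mixing weights
    for _ in range(n_iter):
        dens = w * poisson.pmf(y[:, None], lam)        # (n, 2)
        r = dens / dens.sum(axis=1, keepdims=True)     # responsibilities
        w = r.mean(axis=0)
        lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
    return w, lam

# toy counts: a low-risk and a high-risk latent class
rng = np.random.default_rng(0)
y = np.concatenate([rng.poisson(1.0, 300), rng.poisson(6.0, 200)])
print(fit_poisson_mixture(y))
```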

  9. Designing a robust activity recognition framework for health and exergaming using wearable sensors.

    PubMed

    Alshurafa, Nabil; Xu, Wenyao; Liu, Jason J; Huang, Ming-Chun; Mortazavi, Bobak; Roberts, Christian K; Sarrafzadeh, Majid

    2014-09-01

    Detecting human activity independent of intensity is essential in many applications, primarily in calculating metabolic equivalent rates and extracting human context awareness. Many classifiers that train on an activity at a subset of intensity levels fail to recognize the same activity at other intensity levels. This demonstrates a weakness in the underlying classification method. Training a classifier for an activity at every intensity level is also not practical. In this paper, we tackle a novel intensity-independent activity recognition problem where the class labels exhibit large variability, the data are of high dimensionality, and clustering algorithms are necessary. We propose a new robust stochastic approximation framework for enhanced classification of such data. Experiments are reported using two clustering techniques, K-Means and Gaussian Mixture Models. The stochastic approximation algorithm consistently outperforms other well-known classification schemes, which validates the use of our proposed clustered data representation. We verify the motivation of our framework in two applications that benefit from intensity-independent activity recognition. The first application shows how our framework can be used to enhance energy expenditure calculations. The second application is a novel exergaming environment aimed at using games to reward physical activity performed throughout the day, to encourage a healthy lifestyle.

  10. Modeling of dielectric properties of aqueous salt solutions with an equation of state.

    PubMed

    Maribo-Mogensen, Bjørn; Kontogeorgis, Georgios M; Thomsen, Kaj

    2013-09-12

    The static permittivity is the most important physical property for thermodynamic models that account for the electrostatic interactions between ions. The measured static permittivity in mixtures containing electrolytes is reduced due to kinetic depolarization and reorientation of the dipoles in the electrical field surrounding ions. Kinetic depolarization may explain 25-75% of the observed decrease in the permittivity of solutions containing salts, but since this is a dynamic property, this effect should not be included in the thermodynamic modeling of electrolytes. Kinetic depolarization has, however, been ignored in relation to thermodynamic modeling, and authors have either neglected the effect of salts on permittivity or used empirical correlations fitted to the measured static permittivity, leading to an overestimation of the reduction in the thermodynamic static permittivity. We present a new methodology for obtaining the static permittivity over wide ranges of temperatures, pressures, and compositions for use within an equation of state for mixed solvents containing salts. The static permittivity is calculated from a new extension of the framework developed by Onsager, Kirkwood, and Fröhlich to associating mixtures. Wertheim's association model as formulated in the statistical associating fluid theory is used to account for hydrogen-bonding molecules and ion-solvent association. Finally, we compare the Debye-Hückel Helmholtz energy obtained using an empirical model with the new physical model and show that the empirical models may introduce unphysical behavior in the equation of state.

  11. The effect of binary mixtures of zinc, copper, cadmium, and nickel on the growth of the freshwater diatom Navicula pelliculosa and comparison with mixture toxicity model predictions.

    PubMed

    Nagai, Takashi; De Schamphelaere, Karel A C

    2016-11-01

    The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-day fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by the concentration addition and independent action models, based on single-metal concentration-response relationships between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.
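
    The two reference models compared in this study can be written down in a few lines. Assuming log-logistic single-metal concentration-response curves (the EC50s, slopes, and mixture doses below are invented, and the free-ion-activity aspect is omitted), a hedged sketch of the CA and IA predictions for a binary mixture is:

```python
import numpy as np
from scipy.optimize import brentq

def effect(c, ec50, slope):
    """Fraction affected under a log-logistic concentration-response curve."""
    return 1.0 / (1.0 + (ec50 / c) ** slope)

def ec(e, ec50, slope):
    """Inverse curve: the concentration producing fractional effect e."""
    return ec50 * (e / (1.0 - e)) ** (1.0 / slope)

def ia_prediction(conc, pars):
    """Independent action: response probabilities combine multiplicatively."""
    return 1.0 - np.prod([1.0 - effect(c, *p) for c, p in zip(conc, pars)])

def ca_prediction(conc, pars):
    """Concentration addition: solve sum_i c_i / EC_i(E) = 1 for E."""
    f = lambda e: sum(c / ec(e, *p) for c, p in zip(conc, pars)) - 1.0
    return brentq(f, 1e-9, 1.0 - 1e-9)

# illustrative binary mixture: (EC50, slope) per metal, then a dose pair
pars = [(1.0, 2.0), (5.0, 1.5)]
mix = (0.5, 2.5)
print("CA:", ca_prediction(mix, pars), "IA:", ia_prediction(mix, pars))
```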

  12. Effect of air humidity on the removal of carbon tetrachloride from air using Cu-BTC metal-organic framework.

    PubMed

    Martín-Calvo, Ana; García-Pérez, Elena; García-Sánchez, Almudena; Bueno-Pérez, Rocío; Hamad, Said; Calero, Sofia

    2011-06-21

    We have used interatomic potential-based simulations to study the removal of carbon tetrachloride from air at 298 K, using Cu-BTC metal organic framework. We have developed new sets of Lennard-Jones parameters that accurately describe the vapour-liquid equilibrium curves of carbon tetrachloride and the main components from air (oxygen, nitrogen, and argon). Using these parameters we performed Monte Carlo simulations for the following systems: (a) single component adsorption of carbon tetrachloride, oxygen, nitrogen, and argon molecules, (b) binary Ar/CCl(4), O(2)/CCl(4), and N(2)/CCl(4) mixtures with bulk gas compositions 99 : 1 and 99.9 : 0.1, (c) ternary O(2)/N(2)/Ar mixtures with both, equimolar and 21 : 78 : 1 bulk gas composition, (d) quaternary mixture formed by 0.1% of CCl(4) pollutant, 20.979% O(2), 77.922% N(2), and 0.999% Ar, and (e) five-component mixtures corresponding to 0.1% of CCl(4) pollutant in air with relative humidity ranging from 0 to 100%. The carbon tetrachloride adsorption selectivity and the self-diffusivity and preferential sitting of the different molecules in the structure are studied for all the systems.

  13. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  14. Deformation of debris-ice mixtures

    NASA Astrophysics Data System (ADS)

    Moore, Peter L.

    2014-09-01

    Mixtures of rock debris and ice are common in high-latitude and high-altitude environments and are thought to be widespread elsewhere in our solar system. In the form of permafrost soils, glaciers, and rock glaciers, these debris-ice mixtures are often not static but slide and creep, generating many of the landforms and landscapes associated with the cryosphere. In this review, a broad range of field observations, theory, and experimental work relevant to the mechanical interactions between ice and rock debris are evaluated, with emphasis on the temperature and stress regimes common in terrestrial surface and near-surface environments. The first-order variables governing the deformation of debris-ice mixtures in these environments are debris concentration, particle size, temperature, solute concentration (salinity), and stress. A key observation from prior studies, consistent with expectations, is that debris-ice mixtures are usually more resistant to deformation at low temperatures than their pure end-member components. However, at temperatures closer to melting, the growth of unfrozen water films at ice-particle interfaces begins to reduce the strengthening effect and can even lead to profound weakening. Using existing quantitative relationships from theoretical and experimental work in permafrost engineering, ice mechanics, and glaciology combined with theory adapted from metallurgy and materials science, a simple constitutive framework is assembled that is capable of capturing most of the observed dynamics. This framework highlights the competition between the role of debris in impeding ice creep and the mitigating effects of unfrozen water at debris-ice interfaces.

  15. Generation of a mixture model ground-motion prediction equation for Northern Chile

    NASA Astrophysics Data System (ADS)

    Haendel, A.; Kuehn, N. M.; Scherbaum, F.

    2012-12-01

    In probabilistic seismic hazard analysis (PSHA), empirically derived ground motion prediction equations (GMPEs) are usually applied to estimate the ground motion at a site of interest as a function of source-, path-, and site-related predictor variables. Because GMPEs are derived from limited datasets, they are not expected to give entirely accurate estimates or to reflect the whole range of possible future ground motion, thus giving rise to epistemic uncertainty in the hazard estimates. This is especially true for regions without an indigenous GMPE, where foreign models have to be applied. The choice of appropriate GMPEs can then dominate the overall uncertainty in hazard assessments. In order to quantify this uncertainty, the set of ground motion models used in a modern PSHA has to capture (in SSHAC language) the center, body, and range of the possible ground motion at the site of interest. This was traditionally done within a logic tree framework in which existing (or only slightly modified) GMPEs occupy the branches of the tree and the branch weights describe the degree of belief of the analyst in their applicability. This approach invites the problem of combining GMPEs of very different quality and hence of potentially overestimating epistemic uncertainty. Some recent hazard analyses have therefore resorted to using a small number of high-quality GMPEs as backbone models, from which the full distribution of GMPEs for the logic tree (to capture the full range of possible ground motion uncertainty) is subsequently generated by scaling (in a general sense). In the present study, a new approach is proposed to determine an optimized backbone model as weighted components of a mixture model. In doing so, each GMPE is assumed to reflect the generation mechanism (e.g., in terms of stress drop, propagation properties, etc.) for at least a fraction of the possible ground motions in the area of interest. The combination of different models into a mixture model (which is learned from observed ground motion data in the region of interest) then transfers information from other regions to the region where the observations were produced, in a data-driven way. The backbone model is learned by comparing the model predictions to observations from the target region. For each observation and each model, the likelihood of the observation given a certain GMPE is calculated. Mixture weights can then be assigned using the expectation maximization (EM) algorithm or Bayesian inference. The new method is used to generate a backbone reference model for Northern Chile, an area for which no dedicated GMPE exists. Strong motion recordings from the target area are used to learn the backbone model from a set of 10 GMPEs developed for different subduction zones of the world. The formation of mixture models is done individually for interface and intraslab type events. The ability of the resulting backbone models to describe ground motions in Northern Chile is then compared to the predictive performance of their constituent models.
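
    Because the component GMPEs are fixed and only the mixture weights are learned from the regional data, the EM step described above reduces to a few lines. The sketch below is an illustration of that idea, not the authors' implementation; it takes a matrix of per-record log-likelihoods (one column per candidate GMPE) and returns maximum-likelihood mixture weights.

```python
import numpy as np

def em_mixture_weights(log_lik, n_iter=500, tol=1e-10):
    """EM for the weights of a mixture of fixed component models.
    log_lik: (n_obs, n_models) log-likelihoods of each ground-motion
    observation under each candidate GMPE."""
    n, m = log_lik.shape
    w = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        # E-step: responsibilities, computed in log-space for stability
        a = np.log(w) + log_lik
        a -= a.max(axis=1, keepdims=True)
        r = np.exp(a)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: the new weights are the mean responsibilities
        w_new = r.mean(axis=0)
        if np.max(np.abs(w_new - w)) < tol:
            break
        w = w_new
    return w

# toy check: three candidate models, the second fits the data best
rng = np.random.default_rng(0)
ll = rng.normal([-2.0, -1.0, -3.0], 0.3, size=(200, 3))
print(em_mixture_weights(ll))
```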

  16. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    PubMed Central

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction, and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite the potential advantages of mixture modeling of mass spectra of peptide/protein mixtures highlighted by preliminary results in several papers, the mixture modeling approach had so far not been developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the fragment mixture models are then aggregated to form the mixture model of the whole spectrum. We compare the algorithm to existing algorithms for peak detection and demonstrate the improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the algorithm to real proteomic datasets of low and high resolution. PMID:26230717
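
    A toy rendition of the partition-then-fit idea may help make the algorithm concrete. The sketch below splits a spectrum at low-intensity valleys, fits a small Gaussian mixture to each fragment, and pools the components; the threshold, the per-fragment component count, and the resampling trick used to weight points by intensity are our assumptions, not details of the published algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_spectrum(mz, intensity, thresh=0.01, n_comp=2, n_draw=2000, seed=0):
    """Partition-then-fit sketch: split the spectrum where intensity falls
    below a small fraction of its maximum, fit a Gaussian mixture to each
    fragment (by resampling m/z values in proportion to intensity), then
    pool the components into one model for the whole spectrum."""
    rng = np.random.default_rng(seed)
    active = intensity > thresh * intensity.max()
    # label each contiguous run of active signal as a fragment
    frag_id = np.cumsum(np.diff(np.r_[0, active.astype(int)]) == 1) * active
    peaks = []
    for k in range(1, frag_id.max() + 1):
        sel = frag_id == k
        if sel.sum() < 5:          # too short to fit anything
            continue
        p = intensity[sel] / intensity[sel].sum()
        draws = rng.choice(mz[sel], size=n_draw, p=p).reshape(-1, 1)
        gm = GaussianMixture(n_components=n_comp, random_state=seed).fit(draws)
        frac = intensity[sel].sum() / intensity[active].sum()
        for w, m, v in zip(gm.weights_, gm.means_.ravel(),
                           gm.covariances_.ravel()):
            peaks.append((m, np.sqrt(v), w * frac))   # (center, width, weight)
    return sorted(peaks)

# toy spectrum with two well-separated peaks
mz = np.linspace(1000, 1100, 4000)
spec = np.exp(-((mz - 1020) / 0.8) ** 2) + 0.6 * np.exp(-((mz - 1065) / 1.2) ** 2)
print(gmm_spectrum(mz, spec))
```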

  17. Sedimentation dynamics and equilibrium profiles in multicomponent mixtures of colloidal particles.

    PubMed

    Spruijt, E; Biesheuvel, P M

    2014-02-19

    In this paper we give a general theoretical framework that describes the sedimentation of multicomponent mixtures of particles with sizes ranging from molecules to macroscopic bodies. Both equilibrium sedimentation profiles and the dynamic process of settling, or its converse, creaming, are modeled. Equilibrium profiles are found to be in perfect agreement with experiments. Our model reconciles two apparently contradicting points of view about buoyancy, thereby resolving a long-lived paradox about the correct choice of the buoyant density. On the one hand, the buoyancy force follows necessarily from the suspension density, as it relates to the hydrostatic pressure gradient. On the other hand, sedimentation profiles of colloidal suspensions can be calculated directly using the fluid density as apparent buoyant density in colloidal systems in sedimentation-diffusion equilibrium (SDE) as a result of balancing gravitational and thermodynamic forces. Surprisingly, this balance also holds in multicomponent mixtures. This analysis resolves the ongoing debate of the correct choice of buoyant density (fluid or suspension): both approaches can be used in their own domain. We present calculations of equilibrium sedimentation profiles and dynamic sedimentation that show the consequences of these insights. In bidisperse mixtures of colloids, particles with a lower mass density than the homogeneous suspension will first cream and then settle, whereas particles with a suspension-matched mass density form transient, bimodal particle distributions during sedimentation, which disappear when equilibrium is reached. In all these cases, the centers of the distributions of the particles with the lowest mass density of the two, regardless of their actual mass, will be located in equilibrium above the so-called isopycnic point, a natural consequence of their hard-sphere interactions. We include these interactions using the Boublik-Mansoori-Carnahan-Starling-Leland (BMCSL) equation of state. Finally, we demonstrate that our model is not limited to hard spheres, by extending it to charged spherical particles, and to dumbbells, trimers and short chains of connected beads.

  18. The Effects of Stress State on the Strain Hardening Behaviors of TWIP Steel

    NASA Astrophysics Data System (ADS)

    Liu, F.; Dan, W. J.; Zhang, W. G.

    2017-05-01

    Twinning-Induced Plasticity (TWIP) steels have received great attention due to their excellent mechanical properties, which result from austenite twinning during straining. In this paper, the effects of stress state on the strain hardening behaviors of Fe-20Mn-1.2C TWIP steel were studied. A twinning model accounting for the stress state is presented based on the shear-band framework, and a strain hardening model is proposed that takes the evolution of the dislocation mixture into account. The models were verified against experimental results from uniaxial tension, simple shear, and rolling processes, and the strain hardening behaviors of TWIP steel under different stress states were predicted. The results show that the stress state can promote austenite twinning and thereby benefit the strain hardening of TWIP steel.

  19. Risk Assessment in the 21st Century

    EPA Pesticide Factsheets

    For the past ~50 years, risk assessment has depended almost exclusively on animal testing for hazard identification and dose-response assessment. Although originally sound and effective, with the increasing reliance on chemical tools and the growing number of chemicals in commerce this traditional approach is no longer adequate. This presentation provides an update on current progress in achieving the goals outlined in the NAS report on Toxicology Testing in the 21st Century, highlighting many of the advances led by the EPA. Topics covered include the evolution of the mode-of-action framework into a chemically agnostic, adverse outcome pathway (AOP), a systems-based data framework that facilitates integration of modifiable factors (e.g., genetic variation, life stages) and an understanding of networks and mixtures. Further, the EDSP pivot is used to illustrate how AOPs drive the development of predictive models for risk assessment based on the assembly of high-throughput assays representing AOP key elements. The birth of computational exposure science, capable of large-scale predictive exposure models, is reviewed. Although still in its infancy, the development of non-targeted analysis to begin addressing the exposome is also presented. Finally, the systems-based AEP is described, which integrates exposure, toxicokinetics, and AOPs into a comprehensive framework.

  20. SI-traceable and dynamic reference gas mixtures for water vapour at polar and high troposphere atmospheric levels

    NASA Astrophysics Data System (ADS)

    Guillevic, Myriam; Pascale, Céline; Mutter, Daniel; Wettstein, Sascha; Niederhauser, Bernhard

    2017-04-01

    In the framework of METAS' AtmoChem-ECV project, new facilities are currently being developed to generate reference gas mixtures for water vapour at concentrations measured in the high troposphere and polar regions, in the range 1-20 µmol/mol (ppm). The generation method is dynamic (the mixture is produced continuously over time) and SI-traceable (i.e., the amount-of-substance fraction in mole per mole is traceable to the definition of the SI units). The generation process is composed of three successive steps. The first step is to purify the matrix gas, nitrogen or synthetic air. Second, this matrix gas is spiked with the pure substance using a permeation technique: a permeation device contains a few grams of pure water in liquid form and loses it linearly over time by permeation through a membrane. Third, to reach the desired concentration, the high-concentration mixture exiting the permeation chamber is diluted with a chosen flow of matrix gas in one or two subsequent dilution steps. All flows are controlled by mass flow controllers. All parts in contact with the gas mixture are passivated using coated surfaces, to reduce adsorption/desorption processes as much as possible. The mixture can then be used directly to calibrate an analyser. The standard mixture produced by METAS' dynamic setup was injected into a chilled mirror from MBW Calibration AG, the designated institute for absolute humidity calibration in Switzerland. The chilled mirror used, model 373LX, measures frost point and sample pressure and from these calculates the water vapour concentration. This intercomparison of the two systems was performed in the range 4-18 ppm water vapour in synthetic air, at two different pressure levels, 1013.25 hPa and 2000 hPa. We present here METAS' dynamic setup, its uncertainty budget, and the first results of the intercomparison with MBW's chilled mirror.
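
    The amount-of-substance fraction produced by such a permeation-plus-dilution chain follows from simple flow bookkeeping. The back-of-the-envelope sketch below reproduces the arithmetic for one permeation stage and one dilution stage; every number in it is invented for illustration and none is a METAS value.

```python
# Amount-of-substance fraction for a permeation source followed by one
# dilution stage, mirroring the generation chain described above.
M_H2O = 18.015e-3                 # molar mass of water, kg/mol
Vm = 22.414e-3                    # molar volume, m3/mol (0 degC, 1013.25 hPa)
ln_per_min = 1e-3 / 60 / Vm       # converts litres(normal)/min to mol/s

perm = 10e-9 / 60 / M_H2O         # 10 ug/min permeation loss -> mol/s water
flow1 = 0.2 * ln_per_min          # carrier through the permeation chamber
x1 = perm / (perm + flow1)        # rich mixture leaving the chamber

split, flow2 = 0.1, 0.9 * ln_per_min   # take 10% of the rich flow, dilute it
x2 = x1 * split * flow1 / (split * flow1 + flow2)
print(f"stage 1: {x1*1e6:.1f} umol/mol, stage 2: {x2*1e6:.2f} umol/mol")
```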

  1. Strangeness driven phase transitions in compressed baryonic matter and their relevance for neutron stars and core collapsing supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raduta, Ad. R.; Gulminelli, F.; Oertel, M.

    2015-02-24

    We discuss the thermodynamics of compressed baryonic matter with strangeness within non-relativistic mean-field models with effective interactions. The phase diagram of the full baryonic octet under strangeness equilibrium is built and discussed in connection with its relevance for core-collapse supernovae and neutron stars. A simplified framework corresponding to (n, p, Λ)(+e)-mixtures is employed in order to test the sensitivity of the existence of a phase transition to the (poorly constrained) interaction coupling constants, and the compatibility between important hyperonic abundances and 2 M⊙ neutron stars.

  2. Modeling of equilibrium hollow objects stabilized by electrostatics.

    PubMed

    Mani, Ethayaraja; Groenewold, Jan; Kegel, Willem K

    2011-05-18

    The equilibrium sizes of two very different kinds of hollow objects behave qualitatively differently with respect to certain experimental conditions. Yet, we show that they can be described within the same theoretical framework. The objects we consider are 'minivesicles' of ionic and nonionic surfactant mixtures, and shells of Keplerate-type polyoxometalates. In both systems, the finite size of the objects is a manifestation of electrostatic interactions. We emphasize the importance of constant-charge and constant-potential boundary conditions. Taking these conditions into account indeed leads to the experimentally observed, qualitatively different behavior of the equilibrium size of the objects.

  3. Incremental Learning With Selective Memory (ILSM): Towards Fast Prostate Localization for Image Guided Radiotherapy

    PubMed Central

    Gao, Yaozong; Zhan, Yiqiang

    2015-01-01

    Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment-guided radiotherapy, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to “personalize” the model to fit patient-specific appearance characteristics. The model is personalized with two steps: backward pruning that discards obsolete population-based knowledge and forward learning that incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively, compared to traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) this learning framework does not have any parametric model assumption, hence, allowing the adoption of any discriminative classifier; and 3) using ILSM, we can localize the prostate in treatment CTs accurately (DSC ∼0.89) and fast (∼4 s), which satisfies the real-world clinical requirements of IGRT. PMID:24495983

  4. Identifiability in N-mixture models: a large-scale screening test with bird data.

    PubMed

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
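
    The likelihood at the heart of this screening test can be sketched compactly: the latent abundance is summed out of a Poisson prior times a binomial observation model. The snippet below is a bare-bones illustration with no covariates and a Poisson mixture only; real analyses typically rely on dedicated software (e.g., the R package unmarked, or BUGS) and would include covariates and the NB/ZIP variants discussed above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def nmix_nll(theta, counts, n_max=150):
    """Negative log-likelihood of a Poisson binomial N-mixture model:
    latent abundance N ~ Poisson(lam), replicated counts ~ Binomial(N, p).
    The latent N is summed out site by site (n_max truncates the sum)."""
    lam = np.exp(theta[0])
    p = 1.0 / (1.0 + np.exp(-theta[1]))
    N = np.arange(n_max + 1)
    prior = poisson.pmf(N, lam)
    ll = 0.0
    for y in counts:
        ll += np.log((prior * binom.pmf(y[:, None], N, p).prod(axis=0)).sum())
    return -ll

# toy data: 50 sites, 3 visits, true lambda = 5, true p = 0.4
rng = np.random.default_rng(0)
N_true = rng.poisson(5, 50)
counts = rng.binomial(N_true[:, None], 0.4, size=(50, 3))
fit = minimize(nmix_nll, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
print("lambda:", np.exp(fit.x[0]), "p:", 1 / (1 + np.exp(-fit.x[1])))
```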

  5. Modelling carotid artery adaptations to dynamic alterations in pressure and flow over the cardiac cycle

    PubMed Central

    Cardamone, L.; Valentín, A.; Eberth, J. F.; Humphrey, J. D.

    2010-01-01

    Motivated by recent clinical and laboratory findings of important effects of pulsatile pressure and flow on arterial adaptations, we employ and extend an established constrained mixture framework of growth (change in mass) and remodelling (change in structure) to include such dynamical effects. New descriptors of cell and tissue behavior (constitutive relations) are postulated and refined based on new experimental data from a transverse aortic arch banding model in the mouse that increases pulsatile pressure and flow in one carotid artery. In particular, it is shown that there was a need to refine constitutive relations for the active stress generated by smooth muscle, to include both stress- and stress rate-mediated control of the turnover of cells and matrix and to account for a cyclic stress-mediated loss of elastic fibre integrity and decrease in collagen stiffness in order to capture the reported evolution, over 8 weeks, of luminal radius, wall thickness, axial force and in vivo axial stretch of the hypertensive mouse carotid artery. We submit, therefore, that complex aspects of adaptation by elastic arteries can be predicted by constrained mixture models wherein individual constituents are produced or removed at individual rates and to individual extents depending on changes in both stress and stress rate from normal values. PMID:20484365

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Siqin; Sheong, Fu Kit

    Reference interaction site model (RISM) has recently become a popular approach in the study of thermodynamic and structural properties of the solvent around macromolecules. On the other hand, it has been widely suggested that water density depletion exists around large hydrophobic solutes (>1 nm), and this may pose a great challenge to the RISM theory. In this paper, we develop a new analytical theory, the Reference Interaction Site Model with Hydrophobicity induced density Inhomogeneity (RISM-HI), to compute the solvent radial distribution function (RDF) around a large hydrophobic solute in water as well as in its mixture with other polyatomic organic solvents. To achieve this, we have explicitly considered the density inhomogeneity at the solute-solvent interface using the framework of the Yvon-Born-Green hierarchy, and the RISM theory is used to obtain the solute-solvent pair correlation. In order to efficiently solve the relevant equations while maintaining reasonable accuracy, we have also developed a new closure called the D2 closure. With this new theory, the solvent RDFs around a large hydrophobic particle in water and in different water-acetonitrile mixtures could be computed, and they agree well with the results of molecular dynamics simulations. Furthermore, we show that our RISM-HI theory can also efficiently compute the solvation free energy of solutes with a wide range of hydrophobicity in various water-acetonitrile solvent mixtures with reasonable accuracy. We anticipate that our theory could be widely applied to compute the thermodynamic and structural properties for the solvation of hydrophobic solutes.

  7. Chemistry of decomposition of freshwater wetland sedimentary organic material during ramped pyrolysis

    NASA Astrophysics Data System (ADS)

    Williams, E. K.; Rosenheim, B. E.

    2011-12-01

    Ramped pyrolysis methodology, such as that used in the programmed-temperature pyrolysis/combustion system (PTP/CS), improves radiocarbon analysis of geologic materials devoid of authigenic carbonate compounds and with low concentrations of extractable autochthonous organic molecules. The approach has improved sediment chronology in organic-rich sediments proximal to Antarctic ice shelves (Rosenheim et al., 2008) and constrained the carbon sequestration potential of suspended sediments in the lower Mississippi River (Roe et al., in review). Although ramped pyrolysis allows for separation of sedimentary organic material based upon relative reactivity, chemical information (i.e., the chemical composition of pyrolysis products) is lost during the in-line combustion of pyrolysis products. A first-order approximation of ramped pyrolysis/combustion system CO2 evolution, employing a simple Gaussian decomposition routine, has been useful (Rosenheim et al., 2008), but improvements may be possible. First, without prior compound-specific extractions, the molecular composition of sedimentary organic matter is unknown and/or unidentifiable. Second, even if determined as constituents of sedimentary organic material, many organic compounds have unknown or variable decomposition temperatures. Third, mixtures of organic compounds may result in significant chemistry within the pyrolysis reactor, prior to the introduction of oxygen along the flow path. Gaussian decomposition of the reaction rate may be too simple to fully explain the combination of these factors. To relate both the radiocarbon age over different temperature intervals and the pyrolysis reaction thermograph (temperature (°C) vs. CO2 evolved (μmol)) obtained from PTP/CS to the chemical composition of sedimentary organic material, we present a modeling framework developed based upon the ramped pyrolysis decomposition of simple mixtures of organic compounds (e.g., cellulose, lignin, plant fatty acids) often found in sedimentary organic material, to account for changes in thermograph shape. The decompositions will be compositionally verified by 13C NMR analysis of pyrolysis residues from interrupted reactions. This will allow for constraint of the decomposition temperatures of individual compounds as well as of chemical reactions between volatilized moieties in mixtures of these compounds. We will apply this framework with 13C NMR analysis of interrupted pyrolysis residues and radiocarbon data from PTP/CS analysis of sedimentary organic material from a freshwater marsh wetland in Barataria Bay, Louisiana. We expect to characterize the bulk chemical composition during pyrolysis as well as diagenetic changes with depth. Most importantly, we expect to constrain the potential and the limitations of this modeling framework for application to other depositional environments.
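
    The Gaussian decomposition routine referred to above amounts to a least-squares fit of a sum of Gaussians to the CO2-evolution thermograph. The sketch below performs such a fit on an invented two-component thermograph; all peak parameters and the noise level are illustrative, not measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(T, *p):
    """Sum of Gaussian components; p = (A1, mu1, s1, A2, mu2, s2, ...)."""
    out = np.zeros_like(T)
    for A, mu, s in zip(p[0::3], p[1::3], p[2::3]):
        out += A * np.exp(-((T - mu) / s) ** 2)
    return out

# toy thermograph: CO2 evolved vs. temperature, two overlapping components
T = np.linspace(200, 800, 300)
rng = np.random.default_rng(0)
y = gaussians(T, 5.0, 350, 40, 3.0, 550, 60) + rng.normal(0, 0.05, T.size)

p0 = [4, 330, 50, 4, 570, 50]          # rough initial guesses per component
popt, _ = curve_fit(gaussians, T, y, p0=p0)
print(np.round(popt, 1))               # recovered (A, mu, s) for each peak
```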

  8. Continuous Probabilistic Modeling of Tracer Stone Dispersal in Upper Regime

    NASA Astrophysics Data System (ADS)

    Hernandez Moreira, R. R.; Viparelli, E.

    2017-12-01

    Morphodynamic models that specifically account for the non-uniformity of the bed material are generally based on some form of the active layer approximation. These models have proven to be useful tools in the study of transport, erosion, and deposition of non-uniform bed material in the case of channel bed aggradation and degradation. However, when local spatial effects over time scales short compared to those characterizing the changes in mean bed elevation dominate the vertical sediment fluxes, as in the presence of bedforms, active layer models cannot capture key details of the sediment transport process. To overcome the limitations of active-layer-based models, Parker, Paola and Leclair (PPL) proposed a continuous probabilistic modeling framework in which the sediment exchange between the bedload transport and the mobile bed is described in terms of probability density functions of bed elevation, entrainment, and deposition. Here we present the implementation of a modified version of the PPL modeling framework for the study of tracer stone dispersal in upper-regime bedload transport conditions (i.e., upper-regime plane bed at the transition between dunes and antidunes, downstream-migrating antidunes, and upper-regime plane bed with bedload transport in sheet flow mode), in which the probability functions are based on measured time series of bed elevation fluctuations. The extension to the more general case of mixtures of sediments differing in size is a future development of this work.

  9. A probabilistic framework for microarray data analysis: fundamental probability models and statistical inference.

    PubMed

    Ogunnaike, Babatunde A; Gelmi, Claudio A; Edwards, Jeremy S

    2010-05-21

    Gene expression studies generate large quantities of data with the defining characteristic that the number of genes (whose expression profiles are to be determined) exceeds the number of available replicates by several orders of magnitude. Standard spot-by-spot analysis still seeks to extract useful information for each gene on the basis of the number of available replicates, and thus plays to the weakness of microarrays. On the other hand, because of the data volume, treating the entire data set as an ensemble, and developing theoretical distributions for these ensembles, provides a framework that plays instead to the strength of microarrays. We present theoretical results showing that, under reasonable assumptions, the distribution of microarray intensities follows the Gamma model, with the biological interpretations of the model parameters emerging naturally. We subsequently establish that for each microarray data set, the fractional intensities can be represented as a mixture of Beta densities, and we develop a procedure for using these results to draw statistical inference regarding differential gene expression. We illustrate the results with experimental data from gene expression studies on Deinococcus radiodurans following DNA damage, using cDNA microarrays. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  10. Experimental investigations and geochemical modelling of site-specific fluid-fluid and fluid-rock interactions in underground storage of CO2/H2/CH4 mixtures: the H2STORE project

    NASA Astrophysics Data System (ADS)

    De Lucia, Marco; Pilz, Peter

    2015-04-01

    Underground gas storage is increasingly regarded as a technically viable option for meeting the energy demand and the environmental targets of many industrialized countries. Besides long-term CO2 sequestration, energy can be chemically stored in the form of CO2/CH4/H2 mixtures, for example resulting from excess wind energy. Gas storage in salt caverns is nowadays a mature technology; in regions where favorable geologic structures such as salt diapirs are not available, however, gas storage can only be implemented in porous media such as depleted gas and oil reservoirs or suitable saline aquifers. In such settings, a significant amount of in-situ gas components such as CO2, CH4 (and N2) will always be present, making the CO2/CH4/H2 system of particular interest. A precise estimation of the impact of these gas mixtures on the mineralogical, geochemical, and petrophysical properties of specific reservoirs and caprocks is therefore crucial for site selection and optimization of storage depth. In the framework of the collaborative research project H2STORE, the feasibility of industrial-scale gas storage in porous media is being investigated for several potential siliciclastic depleted gas and oil reservoirs and suitable saline aquifers, by means of experiments and modelling on actual core materials from the evaluated sites. Among them are the Altmark depleted gas reservoir in Saxony-Anhalt and the Ketzin pilot site for CO2 storage in Brandenburg (Germany); further sites are located in the Molasse basin in South Germany and Austria. In particular, two work packages hosted at the German Research Centre for Geosciences (GFZ) focus on the fluid-fluid and fluid-rock interactions triggered by CO2, H2, and their mixtures. Laboratory experiments expose core samples to hydrogen and CO2/hydrogen mixtures under site-specific conditions (temperatures up to 200 °C and pressures up to 300 bar). The resulting qualitative and, where possible, quantitative data are expected to improve the precision of predictive geochemical and reactive transport modelling, which is also performed within the project. The combination of experiments, chemical and mineralogical analyses, and models is needed to improve knowledge about: (1) solubility models and mixing rules for multicomponent gas mixtures in highly saline formation fluids, since no literature data are available for H2-charged gas mixtures under the conditions expected at the potential sites; (2) the chemical reactivity of different mineral assemblages and formation fluids over a broad spectrum of P-T conditions and compositions of the stored gas mixtures; (3) the thermodynamics and kinetics of relevant reactions involving mineral dissolution or precipitation. The resulting improvement in site characterization and the overall enhancement in understanding of the potential processes will benefit the operational reliability, ecological tolerance, and economic efficiency of future energy storage plants, crucial aspects for public acceptance and for industrial investors.

  11. Concentration addition and independent action model: Which is better in predicting the toxicity for metal mixtures on zebrafish larvae.

    PubMed

    Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin

    2018-01-01

    The joint toxicity of chemical mixtures has emerged as a popular topic, particularly regarding the additive and potential synergistic actions of environmental mixtures. We investigated the 24-h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and the 96-h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models, which make different assumptions about the mode of toxic action in toxicodynamic processes, through single and binary metal mixture tests. Results showed that the CA and IA models had varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the mode of toxic action may depend on the combinations and concentrations of the tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions for the Cd-Pb mixtures, and slight antagonistic interactions for the Cu-Zn mixtures. These results illustrate that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Concentration Addition, Independent Action and Generalized Concentration Addition Models for Mixture Effect Prediction of Sex Hormone Synthesis In Vitro

    PubMed Central

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie

    2013-01-01

    Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906

  13. Mirrored continuum and molecular scale simulations of the ignition of gamma phase RDX

    NASA Astrophysics Data System (ADS)

    Stewart, D. Scott; Chaudhuri, Santanu; Joshi, Kaushik; Lee, Kibaek

    2017-01-01

    We describe the ignition of an explosive crystal of gamma-phase RDX due to a thermal hot spot with reactive molecular dynamics (RMD), using first-principles-trained, reactive force field based molecular potentials that represent an extremely complex reaction network. The RMD simulation is analyzed by sorting molecular product fragments into high and low molecular weight groups, to represent identifiable components that can be interpreted by a continuum model. A continuum model based on a Gibbs formulation has a single temperature and stress state for the mixture. The continuum simulation that mirrors the atomistic simulation allows us to study the atomistic simulation in the familiar physical chemistry framework and provides an essential continuum/atomistic link.

  14. Chained Bell Inequality Experiment with High-Efficiency Measurements

    NASA Astrophysics Data System (ADS)

    Tan, T. R.; Wan, Y.; Erickson, S.; Bierhorst, P.; Kienzler, D.; Glancy, S.; Knill, E.; Leibfried, D.; Wineland, D. J.

    2017-03-01

    We report correlation measurements on two 9Be+ ions that violate a chained Bell inequality obeyed by any local-realistic theory. The correlations can be modeled as derived from a mixture of a local-realistic probabilistic distribution and a distribution that violates the inequality. A statistical framework is formulated to quantify the local-realistic fraction allowable in the observed distribution without the fair-sampling or independent-and-identical-distributions assumptions. We exclude models of our experiment whose local-realistic fraction is above 0.327 at the 95% confidence level. This bound is significantly lower than 0.586, the minimum fraction derived from a perfect Clauser-Horne-Shimony-Holt inequality experiment. Furthermore, our data provide a device-independent certification of the deterministically created Bell states.

  15. Parametric Model of an Aerospike Rocket Engine

    NASA Technical Reports Server (NTRS)

    Korte, J. J.

    2000-01-01

    A suite of computer codes was assembled to simulate the performance of an aerospike engine and to generate the engine input for the Program to Optimize Simulated Trajectories. First an engine simulator module was developed that predicts the aerospike engine performance for a given mixture ratio, power level, thrust vectoring level, and altitude. This module was then used to rapidly generate the aerospike engine performance tables for axial thrust, normal thrust, pitching moment, and specific thrust. Parametric engine geometry was defined for use with the engine simulator module. The parametric model was also integrated into the iSIGHT multidisciplinary framework so that alternate designs could be determined. The computer codes were used to support in-house conceptual studies of reusable launch vehicle designs.

  17. A novel metal-organic framework for high storage and separation of acetylene at room temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Xing; Wang, Huizhen; Ji, Zhenguo

    2016-09-15

    A novel 3D microporous metal-organic framework with NbO topology, [Cu₂(L)(H₂O)₂]·(DMF)₆·(H₂O)₂ (ZJU-10, ZJU = Zhejiang University; H₄L = 2′-hydroxy-[1,1′:4′,1″-terphenyl]-3,3″,5,5″-tetracarboxylic acid; DMF = N,N-dimethylformamide), has been solvothermally synthesized and structurally characterized. With suitable pore sizes and open Cu²⁺ sites, the activated material ZJU-10a exhibits a high BET surface area of 2392 m²/g and a moderately high C₂H₂ gravimetric (volumetric) uptake capacity of 174 (132) cm³/g (cm³/cm³) at 298 K and 1 bar. The strong interaction of the open Cu²⁺ sites with acetylene molecules makes ZJU-10a a promising porous material for the separation of acetylene from methane and carbon dioxide gas mixtures at room temperature.

  18. Supramolecular binding and separation of hydrocarbons within a functionalized porous metal–organic framework

    DOE PAGES

    Yang, Sihai; Ramirez-Cuesta, Anibal J.; Newby, Ruth; ...

    2014-12-01

    Supramolecular interactions are fundamental to host–guest binding in many chemical and biological processes. Direct visualization of such supramolecular interactions within host–guest systems is extremely challenging, but crucial to understanding their function. In this paper, we report a comprehensive study that combines neutron scattering, synchrotron X-ray and neutron diffraction, and computational modelling to define, at a molecular level, the detailed binding of acetylene, ethylene and ethane within the porous host NOTT-300. This study reveals simultaneous and cooperative hydrogen bonding, π···π stacking interactions and intermolecular dipole interactions in the binding of acetylene and ethylene, giving up to 12 individual weak supramolecular interactions aligned within the host to form an optimal geometry for the selective binding of hydrocarbons. In addition, we also report the cooperative binding of a mixture of acetylene and ethylene within the porous host, together with the corresponding breakthrough experiments and analysis of adsorption isotherms of gas mixtures.

  19. To kill a kangaroo: understanding the decision to pursue high-risk/high-gain resources.

    PubMed

    Jones, James Holland; Bird, Rebecca Bliege; Bird, Douglas W

    2013-09-22

    In this paper, we attempt to understand hunter-gatherer foraging decisions about prey that vary in both the mean and variance of energy return using an expected utility framework. We show that for skewed distributions of energetic returns, the standard linear variance discounting (LVD) model for risk-sensitive foraging can produce quite misleading results. In addition to creating difficulties for the LVD model, the skewed distributions characteristic of hunting returns create challenges for estimating probability distribution functions required for expected utility. We present a solution using a two-component finite mixture model for foraging returns. We then use detailed foraging returns data based on focal follows of individual hunters in Western Australia hunting for high-risk/high-gain (hill kangaroo) and relatively low-risk/low-gain (sand monitor) prey. Using probability densities for the two resources estimated from the mixture models, combined with theoretically sensible utility curves characterized by diminishing marginal utility for the highest returns, we find that the expected utility of the sand monitors greatly exceeds that of kangaroos despite the fact that the mean energy return for kangaroos is nearly twice as large as that for sand monitors. We conclude that the decision to hunt hill kangaroos does not arise simply as part of an energetic utility-maximization strategy and that additional social, political or symbolic benefits must accrue to hunters of this highly variable prey.
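
    The paper's central contrast, a higher mean energy return paired with a lower expected utility, can be reproduced with a short Monte-Carlo sketch. The two-component mixtures and the concave utility below are invented stand-ins for the fitted models, chosen only so that the "kangaroo-like" prey has roughly twice the mean return of the "monitor-like" prey yet a much lower expected utility.

```python
import numpy as np
from scipy.stats import gamma

def expected_utility(weights, dists, u, n=100000, seed=0):
    """Monte-Carlo expected utility of a finite-mixture return distribution."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(len(weights), size=n, p=weights)
    draws = np.concatenate([d.rvs(size=(comp == i).sum(), random_state=rng)
                            for i, d in enumerate(dists)])
    return u(draws).mean()

u = np.log1p  # concave utility: diminishing marginal utility of big returns

# high-risk/high-gain: mostly near-zero returns, rare very large ones
kangaroo = ([0.95, 0.05], [gamma(a=1, scale=1), gamma(a=10, scale=5000)])
# low-risk/low-gain: reliable moderate returns
monitor = ([0.20, 0.80], [gamma(a=1, scale=1), gamma(a=5, scale=300)])

print("kangaroo EU:", expected_utility(*kangaroo, u))
print("monitor  EU:", expected_utility(*monitor, u))
```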

  20. Detecting Mixtures from Structural Model Differences Using Latent Variable Mixture Modeling: A Comparison of Relative Model Fit Statistics

    ERIC Educational Resources Information Center

    Henson, James M.; Reise, Steven P.; Kim, Kevin H.

    2007-01-01

    The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) [times] 3 (exogenous latent mean difference) [times] 3 (endogenous latent mean difference) [times] 3 (correlation between factors) [times] 3 (mixture proportions) factorial design. In addition, the efficacy of several…

  1. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia.
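
    The two-component normal mixture fitted in this paper is the textbook EM example, so a short sketch may be useful. The code below carries out the maximum likelihood fit via EM on invented two-regime data; it is a generic illustration, not the authors' code or data.

```python
import numpy as np
from scipy.stats import norm

def em_normal_mixture(x, n_iter=300, seed=0):
    """Maximum likelihood fit of a two-component normal mixture via EM."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, 2, replace=False)          # random initial means
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        dens = w * norm.pdf(x[:, None], mu, sd)       # (n, 2)
        r = dens / dens.sum(axis=1, keepdims=True)    # responsibilities
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

# toy "economic" data with two latent regimes
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1.0, 0.5, 400), rng.normal(1.5, 0.8, 600)])
print(em_normal_mixture(x))
```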

  2. A framework for developing research protocols for evaluation of microbial hazards and controls during production that pertain to the quality of agricultural water contacting fresh produce that may be consumed raw

    USDA-ARS?s Scientific Manuscript database

    Agricultural water may contact fresh produce during irrigation and/or when crop protection sprays (e.g., cooling to prevent sunburn, frost protection, and agrochemical mixtures) are applied. This document provides a framework for designing research studies that would add to our understanding of preh...

  3. Effective Connectivity Modeling for fMRI: Six Issues and Possible Solutions Using Linear Dynamic Systems

    PubMed Central

    Smith, Jason F.; Pillai, Ajay; Chen, Kewei; Horwitz, Barry

    2012-01-01

    Analysis of directionally specific or causal interactions between regions in functional magnetic resonance imaging (fMRI) data has proliferated. Here we identify six issues with existing effective connectivity methods that need to be addressed. The issues are discussed within the framework of linear dynamic systems for fMRI (LDSf). The first concerns the use of deterministic models to identify inter-regional effective connectivity. We show that deterministic dynamics are incapable of identifying the trial-to-trial variability typically investigated as the marker of connectivity, while stochastic models can capture this variability. The second concerns the simplistic (constant) connectivity modeled by most methods. Connectivity parameters of the LDSf model can vary at the same timescale as the input data. Further, extending LDSf to mixtures of multiple models provides more robust connectivity variation. The third concerns the correct identification of the network itself, including the number and anatomical origin of the network nodes. Augmentation of the LDSf state space can identify additional nodes of a network. The fourth concerns the locus of the signal used as a “node” in a network. A novel extension of LDSf incorporating sparse canonical correlations can select the most relevant voxels from an anatomically defined region based on connectivity. The fifth concerns connection interpretation. Individual parameter differences have received most attention. We present alternative network descriptors of connectivity changes which consider the whole network. The sixth concerns the temporal resolution of fMRI data relative to the timescale of the inter-regional interactions in the brain. LDSf includes an “instantaneous” connection term to capture connectivity occurring at timescales faster than the data resolution. The LDS framework can also be extended to statistically combine fMRI and EEG data. The LDSf framework is a promising foundation for effective connectivity analysis. PMID:22279430
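
    A toy sketch of the first issue (deterministic versus stochastic dynamics) under invented parameters: both models below share the same connectivity matrix, but only the stochastic state equation produces the ongoing trial-to-trial variability that the authors argue is the actual target of effective connectivity analysis.

    ```python
    # Deterministic vs stochastic linear dynamic system (illustrative values).
    import numpy as np

    rng = np.random.default_rng(2)
    A = np.array([[0.9, 0.1],   # state transition: region 2 drives region 1
                  [0.0, 0.8]])
    Q = 0.05 * np.eye(2)        # state-noise covariance for the stochastic case
    T = 200

    def simulate(state_noise):
        x = np.zeros((T, 2))
        x[0] = 1.0              # common initial condition
        for t in range(1, T):
            noise = rng.multivariate_normal(np.zeros(2), Q) if state_noise else 0.0
            x[t] = A @ x[t - 1] + noise
        return x

    det = simulate(False)       # decays to rest: no trial-to-trial variability
    sto = simulate(True)        # keeps fluctuating: variability to be modeled
    print("deterministic variance %.4f, stochastic variance %.4f"
          % (det[50:].var(), sto[50:].var()))
    ```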

  4. Automated Assessment of Disease Progression in Acute Myeloid Leukemia by Probabilistic Analysis of Flow Cytometry Data

    PubMed Central

    Rajwa, Bartek; Wallace, Paul K.; Griffiths, Elizabeth A.; Dundar, Murat

    2017-01-01

    Objective Flow cytometry (FC) is a widely acknowledged technology in diagnosis of acute myeloid leukemia (AML) and has been indispensable in determining progression of the disease. Although FC plays a key role as a post-therapy prognosticator and evaluator of therapeutic efficacy, the manual analysis of cytometry data is a barrier to optimization of reproducibility and objectivity. This study investigates the utility of our recently introduced non-parametric Bayesian framework in accurately predicting the direction of change in disease progression in AML patients using FC data. Methods The highly flexible non-parametric Bayesian model based on the infinite mixture of infinite Gaussian mixtures is used for jointly modeling data from multiple FC samples to automatically identify functionally distinct cell populations and their local realizations. Phenotype vectors are obtained by characterizing each sample by the proportions of recovered cell populations, which are in turn used to predict the direction of change in disease progression for each patient. Results We used 200 diseased and non-diseased immunophenotypic panels for training and tested the system with 36 additional AML cases collected at multiple time points. The proposed framework identified the change in direction of disease progression with accuracies of 90% (9 out of 10) for relapsing cases and 100% (26 out of 26) for the remaining cases. Conclusions We believe that these promising results are an important first step towards the development of automated predictive systems for disease monitoring and continuous response evaluation. Significance Automated measurement and monitoring of therapeutic response is critical not only for objective evaluation of disease status prognosis but also for timely assessment of treatment strategies. PMID:27416585
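
    The authors' infinite mixture of infinite Gaussian mixtures requires specialized inference; as a simplified stand-in, the sketch below uses scikit-learn's truncated Dirichlet-process Gaussian mixture on synthetic two-marker data to recover cell populations and build the kind of proportion-based phenotype vector the paper describes. All data and settings are illustrative.

    ```python
    # Simplified stand-in for the paper's non-parametric Bayesian model.
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(3)
    # synthetic "flow cytometry" sample: three cell populations in two markers
    cells = np.vstack([rng.normal([2, 2], 0.3, (500, 2)),
                       rng.normal([6, 1], 0.4, (300, 2)),
                       rng.normal([4, 5], 0.3, (200, 2))])

    dpgmm = BayesianGaussianMixture(
        n_components=10,                                  # truncation level
        weight_concentration_prior_type="dirichlet_process",
        random_state=0).fit(cells)
    labels = dpgmm.predict(cells)

    # phenotype vector: proportion of cells in each recovered population
    props = np.bincount(labels, minlength=10) / len(cells)
    print("population proportions:", np.round(props[props > 0.01], 3))
    ```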

  5. Labyrinthine flows across multilayer graphene-based membranes

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki

    Graphene-based materials have recently found extremely wide applications for fluidic purposes thanks to remarkable developments in micro-/nano-fabrication techniques. In particular, high permeability and specific selectivity have been reported for these graphene-based membranes, such as the graphene-oxide membranes, with however controversial experimental results. There is therefore an urgent need to propose a theoretical framework of fluid transport in these architectures in order to rationalize the experimental results.In this presentation, we report a theoretical study of mass transport across multilayer graphene based membranes, which we benchmark by atomic-scale molecular dynamics. Specifically, we consider the water flow across multiple graphene layers with an inter-layer distance ranging from sub-nanometer to a few nanometers. The graphene layers have nanoslits aligned in a staggered fashion, and thus the water flows involve multiple twists and turns. We compare the continuum model predictions for the permeability with the lattice Boltzmann calculations and molecular dynamics simulations. The highlight is that, in spite of extreme confinement, the permeability across the graphene-based membrane is quantitatively predicted on the basis of a properly designed continuum model. The framework of this study constitutes a benchmark to which we compare favourably published experimental data.In addition, flow properties of a water-ethanol mixture are presented, demonstrating the possibility of a novel separation technique. While the membrane is permeable to both pure liquids, it exhibits a counter-intuitive ``self-semi-permeability'' to water in the presence of the mixture. This suggests a robust and versatile membrane-based separation method built on a pressure-driven reverse-osmosis process, which is considerably less energy consuming than distillation processes. The author acknowledges the ERC project Micromegas and the ANR projects BlueEnergy and Equip@Meso.

  6. Predicting herbicide mixture effects on multiple algal species using mixture toxicity models.

    PubMed

    Nagai, Takashi

    2017-10-01

    The validity of the application of mixture toxicity models, concentration addition and independent action, to a species sensitivity distribution (SSD) for calculation of a multisubstance potentially affected fraction was examined in laboratory experiments. Toxicity assays of herbicide mixtures using 5 species of periphytic algae were conducted. Two mixture experiments were designed: a mixture of 5 herbicides with similar modes of action and a mixture of 5 herbicides with dissimilar modes of action, corresponding to the assumptions of the concentration addition and independent action models, respectively. Experimentally obtained mixture effects on 5 algal species were converted to the fraction of affected (>50% effect on growth rate) species. The predictive ability of the concentration addition and independent action models with direct application to SSD depended on the mode of action of chemicals. That is, prediction was better for the concentration addition model than the independent action model for the mixture of herbicides with similar modes of action. In contrast, prediction was better for the independent action model than the concentration addition model for the mixture of herbicides with dissimilar modes of action. Thus, the concentration addition and independent action models could be applied to SSD in the same manner as for a single-species effect. The present study to validate the application of the concentration addition and independent action models to SSD supports the usefulness of the multisubstance potentially affected fraction as the index of ecological risk. Environ Toxicol Chem 2017;36:2624-2630. © 2017 SETAC.
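
    The two reference models can be stated compactly. For a mixture of substances with known concentration-response curves, independent action multiplies response probabilities, while concentration addition finds the effect level at which the summed toxic units equal one. The sketch below assumes hypothetical log-logistic curves; the EC50s, slope, and concentrations are invented, not the assay values.

    ```python
    # Illustrative CA and IA mixture predictions (all parameters invented).
    import numpy as np
    from scipy.optimize import brentq

    EC50 = np.array([1.0, 2.0, 4.0])    # hypothetical EC50s (mg/L)
    slope = 2.0                          # shared log-logistic slope
    conc = np.array([0.3, 0.6, 1.2])    # concentrations in the mixture (mg/L)

    def effect(c, ec50):                 # single-substance effect in [0, 1]
        return 1.0 / (1.0 + (ec50 / c) ** slope)

    def ecx(e, ec50):                    # inverse: concentration giving effect e
        return ec50 * (e / (1.0 - e)) ** (1.0 / slope)

    # Independent action: effect probabilities combine multiplicatively.
    E_ia = 1.0 - np.prod(1.0 - effect(conc, EC50))

    # Concentration addition: solve sum_i c_i / ECx_i(E) = 1 for E.
    E_ca = brentq(lambda e: np.sum(conc / ecx(e, EC50)) - 1.0, 1e-9, 1 - 1e-9)

    print("IA predicted effect: %.3f, CA predicted effect: %.3f" % (E_ia, E_ca))
    ```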

  7. Information Geometry for Landmark Shape Analysis: Unifying Shape Representation and Deformation

    PubMed Central

    Peter, Adrian M.; Rangarajan, Anand

    2010-01-01

    Shape matching plays a prominent role in the comparison of similar structures. We present a unifying framework for shape matching that uses mixture models to couple both the shape representation and deformation. The theoretical foundation is drawn from information geometry wherein information matrices are used to establish intrinsic distances between parametric densities. When a parameterized probability density function is used to represent a landmark-based shape, the modes of deformation are automatically established through the information matrix of the density. We first show that given two shapes parameterized by Gaussian mixture models (GMMs), the well-known Fisher information matrix of the mixture model is also a Riemannian metric (actually, the Fisher-Rao Riemannian metric) and can therefore be used for computing shape geodesics. The Fisher-Rao metric has the advantage of being an intrinsic metric and invariant to reparameterization. The geodesic—computed using this metric—establishes an intrinsic deformation between the shapes, thus unifying both shape representation and deformation. A fundamental drawback of the Fisher-Rao metric is that it is not available in closed form for the GMM. Consequently, shape comparisons are computationally very expensive. To address this, we develop a new Riemannian metric based on generalized ϕ-entropy measures. In sharp contrast to the Fisher-Rao metric, the new metric is available in closed form. Geodesic computations using the new metric are considerably more efficient. We validate the performance and discriminative capabilities of these new information geometry-based metrics by pairwise matching of corpus callosum shapes. We also study the deformations of fish shapes that have various topological properties. A comprehensive comparative analysis is also provided using other landmark-based distances, including the Hausdorff distance, the Procrustes metric, landmark-based diffeomorphisms, and the bending energies of the thin-plate (TPS) and Wendland splines. PMID:19110497

  8. LiBSi2: a tetrahedral semiconductor framework from boron and silicon atoms bearing lithium atoms in the channels.

    PubMed

    Zeilinger, Michael; van Wüllen, Leo; Benson, Daryn; Kranak, Verina F; Konar, Sumit; Fässler, Thomas F; Häussermann, Ulrich

    2013-06-03

    Silicon swallows up boron: The novel open tetrahedral framework structure (OTF) of the Zintl phase LiBSi2 was made by applying high pressure to a mixture of LiB and elemental silicon. The compound represents a new topology in the B-Si net (called tum), which hosts Li atoms in the channels. LiBSi2 is the first example where B and Si atoms form an ordered common framework structure with B engaged exclusively in heteronuclear B-Si contacts.

  9. Investigation of hydrometeor classification uncertainties through the POLARRIS polarimetric radar simulator

    NASA Astrophysics Data System (ADS)

    Dolan, B.; Rutledge, S. A.; Barnum, J. I.; Matsui, T.; Tao, W. K.; Iguchi, T.

    2017-12-01

    POLarimetric Radar Retrieval and Instrument Simulator (POLARRIS) is a framework that has been developed to simulate radar observations from cloud resolving model (CRM) output and subject model data and observations to the same retrievals, analysis and visualization. This framework not only enables validation of bulk microphysical model simulated properties, but also offers an opportunity to study the uncertainties associated with retrievals such as hydrometeor classification (HID). For the CSU HID, membership beta functions (MBFs) are built using a set of simulations with realistic microphysical assumptions about axis ratio, density, canting angles, and size distributions for each of ten hydrometeor species. These assumptions are tested using POLARRIS to understand their influence on the resulting simulated polarimetric data and final HID classification. Several of these parameters (density, size distributions) are set by the model microphysics, and therefore the specific assumptions of axis ratio and canting angle are carefully studied. Through these sensitivity studies, we hope to be able to provide uncertainties in retrieved polarimetric variables and HID as applied to CRM output. HID retrievals assign a classification to each point by determining the highest score, thereby identifying the dominant hydrometeor type within a volume. However, in nature, there is rarely just a single hydrometeor type at a particular point. Models allow for mixing ratios of different hydrometeors within a grid point. We use the mixing ratios from CRM output in concert with the HID scores and classifications to understand how the HID algorithm can provide information about mixtures within a volume, as well as to calculate a confidence in the classifications. We leverage the POLARRIS framework to additionally probe radar wavelength differences toward the possibility of a multi-wavelength HID which could utilize the strengths of different wavelengths to improve HID classifications. With these uncertainties and algorithm improvements, cases of convection are studied in a continental (Oklahoma) and a maritime (Darwin, Australia) regime. Observations from C-band polarimetric data in both locations are compared to CRM simulations from NU-WRF using the POLARRIS framework.
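
    One common form of fuzzy-logic HID scoring uses one-dimensional membership beta functions per radar variable and species, aggregated into a per-species score; the gate is assigned the top-scoring species. The sketch below illustrates the mechanics only, with invented MBF centers and widths rather than the CSU parameter set.

    ```python
    # Fuzzy-logic HID scoring sketch; MBF parameters are invented placeholders.
    import numpy as np

    def mbf(x, m, a, b=1.0):
        """Beta membership function: 1 at x = m, falling off with width a."""
        return 1.0 / (1.0 + (((x - m) / a) ** 2) ** b)

    # hypothetical (center, width) per species and radar variable
    params = {
        "rain": {"Zh": (35.0, 10.0), "Zdr": (1.5, 1.0)},
        "hail": {"Zh": (55.0, 8.0),  "Zdr": (0.0, 0.8)},
    }

    obs = {"Zh": 52.0, "Zdr": 0.4}      # observations at one radar gate
    scores = {sp: np.mean([mbf(obs[v], *params[sp][v]) for v in obs])
              for sp in params}
    print(scores, "->", max(scores, key=scores.get))
    ```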

  10. Genome-Wide Transcription Profiles Reveal Genotype-Dependent Responses of Biological Pathways and Gene-Families in Daphnia Exposed to Single and Mixed Stressors

    PubMed Central

    2015-01-01

    The present study investigated the possibilities and limitations of implementing a genome-wide transcription-based approach that takes into account genetic and environmental variation to better understand the response of natural populations to stressors. When exposing two different Daphnia pulex genotypes (a cadmium-sensitive and a cadmium-tolerant one) to cadmium, the toxic cyanobacterium Microcystis aeruginosa, and their mixture, we found that observations at the transcriptomic level do not always explain observations at a higher level (growth, reproduction). For example, although cadmium elicited an adverse effect at the organismal level, almost no genes were differentially expressed after cadmium exposure. In addition, we identified oxidative stress and polyunsaturated fatty acid metabolism-related pathways, as well as the trypsin and neurexin IV gene-families, as candidates for the underlying causes of genotypic differences in tolerance to Microcystis. Furthermore, the whole-genome transcriptomic data of a stressor mixture allowed a better understanding of mixture responses by evaluating interactions between the two stressors at the gene-expression level against the independent action baseline model. This approach indicated that the ubiquinone pathway and the MAPK serine-threonine protein kinase and collagen gene-families were enriched with genes showing an interactive effect in their expression response to the mixture of stressors, while transcription and translation-related pathways and gene-families were mostly related to genotypic differences in interactive responses to this mixture. Collectively, our results indicate that the methods we employed may improve further characterization of the possibilities and limitations of transcriptomics approaches in the adverse outcome pathway framework and in predictions of multistressor effects on natural populations. PMID:24552364

  11. Diffusion of Magnetized Binary Ionic Mixtures at Ultracold Plasma Conditions

    NASA Astrophysics Data System (ADS)

    Vidal, Keith R.; Baalrud, Scott D.

    2017-10-01

    Ultracold plasma experiments offer an accessible means to test transport theories for strongly coupled systems. Application of an external magnetic field might further increase their utility by inhibiting heating mechanisms of ions and electrons and increasing the temperature at which strong coupling effects are observed. We present results focused on developing and validating a transport theory to describe binary ionic mixtures across a wide range of coupling and magnetization strengths relevant to ultracold plasma experiments. The transport theory is an extension of the Effective Potential Theory (EPT), which has been shown to accurately model correlation effects at these conditions, to include magnetization. We focus on diffusion as it can be measured in ultracold plasma experiments. Using EPT within the framework of the Chapman-Enskog expansion, the parallel and perpendicular self and interdiffusion coefficients for binary ionic mixtures with varying mass ratios are calculated and are compared to molecular dynamics simulations. The theory is found to accurately extend Braginskii-like transport to stronger coupling, but to break down when the magnetization strength becomes large enough that the typical gyroradius is smaller than the interaction scale length. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-16-1-0221.

  12. The Influence of Intrinsic Framework Flexibility on Adsorption in Nanoporous Materials

    DOE PAGES

    Witman, Matthew; Ling, Sanliang; Jawahery, Sudi; ...

    2017-03-30

    For applications of metal–organic frameworks (MOFs) such as gas storage and separation, flexibility is often seen as a parameter that can tune material performance. In this work we aim to determine the optimal flexibility for the shape selective separation of similarly sized molecules (e.g., Xe/Kr mixtures). To obtain systematic insight into how the flexibility impacts this type of separation, we develop a simple analytical model that predicts a material’s Henry regime adsorption and selectivity as a function of flexibility. We elucidate the complex dependence of selectivity on a framework’s intrinsic flexibility whereby performance is either improved or reduced with increasing flexibility, depending on the material’s pore size characteristics. However, the selectivity of a material with the pore size and chemistry that already maximizes selectivity in the rigid approximation is continuously diminished with increasing flexibility, demonstrating that the globally optimal separation exists within an entirely rigid pore. Molecular simulations show that our simple model predicts performance trends that are observed when screening the adsorption behavior of flexible MOFs. These flexible simulations provide better agreement with experimental adsorption data in a high-performance material than is captured when modeling this framework as rigid, an approximation typically made in high-throughput screening studies. We conclude that, for shape selective adsorption applications, the globally optimal material will have the optimal pore size/chemistry and minimal intrinsic flexibility, even though other nonoptimal materials’ selectivity can actually be improved by flexibility. Equally important, we find that flexible simulations can be critical for correctly modeling adsorption in these types of systems.

  13. The Influence of Intrinsic Framework Flexibility on Adsorption in Nanoporous Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witman, Matthew; Ling, Sanliang; Jawahery, Sudi

    For applications of metal–organic frameworks (MOFs) such as gas storage and separation, flexibility is often seen as a parameter that can tune material performance. In this work we aim to determine the optimal flexibility for the shape selective separation of similarly sized molecules (e.g., Xe/Kr mixtures). To obtain systematic insight into how the flexibility impacts this type of separation, we develop a simple analytical model that predicts a material’s Henry regime adsorption and selectivity as a function of flexibility. We elucidate the complex dependence of selectivity on a framework’s intrinsic flexibility whereby performance is either improved or reduced with increasing flexibility, depending on the material’s pore size characteristics. However, the selectivity of a material with the pore size and chemistry that already maximizes selectivity in the rigid approximation is continuously diminished with increasing flexibility, demonstrating that the globally optimal separation exists within an entirely rigid pore. Molecular simulations show that our simple model predicts performance trends that are observed when screening the adsorption behavior of flexible MOFs. These flexible simulations provide better agreement with experimental adsorption data in a high-performance material than is captured when modeling this framework as rigid, an approximation typically made in high-throughput screening studies. We conclude that, for shape selective adsorption applications, the globally optimal material will have the optimal pore size/chemistry and minimal intrinsic flexibility, even though other nonoptimal materials’ selectivity can actually be improved by flexibility. Equally important, we find that flexible simulations can be critical for correctly modeling adsorption in these types of systems.

  14. Estimating statistical power for open-enrollment group treatment trials.

    PubMed

    Morgan-Lopez, Antonio A; Saavedra, Lissette M; Hien, Denise A; Fals-Stewart, William

    2011-01-01

    Modeling turnover in group membership has been identified as a key barrier contributing to a disconnect between the manner in which behavioral treatment is conducted (open-enrollment groups) and the designs of substance abuse treatment trials (closed-enrollment groups, individual therapy). Latent class pattern mixture models (LCPMMs) are emerging tools for modeling data from open-enrollment groups with membership turnover in recently proposed treatment trials. The current article illustrates an approach to conducting power analyses for open-enrollment designs based on Monte Carlo simulation of LCPMMs, using parameters derived from published data from a randomized controlled trial comparing Seeking Safety to a Community Care condition for women presenting with comorbid posttraumatic stress disorder and substance use disorders. The example addresses discrepancies between the analysis framework assumed in power analyses of many recently proposed open-enrollment trials and the proposed use of LCPMMs for data analysis. Copyright © 2011 Elsevier Inc. All rights reserved.
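
    The general Monte Carlo power recipe is: simulate many datasets under assumed population parameters, run the planned analysis on each, and report the proportion of replicates that reject the null. The sketch below shows the logic with a simple two-arm comparison standing in for the LCPMM fit; the sample size, effect size, and alpha are illustrative, not the trial's values.

    ```python
    # Generic Monte Carlo power estimation (simplified stand-in analysis).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n_per_arm, effect_size, n_sims, alpha = 60, 0.5, 2000, 0.05

    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treatment = rng.normal(effect_size, 1.0, n_per_arm)
        t, p = stats.ttest_ind(treatment, control)   # the planned analysis
        rejections += p < alpha

    print("estimated power: %.3f" % (rejections / n_sims))
    ```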

  15. Models of chromatin spatial organisation in the cell nucleus

    NASA Astrophysics Data System (ADS)

    Nicodemi, Mario

    2014-03-01

    In the cell nucleus chromosomes have a complex architecture serving vital functional purposes. Recent experiments have started unveiling the genome-wide interaction map of DNA sites, revealing different levels of organisation at different scales. The principles that orchestrate such a complex 3D structure, though, remain mysterious. I will give an overview of the scenario emerging from classical polymer physics models of general aspects of chromatin spatial organisation. The available experimental data, which can be rationalised in a single framework, support a picture where chromatin is a complex mixture of differently folded regions, self-organised across spatial scales according to basic physical mechanisms. I will also discuss applications to specific DNA loci, e.g. the HoxB locus, where models informed with biological details, and tested against targeted experiments, can help identify the determinants of folding.

  16. Controlling the position of a stabilized detonation wave in a supersonic gas mixture flow in a plane channel

    NASA Astrophysics Data System (ADS)

    Levin, V. A.; Zhuravskaya, T. A.

    2017-03-01

    Stabilization of a detonation wave in a stoichiometric hydrogen-air mixture flowing at a supersonic velocity into a plane symmetric channel with constriction has been studied in the framework of a detailed kinetic mechanism of the chemical interaction. Conditions ensuring the formation of a thrust-producing flow with a stabilized detonation wave in the channel are determined. The influence of the inflow Mach number, dustiness of the combustible gas mixture supplied to the channel, and output cross-section size on the position of a stabilized detonation wave in the flow has been analyzed with a view to increasing the efficiency of detonation combustion of the gas mixture. It is established that thrust-producing flow with a stabilized detonation wave can be formed in the channel without any energy consumption.

  17. Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC

    ERIC Educational Resources Information Center

    Depaoli, Sarah

    2012-01-01

    Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…

  18. Reconstructing 3D Face Model with Associated Expression Deformation from a Single Face Image via Constructing a Low-Dimensional Expression Deformation Manifold.

    PubMed

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-10-01

    Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm.

  19. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
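
    A minimal sketch of the idea, under invented kinetics: two subpopulations share a one-state ODE model but differ in a rate constant, and single-cell measurements at each time point are scored against a two-component mixture whose component means are the ODE solutions. The rates, weight, and noise level are assumptions for illustration, not the paper's pathway model.

    ```python
    # ODE-constrained mixture sketch with invented kinetics and parameters.
    import numpy as np
    from scipy.integrate import solve_ivp

    def ode(t, y, k):                    # simple saturating activation kinetics
        return k * (1.0 - y)

    def traj(k, times):                  # ODE solution = mixture-component mean
        sol = solve_ivp(ode, (0.0, times[-1]), [0.0], args=(k,), t_eval=times)
        return sol.y[0]

    times = np.linspace(0.0, 10.0, 6)
    k_fast, k_slow, w, sigma = 1.0, 0.2, 0.4, 0.05
    mu_fast, mu_slow = traj(k_fast, times), traj(k_slow, times)

    def normal_pdf(x, m, s):
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    def loglik(data):                    # data: (n_cells, n_times) measurements
        dens = (w * normal_pdf(data, mu_fast, sigma)
                + (1 - w) * normal_pdf(data, mu_slow, sigma))
        return np.log(dens).sum()

    # simulate 200 cells, each belonging to one subpopulation at all times
    rng = np.random.default_rng(5)
    cells = np.where(rng.random((200, 1)) < w,
                     rng.normal(mu_fast, sigma, (200, 6)),
                     rng.normal(mu_slow, sigma, (200, 6)))
    print("log-likelihood at true parameters: %.1f" % loglik(cells))
    ```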

  20. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. Bayesian methods are widely used because their asymptotic properties provide remarkable results; they also exhibit a consistency characteristic, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion, since specifying the wrong number of components may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.
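
    A sketch of the component-selection step using the Bayesian Information Criterion, here with a maximum likelihood fit on synthetic data (scikit-learn) rather than the paper's Bayesian fit to the rubber and stock price series: the candidate k minimizing BIC is retained.

    ```python
    # BIC-based choice of the number of mixture components (illustrative data).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(6)
    x = np.concatenate([rng.normal(-1, 0.5, 300),
                        rng.normal(2, 1.0, 700)]).reshape(-1, 1)

    for k in range(1, 5):
        gm = GaussianMixture(n_components=k, random_state=0).fit(x)
        print("k=%d  BIC=%.1f" % (k, gm.bic(x)))
    # the k with the smallest BIC is kept for the final mixture fit
    ```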

  1. Accounting for non-independent detection when estimating abundance of organisms with a Bayesian approach

    USGS Publications Warehouse

    Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth

    2011-01-01

    Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, affecting the non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
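
    The mechanism behind the overestimation can be shown in a few lines: redrawing the detection probability per survey (a beta-binomial with intra-class correlation rho) inflates the variance of repeated counts relative to the independent-detection binomial with the same mean, and a binomial mixture model absorbs that extra variance as extra abundance. The parameters below are illustrative, not manatee-survey estimates.

    ```python
    # Binomial vs beta-binomial (correlated) detection counts, invented values.
    import numpy as np

    rng = np.random.default_rng(7)
    N, p, rho, n_surveys = 50, 0.4, 0.3, 100000

    # beta-binomial: per-survey detection probability from a Beta distribution
    # with mean p and intra-class correlation rho
    a, b = p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho
    p_survey = rng.beta(a, b, n_surveys)
    bb_counts = rng.binomial(N, p_survey)          # correlated detections
    bin_counts = rng.binomial(N, p, n_surveys)     # independent detections

    print("binomial      mean %.1f var %.1f" % (bin_counts.mean(), bin_counts.var()))
    print("beta-binomial mean %.1f var %.1f" % (bb_counts.mean(), bb_counts.var()))
    ```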

  2. Molecule database framework: a framework for creating database applications with chemical structure search capability

    PubMed Central

    2013-01-01

    Background Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls, so software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Results Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: support for multi-component compounds (mixtures); import and export of SD-files; and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. Conclusions By using a simple web application it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-file import and export into simple method calls. The framework offers good search performance on a standard laptop without any database tuning, in part because chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's web page on bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework. PMID:24325762

  3. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    PubMed

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls, so software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: support for multi-component compounds (mixtures); import and export of SD-files; and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-file import and export into simple method calls. The framework offers good search performance on a standard laptop without any database tuning, in part because chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's web page on bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework.

  4. Theory and methodology for utilizing genes as biomarkers to determine potential biological mixtures.

    PubMed

    Shrestha, Sadeep; Smith, Michael W; Beaty, Terri H; Strathdee, Steffanie A

    2005-01-01

    Genetically determined mixture information can be used as a surrogate for physical or behavioral characteristics in epidemiological studies examining research questions related to socially stigmatized behaviors and horizontally transmitted infections. A new measure, the probability of mixture discrimination (PMD), was developed to aid mixture analysis that estimates the ability to differentiate single from multiple genomes in biological mixtures. Four autosomal short tandem repeats (STRs) were identified, genotyped and evaluated in African American, European American, Hispanic, and Chinese individuals to estimate PMD. Theoretical PMD frameworks were also developed for autosomal and sex-linked (X and Y) STR markers in potential male/male, male/female and female/female mixtures. Autosomal STRs genetically determine the presence of multiple genomes in mixture samples of unknown genders with more power than the apparently simpler X and Y chromosome STRs. Evaluation of four autosomal STR loci enables the detection of mixtures of DNA from multiple sources with above 99% probability in all four racial/ethnic populations. The genetic-based approach has applications in epidemiology that provide viable alternatives to survey-based study designs. The analysis of genes as biomarkers can be used as a gold standard for validating measurements from self-reported behaviors that tend to be sensitive or socially stigmatizing, such as those involving sex and drugs.

  5. Inferring network structure in non-normal and mixed discrete-continuous genomic data.

    PubMed

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2018-03-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.

  6. Inferring network structure in non-normal and mixed discrete-continuous genomic data

    PubMed Central

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2017-01-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. PMID:28437848

  7. Considering the cumulative risk of mixtures of chemicals – A challenge for policy makers

    PubMed Central

    2012-01-01

    Background The current paradigm for the assessment of the health risk of chemical substances focuses primarily on the effects of individual substances for determining the doses of toxicological concern in order to appropriately inform the regulatory process. These policy instruments place varying requirements on health and safety data of chemicals in the environment. REACH focuses on the safety of individual substances; yet all the other facets of public health policy that relate to chemical stressors put emphasis on the effects of combined exposure to mixtures of chemical and physical agents. This emphasis brings about methodological problems linked to the complexity of the respective exposure pathways; the effect (more complex than simple additivity) of mixtures (the so-called 'cocktail effect'); dose extrapolation, i.e. the extrapolation of the validity of dose-response data to dose ranges that extend beyond the levels used for the derivation of the original dose-response relationship; the integrated use of toxicity data across species (including human clinical, epidemiological and biomonitoring data); and variation in inter-individual susceptibility associated with both genetic and environmental factors. Methods In this paper we give an overview of the main methodologies available today to estimate the human health risk of environmental chemical mixtures, ranging from dose addition to independent action, and from ignoring interactions among the mixture constituents to modelling their biological fate taking into account the biochemical interactions affecting both internal exposure and the toxic potency of the mixture. Results We discuss their applicability, possible options available to policy makers and the difficulties and potential pitfalls in implementing these methodologies within the existing policy framework of the European Union. Finally, we suggest a pragmatic solution for policy/regulatory action that would facilitate the evaluation of the health effects of chemical mixtures in the environment and consumer products. Conclusions One universally applicable methodology does not yet exist. Therefore, a pragmatic, tiered approach to regulatory risk assessment of chemical mixtures is suggested, encompassing (a) the use of dose addition to calculate a hazard index that takes into account interactions among mixture components; and (b) the use of the connectivity approach in data-rich situations to integrate mechanistic knowledge at different scales of biological organization. PMID:22759500
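
    The tier-(a) computation reduces to a few lines; the sketch below shows a plain hazard index with invented exposure and reference-dose values, omitting the interaction adjustment the authors propose layering on top.

    ```python
    # Plain dose-addition hazard index (all values invented for illustration).
    exposures = {"chem_A": 0.02, "chem_B": 0.12, "chem_C": 0.01}  # mg/kg/day
    ref_doses = {"chem_A": 0.05, "chem_B": 0.20, "chem_C": 0.10}  # mg/kg/day

    hazard_index = sum(exposures[c] / ref_doses[c] for c in exposures)
    print("hazard index = %.2f -> %s"
          % (hazard_index,
             "refine assessment" if hazard_index > 1 else "below concern"))
    ```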

  8. Efficient Reservoir Simulation with Cubic Plus Association and Cross-Association Equation of State for Multicomponent Three-Phase Compressible Flow with Applications in CO2 Storage and Methane Leakage

    NASA Astrophysics Data System (ADS)

    Moortgat, J.

    2017-12-01

    We present novel simulation tools to model multiphase multicomponent flow and transport in porous media for mixtures that contain non-polar hydrocarbons, self-associating polar water, and cross-associating molecules like methane, ethane, unsaturated hydrocarbons, CO2 and H2S. Such mixtures often occur when CO2 is injected and stored in saline aquifers, or when methane is leaking into groundwater. To accurately predict the species transfer between aqueous, gaseous and oleic phases, and the subsequent change in phase properties, the self- and cross-associating behavior of molecules needs to be taken into account, particularly at the typical temperatures and pressures in deep formations. The Cubic-Plus-Association equation-of-state (EOS) has been demonstrated to be highly accurate for such problems but its excessive computational cost has prevented widespread use in reservoir simulators. We discuss the thermodynamical framework and develop sophisticated numerical algorithms that allow reservoir simulations with efficiencies comparable to a simple cubic EOS. This approach improves our predictive powers for highly nonlinear fluid behavior related to geological carbon sequestration, such as density driven flow and natural convection (solubility trapping), evaporation of water into the CO2-rich gas phase, and competitive dissolution-evaporation when CO2 is injected in, e.g., methane saturated aquifers. Several examples demonstrate the accuracy and robustness of this EOS framework for complex applications.

  9. Segmentation and intensity estimation of microarray images using a gamma-t mixture model.

    PubMed

    Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J

    2007-02-15

    We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data, but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal, so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing of the posterior probabilities of component membership is undertaken. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity simultaneously and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the use of the bivariate t distribution for the foreground intensity provides a model that is less sensitive to extreme observations; (5) as a consequence of the aforementioned properties, it allows segmentation to be undertaken for a wide range of spot shapes, including doughnut, sickle shape and artifacts. We apply our method for gridding, segmentation and estimation to real cDNA microarray images and artificial data. Our method provides better segmentation results for spot shapes, as well as better intensity estimation, than the Spot and spotSegmentation R packages. It detected blank spots as well as bright artifacts in the real data, and estimated spot intensities with high accuracy for the synthetic data. The algorithms were implemented in Matlab. The Matlab codes implementing both the gridding and segmentation/estimation are available upon request. Supplementary material is available at Bioinformatics online.

  10. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    Normal mixture distribution models have been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010, using a two-component univariate normal mixture distribution model. First, we present the application of the normal mixture distribution model in empirical finance, where we fit it to the real data. Second, we present its application in risk analysis, where we use the model to evaluate value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating value at risk (VaR) and conditional value at risk (CVaR), as it can capture the stylized facts of non-normality and leptokurtosis in the returns distribution.
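
    A sketch of the risk-measure step: once mixture parameters are in hand, VaR and CVaR follow directly from the mixture distribution, here by Monte Carlo. The weights, means, and standard deviations below are invented stand-ins for a calm and a turbulent regime, not the fitted FBMKLCI values.

    ```python
    # VaR and CVaR from a two-component normal mixture (invented parameters).
    import numpy as np

    rng = np.random.default_rng(8)
    w, mu, sd = [0.85, 0.15], [0.008, -0.02], [0.03, 0.09]   # monthly returns

    comp = rng.choice(2, size=1_000_000, p=w)
    returns = rng.normal(np.array(mu)[comp], np.array(sd)[comp])

    alpha = 0.05
    var = -np.quantile(returns, alpha)        # 95% value at risk (loss > 0)
    cvar = -returns[returns <= -var].mean()   # expected loss beyond the VaR

    print("VaR(95%%)  = %.3f" % var)
    print("CVaR(95%%) = %.3f" % cvar)
    ```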

  11. Towards a Mobile-Based Platform for Traceability Control and Hazard Analysis in the Context of Parenteral Nutrition: Description of a Framework and a Prototype App

    PubMed Central

    2016-01-01

    Background Parenteral nutrient (PN) mixtures may pose great risks of physical, microbiological, and chemical contamination during their preparation, storage, distribution, and administration. These potential hazards must be controlled under high levels of excellence to prevent any serious complications for the patients. As a result, management control and traceability of any of these medications are of utmost relevance for patient care, along with ensuring treatment continuity and adherence. Objective The aim of this study is to develop a mobile-based platform to support the control procedures and traceability services in the domain of parenteral nutrient (PN) mixtures in an efficient and nonintrusive manner. Methods A comprehensive approach combining techniques of software engineering and knowledge engineering was used for the characterization of the framework. Local try-outs for evaluation were performed in a number of application areas, carrying out test/retest monitoring to detect possible errors or conflicts in different contexts and control processes throughout the entire PN cycle. From these data, the absolute and relative frequencies (percentages) were calculated. Results A mobile application for the Android operating system was developed. This application allows reading different types of tags and interacts with the local server according to a proposed model. Also, through an internal caching mechanism, the availability of the system is preserved even in the event of problems with the network connection. A set of 1040 test traces were generated for the assessment of the system under the various environments tested. Among these, 102 traces (9.81%) involved conflicting situations, which are addressed in this paper with suggested solutions. Conclusions A mobile-oriented system that is easy to integrate into daily health care practice was developed and tested, allowing enhanced control and quality management of PN mixtures. PMID:27269189

  12. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.

    2018-03-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) a single hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures whose reaction products contain carbon molecules. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. The models can also be used to calculate thermodynamic parameters of the mixture in a state of chemical equilibrium.

  13. ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics

    PubMed Central

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.

    2014-01-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156

  14. Applicability study of classical and contemporary models for effective complex permittivity of metal powders.

    PubMed

    Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien

    2012-01-01

    Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powders over the full temperature range, there is as yet no substitute for input data on effective complex permittivity obtained by direct measurement.
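
    Two of the classical mixing rules named above have compact closed forms. The sketch below implements Lichtenecker's logarithmic rule and the Maxwell Garnett rule for spherical inclusions in their standard textbook forms; the example permittivities are invented, not the paper's measured values.

```python
# Classical effective-permittivity mixing rules (standard forms; valid
# only for well-dispersed inclusions at moderate volume fractions).
import numpy as np

def lichtenecker(eps_m, eps_i, f):
    """Logarithmic rule: ln eps_eff = (1-f) ln eps_m + f ln eps_i."""
    return np.exp((1 - f) * np.log(eps_m) + f * np.log(eps_i))

def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett rule for spherical inclusions in a host matrix."""
    a = (eps_i - eps_m) / (eps_i + 2 * eps_m)   # dipole polarizability factor
    return eps_m * (1 + 2 * f * a) / (1 - f * a)

# example: a lossy inclusion in a low-loss dielectric host, 15 vol%
eps_m, eps_i, f = 2.1 - 0.02j, 40.0 - 300.0j, 0.15
print(lichtenecker(eps_m, eps_i, f))
print(maxwell_garnett(eps_m, eps_i, f))
```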

  15. Modeling and analysis of personal exposures to VOC mixtures using copulas

    PubMed Central

    Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart

    2014-01-01

    Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize the dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs), with the aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction, which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of the fitted models is assessed using simulation and mixture fractions. Cumulative cancer risks are calculated for the mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated from the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often a single compound dominated a mixture; however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration. Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10⁻³ for about 10% of RIOPA participants. Factors affecting the likelihood of high-concentration mixtures included city, participant ethnicity, and house air exchange rate. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that fit lognormal distributions poorly and that represent the greatest risks. PMID:24333991
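
    For readers new to copulas, the sketch below shows the simplest case: a Gaussian copula with nonparametric margins, fitted by rank-transforming each variable to normal scores. The paper's exposure data actually favored the heavier-tailed Gumbel and t families, which follow the same fit-and-sample pattern with different generators; all data here are synthetic.

```python
# Minimal Gaussian-copula fit and sampler using only numpy/scipy.
import numpy as np
from scipy.stats import norm, rankdata

def fit_gaussian_copula(x):
    """x: (n, d) observations. Returns the copula correlation matrix."""
    n, d = x.shape
    u = np.column_stack([rankdata(x[:, j]) / (n + 1) for j in range(d)])
    return np.corrcoef(norm.ppf(u), rowvar=False)   # normal scores

def sample_gaussian_copula(R, n, rng):
    """n samples with uniform margins and dependence structure R."""
    z = rng.multivariate_normal(np.zeros(len(R)), R, size=n)
    return norm.cdf(z)

# toy use: two positively dependent lognormal "VOC concentrations"
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=500)
print(fit_gaussian_copula(np.exp(z)))   # recovers roughly 0.7 off-diagonal
```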

  16. Modeling semantic aspects for cross-media image indexing.

    PubMed

    Monay, Florent; Gatica-Perez, Daniel

    2007-10-01

    To go beyond the query-by-example paradigm in image retrieval, there is a need for semantic indexing of large image collections to enable intuitive text-based image search. Different models have been proposed to learn the dependencies between the visual content of an image set and the associated text captions, thereby allowing the automatic creation of semantic indices for unannotated images. The task, however, remains unsolved. In this paper, we present three alternatives for learning a Probabilistic Latent Semantic Analysis (PLSA) model for annotated images and evaluate their respective performance for automatic image indexing. Under the PLSA assumptions, an image is modeled as a mixture of latent aspects that generates both image features and text captions, and we investigate three ways to learn the mixture of aspects. We also propose a more discriminative image representation than the traditional blob histogram, concatenating quantized local color information and quantized local texture descriptors. The first learning procedure of a PLSA model for annotated images is a standard EM algorithm, which implicitly assumes that the visual and textual modalities can be treated equivalently. The other two models are based on asymmetric PLSA learning, which allows one to constrain the definition of the latent space on either the visual or the textual modality. We demonstrate that the textual modality is more appropriate for learning a semantically meaningful latent space, which translates into improved annotation performance. A comparison of our learning algorithms with recent methods on a standard dataset is presented, and a detailed evaluation of the performance shows the validity of our framework.
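
    The standard (symmetric) EM procedure mentioned first is short enough to sketch; the asymmetric variants discussed in the paper would instead anchor the latent aspects to one modality. The count-matrix framing below is illustrative.

```python
# Standard EM for PLSA on a co-occurrence count matrix N (items x terms).
import numpy as np

def plsa(N, K, iters=50, rng=np.random.default_rng(0)):
    D, W = N.shape
    p_z_d = rng.random((D, K)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((K, W)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(iters):
        # E-step: responsibilities P(z | d, w), shape (D, W, K)
        q = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        q /= q.sum(2, keepdims=True) + 1e-12
        # M-step: re-estimate aspect and document distributions
        nz = N[:, :, None] * q
        p_w_z = nz.sum(0).T; p_w_z /= p_w_z.sum(1, keepdims=True)
        p_z_d = nz.sum(1);   p_z_d /= p_z_d.sum(1, keepdims=True)
    return p_z_d, p_w_z
```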

  17. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described by a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
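
    The Gaussian baseline being generalized here (Raftery et al., 2005) can be sketched compactly: BMA weights and a common spread estimated by EM over a training window. The shared, time-invariant variance below is a simplifying assumption; the paper replaces exactly this rigid conditional pdf with a filtered Gaussian mixture.

```python
# EM for classical BMA with Gaussian conditional pdfs.
# F: (T, K) bias-corrected member forecasts; y: (T,) verifying observations.
import numpy as np
from scipy.stats import norm

def bma_em(F, y, iters=200):
    T, K = F.shape
    w, sigma = np.full(K, 1.0 / K), y.std()
    for _ in range(iters):
        dens = w * norm.pdf(y[:, None], loc=F, scale=sigma)   # (T, K)
        z = dens / dens.sum(1, keepdims=True)                 # E-step
        w = z.mean(0)                                         # M-step: weights
        sigma = np.sqrt((z * (y[:, None] - F) ** 2).sum() / T)
    return w, sigma
```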

  18. Fabrication of COF-MOF Composite Membranes and Their Highly Selective Separation of H2/CO2.

    PubMed

    Fu, Jingru; Das, Saikat; Xing, Guolong; Ben, Teng; Valtchev, Valentin; Qiu, Shilun

    2016-06-22

    The search for new types of membrane materials has been of continuous interest in both academia and industry, given their importance in a plethora of applications, particularly energy-efficient separation technology. In this contribution, we demonstrate for the first time that a metal-organic framework (MOF) can be grown on a covalent organic framework (COF) membrane to fabricate COF-MOF composite membranes. The resulting COF-MOF composite membranes show higher separation selectivity for H2/CO2 gas mixtures than the individual COF and MOF membranes. Sound proof of the synergy between the two porous materials is the fact that the COF-MOF composite membranes surpass the Robeson upper bound of polymer membranes for the separation of the H2/CO2 gas pair and are among the best gas-separation MOF membranes reported thus far.

  19. Influence of the chlorine concentration on the radiation efficiency of a XeCl exciplex lamp

    NASA Astrophysics Data System (ADS)

    Avtaeva, S. V.; Sosnin, E. A.; Saghi, B.; Panarin, V. A.; Rahmani, B.

    2013-09-01

    The influence of the chlorine concentration on the radiation efficiency of coaxial exciplex lamps (excilamps) excited by a dielectric barrier discharge (DBD) in binary Xe-Cl2 mixtures at pressures of 240-250 Torr is investigated experimentally and theoretically. The experiments were carried out at Cl2 concentrations in the range of 0.01-1%. The DBD characteristics were calculated in the framework of a one-dimensional hydrodynamic model at Cl2 concentrations in the range of 0.1-5%. It is found that the radiation intensities of the emission bands of Xe2* (172 nm) and XeCl* (308 nm) are comparable when the chlorine concentration in the mixture is in the range of 0.01-0.1%. Within this range, the radiation intensity of the Xe2* molecule decreases rapidly with increasing Cl2 concentration, and at a chlorine concentration of ≥0.2% the radiation of the B → X band of XeCl* molecules, with a peak at 308 nm, dominates in the discharge radiation. The radiation efficiency of this band reaches its maximum value at chlorine concentrations in the range of 0.4-0.5%. The calculated efficiencies of DBD radiation exceed those obtained experimentally. This is due to limitations of the one-dimensional model, which assumes the discharge to be uniform in the transverse direction, whereas the actual excilamp discharge is highly inhomogeneous. The influence of the chlorine concentration on the properties of the DBD plasma in binary Xe-Cl2 mixtures is studied numerically. It is shown that an increase in the Cl2 concentration in the mixture leads to the attachment of electrons to chlorine atoms and a decrease in the electron density and discharge conductivity. As a result, the electric field and the voltage drop across the discharge gap increase, which, in turn, leads to an increase in the average electron energy and in the probability of dissociation of Cl2 molecules and ionization of Xe atoms and Cl2 molecules. The total energy deposited in the discharge rises with increasing chlorine concentration due to an increase in the power spent on the heating of positive and negative ions. The power dissipated by electrons decreases with increasing chlorine concentration in the working mixture. Recommendations on the choice of the chlorine content in the mixture for reducing the intensity of VUV radiation of the second continuum of the Xe2* excimer without a substantial decrease in the excilamp efficiency are formulated.

  1. Estimation and Model Selection for Finite Mixtures of Latent Interaction Models

    ERIC Educational Resources Information Center

    Hsu, Jui-Chen

    2011-01-01

    Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to proceed when unobserved population heterogeneity exists in the endogenous latent variables of nonlinear structural equation models. The current study estimates a mixture of latent interaction…

  2. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    PubMed

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

    In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models for various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, the excess molar volume (V^E). In the present study, we use a set of mixture descriptors that we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V^E) using consensus neural networks and five mixture descriptors. We find that hydrogen-bond and thermodynamic descriptors are the most important in determining excess molar volume (V^E), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    NASA Astrophysics Data System (ADS)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri

    2017-12-01

    Bayesian mixture modeling requires identifying the most appropriate number of mixture components so that the resulting mixture model fits the data, following a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with Markov Chain Monte Carlo (MCMC) and has been used by several researchers to identify the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split/merge of components, and birth/death of empty components. The RJMCMC algorithm needs to be developed according to the case under study. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the number of mixture components, when that number is not known with certainty, in Bayesian mixture modeling of microarray data in Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model for the Indonesian microarray data, where the number of mixture components is not known in advance.

  4. From Solvent-Free to Dilute Electrolytes: Essential Components for a Continuum Theory.

    PubMed

    Gavish, Nir; Elad, Doron; Yochelis, Arik

    2018-01-04

    A growing number of experimental observations on highly concentrated electrolytes and ionic liquids show qualitative features that are distinct from those of dilute or moderately concentrated electrolytes, such as self-assembly, multiple-timescale relaxation, and underscreening, all of which affect the emergence of fluid/solid interfaces and transport in these systems. Because these phenomena are not captured by existing mean-field models of electrolytes, there is a paramount need for a continuum framework for highly concentrated electrolytes and ionic liquid mixtures. In this work, we present a self-consistent spatiotemporal framework for a ternary composition that comprises ions and solvent, employing a free energy that consists of short- and long-range interactions, along with an energy dissipation mechanism obtained from Onsager's relations. We show that the model can describe multiple bulk and interfacial morphologies at steady state. Thus, the dynamic processes in the emergence of distinct morphologies become equally as important as the interactions that are specified by the free energy. The model equations not only provide insights into transport mechanisms beyond the Stokes-Einstein-Smoluchowski relations but also enable qualitative recovery of three distinct regions in the full range of the nonmonotonic electrical screening length that has recently been observed in experiments in which organic solvent is used to dilute ionic liquids.

  5. QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.

    PubMed

    Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng

    2018-05-01

    Antibiotics and pesticides may coexist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half-maximal effective concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and the mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain an optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model, with a coefficient of determination of 0.9366 and a root mean square error of 0.1345, predicted the toxicities of the 45 mixtures, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may fill the gaps in predicting the non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Generative Topographic Mapping (GTM): Universal Tool for Data Visualization, Structure-Activity Modeling and Dataset Comparison.

    PubMed

    Kireeva, N; Baskin, I I; Gaspar, H A; Horvath, D; Marcou, G; Varnek, A

    2012-04-01

    Here, the utility of Generative Topographic Maps (GTM) for data visualization, structure-activity modeling, and database comparison is evaluated using subsets of the Database of Useful Decoys (DUD). Unlike other popular dimensionality reduction approaches, such as Principal Component Analysis, Sammon mapping, or Self-Organizing Maps, GTMs have the great advantage of providing data probability distribution functions (PDFs), both in the high-dimensional space defined by the molecular descriptors and in the 2D latent space. PDFs for the molecules of different activity classes were successfully used to build classification models in the framework of the Bayesian approach. Because the PDFs are represented by a mixture of Gaussian functions, the Bhattacharyya kernel has been proposed as a measure of the overlap of datasets, which leads to an elegant method for the global comparison of chemical libraries. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
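
    On a discretized latent map, the Bhattacharyya overlap behind the proposed kernel reduces to a single sum. A minimal sketch, with histogram-based density estimates standing in for actual GTM responsibilities:

```python
# Bhattacharyya overlap of two dataset densities on a shared 2D grid.
import numpy as np

def bhattacharyya(p, q):
    """p, q: nonnegative arrays on the same grid, each summing to 1.
    Returns 1 for identical densities, 0 for disjoint support."""
    return np.sum(np.sqrt(p * q))

# toy "libraries" projected onto a 20x20 latent map
rng = np.random.default_rng(0)
a, _, _ = np.histogram2d(*rng.normal(0.0, 1.0, (2, 500)), bins=20,
                         range=[[-4, 4], [-4, 4]])
b, _, _ = np.histogram2d(*rng.normal(0.5, 1.0, (2, 500)), bins=20,
                         range=[[-4, 4], [-4, 4]])
print(bhattacharyya(a / a.sum(), b / b.sum()))
```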

  7. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  8. Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution

    NASA Astrophysics Data System (ADS)

    Baldacchino, Tara; Worden, Keith; Rowson, Jennifer

    2017-02-01

    A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.

  9. Development of a process for the extraction of ¹³⁷Cs from acidic HLLW based on a crown-calix extractant: use of a di-alkylamide modifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexova, J.; Sirova, M.; Rais, J.

    2008-07-01

    Within the framework of the ARTIST project on total fuel reprocessing with ecological mixtures of solvents and extractants containing only C, H, O, and N atoms, a process segment for the extraction of ¹³⁷Cs from an acidic stream was developed. A process using 25,27-bis(1-octyloxy)calix[4]arene-crown-6 (DOC[4]C6), dissolved at 0.01 M concentration in a mixture of 90 vol% 1-octanol and 10% dihexyl octanamide (DHOA), was proposed as a viable variant owing to its good multicycle performance, even with irradiated solvent, and the good chemical stability of the chosen solvent mixture. (authors)

  10. Rasch Mixture Models for DIF Detection

    PubMed Central

    Strobl, Carolin; Zeileis, Achim

    2014-01-01

    Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a new score distribution for the Rasch mixture model is introduced here. It ensures the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study, and its application is illustrated in a study of verbal aggression. PMID:29795819

  11. Mixture Modeling: Applications in Educational Psychology

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  12. Mechanics of adsorption-deformation coupling in porous media

    NASA Astrophysics Data System (ADS)

    Zhang, Yida

    2018-05-01

    This work extends Coussy's macroscale theory for porous materials interacting with adsorptive fluid mixtures. The solid-fluid interface is treated as an independent phase that obeys its own mass, momentum, and energy balance laws. As a result, a surface strain energy term appears in the free energy balance equation of the solid phase, which in turn introduces the so-called adsorption stress in the constitutive equations of the porous skeleton. This establishes a fundamental link between the adsorption characteristics of the solid-fluid interface and the mechanical response of the porous medium. The thermodynamic framework is quite general in that it recovers the coupled conduction laws, the Gibbs isotherm, and the Shuttleworth equation for surface stress, and it imposes no constraints on the magnitude of deformation or the functional form of the adsorption isotherms. A rich variety of coupling between adsorption and deformation is recovered by combining different poroelastic models (isotropic vs. anisotropic, linear vs. nonlinear) and adsorption models (unary vs. mixture adsorption, uncoupled vs. stretch-dependent adsorption). These predictions are discussed against the backdrop of recent experimental data on coal swelling under CO2 and CO2-CH4 injection, showing the capability and versatility of the theory in capturing adsorption-induced deformation of porous materials.

  13. Structure, thermodynamic properties, and phase diagrams of few colloids confined in a spherical pore.

    PubMed

    Paganini, Iván E; Pastorino, Claudio; Urrutia, Ignacio

    2015-06-28

    We study a system of a few colloids confined in a small spherical cavity with event-driven molecular dynamics simulations in the canonical ensemble. The colloidal particles interact through a short-range square-well potential that takes into account the basic elements of attraction and excluded-volume repulsion of the interaction among colloids. We analyze the structural and thermodynamic properties of this few-body confined system in the framework of inhomogeneous fluids theory. The pair correlation function and density profile are used to determine the structure and spatial characteristics of the system. Pressure on the walls, internal energy, and surface quantities such as surface tension and adsorption are also analyzed for a wide range of densities and temperatures. We have characterized systems of 2 to 6 confined particles, identifying distinctive qualitative behavior over the T-ρ thermodynamic plane, in a few-particle analog of the phase diagrams of macroscopic systems. Applying the extended law of corresponding states, the square-well interaction is mapped onto the Asakura-Oosawa model for colloid-polymer mixtures. We link explicitly the temperature of the confined square-well fluid to the equivalent packing fraction of polymers in the Asakura-Oosawa model. Using this approach, we study the confined system of a few colloids in a colloid-polymer mixture.

  15. Evaluation of warm mix asphalt technology in flexible pavements.

    DOT National Transportation Integrated Search

    2009-09-01

    The primary goal of this research project is to quantify the performance of field produced and placed mixtures that utilize WMA technology and develop a framework for design, construction, and implementation of this technology in Louisiana. This rese...

  16. Context-Aware Generative Adversarial Privacy

    NASA Astrophysics Data System (ADS)

    Huang, Chong; Kairouz, Peter; Chen, Xiao; Sankar, Lalitha; Rajagopal, Ram

    2017-12-01

    Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals' private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP's performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model, and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.

  17. Synthesis of the zeolitic imidazolate framework ZIF-4 from the ionic liquid 1-butyl-3-methylimidazolium imidazolate

    NASA Astrophysics Data System (ADS)

    Hovestadt, Maximilian; Schwegler, Johannes; Schulz, Peter S.; Hartmann, Martin

    2018-05-01

    A new synthesis route for the zeolitic imidazolate framework ZIF-4 using the ionic liquid 1-butyl-3-methylimidazolium imidazolate is reported. Additionally, the ionic liquid-derived material is compared to conventional ZIF-4 with respect to its powder X-ray diffraction pattern, nitrogen uptake, particle size, and separation potential for olefin/paraffin gas mixtures. Higher synthesis yields were obtained, and the different particle size affected the performance in the separation of ethane and ethylene.

  18. Local Solutions in the Estimation of Growth Mixture Models

    ERIC Educational Resources Information Center

    Hipp, John R.; Bauer, Daniel J.

    2006-01-01

    Finite mixture models are well known to have poorly behaved likelihood functions featuring singularities and multiple optima. Growth mixture models may suffer from fewer of these problems, potentially benefiting from the structure imposed on the estimated class means and covariances by the specified growth model. As demonstrated here, however,…

  19. Mixture class recovery in GMM under varying degrees of class separation: frequentist versus Bayesian estimation.

    PubMed

    Depaoli, Sarah

    2013-06-01

    Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight into the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained through the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
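
    As a loose analogue of the estimation conditions compared in the article (which concern growth mixture models, not the plain one-dimensional mixture below), scikit-learn makes it easy to contrast maximum-likelihood EM with a Bayesian mixture carrying an informative prior on the class means; the prior strength here is only a rough stand-in for the paper's "accurate" informative priors.

```python
# EM versus Bayesian estimation of a poorly separated two-class mixture.
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(0)
# two latent classes separated by only ~1 standard deviation
X = np.concatenate([rng.normal(0.0, 1.0, (300, 1)),
                    rng.normal(1.0, 1.0, (300, 1))])

ml = GaussianMixture(n_components=2, random_state=0).fit(X)
bayes = BayesianGaussianMixture(n_components=2, random_state=0,
                                mean_prior=[0.5],          # prior class mean
                                mean_precision_prior=10.0  # prior strength
                                ).fit(X)
print("EM means:      ", ml.means_.ravel())
print("Bayesian means:", bayes.means_.ravel())
```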

  20. Characterizing heterogeneous cellular responses to perturbations.

    PubMed

    Slack, Michael D; Martinez, Elisabeth D; Wu, Lani F; Altschuler, Steven J

    2008-12-09

    Cellular populations have been widely observed to respond heterogeneously to perturbation. However, interpreting the observed heterogeneity is an extremely challenging problem because of the complexity of possible cellular phenotypes, the large dimension of potential perturbations, and the lack of methods for separating meaningful biological information from noise. Here, we develop an image-based approach to characterize cellular phenotypes based on patterns of signaling marker colocalization. Heterogeneous cellular populations are characterized as mixtures of phenotypically distinct subpopulations, and responses to perturbations are summarized succinctly as probabilistic redistributions of these mixtures. We apply our method to characterize the heterogeneous responses of cancer cells to a panel of drugs. We find that cells treated with drugs of (dis-)similar mechanism exhibit (dis-)similar patterns of heterogeneity. Despite the observed phenotypic diversity of cells observed within our data, low-complexity models of heterogeneity were sufficient to distinguish most classes of drug mechanism. Our approach offers a computational framework for assessing the complexity of cellular heterogeneity, investigating the degree to which perturbations induce redistributions of a limited, but nontrivial, repertoire of underlying states and revealing functional significance contained within distinct patterns of heterogeneous responses.

  1. Coordinated Hard Sphere Mixture (CHaSM): A fast approximate model for oxide and silicate melts at extreme conditions

    NASA Astrophysics Data System (ADS)

    Wolf, A. S.; Asimow, P. D.; Stevenson, D. J.

    2015-12-01

    Recent first-principles calculations (e.g., Stixrude, 2009; de Koker, 2013), shock-wave experiments (Mosenfelder, 2009), and diamond-anvil cell investigations (Sanloup, 2013) indicate that silicate melts undergo complex structural evolution at high pressure. The observed increase in cation coordination (e.g., Karki, 2006; 2007) induces higher compressibilities and lower adiabatic thermal gradients in melts as compared with their solid counterparts. These properties are crucial for understanding the evolution of impact-generated magma oceans, which are dominated by the poorly understood behavior of silicates at mantle pressures and temperatures (e.g., Stixrude et al., 2009). Probing these conditions is difficult for both theory and experiment, especially given the large compositional space (MgO-SiO2-FeO-Al2O3, etc.). We develop a new model to understand and predict the behavior of oxide and silicate melts at extreme P-T conditions (Wolf et al., 2015). The Coordinated Hard Sphere Mixture (CHaSM) extends the hard sphere mixture model, accounting for the range of coordination states available to each cation in the liquid. Using approximate analytic expressions for the hard sphere model, this fast statistical method complements classical and first-principles methods, providing accurate thermodynamic and structural property predictions for melts. This framework is applied to the MgO system, where model parameters are trained on a collection of crystal polymorphs, producing realistic predictions of coordination evolution and the equation of state of MgO melt over a wide P-T range. Typical Mg coordination numbers are predicted to evolve continuously from 5.25 (0 GPa) to 8.5 (250 GPa), comparing favorably with first-principles molecular dynamics (MD) simulations. We are extending the model to a simplified mantle chemistry using empirical potentials (generally accurate over moderate pressure ranges, <~30 GPa), yielding predictions rooted in statistical representations of melt structure that compare well with more time-consuming classical MD calculations. This approach also sheds light on the universality of the increasing Grüneisen parameter trend for liquids (opposite that of solids), which directly reflects their progressive evolution toward more compact, solid-like structures upon compression.
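
    The classical hard-sphere mixture baseline that CHaSM extends has a well-known closed-form equation of state, the Boublík-Mansoori-Carnahan-Starling-Leland (BMCSL) expression, sketched below with invented densities and diameters.

```python
# BMCSL compressibility factor for an additive hard-sphere mixture.
import numpy as np

def bmcsl_Z(rho, x, sigma):
    """Z = P/(rho kT); rho: total number density, x: mole fractions,
    sigma: hard-sphere diameters."""
    x, sigma = np.asarray(x), np.asarray(sigma)
    xi = [np.pi / 6.0 * rho * np.sum(x * sigma**n) for n in range(4)]
    x0, x1, x2, x3 = xi            # x3 is the total packing fraction
    return (6.0 / (np.pi * rho)) * (x0 / (1 - x3)
                                    + 3 * x1 * x2 / (1 - x3)**2
                                    + (3 - x3) * x2**3 / (1 - x3)**3)

# equimolar binary mixture, diameter ratio 1 : 1.5
print(bmcsl_Z(rho=0.4, x=[0.5, 0.5], sigma=[1.0, 1.5]))
```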

  2. Mixed finite element - discontinuous finite volume element discretization of a general class of multicontinuum models

    NASA Astrophysics Data System (ADS)

    Ruiz-Baier, Ricardo; Lunati, Ivan

    2016-10-01

    We present a novel discretization scheme tailored to a class of multiphase models that regard the physical system as consisting of multiple interacting continua. In the framework of mixture theory, we consider a general mathematical model that entails solving a system of mass and momentum equations for both the mixture and one of the phases. The model results in a strongly coupled and nonlinear system of partial differential equations that are written in terms of phase and mixture (barycentric) velocities, phase pressure, and saturation. We construct an accurate, robust and reliable hybrid method that combines a mixed finite element discretization of the momentum equations with a primal discontinuous finite volume-element discretization of the mass (or transport) equations. The scheme is devised for unstructured meshes and relies on mixed Brezzi-Douglas-Marini approximations of phase and total velocities, on piecewise constant elements for the approximation of phase or total pressures, as well as on a primal formulation that employs discontinuous finite volume elements defined on a dual diamond mesh to approximate scalar fields of interest (such as volume fraction, total density, saturation, etc.). As the discretization scheme is derived for a general formulation of multicontinuum physical systems, it can be readily applied to a large class of simplified multiphase models; conversely, the approach can be seen as a generalization of the simplified models commonly encountered in the literature, to be employed when the latter are not sufficiently accurate. An extensive set of numerical test cases involving two- and three-dimensional porous media are presented to demonstrate the accuracy of the method (displaying an optimal convergence rate), the physics-preserving properties of the mixed-primal scheme, as well as the robustness of the method (which is successfully used to simulate diverse physical phenomena such as density fingering, Terzaghi's consolidation, deformation of a cantilever bracket, and Boycott effects). The applicability of the method is not limited to flow in porous media, but extends to many other physical systems governed by a similar set of equations, including, e.g., multi-component materials.

  3. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.

    PubMed

    Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten

    2017-10-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage in inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm that this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
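
    The vMF density at the core of the model is easy to evaluate stably; the sketch below uses the exponentially scaled Bessel function to avoid overflow at the high dimensions and concentrations typical of standardized time series. Data and parameters are illustrative.

```python
# Log-density of the von Mises-Fisher distribution on the unit sphere.
import numpy as np
from scipy.special import ive

def vmf_logpdf(X, mu, kappa):
    """X: (n, p) unit vectors; mu: (p,) unit mean direction; kappa >= 0."""
    p = X.shape[1]
    nu = p / 2.0 - 1.0
    # log I_nu(kappa) = log(ive(nu, kappa)) + kappa  (overflow-safe)
    log_c = (nu * np.log(kappa) - (p / 2.0) * np.log(2 * np.pi)
             - (np.log(ive(nu, kappa)) + kappa))
    return log_c + kappa * (X @ mu)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 50))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # standardize to the sphere
print(vmf_logpdf(X, np.ones(50) / np.sqrt(50), kappa=100.0))
```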

  4. rhoCentralRfFoam: An OpenFOAM solver for high speed chemically active flows - Simulation of planar detonations -

    NASA Astrophysics Data System (ADS)

    Gutiérrez Marcantoni, L. F.; Tamagno, J.; Elaskar, S.

    2017-10-01

    A new solver called rhoCentralRfFoam, developed within the framework of OpenFOAM 2.3.0 and which can be regarded as an evolution of rhoCentralFoam, is presented. Its use in numerical simulations of the initiation and propagation of planar detonation waves in combustible H2-Air and H2-O2-Ar mixtures is described. The unsteady one-dimensional (1D) Euler equations, coupled with source terms to account for chemical activity, are solved numerically using the Kurganov, Noelle, and Petrova second-order scheme on a domain discretized with finite volumes. The computational code can work with any number of species and their corresponding reactions, but here it was tested with 13 chemically active species (one species inert) and 33 elementary reactions. A gaseous igniter, which acts like a shock-tube driver and is powerful enough to generate a strong shock capable of triggering exothermic chemical reactions in fuel mixtures, is used to start planar detonations. The following main aspects of planar detonations are treated here: the induction times of the combustible mixtures cited above and the required mesh resolutions; the convergence of overdriven detonations to Chapman-Jouguet states; the detonation structure (ZND model); and the use of reflected shocks to determine induction times experimentally. The rhoCentralRfFoam code was verified by comparing numerical results, and it was validated through analytical results and experimental data.

  5. The Kirkwood-Buff theory of solutions and the local composition of liquid mixtures.

    PubMed

    Shulgin, Ivan L; Ruckenstein, Eli

    2006-06-29

    The present paper is devoted to the local composition of liquid mixtures calculated in the framework of the Kirkwood-Buff theory of solutions. A new method is suggested to calculate the excess (or deficit) number of various molecules around a selected (central) molecule in binary and multicomponent liquid mixtures in terms of measurable macroscopic thermodynamic quantities, such as the derivatives of the chemical potentials with respect to concentrations, the isothermal compressibility, and the partial molar volumes. This method accounts for an inaccessible volume due to the presence of a central molecule and is applied to binary and ternary mixtures. For the ideal binary mixture it is shown that because of the difference in the volumes of the pure components there is an excess (or deficit) number of different molecules around a central molecule. The excess (or deficit) becomes zero when the components of the ideal binary mixture have the same volume. The new method is also applied to methanol + water and 2-propanol + water mixtures. In the case of the 2-propanol + water mixture, the new method, in contrast to the other ones, indicates that clusters dominated by 2-propanol disappear at high alcohol mole fractions, in agreement with experimental observations. Finally, it is shown that the application of the new procedure to the ternary mixture water/protein/cosolvent at infinite dilution of the protein led to almost the same results as the methods involving a reference state.
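
    Going in the opposite direction from the paper (from structure to excess numbers, rather than from macroscopic thermodynamic data), the defining relation ΔN_ij = ρ_j G_ij, with G_ij the Kirkwood-Buff integral of the radial distribution function, takes only a few lines; the model g(r) below is invented.

```python
# Kirkwood-Buff integral and excess coordination number from a g(r).
import numpy as np

def kb_integral(r, g):
    """G_ij = 4*pi * integral of (g_ij(r) - 1) r^2 dr, truncated at r[-1]."""
    return 4 * np.pi * np.trapz((g - 1.0) * r**2, r)

def excess_number(r, g, rho_j):
    """Excess (or deficit) number of j molecules around a central i."""
    return rho_j * kb_integral(r, g)

# toy g(r): excluded core, then a damped oscillatory shell structure
r = np.linspace(0.01, 10.0, 2000)
g = np.where(r < 1.0, 0.0, 1.0 + 0.5 * np.exp(-(r - 1.2)) * np.cos(4 * r))
print(excess_number(r, g, rho_j=0.8))
```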

  6. Cluster kinetics model for mixtures of glassformers

    NASA Astrophysics Data System (ADS)

    Brenskelle, Lisa A.; McCoy, Benjamin J.

    2007-10-01

    For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.

  7. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    PubMed

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed that effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features by the MBF suggests domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated on simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images, with a higher structural similarity index.

  9. Evaluating differential effects using regression interactions and regression mixture models

    PubMed Central

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results with those obtained using an interaction term in linear regression. The research questions that each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described, and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, whereas regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
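
    In its simplest form a regression mixture is two latent classes, each with its own linear regression, estimated by EM; the two-class, one-predictor sketch below is illustrative, not the paper's simulation design.

```python
# EM for a two-class mixture of linear regressions.
import numpy as np
from scipy.stats import norm

def regmix_em(x, y, iters=100, rng=np.random.default_rng(0)):
    X = np.column_stack([np.ones_like(x), x])   # intercept + slope design
    B = rng.normal(size=(2, 2))                 # one coefficient column/class
    w, sigma = np.array([0.5, 0.5]), y.std() * np.ones(2)
    for _ in range(iters):
        dens = w * norm.pdf(y[:, None], X @ B, sigma)      # (n, 2)
        z = dens / dens.sum(1, keepdims=True)              # E-step
        for k in range(2):                                 # M-step (WLS)
            Wk = z[:, k]
            B[:, k] = np.linalg.solve(X.T @ (Wk[:, None] * X), X.T @ (Wk * y))
            sigma[k] = np.sqrt(np.sum(Wk * (y - X @ B[:, k])**2) / Wk.sum())
        w = z.mean(0)
    return B, w, sigma, z
```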

  10. SAR image segmentation using skeleton-based fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Cao, Yun Yi; Chen, Yan Qiu

    2003-06-01

    SAR image segmentation can be cast as a clustering problem in which pixels or small patches are grouped together based on local feature information. In this paper, we present a novel framework for segmentation. The segmentation goal is achieved by unsupervised clustering of characteristic descriptors extracted from local patches. A mixture model of the characteristic descriptor, which combines intensity and texture features, is investigated. The unsupervised algorithm is derived from the recently proposed Skeleton-Based Data Labeling method. Skeletons are constructed as prototypes of clusters to represent arbitrary latent structures in image data. Segmentation using Skeleton-Based Fuzzy Clustering is able to detect the types of surfaces appearing in SAR images automatically without any user input.

  11. Multiscale Modeling of Mesoscale and Interfacial Phenomena

    NASA Astrophysics Data System (ADS)

    Petsev, Nikolai Dimitrov

    With rapidly emerging technologies that feature interfaces modified at the nanoscale, traditional macroscopic models are pushed to their limits to explain phenomena where molecular processes can play a key role. Often, such problems appear to defy explanation when treated with coarse-grained continuum models alone, yet remain prohibitively expensive from a molecular simulation perspective. A prominent example is surface nanobubbles: nanoscopic gaseous domains typically found on hydrophobic surfaces that have puzzled researchers for over two decades due to their unusually long lifetimes. We show how an entirely macroscopic, non-equilibrium model explains many of their anomalous properties, including their stability and abnormally small gas-side contact angles. From this purely transport perspective, we investigate how factors such as temperature and saturation affect nanobubbles, providing numerous experimentally testable predictions. However, recent work also emphasizes the relevance of molecular-scale phenomena that cannot be described in terms of bulk phases or pristine interfaces. This is true for nanobubbles as well, whose nanoscale heights may require molecular detail to capture the relevant physics, in particular near the bubble three-phase contact line. Therefore, there is a clear need for general ways to link molecular granularity and behavior with large-scale continuum models in the treatment of many interfacial problems. In light of this, we have developed a general set of simulation strategies that couple mesoscale particle-based continuum models to molecular regions simulated through conventional molecular dynamics (MD). In addition, we derived a transport model for binary mixtures that opens the possibility for a wide range of applications in biological and drug delivery problems, and is readily reconciled with our hybrid MD-continuum techniques. Approaches that couple multiple length scales for fluid mixtures are largely absent in the literature, and we provide a novel and general framework for multiscale modeling of systems featuring one or more dissolved species. This makes it possible to retain molecular detail for parts of the problem that require it while using a simple, continuum description for parts where high detail is unnecessary, reducing the number of degrees of freedom (i.e. number of particles) dramatically. This opens the possibility for modeling ion transport in biological processes and biomolecule assembly in ionic solution, as well as electrokinetic phenomena at interfaces such as corrosion. The number of particles in the system is further reduced through an integrated boundary approach, which we apply to colloidal suspensions. In this thesis, we describe this general framework for multiscale modeling single- and multicomponent systems, provide several simple equilibrium and non-equilibrium case studies, and discuss future applications.

  12. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) combine latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…

  13. A segmentation/clustering model for the analysis of array CGH data.

    PubMed

    Picard, F; Robin, S; Lebarbier, E; Daudin, J-J

    2007-09-01

    Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
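
    The two-stage idea behind DP-EM can be sketched compactly: dynamic programming finds the least-squares split of a profile into K segments, and a mixture model then assigns a status to each segment. The stand-in below separates the two steps (the authors estimate them jointly); the synthetic profile and the fixed numbers of segments and clusters are assumptions.

      # Least-squares segmentation by dynamic programming, followed by
      # mixture clustering of the segment means; a simplified stand-in
      # for the joint DP-EM algorithm.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      def dp_segment(y, K):
          """Best split of y into K contiguous segments (least squares)."""
          n = len(y)
          c1 = np.concatenate(([0.0], np.cumsum(y)))
          c2 = np.concatenate(([0.0], np.cumsum(y ** 2)))
          def sse(i, j):                                # cost of segment y[i:j]
              s, s2, m = c1[j] - c1[i], c2[j] - c2[i], j - i
              return s2 - s * s / m
          D = np.full((K + 1, n + 1), np.inf)
          D[0, 0] = 0.0
          back = np.zeros((K + 1, n + 1), dtype=int)
          for k in range(1, K + 1):
              for j in range(k, n + 1):
                  costs = [D[k - 1, i] + sse(i, j) for i in range(k - 1, j)]
                  best = int(np.argmin(costs))
                  D[k, j], back[k, j] = costs[best], best + (k - 1)
          bounds, j = [n], n
          for k in range(K, 0, -1):
              j = back[k, j]
              bounds.append(j)
          return bounds[::-1]                           # 0 = b0 < b1 < ... < bK = n

      rng = np.random.default_rng(1)
      y = np.concatenate([rng.normal(m, 0.3, 40) for m in (0.0, 1.0, 0.0, -1.0, 0.0)])
      b = dp_segment(y, K=5)
      seg_means = np.array([[y[b[i]:b[i + 1]].mean()] for i in range(5)])
      status = GaussianMixture(n_components=3, random_state=0).fit_predict(seg_means)
      print("boundaries:", b, "segment status:", status)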

  14. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    PubMed

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
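
    The stated equivalence is easy to verify numerically in the zero-truncated Poisson case: the truncated mixture with weights w_j coincides with the mixture of truncated densities whose weights are remapped as v_j proportional to w_j(1 - exp(-lambda_j)). The weights and means below are arbitrary illustrative values.

      # Numeric check of the equivalence for zero-truncated Poisson mixtures.
      import numpy as np
      from scipy.stats import poisson

      w = np.array([0.3, 0.7]); lam = np.array([0.8, 3.0])   # illustrative values
      x = np.arange(1, 30)

      # truncated mixture: m(x) / (1 - m(0))
      m = (w[:, None] * poisson.pmf(x, lam[:, None])).sum(0)
      m0 = (w * poisson.pmf(0, lam)).sum()
      trunc_mix = m / (1 - m0)

      # mixture of truncated densities with remapped weights
      v = w * (1 - np.exp(-lam)); v /= v.sum()
      mix_trunc = (v[:, None] * poisson.pmf(x, lam[:, None])
                   / (1 - np.exp(-lam))[:, None]).sum(0)

      assert np.allclose(trunc_mix, mix_trunc)   # identical likelihoods
      print("max abs difference:", np.abs(trunc_mix - mix_trunc).max())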

  15. Development of PBPK Models for Gasoline in Adult and ...

    EPA Pesticide Factsheets

    Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of
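
    A minimal sketch of the competitive-inhibition assumption used when the hydrocarbon sub-models were combined is shown below: each substrate inflates the apparent Km of the others in the liver clearance term. Every rate constant, volume, and initial concentration is hypothetical, and a real PBPK model carries many more compartments.

      # Two-substrate hepatic clearance with competitive metabolic inhibition;
      # all parameter values are hypothetical placeholders.
      import numpy as np
      from scipy.integrate import solve_ivp

      VMAX = np.array([10.0, 6.0])   # umol/h, hypothetical
      KM   = np.array([0.5, 1.5])    # umol/L, hypothetical
      VLIV = 1.0                     # L, hypothetical liver volume

      def rates(t, c):
          """Each substrate inflates the other's apparent Km (competitive)."""
          dc = np.empty(2)
          for i in (0, 1):
              j = 1 - i
              v = VMAX[i] * c[i] / (KM[i] * (1 + c[j] / KM[j]) + c[i])
              dc[i] = -v / VLIV
          return dc

      sol = solve_ivp(rates, (0, 6), y0=[2.0, 2.0])
      print("concentrations after 6 h:", sol.y[:, -1])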

  16. Foundations for statistical-physical precipitation retrieval from passive microwave satellite measurements. I - Brightness-temperature properties of a time-dependent cloud-radiation model

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Mugnai, Alberto; Cooper, Harry J.; Tripoli, Gregory J.; Xiang, Xuwu

    1992-01-01

    The relationship between emerging microwave brightness temperatures (T(B)s) and vertically distributed mixtures of liquid and frozen hydrometeors was investigated, using a cloud-radiation model, in order to establish the framework for a hybrid statistical-physical rainfall retrieval algorithm. Although strong relationships were found between the T(B) values and various rain parameters, these correlations are misleading in that the T(B)s are largely controlled by fluctuations in the ice-particle mixing ratios, which in turn are highly correlated to fluctuations in liquid-particle mixing ratios. However, the empirically based T(B)-rain-rate (T(B)-RR) algorithms can still be used as tools for estimating precipitation if the hydrometeor profiles used for T(B)-RR algorithms are not specified in an ad hoc fashion.

  17. Molecular simulations for adsorption and separation of natural gas in IRMOF-1 and Cu-BTC metal-organic frameworks.

    PubMed

    Martín-Calvo, Ana; García-Pérez, Elena; Manuel Castillo, Juan; Calero, Sofia

    2008-12-21

    We use Monte Carlo simulations to study the adsorption and separation of the natural gas components in IRMOF-1 and Cu-BTC metal-organic frameworks. We computed the adsorption isotherms of the pure components and of binary and five-component mixtures, analyzing the siting of the molecules in the structure at the different loadings. The bulk compositions studied for the mixtures were 50 : 50 and 90 : 10 for CH4-CO2, 90 : 10 for N2-CO2, and 95 : 2.0 : 1.5 : 1.0 : 0.5 for the CH4-C2H6-N2-CO2-C3H8 mixture. We chose this composition because it is similar to an average sample of natural gas. Our simulations show that CO2 is preferentially adsorbed over propane, ethane, methane and N2 in the complete pressure range under study. Longer alkanes are favored over shorter alkanes, and the lowest adsorption corresponds to N2. Though IRMOF-1 has a significantly higher adsorption capacity than Cu-BTC, the adsorption selectivity of CO2 over CH4 and N2 is found to be higher in the latter, showing that the separation efficiency is largely affected by the shape, the atomic composition and the type of linkers of the structure.

  18. Mixture-mixture design for the fingerprint optimization of chromatographic mobile phases and extraction solutions for Camellia sinensis.

    PubMed

    Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S

    2007-07-09

    A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.

  19. Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.

    2017-10-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) a one-fuel (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air) are developed; the models for multi-fuel mixtures are proposed for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier’s principle, and their constants have a clear physical meaning. The advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of the detonation wave in stoichiometric and fuel-rich methane-air mixtures and in a stoichiometric methane-oxygen mixture. The dominant detonation cell size determined in the calculations is in good agreement with all known experimental data.

  20. Fitting a Mixture Item Response Theory Model to Personality Questionnaire Data: Characterizing Latent Classes and Investigating Possibilities for Improving Prediction

    ERIC Educational Resources Information Center

    Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk

    2008-01-01

    Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…

  1. A mathematical framework for combining decisions of multiple experts toward accurate and remote diagnosis of malaria using tele-microscopy.

    PubMed

    Mavandadi, Sam; Feng, Steve; Yu, Frank; Dimitrov, Stoyan; Nielsen-Saines, Karin; Prescott, William R; Ozcan, Aydogan

    2012-01-01

    We propose a methodology for digitally fusing diagnostic decisions made by multiple medical experts in order to improve accuracy of diagnosis. Toward this goal, we report an experimental study involving nine experts, where each one was given more than 8,000 digital microscopic images of individual human red blood cells and asked to identify malaria infected cells. The results of this experiment reveal that even highly trained medical experts are not always self-consistent in their diagnostic decisions and that there exists a fair level of disagreement among experts, even for binary decisions (i.e., infected vs. uninfected). To tackle this general medical diagnosis problem, we propose a probabilistic algorithm to fuse the decisions made by trained medical experts to robustly achieve higher levels of accuracy when compared to individual experts making such decisions. By modelling the decisions of experts as a three component mixture model and solving for the underlying parameters using the Expectation Maximisation algorithm, we demonstrate the efficacy of our approach which significantly improves the overall diagnostic accuracy of malaria infected cells. Additionally, we present a mathematical framework for performing 'slide-level' diagnosis by using individual 'cell-level' diagnosis data, shedding more light on the statistical rules that should govern the routine practice in examination of e.g., thin blood smear samples. This framework could be generalized for various other tele-pathology needs, and can be used by trained experts within an efficient tele-medicine platform.
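
    The fusion step can be illustrated with a short EM scheme in the spirit of the paper's mixture approach — here a two-class Dawid-Skene-style model with per-expert sensitivity and specificity, whereas the authors fit a three-component mixture. All expert error rates and decisions below are synthetic.

      # EM fusion of binary expert calls: d[e, c] = expert e's call on cell c
      # (1 = infected). A two-class stand-in for the paper's three-component
      # mixture; all data are simulated.
      import numpy as np

      rng = np.random.default_rng(0)
      n_exp, n_cell, prev = 9, 2000, 0.1
      z = rng.random(n_cell) < prev                       # latent truth
      se, sp = rng.uniform(.7, .95, n_exp), rng.uniform(.85, .99, n_exp)
      d = np.where(z, rng.random((n_exp, n_cell)) < se[:, None],
                      rng.random((n_exp, n_cell)) < 1 - sp[:, None]).astype(int)

      pi, se_h, sp_h = 0.5, np.full(n_exp, .8), np.full(n_exp, .8)
      for _ in range(50):                                 # EM iterations
          # E-step: posterior probability that each cell is infected
          l1 = np.prod(np.where(d, se_h[:, None], 1 - se_h[:, None]), 0) * pi
          l0 = np.prod(np.where(d, 1 - sp_h[:, None], sp_h[:, None]), 0) * (1 - pi)
          g = l1 / (l1 + l0)
          # M-step: update prevalence, sensitivities, specificities
          pi = g.mean()
          se_h = (d * g).sum(1) / g.sum()
          sp_h = ((1 - d) * (1 - g)).sum(1) / (1 - g).sum()
      print("fused accuracy:", ((g > .5) == z).mean())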

  2. Towards an Online Seizure Advisory System-An Adaptive Seizure Prediction Framework Using Active Learning Heuristics.

    PubMed

    Karuppiah Ramachandran, Vignesh Raja; Alblas, Huibert J; Le, Duc V; Meratnia, Nirvana

    2018-05-24

    In the last decade, seizure prediction systems have gained a lot of attention because of their enormous potential to largely improve the quality of life of epileptic patients. The accuracy of prediction algorithms for detecting seizures in real-world applications is largely limited because the brain signals are inherently uncertain and affected by various factors, such as environment, age, drug intake, etc., in addition to the internal artefacts that occur during the recording of the brain signals. To deal with such ambiguity, researchers typically turn to active learning, which selects the most ambiguous data to be annotated by an expert and updates the classification model dynamically. However, selecting the particular data from a pool of large ambiguous datasets to be labelled by an expert is still a challenging problem. In this paper, we propose an active learning-based prediction framework that aims to improve prediction accuracy with a minimum number of labelled data. The core technique of our framework is to employ the Bernoulli-Gaussian Mixture model (BGMM) to determine the feature samples with the most ambiguity, to be annotated by an expert. By doing so, our approach facilitates expert intervention as well as increasing medical reliability. We evaluate seven different classifiers in terms of the classification time and memory required. An active learning framework built on top of the best-performing classifier is evaluated in terms of the annotation effort required to achieve a high level of prediction accuracy. The results show that our approach can achieve the same accuracy as a Support Vector Machine (SVM) classifier using only 20% of the labelled data, and also improves prediction accuracy even under noisy conditions.
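
    The core query step — send the samples the mixture model is least sure about to the expert — can be sketched as follows. A plain Gaussian mixture stands in for the paper's Bernoulli-Gaussian mixture, and the feature data and batch size are illustrative assumptions.

      # Ambiguity-based query selection: fit a two-component mixture to the
      # unlabelled features and pick the samples whose class posteriors are
      # closest to 0.5/0.5 for expert annotation.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0.0, 1, (300, 4)),    # synthetic feature vectors
                     rng.normal(1.5, 1, (300, 4))])

      gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
      post = gmm.predict_proba(X)
      ambiguity = 1 - np.abs(post[:, 0] - post[:, 1])  # 1 when posteriors are equal
      query_idx = np.argsort(ambiguity)[-20:]          # 20 most ambiguous samples
      print("indices sent to the expert:", query_idx)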

  3. Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis

    DTIC Science & Technology

    2005-07-25

    Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery. Limited spatial resolution permits different materials to be present in the area covered by a single pixel. In the linear mixture model, the pixel reflectance vector r is considered as the linear mixture of m1, m2, …, mP: r = Mα + n, (1) where n is included to account for noise.
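
    Under the model r = Mα + n with nonnegative abundances, unsupervised unmixing by nonnegative matrix factorization can be sketched as follows; the endmember count, spectra, and noise level are synthetic assumptions, and scikit-learn's generic NMF stands in for the constrained factorizations investigated in the report.

      # Linear unmixing via NMF: R ~ M A with M, A >= 0, where columns of R
      # are pixel spectra, M holds endmember signatures, A the abundances.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      bands, pixels, P = 50, 400, 3
      M_true = rng.random((bands, P))                 # endmember signatures
      A_true = rng.dirichlet(np.ones(P), pixels).T    # abundances (sum to 1)
      R = M_true @ A_true + 0.01 * rng.random((bands, pixels))

      nmf = NMF(n_components=P, init="nndsvda", max_iter=500, random_state=0)
      M_est = nmf.fit_transform(R)                    # estimated endmembers
      A_est = nmf.components_                         # estimated abundances
      print("reconstruction error:", nmf.reconstruction_err_)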

  4. Microstructure and hydrogen bonding in water-acetonitrile mixtures.

    PubMed

    Mountain, Raymond D

    2010-12-16

    The role of hydrogen bonding between water and acetonitrile in determining the microheterogeneity of the liquid mixture is examined using NPT molecular dynamics simulations. Mixtures of six rigid, three-site models for acetonitrile and one water model (SPC/E) were simulated to determine the amount of water-acetonitrile hydrogen bonding. Only one of the six acetonitrile models (TraPPE-UA) was able to reproduce both the liquid density and the experimental estimates of hydrogen bonding derived from Raman scattering of the CN stretch band or from NMR quadrupole relaxation measurements. A simple modification of the parameters of the acetonitrile models that provided poor estimates produced hydrogen-bonding results consistent with experiments for two of the models. Of these, only one of the modified models also accurately reproduced the density of the mixtures. The self-diffusion coefficient of liquid acetonitrile provided a final winnowing between the modified model and the successful, unmodified model. The unmodified model is provisionally recommended for simulations of water-acetonitrile mixtures.

  5. Analysis of pulsating spray flames propagating in lean two-phase mixtures with unity Lewis number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicoli, C.; Haldenwang, P.; Suard, S.

    2005-11-01

    Pulsating (or oscillatory) spray flames have recently been observed in experiments on two-phase combustion. Numerical studies have pointed out that such front oscillations can be obtained even with very simple models of homogeneous two-phase mixtures, including elementary vaporization schemes. The paper presents an analytical approach within the simple framework of the thermal-diffusive model, which is complemented by a vaporization rate independent of gas temperature, as soon as the latter reaches a certain thermal threshold (θ_v in reduced form). The study involves the Damkoehler number (Da), the ratio of chemical reaction rate to vaporization rate, and the Zeldovich number (Ze) as essential parameters. We use the standard asymptotic method based on matched expansions in terms of 1/Ze. Linear analysis of two-phase flame stability is performed by studying, in the absence of differential diffusive effects (unity Lewis number), the linear growth rate of 2-D perturbations added to steady plane solutions and characterized by wavenumber k in the direction transverse to spreading. A domain of existence is found for the pulsating regime. It corresponds to mixture characteristics often met in air-fuel two-phase systems: low boiling temperature (θ_v << 1), reaction rate not higher than vaporization rate (Da < 1, i.e., small droplets), and activation temperature assumed to be high compared with flame temperature (Ze ≥ 10). Satisfactory comparison with numerical simulations confirms the validity of the analytical approach; in particular, positive growth rates have been found for planar perturbations (k = 0) and for wrinkled fronts (k ≠ 0). Finally, comparison between predicted frequencies and experimental measurements is discussed.

  6. Development of Detonation Modeling Capabilities for Rocket Test Facilities: Hydrogen-Oxygen-Nitrogen Mixtures

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.

    2016-01-01

    The objective of the presented work was to develop validated computational fluid dynamics (CFD) based methodologies for predicting propellant detonations and their associated blast environments. Applications of interest were scenarios relevant to rocket propulsion test and launch facilities. All model development was conducted within the framework of the Loci/CHEM CFD tool due to its reliability and robustness in predicting high-speed combusting flow-fields associated with rocket engines and plumes. During the course of the project, verification and validation studies were completed for hydrogen-fueled detonation phenomena such as shock-induced combustion, confined detonation waves, vapor cloud explosions, and deflagration-to-detonation transition (DDT) processes. The DDT validation cases included predicting flame acceleration mechanisms associated with turbulent flame-jets and flow-obstacles. Excellent agreement between test data and model predictions was observed. The proposed CFD methodology was then successfully applied to model a detonation event that occurred during liquid oxygen/gaseous hydrogen rocket diffuser testing at NASA Stennis Space Center.

  7. Combined Molecular Dynamics Simulation-Molecular-Thermodynamic Theory Framework for Predicting Surface Tensions.

    PubMed

    Sresht, Vishnu; Lewandowski, Eric P; Blankschtein, Daniel; Jusufi, Arben

    2017-08-22

    A molecular modeling approach is presented with a focus on quantitative predictions of the surface tension of aqueous surfactant solutions. The approach combines classical Molecular Dynamics (MD) simulations with a molecular-thermodynamic theory (MTT) [Y. J. Nikas, S. Puvvada, D. Blankschtein, Langmuir 1992, 8, 2680]. The MD component is used to calculate thermodynamic and molecular parameters that are needed in the MTT model to determine the surface tension isotherm. The MD/MTT approach provides the important link between the surfactant bulk concentration, the experimental control parameter, and the surfactant surface concentration, the MD control parameter. We demonstrate the capability of the MD/MTT modeling approach on nonionic alkyl polyethylene glycol surfactants at the air-water interface and observe reasonable agreement of the predicted surface tensions and the experimental surface tension data over a wide range of surfactant concentrations below the critical micelle concentration. Our modeling approach can be extended to ionic surfactants and their mixtures with both ionic and nonionic surfactants at liquid-liquid interfaces.

  8. Investigating the validity of the Knudsen prescription for diffusivities in a mesoporous covalent organic framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishna, Rajamani; van Baten, Jasper M.

    2011-04-27

    Molecular dynamics (MD) simulations were performed to determine the self-diffusivity (D_i,self) and the Maxwell–Stefan diffusivity (Đ_i) of hydrogen, argon, carbon dioxide, methane, ethane, propane, n-butane, n-pentane, and n-hexane in BTP-COF, a covalent organic framework (COF) with one-dimensional 3.4-nm-sized channels. The MD simulations show that the zero-loading diffusivity (Đ_i(0)) is consistently lower, by up to a factor of 10, than the Knudsen diffusivity (D_i,Kn). The ratio Đ_i(0)/D_i,Kn is found to correlate with the isosteric heat of adsorption, which, in turn, is a reflection of the binding energy for adsorption on the pore walls: the stronger the binding energy, the lower the ratio Đ_i(0)/D_i,Kn. The diffusion selectivity, which is defined by the ratio D_1,self/D_2,self for binary mixtures, was determined to be significantly different from the Knudsen selectivity (M_2/M_1)^(1/2), where M_i is the molar mass of species i. For mixtures in which component 2 is more strongly adsorbed than component 1, the ratio (D_1,self/D_2,self)/(M_2/M_1)^(1/2) has values in the range of 1-10; the departures from the Knudsen selectivity increased with increasing differences in adsorption strengths of the constituent species. The results of this study have implications for the modeling of diffusion within mesoporous structures, such as MCM-41 and SBA-15.
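
    The Knudsen prescription being tested is a closed-form quantity and can be evaluated directly: for a cylindrical pore of diameter d_p, D_Kn = (d_p/3)·sqrt(8RT/(πM)), with binary Knudsen selectivity (M2/M1)^(1/2). The 3.4-nm channel diameter comes from the abstract; the temperature is an assumed illustrative value.

      # Knudsen diffusivity and selectivity for H2/CH4 in a 3.4-nm channel.
      import numpy as np

      R, T, d_pore = 8.314, 300.0, 3.4e-9          # J/(mol*K), K (assumed), m

      def d_knudsen(M):                            # M in kg/mol
          return (d_pore / 3.0) * np.sqrt(8 * R * T / (np.pi * M))

      M_H2, M_CH4 = 2.016e-3, 16.04e-3
      print("D_Kn(H2)  = %.2e m2/s" % d_knudsen(M_H2))
      print("D_Kn(CH4) = %.2e m2/s" % d_knudsen(M_CH4))
      print("Knudsen selectivity H2/CH4 = %.2f" % np.sqrt(M_CH4 / M_H2))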

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motevaselian, M. H.; Mashayak, S. Y.; Aluru, N. R., E-mail: aluru@illinois.edu

    Empirical potential-based quasi-continuum theory (EQT) provides a route to incorporate atomistic detail into a continuum framework such as the Nernst-Planck equation. EQT can also be used to construct a grand potential functional for classical density functional theory (cDFT). The combination of EQT and cDFT provides a simple and fast approach to predict the inhomogeneous density, potential profiles, and thermodynamic properties of confined fluids. We extend the EQT-cDFT approach to confined fluid mixtures and demonstrate it by simulating a mixture of methane and hydrogen inside slit-like channels of graphene. We show that the EQT-cDFT predictions for the structure of the confined fluid mixture compare well with the molecular dynamics simulation results. In addition, our results show that graphene slit nanopores exhibit a selective adsorption of methane over hydrogen.

  10. Blind source separation by sparse decomposition

    NASA Astrophysics Data System (ADS)

    Zibulevsky, Michael; Pearlmutter, Barak A.

    2000-04-01

    The blind source separation problem is to extract the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. This situation is common, e.g., in acoustics, radio, and medical signal processing. We exploit the property of the sources to have a sparse representation in a corresponding signal dictionary. Such a dictionary may consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals. Starting from the maximum a posteriori framework, which is applicable to the case of more sources than mixtures, we derive a few other categories of objective functions, which provide faster and more robust computations when there are an equal number of sources and mixtures. Our experiments with artificial signals and with musical sounds demonstrate significantly better separation than other known techniques.

  11. Reacting gas mixtures in the state-to-state approach: The chemical reaction rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kustova, Elena V.; Kremer, Gilberto M.

    2014-12-09

    In this work, chemically reacting mixtures in viscous flows are analyzed within the framework of the Boltzmann equation. By applying a modified Chapman-Enskog method to the system of Boltzmann equations, general expressions for the rates of chemical reactions and vibrational energy transitions are determined as functions of two thermodynamic forces: the velocity divergence and the affinity. As an application, chemically reacting mixtures of N2 across a shock wave are studied, where the lowest vibrational states are taken into account. Here we consider only the contributions from the first four single-quantum vibrational-translational energy transitions. It is shown that the contribution to the chemical reaction rate related to the affinity is much larger than that of the velocity divergence.

  12. Applications of the Simple Multi-Fluid Model to Correlations of the Vapor-Liquid Equilibrium of Refrigerant Mixtures Containing Carbon Dioxide

    NASA Astrophysics Data System (ADS)

    Akasaka, Ryo

    This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. Therefore, the model can be applied to mixtures for which experimental data are limited. Vapor-liquid equilibria (VLE) of the following seven mixtures have been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2-tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations of calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for the design and simulation of heat pumps and refrigeration systems using these mixtures as working fluids.

  13. Spatial occupancy models for large data sets

    USGS Publications Warehouse

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km2) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.

  14. Protonation of Different Goethite Surfaces - Unified Models for NaNO3 and NaCl Media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lutzenkirchen, Johannes; Boily, Jean F.; Gunneriusson, Lars

    2008-01-01

    Acid-base titration data for two goethite samples in sodium nitrate and sodium chloride media are discussed. The data are modelled with various surface complexation models in the framework of the MUlti SIte Complexation (MUSIC) model. Various assumptions with respect to the goethite morphology are considered in determining the site density of the surface functional groups. The differences among the various model applications are not statistically significant in terms of goodness of fit. More importantly, various published assumptions with respect to the goethite morphology (i.e., the contributions of different crystal planes and their repercussions on the “overall” site densities of the various surface functional groups) do not significantly affect the final model parameters. The simultaneous fit of the chloride and nitrate data results in electrolyte binding constants that are applicable over a wide range of electrolyte concentrations, including mixtures of chloride and nitrate. Model parameters for the high-surface-area goethite sample are in excellent agreement with parameters that were independently obtained by another group on different goethite titration data sets.

  15. Estimating occupancy and abundance using aerial images with imperfect detection

    USGS Publications Warehouse

    Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.

    2017-01-01

    Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data of sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated detection probability of sea otters to be 0.76, the same as visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.

  16. Different Approaches to Covariate Inclusion in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Li, Tongyun; Jiao, Hong; Macready, George B.

    2016-01-01

    The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…

  17. A compressibility based model for predicting the tensile strength of directly compressed pharmaceutical powder mixtures.

    PubMed

    Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J

    2017-10-05

    A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design.

  18. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling: implementation and discussion.

    PubMed

    Depaoli, Sarah; van de Schoot, Rens; van Loey, Nancy; Sijbrandij, Marit

    2015-01-01

    After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk of developing posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into distinct groups exhibiting different patterns of PTSD (Galatzer-Levy, 2015). Currently, empirical evidence points to four distinct trajectories of PTSD patterns in those who have experienced burn trauma. These trajectories are labeled as: resilient, recovery, chronic, and delayed onset trajectories (e.g., Bonanno, 2004; Bonanno, Brewin, Kaniasty, & Greca, 2010; Maercker, Gäbler, O'Neil, Schützwohl, & Müller, 2013; Pietrzak et al., 2013). The delayed onset trajectory affects only a small group of individuals, that is, about 4-5% (O'Donnell, Elliott, Lau, & Creamer, 2007). In addition to its low frequency, the later onset of this trajectory may contribute to the fact that these individuals can be easily overlooked by professionals. In this special symposium on Estimating PTSD trajectories (Van de Schoot, 2015a), we illustrate how to properly identify this small group of individuals through the Bayesian estimation framework, using previous knowledge incorporated through priors (see, e.g., Depaoli & Boyajian, 2014; Van de Schoot, Broere, Perryck, Zondervan-Zwijnenburg, & Van Loey, 2015). We used latent growth mixture modeling (LGMM) (Van de Schoot, 2015b) to estimate PTSD trajectories across 4 years that followed a traumatic burn. We demonstrate and compare results from traditional (maximum likelihood) and Bayesian estimation using priors (see, Depaoli, 2012, 2013). Further, we discuss where priors come from and how to define them in the estimation process. We demonstrate that only the Bayesian approach results in the desired theory-driven solution of PTSD trajectories. Since the priors are chosen subjectively, we also present a sensitivity analysis of the Bayesian results to illustrate how to check the impact of the prior knowledge integrated into the model. We conclude with recommendations and guidelines for researchers looking to implement theory-driven LGMM, and we tailor this discussion to the context of PTSD research.

  19. Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors

    ERIC Educational Resources Information Center

    Guerra-Peña, Kiero; Steinley, Douglas

    2016-01-01

    Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…

  20. Solubility modeling of refrigerant/lubricant mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michels, H.H.; Sienel, T.H.

    1996-12-31

    A general model for predicting the solubility properties of refrigerant/lubricant mixtures has been developed based on applicable theory for the excess Gibbs energy of non-ideal solutions. In our approach, flexible thermodynamic forms are chosen to describe the properties of both the gas and liquid phases of refrigerant/lubricant mixtures. After an extensive study of models for describing non-ideal liquid effects, the Wohl-suffix equations, which have been extensively utilized in the analysis of hydrocarbon mixtures, have been developed into a general form applicable to mixtures where one component is a POE lubricant. In the present study we have analyzed several POEs where structural and thermophysical property data were available. Data were also collected from several sources on the solubility of refrigerant/lubricant binary pairs. We have developed a computer code (NISC), based on the Wohl model, that predicts dew point or bubble point conditions over a wide range of composition and temperature. Our present analysis covers mixtures containing up to three refrigerant molecules and one lubricant. The present code can be used to analyze the properties of R-410a and R-407c in mixtures with a POE lubricant. Comparisons with other models, such as the Wilson or modified Wilson equations, indicate that the Wohl-suffix equations yield more reliable predictions for HFC/POE mixtures.

  1. Volumetric image classification using homogeneous decomposition and dictionary learning: A study using retinal optical coherence tomography for detecting age-related macular degeneration.

    PubMed

    Albarrak, Abdulrahman; Coenen, Frans; Zheng, Yalin

    2017-01-01

    Three-dimensional (3D) (volumetric) diagnostic imaging techniques are indispensable with respect to the diagnosis and management of many medical conditions. However there is a lack of automated diagnosis techniques to facilitate such 3D image analysis (although some support tools do exist). This paper proposes a novel framework for volumetric medical image classification founded on homogeneous decomposition and dictionary learning. In the proposed framework each image (volume) is recursively decomposed until homogeneous regions are arrived at. Each region is represented using a Histogram of Oriented Gradients (HOG) which is transformed into a set of feature vectors. The Gaussian Mixture Model (GMM) is then used to generate a "dictionary" and the Improved Fisher Kernel (IFK) approach is used to encode feature vectors so as to generate a single feature vector for each volume, which can then be fed into a classifier generator. The principal advantage offered by the framework is that it does not require the detection (segmentation) of specific objects within the input data. The nature of the framework is fully described. A wide range of experiments was conducted with which to analyse the operation of the proposed framework and these are also reported fully in the paper. Although the proposed approach is generally applicable to 3D volumetric images, the focus for the work is 3D retinal Optical Coherence Tomography (OCT) images in the context of the diagnosis of Age-related Macular Degeneration (AMD). The results indicate that excellent diagnostic predictions can be produced using the proposed framework.
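
    The encoding stage — local descriptors pooled into a single vector per volume through a GMM "dictionary" and the Improved Fisher Kernel — can be sketched for the diagonal-covariance case as below. Random vectors stand in for the HOG descriptors, and the dictionary size and descriptor dimension are illustrative assumptions.

      # Improved Fisher vector encoding of a set of local descriptors under
      # a diagonal-covariance GMM dictionary.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      def fisher_vector(X, gmm):
          """IFK encoding: gradients w.r.t. means and variances, then
          power and L2 normalization."""
          N = X.shape[0]
          g = gmm.predict_proba(X)                       # posteriors, (N, K)
          mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
          u = (X[:, None, :] - mu) / np.sqrt(var)        # standardized residuals
          d_mu = (g[..., None] * u).sum(0) / (N * np.sqrt(w))[:, None]
          d_var = (g[..., None] * (u ** 2 - 1)).sum(0) / (N * np.sqrt(2 * w))[:, None]
          fv = np.concatenate([d_mu.ravel(), d_var.ravel()])
          fv = np.sign(fv) * np.sqrt(np.abs(fv))         # power normalization
          return fv / (np.linalg.norm(fv) + 1e-12)       # L2 normalization

      rng = np.random.default_rng(0)
      train_desc = rng.normal(size=(5000, 16))           # stand-in for HOG descriptors
      gmm = GaussianMixture(n_components=8, covariance_type="diag",
                            random_state=0).fit(train_desc)
      volume_desc = rng.normal(size=(300, 16))           # descriptors of one volume
      print("Fisher vector length:", fisher_vector(volume_desc, gmm).shape[0])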

  2. Computational Fluid Dynamics Modeling of Macrosegregation and Shrinkage in Large-Diameter Steel Roll Castings

    NASA Astrophysics Data System (ADS)

    Nastac, Laurentiu

    2011-12-01

    Minimizing macrosegregation and shrinkage in large cast steel mill rolls challenges the limits of commercial foundry technology. Processing improvements have been achieved by balancing the total heat input of casting with the rate of heat extraction from the surface of the roll in the mold. A submerged entry nozzle (SEN) technique that injects a dilute alloy addition through a nozzle into the partially solidified net-shaped roll ingot can mitigate both centerline segregation and midradius channel segregate conditions. The objective of this study is to optimize the melt chemistry, solidification, and SEN conditions to minimize centerline and midradius segregation, and then to improve the quality of the transition region between the outer shell and the diluted interior region. To accomplish this objective, a multiphase, multicomponent computational fluid dynamics (CFD) code was developed for studying the macrosegregation and shrinkage under various casting conditions for a 65-ton, 1.6-m-diameter steel roll. The developed CFD framework consists of solving for the volume fraction of phases (air and steel mixture), temperature, flow, and solute balance in multicomponent alloy systems. Thermal boundary conditions were determined by measuring the temperature in the mold at several radial depths and height locations. The thermophysical properties including viscosity of steel alloy used in the simulations are functions of temperature. The steel mixture in the species-transfer model consists of the following elements: Fe, Mn, Si, S, P, C, Cr, Mo, and V. Density and liquidus temperature of the steel mixture are locally affected by the segregation of these elements. The model predictions were validated against macrosegregation measured from pieces cut from the 65-ton roll. The effect of key processing parameters such as melt composition and superheat of both the shell and the dilute interior alloy are addressed. The influence of mold type and thickness on macrosegregation and shrinkage also are discussed.

  3. Mixture-based gatekeeping procedures in adaptive clinical trials.

    PubMed

    Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji

    2018-01-01

    Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.

  4. A Lagrangian mixing frequency model for transported PDF modeling

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.

  5. Transferring mixtures of chemicals from sediment to a bioassay using silicone-based passive sampling and dosing.

    PubMed

    Mustajärvi, Lukas; Eriksson-Wiklund, Ann-Kristin; Gorokhova, Elena; Jahnke, Annika; Sobek, Anna

    2017-11-15

    Environmental mixtures of chemicals consist of a countless number of compounds with unknown identity and quantity. Yet, chemical regulation is mainly built around the assessment of single chemicals. Existing frameworks for assessing the toxicity of mixtures require that both the chemical composition and quantity are known. Quantitative analyses of the chemical composition of environmental mixtures are however extremely challenging and resource-demanding. Bioassays may therefore serve as a useful approach for investigating the combined toxicity of environmental mixtures of chemicals in a cost-efficient and holistic manner. In this study, an unknown environmental mixture of bioavailable semi-hydrophobic to hydrophobic chemicals was sampled from a contaminated sediment in a coastal Baltic Sea area using silicone polydimethylsiloxane (PDMS) as an equilibrium passive sampler. The chemical mixture was transferred to a PDMS-based passive dosing system, and its applicability was demonstrated using green algae Tetraselmis suecica in a cell viability assay. The proportion of dead cells increased significantly with increasing exposure level and in a dose-response manner. At an ambient concentration, the proportion of dead cells in the population was nearly doubled compared to the control; however, the difference was non-significant due to high inter-replicate variability and a low number of replicates. The validation of the test system regarding equilibrium sampling, loading efficiency into the passive dosing polymer, stability of the mixture composition, and low algal mortality in control treatments demonstrates that combining equilibrium passive sampling and passive dosing is a promising tool for investigating the toxicity of bioavailable semi-hydrophobic and hydrophobic chemicals in complex environmental mixtures.

  6. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    USGS Publications Warehouse

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
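
    For reference, the binomial N-mixture model under study can be simulated and fit in a few lines by summing the latent abundances out of the likelihood up to a bound; a maximum-likelihood fit stands in here for the Bayesian analysis, and all parameter values are illustrative.

      # Binomial N-mixture model: N_i ~ Poisson(lambda), y_it ~ Binomial(N_i, p);
      # lambda and p recovered by maximum likelihood, summing N out up to Nmax.
      import numpy as np
      from scipy.stats import poisson, binom
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      lam_true, p_true, sites, T, Nmax = 4.0, 0.5, 150, 3, 80
      N = rng.poisson(lam_true, sites)
      y = rng.binomial(N[:, None], p_true, (sites, T))

      def nll(theta):
          lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
          n = np.arange(Nmax + 1)
          prior = poisson.pmf(n, lam)                            # P(N = n)
          # P(y_i | N = n) for every site and every n, product over replicates
          lik = np.prod(binom.pmf(y[:, :, None], n, p), axis=1)  # (sites, Nmax+1)
          return -np.log(lik @ prior + 1e-300).sum()

      fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
      print("lambda-hat = %.2f, p-hat = %.2f" %
            (np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))))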

  7. AN INTEGRATED COMPUTATIONAL FRAMEWORK FOR THE INTERPRETATION OF ORGANOPHOSPHORUS PESTICIDE BIOMARKERS

    EPA Science Inventory

    We anticipate that the software tool developed, and targeted data acquired, will be useful in the interpretation of biomarkers indicative of exposure to OP insecticide mixtures, including the effects of population and dose variability and uncertainty. Therefore, we expect tha...

  8. Process Dissociation and Mixture Signal Detection Theory

    ERIC Educational Resources Information Center

    DeCarlo, Lawrence T.

    2008-01-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely…

  9. Investigating Approaches to Estimating Covariate Effects in Growth Mixture Modeling: A Simulation Study

    ERIC Educational Resources Information Center

    Li, Ming; Harring, Jeffrey R.

    2017-01-01

    Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…

  10. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  11. Selecting salient frames for spatiotemporal video modeling and segmentation.

    PubMed

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can imply some object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.
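
    The saliency idea can be sketched by alternating two steps: score every frame by its average log-likelihood under a GMM fitted to all frames, then refit the model on the highest-scoring frames. The paper couples the two steps inside a modified EM; the stand-in below keeps them separate, and the synthetic frame features and selection size are assumptions.

      # Frame saliency as average log-likelihood under a GMM, followed by
      # refitting the GMM on the most salient frames only.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      frames = [rng.normal(rng.integers(0, 3), 1, (200, 6)) for _ in range(40)]

      gmm = GaussianMixture(n_components=3, random_state=0).fit(np.vstack(frames))
      saliency = np.array([gmm.score(f) for f in frames])   # mean log-likelihood
      top = np.argsort(saliency)[-10:]                      # 10 most salient frames
      gmm_refined = GaussianMixture(n_components=3, random_state=0).fit(
          np.vstack([frames[i] for i in top]))
      print("selected frames:", sorted(top.tolist()))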

  12. Approximation of the breast height diameter distribution of two-cohort stands by mixture models I Parameter estimation

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2013-01-01

    This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures used to estimate the parameters of mixture distributions, and analysed a variety of mixture models for approximating empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...
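
    A two-component Weibull mixture of the kind considered can be fitted with a short EM loop whose M-steps are weighted maximum-likelihood fits; the median-split initialization below is just one of the initialization choices such a study compares, and the synthetic dbh data are illustrative.

      # EM fit of a two-component Weibull mixture (scipy's weibull_min with
      # location fixed at 0); initial values from a median split.
      import numpy as np
      from scipy.stats import weibull_min
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      dbh = np.concatenate([weibull_min.rvs(2.5, scale=15, size=300, random_state=1),
                            weibull_min.rvs(4.0, scale=40, size=200, random_state=2)])

      def weighted_mle(x, w, start):
          """Weighted Weibull MLE over log(shape), log(scale)."""
          nll = lambda t: -(w * weibull_min.logpdf(x, np.exp(t[0]),
                                                   scale=np.exp(t[1]))).sum()
          return np.exp(minimize(nll, np.log(start), method="Nelder-Mead").x)

      params = [weighted_mle(dbh, dbh < np.median(dbh), [2.0, 15.0]),
                weighted_mle(dbh, dbh >= np.median(dbh), [2.0, 40.0])]
      pi = 0.5
      for _ in range(30):                                  # EM iterations
          f = np.array([weibull_min.pdf(dbh, c, scale=s) for c, s in params])
          num = np.vstack([pi * f[0], (1 - pi) * f[1]])
          g = num / num.sum(0)                             # responsibilities
          pi = g[0].mean()
          params = [weighted_mle(dbh, g[k], params[k]) for k in (0, 1)]
      print("mixing proportion:", round(pi, 3),
            "components (shape, scale):", params)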

  13. Detection of mastitis in dairy cattle by use of mixture models for repeated somatic cell scores: a Bayesian approach via Gibbs sampling.

    PubMed

    Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B

    2003-11-01

    The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
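
    The core of such an analysis — a Gibbs sampler for a two-component normal mixture — can be sketched without the genetic and permanent-environment random effects, and with a common variance rather than heteroscedastic components. Priors are flat or vague, and the data and chain settings are illustrative.

      # Gibbs sampler for a two-component normal mixture of somatic cell
      # scores; a stripped-down stand-in for the full hierarchical model.
      import numpy as np

      rng = np.random.default_rng(0)
      y = np.concatenate([rng.normal(2.5, 1.0, 700),     # "healthy" scores
                          rng.normal(5.5, 1.0, 300)])    # "diseased" scores
      n = len(y)

      mu, sig2, pm = np.array([1.0, 6.0]), 1.0, 0.5
      keep = []
      for it in range(2000):
          # 1) sample component memberships given current parameters
          d = np.exp(-(y[:, None] - mu) ** 2 / (2 * sig2)) * np.array([1 - pm, pm])
          z = rng.random(n) < d[:, 1] / d.sum(1)
          # 2) sample means (flat priors), common variance, mixing proportion
          for k, m in ((0, ~z), (1, z)):
              mu[k] = rng.normal(y[m].mean(), np.sqrt(sig2 / m.sum()))
          resid = y - mu[z.astype(int)]
          sig2 = 1 / rng.gamma(n / 2, 2 / (resid @ resid))   # inverse-gamma draw
          pm = rng.beta(1 + z.sum(), 1 + n - z.sum())
          if it >= 500:                                      # burn-in discarded
              keep.append((pm, *mu, sig2))
      print("posterior means (Pm, mu0, mu1, sig2):",
            np.mean(keep, axis=0).round(2))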

  14. A hydrogeologic framework for characterizing summer streamflow sensitivity to climate warming in the Pacific Northwest, USA

    NASA Astrophysics Data System (ADS)

    Safeeq, M.; Grant, G. E.; Lewis, S. L.; Kramer, M. G.; Staab, B.

    2014-09-01

    Summer streamflows in the Pacific Northwest are largely derived from melting snow and groundwater discharge. As the climate warms, diminishing snowpack and earlier snowmelt will cause reductions in summer streamflow. Most regional-scale assessments of climate change impacts on streamflow use downscaled temperature and precipitation projections from general circulation models (GCMs) coupled with large-scale hydrologic models. Here we develop and apply an analytical hydrogeologic framework for characterizing summer streamflow sensitivity to a change in the timing and magnitude of recharge in a spatially explicit fashion. In particular, we incorporate the role of deep groundwater, which large-scale hydrologic models generally fail to capture, into streamflow sensitivity assessments. We validate our analytical streamflow sensitivities against two empirical measures of sensitivity derived using historical observations of temperature, precipitation, and streamflow from 217 watersheds. In general, empirically and analytically derived streamflow sensitivity values correspond. Although the selected watersheds cover a range of hydrologic regimes (e.g., rain-dominated, mixture of rain and snow, and snow-dominated), sensitivity validation was primarily driven by the snow-dominated watersheds, which are subjected to a wider range of change in recharge timing and magnitude as a result of increased temperature. Overall, two patterns emerge from this analysis: first, areas with high streamflow sensitivity also have higher summer streamflows as compared to low-sensitivity areas. Second, the level of sensitivity and spatial extent of highly sensitive areas diminishes over time as the summer progresses. Results of this analysis point to a robust, practical, and scalable approach that can help assess risk at the landscape scale, complement the downscaling approach, be applied to any climate scenario of interest, and provide a framework to assist land and water managers in adapting to an uncertain and potentially challenging future.

  15. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach.

    Treesearch

    Rafal Podlaski; Francis Roesch

    2014-01-01

    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...

  16. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio, E-mail: drpqam@cid.csic.es

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing the thyroid hormone synthesis. The present study used the intrafollicular T4-content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] was well predicted by a concentration addition concept (CA) model, whereas the response addition concept (RA) model better predicted the effect of dissimilarly acting binary mixtures of TGFDs [TPO-inhibitors and sodium-iodide symporter (NIS)-inhibitors]. However, the CA model provided better prediction of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO-inhibitor)-KClO4 (NIS-inhibitor) dosed at a fixed ratio of EC10, which provided similar CA and RA predictions, making it difficult to draw any conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition could be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of mixtures of goitrogens.
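
    For readers unfamiliar with the two additivity concepts, the sketch below contrasts CA and RA predictions for a hypothetical binary mixture described by log-logistic concentration-response curves; all parameter values are made up, and the actual study worked with measured IT4C responses.

```python
import numpy as np

def effect(c, ec50, slope):
    """Log-logistic concentration-response (fraction of maximal effect)."""
    return 1.0/(1.0 + (ec50/np.maximum(c, 1e-12))**slope)

ec50, slope = np.array([1.0, 4.0]), np.array([2.0, 1.5])  # two hypothetical TGFDs
p = np.array([0.5, 0.5])               # fixed mixture ratio by concentration
ctot = np.logspace(-2, 2, 200)         # total mixture concentrations

# Response addition (RA / independent action): combine effect probabilities
ra = 1.0 - np.prod([1.0 - effect(ctot*p[i], ec50[i], slope[i]) for i in range(2)],
                   axis=0)

# Concentration addition (CA): 1/ECx(mix) = sum_i p_i/ECx(i) at each effect level x
x = np.linspace(0.01, 0.99, 99)
ecx = ec50[:, None]*(x/(1 - x))**(1.0/slope[:, None])  # per-component inverse curves
ecx_mix = 1.0/np.sum(p[:, None]/ecx, axis=0)           # mixture conc. giving effect x
```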

  17. Collapse of resilience patterns in generalized Lotka-Volterra dynamics and beyond.

    PubMed

    Tu, Chengyi; Grilli, Jacopo; Schuessler, Friedrich; Suweis, Samir

    2017-06-01

    Recently, a theoretical framework aimed at separating the roles of dynamics and topology in multidimensional systems has been developed [Gao et al., Nature (London) 530, 307 (2016), 10.1038/nature16948]. The validity of their method is assumed to hold depending on two main hypotheses: (i) the network determined by the interaction between pairs of nodes has negligible degree correlations; (ii) the node activities are uniform across nodes on both the drift and the pairwise interaction functions. Moreover, the authors consider only positive (mutualistic) interactions. Here we show that the conditions proposed by Gao and collaborators [Nature (London) 530, 307 (2016), 10.1038/nature16948] are neither sufficient nor necessary to guarantee that their method works in general, and that the validity of their results is not independent of the model chosen within the class of dynamics they considered. Indeed, we find that a new condition poses effective limitations to their framework, and we provide quantitative predictions of the quality of the one-dimensional collapse as a function of the properties of interaction networks and stable dynamics using results from random matrix theory. We also find that multidimensional reduction can work for an interaction matrix with a mixture of positive and negative signs, opening up an application of the framework to food webs, neuronal networks, and social and economic interactions.

  18. A stress ecology framework for comprehensive risk assessment of diffuse pollution.

    PubMed

    van Straalen, Nico M; van Gestel, Cornelis A M

    2008-12-01

    Environmental pollution is traditionally classified as either localized or diffuse. Local pollution comes from a point source that emits a well-defined cocktail of chemicals, distributed in the environment in the form of a gradient around the source. Diffuse pollution comes from many sources, small and large, that cause an erratic distribution of chemicals, interacting with those from other sources into a complex mixture of low to moderate concentrations over a large area. There is no good method for ecological risk assessment of such types of pollution. We argue that effects of diffuse contamination in the field must be analysed in the wider framework of stress ecology. A multivariate approach can be applied to filter effects of contaminants from the many interacting factors at the ecosystem level. Four case studies are discussed (1) functional and structural properties of terrestrial model ecosystems, (2) physiological profiles of microbial communities, (3) detritivores in reedfield litter, and (4) benthic invertebrates in canal sediment. In each of these cases the data were analysed by multivariate statistics and associations between ecological variables and the levels of contamination were established. We argue that the stress ecology framework is an appropriate assessment instrument for discriminating effects of pollution from other anthropogenic disturbances and naturally varying factors.

  19. Collapse of resilience patterns in generalized Lotka-Volterra dynamics and beyond

    NASA Astrophysics Data System (ADS)

    Tu, Chengyi; Grilli, Jacopo; Schuessler, Friedrich; Suweis, Samir

    2017-06-01

    Recently, a theoretical framework aimed at separating the roles of dynamics and topology in multidimensional systems has been developed [Gao et al., Nature (London) 530, 307 (2016), 10.1038/nature16948]. The validity of their method is assumed to hold depending on two main hypotheses: (i) the network determined by the interaction between pairs of nodes has negligible degree correlations; (ii) the node activities are uniform across nodes on both the drift and the pairwise interaction functions. Moreover, the authors consider only positive (mutualistic) interactions. Here we show that the conditions proposed by Gao and collaborators [Nature (London) 530, 307 (2016), 10.1038/nature16948] are neither sufficient nor necessary to guarantee that their method works in general, and that the validity of their results is not independent of the model chosen within the class of dynamics they considered. Indeed, we find that a new condition poses effective limitations to their framework, and we provide quantitative predictions of the quality of the one-dimensional collapse as a function of the properties of interaction networks and stable dynamics using results from random matrix theory. We also find that multidimensional reduction can work for an interaction matrix with a mixture of positive and negative signs, opening up an application of the framework to food webs, neuronal networks, and social and economic interactions.

  20. Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong

    2018-03-01

    Traffic safety research has developed spatiotemporal models to explore the variations in the spatial pattern of crash risk over time. Many studies observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research on the comparison of different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models with varying complexity due to the different temporal treatments: (I) linear time trend; (II) quadratic time trend; (III) autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility compared to the traditional linear space-time interaction. The mixture component allows the accommodation of global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was evidently more complex due to the addition of information borrowed from neighboring years, but this addition of parameters allowed a significant advantage in posterior deviance which subsequently benefited the overall fit to crash data. Base models were also developed to compare the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models due to the advantage of much lower deviance. For cross-validation comparison of predictive accuracy, the linear time trend model was judged the best as it recorded the highest value of log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data for model development. Under each criterion, observed crash counts were compared with three types of data containing Bayesian estimated, normal predicted, and model replicated ones. The linear model again performed the best in most scenarios except one case using model replicated data and two cases involving prediction without including random effects. These phenomena indicated the mediocre performance of the linear trend when random effects were excluded for evaluation. This might be due to the flexible mixture space-time interaction, which can efficiently absorb the residual variability escaping from the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as the mixture ones generated more precise estimated crash counts across all four models, suggesting that the advantages associated with the mixture component at model fit were transferable to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of the random effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity.

  1. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    PubMed

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.
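
    The element-level classification idea can be illustrated with a small simulation: responses are generated from two Rasch-type processes whose mixing probability depends on (log) response time, and each individual response, not the full response vector, receives a posterior process membership. All parameter choices here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
P, I = 500, 20                                   # persons, items
theta = rng.normal(size=(P, 1))                  # abilities
b_fast, b_slow = rng.normal(-0.5, 1, I), rng.normal(0.5, 1, I)  # per-process difficulty
logrt = rng.normal(size=(P, I))                  # standardized log response times

# Within-subject mixing: the "slow" process becomes likelier as RT grows
pi_slow = 1/(1 + np.exp(-1.5*logrt))
slow = rng.random((P, I)) < pi_slow
irf = lambda b: 1/(1 + np.exp(-(theta - b)))     # Rasch-type item response function
y = (rng.random((P, I)) < np.where(slow, irf(b_slow), irf(b_fast))).astype(int)

# Element-wise posterior classification: each response, not the full vector
lik = lambda b: np.where(y == 1, irf(b), 1 - irf(b))
post_slow = pi_slow*lik(b_slow)/(pi_slow*lik(b_slow) + (1 - pi_slow)*lik(b_fast))
```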

  2. Microporous metal organic framework [M2(hfipbb)2(ted)] (M=Zn, Co; H2hfipbb=4,4-(hexafluoroisopropylidene)-bis(benzoic acid); ted=triethylenediamine): Synthesis, structure analysis, pore characterization, small gas adsorption and CO2/N2 separation properties

    NASA Astrophysics Data System (ADS)

    Xu, William W.; Pramanik, Sanhita; Zhang, Zhijuan; Emge, Thomas J.; Li, Jing

    2013-04-01

    Carbon dioxide is a greenhouse gas that is a major contributor to global warming. Developing methods that can effectively capture CO2 is the key to reducing its emission to the atmosphere. Recent research shows that microporous metal organic frameworks (MOFs) are emerging as a promising family of adsorbents for adsorption-based capture and separation of CO2 from power plant waste gases. In this work we report the synthesis, crystal structure analysis and pore characterization of two microporous MOF structures, [M2(hfipbb)2(ted)] (M=Zn (1), Co (2); H2hfipbb=4,4-(hexafluoroisopropylidene)-bis(benzoic acid); ted=triethylenediamine). The CO2 and N2 adsorption experiments and IAST calculations are carried out on [Zn2(hfipbb)2(ted)] under conditions that mimic post-combustion flue gas mixtures emitted from power plants. The results show that the framework interacts with CO2 strongly, giving rise to relatively high isosteric heats of adsorption (up to 28 kJ/mol) and high adsorption selectivity for CO2 over N2, making it promising for capturing and separating CO2 from CO2/N2 mixtures.
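
    As a sketch of the kind of IAST calculation reported here, the following computes a binary CO2/N2 selectivity from hypothetical single-site Langmuir fits of the pure-component isotherms; the parameter values are illustrative, not the fitted values for [Zn2(hfipbb)2(ted)].

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical single-site Langmuir fits to the pure-component isotherms
qsat = {"CO2": 3.0, "N2": 2.0}     # saturation loadings, mmol/g
b    = {"CO2": 1.2, "N2": 0.05}    # affinities, 1/bar

def pi_reduced(gas, P):
    """Reduced spreading pressure of a Langmuir isotherm at pressure P."""
    return qsat[gas]*np.log(1.0 + b[gas]*P)

def iast_selectivity(p_co2, p_n2):
    # IAST: adsorbed-phase fraction x of CO2 equates the spreading pressures
    f = lambda x: pi_reduced("CO2", p_co2/x) - pi_reduced("N2", p_n2/(1 - x))
    x = brentq(f, 1e-9, 1 - 1e-9)
    return (x/(1 - x))/(p_co2/p_n2)

# Post-combustion-like flue gas: 15% CO2 / 85% N2 at 1 bar total pressure
print(iast_selectivity(0.15, 0.85))
```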

  3. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
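
    A mixture of exponentials is simple enough that its EM fit has closed-form updates. The sketch below recovers a two-component exponential mixture from simulated durations and evaluates the implied survival function S(s) = Σ_k w_k exp(−λ_k s); the data and component values are illustrative, not the search-query data of the paper.

```python
import numpy as np

def em_exp_mixture(t, K=2, iters=200):
    """EM for a K-component exponential mixture fitted to durations t."""
    lam = 1.0/np.quantile(t, np.linspace(0.2, 0.8, K))  # spread-out initial rates
    w = np.full(K, 1.0/K)
    for _ in range(iters):
        dens = w*lam*np.exp(-np.outer(t, lam))          # (n, K) weighted densities
        r = dens/dens.sum(axis=1, keepdims=True)        # responsibilities
        w = r.mean(axis=0)
        lam = r.sum(axis=0)/(r*t[:, None]).sum(axis=0)  # closed-form rate update
    return w, lam

rng = np.random.default_rng(0)
t = np.concatenate([rng.exponential(2.0, 5000), rng.exponential(30.0, 5000)])
w, lam = em_exp_mixture(t)
survival = lambda s: float(np.sum(w*np.exp(-lam*s)))    # S(s) = sum_k w_k exp(-lam_k s)
```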

  4. Structure-reactivity modeling using mixture-based representation of chemical reactions.

    PubMed

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

    We describe a novel approach of reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenated product and reactant descriptors or the difference between descriptors of products and reactants. This reaction representation does not need an explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimation of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
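
    A toy sketch of the mixture-based encoding: each molecule is reduced to fragment counts (a stand-in for the SiRMS descriptors), the reactant and product mixtures are summed separately, and the reaction is represented either by concatenation or by the product-minus-reactant difference. The fragment names below are hypothetical.

```python
from collections import Counter

def mol_descriptor(fragments):
    """Toy fragment-count descriptor for one molecule (stand-in for SiRMS)."""
    return Counter(fragments)

def reaction_vectors(reactants, products, keys):
    """Encode a reaction as its reactant mixture and its product mixture."""
    r = sum((mol_descriptor(m) for m in reactants), Counter())
    p = sum((mol_descriptor(m) for m in products), Counter())
    concat = [r[k] for k in keys] + [p[k] for k in keys]  # concatenated variant
    diff = [p[k] - r[k] for k in keys]                    # difference variant
    return concat, diff

# Hypothetical fragments for an E2-like transformation; no reaction center labeled
keys = ["C-C", "C-Br", "C=C", "H-Br"]
concat, diff = reaction_vectors(reactants=[["C-C", "C-C", "C-Br"]],
                                products=[["C-C", "C=C"], ["H-Br"]], keys=keys)
print(concat, diff)
```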

  5. An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol

    2016-01-01

    The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…

  6. A Systematic Investigation of Within-Subject and Between-Subject Covariance Structures in Growth Mixture Models

    ERIC Educational Resources Information Center

    Liu, Junhui

    2012-01-01

    The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…

  7. Effects of three veterinary antibiotics and their binary mixtures on two green alga species.

    PubMed

    Carusso, S; Juárez, A B; Moretton, J; Magdaleno, A

    2018-03-01

    The individual and combined toxicities of chlortetracycline (CTC), oxytetracycline (OTC) and enrofloxacin (ENF) have been examined in two green algae representative of the freshwater environment, the international standard strain Pseudokirchneriella subcapitata and the native strain Ankistrodesmus fusiformis. The toxicities of the three antibiotics and their mixtures were similar in both strains, although low concentrations of ENF and CTC + ENF were more toxic in A. fusiformis than in the standard strain. The toxicological interactions of binary mixtures were predicted using the two classical models of additivity, Concentration Addition (CA) and Independent Action (IA), and compared to the experimentally determined toxicities over a range of concentrations between 0.1 and 10 mg L-1. The CA model predicted the inhibition of algal growth in the three mixtures in P. subcapitata, and in the CTC + OTC and CTC + ENF mixtures in A. fusiformis. However, this model underestimated the experimental results obtained for the OTC + ENF mixture in A. fusiformis. The IA model did not predict the experimental toxicological effects of the three mixtures in either strain. The sum of the toxic units (TU) for the mixtures was calculated. According to these values, the binary mixtures CTC + ENF and OTC + ENF showed an additive effect, and the CTC + OTC mixture showed antagonism in P. subcapitata, whereas the three mixtures showed synergistic effects in A. fusiformis. Although A. fusiformis was isolated from a polluted river, it showed a sensitivity similar to that of P. subcapitata when exposed to binary mixtures of antibiotics.
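
    The toxic-unit bookkeeping used here is a one-liner; the sketch below sums TU_i = c_i/EC50_i for a hypothetical binary mixture evaluated at its observed mixture EC50, with made-up numbers, and notes the usual (study-dependent) reading of the result.

```python
import numpy as np

def mixture_toxic_units(conc, ec50):
    """Sum of toxic units TU_i = c_i/EC50_i for a mixture."""
    return float(np.sum(np.asarray(conc)/np.asarray(ec50)))

# Hypothetical single-compound EC50s (mg/L) and a 1:1 mixture dosed at its
# experimentally observed EC50; values are made up for illustration
tu = mixture_toxic_units(conc=[1.1, 1.1], ec50=[2.3, 2.1])
# Common reading (thresholds vary by study): TU ~ 1 additive,
# TU < 1 synergistic, TU > 1 antagonistic
print(tu)
```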

  8. Chemotaxis migration and morphogenesis of living colonies.

    PubMed

    Ben Amar, Martine

    2013-06-01

    Development of forms in living organisms is complex and fascinating. Morphogenetic theories that investigate these shapes range from discrete to continuous models, from variational elasticity to time-dependent fluid approaches. Here a mixture model is chosen to describe mass transport in a morphogenetic gradient: it gives a mathematical description of a mixture involving several constituents in mechanical interaction. This model, which is highly flexible, can incorporate many biological processes as well as complex interactions between cells and between cells and their environment. We use this model to derive a free-boundary problem that is easier to handle analytically. We solve it in the simplest geometry: an infinite linear front advancing with a constant velocity. In all the cases investigated here, such as 3D diffusion, increased mitotic activity at the border, and nonlinear laws for the uptake of morphogens or for the mobility coefficient, a planar front exists above a critical threshold for the mobility coefficient but becomes unstable just above the threshold at long wavelengths due to the existence of a Goldstone mode. This explains why sparse bacterial colonies exhibit dendritic patterns experimentally, in opposition to other colonies such as biofilms and epithelia, which are more compact. In the most unstable situation, where all the laws (diffusion, chemotaxis driving and chemoattractant uptake) are linear, we also show that the system can recover dynamic stability. A second threshold for the mobility exists, which has a lower value as the ratio between diffusion coefficients decreases. Within the framework of this model, where the biomass is treated mainly as a viscous and diffusive fluid, we show that the multiplicity of independent parameters in real biological experimental set-ups may explain the variety of observed patterns.

  9. Modeling quiescent phase transport of air bubbles induced by breaking waves

    NASA Astrophysics Data System (ADS)

    Shi, Fengyan; Kirby, James T.; Ma, Gangfeng

    Simultaneous modeling of both the acoustic phase and quiescent phase of breaking wave-induced air bubbles involves a large range of length scales from microns to meters and time scales from milliseconds to seconds, and is thus computationally unaffordable in a surfzone-scale computational domain. In this study, we use an air bubble entrainment formula in a two-fluid model to predict air bubble evolution in the quiescent phase of a breaking wave event. The breaking wave-induced air bubble entrainment is formulated by connecting the shear production at the air-water interface and the bubble number intensity with a certain bubble size spectrum observed in laboratory experiments. A two-fluid model is developed based on the partial differential equations of the gas-liquid mixture phase and the continuum bubble phase, which has multiple bubble size groups representing a polydisperse bubble population. An enhanced 2-DV VOF (Volume of Fluid) model with a k-ε turbulence closure is used to model the mixture phase. The bubble phase is governed by the advection-diffusion equations of the gas molar concentration and bubble intensity for groups of bubbles with different sizes. The model is used to simulate air bubble plumes measured in laboratory experiments. Numerical results indicate that, with an appropriate parameter in the air entrainment formula, the model is able to predict the main features of bubbly flows, as evidenced by reasonable agreement with measured void fraction. Bubbles larger than an intermediate radius of O(1 mm) make a major contribution to void fraction in the near-crest region. Smaller bubbles tend to penetrate deeper and stay longer in the water column, resulting in a significant contribution to the cross-sectional area of the bubble cloud. An underprediction of void fraction is found at the beginning of wave breaking when large air pockets take place. The core region of high void fraction predicted by the model is dislocated due to the use of the shear production in the algorithm for initial bubble entrainment. The study demonstrates a potential use of an entrainment formula in simulations of air bubble populations in a surfzone-scale domain. It also reveals some difficulties in the use of the two-fluid model for predicting large air pockets induced by wave breaking, and suggests that it may be necessary to use a gas-liquid two-phase model as the basic model framework for the mixture phase and to develop an algorithm to allow for transfer of discrete air pockets to the continuum bubble phase. A more theoretically justifiable air entrainment formulation should also be developed.

  10. Financial Crisis: A New Measure for Risk of Pension Fund Portfolios

    PubMed Central

    Cadoni, Marinella; Melis, Roberta; Trudda, Alessandro

    2015-01-01

    It has been argued that pension funds should have limitations on their asset allocation, based on the risk profile of the different financial instruments available on the financial markets. This issue proves to be highly relevant at times of market crisis, when a regulation establishing limits to risk taking for pension funds could prevent defaults. In this paper we present a framework for evaluating the risk level of a single financial instrument or a portfolio. By assuming that the log asset returns can be described by a multifractional Brownian motion, we evaluate the risk using the time-dependent Hurst parameter H(t), which models volatility. To provide a measure of the risk, we model the Hurst parameter as a random variable following a mixture of beta distributions. We prove the efficacy of the methodology by implementing it on financial instruments and portfolios with different risk levels. PMID:26086529

  11. Financial Crisis: A New Measure for Risk of Pension Fund Portfolios.

    PubMed

    Cadoni, Marinella; Melis, Roberta; Trudda, Alessandro

    2015-01-01

    It has been argued that pension funds should have limitations on their asset allocation, based on the risk profile of the different financial instruments available on the financial markets. This issue proves to be highly relevant at times of market crisis, when a regulation establishing limits to risk taking for pension funds could prevent defaults. In this paper we present a framework for evaluating the risk level of a single financial instrument or a portfolio. By assuming that the log asset returns can be described by a multifractional Brownian motion, we evaluate the risk using the time-dependent Hurst parameter H(t), which models volatility. To provide a measure of the risk, we model the Hurst parameter as a random variable following a mixture of beta distributions. We prove the efficacy of the methodology by implementing it on financial instruments and portfolios with different risk levels.

  12. Study of blood flow in several benchmark micro-channels using a two-fluid approach.

    PubMed

    Wu, Wei-Tao; Yang, Fang; Antaki, James F; Aubry, Nadine; Massoudi, Mehrdad

    2015-10-01

    It is known that in a vessel whose characteristic dimension (e.g., its diameter) is in the range of 20 to 500 microns, blood behaves as a non-Newtonian fluid, exhibiting complex phenomena, such as shear-thinning and stress relaxation, and also multi-component behaviors, such as the Fahraeus effect, plasma-skimming, etc. For describing these non-Newtonian and multi-component characteristics of blood, using the framework of mixture theory, a two-fluid model is applied, where the plasma is treated as a Newtonian fluid and the red blood cells (RBCs) are treated as a shear-thinning fluid. A computational fluid dynamic (CFD) simulation incorporating the constitutive model was implemented using OpenFOAM®, in which benchmark problems including a sudden expansion and various driven slots and crevices were studied numerically. The numerical results exhibited good agreement with the experimental observations with respect to both the velocity field and the volume fraction distribution of RBCs.

  13. Recommender system based on scarce information mining.

    PubMed

    Lu, Wei; Chung, Fu-Lai; Lai, Kunfeng; Zhang, Liang

    2017-09-01

    Guessing what a user may like is now a typical interface for video recommendation. Nowadays, the highly popular user-generated content sites provide various sources of information, such as tags, for recommendation tasks. Motivated by a real-world online video recommendation problem, this work targets the long-tail phenomena of user behavior and the sparsity of item features. A personalized compound recommendation framework for online video recommendation called Dirichlet mixture probit model for information scarcity (DPIS) is hence proposed. Assuming that each clicking sample is generated from a representation of user preferences, DPIS models the sample-level topic proportions as a multinomial item vector, and utilizes topical clustering on the user part for recommendation through a probit classifier. As demonstrated by the real-world application, the proposed DPIS achieves better performance in accuracy, perplexity as well as diversity in coverage than traditional methods.

  14. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial parameters and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the beta distribution. The method is illustrated using categorical submersed aquatic vegetation data.
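
    The data-generating process at issue can be simulated directly: the sketch below draws unit-specific correct-classification probabilities from a logit-normal distribution and misclassifies multinomial counts accordingly. All parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.special import expit, logit

rng = np.random.default_rng(3)
n_units, n_obs = 50, 40
p_true = np.array([0.6, 0.3, 0.1])     # multinomial category probabilities

# Unit-specific correct-classification probabilities, logit-normal across units
p_correct = expit(rng.normal(logit(0.85), 0.75, n_units))

counts = np.zeros((n_units, 3), int)
for u in range(n_units):
    true_cat = rng.choice(3, n_obs, p=p_true)
    wrong = rng.random(n_obs) > p_correct[u]               # misclassified draws
    shifted = (true_cat + rng.integers(1, 3, n_obs)) % 3   # some other category
    counts[u] = np.bincount(np.where(wrong, shifted, true_cat), minlength=3)
# Fitting a constant-classification-probability model to `counts` ignores the
# unit-level heterogeneity and biases the multinomial estimates, as the paper shows.
```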

  15. General Blending Models for Data From Mixture Experiments

    PubMed Central

    Brown, L.; Donev, A. N.; Bissett, A. C.

    2015-01-01

    We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
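
    For context, Scheffé's quadratic mixture model, one of the families these blending models unify and extend, drops the intercept and regresses the response on the component proportions and their pairwise products. The sketch below fits it by least squares to a made-up three-component experiment.

```python
import numpy as np
from itertools import combinations

def scheffe_quadratic(X):
    """Design matrix for Scheffé's quadratic mixture model (no intercept)."""
    cross = np.column_stack([X[:, i]*X[:, j]
                             for i, j in combinations(range(X.shape[1]), 2)])
    return np.hstack([X, cross])

# Hypothetical 3-component mixture experiment (rows sum to 1) and responses
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [.5, .5, 0],
              [.5, 0, .5], [0, .5, .5], [1/3, 1/3, 1/3]])
y = np.array([4.1, 6.0, 3.2, 6.8, 3.9, 5.5, 5.6])
beta, *_ = np.linalg.lstsq(scheffe_quadratic(X), y, rcond=None)
print(beta)   # three linear blending terms, three pairwise interaction terms
```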

  16. Unravelling the influence of carbon dioxide on the adsorptive recovery of butanol from fermentation broth using ITQ-29 and ZIF-8.

    PubMed

    Martin-Calvo, Ana; Van der Perre, Stijn; Claessens, Benjamin; Calero, Sofia; Denayer, Joeri F M

    2018-04-18

    The vapor phase adsorption of butanol from ABE fermentation at the head space of the fermenter is an interesting route for the efficient recovery of biobutanol. The presence of gases such as carbon dioxide that are produced during the fermentation process causes a stripping of valuable compounds from the aqueous into the vapor phase. This work studies the effect of the presence of carbon dioxide on the adsorption of butanol at a molecular level. With this aim in mind, Monte Carlo simulations were employed to study the adsorption of mixtures containing carbon dioxide, butanol and ethanol. Molecular models for butanol and ethanol that reproduce experimental properties of the molecules, such as polarity, vapor-liquid coexistence and liquid density, have been developed. Pure component isotherms and heats of adsorption have been computed and compared to experimental data to check the accuracy of the interaction parameters. Adsorption of butanol/ethanol mixtures has been studied in the absence and presence of CO2 on two representative materials, a pure silica LTA zeolite and the hydrophobic metal-organic framework ZIF-8. To get a better understanding of the molecular mechanism that governs the adsorption of the targeted mixture in the selected materials, the distribution of the molecules inside the structures was analyzed. The combination of these features allows a deeper understanding of the process and identification of the role of carbon dioxide in the butanol purification process.

  17. Risk assessment of three fluoroquinolone antibiotics in the groundwater recharge system.

    PubMed

    Chen, Guoli; Liu, Xiang; Tartakevosky, Daniel; Li, Miao

    2016-11-01

    Three fluoroquinolone antibiotic agents (FQs) in groundwater and reclaimed water have been investigated in Changzhou and Beijing, China. Ofloxacin (OFL), enrofloxacin (ENR) and norfloxacin (NOR) occur at nanograms per liter with a detection frequency of 100%. The concentration order of FQs in reclaimed water is NOR>OFL>ENR, whilst the order in groundwater is NOR>ENR>OFL. Single and mixture adsorption-desorption were then studied, showing that (i) silty clay loam has a higher sorption capacity than loamy sand, (ii) competitive adsorption exists when the three selected FQs coexist, (iii) ENR sorbs with significantly higher priority than NOR, whilst OFL has the least sorption among the mixture, (iv) there is no significant difference between the desorption results of the mixture and the individual compounds at relatively low concentrations, and (v) the formed chemical bonds and the irreversible combination of adsorption points are the significant influential factors explaining the desorption hysteresis of the selected FQs. Based on the above study, a transport model and risk quotients have been computed, and the calculated risk quotients reveal that: (i) the risk order of the selected FQs in reclaimed water is OFL>ENR>NOR, and (ii) in groundwater, OFL and ENR pose a higher risk than NOR regardless of whether long-term groundwater recharge is considered. This study will help policy makers decide which FQs need to be covered in the priority substance lists defined in legislative frameworks.

  18. Predicting the excess solubility of acetanilide, acetaminophen, phenacetin, benzocaine, and caffeine in binary water/ethanol mixtures via molecular simulation.

    PubMed

    Paluch, Andrew S; Parameswaran, Sreeja; Liu, Shuai; Kolavennu, Anasuya; Mobley, David L

    2015-01-28

    We present a general framework to predict the excess solubility of small molecular solids (such as pharmaceutical solids) in binary solvents via molecular simulation free energy calculations at infinite dilution with conventional molecular models. The present study used molecular dynamics with the General AMBER Force Field to predict the excess solubility of acetanilide, acetaminophen, phenacetin, benzocaine, and caffeine in binary water/ethanol solvents. The simulations are able to predict the existence of solubility enhancement and the results are in good agreement with available experimental data. The accuracy of the predictions in addition to the generality of the method suggests that molecular simulations may be a valuable design tool for solvent selection in drug development processes.

  19. Predicting the excess solubility of acetanilide, acetaminophen, phenacetin, benzocaine, and caffeine in binary water/ethanol mixtures via molecular simulation

    NASA Astrophysics Data System (ADS)

    Paluch, Andrew S.; Parameswaran, Sreeja; Liu, Shuai; Kolavennu, Anasuya; Mobley, David L.

    2015-01-01

    We present a general framework to predict the excess solubility of small molecular solids (such as pharmaceutical solids) in binary solvents via molecular simulation free energy calculations at infinite dilution with conventional molecular models. The present study used molecular dynamics with the General AMBER Force Field to predict the excess solubility of acetanilide, acetaminophen, phenacetin, benzocaine, and caffeine in binary water/ethanol solvents. The simulations are able to predict the existence of solubility enhancement and the results are in good agreement with available experimental data. The accuracy of the predictions in addition to the generality of the method suggests that molecular simulations may be a valuable design tool for solvent selection in drug development processes.

  20. Numerical-experimental investigation of PE/EVA foam injection molded parts

    NASA Astrophysics Data System (ADS)

    Spina, Roberto

    The main objective of the presented work is to propose a robust framework for testing foam injection molded parts, with the aim of establishing a standard testing cycle for the evaluation of a new foam material based on numerical and experimental results. The research purpose is to assess parameters influencing several aspects, such as foam morphology and compression behavior, using suggestions from finite element analysis. The investigated polymeric blend consisted of a mixture of low-density polyethylenes (LDPEs), a high-density polyethylene (HDPE), an ethylene-vinyl acetate (EVA) and an azodicarbonamide (ADC). The thermal, rheological and compression properties of the blend are fully described, as well as the numerical models and the parameters of the injection molding process.

  1. Kinetic theory of two-temperature polyatomic plasmas

    NASA Astrophysics Data System (ADS)

    Orlac'h, Jean-Maxime; Giovangigli, Vincent; Novikova, Tatiana; Roca i Cabarrocas, Pere

    2018-03-01

    We investigate the kinetic theory of two-temperature plasmas for reactive polyatomic gas mixtures. The Knudsen number is taken proportional to the square root of the mass ratio between electrons and heavy species, and thermal non-equilibrium between electrons and heavy species is allowed. The kinetic non-equilibrium framework also requires a weak coupling between electrons and internal energy modes of heavy species. The zeroth-order and first-order fluid equations are derived by using a generalized Chapman-Enskog method. Expressions for transport fluxes are obtained in terms of macroscopic variable gradients, and the corresponding transport coefficients are expressed as bracket products of species perturbed distribution functions. The theory derived in this paper provides a consistent fluid model for non-thermal multicomponent plasmas.

  2. Discovery of optimal zeolites for challenging separations and chemical conversions through predictive materials modeling

    NASA Astrophysics Data System (ADS)

    Siepmann, J. Ilja; Bai, Peng; Tsapatsis, Michael; Knight, Chris; Deem, Michael W.

    2015-03-01

    Zeolites play numerous important roles in modern petroleum refineries and have the potential to advance the production of fuels and chemical feedstocks from renewable resources. The performance of a zeolite as separation medium and catalyst depends on its framework structure and the type or location of active sites. To date, 213 framework types have been synthesized and >330000 thermodynamically accessible zeolite structures have been predicted. Hence, identification of optimal zeolites for a given application from the large pool of candidate structures is attractive for accelerating the pace of materials discovery. Here we identify, through a large-scale, multi-step computational screening process, promising zeolite structures for two energy-related applications: the purification of ethanol beyond the ethanol/water azeotropic concentration in a single separation step from fermentation broths and the hydroisomerization of alkanes with 18-30 carbon atoms encountered in petroleum refining. These results demonstrate that predictive modeling and data-driven science can now be applied to solve some of the most challenging separation problems involving highly non-ideal mixtures and highly articulated compounds. Financial support from the Department of Energy Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences under Award DE-FG02-12ER16362 is gratefully acknowledged.

  3. Highly effective hydrogen isotope separation in nanoporous metal-organic frameworks with open metal sites: direct measurement and theoretical analysis.

    PubMed

    Oh, Hyunchul; Savchenko, Ievgeniia; Mavrandonakis, Andreas; Heine, Thomas; Hirscher, Michael

    2014-01-28

    Separating gaseous mixtures whose components are of very similar size is one of the critical issues in modern separation technology. In particular, the separation of the isotopes hydrogen and deuterium requires special effort, even though these isotopes show a very large mass ratio. Conventionally, H/D separation can be realized through cryogenic distillation of the molecular species or the Girdler-sulfide process, which are among the most energy-intensive separation techniques in the chemical industry. However, costs can be significantly reduced by using highly mass-selective nanoporous sorbents. Here, we describe a hydrogen isotope separation strategy exploiting the strongly attractive open metal sites present in nanoporous metal-organic frameworks of the CPO-27 family (also referred to as MOF-74). A theoretical analysis predicts an outstanding hydrogen isotopologue separation at open metal sites due to isotope effects, which has been directly observed through cryogenic thermal desorption spectroscopy. For H2/D2 separation of an equimolar mixture at 60 K, the selectivity of 12 is the highest value ever measured, and this methodology shows extremely high separation efficiencies even above 77 K. Our theoretical results also imply a high selectivity for HD/H2 separation at similar temperatures, and together with catalytically active sites, we propose a mechanism to produce D2 from HD/H2 mixtures with natural or enriched deuterium content.

  4. Mixed-up trees: the structure of phylogenetic mixtures.

    PubMed

    Matsen, Frederick A; Mossel, Elchanan; Steel, Mike

    2008-05-01

    In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how "common" non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning "mixed branch repulsion" on trees larger than quartet trees under the CFN model.

  5. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    PubMed Central

    Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform lack-of-fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space so as to enable prediction of the entire response surface. A new strategy for adding multiple interior points for symmetric mixture models is also proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the lack-of-fit test by simulations. PMID:29081574

  6. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    PubMed Central

    Chen, Yun; Yang, Hui

    2016-01-01

    In the era of big data, there are increasing interests on clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
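
    A rough sketch of the pipeline's first two stages under stated assumptions: pairwise mutual-information profiles are computed with scikit-learn's estimator, and variables are clustered with a truncated Dirichlet-process Gaussian mixture (sklearn's BayesianGaussianMixture) as a stand-in for the paper's DP model. The simulated data and all settings are illustrative.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
n, p = 400, 30
Z = rng.normal(size=(n, 3))                       # three latent drivers
# Variables built as nonlinear functions of the drivers (ten per driver)
X = np.column_stack([np.sin(Z[:, i % 3]) + 0.3*rng.normal(size=n) for i in range(p)])

# Pairwise mutual-information profile of each variable against all others
mi = np.column_stack([mutual_info_regression(X, X[:, j], random_state=0)
                      for j in range(p)])

# Truncated Dirichlet-process Gaussian mixture clusters variables by MI profile
dp = BayesianGaussianMixture(n_components=10, covariance_type="diag",
                             weight_concentration_prior_type="dirichlet_process",
                             random_state=0).fit(mi)
print(dp.predict(mi))    # variables sharing a latent driver should co-cluster
```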

  7. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures.

    PubMed

    Chen, Yun; Yang, Hui

    2016-12-14

    In the era of big data, there are increasing interests on clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.

  8. New approach in direct-simulation of gas mixtures

    NASA Technical Reports Server (NTRS)

    Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren

    1991-01-01

    Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.

  9. Investigation of Dalton and Amagat's laws for gas mixtures with shock propagation

    NASA Astrophysics Data System (ADS)

    Wayne, Patrick; Trueba Monje, Ignacio; Yoo, Jason H.; Truman, C. Randall; Vorobieff, Peter

    2016-11-01

    Two common models describing gas mixtures are Dalton's Law and Amagat's Law (also known as the laws of partial pressures and partial volumes, respectively). Our work is focused on determining the suitability of these models for predicting the effects of shock propagation through gas mixtures. Experiments are conducted at the Shock Tube Facility at the University of New Mexico (UNM). To validate experimental data, possible sources of uncertainty associated with the experimental setup are identified and analyzed. The gaseous mixture of interest consists of a prescribed combination of disparate gases - helium and sulfur hexafluoride (SF6). The equations of state (EOS) considered are the ideal gas EOS for helium and a virial EOS for SF6. The values for the properties provided by these EOS are then used to model shock propagation through the mixture in accordance with Dalton's and Amagat's laws. Results of the modeling are compared with experiment to determine which law produces better agreement for the mixture. This work is funded by NNSA Grant DE-NA0002913.
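
    The two laws can be contrasted numerically once an EOS is chosen for each gas. The sketch below uses an ideal EOS for helium and, as a stand-in for the study's virial EOS, a van der Waals EOS for SF6 with approximate literature constants; the amounts, volume, and temperature are arbitrary illustrative values.

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314                       # J/(mol K)
a, b = 0.7857, 8.79e-5          # approximate van der Waals constants for SF6 (SI units)

def p_vdw(n, V, T):
    """van der Waals pressure of n moles of SF6 in volume V at temperature T."""
    return n*R*T/(V - n*b) - a*(n/V)**2

def v_vdw(n, P, T):
    """Invert the vdW EOS: volume occupied by n moles of SF6 at pressure P."""
    return brentq(lambda V: p_vdw(n, V, T) - P, n*b*1.0001, 10.0)

n_he, n_sf6, V, T = 0.5, 0.5, 1e-3, 350.0   # mol, mol, m^3, K (supercritical SF6)

# Dalton: each gas fills the whole volume; total pressure = sum of partial pressures
p_dalton = n_he*R*T/V + p_vdw(n_sf6, V, T)

# Amagat: each gas at the (unknown) total pressure; partial volumes must sum to V
p_amagat = brentq(lambda P: n_he*R*T/P + v_vdw(n_sf6, P, T) - V, 1e4, 1e8)
print(p_dalton, p_amagat)
```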

  10. Consequences of Misspecifying the Number of Latent Treatment Attendance Classes in Modeling Group Membership Turnover within Ecologically-Valid Behavioral Treatment Trials

    PubMed Central

    Morgan-Lopez, Antonio A.; Fals-Stewart, William

    2015-01-01

    Historically, difficulties in analyzing treatment outcome data from open enrollment groups have led to their avoidance in use in federally-funded treatment trials, despite the fact that 79% of treatment programs use open enrollment groups. Recently, latent class pattern mixture models (LCPMM) have shown promise as a defensible approach for making overall (and attendance class-specific) inferences from open enrollment groups with membership turnover. We present a statistical simulation study comparing LCPMMs to longitudinal growth models (LGM) to understand when both frameworks are likely to produce conflicting inferences concerning overall treatment efficacy. LCPMMs performed well under all conditions examined; meanwhile LGMs produced problematic levels of bias and Type I errors under two joint conditions: moderate-to-high dropout (30–50%) and treatment by attendance class interactions exceeding Cohen's d ≈.2. This study highlights key concerns about using LGM for open enrollment data: treatment effect overestimation and advocacy for treatments that may be ineffective in reality. PMID:18513917

  11. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.

    PubMed

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-03-13

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A), on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.
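
    The run structure described here (a 2^(5-1) fractional factorial plus axial and center runs) is straightforward to enumerate. The sketch below assumes the fifth factor is aliased with the four-way interaction and that axial runs are taken only for the four non-categorical factors, which reproduces the stated 16 + 8 + 4 = 28 mixtures; the aliasing pattern is an assumption, not taken from the paper.

```python
from itertools import product

factors = ["binder_type", "binder_content", "VMA", "w/cm", "S/A"]

# 2^(5-1) fractional factorial in coded units: the fifth factor is aliased
# with the four-way interaction (defining relation E = ABCD; an assumption)
runs = [(*f, f[0]*f[1]*f[2]*f[3]) for f in product([-1, 1], repeat=4)]

# Axial runs at +/-2, assumed only for the four non-categorical factors,
# with the other variables held at the center point
for i in range(1, 5):
    for level in (-2, 2):
        pt = [0]*5
        pt[i] = level
        runs.append(tuple(pt))

runs += [(0,)*5]*4       # four replicate center-point mixtures
print(len(runs))         # 16 factorial + 8 axial + 4 center = 28 mixtures
```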

  12. Some comments on thermodynamic consistency for equilibrium mixture equations of state

    DOE PAGES

    Grove, John W.

    2018-03-28

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.

  13. The AOP framework and causality: Meeting chemical risk assessment challenges in the 21st century

    EPA Science Inventory

    Chemical safety assessments are expanding from a focus on a few chemicals (or chemical mixtures) to the broader “universe” of thousands, if not hundreds of thousands of substances that potentially could impact humans or the environment. This is exemplified in ...

  14. Reservoir Computing Beyond Memory-Nonlinearity Trade-off.

    PubMed

    Inubushi, Masanobu; Yoshimura, Kazuyuki

    2017-08-31

    Reservoir computing is a brain-inspired machine learning framework that employs a signal-driven dynamical system, in particular harnessing common-signal-induced synchronization, a widely observed nonlinear phenomenon. A basic understanding of the working principle of reservoir computing can be expected to shed light on how information is stored and processed in nonlinear dynamical systems, potentially leading to progress in a broad range of nonlinear sciences. As a first step toward this goal, from the viewpoint of nonlinear physics and information theory, we study the memory-nonlinearity trade-off uncovered by Dambre et al. (2012). Focusing on a variational equation, we clarify the dynamical mechanism behind the trade-off, which illustrates why nonlinear dynamics degrades the memory stored in a dynamical system in general. Moreover, based on the trade-off, we propose a mixture reservoir endowed with both linear and nonlinear dynamics and show that it improves the performance of information processing. Interestingly, for some tasks, significant improvements are observed by adding a small amount of linear dynamics to the nonlinear dynamical system. By employing the echo state network model, the effect of the mixture reservoir is numerically verified for a simple function approximation task and for more complex tasks.
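
    A minimal numerical sketch of the mixture-reservoir idea (an echo state network whose state vector contains both linear and tanh nodes; sizes, task, and training details are illustrative assumptions, not the authors' setup):

      import numpy as np

      rng = np.random.default_rng(0)
      N, n_linear, T = 200, 20, 2000
      W = rng.normal(0.0, 1.0, (N, N))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
      w_in = rng.normal(0.0, 0.5, N)

      u = rng.uniform(-1.0, 1.0, T)                    # common input signal
      x = np.zeros(N)
      states = np.empty((T, N))
      for t in range(T):
          pre = W @ x + w_in * u[t]
          x = np.concatenate([pre[:n_linear],            # linear nodes: memory
                              np.tanh(pre[n_linear:])])  # nonlinear nodes
          states[t] = x

      # Ridge-regression readout for a toy task needing memory and nonlinearity.
      target = np.roll(u, 5) ** 2
      A, y = states[100:], target[100:]                # discard transient
      w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y)
      print("train NMSE:", np.mean((A @ w_out - y) ** 2) / np.var(y))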

  15. Inference of the phase-to-mechanical property link via coupled X-ray spectrometry and indentation analysis: Application to cement-based materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krakowiak, Konrad J.; Wilson, William; James, Simon

    2015-01-15

    A novel approach for the chemo-mechanical characterization of cement-based materials is presented, which combines the classical grid indentation technique with elemental mapping by scanning electron microscopy-energy dispersive X-ray spectrometry (SEM-EDS). It is illustrated through application to an oil-well cement system with siliceous filler. The characteristic X-rays of major elements (silicon, calcium and aluminum) are measured over the indentation region and mapped back on the indentation points. Measured intensities together with indentation hardness and modulus are considered in a clustering analysis within the framework of Finite Mixture Models with Gaussian component density function. The method is able to successfully isolate the calcium-silica-hydrate gel at the indentation scale from its mixtures with other products of cement hydration and anhydrous phases; thus providing a convenient means to link mechanical response to the calcium-to-silicon ratio quantified independently via X-ray wavelength dispersive spectroscopy. A discussion of uncertainty quantification of the estimated chemo-mechanical properties and phase volume fractions, as well as the effect of chemical observables on phase assessment is also included.
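
    A minimal sketch of the clustering step with scikit-learn's finite Gaussian mixture implementation (the feature file, column layout, and component range are assumptions for illustration):

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # One row per indent: [hardness, modulus, Si, Ca, Al X-ray intensities]
      X = np.loadtxt("indentation_eds_grid.csv", delimiter=",")  # hypothetical file

      # Choose the number of mixture components (phases) by BIC.
      fits = [GaussianMixture(n_components=k, covariance_type="full",
                              random_state=0).fit(X) for k in range(2, 8)]
      best = min(fits, key=lambda m: m.bic(X))
      labels = best.predict(X)  # phase assignment for each indentation point
      print("phases:", best.n_components, "weights:", best.weights_.round(3))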

  16. Robust Bayesian clustering.

    PubMed

    Archambeau, Cédric; Verleysen, Michel

    2007-01-01

    A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.

  17. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed Central

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-01-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
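
    For reference, the LOD score at genome position s is the base-10 logarithm of the likelihood ratio of the fitted normal mixture against a single normal, in LaTeX form:

      \mathrm{LOD}(s) = \log_{10}
      \frac{\sup_{\theta} L_{\mathrm{mixture}}(\theta \mid s)}
           {\sup_{\theta_0} L_{\mathrm{normal}}(\theta_0)},

    which is non-negative by construction, since the single normal is nested within the mixture; this is exactly why low-information regions can drift upward without any true QTL signal.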

  18. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.

  19. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    PubMed

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components that will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported, since they have just as many design points as parameters; thus, they lack the degrees of freedom to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs lie on the boundary of the design simplex: its vertices, edges, or faces. Here, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulation.
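
    For context, Scheffé's quadratic model for a q-component mixture constrained to the simplex is, in LaTeX form:

      E(y) = \sum_{i=1}^{q} \beta_i x_i + \sum_{i<j} \beta_{ij} x_i x_j,
      \qquad \sum_{i=1}^{q} x_i = 1, \quad x_i \ge 0,

    so a minimally supported D-optimal design for it has q(q+1)/2 points, one per parameter, leaving no residual degrees of freedom for a Lack of Fit test.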

  20. Mixture of autoregressive modeling orders and its implication on single trial EEG classification

    PubMed Central

    Atyabi, Adham; Shic, Frederick; Naples, Adam

    2016-01-01

    Autoregressive (AR) models are among the most commonly utilized feature types in Electroencephalogram (EEG) studies because they offer better resolution, smoother spectra, and applicability to short segments of data. Identifying the correct AR modeling order is an open challenge. Lower model orders represent the signal poorly, while higher orders amplify noise. Conventional methods for estimating modeling order include the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond more promptly and accurately to the operator's thoughts. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional approaches utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC, and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3, and 4 motor imagery tasks are considered for the assessment. The results indicate superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
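
    A numpy-only sketch of the conventional single-order selection that the mixtures are compared against (least-squares AR fits scored by AIC and BIC in their common Gaussian forms; the "EEG" segment is simulated):

      import numpy as np

      def ar_residual_var(x, p):
          """Least-squares AR(p) fit; returns the residual variance."""
          X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
          y = x[p:]
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          return np.mean((y - X @ coef) ** 2)

      rng = np.random.default_rng(1)
      x = rng.standard_normal(512)
      for t in range(2, 512):              # toy AR(2) signal standing in for EEG
          x[t] += 1.5 * x[t - 1] - 0.8 * x[t - 2]

      n = len(x)
      for p in range(1, 11):
          s2 = ar_residual_var(x, p)
          aic = n * np.log(s2) + 2 * p
          bic = n * np.log(s2) + p * np.log(n)
          print(p, round(aic, 1), round(bic, 1))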

  1. Single- and mixture toxicity of three organic UV-filters, ethylhexyl methoxycinnamate, octocrylene, and avobenzone on Daphnia magna.

    PubMed

    Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun

    2017-03-01

    In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate the combined toxicity of these substances when they occur in a mixture. The effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; the concentration combinations of the three UV-filters in a mixture were determined by the fraction of components based on EC25 values predicted by the concentration addition (CA) model. Interactions between the UV-filters were also assessed via the model deviation ratio (MDR), using observed and predicted toxicity values obtained from mixture-exposure tests and the CA model. The results indicated that observed ECx,mix values (e.g., EC10,mix, EC25,mix, or EC50,mix) obtained from mixture-exposure tests were higher than the corresponding values predicted by the CA model. MDR values were also less than 1.0 for mixtures of the three UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. Our findings provide important information for hazard and risk assessment of organic UV-filters when they co-occur in the aquatic environment. To better understand mixture toxicity and the interaction of components in a mixture, further studies on various combinations of mixture components are required. Copyright © 2016 Elsevier Inc. All rights reserved.
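
    A small sketch of the two quantities used above, under their usual definitions (mixture fractions p_i and single-compound ECx values; the numbers are invented for illustration):

      def ca_ecx_mix(fractions, ecx):
          """Concentration addition: ECx_mix = (sum_i p_i / ECx_i)^(-1)."""
          return 1.0 / sum(p / e for p, e in zip(fractions, ecx))

      def mdr(predicted, observed):
          """Model deviation ratio: < 1 suggests antagonism, > 1 synergism."""
          return predicted / observed

      ec50 = [0.3, 0.6, 1.2]            # hypothetical single-compound EC50s (mg/L)
      fractions = [1 / 3, 1 / 3, 1 / 3]
      pred = ca_ecx_mix(fractions, ec50)
      print(round(pred, 3), round(mdr(pred, observed=0.9), 2))  # 0.514, 0.57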

  2. Cumulative toxicity of neonicotinoid insecticide mixtures to Chironomus dilutus under acute exposure scenarios.

    PubMed

    Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten

    2017-11-01

    Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because of the similar modes of action of neonicotinoids, concentration-additive cumulative mixture toxicity was the predicted model. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity. Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, the development of better toxicity models for neonicotinoid mixture exposures, and the consideration of mixture effects when setting water quality guidelines for this class of pesticides. Environ Toxicol Chem 2017;36:3091-3101. © 2017 SETAC.

  3. Toward a model-based cognitive neuroscience of mind wandering.

    PubMed

    Hawkins, G E; Mittner, M; Boekel, W; Heathcote, A; Forstmann, B U

    2015-12-03

    People often "mind wander" during everyday tasks, temporarily losing track of time, place, or current task goals. In laboratory-based tasks, mind wandering is often associated with performance decrements in behavioral variables and changes in neural recordings. Such empirical associations provide descriptive accounts of mind wandering - how it affects ongoing task performance - but fail to provide true explanatory accounts - why it affects task performance. In this perspectives paper, we consider mind wandering as a neural state or process that affects the parameters of quantitative cognitive process models, which in turn affect observed behavioral performance. Our approach thus uses cognitive process models to bridge the explanatory divide between neural and behavioral data. We provide an overview of two general frameworks for developing a model-based cognitive neuroscience of mind wandering. The first approach uses neural data to segment observed performance into a discrete mixture of latent task-related and task-unrelated states, and the second regresses single-trial measures of neural activity onto structured trial-by-trial variation in the parameters of cognitive process models. We discuss the relative merits of the two approaches, and the research questions they can answer, and highlight that both approaches allow neural data to provide additional constraint on the parameters of cognitive models, which will lead to a more precise account of the effect of mind wandering on brain and behavior. We conclude by summarizing prospects for mind wandering as conceived within a model-based cognitive neuroscience framework, highlighting the opportunities for its continued study and the benefits that arise from using well-developed quantitative techniques to study abstract theoretical constructs. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  4. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  5. Numerical simulation of asphalt mixtures fracture using continuum models

    NASA Astrophysics Data System (ADS)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.

  6. D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.

    PubMed

    Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W

    2005-12-01

    Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.

  7. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    NASA Astrophysics Data System (ADS)

    Konishi, C.

    2014-12-01

    A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e., gravel, sand, and clay). The well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle and the rest of the volume as that of the large particle. This simple approach has been commonly accepted and has been validated by many studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of another large particle; i.e., only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of a gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of using only the clay content. Sand can be either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosities of the two cases, one in which sand is the smaller particle and the other in which sand is the larger particle, can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can coexist in one sample; thus, the total porosity of the mixed sample is calculated as the average of the two cases, weighted by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation. Furthermore, elastic properties are obtainable from the general Hashin-Shtrikman-Walpole bounds. The predictions of this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: A part of this study was accomplished with a subsidy of the River Environment Fund of Japan.
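
    A sketch of the final permeability step under one common form of the Kozeny-Carman relation (the constant 180 and the inputs are assumptions for illustration, not taken from the abstract):

      def kozeny_carman(phi_e, d_eff):
          """Permeability in m^2: k = phi^3 * d^2 / (180 * (1 - phi)^2)."""
          return phi_e ** 3 * d_eff ** 2 / (180.0 * (1.0 - phi_e) ** 2)

      # Mixed sample: 30% effective porosity, 0.2 mm effective grain size.
      print(kozeny_carman(0.30, 0.2e-3))  # ~1.2e-11 m^2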

  8. Development of Viscosity Model for Petroleum Industry Applications

    NASA Astrophysics Data System (ADS)

    Motahhari, Hamed reza

    Heavy oil and bitumen are challenging to produce and process due to their very high viscosity, but their viscosity can be reduced either by heating or by dilution with a solvent. Given the key role of viscosity, an accurate viscosity model suitable for use with reservoir and process simulators is essential. While there are several viscosity models for natural gases and conventional oils, a compositional model applicable to heavy petroleum and diluents is lacking. The objective of this thesis is to develop a general compositional viscosity model that is applicable to natural gas mixtures, conventional crude oils, heavy petroleum fluids, and their mixtures with solvents and other crudes. The recently developed Expanded Fluid (EF) viscosity correlation was selected as a suitable compositional viscosity model for petroleum applications. The correlation relates the viscosity of the fluid to its density over a broad range of pressures and temperatures. The other inputs are pressure and the dilute gas viscosity. Each fluid is characterized for the correlation by a set of fluid-specific parameters which are tuned to fit data. First, the applicability of the EF correlation was extended to asymmetric mixtures and liquid mixtures containing dissolved gas components. A new set of mass-fraction based mixing rules was developed to calculate the fluid-specific parameters for mixtures. The EF correlation with the new set of mixing rules predicted the viscosity of over 100 mixtures of hydrocarbon compounds and carbon dioxide with overall average absolute relative deviations (AARD) of less than 10%, either with measured densities or with densities estimated by the Advanced Peng-Robinson equation of state (APR EoS). To improve the viscosity predictions with APR EoS-estimated densities, general correlations were developed for non-zero viscosity binary interaction parameters. The EF correlation was extended to non-hydrocarbon compounds typically encountered in the natural gas industry. It was demonstrated that the framework of the correlation is valid for these compounds, except for compounds with strong hydrogen bonding such as water. A temperature dependency was introduced into the correlation for strongly hydrogen-bonding compounds. The EF correlation fit the viscosity data of pure non-hydrocarbon compounds with AARDs below 6% and predicted the viscosity of sour and sweet natural gases and aqueous solutions of organic alcohols with overall AARDs of less than 9%. An internally consistent estimation method was also developed to calculate the fluid-specific parameters for hydrocarbons when no experimental viscosity data are available. The method correlates the fluid-specific parameters to the molecular weight and specific gravity. The method was evaluated against viscosity data of over 250 pure hydrocarbon compounds and petroleum distillation cuts. The EF correlation predictions were found to be within the same order of magnitude as the measurements, with an overall AARD of 31%. A methodology was then proposed to apply the EF viscosity correlation to crude oils characterized as mixtures of defined components and pseudo-components. The above estimation methods are used to calculate the fluid-specific parameters for the pseudo-components. Guidelines are provided for tuning the correlation to available viscosity data, calculating the dilute gas viscosities, and improving the densities calculated with the Peng-Robinson EoS.
The viscosities of over 10 dead and live crude oils and bitumen were predicted within a factor of 3 of the measured values using the measured density of the oils as the input. It was shown that single parameter tuning of the model improved the viscosity prediction to within 30% of the measured values. Finally, the performance of the EF correlation was evaluated for diluted heavy oils and bitumens. The required density and viscosity data were collected for over 20 diluted dead and live bitumen mixtures using an in-house capillary viscometer also equipped with an in-line density-meter at temperatures and pressures up to 175 °C and 10 MPa. The predictions of the correlation were found within the same order of magnitude of the measured values with overall AARDs less than 20%. It was shown that the predictions of the correlation with generalized non-zero interaction parameters for the solvent-oil pairs were improved to overall AARDs less than 10%.

  9. Mixture theory-based poroelasticity as a model of interstitial tissue growth

    PubMed Central

    Cowin, Stephen C.; Cardoso, Luis

    2011-01-01

    This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues. PMID:22184481

  10. Mixture theory-based poroelasticity as a model of interstitial tissue growth.

    PubMed

    Cowin, Stephen C; Cardoso, Luis

    2012-01-01

    This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues.

  11. Thermal behavior of crumb-rubber modified asphalt concrete mixtures

    NASA Astrophysics Data System (ADS)

    Epps, Amy Louise

    Thermal cracking is one of the primary forms of distress in asphalt concrete pavements, resulting from either a single drop in temperature to an extreme low or from multiple temperature cycles above the fracture temperature of the asphalt-aggregate mixture. The first mode described is low temperature cracking; the second is thermal fatigue. The addition of crumb-rubber, manufactured from scrap tires, to the binder in asphalt concrete pavements has been suggested to minimize both types of thermal cracking. Four experiments were designed and completed to evaluate the thermal behavior of crumb-rubber modified (CRM) asphalt-aggregate mixtures. Modified and unmodified mixture response to thermal stresses was measured in four laboratory tests. The Thermal Stress Restrained Specimen Test (TSRST) and the Indirect Tensile Test (IDT) were used to compare mixture resistance to low temperature cracking. Modified mixtures showed improved performance, and cooling rate did not affect mixture resistance according to the statistical analysis. Therefore results from tests with faster rates can predict performance under slower field rates. In comparison, predicted fracture temperatures and stresses (IDT) were generally higher than measured values (TSRST). In addition, predicted fracture temperatures from binder test results demonstrated that binder testing alone is not sufficient to evaluate CRM mixtures. Thermal fatigue was explored in the third experiment using conventional load-induced fatigue tests with conditions selected to simulate daily temperature fluctuations. Test results indicated that thermal fatigue may contribute to transverse cracking in asphalt pavements. Both unmodified and modified mixtures had a finite capacity to withstand daily temperature fluctuations coupled with cold temperatures. Modified mixtures again exhibited improved performance. The fourth experiment examined fracture properties of modified and unmodified mixtures using a common fracture toughness test. Results showed no effect from modification, but the small experiment size may have masked this effect. Reliability concepts were introduced to include risk and uncertainty in a comparison of mixture response measured in the laboratory and estimated environmental conditions. This comparison provided evidence that CRM mixtures exhibit improved resistance to both types of thermal cracking at high levels of reliability. In conclusion, a mix design and analysis framework for evaluating thermal behavior was recommended.

  12. Simple effective rule to estimate the jamming packing fraction of polydisperse hard spheres.

    PubMed

    Santos, Andrés; Yuste, Santos B; López de Haro, Mariano; Odriozola, Gerardo; Ogarko, Vitaliy

    2014-04-01

    A recent proposal in which the equation of state of a polydisperse hard-sphere mixture is mapped onto that of the one-component fluid is extrapolated beyond the freezing point to estimate the jamming packing fraction ϕJ of the polydisperse system as a simple function of M1M3/M2^2, where Mk is the k-th moment of the size distribution. An analysis of experimental and simulation data of ϕJ for a large number of different mixtures shows a remarkable general agreement with the theoretical estimate. To give extra support to the procedure, simulation data for seventeen mixtures in the high-density region are used to infer the equation of state of the pure hard-sphere system in the metastable region. An excellent collapse of the inferred curves up to the glass transition and a significant narrowing of the different out-of-equilibrium glass branches all the way to jamming are observed. Thus, the present approach provides an extremely simple criterion to unify in a common framework and to give coherence to data coming from very different polydisperse hard-sphere mixtures.
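
    The controlling quantity is simple to compute from any discretized size distribution; a short sketch follows (the mapping from this moment ratio to ϕJ itself comes from the paper's one-component extrapolation and is not reproduced here):

      import numpy as np

      def moment_ratio(diameters, weights):
          """M1*M3/M2^2 with Mk = sum_i w_i d_i^k (normalized weights)."""
          d = np.asarray(diameters, dtype=float)
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()
          m1, m2, m3 = (np.sum(w * d ** k) for k in (1, 2, 3))
          return m1 * m3 / m2 ** 2

      # 1:1 binary mixture with size ratio 2; a monodisperse system gives 1.
      print(moment_ratio([1.0, 2.0], [0.5, 0.5]))  # 1.08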

  13. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed

    Chen, D G; Pounds, J G

    1998-12-01

    The linear logistical isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where parameters Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of the binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium.
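
    The model's own equation is elided in this record, but the "transformation to both sides" device has a standard form: the Box-Cox transform g_lambda is applied to both the response and the model prediction before the error enters, in LaTeX form:

      g_\lambda(Y) = g_\lambda\big(f(d_1, d_2; \theta)\big) + \varepsilon,
      \qquad
      g_\lambda(y) = \frac{y^{\lambda} - 1}{\lambda} \; (\lambda \neq 0),
      \qquad g_0(y) = \log y,

    which stabilizes the residual variance and pulls the residuals toward normality without changing the dose-response function f being fitted.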

  14. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed Central

    Chen, D G; Pounds, J G

    1998-01-01

    The linear logistical isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where parameters Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of the binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium. PMID:9860894

  15. Blue intensity matters for cell cycle profiling in fluorescence DAPI-stained images.

    PubMed

    Ferro, Anabela; Mestre, Tânia; Carneiro, Patrícia; Sahumbaiev, Ivan; Seruca, Raquel; Sanches, João M

    2017-05-01

    In the past decades, there has been amazing progress in the understanding of the molecular mechanisms of the cell cycle. This has been possible largely due to a better conceptualization of the cycle itself, but also as a consequence of technological advances. Herein, we propose a new fluorescence image-based framework targeted at the identification and segmentation of stained nuclei, with the purpose of determining DNA content in distinct cell cycle stages. The method is based on discriminative features, such as total intensity and area, retrieved from in situ stained nuclei by fluorescence microscopy, allowing the determination of the cell cycle phase of both single cells and sub-populations of cells. The analysis framework was built on a modified k-means clustering strategy and refined with a Gaussian mixture model classifier, which enabled the definition of highly accurate classification clusters corresponding to the G1, S and G2 phases. Using the information retrieved from area and total fluorescence intensity, the modified k-means (k=3) clustering imaging framework classified 64.7% of the imaged nuclei as being in G1 phase, 12.0% in G2 phase and 23.2% in S phase. Performance of the imaging framework was ascertained with normal murine mammary gland cells constitutively expressing the Fucci2 technology, exhibiting an overall sensitivity of 94.0%. Further, the results indicate that the imaging framework has a robust capacity both to assign a given DAPI-stained nucleus to its correct cell cycle phase and to determine, with very high probability, true negatives. Importantly, this novel imaging approach is a non-disruptive method that allows an integrative and simultaneous quantitative analysis of molecular and morphological parameters, thus affording the possibility of cell cycle profiling in cytological and histological samples.
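
    A minimal sketch of the two-stage clustering described above, with k-means centers used to initialize a Gaussian mixture refinement (file name, column layout, and the ordering heuristic are assumptions):

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.mixture import GaussianMixture

      # One row per nucleus: [total DAPI intensity, nuclear area]
      features = np.loadtxt("nuclei_features.csv", delimiter=",")  # hypothetical

      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
      gmm = GaussianMixture(n_components=3, means_init=km.cluster_centers_,
                            covariance_type="full", random_state=0).fit(features)
      phase = gmm.predict(features)

      # Order clusters by mean total intensity so they map to G1 < S < G2
      # (DNA content roughly doubles from G1 to G2).
      order = np.argsort(gmm.means_[:, 0])
      print([round(float(np.mean(phase == k)), 3) for k in order])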

  16. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds.

    PubMed

    Cruz-Marcelo, Alejandro; Ensor, Katherine B; Rosner, Gary L

    2011-06-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material.

  17. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds

    PubMed Central

    Cruz-Marcelo, Alejandro; Ensor, Katherine B.; Rosner, Gary L.

    2011-01-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material. PMID:21765566

  18. Mathematical modeling of erythrocyte chimerism informs genetic intervention strategies for sickle cell disease.

    PubMed

    Altrock, Philipp M; Brendel, Christian; Renella, Raffaele; Orkin, Stuart H; Williams, David A; Michor, Franziska

    2016-09-01

    Recent advances in gene therapy and genome-engineering technologies offer the opportunity to correct sickle cell disease (SCD), a heritable disorder caused by a point mutation in the β-globin gene. The developmental switch from fetal γ-globin to adult β-globin is governed in part by the transcription factor (TF) BCL11A. This TF has been proposed as a therapeutic target for reactivation of γ-globin and concomitant reduction of β-sickle globin. In this and other approaches, genetic alteration of a portion of the hematopoietic stem cell (HSC) compartment leads to a mixture of sickling and corrected red blood cells (RBCs) in the periphery. To reverse the sickling phenotype, a certain proportion of corrected RBCs is necessary; the degree of HSC alteration required to achieve a desired fraction of corrected RBCs remains unknown. To address this issue, we developed a mathematical model describing aging and survival of sickle-susceptible and normal RBCs; the former can have a selective survival advantage leading to their overrepresentation. We identified the level of bone marrow chimerism required for successful stem cell-based gene therapies in SCD. Our findings were further informed using an experimental mouse model, where we transplanted mixtures of Berkeley SCD and normal murine bone marrow cells to establish chimeric grafts in murine hosts. Our integrative theoretical and experimental approach identifies the target frequency of HSC alterations required for effective treatment of sickling syndromes in humans. Our work replaces episodic observations of such target frequencies with a mathematical modeling framework that covers a large and continuous spectrum of chimerism conditions. Am. J. Hematol. 91:931-937, 2016. © 2016 Wiley Periodicals, Inc.
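
    As a back-of-the-envelope version of the link the model formalizes (a steady-state renewal argument, not the authors' full age-structured model): if a fraction m of marrow output is corrected and corrected versus sickle-susceptible RBCs have mean lifespans tau_c and tau_s, the peripheral corrected fraction is, in LaTeX form,

      f_{\mathrm{corrected}} = \frac{m\,\tau_c}{m\,\tau_c + (1 - m)\,\tau_s},

    so a survival advantage tau_c > tau_s lets modest bone marrow chimerism translate into a much larger corrected fraction in peripheral blood.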

  19. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete

    PubMed Central

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-01-01

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of −1 to +1, eight axial mixtures were prepared at extreme values of −2 and +2, with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A), on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and to carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model. PMID:28787990

  20. Tetraquark mixing framework for isoscalar resonances in light mesons

    NASA Astrophysics Data System (ADS)

    Kim, Hungchong; Kim, K. S.; Cheoun, Myung-Ki; Oka, Makoto

    2018-05-01

    Recently, a tetraquark mixing framework has been proposed for light mesons and applied more or less successfully to the isovector resonances a0(980), a0(1450), as well as to the isodoublet resonances K0*(800), K0*(1430). In this work, we present a more extensive view on the mixing framework and apply it to the isoscalar resonances f0(500), f0(980), f0(1370), f0(1500). Tetraquarks in this framework can have two spin configurations, containing either a spin-0 diquark or a spin-1 diquark, and each configuration forms a nonet in flavor space. The two spin configurations are found to mix strongly through the color-spin interactions. Their mixtures, which diagonalize the hyperfine masses, can generate the physical resonances constituting two nonets, which in fact coincide roughly with the experimental observation. We identify f0(500), f0(980) as the isoscalar members of the light nonet, and f0(1370), f0(1500) as the corresponding members of the heavy nonet. This means that the spin-configuration mixing, as it relates the corresponding members in the two nonets, can generate f0(500), f0(1370) among the members of light mass, and f0(980), f0(1500) among those of heavy mass. A complication arises because the isoscalar members of each nonet are subject to an additional flavor mixing known as the Okubo-Zweig-Iizuka rule, so that f0(500), f0(980), and similarly f0(1370), f0(1500), are mixtures of the two isoscalar members belonging to an octet and a singlet of SU_f(3). The tetraquark mixing framework including the flavor mixing is tested for the isoscalar resonances in terms of the mass splitting and the fall-apart decay modes. The mass splitting among the isoscalar resonances is found to be qualitatively consistent with their hyperfine mass splitting, which is strongly driven by the spin-configuration mixing; this suggests that the tetraquark mixing framework works. The fall-apart modes from our tetraquarks also seem to be consistent with the experimental modes. We also discuss the possible existence of spin-1 tetraquarks constructed from the spin-1 diquark.

  1. NGMIX: Gaussian mixture models for 2D images

    NASA Astrophysics Data System (ADS)

    Sheldon, Erin

    2015-08-01

    NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.

  2. A non-ideal model for predicting the effect of dissolved salt on the flash point of solvent mixtures.

    PubMed

    Liaw, Horng-Jang; Wang, Tzu-Ai

    2007-03-06

    Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Liquids with dissolved salt are used in salt-distillation processes for separating close-boiling or azeotropic systems, and the addition of salts to a liquid may reduce its fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. The modified model was verified by comparison with experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in predicting the flash points of these mixtures. The experimental results confirm markedly greater increases in flash point upon addition of inorganic salts than upon supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessing the fire and explosion hazard of solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
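
    The underlying flash point condition for miscible mixtures in this line of work is Le Chatelier's rule (stated here from general knowledge, not from this abstract): at the mixture flash point T_fp the vapor contributions sum to one, with non-ideality entering through liquid-phase activity coefficients gamma_i, in LaTeX form,

      \sum_i \frac{x_i\,\gamma_i\,P_i^{\mathrm{sat}}(T_{fp})}{P_{i,\mathrm{fp}}^{\mathrm{sat}}} = 1,

    where P_{i,fp}^sat is the vapor pressure of pure component i at its own flash point; a dissolved salt that reduces the solvents' effective volatility raises the temperature needed to satisfy the equality.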

  3. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teng, S.; Tebby, C.

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  4. Numerical Modelling of Staged Combustion Aft-Injected Hybrid Rocket Motors

    NASA Astrophysics Data System (ADS)

    Nijsse, Jeff

    The staged combustion aft-injected hybrid (SCAIH) rocket motor is a promising design for the future of hybrid rocket propulsion. Advances in computational fluid dynamics and scientific computing have made computational modelling an effective tool in hybrid rocket motor design and development. The focus of this thesis is the numerical modelling of the SCAIH rocket motor in a turbulent combustion, high-speed, reactive flow framework accounting for solid soot transport and radiative heat transfer. The SCAIH motor is modelled with a shear coaxial injector, with liquid oxygen injected in the center at sub-critical conditions: 150 K and 150 m/s (Mach ≈ 0.9), and a gas-generator gas-solid mixture of one-third carbon soot by mass injected in the annular opening at 1175 K and 460 m/s (Mach ≈ 0.6). Flow conditions in the near-injector region and the flame anchoring mechanism are of particular interest. Overall, the flow is shown to exhibit instabilities, and the flame is shown to anchor directly on the injector faceplate, with temperatures in excess of 2700 K.

  5. Live Speech Driven Head-and-Eye Motion Generators.

    PubMed

    Le, Binh H; Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and a gradient descent optimization algorithm are employed to generate head motion from speech features; 2) a Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features; and 3) nonnegative linear regression is used to model voluntary eyelid motion, and a log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired-comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.

  6. Topics in Computational Bayesian Statistics With Applications to Hierarchical Models in Astronomy and Sociology

    NASA Astrophysics Data System (ADS)

    Sahai, Swupnil

    This thesis includes three parts. The overarching theme is how to analyze structured hierarchical data, with applications to astronomy and sociology. The first part discusses how expectation propagation can be used to parallelize the computation when fitting large hierarchical Bayesian models. This methodology is then used to fit a novel nonlinear mixture model to ultraviolet radiation from various regions of the observable universe. The second part discusses how the Stan probabilistic programming language can be used to numerically integrate terms in a hierarchical Bayesian model. This technique is demonstrated on supernova data to significantly speed up convergence to the posterior distribution compared to a previous study that used a Gibbs-type sampler. The third part builds a formal latent kernel representation for aggregate relational data as a way to more robustly estimate the mixing characteristics of agents in a network. In particular, the framework is applied to sociology surveys to estimate, as a function of ego age, the age and sex composition of the personal networks of individuals in the United States.

  7. Determination of Failure Point of Asphalt-Mixture Fatigue-Test Results Using the Flow Number Method

    NASA Astrophysics Data System (ADS)

    Wulan, C. E. P.; Setyawan, A.; Pramesti, F. P.

    2018-03-01

    The failure point of the results of fatigue tests of asphalt mixtures performed in controlled-stress mode is difficult to determine. However, several methods from empirical studies are available to solve this problem. The objectives of this study are to determine the fatigue failure point of the results of indirect tensile fatigue tests using the Flow Number method and to determine the best Flow Number model for the asphalt mixtures tested. In order to achieve these goals, the best of three asphalt mixtures was first selected based on their Marshall properties. Next, the Indirect Tensile Fatigue Test was performed on the chosen asphalt mixture. The stress-controlled fatigue tests were conducted at a temperature of 20°C and a frequency of 10 Hz, with the application of three loads: 500, 600, and 700 kPa. The last step was the application of the Flow Number methods, namely the Three-Stages Model, FNest Model, Francken Model, and Stepwise Method, to the results of the fatigue tests to determine the failure point of the specimen. The chosen asphalt mixture is an EVA (ethylene-vinyl acetate) polymer-modified asphalt mixture with 6.5% OBC (Optimum Bitumen Content). Furthermore, the results of this study show that the failure points of the EVA-modified asphalt mixture under loads of 500, 600, and 700 kPa are 6621, 4841, and 611 for the Three-Stages Model; 4271, 3266, and 537 for the FNest Model; 3401, 2431, and 421 for the Francken Model; and 6901, 6841, and 1291 for the Stepwise Method, respectively. These results show that the larger the load, the smaller the number of cycles to failure. However, the best FN results are given by the Three-Stages Model and the Stepwise Method, which exhibit a sharp increase in accumulated strain after its steady development.
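
    As one concrete example, the Francken model in its usual form eps_p(N) = A*N^B + C*(exp(D*N) - 1) can be fitted and the failure point located at the minimum strain rate; a sketch with invented data follows (the functional form is stated from general knowledge, not from this abstract):

      import numpy as np
      from scipy.optimize import curve_fit

      def francken(N, A, B, C, D):
          """Permanent strain vs. cycles: power-law creep + exponential tertiary."""
          return A * N ** B + C * (np.exp(D * N) - 1.0)

      N = np.arange(1.0, 3001.0)                 # invented test record
      eps = francken(N, 40.0, 0.35, 1e-3, 0.003)

      popt, _ = curve_fit(francken, N, eps, p0=[10.0, 0.5, 1e-4, 1e-3],
                          maxfev=20000)
      rate = np.gradient(francken(N, *popt), N)  # strain per cycle
      print("failure point ~ cycle", int(N[np.argmin(rate)]))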

  8. Model Selection Methods for Mixture Dichotomous IRT Models

    ERIC Educational Resources Information Center

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

    This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), the pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…

  9. Mixture models for estimating the size of a closed population when capture rates vary among individuals

    USGS Publications Warehouse

    Dorazio, R.M.; Royle, J. Andrew

    2003-01-01

    We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
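
    A rough sketch of the beta-binomial variant is given below, assuming scipy's betabinom and fabricated capture frequencies: individual detection probabilities vary as Beta(a, b), and the population size N is profiled over a grid with the heterogeneity parameters re-optimized at each N. This is a toy version of the idea, not the authors' implementation.

        # Beta-binomial mixture estimate of closed-population size N.
        # Capture frequencies below are fabricated for illustration only.
        import numpy as np
        from scipy.stats import betabinom
        from scipy.optimize import minimize
        from scipy.special import gammaln

        T = 6                                  # capture occasions
        y = np.array([1]*40 + [2]*18 + [3]*7 + [4]*3 + [5]*1)   # observed frequencies
        n = y.size

        def neg_profile_loglik(theta, N):
            a, b = np.exp(theta)               # keep a, b positive
            p0 = betabinom.pmf(0, T, a, b)     # P(an individual is never captured)
            ll = (gammaln(N + 1) - gammaln(N - n + 1)
                  + (N - n) * np.log(p0)
                  + betabinom.logpmf(y, T, a, b).sum())
            return -ll

        grid = np.arange(n, 5 * n)
        prof = [minimize(neg_profile_loglik, x0=[0.0, 1.0], args=(N,),
                         method="Nelder-Mead").fun for N in grid]
        print("N_hat =", grid[int(np.argmin(prof))])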

  10. Chemical mixtures in potable water in the U.S.

    USGS Publications Warehouse

    Ryker, Sarah J.

    2014-01-01

    In recent years, regulators have devoted increasing attention to health risks from exposure to multiple chemicals. In 1996, the US Congress directed the US Environmental Protection Agency (EPA) to study mixtures of chemicals in drinking water, with a particular focus on potential interactions affecting chemicals' joint toxicity. The task is complicated by the number of possible mixtures in drinking water and lack of toxicological data for combinations of chemicals. As one step toward risk assessment and regulation of mixtures, the EPA and the Agency for Toxic Substances and Disease Registry (ATSDR) have proposed to estimate mixtures' toxicity based on the interactions of individual component chemicals. This approach permits the use of existing toxicological data on individual chemicals, but still requires additional information on interactions between chemicals and environmental data on the public's exposure to combinations of chemicals. Large compilations of water-quality data have recently become available from federal and state agencies. This chapter demonstrates the use of these environmental data, in combination with the available toxicological data, to explore scenarios for mixture toxicity and develop priorities for future research and regulation. Occurrence data on binary and ternary mixtures of arsenic, cadmium, and manganese are used to parameterize the EPA and ATSDR models for each drinking water source in the dataset. The models' outputs are then mapped at county scale to illustrate the implications of the proposed models for risk assessment and rulemaking. For example, according to the EPA's interaction model, the levels of arsenic and cadmium found in US groundwater are unlikely to have synergistic cardiovascular effects in most areas of the country, but the same mixture's potential for synergistic neurological effects merits further study. Similar analysis could, in future, be used to explore the implications of alternative risk models for the toxicity and interaction of complex mixtures, and to identify the communities with the highest and lowest expected value for regulation of chemical mixtures.

  11. Vapor-Liquid Equilibria Using the Gibbs Energy and the Common Tangent Plane Criterion

    ERIC Educational Resources Information Center

    Olaya, Maria del Mar; Reyes-Labarta, Juan A.; Serrano, Maria Dolores; Marcilla, Antonio

    2010-01-01

    Phase thermodynamics is often perceived as a difficult subject with which many students never become fully comfortable. The Gibbsian geometrical framework can help students to gain a better understanding of phase equilibria. An exercise to interpret the vapor-liquid equilibrium of a binary azeotropic mixture, using the equilibrium condition based…

  12. IN UTERO EXPOSURE TO AN AR ANTAGONIST PLUS AN INHIBITOR OF FETAL TESTOSTERONE SYNTHESIS INDUCES CUMULATIVE EFFECTS ON F1 MALE RATS

    EPA Science Inventory

    Risk assessments are typically conducted on a chemical-by-chemical basis; however, many regulatory bodies are developing frameworks for assessing the cumulative risk of chemical mixtures. The current investigation examined how chemicals that disrupt rat sex different...

  13. Commentary: Is the Air Pollution Health Research Community Prepared to Support a Multipollutant Air Quality Management Framework?

    EPA Science Inventory

    Ambient air pollution is always encountered as a complex mixture, but past regulatory and research strategies largely focused on single pollutants, pollutant classes, and sources one-at-a-time. There is a trend toward managing air quality in a progressively “multipollutant” manne...

  14. CO2 splitting by H2O to CO and O2 under UV light in TiMCM-41 silicate sieve

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Wenyong; Han, Hongxian; Frei, Heinz

    2004-04-06

    The 266 nm light-induced reaction of CO2 and H2O gas mixtures (including the isotopic modifications 13CO2, C18O2, and D2O) in framework TiMCM-41 silicate sieve was monitored by in-situ FT-IR spectroscopy at room temperature. Carbon monoxide gas was observed as the sole product by infrared, and its growth was found to depend linearly on the photolysis laser power. H2O was confirmed as the stoichiometric electron donor. The work establishes CO as the single-photon, two-electron transfer product of CO2 photoreduction by H2O at framework Ti centers for the first time. O2 was detected as co-product by mass spectrometric analysis of the photolysis gas mixture. These results are explained by single UV photon-induced splitting of CO2 by H2O to CO and surface OH radical.

  15. Photocatalyzed Hydrogen Evolution from Water by a Composite Catalyst of NH2-MIL-125(Ti) and Surface Nickel(II) Species.

    PubMed

    Meyer, Kim; Bashir, Shahid; Llorca, Jordi; Idriss, Hicham; Ranocchiari, Marco; van Bokhoven, Jeroen A

    2016-09-19

    A composite of the metal-organic framework (MOF) NH2-MIL-125(Ti) and molecular and ionic nickel(II) species catalyzed hydrogen evolution from water under UV light. In 95 v/v % aqueous conditions the composite produced hydrogen in quantities two orders of magnitude higher than that of the virgin framework and an order of magnitude greater than that of the molecular catalyst. In a 2 v/v % water and acetonitrile mixture, the composite demonstrated a TOF of 28 mol H2 g(Ni)−1 h−1 and remained active for up to 50 h, sustaining catalysis for three times longer and yielding 20-fold the amount of hydrogen. Appraisal of physical mixtures of the MOF and each of the nickel species under identical photocatalytic conditions suggests that similar surface-localized light sensitization and proton reduction processes operate in the composite catalyst. Both nickel species contribute to catalytic conversion, although different activation behaviors are observed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Adsorptive Separation of Methanol-Acetone on Isostructural Series of Metal-Organic Frameworks M-BTC (M = Ti, Fe, Cu, Co, Ru, Mo): A Computational Study of Adsorption Mechanisms and Metal-Substitution Impacts.

    PubMed

    Wu, Ying; Chen, Huiyong; Xiao, Jing; Liu, Defei; Liu, Zewei; Qian, Yu; Xi, Hongxia

    2015-12-09

    The adsorptive separation properties of the M-BTC isostructural series (M = Ti, Fe, Cu, Co, Ru, Mo) for methanol-acetone mixtures were investigated using several computational procedures: grand canonical Monte Carlo (GCMC) simulations, density functional theory (DFT), and ideal adsorbed solution theory (IAST), together with a comprehensive analysis of how adsorbate-metal interactions govern the adsorptive separation behavior. The results showed that single-component adsorption was driven by adsorbate-framework interactions at low pressures and by framework structures at high pressures, with the mass effects, electrostatics, and geometric accessibility of the metal sites also playing roles. In the case of methanol-acetone separation, the selectivity for methanol on M-BTCs decreased with rising pressure due to pressure-dependent separation mechanisms: cooperative effects between methanol and acetone hindered the separation at low pressures, whereas competitive effects of acetone further lowered the selectivity at high pressures. Among these M-BTCs, the Ti and Fe analogues exhibited the highest thermodynamic methanol/acetone selectivity, making them promising for adsorptive methanol/acetone separation processes. The investigation provides mechanistic insights into how the nature of the metal centers affects the adsorption properties of MOFs, and will further promote the rational design of new MOF materials for effective gas mixture separation.

  17. Testing and Improving Theories of Radiative Transfer for Determining the Mineralogy of Planetary Surfaces

    NASA Astrophysics Data System (ADS)

    Gudmundsson, E.; Ehlmann, B. L.; Mustard, J. F.; Hiroi, T.; Poulet, F.

    2012-12-01

    Two radiative transfer theories, the Hapke and Shkuratov models, have been used to estimate the mineralogic composition of laboratory mixtures of anhydrous mafic minerals from reflected near-infrared light, accurately modeling abundances to within 10%. For this project, we tested the efficacy of the Hapke model for determining the composition (weight fraction, particle diameter) of mixtures containing hydrous minerals, including phyllosilicates. Modal mineral abundances for some binary mixtures were modeled to within +/-10% of actual values, but other mixtures showed larger inaccuracies (up to 25%). Consequently, a sensitivity analysis of selected input and model parameters was performed. We first examined the shape of the model's error function (RMS error between modeled and measured spectra) over a large range of endmember weight fractions and particle diameters and found a single global minimum for each mixture (rather than local minima). The minimum was sensitive to modeled particle diameter but comparatively insensitive to modeled endmember weight fraction. Derivation of the endmembers' k optical constant spectra using the Hapke model showed differences from the Shkuratov-derived optical constants originally used. Model runs with different sets of optical constants suggest that slight differences in the optical constants used significantly affect the accuracy of model predictions. Even for mixtures where abundance was modeled correctly, particle diameter agreed inconsistently with sieved particle sizes and varied greatly for individual mixtures within a suite. Particle diameter was highly sensitive to the optical constants, possibly indicating that changes in modeled path length (proportional to particle diameter) compensate for changes in the k optical constant. Alternatively, it may not be appropriate to model path length and particle diameter with the same proportionality for all materials. Across mixtures, RMS error increased in proportion to the fraction of the darker endmember. Analyses are ongoing, and further studies will investigate the effects of sample hydration, permitted variability in particle size, assumed photometric functions, and different wavelength ranges on model results. Such studies will advance understanding of how best to apply radiative transfer modeling to geologically complex planetary surfaces. Corresponding authors: eyjolfur88@gmail.com, ehlmann@caltech.edu

  18. Applying mixture toxicity modelling to predict bacterial bioluminescence inhibition by non-specifically acting pharmaceuticals and specifically acting antibiotics.

    PubMed

    Neale, Peta A; Leusch, Frederic D L; Escher, Beate I

    2017-04-01

    Pharmaceuticals and antibiotics co-occur in the aquatic environment, but mixture studies to date have mainly focused on pharmaceuticals alone or antibiotics alone, although differences in mode of action may lead to different effects in mixtures. In this study we used the Bacterial Luminescence Toxicity Screen (BLT-Screen) after acute (0.5 h) and chronic (16 h) exposure to evaluate how non-specifically acting pharmaceuticals and specifically acting antibiotics act together in mixtures. Three models were applied to predict mixture toxicity: concentration addition, independent action, and the two-step prediction (TSP) model, which groups similarly acting chemicals together using concentration addition, followed by independent action to combine the two groups. All non-antibiotic pharmaceuticals had similar EC50 values at both 0.5 and 16 h, indicating, together with a QSAR (quantitative structure-activity relationship) analysis, that they act as baseline toxicants. In contrast, the antibiotics' EC50 values decreased by up to three orders of magnitude after 16 h, which can be explained by their specific effect on bacteria. Equipotent mixtures of non-antibiotic pharmaceuticals only, antibiotics only, and both non-antibiotic pharmaceuticals and antibiotics were prepared based on the single-chemical results. The mixture toxicity models were all in close agreement with the experimental results, with predicted EC50 values within a factor of two of the experimental results. This suggests that concentration addition can be applied to bacterial assays to model the mixture effects of environmental samples containing both specifically and non-specifically acting chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.
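
    For readers unfamiliar with the two reference models, the sketch below computes a concentration-addition (CA) mixture EC50 and numerically solves for the independent-action (IA) one under assumed log-logistic single-chemical curves; the two-step prediction simply nests these operations (CA within groups, IA across them). All EC50s and slopes here are invented for illustration.

        # Sketch of CA and IA predictions for an equipotent mixture.
        import numpy as np
        from scipy.optimize import brentq

        ec50 = np.array([2.0, 5.0, 9.0])       # single-chemical EC50s (made up)
        hill = np.array([1.2, 0.9, 1.5])       # log-logistic slopes (made up)
        p = (1 / ec50) / (1 / ec50).sum()      # equipotent mixture fractions

        def effect(c, ec, h):                  # log-logistic concentration-response
            return 1.0 / (1.0 + (ec / c) ** h)

        ec50_ca = 1.0 / np.sum(p / ec50)       # CA: EC50_mix = (sum p_i/EC50_i)^-1

        def ia_effect(c):                      # IA: response probabilities combine
            return 1.0 - np.prod([1 - effect(p[i] * c, ec50[i], hill[i])
                                  for i in range(len(p))])

        ec50_ia = brentq(lambda c: ia_effect(c) - 0.5, 1e-6, 1e3)
        print(f"CA EC50 = {ec50_ca:.2f}, IA EC50 = {ec50_ia:.2f}")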

  19. Engineering chiral porous metal-organic frameworks for enantioselective adsorption and separation

    NASA Astrophysics Data System (ADS)

    Peng, Yongwu; Gong, Tengfei; Zhang, Kang; Lin, Xiaochao; Liu, Yan; Jiang, Jianwen; Cui, Yong

    2014-07-01

    The separation of racemic molecules is of substantial significance not only for basic science but also for technical applications, such as fine chemicals and drug development. Here we report two isostructural chiral metal-organic frameworks decorated with chiral dihydroxy or dimethoxy auxiliaries, built from enantiopure tetracarboxylate-bridging ligands of 1,1′-biphenol and a manganese carboxylate chain. The framework bearing dihydroxy groups functions as a solid-state host capable of adsorbing and separating mixtures of a range of chiral aromatic and aliphatic amines with high enantioselectivity. The host material can be readily recycled and reused without any apparent loss of performance. The utility of the present adsorption separation is demonstrated in the large-scale resolution of racemic 1-phenylethylamine. Control experiments and molecular simulations suggest that the chiral recognition and separation are attributable to the different orientations and specific binding energies of the enantiomers in the microenvironment of the framework.

  20. Synthesis, characterizations and catalytic studies of a new two-dimensional metal-organic framework based on Co-carboxylate secondary building units

    NASA Astrophysics Data System (ADS)

    Bagherzadeh, Mojtaba; Ashouri, Fatemeh; Đaković, Marijana

    2015-03-01

    A metal-organic framework [Co3(BDC)3(DMF)2(H2O)2] was synthesized and structurally characterized. X-ray single-crystal analysis revealed that the framework contains a 2D polymeric chain formed through coordination of the 1,4-benzenedicarboxylic acid linker ligand to cobalt centers. The polymer crystallizes in the monoclinic P21/n space group with a=13.989(3) Å, b=9.6728(17) Å, c=16.707(3) Å, and Z=2, and features a framework based on perfect octahedral Co-O6 secondary building units. The catalytic activity of [Co3(BDC)3(DMF)2(H2O)2]n for olefin oxidation was investigated. The heterogeneous catalyst could be readily separated from the reaction mixture and reused three times without significant degradation in catalytic activity. Furthermore, no contribution from homogeneous catalysis by active species leaching into the reaction solution was detected.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grove, John W.

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.

  2. Estimating reaction rate coefficients within a travel-time modeling framework.

    PubMed

    Gong, R; Lu, C; Wu, W-M; Cheng, H; Gu, B; Watson, D; Jardine, P M; Brooks, S C; Criddle, C S; Kitanidis, P K; Luo, J

    2011-01-01

    A generalized, efficient, and practical approach based on the travel-time modeling framework is developed to estimate in situ reaction rate coefficients for groundwater remediation in heterogeneous aquifers. The required information for this approach can be obtained by conducting tracer tests with injection of a mixture of conservative and reactive tracers and measurements of both breakthrough curves (BTCs). The conservative BTC is used to infer the travel-time distribution from the injection point to the observation point. For advection-dominant reactive transport with well-mixed reactive species and a constant travel-time distribution, the reactive BTC is obtained by integrating the solutions to advective-reactive transport over the entire travel-time distribution, and then is used in optimization to determine the in situ reaction rate coefficients. By directly working on the conservative and reactive BTCs, this approach avoids costly aquifer characterization and improves the estimation for transport in heterogeneous aquifers which may not be sufficiently described by traditional mechanistic transport models with constant transport parameters. Simplified schemes are proposed for reactive transport with zero-, first-, nth-order, and Michaelis-Menten reactions. The proposed approach is validated by a reactive transport case in a two-dimensional synthetic heterogeneous aquifer and a field-scale bioremediation experiment conducted at Oak Ridge, Tennessee. The field application indicates that ethanol degradation for U(VI)-bioremediation is better approximated by zero-order reaction kinetics than first-order reaction kinetics. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.
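
    A stripped-down sketch of the idea for first-order kinetics is shown below, with synthetic breakthrough curves standing in for tracer data: under advection-dominant transport, the reactive BTC is the conservative BTC weighted by exp(−kt) along the travel time, so k can be fitted directly from the two curves.

        # Minimal travel-time sketch: fit a first-order rate coefficient k from
        # paired conservative/reactive breakthrough curves (synthetic data).
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import lognorm

        t = np.linspace(0.1, 30, 300)                  # days
        btc_cons = lognorm.pdf(t, s=0.5, scale=8.0)    # travel-time density (synthetic)
        k_true = 0.12                                  # 1/day
        btc_react = btc_cons * np.exp(-k_true * t)
        btc_react += np.random.default_rng(2).normal(0, 1e-4, t.size)

        def model(t, k):
            return btc_cons * np.exp(-k * t)

        (k_hat,), _ = curve_fit(model, t, btc_react, p0=[0.05])
        print(f"k_hat = {k_hat:.3f} per day")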

  3. Estimating Reaction Rate Coefficients Within a Travel-Time Modeling Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, R; Lu, C; Luo, Jian

    A generalized, efficient, and practical approach based on the travel-time modeling framework is developed to estimate in situ reaction rate coefficients for groundwater remediation in heterogeneous aquifers. The required information for this approach can be obtained by conducting tracer tests with injection of a mixture of conservative and reactive tracers and measurements of both breakthrough curves (BTCs). The conservative BTC is used to infer the travel-time distribution from the injection point to the observation point. For advection-dominant reactive transport with well-mixed reactive species and a constant travel-time distribution, the reactive BTC is obtained by integrating the solutions to advective-reactive transport over the entire travel-time distribution, and then is used in optimization to determine the in situ reaction rate coefficients. By directly working on the conservative and reactive BTCs, this approach avoids costly aquifer characterization and improves the estimation for transport in heterogeneous aquifers which may not be sufficiently described by traditional mechanistic transport models with constant transport parameters. Simplified schemes are proposed for reactive transport with zero-, first-, nth-order, and Michaelis-Menten reactions. The proposed approach is validated by a reactive transport case in a two-dimensional synthetic heterogeneous aquifer and a field-scale bioremediation experiment conducted at Oak Ridge, Tennessee. The field application indicates that ethanol degradation for U(VI)-bioremediation is better approximated by zero-order reaction kinetics than first-order reaction kinetics.

  4. Estimating and modeling the cure fraction in population-based cancer survival analysis.

    PubMed

    Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W

    2007-07-01

    In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are two main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the two types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
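
    The two functional forms compared above can be written in a few lines. The sketch below folds a stand-in background survival S*(t) into each, as in relative-survival analysis: the mixture form S(t) = S*(t)·[π + (1 − π)·Su(t)] and the non-mixture form S(t) = S*(t)·π^F(t). All parameter values are illustrative only.

        # Side-by-side sketch of the mixture and non-mixture cure-fraction forms.
        import numpy as np

        def weibull_surv(t, lam, gamma):       # S_u(t) for the "uncured"
            return np.exp(-lam * t**gamma)

        def mixture_cure(t, pi, lam, gamma, s_star):
            # S_pop(t) = S*(t) * [pi + (1 - pi) * S_u(t)]
            return s_star(t) * (pi + (1 - pi) * weibull_surv(t, lam, gamma))

        def nonmixture_cure(t, pi, lam, gamma, s_star):
            # S_pop(t) = S*(t) * pi ** F_u(t); as F_u -> 1, S_pop -> S* * pi
            return s_star(t) * pi ** (1 - weibull_surv(t, lam, gamma))

        t = np.linspace(0, 15, 4)
        s_star = lambda t: np.exp(-0.01 * t)   # stand-in background survival
        print(mixture_cure(t, 0.4, 0.5, 1.2, s_star))
        print(nonmixture_cure(t, 0.4, 0.5, 1.2, s_star))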

  5. Process dissociation and mixture signal detection theory.

    PubMed

    DeCarlo, Lawrence T

    2008-11-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely analyzed study. The results suggest that a process other than recollection may be involved in the process dissociation procedure.

  6. Lagged kernel machine regression for identifying time windows of susceptibility to exposures of complex mixtures.

    PubMed

    Liu, Shelley H; Bobb, Jennifer F; Lee, Kyu Ha; Gennings, Chris; Claus Henn, Birgit; Bellinger, David; Austin, Christine; Schnaas, Lourdes; Tellez-Rojo, Martha M; Hu, Howard; Wright, Robert O; Arora, Manish; Coull, Brent A

    2018-07-01

    The impact of neurotoxic chemical mixtures on children's health is a critical public health concern. It is well known that during early life, toxic exposures may impact cognitive function during critical time intervals of increased vulnerability, known as windows of susceptibility. Knowledge of time windows of susceptibility can help inform treatment and prevention strategies, as chemical mixtures may affect a developmental process that is operating at a specific life phase. There are several statistical challenges in estimating the health effects of time-varying exposures to multi-pollutant mixtures, such as multicollinearity among the exposures both within and across time points, and complex exposure-response relationships. To address these concerns, we develop a flexible statistical method, called lagged kernel machine regression (LKMR). LKMR identifies critical exposure windows of chemical mixtures and accounts for complex non-linear and non-additive effects of the mixture at any given exposure window. Specifically, LKMR estimates how the effects of a mixture of exposures change with the exposure time window using a Bayesian formulation of a grouped, fused lasso penalty within a kernel machine regression (KMR) framework. A simulation study demonstrates the performance of LKMR under realistic exposure-response scenarios and demonstrates large gains over approaches that consider each time window separately, particularly when serial correlation among the time-varying exposures is high. Furthermore, LKMR demonstrates gains over another approach that inputs all time-specific chemical concentrations together into a single KMR. We apply LKMR to estimate associations between neurodevelopment and metal mixtures in Early Life Exposures in Mexico and Neurotoxicology, a prospective cohort study of child health in Mexico City.

  7. Direct Reconstruction of CT-Based Attenuation Correction Images for PET With Cluster-Based Penalties

    NASA Astrophysics Data System (ADS)

    Kim, Soo Mee; Alessio, Adam M.; De Man, Bruno; Kinahan, Paul E.

    2017-03-01

    Extremely low-dose (LD) CT acquisitions used for PET attenuation correction have high levels of noise and potential bias artifacts due to photon starvation. This paper explores the use of a priori knowledge for iterative image reconstruction of the CT-based attenuation map. We investigate a maximum a posteriori framework with a cluster-based multinomial penalty for direct iterative coordinate descent (dICD) reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction used a Poisson log-likelihood data fit term and evaluated two image penalty terms, based on spatial and mixture distributions. The spatial regularization is based on a quadratic penalty. For the mixture penalty, we assumed that the attenuation map may consist of four material clusters: air + background, lung, soft tissue, and bone. Using simulated noisy sinogram data, dICD reconstruction was performed with different strengths of the spatial and mixture penalties. The combined spatial and mixture penalties reduced the root mean squared error (RMSE) by roughly a factor of two compared with weighted least squares and filtered backprojection reconstructions of CT images. The combined spatial and mixture penalties resulted in only slightly lower RMSE compared with a spatial quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-LD CT acquisitions, the combination of spatial and mixture penalties offers regularization of both variance and bias and is a potential method to reconstruct attenuation maps with negligible patient dose. The presented results, using a best-case histogram, suggest that the mixture penalty does not offer a substantive benefit over conventional quadratic regularization and diminish enthusiasm for exploring future application of the mixture penalty.

  8. Statistical-thermodynamic model for light scattering from eye lens protein mixtures

    NASA Astrophysics Data System (ADS)

    Bell, Michael M.; Ross, David S.; Bautista, Maurino P.; Shahmohamad, Hossein; Langner, Andreas; Hamilton, John F.; Lahnovych, Carrie N.; Thurston, George M.

    2017-02-01

    We model light-scattering cross sections of concentrated aqueous mixtures of the bovine eye lens proteins γB- and α-crystallin by adapting a statistical-thermodynamic model of mixtures of spheres with short-range attractions. The model reproduces measured static light scattering cross sections, or Rayleigh ratios, of γB-α mixtures from dilute concentrations, where light scattering intensity depends on molecular weights and virial coefficients, to realistically high-concentration protein mixtures like those of the lens. The model relates γB-γB and γB-α attraction strengths and the γB-α size ratio to the free energy curvatures that set light scattering efficiency in tandem with protein refractive index increments. The model includes (i) hard-sphere α-α interactions, which create short-range order and transparency at high protein concentrations, (ii) short-range attractive plus hard-core γ-γ interactions, which produce intense light scattering and liquid-liquid phase separation in aqueous γ-crystallin solutions, and (iii) short-range attractive plus hard-core γ-α interactions, which strongly influence highly non-additive light scattering and phase separation in concentrated γ-α mixtures. The model reveals a new lens transparency mechanism: prominent equilibrium composition fluctuations can be perpendicular to the refractive index gradient. The model reproduces the concave-up dependence of the Rayleigh ratio on α/γ composition at high concentrations, its concave-down nature at intermediate concentrations, non-monotonic dependence of light scattering on γ-α attraction strength, and more intricate, temperature-dependent features. We analytically compute the mixed virial series for light scattering efficiency through third order for the sticky-sphere mixture, and find that the full model represents the available light scattering data at concentrations several times those where the second and third mixed virial contributions fail. The model indicates that increased γ-γ attraction can raise γ-α mixture light scattering far more than it does for solutions of γ-crystallin alone, and can produce marked turbidity tens of degrees Celsius above liquid-liquid separation.

  9. Toxicity interactions between manganese (Mn) and lead (Pb) or cadmium (Cd) in a model organism the nematode C. elegans.

    PubMed

    Lu, Cailing; Svoboda, Kurt R; Lenz, Kade A; Pattison, Claire; Ma, Hongbo

    2018-06-01

    Manganese (Mn) is considered an emerging metal contaminant in the environment. However, its potential interactions with co-occurring toxic metals and the associated mixture effects are largely unknown. Here, we investigated the toxicity interactions between Mn and two commonly co-occurring toxic metals, Pb and Cd, in the model organism the nematode Caenorhabditis elegans. The acute lethal toxicity of mixtures of Mn+Pb and Mn+Cd was first assessed using a toxic unit model. Multiple toxicity endpoints including reproduction, lifespan, stress response, and neurotoxicity were then examined to evaluate the mixture effects at sublethal concentrations. Stress response was assessed using a daf-16::GFP transgenic strain that expresses GFP under the control of the DAF-16 promoter. Neurotoxicity was assessed using a dat-1::GFP transgenic strain that expresses GFP in dopaminergic neurons. The mixture of Mn+Pb induced a more-than-additive (synergistic) lethal toxicity in the worm, whereas the mixture of Mn+Cd induced a less-than-additive (antagonistic) toxicity. Mixture effects on sublethal toxicity showed more complex patterns and depended on the toxicity endpoints as well as the modes of toxic action of the metals. The mixture of Mn+Pb induced additive effects on both reproduction and lifespan, whereas the mixture of Mn+Cd induced additive effects on lifespan but not reproduction. Both mixtures appeared to induce additive effects on stress response and neurotoxicity, although a quantitative assessment was not possible due to the single concentrations used in the mixture tests. Our findings demonstrate the complexity of metal interactions and the associated mixture effects. Assessment of metal mixture toxicity should take into consideration the unique properties of individual metals, their potential toxicity mechanisms, and the toxicity endpoints examined.

  10. Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity

    NASA Astrophysics Data System (ADS)

    Chen, Hsieh; Panagiotopoulos, Athanassios Z.

    2018-01-01

    We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.

  11. Colloidal stability of tannins: astringency, wine tasting and beyond

    NASA Astrophysics Data System (ADS)

    Zanchi, D.; Poulain, C.; Konarev, P.; Tribet, C.; Svergun, D. I.

    2008-12-01

    Tannin-tannin and tannin-protein interactions in water-ethanol solvent mixtures are studied in the context of red wine tasting. While tannin self-aggregation is relevant for the visual aspect of wine tasting (limpidity and related colloidal phenomena), tannin affinity for salivary proline-rich proteins is fundamental for a wide spectrum of organoleptic properties related to astringency. Tannin-tannin interactions are analyzed in water-ethanol wine-like solvents, and the precipitation map is constructed for a typical grape tannin. The interaction between tannins and human salivary proline-rich proteins (PRPs) is investigated in the framework of the shell model for micellization, known for describing tannin-induced aggregation of β-casein. Tannin-assisted micellization and compaction of proteins observed by SAXS are described quantitatively and discussed in the case of astringency.

  12. Predicting the excess solubility of acetanilide, acetaminophen, phenacetin, benzocaine, and caffeine in binary water/ethanol mixtures via molecular simulation

    PubMed Central

    Paluch, Andrew S.; Parameswaran, Sreeja; Liu, Shuai; Kolavennu, Anasuya; Mobley, David L.

    2015-01-01

    We present a general framework to predict the excess solubility of small molecular solids (such as pharmaceutical solids) in binary solvents via molecular simulation free energy calculations at infinite dilution with conventional molecular models. The present study used molecular dynamics with the General AMBER Force Field to predict the excess solubility of acetanilide, acetaminophen, phenacetin, benzocaine, and caffeine in binary water/ethanol solvents. The simulations are able to predict the existence of solubility enhancement and the results are in good agreement with available experimental data. The accuracy of the predictions in addition to the generality of the method suggests that molecular simulations may be a valuable design tool for solvent selection in drug development processes. PMID:25637996

  13. Mixture IRT Model with a Higher-Order Structure for Latent Traits

    ERIC Educational Resources Information Center

    Huang, Hung-Yu

    2017-01-01

    Mixture item response theory (IRT) models have been suggested as an efficient method of detecting the different response patterns derived from latent classes when developing a test. In testing situations, multiple latent traits measured by a battery of tests can exhibit a higher-order structure, and mixtures of latent classes may occur on…

  14. Thermodynamic Modeling of Organic-Inorganic Aerosols with the Group-Contribution Model AIOMFAC

    NASA Astrophysics Data System (ADS)

    Zuend, A.; Marcolli, C.; Luo, B. P.; Peter, T.

    2009-04-01

    Liquid aerosol particles are - from a physicochemical viewpoint - mixtures of inorganic salts, acids, water and a large variety of organic compounds (Rogge et al., 1993; Zhang et al., 2007). Molecular interactions between these aerosol components lead to deviations from ideal thermodynamic behavior. Strong non-ideality between organics and dissolved ions may influence the aerosol phases at equilibrium by means of liquid-liquid phase separations into a mainly polar (aqueous) and a less polar (organic) phase. A number of activity models exist to successfully describe the thermodynamic equilibrium of aqueous electrolyte solutions. However, the large number of different, often multi-functional, organic compounds in mixed organic-inorganic particles is a challenging problem for the development of thermodynamic models. The group-contribution concept, as introduced in the UNIFAC model by Fredenslund et al. (1975), is a practical method to handle this difficulty and to add a certain predictability for unknown organic substances. We present the group-contribution model AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients), which explicitly accounts for molecular interactions between solution constituents, both organic and inorganic, to calculate activities, chemical potentials and the total Gibbs energy of mixed systems (Zuend et al., 2008). This model enables the computation of vapor-liquid (VLE), liquid-liquid (LLE) and solid-liquid (SLE) equilibria within one framework. Focusing on atmospheric applications, we considered eight different cations, five anions and a wide range of alcohols/polyols as organic compounds. With AIOMFAC, the activities of the components within an aqueous electrolyte solution are very well represented up to high ionic strength. We show that the semi-empirical middle-range parametrization of direct organic-inorganic interactions in alcohol-water-salt solutions enables accurate computations of vapor-liquid and liquid-liquid equilibria. References: Fredenslund, A., Jones, R. L., and Prausnitz, J. M.: Group-Contribution Estimation of Activity Coefficients in Nonideal Liquid Mixtures, AIChE J., 21, 1086-1099, 1975. Rogge, W. F., Mazurek, M. A., Hildemann, L. M., Cass, G. R., and Simoneit, B. R. T.: Quantification of Urban Organic Aerosols at a Molecular Level: Identification, Abundance and Seasonal Variation, Atmos. Environ., 27, 1309-1330, 1993. Zhang, Q. et al.: Ubiquity and dominance of oxygenated species in organic aerosols in anthropogenically influenced Northern Hemisphere midlatitudes, Geophys. Res. Lett., 34, L13801, 2007. Zuend, A., Marcolli, C., Luo, B. P., and Peter, T.: A thermodynamic model of mixed organic-inorganic aerosols to predict activity coefficients, Atmos. Chem. Phys., 8, 4559-4593, 2008.

  15. Analytical framework for reconstructing heterogeneous environmental variables from mammal community structure.

    PubMed

    Louys, Julien; Meloro, Carlo; Elton, Sarah; Ditchfield, Peter; Bishop, Laura C

    2015-01-01

    We test the performance of two models that use mammalian communities to reconstruct multivariate palaeoenvironments. While both models exploit the correlation between mammal communities (defined in terms of functional groups) and arboreal heterogeneity, the first uses a multiple multivariate regression of community structure and arboreal heterogeneity, while the second uses a linear regression of the principal components of each ecospace. The success of these methods means the palaeoenvironment of a particular locality can be reconstructed in terms of the proportions of heavy, moderate, light, and absent tree canopy cover. The linear regression is less biased, and more precisely and accurately reconstructs heavy tree canopy cover than the multiple multivariate model. However, the multiple multivariate model performs better than the linear regression for all other canopy cover categories. Both models consistently perform better than randomly generated reconstructions. We apply both models to the palaeocommunity of the Upper Laetolil Beds, Tanzania. Our reconstructions indicate that there was very little heavy tree cover at this site (likely less than 10%), with the palaeo-landscape instead comprising a mixture of light and absent tree cover. These reconstructions help resolve the previous conflicting palaeoecological reconstructions made for this site. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Upscaling Cement Paste Microstructure to Obtain the Fracture, Shear, and Elastic Concrete Mechanical LDPM Parameters.

    PubMed

    Sherzer, Gili; Gao, Peng; Schlangen, Erik; Ye, Guang; Gal, Erez

    2017-02-28

    Modeling the complex behavior of concrete for a specific mixture is a challenging task, as it requires bridging the cement scale and the concrete scale. We describe a multiscale analysis procedure for the modeling of concrete structures, in which material properties at the macro scale are evaluated based on lower scales. Concrete may be viewed over a range of scale sizes, from the atomic scale (10−10 m), which is characterized by the behavior of crystalline particles of hydrated Portland cement, to the macroscopic scale (10 m). The proposed multiscale framework is based on several models, including chemical analysis at the cement paste scale, a mechanical lattice model at the cement and mortar scales, geometrical aggregate distribution models at the mortar scale, and the Lattice Discrete Particle Model (LDPM) at the concrete scale. The analysis procedure starts from a known chemical and mechanical set of parameters of the cement paste, which are then used to evaluate the mechanical properties of the LDPM concrete parameters for the fracture, shear, and elastic responses of the concrete. Although a macroscopic validation study of this procedure is presented, future research should include a comparison to additional experiments in each scale.

  17. Upscaling Cement Paste Microstructure to Obtain the Fracture, Shear, and Elastic Concrete Mechanical LDPM Parameters

    PubMed Central

    Sherzer, Gili; Gao, Peng; Schlangen, Erik; Ye, Guang; Gal, Erez

    2017-01-01

    Modeling the complex behavior of concrete for a specific mixture is a challenging task, as it requires bridging the cement scale and the concrete scale. We describe a multiscale analysis procedure for the modeling of concrete structures, in which material properties at the macro scale are evaluated based on lower scales. Concrete may be viewed over a range of scale sizes, from the atomic scale (10−10 m), which is characterized by the behavior of crystalline particles of hydrated Portland cement, to the macroscopic scale (10 m). The proposed multiscale framework is based on several models, including chemical analysis at the cement paste scale, a mechanical lattice model at the cement and mortar scales, geometrical aggregate distribution models at the mortar scale, and the Lattice Discrete Particle Model (LDPM) at the concrete scale. The analysis procedure starts from a known chemical and mechanical set of parameters of the cement paste, which are then used to evaluate the mechanical properties of the LDPM concrete parameters for the fracture, shear, and elastic responses of the concrete. Although a macroscopic validation study of this procedure is presented, future research should include a comparison to additional experiments in each scale. PMID:28772605

  18. Beta Regression Finite Mixture Models of Polarization and Priming

    ERIC Educational Resources Information Center

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  19. Predicting mixture toxicity of seven phenolic compounds with similar and dissimilar action mechanisms to Vibrio qinghaiensis sp.nov.Q67.

    PubMed

    Huang, Wei Ying; Liu, Fei; Liu, Shu Shen; Ge, Hui Lin; Chen, Hong Han

    2011-09-01

    The prediction of mixture toxicity for chemicals is commonly based on two models: concentration addition (CA) and independent action (IA). Whether CA and IA can predict the mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms was studied here. The mixture toxicity was predicted on the basis of the concentration-response data of the individual compounds. Test mixtures at different concentration ratios and concentration levels were designed using two methods. The results showed that the Weibull function fit the concentration-response data of all the components and their mixtures well, with all correlation coefficients (R) greater than 0.99 and root mean squared errors (RMSEs) less than 0.04. The values predicted by the CA and IA models conformed to the observed values for the mixtures. Therefore, it can be concluded that both CA and IA can reliably predict the mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.
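
    As a sketch of the single-chemical step, the snippet below fits a Weibull concentration-response form of the kind referred to above, E = 1 − exp(−exp(α + β·log10 c)), and recovers an EC50 from the fitted parameters. The data are fabricated, not taken from this study.

        # Weibull concentration-response fit and EC50 back-calculation.
        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_cr(c, alpha, beta):
            return 1.0 - np.exp(-np.exp(alpha + beta * np.log10(c)))

        c = np.logspace(-2, 1, 12)             # concentrations (illustrative)
        E = weibull_cr(c, 1.0, 1.8)
        E += np.random.default_rng(3).normal(0, 0.01, c.size)

        (alpha, beta), _ = curve_fit(weibull_cr, c, E, p0=[0.5, 1.0])
        # E = 0.5  <=>  alpha + beta*log10(c) = ln(ln 2)
        ec50 = 10 ** ((np.log(np.log(2.0)) - alpha) / beta)
        print(f"alpha={alpha:.2f}, beta={beta:.2f}, EC50={ec50:.3g}")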

  20. Mixture optimization for mixed gas Joule-Thomson cycle

    NASA Astrophysics Data System (ADS)

    Detlor, J.; Pfotenhauer, J.; Nellis, G.

    2017-12-01

    An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters, expanding on prior research by exploring higher heat-rejection temperatures and lower pressure ratios. A mixture optimization model has been developed that determines an optimal three-component mixture by maximizing the minimum isothermal enthalpy change, ΔhT, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed-gas JT system with load temperatures down to 110 K and supply temperatures above room temperature, for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the fraction of the heat exchanger that operates in the two-phase regime, in order to begin the process of selecting a mixture for experimental investigation.
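
    A conceptual sketch of the maximin search is given below, under the loud assumption that an isothermal enthalpy-difference routine is available; delta_h_T here is a placeholder, whereas a real tool would query an equation-of-state library for h(T, P, x) at the high and low pressures.

        # Maximin sketch: choose mole fractions x maximizing the minimum
        # isothermal enthalpy difference over the load-to-supply range.
        import numpy as np
        from scipy.optimize import minimize

        T_grid = np.linspace(110.0, 300.0, 40)         # K

        def delta_h_T(T, x):
            # Hypothetical stand-in for h(T, P_low, x) - h(T, P_high, x).
            basis = np.stack([np.exp(-((T - c) / 60.0) ** 2) for c in (130, 200, 270)])
            return x @ basis

        def neg_worst_case(x):
            x = np.abs(x) / np.abs(x).sum()            # project onto the simplex
            return -np.min(delta_h_T(T_grid, x))

        res = minimize(neg_worst_case, x0=np.ones(3) / 3, method="Nelder-Mead")
        x_opt = np.abs(res.x) / np.abs(res.x).sum()
        print("optimal fractions:", np.round(x_opt, 3))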

  1. Existence, uniqueness and positivity of solutions for BGK models for mixtures

    NASA Astrophysics Data System (ADS)

    Klingenberg, C.; Pirner, M.

    2018-01-01

    We consider kinetic models for a multi-component gas mixture without chemical reactions. In the literature, one finds two types of BGK models for describing gas mixtures. One type has a sum of BGK-type interaction terms in the relaxation operator, for example the model described by Klingenberg, Pirner and Puppo [20], which contains well-known models of physicists and engineers, for example Hamel [16] and Gross and Krook [15], as special cases. The other type contains only one collision term on the right-hand side, for example the well-known model of Andries, Aoki and Perthame [1]. For each of these two models [20] and [1], we prove existence, uniqueness and positivity of solutions in the first part of the paper. In the second part, we use the first model [20] to determine an unknown function in the energy exchange of the macroscopic equations for gas mixtures described by Dellacherie [11].

  2. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models.

    PubMed

    Teng, S; Tebby, C; Barcellini-Couget, S; De Sousa, G; Brochot, C; Rahmani, R; Pery, A R R

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic (BK/TD) models to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixture showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. A robust hidden Markov Gauss mixture vector quantizer for a noisy source.

    PubMed

    Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M

    2009-07-01

    Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and salt-and-pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information (MDI) distortion. In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with salt-and-pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure performs better than image restoration-based techniques and closely matches the performance of HMGMM for clean images in terms of both visual segmentation results and error rate.

  4. Inverse analysis and regularisation in conditional source-term estimation modelling

    NASA Astrophysics Data System (ADS)

    Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.

    2014-05-01

    Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter, and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced by the inclusion of a smoothing prior, and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step, and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals do not depend on a previous solution and better predict characteristics at higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
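
    For concreteness, a minimal sketch of the two regularisation orders on a synthetic discretised Fredholm system follows, solved as an augmented least-squares problem; CSE's actual kernel (the conditional PDF weights) is not reproduced here, and the regularisation weight is not tuned.

        # Zeroth- vs first-order Tikhonov regularisation for an ill-posed A x = b.
        import numpy as np

        n = 60
        s = np.linspace(0, 1, n)
        A = np.exp(-30.0 * (s[:, None] - s[None, :]) ** 2)   # smoothing kernel
        x_true = np.sin(2 * np.pi * s) ** 2
        b = A @ x_true + np.random.default_rng(4).normal(0, 1e-3, n)

        lam = 1e-2
        L0 = np.eye(n)                                       # zeroth order
        L1 = np.diff(np.eye(n), axis=0)                      # first order (smoothing)

        def tikhonov(A, b, L, lam):
            # min ||A x - b||^2 + lam^2 ||L x||^2 via stacked least squares
            A_aug = np.vstack([A, lam * L])
            b_aug = np.concatenate([b, np.zeros(L.shape[0])])
            return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

        for L, name in [(L0, "zeroth"), (L1, "first")]:
            err = np.linalg.norm(tikhonov(A, b, L, lam) - x_true)
            print(f"{name}-order error: {err:.3f}")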

  5. Hydrologic risk analysis in the Yangtze River basin through coupling Gaussian mixtures into copulas

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, W. W.; Huang, G. H.; Li, Y. P.; Huang, K.; Li, Z.

    2016-02-01

    In this study, a bivariate hydrologic risk framework is proposed by coupling Gaussian mixtures into copulas, leading to a coupled GMM-copula method. In the coupled GMM-copula method, the marginal distributions of flood peak, volume and duration are quantified through Gaussian mixture models, and the joint probability distributions of flood peak-volume, peak-duration and volume-duration are established through copulas. The bivariate hydrologic risk is then derived based on the joint return period of flood variable pairs. The proposed method is applied to the risk analysis for the Yichang station on the main stream of the Yangtze River, China. The results indicate that (i) the bivariate risk for flood peak-volume remains roughly constant for flood volumes less than 1.0 × 10⁵ m³/s day but shows a significant decreasing trend for flood volumes larger than 1.7 × 10⁵ m³/s day; and (ii) the bivariate risk for flood peak-duration does not change significantly for flood durations less than 8 days and then decreases significantly as the duration becomes larger. The probability density functions (pdfs) of flood volume and duration conditional on flood peak can also be generated through the fitted copulas. The results indicate that the conditional pdfs of flood volume and duration follow bimodal distributions, with the occurrence frequency of the first mode decreasing and that of the second increasing as the flood peak increases. The conclusions obtained from the bivariate hydrologic analysis can provide decision support for flood control and mitigation.
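
    A minimal sketch of the coupling, assuming sklearn and a fixed Gumbel copula parameter: Gaussian-mixture marginal CDFs feed a copula, and the joint "AND" exceedance probability gives the return period. The data and θ below are illustrative, not the Yichang record.

        # GMM marginals + Gumbel copula -> joint "AND" return period.
        import numpy as np
        from scipy.stats import norm
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(5)
        peak = rng.gamma(8.0, 5000.0, 300)             # annual flood peaks (synthetic)
        vol = 0.7 * peak + rng.gamma(4.0, 4000.0, 300)

        def gmm_cdf(x, data, k=3):
            g = GaussianMixture(k, random_state=0).fit(data.reshape(-1, 1))
            w, mu = g.weights_, g.means_.ravel()
            sd = np.sqrt(g.covariances_.ravel())
            return np.sum(w * norm.cdf((x - mu) / sd))

        def gumbel_copula(u, v, theta=2.0):
            return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta)
                            ** (1.0 / theta)))

        u, v = gmm_cdf(60000, peak), gmm_cdf(70000, vol)
        p_and = 1.0 - u - v + gumbel_copula(u, v)      # P(peak > x AND vol > y)
        print(f"joint return period ~ {1.0 / p_and:.1f} years")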

  6. Mapping and monitoring changes in vegetation communities of Jasper Ridge, CA, using spectral fractions derived from AVIRIS images

    NASA Technical Reports Server (NTRS)

    Sabol, Donald E., Jr.; Roberts, Dar A.; Adams, John B.; Smith, Milton O.

    1993-01-01

    An important application of remote sensing is to map and monitor changes over large areas of the land surface. This is particularly significant given the current interest in monitoring vegetation communities. Most traditional methods for mapping different types of plant communities are based upon statistical classification techniques (i.e., parallelepiped, nearest-neighbor, etc.) applied to uncalibrated multispectral data. Classes from these techniques are typically difficult to interpret (particularly to a field ecologist/botanist). Also, classes derived for one image can be very different from those derived from another image of the same area, making interpretation of observed temporal changes nearly impossible. More recently, neural networks have been applied to classification. Neural network classification, based upon spectral matching, is weak in dealing with spectral mixtures (a condition prevalent in images of natural surfaces). Another approach to mapping vegetation communities is based on spectral mixture analysis, which can provide a consistent framework for image interpretation. Roberts et al. (1990) mapped vegetation using the band residuals from a simple mixing model (the same spectral endmembers applied to all image pixels). Sabol et al. (1992b) and Roberts et al. (1992) used different methods to apply the most appropriate spectral endmembers to each image pixel, thereby allowing mapping of vegetation based upon the different endmember spectra. In this paper, we describe a new approach to classification of vegetation communities based upon the spectral fractions derived from spectral mixture analysis. This approach was applied to three 1992 AVIRIS images of Jasper Ridge, California, to observe seasonal changes in surface composition.

  7. Palladium-catalyzed hydrodehalogenation of 1,2,4,5-tetrachlorobenzene in water-ethanol mixtures.

    PubMed

    Wee, Hun-Young; Cunningham, Jeffrey A

    2008-06-30

    Palladium-catalyzed hydrodehalogenation (HDH) was applied to destroy 1,2,4,5-tetrachlorobenzene (TeCB) in mixtures of water and ethanol. This investigation was performed as a critical step in the development of a new technology for the clean-up of soil contaminated by halogenated hydrophobic organic contaminants. The main goals of the investigation were to demonstrate the feasibility of the technology, to determine the effect of the solvent composition (water:ethanol ratio), and to develop a model for the kinetics of the dehalogenation process. All experiments were conducted in a batch reactor at ambient temperature under mild hydrogen pressure. The experimental results are all consistent with a Langmuir-Hinshelwood model for heterogeneous catalysis. Major findings that can be interpreted within the Langmuir-Hinshelwood framework include: (1) the rate of hydrodehalogenation depends strongly on the solvent composition, increasing as the water fraction of the solvent increases; (2) the HDH rate increases as the catalyst concentration in the reactor increases; (3) when enough catalyst is present, the HDH reaction appears to follow first-order kinetics, but the kinetics appear to be zero-order at low catalyst concentrations. TeCB is converted rapidly and quantitatively to benzene, with only trace concentrations of 1,2,4-trichlorobenzene appearing as a reactive intermediate. The results obtained here have important implications for the further development of the proposed soil remediation technology and may also be important for the treatment of other hazardous waste streams.
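
    A minimal batch-reactor sketch of Langmuir-Hinshelwood kinetics may help; all constants below are invented, not the study's fitted values. The standard rate law reproduces the two limiting orders: first-order when adsorption is weak (K·C << 1) and zero-order when the surface saturates (K·C >> 1).

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k_surf = 2.0    # surface reaction rate constant, per g catalyst per hour (assumed)
    K_ads = 50.0    # adsorption equilibrium constant, L/mmol (assumed)
    c_cat = 0.1     # catalyst loading, g/L (assumed)

    def lh_rate(t, C):
        # dC/dt = -k * c_cat * K*C / (1 + K*C): first order for K*C << 1,
        # zero order for K*C >> 1 (saturated surface).
        return -k_surf * c_cat * K_ads * C / (1.0 + K_ads * C)

    sol = solve_ivp(lh_rate, (0.0, 10.0), [0.05], max_step=0.1)
    print("TeCB remaining after 10 h (mM):", float(sol.y[0, -1]))
    ```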

  8. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    NASA Astrophysics Data System (ADS)

    Sardet, Laure; Patilea, Valentin

    When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull, and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem well-adapted to capture the skewness, the long tails, and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically with two or three components. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.
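
    A minimal sketch of the transform-smooth-back-transform idea on simulated claims: a two-component mixture fitted on the log scale, the mixture CDF mapping data to the unit interval, a Chen-type beta-kernel estimate there, and the chain rule to return to the claims scale. The bandwidth is an arbitrary illustrative value, not the paper's rule.

    ```python
    import numpy as np
    from scipy import stats
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    claims = np.exp(rng.normal(7.0, 1.0, 500))        # hypothetical claim amounts

    # Parsimonious two-component mixture fitted on the log scale.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(np.log(claims)[:, None])
    w, mu = gmm.weights_, gmm.means_.ravel()
    sd = np.sqrt(gmm.covariances_.ravel())

    def mix_cdf(x):
        return np.sum(w * stats.norm.cdf((np.log(x)[:, None] - mu) / sd), axis=1)

    def mix_pdf(x):
        comp = stats.norm.pdf((np.log(x)[:, None] - mu) / sd) / (sd * x[:, None])
        return np.sum(w * comp, axis=1)

    u = mix_cdf(claims)            # data mapped to the unit interval
    h = 0.05                       # illustrative bandwidth (not the paper's rule)

    def beta_kernel_density(t):
        # Chen-type beta-kernel estimate at point t in (0, 1).
        return np.mean(stats.beta.pdf(u, t / h + 1.0, (1.0 - t) / h + 1.0))

    x0 = np.array([2000.0])
    f_unit = beta_kernel_density(mix_cdf(x0)[0])
    f_claims = f_unit * mix_pdf(x0)[0]     # back-transform via the chain rule
    print("fine-tuned density estimate at 2000:", f_claims)
    ```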

  9. Finite mixture modeling for vehicle crash data with application to hotspot identification.

    PubMed

    Park, Byung-Jung; Lord, Dominique; Lee, Chungwon

    2014-10-01

    The application of finite mixture regression models has recently gained interest among highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations of a sample arise from two or more unobserved components with unknown proportions. Both fixed and varying weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study, using observed and simulated data, investigated the relative performance of the finite mixture model and the traditional negative binomial (NB) model in terms of hotspot identification. For the observed data, rural multilane segment crash data for divided highways in California and Texas were used. The results showed that the difference, measured by the percentage deviation in ranking orders, was relatively small for this dataset. Nevertheless, the ranking results from the finite mixture model were considered more reliable than those from the NB model because of the better model specification. This finding was also supported by the simulation study, which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding an optimal threshold value for identifying hotspots, another simulation analysis indicated that the false discovery and false negative rates move in opposite directions as the threshold changes. Since the costs associated with false positives and false negatives differ, it is suggested that the optimal threshold value be chosen by considering the trade-off between these two costs so that unnecessary expenses are minimized. Copyright © 2014 Elsevier Ltd. All rights reserved.
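
    The sketch below shows the core mechanic on simulated counts: a two-component Poisson mixture fitted by EM, with sites ranked by their posterior probability of belonging to the high-mean component. The Poisson components are a simple stand-in for the paper's finite mixture of negative binomial regression models.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    z = rng.random(300) < 0.3
    y = np.where(z, rng.poisson(9.0, 300), rng.poisson(2.0, 300))  # crash counts

    w, lam = 0.5, np.array([1.0, 5.0])          # initial weight and component rates
    for _ in range(200):                        # EM iterations
        # E-step: responsibility of the high-mean component for each site.
        p1 = (1 - w) * stats.poisson.pmf(y, lam[0])
        p2 = w * stats.poisson.pmf(y, lam[1])
        r = p2 / (p1 + p2)
        # M-step: update the mixing weight and the component means.
        w = r.mean()
        lam = np.array([np.sum((1 - r) * y) / np.sum(1 - r),
                        np.sum(r * y) / np.sum(r)])

    hotspots = np.argsort(-r)[:10]   # sites most likely in the high-risk component
    print("weights:", round(1 - w, 3), round(w, 3), "rates:", lam.round(2))
    print("top candidate hotspots:", hotspots)
    ```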

  10. Mathematical Model of Nonstationary Separation Processes Proceeding in the Cascade of Gas Centrifuges in the Process of Separation of Multicomponent Isotope Mixtures

    NASA Astrophysics Data System (ADS)

    Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.

    2017-03-01

    We have developed and implemented in software a mathematical model of the nonstationary separation processes occurring in cascades of gas centrifuges during the separation of multicomponent isotope mixtures. Using this model, the parameters of the separation of germanium isotopes were calculated. It was shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters in the separation of multicomponent isotope mixtures.
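
    Purely as a toy illustration of what a nonstationary multicomponent cascade calculation involves (the abstract does not disclose the paper's actual equations), the sketch below time-steps component inventories through a small symmetric cascade. The stage count, cut, per-component separation factors, and feed are all invented.

    ```python
    import numpy as np

    S, theta, phi = 5, 0.5, 0.2                 # stages, cut, fraction processed per step
    alpha = np.array([1.3, 1.0, 0.8])           # assumed per-component separation factors
    feed = np.array([0.2, 0.5, 0.3])            # assumed feed composition
    hold = np.tile(feed, (S, 1))                # stage inventories, initialized to feed

    for step in range(5000):
        x = hold / hold.sum(axis=1, keepdims=True)         # stage compositions
        processed = phi * hold                             # material sent through each stage
        y = alpha * x
        y /= y.sum(axis=1, keepdims=True)                  # enriched (heads) composition
        heads = theta * processed.sum(axis=1, keepdims=True) * y
        tails = processed - heads                          # depleted stream by mass balance
        hold -= processed
        hold[1:] += heads[:-1]                             # heads flow up-cascade
        hold[:-1] += tails[1:]                             # tails flow down-cascade
        hold[S // 2] += 0.01 * feed                        # external feed each step
        # heads[-1] leaves as product, tails[0] as waste.

    print("quasi-steady product composition:", (heads[-1] / heads[-1].sum()).round(3))
    ```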

  11. Closed-form solutions in stress-driven two-phase integral elasticity for bending of functionally graded nano-beams

    NASA Astrophysics Data System (ADS)

    Barretta, Raffaele; Fabbrocino, Francesco; Luciano, Raimondo; Sciarra, Francesco Marotti de

    2018-03-01

    Strain-driven and stress-driven integral elasticity models are formulated for the analysis of the structural behaviour of functionally graded nano-beams. An innovative stress-driven two-phase constitutive mixture, defined by a convex combination of local and nonlocal phases, is presented. The analysis reveals that the Eringen strain-driven fully nonlocal model cannot be used in structural mechanics since it is ill-posed, and that local-nonlocal mixtures based on the Eringen integral model only partially resolve this ill-posedness. In fact, a singular behaviour of continuous nano-structures appears as the local fraction tends to vanish, so the ill-posedness of the Eringen integral model is not eliminated. On the contrary, local-nonlocal mixtures based on the stress-driven theory are mathematically and mechanically appropriate for nanosystems. Exact solutions for inflected functionally graded nanobeams of technical interest are established by adopting the new local-nonlocal mixture stress-driven integral relation. The effectiveness of the new nonlocal approach is tested by comparing the contributed results with those corresponding to the mixture Eringen theory.
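
    For reference, the convex combination the abstract describes can be written schematically as below. This is a sketch of the general stress-driven two-phase form, where m denotes the local volume fraction and L_c the characteristic length, with the customary bi-exponential Helmholtz kernel; it is not a quotation of the paper's exact statement for functionally graded cross-sections.

    ```latex
    % Schematic two-phase stress-driven integral law (sketch, not the paper's
    % exact statement): a local fraction m plus a nonlocal convolution of the
    % elastic strain with a bi-exponential kernel of length scale L_c.
    \varepsilon(x) = m\,\frac{\sigma(x)}{E(x)}
      + (1-m)\int_{0}^{L} \phi_{L_c}(x-\xi)\,\frac{\sigma(\xi)}{E(\xi)}\,d\xi,
    \qquad
    \phi_{L_c}(x) = \frac{1}{2L_c}\, e^{-\lvert x\rvert / L_c}.
    ```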

  12. A modified procedure for mixture-model clustering of regional geochemical data

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David B.; Horton, John D.

    2014-01-01

    A modified procedure is proposed for mixture-model clustering of regional-scale geochemical data. The key modification is the robust principal component transformation of the isometric log-ratio transforms of the element concentrations. This principal component transformation and the associated dimension reduction are applied before the data are clustered. The principal advantage of this modification is that it significantly improves the stability of the clustering. The principal disadvantage is that it requires subjective selection of the number of clusters and the number of principal components. To evaluate the efficacy of this modified procedure, it is applied to soil geochemical data that comprise 959 samples from the state of Colorado (USA) for which the concentrations of 44 elements are measured. The distributions of element concentrations that are derived from the mixture model and from the field samples are similar, indicating that the mixture model is a suitable representation of the transformed geochemical data. Each cluster and the associated distributions of the element concentrations are related to specific geologic and anthropogenic features. In this way, mixture model clustering facilitates interpretation of the regional geochemical data.
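
    A minimal sketch of the modified workflow on simulated compositions: isometric log-ratio coordinates built from a Helmert contrast basis, ordinary PCA standing in for the robust principal component transformation, then Gaussian mixture clustering.

    ```python
    import numpy as np
    from scipy.linalg import helmert
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)
    D = 6   # number of "elements" in the toy compositions
    raw = np.vstack([rng.lognormal(m, 0.3, (200, D))
                     for m in ([0, 1, 0, 2, 1, 0], [1, 0, 2, 0, 0, 1])])
    comp = raw / raw.sum(axis=1, keepdims=True)       # close to compositions

    # clr, then ilr via an orthonormal contrast basis (Helmert submatrix).
    clr = np.log(comp) - np.log(comp).mean(axis=1, keepdims=True)
    V = helmert(D, full=False)                        # (D-1, D) orthonormal contrasts
    ilr = clr @ V.T                                   # ilr coordinates

    scores = PCA(n_components=3).fit_transform(ilr)   # dimension reduction
    labels = GaussianMixture(n_components=2, random_state=0).fit_predict(scores)
    print("cluster sizes:", np.bincount(labels))
    ```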

  13. [The possibility of using the synthetic compound for the purpose of modeling of the human soft tissues in connection with the evaluation of gunshot damages].

    PubMed

    Latyshov, I V; Vasil'ev, V A; Zaporotskova, I V; Ermakova, T A

    The necessity of using a simulator of human soft tissues for criminalistic and forensic medical examinations is dictated by the requirements of expert practice. The objective of the present study was to develop a synthetic simulator of human soft tissues (a compound) to ensure the reliability of comparative criminalistic and forensic medical studies for the evaluation of gunshot injuries. The synthetic compound was prepared by mixing petroleum and/or synthetic oil with a polymeric thickening agent. This was followed by heating the mixture at 90 °C for 5 hours. Thereafter, petrolatum and/or ceresin and/or paraffin were added to the mixture. At the final stage, ionol was introduced, and the mixture was poured into a mold measuring 70×70×210 mm and cooled to 40 °C over 10-12 hours. Experimental shooting was carried out with a Kalashnikov AKS-74U assault rifle using 5.45×39 mm (7H6) cartridges, a Makarov pistol using 9×18 mm cartridges, and a Nagant revolver using CHELP-1000 cartridges. Five shots were fired from each of the three weapons. The experimental gunshot damage was evaluated visually by examining the inlet and exit openings and the bullet channel. In addition, a criminalistic analysis of the grooves on the cartridges was carried out. The technology for the fabrication of synthetic compounds based on ethylene, propylene, and butadiene co-polymers, in combination with such low-molecular-weight compounds as paraffins and ceresins having a homogeneous structure, makes it possible to vary the rheological and mechanical properties of the simulators of human soft tissues for the solution of diagnostic and identification problems in the framework of criminalistic and forensic medical examinations.

  14. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL), and Hydrochlorothiazide (HCT) in a ternary mixture, namely, Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied were PLS-1, GA-PLS, ANN, GA-ANN, and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in a pharmaceutical dosage form by processing the UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was further assessed using the standard addition technique.
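
    A minimal sketch of the calibration/validation split on synthetic UV-like spectra; the pure-component bands and concentration ranges are invented, and the single multi-response fit shown is a PLS-2-style shortcut, whereas PLS-1 fits one analyte at a time.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    wl = np.linspace(220, 320, 101)           # wavelengths (nm)

    def band(center, width):                  # Gaussian absorbance band
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    pure = np.stack([band(240, 12), band(265, 15), band(290, 10)])  # 3 components
    C = rng.uniform(2, 10, size=(25, 3))      # 25 design mixtures, 3 concentrations
    A = C @ pure + rng.normal(0, 0.002, (25, len(wl)))  # Beer-Lambert + noise

    pls = PLSRegression(n_components=3).fit(A[:15], C[:15])  # 15-sample calibration
    pred = pls.predict(A[15:])                                # 10-sample validation
    rmsep = np.sqrt(np.mean((pred - C[15:]) ** 2, axis=0))
    print("RMSEP per component:", rmsep.round(3))
    ```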

  15. Flash-point prediction for binary partially miscible mixtures of flammable solvents.

    PubMed

    Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng

    2008-05-30

    Flash point is the most important variable used to characterize the fire and explosion hazard of liquids. Herein, partially miscible mixtures are considered within the context of liquid-liquid extraction processes. This paper describes the development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm its predictive efficacy, the model was verified by comparing the predicted values with experimental data for the studied mixtures: methanol+octane, methanol+decane, acetone+decane, methanol+2,2,4-trimethylpentane, and ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in flash-point prediction. Overall, the predictions of the proposed model describe the experimental data well. Based on this evidence, it appears reasonable to suggest potential applications for our model in the assessment of fire and explosion hazards and in the development of inherently safer designs for chemical processes involving binary partially miscible mixtures of flammable solvents.
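
    A minimal sketch of a Le Chatelier/Liaw-type flash-point calculation for methanol+octane, idealized to unit activity coefficients (the paper's model handles the nonideal, partially miscible case). The Antoine constants and pure-component flash points are approximate literature values used only for illustration.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def antoine(A, B, C):
        return lambda T: 10.0 ** (A - B / (C + T))   # T in deg C, P in mmHg

    P_meoh = antoine(8.081, 1582.27, 239.73)   # methanol (approximate constants)
    P_oct = antoine(6.919, 1351.99, 209.15)    # n-octane (approximate constants)
    Tfp_meoh, Tfp_oct = 11.0, 13.0             # approximate pure flash points (deg C)

    def le_chatelier(T, x_meoh):
        # Mixture flash point: the ideal Le Chatelier sum equals 1 at T_fp.
        x_oct = 1.0 - x_meoh
        return (x_meoh * P_meoh(T) / P_meoh(Tfp_meoh)
                + x_oct * P_oct(T) / P_oct(Tfp_oct)) - 1.0

    for x in (0.2, 0.5, 0.8):
        T_fp = brentq(le_chatelier, -50.0, 150.0, args=(x,))
        print(f"x_methanol = {x:.1f}: predicted flash point ~ {T_fp:.1f} C")
    ```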

  16. Nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates of typical desert vegetation in western China.

    PubMed

    Ji, Cuicui; Jia, Yonghong; Gao, Zhihai; Wei, Huaidong; Li, Xiaosong

    2017-01-01

    Desert vegetation plays a significant role in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects on photosynthetic/non-photosynthetic vegetation cover estimates are investigated by comparing the performance of linear and nonlinear spectral mixture models with different endmembers, applied to field spectral measurements of two types of typical desert vegetation, namely, Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates; in particular, shadow endmembers cannot be neglected. (2) For both the Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. Considering computational complexity and accuracy requirements, the Linear Spectral Mixture Model (LSMM) could be adopted for the Nitraria shrub plots, but it results in significant errors for the Haloxylon plots, since the nonlinear spectral mixture effects were more pronounced for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Thus, for both Nitraria shrubs and Haloxylon, nonlinear spectral mixing effects between the photosynthetic/non-photosynthetic vegetation and the bare soil do exist, and their strength depends on the three-dimensional structure of the vegetation canopy. The choice between linear and nonlinear spectral mixture models should therefore balance computational complexity against accuracy requirements.
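
    To make the linear-versus-nonlinear contrast concrete, the sketch below generates a pixel with a bilinear (multiple-scattering) term and shows the bias incurred by inverting it with a purely linear model. The five-band spectra and the interaction coefficient are invented, and the bilinear term is only a simple stand-in for KNSMM's kernel-based interactions.

    ```python
    import numpy as np

    veg = np.array([0.05, 0.04, 0.45, 0.50, 0.30])    # photosynthetic vegetation (toy)
    npv = np.array([0.15, 0.20, 0.25, 0.30, 0.35])    # non-photosynthetic vegetation (toy)
    soil = np.array([0.20, 0.25, 0.30, 0.35, 0.40])   # bare soil (toy)

    f = np.array([0.4, 0.3, 0.3])                     # true cover fractions
    E = np.stack([veg, npv, soil], axis=1)

    linear = E @ f
    gamma = 0.5                                       # multiple-scattering strength
    bilinear = (linear
                + gamma * f[0] * f[2] * (veg * soil)  # veg-soil interaction term
                + gamma * f[1] * f[2] * (npv * soil)) # npv-soil interaction term

    # Inverting the nonlinear pixel with a purely linear model biases fractions.
    f_hat, *_ = np.linalg.lstsq(E, bilinear, rcond=None)
    print("true fractions:", f, "linear-model estimate:", f_hat.round(3))
    ```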

  18. Evaluation of B&W UO2/ThO2 VIII experimental core: criticality and thermal disadvantage factor analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlo Parisi; Emanuele Negrenti

    2017-02-01

    In the framework of the OECD/NEA International Reactor Physics Experiment (IRPhE) Project, an evaluation of core VIII of the Babcock & Wilcox (B&W) Spectral Shift Control Reactor (SSCR) critical experiment program was performed. The SSCR concept, moderated and cooled by a variable mixture of heavy and light water, envisaged shifting the thermal neutron spectrum during operation to encourage breeding and to sustain core criticality. Core VIII contained 2188 fuel rods with 93% enriched UO2-ThO2 fuel in a moderator mixture of heavy and light water. The criticality experiment and the measurements of the thermal disadvantage factor were evaluated.

  19. Stability of faults with heterogeneous friction properties and effective normal stress

    NASA Astrophysics Data System (ADS)

    Luo, Yingdi; Ampuero, Jean-Paul

    2018-05-01

    Abundant geological, seismological and experimental evidence of the heterogeneous structure of natural faults motivates the theoretical and computational study of the mechanical behavior of heterogeneous frictional fault interfaces. Fault zones are composed of a mixture of materials with contrasting strength, which may affect the spatial variability of seismic coupling, the location of high-frequency radiation and the diversity of slip behavior observed in natural faults. To develop a quantitative understanding of the effect of strength heterogeneity on the mechanical behavior of faults, here we investigate a fault model with spatially variable frictional properties and pore pressure. Conceptually, this model may correspond to two rough surfaces in contact along discrete asperities, the space in between being filled by compressed gouge. The asperities have different permeability than the gouge matrix and may be hydraulically sealed, resulting in different pore pressure. We consider faults governed by rate-and-state friction, with mixtures of velocity-weakening and velocity-strengthening materials and contrasts of effective normal stress. We systematically study the diversity of slip behaviors generated by this model through multi-cycle simulations and linear stability analysis. The fault can be either stable without spontaneous slip transients, or unstable with spontaneous rupture. When the fault is unstable, slip can rupture either part or the entire fault. In some cases the fault alternates between these behaviors throughout multiple cycles. We determine how the fault behavior is controlled by the proportion of velocity-weakening and velocity-strengthening materials, their relative strength and other frictional properties. We also develop, through heuristic approximations, closed-form equations to predict the stability of slip on heterogeneous faults. Our study shows that a fault model with heterogeneous materials and pore pressure contrasts is a viable framework to reproduce the full spectrum of fault behaviors observed in natural faults: from fast earthquakes, to slow transients, to stable sliding. In particular, this model constitutes a building block for models of episodic tremor and slow slip events.
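
    A minimal sketch of the linear stability reasoning described above: a velocity-weakening patch sliding at steady state becomes unstable when the loading stiffness drops below the classic critical value k_c = sigma_eff*(b - a)/D_c (aging law). All numbers are illustrative; the sealed-asperity row shows how a pore-pressure contrast (lower effective normal stress) can stabilize an otherwise unstable patch.

    ```python
    Dc = 1e-4                                  # characteristic slip distance (m)
    patches = {
        "VW asperity, sealed (high p)": dict(a=0.010, b=0.015, sigma_eff=20e6),
        "VW asperity, drained":         dict(a=0.010, b=0.015, sigma_eff=50e6),
        "VS gouge matrix":              dict(a=0.015, b=0.010, sigma_eff=50e6),
    }
    k = 2.0e9                                  # stiffness of surrounding medium (Pa/m)

    for name, p in patches.items():
        kc = p["sigma_eff"] * (p["b"] - p["a"]) / Dc   # critical stiffness
        if kc <= 0:
            verdict = "unconditionally stable (velocity strengthening)"
        elif k < kc:
            verdict = "unstable (spontaneous rupture possible)"
        else:
            verdict = "conditionally stable"
        print(f"{name}: k_c = {kc:.2e} Pa/m -> {verdict}")
    ```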

  20. Risk Assessment in the 21st Century - Conference Abstract ...

    EPA Pesticide Factsheets

    For the past ~50 years, risk assessment has depended almost exclusively on animal testing for hazard identification and dose-response assessment. Although originally sound and effective, this traditional approach is no longer sufficient given the increasing reliance on chemical tools and the growing number of chemicals in commerce. This presentation provides an update on current progress toward the goals outlined in the NAS reports “Toxicology Testing in the 21st Century”, “Exposure Science in the 21st Century”, and, most recently, “Using 21st Century Science to Improve Risk-Related Evaluations”. The presentation highlights many of the advances led by the EPA. Topics covered include the evolution of the mode-of-action concept into the chemically agnostic adverse outcome pathway (AOP), a systems-based data framework that facilitates the integration of modifiable factors (e.g., genetic variation, life stages), networks, and mixtures. Further, the EDSP pivot is used to illustrate how AOPs drive the development of predictive models for risk assessment based on the assembly of high-throughput assays representing AOP key elements. The birth of computational exposure science, capable of large-scale predictive exposure models, is reviewed. Although still in its infancy, the development of non-targeted analysis to begin addressing the exposome is presented, as is the systems-based AEP, which integrates exposure, toxicokinetics, and AOPs into a comprehensive framework.
