Evaluating Mixture Modeling for Clustering: Recommendations and Cautions
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2011-01-01
This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magidson,…
Mixture Modeling: Applications in Educational Psychology
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Hodis, Flaviu A.
2016-01-01
Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index. PMID:28686634
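As a generic illustration of the Gaussian mixture clustering step at the core of such algorithms (scikit-learn shown; this is not the authors' scan-and-select procedure, and the data are synthetic):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated 2-D "local feature" clusters standing in for image data
data = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2)),
    rng.normal(loc=[5.0, 5.0], scale=0.3, size=(100, 2)),
])

# Fit a two-component Gaussian mixture by EM and assign cluster labels
gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
labels = gmm.predict(data)
# Maximum-likelihood estimates of the component parameters
means = gmm.means_
```

Changing `n_components` is the "domain adaptation" knob the abstract refers to: the number of mixture components is grown when novel local features are detected.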
Mixture of autoregressive modeling orders and its implication on single trial EEG classification
Atyabi, Adham; Shic, Frederick; Naples, Adam
2016-01-01
Autoregressive (AR) models are among the most commonly utilized feature types in Electroencephalogram (EEG) studies because they offer better resolution, smoother spectra, and applicability to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders represent the signal poorly, while higher orders increase noise. Conventional methods for estimating the modeling order include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond to the operator's thoughts more quickly and accurately. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized to identify such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional methods utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC, and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3, and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
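The conventional order-selection baseline the article compares against can be sketched as an OLS fit of each candidate AR(p) followed by an AIC comparison (a minimal sketch; the simulated AR(2) process and candidate range are illustrative):

```python
import numpy as np

def ar_rss(x, p):
    """OLS fit of an AR(p) model; returns (residual sum of squares, n)."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ coef) ** 2), len(y)

def aic(x, p):
    """AIC for a Gaussian AR(p): n*log(RSS/n) + 2p (constants dropped)."""
    rss, n = ar_rss(x, p)
    return n * np.log(rss / n) + 2 * p

rng = np.random.default_rng(1)
# Simulate an AR(2) process: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

# Pick the order with the lowest AIC from a candidate range
best_order = min(range(1, 11), key=lambda p: aic(x, p))
```

The article's point is that instead of committing to `best_order`, a weighted mixture of features computed at several orders can represent the signal better.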
Poisson Mixture Regression Models for Heart Disease Prediction.
Mufudza, Chipo; Erol, Hamza
2016-01-01
Early heart disease control can be achieved by highly efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into a high- or low-risk category and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611
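The clustering idea can be sketched with a plain two-component Poisson mixture fitted by EM (a deliberate simplification: no regression covariates, and the "low-risk" rate 2 and "high-risk" rate 10 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic counts: a low-risk (rate 2) and a high-risk (rate 10) subpopulation
y = np.concatenate([rng.poisson(2, 300), rng.poisson(10, 200)])

# Table of log(y!) so the Poisson log-pmf is a pure array expression
logfact = np.concatenate([[0.0], np.cumsum(np.log(np.arange(1, y.max() + 1)))])

def log_pmf(counts, lam):
    return counts * np.log(lam) - lam - logfact[counts]

w = np.array([0.5, 0.5])          # mixing weights
lam = np.array([1.0, 5.0])        # initial component rates
for _ in range(200):
    # E-step: posterior responsibility of each component for each count
    logp = np.log(w) + np.stack([log_pmf(y, l) for l in lam], axis=1)
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights and rates from the responsibilities
    w = r.mean(axis=0)
    lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
```

The full models in the paper replace the constant rates `lam` with regression functions of covariates (and, in the concomitant-variable class, let the weights `w` depend on covariates too).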
Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.
Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong
2018-03-01
Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies have observed notable benefits from including spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research comparing different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity based on different temporal treatments: (I) a linear time trend; (II) a quadratic time trend; (III) autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility than the traditional linear space-time interaction. The mixture component accommodates the global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation, and evaluation on in-sample data for the predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was evidently more complex owing to the information borrowed from neighboring years; this addition of parameters yielded a significant advantage in posterior deviance, which subsequently benefited the overall fit to the crash data. Base models were also developed to compare the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models owing to much lower deviance.
For cross-validation comparison of predictive accuracy, the linear time trend model was judged the best, as it recorded the highest value of log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data as for model development. Under each criterion, observed crash counts were compared with three types of estimates: Bayesian estimated, normally predicted, and model-replicated values. The linear model again performed best in most scenarios, except one case using model-replicated data and two cases involving prediction without random effects. These results indicate the mediocre performance of the linear trend when random effects are excluded from the evaluation, likely because the flexible mixture space-time interaction efficiently absorbs residual variability that escapes the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as they generated more precise estimated crash counts across all four models, suggesting that the advantages of the mixture component at model fit transfer to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of the random effect models, validating the importance of incorporating correlation structures to account for unobserved heterogeneity.
Estimating and modeling the cure fraction in population-based cancer survival analysis.
Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W
2007-07-01
In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are 2 main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the 2 types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
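The two model types mentioned have standard forms. Writing π for the cure fraction, S_u(t) for the survival of the uncured, and F(t) for a proper distribution function, the survival functions are:

```latex
% Mixture cure fraction model: a fraction \pi is cured, the rest follow S_u(t)
S(t) = \pi + (1 - \pi)\, S_u(t)
% Non-mixture cure fraction model: S(t) \to \pi as t \to \infty
S(t) = \pi^{F(t)}
```

In the population-based extension the paper develops, these expressions model relative survival and are multiplied by the expected survival of the general population to incorporate background mortality.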
Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity
NASA Astrophysics Data System (ADS)
Chen, Hsieh; Panagiotopoulos, Athanassios Z.
2018-01-01
We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.
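The abstract does not give the combining rule explicitly; a mole-fraction-weighted rule of the following shape is one plausible reading (an assumption for illustration only, not the paper's stated rule):

```python
def mixture_permittivity(mole_fractions, pure_permittivities):
    """Mole-fraction-weighted dielectric permittivity of an electrolyte
    mixture, from permittivities measured for the pure electrolytes.
    Illustrative combining rule only; the paper's exact rule may differ."""
    assert abs(sum(mole_fractions) - 1.0) < 1e-9
    return sum(x * eps for x, eps in zip(mole_fractions, pure_permittivities))

# e.g. a NaCl-CaCl2 mixture, 70/30 by electrolyte mole fraction
# (permittivity values are hypothetical placeholders)
eps_mix = mixture_permittivity([0.7, 0.3], [65.0, 55.0])
```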
Cluster kinetics model for mixtures of glassformers
NASA Astrophysics Data System (ADS)
Brenskelle, Lisa A.; McCoy, Benjamin J.
2007-10-01
For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and poly(epichlorohydrin) and poly(vinyl methyl ether) (three compositions) were studied.
Structure-reactivity modeling using mixture-based representation of chemical reactions.
Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre
2017-09-01
We describe a novel approach to reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating product and reactant descriptors or taking the difference between the descriptors of products and reactants. This reaction representation does not need explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimation of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
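The two encodings described (concatenation vs. difference of the mixture vectors) can be sketched as follows; summing per-molecule descriptor vectors to form each mixture vector is an assumption for illustration, not necessarily how SiRMS handles mixtures:

```python
import numpy as np

def reaction_vector(reactant_descs, product_descs, mode="difference"):
    """Encode a reaction from per-molecule descriptor vectors.
    The reactant (or product) mixture vector is the sum of its members'
    descriptors; the reaction vector is either the concatenation of the
    two mixture vectors or their difference (products minus reactants)."""
    r = np.sum(reactant_descs, axis=0)
    p = np.sum(product_descs, axis=0)
    if mode == "concat":
        return np.concatenate([r, p])
    return p - r

# Hypothetical 3-component descriptors for a two-reactant, one-product reaction
reactants = [np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 0.0])]
products = [np.array([1.0, 1.0, 1.0])]
v_diff = reaction_vector(reactants, products)
v_concat = reaction_vector(reactants, products, mode="concat")
```

Note that neither encoding requires knowing which atoms form the reaction center, which is the representation's selling point.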
Nagai, Takashi; De Schamphelaere, Karel A C
2016-11-01
The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-d fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.
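The two reference models compared here have standard closed forms (a generic sketch; the effect fractions and EC values below are illustrative, not data from the study):

```python
def independent_action(effects):
    """IA: combined effect of independently acting components,
    E_mix = 1 - prod(1 - E_i), with each E_i an effect fraction in [0, 1]."""
    prod = 1.0
    for e in effects:
        prod *= (1.0 - e)
    return 1.0 - prod

def concentration_addition_toxic_units(concs, ecx):
    """CA: sum of toxic units c_i / EC_x,i; the mixture is predicted to
    reach effect level x when the sum equals 1."""
    return sum(c / e for c, e in zip(concs, ecx))

effect_mix = independent_action([0.2, 0.3])                       # 1 - 0.8*0.7
tu = concentration_addition_toxic_units([5.0, 2.0], [10.0, 8.0])  # 0.5 + 0.25
```

Observed mixture toxicity falling short of the CA prediction is what makes CA "conservative" in the abstract's sense.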
Solubility modeling of refrigerant/lubricant mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michels, H.H.; Sienel, T.H.
1996-12-31
A general model for predicting the solubility properties of refrigerant/lubricant mixtures has been developed based on applicable theory for the excess Gibbs energy of non-ideal solutions. In our approach, flexible thermodynamic forms are chosen to describe the properties of both the gas and liquid phases of refrigerant/lubricant mixtures. After an extensive study of models for describing non-ideal liquid effects, the Wohl-suffix equations, which have been extensively utilized in the analysis of hydrocarbon mixtures, have been developed into a general form applicable to mixtures where one component is a POE lubricant. In the present study we have analyzed several POEs where structural and thermophysical property data were available. Data were also collected from several sources on the solubility of refrigerant/lubricant binary pairs. We have developed a computer code (NISC), based on the Wohl model, that predicts dew point or bubble point conditions over a wide range of composition and temperature. Our present analysis covers mixtures containing up to three refrigerant molecules and one lubricant. The present code can be used to analyze the properties of R-410a and R-407c in mixtures with a POE lubricant. Comparisons with other models, such as the Wilson or modified Wilson equations, indicate that the Wohl-suffix equations yield more reliable predictions for HFC/POE mixtures.
An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models
ERIC Educational Resources Information Center
Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol
2016-01-01
The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…
Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.
Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten
2017-10-01
Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
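For intuition, the vMF density on the unit sphere in R³ has a simple closed form (3-D case only; standardized fMRI time series live on a much higher-dimensional hypersphere, where the normalizing constant involves Bessel functions):

```python
import numpy as np

def vmf_logpdf_3d(x, mu, kappa):
    """Log density of the von Mises-Fisher distribution on the unit
    sphere S^2: f(x) = kappa / (4 pi sinh(kappa)) * exp(kappa * mu.x),
    with mu a unit mean direction and kappa > 0 a concentration."""
    log_norm = np.log(kappa) - np.log(4.0 * np.pi * np.sinh(kappa))
    return log_norm + kappa * np.dot(mu, x)

mu = np.array([0.0, 0.0, 1.0])
# Density is highest along the mean direction and lowest opposite it
lp_aligned = vmf_logpdf_3d(np.array([0.0, 0.0, 1.0]), mu, kappa=5.0)
lp_opposite = vmf_logpdf_3d(np.array([0.0, 0.0, -1.0]), mu, kappa=5.0)
```

A vMF mixture replaces the Gaussian components of a standard mixture model with densities of this form, which is the modeling assumption the letter argues fits hypersphere-constrained data better.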
A Mixture Rasch Model-Based Computerized Adaptive Test for Latent Class Identification
ERIC Educational Resources Information Center
Jiao, Hong; Macready, George; Liu, Junhui; Cho, Youngmi
2012-01-01
This study explored a computerized adaptive test delivery algorithm for latent class identification based on the mixture Rasch model. Four item selection methods based on the Kullback-Leibler (KL) information were proposed and compared with the reversed and the adaptive KL information under simulated testing conditions. When item separation was…
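For the two-class case, the KL information driving item selection compares an item's Bernoulli response distributions under the two latent classes (a generic Rasch sketch; the module's four proposed variants differ in how the ability points are chosen, which is not reproduced here):

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def kl_item(theta1, theta2, b):
    """KL divergence between the item's Bernoulli response distributions
    at two latent-class ability levels theta1 and theta2."""
    p, q = rasch_p(theta1, b), rasch_p(theta2, b)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# An item located between the two classes separates them well;
# an item far from both abilities carries little information
kl_between = kl_item(-1.0, 1.0, b=0.0)
kl_far = kl_item(-1.0, 1.0, b=5.0)
```

Selecting the item with maximal KL information is what lets the adaptive test discriminate between latent classes rather than between ability levels within a class.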
MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS
Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...
Moving target detection method based on improved Gaussian mixture model
NASA Astrophysics Data System (ADS)
Ma, J. Y.; Jie, F. R.; Hu, Y. J.
2017-07-01
The Gaussian mixture model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model. According to the gray-level convergence of each pixel, the number of Gaussian distributions used to learn and update the background model is chosen adaptively. A morphological reconstruction method is adopted to eliminate shadows. Experiments prove that the proposed method not only has good robustness and detection performance but also good adaptability; even in special cases such as large grayscale changes, the proposed method performs well.
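A minimal per-pixel running-Gaussian background model conveys the mechanism (a single-Gaussian simplification of the mixture model discussed above, with illustrative parameters, not the paper's improved algorithm):

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel background model: a pixel is foreground when the new
    frame deviates from the running mean by more than k std deviations."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 15.0 ** 2)  # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d = frame - self.mean
        fg = d ** 2 > (self.k ** 2) * self.var
        # Update the model only where the pixel is judged background
        a = np.where(fg, 0.0, self.alpha)
        self.mean += a * d
        self.var = (1 - a) * self.var + a * d ** 2
        return fg

bg = RunningGaussianBackground(np.zeros((4, 4)))
frame = np.zeros((4, 4))
frame[1, 1] = 200.0          # a "moving target" pixel
mask = bg.apply(frame)
```

The mixture-model versions keep several (mean, variance, weight) triples per pixel so that multimodal backgrounds (swaying trees, flickering lights) are not flagged as motion.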
Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C
2017-01-01
Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, we developed in the present study a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7d) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; Dissolved Organic Carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmental (i.e. realistic) metal concentration ratios. Statistical analysis of mixture effects revealed that observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) in 85% of the treatments with less than 20% error. Moreover, the MMBM predicted chronic toxicity of the ternary Ni-Zn-Pb mixture at least equally accurately as the toxicity of the individual metal treatments (RMSE Mix = 16; RMSE Zn only = 18; RMSE Ni only = 17; RMSE Pb only = 23). 
Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on metal mixture toxicity during chronic exposure and could be used in metal risk assessment frameworks.
Archambeau, Cédric; Verleysen, Michel
2007-01-01
A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.
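The robustness argument rests on the Student-t being a Gaussian scale mixture: conditioning each observation on a latent Gamma-distributed scale u gives

```latex
\mathcal{St}(x \mid \mu, \Lambda, \nu)
  = \int_0^{\infty} \mathcal{N}\!\bigl(x \mid \mu, (u\Lambda)^{-1}\bigr)\,
    \operatorname{Gam}\!\bigl(u \mid \tfrac{\nu}{2}, \tfrac{\nu}{2}\bigr)\, du ,
```

so small u inflates the covariance for outlying points. The contribution described above is to carry out the variational treatment without factorizing the posterior over the latent indicator variables and these latent scales.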
Beta Regression Finite Mixture Models of Polarization and Priming
ERIC Educational Resources Information Center
Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay
2011-01-01
This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…
Analyzing gene expression time-courses based on multi-resolution shape mixture model.
Li, Ying; He, Ye; Zhang, Yu
2016-11-01
Biological processes are dynamic molecular processes over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biological processes and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which explores patterns of change in gene expression over time at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework that offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm on yeast time-course gene expression profiles in comparison with several popular clustering methods for gene expression profiles. The gene groups identified by the different methods are evaluated by enrichment analysis of biological pathways and of known protein-protein interactions from experimental evidence. The gene groups identified by our proposed algorithm have stronger biological significance. In summary, a novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed; it provides new horizons and an alternative tool for the visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request.
Mesoscale Modeling of LX-17 Under Isentropic Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springer, H K; Willey, T M; Friedman, G
Mesoscale simulations of LX-17 incorporating different equilibrium mixture models were used to investigate the unreacted equation-of-state (UEOS) of TATB. Candidate TATB UEOS were calculated using the equilibrium mixture models and benchmarked with mesoscale simulations of isentropic compression experiments (ICE). X-ray computed tomography (XRCT) data provided the basis for initializing the simulations with realistic microstructural details. Three equilibrium mixture models were used in this study. The single constituent with conservation equations (SCCE) model was based on a mass-fraction weighted specific volume and the conservation of mass, momentum, and energy. The single constituent equation-of-state (SCEOS) model was based on a mass-fraction weighted specific volume and the equation-of-state of the constituents. The kinetic energy averaging (KEA) model was based on a mass-fraction weighted particle velocity mixture rule and the conservation equations. The SCEOS model yielded the stiffest TATB EOS (0.121μ + 0.4958μ² + 2.0473μ³) and, when incorporated in mesoscale simulations of the ICE, demonstrated the best agreement with VISAR velocity data for both specimen thicknesses. The SCCE model yielded a relatively more compliant EOS (0.1999μ - 0.6967μ² + 4.9546μ³) and the KEA model yielded the most compliant EOS (0.1999μ - 0.6967μ² + 4.9546μ³) of all the equilibrium mixture models. Mesoscale simulations with the lower density TATB adiabatic EOS data demonstrated the least agreement with VISAR velocity data.
Haddad, S; Tardif, R; Viau, C; Krishnan, K
1999-09-05
The biological hazard index (BHI) is defined as the tolerable biological level for exposure to a mixture, and is calculated by an equation similar to the conventional hazard index. At present, the BHI calculation is advocated for use in situations where toxicokinetic interactions do not occur among mixture constituents. The objective of this study was to develop an approach for calculating an interactions-based BHI for chemical mixtures. The approach consisted of simulating the concentration of the exposure indicator in the biological matrix of choice (e.g. venous blood) for each component of the mixture to which workers are exposed, and then comparing these to the established BEI values to calculate the BHI. The simulation of biomarker concentrations was performed using a physiologically-based toxicokinetic (PBTK) model which accounted for the mechanism of interactions among all mixture components (e.g. competitive inhibition). The usefulness of the present approach is illustrated by calculating the BHI for varying ambient concentrations of a mixture of three chemicals (toluene (5-40 ppm), m-xylene (10-50 ppm), and ethylbenzene (10-50 ppm)). The results show that the interactions-based BHI can be greater or smaller than that calculated on the basis of the additivity principle, particularly at high exposure concentrations. At lower exposure concentrations (e.g. 20 ppm each of toluene, m-xylene, and ethylbenzene), the BHI values obtained using the conventional methodology are similar to those from the interactions-based methodology, confirming that the consequences of competitive inhibition are negligible at lower concentrations. The advantage of the PBTK model-based methodology developed in this study is that the concentrations of individual chemicals in mixtures that will not result in a significant BHI (i.e. > 1) can be determined by iterative simulation.
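The index itself is a sum of biomarker-to-BEI ratios; in the interactions-based version the biomarker concentrations come from the PBTK simulation rather than from additivity assumptions (a sketch; the biomarker levels and BEI values below are hypothetical):

```python
def biological_hazard_index(biomarker_levels, bei_values):
    """BHI: sum of simulated (or measured) biomarker concentrations over
    their Biological Exposure Indices; BHI > 1 flags the mixture exposure."""
    return sum(c / bei for c, bei in zip(biomarker_levels, bei_values))

# Hypothetical biomarker levels and BEIs for toluene, m-xylene, ethylbenzene
bhi = biological_hazard_index([0.4, 1.2, 0.6], [1.0, 1.5, 1.0])
```

When competitive inhibition raises the simulated biomarker levels, the interactions-based BHI exceeds the additive one, which is the behavior the study reports at high exposure concentrations.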
Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J
2017-10-05
A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design.
Mixture theory-based poroelasticity as a model of interstitial tissue growth
Cowin, Stephen C.; Cardoso, Luis
2011-01-01
This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues. PMID:22184481
NASA Astrophysics Data System (ADS)
Barretta, Raffaele; Fabbrocino, Francesco; Luciano, Raimondo; Sciarra, Francesco Marotti de
2018-03-01
Strain-driven and stress-driven integral elasticity models are formulated for the analysis of the structural behaviour of functionally graded nano-beams. An innovative stress-driven two-phase constitutive mixture, defined by a convex combination of local and nonlocal phases, is presented. The analysis reveals that the Eringen strain-driven fully nonlocal model cannot be used in structural mechanics since it is ill-posed, and local-nonlocal mixtures based on the Eringen integral model only partially resolve the ill-posedness of the model. In fact, a singular behaviour of continuous nano-structures appears if the local fraction tends to vanish, so the ill-posedness of the Eringen integral model is not eliminated. On the contrary, local-nonlocal mixtures based on the stress-driven theory are mathematically and mechanically appropriate for nanosystems. Exact solutions of inflected functionally graded nanobeams of technical interest are established by adopting the new local-nonlocal mixture stress-driven integral relation. The effectiveness of the new nonlocal approach is tested by comparing the contributed results with the ones corresponding to the mixture Eringen theory.
Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun
2017-03-01
In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate their combined toxicity when they occur in a mixture. The effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; the concentration combinations of the three UV-filters in a mixture were determined from the fractions of components based on EC25 values predicted by the concentration addition (CA) model. The interaction between the UV-filters was also assessed by the model deviation ratio (MDR), using observed toxicity values from mixture-exposure tests and values predicted by the CA model. The results indicated that observed ECx,mix values (e.g., EC10,mix, EC25,mix, or EC50,mix) obtained from mixture-exposure tests were higher than the corresponding values predicted by the CA model, and MDR values were less than 1.0 for the mixtures of the three UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of these three UV-filters, caused by antagonistic action of the components. Our findings provide important information for hazard and risk assessment of organic UV-filters when they occur together in the aquatic environment. To better understand mixture toxicity and the interaction of components in a mixture, further studies on various combinations of mixture components are required.
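A sketch of the standard concentration-addition prediction and the MDR comparison described above (generic textbook formulas, not the authors' code; values in the test are illustrative):

```python
def ca_predicted_ecx(fractions, ecx_components):
    """Concentration-addition prediction of a mixture effective
    concentration: ECx_mix = 1 / sum_i(p_i / ECx_i), where p_i is the
    fraction of component i in the mixture."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ecx_components))

def model_deviation_ratio(predicted_ecx, observed_ecx):
    """MDR < 1 when the observed ECx exceeds the prediction, i.e. the
    mixture is less toxic than predicted (antagonism); MDR > 1 suggests
    greater-than-predicted toxicity."""
    return predicted_ecx / observed_ecx
```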
A general mixture model and its application to coastal sandbar migration simulation
NASA Astrophysics Data System (ADS)
Liang, Lixin; Yu, Xiping
2017-04-01
A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence, while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solutions. The model is validated against suspended sediment motion in steady open channel flows, in both equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. The VOF method for describing the water-air free surface and a topography response model are coupled to the model. The bed load transport rate and suspended load entrainment rate are determined by the seabed shear stress, which is obtained from the boundary-layer-resolving mixture model. The simulation results indicate that, under small-amplitude regular waves, erosion occurs on the sandbar slope facing against the wave propagation direction, while deposition dominates on the slope facing the direction of wave propagation, indicating an onshore migration tendency. The computational results also show that suspended load makes a substantial contribution to topography change in the surf zone, a contribution that has often been neglected in previous studies.
Prediction of the properties of anhydrite construction mixtures based on a neural network approach
NASA Astrophysics Data System (ADS)
Fedorchuk, Y. M.; Zamyatin, N. V.; Smirnov, G. V.; Rusina, O. N.; Sadenova, M. A.
2017-08-01
The article considers the application of a neural-network modeling mechanism to predict, from the components of anhydrite mixtures, the properties relevant to managing the technological processes for producing construction products based on fluoranhydrite.
Feder, Paul I; Ma, Zhenxu J; Bull, Richard J; Teuschler, Linda K; Rice, Glenn
2009-01-01
In chemical mixtures risk assessment, the use of dose-response data developed for one mixture to estimate risk posed by a second mixture depends on whether the two mixtures are sufficiently similar. While evaluations of similarity may be made using qualitative judgments, this article uses nonparametric statistical methods based on the "bootstrap" resampling technique to address the question of similarity among mixtures of chemical disinfectant by-products (DBP) in drinking water. The bootstrap resampling technique is a general-purpose, computer-intensive approach to statistical inference that substitutes empirical sampling for theoretically based parametric mathematical modeling. Nonparametric, bootstrap-based inference involves fewer assumptions than parametric normal theory based inference. The bootstrap procedure is appropriate, at least in an asymptotic sense, whether or not the parametric, distributional assumptions hold, even approximately. The statistical analysis procedures in this article are initially illustrated with data from 5 water treatment plants (Schenck et al., 2009), and then extended using data developed from a study of 35 drinking-water utilities (U.S. EPA/AMWA, 1989), which permits inclusion of a greater number of water constituents and increased structure in the statistical models.
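A minimal illustration of the bootstrap resampling idea, here as a percentile confidence interval for an arbitrary statistic (a generic sketch under stated assumptions, not the authors' specific similarity procedure):

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI: resample the data with replacement,
    recompute the statistic each time, and read off empirical quantiles.
    No parametric distributional assumptions are required."""
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```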
Parvez, Shahid; Venkataraman, Chandra; Mukherji, Suparna
2009-06-01
The concentration addition (CA) and independent action (IA) models are widely used for predicting mixture toxicity based on mixture composition and the dose-response profiles of the individual components. However, predictions based on these models may be inaccurate due to interaction among mixture components. In this work, the nature and prevalence of non-additive effects were explored for binary, ternary and quaternary mixtures composed of hydrophobic organic compounds (HOCs). The toxicity of each individual component and mixture was determined using the Vibrio fischeri bioluminescence inhibition assay. For each combination of chemicals specified by the 2^n factorial design, the percent deviation of the predicted toxic effect from the measured value was used to characterize mixtures as synergistic (positive deviation) or antagonistic (negative deviation). An arbitrary classification scheme was proposed based on the magnitude of deviation (d): additive (d ≤ 10%, class I); moderately (10% < d ≤ 30%, class II), highly (30% < d ≤ 50%, class III) and very highly (d > 50%, class IV) antagonistic/synergistic. Naphthalene, n-butanol, o-xylene, catechol and p-cresol led to synergism in mixtures, while 1,2,4-trimethylbenzene and 1,3-dimethylnaphthalene contributed to antagonism. Most of the mixtures depicted additive or antagonistic effects. Synergism was prominent in some of the mixtures, such as pulp and paper, textile dyes, and a mixture composed of polynuclear aromatic hydrocarbons. The organic chemical industry mixture depicted the highest abundance of antagonism and the least synergism. Mixture toxicity was found to depend on the partition coefficient, molecular connectivity index and relative concentration of the components.
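The classification scheme above maps directly onto a small helper; the sign of the deviation distinguishes synergism (positive) from antagonism (negative), while the magnitude sets the class:

```python
def classify_deviation(d_percent):
    """Classify a mixture by the magnitude of its percent deviation from
    additivity, following the four-class scheme in the abstract."""
    d = abs(d_percent)
    if d <= 10:
        return "class I: additive"
    if d <= 30:
        return "class II: moderately antagonistic/synergistic"
    if d <= 50:
        return "class III: highly antagonistic/synergistic"
    return "class IV: very highly antagonistic/synergistic"
```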
Kinetics of methane production from the codigestion of switchgrass and Spirulina platensis algae.
El-Mashad, Hamed M
2013-03-01
Anaerobic batch digestion of four feedstocks was conducted at 35 and 50 °C: switchgrass; Spirulina platensis algae; and two mixtures of switchgrass and S. platensis. Mixture 1 was composed of 87% switchgrass (based on volatile solids) and 13% S. platensis. Mixture 2 was composed of 67% switchgrass and 33% S. platensis. The kinetics of methane production from these feedstocks was studied using four first-order models: exponential, Gompertz, Fitzhugh, and Cone. The methane yields after 40 days of digestion at 35 °C were 355, 127, 143 and 198 ml/g VS, respectively, for S. platensis, switchgrass, and Mixtures 1 and 2, while the yields at 50 °C were 358, 167, 198, and 236 ml/g VS, respectively. Based on Akaike's information criterion, the Cone model best described the experimental data. The Cone model was validated with experimental data collected from the digestion of a third mixture composed of 83% switchgrass and 17% S. platensis.
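The exponential first-order and Cone models named above take the following common forms in digestion kinetics (parameter values in the test are illustrative, not the fitted values from the study):

```python
import math

def first_order_yield(t, b_max, k):
    """Exponential first-order model: B(t) = B_max * (1 - exp(-k t)),
    with B_max the ultimate methane yield (ml/g VS) and k the rate (1/day)."""
    return b_max * (1.0 - math.exp(-k * t))

def cone_yield(t, b_max, k, n):
    """Cone model: B(t) = B_max / (1 + (k t)^-n), with shape factor n.
    B(0) is taken as 0 by convention."""
    if t <= 0:
        return 0.0
    return b_max / (1.0 + (k * t) ** (-n))
```

Both curves rise from zero toward B_max; model selection between them (as in the study) can be done with Akaike's information criterion after least-squares fitting.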
Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.
Yu, Kezi; Quirk, J Gerald; Djurić, Petar M
2017-01-01
In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one that represents healthy and another, non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.
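A toy sampler for the Chinese restaurant process, with an optional table cap to mimic the finite-capacity variant (CRFC); this illustrates only the prior over partitions, not the full HDP classifier:

```python
import random

def chinese_restaurant_process(n_customers, alpha, max_tables=None, seed=0):
    """Sample a partition from the CRP: each customer joins an existing
    table with probability proportional to its size, or opens a new table
    with probability proportional to alpha. Capping max_tables gives a
    finite-capacity variant (a simplification of the paper's CRFC)."""
    rng = random.Random(seed)
    tables = []      # current table sizes
    assignment = []  # table index chosen by each customer
    for _ in range(n_customers):
        weights = list(tables)
        if max_tables is None or len(tables) < max_tables:
            weights.append(alpha)  # mass for opening a new table
        r = rng.random() * sum(weights)
        for t, w in enumerate(weights):
            r -= w
            if r <= 0:
                break
        if t == len(tables):       # new table opened
            tables.append(1)
        else:
            tables[t] += 1
        assignment.append(t)
    return assignment, tables
```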
A stochastic evolutionary model generating a mixture of exponential distributions
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2016-02-01
Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
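The exponential-mixture survival function that the urn-based model generates is simple to state (weights and rates below are hypothetical, not fitted to the query data set):

```python
import math

def mixture_survival(t, weights, rates):
    """Survival function of a mixture of exponentials:
    S(t) = sum_i w_i * exp(-lambda_i * t), with weights summing to 1.
    The mixture captures heterogeneity: each component has its own rate."""
    return sum(w * math.exp(-lam * t) for w, lam in zip(weights, rates))
```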
DOT National Transportation Integrated Search
2018-01-01
This report explores the application of a discrete computational model for predicting the fracture behavior of asphalt mixtures at low temperatures based on the results of simple laboratory experiments. In this discrete element model, coarse aggregat...
Physiologically based pharmacokinetic modeling of tea catechin mixture in rats and humans.
Law, Francis C P; Yao, Meicun; Bi, Hui-Chang; Lam, Stephen
2017-06-01
Although green tea (Camellia sinensis; GT) contains a large number of polyphenolic compounds with anti-oxidative and anti-proliferative activities, little is known of the pharmacokinetics and tissue dose of tea catechins (TCs) as a chemical mixture in humans. The objectives of this study were to develop and validate a physiologically based pharmacokinetic (PBPK) model of tea catechin mixture (TCM) in rats and humans, and to predict an integrated or total concentration of TCM in the plasma of humans after consuming GT or Polyphenon E (PE). To this end, a PBPK model of epigallocatechin gallate (EGCg) consisting of 13 first-order, blood flow-limited tissue compartments was first developed in rats. The rat model was scaled up to humans by replacing its physiological parameters, pharmacokinetic parameters and tissue/blood partition coefficients (PCs) with human-specific values. Both rat and human EGCg models were then extrapolated to other TCs by substituting its physicochemical parameters, pharmacokinetic parameters, and PCs with catechin-specific values. Finally, a PBPK model of TCM was constructed by linking three rat (or human) tea catechin models together without including a description for pharmacokinetic interaction between the TCs. The mixture PBPK model accurately predicted the pharmacokinetic behaviors of three individual TCs in the plasma of rats and humans after GT or PE consumption. Model-predicted total TCM concentration in the plasma was linearly related to the dose consumed by humans. The mixture PBPK model is able to translate an external dose of TCM into internal target tissue doses for future safety assessment and dose-response analysis studies in humans. The modeling framework as described in this paper is also applicable to the bioactive chemical in other plant-based health products.
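A single blood-flow-limited tissue compartment of the kind chained together in such PBPK models can be sketched with forward-Euler integration (parameter values are illustrative, not the paper's fitted catechin values):

```python
def flow_limited_tissue(c_arterial, flow, volume, partition, dt, steps, c0=0.0):
    """Forward-Euler integration of a blood-flow-limited compartment:
        V * dC/dt = Q * (C_art - C / P)
    where Q is tissue blood flow, V tissue volume, P the tissue/blood
    partition coefficient. At steady state C approaches P * C_art."""
    c = c0
    for _ in range(steps):
        c += dt * flow * (c_arterial - c / partition) / volume
    return c
```

A full PBPK model links many such compartments through arterial and venous blood; the mixture model in the paper runs one linked set per catechin.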
Approaches to developing alternative and predictive toxicology based on PBPK/PD and QSAR modeling.
Yang, R S; Thomas, R S; Gustafson, D L; Campain, J; Benjamin, S A; Verhaar, H J; Mumtaz, M M
1998-01-01
Systematic toxicity testing, using conventional toxicology methodologies, of single chemicals and chemical mixtures is highly impractical because of the immense numbers of chemicals and chemical mixtures involved and the limited scientific resources. Therefore, the development of unconventional, efficient, and predictive toxicology methods is imperative. Using carcinogenicity as an end point, we present approaches for developing predictive tools for toxicologic evaluation of chemicals and chemical mixtures relevant to environmental contamination. Central to the approaches presented is the integration of physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) and quantitative structure-activity relationship (QSAR) modeling with focused mechanistically based experimental toxicology. In this development, molecular and cellular biomarkers critical to the carcinogenesis process are evaluated quantitatively between different chemicals and/or chemical mixtures. Examples presented include the integration of PBPK/PD and QSAR modeling with a time-course medium-term liver foci assay, molecular biology and cell proliferation studies, Fourier transform infrared spectroscopic analyses of DNA changes, and cancer modeling to assess and attempt to predict the carcinogenicity of the series of 12 chlorobenzene isomers. Also presented is an ongoing effort to develop and apply a similar approach to chemical mixtures using in vitro cell culture (Syrian hamster embryo cell transformation assay and human keratinocytes) methodologies and in vivo studies. The promise and pitfalls of these developments are elaborated. When successfully applied, these approaches may greatly reduce animal usage, personnel, resources, and time required to evaluate the carcinogenicity of chemicals and chemical mixtures. PMID:9860897
Ng, S K; McLachlan, G J
2003-04-15
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol.
Measurement Of Multiphase Flow Water Fraction And Water-cut
NASA Astrophysics Data System (ADS)
Xie, Cheng-gang
2007-06-01
This paper describes a microwave transmission multiphase flow water-cut meter that measures the amplitude attenuation and phase shift across a pipe diameter at multiple frequencies using cavity-backed antennas. The multiphase flow mixture permittivity and conductivity are derived from a unified microwave transmission model for both water- and oil-continuous flows over a wide water-conductivity range; this is far beyond the capability of microwave-resonance-based sensors currently on the market. The water fraction and water cut are derived from a three-component gas-oil-water mixing model using the mixture permittivity or the mixture conductivity and an independently measured mixture density. Water salinity variations caused, for example, by changing formation water or formation/injection water breakthrough can be detected and corrected using an online water-conductivity tracking technique based on the interpretation of the mixture permittivity and conductivity, simultaneously measured by a single-modality microwave sensor.
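Under the simplifying assumption of linear mixing rules (the paper itself uses a unified microwave transmission model), the three phase fractions can be recovered from a measured mixture density and permittivity by solving a 3x3 linear system; all property values below are hypothetical:

```python
def phase_fractions(rho_mix, eps_mix, props):
    """Solve for (water, oil, gas) volume fractions from a measured mixture
    density and permittivity, assuming LINEAR mixing rules (a deliberate
    simplification of the paper's transmission model). props maps each
    phase name to a hypothetical (density, permittivity) pair."""
    (rw, ew), (ro, eo), (rg, eg) = props["water"], props["oil"], props["gas"]

    def det3(a):  # determinant of a 3x3 matrix
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    # density balance, permittivity balance, fractions sum to 1
    A = [[rw, ro, rg], [ew, eo, eg], [1.0, 1.0, 1.0]]
    b = [rho_mix, eps_mix, 1.0]
    d = det3(A)
    fracs = []
    for col in range(3):  # Cramer's rule, one unknown per column
        Ac = [row[:] for row in A]
        for r in range(3):
            Ac[r][col] = b[r]
        fracs.append(det3(Ac) / d)
    return dict(zip(("water", "oil", "gas"), fracs))
```

The water cut then follows as the water fraction of the liquid, water / (water + oil).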
Development of PBPK Models for Gasoline in Adult and ...
Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of
NASA Astrophysics Data System (ADS)
Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping
2012-09-01
Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors, that lead to complex loads. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, it cannot be well represented by a single-peak function. To represent the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method to establish a mixture model. Based on linear regression models and correlation coefficients, the proposed method can automatically select the best-fitting function in the mixture model. The coefficient of determination, mean square error, and maximum deviation are chosen as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated using field test data from a wheel loader, and load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for describing the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces statistical error, and enhances fitting accuracy, and the load spectra compiled by this method better reflect the actual load characteristics of the gear component.
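The three judging criteria named above (coefficient of determination, mean square error, maximum deviation) can be computed between the load histogram and a fitted mixture PDF as:

```python
def fit_criteria(observed, predicted):
    """Fitting-precision criteria between observed histogram values and a
    fitted theoretical distribution: R^2, MSE, and maximum absolute
    deviation. Generic formulas; not the authors' implementation."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot
    mse = ss_res / n
    max_dev = max(abs(o - p) for o, p in zip(observed, predicted))
    return r2, mse, max_dev
```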
Exploiting the functional and taxonomic structure of genomic data by probabilistic topic modeling.
Chen, Xin; Hu, Xiaohua; Lim, Tze Y; Shen, Xiajiong; Park, E K; Rosen, Gail L
2012-01-01
In this paper, we present a method that enables both the homology-based approach and the composition-based approach to further study the functional core (i.e., the microbial core and the gene core, respectively). In the proposed method, the identification of major functionality groups is achieved by generative topic modeling, which is able to extract useful information from unlabeled data. We first show that a generative topic model can be used to model the taxon abundance information obtained by the homology-based approach and study the microbial core. The model considers each sample as a "document," which has a mixture of functional groups, while each functional group (also known as a "latent topic") is a weighted mixture of species. Therefore, estimating the generative topic model for taxon abundance data will uncover the distribution over latent functions (latent topics) in each sample. Second, we show that a generative topic model can also be used to study the genome-level composition of "N-mer" features (DNA subreads obtained by composition-based approaches). The model considers each genome as a mixture of latent genetic patterns (latent topics), while each pattern is a weighted mixture of the "N-mer" features; thus the existence of core genomes can be indicated by a set of common N-mer features. After studying the mutual information between latent topics and gene regions, we provide an explanation of the functional roles of the uncovered latent genetic patterns. The experimental results demonstrate the effectiveness of the proposed method.
Population heterogeneity in the salience of multiple risk factors for adolescent delinquency.
Lanza, Stephanie T; Cooper, Brittany R; Bray, Bethany C
2014-03-01
To present mixture regression analysis as an alternative to more standard regression analysis for predicting adolescent delinquency. We demonstrate how mixture regression analysis allows for the identification of population subgroups defined by the salience of multiple risk factors. We identified population subgroups (i.e., latent classes) of individuals based on their coefficients in a regression model predicting adolescent delinquency from eight previously established risk indices drawn from the community, school, family, peer, and individual levels. The study included N = 37,763 10th-grade adolescents who participated in the Communities That Care Youth Survey. Standard, zero-inflated, and mixture Poisson and negative binomial regression models were considered. Standard and mixture negative binomial regression models were selected as optimal. The five-class regression model was interpreted based on the class-specific regression coefficients, indicating that risk factors had varying salience across classes of adolescents. Standard regression showed that all risk factors were significantly associated with delinquency. Mixture regression provided more nuanced information, suggesting a unique set of risk factors that were salient for different subgroups of adolescents. Implications for the design of subgroup-specific interventions are discussed.
Gauthier, Patrick T; Norwood, Warren P; Prepas, Ellie E; Pyle, Greg G
2015-10-06
Mixtures of metals and polycyclic aromatic hydrocarbons (PAHs) occur ubiquitously in aquatic environments, yet relatively little is known regarding their potential to produce non-additive toxicity (i.e., antagonism or potentiation). A review of the lethality of metal-PAH mixtures in aquatic biota revealed that more-than-additive lethality is as common as strictly additive effects. Approaches to ecological risk assessment do not consider non-additive toxicity of metal-PAH mixtures. Forty-eight-hour water-only binary mixture toxicity experiments were conducted to determine the additive toxic nature of mixtures of Cu, Cd, V, or Ni with phenanthrene (PHE) or phenanthrenequinone (PHQ) using the aquatic amphipod Hyalella azteca. In cases where more-than-additive toxicity was observed, we calculated the possible mortality rates at Canada's environmental water quality guideline concentrations. We used a three-dimensional response surface isobole model-based approach to compare the observed co-toxicity in juvenile amphipods to predicted outcomes based on concentration addition or effects addition mixtures models. More-than-additive lethality was observed for all Cu-PHE, Cu-PHQ, and several Cd-PHE, Cd-PHQ, and Ni-PHE mixtures. Our analysis predicts Cu-PHE, Cu-PHQ, Cd-PHE, and Cd-PHQ mixtures at the Canadian Water Quality Guideline concentrations would produce 7.5%, 3.7%, 4.4% and 1.4% mortality, respectively.
Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J
2017-01-01
Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, the test with simple asymptotic distribution has computational advantages compared with permutation-based test for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood based expectation-maximization test, and show the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.
M. M. Clark; T. H. Fletcher; R. R. Linn
2010-01-01
The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...
Cure modeling in real-time prediction: How much does it help?
Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F
2017-08-01
Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016, BMC Medical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intending to cover those situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
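The cure-mixture idea above is easy to sketch: a cured fraction never experiences the event, so the survival curve plateaus instead of decaying to zero. The snippet below is a minimal illustration with a Weibull latency distribution; the parameter values are illustrative assumptions, not estimates from RTOG 0129.

```python
import math

def cure_mixture_survival(t, cure_frac, shape, scale):
    """Weibull cure-mixture survival: S(t) = pi + (1 - pi) * exp(-(t/scale)**shape).
    The cured fraction `cure_frac` never experiences the event, so S(t)
    levels off at `cure_frac` rather than decaying to zero."""
    return cure_frac + (1.0 - cure_frac) * math.exp(-((t / scale) ** shape))

# Illustrative parameters: 30% cured, Weibull(shape=1.5, scale=2) latency.
s_early = cure_mixture_survival(1.0, 0.3, 1.5, 2.0)   # survival at t = 1
s_late = cure_mixture_survival(50.0, 0.3, 1.5, 2.0)   # plateau near 0.30
```

A standard Weibull model forces S(t) to zero, which is why it predicts events earlier than the cure-mixture model once a plateau appears in the data.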
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Highly replicated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Although fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations, because yield calculation typically requires many SPICE simulations, and circuit SPICE simulation accounts for the largest share of the time in process-yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over the design variables and process variables: SPICE simulation supplies a set of sample points, and these points are used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the calculation of failure rate. Based on the model, we developed a further accelerated algorithm to increase the speed of yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
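The surrogate-fitting step described above can be sketched with a lasso regression on synthetic data standing in for SPICE samples. Everything here (the data, the sparsity pattern, the regularization strength) is an illustrative assumption; the point is only that the lasso selects a sparse subset of design/process variables while fitting the response well.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
# Hypothetical stand-in for SPICE samples: 200 points over 6
# design/process variables; the failure metric depends sparsely
# on variables 0 and 3.
X = rng.normal(size=(200, 6))
y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + 0.05 * rng.normal(size=200)

surrogate = Lasso(alpha=0.1).fit(X, y)
score = surrogate.score(X, y)  # R^2 of the fitted surrogate
```

Once such a surrogate is in hand, failure-rate evaluations inside the optimization loop query the cheap model instead of the SPICE solver.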
Jasper, Micah N; Martin, Sheppard A; Oshiro, Wendy M; Ford, Jermaine; Bushnell, Philip J; El-Masri, Hisham
2016-03-15
People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate the performance of our PBPK model and chemical lumping method. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course toxicokinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 nontarget chemicals. The same biologically based lumping approach can be used to simplify any complex mixture with tens, hundreds, or thousands of constituents.
Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan
2016-01-01
This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability.
Based on previous research on the acute toxicity of major ions (Na⁺, K⁺, Ca²⁺, Mg²⁺, Cl⁻, SO₄²⁻, and HCO₃⁻/CO₃²⁻) to C. dubia, two mathematical models were developed for predicting the LC50 for any ion mixture, excluding those dominated by K⁺ toxicity. One model addresses a mechanism...
Individual and binary toxicity of anatase and rutile nanoparticles towards Ceriodaphnia dubia.
Iswarya, V; Bhuvaneshwari, M; Chandrasekaran, N; Mukherjee, Amitava
2016-09-01
Increasing usage of engineered nanoparticles, especially titanium dioxide (TiO2), in various commercial products has necessitated their toxicity evaluation and risk assessment, especially in the aquatic ecosystem. In the present study, a comprehensive toxicity assessment of anatase and rutile NPs (individual as well as a binary mixture) has been carried out in a freshwater matrix on Ceriodaphnia dubia under different irradiation conditions, viz. visible and UV-A. Anatase and rutile NPs produced an LC50 of about 37.04 and 48 mg/L, respectively, under visible irradiation. However, lower LC50 values of about 22.56 (anatase) and 23.76 (rutile) mg/L were noted under UV-A irradiation. A toxic unit (TU) approach was followed to determine the concentrations of binary mixtures of anatase and rutile. The binary mixture resulted in an antagonistic and additive effect under visible and UV-A irradiation, respectively. Of the two modeling approaches used in the study, the Marking-Dawson model was noted to be more appropriate than the Abbott model for the toxicity evaluation of binary mixtures. The agglomeration of NPs played a significant role in the induction of antagonistic and additive effects by the mixture, depending on the irradiation applied. TEM and zeta potential analysis confirmed the surface interactions between anatase and rutile NPs in the mixture. Maximum uptake was noticed at 0.25 total TU of the binary mixture under visible irradiation and 1 TU of anatase NPs under UV-A irradiation. Individual NPs showed higher uptake under UV-A than visible irradiation. In contrast, the binary mixture showed a difference in uptake pattern depending on the type of irradiation applied. Copyright © 2016 Elsevier B.V. All rights reserved.
Extending the Distributed Lag Model framework to handle chemical mixtures.
Bello, Ghalib A; Arora, Manish; Austin, Christine; Horton, Megan K; Wright, Robert O; Gennings, Chris
2017-07-01
Distributed Lag Models (DLMs) are used in environmental health studies to analyze the time-delayed effect of an exposure on an outcome of interest. Given the increasing need for analytical tools for evaluation of the effects of exposure to multi-pollutant mixtures, this study attempts to extend the classical DLM framework to accommodate and evaluate multiple longitudinally observed exposures. We introduce 2 techniques for quantifying the time-varying mixture effect of multiple exposures on an outcome of interest. Lagged WQS, the first technique, is based on Weighted Quantile Sum (WQS) regression, a penalized regression method that estimates mixture effects using a weighted index. We also introduce Tree-based DLMs, a nonparametric alternative for assessment of lagged mixture effects. This technique is based on the Random Forest (RF) algorithm, a nonparametric, tree-based estimation technique that has shown excellent performance in a wide variety of domains. In a simulation study, we tested the feasibility of these techniques and evaluated their performance in comparison to standard methodology. Both methods exhibited relatively robust performance, accurately capturing pre-defined non-linear functional relationships in different simulation settings. Further, we applied these techniques to data on perinatal exposure to environmental metal toxicants, with the goal of evaluating the effects of exposure on neurodevelopment. Our methods identified critical neurodevelopmental windows showing significant sensitivity to metal mixtures. Copyright © 2017 Elsevier Inc. All rights reserved.
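The Tree-based DLM idea above (a random forest over lagged exposure measurements, with variable importance pointing at critical windows) can be sketched on synthetic data. The data-generating setup below is an assumption for illustration: only lags 2 and 3 drive the outcome, and the forest's feature importances are expected to recover that window.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# Hypothetical data: one exposure measured at 8 time lags for 300
# subjects; only lags 2-3 (a "critical window") affect the outcome.
X = rng.normal(size=(300, 8))
y = X[:, 2] + X[:, 3] + 0.1 * rng.normal(size=300)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
imp = rf.feature_importances_
top_lags = set(np.argsort(imp)[-2:])  # the two most influential lags
```

In the multi-pollutant setting of the paper, the feature matrix would hold lagged measurements of several exposures, and importances are read per exposure-lag pair.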
Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro; Tamaki, Keiji
2017-01-01
In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed a new open-source software “Kongoh” for interpreting DNA mixture based on a quantitative continuous model. The model uses quantitative information of peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. By using this software, the likelihoods of 1–4 persons’ contributions are calculated, and the most optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI’s contribution in true contributors and non-contributors by using 2–4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI’s contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than another software based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence like mixtures and small amounts or degraded DNA samples. PMID:29149210
Lo, Kenneth; Gottardo, Raphael
2011-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
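The transformation-then-mixture idea can be illustrated in a simplified form: apply a Box-Cox transformation to symmetrize skewed data, then fit a mixture. The sketch below uses a plain normal mixture after a single global Box-Cox transform as a stand-in for the paper's t mixture with per-component transformations; the two lognormal clusters are an assumed toy example.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Two skewed (lognormal) clusters: fitting a normal mixture directly
# would need extra components to absorb the skewness.
x = np.concatenate([rng.lognormal(0.0, 0.4, 300),
                    rng.lognormal(2.0, 0.4, 300)])

# Box-Cox transform symmetrizes the data (lambda estimated by ML),
# after which a two-component normal mixture recovers the clusters.
z, lam = stats.boxcox(x)
gm = GaussianMixture(n_components=2, random_state=0).fit(z.reshape(-1, 1))
labels = gm.predict(z.reshape(-1, 1))
```

The paper's method goes further by estimating component-specific transformations jointly with heavy-tailed t components inside the EM iterations, which is what buys the robustness to outliers.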
Liaw, Horng-Jang; Wang, Tzu-Ai
2007-03-06
Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Herein, a liquid with dissolved salt is presented in a salt-distillation process for separating close-boiling or azeotropic systems. The addition of salts to a liquid may reduce fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with the experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in terms of prediction of the flash points of these mixtures. The experimental results confirm marked increases in liquid flash point increment with addition of inorganic salts relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessment of the fire and explosion hazard for solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
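The underlying Liaw-type flash-point model for miscible mixtures solves for the temperature at which the flammability-weighted vapor-pressure contributions sum to one. The sketch below assumes ideal-solution activity coefficients (γᵢ = 1), unlike the salt-containing systems studied here, and the Antoine constants and pure-component flash points are hypothetical placeholders, not data for real solvents.

```python
def antoine_p(a, b, c, t_c):
    """Vapor pressure (mmHg) from the Antoine equation; T in deg C."""
    return 10.0 ** (a - b / (c + t_c))

def mixture_flash_point(x, antoine, fp, lo=-50.0, hi=150.0):
    """Liaw-type flash-point model for a miscible binary mixture with
    ideal-solution activity coefficients:
    solve sum_i x_i * P_i(T) / P_i(T_fp,i) = 1 for T by bisection."""
    def f(t):
        return sum(xi * antoine_p(*ab, t) / antoine_p(*ab, fpi)
                   for xi, ab, fpi in zip(x, antoine, fp)) - 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical Antoine constants (A, B, C) and pure flash points (deg C).
antoine = [(8.0, 1600.0, 230.0), (7.0, 1500.0, 220.0)]
fp = [10.0, 40.0]
fp_mix = mixture_flash_point([0.5, 0.5], antoine, fp)
```

The salt modification in the paper enters through the activity coefficients: dissolved salts lower the solvent's effective vapor pressure, pushing the computed flash point upward.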
Detecting Math Anxiety with a Mixture Partial Credit Model
ERIC Educational Resources Information Center
Ölmez, Ibrahim Burak; Cohen, Allan S.
2017-01-01
The purpose of this study was to investigate a new methodology for detection of differences in middle grades students' math anxiety. A mixture partial credit model analysis revealed two distinct latent classes based on homogeneities in response patterns within each latent class. Students in Class 1 had less anxiety about apprehension of math…
The main objectives of this study were to: (1) determine whether dissimilar antiandrogenic compounds display additive effects when present in combination and (2) to assess the ability of modelling approaches to accurately predict these mixture effects based on data from single ch...
Detecting Social Desirability Bias Using Factor Mixture Models
ERIC Educational Resources Information Center
Leite, Walter L.; Cooper, Lou Ann
2010-01-01
Based on the conceptualization that social desirable bias (SDB) is a discrete event resulting from an interaction between a scale's items, the testing situation, and the respondent's latent trait on a social desirability factor, we present a method that makes use of factor mixture models to identify which examinees are most likely to provide…
Separation mechanism of nortriptyline and amitriptyline in RPLC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gritti, Fabrice; Guiochon, Georges A
2005-08-01
The single and the competitive equilibrium isotherms of nortriptyline and amitriptyline were acquired by frontal analysis (FA) on the C18-bonded Discovery column, using a 28/72 (v/v) mixture of acetonitrile and water buffered with phosphate (20 mM, pH 2.70). The adsorption energy distributions (AED) of each compound were calculated from the raw adsorption data. Both the fitting of the adsorption data using multi-linear regression analysis and the AEDs are consistent with a trimodal isotherm model. The single-component isotherm data fit well to the tri-Langmuir isotherm model. The extension to a competitive two-component tri-Langmuir isotherm model based on the best parameters of the single-component isotherms does not account well for the breakthrough curves nor for the overloaded band profiles measured for mixtures of nortriptyline and amitriptyline. However, it was possible to derive adjusted parameters of a competitive tri-Langmuir model based on the fitting of the adsorption data obtained for these mixtures. A very good agreement was then found between the calculated and the experimental overloaded band profiles of all the mixtures injected.
Huang, Wei Ying; Liu, Fei; Liu, Shu Shen; Ge, Hui Lin; Chen, Hong Han
2011-09-01
The predictions of mixture toxicity for chemicals are commonly based on two models: concentration addition (CA) and independent action (IA). Whether the CA and IA can predict mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms was studied. The mixture toxicity was predicted on the basis of the concentration-response data of individual compounds. Test mixtures at different concentration ratios and concentration levels were designed using two methods. The results showed that the Weibull function fit well with the concentration-response data of all the components and their mixtures, with all relative coefficients (Rs) greater than 0.99 and root mean squared errors (RMSEs) less than 0.04. The predicted values from CA and IA models conformed to observed values of the mixtures. Therefore, it can be concluded that both CA and IA can predict reliable results for the mixture toxicity of the phenolic compounds with similar and dissimilar action mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.
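The two reference models compared above have simple closed forms: concentration addition combines component potencies through their mixture fractions, while independent action multiplies the probabilities of escaping each effect. The sketch below implements both; the numeric inputs are illustrative, not data from the study.

```python
def ca_ec50_mix(fractions, ec50s):
    """Concentration addition (CA): mixture EC50 from component EC50s
    and their mixture fractions p_i (summing to 1):
    EC50_mix = 1 / sum_i(p_i / EC50_i)."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def ia_effect(effects):
    """Independent action (IA): combined effect from individual
    fractional effects e_i: E_mix = 1 - prod_i(1 - e_i)."""
    prod = 1.0
    for e in effects:
        prod *= (1.0 - e)
    return 1.0 - prod

# Illustrative two-component mixture at a 50/50 concentration ratio.
ec50_mix = ca_ec50_mix([0.5, 0.5], [2.0, 4.0])  # CA-predicted mixture EC50
e_mix = ia_effect([0.2, 0.3])                   # IA-predicted joint effect
```

CA is the usual reference for similarly acting components and IA for dissimilarly acting ones; the study's finding is that for these phenolic compounds both references matched the observed mixture responses.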
Second law of thermodynamics in volume diffusion hydrodynamics in multicomponent gas mixtures
NASA Astrophysics Data System (ADS)
Dadzie, S. Kokou
2012-10-01
We present the thermodynamic structure of a new continuum flow model for multicomponent gas mixtures. The continuum model is based on a volume diffusion concept involving specific species. It is independent of the observer's reference frame and enables a straightforward tracking of a selected species within a mixture composed of a large number of constituents. A method to derive the second law and constitutive equations accompanying the model is presented. Using the configuration of a rotating fluid, we illustrate an example of non-classical flow physics predicted by new contributions in the entropy and constitutive equations.
Santos, Radleigh G; Appel, Jon R; Giulianotti, Marc A; Edwards, Bruce S; Sklar, Larry A; Houghten, Richard A; Pinilla, Clemencia
2013-05-30
In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays.
Flash-point prediction for binary partially miscible mixtures of flammable solvents.
Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng
2008-05-30
Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.
Inadequacy representation of flamelet-based RANS model for turbulent non-premixed flame
NASA Astrophysics Data System (ADS)
Lee, Myoungkyu; Oliver, Todd; Moser, Robert
2017-11-01
Stochastic representations for model inadequacy in RANS-based models of non-premixed jet flames are developed and explored. Flamelet-based RANS models are attractive for engineering applications relative to higher-fidelity methods because of their low computational costs. However, the various assumptions inherent in such models introduce errors that can significantly affect the accuracy of computed quantities of interest. In this work, we develop an approach to represent the model inadequacy of the flamelet-based RANS model. In particular, we pose a physics-based, stochastic PDE for the triple correlation of the mixture fraction. This additional uncertain state variable is then used to construct perturbations of the PDF for the instantaneous mixture fraction, which is used to obtain an uncertain perturbation of the flame temperature. A hydrogen-air non-premixed jet flame is used to demonstrate the representation of the inadequacy of the flamelet-based RANS model. This work was supported by the DARPA EQUiPS (Enabling Quantification of Uncertainty in Physical Systems) program.
Mixture optimization for mixed gas Joule-Thomson cycle
NASA Astrophysics Data System (ADS)
Detlor, J.; Pfotenhauer, J.; Nellis, G.
2017-12-01
An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters. This study expands on prior research by exploring higher heat rejection temperatures and lower pressure ratios. A mixture optimization model has been developed which determines an optimal three-component mixture by maximizing, over candidate compositions, the minimum isothermal enthalpy change, ΔhT, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed gas JT system with load temperatures down to 110 K and supply temperatures above room temperature for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the percent of the heat exchanger that exists in a two-phase range in order to begin the process of selecting a mixture for experimental investigation.
A Study of Soil and Duricrust Models for Mars
NASA Technical Reports Server (NTRS)
Bishop, Janice L.; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
This project includes analysis of the Mars Pathfinder soil data (spectral, chemical and magnetic) together with analog materials and the products of laboratory alteration experiments in order to describe possible mechanisms for the formation of soil, duricrust and rock coatings on Mars. Soil analog mixtures have been prepared, characterized and tested through wet/dry cycling experiments for changes in binding and spectroscopic properties that are related to what could be expected for duricrusts on Mars. The smectite-based mixture exhibited significantly greater changes (1) in its binding properties throughout the wet/dry cycling experiments than did the palagonite-based mixture, and (2) in its spectral properties following grinding and resieving of the hardened material than did the palagonite-based mixture.
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-06-01
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
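The key operation, conditioning a Gaussian mixture on known values of some dimensions, follows from the standard conditional-Gaussian formulas. A minimal NumPy sketch (our own illustrative helper, not the XDGMM API) might look like:

```python
import numpy as np
from scipy.stats import multivariate_normal

def condition_gmm(weights, means, covs, obs_idx, obs_val):
    """Condition a Gaussian mixture on known values of a subset of
    dimensions, returning the mixture over the remaining dimensions."""
    free_idx = [i for i in range(means.shape[1]) if i not in obs_idx]
    new_w, new_mu, new_cov = [], [], []
    for w, mu, S in zip(weights, means, covs):
        Saa = S[np.ix_(free_idx, free_idx)]
        Sab = S[np.ix_(free_idx, obs_idx)]
        Sbb = S[np.ix_(obs_idx, obs_idx)]
        Sbb_inv = np.linalg.inv(Sbb)
        # Component weight is re-scaled by its likelihood of the observation.
        new_w.append(w * multivariate_normal.pdf(obs_val, mu[obs_idx], Sbb))
        # Standard conditional-Gaussian mean and covariance.
        new_mu.append(mu[free_idx] + Sab @ Sbb_inv @ (obs_val - mu[obs_idx]))
        new_cov.append(Saa - Sab @ Sbb_inv @ Sab.T)
    new_w = np.array(new_w)
    return new_w / new_w.sum(), np.array(new_mu), np.array(new_cov)
```

Because the weights are re-scaled by each component's likelihood of the observed values, conditioning on an observation near one component's mean concentrates posterior mass on that component.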
Howdeshell, Kembra L; Hotchkiss, Andrew K; Gray, L Earl
2017-03-01
Toxicological studies of defined chemical mixtures assist human health risk assessment by establishing how chemicals interact with one another to induce an effect. This paper reviews how antiandrogenic chemical mixtures can alter reproductive tract development in rats with a focus on the reproductive toxicant phthalates. The reviewed studies compare observed mixture data to mathematical mixture model predictions based on dose addition or response addition to determine how the individual chemicals in a mixture interact (e.g., additive, greater, or less than additive). Phthalate mixtures were observed to act in a dose additive manner based on the relative potency of the individual phthalates to suppress fetal testosterone production. Similar dose additive effects have been reported for mixtures of phthalates with antiandrogenic pesticides of differing mechanisms of action. Overall, data from these phthalate experiments in rats can be used in conjunction with human biomonitoring data to determine individual hazard indices, and recent cumulative risk assessments in humans indicate an excess risk to antiandrogenic chemical mixtures that include phthalates only or phthalates in combination with other antiandrogenic chemicals. Published by Elsevier GmbH.
Combining Mixture Components for Clustering*
Baudry, Jean-Patrick; Raftery, Adrian E.; Celeux, Gilles; Lo, Kenneth; Gottardo, Raphaël
2010-01-01
Model-based clustering consists of fitting a mixture model to data and identifying each cluster with one of its components. Multivariate normal distributions are typically used. The number of clusters is usually determined from the data, often using BIC. In practice, however, individual clusters can be poorly fitted by Gaussian distributions, and in that case model-based clustering tends to represent one non-Gaussian cluster by a mixture of two or more Gaussian distributions. If the number of mixture components is interpreted as the number of clusters, this can lead to overestimation of the number of clusters. This is because BIC selects the number of mixture components needed to provide a good approximation to the density, rather than the number of clusters as such. We propose first selecting the total number of Gaussian mixture components, K, using BIC and then combining them hierarchically according to an entropy criterion. This yields a unique soft clustering for each number of clusters less than or equal to K. These clusterings can be compared on substantive grounds, and we also describe an automatic way of selecting the number of clusters via a piecewise linear regression fit to the rescaled entropy plot. We illustrate the method with simulated data and a flow cytometry dataset. Supplemental Materials are available on the journal Web site and described at the end of the paper. PMID:20953302
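A minimal sketch of this two-stage procedure (BIC selection, then entropy-based merging) can be written with scikit-learn and NumPy; the data, component range, and greedy merge order here are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def entropy(tau):
    """Entropy of the soft assignment given an (n, K) posterior matrix."""
    t = np.clip(tau, 1e-300, None)
    return float(-np.sum(t * np.log(t)))

def merge_once(tau):
    """Merge the pair of components whose union minimizes the entropy."""
    K = tau.shape[1]
    best_ent, best_tau = np.inf, None
    for i in range(K):
        for j in range(i + 1, K):
            merged = np.delete(tau, j, axis=1)
            merged[:, i] = tau[:, i] + tau[:, j]  # union of clusters i and j
            ent = entropy(merged)
            if ent < best_ent:
                best_ent, best_tau = ent, merged
    return best_tau

rng = np.random.default_rng(0)
# A skewed (non-Gaussian) cluster plus a Gaussian one: BIC may spend
# several Gaussian components approximating the skewed cluster.
X = np.vstack([rng.exponential(1.0, (150, 2)), rng.normal(8.0, 1.0, (150, 2))])
fits = [GaussianMixture(k, random_state=0).fit(X) for k in range(1, 6)]
gmm = min(fits, key=lambda g: g.bic(X))   # stage 1: choose K by BIC
tau = gmm.predict_proba(X)
while tau.shape[1] > 2:                   # stage 2: hierarchical merging
    tau = merge_once(tau)
```

Merging two posterior columns can only decrease the entropy, so the sequence of clusterings becomes progressively "harder" as components are combined.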
Santos, Radleigh G.; Appel, Jon R.; Giulianotti, Marc A.; Edwards, Bruce S.; Sklar, Larry A.; Houghten, Richard A.; Pinilla, Clemencia
2014-01-01
In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays. PMID:23722730
A Gaussian Mixture Model-based continuous Boundary Detection for 3D sensor networks.
Chen, Jiehui; Salim, Mariam B; Matsumoto, Mitsuji
2010-01-01
This paper proposes a novel high-precision Gaussian Mixture Model-based Boundary Detection 3D (BD3D) scheme with reasonable implementation cost for 3D cases, which selects a minimum number of Boundary sensor Nodes (BNs) for continuously moving objects. The scheme has clear advantages: boundary and non-boundary sensor nodes can be efficiently classified using model selection techniques for finite mixture models, and the set of sensor readings within each sensor node's spatial neighborhood is formulated as a Gaussian mixture model. Unlike DECOMO [1] and COBOM [2], we also construct a BN array that includes each node's own sensor reading, which aids in separating Event BNs (EBNs) from non-EBNs in the observations of BNs. In particular, we propose a Thick Section Model (TSM) to solve the problem of transition between 2D and 3D. Simulations verify that the BD3D 2D model outperforms DECOMO and COBOM in terms of average residual energy and the number of BNs selected, while the BD3D 3D model performs soundly even for low-density sensor networks, especially when the sensor transmission range (r) is larger than the Section Thickness (d) in TSM. We have also rigorously proved the scheme's correctness for continuous geometric domains and its full robustness for sensor networks over 3D terrains.
Mixture Rasch model for guessing group identification
NASA Astrophysics Data System (ADS)
Siow, Hoo Leong; Mahdi, Rasidah; Siew, Eng Ling
2013-04-01
Several alternative dichotomous Item Response Theory (IRT) models have been introduced to account for guessing effects in multiple-choice assessment. The guessing effect in these models has been considered to be item-related. In the most classic case, pseudo-guessing in the three-parameter logistic IRT model is modeled to be the same for all subjects but may vary across items. This is not realistic, because subjects can guess worse or better than the pseudo-guessing parameter implies. Extensions of the three-parameter logistic IRT model improve the situation by incorporating ability into guessing; however, they do not model non-monotone functions. This paper proposes to study guessing from a subject-related aspect, namely guessing as test-taking behavior. A mixture Rasch model is employed to detect latent groups. A hybrid of the mixture Rasch and three-parameter logistic IRT models is proposed to model behavior-based guessing from the subjects' ways of responding to the items. The guessing subjects are assumed to simply choose a response at random. An information criterion is proposed to identify the behavior-based guessing group. Results show that the proposed model selection criterion provides a promising method to identify the guessing group modeled by the hybrid model.
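For reference, the item response functions involved can be written in a few lines; θ is ability, b difficulty, a discrimination, c the pseudo-guessing lower asymptote, and the behavior-based guessing class is modeled as choosing one of m options at random (function names are ours):

```python
import numpy as np

def rasch(theta, b):
    """Rasch model: P(correct) given ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def three_pl(theta, a, b, c):
    """Three-parameter logistic model; c is the item's pseudo-guessing
    lower asymptote and a its discrimination."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def random_guessing(n_options):
    """Behavior-based guessing class: a response chosen purely at random."""
    return 1.0 / n_options
```

With a = 1 and c = 0 the 3PL reduces to the Rasch model, which is why the two can be hybridized within one mixture: each latent class simply uses a different response function.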
NASA Astrophysics Data System (ADS)
Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard
2016-08-01
Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.
A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.
Carreau, Julie; Bengio, Yoshua
2009-07-01
In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y, with (X, Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
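A simplified hybrid Pareto density can be sketched as follows. This version only enforces continuity of the density at a given junction point u and fixes the tail scale to σ, whereas the model of Carreau and Bengio also ties the junction and scale to the tail-index parameter through smoothness conditions:

```python
import numpy as np
from scipy.stats import genpareto, norm

def hybrid_pareto_pdf(y, mu, sigma, xi, u):
    """Gaussian body below u, generalized Pareto tail above u, with the
    two pieces matched so the density is continuous at u. The tail scale
    beta = sigma is a simplifying assumption of this sketch."""
    beta = sigma
    phi_u = norm.pdf(u, mu, sigma)
    # Unnormalized mass: Gaussian CDF below u plus the scaled GPD tail.
    Z = norm.cdf(u, mu, sigma) + phi_u * beta
    y = np.asarray(y, dtype=float)
    body = norm.pdf(y, mu, sigma)
    # GPD pdf at u equals 1/beta, so this tail piece equals phi_u at u.
    tail = phi_u * beta * genpareto.pdf(y, xi, loc=u, scale=beta)
    return np.where(y <= u, body, tail) / Z
```

The tail-index parameter ξ plays the role of the "third parameter" in the abstract: larger ξ gives a heavier upper tail, while ξ → 0 recovers an exponential tail.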
Mixture toxicity revisited from a toxicogenomic perspective.
Altenburger, Rolf; Scholz, Stefan; Schmitt-Jansen, Mechthild; Busch, Wibke; Escher, Beate I
2012-03-06
The advent of new genomic techniques has raised expectations that central questions of mixture toxicology, such as the mechanisms of low-dose interactions, can now be answered. This review provides an overview of experimental studies from the past decade that address diagnostic and/or mechanistic questions regarding the combined effects of chemical mixtures using toxicogenomic techniques. From 2002 to 2011, 41 studies were published with a focus on mixture toxicity assessment. Primarily multiplexed quantification of gene transcripts was performed, though metabolomic and proteomic analysis of joint exposures has also been undertaken. It is now standard to explicitly state criteria for selecting concentrations and provide insight into data transformation and statistical treatment with respect to minimizing sources of undue variability. Bioinformatic analysis of toxicogenomic data, by contrast, is still a field with diverse and rapidly evolving tools. The reported combined effect assessments are discussed in the light of established toxicological dose-response and mixture toxicity models. Receptor-based assays seem to be the most advanced toward establishing quantitative relationships between exposure and biological responses. Often transcriptomic responses are discussed based on the presence or absence of signals, where the interpretation may remain ambiguous due to methodological problems. Most mixture studies are designed to compare the recorded mixture outcome against responses for individual components only. This stands in stark contrast to our existing understanding of joint biological activity at the levels of chemical target interactions and apical combined effects. By joining established mixture effect models with toxicokinetic and -dynamic thinking, we suggest a conceptual framework that may help to overcome the current limitation of providing mainly anecdotal evidence on mixture effects.
To achieve this, we suggest (i) designing studies that establish quantitative relationships between the dose and time dependency of responses and (ii) adopting mixture toxicity models. Moreover, (iii) utilizing novel bioinformatic tools and (iv) stress-response concepts could prove productive in translating multiple responses into hypotheses on the relationships between general stress and specific toxicity reactions of organisms.
The simultaneous mass and energy evaporation (SM2E) model.
Choudhary, Rehan; Klauda, Jeffery B
2016-01-01
In this article, the Simultaneous Mass and Energy Evaporation (SM2E) model is presented. The SM2E model is based on theoretical models for mass and energy transfer. These theoretical models systematically under- or over-predicted evaporation rates at various flow conditions: laminar, transition, and turbulent. They were harmonized with experimental measurements to eliminate the systematic under- and over-predictions; a total of 113 measured evaporation rates were used. The SM2E model can be used to estimate evaporation rates for pure liquids as well as liquid mixtures at laminar, transition, and turbulent flow conditions. However, due to the limited availability of evaporation data, the model has so far only been tested against data for pure liquids and binary mixtures. The model can take evaporative cooling into account, and when the temperature of the evaporating liquid or liquid mixture is known (e.g., isothermal evaporation), the SM2E model reduces to a mass transfer-only model.
Bayesian Regularization for Normal Mixture Estimation and Model-Based Clustering
2005-08-04
describe a four-band magnetic resonance image (MRI) consisting of 23,712 pixels of a brain with a tumor 2. Because of the size of the dataset, it is not...the Royal Statistical Society, Series B 56, 363–375. Figueiredo, M. A. T. and A. K. Jain (2002). Unsupervised learning of finite mixture models. IEEE...20 5.4 Brain MRI
Ma, Dehua; Chen, Lujun; Zhu, Xiaobiao; Li, Feifei; Liu, Cong; Liu, Rui
2014-05-01
To date, toxicological studies of endocrine disrupting chemicals (EDCs) have typically focused on single chemical exposures and associated effects. However, exposure to EDCs mixtures in the environment is common. Antiandrogens represent a group of EDCs, which draw increasing attention due to their resultant demasculinization and sexual disruption of aquatic organisms. Although there are a number of in vivo and in vitro studies investigating the combined effects of antiandrogen mixtures, these studies are mainly on selected model compounds such as flutamide, procymidone, and vinclozolin. The aim of the present study is to investigate the combined antiandrogenic effects of parabens, which are widely used antiandrogens in industrial and domestic commodities. A yeast-based human androgen receptor (hAR) assay (YAS) was applied to assess the antiandrogenic activities of n-propylparaben (nPrP), iso-propylparaben (iPrP), methylparaben (MeP), and 4-n-pentylphenol (PeP), as well as the binary mixtures of nPrP with each of the other three antiandrogens. All of the four compounds could exhibit antiandrogenic activity via the hAR. A linear interaction model was applied to quantitatively analyze the interaction between nPrP and each of the other three antiandrogens. The isoboles method was modified to show the variation of combined effects as the concentrations of mixed antiandrogens were changed. Graphs were constructed to show isoeffective curves of three binary mixtures based on the fitted linear interaction model and to evaluate the interaction of the mixed antiandrogens (synergism or antagonism). The combined effect of equimolar combinations of the three mixtures was also considered with the nonlinear isoboles method. The main effect parameters and interaction effect parameters in the linear interaction models of the three mixtures were different from zero. 
The results showed that any two antiandrogens in their binary mixtures tended to exert equal antiandrogenic activity in the linear concentration ranges. The antiandrogenicity of the binary mixture and the concentration of nPrP were fitted to a sigmoidal model if the concentrations of the other antiandrogens (iPrP, MeP, and PeP) in the mixture were lower than the AR saturation concentrations. Some concave isoboles above the additivity line appeared in all three mixtures. There were some synergistic effects of the binary mixture of nPrP and MeP at low concentrations in the linear concentration ranges. Interestingly, when the antiandrogen concentrations approached saturation, the interactions between chemicals were antagonistic for all three mixtures tested. When the toxicity of the three mixtures was assessed using nonlinear isoboles, only antagonism was observed for equimolar combinations of nPrP and iPrP as the concentrations were increased from the no-observed-effect concentration (NOEC) to the effective concentration of 80%. In addition, the interactions changed from synergistic to antagonistic as effective concentrations increased in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP. The combined effects of the three binary antiandrogen mixtures in the linear ranges were successfully evaluated by curve fitting and isoboles. The combined effects of specific binary mixtures varied depending on the concentrations of the chemicals in the mixtures. At low concentrations in the linear concentration ranges, there was a synergistic interaction in the binary mixture of nPrP and MeP. The interaction tended to be antagonistic as the antiandrogens approached saturation concentrations in mixtures of nPrP with each of the other three antiandrogens. The synergistic interaction was also found in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP, at low concentrations with the other method of nonlinear isoboles.
The mixture activities of binary antiandrogens had a tendency towards antagonism at high concentrations and synergism at low concentrations.
NASA Astrophysics Data System (ADS)
Arshadi, Amir
Image-based simulation of complex materials is a very important tool for understanding their mechanical behavior and an effective tool for the successful design of composite materials. In this thesis an image-based multi-scale finite element approach is developed to predict the mechanical properties of asphalt mixtures. In this approach the "up-scaling" and homogenization of each scale to the next is critically designed to improve accuracy. In addition to this multi-scale efficiency, this study introduces an approach for consideration of particle contacts at each of the scales in which mineral particles exist. One of the most important pavement distresses, which seriously affects pavement performance, is fatigue cracking. As this cracking generally takes place in the binder phase of the asphalt mixture, the binder fatigue behavior is assumed to be one of the main factors influencing the overall pavement fatigue performance. It is also known that aggregate gradation, mixture volumetric properties, and filler type and concentration can affect damage initiation and progression in asphalt mixtures. This study was conducted to develop a tool to characterize the damage properties of asphalt mixtures at all scales. In the present study the viscoelastic continuum damage model is implemented in the well-known finite element software ABAQUS via the user material subroutine (UMAT) in order to simulate the state of damage in the binder phase under repeated uniaxial sinusoidal loading. The inputs are based on experimentally derived measurements of the binder properties. For the mastic and mortar scales, artificial 2-dimensional images were generated and used to characterize the properties of those scales. Finally, 2D scanned images of asphalt mixtures are used to study the asphalt mixture fatigue behavior under loading.
In order to validate the proposed model, the experimental test results and the simulation results were compared. Indirect tensile fatigue tests were conducted on asphalt mixture samples. A comparison between experimental results and the results from simulation shows that the model developed in this study is capable of predicting the effect of asphalt binder properties and aggregate micro-structure on mechanical behavior of asphalt concrete under loading.
Neale, Peta A; Leusch, Frederic D L; Escher, Beate I
2017-04-01
Pharmaceuticals and antibiotics co-occur in the aquatic environment but mixture studies to date have mainly focused on pharmaceuticals alone or antibiotics alone, although differences in mode of action may lead to different effects in mixtures. In this study we used the Bacterial Luminescence Toxicity Screen (BLT-Screen) after acute (0.5 h) and chronic (16 h) exposure to evaluate how non-specifically acting pharmaceuticals and specifically acting antibiotics act together in mixtures. Three models were applied to predict mixture toxicity including concentration addition, independent action and the two-step prediction (TSP) model, which groups similarly acting chemicals together using concentration addition, followed by independent action to combine the two groups. All non-antibiotic pharmaceuticals had similar EC50 values at both 0.5 and 16 h, indicating together with a QSAR (Quantitative Structure-Activity Relationship) analysis that they act as baseline toxicants. In contrast, the antibiotics' EC50 values decreased by up to three orders of magnitude after 16 h, which can be explained by their specific effect on bacteria. Equipotent mixtures of non-antibiotic pharmaceuticals only, antibiotics only and both non-antibiotic pharmaceuticals and antibiotics were prepared based on the single chemical results. The mixture toxicity models were all in close agreement with the experimental results, with predicted EC50 values within a factor of two of the experimental results. This suggests that concentration addition can be applied to bacterial assays to model the mixture effects of environmental samples containing both specifically and non-specifically acting chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.
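The two classical reference models named above have simple closed forms; a minimal sketch (helper names are ours) is:

```python
import numpy as np

def concentration_addition(fractions, ec_values):
    """EC_x of a mixture under concentration addition: components are
    treated as dilutions of one another, so reciprocal potencies add.
    fractions: proportion p_i of each component; ec_values: individual EC_x."""
    return 1.0 / float(np.sum(np.asarray(fractions) / np.asarray(ec_values)))

def independent_action(effects):
    """Mixture effect (fractions in 0..1) under independent action:
    components are assumed to act through independent mechanisms, so
    survival probabilities multiply."""
    return 1.0 - float(np.prod(1.0 - np.asarray(effects)))
```

The two-step prediction (TSP) model composes these: concentration addition is applied within each group of similarly acting chemicals, and the resulting group-level effects are then combined with `independent_action`.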
Research on Bayes matting algorithm based on Gaussian mixture model
NASA Astrophysics Data System (ADS)
Quan, Wei; Jiang, Shan; Han, Cheng; Zhang, Chao; Jiang, Zhengang
2015-12-01
The digital matting problem is a classical problem of imaging. It aims at separating non-rectangular foreground objects from a background image and compositing them with a new background image. Accurate matting determines the quality of the composited image. A Bayesian matting algorithm based on a Gaussian mixture model is proposed to solve this matting problem. First, the traditional Bayesian framework is improved by introducing a Gaussian mixture model. Then, a weighting factor is added in order to suppress noise in the composited images. Finally, the result is further improved by adjusting the user's input. This algorithm is applied to matting tasks on classical images, and the results are compared to the traditional Bayesian method. It is shown that our algorithm performs better on details such as hair and eliminates noise well. It is also very effective for images in which the objects of interest have intricate boundaries.
PLUME-MoM 1.0: a new 1-D model of volcanic plumes based on the method of moments
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.
2015-05-01
In this paper a new mathematical model for volcanic plumes, named PLUME-MoM, is presented. The model describes the steady-state 1-D dynamics of the plume in a 3-D coordinate system, accounting for continuous variability in particle distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes ranging from a few microns up to several centimeters and more. Proper description of such a multiparticle nature is crucial when quantifying changes in grain-size distribution along the plume and, therefore, for better characterization of source conditions of ash dispersal models. The new model is based on the method of moments, which allows description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of properties of the continuous size-distribution of the particles. This is achieved by formulation of fundamental transport equations for the multiparticle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number, as well as of the mass distribution expressed in terms of the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on the discretization of the mixture in N discrete phases, shows that the new model allows the same results to be obtained with a significantly lower computational cost (particularly when a large number of discrete phases is adopted). Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables investigation of the response of four key output variables (mean and standard deviation (SD) of the grain-size distribution at the top of the plume, plume height and amount of mass lost by the plume during the ascent) to changes in the main input parameters (mean and SD) characterizing the pyroclastic mixture at the base of the plume.
Results show that, for the range of parameters investigated, the grain-size distribution at the top of the plume is remarkably similar to that at the base and that the plume height is only weakly affected by the parameters of the grain distribution.
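The moment bookkeeping that replaces the N discrete phases can be illustrated in a few lines: instead of transporting every size bin, only low-order moments of the grain-size distribution are carried, and the mean and SD reported above are recovered from them. The grid and distribution below are illustrative assumptions:

```python
import numpy as np

def distribution_moment(phi, weights, order):
    """k-th raw moment of a discretized grain-size distribution.
    phi: particle sizes on the Krumbein log scale; weights: mass fractions."""
    w = np.asarray(weights, dtype=float)
    w = w / np.sum(w)
    return float(np.sum(w * np.asarray(phi, dtype=float) ** order))

# Illustrative grain-size distribution on the phi (Krumbein) scale.
phi = np.linspace(-4.0, 8.0, 25)
w = np.exp(-0.5 * ((phi - 2.0) / 1.5) ** 2)

# The mean and SD of the distribution follow from the first two moments,
# which is all a moments-based model needs to transport.
m1 = distribution_moment(phi, w, 1)
m2 = distribution_moment(phi, w, 2)
sd = np.sqrt(m2 - m1 ** 2)
```

Transporting a handful of moments rather than N bins is what gives the method-of-moments formulation its lower computational cost.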
Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teng, S.; Tebby, C.
Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where the concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic (BK/TD) models to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.
Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.
Zhang, Jiachao; Hirakawa, Keigo
2017-04-01
This paper describes a study aimed at comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove denoising artifacts without affecting image details such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
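The heavier tail of a Poisson mixture is easy to see from its moments: a single Poisson has variance equal to its mean, while mixing over rates adds the variance of the rates, producing overdispersion. A small sketch (our own illustration of the distribution family, not the paper's fitting procedure):

```python
import numpy as np

def poisson_mixture_moments(weights, rates):
    """Mean and variance of a mixture of Poisson distributions.
    Variance = mean + Var(lambda), so any non-degenerate mixture is
    overdispersed relative to a single Poisson with the same mean."""
    w = np.asarray(weights, dtype=float)
    lam = np.asarray(rates, dtype=float)
    mean = float(np.sum(w * lam))
    var = mean + float(np.sum(w * (lam - mean) ** 2))
    return mean, var

def sample_poisson_mixture(weights, rates, size, rng):
    """Draw samples by picking a component, then a Poisson count."""
    comp = rng.choice(len(rates), size=size, p=weights)
    return rng.poisson(np.asarray(rates, dtype=float)[comp])
```

The excess of the variance over the mean is exactly the extra tail mass that the single-Poisson and Poisson-Gaussian models fail to capture in the quantile analysis.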
GROMOS polarizable charge-on-spring models for liquid urea: COS/U and COS/U2
NASA Astrophysics Data System (ADS)
Lin, Zhixiong; Bachmann, Stephan J.; van Gunsteren, Wilfred F.
2015-03-01
Two one-site polarizable urea models, COS/U and COS/U2, based on the charge-on-spring model are proposed. The models are parametrized against thermodynamic properties of urea-water mixtures in combination with the polarizable COS/G2 and COS/D2 models for liquid water, respectively. They have the same functional form of the inter-atomic interaction function and are based on the same parameter calibration procedure and type of experimental data as used to develop the GROMOS biomolecular force field. Thermodynamic, dielectric, and dynamic properties of urea-water mixtures simulated using the polarizable models are closer to experimental data than those obtained using the non-polarizable models. The COS/U and COS/U2 models may be used in biomolecular simulations of protein denaturation.
ERIC Educational Resources Information Center
Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.
2008-01-01
Model-based cluster analysis is a clustering procedure for investigating population heterogeneity using finite mixtures of multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models, using the Bayesian information criterion to compare multiple models and identify the…
The effect of air entrapment on the performance of squeeze film dampers: Experiments and analysis
NASA Astrophysics Data System (ADS)
Diaz Briceno, Sergio Enrique
Squeeze film dampers (SFDs) are an effective means to introduce the required damping in rotor-bearing systems. They are a standard application in jet engines and are commonly used in industrial compressors. Yet, a lack of understanding of their operation has confined the design of SFDs to a costly trial-and-error process based on prior experience. The main factor limiting the success of analytical models for the prediction of SFDs' performance lies in the modeling of the dynamic film rupture. Usually, the cavitation models developed for journal bearings are applied to SFDs. Yet, the characteristic motion of the SFD results in the entrapment of air into the oil film, thus producing a bubbly mixture that cannot be represented by these models. In this work, an extensive experimental study establishes qualitatively and---for the first time---quantitatively the differences between operation with vapor cavitation and with air entrainment. The experiments show that most operating conditions lead to air entrainment and demonstrate the paramount effect it has on the performance of SFDs, evidencing the limitations of currently available models. Further experiments address the operation of SFDs with controlled bubbly mixtures. These experiments bolster the possibility of modeling air entrapment by representing the lubricant as a homogeneous mixture of air and oil and provide a reliable database for benchmarking such a model. An analytical model is developed based on a homogeneous mixture assumption, with the bubbles described by the Rayleigh-Plesset equation. Good agreement is obtained between this model and the measurements performed in the SFD operating with controlled mixtures. A complementary analytical model is devised to estimate the amount of air entrained from the balance of axial flows in the film.
A combination of the analytical models for prediction of the air volume fraction and of the hydrodynamic pressures renders promising results for prediction of the performance of SFDs with freely entrained air. The results of this work are of immediate engineering applicability. Furthermore, they represent a firm step toward advancing the understanding of the effects of air entrapment on the performance of SFDs.
Application of Biologically-Based Lumping To Investigate the ...
People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. However, investigators have often considered complex mixtures as one lumped entity. Valuable information can be obtained from these experiments, though this simplification provides little insight into the impact of a mixture's chemical composition on toxicologically-relevant metabolic interactions that may occur among its constituents. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically-based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate performance of our PBPK model. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course kinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for the 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 non-target chemicals. Application of this biologic
Trajectories of Heroin Addiction: Growth Mixture Modeling Results Based on a 33-Year Follow-Up Study
ERIC Educational Resources Information Center
Hser, Yih-Ing; Huang, David; Chou, Chih-Ping; Anglin, M. Douglas
2007-01-01
This study investigates trajectories of heroin use and subsequent consequences in a sample of 471 male heroin addicts who were admitted to the California Civil Addict Program in 1964-1965 and followed over 33 years. Applying a two-part growth mixture modeling strategy to heroin use level during the first 16 years of the addiction careers since…
Integral equation model for warm and hot dense mixtures.
Starrett, C E; Saumon, D; Daligault, J; Hamel, S
2014-09-01
In a previous work [C. E. Starrett and D. Saumon, Phys. Rev. E 87, 013104 (2013)] a model for the calculation of electronic and ionic structures of warm and hot dense matter was described and validated. In that model the electronic structure of one atom in a plasma is determined using a density-functional-theory-based average-atom (AA) model and the ionic structure is determined by coupling the AA model to integral equations governing the fluid structure. That model was for plasmas with one nuclear species only. Here we extend it to treat plasmas with many nuclear species, i.e., mixtures, and apply it to a carbon-hydrogen mixture relevant to inertial confinement fusion experiments. Comparison of the predicted electronic and ionic structures with orbital-free and Kohn-Sham molecular dynamics simulations reveals excellent agreement wherever chemical bonding is not significant.
Lamont, Andrea E.; Vermunt, Jeroen K.; Van Horn, M. Lee
2016-01-01
Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we test the effects of violating an implicit assumption often made in these models, i.e., that independent variables in the model are not directly related to latent classes. Results indicated that the major risk of failing to model the relationship between predictor and latent class was an increase in the probability of selecting additional latent classes and biased class proportions. Additionally, this study tests whether regression mixture models can detect a piecewise relationship between a predictor and outcome. Results suggest that these models are able to detect piecewise relations, but only when the relationship between the latent class and the predictor is included in model estimation. We illustrate the implications of making this assumption through a re-analysis of applied data examining heterogeneity in the effects of family resources on academic achievement. We compare previous results (which assumed no relation between independent variables and latent class) to the model where this assumption is lifted. Implications and analytic suggestions for conducting regression mixture analyses based on these findings are noted. PMID:26881956
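A mixture of regressions of this kind is typically estimated by EM, alternating class responsibilities with weighted least squares per class. The sketch below (simulated data with two latent classes of different slope; all values illustrative) shows the basic loop. Note that class membership here is unrelated to the predictor, which is exactly the implicit assumption the study probes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two latent classes with different regression slopes (simulated).
n = 1000
z = rng.random(n) < 0.5
x = rng.normal(size=n)
y = np.where(z, 1.0 + 2.0 * x, 1.0 - 1.0 * x) + rng.normal(scale=0.3, size=n)

# EM for a two-component mixture of linear regressions.
X = np.column_stack([np.ones(n), x])
w = np.full(2, 0.5)                          # class proportions
beta = np.array([[0.0, 1.5], [0.0, -0.5]])   # [intercept, slope] per class
sigma = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: responsibilities from the Gaussian regression densities.
    resid = y[:, None] - X @ beta.T
    dens = w * np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per class.
    for k in range(2):
        XtW = X.T * r[:, k]
        beta[k] = np.linalg.solve(XtW @ X, XtW @ y)
        sigma[k] = np.sqrt((r[:, k] * (y - X @ beta[k]) ** 2).sum() / r[:, k].sum())
    w = r.mean(axis=0)

slopes = sorted(beta[:, 1])
print(slopes)  # recovered slopes, one per latent class
```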
Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models.
Teng, S; Tebby, C; Barcellini-Couget, S; De Sousa, G; Brochot, C; Rahmani, R; Pery, A R R
2016-08-15
Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic (BK/TD) models to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Copyright © 2016 Elsevier Inc. All rights reserved.
On an interface of the online system for a stochastic analysis of the varied information flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorshenin, Andrey K.; MIREA, MGUPI; Kuzmin, Victor Yu.
The article describes a possible approach to the construction of an interface for an online asynchronous system that allows researchers to analyse varied information flows. The implemented stochastic methods are based on mixture models and the method of moving separation of mixtures. The general ideas of the system's functionality are demonstrated with an example involving some moments of a finite normal mixture.
Determining of migraine prognosis using latent growth mixture models.
Tasdelen, Bahar; Ozge, Aynur; Kaleagasi, Hakan; Erdogan, Semra; Mengi, Tufan
2011-04-01
This paper presents a retrospective study to classify patients into treatment subtypes according to baseline and longitudinally observed values, considering heterogeneity in migraine prognosis. In classical prospective clinical studies, participants are classified with respect to baseline status and followed within a certain time period. However, the latent growth mixture model is the most suitable method, since it considers population heterogeneity and is not affected by drop-outs if they are missing at random. Hence, we planned this comprehensive study to identify prognostic factors in migraine. The study is based on 10 years of computer-based follow-up data from the Mersin University Headache Outpatient Department. The developmental trajectories within subgroups were described separately for the severity, frequency, and duration of headache, and the probabilities of each subgroup were estimated using latent growth mixture models. SAS PROC TRAJ procedures, a semiparametric and group-based mixture modeling approach, were applied to define the developmental trajectories. While the three-group model for the severity (mild, moderate, severe) and frequency (low, medium, high) of headache appeared to be appropriate, the four-group model for the duration (low, medium, high, extremely high) was more suitable. The severity of headache increased in patients with nausea, vomiting, photophobia and phonophobia. The frequency of headache was especially related to increasing age and unilateral pain. Nausea and photophobia were also related to headache duration. Nausea, vomiting and photophobia were the most significant factors for identifying developmental trajectories. The remission time was not the same for the severity, frequency, and duration of headache.
Assessing the external validity of algorithms to estimate EQ-5D-3L from the WOMAC.
Kiadaliri, Aliasghar A; Englund, Martin
2016-10-04
The use of mapping algorithms has been suggested as a solution to predict health utilities when no preference-based measure is included in a study. However, the validity and predictive performance of these algorithms are highly variable, and hence assessing their accuracy and validity before using them in a new setting is important. The aim of the current study was to assess the predictive accuracy of three mapping algorithms for estimating the EQ-5D-3L from the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) among Swedish people with knee disorders. Two of these algorithms were developed using ordinary least squares (OLS) models and one was developed using a mixture model. Data from 1078 subjects, mean (SD) age 69.4 (7.2) years, with frequent knee pain and/or knee osteoarthritis from the Malmö Osteoarthritis study in Sweden were used. The algorithms' performance was assessed using mean error, mean absolute error, and root mean squared error. Two types of prediction were estimated for the mixture model: weighted average (WA) and conditional on estimated component (CEC). The overall mean was overpredicted by one OLS model and underpredicted by the two other algorithms (P < 0.001). All predictions but the CEC predictions of the mixture model had a narrower range than the observed scores (22 to 90%). All algorithms suffered from overprediction for severe health states and underprediction for mild health states, to a lesser extent for the mixture model. While the mixture model outperformed the OLS models at the extremes of the EQ-5D-3L distribution, it underperformed around the center of the distribution. While the algorithm based on the mixture model reflected the distribution of the EQ-5D-3L data more accurately than the OLS models, all algorithms suffered from systematic bias. This calls for caution in applying these mapping algorithms in a new setting, particularly in samples with milder knee problems than the original sample.
Assessing the impact of the choice of these algorithms on cost-effectiveness studies through sensitivity analysis is recommended.
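The performance criteria named in the abstract are straightforward to compute once observed and mapped utilities are paired. A minimal sketch (the utility values are hypothetical, not the study data):

```python
import numpy as np

def validation_metrics(observed, predicted):
    """Mean error (signed bias), mean absolute error, and RMSE."""
    err = np.asarray(predicted) - np.asarray(observed)
    return {
        "ME": err.mean(),              # sign shows over-/under-prediction
        "MAE": np.abs(err).mean(),
        "RMSE": np.sqrt((err ** 2).mean()),
    }

obs = np.array([0.20, 0.55, 0.71, 0.80, 1.00])    # observed EQ-5D-3L utilities
pred = np.array([0.30, 0.50, 0.69, 0.77, 0.88])   # mapped from WOMAC (hypothetical)
print(validation_metrics(obs, pred))
```

A negative ME with a larger error at the 1.00 entry reproduces, in miniature, the pattern the study reports: underprediction concentrated at the mild end of the scale.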
Suwannarangsee, Surisa; Bunterngsook, Benjarat; Arnthong, Jantima; Paemanee, Atchara; Thamchaipenet, Arinthip; Eurwilaichitr, Lily; Laosiripojana, Navadol; Champreda, Verawat
2012-09-01
A synergistic enzyme system for the hydrolysis of alkali-pretreated rice straw was optimised based on the synergy of crude fungal enzyme extracts with a commercial cellulase (Celluclast™). Among 13 enzyme extracts, the enzyme preparation from Aspergillus aculeatus BCC 199 exhibited the highest level of synergy with Celluclast™. This synergy was based on the complementary cellulolytic and hemicellulolytic activities of the BCC 199 enzyme extract. A mixture design was used to optimise the ternary enzyme complex based on the synergistic enzyme mixture with Bacillus subtilis expansin. Using the full cubic model, the optimal formulation of the enzyme mixture was predicted to be Celluclast™:BCC 199:expansin = 41.4:37.0:21.6, which produced 769 mg reducing sugar/g biomass using 2.82 FPU/g of enzymes. This work demonstrated the use of a systematic approach for the design and optimisation of a synergistic enzyme mixture of fungal enzymes and expansin for lignocellulose degradation. Copyright © 2012 Elsevier Ltd. All rights reserved.
Modern Methods for Modeling Change in Obesity Research in Nursing.
Sereika, Susan M; Zheng, Yaguang; Hu, Lu; Burke, Lora E
2017-08-01
Persons receiving treatment for weight loss often demonstrate heterogeneity in lifestyle behaviors and health outcomes over time. Traditional repeated measures approaches focus on the estimation and testing of an average temporal pattern, ignoring the interindividual variability about the trajectory. An alternate person-centered approach, group-based trajectory modeling, can be used to identify distinct latent classes of individuals following similar trajectories of behavior or outcome change as a function of age or time and can be expanded to include time-invariant and time-dependent covariates and outcomes. Another latent class method, growth mixture modeling, builds on group-based trajectory modeling to investigate heterogeneity within the distinct trajectory classes. In this applied methodologic study, group-based trajectory modeling for analyzing changes in behaviors or outcomes is described and contrasted with growth mixture modeling. An illustration of group-based trajectory modeling is provided using calorie intake data from a single-group, single-center prospective study for weight loss in adults who are either overweight or obese.
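The two-step flavor of group-based trajectory analysis can be approximated by summarizing each person's series with growth-curve coefficients and then clustering those coefficients; dedicated software instead fits the classes and curves jointly. A sketch on simulated calorie-intake trajectories (all numbers hypothetical, not the cited study's data):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
t = np.arange(6)  # six assessment points

# Two hypothetical latent classes of calorie-intake change over time.
n_per = 60
decline = 2000 - 80 * t                 # steady decrease
rebound = 2000 - 150 * t + 20 * t**2    # early drop, later rebound
Y = np.vstack([
    decline + rng.normal(scale=30, size=(n_per, t.size)),
    rebound + rng.normal(scale=30, size=(n_per, t.size)),
])

# Step 1: summarize each person's series by quadratic growth-curve coefficients.
B = np.polynomial.polynomial.polyfit(t, Y.T, deg=2).T   # shape (120, 3)

# Step 2: cluster the coefficients; in practice, BIC over a range of
# n_components guides the number of trajectory classes.
gm = GaussianMixture(n_components=2, random_state=0).fit(B)
labels = gm.predict(B)
print(np.bincount(labels))
```

Growth mixture modeling, as contrasted in the abstract, would additionally allow within-class variation around each class trajectory rather than treating class members as sharing one curve.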
Determination of Failure Point of Asphalt-Mixture Fatigue-Test Results Using the Flow Number Method
NASA Astrophysics Data System (ADS)
Wulan, C. E. P.; Setyawan, A.; Pramesti, F. P.
2018-03-01
The failure point of the results of fatigue tests of asphalt mixtures performed in controlled-stress mode is difficult to determine. However, several methods from empirical studies are available to solve this problem. The objectives of this study are to determine the fatigue failure point of the results of indirect tensile fatigue tests using the Flow Number Method and to determine the best Flow Number model for the asphalt mixtures tested. To achieve these goals, the best of three asphalt mixtures was first selected based on their Marshall properties. Next, the Indirect Tensile Fatigue Test was performed on the chosen asphalt mixture. The stress-controlled fatigue tests were conducted at a temperature of 20°C and a frequency of 10 Hz, with the application of three loads: 500, 600, and 700 kPa. The last step was the application of the Flow Number methods, namely the Three-Stages Model, FNest Model, Francken Model, and Stepwise Method, to the results of the fatigue tests to determine the failure point of the specimen. The chosen asphalt mixture was an EVA (ethylene vinyl acetate) polymer-modified asphalt mixture with 6.5% OBC (Optimum Bitumen Content). Furthermore, the results of this study show that the failure points of the EVA-modified asphalt mixture under loads of 500, 600, and 700 kPa are 6621, 4841, and 611 for the Three-Stages Model; 4271, 3266, and 537 for the FNest Model; 3401, 2431, and 421 for the Francken Model; and 6901, 6841, and 1291 for the Stepwise Method, respectively. These results show that the larger the load, the smaller the number of cycles to failure. However, the best FN results are given by the Three-Stages Model and the Stepwise Method, which exhibit extreme increases after the constant development of accumulated strain.
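Of the four methods, the Francken model has a closed form, ε(N) = A·N^B + C·(e^{D·N} − 1), and the flow number corresponds to the cycle at which the curvature of the fitted curve turns positive (onset of tertiary flow). A sketch with synthetic strain data and hypothetical parameters (not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def francken(N, A, B, C, D):
    """Francken model: power-law primary creep plus exponential tertiary flow."""
    return A * N**B + C * (np.exp(D * N) - 1.0)

# Synthetic accumulated-strain curve (hypothetical parameters, not test data).
N = np.linspace(1, 4000, 400)
true_p = (120.0, 0.35, 5.0, 1.5e-3)
noise = 1 + np.random.default_rng(3).normal(scale=0.01, size=N.size)
strain = francken(N, *true_p) * noise

p, _ = curve_fit(francken, N, strain, p0=(100.0, 0.3, 3.0, 1.2e-3), maxfev=20000)

# Flow number: the cycle where the fitted curve's second derivative
# changes sign from negative to positive.
fit = francken(N, *p)
d2 = np.gradient(np.gradient(fit, N), N)
flow_number = N[np.argmax(d2 > 0)]
print(p, flow_number)
```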
Catalytic effects of inorganic acids on the decomposition of ammonium nitrate.
Sun, Jinhua; Sun, Zhanhui; Wang, Qingsong; Ding, Hui; Wang, Tong; Jiang, Chuansheng
2005-12-09
In order to evaluate the catalytic effects of inorganic acids on the decomposition of ammonium nitrate (AN), the heat releases of decomposition or reaction of pure AN and its mixtures with inorganic acids were analyzed using a C80 heat flux calorimeter. Through the experiments, the different reaction mechanisms of AN and its mixtures were analyzed. The chemical reaction kinetic parameters, such as reaction order, activation energy and frequency factor, were calculated from the C80 experimental results for the different samples. Based on these parameters and the thermal runaway models (the Semenov and Frank-Kamenetskii models), the self-accelerating decomposition temperatures (SADTs) of AN and its mixtures were calculated and compared. The results show that the mixtures of AN with acid are less stable than pure AN: the AN decomposition reaction is catalyzed by acid, and the calculated SADTs of the AN-acid mixtures are much lower than that of pure AN.
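Under the Semenov model, the SADT estimate follows from the tangency condition between Arrhenius heat generation and linear heat loss. The sketch below solves that condition numerically for illustrative (hypothetical) kinetic and package parameters, not the values derived from the C80 experiments:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative (hypothetical) zero-order kinetics and package parameters.
R = 8.314        # gas constant, J/(mol K)
Ea = 1.0e5       # activation energy, J/mol
A = 1.0e10       # frequency factor, 1/s
m, dH = 50.0, 1.5e6   # package mass (kg) and decomposition heat (J/kg)
US = 5.0         # overall heat-loss coefficient times surface area, W/K

# Semenov tangency: the Arrhenius heat-generation curve just touches the
# linear heat-loss line at the critical sample temperature T.
def tangency(T):
    return m * dH * A * (Ea / (R * T**2)) * np.exp(-Ea / (R * T)) - US

T_crit = brentq(tangency, 250.0, 450.0)
# Critical ambient temperature (the SADT estimate), from T - Ta = R*T^2/Ea.
sadt = T_crit - R * T_crit**2 / Ea
print(T_crit, sadt)
```

A catalyzed mixture with lower Ea and a compensating frequency factor would shift the root of the tangency condition downward, which is the SADT depression the abstract reports.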
Chemical mixtures in potable water in the U.S.
Ryker, Sarah J.
2014-01-01
In recent years, regulators have devoted increasing attention to health risks from exposure to multiple chemicals. In 1996, the US Congress directed the US Environmental Protection Agency (EPA) to study mixtures of chemicals in drinking water, with a particular focus on potential interactions affecting chemicals' joint toxicity. The task is complicated by the number of possible mixtures in drinking water and lack of toxicological data for combinations of chemicals. As one step toward risk assessment and regulation of mixtures, the EPA and the Agency for Toxic Substances and Disease Registry (ATSDR) have proposed to estimate mixtures' toxicity based on the interactions of individual component chemicals. This approach permits the use of existing toxicological data on individual chemicals, but still requires additional information on interactions between chemicals and environmental data on the public's exposure to combinations of chemicals. Large compilations of water-quality data have recently become available from federal and state agencies. This chapter demonstrates the use of these environmental data, in combination with the available toxicological data, to explore scenarios for mixture toxicity and develop priorities for future research and regulation. Occurrence data on binary and ternary mixtures of arsenic, cadmium, and manganese are used to parameterize the EPA and ATSDR models for each drinking water source in the dataset. The models' outputs are then mapped at county scale to illustrate the implications of the proposed models for risk assessment and rulemaking. For example, according to the EPA's interaction model, the levels of arsenic and cadmium found in US groundwater are unlikely to have synergistic cardiovascular effects in most areas of the country, but the same mixture's potential for synergistic neurological effects merits further study. 
Similar analysis could, in future, be used to explore the implications of alternative risk models for the toxicity and interaction of complex mixtures, and to identify the communities with the highest and lowest expected value for regulation of chemical mixtures.
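The component-based screening the EPA and ATSDR propose starts from hazard quotients for the individual chemicals, which are summed into a hazard index before any interaction adjustment is applied. A minimal sketch with hypothetical concentrations and reference values (not the study's occurrence data):

```python
# Hazard-index screening for a binary metal mixture (all values hypothetical).
# HI = sum(exposure_i / reference_value_i); HI > 1 flags further study.

exposure = {"arsenic": 0.004, "cadmium": 0.002}    # mg/L in drinking water
reference = {"arsenic": 0.010, "cadmium": 0.005}   # hypothetical reference levels

hazard_quotients = {c: exposure[c] / reference[c] for c in exposure}
hazard_index = sum(hazard_quotients.values())
print(hazard_quotients, hazard_index)  # HI = 0.4 + 0.4 = 0.8
```

The interaction-based refinement discussed in the chapter then scales each quotient by a weight-of-evidence factor for each pairwise interaction, which is where the additional toxicological data come in.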
Asinari, Pietro
2009-11-01
A finite difference lattice Boltzmann scheme for homogeneous mixture modeling, which recovers the Maxwell-Stefan diffusion model in the continuum limit without the restriction of the mixture-averaged diffusion approximation, was recently proposed [P. Asinari, Phys. Rev. E 77, 056706 (2008)]. The theoretical basis is the Bhatnagar-Gross-Krook-type kinetic model for gas mixtures [P. Andries, K. Aoki, and B. Perthame, J. Stat. Phys. 106, 993 (2002)]. In the present paper, the recovered macroscopic equations in the continuum limit are systematically investigated by varying the ratio between the characteristic diffusion speed and the characteristic barycentric speed. It turns out that the diffusion speed must be at least one order of magnitude (in terms of Knudsen number) smaller than the barycentric speed in order to recover the Navier-Stokes equations for mixtures in the incompressible limit. Some further numerical tests are also reported. In particular, (1) the solvent and dilute test cases are considered, because they are limiting cases in which the Maxwell-Stefan model automatically reduces to the Fickian case. Moreover, (2) some tests based on the Stefan diffusion tube are reported to prove the full capability of the proposed scheme in solving Maxwell-Stefan diffusion problems. The proposed scheme agrees well with the expected theoretical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlou, A. T.; Betzler, B. R.; Burke, T. P.
Uncertainties in the composition and fabrication of fuel compacts for the Fort St. Vrain (FSV) high temperature gas reactor have been studied by performing eigenvalue sensitivity studies that represent the key uncertainties for the FSV neutronic analysis. The uncertainties for the TRISO fuel kernels were addressed by developing a suite of models for an 'average' FSV fuel compact that models the fuel as (1) a mixture of two different TRISO fuel particles representing fissile and fertile kernels, (2) a mixture of four different TRISO fuel particles representing small and large fissile kernels and small and large fertile kernels, and (3) a stochastic mixture of the four types of fuel particles where every kernel has its diameter sampled from a continuous probability density function. All of the discrete diameter and continuous diameter fuel models were constrained to have the same fuel loadings and packing fractions. For the non-stochastic discrete diameter cases, the MCNP compact model arranged the TRISO fuel particles on a hexagonal honeycomb lattice. This lattice-based fuel compact was compared to a stochastic compact where the locations (and kernel diameters for the continuous diameter cases) of the fuel particles were randomly sampled. Partial core configurations were modeled by stacking compacts into fuel columns containing graphite. The differences in eigenvalues between the lattice-based and stochastic models were small, but the runtime of the lattice-based fuel model was roughly 20 times shorter than that of the stochastic-based fuel model. (authors)
An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.
Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei
2013-05-01
Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
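The appeal of a DPMM here is that the number of clusters need not be fixed in advance. A truncated Dirichlet-process mixture behaves the same way and is available off the shelf; the sketch below (synthetic two-dimensional trajectory features, not the paper's data or its incremental algorithm) shows the stick-breaking prior pruning unused components:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(4)

# Hypothetical 2-D trajectory features (e.g. mean position per track),
# drawn from three well-separated groups.
centers = [(-5.0, -5.0), (0.0, 4.0), (6.0, -2.0)]
X = np.vstack([rng.normal(c, 0.5, size=(80, 2)) for c in centers])

# Truncated Dirichlet-process mixture: the cap is 10 components, but the
# stick-breaking prior drives the weights of unsupported components to ~0.
dpmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(X)

active = int(np.sum(dpmm.weights_ > 0.05))   # effective number of clusters
print(active)
```

The paper's incremental variant additionally admits new clusters online as novel trajectories arrive, without refitting on the accumulated data.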
Aircraft target detection algorithm based on high resolution spaceborne SAR imagery
NASA Astrophysics Data System (ADS)
Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing
2018-03-01
In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model with a Markov random field (MRF): the Gamma mixture model provides the initial classification result, which is then optimized using pixel spatial correlation through the MRF technique. Additionally, morphology methods are employed to extract the airport region of interest (ROI), in which suspected aircraft target samples are screened to reduce false alarms and increase detection performance. Finally, the paper presents aircraft target detection results, which have been verified by simulation tests.
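A Gamma mixture over SAR amplitudes can be fitted with a simple EM loop; the sketch below uses moment-matched M-steps for the shape and scale parameters, an approximation to the full maximum-likelihood update (synthetic data with hypothetical dark/bright components, not the paper's imagery):

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(5)

# Synthetic SAR-like amplitudes from two gamma components (hypothetical):
# a dark class (e.g. runway) and a bright class (e.g. buildings).
x = np.concatenate([
    gamma.rvs(a=2.0, scale=1.0, size=1500, random_state=rng),
    gamma.rvs(a=8.0, scale=2.0, size=1500, random_state=rng),
])

# EM for a two-component gamma mixture with moment-matched M-steps.
w = np.array([0.5, 0.5])
shape = np.array([1.0, 5.0])
scale = np.array([1.0, 3.0])

for _ in range(200):
    dens = w * gamma.pdf(x[:, None], a=shape, scale=scale)
    r = dens / dens.sum(axis=1, keepdims=True)          # responsibilities
    for k in range(2):
        mu = np.average(x, weights=r[:, k])
        var = np.average((x - mu) ** 2, weights=r[:, k])
        shape[k], scale[k] = mu**2 / var, var / mu      # method of moments
    w = r.mean(axis=0)

print(shape, scale, w)
```

The per-pixel responsibilities from the E-step are what an MRF stage would then smooth using neighborhood labels.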
NASA Astrophysics Data System (ADS)
Portnova, N. M.; Smirnov, Yu B.
2017-11-01
A theoretical model for the calculation of heat transfer during condensation of multicomponent vapor-gas mixtures on vertical surfaces, based on film theory and the heat and mass transfer analogy, is proposed. Calculations were performed for the conditions implemented in experimental studies of heat transfer during condensation of steam-gas mixtures in the passive safety systems of PWR-type reactors of different designs. Calculated values of heat transfer coefficients were obtained for condensation of steam-air, steam-air-helium and steam-air-hydrogen mixtures at pressures of 0.2 to 0.6 MPa, and of a steam-nitrogen mixture at pressures of 0.4 to 2.6 MPa. The composition of the mixtures and the vapor-to-surface temperature difference were varied within wide limits. Tube length ranged from 0.65 to 9.79 m. The condensation of all steam-gas mixtures took place in a laminar-wave flow mode of the condensate film with turbulent free convection in the diffusion boundary layer. The heat transfer coefficients obtained by calculation using the proposed model are in good agreement with the considered experimental data for both the binary and ternary mixtures.
A physiologically-based pharmacokinetic (PBPK) model for a mixture of N-methyl carbamate pesticides was developed based on single chemical models. The model was used to compare urinary metabolite concentrations to levels from National Health and Nutrition Examination Survey (NHA...
Kinetic model of water disinfection using peracetic acid including synergistic effects.
Flores, Marina J; Brandi, Rodolfo J; Cassano, Alberto E; Labas, Marisol D
2016-01-01
The disinfection efficiencies of a commercial mixture of peracetic acid against Escherichia coli were studied in laboratory scale experiments. The joint and separate action of two disinfectant agents, hydrogen peroxide and peracetic acid, were evaluated in order to observe synergistic effects. A kinetic model for each component of the mixture and for the commercial mixture was proposed. Through simple mathematical equations, the model describes different stages of attack by disinfectants during the inactivation process. Based on the experiments and the kinetic parameters obtained, it could be established that the efficiency of hydrogen peroxide was much lower than that of peracetic acid alone. However, the contribution of hydrogen peroxide was very important in the commercial mixture. It should be noted that this improvement occurred only after peracetic acid had initiated the attack on the cell. This synergistic effect was successfully explained by the proposed scheme and was verified by experimental results. Besides providing a clearer mechanistic understanding of water disinfection, such models may improve our ability to design reactors.
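One way to express such synergy kinetically is a cross term in the inactivation rate, so the mixture kills faster than the sum of its parts. The sketch below integrates a first-order model of this form with hypothetical rate constants (not the fitted values from the study, which uses a staged-attack scheme rather than a single cross term):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative inactivation-rate constants (hypothetical, L/(mg*min)).
k_paa, k_h2o2, k_syn = 0.50, 0.02, 0.10
C_paa, C_h2o2 = 1.5, 3.0   # residual disinfectant concentrations, mg/L

def decay(t, N, paa, h2o2):
    # Hydrogen peroxide alone is weak; the cross term captures the extra
    # kill once peracetic acid has begun attacking the cell.
    rate = k_paa * paa + k_h2o2 * h2o2 + k_syn * paa * h2o2
    return [-rate * N[0]]

N0 = [1.0e6]   # initial E. coli density, CFU/mL
t_eval = np.linspace(0.0, 10.0, 50)
alone = solve_ivp(decay, (0.0, 10.0), N0, args=(C_paa, 0.0), t_eval=t_eval).y[0]
both = solve_ivp(decay, (0.0, 10.0), N0, args=(C_paa, C_h2o2), t_eval=t_eval).y[0]

# Extra log10 inactivation attributable to the mixture.
log_gain = np.log10(alone[-1]) - np.log10(both[-1])
print(log_gain)
```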
Ji, Cuicui; Jia, Yonghong; Gao, Zhihai; Wei, Huaidong; Li, Xiaosong
2017-01-01
Desert vegetation plays significant roles in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates are investigated by comparing the performance of linear and nonlinear spectral mixture models with different endmembers applied to field spectral measurements of two types of typical desert vegetation, namely, Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates; in particular, shadow endmembers cannot be neglected. (2) For both the Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. In consideration of computational complexity and accuracy requirements, the Linear Spectral Mixture Model (LSMM) could be adopted for the Nitraria shrub plots, but this will result in significant errors for the Haloxylon plots, since the nonlinear spectral mixture effects were more obvious for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Thus, for both Nitraria shrubs and Haloxylon, nonlinear spectral mixing effects between the photosynthetic/non-photosynthetic vegetation and the bare soil do exist, and their strength depends on the three-dimensional structure of the vegetation canopy. The choice between linear and nonlinear spectral mixture models therefore rests on the trade-off between computational complexity and accuracy requirements.
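The LSMM baseline mentioned in result (2) treats a pixel spectrum as a convex combination of endmember spectra. A sketch of fully constrained linear unmixing via non-negative least squares with a weighted sum-to-one row (all spectra hypothetical, not the field measurements):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra: columns are PV, NPV, bare soil, shade;
# rows are four spectral bands.
E = np.array([
    [0.05, 0.20, 0.28, 0.02],
    [0.08, 0.25, 0.33, 0.02],
    [0.45, 0.30, 0.42, 0.03],
    [0.20, 0.40, 0.55, 0.02],
])

true_f = np.array([0.30, 0.20, 0.45, 0.05])   # illustrative cover fractions
pixel = E @ true_f                             # noiseless mixed-pixel spectrum

# LSMM unmixing with non-negativity (nnls) and a sum-to-one constraint
# enforced through an appended, heavily weighted row.
w = 100.0
E_aug = np.vstack([E, w * np.ones(4)])
p_aug = np.append(pixel, w)
fractions, _ = nnls(E_aug, p_aug)
print(fractions)
```

The kernel-based nonlinear model the study favors replaces the linear combination with one computed in an implicit feature space, which is how multiple-scattering terms between canopy and soil enter.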
XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-08-01
XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.
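The conditioning feature mentioned above rests on standard Gaussian identities: fixing one coordinate of a 2D mixture yields a 1D mixture whose component moments follow the usual conditional-Gaussian formulas and whose weights are rescaled by how likely the observed value is under each component. A numpy-only sketch of that idea (not XDGMM's actual implementation):

```python
import numpy as np

def normal_pdf(x, mean, var):
    """Univariate normal density."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def condition_gmm(weights, means, covs, y0):
    """Condition a 2D Gaussian mixture on its second coordinate, y = y0.

    Returns weights, means, and variances of the resulting 1D mixture over x.
    """
    new_w, new_mu, new_var = [], [], []
    for w, mu, S in zip(weights, means, covs):
        mu_x, mu_y = mu
        s_xx, s_xy, s_yy = S[0, 0], S[0, 1], S[1, 1]
        # Standard Gaussian conditioning formulas for each component.
        new_mu.append(mu_x + s_xy / s_yy * (y0 - mu_y))
        new_var.append(s_xx - s_xy ** 2 / s_yy)
        # Reweight each component by how likely y0 is under it.
        new_w.append(w * normal_pdf(y0, mu_y, s_yy))
    new_w = np.array(new_w)
    return new_w / new_w.sum(), np.array(new_mu), np.array(new_var)

weights = [0.5, 0.5]
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
w, mu, var = condition_gmm(weights, means, covs, y0=5.0)
```

Conditioning at y0 = 5 shifts almost all the weight onto the second component, since the first places y = 5 five standard deviations from its mean.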
Romine, Jason G.; Perry, Russell W.; Johnston, Samuel V.; Fitzer, Christopher W.; Pagliughi, Stephen W.; Blake, Aaron R.
2013-01-01
Mixture models proved valuable as a means to differentiate between salmonid smolts and predators that consumed salmonid smolts. However, successful application of this method requires that telemetered fishes and their predators exhibit measurable differences in movement behavior. Our approach is flexible, allows inclusion of multiple track statistics and improves upon rule-based manual classification methods.
Prediction of U-Mo dispersion nuclear fuels with Al-Si alloy using artificial neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Susmikanti, Mike, E-mail: mike@batan.go.id; Sulistyo, Jos, E-mail: soj@batan.go.id
2014-09-30
Dispersion nuclear fuels, consisting of U-Mo particles dispersed in an Al-Si matrix, are being developed as fuel for research reactors. The equilibrium relationship for a mixture component can be expressed in the phase diagram, and it is important to determine whether a mixture component is in the equilibrium phase or in another phase. The purpose of this research was to build a model of the phase diagram, indicating whether the mixture component is in the stable or the melting condition. An artificial neural network (ANN) is a modeling tool for processes involving multivariable nonlinear relationships. The objective of the present work was to develop code, based on artificial neural network models, for the equilibrium relationship of U-Mo in an Al-Si matrix. This model can be used to predict the type of resulting mixture and whether a point lies in the equilibrium phase or in another phase region. The equilibrium data for prediction and modeling were generated from experimental data. An artificial neural network with the resilient backpropagation method was chosen to predict the dispersion of the U-Mo nuclear fuel in the Al-Si matrix. The code was built with functions in MATLAB. For the ANN simulations, the Levenberg-Marquardt method was also used for optimization. The resulting artificial neural network is able to predict whether a point lies in the equilibrium phase or in another phase region.
Huang, Yangxin; Lu, Xiaosun; Chen, Jiaqing; Liang, Juan; Zangmeister, Miriam
2017-10-27
Longitudinal and time-to-event data are often observed together. Finite mixture models are currently used to analyze nonlinear heterogeneous longitudinal data, which, by relaxing the homogeneity restriction of nonlinear mixed-effects (NLME) models, can cluster individuals into one of the pre-specified classes with class membership probabilities. This clustering may have clinical significance and be associated with clinically important time-to-event data. This article develops a joint modeling approach combining a finite mixture of NLME models for longitudinal data with a proportional hazards Cox model for time-to-event data, linked by individual latent class indicators, under a Bayesian framework. The proposed joint models and method are applied to a real AIDS clinical trial data set, followed by simulation studies to assess the performance of the proposed joint model and of a naive two-step model in which the finite mixture model and the Cox model are fitted separately.
Moser, Virginia C; Padilla, Stephanie; Simmons, Jane Ellen; Haber, Lynne T; Hertzberg, Richard C
2012-09-01
Statistical design and environmental relevance are important aspects of studies of chemical mixtures, such as pesticides. We used a dose-additivity model to test experimentally the default assumption of dose additivity for two mixtures of seven N-methylcarbamates (carbaryl, carbofuran, formetanate, methomyl, methiocarb, oxamyl, and propoxur). The best-fitting models were selected for the single-chemical dose-response data and used to develop a combined prediction model, which was then compared with the experimental mixture data. We evaluated behavioral (motor activity) and cholinesterase (ChE)-inhibitory (brain, red blood cells) outcomes at the time of peak acute effects following oral gavage in adult and preweanling (17 days old) Long-Evans male rats. The mixtures varied only in their mixing ratios. In the relative-potency mixture, the proportions of each carbamate were set at equitoxic component doses. A California environmental mixture was based on the 2005 sales of each carbamate in California. In adult rats, the relative-potency mixture showed dose additivity for red blood cell ChE inhibition and motor activity, whereas brain ChE inhibition showed a modest greater-than-additive (synergistic) response, but only at a middle dose. In rat pups, the relative-potency mixture was either dose-additive (brain ChE inhibition, motor activity) or slightly less-than-additive (red blood cell ChE inhibition). On the other hand, at both ages, the environmental mixture showed greater-than-additive responses on all three endpoints, with significant deviations from predictions at most or all doses tested. Thus, we observed different interactive properties for different mixing ratios of these chemicals. These approaches for studying pesticide mixtures can improve evaluations of potential toxicity under varying experimental conditions that may mimic human exposures.
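The default dose-additivity assumption being tested has a simple closed form: for mixing fractions f_i and component ED50s, the mixture ED50 is the harmonic combination 1 / sum(f_i / ED50_i). A sketch with illustrative numbers (not the paper's fitted estimates):

```python
def dose_additive_ed50(fractions, ed50s):
    """Mixture ED50 predicted under the default dose-additivity assumption:
    1 / sum(f_i / ED50_i) for mixing fractions f_i of each component."""
    return 1.0 / sum(f / e for f, e in zip(fractions, ed50s))

# A 50:50 blend of a more potent (ED50 = 10) and a weaker (ED50 = 30)
# carbamate; units and values are hypothetical.
ed50_mix = dose_additive_ed50([0.5, 0.5], [10.0, 30.0])  # 15.0
```

Departures of the observed mixture ED50 from this prediction are what the study labels greater-than-additive (synergistic) or less-than-additive.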
The Response of Lemna minor to Mixtures of Pesticides That Are Commonly Used in Thailand.
Tagun, Rungnapa; Boxall, Alistair B A
2018-04-01
In the field, aquatic organisms are exposed to multiple contaminants rather than to single compounds. It is therefore important to understand the toxic interactions of co-occurring substances in the environment. The aim of this study was to assess the effects on Lemna minor of individual herbicides (atrazine, 2,4-D, alachlor and paraquat) that are commonly used in Thailand, and of their mixtures. Plants were exposed to individual herbicides and binary mixtures for 7 days, and the effects on plant growth rate were assessed based on frond area measurements. Experimental observations of mixture toxicity were compared with predictions based on single-herbicide exposure data using concentration addition and independent action models. The single-compound studies showed that paraquat and alachlor were most toxic to L. minor, followed by atrazine and then 2,4-D. For the mixtures, atrazine with 2,4-D appeared to act antagonistically, whereas alachlor and paraquat showed synergism.
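The two reference models compared in such studies have standard forms: independent action combines fractional effects multiplicatively, while concentration addition sums toxic units (concentration divided by EC50), with a summed toxic unit of 1 predicting a 50% effect. A sketch with illustrative numbers, not the measured L. minor responses:

```python
def independent_action(effects):
    """Independent action for dissimilarly acting components:
    E_mix = 1 - prod(1 - E_i) over fractional effects E_i."""
    out = 1.0
    for e in effects:
        out *= (1.0 - e)
    return 1.0 - out

def toxic_units(concs, ec50s):
    """Concentration addition for similarly acting components:
    summed toxic units; TU >= 1 predicts at least a 50% effect."""
    return sum(c / e for c, e in zip(concs, ec50s))

e_mix = independent_action([0.2, 0.5])       # 0.6
tu = toxic_units([5.0, 5.0], [10.0, 10.0])   # 1.0
```

Observed mixture effects above these predictions indicate synergism (as reported for alachlor with paraquat) and effects below them indicate antagonism (atrazine with 2,4-D).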
Reduced-order modelling for high-pressure transient flow of hydrogen-natural gas mixtures
NASA Astrophysics Data System (ADS)
Agaie, Baba G.; Khan, Ilyas; Alshomrani, Ali Saleh; Alqahtani, Aisha M.
2017-05-01
In this paper, the transient flow of a hydrogen compressed-natural-gas (HCNG) mixture, also referred to as a hydrogen-natural gas mixture, in a pipeline is numerically computed using a reduced-order modelling technique. The study of transient conditions is important because pipeline flows are normally in an unsteady state due to the sudden opening and closing of control valves, yet most existing studies only analyse the flow under steady-state conditions. The mathematical model consists of a set of nonlinear conservation-form partial differential equations. The objective of this paper is to improve the accuracy of the predicted HCNG transient flow parameters using Reduced-Order Modelling (ROM). The ROM technique has been used successfully in single-gas and aerodynamic flow problems, but it has not previously been applied to gas mixtures. The study is based on the velocity change created by the operation of valves upstream and downstream of the pipeline. Results on the flow characteristics, namely the pressure, density, celerity and mass flux, are presented for variations of the mixing ratio and the valve reaction and actuation times; the computational time advantage of the ROM is also presented.
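The core of a reduced-order model is compression of the full-order solution onto a few dominant modes, commonly obtained by proper orthogonal decomposition (POD) of a snapshot matrix via the SVD. A self-contained sketch in which a synthetic traveling wave stands in for transient pipeline pressure data (the paper's specific ROM construction may differ):

```python
import numpy as np

# Snapshot matrix: each column is the spatial field at one time instant.
x = np.linspace(0.0, 1.0, 100)
times = np.linspace(0.0, 1.0, 50)
c = 0.5  # wave speed, arbitrary illustrative value
snapshots = np.array([np.sin(2 * np.pi * (x - c * t)) for t in times]).T

# POD modes are the left singular vectors; truncate to the first r.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2  # a single traveling sine is spanned exactly by two modes (sin/cos parts)
reduced = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
rel_err = np.linalg.norm(snapshots - reduced) / np.linalg.norm(snapshots)
```

For real transient data the singular values decay rather than vanish, and r is chosen to balance accuracy against computational cost, which is exactly the trade-off the abstract quantifies as the ROM's time-cost advantage.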
Yu, Liyang; Han, Qi; Niu, Xiamu; Yiu, S M; Fang, Junbin; Zhang, Ye
2016-02-01
Most existing image modification detection methods based on DCT coefficient analysis model the distribution of DCT coefficients as a mixture of a modified and an unchanged component. To separate the two components, two parameters, namely the primary quantization step, Q1, and the portion of the modified region, α, have to be estimated, and more accurate estimates of α and Q1 lead to better detection and localization results. Existing methods estimate α and Q1 in a completely blind manner, without considering the characteristics of the mixture model and the constraints to which α should conform. In this paper, we propose a more effective scheme for estimating α and Q1, based on the observations that the curves on the surface of the likelihood function corresponding to the mixture model are largely smooth, and that α can take values only in a discrete set. We conduct extensive experiments to evaluate the proposed method, and the experimental results confirm its efficacy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
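The constraint that α lives in a discrete set turns estimation into a simple scan: evaluate the mixture log-likelihood at each admissible α and keep the maximizer. A toy sketch in which Gaussians stand in for the actual DCT-coefficient component models (the real scheme also estimates Q1, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
true_alpha, n = 0.25, 4000
# Synthetic "coefficients": a fraction true_alpha from the modified
# component, the rest from the unchanged one.
data = np.where(rng.random(n) < true_alpha,
                rng.normal(3.0, 1.0, n),    # stand-in "modified" component
                rng.normal(0.0, 1.0, n))    # stand-in "unchanged" component

def normal_pdf(x, mean):
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2.0 * np.pi)

def log_likelihood(alpha):
    """Mixture log-likelihood as a function of the modified fraction."""
    return np.sum(np.log(alpha * normal_pdf(data, 3.0)
                         + (1.0 - alpha) * normal_pdf(data, 0.0)))

grid = np.arange(0.05, 1.0, 0.05)   # the discrete set alpha is allowed to take
alpha_hat = max(grid, key=log_likelihood)
```

Because the likelihood surface is smooth in α, the discrete maximizer lands at the grid point nearest the true fraction.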
An introduction to mixture item response theory models.
De Ayala, R J; Santiago, S Y
2017-02-01
Mixture item response theory (IRT) allows one to address situations that involve a mixture of latent subpopulations that are qualitatively different but within which a measurement model based on a continuous latent variable holds. In this modeling framework, one can characterize students by both their location on a continuous latent variable as well as by their latent class membership. For example, in a study of risky youth behavior this approach would make it possible to estimate an individual's propensity to engage in risky youth behavior (i.e., on a continuous scale) and to use these estimates to identify youth who might be at the greatest risk given their class membership. Mixture IRT can be used with binary response data (e.g., true/false, agree/disagree, endorsement/not endorsement, correct/incorrect, presence/absence of a behavior), Likert response scales, partial correct scoring, nominal scales, or rating scales. In the following, we present mixture IRT modeling and two examples of its use. Data needed to reproduce analyses in this article are available as supplemental online materials at http://dx.doi.org/10.1016/j.jsp.2016.01.002. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Many cases of environmental contamination result in concurrent or sequential exposure to more than one chemical. However, limitations of available resources make it unlikely that experimental toxicology will provide health risk information about all the possible mixtures to which...
Modeling biofiltration of VOC mixtures under steady-state conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baltzis, B.C.; Wojdyla, S.M.; Zarook, S.M.
1997-06-01
Treatment of air streams contaminated with binary volatile organic compound (VOC) mixtures in classical biofilters under steady-state operating conditions was described with a general mathematical model. The model accounts for potential kinetic interactions among the pollutants, effects of oxygen availability on biodegradation, and biomass diversification in the filter bed. While the effects of oxygen were always taken into account, two distinct cases were considered for the experimental model validation. The first involves kinetic interactions, but no biomass differentiation, and was used for describing data from biofiltration of benzene/toluene mixtures. The second case assumes that each pollutant is treated by a different type of biomass. Each biomass type is assumed to form separate patches of biofilm on the solid packing material, so that kinetic interference does not occur. This model was used for describing biofiltration of ethanol/butanol mixtures. Experiments were performed with classical biofilters packed with mixtures of peat moss and perlite (2:3, volume:volume). The model equations were solved with computer codes based on the fourth-order Runge-Kutta technique for the gas-phase mass balances and the method of orthogonal collocation for the concentration profiles in the biofilms. Good agreement between model predictions and experimental data was found in almost all cases. Oxygen was found to be extremely important in the case of polar VOCs (ethanol/butanol).
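The gas-phase mass balance along the bed height is an ODE system, integrated in the paper by a fourth-order Runge-Kutta scheme. A minimal sketch of that scheme applied to a hypothetical single-VOC balance with first-order removal, u dC/dh = -k C (the real model couples this to biofilm profiles via orthogonal collocation, which is omitted here):

```python
def rk4_step(f, h, y, dh):
    """One classical fourth-order Runge-Kutta step for y' = f(h, y)."""
    k1 = f(h, y)
    k2 = f(h + dh / 2, y + dh / 2 * k1)
    k3 = f(h + dh / 2, y + dh / 2 * k2)
    k4 = f(h + dh, y + dh * k3)
    return y + dh / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

u, k = 0.01, 0.05      # superficial velocity (m/s), rate constant (1/s); illustrative
f = lambda h, C: -k * C / u
C, dh = 1.0, 0.01      # normalized inlet concentration, step in bed height (m)
for i in range(100):   # march over a 1 m bed in 100 steps
    C = rk4_step(f, i * dh, C, dh)
```

For this linear test case the outlet concentration should match the exact solution exp(-k/u) to high accuracy, which is a standard sanity check before coupling in the nonlinear biofilm kinetics.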
Mechanism-based classification of PAH mixtures to predict carcinogenic potential
Tilton, Susan C.; Siddens, Lisbeth K.; Krueger, Sharon K.; ...
2015-04-22
We have previously shown that relative potency factors and DNA adduct measurements are inadequate for predicting carcinogenicity of certain polycyclic aromatic hydrocarbons (PAHs) and PAH mixtures, particularly those that function through alternate pathways or exhibit greater promotional activity compared to benzo[a]pyrene (BaP). Therefore, we developed a pathway-based approach for classification of tumor outcome after dermal exposure to PAHs/mixtures. FVB/N mice were exposed to dibenzo[def,p]chrysene (DBC), BaP or environmental PAH mixtures (Mix 1-3) following a two-stage initiation/promotion skin tumor protocol. Resulting tumor incidence could be categorized by carcinogenic potency as DBC>>BaP=Mix2=Mix3>Mix1=Control, based on statistical significance. Gene expression profiles measured in skin of mice collected 12 h post-initiation were compared to tumor outcome for identification of short-term bioactivity profiles. A Bayesian integration model was utilized to identify biological pathways predictive of PAH carcinogenic potential during initiation. Integration of probability matrices from four enriched pathways (p<0.05) for DNA damage, apoptosis, response to chemical stimulus and interferon gamma signaling resulted in the highest classification accuracy with leave-one-out cross validation. This pathway-driven approach was successfully utilized to distinguish early regulatory events during initiation prognostic for tumor outcome and provides proof-of-concept for using short-term initiation studies to classify carcinogenic potential of environmental PAH mixtures. As a result, these data further provide a 'source-to-outcome' model that could be used to predict PAH interactions during tumorigenesis and provide an example of how mode-of-action based risk assessment could be employed for environmental PAH mixtures.
Mechanism-Based Classification of PAH Mixtures to Predict Carcinogenic Potential.
Tilton, Susan C; Siddens, Lisbeth K; Krueger, Sharon K; Larkin, Andrew J; Löhr, Christiane V; Williams, David E; Baird, William M; Waters, Katrina M
2015-07-01
We have previously shown that relative potency factors and DNA adduct measurements are inadequate for predicting carcinogenicity of certain polycyclic aromatic hydrocarbons (PAHs) and PAH mixtures, particularly those that function through alternate pathways or exhibit greater promotional activity compared to benzo[a]pyrene (BaP). Therefore, we developed a pathway-based approach for classification of tumor outcome after dermal exposure to PAH/mixtures. FVB/N mice were exposed to dibenzo[def,p]chrysene (DBC), BaP, or environmental PAH mixtures (Mix 1-3) following a 2-stage initiation/promotion skin tumor protocol. Resulting tumor incidence could be categorized by carcinogenic potency as DBC > BaP = Mix2 = Mix3 > Mix1 = Control, based on statistical significance. Gene expression profiles measured in skin of mice collected 12 h post-initiation were compared with tumor outcome for identification of short-term bioactivity profiles. A Bayesian integration model was utilized to identify biological pathways predictive of PAH carcinogenic potential during initiation. Integration of probability matrices from four enriched pathways (P < .05) for DNA damage, apoptosis, response to chemical stimulus, and interferon gamma signaling resulted in the highest classification accuracy with leave-one-out cross validation. This pathway-driven approach was successfully utilized to distinguish early regulatory events during initiation prognostic for tumor outcome and provides proof-of-concept for using short-term initiation studies to classify carcinogenic potential of environmental PAH mixtures. These data further provide a 'source-to-outcome' model that could be used to predict PAH interactions during tumorigenesis and provide an example of how mode-of-action-based risk assessment could be employed for environmental PAH mixtures. © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. 
Mennucci, Benedetta; da Silva, Clarissa O
2008-06-05
A computational strategy based on quantum mechanical (QM) calculations and continuum solvation models is used to investigate the structure of liquids (either neat liquids or mixtures). The strategy is based on the comparison of calculated and experimental spectroscopic properties (IR-Raman vibrational frequencies and Raman intensities). In particular, neat formamide, neat acetonitrile, and their equimolar mixture are studied comparing isolated and solvated clusters of different nature and size. In all cases, the study seems to indicate that liquids, even when strongly associated, can be effectively modeled in terms of a shell-like system in which clusters of strongly interacting molecules (the microenvironments) are solvated by a polarizable macroenvironment represented by the rest of the molecules. Only taking into proper account both these effects can a correct picture of the liquid structure be achieved.
Highly selective condensation of biomass-derived methyl ketones as a source of aviation fuel.
Sacia, Eric R; Balakrishnan, Madhesan; Deaner, Matthew H; Goulas, Konstantinos A; Toste, F Dean; Bell, Alexis T
2015-05-22
Aviation fuel (i.e., jet fuel) requires a mixture of C9-C16 hydrocarbons having both a high energy density and a low freezing point. While jet fuel is currently produced from petroleum, increasing concern with the release of CO2 into the atmosphere from the combustion of petroleum-based fuels has led to policy changes mandating the inclusion of biomass-based fuels into the fuel pool. Here we report a novel way to produce a mixture of branched cyclohexane derivatives in very high yield (>94%) that match or exceed many required properties of jet fuel. As starting materials, we use a mixture of n-alkyl methyl ketones and their derivatives obtained from biomass. These synthons are condensed into trimers via base-catalyzed aldol condensation and Michael addition. Hydrodeoxygenation of these products yields mixtures of C12-C21 branched, cyclic alkanes. Using models for predicting the carbon number distribution obtained from a mixture of n-alkyl methyl ketones and for predicting the boiling point distribution of the final mixture of cyclic alkanes, we show that it is possible to define the mixture of synthons that will closely reproduce the distillation curve of traditional jet fuel. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
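Because carbon is conserved in the aldol/Michael trimerization and hydrodeoxygenation only strips oxygen, a trimer's carbon number is the sum of its three synthons', so the product carbon-number distribution is a three-fold convolution of the feed composition. A sketch with a hypothetical two-ketone feed (not the paper's actual composition):

```python
from itertools import product

# Carbon count -> mole fraction of each n-alkyl methyl ketone in the feed
# (e.g. a C6 and a C7 ketone at equal mole fractions; illustrative only).
feed = {6: 0.5, 7: 0.5}

# Enumerate ordered triples of synthons; each trimer's carbon number is the
# sum of the three ketone carbon counts, weighted by the product of fractions.
dist = {}
for (c1, f1), (c2, f2), (c3, f3) in product(feed.items(), repeat=3):
    n = c1 + c2 + c3
    dist[n] = dist.get(n, 0.0) + f1 * f2 * f3
# dist -> {18: 0.125, 19: 0.375, 20: 0.375, 21: 0.125}
```

Sweeping the feed fractions in such a model is how one can tune the synthon mixture toward a target distillation curve.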
Mazel, Vincent; Busignies, Virginie; Duca, Stéphane; Leclerc, Bernard; Tchoreloff, Pierre
2011-05-30
In the pharmaceutical industry, tablets are obtained by the compaction of two or more components which have different physical properties and compaction behaviours. Therefore, it could be interesting to predict the physical properties of the mixture using the single-component results. In this paper, we have focused on the prediction of the compressibility of binary mixtures using the Kawakita model. Microcrystalline cellulose (MCC) and L-alanine were compacted alone and mixed at different weight fractions. The volume reduction, as a function of the compaction pressure, was acquired during the compaction process ("in-die") and after elastic recovery ("out-of-die"). For the pure components, the Kawakita model is well suited to the description of the volume reduction. For binary mixtures, an original approach for the prediction of the volume reduction without using the effective Kawakita parameters was proposed and tested. The good agreement between experimental and predicted data proved that this model was efficient to predict the volume reduction of MCC and L-alanine mixtures during compaction experiments. Copyright © 2011 Elsevier B.V. All rights reserved.
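The Kawakita equation, C = a b P / (1 + b P) for the degree of volume reduction C at pressure P, linearizes to P/C = P/a + 1/(a b), so both parameters follow from a straight-line fit of P/C against P. A numpy sketch on synthetic noiseless data (stand-ins for real in-die measurements):

```python
import numpy as np

def kawakita(P, a, b):
    """Kawakita degree of volume reduction C = (V0 - V)/V0 at pressure P."""
    return a * b * P / (1.0 + b * P)

# Synthetic compressibility data; a and b values are illustrative.
P = np.array([10.0, 25.0, 50.0, 100.0, 150.0, 200.0, 250.0])  # MPa
C = kawakita(P, a=0.7, b=0.05)

# Linearized fit: P/C = P/a + 1/(a*b), so slope = 1/a, intercept = 1/(a*b).
slope, intercept = np.polyfit(P, P / C, 1)
a_fit = 1.0 / slope          # recovers a = 0.7
b_fit = slope / intercept    # recovers b = 0.05
```

The paper's contribution goes further, predicting a binary mixture's volume reduction from the two pure-component fits rather than fitting effective mixture parameters.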
GROMOS polarizable charge-on-spring models for liquid urea: COS/U and COS/U2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhixiong; Bachmann, Stephan J.; Gunsteren, Wilfred F. van, E-mail: wfvgn@igc.phys.chem.ethz.ch
2015-03-07
Two one-site polarizable urea models, COS/U and COS/U2, based on the charge-on-spring model are proposed. The models are parametrized against thermodynamic properties of urea-water mixtures in combination with the polarizable COS/G2 and COS/D2 models for liquid water, respectively. They have the same functional form of the inter-atomic interaction function and are based on the same parameter calibration procedure and type of experimental data as used to develop the GROMOS biomolecular force field. Thermodynamic, dielectric, and dynamic properties of urea-water mixtures simulated using the polarizable models are closer to experimental data than those simulated using the non-polarizable models. The COS/U and COS/U2 models may be used in biomolecular simulations of protein denaturation.
Yan, Luchun; Liu, Jiemin; Qu, Chen; Gu, Xingye; Zhao, Xia
2015-01-28
In order to explore the odor interaction of binary odor mixtures, a series of odor intensity evaluation tests were performed using both individual components and binary mixtures of aldehydes. Based on the linear relation between the logarithm of odor activity value and odor intensity of individual substances, the relationship between concentrations of individual constituents and their joint odor intensity was investigated by employing a partial differential equation (PDE) model. The obtained results showed that the binary odor interaction was mainly influenced by the mixing ratio of two constituents, but not the concentration level of an odor sample. Besides, an extended PDE model was also proposed on the basis of the above experiments. Through a series of odor intensity matching tests for several different binary odor mixtures, the extended PDE model was proved effective at odor intensity prediction. Furthermore, odorants of the same chemical group and similar odor type exhibited similar characteristics in the binary odor interaction. The overall results suggested that the PDE model is a more interpretable way of demonstrating the odor interactions of binary odor mixtures.
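The single-substance relation the model builds on is linear in the logarithm of the odor activity value, OAV = concentration / odor threshold. A minimal sketch with hypothetical coefficients (not the paper's fitted values for any aldehyde, and omitting the mixture-level PDE itself):

```python
import math

def odor_intensity(conc, threshold, a, b):
    """Single-substance odor intensity from the log-linear OAV relation:
    I = a * log10(conc / threshold) + b. Coefficients a, b are fitted per
    substance; the values used below are purely illustrative."""
    return a * math.log10(conc / threshold) + b

# A substance at 2 mg/m^3 with a 0.002 mg/m^3 odor threshold (OAV = 1000):
I = odor_intensity(conc=2.0, threshold=0.002, a=1.2, b=0.5)  # I = 4.1
```

The PDE model then relates the joint intensity of a binary mixture to the two individual OAVs, with the mixing ratio, rather than the absolute concentration level, driving the interaction.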
Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model
NASA Astrophysics Data System (ADS)
Li, X. L.; Zhao, Q. H.; Li, Y.
2017-09-01
Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. In order to deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image, and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the intensity of each pixel is assumed to follow a Gamma mixture model with parameters corresponding to the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster. The regional dissimilarity measure of one sub-region is defined as the sum of the measures of the pixels in the region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation results are obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of the segmentation results for simulated and real SAR images.
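The regional dissimilarity described above, the summed negative log-likelihood of a sub-region's pixels under one cluster's Gamma model, can be sketched directly (a single Gamma component is used here for brevity rather than the full mixture):

```python
import math

def gamma_logpdf(x, shape, scale):
    """Log-density of a Gamma(shape, scale) distribution at x > 0."""
    return ((shape - 1.0) * math.log(x) - x / scale
            - math.lgamma(shape) - shape * math.log(scale))

def region_dissimilarity(pixels, shape, scale):
    """Regional dissimilarity: summed negative log-likelihood of a Voronoi
    sub-region's pixel intensities under one cluster's Gamma model."""
    return -sum(gamma_logpdf(x, shape, scale) for x in pixels)

# A sub-region with intensities near 2 fits a Gamma with mean 2
# (shape=4, scale=0.5) far better than one with mean 20 (shape=4, scale=5):
pixels = [1.5, 2.0, 2.5]
d_near = region_dissimilarity(pixels, shape=4.0, scale=0.5)
d_far = region_dissimilarity(pixels, shape=4.0, scale=5.0)  # much larger
```

In the full algorithm this measure feeds the fuzzy-clustering objective, with the MRF prior over neighboring sub-regions regularizing the labels.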
Qiu, Hao; Versieren, Liske; Rangel, Georgina Guzman; Smolders, Erik
2016-01-19
Soil contamination with copper (Cu) is often associated with zinc (Zn), and the biological response to such mixed contamination is complex. Here, we investigated Cu and Zn mixture toxicity to Hordeum vulgare in three different soils, the premise being that the observed interactions are mainly due to effects on bioavailability. The toxic effect of Cu and Zn mixtures on seedling root elongation was more than additive (i.e., synergism) in soils with high and medium cation-exchange capacity (CEC) but less than additive (antagonism) in a low-CEC soil. This was found when we expressed the dose as the conventional total soil concentration. In contrast, antagonism was found in all soils when we expressed the dose as free-ion activities in soil solution, indicating that there is metal-ion competition for binding to the plant roots. Neither a concentration addition nor an independent action model explained mixture effects, irrespective of the dose expressions. In contrast, a multimetal BLM model and a WHAM-Ftox model successfully explained the mixture effects across all soils and showed that bioavailability factors mainly explain the interactions in soils. The WHAM-Ftox model is a promising tool for the risk assessment of mixed-metal contamination in soils.
Ab Initio Studies of Shock-Induced Chemical Reactions of Inter-Metallics
NASA Astrophysics Data System (ADS)
Zaharieva, Roussislava; Hanagud, Sathya
2009-06-01
Shock-induced and shock-assisted chemical reactions of intermetallic mixtures are studied by many researchers using both experimental and theoretical techniques. The theoretical studies are primarily at continuum scales. The model frameworks include mixture theories and meso-scale models of grains of porous mixtures. The reaction models vary from an equilibrium thermodynamic model to several non-equilibrium thermodynamic models. The shock effects are primarily studied using appropriate conservation equations and numerical techniques to integrate the equations. All these models require material constants from experiments and estimates of transition states. Thus, the objective of this paper is to present studies based on ab initio techniques. The ab initio studies to date use ab initio molecular dynamics. This paper presents a study that uses shock pressures and the associated temperatures as starting variables. The intermetallic mixtures are modeled as slabs, and the required shock stresses are created by straining the lattice. Then, ab initio binding energy calculations are used to examine the stability of the reactions. Binding energies are obtained for different strain components superimposed on uniform compression and finite temperatures. Then, vibrational frequencies and nudged-elastic-band techniques are used to study reactivity and transition states. Examples include Ni and Al.
An odor interaction model of binary odorant mixtures by a partial differential equation method.
Yan, Luchun; Liu, Jiemin; Wang, Guihua; Wu, Chuandong
2014-07-09
A novel odor interaction model was proposed for binary mixtures of benzene and substituted benzenes by a partial differential equation (PDE) method. Based on the measurement method (the tangent-intercept method) for partial molar volume, the original parameters of the corresponding formulas were reasonably replaced by perceptual measures. With these substitutions, it was possible to relate a mixture's odor intensity to each individual odorant's relative odor activity value (OAV). Several binary mixtures of benzene and substituted benzenes were tested to establish the PDE models. The obtained results showed that the PDE model provides an easily interpretable method relating individual components to their joint odor intensity. Both the predictive performance and the feasibility of the PDE model were demonstrated through a series of odor intensity matching tests. If the PDE model is combined with portable gas detectors or on-line monitoring systems, olfactory evaluation of odor intensity can be achieved by instruments instead of odor assessors, and many disadvantages (e.g., the expense of maintaining a fixed number of odor assessors) can be avoided. Thus, the PDE model is expected to be helpful in the monitoring and management of odor pollution.
Cumulative effects of anti-androgenic chemical mixtures and ...
Kembra L. Howdeshell and L. Earl Gray, Jr. Toxicological studies of defined chemical mixtures assist human health risk assessment by characterizing the joint action of chemicals. This presentation will review the effects of anti-androgenic chemical mixtures on reproductive tract development in rats, with a special focus on the reproductive toxicant phthalates. Observed mixture data are compared to mathematical mixture model predictions to determine how the individual chemicals in a mixture interact (e.g., response addition, in which the probabilities of response for each individual chemical are added; dose addition, in which the doses of each individual chemical at a given mixture dose are combined based on the relative potency of the individual chemicals). Phthalate mixtures are observed to act in a dose-additive manner based on the relative potency of the individual phthalates to suppress fetal testosterone production. Similar dose-additive effects have been reported for mixtures of phthalates with anti-androgenic pesticides of differing mechanisms. Data from these phthalate experiments in rats can be used in conjunction with human biomonitoring data to determine individual hazard ratios. Furthermore, data from the toxicological studies can inform the analysis of human biomonitoring data on the association of detected chemicals and their metabolites with measured health outcomes.
NASA Astrophysics Data System (ADS)
Zhang, Yu; Li, Fei; Zhang, Shengkai; Zhu, Tingting
2017-04-01
Synthetic Aperture Radar (SAR) is important for polar remote sensing because it provides continuous observations day and night and in all weather. SAR can be used to extract surface roughness information characterized by the variance of dielectric properties across different polarization channels, making it possible to distinguish ice types and surface structure for deformation analysis. In November 2016, the 33rd Chinese National Antarctic Research Expedition (CHINARE) cruise set sail into the Antarctic sea ice zone. Accurate mapping of the spatial distribution of leads in the sea ice zone is essential for routine ship navigation planning. In this study, the semantic relationship between leads and sea ice categories is described by a Conditional Random Field (CRF) model, and lead characteristics are modeled by statistical distributions of SAR imagery. In the proposed algorithm, a mixture-statistical-distribution-based CRF is developed that considers contextual information and the statistical characteristics of sea ice to improve lead detection in Sentinel-1A dual-polarization SAR imagery. The unary and pairwise potentials in the CRF model are constructed by integrating the posterior probabilities estimated from the statistical distributions. For parameter estimation, the Method of Logarithmic Cumulants (MoLC) is used for single statistical distributions, and an iterative Expectation Maximization (EM) algorithm is used to calculate the parameters of the mixture-distribution-based CRF model. For posterior probability inference, a graph-cut energy minimization method is adopted for the initial lead detection. Post-processing procedures, including an aspect-ratio constraint and spatial smoothing, are applied to improve the visual result.
The proposed method is validated on Sentinel-1A SAR C-band Extra Wide Swath (EW) Ground Range Detected (GRD) imagery with a pixel spacing of 40 m near the Prydz Bay area, East Antarctica. The main contributions are as follows: 1) a mixture-statistical-distribution-based CRF algorithm is developed for lead detection from Sentinel-1A dual-polarization images; 2) the proposed mixture-distribution-based CRF method is assessed against a single-distribution-based CRF algorithm; 3) preferable parameter sets, including the statistical distributions, the aspect-ratio threshold, and the spatial smoothing window size, are provided. In the future, the proposed algorithm will be extended to operational processing of Sentinel-series data sets owing to its low computational cost and high accuracy in lead detection.
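The mixture parameter estimation step can be illustrated with a much-simplified stand-in. The paper fits SAR-specific distributions via MoLC and EM; the EM mechanics are the same for a 1-D two-component Gaussian mixture on synthetic log-backscatter, which is all this sketch shows (cluster parameters are invented).

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic log-backscatter: dark leads (open water) vs. brighter sea ice
x = np.concatenate([rng.normal(-2.0, 0.3, 400),   # leads
                    rng.normal(0.5, 0.5, 1600)])  # sea ice

# EM for a two-component 1-D Gaussian mixture
w = np.array([0.5, 0.5])
mu = np.array([x.min(), x.max()])
var = np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior responsibility of each component for each pixel
    pdf = np.exp(-(x[:, None] - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    resp = w * pdf
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and variances from the responsibilities
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu)**2).sum(axis=0) / nk
```

The recovered weights and means (about 0.2/-2.0 for the lead component) would feed the posterior probabilities used in the CRF potentials.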
Analysis of Forest Foliage Using a Multivariate Mixture Model
NASA Technical Reports Server (NTRS)
Hlavka, C. A.; Peterson, David L.; Johnson, L. F.; Ganapol, B.
1997-01-01
Data consisting of wet chemical measurements and near-infrared spectra of ground leaf samples were analyzed to test a multivariate regression technique, based on a linear mixture model for absorbance, for estimating component spectra. The resulting unmixed spectra for carbohydrates, lignin, and protein resemble the spectra of extracted plant starches, cellulose, lignin, and protein. The unmixed protein spectrum has prominent absorption features at wavelengths associated with nitrogen bonds.
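A minimal sketch of the underlying idea — estimating component spectra by least squares under a linear mixture model for absorbance. The "pure" spectra, the band shapes, and the noise level below are all hypothetical; the fractions stand in for the wet-chemistry measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_bands = 60, 50

# Hypothetical pure component spectra (e.g., carbohydrate, lignin, protein)
wl = np.linspace(0.0, 1.0, n_bands)
pure = np.stack([np.exp(-(wl - c)**2 / 0.01) for c in (0.2, 0.5, 0.8)])

# Component fractions from (simulated) wet chemistry;
# absorbance is modeled as a linear mixture plus noise
frac = rng.dirichlet(np.ones(3), size=n_samples)
absorb = frac @ pure + rng.normal(0, 0.01, (n_samples, n_bands))

# Unmix: least-squares estimate of the component spectra from the fractions
est, *_ = np.linalg.lstsq(frac, absorb, rcond=None)
err = np.abs(est - pure).max()
```

Each row of `est` is an estimated component spectrum, which in the study is then compared against spectra of extracted plant constituents.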
Autonomous detection of crowd anomalies in multiple-camera surveillance feeds
NASA Astrophysics Data System (ADS)
Nordlöf, Jonas; Andersson, Maria
2016-10-01
A novel approach for autonomous detection of anomalies in crowded environments is presented in this paper. The proposed model uses a Gaussian mixture probability hypothesis density (GM-PHD) filter as a feature extractor in conjunction with different Gaussian mixture hidden Markov models (GM-HMMs). Results based on both simulated and recorded data indicate that this method can track and detect anomalies on-line in individual crowds through multiple camera feeds in a crowded environment.
Si, Guo-Ning; Chen, Lan; Li, Bao-Guo
2014-04-01
Based on the Kawakita powder compression equation, a general theoretical model for predicting the compression characteristics of multi-component pharmaceutical powders with different mass ratios was developed. Uniaxial flat-face compression tests of powdered lactose, starch, and microcrystalline cellulose were carried out separately to obtain the Kawakita equation parameters of each powder material. Uniaxial flat-face compression tests of powder mixtures of lactose, starch, microcrystalline cellulose, and sodium stearyl fumarate at five mass ratios were then conducted, yielding the correlation between mixture density and loading pressure and the corresponding Kawakita equation curves. Finally, the theoretical predictions were compared with the experimental results. The analysis showed that the errors in predicting mixture densities were less than 5.0% and the errors in the Kawakita vertical coordinate were within 4.6%, indicating that the theoretical model can be used to predict the direct compaction characteristics of multi-component pharmaceutical powders.
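The Kawakita equation gives the degree of volume reduction as C = abP/(1 + bP). The sketch below shows one simple way a mixture prediction can be assembled from single-component parameters (mass-weighted specific volumes, each component assumed to compress independently). The parameter values are invented, and the paper's exact combination rule may differ.

```python
def kawakita_c(P, a, b):
    """Kawakita degree of volume reduction: C = a*b*P / (1 + b*P)."""
    return a * b * P / (1.0 + b * P)

# Illustrative single-component parameters:
# a = limiting volume reduction, b (1/MPa), v0 = initial specific volume (cm^3/g)
components = {
    "lactose": (0.55, 0.05, 1.10),
    "starch":  (0.60, 0.08, 1.35),
    "mcc":     (0.70, 0.04, 1.60),
}
mass_frac = {"lactose": 0.5, "starch": 0.3, "mcc": 0.2}

def mixture_density(P):
    """Predicted mixture density, assuming independent component compression."""
    v = sum(mass_frac[k] * v0 * (1.0 - kawakita_c(P, a, b))
            for k, (a, b, v0) in components.items())
    return 1.0 / v
```

For example, `mixture_density(100.0)` evaluates to about 1.59 g/cm^3 with these illustrative parameters, and the predicted density increases monotonically with pressure as each component's C approaches its limit a.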
Monakhova, Yulia B; Mushtakova, Svetlana P
2017-05-01
A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries generally between 95% and 105% were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on analysis of vitamins and caffeine in energy drinks and aromatic hydrocarbons in motor fuel with 10% error. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.
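The quantification step relies on the Beer-Lambert-Bouguer law being linear in analyte concentration. The sketch below substitutes hypothetical absorptivity profiles for the ICA-resolved ones and recovers concentrations by least squares; all band shapes, concentrations, and the noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
wl = np.linspace(200, 400, 120)        # wavelength grid (nm)

def band(center, width):
    return np.exp(-((wl - center) / width)**2)

# Hypothetical absorptivity profiles of three overlapping analytes
E = np.stack([band(250, 25), band(270, 30), band(300, 20)])

# "Unknown" mixture: absorbance is linear in concentration (Beer-Lambert)
c_true = np.array([0.8, 0.3, 0.5])
A = c_true @ E + rng.normal(0, 0.002, wl.size)

# Recover concentrations by least squares against the resolved profiles
c_est, *_ = np.linalg.lstsq(E.T, A, rcond=None)
recovery = 100 * c_est / c_true        # percent recovery per analyte
```

Despite the strong spectral overlap, the recoveries stay close to 100%, mirroring the 95-105% recoveries reported in the abstract for well-behaved systems.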
Color image enhancement based on particle swarm optimization with Gaussian mixture
NASA Astrophysics Data System (ADS)
Kattakkalil Subhashdas, Shibudas; Choi, Bong-Seok; Yoo, Ji-Hoon; Ha, Yeong-Ho
2015-01-01
This paper proposes a Gaussian mixture based image enhancement method that uses particle swarm optimization (PSO) to gain an edge over other contemporary methods. The proposed method uses a Gaussian mixture model to model the lightness histogram of the input image in CIEL*a*b* space. The intersection points of the Gaussian components in the model are used to partition the lightness histogram. The enhanced lightness image is generated by transforming the lightness values in each interval to an appropriate output interval according to a transformation function that depends on PSO-optimized parameters: the weight and standard deviation of each Gaussian component and the cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting image to reduce washout. Experimental results show that the proposed method produces a better enhanced image than traditional methods. Moreover, the enhanced image is free from several side effects such as washout, information loss, and gradation artifacts.
PLUME-MoM 1.0: A new integral model of volcanic plumes based on the method of moments
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.
2015-08-01
In this paper a new integral mathematical model for volcanic plumes, named PLUME-MoM, is presented. The model describes the steady-state dynamics of a plume in a 3-D coordinate system, accounting for continuous variability in particle size distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes ranging from a few microns up to several centimeters and more. A proper description of such a multi-particle nature is crucial when quantifying changes in grain-size distribution along the plume and, therefore, for better characterization of source conditions of ash dispersal models. The new model is based on the method of moments, which allows for a description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of parameters of the continuous size distribution of the particles. This is achieved by formulation of fundamental transport equations for the multi-particle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number, as well as of the mass distribution expressed in terms of the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on the discretization of the mixture in N discrete phases, shows that the new model allows for the same results to be obtained with a significantly lower computational cost (particularly when a large number of discrete phases is adopted). 
Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables the investigation of the response of four key output variables (mean and standard deviation of the grain-size distribution at the top of the plume, plume height and amount of mass lost by the plume during the ascent) to changes in the main input parameters (mean and standard deviation) characterizing the pyroclastic mixture at the base of the plume. Results show that, for the range of parameters investigated and without considering interparticle processes such as aggregation or comminution, the grain-size distribution at the top of the plume is remarkably similar to that at the base and that the plume height is only weakly affected by the parameters of the grain distribution. The adopted approach can be potentially extended to the consideration of key particle-particle effects occurring in the plume including particle aggregation and fragmentation.
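The core idea of the method of moments — tracking a few moments of the continuous grain-size distribution instead of N discrete phases — can be sketched for a Gaussian number distribution in Krumbein phi units. The distribution parameters are hypothetical; the point is only that a dense discretization reproduces what two or three moments already encode.

```python
import numpy as np

# Grain size in Krumbein phi units: assume a Gaussian number distribution
mu_phi, sigma_phi = 2.0, 1.5

# Analytical raw moments of a Gaussian: M0 = 1, M1 = mu, M2 = mu^2 + sigma^2
m_analytic = [1.0, mu_phi, mu_phi**2 + sigma_phi**2]

# Classical approach: discretize into many phases and sum (quadrature)
phi = np.linspace(mu_phi - 6 * sigma_phi, mu_phi + 6 * sigma_phi, 2001)
dphi = phi[1] - phi[0]
pdf = (np.exp(-(phi - mu_phi)**2 / (2 * sigma_phi**2))
       / (sigma_phi * np.sqrt(2 * np.pi)))
m_discrete = [float(np.sum(phi**k * pdf) * dphi) for k in range(3)]
```

The 2001-phase quadrature and the three analytical moments agree to high precision, which is why transporting a handful of moments along the plume can replace transporting a large number of discrete particle classes.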
A numerical model for boiling heat transfer coefficient of zeotropic mixtures
NASA Astrophysics Data System (ADS)
Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo
2017-12-01
Zeotropic mixtures never have the same liquid and vapor composition in liquid-vapor equilibrium. In addition, the bubble point and dew point are separated; this gap is called the temperature glide (Tglide). These characteristics have made such mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles, where zeotropic working fluids improve cycle performance by an order of magnitude. Optimization of JT cycles has gained substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design in these cycles is a critical point; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and applied locally to fully developed, constant-wall-temperature, two-phase annular flow in a duct. Numerical results have been obtained with this model taking into account the continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with experimental data published by Barraza et al. (2016), and they show good agreement.
[New method of mixed gas infrared spectrum analysis based on SVM].
Bai, Peng; Xie, Wen-Jun; Liu, Jun-Hua
2007-07-01
A new method of infrared spectrum analysis based on the support vector machine (SVM) was proposed for gas mixtures. The kernel function in SVM maps the seriously overlapping absorption spectra into a high-dimensional space while allowing the transformed data to be processed in the original space; on this basis a regression calibration model was established and applied to determine the concentration of each component gas. It was also shown that the regression calibration model can be used for component recognition in gas mixtures. The method was applied to the analysis of different data samples, and factors that affect the model, such as scan interval, wavelength range, kernel function, and penalty coefficient C, were discussed. Experimental results show that the maximal mean absolute error of component concentration is 0.132% and the component recognition accuracy is higher than 94%. The method addresses the problems of overlapping absorption spectra, using a single model for both qualitative and quantitative analysis, and a limited number of training samples. It could be applied to other infrared spectrum analyses of gas mixtures, and holds both theoretical and practical promise.
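A minimal sketch of the approach using scikit-learn's `SVR` as a stand-in for the paper's SVM regression: two gases with heavily overlapping absorption bands are simulated, and a kernel regression calibration model is fitted to recover one component's concentration. Band shapes, noise level, and hyperparameters are all invented.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
wn = np.linspace(0, 1, 80)                  # normalized wavenumber axis

def absorb(c1, c2):
    """Two gases with seriously overlapping absorption bands."""
    band1 = np.exp(-((wn - 0.45) / 0.15)**2)
    band2 = np.exp(-((wn - 0.55) / 0.15)**2)
    return c1 * band1 + c2 * band2 + rng.normal(0, 0.005, wn.size)

# Calibration set: spectra with known concentrations of both gases
c1 = rng.uniform(0, 1, 200)
c2 = rng.uniform(0, 1, 200)
X = np.stack([absorb(a, b) for a, b in zip(c1, c2)])

# RBF-kernel regression calibration model for gas 1
model = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X, c1)
mae = np.mean(np.abs(model.predict(X) - c1))
```

Here the fit is evaluated on the calibration set itself purely for illustration; a real calibration would be validated on held-out mixtures.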
Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.
Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi
2013-12-01
Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Data generated by mass spectrometry commonly contain many missing values, which arise when a compound is absent from a sample or is present at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit, whereas under a mixture model missing values can result from a combination of censoring and the absence of a compound. We compare the power and estimation properties of a mixture model with those of an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point-mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, while the mixture model's estimates were unbiased except when all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrate this approach through an application to glycomics data from serum samples of women with ovarian cancer and matched controls.
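The mixture model's likelihood combines a point mass (compound truly absent) with censoring below the detection limit: a missing value contributes pi + (1 - pi) * P(X < LOD). A sketch with simulated data and a direct maximum-likelihood fit follows; all parameter values are hypothetical, and this is a simplified single-group version of the models compared in the abstract.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
n, lod = 2000, 1.0              # sample size and detection limit (log scale)
pi_true, mu, sd = 0.3, 2.0, 1.0

# Mixture mechanism: compound absent (point mass) or log-abundance ~ N(mu, sd)
present = rng.random(n) > pi_true
x = np.where(present, rng.normal(mu, sd, n), -np.inf)
obs = x[x >= lod]                # detected values
n_miss = n - obs.size            # missing: absent OR censored below the LOD

def negloglik(theta):
    pi, m, s = theta
    p_miss = pi + (1 - pi) * norm.cdf(lod, m, s)   # absent or censored
    ll = n_miss * np.log(p_miss)
    ll += obs.size * np.log(1 - pi) + norm.logpdf(obs, m, s).sum()
    return -ll

fit = minimize(negloglik, x0=[0.5, 1.0, 1.0],
               bounds=[(0.01, 0.99), (None, None), (0.1, None)])
pi_hat, mu_hat, sd_hat = fit.x
```

The shape of the observed values above the LOD identifies the mean and spread, which in turn lets the fit separate "absent" from "censored" mass — the distinction the AFT model ignores.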
Challenges in cumulative risk assessment of anti-androgenic phthalate mixtures include a lack of data on all the individual phthalates and difficulty determining the biological relevance of reduction in fetal testosterone (T) on postnatal development. The objectives of the curren...
Premixed flame propagation in combustible particle cloud mixtures
NASA Technical Reports Server (NTRS)
Seshadri, K.; Yang, B.
1993-01-01
The structure of premixed flames propagating through combustible systems containing uniformly distributed volatile fuel particles in an oxidizing gas mixture is analyzed. Experimental results show that steady flame propagation occurs even if the initial equivalence ratio of the combustible mixture, based on the gaseous fuel available in the particles, phi(u), is substantially larger than unity. A model is developed to explain these observations. In the model it is presumed that the fuel particles first vaporize to yield a gaseous fuel of known chemical composition, which then reacts with oxygen in a one-step overall process. It is shown that the interplay of vaporization kinetics and the oxidation process can result in steady flame propagation in combustible mixtures where phi(u) is substantially larger than unity. This prediction is in agreement with experimental observations.
NASA Astrophysics Data System (ADS)
Bae, Seungbin; Lee, Kisung; Seo, Changwoo; Kim, Jungmin; Joo, Sung-Kwan; Joung, Jinhun
2011-09-01
We developed a high-precision position decoding method for a positron emission tomography (PET) detector consisting of a thick slab scintillator coupled with a multichannel photomultiplier tube (PMT). The DETECT2000 simulation package was used to validate light response characteristics for a 48.8 mm×48.8 mm×10 mm slab of lutetium oxyorthosilicate coupled to a 64-channel PMT. The data were then combined to produce light collection histograms. We employed a Gaussian mixture model (GMM) to parameterize the composite light response with multiple Gaussian mixtures. In the training step, the light photons acquired by the N PMT channels were used as an N-dimensional feature vector and fed into a GMM training model to generate optimal parameters for M mixtures. In the positioning step, we decoded the spatial locations of incident photons by evaluating a sample feature vector with respect to the trained mixture parameters. The average spatial resolutions after positioning with four mixtures were 1.1 mm full width at half maximum (FWHM) at the corner and 1.0 mm FWHM at the center section. This indicates that the proposed algorithm achieved high performance in both spatial resolution and positioning bias, especially at the corner section of the detector.
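The decoding idea — learn a light-response model per position during training, then assign new events to the best-scoring position — can be sketched with a toy detector. Here the GMM is simplified to a single Gaussian mean per position with equal covariances, which reduces decoding to nearest-mean classification; the detector geometry, light-sharing model, and noise level are all invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def light_response(pos, n):
    """Toy light sharing over a 2x2 PMT grid for n events at position pos."""
    pmts = np.array([[0.25, 0.25], [0.25, 0.75], [0.75, 0.25], [0.75, 0.75]])
    d2 = ((pmts - pos)**2).sum(axis=1)
    signal = 1.0 / (0.05 + d2)        # closer PMTs collect more light
    return signal + rng.normal(0, 0.3, (n, 4))

positions = [np.array(p) for p in [(0.3, 0.3), (0.3, 0.7), (0.7, 0.5)]]

# Training: mean feature vector (light collection pattern) per known position
means = [light_response(p, 500).mean(axis=0) for p in positions]

# Decoding: nearest trained mean (max likelihood under equal covariances)
test = light_response(positions[2], 200)
dists = ((test[:, None, :] - np.array(means))**2).sum(axis=2)
decoded = np.argmin(dists, axis=1)
accuracy = (decoded == 2).mean()
```

A full GMM would model each position's response with several weighted Gaussian components and full covariances, but the evaluate-against-trained-parameters step is the same.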
Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L
2018-02-01
Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques enable visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is proposed for clustering data from such visualizations. The methodology is theoretically justified, and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.
Gaussian Mixture Model of Heart Rate Variability
Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario
2012-01-01
Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
Gainer, Amy; Cousins, Mark; Hogan, Natacha; Siciliano, Steven D
2018-05-05
Although petroleum hydrocarbons (PHCs) released to the environment typically occur as mixtures, PHC remediation guidelines often reflect individual-substance toxicity. It is well documented that groups of aliphatic PHCs act via the same mechanism of action, nonpolar narcosis, so concentration-addition mixture toxicity principles should theoretically apply. To assess this theory, ten standardized acute and chronic soil invertebrate toxicity tests on a range of organisms (Eisenia fetida, Lumbricus terrestris, Enchytraeus crypticus, Folsomia candida, Oppia nitens and Hypoaspis aculeifer) were conducted with a refined PHC binary mixture. Reference models for concentration addition and independent action were applied to the mixture toxicity data with consideration of synergism, antagonism and dose-level toxicity. Both concentration addition and independent action, without further interactions, provided the best fit to the observed mixture response. Individual-fraction effective concentration values were predicted from the optimized, fitted reference models. Concentration addition provided a better estimate of individual-fraction effective concentrations than independent action, based on comparison with the available literature and on species trends observed in toxic responses to the mixture. Interspecies differences in the responses of standardized laboratory soil invertebrate species to PHC-contaminated soil were reflected in distinct traits. Diets that included soil, large body size, a permeable cuticle, low lipid content, inability to molt and no maternal transfer were traits linked to a sensitive survival response to PHC-contaminated soil in laboratory tests; traits linked to a sensitive reproduction response were long life spans with small clutch sizes. By deriving single-fraction toxicity endpoints that account for mixtures, we reduce the resources and time required to conduct site-specific risk assessments protecting the soil organism exposure pathway.
Silva, Carlos; Nunes, Bruno; Nogueira, António Ja; Gonçalves, Fernando; Pereira, Joana L
2016-11-01
Using the bivalve macrofouler Corbicula fluminea, this study addressed the suitability of in vitro testing as a stepping stone towards improved control methods based on chemical mixtures. In vitro cholinesterase (ChE) activity inhibition following single exposure of C. fluminea tissue to four model chemicals (the organophosphates dimethoate and dichlorvos, copper, and sodium dodecyl sulfate [SDS]) was first assessed. Mixtures of dimethoate with copper and of dichlorvos with SDS were then tested and modelled; ChE inhibition revealed synergistic interactions for both chemical pairs. These synergistic combinations were subsequently validated in vivo, and the increased control potential of the selected combinations was verified, with gains of up to 50% in C. fluminea mortality relative to the corresponding single-chemical treatments. Such consistency supports the suitability of time- and cost-effective surrogate testing platforms to assist the development of biofouling control strategies incorporating mixtures.
Theory for a gas composition sensor based on acoustic properties
NASA Technical Reports Server (NTRS)
Phillips, Scott; Dain, Yefim; Lueptow, Richard M.
2003-01-01
Sound travelling through a gas propagates at different speeds and its intensity attenuates to different degrees depending upon the composition of the gas. Theoretically, a real-time gaseous composition sensor could be based on measuring the sound speed and the acoustic attenuation. To this end, the speed of sound was modelled using standard relations, and the acoustic attenuation was modelled using the theory for vibrational relaxation of gas molecules. The concept for a gas composition sensor is demonstrated theoretically for nitrogen-methane-water and hydrogen-oxygen-water mixtures. For a three-component gas mixture, the measured sound speed and acoustic attenuation each define separate lines in the composition plane of two of the gases. The intersection of the two lines defines the gas composition. It should also be possible to use the concept for mixtures of more than three components, if the nature of the gas composition is known to some extent.
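The speed-of-sound half of the sensor follows from standard ideal-gas relations: c = sqrt(gamma·R·T/M), with the mixture molar mass and heat capacity taken as mole-fraction-weighted averages. A sketch follows; the heat-capacity values are room-temperature approximations, and real sensor models would also include the attenuation computation.

```python
import numpy as np

R = 8.314    # universal gas constant, J/(mol K)
T = 293.15   # temperature, K

# Molar mass (kg/mol) and molar heat capacity cp (J/(mol K)), approximate
# room-temperature values
gases = {"N2": (0.0280, 29.1), "CH4": (0.0160, 35.7), "H2O": (0.0180, 33.6)}

def sound_speed(x):
    """Ideal-gas sound speed of a mixture with mole fractions x (dict)."""
    M = sum(x[g] * gases[g][0] for g in x)     # mixture molar mass
    cp = sum(x[g] * gases[g][1] for g in x)    # mixture molar heat capacity
    gamma = cp / (cp - R)                      # cp/cv for an ideal gas
    return np.sqrt(gamma * R * T / M)

c_mix = sound_speed({"N2": 0.90, "CH4": 0.08, "H2O": 0.02})
c_n2 = sound_speed({"N2": 1.0})
```

Pure nitrogen comes out near the textbook 349 m/s at 20 °C, and adding light methane raises the mixture's sound speed — the composition sensitivity the proposed sensor exploits.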
GMM-based speaker age and gender classification in Czech and Slovak
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Matoušek, Jindřich
2017-01-01
The paper describes an experiment using Gaussian mixture models (GMM) for automatic classification of speaker age and gender. It analyses and compares the influence of different numbers of mixture components and different types of speech features on GMM gender/age classification. The dependence of computational complexity on the number of mixtures used is also analysed. Finally, the GMM classification accuracy is compared with the output of conventional listening tests. The results of these objective and subjective evaluations are in correspondence.
Altenburger, Rolf; Scholze, Martin; Busch, Wibke; Escher, Beate I; Jakobs, Gianina; Krauss, Martin; Krüger, Janet; Neale, Peta A; Ait-Aissa, Selim; Almeida, Ana Catarina; Seiler, Thomas-Benjamin; Brion, François; Hilscherová, Klára; Hollert, Henner; Novák, Jiří; Schlichting, Rita; Serra, Hélène; Shao, Ying; Tindall, Andrew; Tolefsen, Knut-Erik; Umbuzeiro, Gisela; Williams, Tim D; Kortenkamp, Andreas
2018-05-01
Chemicals in the environment occur in mixtures rather than as individual entities. Environmental quality monitoring thus faces the challenge to comprehensively assess a multitude of contaminants and potential adverse effects. Effect-based methods have been suggested as complements to chemical analytical characterisation of complex pollution patterns. The regularly observed discrepancy between chemical and biological assessments of adverse effects due to contaminants in the field may be either due to unidentified contaminants or result from interactions of compounds in mixtures. Here, we present an interlaboratory study where individual compounds and their mixtures were investigated by extensive concentration-effect analysis using 19 different bioassays. The assay panel consisted of 5 whole organism assays measuring apical effects and 14 cell- and organism-based bioassays with more specific effect observations. Twelve organic water pollutants of diverse structure and unique known modes of action were studied individually and as mixtures mirroring exposure scenarios in freshwaters. We compared the observed mixture effects against component-based mixture effect predictions derived from additivity expectations (assumption of non-interaction). Most of the assays detected the mixture response of the active components as predicted even against a background of other inactive contaminants. When none of the mixture components showed any activity by themselves then the mixture also was without effects. The mixture effects observed using apical endpoints fell in the middle of a prediction window defined by the additivity predictions for concentration addition and independent action, reflecting well the diversity of the anticipated modes of action. In one case, an unexpectedly reduced solubility of one of the mixture components led to mixture responses that fell short of the predictions of both additivity mixture models. 
The majority of the specific cell- and organism-based endpoints produced mixture responses in agreement with the additivity expectation of concentration addition. In exceptional cases, the expected (additive) mixture response did not occur owing to masking effects such as general toxicity from other compounds. Generally, deviations from an additivity expectation could be explained by experimental factors, specific limitations of the effect endpoint, or masking side effects such as cytotoxicity in in vitro assays. The majority of bioassays were able to quantitatively detect the predicted non-interactive, additive combined effect of the specifically bioactive compounds against a background of a complex mixture of other chemicals in the sample. This supports the use of a combination of chemical and bioanalytical monitoring tools for the identification of chemicals that drive a specific mixture effect. Furthermore, we demonstrated that a panel of bioassays can provide a diverse profile of effect responses to a complex contaminated sample. This could be extended towards representing mixture adverse outcome pathways. Our findings support the ongoing development of bioanalytical tools for (i) compiling comprehensive effect-based batteries for water quality assessment, (ii) designing tailored surveillance methods to safeguard specific water uses, and (iii) devising strategies for effect-based diagnosis of complex contamination. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Finite-deformation phase-field chemomechanics for multiphase, multicomponent solids
NASA Astrophysics Data System (ADS)
Svendsen, Bob; Shanthraj, Pratheek; Raabe, Dierk
2018-03-01
The purpose of this work is the development of a framework for the formulation of geometrically non-linear inelastic chemomechanical models for a mixture of multiple chemical components diffusing among multiple transforming solid phases. The focus here is on general model formulation. No specific model or application is pursued in this work. To this end, basic balance and constitutive relations from non-equilibrium thermodynamics and continuum mixture theory are combined with a phase-field-based description of multicomponent solid phases and their interfaces. Solid phase modeling is based in particular on a chemomechanical free energy and stress relaxation via the evolution of phase-specific concentration fields, order-parameter fields (e.g., related to chemical ordering, structural ordering, or defects), and local internal variables. At the mixture level, differences or contrasts in phase composition and phase local deformation in phase interface regions are treated as mixture internal variables. In this context, various phase interface models are considered. In the equilibrium limit, phase contrasts in composition and local deformation in the phase interface region are determined via bulk energy minimization. On the chemical side, the equilibrium limit of the current model formulation reduces to a multicomponent, multiphase, generalization of existing two-phase binary alloy interface equilibrium conditions (e.g., KKS). On the mechanical side, the equilibrium limit of one interface model considered represents a multiphase generalization of Reuss-Sachs conditions from mechanical homogenization theory. Analogously, other interface models considered represent generalizations of interface equilibrium conditions consistent with laminate and sharp-interface theory. In the last part of the work, selected existing models are formulated within the current framework as special cases and discussed in detail.
An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression
ERIC Educational Resources Information Center
Weiss, Brandi A.; Dardick, William
2016-01-01
This article introduces an entropy-based measure of data-model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify…
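Entropy-based fit measures of this kind are typically normalized so that 1 indicates crisp classification and 0 indicates maximal fuzziness. The sketch below shows one common formulation applied to a binary logistic regression's predicted probabilities; it may differ in detail from the specific measure the article proposes, and the probability values are invented.

```python
import numpy as np

def classification_entropy(p):
    """Normalized entropy-based fit for predicted class probabilities p (n x K).

    Returns a value in [0, 1]: 1 = perfectly crisp classification,
    0 = maximal fuzziness (all posterior probabilities equal).
    """
    p = np.clip(p, 1e-12, 1.0)
    n, k = p.shape
    return 1.0 - (-(p * np.log(p)).sum()) / (n * np.log(k))

# Binary logistic regression: stack p(y=1) and p(y=0) as two columns
p1 = np.array([0.97, 0.02, 0.95, 0.04])    # confident model
fuzzy = np.full(4, 0.5)                    # maximally uncertain model
crisp_score = classification_entropy(np.column_stack([p1, 1 - p1]))
vague_score = classification_entropy(np.column_stack([fuzzy, 1 - fuzzy]))
```

The confident model scores near 0.78 while the uninformative model scores 0, illustrating how the measure separates well-fitting from fuzzy logistic models.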
Modeling avian abundance from replicated counts using binomial mixture models
Kery, Marc; Royle, J. Andrew; Schmid, Hans
2005-01-01
Abundance estimation in ecology is usually accomplished by capture–recapture, removal, or distance sampling methods. These may be hard to implement at large spatial scales. In contrast, binomial mixture models enable abundance estimation without individual identification, based simply on temporally and spatially replicated counts. Here, we evaluate mixture models using data from the national breeding bird monitoring program in Switzerland, where some 250 1-km2 quadrats are surveyed using the territory mapping method three times during each breeding season. We chose eight species with contrasting distribution (wide–narrow), abundance (high–low), and detectability (easy–difficult). Abundance was modeled as a random effect with a Poisson or negative binomial distribution, with mean affected by forest cover, elevation, and route length. Detectability was a logit-linear function of survey date, survey date-by-elevation, and sampling effort (time per transect unit). Resulting covariate effects and parameter estimates were consistent with expectations. Detectability per territory (for three surveys) ranged from 0.66 to 0.94 (mean 0.84) for easy species, and from 0.16 to 0.83 (mean 0.53) for difficult species, depended on survey effort for two easy and all four difficult species, and changed seasonally for three easy and three difficult species. Abundance was positively related to route length in three high-abundance and one low-abundance (one easy and three difficult) species, and increased with forest cover in five forest species, decreased for two nonforest species, and was unaffected for a generalist species. Abundance estimates under the most parsimonious mixture models were between 1.1 and 8.9 (median 1.8) times greater than estimates based on territory mapping; hence, three surveys were insufficient to detect all territories for each species. 
We conclude that binomial mixture models are an important new approach for estimating abundance corrected for detectability when only repeated-count data are available. Future developments envisioned include estimation of trend, occupancy, and total regional abundance.
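The core of the binomial mixture (N-mixture) model described above is a marginal likelihood that sums a Poisson abundance prior over a binomial detection model for the replicated counts. A minimal sketch in Python (standard library only; the truncation bound `n_max` and the toy counts are illustrative, not the Swiss survey data):

```python
import math

def nmixture_loglik(counts, lam, p, n_max=100):
    """Log-likelihood of a binomial (N-)mixture model: at each site the
    latent abundance N ~ Poisson(lam), and each replicated count
    y ~ Binomial(N, p); N is marginalized out up to n_max."""
    ll = 0.0
    for site in counts:  # counts: list of per-site replicate lists
        site_lik = 0.0
        for n in range(max(site), n_max + 1):
            pois = math.exp(-lam) * lam ** n / math.factorial(n)
            detect = 1.0
            for y in site:
                detect *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
            site_lik += pois * detect
        ll += math.log(site_lik)
    return ll

# Toy data: 2 sites, 3 replicated counts each (not from the paper).
ll = nmixture_loglik([[2, 3, 2], [1, 0, 1]], lam=3.0, p=0.7)
```

Maximizing this over `lam` and `p` (e.g., by grid search or a numerical optimizer) yields abundance estimates corrected for detectability; in the study, covariates such as forest cover and survey date would enter through `lam` and `p`.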
Berntsen, Hanne Friis; Berg, Vidar; Thomsen, Cathrine; Ropstad, Erik; Zimmer, Karin Elisabeth
2017-01-01
Amongst the substances listed as persistent organic pollutants (POP) under the Stockholm Convention on Persistent Organic Pollutants (SCPOP) are chlorinated, brominated, and fluorinated compounds. Most experimental studies investigating the effects of POP employ single compounds. Studies focusing on the effects of POP mixtures are limited, and often conducted using extracts from collected specimens. Confounding effects of unmeasured substances in such extracts may bias the estimates of presumed causal relationships being examined. The aim of this investigation was to design a model of an environmentally relevant mixture of POP for use in experimental studies, containing 29 different chlorinated, brominated, and perfluorinated compounds. POP listed under the SCPOP and reported to occur at the highest levels in Scandinavian food, blood, or breast milk prior to 2012 were selected, and two different mixtures representing varying exposure scenarios were constructed. The in vivo mixture contained POP concentrations based upon human estimated daily intakes (EDIs), whereas the in vitro mixture was based upon levels in human blood. In addition to the total in vitro mixture, six submixtures were constructed, each containing the same concentrations of two chemical groups (chlorinated + brominated, chlorinated + perfluorinated, or brominated + perfluorinated) or of a single group (chlorinated, brominated, or perfluorinated compounds only). Using submixtures enables investigation of the effect of adding or removing one or more chemical groups. Concentrations of the compounds included in the feed and in vitro mixtures were verified by chemical analysis. It is suggested that this method may be utilized to construct realistic mixtures of environmental contaminants for toxicity studies based upon the relative levels of POP to which individuals are exposed.
Prospective aquatic risk assessment for chemical mixtures in agricultural landscapes
Holmes, Christopher M.; Brown, Colin D.; Hamer, Mick; Jones, Russell; Maltby, Lorraine; Posthuma, Leo; Silberhorn, Eric; Teeter, Jerold Scott; Warne, Michael St J; Weltje, Lennart
2018-01-01
Environmental risk assessment of chemical mixtures is challenging because of the multitude of possible combinations that may occur. Aquatic risk from chemical mixtures in an agricultural landscape was evaluated prospectively in 2 exposure scenario case studies: at field scale for a program of 13 plant‐protection products applied annually for 20 yr and at a watershed scale for a mixed land‐use scenario over 30 yr with 12 plant‐protection products and 2 veterinary pharmaceuticals used for beef cattle. Risk quotients were calculated from regulatory exposure models with typical real‐world use patterns and regulatory acceptable concentrations for individual chemicals. The results could differentiate situations when there was concern associated with single chemicals from those when concern was associated with a mixture (based on concentration addition) with no single chemical triggering concern. Potential mixture risk was identified on 0.02 to 7.07% of the total days modeled, depending on the scenario, the taxa, and whether considering acute or chronic risk. Taxa at risk were influenced by receiving water body characteristics along with chemical use profiles and associated properties. The present study demonstrates that a scenario‐based approach can be used to determine whether mixtures of chemicals pose risks over and above any identified using existing approaches for single chemicals, how often and to what magnitude, and ultimately which mixtures (and dominant chemicals) cause greatest concern. Environ Toxicol Chem 2018;37:674–689. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC. PMID:29193235
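The mixture screening step in this study rests on concentration addition via summed risk quotients. A hedged sketch of that logic (the numbers in the usage line are invented; `racs` stands in for the regulatory acceptable concentrations):

```python
def mixture_risk(pecs, racs):
    """Concentration-addition screening with risk quotients: RQ_i is the
    predicted environmental concentration over the regulatory acceptable
    concentration.  Mixture-only concern is flagged when the summed RQ
    exceeds 1 even though no individual RQ does."""
    rqs = [pec / rac for pec, rac in zip(pecs, racs)]
    total = sum(rqs)
    single_concern = any(rq > 1.0 for rq in rqs)
    mixture_only = total > 1.0 and not single_concern
    return rqs, total, mixture_only

# Three chemicals, each individually below its threshold (invented numbers):
rqs, total, flag = mixture_risk([0.4, 0.8, 1.2], [1.0, 2.0, 3.0])
```

This is exactly the situation the abstract highlights: no single chemical triggers concern, yet the concentration-addition sum does.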
Evidence for Dose-Additive Effects of Pyrethroids on Motor Activity in Rats
Wolansky, Marcelo J.; Gennings, Chris; DeVito, Michael J.; Crofton, Kevin M.
2009-01-01
Background: Pyrethroids are neurotoxic insecticides used in a variety of indoor and outdoor applications. Previous research characterized the acute dose–effect functions for 11 pyrethroids administered orally in corn oil (1 mL/kg) based on assessment of motor activity. Objectives: We used a mixture of these 11 pyrethroids and the same testing paradigm used in single-compound assays to test the hypothesis that cumulative neurotoxic effects of pyrethroid mixtures can be predicted using the default dose–addition theory. Methods: Mixing ratios of the 11 pyrethroids in the tested mixture were based on the ED30 (effective dose that produces a 30% decrease in response) of the individual chemical (i.e., the mixture comprised equipotent amounts of each pyrethroid). The highest concentration of each individual chemical in the mixture was less than the threshold for inducing behavioral effects. Adult male rats received acute oral exposure to corn oil (control) or dilutions of the stock mixture solution. The mixture of 11 pyrethroids was administered either simultaneously (2 hr before testing) or after a sequence based on times of peak effect for the individual chemicals (4, 2, and 1 hr before testing). A threshold additivity model was fit to the single-chemical data to predict the theoretical dose–effect relationship for the mixture under the assumption of dose additivity. Results: When subthreshold doses of individual chemicals were combined in the mixtures, we found significant dose-related decreases in motor activity. Further, we found no departure from the predicted dose-additive curve regardless of the mixture dosing protocol used. Conclusion: In this article we present the first in vivo evidence on pyrethroid cumulative effects supporting the default assumption of dose addition. PMID:20019907
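The dose-addition prediction underlying this design has a simple closed form: the total mixture dose producing the common effect level is the dose at which the scaled contributions sum to one. A sketch (the ED values are invented, not the paper's ED30s):

```python
def equipotent_ratios(eds):
    """Mixing ratios proportional to the individual effective doses, so
    each chemical contributes an equipotent share (as in the ED30-based
    mixture design described above)."""
    s = sum(eds)
    return [ed / s for ed in eds]

def dose_additive_ed(eds, ratios):
    """Total mixture dose D predicted under dose addition to produce the
    common effect level: the D solving sum_i (D * ratio_i) / ED_i = 1,
    where ED_i is the dose of chemical i alone producing that effect and
    ratio_i is its fraction of the mixture (ratios sum to 1)."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    return 1.0 / sum(r / ed for r, ed in zip(ratios, eds))

# Three hypothetical chemicals with individual EDs of 2, 4, and 8 dose units:
eds = [2.0, 4.0, 8.0]
D = dose_additive_ed(eds, equipotent_ratios(eds))
```

For an equipotent mixture of k chemicals this reduces to (sum of EDs) / k; departures of observed mixture potency from this prediction are what would indicate non-additivity.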
Constituent bioconcentration in rainbow trout exposed to a complex chemical mixture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linder, G.; Bergman, H.L.; Meyer, J.S.
1984-09-01
Classically, aquatic contaminant fate models predicting a chemical's bioconcentration factor (BCF) are based upon single-compound derived models, yet such BCF predictions may deviate from observed BCFs when physicochemical interactions or biological responses to complex chemical mixture exposures are not adequately considered in the predictive model. Rainbow trout were exposed to oil-shale retort waters. The study was designed to model the potential biological effects produced by exposure to complex chemical mixtures such as solid waste leachates, agricultural runoff, and industrial process waste waters. Chromatographic analysis of aqueous and nonaqueous liquid-liquid reservoir components yielded differences in mixed extraction solvent HPLC profiles of whole fish exposed for 1 and 3 weeks to the highest dilution of the complex chemical mixture when compared to their corresponding control, yet subsequent whole fish extractions at 6, 9, 12, and 15 weeks into exposure demonstrated no qualitative differences between control and exposed fish. Liver extractions and deproteinized bile samples from exposed fish were qualitatively different from their corresponding controls. These findings support the projected NOEC of 0.0045% dilution, even though the differences in bioconcentration profiles suggest hazard assessment strategies may be useful in evaluating environmental fate processes associated with complex chemical mixtures.
Community detection for networks with unipartite and bipartite structure
NASA Astrophysics Data System (ADS)
Chang, Chang; Tang, Chao
2014-09-01
Finding community structures in networks is important in network science, technology, and applications. To date, most algorithms that aim to find community structures focus only on unipartite or bipartite networks. A unipartite network consists of one set of nodes, and a bipartite network consists of two nonoverlapping sets of nodes with only links joining nodes in different sets. However, a third type of network exists, defined here as the mixture network. Just like a bipartite network, a mixture network also consists of two sets of nodes, but some nodes may simultaneously belong to both sets, which breaks the nonoverlapping restriction of a bipartite network. The mixture network can be considered a general case, with unipartite and bipartite networks viewed as its limiting cases. A mixture network can represent not only all unipartite and bipartite networks, but also a wide range of real-world networks that cannot be properly represented as either unipartite or bipartite networks in fields such as biology and social science. Based on this observation, we first propose a probabilistic model that can find modules in unipartite, bipartite, and mixture networks in a unified framework, based on the link community model for a unipartite undirected network [B. Ball et al., Phys. Rev. E 84, 036103 (2011)]. We test our algorithm on synthetic networks (with both overlapping and nonoverlapping communities) and apply it to two real-world networks: a southern women bipartite network and a human transcriptional regulatory mixture network. The results suggest that our model performs well for all three types of networks, is competitive with other algorithms for unipartite or bipartite networks, and is applicable to real-world networks.
Rabbani, Hossein; Sonka, Milan; Abramoff, Michael D
2013-01-01
In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the distribution proposed for the noise-free data plays a key role in the performance of the MMSE estimator, a prior distribution for the noise-free 3D complex wavelet coefficients is proposed that is able to model the main statistical properties of wavelets. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters, which are able to capture the heavy-tailed property and the inter- and intrascale dependencies of the coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation that results in visual quality improvement. On this basis, several OCT despeckling algorithms are obtained, based on using a Gaussian/two-sided Rayleigh noise distribution and a homomorphic/nonhomomorphic model. In order to evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is for the nonhomomorphic model in the presence of Gaussian noise, which results in an improvement of 7.8 ± 1.7 in CNR.
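The MMSE (posterior-mean) estimate under a Gaussian mixture prior is a responsibility-weighted combination of per-component Wiener shrinkages. A simplified univariate sketch (the paper uses bivariate pdfs with locally estimated parameters; this assumes zero-mean components and known variances):

```python
import math

def gauss_pdf(w, var):
    """Zero-mean Gaussian density evaluated at w with variance var."""
    return math.exp(-w * w / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def mmse_mixture(w, a, var1, var2, noise_var):
    """MMSE estimate of a noise-free coefficient x from a noisy
    observation w = x + n, n ~ N(0, noise_var), under the prior
        x ~ a*N(0, var1) + (1-a)*N(0, var2).
    Each component contributes a Wiener-style shrinkage var/(var+noise),
    weighted by the posterior probability of that component given w."""
    m1 = a * gauss_pdf(w, var1 + noise_var)
    m2 = (1.0 - a) * gauss_pdf(w, var2 + noise_var)
    p1 = m1 / (m1 + m2)
    shrink1 = var1 / (var1 + noise_var)
    shrink2 = var2 / (var2 + noise_var)
    return (p1 * shrink1 + (1.0 - p1) * shrink2) * w
```

When the two component variances coincide, this collapses to the classical Wiener filter; the heavy-tailed behavior comes from mixing a small-variance and a large-variance component.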
Optimization of air plasma reconversion of UF6 to UO2 based on thermodynamic calculations
NASA Astrophysics Data System (ADS)
Tundeshev, Nikolay; Karengin, Alexander; Shamanin, Igor
2018-03-01
The possibility of plasma-chemical conversion of depleted uranium-235 hexafluoride (DUHF) in air plasma in the form of gas-air mixtures with hydrogen is considered in the paper. Calculation of the burning parameters of gas-air mixtures is carried out, and the compositions of mixtures for energy-efficient conversion of DUHF in air plasma are determined. Using thermodynamic modeling of the optimal composition of UF6-H2-air mixtures and their burning parameters, the modes for plasma-chemical production of uranium dioxide in the condensed phase are determined. The results of the conducted research can be used to create a technology for plasma-chemical conversion of DUHF in the form of air-gas mixtures with hydrogen.
Notre Dame Geothermal Ionic Liquids Research: Ionic Liquids for Utilization of Geothermal Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brennecke, Joan F.
The goal of this project was to develop ionic liquids for two geothermal energy related applications. The first goal was to design ionic liquids as high-temperature heat transfer fluids. We identified appropriate compounds based on both experiments and molecular simulations. We synthesized the new ILs and measured their thermal stability, storage density, viscosity, and thermal conductivity. We found that the most promising compounds for this application are aminopyridinium bis(trifluoromethylsulfonyl)imide based ILs. We also performed some measurements of the thermal stability of IL mixtures and used molecular simulations to better understand the thermal conductivity of nanofluids (i.e., mixtures of ILs and nanoparticles). We found that the mixtures do not follow ideal mixture theories and that the addition of nanoparticles to ILs may well have a beneficial influence on the thermal and transport properties of IL-based heat transfer fluids. The second goal was to use ionic liquids in geothermally driven absorption refrigeration systems. We performed copious thermodynamic measurements and modeling of ionic liquid/water systems, including modeling of the absorption refrigeration systems and the resulting coefficients of performance. We explored some IL/organic solvent mixtures as candidates for this application, both with experimentation and molecular simulations. We found that the COPs of all of the IL/water systems were higher than that of the conventional system, LiBr/H2O. Thus, IL/water systems appear very attractive for absorption refrigeration applications.
Effect of stirring on the safety of flammable liquid mixtures.
Liaw, Horng-Jang; Gerbaud, Vincent; Chen, Chan-Cheng; Shu, Chi-Min
2010-05-15
Flash point is the most important variable employed to characterize the fire and explosion hazard of liquids. The models developed to date for predicting the flash point of partially miscible mixtures are all based on the assumption of liquid-liquid equilibrium. In real-world environments, however, the liquid-liquid equilibrium assumption does not always hold, as in the collection or accumulation of waste solvents without stirring; complete stirring for a period of time is usually needed to ensure that the liquid phases are in equilibrium. This study investigated the effect of stirring on the flash-point behavior of binary partially miscible mixtures. Two series of partially miscible binary mixtures were employed to elucidate the effect of stirring. The first series was aqueous-organic mixtures, including water+1-butanol, water+2-butanol, water+isobutanol, water+1-pentanol, and water+octane; the second series was mixtures of two flammable solvents, including methanol+decane, methanol+2,2,4-trimethylpentane, and methanol+octane. Results reveal that for binary aqueous-organic solutions the flash-point values of unstirred mixtures were located between those of the completely stirred mixtures and those of the flammable component. Therefore, risk assessment could be based on the flash-point value of the flammable component. However, to ensure safety, it is suggested to stir such mixtures completely before handling to reduce the risk. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Mixture experiment methods in the development and optimization of microemulsion formulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furlanetto, Sandra; Cirri, Marzia; Piepel, Gregory F.
2011-06-25
Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, allowing improvement of their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly water-soluble hypoglycaemic agent) as a model drug. First, the area of stable microemulsion (ME) formation was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil, and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of the ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., the one having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfactant (a mixture of Tween 20 and Transcutol, 1:1 v/v), 5% oil (Labrafac Hydro), and 17% aqueous (water). The stable region of MEs was identified using mixture experiment methods for the first time.
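A constrained three-component mixture region like the one used here (components summing to 100%, each bounded above and below) can be sketched by enumerating blends on a simplex grid; candidate design points are then drawn from this region. The bounds below are illustrative placeholders, not the constraints from the study:

```python
def constrained_mixture_grid(step=0.01,
                             bounds=((0.05, 0.30),   # aqueous fraction
                                     (0.03, 0.15),   # oil fraction
                                     (0.55, 0.90))): # surfactant/cosurfactant
    """Enumerate aqueous/oil/surfactant blends on a simplex grid
    (fractions sum to 1), keeping only points inside the constrained
    experimental region.  Bounds are hypothetical, for illustration."""
    pts = []
    n = round(1.0 / step)
    (alo, ahi), (olo, ohi), (slo, shi) = bounds
    for i in range(n + 1):
        for j in range(n + 1 - i):
            aq, oil = i * step, j * step
            smix = 1.0 - aq - oil
            if alo <= aq <= ahi and olo <= oil <= ohi and slo <= smix <= shi:
                pts.append((round(aq, 2), round(oil, 2), round(smix, 2)))
    return pts
```

An actual 13-run design would pick extreme vertices, edge midpoints, and the centroid of this region rather than the full grid; fitting a Scheffé polynomial to the measured responses then supports the optimization step described above.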
A novel method of language modeling for automatic captioning in TC video teleconferencing.
Zhang, Xiaojia; Zhao, Yunxin; Schopp, Laura
2007-05-01
We are developing an automatic captioning system for teleconsultation video teleconferencing (TC-VTC) in telemedicine, based on large vocabulary conversational speech recognition. In TC-VTC, doctors' speech contains a large number of infrequently used medical terms in spontaneous styles. Due to insufficiency of data, we adopted mixture language modeling, with models trained from several datasets of medical and nonmedical domains. This paper proposes novel modeling and estimation methods for the mixture language model (LM). Component LMs are trained from individual datasets, with class n-gram LMs trained from in-domain datasets and word n-gram LMs trained from out-of-domain datasets, and they are interpolated into a mixture LM. For class LMs, semantic categories are used for class definition on medical terms, names, and digits. The interpolation weights of a mixture LM are estimated by a greedy algorithm of forward weight adjustment (FWA). The proposed mixing of in-domain class LMs and out-of-domain word LMs, the semantic definitions of word classes, as well as the weight-estimation algorithm of FWA are effective on the TC-VTC task. As compared with using mixtures of word LMs with weights estimated by the conventional expectation-maximization algorithm, the proposed methods led to a 21% reduction of perplexity on test sets of five doctors, which translated into improvements of captioning accuracy.
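The paper estimates interpolation weights with a greedy forward weight adjustment (FWA); the conventional expectation-maximization baseline it compares against can be sketched as follows (the held-out probabilities in the usage line are toy values, and real component LMs would be n-gram models rather than fixed numbers):

```python
def em_interpolation_weights(probs, iters=50):
    """EM estimation of linear-interpolation weights for a mixture LM.
    probs[t][k] is the probability component LM k assigns to held-out
    word t; the returned weights maximize the held-out likelihood of the
    interpolated mixture sum_k w[k] * P_k(word)."""
    k = len(probs[0])
    w = [1.0 / k] * k
    for _ in range(iters):
        counts = [0.0] * k
        for row in probs:
            mix = sum(wi * pi for wi, pi in zip(w, row))
            # E-step: posterior probability that each component produced the word
            for i in range(k):
                counts[i] += w[i] * row[i] / mix
        # M-step: renormalize the expected counts
        total = sum(counts)
        w = [c / total for c in counts]
    return w

# Toy held-out stream where the in-domain LM (component 0) is consistently better:
weights = em_interpolation_weights([[0.9, 0.1]] * 20)
```

Each EM iteration is guaranteed not to decrease held-out likelihood; the paper's FWA instead adjusts one weight at a time greedily, which it reports works better on this task.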
Rein, David B
2005-01-01
Objective: To stratify traditional risk-adjustment models by health severity classes in a way that is empirically based, is accessible to policy makers, and improves predictions of inpatient costs. Data Sources: Secondary data created from the administrative claims from all 829,356 children aged 21 years and under enrolled in Georgia Medicaid in 1999. Study Design: A finite mixture model was used to assign child Medicaid patients to health severity classes. These class assignments were then used to stratify both portions of a traditional two-part risk-adjustment model predicting inpatient Medicaid expenditures. Traditional model results were compared with the stratified model using actuarial statistics. Principal Findings: The finite mixture model identified four classes of children: a majority healthy class and three illness classes with increasing levels of severity. Stratifying the traditional two-part risk-adjustment model by health severity classes improved its R2 from 0.17 to 0.25. The majority of additional predictive power resulted from stratifying the second part of the two-part model. Further, the preference for the stratified model was unaffected by months of patient enrollment time. Conclusions: Stratifying health care populations based on measures of health severity is a powerful method to achieve more accurate cost predictions. Insurers who ignore the predictive advances of sample stratification in setting risk-adjusted premiums may create strong financial incentives for adverse selection. Finite mixture models provide an empirically based, replicable methodology for stratification that should be accessible to most health care financial managers. PMID:16033501
Labib, Sarah; Williams, Andrew; Kuo, Byron; Yauk, Carole L; White, Paul A; Halappanavar, Sabina
2017-07-01
The assumption of additivity applied in the risk assessment of environmental mixtures containing carcinogenic polycyclic aromatic hydrocarbons (PAHs) was investigated using transcriptomics. Muta™Mouse were gavaged for 28 days with three doses of eight individual PAHs, two defined mixtures of PAHs, or coal tar, an environmentally ubiquitous complex mixture of PAHs. Microarrays were used to identify differentially expressed genes (DEGs) in lung tissue collected 3 days post-exposure. Cancer-related pathways perturbed by the individual or mixtures of PAHs were identified, and dose-response modeling of the DEGs was conducted to calculate gene/pathway benchmark doses (BMDs). Individual PAH-induced pathway perturbations (the median gene expression changes for all genes in a pathway relative to controls) and pathway BMDs were applied to models of additivity [i.e., concentration addition (CA), generalized concentration addition (GCA), and independent action (IA)] to generate predicted pathway-specific dose-response curves for each PAH mixture. The predicted and observed pathway dose-response curves were compared to assess the sensitivity of different additivity models. Transcriptomics-based additivity calculation showed that IA accurately predicted the pathway perturbations induced by all mixtures of PAHs. CA did not support the additivity assumption for the defined mixtures; however, GCA improved the CA predictions. Moreover, pathway BMDs derived for coal tar were comparable to BMDs derived from previously published coal tar-induced mouse lung tumor incidence data. These results suggest that in the absence of tumor incidence data, individual chemical-induced transcriptomics changes associated with cancer can be used to investigate the assumption of additivity and to predict the carcinogenic potential of a mixture.
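Of the three additivity models compared (CA, GCA, IA), independent action has the simplest closed form: the predicted mixture effect is one minus the product of the unaffected fractions. A minimal sketch (the effect values in the usage line are invented, not the paper's pathway perturbations):

```python
def independent_action(effects):
    """Independent action (response addition): for fractional effects
    e_i in [0, 1] produced by each component at its dose in the mixture,
    the predicted mixture effect is 1 - prod(1 - e_i)."""
    unaffected = 1.0
    for e in effects:
        unaffected *= 1.0 - e
    return 1.0 - unaffected

# Two components producing 20% and 50% effect alone (hypothetical values):
e_mix = independent_action([0.2, 0.5])
```

Concentration addition, by contrast, sums dose fractions scaled by equi-effective doses before reading the effect off a common dose-response curve, which is why it requires invertible dose-response models while IA does not.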
A Generalized Mixture Framework for Multi-label Classification
Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos
2015-01-01
We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd|X) using a product of posterior distributions over components of the output space. Our approach captures different input–output and output–output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods. PMID:26613069
Using Latent Class Analysis to Model Temperament Types.
Loken, Eric
2004-10-01
Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.
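Fitting a latent class model with the EM algorithm, as in this study, alternates computing posterior class memberships (E-step) with weighted re-estimation of class profiles (M-step). A sketch for binary observed items (the random initialization and toy data are illustrative; the study used observational temperament codes and Bayesian posterior predictive checks for model selection):

```python
import random

def lca_em(data, n_classes=2, iters=100, seed=0):
    """EM for a latent class model with binary items: class c has
    item-endorsement probabilities theta[c][j] and weight pi[c].
    Returns (pi, theta, responsibilities)."""
    rng = random.Random(seed)
    n, d = len(data), len(data[0])
    pi = [1.0 / n_classes] * n_classes
    theta = [[rng.uniform(0.25, 0.75) for _ in range(d)]
             for _ in range(n_classes)]
    for _ in range(iters):
        # E-step: posterior class membership for each subject
        resp = []
        for x in data:
            liks = []
            for c in range(n_classes):
                lik = pi[c]
                for j, xj in enumerate(x):
                    lik *= theta[c][j] if xj else (1.0 - theta[c][j])
                liks.append(lik)
            z = sum(liks)
            resp.append([l / z for l in liks])
        # M-step: responsibility-weighted class weights and item profiles
        for c in range(n_classes):
            nc = sum(r[c] for r in resp)
            pi[c] = nc / n
            for j in range(d):
                theta[c][j] = sum(r[c] * x[j] for r, x in zip(resp, data)) / nc
    return pi, theta, resp
```

The responsibilities in `resp` are exactly what the article proposes to multiply-impute, rather than hard-assigning each infant to the class with maximum posterior probability.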
Patil, M P; Sonolikar, R L
2008-10-01
This paper presents a detailed computational fluid dynamics (CFD) based approach for modeling thermal destruction of hazardous wastes in a circulating fluidized bed (CFB) incinerator. The model is based on Eular - Lagrangian approach in which gas phase (continuous phase) is treated in a Eularian reference frame, whereas the waste particulate (dispersed phase) is treated in a Lagrangian reference frame. The reaction chemistry hasbeen modeled through a mixture fraction/ PDF approach. The conservation equations for mass, momentum, energy, mixture fraction and other closure equations have been solved using a general purpose CFD code FLUENT4.5. Afinite volume method on a structured grid has been used for solution of governing equations. The model provides detailed information on the hydrodynamics (gas velocity, particulate trajectories), gas composition (CO, CO2, O2) and temperature inside the riser. The model also allows different operating scenarios to be examined in an efficient manner.
Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an
2013-01-01
Soil water retention parameters are critical for quantifying flow and solute transport in the vadose zone, while the presence of rock fragments remarkably increases their variability. Therefore, a novel method for determining the water retention parameters of soil-gravel mixtures is required. The procedure to generate such a model is based firstly on the determination of the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on the integration of this relationship with former analytical equations of water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine the WRCs of soil-gravel mixtures obtained with a clay loam soil mixed with shale clasts or pebbles in three size groups with various gravel contents. Data showed that the effective saturation of the soil-gravel mixtures with the same kind of gravels within one size group had a linear relation with gravel content, and had a power relation with the bulk density of the samples at any pressure head. Revised formulas for the water retention properties of the soil-gravel mixtures are proposed to establish water retention curved surface models of power-linear functions and power functions. The analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable using either the measured data of a separate gravel size group or those of all three gravel size groups having a large size range. Furthermore, the regression parameters of the curved surfaces for soil-gravel mixtures with a large range of gravel content could be determined from the water retention data of soil-gravel mixtures with two representative gravel contents or bulk densities.
Such revised water retention models are potentially applicable in regional or large scale field investigations of significantly heterogeneous media, where various gravel sizes and different gravel contents are present.
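The reported linear relation between effective saturation and gravel content at a fixed pressure head can be sketched as a set of per-head least-squares fits, giving intercept and slope functions a(h) and b(h) of the curved surface Se(h, g) ≈ a(h) + b(h)·g. The numbers in the usage example are invented, not the paper's measurements:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def effective_saturation_surface(pressure_heads, gravel_contents, se_table):
    """For each pressure head h, regress the effective saturation of the
    soil-gravel mixture linearly on gravel content g, as described in the
    abstract: Se(h, g) ~ a(h) + b(h) * g.  se_table[i][j] holds Se
    measured at pressure_heads[i] and gravel_contents[j]."""
    return {h: fit_linear(gravel_contents, row)
            for h, row in zip(pressure_heads, se_table)}

# Hypothetical measurements at two pressure heads and three gravel contents:
model = effective_saturation_surface(
    [10.0, 100.0], [0.0, 0.2, 0.4],
    [[0.95, 0.85, 0.75], [0.60, 0.52, 0.44]])
```

In the paper's full formulation the power-law dependence on bulk density enters as well, but the key practical point survives in this sketch: two representative gravel contents per head are enough to pin down a(h) and b(h).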
Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an
2013-01-01
Soil water retention parameters are critical to quantify flow and solute transport in vadose zone, while the presence of rock fragments remarkably increases their variability. Therefore a novel method for determining water retention parameters of soil-gravel mixtures is required. The procedure to generate such a model is based firstly on the determination of the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on the integration of this relationship with former analytical equations of water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine WRCs of soil-gravel mixtures obtained with a clay loam soil mixed with shale clasts or pebbles in three size groups with various gravel contents. Data showed that the effective saturation of the soil-gravel mixtures with the same kind of gravels within one size group had a linear relation with gravel contents, and had a power relation with the bulk density of samples at any pressure head. Revised formulas for water retention properties of the soil-gravel mixtures are proposed to establish the water retention curved surface models of the power-linear functions and power functions. The analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable by using either the measured data of separate gravel size group or those of all the three gravel size groups having a large size range. Furthermore, the regression parameters of the curved surfaces for the soil-gravel mixtures with a large range of gravel content could be determined from the water retention data of the soil-gravel mixtures with two representative gravel contents or bulk densities. 
Such revised water retention models are potentially applicable to regional or large-scale field investigations of significantly heterogeneous media, where various gravel sizes and gravel contents are present. PMID:23555040
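The two empirical relations reported above (linear in gravel content, power-law in bulk density) can be sketched numerically. All coefficients and data points below are invented for illustration; only the fitting procedure reflects the approach described:

```python
import numpy as np

# Synthetic illustration of the reported relations: effective saturation
# varies linearly with gravel content and as a power of bulk density at a
# fixed pressure head.  All coefficients here are assumed, not measured.
gravel = np.array([0.0, 0.1, 0.2, 0.3, 0.4])   # gravel mass fraction
se_lin = 0.85 - 0.6 * gravel                   # assumed linear relation
rho = np.array([1.2, 1.3, 1.4, 1.5, 1.6])      # bulk density, g/cm^3
se_pow = 0.9 * rho ** (-1.8)                   # assumed power relation

# Linear fit: S_e = a + b * gravel
b, a = np.polyfit(gravel, se_lin, 1)

# Power fit via log-log regression: S_e = c * rho ** d
d, log_c = np.polyfit(np.log(rho), np.log(se_pow), 1)
c = np.exp(log_c)
print(a, b, c, d)   # recovers the assumed coefficients 0.85, -0.6, 0.9, -1.8
```

The same two regressions, fitted to measured effective saturation data at several pressure heads, are what the curved-surface models combine.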
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-06-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to resolve co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models, including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and that a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.
2009-01-01
A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash, and variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock return data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in both model fit and prediction for the S&P500 index data over the usual normal model. PMID:20730043
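The scale-mixture representation that both the robust modelling and the outlier diagnostics rely on can be sketched in a few lines. The Student-t case (one of the SMN members above) draws a Gamma-distributed mixing variable per observation:

```python
import numpy as np

# Sketch of the scale-mixture-of-normals (SMN) representation: if
# lambda ~ Gamma(nu/2, rate=nu/2) and Z ~ N(0, 1), then Z / sqrt(lambda)
# follows a Student-t distribution with nu degrees of freedom.  Small
# sampled lambda values inflate the variance and flag potential outliers.
rng = np.random.default_rng(0)
nu, n = 5.0, 200_000
lam = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)  # mixing variable
x = rng.standard_normal(n) / np.sqrt(lam)

sample_var = x.var()
print(sample_var)   # close to the theoretical nu / (nu - 2) = 5/3
```

In the MCMC scheme described above, the posterior draws of each observation's mixing parameter play exactly this role: persistently small values mark observations the heavy tail is absorbing.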
Studies of defined mixtures of carcinogenic polycyclic aromatic hydrocarbons (PAH) have shown three major categories of interactions: antagonism, synergism, and additivity depending on the biological model, tissue, route of exposure, and specific PAH. To understand the bases of t...
Turbulent Burning Velocities of Two-Component Fuel Mixtures of Methane, Propane and Hydrogen
NASA Astrophysics Data System (ADS)
Kido, Hiroyuki; Nakahara, Masaya; Hashimoto, Jun; Barat, Dilmurat
In order to clarify the turbulent burning velocity of multi-component fuel mixtures, lean and rich two-component fuel mixtures of methane, propane, and hydrogen were prepared while keeping the laminar burning velocity approximately constant. A distinct difference in the measured turbulent burning velocity at the same turbulence intensity is observed for two-component fuel mixtures with different fuel addition rates, even when the laminar burning velocities are approximately the same. The burning velocities of the lean mixtures change steadily as the addition rate changes, whereas those of the rich mixtures show no such tendency. This trend can be explained qualitatively by the mean local burning velocity, which is estimated by taking into account the preferential diffusion effect of each fuel component. In addition, a model of turbulent burning velocity proposed for single-component fuel mixtures may be applied to two-component fuel mixtures by considering the estimated mean local burning velocity of each fuel.
Marciano, Michael A; Adelman, Jonathan D
2017-03-01
The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. In addition, of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare the classification performance of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in the field of contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed that 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1 s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state-of-the-art model versus the seconds required by a machine-learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation.
Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Nascimento, Luis Alberto Herrmann do
This dissertation presents the implementation and validation of the viscoelastic continuum damage (VECD) model for asphalt mixture and pavement analysis in Brazil. It proposes a simulated damage-to-fatigue cracked area transfer function for the layered viscoelastic continuum damage (LVECD) program framework and defines the model framework's fatigue cracking prediction error for asphalt pavement reliability-based design solutions in Brazil. The research is divided into three main steps: (i) implementation of the simplified viscoelastic continuum damage (S-VECD) model in Brazil (Petrobras) for asphalt mixture characterization, (ii) validation of the LVECD model approach for pavement analysis based on field performance observations, defining a local simulated damage-to-cracked area transfer function for the Fundao Project's pavement test sections in Rio de Janeiro, RJ, and (iii) validation of the Fundao project local transfer function for use throughout Brazil in asphalt pavement fatigue cracking predictions, based on field performance observations of the National MEPDG Project's pavement test sections, thereby validating the proposed framework's prediction capability. For the first step, the S-VECD test protocol, which uses a controlled on-specimen strain mode of loading, was successfully implemented at Petrobras and used to characterize Brazilian asphalt mixtures composed of a wide range of asphalt binders. This research verified that the S-VECD model coupled with the GR failure criterion is accurate for fatigue life predictions of Brazilian asphalt mixtures, even when very different asphalt binders are used. The applicability of the load amplitude sweep (LAS) test for the fatigue characterization of asphalt binders was also checked, and the effects of different asphalt binders on the fatigue damage properties of the asphalt mixtures were investigated.
The LAS test results, modeled according to VECD theory, presented a strong correlation with the asphalt mixtures' fatigue performance. In the second step, the S-VECD test protocol was used to characterize the asphalt mixtures used in the 27 selected Fundao project test sections and subjected to real traffic loading. Thus, the asphalt mixture properties, pavement structure data, traffic loading, and climate were input into the LVECD program for pavement fatigue cracking performance simulations. The simulation results showed good agreement with the field-observed distresses. Then, a damage shift approach, based on the initial simulated damage growth rate, was introduced in order to obtain a unique relationship between the LVECD-simulated shifted damage and the pavement-observed fatigue cracked areas. This correlation was fitted to a power form function and defined as the averaged reduced damage-to-cracked area transfer function. The last step consisted of using the averaged reduced damage-to-cracked area transfer function that was developed in the Fundao project to predict pavement fatigue cracking in 17 National MEPDG project test sections. The procedures for the material characterization and pavement data gathering adopted in this step are similar to those used for the Fundao project simulations. This research verified that the transfer function defined for the Fundao project sections can be used for the fatigue performance predictions of a wide range of pavements all over Brazil, as the predicted and observed cracked areas for the National MEPDG pavements presented good agreement, following the same trends found for the Fundao project pavement sites. Based on the prediction errors determined for all 44 pavement test sections (Fundao and National MEPDG test sections), the proposed framework's prediction capability was determined so that reliability-based solutions can be applied for flexible pavement design. 
It was concluded that the proposed LVECD program framework has very good fatigue cracking prediction capability.
NASA Astrophysics Data System (ADS)
Pascaud, J. M.; Brossard, J.; Lombard, J. M.
1999-09-01
This work presents a simple model, based on molecular collision theory and easily usable in an industrial environment, to predict the evolution of the thermodynamic characteristics of two-phase mixture combustion in a closed or vented vessel. The basic elements of the model were developed for the ignition and combustion of propulsive powders and adapted with appropriate parameters linked to simplified kinetics. A simple representation of the combustion phenomena, based on energy transfers and the action of specific molecules, is presented. The model is generalized to various mixtures such as dust suspensions, liquid fuel drops, and hybrid mixtures composed of dust and a gaseous supply such as methane or propane, in the general case of vented explosions. The pressure venting due to vent breaking is calculated from the thermodynamic characteristics given by the model, taking into account the mass rate of discharge of the different products, deduced from the standard orifice equations. The application conditions determine the fuel ratio of the mixtures used, the nature of the chemical kinetics, and the calculation of a universal set of parameters. The model allows one to study the influence of the fuel concentration and the supply of gaseous additives, the influence of the vessel volume (2400 l ≤ V_b ≤ 250,000 l), and the influence of the venting pressure or the vent area. The first results have been compared with various experimental works available for two-phase mixtures and indicate reasonably accurate predictions.
Mixture EMOS model for calibrating ensemble forecasts of wind speed.
Baran, S; Lerch, S
2016-03-01
Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors Environmetrics Published by John Wiley & Sons Ltd.
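A heavily simplified sketch of the mixture idea follows (not the paper's EMOS model, which regresses all parameters on the ensemble forecasts and optimizes proper scoring rules over a rolling window): with the two component distributions held fixed, only the mixture weight is estimated by maximum likelihood on invented stand-in data:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

# Stand-in "wind speed" observations and fixed TN/LN components; in the
# real model every parameter depends on the ensemble forecasts.
rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 2.0, size=500)

tn = stats.truncnorm(a=(0.0 - 4.0) / 3.0, b=np.inf, loc=4.0, scale=3.0)
ln = stats.lognorm(s=0.6, scale=np.exp(1.2))

def neg_log_score(w):
    """Negative log score of the TN/LN mixture with weight w on obs."""
    dens = w * tn.pdf(obs) + (1.0 - w) * ln.pdf(obs)
    return -np.log(dens).sum()

res = minimize_scalar(neg_log_score, bounds=(0.0, 1.0), method="bounded")
print(res.x)   # estimated mixture weight in [0, 1]
```

The paper estimates the analogous weight (and the component parameters) by optimizing the CRPS or log score rather than a plain training-sample likelihood.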
Traudt, Elizabeth M; Ranville, James F; Meyer, Joseph S
2017-04-18
Multiple metals are usually present in surface waters, sometimes leading to toxicity that currently is difficult to predict due to potentially non-additive mixture toxicity. Previous toxicity tests with Daphnia magna exposed to binary mixtures of Ni combined with Cd, Cu, or Zn demonstrated that Ni and Zn strongly protect against Cd toxicity, but Cu-Ni toxicity is more than additive, and Ni-Zn toxicity is slightly less than additive. To consider multiple metal-metal interactions, we exposed D. magna neonates to Cd, Cu, Ni, or Zn alone and in ternary Cd-Cu-Ni and Cd-Ni-Zn combinations in standard 48 h lethality tests. In these ternary mixtures, two metals were held constant, while the third metal was varied through a series that ranged from nonlethal to lethal concentrations. In Cd-Cu-Ni mixtures, the toxicity was less than additive, additive, or more than additive, depending on the concentration (or ion activity) of the varied metal and the additivity model (concentration-addition or independent-action) used to predict toxicity. In Cd-Ni-Zn mixtures, the toxicity was less than additive or approximately additive, depending on the concentration (or ion activity) of the varied metal but independent of the additivity model. These results demonstrate that complex interactions of potentially competing toxicity-controlling mechanisms can occur in ternary-metal mixtures but might be predicted by mechanistic bioavailability-based toxicity models.
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
Constraints based analysis of extended cybernetic models.
Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M
2015-11-01
The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A mixture model-based approach to the clustering of microarray expression data.
McLachlan, G J; Bean, R W; Peel, D
2002-03-01
This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so mixtures of factor analyzers are exploited to effectively reduce the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues, either consistent with the external classification of the tissues or with background biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/
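The one-versus-two-component screening step can be sketched as follows. This is an illustration only, using scikit-learn's GaussianMixture as a stand-in for the mixtures of t distributions fitted by the actual EMMIX-GENE software:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative sketch: rank genes by the likelihood ratio statistic for
# g=1 versus g=2 mixture components, with Gaussian components standing in
# for the t mixtures used by EMMIX-GENE.  Data are simulated.
rng = np.random.default_rng(1)

def lr_statistic(expr):
    """-2 log likelihood ratio for one versus two components across tissues."""
    x = expr.reshape(-1, 1)
    ll1 = GaussianMixture(1, random_state=0).fit(x).score(x) * len(x)
    ll2 = GaussianMixture(2, n_init=3, random_state=0).fit(x).score(x) * len(x)
    return 2.0 * (ll2 - ll1)

# A "relevant" bimodal gene versus an uninformative unimodal one.
bimodal = np.concatenate([rng.normal(-2, 1, 30), rng.normal(2, 1, 30)])
unimodal = rng.normal(0, 1, 60)

stats = {"bimodal": lr_statistic(bimodal), "unimodal": lr_statistic(unimodal)}
print(stats)   # the bimodal gene receives a much larger statistic
```

In the real pipeline, thresholds on this statistic and on the implied cluster size decide which genes survive before the factor-analyzer mixtures are fitted.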
Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry
Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna
2015-01-01
Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, denoising, feature extraction, and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite the potential advantages of mixture modeling of mass spectra of peptide/protein mixtures highlighted in several papers, along with some preliminary results, the mixture modeling approach has so far not been developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the fragment mixture models are then aggregated to form the mixture model of the whole spectrum. We compare the proposed algorithm to existing algorithms for peak detection and demonstrate the improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the algorithm to real proteomic datasets of low and high resolution. PMID:26230717
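The partition-then-aggregate idea can be sketched on a toy spectrum. This illustrates the general strategy (split at near-zero regions, decompose each fragment, pool the components), not the authors' published algorithm:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy spectrum with three Gaussian peaks; all positions/widths are invented.
rng = np.random.default_rng(2)
mz = np.linspace(0.0, 100.0, 2001)

def gauss(center, width, height):
    return height * np.exp(-0.5 * ((mz - center) / width) ** 2)

spectrum = gauss(20, 1.0, 5) + gauss(24, 1.2, 3) + gauss(70, 0.8, 4)

# 1. Partition: contiguous runs where intensity exceeds a small threshold.
mask = spectrum > 0.05
boundaries = np.flatnonzero(np.diff(mask.astype(int))) + 1
runs = np.split(np.arange(mz.size), boundaries)
fragments = [idx for idx in runs if mask[idx[0]]]

# 2. Decompose each fragment: draw m/z samples with probability proportional
#    to intensity, then fit a small Gaussian mixture to the samples.
peak_means = []
for idx in fragments:
    p = spectrum[idx] / spectrum[idx].sum()
    samples = rng.choice(mz[idx], size=4000, p=p).reshape(-1, 1)
    k = 2 if (mz[idx].max() - mz[idx].min()) > 6 else 1   # crude model size
    gm = GaussianMixture(k, n_init=3, random_state=0).fit(samples)
    peak_means.extend(sorted(gm.means_.ravel()))

# 3. Aggregate the per-fragment components into one peak list.
print(np.round(peak_means, 1))   # expected near [20, 24, 70]
```

The payoff of partitioning is that each fragment needs only a handful of components, which keeps the EM fits small and fast even for whole spectra.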
Carbon tetrachloride (CCl4) and trichloroethylene (TCE) are hepatotoxic volatile organic compounds (VOCs) and environmental contaminants. Previous physiologically based pharmacokinetic (PBPK) models describe the kinetics ofindividual chemical disposition and metabolic clearance fo...
BiomeNet: A Bayesian Model for Inference of Metabolic Divergence among Microbial Communities
Chipman, Hugh; Gu, Hong; Bielawski, Joseph P.
2014-01-01
Metagenomics yields enormous numbers of microbial sequences that can be assigned a metabolic function. Using such data to infer community-level metabolic divergence is hindered by the lack of a suitable statistical framework. Here, we describe a novel hierarchical Bayesian model, called BiomeNet (Bayesian inference of metabolic networks), for inferring differential prevalence of metabolic subnetworks among microbial communities. To infer the structure of community-level metabolic interactions, BiomeNet applies a mixed-membership modelling framework to enzyme abundance information. The basic idea is that the mixture components of the model (metabolic reactions, subnetworks, and networks) are shared across all groups (microbiome samples), but the mixture proportions vary from group to group. Through this framework, the model can capture nested structures within the data. BiomeNet is unique in modeling each metagenome sample as a mixture of complex metabolic systems (metabosystems). The metabosystems are composed of mixtures of tightly connected metabolic subnetworks. BiomeNet differs from other unsupervised methods by allowing researchers to discriminate groups of samples through the metabolic patterns it discovers in the data, and by providing a framework for interpreting them. We describe a collapsed Gibbs sampler for inference of the mixture weights under BiomeNet, and we use simulation to validate the inference algorithm. Application of BiomeNet to human gut metagenomes revealed a metabosystem with greater prevalence among inflammatory bowel disease (IBD) patients. Based on the discriminatory subnetworks for this metabosystem, we inferred that the community is likely to be closely associated with the human gut epithelium, resistant to dietary interventions, and interfere with human uptake of an antioxidant connected to IBD. 
Because this metabosystem has a greater capacity to exploit host-associated glycans, we speculate that IBD-associated communities might arise from opportunist growth of bacteria that can circumvent the host's nutrient-based mechanism for bacterial partner selection. PMID:25412107
NASA Astrophysics Data System (ADS)
Larabi, Mohamed Aziz; Mutschler, Dimitri; Mojtabi, Abdelkader
2016-06-01
Our present work focuses on the coupling between thermal diffusion and convection in order to improve the thermal gravitational separation of mixture components. The separation phenomenon was studied in a porous medium contained in vertical columns. We performed analytical and numerical simulations to corroborate the experimental measurements of the thermal diffusion coefficients of the ternary mixture n-dodecane, isobutylbenzene, and tetralin obtained in microgravity on the International Space Station. Our approach corroborates the existing data published in the literature. We show that it is possible to quantify and optimize the species separation for ternary mixtures, and we checked, for ternary mixtures, the validity of the "forgotten effect" hypothesis established for binary mixtures by Furry, Jones, and Onsager. Two complete and distinct analytical resolution methods were used to describe the separation in terms of the Lewis numbers, the separation ratios, the cross-diffusion coefficients, and the Rayleigh number. The analytical model is based on the parallel flow approximation. In order to validate this model, a numerical simulation was performed using the finite element method. From our new approach to vertical separation columns, new relations for the mass fraction gradients and the optimal Rayleigh number for each component of the ternary mixture were obtained.
Initiation and structures of gaseous detonation
NASA Astrophysics Data System (ADS)
Vasil'ev, A. A.; Vasiliev, V. A.
2018-03-01
The analysis of the initiation of a detonation wave (DW) and the emergence of a multi-front structure of the DW front are presented. It is shown that the structure of the DW arises spontaneously at the stage when the wave is strongly overdriven. The hypothesis of gradual enhancement of small perturbations on an initially smooth initiating blast wave, traditionally used in the numerical simulation of multi-front detonation, does not agree with the experimental data. The instability of the DW is due to the chemical energy release Q of the combustible mixture. A technique for determining the Q-value of a mixture is proposed, based on reconstructing the trajectory of the expanding wave using the strong explosion model. The wave trajectory at the critical initiation of a multi-front detonation in a combustible mixture is compared with the trajectory of an explosive wave from the same initiator in an inert mixture whose gas-dynamic parameters are equivalent to those of the combustible mixture. The energy release of a mixture is defined as the difference between the joint energy release of the initiator and the fuel mixture during critical initiation and the energy release of the initiator when the blast wave is excited in an inert mixture. Observable deviations of the experimental profile of Q from existing model representations were found.
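The strong explosion model used for the trajectory reconstruction is the classical Sedov-Taylor blast-wave law. A synthetic sketch of recovering the deposited energy from a measured R(t) trajectory follows; all numerical values are invented for illustration:

```python
import numpy as np

# Sedov-Taylor strong-explosion trajectory: R(t) = xi * (E * t**2 / rho)**(1/5).
# Given a measured trajectory, the deposited energy E follows from a log-log
# fit; xi, rho, and E_true below are assumed, not experimental.
xi, rho, E_true = 1.0, 1.2, 5.0e4          # -, kg/m^3, J
t = np.linspace(1e-5, 1e-3, 50)            # s
R = xi * (E_true * t**2 / rho) ** 0.2      # m

# log R = 0.2*log(E/rho) + 0.4*log t  (for xi = 1); the intercept yields E.
slope, intercept = np.polyfit(np.log(t), np.log(R), 1)
E_est = rho * np.exp(intercept / 0.2)
print(slope, E_est)   # slope ~ 0.4, E_est ~ 5e4 J
```

Applying this reconstruction to the combustible and inert cases and subtracting, as described above, isolates the chemical energy release Q.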
NASA Astrophysics Data System (ADS)
Baadj, S.; Harrache, Z.; Belasri, A.
2013-12-01
The aim of this work is to highlight, through numerical modeling, the chemical and electrical characteristics of a xenon chloride mixture in a XeCl* (308 nm) excimer lamp created by a dielectric barrier discharge. A temporal model, based on the Xe/Cl2 mixture chemistry, the circuit equations, and the Boltzmann equation, is constructed. The effects of operating voltage, Cl2 percentage in the Xe/Cl2 gas mixture, dielectric capacitance, and gas pressure on 308-nm photon generation, under typical experimental operating conditions, have been investigated and discussed. The importance of charged and excited species, including the major electronic and ionic processes, is also demonstrated. The present calculations show clearly that the model predicts the optimal operating conditions and describes the electrical and chemical properties of the XeCl* exciplex lamp.
NASA Astrophysics Data System (ADS)
Choiri, S.; Ainurofiq, A.
2018-03-01
Drug release from a montmorillonite (MMT) matrix is a complex process controlled by the swelling of MMT and the interaction between drug and MMT. The aim of this research was to identify a suitable model of the drug release mechanism from MMT and from its binary mixture with a hydrophilic polymer in controlled-release formulations, based on a compartmental modelling approach. Theophylline was used as a model drug and incorporated, by a kneading method, into MMT and into a binary mixture with hydroxypropyl methylcellulose (HPMC) as the hydrophilic polymer. A dissolution test was performed, and the modelling of drug release was assisted by the WinSAAM software. Two models were proposed, based on the swelling capability and basal spacing compartments of MMT. The models were evaluated on goodness of fit and statistical parameters, and validated by a cross-validation technique. Drug release from the MMT matrix is governed by a burst release of unloaded drug, the swelling ability and basal spacing of the MMT compartments, and the equilibrium between the basal-spacing and swelling compartments. Furthermore, the addition of HPMC to the MMT system altered the presence of the swelling compartment and the equilibrium between the swelling and basal-spacing compartments. In addition, the hydrophilic polymer reduced the burst release of unloaded drug.
A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models
Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung
2015-01-01
Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with that of Fedorov's algorithm, a popular algorithm for generating optimal designs, the cocktail algorithm, and the recent algorithm proposed in [1]. PMID:26091237
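The basic velocity/position update that ProjPSO builds on can be sketched with a generic swarm on a toy objective. The projection step and the design criteria of the paper are not reproduced here; inertia and acceleration constants are conventional choices, not the paper's:

```python
import numpy as np

# Bare-bones particle swarm optimizer on a toy objective (sphere function).
def pso(f, dim, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()            # global best
    w, c1, c2 = 0.7, 1.5, 1.5                         # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best_x, best_f = pso(lambda z: np.sum((z - 1.0) ** 2), dim=3)
print(best_x, best_f)   # converges toward [1, 1, 1]
```

In the design-of-experiments setting, each particle would encode a candidate design, and the extra projection step keeps the mixture weights on the simplex after every update.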
Cowell, Robert G
2018-05-04
Current models for single-source and mixture samples, and the probabilistic genotyping software based on them used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
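The core computational trick, evaluating a probability generating function (pgf) at roots of unity and inverting with a discrete Fourier transform, can be sketched with a binomial pgf as a known test case; the amplification-model pgfs in the paper are more elaborate:

```python
import math
import numpy as np

# Recover a pmf from its pgf G(z) = sum_k P(k) z^k: evaluating G at the
# points z_j = exp(-2*pi*i*j/N) gives exactly the DFT of the pmf, so one
# inverse FFT recovers the probabilities.  Binomial(n, p) as a test case.
n, p = 12, 0.3
N = 32                                        # transform length, N > n
z = np.exp(-2j * np.pi * np.arange(N) / N)    # evaluation points on unit circle
G = (1 - p + p * z) ** n                      # binomial pgf G(z) = (1-p+pz)^n
pmf = np.fft.ifft(G).real[: n + 1]            # inverse DFT recovers the pmf

exact = np.array([math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)])
print(np.abs(pmf - exact).max())              # agreement to machine precision
```

The same recipe applies to any pgf that is cheap to evaluate but has no closed-form pmf, which is exactly the situation for compound collection-and-amplification models.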
Review: Modelling chemical kinetics and convective heating in giant planet entries
NASA Astrophysics Data System (ADS)
Reynier, Philippe; D'Ammando, Giuliano; Bruno, Domenico
2018-01-01
A review of the existing chemical kinetics models for H2/He mixtures and related transport and thermodynamic properties is presented as a prerequisite for the development of innovative models based on the state-to-state approach. A survey of the available results obtained during the mission preparation and post-flight analyses of the Galileo mission has been undertaken, and a computational matrix has been derived. Different chemical kinetics schemes for hydrogen/helium mixtures have been applied to numerical simulations of the selected points along the entry trajectory. First, a reaction scheme based on literature data was set up for computing the flow-field around the probe at high altitude, and comparisons with existing numerical predictions were performed. Then, a macroscopic model derived from a state-to-state model was constructed and incorporated into a CFD code. Comparisons with existing numerical results from the literature were performed, as well as cross-check comparisons between the predictions of the different models, in order to evaluate the potential of innovative chemical kinetics models based on the state-to-state approach.
Rodea-Palomares, Ismael; González-Pleiter, Miguel; Martín-Betancor, Keila; Rosal, Roberto; Fernández-Piñas, Francisca
2015-01-01
Understanding the effects of exposure to chemical mixtures is a common goal of pharmacology and ecotoxicology. In risk assessment-oriented ecotoxicology, defining the scope of application of additivity models has received utmost attention in the last 20 years, since they potentially allow one to predict the effect of any chemical mixture relying on individual chemical information only. The gold standard for additivity in ecotoxicology has proven to be Loewe additivity, which originated the so-called Concentration Addition (CA) additivity model. In pharmacology, the search for interactions or deviations from additivity (synergism and antagonism) has similarly captured the attention of researchers over the last 20 years and has resulted in the definition and application of the Combination Index (CI) Theorem. CI is based on Loewe additivity, but focused on the identification and quantification of synergism and antagonism. Although additivity models have demonstrated surprisingly good predictive power in chemical mixture risk assessment, concerns remain due to the occurrence of unpredictable synergism or antagonism in certain experimental situations. In the present work, we summarize the parallel history of development of the CA, Independent Action (IA), and CI models. We also summarize the applicability of these concepts in ecotoxicology, how their information may be integrated, and the possibility of predicting synergism. Inside the box, the main question remaining is whether it is worthwhile to consider departures from additivity in mixture risk assessment and how to predict interactions among certain mixture components. Outside the box, the main question is whether the results observed under the experimental constraints imposed by fractional approaches are a bona fide reflection of what would be expected from chemical mixtures in real-world circumstances. PMID:29051468
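The two additivity references discussed above reduce to compact formulas. A toy two-chemical example follows, with invented EC50s and a common Hill slope (a simplifying assumption; CA only requires matching effect levels, and IA makes no such demand):

```python
import numpy as np

# Hypothetical two-chemical mixture with log-logistic dose-response curves.
ec50 = np.array([1.0, 4.0])      # individual EC50s (assumed units)
slope = 2.0                      # common Hill slope (simplifying assumption)

def effect(c, ec50):
    """Fraction affected at concentration c for a log-logistic curve."""
    return c**slope / (c**slope + ec50**slope)

# Concentration Addition (Loewe): the mixture EC at effect level x satisfies
#   sum_i (p_i * EC_mix) / EC_i(x) = 1  ->  EC_mix = 1 / sum_i (p_i / EC_i)
p = np.array([0.5, 0.5])                          # mixture fractions
ec50_mix_ca = 1.0 / np.sum(p / ec50)

# Independent Action (Bliss): E_mix = 1 - prod_i (1 - E_i)
conc = 2.0                                        # total mixture concentration
e_ia = 1.0 - np.prod(1.0 - effect(p * conc, ec50))

print(ec50_mix_ca)   # 1.6 for EC50s of 1 and 4 at equal fractions
print(e_ia)
```

Observed mixture effects falling below the IA prediction or requiring more than the CA-predicted concentration indicate antagonism; the converse indicates synergism, which is what the CI framework quantifies.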
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models, or the use of external information via informative priors or penalized likelihoods may help. © 2017 by the Ecological Society of America.
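As a concrete reference for the model class screened in this study, the following sketch evaluates the binomial N-mixture marginal likelihood for a single site with a Poisson abundance mixture. The truncation bound K and all parameter values are our own illustrative choices, not the paper's code.

```python
# Minimal sketch of the binomial N-mixture marginal likelihood for one site,
# with a Poisson mixture over latent abundance. Illustrative only.
import math

def pois_pmf(n, lam):
    # Log-space evaluation avoids overflow for large n.
    return math.exp(n * math.log(lam) - lam - math.lgamma(n + 1))

def binom_pmf(y, n, p):
    return math.comb(n, y) * p**y * (1.0 - p)**(n - y)

def site_likelihood(counts, lam, p, K=200):
    """Marginal likelihood of repeated counts y_1..y_T at one site:
    sum over latent abundance N of Pois(N | lam) * prod_t Binom(y_t | N, p)."""
    total = 0.0
    for N in range(max(counts), K + 1):
        total += pois_pmf(N, lam) * math.prod(binom_pmf(y, N, p) for y in counts)
    return total

# Sanity check against a closed form: with a single visit, the count is a
# thinned Poisson, so P(y = 0) = exp(-lam * p).
lam, p = 2.0, 0.3
print(abs(site_likelihood([0], lam, p) - math.exp(-lam * p)) < 1e-9)  # True
```

The thinned-Poisson identity used in the sanity check is also the root of the identifiability question the abstract raises: with a single visit, only the product lam * p is estimable.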
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
2017-01-01
A color algebra refers to a system for computing sums and products of colors, analogous to additive and subtractive color mixtures. The difficulty addressed here is the fact that, because of metamerism, we cannot know with certainty the spectrum that produced a particular color solely on the basis of sensory data. Knowledge of the spectrum is not required to compute additive mixture of colors, but is critical for subtractive (multiplicative) mixture. Therefore, we cannot predict with certainty the multiplicative interactions between colors based solely on sensory data. There are two potential applications of a color algebra: first, to aid modeling phenomena of human visual perception, such as color constancy and transparency; and, second, to provide better models of the interactions of lights and surfaces for computer graphics rendering.
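The asymmetry described above can be made concrete numerically. The sketch below is our own toy construction (the sensor matrix, spectra, and filter are all invented): two metameric spectra give identical additive mixtures in sensor space, yet different subtractive (multiplicative) mixtures.

```python
# Toy demonstration: additive mixture is computable from sensor responses
# alone, while multiplicative mixture is not, because metamers diverge once
# a filter multiplies the spectra. All quantities are invented.

def respond(S, x):
    """Sensor responses: one inner product per sensor over wavelength bins."""
    return [sum(s * xi for s, xi in zip(row, x)) for row in S]

# Three sensors over four wavelength bins; every row of S is orthogonal to v,
# so v lies in the null space of S.
S = [[1, 1, 0, 0],
     [0, 0, 1, 1],
     [1, 0, 1, 0]]
v = [1, -1, -1, 1]

a = [1.0, 0.75, 0.5, 0.875]                      # spectrum a
b = [ai + 0.125 * vi for ai, vi in zip(a, v)]    # metamer: same responses as a

c = [0.25, 0.5, 0.25, 0.5]                       # second light (additive mix)
f = [0.5, 0.25, 0.5, 0.25]                       # filter (multiplicative mix)

add_a = respond(S, [ai + ci for ai, ci in zip(a, c)])
add_b = respond(S, [bi + ci for bi, ci in zip(b, c)])
mul_a = respond(S, [ai * fi for ai, fi in zip(a, f)])
mul_b = respond(S, [bi * fi for bi, fi in zip(b, f)])

print(respond(S, a) == respond(S, b))  # True: a and b are metamers
print(add_a == add_b)                  # True: additive mixtures match
print(mul_a == mul_b)                  # False: subtractive mixtures differ
```

All the values are dyadic fractions, so the float comparisons above are exact; the last line is precisely the abstract's claim that multiplicative interactions cannot be predicted from sensory data alone.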
Condensation of binary mixtures on horizontal tubes
NASA Astrophysics Data System (ADS)
Büchner, A.; Reif, A.; Rehfeldt, S.; Klein, H.
2017-12-01
The two most common models to describe the condensation of binary mixtures are the equilibrium model by Silver (Trans Inst Chem Eng 25:30-42, 1947) and the film model by Colburn and Drew (Transactions of the American Institute of Chemical Engineers 33:197-215, 1937), which is stated by Webb et al. (Int J Heat Mass Transf 39:3147-3156, 1996) to be more accurate. The film model describes the outer heat transfer coefficient by subdividing it into two separate resistances against the heat transfer. The resistance of the liquid condensate film on the tube can be calculated with equations for the condensation of pure substances for the analogous flow pattern and geometry using the property data of the mixture. The resistance in the gas phase can be described by a thermodynamic parameter Z and the single-phase heat transfer coefficient α_G. In this work, measurements of the condensation of the binary mixtures n-pentane/iso-octane and iso-propanol/water on horizontal tubes under free convection are carried out. The obtained results are compared with the film model by Colburn and Drew (Transactions of the American Institute of Chemical Engineers 33:197-215, 1937). The comparison shows a rather large deviation between the theoretical model and the experimental results. To improve the prediction quality, a new model based on dimensionless numbers is proposed, which describes the experimental results of this work significantly better than the film model.
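The series-resistance bookkeeping of the film model can be sketched compactly. The function and numbers below are a generic illustration of the standard Silver/Bell-Ghaly-style weighting of the gas-phase resistance by Z, not the authors' measurements or their proposed correlation.

```python
# Sketch of the two-resistance film-model bookkeeping for mixture
# condensation: the gas-phase resistance is weighted by the thermodynamic
# parameter Z. Numbers are invented for illustration.

def outer_htc(alpha_film, alpha_gas, Z):
    """Effective outer coefficient from two resistances in series:
    1/alpha_out = 1/alpha_film + Z/alpha_gas."""
    return 1.0 / (1.0 / alpha_film + Z / alpha_gas)

# With Z = 0 (no gas-phase limitation) the pure-substance film value is
# recovered; a finite Z pulls the outer coefficient down sharply.
print(round(outer_htc(alpha_film=2000.0, alpha_gas=100.0, Z=0.0), 6))   # 2000.0
print(round(outer_htc(alpha_film=2000.0, alpha_gas=100.0, Z=0.3), 1))   # 285.7
```

The second print shows why a modest gas-phase resistance can dominate the overall coefficient, which is consistent with the large deviations from pure-substance behavior reported above.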
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.
ERIC Educational Resources Information Center
Schuster, Mariah L.; Peterson, Karl P.; Stoffregen, Stacey A.
2018-01-01
This two-period undergraduate laboratory experiment involves the synthesis of a mixture of isomeric unknowns, isolation of the mixture by means of distillation, and characterization of the two products primarily by NMR spectroscopy (1D and 2D) supported with IR spectroscopy and GC-MS techniques. Subsequent calculation and examination of the…
Quantitative analysis of multi-component gas mixture based on AOTF-NIR spectroscopy
NASA Astrophysics Data System (ADS)
Hao, Huimin; Zhang, Yong; Liu, Junhua
2007-12-01
Near-infrared (NIR) spectroscopy analysis technology has attracted much attention and found wide application in many domains in recent years because of its remarkable advantages. Until now, however, NIR spectrometers have mainly been used for liquid and solid analysis. In this paper, a new quantitative analysis method for gas mixtures using a new-generation NIR spectrometer is explored. To collect the NIR spectra of gas mixtures, a vacuumable gas cell was designed and fitted to a Luminar 5030-731 Acousto-Optic Tunable Filter (AOTF)-NIR spectrometer. Standard gas samples of methane (CH4), ethane (C2H6) and propane (C3H8) were diluted with high-purity nitrogen via precision volumetric gas flow controllers to dynamically obtain gas mixture samples of different concentrations. The gas mixtures were injected into the gas cell and spectra at wavelengths between 1100 nm and 2300 nm were collected. The feature components extracted from the gas mixture spectra by Partial Least Squares (PLS) were used as the inputs of a Support Vector Regression (SVR) machine to establish the quantitative analysis model. The effectiveness of the model was tested with the samples of the prediction set. The prediction Root Mean Square Error (RMSE) for CH4, C2H6 and C3H8 is 1.27%, 0.89%, and 1.20%, respectively, when the concentrations of the component gases are over 0.5%. This shows that the AOTF-NIR spectrometer with a gas cell can be used for gas mixture analysis, and that PLS combined with SVR performs well in NIR spectroscopy analysis. This paper provides a basis for extending NIR spectroscopy analysis to gas detection.
Wang, Cheng; He, Lidong; Li, Da-Wei; Bruschweiler-Li, Lei; Marshall, Alan G; Brüschweiler, Rafael
2017-10-06
Metabolite identification in metabolomics samples is a key step that critically impacts downstream analysis. We recently introduced the SUMMIT NMR/mass spectrometry (MS) hybrid approach for the identification of the molecular structure of unknown metabolites based on the combination of NMR, MS, and combinatorial cheminformatics. Here, we demonstrate the feasibility of the approach for an untargeted analysis of both a model mixture and E. coli cell lysate based on 2D/3D NMR experiments in combination with Fourier transform ion cyclotron resonance MS and MS/MS data. For 19 of the 25 model metabolites, SUMMIT yielded complete structures that matched those in the mixture independent of database information. Of those, seven top-ranked structures matched those in the mixture, and four of those were further validated by positive ion MS/MS. For five metabolites, not part of the 19 metabolites, correct molecular structural motifs could be identified. For E. coli, SUMMIT MS/NMR identified 20 previously known metabolites with three or more 1 H spins independent of database information. Moreover, for 15 unknown metabolites, molecular structural fragments were determined consistent with their spin systems and chemical shifts. By providing structural information for entire metabolites or molecular fragments, SUMMIT MS/NMR greatly assists the targeted or untargeted analysis of complex mixtures of unknown compounds.
A Just-in-Time Learning based Monitoring and Classification Method for Hyper/Hypocalcemia Diagnosis.
Peng, Xin; Tang, Yang; He, Wangli; Du, Wenli; Qian, Feng
2017-01-20
This study focuses on the classification and pathological status monitoring of hyper-/hypocalcemia in the calcium regulatory system. By utilizing the Independent Component Analysis (ICA) mixture model, samples from healthy patients are collected, diagnosed, and subsequently classified according to their underlying behaviors, characteristics, and mechanisms. Then, Just-in-Time Learning (JITL) is employed to estimate the diseased status dynamically. Within JITL, to construct an appropriate similarity index for identifying relevant datasets, a novel similarity index based on the ICA mixture model is proposed in this paper to improve online model quality. The validity and effectiveness of the proposed approach have been demonstrated by applying it to the calcium regulatory system under various hypocalcemic and hypercalcemic diseased conditions.
Mixture experiment methods in the development and optimization of microemulsion formulations.
Furlanetto, S; Cirri, M; Piepel, G; Mennini, N; Mura, P
2011-06-25
Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, improving their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly water-soluble hypoglycaemic agent) as a model drug. First, the region of stable microemulsion (ME) formation was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfactant (a mixture of Tween 20 and Transcutol, 1:1, v/v), 5% oil (Labrafac Hydro) and 17% aqueous phase (water). The stable region of MEs was identified using mixture experiment methods for the first time. Copyright © 2011 Elsevier B.V. All rights reserved.
Screening and clustering of sparse regressions with finite non-Gaussian mixtures.
Zhang, Jian
2017-06-01
This article proposes a method to address the problem that can arise when covariates in a regression setting are not Gaussian, which may give rise to approximately mixture-distributed errors, or when a true mixture of regressions produced the data. The method begins with non-Gaussian mixture-based marginal variable screening, followed by fitting a full but relatively smaller mixture regression model to the selected data with the help of a new penalization scheme. Under certain regularity conditions, the new screening procedure is shown to possess a sure screening property even when the population is heterogeneous. We further prove that there exists an elbow point in the associated scree plot which results in a consistent estimator of the set of active covariates in the model. By simulations, we demonstrate that the new procedure can substantially improve the performance of existing procedures in the context of variable screening and data clustering. By applying the proposed procedure to motif data analysis in molecular biology, we demonstrate that the new method holds promise in practice. © 2016, The International Biometric Society.
Statistical Mechanical Theory of Coupled Slow Dynamics in Glassy Polymer-Molecule Mixtures
NASA Astrophysics Data System (ADS)
Zhang, Rui; Schweizer, Kenneth
The microscopic Elastically Collective Nonlinear Langevin Equation theory of activated relaxation in one-component supercooled liquids and glasses is generalized to polymer-molecule mixtures. The key idea is to account for dynamic coupling between molecule and polymer segment motion. For describing the molecule hopping event, a temporal causality condition is formulated to self-consistently determine a dimensionless degree of matrix distortion relative to the molecule jump distance based on the concept of coupled dynamic free energies. Implementation for real materials employs an established Kuhn sphere model of the polymer liquid and a quantitative mapping to a hard particle reference system guided by the experimental equation of state. The theory makes predictions for the mixture dynamic shear modulus, activated relaxation time and diffusivity of both species, and mixture glass transition temperature as a function of molecule-Kuhn segment size ratio and attraction strength, composition and temperature. Model calculations illustrate the dynamical behavior in three distinct mixture regimes (fully miscible, bridging, clustering) controlled by the molecule-polymer interaction or chi-parameter. Applications to specific experimental systems will be discussed.
Microsiemens or Milligrams: Measures of Ionic Mixtures ...
In December of 2016, EPA released the Draft Field-Based Methods for Developing Aquatic Life Criteria for Specific Conductivity for public comment. Once final, states and authorized tribes may use these methods to derive field-based ecoregional Aquatic Life Ambient Water Quality Criteria (AWQC) for specific conductivity (SC) in flowing waters. The methods provide flexible approaches for developing science-based SC criteria that reflect ecoregional or state-specific factors. The concentration of a dissolved salt mixture can be measured in a number of ways, including measurement of total dissolved solids, freezing point depression, refractive index, density, or the sum of the concentrations of individually measured ions. For the draft method, SC was selected as the measure because SC is a measure of all ions in the mixture; the measurement technology is fast, inexpensive, and accurate; and it measures only dissolved ions. When developing water quality criteria for major ions, some stakeholders may prefer to identify the ionic constituents as a measure of exposure instead of SC. A field-based method was used to derive example chronic and acute water quality criteria for SC and for two anions (bicarbonate plus sulfate, [HCO3−] + [SO42−] in mg/L) that represent common ion mixtures in streams. These two anions are sufficient to model the ion mixture and SC (R2 = 0.94). Using [HCO3−] + [SO42−] does not imply that these two anions are the
Vakanski, A; Ferguson, JM; Lee, S
2016-01-01
Objective The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undergoing physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient's physician with recommendations for improvement. Methods The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to an exercise prescribed by a physiotherapist to a patient, recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. 
The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons. PMID:28111643
NASA Astrophysics Data System (ADS)
Robati, Masoud
This doctoral program focuses on evaluating and improving the rutting resistance of micro-surfacing mixtures, an area in which many research problems remain to be solved. The main objective of this Ph.D. program is to experimentally and analytically study and improve the rutting resistance of micro-surfacing mixtures. The major aspects investigated are presented as follows: 1) evaluation of a modification of current micro-surfacing mix design procedures: on the basis of this effort, a new mix design procedure is proposed for type III micro-surfacing mixtures as rut-fill materials on the road surface. Unlike the current mix design guidelines and specifications, the new mix design is capable of selecting the optimum mix proportions for micro-surfacing mixtures; 2) evaluation of test methods and selection of aggregate grading for type III application of micro-surfacing: within the terms of this study, a new specification for selection of aggregate grading for type III application of micro-surfacing is proposed; 3) evaluation of repeatability and reproducibility of micro-surfacing mixture design tests: in this study, limits for repeatability and reproducibility of micro-surfacing mix design tests are presented; 4) a new conceptual model for the filler stiffening effect on asphalt mastic of micro-surfacing: a new model is proposed which is able to establish limits for minimum and maximum filler concentrations in the micro-surfacing mixture based only on the filler's key physical and chemical properties; 5) incorporation of reclaimed asphalt pavement (RAP) and post-fabrication asphalt shingles (RAS) in micro-surfacing mixtures: the effectiveness of the newly developed mix design procedure for micro-surfacing mixtures is further validated using recycled materials. 
The results establish limits for the amounts of RAP and RAS that can be used in micro-surfacing mixtures; 6) new colored micro-surfacing formulations with improved durability and performance: a significant improvement of around 45% in the rutting resistance of colored and conventional micro-surfacing mixtures is achieved by employing low-penetration-grade bitumen polymer-modified asphalt emulsion stabilized using nanoparticles.
A new hybrid double divisor ratio spectra method for the analysis of ternary mixtures
NASA Astrophysics Data System (ADS)
Youssef, Rasha M.; Maher, Hadir M.
2008-10-01
A new spectrophotometric method was developed for the simultaneous determination of ternary mixtures, without prior separation steps. The method is based on convolution of the double divisor ratio spectra, obtained by dividing the absorption spectrum of the ternary mixture by a standard spectrum of two of the three compounds in the mixture, using combined trigonometric Fourier functions. The magnitude of the Fourier function coefficients, at either maximum or minimum points, is related to the concentration of each drug in the mixture. The mathematical explanation of the procedure is illustrated. The method was applied for the assay of a model mixture consisting of isoniazid (ISN), rifampicin (RIF) and pyrazinamide (PYZ) in synthetic mixtures, commercial tablets and human urine samples. The developed method was compared with the double divisor ratio spectra derivative method (DDRD) and the derivative ratio spectra zero-crossing method (DRSZ). Linearity, accuracy, precision, limits of detection, limits of quantitation, and other aspects of analytical validation are included in the text.
A mixture toxicity approach to predict the toxicity of Ag decorated ZnO nanomaterials.
Azevedo, S L; Holz, T; Rodrigues, J; Monteiro, T; Costa, F M; Soares, A M V M; Loureiro, S
2017-02-01
Nanotechnology is a rising field and nanomaterials can now be found in a vast variety of products with different chemical compositions, sizes and shapes. New nanostructures combining different nanomaterials are being developed because of their enhanced characteristics compared with the nanomaterials alone. In the present study, the toxicity of a nanostructure composed of a ZnO nanomaterial with Ag nanomaterials on its surface (designated the ZnO/Ag nanostructure) was assessed using the model organism Daphnia magna, and its toxicity was predicted from the toxicity of the single components (Zn and Ag). For that purpose, ZnO and Ag nanomaterials as single components, along with their mixture prepared in the laboratory, were compared in terms of toxicity to the ZnO/Ag nanostructure. Toxicity was assessed by immobilization and reproduction tests. A mixture toxicity approach was carried out using the conceptual model of Concentration Addition as a starting point. The laboratory mixture of both nanomaterials showed that toxicity was dependent on the doses of ZnO and Ag used (immobilization) or presented a synergistic pattern (reproduction). The toxicity of the ZnO/Ag nanostructure, predicted from the percentages of its individual components, was higher than expected (immobilization) or dependent on the concentration used (reproduction). This study demonstrates that the toxicity of the prepared mixture of ZnO and Ag and of the ZnO/Ag nanostructure cannot be predicted from the toxicity of their components, highlighting the importance of taking into account the interaction between nanomaterials when assessing hazard and risk. Copyright © 2016 Elsevier B.V. All rights reserved.
Redman, Aaron D; Parkerton, Thomas F; Butler, Josh David; Letinski, Daniel J; Frank, Richard A; Hewitt, L Mark; Bartlett, Adrienne J; Gillis, Patricia Leigh; Marentette, Julie R; Parrott, Joanne L; Hughes, Sarah A; Guest, Rodney; Bekele, Asfaw; Zhang, Kun; Morandi, Garrett; Wiseman, Steve B; Giesy, John P
2018-06-14
Oil sand operations in Alberta, Canada will eventually include returning treated process-affected waters to the environment. Organic constituents in oil sand process-affected water (OSPW) represent complex mixtures of nonionic and ionic (e.g. naphthenic acids) compounds, and compositions can vary spatially and temporally, which has impeded development of water quality benchmarks. To address this challenge, it was hypothesized that solid phase microextraction fibers coated with polydimethylsiloxane (PDMS) could be used as a biomimetic extraction (BE) to measure bioavailable organics in OSPW. Organic constituents of OSPW were assumed to contribute additively to toxicity, and partitioning to PDMS was assumed to be predictive of accumulation in target lipids, which were the presumed site of action. This method was tested using toxicity data for individual model compounds, defined mixtures, and organic mixtures extracted from OSPW. Toxicity was correlated with BE data, which supports the use of this method in hazard assessments of acute lethality to aquatic organisms. A species sensitivity distribution (SSD), based on target lipid model and BE values, was similar to SSDs based on residues in tissues for both nonionic and ionic organics. BE was shown to be an analytical tool that accounts for bioaccumulation of organic compound mixtures from which toxicity can be predicted, with the potential to aid in the development of water quality guidelines.
Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin
2018-01-01
The joint toxicity of chemical mixtures has emerged as a popular topic, particularly the additive and potentially synergistic actions of environmental mixtures. We investigated the 24-h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and the 96-h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models, which make different assumptions about the mode of toxic action in toxicodynamic processes, through single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the toxic action mode may depend on the combinations and concentrations of tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions were observed for the Cd-Pb mixtures, and slight antagonistic interactions were observed for the Cu-Zn mixtures. These results illustrated that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
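For reference, the CA and IA predictions compared in this study can be sketched under an assumed Hill-type concentration-response model. All parameter values below are our own illustrative choices, not the study's fitted values.

```python
# Sketch of Concentration Addition (CA) and Independent Action (IA) mixture
# predictions for two chemicals with Hill-type concentration-response curves.
# Parameters are invented for illustration.

def effect(c, ec50, h):
    """Fractional effect of a single chemical (Hill model)."""
    return c**h / (c**h + ec50**h)

def ec_for_effect(x, ec50, h):
    """Concentration producing fractional effect x (inverted Hill model)."""
    return ec50 * (x / (1.0 - x)) ** (1.0 / h)

def ia_effect(concs, params):
    """IA: response multiplication, E = 1 - prod_i (1 - E_i)."""
    prod = 1.0
    for c, (ec50, h) in zip(concs, params):
        prod *= 1.0 - effect(c, ec50, h)
    return 1.0 - prod

def ca_effect(concs, params, tol=1e-10):
    """CA: find x such that sum_i c_i / EC_{x,i} = 1, by bisection on x."""
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        tu = sum(c / ec_for_effect(mid, ec50, h)
                 for c, (ec50, h) in zip(concs, params))
        # The toxic-unit sum falls as the trial effect level rises.
        lo, hi = (lo, mid) if tu < 1.0 else (mid, hi)
    return 0.5 * (lo + hi)

# Sanity check: splitting one chemical into two identical "components"
# must reproduce its single-chemical effect under CA.
params = [(2.0, 1.5), (2.0, 1.5)]
print(abs(ca_effect([1.5, 1.5], params) - effect(3.0, 2.0, 1.5)) < 1e-6)  # True
```

Observed effects above the CA prediction indicate synergism and effects below it antagonism, which is the interaction classification applied to the metal pairs in the abstract.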
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie
2013-01-01
Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. 
In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906
Rider, Cynthia V.; Furr, Johnathan R.; Wilson, Vickie S.; Gray, L. Earl
2010-01-01
Although risk assessments are typically conducted on a chemical-by-chemical basis, the 1996 Food Quality Protection Act required the US Environmental Protection Agency to consider cumulative risk of chemicals that act via a common mechanism of toxicity. To this end, we are conducting studies with mixtures of chemicals to elucidate mechanisms of joint action at the systemic level with the end goal of providing a framework for assessing the cumulative effects of reproductive toxicants. Previous mixture studies conducted with antiandrogenic chemicals are reviewed briefly and two new studies are described in detail. In all binary mixture studies, rats were dosed during pregnancy with chemicals, singly or in pairs at dosage levels equivalent to approximately one half of the ED50 for hypospadias or epididymal agenesis. The binary mixtures included: androgen receptor (AR) antagonists (vinclozolin plus procymidone), phthalate esters (DBP plus BBP and DEHP plus DBP), a phthalate ester plus an AR antagonist (DBP plus procymidone), a mixed mechanism androgen signaling disruptor (linuron) plus BBP, and two chemicals which disrupt epididymal differentiation through entirely different toxicity pathways: DBP (AR pathway) plus 2,3,7,8 TCDD (AhR pathway). We also conducted multi-component mixture studies combining several “antiandrogens” together. In the first study, seven chemicals (four pesticides and three phthalates) that elicit antiandrogenic effects at two different sites in the androgen signaling pathway (i.e. AR antagonist or inhibition of androgen synthesis) were combined. In the second study, three additional phthalates were added to make a ten chemical mixture. In both the binary mixture studies and the multi-component mixture studies, chemicals that targeted male reproductive tract development displayed cumulative effects that exceeded predictions based upon a response addition model and most often were in accordance with predictions based upon dose addition models. 
In summary, our results indicate that compounds that act by disparate mechanisms of toxicity to disrupt the dynamic interactions among the interconnected signaling pathways in differentiating tissues produce cumulative dose-additive effects, regardless of the mechanism or mode of action of the individual mixture component. PMID:20487044
Sonka, Milan; Abramoff, Michael D.
2013-01-01
In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the distribution assumed for the noise-free data plays a key role in the performance of the MMSE estimator, a prior distribution for the pdf of the noise-free 3D complex wavelet coefficients is proposed that is able to model the main statistical properties of wavelets. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters, which are able to capture the heavy-tailed property and the inter- and intrascale dependencies of the coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation, which results in improved visual quality. On this basis, several OCT despeckling algorithms are obtained using a Gaussian or two-sided Rayleigh noise distribution and a homomorphic or nonhomomorphic model. To evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is the one for the nonhomomorphic model in the presence of Gaussian noise, which yields an improvement of 7.8 ± 1.7 in CNR. PMID:24222760
Spatially explicit dynamic N-mixture models
Zhao, Qing; Royle, Andy; Boomer, G. Scott
2017-01-01
Knowledge of demographic parameters such as survival, reproduction, emigration, and immigration is essential to understand metapopulation dynamics. Traditionally the estimation of these demographic parameters requires intensive data from marked animals. The development of dynamic N-mixture models makes it possible to estimate demographic parameters from count data of unmarked animals, but the original dynamic N-mixture model does not distinguish emigration and immigration from survival and reproduction, limiting its ability to explain important metapopulation processes such as movement among local populations. In this study we developed a spatially explicit dynamic N-mixture model that estimates survival, reproduction, emigration, local population size, and detection probability from count data under the assumption that movement only occurs among adjacent habitat patches. Simulation studies showed that the inference of our model depends on detection probability, local population size, and the implementation of robust sampling design. Our model provides reliable estimates of survival, reproduction, and emigration when detection probability is high, regardless of local population size or the type of sampling design. When detection probability is low, however, our model only provides reliable estimates of survival, reproduction, and emigration when local population size is moderate to high and robust sampling design is used. A sensitivity analysis showed that our model is robust against the violation of the assumption that movement only occurs among adjacent habitat patches, suggesting wide applications of this model. Our model can be used to improve our understanding of metapopulation dynamics based on count data that are relatively easy to collect in many systems.
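The data-generating process behind a dynamic N-mixture model can be sketched in a few lines. The following is a minimal, non-spatial illustration (it omits the paper's adjacency-based movement component), with all parameter values chosen arbitrarily for demonstration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters (not from the paper): survival phi,
# per-capita reproduction gamma, detection probability p.
phi, gamma, p = 0.7, 0.5, 0.8
n_sites, n_years, n_visits = 50, 5, 3

N = np.zeros((n_sites, n_years), dtype=int)
N[:, 0] = rng.poisson(10, size=n_sites)          # initial local abundance
for t in range(1, n_years):
    survivors = rng.binomial(N[:, t - 1], phi)   # survival process
    recruits = rng.poisson(gamma * N[:, t - 1])  # reproduction process
    N[:, t] = survivors + recruits

# Repeated counts within each year (robust design): binomial detection.
y = rng.binomial(N[:, :, None], p, size=(n_sites, n_years, n_visits))
print(y.shape)  # (50, 5, 3)
```

Fitting proceeds by integrating over the latent abundances N, which are never observed directly; only the repeated counts y enter the likelihood.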
Quantitative analysis of multiple sclerosis: a feasibility study
NASA Astrophysics Data System (ADS)
Li, Lihong; Li, Xiang; Wei, Xinzhou; Sturm, Deborah; Lu, Hongbing; Liang, Zhengrong
2006-03-01
Multiple Sclerosis (MS) is an inflammatory and demyelinating disorder of the central nervous system with a presumed immune-mediated etiology. For treatment of MS, measurements of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) are often used in conjunction with clinical evaluation to provide a more objective measure of MS burden. In this paper, we apply a new unifying automatic mixture-based algorithm for segmentation of brain tissues to quantitatively analyze MS. The method takes into account the following effects that commonly appear in MR imaging: 1) the MR data are modeled as a stochastic process with an inherent inhomogeneity effect of smoothly varying intensity; 2) a new partial volume (PV) model is built into the maximum a posteriori (MAP) segmentation scheme; 3) noise artifacts are minimized by an a priori Markov random field (MRF) penalty reflecting neighborhood correlation within the tissue mixture. The volumes of brain tissues (WM, GM) and CSF are extracted from the mixture-based segmentation. Experimental results of feasibility studies on quantitative analysis of MS are presented.
Root, Katharina; Wittwer, Yves; Barylyuk, Konstantin; Anders, Ulrike; Zenobi, Renato
2017-09-01
Native ESI-MS is increasingly used for quantitative analysis of biomolecular interactions. In such analyses, peak intensity ratios measured in mass spectra are treated as abundance ratios of the respective molecules in solution. While signal intensities of similar-size analytes, such as a protein and its complex with a small molecule, can be directly compared, significant distortions of the peak ratio due to unequal signal response of analytes impede the application of this approach for large oligomeric biomolecular complexes. We use a model system based on concatenated maltose binding protein units (MBPn, n = 1, 2, 3) to systematically study the behavior of protein mixtures in ESI-MS. The MBP concatamers differ from each other only by their mass while the chemical composition and other properties remain identical. We used native ESI-MS to analyze model mixtures of MBP oligomers, including equimolar mixtures of two proteins, as well as binary mixtures containing different fractions of the individual components. Pronounced deviation from a linear dependence of the signal intensity with concentration was observed for all binary mixtures investigated. While equimolar mixtures showed linear signal dependence at low concentrations, distinct ion suppression was observed above 20 μM. We systematically studied factors that are most often used in the literature to explain the origin of suppression effects. Implications of this effect for quantifying protein-protein binding affinity by native ESI-MS are discussed in general and demonstrated for an example of an anti-MBP antibody with its ligand, MBP.
The physical model for research of behavior of grouting mixtures
NASA Astrophysics Data System (ADS)
Hajovsky, Radovan; Pies, Martin; Lossmann, Jaroslav
2016-06-01
The paper describes a physical model designed to verify the behavior of grouting mixtures applied below the groundwater level. The physical model was set up to determine the propagation of a grouting mixture in a given environment. The extent of grouting in this environment is determined by measuring humidity and temperature with combined sensors located in preinstalled special measurement probes around the grouting needle. Humidity was measured by a combined capacitive sensor DTH-1010; temperature was measured by an NTC thermistor. The humidity sensors recorded the time at which the grouting mixture reached each sensor location, and the NTC thermistors recorded temperature changes over time from the start of injection. This made it possible to develop a 3D map showing the distribution of the grouting mixture through the environment. The measurement itself was carried out by a purpose-designed primary measurement module capable of connecting four combined humidity and temperature sensors. This module also converts these physical signals into unified analogue signals, which are then brought to the analogue input terminals of a WinPAC-8441 programmable automation controller (PAC). This controller handles the measurement, archiving, and visualization of all data. A detailed description of the complete measurement system and its evaluation in the form of 3D animations and graphs is given in the full paper.
ERIC Educational Resources Information Center
Henson, James M.; Reise, Steven P.; Kim, Kevin H.
2007-01-01
The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) × 3 (exogenous latent mean difference) × 3 (endogenous latent mean difference) × 3 (correlation between factors) × 3 (mixture proportions) factorial design. In addition, the efficacy of several…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baadj, S.; Harrache, Z., E-mail: zharrache@yahoo.com; Belasri, A.
2013-12-15
The aim of this work is to highlight, through numerical modeling, the chemical and electrical characteristics of a xenon chloride mixture in a XeCl* (308 nm) excimer lamp created by a dielectric barrier discharge. A temporal model, based on the Xe/Cl2 mixture chemistry, the circuit, and the Boltzmann equations, is constructed. The effects of operating voltage, Cl2 percentage in the Xe/Cl2 gas mixture, dielectric capacitance, and gas pressure on 308-nm photon generation, under typical experimental operating conditions, have been investigated and discussed. The importance of charged and excited species, including the major electronic and ionic processes, is also demonstrated. The present calculations show clearly that the model predicts the optimal operating conditions and describes the electrical and chemical properties of the XeCl* exciplex lamp.
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
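As an illustration of this approach, the following sketch fits a two-component normal mixture by maximum likelihood via the EM algorithm on synthetic data (the paper's actual stock market and rubber price series are not reproduced here; all values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic two-component data: 30% from N(-2, 1), 70% from N(3, 1.5).
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 700)])

# EM for a two-component normal mixture.
mu = np.array([-1.0, 1.0])          # initial means
sigma = np.array([1.0, 1.0])        # initial standard deviations
pi = np.array([0.5, 0.5])           # initial mixing proportions
for _ in range(200):
    # E-step: responsibility of each component for each observation.
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = pi * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update mixing proportions, means, standard deviations.
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(np.round(mu, 1), np.round(pi, 2))
```

With enough iterations the estimated means and mixing proportions recover the values used to generate the data, illustrating the consistency property the abstract appeals to.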
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-01-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models, including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is identified using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm detects peaks with lower false discovery rates than the existing algorithms, and that a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474
Dey, Dipesh K; Guha, Saumyen
2007-02-15
Phospholipid fatty acids (PLFAs) as biomarkers are well established in the literature. A general method based on least square approximation (LSA) was developed for estimating community structure from the PLFA signature of a mixed population where the biomarker PLFA signatures of the component species were known. Fatty acid methyl ester (FAME) standards were used as species analogs, and mixtures of the standards as representatives of the mixed population. The PLFA/FAME signatures were analyzed by gas chromatographic separation followed by flame ionization detection (GC-FID). The PLFAs in the signature were quantified as relative weight percent of the total PLFA. The PLFA signatures were analyzed by the models to predict the community structure of the mixture. The LSA model results were compared with the existing "functional group" approach. Both successfully predicted the community structure of mixed populations containing completely unrelated species with uncommon PLFAs. For even a slight overlap in the PLFA signatures of component species, the LSA model produced better results, mainly because the "functional group" approach cannot distinguish the relative amounts of a common PLFA coming from more than one species. The performance of the LSA model was influenced by errors in the chromatographic analyses: suppression (or enhancement) of a component's PLFA signature in chromatographic analysis of the mixture led to underestimation (or overestimation) of the component's proportion in the mixture. In mixtures of closely related species with common PLFAs, the errors in the common components were adjusted across the species by the model.
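The least-squares idea can be sketched directly. Assuming a hypothetical signature matrix (the actual FAME standards and their signatures are not given in the abstract), community proportions are recovered by minimizing the residual between the mixed signature and a weighted sum of component signatures:

```python
import numpy as np

# Hypothetical PLFA signatures (rows: 4 fatty acids, columns: 3 species),
# each column given as relative weight percent summing to 100.
S = np.array([
    [60.0, 10.0,  0.0],
    [30.0, 50.0, 20.0],
    [10.0, 30.0, 30.0],
    [ 0.0, 10.0, 50.0],
])
true_p = np.array([0.5, 0.3, 0.2])      # true community proportions
m = S @ true_p                          # mixed-population signature

# Least square approximation: find proportions minimizing ||S p - m||.
p, *_ = np.linalg.lstsq(S, m, rcond=None)
p = np.clip(p, 0, None)                 # proportions cannot be negative
p /= p.sum()                            # renormalize to proportions
print(np.round(p, 3))
```

In this noise-free example the proportions are recovered exactly; chromatographic suppression or enhancement would perturb m and bias the recovered proportions, as the abstract describes.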
Anthracene + Pyrene Solid Mixtures: Eutectic and Azeotropic Character
Rice, James W.; Fu, Jinxia; Suuberg, Eric M.
2010-01-01
To better characterize the thermodynamic behavior of a binary polycyclic aromatic hydrocarbon mixture, thermochemical and vapor pressure experiments were used to examine the phase behavior of the anthracene (1) + pyrene (2) system. A solid-liquid phase diagram was mapped for the mixture. A eutectic point occurs at 404 K at x1 = 0.22. A model based on eutectic formation can be used to predict the enthalpy of fusion associated with the mixture. For mixtures that contain x1 < 0.90, the enthalpy of fusion is near that of pure pyrene. This and X-ray diffraction results indicate that mixtures of anthracene and pyrene have pyrene-like crystal structures and energetics until the composition nears that of pure anthracene. Solid-vapor equilibrium studies show that mixtures of anthracene and pyrene form solid azeotropes at x1 of 0.03 and 0.14. Additionally, mixtures at x1 = 0.99 sublime at the vapor pressure of pure anthracene, suggesting that anthracene behavior is not significantly influenced by x2 = 0.01 in the crystal structure. PMID:21116474
NASA Astrophysics Data System (ADS)
Shang, De-Yi; Zhong, Liang-Cai
2017-01-01
Our novel models for fluids' variable physical properties are improved and reported systematically in this work to enhance the theoretical and practical value of studies of convective heat and mass transfer. The work consists of three models, namely (1) a temperature parameter model, (2) a polynomial model, and (3) a weighted-sum model, for treating the temperature-dependent physical properties of gases, the temperature-dependent physical properties of liquids, and the concentration- and temperature-dependent physical properties of vapour-gas mixtures, respectively. Each model comprises two related components: basic physical property equations and theoretical similarity equations for the physical property factors. The former, as the foundation of the latter, is based on typical experimental data and physical analysis. The latter is built up by similarity analysis and mathematical derivation from the basic physical property equations. These models enable smooth simulation and treatment of fluids' variable physical properties, ensuring the theoretical and practical value of studies of convective heat and mass transfer. In particular, studies of heat and mass transfer in film condensation of vapour-gas mixtures have so far been scarce, and erroneous heat transfer results have appeared in many related studies because the concentration- and temperature-dependent physical properties of the vapour-gas mixture were not properly taken into account. For resolving such issues, the present physical property models have special advantages.
NASA Astrophysics Data System (ADS)
Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan
2016-07-01
The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks, and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use l1-norm regularized logistic regression to improve the sparse representation of randomness features, and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
Developing model asphalt systems using molecular simulation : final model.
DOT National Transportation Integrated Search
2009-09-01
Computer based molecular simulations have been used towards developing simple mixture compositions whose : physical properties resemble those of real asphalts. First, Monte Carlo simulations with the OPLS all-atom force : field were used to predict t...
ERIC Educational Resources Information Center
Bowles, Ben; Harlow, Iain M.; Meeking, Melissa M.; Kohler, Stefan
2012-01-01
It is widely accepted that signal-detection mechanisms contribute to item-recognition memory decisions that involve discriminations between targets and lures based on a controlled laboratory study episode. Here, the authors employed mathematical modeling of receiver operating characteristics (ROC) to determine whether and how a signal-detection…
Site occupancy models with heterogeneous detection probabilities
Royle, J. Andrew
2006-01-01
Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these "site occupancy" models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
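For a finite (two-point) mixing distribution on p, the zero-inflated binomial mixture likelihood for a single site can be written out directly; the sketch below uses arbitrary illustrative values for the occupancy probability and the detection mixture, not values from the article:

```python
from math import comb

def site_likelihood(y, J, psi, weights, ps):
    """Zero-inflated binomial mixture likelihood for one site:
    y detections in J visits, occupancy probability psi, and the
    detection probability drawn from a finite mixture with the
    given weights (a two-point mixture in this example)."""
    binom_mix = sum(w * comb(J, y) * p**y * (1 - p)**(J - y)
                    for w, p in zip(weights, ps))
    # Unoccupied sites contribute only to the all-zero history.
    return psi * binom_mix + (1 - psi) * (y == 0)

# Hypothetical values: occupancy 0.6, detection mixture 0.5*{0.2} + 0.5*{0.8}.
L0 = site_likelihood(0, J=3, psi=0.6, weights=[0.5, 0.5], ps=[0.2, 0.8])
L2 = site_likelihood(2, J=3, psi=0.6, weights=[0.5, 0.5], ps=[0.2, 0.8])
print(round(L0, 4), round(L2, 4))  # 0.556 0.144
```

The full integrated likelihood is the product of such terms over sites, which is what makes inference straightforward for any mixing distribution.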
Yan, Luchun; Liu, Jiemin; Jiang, Shen; Wu, Chuandong; Gao, Kewei
2017-07-13
The olfactory evaluation function (e.g., odor intensity rating) of an e-nose is always one of the most challenging issues in research on odor pollution monitoring. Odor is normally produced by a set of stimuli, and odor interactions among constituents significantly influence the mixture's odor intensity. This study investigated the odor interaction principle in odor mixtures of aldehydes and of esters. A modified vector model (MVM) was then proposed, and it successfully demonstrated the similarity of the odor interaction pattern among odorants of the same type. Based on this regular interaction pattern, and unlike a determined empirical model that fits only a specific odor mixture in conventional approaches, the MVM distinctly simplifies the odor intensity prediction of odor mixtures. Furthermore, the MVM also provides a way of directly converting constituents' chemical concentrations into their mixture's odor intensity. By combining the MVM with the usual data-processing algorithm of an e-nose, a new e-nose system was established for odor intensity rating. Compared with instrumental analysis and a human assessor, it exhibited good accuracy in both quantitative analysis (Pearson correlation coefficients of 0.999 for individual aldehydes (n = 12), 0.996 for their binary mixtures (n = 36), and 0.990 for their ternary mixtures (n = 60)) and odor intensity assessment (Pearson correlation coefficients of 0.980 for individual aldehydes (n = 15), 0.973 for their binary mixtures (n = 24), and 0.888 for their ternary mixtures (n = 25)). Thus, the observed regular interaction pattern is considered an important foundation for accelerating the extensive application of olfactory evaluation in odor pollution monitoring.
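The MVM itself is not specified in the abstract, but the classical vector model it modifies combines two component odor intensities like vectors separated by an interaction angle; a minimal sketch of that baseline model, with hypothetical intensities and angle, is:

```python
from math import sqrt, cos, radians

def vector_model(i1, i2, alpha_deg):
    """Classical vector model for the odor intensity of a binary
    mixture: component intensities i1 and i2 add like vectors
    separated by an interaction angle alpha (in degrees)."""
    a = radians(alpha_deg)
    return sqrt(i1**2 + i2**2 + 2 * i1 * i2 * cos(a))

# Hypothetical intensities and interaction angle (orthogonal case).
print(round(vector_model(3.0, 4.0, 90.0), 2))  # 5.0
```

With alpha = 0 the intensities add fully (complete addition); larger angles model increasing antagonism, which is the kind of regular interaction pattern the study exploits.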
A New LES/PDF Method for Computational Modeling of Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Turkeri, Hasret; Muradoglu, Metin; Pope, Stephen B.
2013-11-01
A new LES/PDF method is developed for computational modeling of turbulent reacting flows. The open source package, OpenFOAM, is adopted as the LES solver and combined with the particle-based Monte Carlo method to solve the LES/PDF model equations. The dynamic Smagorinsky model is employed to account for the subgrid-scale motions. The LES solver is first validated for the Sandia Flame D using a steady flamelet method in which the chemical compositions, density and temperature fields are parameterized by the mean mixture fraction and its variance. In this approach, the modeled transport equations for the mean mixture fraction and the square of the mixture fraction are solved and the variance is then computed from its definition. The results are found to be in a good agreement with the experimental data. Then the LES solver is combined with the particle-based Monte Carlo algorithm to form a complete solver for the LES/PDF model equations. The in situ adaptive tabulation (ISAT) algorithm is incorporated into the LES/PDF method for efficient implementation of detailed chemical kinetics. The LES/PDF method is also applied to the Sandia Flame D using the GRI-Mech 3.0 chemical mechanism and the results are compared with the experimental data and the earlier PDF simulations. The Scientific and Technical Research Council of Turkey (TUBITAK), Grant No. 111M067.
A 3-Component Mixture of Rayleigh Distributions: Properties and Estimation in Bayesian Framework
Aslam, Muhammad; Tahir, Muhammad; Hussain, Zawar; Al-Zahrani, Bander
2015-01-01
To study the lifetimes of certain engineering processes, a lifetime model that can accommodate the nature of such processes is desired. Mixture models of underlying lifetime distributions are intuitively more appropriate and appealing for modeling the heterogeneous nature of a process than simple models. This paper studies a 3-component mixture of Rayleigh distributions from a Bayesian perspective. A censored sampling environment is considered due to its popularity in reliability theory and survival analysis. The expressions for the Bayes estimators and their posterior risks are derived under different scenarios. For the case that no or little prior information is available, the elicitation of hyperparameters is given. To examine numerically the performance of the Bayes estimators using non-informative and informative priors under different loss functions, we have simulated their statistical properties for different sample sizes and test termination times. In addition, to highlight the practical significance, an illustrative example based on real-life engineering data is also given. PMID:25993475
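A 3-component Rayleigh mixture is straightforward to simulate, which is how the statistical properties of such estimators are typically examined; the sketch below uses arbitrary weights and scale parameters, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mixing weights and Rayleigh scale parameters.
weights = np.array([0.5, 0.3, 0.2])
scales = np.array([1.0, 2.0, 4.0])

# Draw lifetimes from the 3-component Rayleigh mixture.
z = rng.choice(3, size=10000, p=weights)      # latent component labels
t = rng.rayleigh(scale=scales[z])             # lifetimes

# Mixture density f(t) = sum_k w_k * (t / s_k^2) * exp(-t^2 / (2 s_k^2)).
def mixture_pdf(t):
    t = np.asarray(t, dtype=float)[..., None]
    return (weights * t / scales**2 * np.exp(-t**2 / (2 * scales**2))).sum(-1)

# Sample mean vs. theoretical mean sum_k w_k * s_k * sqrt(pi/2) ≈ 2.38.
print(round(t.mean(), 2))
```

Censoring would be imposed on top of this by truncating observation at a test termination time, with censored units contributing survival probabilities rather than densities to the likelihood.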
Mathematical modeling of a single stage ultrasonically assisted distillation process.
Mahdi, Taha; Ahmad, Arshad; Ripin, Adnan; Abdullah, Tuan Amran Tuan; Nasef, Mohamed M; Ali, Mohamad W
2015-05-01
The ability of sonication to facilitate the separation of azeotropic mixtures presents a promising approach for developing distillation systems that are more intensified and efficient than conventional ones. To expedite this much-needed development, a mathematical model of the system based on conservation principles, vapor-liquid equilibrium, and sonochemistry was developed in this study. The model, founded on a single-stage vapor-liquid equilibrium system enhanced with ultrasonic waves, was coded in the MATLAB simulator and validated with experimental data for an ethanol-ethyl acetate mixture. The effects of both ultrasonic frequency and intensity on the relative volatility and azeotropic point were examined, and the optimal conditions were obtained using a genetic algorithm. The experimental data validated the model with reasonable accuracy. The results of this study revealed that the azeotropic point of the mixture can be eliminated entirely with the right combination of sonication parameters, which can be utilized in design efforts towards establishing a workable ultrasonically intensified distillation system. Copyright © 2014 Elsevier B.V. All rights reserved.
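The role of relative volatility in removing an azeotropic pinch can be illustrated with the constant-relative-volatility VLE relation y = αx / (1 + (α − 1)x); the sonication-enhanced α values below are purely hypothetical, chosen only to show the qualitative effect:

```python
def vapor_mole_fraction(x, alpha):
    """Equilibrium vapor mole fraction of the light component for a
    binary mixture with (assumed constant) relative volatility alpha."""
    return alpha * x / (1 + (alpha - 1) * x)

# Hypothetical illustration: at the pinch, y = x (no separation driving
# force); raising the effective relative volatility above 1 restores y > x.
x = 0.7
for alpha in (0.95, 1.0, 1.3):
    y = vapor_mole_fraction(x, alpha)
    print(alpha, round(y, 3))
```

An azeotrope corresponds to y = x (α passing through 1 at that composition), so any sonication effect that keeps the effective α away from 1 across the composition range eliminates the azeotropic point.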
Large-eddy simulation of turbulent cavitating flow in a micro channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Hickel, Stefan; Schmidt, Steffen J.
2014-08-15
Large-eddy simulations (LES) of cavitating flow of a Diesel-fuel-like fluid in a generic throttle geometry are presented. Two-phase regions are modeled by a parameter-free thermodynamic equilibrium mixture model, and compressibility of the liquid and the liquid-vapor mixture is taken into account. The Adaptive Local Deconvolution Method (ALDM), adapted for cavitating flows, is employed for discretizing the convective terms of the Navier-Stokes equations for the homogeneous mixture. ALDM is a finite-volume-based implicit LES approach that merges physically motivated turbulence modeling and numerical discretization. Validation of the numerical method is performed for a cavitating turbulent mixing layer. Comparisons with experimental data of the throttle flow at two different operating conditions are presented. The LES with the employed cavitation modeling predicts relevant flow and cavitation features accurately within the uncertainty range of the experiment. The turbulence structure of the flow is further analyzed with an emphasis on the interaction between cavitation and coherent motion, and on the statistically averaged flow evolution.
Predicting herbicide mixture effects on multiple algal species using mixture toxicity models.
Nagai, Takashi
2017-10-01
The validity of the application of mixture toxicity models, concentration addition and independent action, to a species sensitivity distribution (SSD) for calculation of a multisubstance potentially affected fraction was examined in laboratory experiments. Toxicity assays of herbicide mixtures using 5 species of periphytic algae were conducted. Two mixture experiments were designed: a mixture of 5 herbicides with similar modes of action and a mixture of 5 herbicides with dissimilar modes of action, corresponding to the assumptions of the concentration addition and independent action models, respectively. Experimentally obtained mixture effects on 5 algal species were converted to the fraction of affected (>50% effect on growth rate) species. The predictive ability of the concentration addition and independent action models with direct application to SSD depended on the mode of action of chemicals. That is, prediction was better for the concentration addition model than the independent action model for the mixture of herbicides with similar modes of action. In contrast, prediction was better for the independent action model than the concentration addition model for the mixture of herbicides with dissimilar modes of action. Thus, the concentration addition and independent action models could be applied to SSD in the same manner as for a single-species effect. The present study to validate the application of the concentration addition and independent action models to SSD supports the usefulness of the multisubstance potentially affected fraction as the index of ecological risk. Environ Toxicol Chem 2017;36:2624-2630. © 2017 SETAC.
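The two reference models have simple closed forms. As a sketch with made-up effect and concentration values: independent action multiplies the probabilities of escaping each toxicant's (independent) effect, while concentration addition sums toxic units c_i / EC50_i:

```python
import numpy as np

# Hypothetical single-substance effects (fractions) at the tested doses.
effects = np.array([0.2, 0.1, 0.15])

# Independent action: combined effect of dissimilarly acting chemicals
# is one minus the product of the unaffected fractions.
ia_effect = 1 - np.prod(1 - effects)

# Concentration addition for similarly acting chemicals: toxic units
# c_i / EC50_i; the mixture reaches a 50% effect when they sum to 1.
conc = np.array([2.0, 1.0, 0.5])    # hypothetical mixture concentrations
ec50 = np.array([8.0, 5.0, 4.0])    # hypothetical single-substance EC50s
toxic_units = (conc / ec50).sum()

print(round(ia_effect, 3), round(toxic_units, 3))
```

Applying either rule at the level of each species in the SSD, rather than to a single endpoint, is what yields the multisubstance potentially affected fraction.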
NASA Astrophysics Data System (ADS)
Gulliver, Eric A.
The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed-particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop-and-roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and were free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The protocol was designed to minimize differences between the measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations.
Control chart analysis showed that Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered particle-size distributions or mixture composition. Control charts based on tessellation measurements were used for direct, quantitative comparisons between real and simulated mixtures. Four sets of simulated and real mixtures were examined. Data from real mixtures matched simulated data when the samples were well mixed and the particle size distributions and volume fractions of the components were identical. Analysis of mixture components that occupied less than approximately 10 vol% of the mixture was not practical unless the particle size of the component was extremely small and excellent-quality, high-resolution compositional micrographs of the real sample were available. These methods of analysis should allow future researchers to systematically evaluate and predict the impact and importance of variables such as component volume fraction and component particle size distribution as they pertain to the uniformity of powder mixture microstructures.
Protein and gene model inference based on statistical modeling in k-partite graphs.
Gerster, Sarah; Qeli, Ermir; Ahrens, Christian H; Bühlmann, Peter
2010-07-06
One of the major goals of proteomics is the comprehensive and accurate description of a proteome. Shotgun proteomics, the method of choice for the analysis of complex protein mixtures, requires that experimentally observed peptides are mapped back to the proteins they were derived from. This process is also known as protein inference. We present Markovian Inference of Proteins and Gene Models (MIPGEM), a statistical model based on clearly stated assumptions to address the problem of protein and gene model inference for shotgun proteomics data. In particular, we deal with dependencies among peptides and proteins using a Markovian assumption on k-partite graphs. We also address the problems of shared peptides and ambiguous proteins by scoring the encoding gene models. Empirical results on two control datasets with synthetic mixtures of proteins and on complex protein samples of Saccharomyces cerevisiae, Drosophila melanogaster, and Arabidopsis thaliana suggest that the results with MIPGEM are competitive with existing tools for protein inference.
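The peptide-to-protein mapping at the heart of protein inference is naturally organized as a bipartite graph, whose connected components form independent inference subproblems. The sketch below is not MIPGEM itself (which places a Markovian model on k-partite graphs); it only illustrates, with invented toy identifiers, how shared peptides tie proteins into a common component:

```python
from collections import defaultdict, deque

def protein_components(peptide_to_proteins):
    """Group proteins into connected components of the peptide-protein
    bipartite graph; shared peptides link proteins into one component."""
    adj = defaultdict(set)  # protein -> proteins linked via a shared peptide
    for proteins in peptide_to_proteins.values():
        for p in proteins:
            adj[p].update(q for q in proteins if q != p)
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:  # breadth-first traversal of one component
            p = queue.popleft()
            if p in comp:
                continue
            comp.add(p)
            queue.extend(adj[p] - comp)
        seen |= comp
        components.append(sorted(comp))
    return components

# Hypothetical peptide -> protein mapping: pep2 is shared by protA and protB.
mapping = {"pep1": {"protA"}, "pep2": {"protA", "protB"}, "pep3": {"protC"}}
print(protein_components(mapping))  # protA/protB share a component; protC is alone
```

Each component can then be scored independently, which is what makes graph decomposition attractive for large proteomes.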
Tian, Liang; Russell, Alan; Anderson, Iver
2014-01-03
Deformation processed metal-metal composites (DMMCs) are high-strength, high-electrical-conductivity composites developed by severe plastic deformation of two ductile metal phases. The extraordinarily high strength of DMMCs is underestimated by the rule of mixtures (volumetric weighted average) of conventionally work-hardened metals. A dislocation-density-based strain-gradient-plasticity model is proposed to relate the strain-gradient effect to the geometrically necessary dislocations emanating from the interface and thereby better predict the strength of DMMCs. The model prediction was compared with our experimental findings for Cu-Nb, Cu-Ta, and Al-Ti DMMC systems to verify the applicability of the new model. The results show that this model predicts the strength of DMMCs better than the rule-of-mixtures model. The strain-gradient effect, responsible for the exceptionally high strength of heavily cold-worked DMMCs, is dominant at large deformation strain since its characteristic microstructure length is comparable with the intrinsic material length.
NASA Astrophysics Data System (ADS)
Miyamoto, H.; Shoji, Y.; Akasaka, R.; Lemmon, E. W.
2017-10-01
Natural working fluid mixtures, including combinations of CO2, hydrocarbons, water, and ammonia, are expected to have applications in energy conversion processes such as heat pumps and organic Rankine cycles. However, the available literature data, much of which was published between 1975 and 1992, do not incorporate the recommendations of the Guide to the Expression of Uncertainty in Measurement. Therefore, new and more reliable thermodynamic property measurements obtained with state-of-the-art technology are required. The goal of the present study was to obtain accurate vapor-liquid equilibrium (VLE) properties for complex mixtures based on two different gases with significantly different boiling points. Precise VLE data were measured with a recirculation-type apparatus with a 380 cm3 equilibration cell and two windows allowing observation of the phase behavior. This cell was equipped with recirculating and expansion loops that were immersed in temperature-controlled liquid and air baths, respectively. Following equilibration, the composition of the sample in each loop was ascertained by gas chromatography. VLE data were acquired for CO2/ethanol and CO2/isopentane binary mixtures within the temperature range from 300 K to 330 K and at pressures up to 7 MPa. These data were used to fit interaction parameters in a Helmholtz energy mixture model. Comparisons were made with the available literature data and values calculated by thermodynamic property models.
A continuum theory for multicomponent chromatography modeling.
Pfister, David; Morbidelli, Massimo; Nicoud, Roger-Marc
2016-05-13
A continuum theory is proposed for modeling multicomponent chromatographic systems under linear conditions. The model is based on the description of complex mixtures, possibly involving tens or hundreds of solutes, by a continuum. The present approach is shown to be very efficient when dealing with a large number of similar components that present close elution behaviors and whose individual analytical characterization is impossible. Moreover, approximating complex mixtures by continuous distributions of solutes reduces the required number of model parameters to the few specific to the characterization of the selected continuous distributions. Therefore, in the frame of the continuum theory, the simulation of large multicomponent systems is simplified and the computational effectiveness of the chromatographic model is dramatically improved.
An EQT-based cDFT approach for thermodynamic properties of confined fluid mixtures
NASA Astrophysics Data System (ADS)
Motevaselian, M. H.; Aluru, N. R.
2017-04-01
We present an empirical-potential-based quasi-continuum theory (EQT) to predict the structure and thermodynamic properties of confined fluid mixtures. The central idea in the EQT is to construct potential energies that integrate important atomistic details into a continuum-based model such as the Nernst-Planck equation. The EQT potentials can also be used to construct the excess free energy functional, which is required for the grand potential in classical density functional theory (cDFT). In this work, we use the EQT-based grand potential to predict various thermodynamic properties of a confined binary mixture of hydrogen and methane molecules inside graphene slit channels of different widths. We show that the EQT-cDFT predictions for the structure, surface tension, solvation force, and local pressure tensor profiles are in good agreement with molecular dynamics simulations. Moreover, we study the effect of different bulk compositions and channel widths on the thermodynamic properties. Our results reveal that the composition of methane in the mixture can significantly affect the ordering of molecules and thermodynamic properties under confinement. In addition, we find that graphene is selective to methane molecules.
Molecular Dynamics Evaluation of Dielectric-Constant Mixing Rules for H2O-CO2 at Geologic Conditions
Mountain, Raymond D.; Harvey, Allan H.
2015-01-01
Modeling of mineral reaction equilibria and aqueous-phase speciation of C-O-H fluids requires the dielectric constant of the fluid mixture, which is not known from experiment and is typically estimated by some rule for mixing pure-component values. In order to evaluate different proposed mixing rules, we use molecular dynamics simulation to calculate the dielectric constant of a model H2O–CO2 mixture at temperatures of 700 K and 1000 K at pressures up to 3 GPa. We find that theoretically based mixing rules that depend on combining the molar polarizations of the pure fluids systematically overestimate the dielectric constant of the mixture, as would be expected for mixtures of nonpolar and strongly polar components. The commonly used semiempirical mixing rule due to Looyenga works well for this system at the lower pressures studied, but somewhat underestimates the dielectric constant at higher pressures and densities, especially at the water-rich end of the composition range. PMID:26664009
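The Looyenga rule discussed above mixes the cube roots of the pure-component dielectric constants, weighted by volume fraction. A minimal sketch of the formula (the numeric values below are purely illustrative and are not data from the study):

```python
def looyenga(eps, vol_frac):
    """Looyenga mixing rule: eps_mix = (sum_i phi_i * eps_i**(1/3))**3."""
    if abs(sum(vol_frac) - 1.0) > 1e-9:
        raise ValueError("volume fractions must sum to 1")
    return sum(phi * e ** (1.0 / 3.0) for phi, e in zip(vol_frac, eps)) ** 3

# Illustrative (not measured) pure-component values for a polar/nonpolar pair:
eps_polar, eps_nonpolar = 20.0, 1.5
for f in (0.0, 0.5, 1.0):
    print(f, looyenga([eps_polar, eps_nonpolar], [f, 1.0 - f]))
```

Because the cube-root average weights the low-permittivity component heavily, such semiempirical rules sit below the molar-polarization-based rules that the simulations found to overestimate the mixture dielectric constant.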
Meat mixture detection in Iberian pork sausages.
Ortiz-Somovilla, V; España-España, F; De Pedro-Sanz, E J; Gaitán-Jurado, A J
2005-11-01
Five homogenized meat mixture treatments of Iberian (I) and/or Standard (S) pork were set up. Each treatment was analyzed by NIRS as a fresh product (N=75) and as dry-cured sausage (N=75). Spectra acquisition was carried out using DA 7000 equipment (Perten Instruments), obtaining a total of 750 spectra. Several absorption peaks and bands were selected as the most representative for homogenized dry-cured and fresh sausages. Discriminant analysis and mixture prediction equations were carried out based on the spectral data gathered. The best results using discriminant models were for fresh products, with 98.3% (calibration) and 60% (validation) correct classification. For dry-cured sausages, 91.7% (calibration) and 80% (validation) of the samples were correctly classified. Models developed using mixture prediction equations showed SECV=4.7 and r²=0.98 (calibration), and 73.3% of the validation set was correctly classified for the fresh product. The corresponding values for dry-cured sausages were SECV=5.9 and r²=0.99 (calibration), with 93.3% correctly classified for validation.
NASA Astrophysics Data System (ADS)
Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin
2015-03-01
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. However, rotational positioning of cells can occur, leading to discordance between spot counts. To address counting errors caused by overlapping spots, this study proposes a Gaussian mixture model (GMM) based classification method. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for this classification method. Using a random forest classifier, the results show that the proposed method is able to detect closely overlapping spots that cannot be separated by existing image-segmentation-based spot detection methods. The experimental results show that the proposed method achieves a significant improvement in spot-counting accuracy.
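The AIC/BIC features described above come from fitted Gaussian mixtures. As a hedged sketch of that one ingredient (a plain one-dimensional EM fit in stdlib Python, not the paper's image-feature pipeline), the following computes the log-likelihood and both criteria for a k-component fit:

```python
import math
from statistics import pvariance, quantiles

def fit_gmm_1d(x, k, iters=300):
    """Fit a k-component 1-D Gaussian mixture by EM; return (loglik, AIC, BIC)."""
    n = len(x)
    mu = list(quantiles(x, n=k + 1))  # k evenly spaced sample quantiles as initial means
    var = [pvariance(x)] * k
    w = [1.0 / k] * k

    def comp_dens(xi):
        return [w[j] * math.exp(-0.5 * (xi - mu[j]) ** 2 / var[j])
                / math.sqrt(2 * math.pi * var[j]) for j in range(k)]

    for _ in range(iters):
        resp = []
        for xi in x:                      # E-step: responsibilities
            d = comp_dens(xi)
            s = sum(d)
            resp.append([dj / s for dj in d])
        nk = [sum(r[j] for r in resp) for j in range(k)]
        w = [nkj / n for nkj in nk]       # M-step: weights, means, variances
        mu = [sum(r[j] * xi for r, xi in zip(resp, x)) / nk[j] for j in range(k)]
        var = [sum(r[j] * (xi - mu[j]) ** 2 for r, xi in zip(resp, x)) / nk[j]
               for j in range(k)]

    loglik = sum(math.log(sum(comp_dens(xi))) for xi in x)
    p = 3 * k - 1                         # k means + k variances + (k-1) free weights
    return loglik, 2 * p - 2 * loglik, p * math.log(n) - 2 * loglik

# Demo on synthetic bimodal data (made up; not the FISH-IS images):
data = [0.1 * i for i in range(-20, 21)] + [6.0 + 0.1 * i for i in range(-20, 21)]
for k in (1, 2):
    ll, aic, bic = fit_gmm_1d(data, k)
    print(k, round(aic, 1), round(bic, 1))
```

On such clearly bimodal data, the two-component fit attains the lower AIC and BIC, which is the kind of evidence a downstream classifier can use.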
Glyph-based analysis of multimodal directional distributions in vector field ensembles
NASA Astrophysics Data System (ADS)
Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger
2015-04-01
Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.
NASA Astrophysics Data System (ADS)
Boichenko, A. M.; Klenovskii, M. S.
2015-12-01
By using the previously developed kinetic model, we have carried out simulations to study the possibility of laser generation of XeCl exciplex molecules in the working medium based on a mixture of Xe with CsCl vapours, excited by a longitudinal repetitively pulsed discharge. The formation mechanism of exciplex molecules in this mixture is fundamentally different from the formation mechanisms in the traditional mixtures of exciplex lasers. The conditions that make the laser generation possible are discussed. For these conditions, with allowance for available specific experimental conditions of the repetitively pulsed discharge excitation, we have obtained the calculated dependences of the power and efficiency of generation on the reflectivity of mirrors in a laser cavity.
Soh, Zu; Nishikawa, Shinya; Kurita, Yuichi; Takiguchi, Noboru; Tsuji, Toshio
2016-01-01
To predict the odor quality of an odorant mixture, the interaction between odorants must be taken into account. Previously, an experiment in which mice discriminated between odorant mixtures identified a selective adaptation mechanism in the olfactory system. This paper proposes an olfactory model for odorant mixtures that can account for selective adaptation in terms of neural activity. The proposed model uses the spatial activity pattern of the mitral layer obtained from model simulations to predict the perceptual similarity between odors. Measured glomerular activity patterns are used as input to the model. The neural interaction between mitral cells and granular cells is then simulated, and a dissimilarity index between odors is defined using the activity patterns of the mitral layer. An odor set composed of three odorants is used to test the ability of the model. Simulations are performed based on the odor discrimination experiment on mice. As a result, we observe that part of the neural activity in the glomerular layer is enhanced in the mitral layer, whereas another part is suppressed. We find that the dissimilarity index strongly correlates with the odor discrimination rate of mice: r = 0.88 (p = 0.019). We conclude that our model has the ability to predict the perceptual similarity of odorant mixtures. In addition, the model also accounts for selective adaptation via the odor discrimination rate, and the enhancement and inhibition in the mitral layer may be related to this selective adaptation.
Inferring Short-Range Linkage Information from Sequencing Chromatograms
Beggel, Bastian; Neumann-Fraune, Maria; Kaiser, Rolf; Verheyen, Jens; Lengauer, Thomas
2013-01-01
Direct Sanger sequencing of viral genome populations yields multiple ambiguous sequence positions. It is not straightforward to derive linkage information from sequencing chromatograms, which in turn hampers the correct interpretation of the sequence data. We present a method for determining the variants existing in a viral quasispecies in the case of two nearby ambiguous sequence positions by exploiting the effect of sequence context-dependent incorporation of dideoxynucleotides. The computational model was trained on data from sequencing chromatograms of clonal variants and was evaluated on two test sets of in vitro mixtures. The approach achieved high accuracies in identifying the mixture components of 97.4% on a test set in which the positions to be analyzed are only one base apart from each other, and of 84.5% on a test set in which the ambiguous positions are separated by three bases. In silico experiments suggest two major limitations of our approach in terms of accuracy. First, due to a basic limitation of Sanger sequencing, it is not possible to reliably detect minor variants with a relative frequency of no more than 10%. Second, the model cannot distinguish between mixtures of two or four clonal variants, if one of two sets of linear constraints is fulfilled. Furthermore, the approach requires repetitive sequencing of all variants that might be present in the mixture to be analyzed. Nevertheless, the effectiveness of our method on the two in vitro test sets shows that short-range linkage information of two ambiguous sequence positions can be inferred from Sanger sequencing chromatograms without any further assumptions on the mixture composition. Additionally, our model provides new insights into the established and widely used Sanger sequencing technology. The source code of our method is made available at http://bioinf.mpi-inf.mpg.de/publications/beggel/linkageinformation.zip. PMID:24376502
van Wijk, Michiel; de Bruijn, Paulien J A; Sabelis, Maurice W
2010-11-01
Phytoseiulus persimilis is a predatory mite that, in the absence of vision, relies on the detection of herbivore-induced plant odors to locate its prey, the two-spotted spider mite Tetranychus urticae. This herbivorous prey feeds on leaves of a wide variety of plant species in different families. The predatory mites respond to numerous structurally different compounds. However, typical spider-mite-induced plant compounds do not attract more predatory mites than plant compounds not associated with prey. Because the mites are sensitive to many compounds, components of odor mixtures may affect each other's perception. Although the response to pure compounds has been well documented, little is known about how interactions among compounds affect the response to odor mixtures. We assessed the relation between the mites' responses elicited by simple mixtures of two compounds and by the single components of these mixtures. The preference for the mixture was compared to predictions under three conceptual models, each based on one of the following assumptions: (1) the responses elicited by each of the individual components can be added to each other; (2) they can be averaged; or (3) one response overshadows the other. The observed response differed significantly from the response predicted under the additive response, average response, and overshadowing response models in 52, 36, and 32% of the experimental tests, respectively. Moreover, the behavioral responses elicited by individual compounds and their binary mixtures were determined as a function of the odor concentration. The relative contribution of each component to the behavioral response elicited by the mixture varied with the odor concentration, even though the ratio of both compounds in the mixture was kept constant. Our experiments revealed that compounds that elicited no response on their own affected the response elicited by binary mixtures they were part of. The results are not consistent with the hypothesis that P. persimilis perceives odor mixtures as a collection of strictly elemental objects. They suggest instead that odor mixtures are perceived as one synthetic whole.
Lyons, James E.; Royle, J. Andrew; Thomas, Susan M.; Elliott-Smith, Elise; Evenson, Joseph R.; Kelly, Elizabeth G.; Milner, Ruth L.; Nysewander, David R.; Andres, Brad A.
2012-01-01
Large-scale monitoring of bird populations is often based on count data collected across spatial scales that may include multiple physiographic regions and habitat types. Monitoring at large spatial scales may require multiple survey platforms (e.g., from boats and land when monitoring coastal species) and multiple survey methods. It becomes especially important to explicitly account for detection probability when analyzing count data that have been collected using multiple survey platforms or methods. We evaluated a new analytical framework, N-mixture models, to estimate actual abundance while accounting for multiple detection biases. During May 2006, we made repeated counts of Black Oystercatchers (Haematopus bachmani) from boats in the Puget Sound area of Washington (n = 55 sites) and from land along the coast of Oregon (n = 56 sites). We used a Bayesian analysis of N-mixture models to (1) assess detection probability as a function of environmental and survey covariates and (2) estimate total Black Oystercatcher abundance during the breeding season in the two regions. Probability of detecting individuals during boat-based surveys was 0.75 (95% credible interval: 0.42–0.91) and was not influenced by tidal stage. Detection probability from surveys conducted on foot was 0.68 (0.39–0.90); the latter was not influenced by fog, wind, or number of observers but was ~35% lower during rain. The estimated population size was 321 birds (262–511) in Washington and 311 (276–382) in Oregon. N-mixture models provide a flexible framework for modeling count data and covariates in large-scale bird monitoring programs designed to understand population change.
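The core of an N-mixture model is a site-level likelihood that marginalizes the latent abundance N over a Poisson prior, with each repeated count treated as binomial given N. A minimal numerical sketch of that marginal likelihood (the counts and parameter values below are invented, not the oystercatcher data):

```python
import math

def nmix_site_loglik(counts, lam, p, n_max=300):
    """Log-likelihood of repeated counts at one site under an N-mixture model:
    N ~ Poisson(lam); each count y_t ~ Binomial(N, p); N is marginalized out."""
    total = 0.0
    for n in range(max(counts), n_max + 1):
        # Poisson term in log space to avoid factorial overflow
        log_pois = -lam + n * math.log(lam) - math.lgamma(n + 1)
        log_binom = sum(
            math.log(math.comb(n, y)) + y * math.log(p) + (n - y) * math.log(1 - p)
            for y in counts
        )
        total += math.exp(log_pois + log_binom)
    return math.log(total)

# Hypothetical repeated counts at one site; compare two (lam, p) candidates:
counts = [3, 2, 4]
print(nmix_site_loglik(counts, 5.0, 0.6))   # plausible abundance/detection
print(nmix_site_loglik(counts, 1.0, 0.9))   # implausible given counts of 3-4
```

Summing this quantity over sites (with covariates entering lam and p through link functions) gives the joint likelihood that the Bayesian analysis in the study explores with MCMC.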
Patil, Ravindra B; Krishnamoorthy, P; Sethuraman, Shriram
2015-01-01
This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using the radio frequency (RF) ultrasound signal. The approach was evaluated on ultrasound RF data acquired from an artery-mimicking flow phantom using a prototype ultrasound system. The effectiveness of the proposed algorithm is demonstrated by comparison with existing wall tracking algorithms. The experimental results show that the proposed method provides a 20% reduction in the error margin compared to existing approaches in tracking the arterial wall movement. This approach, coupled with an ultrasound system, can be used to estimate the arterial compliance parameters required for screening of cardiovascular disorders.
An Investigation of Item Fit Statistics for Mixed IRT Models
ERIC Educational Resources Information Center
Chon, Kyong Hee
2009-01-01
The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
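Two ingredients of the selection step above can be sketched directly: a Weibull-mixture density and the AIC/BIC formulas. The parameter count assumes each component contributes a shape, a scale, and a weight, with the weights summing to one; the fitting itself (e.g., by EM) is omitted here:

```python
import math

def weibull_pdf(x, shape, scale):
    """Two-parameter Weibull density for x > 0."""
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-(z ** shape))

def weibull_mixture_pdf(x, weights, shapes, scales):
    """Density of a finite Weibull mixture: sum_i w_i * f(x; k_i, c_i)."""
    return sum(w * weibull_pdf(x, k, c) for w, k, c in zip(weights, shapes, scales))

def aic_bic(loglik, n_components, n_obs):
    """Information criteria for a fitted mixture with the stated parameter count."""
    p = 3 * n_components - 1      # shapes + scales + free weights
    return 2 * p - 2 * loglik, p * math.log(n_obs) - 2 * loglik
```

Given maximized log-likelihoods for one-, two-, and three-component fits of a wind power dataset, the component count with the lowest BIC would be retained.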
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC
ERIC Educational Resources Information Center
Depaoli, Sarah
2012-01-01
Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…
ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J
2014-07-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
Steingroever, Helen; Pachur, Thorsten; Šmíra, Martin; Lee, Michael D
2018-06-01
The Iowa Gambling Task (IGT) is one of the most popular experimental paradigms for comparing complex decision-making across groups. Most commonly, IGT behavior is analyzed using frequentist tests to compare performance across groups, and to compare inferred parameters of cognitive models developed for the IGT. Here, we present a Bayesian alternative based on Bayesian repeated-measures ANOVA for comparing performance, and a suite of three complementary model-based methods for assessing the cognitive processes underlying IGT performance. The three model-based methods involve Bayesian hierarchical parameter estimation, Bayes factor model comparison, and Bayesian latent-mixture modeling. We illustrate these Bayesian methods by applying them to test the extent to which differences in intuitive versus deliberate decision style are associated with differences in IGT performance. The results show that intuitive and deliberate decision-makers behave similarly on the IGT, and the modeling analyses consistently suggest that both groups of decision-makers rely on similar cognitive processes. Our results challenge the notion that individual differences in intuitive and deliberate decision styles have a broad impact on decision-making. They also highlight the advantages of Bayesian methods, especially their ability to quantify evidence in favor of the null hypothesis, and that they allow model-based analyses to incorporate hierarchical and latent-mixture structures.
NASA Astrophysics Data System (ADS)
Jesenska, Sona; Liess, Mathias; Schäfer, Ralf; Beketov, Mikhail; Blaha, Ludek
2013-04-01
Species sensitivity distribution (SSD) is a statistical method broadly used in the ecotoxicological risk assessment of chemicals. Originally it was used for prospective risk assessment of single substances, but it is now becoming increasingly important in the retrospective risk assessment of mixtures, including at the catchment scale. In the present work, SSD predictions (impacts of mixtures consisting of 25 pesticides; data from several catchments in Germany, France, and Finland) were compared with SPEAR-pesticides, which is a bioindicator index based on biological traits responsive to the effects of pesticides and post-contamination recovery. The results showed statistically significant correlations (Pearson's R, p<0.01) between SSD predictions (msPAF values) and values of SPEAR-pesticides (based on field biomonitoring observations). Comparisons of the thresholds established for the SSD and SPEAR approaches (SPEAR-pesticides=45%, i.e. the LOEC level, and msPAF=0.05 for SSD, i.e. HC5) showed that the use of chronic toxicity data significantly improved the agreement between the two methods, but the SPEAR-pesticides index was still more sensitive. Taken together, the validation study shows the good potential of SSD models for predicting the real impacts of micropollutant mixtures on natural communities of aquatic biota.
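The msPAF values compared above follow from two standard SSD steps: a (log-normal) SSD gives each substance's potentially affected fraction (PAF) at its measured concentration, and response addition combines these into a multi-substance PAF. A hedged sketch with invented numbers (the SSD parameters below are not those of the 25 pesticides in the study):

```python
import math

def paf_lognormal(conc, mu_log10, sigma_log10):
    """PAF from a log-normal SSD: Phi((log10(conc) - mu) / sigma)."""
    z = (math.log10(conc) - mu_log10) / sigma_log10
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ms_paf(pafs):
    """Multi-substance PAF by response addition: 1 - prod_i (1 - PAF_i)."""
    prod = 1.0
    for paf in pafs:
        prod *= 1.0 - paf
    return 1.0 - prod

# Invented SSD parameters (mu, sigma in log10 concentration units) for three pesticides:
ssds = [(0.5, 0.8), (1.2, 0.6), (-0.3, 1.0)]
concs = [0.8, 2.0, 0.1]  # hypothetical measured concentrations
print(ms_paf([paf_lognormal(c, m, s) for c, (m, s) in zip(concs, ssds)]))
```

The resulting msPAF can then be compared against a threshold such as the HC5-based msPAF = 0.05 used in the study.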
Song, Mingkai; Cui, Linlin; Kuang, Han; Zhou, Jingwei; Yang, Pengpeng; Zhuang, Wei; Chen, Yong; Liu, Dong; Zhu, Chenjie; Chen, Xiaochun; Ying, Hanjie; Wu, Jinglan
2018-08-10
An intermittent simulated moving bed (3F-ISMB) operation scheme, an extension of 3W-ISMB to the non-linear adsorption region, has been introduced for separation of a glucose, lactic acid, and acetic acid ternary mixture. This work focuses on exploring the feasibility of the proposed process theoretically and experimentally. First, a 3F-ISMB model coupled with the transport dispersive model (TDM) and the modified Langmuir isotherm was established to build the separation parameter plane. Subsequently, three operating conditions were selected from the plane to run the 3F-ISMB unit. The experimental results were used to verify the model. Afterwards, the influences of the various flow rates on the separation performance were investigated systematically by means of the validated 3F-ISMB model. The intermittently retained component, lactic acid, was finally obtained with a purity of 98.5%, a recovery of 95.5%, and an average concentration of 38 g/L. The proposed 3F-ISMB process can efficiently separate a mixture with low selectivity into three fractions.
NASA Astrophysics Data System (ADS)
Budi Astuti, Ani; Iriawan, Nur; Irhamah; Kuswanto, Heri; Sasiarini, Laksmi
2017-10-01
Bayesian statistics offers an approach that is very flexible with respect to sample size and data distribution. The Bayesian Mixture Model (BMM) is a Bayesian approach for multimodal models. Diabetes Mellitus (DM) is commonly known in the Indonesian community as "sweet pee". Although DM is a chronic non-communicable disease, it is very dangerous to humans because of the complications it causes. A WHO report in 2013 ranked DM 6th in the world among the leading causes of human death. In Indonesia, DM continues to increase over time. This research studied the patterns of DM data and built BMM models through simulation studies, where the simulated data were based on the blood sugar levels of DM patients at RSUD Saiful Anwar Malang. The results successfully demonstrated that the DM data follow a normal mixture distribution. The BMM models succeeded in accommodating the real condition of the DM data based on the data-driven concept.
Rock Content Influence on Soil Hydraulic Properties
NASA Astrophysics Data System (ADS)
Parajuli, K.; Sadeghi, M.; Jones, S. B.
2015-12-01
Soil hydraulic properties, including the soil water retention curve (SWRC) and the hydraulic conductivity function, are important characteristics of soil affecting a variety of soil properties and processes. The hydraulic properties are commonly measured for sieved soils (i.e. particles < 2 mm), but many natural soils include rock fragments of varying size that alter bulk hydraulic properties. Relatively few studies have addressed this important problem using physically-based concepts. Motivated by this knowledge gap, we set out to describe soil hydraulic properties using binary mixtures (i.e. rock fragment inclusions in a soil matrix) based on the individual properties of the rock and soil. As a first step of this study, special attention was devoted to the SWRC, where the impact of rock content on the SWRC was quantified using laboratory experiments for six different mixing ratios of soil matrix and rock. The SWRC for each mixture was obtained from water mass and water potential measurements. The resulting data for the studied mixtures yielded a family of SWRCs indicating how the SWRC of the mixture is related to that of the individual media, i.e., soil and rock. A consistent model was also developed to describe the hydraulic properties of the mixture as a function of the individual properties of the rock and soil matrix. Key words: Soil hydraulic properties, rock content, binary mixture, experimental data.
A study of finite mixture model: Bayesian approach on financial time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-07-01
Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical method used to fit the mixture model. The Bayesian method is widely used because its asymptotic properties provide remarkable results. In addition, the Bayesian method also shows consistency, which means the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is determined using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between the rubber price and the stock market price for Malaysia, Thailand, the Philippines and Indonesia. The results showed a negative relationship between the rubber price and the stock market price for all selected countries.
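The BIC-based selection of the number of components can be sketched as follows; the maximized log-likelihood values are hypothetical, and the parameter count assumes univariate Gaussian components:

```python
import math

def bic(log_lik, n_params, n_obs):
    """Bayesian Information Criterion; lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_lik

def choose_k(log_liks, n_obs):
    """Pick the number of mixture components k (1-based) minimizing BIC.
    A k-component univariate Gaussian mixture has 3k - 1 free parameters
    (k means, k variances, k - 1 weights)."""
    scores = [bic(ll, 3 * (k + 1) - 1, n_obs) for k, ll in enumerate(log_liks)]
    return scores.index(min(scores)) + 1

# Hypothetical maximized log-likelihoods for k = 1..4 on n = 200 points;
# the likelihood gain beyond k = 2 is too small to offset the BIC penalty
lls = [-512.3, -471.8, -468.9, -466.2]
print(choose_k(lls, 200))
```

The penalty term grows with log(n), so BIC tends to prefer more parsimonious mixtures than AIC on large samples.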
BahramParvar, Maryam; Tehrani, Mostafa Mazaheri; Razavi, Seyed M A; Koocheki, Arash
2015-03-01
This study aimed to obtain an optimum stabilizer formulation for ice cream that could compete with the blends available today. Thus, different mixtures of three stabilizers, i.e. basil seed gum, carboxymethyl cellulose, and guar gum, at two concentrations (0.15% and 0.35%) were studied using mixture design methodology. The influence of these mixtures on some properties of ice cream, and the regression models for them, were also determined. Generally, high ratios of basil seed gum in the mixture increased the apparent viscosity of the ice cream mixes and decreased the melting rate. Increasing the proportion of this stabilizer, as well as of guar gum, in the mixtures at a concentration of 0.15% enhanced the overrun of the samples. Based on the optimization criteria, the best combination was 84.43% basil seed gum and 15.57% guar gum at a concentration of 0.15%. This research demonstrated the capability of basil seed gum as a novel stabilizer for ice cream stabilization.
NASA Technical Reports Server (NTRS)
Roberts, Dar A.; Green, Robert O.; Sabol, Donald E.; Adams, John B.
1993-01-01
Imaging spectrometry offers a new way of deriving ecological information about vegetation communities from remote sensing. Applications include derivation of canopy chemistry, measurement of column atmospheric water vapor and liquid water, improved detectability of materials, more accurate estimation of green vegetation cover, and discrimination of spectrally distinct green leaf, non-photosynthetic vegetation (NPV: litter, wood, bark, etc.) and shade spectra associated with different vegetation communities. Much of our emphasis has been on interpreting Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data as spectral mixtures. Two approaches have been used, ranging from simple models, where the data are treated as a mixture of 3 to 4 laboratory/field-measured spectra, known as reference endmembers (EMs), applied uniformly to the whole image, to more complex models where both the number of EMs and the types of EMs vary on a per-pixel basis. Where simple models are applied, materials such as NPV, which are spectrally similar to soils, can be discriminated on the basis of residual spectra. One key aspect is that the data are calibrated to reflectance and modeled as mixtures of reference EMs, permitting temporal comparison of EM fractions independent of scene location or data type. In previous studies the calibration was performed using a modified empirical-line calibration, assuming a uniform atmosphere across the scene. In this study, a Modtran-based calibration approach was used to map liquid water and atmospheric water vapor and to retrieve surface reflectance from three AVIRIS scenes acquired in 1992 over the Jasper Ridge Biological Preserve. The data were acquired on June 2nd, September 4th and October 6th. Reflectance images were analyzed as spectral mixtures of reference EMs using a simple 4-EM model. Atmospheric water vapor derived from Modtran was compared to elevation and community type.
Liquid water was compared to the abundance of NPV, shade and green vegetation (GV) for selected sites to determine whether a relationship existed, and under what conditions the relationship broke down. Temporal trends in endmember fractions, liquid water and atmospheric water vapor were also investigated. The combination of spectral mixture analysis and the Modtran-based atmospheric/liquid water models was used to develop a unique vegetation community description.
Mathematical modeling of a radio-frequency path for IEEE 802.11ah based wireless sensor networks
NASA Astrophysics Data System (ADS)
Tyshchenko, Igor; Cherepanov, Alexander; Dmitrii, Vakhnin; Popova, Mariia
2017-09-01
This article discusses the process of creating a mathematical model of a radio-frequency path for an IEEE 802.11ah based wireless sensor network using MATLAB Simulink CAD tools. In addition, it describes the perturbing effects that occur and the detection of the presence of a useful signal in the received mixture.
Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth
2011-01-01
Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, affecting the non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. 
Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
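The beta-binomial mechanism the authors use to simulate correlated detections can be sketched as follows; the abundance, detection probability, correlation and seed are illustrative choices, not values from the manatee surveys:

```python
import random

def beta_binomial(n, mean_p, rho, rng):
    """Draw a beta-binomial count: the detection probability is itself
    Beta-distributed, inducing intra-class correlation rho among the n
    individual detections."""
    s = (1.0 - rho) / rho  # Beta(a, b) with mean mean_p and rho = 1/(a+b+1)
    a, b = mean_p * s, (1.0 - mean_p) * s
    p = rng.betavariate(a, b)
    return sum(1 for _ in range(n) if rng.random() < p)

rng = random.Random(42)
N, p, rho, reps = 50, 0.4, 0.3, 2000
draws = [beta_binomial(N, p, rho, rng) for _ in range(reps)]
mean = sum(draws) / reps
var = sum((d - mean) ** 2 for d in draws) / reps
# The plain binomial variance would be N*p*(1-p) = 12; correlated
# detections inflate the variance well beyond that
print(round(mean, 2), round(var, 2))
```

This overdispersion is exactly what makes a standard binomial mixture model, which expects binomial variance, overestimate abundance.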
A competitive binding model predicts the response of mammalian olfactory receptors to mixtures
NASA Astrophysics Data System (ADS)
Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay
Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system we need methods to predict responses to complex mixtures from single odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions where only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts responses of 15 (of 18 tested) receptors to within 10 - 30 % of the observed values, for mixtures with 2, 3 and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions lead to additional nonlinearities observed in mixture response like suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
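A minimal sketch of such a competitive binding model, assuming Hill-type single-odorant parameters (efficacy and EC50) that would normally be fit to dose-response data; all numbers below are hypothetical:

```python
def mixture_response(concs, efficacies, ec50s):
    """Competitive binding: only one odorant can occupy the receptor at a
    time, so the occupancy terms c/EC50 share one normalizing denominator."""
    terms = [c / k for c, k in zip(concs, ec50s)]
    denom = 1.0 + sum(terms)
    return sum(e * t for e, t in zip(efficacies, terms)) / denom

# Two hypothetical odorants; the shared denominator makes the mixture
# response sub-additive relative to the single-odorant responses
r1 = mixture_response([10.0, 0.0], [1.0, 0.8], [5.0, 2.0])
r2 = mixture_response([0.0, 4.0], [1.0, 0.8], [5.0, 2.0])
r12 = mixture_response([10.0, 4.0], [1.0, 0.8], [5.0, 2.0])
print(r1, r2, r12)
```

With no extra parameters beyond the single-odorant fits, the normalization alone reproduces the saturating, sub-additive mixture responses the abstract describes.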
Thomas, Cory; Lu, Xinyu; Todd, Andrew; Raval, Yash; Tzeng, Tzuen-Rong; Song, Yongxin; Wang, Junsheng; Li, Dongqing; Xuan, Xiangchun
2017-01-01
The separation of particles and cells in a uniform mixture has been extensively studied as a necessity in many chemical and biomedical engineering and research fields. This work demonstrates a continuous charge-based separation of fluorescent and plain spherical polystyrene particles with comparable sizes in a ψ-shaped microchannel via the wall-induced electrical lift. The effects of both the direct current electric field in the main-branch and the electric field ratio in between the inlet branches for sheath fluid and particle mixture are investigated on this electrokinetic particle separation. A Lagrangian tracking method based theoretical model is also developed to understand the particle transport in the microchannel and simulate the parametric effects on particle separation. Moreover, the demonstrated charge-based separation is applied to a mixture of yeast cells and polystyrene particles with similar sizes. Good separation efficiency and purity are achieved for both the cells and the particles. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Silva-Fernandes, Talita; Duarte, Luís Chorão; Carvalheiro, Florbela; Loureiro-Dias, Maria Conceição; Fonseca, César; Gírio, Francisco
2015-05-01
This work studied the processing of biomass mixtures containing three lignocellulosic materials largely available in Southern Europe, eucalyptus residues (ER), wheat straw (WS) and olive tree pruning (OP). The mixtures were chemically characterized, and their pretreatment, by autohydrolysis, evaluated within a severity factor (logR0) ranging from 1.73 up to 4.24. A simple modeling strategy was used to optimize the autohydrolysis conditions based on the chemical characterization of the liquid fraction. The solid fraction was characterized to quantify the polysaccharide and lignin content. The pretreatment conditions for maximal saccharides recovery in the liquid fraction were at a severity range (logR0) of 3.65-3.72, independently of the mixture tested, which suggests that autohydrolysis can effectively process mixtures of lignocellulosic materials for further biochemical conversion processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
Equations of state and transport properties of mixtures in the warm dense regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Yong; Dai, Jiayu; Kang, Dongdong
2015-02-15
We have performed average-atom molecular dynamics to simulate CH and LiH mixtures in the warm dense regime, and obtained equations of state and the ionic transport properties. The electronic structures are calculated by using the modified average-atom model, which includes the broadening of energy levels, and the ion-ion pair potentials of the mixtures are constructed based on temperature-dependent density functional theory. The ionic transport properties, such as ionic diffusion and shear viscosity, are obtained through the ionic velocity correlation functions. The equations of state and transport properties for carbon-hydrogen and lithium-hydrogen mixtures are calculated over a wide region of density and temperature. By computing the average ionization degree, average ion-sphere diameter and transition properties of the mixture, it is shown that transport properties depend not only on the ionic mass but also on the average ionization degree.
NASA Astrophysics Data System (ADS)
Alizadeh Behjani, Mohammadreza; Hassanpour, Ali; Ghadiri, Mojtaba; Bayly, Andrew
2017-06-01
Segregation of granules is an undesired phenomenon in which particles in a mixture separate from each other based on differences in their physical and chemical properties. It is, therefore, crucial to control the homogeneity of the system by applying appropriate techniques, which requires a fundamental understanding of the underlying mechanisms. In this study, the effect of particle shape and cohesion has been analysed. As a model system prone to segregation, heap formation of a ternary mixture of particles representing the common ingredients of home washing powders, namely spray-dried detergent powder, tetraacetylethylenediamine, and enzyme placebo (as the minor ingredient), is modelled numerically by the Discrete Element Method (DEM), with the aim of investigating the effect of cohesion/adhesion of the minor component on segregation quality. Non-spherical particle shapes are created in DEM using the clumped-sphere method based on their X-ray tomograms. Experimentally, inter-particle adhesion is generated by coating the minor ingredient (enzyme placebo) with Polyethylene Glycol 400 (PEG 400). The JKR theory is used to model the cohesion/adhesion of the coated enzyme placebo particles in the simulation. Tests are carried out experimentally and simulated numerically by mixing the placebo particles (uncoated and coated) with the other ingredients and pouring them into a test box. The simulation and experimental results are compared qualitatively and quantitatively. It is found that coating the minor ingredient in the mixture reduces segregation significantly while the change in flowability of the system is negligible.
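For the adhesive contacts mentioned above, JKR theory gives a closed-form pull-off force between two elastic spheres; the work of adhesion and granule radii below are hypothetical illustrations, not values from the study:

```python
import math

def jkr_pulloff_force(work_of_adhesion, r1, r2):
    """JKR pull-off (separation) force between two elastic spheres:
    F = (3/2) * pi * Gamma * R_eff, with reduced radius
    R_eff = r1 * r2 / (r1 + r2)."""
    r_eff = r1 * r2 / (r1 + r2)
    return 1.5 * math.pi * work_of_adhesion * r_eff

# Hypothetical inputs: Gamma = 0.05 J/m^2 for a coated surface,
# two 0.5 mm granules; result is in newtons
f = jkr_pulloff_force(0.05, 5e-4, 5e-4)
print(f)
```

Notably, the pull-off force is independent of the elastic moduli, so in a DEM calibration the surface energy alone sets the detachment threshold of the coated placebo particles.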
Brown, Colin D.; de Zwart, Dick; Diamond, Jerome; Dyer, Scott D.; Holmes, Christopher M.; Marshall, Stuart; Burton, G. Allen
2018-01-01
Abstract Ecological risk assessment increasingly focuses on risks from chemical mixtures and multiple stressors because ecosystems are commonly exposed to a plethora of contaminants and nonchemical stressors. To simplify the task of assessing potential mixture effects, we explored 3 land use–related chemical emission scenarios. We applied a tiered methodology to judge the implications of the emissions of chemicals from agricultural practices, domestic discharges, and urban runoff in a quantitative model. The results showed land use–dependent mixture exposures, clearly discriminating downstream effects of land uses, with unique chemical “signatures” regarding composition, concentration, and temporal patterns. Associated risks were characterized in relation to the land‐use scenarios. Comparisons to measured environmental concentrations and predicted impacts showed relatively good similarity. The results suggest that the land uses imply exceedances of regulatory protective environmental quality standards, varying over time in relation to rain events and associated flow and dilution variation. Higher‐tier analyses using ecotoxicological effect criteria confirmed that species assemblages may be affected by exposures exceeding no‐effect levels and that mixture exposure could be associated with predicted species loss under certain situations. The model outcomes can inform various types of prioritization to support risk management, including a ranking across land uses as a whole, a ranking on characteristics of exposure times and frequencies, and various rankings of the relative role of individual chemicals. Though all results are based on in silico assessments, the prospective land use–based approach applied in the present study yields useful insights for simplifying and assessing potential ecological risks of chemical mixtures and can therefore be useful for catchment‐management decisions. Environ Toxicol Chem 2018;37:715–728. © 2017 The Authors. 
Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. PMID:28845901
Item selection via Bayesian IRT models.
Arima, Serena
2015-02-10
With reference to a questionnaire that aimed to assess the quality of life for dysarthric speakers, we investigate the usefulness of a model-based procedure for reducing the number of items. We propose a mixed cumulative logit model, which is known in the psychometrics literature as the graded response model: responses to different items are modelled as a function of individual latent traits and as a function of item characteristics, such as their difficulty and their discrimination power. We jointly model the discrimination and the difficulty parameters by using a k-component mixture of normal distributions. Mixture components correspond to disjoint groups of items. Items that belong to the same groups can be considered equivalent in terms of both difficulty and discrimination power. According to decision criteria, we select a subset of items such that the reduced questionnaire is able to provide the same information that the complete questionnaire provides. The model is estimated by using a Bayesian approach, and the choice of the number of mixture components is justified according to information criteria. We illustrate the proposed approach on the basis of data that are collected for 104 dysarthric patients by local health authorities in Lecce and in Milan. Copyright © 2014 John Wiley & Sons, Ltd.
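The graded response model underlying this item-selection procedure can be sketched as follows, assuming the usual logistic form for the cumulative category probabilities; the item parameters below are hypothetical:

```python
import math

def grm_probs(theta, a, thresholds):
    """Graded response model: cumulative curves
    P(Y >= k) = logistic(a * (theta - b_k)); category probabilities are
    differences of adjacent cumulative curves."""
    cum = [1.0]
    cum += [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds]
    cum += [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# One hypothetical 4-category item: discrimination a = 1.5 and ordered
# difficulty thresholds b; theta is the respondent's latent trait
probs = grm_probs(theta=0.3, a=1.5, thresholds=[-1.0, 0.0, 1.2])
print([round(p, 3) for p in probs])
```

Items whose (a, b) parameters fall in the same mixture component produce nearly identical curves of this kind, which is what justifies dropping all but one of them from the questionnaire.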
Accuracy assessment of linear spectral mixture model due to terrain undulation
NASA Astrophysics Data System (ADS)
Wang, Tianxing; Chen, Songlin; Ma, Ya
2008-12-01
Mixed spectra are common in remote sensing due to the limitations of spatial resolution and the heterogeneity of the land surface. During the past 30 years, many subpixel models have been developed to investigate the information within mixed pixels. The linear spectral mixture model (LSMM) is a simpler and more general subpixel model. The LSMM, also known as spectral mixture analysis, is a widely used procedure to determine the proportion of endmembers (constituent materials) within a pixel based on the endmembers' spectral characteristics. The unmixing accuracy of the LSMM is restricted by a variety of factors, but research on the LSMM has so far focused mostly on appraising its nonlinear effects and on techniques for selecting endmembers; unfortunately, the environmental conditions of the study area that can sway the unmixing accuracy, such as atmospheric scattering and terrain undulation, have not been studied. This paper probes the accuracy uncertainty of the LSMM resulting from terrain undulation. An ASTER dataset was chosen and the C terrain-correction algorithm was applied to it. On this basis, fractional abundances for different cover types were extracted from both pre- and post-C terrain-illumination-corrected ASTER data using the LSMM. Simultaneously, regression analyses and an IKONOS image were introduced to assess the unmixing accuracy. Results showed that terrain undulation can dramatically constrain the application of the LSMM in mountainous areas. Specifically, for vegetation abundances, an improvement in unmixing accuracy (R2) of 17.6% (regression against NDVI) and 18.6% (regression against MVI) was achieved by removing terrain undulation. This study indicated in a quantitative way that effective removal or minimization of terrain illumination effects is essential for applying the LSMM. This paper could also provide a new instance for LSMM applications in mountainous areas. In addition, the methods employed in this study could be effectively used to evaluate different algorithms for terrain undulation correction in further studies.
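The linear spectral mixture model itself reduces to a constrained least-squares problem; the sketch below uses hypothetical 4-band endmember spectra and imposes the sum-to-one constraint via a heavily weighted extra equation (one of several common choices):

```python
import numpy as np

# Endmember spectra as columns: hypothetical 4-band reflectances for
# vegetation, soil, and shade (not from the study)
E = np.array([[0.05, 0.20, 0.02],
              [0.45, 0.25, 0.03],
              [0.30, 0.30, 0.02],
              [0.50, 0.35, 0.04]])

def unmix(pixel, endmembers, weight=100.0):
    """Linear spectral mixture model: solve pixel ~ E @ f, with the
    sum-to-one constraint on the fractions f appended as a heavily
    weighted extra row of the least-squares system."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# A synthetic pixel built from known fractions is recovered exactly
true_f = np.array([0.6, 0.3, 0.1])
pixel = E @ true_f
print(np.round(unmix(pixel, E), 3))
```

Terrain-induced illumination changes scale the observed pixel spectrum, which is precisely why uncorrected topography biases the recovered fractions f.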
In vitro screening for population variability in toxicity of pesticide-containing mixtures
Abdo, Nour; Wetmore, Barbara A.; Chappell, Grace A.; Shea, Damian; Wright, Fred A.; Rusyna, Ivan
2016-01-01
Population-based human in vitro models offer exceptional opportunities for evaluating the potential hazard and mode of action of chemicals, as well as variability in responses to toxic insults among individuals. This study was designed to test the hypothesis that comparative population genomics with an efficient in vitro experimental design can be used for evaluation of the potential hazard, mode of action, and extent of population variability in responses to chemical mixtures. We selected 146 lymphoblast cell lines from 4 ancestrally and geographically diverse human populations based on the availability of genome sequence and basal RNA-seq data. Cells were exposed to two pesticide mixtures – an environmental surface water sample comprised primarily of organochlorine pesticides and a laboratory-prepared mixture of 36 currently used pesticides – in concentration response and evaluated for cytotoxicity. On average, the two mixtures exhibited a similar range of in vitro cytotoxicity and showed considerable inter-individual variability across the screened cell lines. However, when in vitro-to-in vivo extrapolation (IVIVE) coupled with reverse dosimetry was employed to convert the in vitro cytotoxic concentrations to oral equivalent doses, and these were compared to the upper bound of predicted human exposure, we found that the nominally more cytotoxic chlorinated pesticide mixture is expected to have a greater margin of safety (more than 5 orders of magnitude) than the current-use pesticide mixture (less than 2 orders of magnitude), due primarily to differences in exposure predictions. Multivariate genome-wide association mapping revealed an association between the toxicity of the current-use pesticide mixture and a polymorphism, rs1947825, in C17orf54.
We conclude that a combination of in vitro human population-based cytotoxicity screening followed by dosimetric adjustment and comparative population genomics analyses enables quantitative evaluation of human health hazard from complex environmental mixtures. Additionally, such an approach yields testable hypotheses regarding potential toxicity mechanisms. PMID:26386728
CORRELATION OF THE GLASS TRANSITION TEMPERATURE OF PLASTICIZED PVC USING A LATTICE FLUID MODEL
A model has been developed to describe the composition dependence of the glass transition temperature (Tg) of polyvinyl chloride (PVC) + plasticizer mixtures. The model is based on Sanchez-Lacombe equation of state and the Gibbs-Di Marzio criterion, which states that th...
ERIC Educational Resources Information Center
Eichinger, John
2005-01-01
Models are crucial to science teaching and learning, yet they can create unforeseen and overlooked challenges for students and teachers. For example, consider the time-tested clay volcano that relies on a vinegar and-baking-soda mixture for its "eruption." Based on a classroom demonstration of that geologic model, elementary students may interpret…
Falchetto, Augusto Cannone; Moon, Ki Hoon; Wistuba, Michael P
2014-09-02
The use of recycled materials in pavement construction has seen, over the years, a significant increase closely associated with substantial economic and environmental benefits. During the past decades, many transportation agencies have evaluated the effect of adding Reclaimed Asphalt Pavement (RAP), and, more recently, Recycled Asphalt Shingles (RAS) on the performance of asphalt pavement, while limits were proposed on the amount of recycled materials which can be used. In this paper, the effect of adding RAP and RAS on the microstructural and low temperature properties of asphalt mixtures is investigated using digital image processing (DIP) and modeling of rheological data obtained with the Bending Beam Rheometer (BBR). Detailed information on the internal microstructure of asphalt mixtures is acquired based on digital images of small beam specimens and numerical estimations of spatial correlation functions. It is found that RAP increases the autocorrelation length (ACL) of the spatial distribution of aggregates, asphalt mastic and air voids phases, while an opposite trend is observed when RAS is included. Analogical and semi empirical models are used to back-calculate binder creep stiffness from mixture experimental data. Differences between back-calculated results and experimental data suggest limited or partial blending between new and aged binder.
Finite mixture models for the computation of isotope ratios in mixed isotopic samples
NASA Astrophysics Data System (ADS)
Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas
2013-04-01
Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst; thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of the data, taking the respective slope as an estimate of the isotope ratio. The finite mixture models are parameterised by:
• the number of different ratios,
• the number of points belonging to each ratio-group,
• the ratios (i.e. slopes) of each group.
Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups smaller than a control parameter are dropped; thereby the number of different ratios is determined.
The analyst influences only some control parameters of the algorithm: the maximum number of ratios and the minimum relative size of the group of data points belonging to each ratio have to be defined. Computation of the models can be done with statistical software. In this study, Leisch and Grün's flexmix package [2] for the statistical open-source software R was applied. A code example is available in the electronic supplementary material of Kappel et al. [1]. In order to demonstrate the usefulness of finite mixture models in fields dealing with the computation of multiple isotope ratios in mixed samples, a transparent example based on simulated data is presented and problems regarding small group sizes are illustrated. In addition, the application of finite mixture models to isotope ratio data measured in uranium oxide particles is shown. The results indicate that finite mixture models perform well in computing isotope ratios relative to traditional estimation procedures, and can be recommended for a more objective and straightforward calculation of isotope ratios in geochemistry than is current practice. [1] S. Kappel, S. Boulyga, L. Dorta, D. Günther, B. Hattendorf, D. Koffler, G. Laaha, F. Leisch and T. Prohaska: Evaluation Strategies for Isotope Ratio Measurements of Single Particles by LA-MC-ICPMS, Analytical and Bioanalytical Chemistry, 2013, accepted for publication on 2012-12-18 (doi: 10.1007/s00216-012-6674-3) [2] B. Grün and F. Leisch: Fitting finite mixtures of generalized linear regressions in R. Computational Statistics & Data Analysis, 51(11), 5247-5252, 2007. (doi:10.1016/j.csda.2006.08.014)
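A toy EM fit of a mixture of zero-intercept regressions, in the spirit of the flexmix approach described above (the study itself used R; this Python sketch, with illustrative slopes, noise level and initialization, is not the authors' code):

```python
import math
import random

def em_slopes(xs, ys, k, iters=100, sigma=0.1):
    """Toy EM for a k-component mixture of zero-intercept regressions
    y = slope * x; the fitted slopes estimate the isotope ratios."""
    ratios = sorted(y / x for x, y in zip(xs, ys))
    # spread the initial slopes across the observed point-wise ratios
    slopes = [ratios[int(i * (len(ratios) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        # E-step: responsibilities from Gaussian residuals (equal weights)
        resp = []
        for x, y in zip(xs, ys):
            w = [math.exp(-((y - s * x) ** 2) / (2 * sigma ** 2)) for s in slopes]
            t = sum(w)
            resp.append([wi / t for wi in w] if t > 0 else [1.0 / k] * k)
        # M-step: responsibility-weighted least-squares slope per component
        slopes = [
            sum(r[j] * x * y for r, (x, y) in zip(resp, zip(xs, ys)))
            / sum(r[j] * x * x for r, (x, _) in zip(resp, zip(xs, ys)))
            for j in range(k)
        ]
    return sorted(slopes)

# Simulated two-ratio data: true slopes 0.5 and 2.0 plus small Gaussian noise
rng = random.Random(7)
xs, ys = [], []
for i in range(200):
    x = rng.uniform(1.0, 10.0)
    xs.append(x)
    ys.append((0.5 if i % 2 == 0 else 2.0) * x + rng.gauss(0.0, 0.05))
print([round(s, 3) for s in em_slopes(xs, ys, k=2)])
```

On such well-separated simulated data the two fitted slopes recover the generating ratios closely; the small-group problems discussed above arise when one component is responsible for only a handful of points.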
Lebrun, Jérémie D; Uher, Emmanuelle; Fechner, Lise C
2017-12-01
Metals are usually present as mixtures at low concentrations in aquatic ecosystems. However, the toxicity and sub-lethal effects of metal mixtures on organisms are still poorly addressed in environmental risk assessment. Here we investigated the biochemical and behavioural responses of Gammarus fossarum to Cu, Cd, Ni, Pb and Zn tested individually or in mixture (M2X) at concentrations twice the levels of environmental quality standards (EQSs) from the European Water Framework Directive. The same metal mixture was also tested at concentrations equivalent to EQSs (M1X), thus in a regulatory context, as EQSs are proposed to protect aquatic biota. For each exposure condition, mortality, locomotion, respiration and enzymatic activities involved in digestive metabolism and moult were monitored over a 120 h exposure period. Multi-metric variations were summarized by the integrated biomarker response index (IBR). Mono-metallic exposures shed light on biological alterations occurring in gammarids at environmental exposure levels, depending on the metal considered and on gender. As regards mixtures, biomarkers were altered for both M2X and M1X. However, no additive or synergistic effect of metals was observed compared with the mono-metallic exposures. Indeed, bioaccumulation data highlighted competitive interactions between metals in M2X, which subsequently decreased their internalisation and toxicity. IBR values indicated that the health of gammarids was more impacted by M1X than by M2X, owing to reduced competition and enhanced uptake of metals in the mixture at the lower, EQS-like concentrations. Models using bioconcentration data obtained from mono-metallic exposures generated successful predictions of global toxicity for both M1X and M2X. We conclude that sub-lethal effects of mixtures identified by the multi-biomarker approach can lead to disturbances in the population dynamics of gammarids.
Although IBR-based models offer promising lines of enquiry for predicting metal mixture toxicity, further studies are needed to confirm their predictive quality over larger ranges of metal combinations before their use in field conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Buckley, Barbara; Farraj, Aimen
2015-09-01
Air pollution consists of a complex mixture of particulate and gaseous components. Individual criteria and other hazardous air pollutants have been linked to adverse respiratory and cardiovascular health outcomes. However, assessing risk of air pollutant mixtures is difficult since components are present in different combinations and concentrations in ambient air. Recent mechanistic studies have limited utility because of the inability to link measured changes to adverse outcomes that are relevant to risk assessment. New approaches are needed to address this challenge. The purpose of this manuscript is to describe a conceptual model, based on the adverse outcome pathway approach, which connects initiating events at the cellular and molecular level to population-wide impacts. This may facilitate hazard assessment of air pollution mixtures. In the case reports presented here, airway hyperresponsiveness and endothelial dysfunction are measurable endpoints that serve to integrate the effects of individual criteria air pollutants found in inhaled mixtures. This approach incorporates information from experimental and observational studies into a sequential series of higher order effects. The proposed model has the potential to facilitate multipollutant risk assessment by providing a framework that can be used to converge the effects of air pollutants in light of common underlying mechanisms. This approach may provide a ready-to-use tool to facilitate evaluation of health effects resulting from exposure to air pollution mixtures. Published by Elsevier Ireland Ltd.
NASA Astrophysics Data System (ADS)
Avendaño-Valencia, Luis David; Fassois, Spilios D.
2017-07-01
The study focuses on vibration-response-based health monitoring for an operating wind turbine, which features time-dependent dynamics under environmental and operational uncertainty. A Gaussian Mixture Model Random Coefficient (GMM-RC) model-based Structural Health Monitoring framework postulated in a companion paper is adopted and assessed. The assessment is based on vibration response signals obtained from a simulated offshore 5 MW wind turbine. The non-stationarity in the vibration signals originates from the inertial properties, which continually evolve due to blade rotation, as well as from the wind characteristics, while uncertainty is introduced by random variations of the wind speed within the range of 10-20 m/s. Monte Carlo simulations are performed using six distinct structural states, including the healthy state and five types of damage/fault in the tower, the blades, and the transmission, each characterized by four distinct levels. Random vibration response modeling and damage diagnosis are illustrated, along with pertinent comparisons with state-of-the-art diagnosis methods. The results demonstrate consistently good performance of the GMM-RC model-based framework, offering significant performance improvements over state-of-the-art methods. Most damage types and levels are shown to be properly diagnosed using a single vibration sensor.
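The core idea of likelihood-based monitoring under uncertainty can be conveyed by a much simpler stand-in than the GMM-RC framework itself: fit a Gaussian mixture to features of healthy-state vibration signals and raise an alarm when a new signal's likelihood falls below a baseline threshold. The synthetic 2-D features, thresholds, and use of scikit-learn below are illustrative assumptions, not the paper's method:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical 2-D feature vectors (e.g. AR-model coefficients) extracted from
# vibration signals; the healthy-state scatter mimics wind-speed variability,
# and a damaged state shifts the features.
healthy = rng.normal([0.0, 0.0], [0.3, 0.3], size=(500, 2))
damaged = rng.normal([1.5, -1.2], [0.3, 0.3], size=(50, 2))

# Model the healthy state with a Gaussian mixture and alarm on low likelihood.
gmm = GaussianMixture(n_components=3, random_state=0).fit(healthy)
thresh = np.percentile(gmm.score_samples(healthy), 1)   # ~1% false-alarm rate
detection_rate = (gmm.score_samples(damaged) < thresh).mean()
```

The threshold trades false alarms against missed detections; here it is simply the 1st percentile of healthy-state log-likelihoods.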
Effect of the oxygen balance on ignition and detonation properties of liquid explosive mixtures
NASA Astrophysics Data System (ADS)
Genetier, M.; Osmont, A.; Baudin, G.
2014-05-01
The objective is to compare the ignition and detonation properties of various liquid high explosives having negative up to positive oxygen balance (OB): nitromethane (OB < 0), a saccharose and hydrogen peroxide based mixture (quasi nil OB), and hydrogen peroxide with more than 90% purity (OB > 0). The decomposition kinetic rates and the equations of state (EOS) for the liquid mixtures and detonation products (DP) are the input data for a detonation model. EOS are theoretically determined using the Woolfolk et al. universal liquid polar shock law and thermochemical computations for DP. The decomposition kinetic rate laws are determined to reproduce the shock-to-detonation transition for the mixtures submitted to planar plate impacts. Such a model is not sufficient to compute open-field explosions: the aerial overpressure is well reproduced in the first few microseconds, but the agreement deteriorates at large expansion of the fireball and the impulse is underestimated. The problem with the DP EOS alone is that it accounts only for the detonation; the secondary combustion of the DP with air is not considered. To solve this problem, a secondary combustion model has been developed to take the OB effect into account. The detonation model has been validated on planar plate impact experiments. The secondary combustion parameters were deduced from thermochemical computations. The whole model has been used to predict the effects of the oxygen balance on open-air blast effects of spherical charges.
Effect of the oxygen balance on ignition and detonation properties of liquid explosive mixtures
NASA Astrophysics Data System (ADS)
Genetier, Marc; Osmont, Antoine; Baudin, Gerard
2013-06-01
The objective is to compare the ignition and detonation properties of various liquid high explosives having negative up to positive oxygen balance (OB): nitromethane (OB < 0), a saccharose and hydrogen peroxide based mixture (quasi nil OB), and hydrogen peroxide with more than 90% purity (OB > 0). The decomposition kinetic rates and the equations of state (EOS) for the liquid mixtures and detonation products (DP) are the input data for a detonation model. EOS are theoretically determined using the Woolfolk et al. universal liquid polar shock law and thermochemical computations for DP. The decomposition kinetic rate laws are determined to reproduce the shock-to-detonation transition for the mixtures submitted to planar plate impacts. Such a model is not sufficient to compute open-field explosions: the aerial overpressure is well reproduced in the first microseconds, but the agreement deteriorates at large expansion of the fireball and the impulse is underestimated. The problem with the DP EOS alone is that it takes only the detonation into account; the secondary combustion of the DP with air is not considered. To solve this problem, a secondary combustion model has been developed to take the OB effect into account. The detonation model has been validated on planar plate impact experiments. The secondary combustion parameters were deduced from thermochemical computations. The whole model has been used to predict the effects of the oxygen balance on open-air blast effects of spherical charges.
Prediction of Agglomeration, Fouling, and Corrosion Tendency of Fuels in CFB Co-Combustion
NASA Astrophysics Data System (ADS)
Barišć, Vesna; Zabetta, Edgardo Coda; Sarkki, Juha
Prediction of the agglomeration, fouling, and corrosion tendency of fuels is essential to the design of any CFB boiler. Over the years, tools have been successfully developed at Foster Wheeler to help with such predictions for the most common commercial fuels. However, changes in the fuel market and the ever-growing demand for co-combustion capabilities pose a continuous need for development. This paper presents results from recently upgraded models used at Foster Wheeler to predict the agglomeration, fouling, and corrosion tendency of a variety of fuels and mixtures. The models, the subject of this paper, are semi-empirical computer tools that combine the theoretical basics of agglomeration/fouling/corrosion phenomena with empirical correlations. Correlations are derived from Foster Wheeler's experience in fluidized beds, including nearly 10,000 fuel samples and over 1,000 tests in about 150 CFB units. In these models, fuels are evaluated based on their classification and their chemical and physical properties from standard analyses (proximate, ultimate, fuel ash composition, etc.) alongside Foster Wheeler's own characterization methods. Mixtures are then evaluated taking the component fuels into account. This paper presents the predictive capabilities of the agglomeration/fouling/corrosion probability models for selected fuels and mixtures fired at full scale. The selected fuels include coals and different types of biomass. The models are capable of predicting the behavior of most fuels and mixtures, but also offer possibilities for further improvements.
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
The normal mixture distribution model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distribution model. First, we present the application of the model in empirical finance, where we fit it to the real data. Second, we present its application in risk analysis, where we use it to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating VaR and CVaR, as it can capture the stylized facts of non-normality and leptokurtosis in the return distribution.
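For a fitted normal mixture, VaR and CVaR have semi-closed forms: VaR at level alpha is the alpha-quantile of the mixture CDF, and CVaR follows from the truncated-normal mean of each component. A sketch under assumed parameters (the two regimes below are hypothetical, not the FBMKLCI estimates):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def var_cvar(weights, mus, sigmas, alpha=0.05):
    """VaR and CVaR (expected shortfall) of a normal mixture of returns.

    VaR_alpha is the alpha-quantile of the mixture; CVaR_alpha is the mean
    return conditional on falling below that quantile.  Both are in return
    units, so losses appear as negative numbers.
    """
    w, m, s = (np.asarray(a, float) for a in (weights, mus, sigmas))
    mix_cdf = lambda x: float(np.sum(w * norm.cdf((x - m) / s)))
    lo, hi = float((m - 10 * s).min()), float((m + 10 * s).max())
    var = brentq(lambda x: mix_cdf(x) - alpha, lo, hi)
    # Per-component truncated-normal mean: E[X; X <= q] = mu*Phi(z) - sigma*phi(z)
    z = (var - m) / s
    cvar = float(np.sum(w * (m * norm.cdf(z) - s * norm.pdf(z))) / alpha)
    return var, cvar

# Hypothetical two-regime parameters (a calm and a volatile regime)
var5, cvar5 = var_cvar([0.8, 0.2], [0.01, -0.02], [0.03, 0.09], alpha=0.05)
```

With a single standard-normal component this reduces to the familiar textbook values (VaR ≈ -1.645, CVaR ≈ -2.063 at alpha = 0.05), which makes a convenient sanity check.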
Vapor mediated droplet interactions - models and mechanisms (Part 2)
NASA Astrophysics Data System (ADS)
Benusiglio, Adrien; Cira, Nate; Prakash, Manu
2014-11-01
When deposited on clean glass, a binary mixture of propylene glycol and water is energetically inclined to spread, as both pure liquids do. Instead, the mixture forms droplets stabilized by evaporation-induced surface tension gradients, giving them unique properties such as negligible hysteresis. When two of these special droplets are deposited several radii apart, they attract each other: the vapor from one droplet destabilizes the other, resulting in an attractive force which brings the droplets together. We present a flux-based model for droplet stabilization and a model which connects the vapor profile to the net force. These simple models capture the static and dynamic experimental trends, and our fundamental understanding of these droplets and their interactions allowed us to build autonomous fluidic machines.
Proportioning and performance evaluation of self-consolidating concrete
NASA Astrophysics Data System (ADS)
Wang, Xuhao
A well-proportioned self-consolidating concrete (SCC) mixture can be achieved by controlling the aggregate system, paste quality, and paste quantity. The work presented in this dissertation involves an effort to study and improve particle packing of the concrete system and reduce the paste quantity while maintaining concrete quality and performance. This dissertation is composed of four papers resulting from the study: (1) Assessing Particle Packing Based Self-Consolidating Concrete Mix Design; (2) Using Paste-To-Voids Volume Ratio to Evaluate the Performance of Self-Consolidating Concrete Mixtures; (3) Image Analysis Applications on Assessing Static Stability and Flowability of Self-Consolidating Concrete; and (4) Using Ultrasonic Wave Propagation to Monitor Stiffening Process of Self-Consolidating Concrete. Tests were conducted on a large matrix of SCC mixtures that were designed for cast-in-place bridge construction. The mixtures were made with different aggregate types and sizes and different cementitious materials. In Paper 1, a modified particle-packing-based mix design method, originally proposed by Brouwers (2005), was applied to the design of SCC mixes. Using this method, a large matrix of SCC mixes was designed to have a particle distribution modulus (q) ranging from 0.23 to 0.29. Fresh properties (such as flowability, passing ability, segregation resistance, yield stress, viscosity, set time and formwork pressure) and hardened properties (such as compressive strength, surface resistance, shrinkage, and air structure) of these concrete mixes were experimentally evaluated. In Paper 2, a concept based on the paste-to-voids volume ratio (Vpaste/Vvoids) was employed to assess the performance of SCC mixtures. The relationship between excess paste theory and Vpaste/Vvoids was investigated. The workability, flow properties, compressive strength, shrinkage, and surface resistivity of SCC mixtures were determined at various ages.
Statistical analyses, including response surface models and Tukey Honestly Significant Difference (HSD) tests, were conducted to relate the mix design parameters to the concrete performance. The work discussed in Paper 3 applied a digital image processing (DIP) method, associated with a MATLAB algorithm, to evaluate cross-sectional images of SCC. Parameters such as the inter-particle spacing between coarse aggregate particles and the average mortar-to-aggregate ratio, defined as the average mortar thickness index (MTI), were derived from the DIP method and applied to evaluate static stability and to develop statistical models to predict the flowability of SCC mixtures. The last paper investigated technologies available to monitor the changing properties of a fresh mixture, particularly for use with SCC. A number of techniques were used to monitor setting time, stiffening and formwork pressure of SCC mixtures, including longitudinal (P-wave) ultrasonic wave propagation, penetrometer-based setting time, semi-adiabatic calorimetry, and formwork pressure. The first study demonstrated that the concrete mixes designed using the modified Brouwers mix design algorithm and particle packing concept had the potential to reduce SCM content by up to 20% compared with existing SCC mix proportioning methods while still maintaining good performance. The second paper concluded that the slump flow of the SCC mixtures increased with Vpaste/Vvoids at a given mortar viscosity. Compressive strength increases with increasing Vpaste/Vvoids up to a point (~150%), after which the strength becomes independent of Vpaste/Vvoids and even decreases slightly. Vpaste/Vvoids has little effect on the shrinkage of the mixtures, although SCC mixtures tend to have a higher shrinkage than conventional concrete (CC) for a given Vpaste/Vvoids. Vpaste/Vvoids also has little effect on the surface resistivity of SCC mixtures; the paste quality tends to have the dominant effect.
Statistical analysis is an efficient tool for identifying the significance of factors influencing concrete performance. In the third paper, the proposed DIP method and MATLAB algorithm were successfully used to derive inter-particle spacing and MTI and to quantitatively evaluate static stability in hardened SCC samples. These parameters can be applied to overcome the limitations and challenges of existing theoretical frameworks and to construct statistical models, associated with rheological parameters, to predict the flowability of SCC mixtures. The outcome of this study can be of practical value in providing an efficient and useful tool for designing mixture proportions of SCC. The last paper compared several techniques for measuring concrete performance and showed that the P-wave test and calorimetric measurements can be used efficiently to monitor the stiffening and setting of SCC mixtures.
DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.
Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei
2018-01-01
Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifier (UMI). Despite the technology advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. Particularly, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available on www.pitt.edu/∼wec47/singlecell.html. wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online. © The Author 2017. 
Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
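The flavour of model-based clustering of UMI counts can be conveyed with a deliberately simplified stand-in for DIMM-SC: an EM algorithm for a mixture of multinomials, whose per-cell posterior probabilities play the role of DIMM-SC's clustering uncertainty. The demo data, pseudo-count, and four-gene setup are assumptions for illustration only:

```python
import numpy as np

def multinomial_mixture_em(X, k=2, iters=100, seed=0):
    """EM for a mixture of multinomials over UMI count vectors X (cells x genes).

    A simplified stand-in for DIMM-SC's Dirichlet mixture model: each cluster
    is a single gene-expression profile, and each cell's posterior cluster
    probabilities (returned in `r`) quantify clustering uncertainty.
    """
    rng = np.random.default_rng(seed)
    n, g = X.shape
    profiles = rng.dirichlet(np.ones(g), size=k)   # per-cluster gene profiles
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: log-responsibility of each cluster for each cell
        logr = X @ np.log(profiles).T + np.log(weights)   # shape (n, k)
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate shares and profiles (pseudo-count keeps logs finite)
        weights = r.mean(axis=0)
        profiles = r.T @ X + 0.1
        profiles /= profiles.sum(axis=1, keepdims=True)
    return r.argmax(axis=1), r

# Demo: two synthetic cell clusters with distinct gene-expression profiles
_rng = np.random.default_rng(7)
X_demo = np.vstack([_rng.multinomial(200, [0.7, 0.1, 0.1, 0.1], size=100),
                    _rng.multinomial(200, [0.1, 0.1, 0.1, 0.7], size=100)])
labels, posterior = multinomial_mixture_em(X_demo, k=2)
```

Unlike distance-based methods such as K-means, the posterior matrix gives a per-cell measure of assignment uncertainty, which is the property the abstract highlights.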
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrows, Susannah M.; Ogunro, O.; Frossard, Amanda
2014-12-19
The presence of a large fraction of organic matter in primary sea spray aerosol (SSA) can strongly affect its cloud condensation nuclei activity and interactions with marine clouds. Global climate models require new parameterizations of the SSA composition in order to improve the representation of these processes. Existing proposals for such a parameterization use remotely-sensed chlorophyll-a concentrations as a proxy for the biogenic contribution to the aerosol. However, both observations and theoretical considerations suggest that existing relationships with chlorophyll-a, derived from observations at only a few locations, may not be representative for all ocean regions. We introduce a novel framework for parameterizing the fractionation of marine organic matter into SSA based on a competitive Langmuir adsorption equilibrium at bubble surfaces. Marine organic matter is partitioned into classes with differing molecular weights, surface excesses, and Langmuir adsorption parameters. The classes include a lipid-like mixture associated with labile dissolved organic carbon (DOC), a polysaccharide-like mixture associated primarily with semi-labile DOC, a protein-like mixture with concentrations intermediate between lipids and polysaccharides, a processed mixture associated with recalcitrant surface DOC, and a deep abyssal humic-like mixture. Box model calculations have been performed for several cases of organic adsorption to illustrate the underlying concepts. We then apply the framework to output from a global marine biogeochemistry model, by partitioning total dissolved organic carbon into several classes of macromolecule. Each class is represented by model compounds with physical and chemical properties based on existing laboratory data. This allows us to globally map the predicted organic mass fraction of the nascent submicron sea spray aerosol.
Predicted relationships between chlorophyll-a and organic fraction are similar to existing empirical parameterizations, but can vary between biologically productive and non-productive regions, and seasonally within a given region. Major uncertainties include the bubble film thickness at bursting and the variability of organic surfactant activity in the ocean, which is poorly constrained. In addition, marine colloids and cooperative adsorption of polysaccharides may make important contributions to the aerosol, but are not included here. This organic fractionation framework is an initial step towards a closer linking of ocean biogeochemistry and aerosol chemical composition in Earth system models. Future work should focus on improving constraints on model parameters through new laboratory experiments or through empirical fitting to observed relationships in the real ocean and atmosphere, as well as on atmospheric implications of the variable composition of organic matter in sea spray.
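The competitive Langmuir adsorption equilibrium at the heart of the framework has a compact closed form: the surface coverage of class i is theta_i = K_i C_i / (1 + Σ_j K_j C_j). A sketch with hypothetical concentrations and adsorption constants (not the study's fitted values):

```python
import numpy as np

def langmuir_coverages(conc, K):
    """Fractional bubble-surface coverages under competitive Langmuir
    adsorption: theta_i = K_i * C_i / (1 + sum_j K_j * C_j)."""
    conc, K = np.asarray(conc, float), np.asarray(K, float)
    return K * conc / (1.0 + np.sum(K * conc))

# Hypothetical concentrations and Langmuir constants for three of the
# macromolecule classes named above (lipid-like, polysaccharide-like,
# protein-like); the values are illustrative only.
theta = langmuir_coverages(conc=[2.0, 60.0, 10.0], K=[5.0, 0.05, 0.5])
```

The formula makes the competitive character explicit: a strongly adsorbing class (large K) can dominate the surface even at a low bulk concentration, which is why the predicted aerosol composition need not track chlorophyll-a.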
Emotional Intelligence: What the Research Says.
ERIC Educational Resources Information Center
Cobb, Casey D.; Mayer, John D.
2000-01-01
Educational practices involving emotional intelligence should be based on solid research, not sensationalistic claims. There are two emotional-intelligence models based on ability and an ability/social-competence mixture. Emphasizing cooperative behavior could stifle creativity, healthy skepticism, or spontaneity. Teaching emotional reasoning pays…
NASA Astrophysics Data System (ADS)
Fomin, P. A.
2018-03-01
Two-step approximate models of the chemical kinetics of detonation combustion of (i) a single hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane etc.) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures, in which the reaction products contain molecules of carbon. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The models can also be used for calculating the thermodynamic parameters of the mixture in a state of chemical equilibrium.
ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.
2014-01-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156
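The combination the authors propose — a mechanistic ODE layer plus a mixture layer for cell-to-cell variability — can be sketched on a toy problem: two subpopulations follow the same ODE with different kinetic rates, and a Gaussian mixture fitted to noisy single-cell snapshots recovers the subpopulation structure. The rates, noise level, and use of SciPy/scikit-learn are illustrative assumptions, not the paper's NGF/Erk1/2 model:

```python
import numpy as np
from scipy.integrate import odeint
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def response(k, t_end=2.0):
    """Endpoint of the toy ODE dx/dt = k * (1 - x), with x(0) = 0."""
    return odeint(lambda x, t: k * (1.0 - x), 0.0, [0.0, t_end])[-1, 0]

# Two hypothetical subpopulations with different kinetic rates, observed as
# noisy single-cell snapshots at t_end.
k_slow, k_fast = 0.3, 2.0
cells = np.array([response(k_slow)] * 150 + [response(k_fast)] * 150)
data = (cells + rng.normal(0.0, 0.05, cells.size)).reshape(-1, 1)

# The mixture layer captures the cell-to-cell variability the ODE alone cannot.
gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
means = np.sort(gmm.means_.ravel())   # ~ the two ODE-predicted response levels
```

The recovered component means approximate the two ODE-predicted response levels (1 - exp(-k * t_end) for each rate), illustrating how the mechanistic and statistical layers constrain one another.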
CLUSTERING SOUTH AFRICAN HOUSEHOLDS BASED ON THEIR ASSET STATUS USING LATENT VARIABLE MODELS
McParland, Damien; Gormley, Isobel Claire; McCormick, Tyler H.; Clark, Samuel J.; Kabudula, Chodziwadziwa Whiteson; Collinson, Mark A.
2014-01-01
The Agincourt Health and Demographic Surveillance System has since 2001 conducted a biannual household asset survey in order to quantify household socio-economic status (SES) in a rural population living in northeast South Africa. The survey contains binary, ordinal and nominal items. In the absence of income or expenditure data, the SES landscape in the study population is explored and described by clustering the households into homogeneous groups based on their asset status. A model-based approach to clustering the Agincourt households, based on latent variable models, is proposed. In the case of modeling binary or ordinal items, item response theory models are employed. For nominal survey items, a factor analysis model, similar in nature to a multinomial probit model, is used. Both model types have an underlying latent variable structure—this similarity is exploited and the models are combined to produce a hybrid model capable of handling mixed data types. Further, a mixture of the hybrid models is considered to provide clustering capabilities within the context of mixed binary, ordinal and nominal response data. The proposed model is termed a mixture of factor analyzers for mixed data (MFA-MD). The MFA-MD model is applied to the survey data to cluster the Agincourt households into homogeneous groups. The model is estimated within the Bayesian paradigm, using a Markov chain Monte Carlo algorithm. Intuitive groupings result, providing insight to the different socio-economic strata within the Agincourt region. PMID:25485026
NASA Astrophysics Data System (ADS)
Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal
2017-11-01
Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels; the modeling approach developed in this work attempts to overcome specific limitations of both. The approach combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
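The two-step approach reads directly as code: a dislocation-based (Bergström-type) hardening law per phase, then the rule of mixtures weighted by the martensite fraction. All parameter values below (shear modulus, Burgers vector, per-phase constants) are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def phase_stress(eps, sigma0, L, k, alpha=0.33, M=3.0, mu=8.0e4, b=2.5e-10):
    """Dislocation-based (Bergström-type) flow stress of one phase, in MPa.

    Dislocation density evolves as drho/deps = M/(b*L) - k*rho, giving the
    closed form below; stress follows the Taylor relation
    sigma = sigma0 + alpha*M*mu*b*sqrt(rho).
    """
    rho = M * (1.0 - np.exp(-k * eps)) / (b * L * k)   # dislocation density, 1/m^2
    return sigma0 + alpha * M * mu * b * np.sqrt(rho)

eps = np.linspace(0.0, 0.1, 50)
# Step 1: individual phase flow curves (illustrative per-phase parameters)
sigma_f = phase_stress(eps, sigma0=300.0, L=5.0e-6, k=5.0)     # ferrite
sigma_m = phase_stress(eps, sigma0=1100.0, L=3.8e-8, k=41.0)   # martensite
# Step 2: rule of mixtures with martensite volume fraction f_m
f_m = 0.3
sigma_dp = (1.0 - f_m) * sigma_f + f_m * sigma_m
```

The composite curve necessarily lies between the soft ferrite and hard martensite curves, which is the essential prediction the rule of mixtures contributes.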
Li, Bin; Chen, Kan; Tian, Lianfang; Yeboah, Yao; Ou, Shanxing
2013-01-01
The segmentation and detection of various types of nodules in a computer-aided detection (CAD) system present various challenges, especially when (1) the nodule is connected to a vessel and the two have very similar intensities, or (2) the nodule has the ground-glass opacity (GGO) characteristic, with typically weak edges and intensity inhomogeneity, making its boundaries difficult to define. Traditional segmentation methods may cause problems of boundary leakage and "weak" local minima. This paper deals with the above-mentioned problems. An improved detection method, which combines a fuzzy integrated active contour model (FIACM)-based segmentation method, a segmentation refinement method based on a Parametric Mixture Model (PMM) of juxta-vascular nodules, and a knowledge-based C-SVM (cost-sensitive support vector machine) classifier, is proposed for detecting various types of pulmonary nodules in computerized tomography (CT) images. Our approach has several novel aspects: (1) in the proposed FIACM, edge and local region information is incorporated, and the fuzzy energy is used as the motivation power for the evolution of the active contour; (2) a hybrid PMM of juxta-vascular nodules, combining appearance and geometric information, is constructed for segmentation refinement of juxta-vascular nodules. Experimental results for pulmonary nodule detection show the desirable performance of the proposed method.
Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien
2012-01-01
Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared with experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder over the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
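Two of the mixing rules compared in the paper, Lichtenecker's logarithmic rule and the Maxwell Garnett rule for spherical inclusions, are short enough to state directly; both accept complex permittivities. The host/inclusion values below are hypothetical, not the paper's measured data:

```python
import numpy as np

def lichtenecker(eps_m, eps_i, f):
    """Lichtenecker's logarithmic mixing rule; complex permittivities allowed."""
    return np.exp((1.0 - f) * np.log(eps_m) + f * np.log(eps_i))

def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett rule for spherical inclusions of volume fraction f."""
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

# Hypothetical host and inclusion permittivities (convention eps = eps' - j*eps'')
eps_host, eps_inc, f = 2.6 - 0.02j, 50.0 - 200.0j, 0.15
eff_lich = lichtenecker(eps_host, eps_inc, f)
eff_mg = maxwell_garnett(eps_host, eps_inc, f)
```

Both rules reduce to the host permittivity at f = 0 and the inclusion permittivity at f = 1, a useful consistency check; the interesting differences (and the ~10% errors cited above) appear at intermediate fractions.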
Bayesian Hierarchical Grouping: perceptual grouping as mixture estimation
Froyen, Vicky; Feldman, Jacob; Singh, Manish
2015-01-01
We propose a novel framework for perceptual grouping based on the idea of mixture models, called Bayesian Hierarchical Grouping (BHG). In BHG we assume that the configuration of image elements is generated by a mixture of distinct objects, each of which generates image elements according to some generative assumptions. Grouping, in this framework, means estimating the number and the parameters of the mixture components that generated the image, including estimating which image elements are “owned” by which objects. We present a tractable implementation of the framework, based on the hierarchical clustering approach of Heller and Ghahramani (2005). We illustrate it with examples drawn from a number of classical perceptual grouping problems, including dot clustering, contour integration, and part decomposition. Our approach yields an intuitive hierarchical representation of image elements, giving an explicit decomposition of the image into mixture components, along with estimates of the probability of various candidate decompositions. We show that BHG accounts well for a diverse range of empirical data drawn from the literature. Because BHG provides a principled quantification of the plausibility of grouping interpretations over a wide range of grouping problems, we argue that it provides an appealing unifying account of the elusive Gestalt notion of Prägnanz. PMID:26322548
Modeling and analysis of personal exposures to VOC mixtures using copulas
Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart
2014-01-01
Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs) with the aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction, which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of fitted models is evaluated using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often a single compound dominated the mixture; however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration.
Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10^-3 for about 10% of RIOPA participants. Factors affecting the likelihood of high concentration mixtures included city, participant ethnicity, and house air exchange rates. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy, and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that poorly fit lognormal distributions and that represent the greatest risks. PMID:24333991
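As an illustrative sketch of the copula idea (not the RIOPA analysis code), a Gaussian copula can be fitted by transforming each margin to normal scores and correlating them, and dependent uniforms drawn from the copula can be pushed through any marginal, e.g. lognormals standing in for two VOC concentrations. All variable names and parameter values below are hypothetical:

```python
import numpy as np
from scipy import stats

def fit_gaussian_copula(x, y):
    """Estimate the correlation parameter of a bivariate Gaussian copula:
    rank-transform each margin to (0,1), map to normal scores, correlate."""
    n = len(x)
    u = stats.rankdata(x) / (n + 1.0)   # pseudo-observations in (0, 1)
    v = stats.rankdata(y) / (n + 1.0)
    z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
    return np.corrcoef(z1, z2)[0, 1]

def sample_gaussian_copula(rho, n, rng):
    """Draw n uniform pairs whose dependence follows a Gaussian copula
    with correlation rho; feed them into any marginal ppf afterwards."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return stats.norm.cdf(z)

rng = np.random.default_rng(42)
u = sample_gaussian_copula(0.7, 5000, rng)
# Hypothetical lognormal marginals standing in for two VOC concentrations:
voc_a = stats.lognorm.ppf(u[:, 0], s=1.0)
voc_b = stats.lognorm.ppf(u[:, 1], s=1.2)
rho_hat = fit_gaussian_copula(voc_a, voc_b)   # close to the true 0.7
```

Because the rank transform is invariant to monotone marginals, the copula parameter is recovered regardless of the lognormal shapes chosen. The t, Gumbel, Clayton, and Frank copulas named in the abstract would be fitted analogously with their own parameterizations.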
Tian, Dayong; Lin, Zhifen; Yin, Daqiang; Zhang, Yalei; Kong, Deyang
2012-02-01
Environmental contaminants are usually encountered as mixtures, and many of these mixtures yield synergistic or antagonistic effects attributable to an intracellular chemical reaction, posing a potential threat to ecological systems. However, how the atomic charges of individual chemicals determine their intracellular chemical reactions, and hence the joint effects of mixtures containing reactive toxicants, is not well understood. To address this issue, the joint effects between cyanogenic toxicants and aldehydes on Photobacterium phosphoreum were observed in the present study. Their toxicological joint effects differed from one another. This difference is inherently related to two atomic charges of the individual chemicals: the oxygen charge of -CHO in aldehyde toxicants, O(aldehyde toxicant), and the carbon-atom charge of the carbon chain in the cyanogenic toxicant, C(cyanogenic toxicant). Based on these two atomic charges, the following QSAR (quantitative structure-activity relationship) model was proposed: when O(aldehyde toxicant) - C(cyanogenic toxicant) > -0.125, the joint effect of equitoxic binary mixtures at median inhibition (TU, the sum of toxic units) can be calculated as TU = 1.00 ± 0.20; when O(aldehyde toxicant) - C(cyanogenic toxicant) ≤ -0.125, the joint effect can be calculated as TU = -27.6 × O(aldehyde toxicant) - 5.22 × C(cyanogenic toxicant) - 6.97 (n = 40, r = 0.887, SE = 0.195, F = 140, p < 0.001, q²(LOO) = 0.748; SE is the standard error of the regression, F is the F-test statistic). The result provides insight into the relationship between the atomic charges and the joint effects for mixtures containing cyanogenic toxicants and aldehydes. This demonstrates that the essence of the joint effects resulting from intracellular chemical reactions depends on the atomic charges of the individual chemicals.
The present study provides a possible approach for the development of a QSAR model for mixtures containing reactive toxicants based on the atomic charges. Copyright © 2011 SETAC.
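The piecewise rule reported in the abstract can be transcribed directly as a small function; the charge values in the example are hypothetical, for illustration only:

```python
def joint_effect_TU(o_aldehyde, c_cyanogenic):
    """Piecewise QSAR rule reported for equitoxic binary mixtures of
    aldehydes and cyanogenic toxicants: TU is the sum of toxic units
    at median inhibition, driven by two atomic charges."""
    diff = o_aldehyde - c_cyanogenic
    if diff > -0.125:
        return 1.00   # concentration-additive regime (TU = 1.00 +/- 0.20)
    return -27.6 * o_aldehyde - 5.22 * c_cyanogenic - 6.97

# Hypothetical atomic charges for illustration only:
print(joint_effect_TU(-0.10, -0.10))   # diff = 0 > -0.125 -> additive, TU = 1.0
print(joint_effect_TU(-0.30, -0.10))   # diff = -0.2 <= -0.125 -> regression branch
```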
A numerical program for steady-state flow of magma-gas mixtures through vertical eruptive conduits
Mastin, Larry G.; Ghiorso, Mark S.
2000-01-01
This report presents a model that calculates flow properties (pressure, vesicularity, and some 35 other parameters) as a function of vertical position within a volcanic conduit during a steady-state eruption. The model idealizes the magma-gas mixture as a single homogeneous fluid and calculates gas exsolution under the assumption of equilibrium conditions. These are the same assumptions on which classic conduit models (e.g. Wilson and Head, 1981) have been based. They are most appropriate when applied to eruptions of rapidly ascending magma (basaltic lava-fountain eruptions, and Plinian or sub-Plinian eruptions of intermediate or silicic magmas) that contains abundant nucleation sites (microlites, for example) for bubble growth.
Fish Consumption Advisories: Toward a Unified, Scientifically Credible Approach
A model is proposed for fish consumption advisories based on consensus-derived risk assessment values for common contaminants in fish and the latest risk assessment methods. The model accounts in part for the expected toxicity of mixtures of chemicals and the underlying uncertainties...
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
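A redshift PDF expressed as a Gaussian mixture makes the PIT and CRPS criteria straightforward to evaluate. The sketch below (plain NumPy/SciPy with illustrative mixture parameters, not the DCMDN code) computes the mixture CDF, the PIT at the true redshift, and the CRPS by numerical integration:

```python
import numpy as np
from scipy import stats

def gmm_cdf(y, weights, means, sigmas):
    """CDF of a 1-D Gaussian mixture at y (scalar or array)."""
    y = np.atleast_1d(y)[:, None]
    return np.sum(weights * stats.norm.cdf(y, loc=means, scale=sigmas), axis=1)

def pit(y_true, weights, means, sigmas):
    """Probability integral transform: CDF evaluated at the true value.
    For calibrated PDFs, PIT values over a test set are ~ Uniform(0, 1)."""
    return gmm_cdf(y_true, weights, means, sigmas)[0]

def crps(y_true, weights, means, sigmas, grid=None):
    """Continuous ranked probability score, integrated numerically:
    CRPS = integral of (F(x) - 1{x >= y_true})^2 dx."""
    if grid is None:
        lo = means.min() - 8 * sigmas.max()
        hi = means.max() + 8 * sigmas.max()
        grid = np.linspace(lo, hi, 4001)
    F = gmm_cdf(grid, weights, means, sigmas)
    H = (grid >= y_true).astype(float)
    sq = (F - H) ** 2
    return float(np.sum(0.5 * (sq[1:] + sq[:-1]) * np.diff(grid)))  # trapezoid rule

# A two-component predicted redshift PDF (illustrative numbers only):
w = np.array([0.7, 0.3]); mu = np.array([0.5, 1.2]); sd = np.array([0.05, 0.10])
p = pit(0.5, w, mu, sd)
print(p)   # ~0.35: 0.5 sits at the mean of the dominant component
```

CRPS is lowest when the predicted mass concentrates near the true value, which is why it complements point-estimate scores for probabilistic photometric redshifts.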
Estimation and Model Selection for Finite Mixtures of Latent Interaction Models
ERIC Educational Resources Information Center
Hsu, Jui-Chen
2011-01-01
Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to handle if unobserved population heterogeneity exists in the endogenous latent variables of the nonlinear structural equation models. The current study estimates a mixture of latent interaction…
NASA Astrophysics Data System (ADS)
Magyar, Rudolph
2013-06-01
We report a computational and validation study of equation of state (EOS) properties of liquid / dense plasma mixtures of xenon and ethane to explore and to illustrate the physics of the molecular-scale mixing of light elements with heavy elements. Accurate EOS models are crucial to achieve high-fidelity hydrodynamics simulations of many high-energy-density phenomena such as inertial confinement fusion and strong shock waves. While the EOS is often tabulated for separate species, the equation of state for arbitrary mixtures is generally not available, requiring properties of the mixture to be approximated by combining physical properties of the pure systems. The main goal of this study is to assess how accurate this approximation is under shock conditions. Density functional theory molecular dynamics (DFT-MD) at elevated temperature and pressure is used to assess the thermodynamics of the xenon-ethane mixture. The simulations are unbiased as to elemental species and therefore provide comparable accuracy when describing total energies, pressures, and other physical properties of mixtures as they do for pure systems. In addition, we have performed shock compression experiments using the Sandia Z-accelerator on pure xenon, ethane, and various mixture ratios thereof. The Hugoniot results are compared to the DFT-MD results and the predictions of different rules for combining EOS tables. The DFT-based simulation results compare well with the experimental points, and it is found that a mixing rule based on pressure equilibration performs reliably well for the mixtures considered. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun
2017-01-01
With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex and the gray information and texture features of docked ships and their connected dock regions are indistinguishable, most popular detection methods are limited in their computational efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve robustness to the diversity of ships, a deformable part model (DPM) is employed to train a key-part sub-model and a whole-ship sub-model. Furthermore, to improve identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. PMID:28640236
Scale Mixture Models with Applications to Bayesian Inference
NASA Astrophysics Data System (ADS)
Qin, Zhaohui S.; Damien, Paul; Walker, Stephen
2003-11-01
Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
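A classic construction behind this approach: a standard normal arises as a scale mixture of uniforms, with X | V ~ Uniform(-sqrt(V), sqrt(V)) and V ~ Gamma(shape 3/2, scale 2). A short sampling check of this representation (a sketch, not the paper's sampler):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# If V ~ Gamma(shape=3/2, scale=2) and X | V ~ Uniform(-sqrt(V), +sqrt(V)),
# then marginally X ~ N(0, 1): the scale-mixture-of-uniforms representation
# that makes Gibbs sampling tractable for non-normal models.
v = rng.gamma(shape=1.5, scale=2.0, size=n)
x = rng.uniform(-np.sqrt(v), np.sqrt(v))

print(x.mean(), x.var())   # both close to the N(0, 1) values 0 and 1
```

Conditioning on the latent scale V turns awkward likelihoods into uniform ones, which is what the Bayesian machinery in the abstract exploits.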
Formulation of ionic liquid electrolyte to expand the voltage window of supercapacitors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Aken, Katherine L.; Beidaghi, Majid; Gogotsi, Yury
We report an effective method to expand the operating potential window (OPW) of electrochemical capacitors based on formulating the ionic liquid (IL) electrolytes. Using model electrochemical cells based on two identical onion-like carbon (OLC) electrodes and two different IL electrolytes and their mixtures, it is shown that the asymmetric behavior of the electrolyte's cation and anion toward the two electrodes limits the OPW of the cell and therefore its energy density. A general solution to this problem is proposed: formulating IL electrolyte mixtures to balance the capacitance of the electrodes in a symmetric supercapacitor.
Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J
2010-09-17
In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, i.e., excess molar volume (V^E). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V^E) using consensus neural networks and five mixture descriptors. We find that hydrogen-bond and thermodynamic descriptors are the most important in determining excess molar volume (V^E), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
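As a hedged illustration of the general idea (the paper's specific descriptor set is not reproduced here), single-component descriptor vectors can be combined into mixture descriptors with simple mixing rules, e.g. a mole-fraction-weighted mean plus difference and product terms as crude interaction proxies:

```python
import numpy as np

def mixture_descriptors(d1, d2, x1):
    """Combine single-component descriptor vectors d1, d2 of a binary
    mixture (mole fraction x1 of component 1) with generic mixing rules.
    These rules are illustrations, not the paper's descriptor set."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    x2 = 1.0 - x1
    return {
        "linear":   x1 * d1 + x2 * d2,   # mole-fraction-weighted centroid
        "abs_diff": np.abs(d1 - d2),     # crude interaction proxy
        "product":  d1 * d2,             # cross term
    }

# Hypothetical descriptors (e.g. H-bond acidity, polarizability) for two solvents:
desc = mixture_descriptors([0.82, 10.1], [0.00, 12.4], x1=0.25)
```

Descriptor tables built this way for many binary mixtures would then feed the neural-network regression described in the abstract.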
NASA Astrophysics Data System (ADS)
Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri
2017-12-01
Bayesian mixture modeling requires identifying the most appropriate number of mixture components so that the resulting mixture model fits the data, following a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with Markov Chain Monte Carlo (MCMC), and has been used by several researchers to solve the problem of identifying the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm must be developed to suit the case under study. The purpose of this study is to evaluate the performance of the developed RJMCMC algorithm in identifying the unknown number of mixture components in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model for the Indonesian microarray data, where the number of mixture components is not known in advance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timchalk, Chuck; Poet, Torka S.
2008-05-01
Physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) models have been developed and validated for the organophosphorus (OP) insecticides chlorpyrifos (CPF) and diazinon (DZN). Based on their similar pharmacokinetic and mode-of-action properties, it is anticipated that these OPs could interact at a number of important metabolic steps, including CYP450-mediated activation/detoxification and blood/tissue cholinesterase (ChE) binding/inhibition. We developed a binary PBPK/PD model for CPF, DZN and their metabolites based on previously published models for the individual insecticides. The metabolic interactions (CYP450) between CPF and DZN were evaluated in vitro, suggesting that CPF is more substantially metabolized to its oxon metabolite than is DZN. These data are consistent with their observed in vivo relative potency (CPF > DZN). Each insecticide inhibited the other's in vitro metabolism in a concentration-dependent manner. The PBPK model code used to describe the metabolism of CPF and DZN was modified to reflect the type of inhibition kinetics (i.e., competitive vs. non-competitive). The binary model was then evaluated against previously published rodent dosimetry and ChE inhibition data for the mixture. The PBPK/PD model simulations of acute oral exposure to single (15 mg/kg) vs. binary-mixture (15+15 mg/kg) doses of CPF and DZN resulted in no differences in the predicted pharmacokinetics of either the parent OPs or their respective metabolites, whereas a binary oral dose of CPF+DZN at 60+60 mg/kg did result in observable changes in the DZN pharmacokinetics; Cmax was more reasonably fit by modifying the absorption parameters. It is anticipated that at low, environmentally relevant binary doses, most likely to be encountered in occupational or environmental exposures, the pharmacokinetics will be linear and ChE inhibition dose-additive.
NASA Astrophysics Data System (ADS)
Sánchez, Clara I.; Hornero, Roberto; Mayo, Agustín; García, María
2009-02-01
Diabetic retinopathy is one of the leading causes of blindness and vision defects in developed countries. Early detection and diagnosis are crucial to avoid visual complications. Microaneurysms are the first ocular signs of the presence of this disease, and their detection is of paramount importance for the development of a computer-aided diagnosis technique that permits a prompt diagnosis. However, the detection of microaneurysms in retinal images is a difficult task due to the wide variability that these images usually present in screening programs. We propose a statistical approach based on mixture model-based clustering and logistic regression which is robust to changes in the appearance of retinal fundus images. The method is evaluated on the public database proposed by the Retinopathy Online Challenge in order to obtain an objective performance measure and to allow a comparative study with other proposed algorithms.
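The clustering step can be illustrated with a minimal two-component 1-D Gaussian mixture fitted by EM, standing in for mixture-model-based clustering of candidate-region features. This is a generic sketch with synthetic intensities, not the authors' method; the subsequent logistic-regression stage would operate on the resulting component memberships:

```python
import numpy as np
from scipy import stats

def em_gmm_1d(x, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture by EM; returns
    (weights, means, stds, responsibilities). Deterministic init at the
    data extremes keeps the sketch reproducible."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = w * stats.norm.pdf(x[:, None], mu, sd)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd, r

# Synthetic "background" vs "candidate lesion" intensities (illustrative only)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.05, 800), rng.normal(0.6, 0.05, 200)])
w, mu, sd, r = em_gmm_1d(x)
```

The soft responsibilities `r`, rather than hard labels, are what make downstream probabilistic classification natural.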
Synergy and other ineffective mixture risk definitions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertzberg, R.; MacDonell, M.; Environmental Assessment
2002-04-08
A substantial effort has been spent over the past few decades to label toxicologic interaction outcomes as synergistic, antagonistic, or additive. Although useful in influencing the emotions of the public and the press, these labels have contributed fairly little to our understanding of joint toxic action. Part of the difficulty is that their underlying toxicological concepts are only defined for two-chemical mixtures, while most environmental and occupational exposures are to mixtures of many more chemicals. Furthermore, the mathematical characterizations of synergism and antagonism are inextricably linked to the prevailing definition of 'no interaction,' instead of some intrinsic toxicological property. For example, the US EPA has selected dose addition as the no-interaction definition for mixture risk assessment, so that synergism would represent toxic effects that exceed those predicted from dose addition. For now, labels such as synergism are useful to regulatory agencies, both as qualitative indications of public health risk and as numerical decision tools for mixture risk characterization. Efforts to quantify interaction designations for use in risk assessment formulas, however, are highly simplified and carry large uncertainties. Several research directions, such as pharmacokinetic measurements and models, and toxicogenomics, should promote significant improvements by providing multi-component data that will allow biologically based mathematical models of joint toxicity to replace these pairwise interaction labels in mixture risk assessment procedures.
QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.
Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng
2018-05-01
Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half-effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques yielded a coefficient of determination of 0.9366 and a root mean square error of 0.1345; the QSAR model predicted the 45 mixture toxicities, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.
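The two traditional reference models mentioned, concentration addition and independent action, have simple closed forms. A minimal sketch (the EC50 and effect values are hypothetical):

```python
import numpy as np

def ca_ec50(fractions, ec50s):
    """Concentration addition: EC50 of a mixture whose components occur
    at fixed concentration fractions p_i: 1/EC50_mix = sum_i p_i / EC50_i."""
    p = np.asarray(fractions, float)
    e = np.asarray(ec50s, float)
    return 1.0 / np.sum(p / e)

def ia_effect(effects):
    """Independent action: combined effect from individual effects
    E_i in [0, 1]: E_mix = 1 - prod_i (1 - E_i)."""
    effects = np.asarray(effects, float)
    return 1.0 - np.prod(1.0 - effects)

# Illustrative equitoxic binary mixture (hypothetical EC50s in mg/L):
print(ca_ec50([0.5, 0.5], [2.0, 8.0]))   # 3.2 mg/L
print(ia_effect([0.3, 0.4]))             # 0.58
```

Observed mixture toxicities falling below or above these baselines are what the abstract labels synergistic or antagonistic, and the QSAR model aims to predict such departures directly.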
An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes
ERIC Educational Resources Information Center
Kaplan, David
2008-01-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…
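The simplest of the four methods, the manifest Markov model, reduces to estimating a stage-transition matrix by counting observed transitions and normalizing rows. A minimal sketch with hypothetical stage sequences:

```python
import numpy as np

def estimate_transition_matrix(sequences, n_states):
    """ML estimate of a manifest Markov chain's transition matrix from
    observed stage sequences: count transitions, then normalize each row."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Hypothetical developmental stages (0 = pre-operational, 1 = transitional, 2 = operational)
seqs = [[0, 0, 1, 1, 2], [0, 1, 1, 2, 2], [0, 0, 0, 1, 2]]
P = estimate_transition_matrix(seqs, 3)
```

The latent Markov and mixture latent Markov extensions replace the observed stages with latent classes (and, in the mixture case, allow several latent chains), but the same transition-matrix structure sits at their core.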
Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution
NASA Astrophysics Data System (ADS)
Baldacchino, Tara; Worden, Keith; Rowson, Jennifer
2017-02-01
A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.
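The robustness argument can be demonstrated with a toy location fit: with a few gross outliers, the maximum-likelihood Student-t location (degrees of freedom fixed at an assumed value of 4) stays near the bulk of the data, while the sample mean is dragged away. This is a generic SciPy sketch, not the paper's variational mixture-of-experts scheme:

```python
import numpy as np
from scipy import stats

# 95 clean observations plus 5 gross outliers
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 50.0)])

mean_est = data.mean()  # Gaussian ML location: dragged toward the outliers

# Student-t ML location with df fixed at 4 (an assumed value);
# median/unit-scale starting values help the optimizer converge.
df, loc_t, scale_t = stats.t.fit(data, f0=4, loc=np.median(data), scale=1.0)
```

The heavy t tails give the five outliers bounded influence, so `loc_t` stays near zero while `mean_est` sits well above it; the same mechanism protects both the gates and experts of the proposed model.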
Lee, Eun Gyung; Slaven, James; Bowen, Russell B.; Harper, Martin
2011-01-01
The Control of Substances Hazardous to Health (COSHH) Essentials model was evaluated using full-shift exposure measurements of five chemical components in a mixture [acetone, ethylbenzene, methyl ethyl ketone, toluene, and xylenes] at a medium-sized plant producing paint materials. Two tasks, batch-making and bucket-washing, were examined. Varying levels of control were already established in both tasks and the average exposures of individual chemicals were considerably lower than the regulatory and advisory 8-h standards. The average exposure fractions using the additive mixture formula were also less than unity (batch-making: 0.25, bucket-washing: 0.56) indicating the mixture of chemicals did not exceed the combined occupational exposure limit (OEL). The paper version of the COSHH Essentials model was used to calculate a predicted exposure range (PER) for each chemical according to different levels of control. The estimated PERs of the tested chemicals for both tasks did not show consistent agreement with exposure measurements when the comparison was made for each control method and this is believed to be because of the considerably different volatilities of the chemicals. Given the combination of health hazard and exposure potential components, the COSHH Essentials model recommended a control approach ‘special advice’ for both tasks, based on the potential reproductive hazard ascribed to toluene. This would not have been the same conclusion if some other chemical had been substituted (for example styrene, which has the same threshold limit value as toluene). Nevertheless, it was special advice, which had led to the combination of hygienic procedures in place at this plant. The probability of the combined exposure fractions exceeding unity was 0.0002 for the batch-making task indicating that the employees performing this task were most likely well protected below the OELs. 
Although the employees involved in the bucket-washing task had greater potential to exceed the threshold limit value of the mixture (P(>1) = 0.2375), the expected personal exposure after adjusting for the assigned protection factor for the respirators in use would be considerably lower (P(>1) = 0.0161). Thus, our findings suggested that the COSHH Essentials model worked reasonably well for the volatile organic chemicals at the plant. However, it was difficult to override the reproductive hazard even though it was meant to be possible in principle. Further, it became apparent that an input of existing controls, which is not possible in the web-based model, may have allowed the model to be more widely applicable. The experience of using the web-based COSHH Essentials model generated some suggestions to provide a more user-friendly tool to model users who do not have expertise in occupational hygiene. PMID:21047985
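The additive mixture formula used above is simply the sum of exposure fractions C_i/OEL_i across components. A minimal sketch with hypothetical concentrations and limits (not the plant's measured values):

```python
def combined_exposure_fraction(concentrations, oels):
    """Additive mixture formula: sum of C_i / OEL_i over components.
    A value below 1 means the combined exposure is within the combined
    occupational exposure limit, assuming additive toxicity."""
    return sum(c / oel for c, oel in zip(concentrations, oels))

# Hypothetical 8-h TWA exposures (ppm) and limits (ppm) for a solvent mixture:
conc = {"acetone": 120, "toluene": 4, "xylenes": 10}
oel  = {"acetone": 500, "toluene": 20, "xylenes": 100}
frac = combined_exposure_fraction(conc.values(), oel.values())
print(frac)   # 0.24 + 0.20 + 0.10, i.e. about 0.54, below unity
```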
Lee, Eun Gyung; Slaven, James; Bowen, Russell B; Harper, Martin
2011-01-01
The Control of Substances Hazardous to Health (COSHH) Essentials model was evaluated using full-shift exposure measurements of five chemical components in a mixture [acetone, ethylbenzene, methyl ethyl ketone, toluene, and xylenes] at a medium-sized plant producing paint materials. Two tasks, batch-making and bucket-washing, were examined. Varying levels of control were already established in both tasks and the average exposures of individual chemicals were considerably lower than the regulatory and advisory 8-h standards. The average exposure fractions using the additive mixture formula were also less than unity (batch-making: 0.25, bucket-washing: 0.56) indicating the mixture of chemicals did not exceed the combined occupational exposure limit (OEL). The paper version of the COSHH Essentials model was used to calculate a predicted exposure range (PER) for each chemical according to different levels of control. The estimated PERs of the tested chemicals for both tasks did not show consistent agreement with exposure measurements when the comparison was made for each control method and this is believed to be because of the considerably different volatilities of the chemicals. Given the combination of health hazard and exposure potential components, the COSHH Essentials model recommended a control approach 'special advice' for both tasks, based on the potential reproductive hazard ascribed to toluene. This would not have been the same conclusion if some other chemical had been substituted (for example styrene, which has the same threshold limit value as toluene). Nevertheless, it was special advice, which had led to the combination of hygienic procedures in place at this plant. The probability of the combined exposure fractions exceeding unity was 0.0002 for the batch-making task indicating that the employees performing this task were most likely well protected below the OELs. 
Although the employees involved in the bucket-washing task had greater potential to exceed the threshold limit value of the mixture (P > 1 = 0.2375), the expected personal exposure after adjusting for the assigned protection factor of the respirators in use would be considerably lower (P > 1 = 0.0161). Thus, our findings suggested that the COSHH Essentials model worked reasonably well for the volatile organic chemicals at the plant. However, it was difficult to override the reproductive hazard even though this was meant to be possible in principle. Further, it became apparent that allowing input of existing controls, which the web-based model does not support, might have made the model more widely applicable. The experience of using the web-based COSHH Essentials model generated some suggestions for providing a more user-friendly tool to model users who lack expertise in occupational hygiene. PMID:21047985
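The additive mixture formula used in this study can be sketched in a few lines: each component's exposure is divided by its occupational exposure limit and the fractions are summed. This is a minimal illustration, not the authors' implementation; the exposure and OEL values below are hypothetical.

```python
def combined_exposure_fraction(exposures, oels):
    """Additive mixture formula: sum of C_i / OEL_i over all components.

    A result below unity means the mixture does not exceed the
    combined occupational exposure limit (OEL)."""
    if len(exposures) != len(oels):
        raise ValueError("each exposure needs a matching OEL")
    return sum(c / limit for c, limit in zip(exposures, oels))

# Hypothetical 8-h TWA exposures and OELs (ppm), for illustration only.
exposures = [50.0, 10.0, 40.0]
oels = [500.0, 50.0, 200.0]
print(round(combined_exposure_fraction(exposures, oels), 2))  # 0.5
```

A fraction of 0.5 here plays the same role as the 0.25 (batch-making) and 0.56 (bucket-washing) figures reported in the abstract.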
Barillot, Romain; Escobar-Gutiérrez, Abraham J.; Fournier, Christian; Huynh, Pierre; Combes, Didier
2014-01-01
Background and Aims Predicting light partitioning in crop mixtures is a critical step in improving the productivity of such complex systems, and light interception has been shown to be closely linked to plant architecture. The aim of the present work was to analyse the relationships between plant architecture and light partitioning within wheat–pea (Triticum aestivum–Pisum sativum) mixtures. An existing model for wheat was utilized and a new model for pea morphogenesis was developed. Both models were then used to assess the effects of architectural variations in light partitioning. Methods First, a deterministic model (L-Pea) was developed in order to obtain dynamic reconstructions of pea architecture. The L-Pea model is based on L-systems formalism and consists of modules for ‘vegetative development’ and ‘organ extension’. A tripartite simulator was then built up from pea and wheat models interfaced with a radiative transfer model. Architectural parameters from both plant models, selected on the basis of their contribution to leaf area index (LAI), height and leaf geometry, were then modified in order to generate contrasting architectures of wheat and pea. Key results By scaling down the analysis to the organ level, it could be shown that the number of branches/tillers and length of internodes significantly determined the partitioning of light within mixtures. Temporal relationships between light partitioning and the LAI and height of the different species showed that light capture was mainly related to the architectural traits involved in plant LAI during the early stages of development, and in plant height during the onset of interspecific competition. Conclusions In silico experiments enabled the study of the intrinsic effects of architectural parameters on the partitioning of light in crop mixtures of wheat and pea. 
The findings show that plant architecture is an important criterion for the identification/breeding of plant ideotypes, particularly with respect to light partitioning. PMID:24907314
Diffuse interface method for a compressible binary fluid.
Liu, Jiewei; Amberg, Gustav; Do-Quang, Minh
2016-01-01
Multicomponent, multiphase, compressible flows are important both in industrial practice and in scientific research, yet their modeling is still at an early stage. In this paper, we propose a diffuse interface model for compressible binary mixtures, based on the balance of mass, momentum, and energy and on the second law of thermodynamics. We show both analytically and numerically that this model describes the phase equilibrium of a real binary mixture (CO2 + ethanol is considered in this paper) very well once the parameter measuring the attraction force between molecules of the two components is adjusted. We also show that the calculated surface tension of the CO2 + ethanol mixture at different concentrations matches measurements in the literature when the mixing capillary coefficient is taken to be the geometric mean of the capillary coefficients of the two components. Three cases of two droplets in a shear flow, with the same or different concentrations, are simulated, showing that the higher the concentration of CO2, the smaller the surface tension and the more easily the drops deform.
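The geometric-mean mixing rule for the capillary coefficient mentioned above is a one-liner; the sketch below uses hypothetical coefficient values, not the fitted CO2/ethanol ones.

```python
import math

def mixing_capillary_coefficient(kappa_a, kappa_b):
    """Geometric-mean mixing rule for the capillary coefficient of a
    binary mixture, as the abstract uses for CO2 + ethanol."""
    return math.sqrt(kappa_a * kappa_b)

# Hypothetical component coefficients, for illustration only.
kappa_mix = mixing_capillary_coefficient(4.0e-17, 1.0e-17)
```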
Thermal conductivity of heterogeneous mixtures and lunar soils
NASA Technical Reports Server (NTRS)
Vachon, R. I.; Prakouras, A. G.; Crane, R.; Khader, M. S.
1973-01-01
The theoretical evaluation of the effective thermal conductivity of granular materials is discussed with emphasis upon the heat transport properties of lunar soil. The following types of models are compared: probabilistic, parallel isotherm, stochastic, lunar, and a model based on nonlinear heat flow system synthesis.
Binbing Yu; Tiwari, Ram C; Feuer, Eric J
2011-06-01
Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of the patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches, namely the cause-specific hazards approach and the mixture model approach, have been used to model competing-risk survival data. In this article, we first show the connection and differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race and American Joint Committee on Cancer stage.
Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture
Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang
2016-01-01
The remaining useful life (RUL) prediction of lithium-ion batteries is closely related to their capacity degeneration trajectories. Due to self-charging and capacity regeneration, the trajectories are multimodal. Traditional prediction models such as support vector machines (SVM) or Gaussian process regression (GPR) cannot accurately characterize this multimodality. This paper proposes a novel RUL prediction method based on the Gaussian process mixture (GPM), which handles multimodality by fitting different segments of a trajectory with separate GPR models, so that the small differences among segments can be captured. The method is shown to be effective by the predictive results of experiments on two commercial rechargeable Type 1850 lithium-ion batteries provided by NASA. A performance comparison shows that the GPM is more accurate than the SVM and the GPR. In addition, the GPM yields a predictive confidence interval, which makes its predictions more reliable than those of the traditional models. PMID:27632176
Rasch Mixture Models for DIF Detection
Strobl, Carolin; Zeileis, Achim
2014-01-01
Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a new score distribution for the Rasch mixture model is introduced here. It ensures the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study, and its application is illustrated in a study of verbal aggression. PMID:29795819
Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data
ERIC Educational Resources Information Center
Kim, Su-Young; Kim, Jee-Seon
2012-01-01
This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20,000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
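The VHS/VSS models being calibrated here are defined by a power-law dependence of viscosity on temperature, mu(T) = mu_ref * (T/T_ref)**omega, where omega is the fitted temperature exponent. A minimal sketch with hypothetical (uncalibrated) parameter values, not the paper's recommended fits:

```python
def vhs_viscosity(T, mu_ref, T_ref, omega):
    """VHS/VSS power-law viscosity: mu(T) = mu_ref * (T / T_ref)**omega,
    where omega is the temperature exponent calibrated per species pair."""
    return mu_ref * (T / T_ref) ** omega

# Hypothetical parameters for illustration only (Pa*s, K).
mu_at_10000K = vhs_viscosity(10000.0, 1.7e-5, 273.0, 0.74)
```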
Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures
Majidi, Behzad; Taghavi, Seyed Mohammad; Fafard, Mario; Ziegler, Donald P.; Alamdari, Houshang
2016-01-01
Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on the Burgers model is developed using the discrete element method (DEM) in YADE, an open-source DEM package. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained are then used to estimate the Burgers model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model. Very good agreement was observed between the experimental data and simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model. Coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates in the size range of 0.297-0.595 mm (-30 + 50 mesh) to pitch. Adding up to 30% coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both the storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch. PMID:28773459
Barillot, Romain; Louarn, Gaëtan; Escobar-Gutiérrez, Abraham J; Huynh, Pierre; Combes, Didier
2011-10-01
Most studies dealing with light partitioning in intercropping systems have used statistical models based on the turbid medium approach, thus assuming homogeneous canopies. However, these models could not be directly validated although spatial heterogeneities could arise in such canopies. The aim of the present study was to assess the ability of the turbid medium approach to accurately estimate light partitioning within grass-legume mixed canopies. Three contrasted mixtures of wheat-pea, tall fescue-alfalfa and tall fescue-clover were sown according to various patterns and densities. Three-dimensional plant mock-ups were derived from magnetic digitizations carried out at different stages of development. The benchmarks for light interception efficiency (LIE) estimates were provided by the combination of a light projective model and plant mock-ups, which also provided the inputs of a turbid medium model (SIRASCA), i.e. leaf area index and inclination. SIRASCA was set to gradually account for vertical heterogeneity of the foliage, i.e. the canopy was described as one, two or ten horizontal layers of leaves. Mixtures exhibited various and heterogeneous profiles of foliar distribution, leaf inclination and component species height. Nevertheless, most of the LIE was satisfactorily predicted by SIRASCA. Biased estimations were, however, observed for (1) grass species and (2) tall fescue-alfalfa mixtures grown at high density. Most of the discrepancies were due to vertical heterogeneities and were corrected by increasing the vertical description of canopies although, in practice, this would require time-consuming measurements. The turbid medium analogy could be successfully used in a wide range of canopies. However, a more detailed description of the canopy is required for mixtures exhibiting vertical stratifications and inter-/intra-species foliage overlapping. 
Architectural models remain a relevant tool for studying light partitioning in intercropping systems that exhibit strong vertical heterogeneities. Moreover, these models offer the possibility to integrate the effects of microclimate variations on plant growth.
Using a multinomial tree model for detecting mixtures in perceptual detection
Chechile, Richard A.
2014-01-01
In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
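The simulation finding that the Bayesian posterior mean outperforms the MLE can be reproduced in miniature with a single binomial detection probability under a uniform Beta(1,1) prior; the 6P/PD model itself is not implemented here, and the parameter values are choices of this illustration.

```python
import random

def compare_estimators(theta=0.5, n=20, sims=20000, seed=1):
    """Monte Carlo mean squared error of the MLE k/n versus the
    posterior mean (k+1)/(n+2) under a uniform Beta(1,1) prior."""
    rng = random.Random(seed)
    mse_mle = 0.0
    mse_post = 0.0
    for _ in range(sims):
        # Simulate k successes out of n Bernoulli(theta) trials.
        k = sum(rng.random() < theta for _ in range(n))
        mse_mle += (k / n - theta) ** 2
        mse_post += ((k + 1) / (n + 2) - theta) ** 2
    return mse_mle / sims, mse_post / sims

mse_mle, mse_post = compare_estimators()
print(mse_post < mse_mle)  # True: the posterior mean is more accurate here
```

For theta near 0.5 the shrinkage toward the prior mean reduces every draw's squared error, mirroring the abstract's point about the James-Stein effect.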
Chiu, Ming-Chih; Hunt, Lisa; Resh, Vincent H
2016-12-01
Pesticide pollution from agricultural field run-off or spray drift has been documented to impact river ecosystems worldwide. However, there are limited data on the short- and long-term effects of repeated pulses of pesticide mixtures on biotic assemblages in natural systems. We used reported pesticide application data as input to a hydrological fate and transport model (Soil and Water Assessment Tool) to simulate the spatiotemporal dynamics of pesticide mixtures in streams on a daily time-step. We then applied regression models to explore the relationship between macroinvertebrate communities and pesticide dynamics in the Sacramento River watershed of California during 2002-2013. We found that both maximum and average pesticide toxic units were important in determining impacts on macroinvertebrates, and that the composition of macroinvertebrate communities trended toward taxa having higher resilience and resistance to pesticide exposure, based on the Species At Risk (SPEARpesticides) index. Results indicate that risk-assessment efforts can be improved by considering both short- and long-term effects of pesticide mixtures on macroinvertebrate community composition.
Modeling sports highlights using a time-series clustering framework and model interpretation
NASA Astrophysics Data System (ADS)
Radhakrishnan, Regunathan; Otsuka, Isao; Xiong, Ziyou; Divakaran, Ajay
2005-01-01
In our past work on sports highlights extraction, we have shown the utility of detecting audience reaction using an audio classification framework. The audio classes in that framework were chosen based on intuition. In this paper, we present a systematic way of identifying the key audio classes for sports highlights extraction using a time-series clustering framework. We treat the low-level audio features as a time series and model the highlight segments as "unusual" events against a background "usual" process. The set of audio classes that characterizes the sports domain is then identified by analyzing the consistent patterns in each of the clusters output by the time-series clustering framework. The distribution of features from the training data so obtained for each of the key audio classes is parameterized by a minimum description length Gaussian mixture model (MDL-GMM). We also interpret the meaning of each mixture component of the MDL-GMM for the key audio class (the "highlight" class) that is correlated with highlight moments. Our results show that the "highlight" class is a mixture of audience cheering and the commentator's excited speech. Furthermore, we show that the precision-recall performance for highlights extraction based on this "highlight" class is better than that of our previous approach, which used only audience cheering as the key highlight class.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class probabilities. The mixture model makes it possible to handle heterogeneous thematic classes that cannot be well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
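The EM machinery for finite mixtures described above is easiest to see in one dimension. The sketch below fits a two-component Gaussian (rather than complex Wishart) mixture: the E-step computes posterior class probabilities per point, and the M-step performs the weighted parameter updates. Data and initialization are illustrative.

```python
import math
import random

def em_gmm_1d(data, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture,
    returning means, variances, and mixing proportions."""
    mu = [min(data), max(data)]  # crude initial partition
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior class probabilities for each point.
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: responsibility-weighted parameter updates.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return mu, var, pi

random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(300)]
        + [random.gauss(8.0, 1.0) for _ in range(300)])
mu, var, pi = em_gmm_1d(data)
```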
Constantin, Julian Gelman; Schneider, Matthias; Corti, Horacio R
2016-06-09
The glass transition temperature of trehalose, sucrose, glucose, and fructose aqueous solutions has been predicted as a function of the water content by using the free volume/percolation model (FVPM). This model only requires the molar volume of water in the liquid and supercooled regimes, the molar volumes of the hypothetical pure liquid sugars at temperatures below their pure glass transition temperatures, and the molar volumes of the mixtures at the glass transition temperature. The model is simplified by assuming that the excess thermal expansion coefficient is negligible for saccharide-water mixtures, and this ideal FVPM becomes identical to the Gordon-Taylor model. It was found that the behavior of the water molar volume in trehalose-water mixtures at low temperatures can be obtained by assuming that the FVPM holds for this mixture. The temperature dependence of the water molar volume in the supercooled region of interest seems to be compatible with the recent hypothesis that liquid water can exist in two structures, with high-density liquid water being the state of water in the sugar solutions. The idealized FVPM describes the measured glass transition temperatures of sucrose, glucose, and fructose aqueous solutions with much better accuracy than both the Gordon-Taylor model, which is based on an empirical k_GT constant dependent on the saccharide glass transition temperature, and the Couchman-Karasz model, which uses experimental heat capacity changes of the components at the glass transition temperature. Thus, the FVPM seems to be an excellent tool for predicting the glass transition temperature of other aqueous saccharide and polyol solutions by resorting to volumetric information that is easily available.
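The Gordon-Taylor form that the ideal FVPM reduces to is simple to evaluate. In the sketch below, the pure-component Tg values are approximate literature figures (water ~136 K, trehalose ~388 K) and the k constant is hypothetical, not a fitted value from this paper.

```python
def gordon_taylor_tg(w_water, tg_water, tg_sugar, k):
    """Gordon-Taylor glass transition temperature of a binary
    water + saccharide mixture; w_water is the water mass fraction."""
    w_sugar = 1.0 - w_water
    return ((w_sugar * tg_sugar + k * w_water * tg_water)
            / (w_sugar + k * w_water))

# Approximate Tg values (K) and a hypothetical k, for illustration only.
print(round(gordon_taylor_tg(0.2, 136.0, 388.0, 5.0), 1))  # 248.0
```

Note how the predicted Tg interpolates between the pure-component values, recovering tg_sugar at w_water = 0 and tg_water at w_water = 1.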
Local Solutions in the Estimation of Growth Mixture Models
ERIC Educational Resources Information Center
Hipp, John R.; Bauer, Daniel J.
2006-01-01
Finite mixture models are well known to have poorly behaved likelihood functions featuring singularities and multiple optima. Growth mixture models may suffer from fewer of these problems, potentially benefiting from the structure imposed on the estimated class means and covariances by the specified growth model. As demonstrated here, however,…
Thrupp, Tara J; Runnalls, Tamsin J; Scholze, Martin; Kugathas, Subramaniam; Kortenkamp, Andreas; Sumpter, John P
2018-04-01
Ill-defined, multi-component mixtures of steroidal pharmaceuticals are present in the aquatic environment. Fish are extremely sensitive to some of these steroids. It is important to know how fish respond to these mixtures, and from that knowledge to develop methodology that enables accurate prediction of those responses. To provide some of the data required to reach this objective, pairs of fish were first exposed to five different synthetic steroidal pharmaceuticals (one estrogen, EE2; one androgen, trenbolone; one glucocorticoid, beclomethasone dipropionate; and two progestogens, desogestrel and levonorgestrel) and concentration-response data on egg production were obtained. Based on those concentration-response relationships, a five-component mixture was designed and tested twice. Very similar effects were observed in the two experiments. The mixture inhibited egg production in an additive manner, predicted better by the model of Independent Action than by that of Concentration Addition. Our data provide a reference case for Independent Action in an in vivo model. A significant combined effect was observed when each steroidal pharmaceutical in the mixture was present at a concentration which on its own would produce no statistically significant effect (something from 'nothing'). Further, when each component was present in the mixture at a concentration expected to inhibit egg production by between 18% (beclomethasone dipropionate) and 40% (trenbolone), the mixture almost completely inhibited egg production: a phenomenon we term 'a lot from a little'. The results from this proof-of-principle study suggest that multiple steroids present in the aquatic environment can be analysed for their potential combined environmental risk.
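The Independent Action model that best predicted the mixture effect combines fractional effects multiplicatively. A sketch with illustrative effect levels drawn from the 18-40% range quoted above (the exact per-component values are hypothetical):

```python
def independent_action(effects):
    """Independent Action prediction of the combined fractional effect
    (e.g. fractional inhibition of egg production):
    E_mix = 1 - product over i of (1 - E_i)."""
    survival = 1.0
    for e in effects:
        survival *= 1.0 - e
    return 1.0 - survival

# Five components each inhibiting egg production by 18-40% individually,
# echoing the 'a lot from a little' scenario (values illustrative).
effects = [0.18, 0.25, 0.30, 0.35, 0.40]
print(round(independent_action(effects), 2))  # 0.83
```

Even moderate single-component effects compound toward near-complete inhibition, which is the qualitative behaviour the study reports.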
Model of Fluidized Bed Containing Reacting Solids and Gases
NASA Technical Reports Server (NTRS)
Bellan, Josette; Lathouwers, Danny
2003-01-01
A mathematical model has been developed for describing the thermofluid dynamics of a dense, chemically reacting mixture of solid particles and gases. As used here, "dense" signifies having a large volume fraction of particles, as for example in a bubbling fluidized bed. The model is intended especially for application to fluidized beds that contain mixtures of carrier gases, biomass undergoing pyrolysis, and sand. So far, the design of fluidized beds and other gas/solid industrial processing equipment has been based on empirical correlations derived from laboratory- and pilot-scale units. The present mathematical model is a product of continuing efforts to develop a computational capability for optimizing the designs of fluidized beds and related equipment on the basis of first principles. Such a capability could eliminate the need for expensive, time-consuming predesign testing.
de Bruijn, Paulien J. A.; Sabelis, Maurice W.
2010-01-01
Phytoseiulus persimilis is a predatory mite that in absence of vision relies on the detection of herbivore-induced plant odors to locate its prey, the two-spotted spider-mite Tetranychus urticae. This herbivorous prey is feeding on leaves of a wide variety of plant species in different families. The predatory mites respond to numerous structurally different compounds. However, typical spider-mite induced plant compounds do not attract more predatory mites than plant compounds not associated with prey. Because the mites are sensitive to many compounds, components of odor mixtures may affect each other’s perception. Although the response to pure compounds has been well documented, little is known how interactions among compounds affect the response to odor mixtures. We assessed the relation between the mites’ responses elicited by simple mixtures of two compounds and by the single components of these mixtures. The preference for the mixture was compared to predictions under three conceptual models, each based on one of the following assumptions: (1) the responses elicited by each of the individual components can be added to each other; (2) they can be averaged; or (3) one response overshadows the other. The observed response differed significantly from the response predicted under the additive response, average response, and overshadowing response model in 52, 36, and 32% of the experimental tests, respectively. Moreover, the behavioral responses elicited by individual compounds and their binary mixtures were determined as a function of the odor concentration. The relative contribution of each component to the behavioral response elicited by the mixture varied with the odor concentration, even though the ratio of both compounds in the mixture was kept constant. Our experiments revealed that compounds that elicited no response had an effect on the response elicited by binary mixtures that they were part of. The results are not consistent with the hypothesis that P. 
persimilis perceives odor mixtures as a collection of strictly elemental objects. They suggest that odor mixtures are instead perceived as one synthetic whole. PMID:20872172
NASA Technical Reports Server (NTRS)
Cole, Benjamin H.; Yang, Ping; Baum, Bryan A.; Riedi, Jerome; Labonnote, Laurent C.; Thieuleux, Francois; Platnick, Steven
2012-01-01
Insufficient knowledge of the habit distribution and the degree of surface roughness of ice crystals within ice clouds is a source of uncertainty in the forward light scattering and radiative transfer simulations required in downstream applications involving these clouds. The widely used MODerate Resolution Imaging Spectroradiometer (MODIS) Collection 5 ice microphysical model assumes a mixture of various ice crystal shapes with smooth-facets except aggregates of columns for which a moderately rough condition is assumed. When compared with PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) polarized reflection data, simulations of polarized reflectance using smooth particles show a poor fit to the measurements, whereas very rough-faceted particles provide an improved fit to the polarized reflectance. In this study a new microphysical model based on a mixture of 9 different ice crystal habits with severely roughened facets is developed. Simulated polarized reflectance using the new ice habit distribution is calculated using a vector adding-doubling radiative transfer model, and the simulations closely agree with the polarized reflectance observed by PARASOL. The new general habit mixture is also tested using a spherical albedo differences analysis, and surface roughening is found to improve the consistency of multi-angular observations. It is suggested that an ice model incorporating an ensemble of different habits with severely roughened surfaces would potentially be an adequate choice for global ice cloud retrievals.
Robust group-wise rigid registration of point sets using t-mixture model
NASA Astrophysics Data System (ADS)
Ravikumar, Nishant; Gooya, Ali; Frangi, Alejandro F.; Taylor, Zeike A.
2016-03-01
We present a probabilistic framework for robust, group-wise rigid alignment of point sets using a mixture of Student's t-distributions, suited especially to cases where the point sets are of varying lengths, are corrupted by an unknown degree of outliers, or contain missing data. Medical images (in particular magnetic resonance (MR) images), their segmentations, and consequently the point sets generated from these are highly susceptible to corruption by outliers. This poses a problem for robust correspondence estimation and accurate alignment of shapes, necessary for training statistical shape models (SSMs). To address these issues, this study proposes to use a t-mixture model (TMM) to approximate the underlying joint probability density of a group of similar shapes and align them to a common reference frame. The heavy-tailed nature of t-distributions provides a more robust registration framework in comparison to state-of-the-art algorithms. Significant reduction in alignment errors is achieved in the presence of outliers using the proposed TMM-based group-wise rigid registration method, in comparison to its Gaussian mixture model (GMM) counterparts. The proposed TMM framework is compared with a group-wise variant of the well-known Coherent Point Drift (CPD) algorithm and with two other group-wise methods using GMMs, on both synthetic and real data sets. Rigid alignment errors for groups of shapes are quantified using the Hausdorff distance (HD) and quadratic surface distance (QSD) metrics.
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to choice of prior on p and substantially different estimates of abundance as a consequence.
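The integration step at the heart of the N-mixture model can be sketched directly: for one site with replicated counts, the marginal likelihood sums the binomial observation model over a Poisson prior on the latent abundance N. A minimal illustration (the truncation bound and parameter values are illustrative choices, not from the article):

```python
from math import comb, exp

def n_mixture_lik(counts, lam, p, n_max=100):
    # Marginal likelihood for one site: sum over latent abundance N of
    #   Poisson(N; lam) * prod_j Binomial(count_j; N, p),
    # truncated at n_max (fine as long as the Poisson tail is negligible).
    lik = 0.0
    prior = exp(-lam)                 # Poisson pmf at N = 0
    for n in range(n_max + 1):
        if n >= max(counts):          # need at least max(counts) animals
            cond = 1.0
            for y in counts:
                cond *= comb(n, y) * p ** y * (1 - p) ** (n - y)
            lik += prior * cond
        prior *= lam / (n + 1)        # advance Poisson pmf to N = n + 1
    return lik
```

A quick sanity check: with a single replicate, Poisson thinning implies the marginal count is Poisson with mean lam*p, and the truncated sum reproduces that value.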
A narrow-band k-distribution model with single mixture gas assumption for radiative flows
NASA Astrophysics Data System (ADS)
Jo, Sung Min; Kim, Jae Won; Kwon, Oh Joon
2018-06-01
In the present study, the narrow-band k-distribution (NBK) model parameters for mixtures of H2O, CO2, and CO are proposed by utilizing the line-by-line (LBL) calculations with a single mixture gas assumption. For the application of the NBK model to radiative flows, a radiative transfer equation (RTE) solver based on a finite-volume method on unstructured meshes was developed. The NBK model and the RTE solver were verified by solving two benchmark problems including the spectral radiance distribution emitted from one-dimensional slabs and the radiative heat transfer in a truncated conical enclosure. It was shown that the results are accurate and physically reliable by comparing with available data. To examine the applicability of the methods to realistic multi-dimensional problems in non-isothermal and non-homogeneous conditions, radiation in an axisymmetric combustion chamber was analyzed, and then the infrared signature emitted from an aircraft exhaust plume was predicted. For modeling the plume flow involving radiative cooling, a flow-radiation coupled procedure was devised in a loosely coupled manner by adopting a Navier-Stokes flow solver based on unstructured meshes. It was shown that the predicted radiative cooling for the combustion chamber is physically more accurate than other predictions, and is as accurate as that by the LBL calculations. It was found that the infrared signature of aircraft exhaust plume can also be obtained accurately, equivalent to the LBL calculations, by using the present narrow-band approach with a much improved numerical efficiency.
Exposure to mixtures is frequent, but biologic pathways such as metabolic inhibition, are poorly understood. CHCl3 and TCE are model volatiles frequently co-occurring; combined exposure results in less than additive hepatotoxicity. Here, we explore the underlying metabolic inte...
Development of PBPK Models for Gasoline in Adult and Pregnant Rats and their Fetuses
Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to...
DOT National Transportation Integrated Search
2014-07-01
The formulation of constitutive equations for asphaltic pavement is based on rheological models which include the asphalt mixture, additives, and the bitumen. In terms of the asphalt, the rheology addresses the flow and permanent deformation in time,...
A Bayesian mixture model for missing data in marine mammal growth analysis
Shotwell, Mary E.; McFee, Wayne E.; Slate, Elizabeth H.
2016-01-01
Much of what is known about bottlenose dolphin (Tursiops truncatus) anatomy and physiology is based on necropsies from stranding events. Measurements of total body length, total body mass, and age are used to estimate growth. It is more feasible to retrieve and transport smaller animals for total body mass measurement than larger animals, introducing a systematic bias in sampling. Adverse weather events, volunteer availability, and other unforeseen circumstances also contribute to incomplete measurement. We have developed a Bayesian mixture model to describe growth in detected stranded animals using data from both those that are fully measured and those not fully measured. Our approach uses a shared random effect to link the missingness mechanism (i.e. full/partial measurement) to distinct growth curves in the fully and partially measured populations, thereby enabling borrowing of strength for estimation. We use simulation to compare our model to complete case analysis and two common multiple imputation methods according to model mean square error. Results indicate that our mixture model provides better fit both when the two populations are present and when they are not. The feasibility and utility of our new method are demonstrated by application to South Carolina strandings data. PMID:28503080
Chen, Tingting; Kim, Choon Young; Kaur, Amandeep; Lamothe, Lisa; Shaikh, Maliha; Keshavarzian, Ali; Hamaker, Bruce R
2017-03-22
Impaired gut barrier function plays an important role in the development of many diseases such as obesity, inflammatory bowel disease, and in HIV infection. Dietary fibres have been shown to improve intestinal barrier function through their fermentation products, short chain fatty acids (SCFAs), and the effects of individual SCFAs have been studied. Here, different SCFA mixtures representing possible compositions from fibre fermentation products were studied for protective and reparative effects on intestinal barrier function. The effect of fermentation products from four dietary fibres, i.e. resistant starch, fructooligosaccharides, and sorghum and corn arabinoxylan (varying in their branched structure) on barrier function was positively correlated with their SCFA concentration. Pure SCFA mixtures of various concentrations and compositions were tested using a Caco-2 cell model. SCFAs at a moderate concentration (40-80 mM) improved barrier function without causing damage to the monolayer. In a 40 mM SCFA mixture, the butyrate proportion at 20% and 50% showed both a protective and a reparative effect on the monolayer to disrupting agents (LPS/TNF-α) applied simultaneously or prior to the SCFA mixtures. Relating this result to dietary fibre selection, slow fermenting fibres that deliver appropriate concentrations of SCFAs to the epithelium with a high proportion of butyrate may improve barrier function.
Predicting mixed-gas adsorption equilibria on activated carbon for precombustion CO2 capture.
García, S; Pis, J J; Rubiera, F; Pevida, C
2013-05-21
We present experimentally measured adsorption isotherms of CO2, H2, and N2 on a phenol-formaldehyde resin-based activated carbon, which had been previously synthesized for the separation of CO2 in a precombustion capture process. The single component adsorption isotherms were measured in a magnetic suspension balance at three different temperatures (298, 318, and 338 K) and over a large range of pressures (from 0 to 3000-4000 kPa). These values cover the temperature and pressure conditions likely to be found in a precombustion capture scenario, where CO2 needs to be separated from a CO2/H2/N2 gas stream at high pressure (~1000-1500 kPa) and with a high CO2 concentration (~20-40 vol %). Data on the pure component isotherms were correlated using the Langmuir, Sips, and dual-site Langmuir (DSL) models, i.e., a two-, three-, and four-parameter model, respectively. By using the pure component isotherm fitting parameters, adsorption equilibrium was then predicted for multicomponent gas mixtures by the extended models. The DSL model was formulated considering the energetic site-matching concept, recently addressed in the literature. Experimental gas-mixture adsorption equilibrium data were calculated from breakthrough experiments conducted in a lab-scale fixed-bed reactor and compared with the predictions from the models. Breakthrough experiments were carried out at a temperature of 318 K and five different pressures (300, 500, 1000, 1500, and 2000 kPa) where two different CO2/H2/N2 gas mixtures were used as the feed gas in the adsorption step. The DSL model was found to be the one that most accurately predicted the CO2 adsorption equilibrium in the multicomponent mixture. The results presented in this work highlight the importance of performing experimental measurements of mixture adsorption equilibria, as they are of utmost importance to discriminate between models and to correctly select the one that most closely reflects the actual process.
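The prediction step from pure-component fits to mixtures can be sketched for the simplest extended model: the extended Langmuir isotherm shares one denominator across all components, so each species' loading is suppressed by its competitors. The parameter values below are illustrative placeholders, not the paper's fitted constants:

```python
def extended_langmuir(p, qmax, b):
    # Extended (multicomponent) Langmuir model:
    #   q_i = qmax_i * b_i * p_i / (1 + sum_j b_j * p_j)
    # p: partial pressures, qmax: saturation capacities, b: affinity constants.
    denom = 1.0 + sum(bj * pj for bj, pj in zip(b, p))
    return [qm * bi * pi / denom for qm, bi, pi in zip(qmax, b, p)]

# Hypothetical CO2 competing with H2: CO2 loading in the mixture drops
# relative to its pure-component value at the same CO2 partial pressure.
q_mix = extended_langmuir([500.0, 1500.0], [3.0, 0.5], [0.002, 0.0001])
q_pure = extended_langmuir([500.0, 0.0], [3.0, 0.5], [0.002, 0.0001])
```

With zero competitor pressure the expression reduces exactly to the single-component Langmuir isotherm, which is the sense in which mixture behaviour is "predicted" from pure-component fits.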
Kováčik, Andrej; Vogel, Alexander; Adler, Juliane; Pullmannová, Petra; Vávrová, Kateřina; Huster, Daniel
2018-05-01
In this work, we studied model stratum corneum lipid mixtures composed of the hydroxylated skin ceramides N-lignoceroyl 6-hydroxysphingosine (Cer[NH]) and α-hydroxylignoceroyl phytosphingosine (Cer[AP]). Two model skin lipid mixtures of the composition Cer[NH] or Cer[AP], N-lignoceroyl sphingosine (Cer[NS]), lignoceric acid (C24:0) and cholesterol in a 0.5:0.5:1:1 molar ratio were compared. Model membranes were investigated by differential scanning calorimetry and ²H solid-state NMR spectroscopy at temperatures from 25 °C to 80 °C. Each component of the model mixture was specifically deuterated for selective detection by ²H NMR. Thus, the exact phase composition of the mixture at varying temperatures could be quantified. Moreover, using X-ray powder diffraction we investigated the lamellar phase formation. From the solid-state NMR and DSC studies, we found that both hydroxylated Cer[NH] and Cer[AP] exhibit a similar phase behavior. At the physiological skin temperature of 32 °C, the lipids form a crystalline (orthorhombic) phase. With increasing temperature, most of the lipids become fluid and form a liquid-crystalline phase, which converts to the isotropic phase at higher temperatures (65-80 °C). Interestingly, lignoceric acid in the Cer[NH]-containing mixture has a tendency to form two types of fluid phases at 65 °C. This tendency was also observed in Cer[AP]-containing membranes at 80 °C. While Cer[AP]-containing lipid models formed a short periodicity phase featuring a repeat spacing of d = 5.4 nm, in the Cer[NH]-based model skin lipid membranes, the formation of an unusual long periodicity phase with a repeat spacing of d = 10.7 nm was observed.
Hu, Beibei; Zhang, Xueqing; Chen, Haopeng; Cui, Daxiang
2011-03-01
We propose a new algorithm for automatic identification of fluorescent signals. Based on the features of chromatographic chips, mathematical morphology in RGB color space was used to filter and enhance the images, pyramid connection was used to segment the areas of fluorescent signal, and a Gaussian mixture model was then used to detect the fluorescent signal. Finally, we calculated the average fluorescent intensity in the obtained fluorescent areas. Our results show that the algorithm segments the fluorescent areas effectively, detects the fluorescent signal quickly and accurately, and thereby realizes quantitative detection of the fluorescent signal in chromatographic chips.
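The Gaussian-mixture step of such a pipeline can be sketched in one dimension: fit a two-component mixture to pixel intensities by EM and let the components play the roles of dark background and fluorescent signal. This is a generic sketch on synthetic intensities, not the authors' implementation:

```python
import math, random

def em_gmm_1d(x, iters=200):
    # Two-component 1-D Gaussian mixture fitted by EM; returns
    # (weights, means, variances).  Used here to separate background
    # pixels from fluorescent-signal pixels by intensity.
    w = [0.5, 0.5]
    mu = [min(x), max(x)]            # crude but well-separated starts
    var = [1.0, 1.0]
    for _ in range(iters):
        # E-step: posterior responsibility of each component per pixel
        resp = []
        for xi in x:
            dens = [w[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k]))
                    for k in (0, 1)]
            s = dens[0] + dens[1]
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2
                         for r, xi in zip(resp, x)) / nk
            var[k] = max(var[k], 1e-6)   # guard against collapse
    return w, mu, var

random.seed(0)
# synthetic intensities: dark background around 10, signal around 60
pixels = [random.gauss(10, 2) for _ in range(300)] + \
         [random.gauss(60, 5) for _ in range(100)]
w, mu, var = em_gmm_1d(pixels)
```

The fitted means give an intensity threshold between the two populations, after which the mean intensity of the "signal" component plays the role of the average fluorescent intensity computed in the paper.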
Macey, Brett M.; Jenny, Matthew J.; Williams, Heidi R.; Thibodeaux, Lindy K.; Beal, Marion; Almeida, Jonas S.; Cunningham, Charles; Mancia, Annalaura; Warr, Gregory W.; Burge, Erin J.; Holland, A. Fred; Gross, Paul S.; Hikima, Sonomi; Burnett, Karen G.; Burnett, Louis; Chapman, Robert W.
2010-01-01
Heavy metals, such as copper, zinc and cadmium, represent some of the most common and serious pollutants in coastal estuaries. In the present study, we used a combination of linear and artificial neural network (ANN) modelling to detect and explore interactions among low-dose mixtures of these heavy metals and their impacts on fundamental physiological processes in tissues of the Eastern oyster, Crassostrea virginica. Animals were exposed to Cd (0.001–0.400 µM), Zn (0.001–3.059 µM) or Cu (0.002–0.787 µM), either alone or in combination for 1 to 27 days. We measured indicators of acid–base balance (hemolymph pH and total CO2), gas exchange (Po2), immunocompetence (total hemocyte counts, numbers of invasive bacteria), antioxidant status (glutathione, GSH), oxidative damage (lipid peroxidation; LPx), and metal accumulation in the gill and the hepatopancreas. Linear analysis showed that oxidative membrane damage from tissue accumulation of environmental metals was correlated with impaired acid–base balance in oysters. ANN analysis revealed interactions of metals with hemolymph acid–base chemistry in predicting oxidative damage that were not evident from linear analyses. These results highlight the usefulness of machine learning approaches, such as ANNs, for improving our ability to recognize and understand the effects of subacute exposure to contaminant mixtures. PMID:19958840
Toumi, Héla; Boumaiza, Moncef; Millet, Maurice; Radetski, Claudemir Marcos; Camara, Baba Issa; Felten, Vincent; Masfaraud, Jean-François; Férard, Jean-François
2018-04-19
We studied the combined acute effect (i.e., after 48 h) of deltamethrin (a pyrethroid insecticide) and malathion (an organophosphate insecticide) on Daphnia magna. Two approaches were used to examine the potential interaction effects of eight mixtures of deltamethrin and malathion: (i) calculation of the mixture toxicity index (MTI) and safety factor index (SFI) and (ii) response surface methodology coupled with an isobole-based statistical model (using a generalized linear model). According to the calculation of MTI and SFI, one tested mixture was found to be additive while the two others were found to be non-additive (MTI) or antagonistic (SFI); these differences between index responses are due only to differences in the terminology associated with the two indexes. Through the response surface approach and isobologram analysis, we concluded that there is a significant antagonistic effect of binary mixtures of deltamethrin and malathion on D. magna immobilization after 48 h of exposure. The index approaches and the response surface approach with isobologram analysis are complementary: calculation of the mixture toxicity index and safety factor index identifies the type of interaction for each tested mixture individually, while the response surface approach with isobologram analysis integrates all the data, providing a global outcome about the type of interactive effect. Only the response surface approach with isobologram analysis allowed statistical assessment of the ecotoxicological interaction. Nevertheless, we recommend the use of both approaches (i) to identify the combined effects of contaminants and (ii) to improve risk assessment and environmental management.
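The first index approach can be sketched under Koenemann's common formulation of the mixture toxicity index; treat the formula below and the example toxic-unit values as illustrative assumptions rather than the authors' exact computation:

```python
from math import log

def mixture_toxicity_index(toxic_units):
    # Mixture toxicity index (MTI), one common formulation:
    #   M   = sum of toxic units (c_i / EC50_i) in the mixture at its EC50
    #   M0  = M / f_max, with f_max the largest toxic-unit fraction
    #   MTI = 1 - log(M) / log(M0)
    # MTI = 1 indicates concentration addition, MTI = 0 no addition,
    # MTI < 0 antagonism, MTI > 1 supra-additivity.
    m = sum(toxic_units)
    f_max = max(toxic_units) / m
    m0 = m / f_max
    return 1 - log(m) / log(m0)

# Equitoxic binary mixture behaving exactly concentration-additively:
# each component at 0.5 toxic units, so M = 1 and MTI = 1.
additive = mixture_toxicity_index([0.5, 0.5])
# If a full toxic unit of each component is needed (M = 2), the mixture
# is less than concentration-additive and MTI falls below 1.
weaker = mixture_toxicity_index([1.0, 1.0])
```

This is the punctual, per-mixture reading the abstract describes: each tested mixture yields one MTI value, whereas the response surface approach pools all mixtures into a single statistical model.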
NASA Astrophysics Data System (ADS)
Xie, M.; Agus, S. S.; Schanz, T.; Kolditz, O.
2004-12-01
This paper presents an upscaling concept of swelling/shrinking processes of a compacted bentonite/sand mixture, which also applies to swelling of porous media in general. A constitutive approach for highly compacted bentonite/sand mixture is developed accordingly. The concept is based on the diffuse double layer theory and connects microstructural properties of the bentonite as well as chemical properties of the pore fluid with swelling potential. Main factors influencing the swelling potential of bentonite, i.e. variation of water content, dry density, chemical composition of pore fluid, as well as the microstructures and the amount of swelling minerals are taken into account. According to the proposed model, porosity is divided into interparticle and interlayer porosity. Swelling is the potential of interlayer porosity increase, which reveals itself as volume change in the case of free expansion, or turns to be swelling pressure in the case of constrained swelling. The constitutive equations for swelling/shrinking are implemented in the software GeoSys/RockFlow as a new chemo-hydro-mechanical model, which is able to simulate isothermal multiphase flow in bentonite. Details of the mathematical and numerical multiphase flow formulations, as well as the code implementation are described. The proposed model is verified using experimental data of tests on a highly compacted bentonite/sand mixture. Comparison of the 1D modelling results with the experimental data evidences the capability of the proposed model to satisfactorily predict free swelling of the material under investigation.
Wang, Yunpeng; Thompson, Wesley K.; Schork, Andrew J.; Holland, Dominic; Chen, Chi-Hua; Bettella, Francesco; Desikan, Rahul S.; Li, Wen; Witoelar, Aree; Zuber, Verena; Devor, Anna; Nöthen, Markus M.; Rietschel, Marcella; Chen, Qiang; Werge, Thomas; Cichon, Sven; Weinberger, Daniel R.; Djurovic, Srdjan; O’Donovan, Michael; Visscher, Peter M.; Andreassen, Ole A.; Dale, Anders M.
2016-01-01
Most of the genetic architecture of schizophrenia (SCZ) has not yet been identified. Here, we apply a novel statistical algorithm called Covariate-Modulated Mixture Modeling (CM3), which incorporates auxiliary information (heterozygosity, total linkage disequilibrium, genomic annotations, pleiotropy) for each single nucleotide polymorphism (SNP) to enable more accurate estimation of replication probabilities, conditional on the observed test statistic (“z-score”) of the SNP. We use a multiple logistic regression on z-scores to combine information from auxiliary information to derive a “relative enrichment score” for each SNP. For each stratum of these relative enrichment scores, we obtain nonparametric estimates of posterior expected test statistics and replication probabilities as a function of discovery z-scores, using a resampling-based approach that repeatedly and randomly partitions meta-analysis sub-studies into training and replication samples. We fit a scale mixture of two Gaussians model to each stratum, obtaining parameter estimates that minimize the sum of squared differences of the scale-mixture model with the stratified nonparametric estimates. We apply this approach to the recent genome-wide association study (GWAS) of SCZ (n = 82,315), obtaining a good fit between the model-based and observed effect sizes and replication probabilities. We observed that SNPs with low enrichment scores replicate with a lower probability than SNPs with high enrichment scores even when both are genome-wide significant (p < 5×10⁻⁸). There were 693 and 219 independent loci with model-based replication rates ≥80% and ≥90%, respectively. Compared to analyses not incorporating relative enrichment scores, CM3 increased out-of-sample yield for SNPs that replicate at a given rate. This demonstrates that replication probabilities can be more accurately estimated using prior enrichment information with CM3. PMID:26808560
NASA Astrophysics Data System (ADS)
Errington, Jeffrey Richard
This work focuses on the development of intermolecular potential models for real fluids. United-atom models have been developed for both non-polar and polar fluids. The models have been optimized to the vapor-liquid coexistence properties. Histogram reweighting techniques were used to calculate phase behavior. The Hamiltonian scaling grand canonical Monte Carlo method was developed to enable the determination of thermodynamic properties of several related Hamiltonians from a single simulation. With this method, the phase behavior of variations of the Buckingham exponential-6 potential was determined. Reservoir grand canonical Monte Carlo simulations were developed to simulate molecules with complex architectures and/or stiff intramolecular constraints. The scheme is based on the creation of a reservoir of ideal chains from which structures are selected for insertion during a simulation. New intermolecular potential models have been developed for water, the n-alkane homologous series, benzene, cyclohexane, carbon dioxide, ammonia and methanol. The models utilize the Buckingham exponential-6 potential to model non-polar interactions and point charges to describe polar interactions. With the exception of water, the new models reproduce experimental saturated densities, vapor pressures and critical parameters to within a few percent. In the case of water, we found a set of parameters that describes the phase behavior better than other available point charge models while giving a reasonable description of the liquid structure. The mixture behavior of water-hydrocarbon mixtures has also been examined. The Henry's law constants of methane, ethane, benzene and cyclohexane in water were determined using Widom insertion and expanded ensemble techniques. In addition the high-pressure phase behavior of water-methane and water-ethane systems was studied using the Gibbs ensemble method. 
The results from this study indicate that it is possible to obtain a good description of the phase behavior of pure components using united-atom models. The mixture behavior of non-polar systems, including highly asymmetric components, was in good agreement with experiment. The calculations for the highly non-ideal water-hydrocarbon mixtures reproduced experimental behavior with varying degrees of success. The results indicate that multibody effects, such as polarizability, must be taken into account when modeling mixtures of polar and non-polar components.
A New Model for Simulating Gas Metal Arc Welding based on Phase Field Model
NASA Astrophysics Data System (ADS)
Jiang, Yongyue; Li, Li; Zhao, Zhijiang
2017-11-01
Many physical processes, such as metal melting, multiphase fluid flow, heat and mass transfer, and the thermocapillary (Marangoni) effect, occur in gas metal arc welding (GMAW), which should therefore be treated as a mixture system. In this paper, building on previous work, we propose a new model to simulate GMAW comprising the Navier-Stokes equations, the phase field model and the energy equation. Unlike most previous work, we incorporate the thermocapillary effect into the phase field model through the mixture energy, which differs from the volume-of-fluid (VOF) method widely used for GMAW. We also consider gravity, electromagnetic force, surface tension, buoyancy and arc pressure in the momentum equation. Spray transfer, especially projected transfer in GMAW, is computed as a numerical example with a continuous finite element method and a modified midpoint scheme. A pulsed welding current is used in the numerical example, and the simulated metal transfer agrees well with GMAW theory. Comparison of the results with high-speed photography data and with the VOF model validates the accuracy and stability of the model and scheme, and shows that the new model achieves higher precision.
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-03-01
Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step in producing a good map of a disease. Libya was selected for this work, to examine its geographical variation in the incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models for estimating the relative risk for lung cancer, together with population censuses of the study area for the period 2006 to 2011, were used: the Standardized Morbidity Ratio (SMR), the most popular statistic in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollie (BYM) model; and the Mixture model. As an initial step, this study provides a review of all proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs, and goodness-of-fit (GOF) criteria, which are standard in statistical modelling for comparing fitted models, were used to present and compare the preliminary results. The main results show that the Poisson-gamma, BYM, and Mixture models overcome the problem of the first model (SMR) when there are no observed lung cancer cases in certain districts. Results show that the Mixture model is most robust and provides better relative risk estimates across the range of models.
Evaluating differential effects using regression interactions and regression mixture models
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results to those obtained using an interaction term in linear regression. The research questions each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described, and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
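The contrast the paper draws can be made concrete with a toy EM fit of a two-class regression mixture, where the two latent classes share a predictor but differ in slope. The initialisation and noise model below are simplifying assumptions for illustration, not the estimator used in the paper (which would typically be fit in dedicated software such as Mplus):

```python
import math, random

def em_regression_mixture(x, y, iters=100):
    # EM for a two-class mixture of simple linear regressions
    #   y = a_k + b_k * x + Normal(0, sigma_k),
    # returning class weights and per-class coefficients (a_k, b_k).
    w = [0.5, 0.5]
    coef = [(min(y), 0.0), (max(y), 0.0)]   # crude, distinct starts
    sigma = [1.0, 1.0]
    n = len(x)
    for _ in range(iters):
        # E-step: responsibilities from the Gaussian regression densities
        resp = []
        for xi, yi in zip(x, y):
            dens = []
            for k in (0, 1):
                m = coef[k][0] + coef[k][1] * xi
                dens.append(w[k] / (sigma[k] * math.sqrt(2 * math.pi))
                            * math.exp(-(yi - m) ** 2 / (2 * sigma[k] ** 2)))
            s = dens[0] + dens[1]
            resp.append([d / s for d in dens])
        # M-step: weighted least squares within each class
        for k in (0, 1):
            r = [ri[k] for ri in resp]
            nk = sum(r)
            w[k] = nk / n
            xbar = sum(ri * xi for ri, xi in zip(r, x)) / nk
            ybar = sum(ri * yi for ri, yi in zip(r, y)) / nk
            sxx = sum(ri * (xi - xbar) ** 2 for ri, xi in zip(r, x))
            sxy = sum(ri * (xi - xbar) * (yi - ybar)
                      for ri, xi, yi in zip(r, x, y))
            b = sxy / sxx
            a = ybar - b * xbar
            coef[k] = (a, b)
            sse = sum(ri * (yi - a - b * xi) ** 2
                      for ri, xi, yi in zip(r, x, y))
            sigma[k] = max(math.sqrt(sse / nk), 1e-3)
    return w, coef

random.seed(2)
x, y = [], []
for _ in range(100):                      # class 1: shallow slope
    xi = random.uniform(0, 3)
    x.append(xi); y.append(1.0 + 0.5 * xi + random.gauss(0, 0.2))
for _ in range(100):                      # class 2: steep slope, offset
    xi = random.uniform(0, 3)
    x.append(xi); y.append(8.0 + 2.0 * xi + random.gauss(0, 0.2))
w, coef = em_regression_mixture(x, y)
```

With well-separated classes EM recovers both slopes, whereas a single regression with an x-by-group interaction would require the grouping variable to be observed; this is exactly the exploratory-versus-confirmatory distinction the paper discusses.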
Nonlinear Structured Growth Mixture Models in M"plus" and OpenMx
ERIC Educational Resources Information Center
Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne
2010-01-01
Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…
The Potential of Growth Mixture Modelling
ERIC Educational Resources Information Center
Muthen, Bengt
2006-01-01
The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…
NASA Astrophysics Data System (ADS)
Hassan, Said A.; Elzanfaly, Eman S.; Salem, Maissa Y.; El-Zeany, Badr A.
2016-01-01
A novel spectrophotometric method was developed for determination of ternary mixtures without previous separation, showing significant advantages over conventional methods. The new method is based on mean centering of double divisor ratio spectra. The mathematical explanation of the procedure is illustrated. The method was evaluated by determination of a model ternary mixture and by the determination of Amlodipine (AML), Aliskiren (ALI) and Hydrochlorothiazide (HCT) in laboratory-prepared mixtures and in a commercial pharmaceutical preparation. For proper presentation of the advantages and applicability of the new method, a comparative study was established between the new mean centering of double divisor ratio spectra (MCDD) method and two similar methods used for analysis of ternary mixtures, namely mean centering (MC) and double divisor of ratio spectra-derivative spectrophotometry (DDRS-DS). The method was also compared with a reported one for analysis of the pharmaceutical preparation. The method was validated according to the ICH guidelines, and accuracy, precision, repeatability and robustness were found to be within acceptable limits.
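The core mean-centering idea, shown here for the simpler single-divisor (binary) case that the double-divisor method extends, can be verified on synthetic spectra; the spectra and concentrations below are made-up numbers for illustration only:

```python
def mean_center(v):
    # subtract the mean of a spectrum over the wavelength grid
    m = sum(v) / len(v)
    return [vi - m for vi in v]

# synthetic absorbance spectra of two pure components on a 5-point grid
sA = [1.0, 2.0, 4.0, 3.0, 1.5]
sB = [2.0, 2.5, 2.0, 1.0, 0.5]
cA, cB = 0.7, 1.3
mix = [cA * a + cB * b for a, b in zip(sA, sB)]   # Beer's-law mixture

# ratio spectrum against divisor sB: equals cA*(sA/sB) plus the constant cB
ratio = [m / b for m, b in zip(mix, sB)]
# mean centering removes the constant term, leaving a signal that is
# exactly proportional to the analyte concentration cA
mc_ratio = mean_center(ratio)
mc_basis = mean_center([a / b for a, b in zip(sA, sB)])
```

Because division by the divisor spectrum turns the divisor component into a constant, mean centering eliminates it exactly; the double-divisor variant applies the same cancellation with the sum of two components' spectra as the divisor, isolating the third component of a ternary mixture.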
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
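The stated equivalence is easy to check numerically for a two-component Poisson example: reweighting each component by its probability of a positive count maps the truncated mixture onto an identical mixture of truncated densities (parameter values are illustrative):

```python
from math import exp, factorial

def pois(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

def truncated_mixture(k, weights, lams):
    # zero-truncated mixture: mix first, then truncate at zero
    num = sum(w * pois(k, l) for w, l in zip(weights, lams))
    denom = 1.0 - sum(w * pois(0, l) for w, l in zip(weights, lams))
    return num / denom

def mixture_of_truncated(k, weights, lams):
    # mixture of zero-truncated densities: truncate first, then mix
    return sum(w * pois(k, l) / (1.0 - pois(0, l))
               for w, l in zip(weights, lams))

# The mapping between the two models: reweight each component by its
# probability of a positive count, then renormalise.
pi, lams = [0.3, 0.7], [1.0, 4.0]
raw = [p * (1.0 - pois(0, l)) for p, l in zip(pi, lams)]
w = [r / sum(raw) for r in raw]
```

Evaluating both densities on the positive integers with these matched weights gives identical values, which is the sense in which the two likelihood surfaces, and hence the associated Horvitz-Thompson population-size estimators, agree.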
CFD Modeling of Helium Pressurant Effects on Cryogenic Tank Pressure Rise Rates in Normal Gravity
NASA Technical Reports Server (NTRS)
Grayson, Gary; Lopez, Alfredo; Chandler, Frank; Hastings, Leon; Hedayat, Ali; Brethour, James
2007-01-01
A recently developed computational fluid dynamics modeling capability for cryogenic tanks is used to simulate both self-pressurization from external heating and also depressurization from thermodynamic vent operation. Axisymmetric models using a modified version of the commercially available FLOW-3D software are used to simulate actual physical tests. The models assume an incompressible liquid phase with density that is a function of temperature only. A fully compressible formulation is used for the ullage gas mixture that contains both condensable vapor and a noncondensable gas component. The tests, conducted at the NASA Marshall Space Flight Center, include both liquid hydrogen and nitrogen in tanks with ullage gas mixtures of each liquid's vapor and helium. Pressure and temperature predictions from the model are compared to sensor measurements from the tests and a good agreement is achieved. This further establishes the accuracy of the developed FLOW-3D based modeling approach for cryogenic systems.
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
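As a quick numerical illustration of the gamma-mixed Poisson construction (the shape and scale values here are arbitrary, chosen only for demonstration), drawing Poisson counts whose mean is itself gamma-distributed reproduces the negative binomial's overdispersion:

```python
import math
import random

random.seed(42)

# Gamma(shape=k, scale=theta) mixing on the Poisson mean gives a
# negative binomial count with mean k*theta and variance k*theta*(1 + theta).
k, theta = 2.0, 3.0
n = 100_000

def poisson_draw(lam):
    # Knuth's multiplication method (adequate for these modest rates)
    limit, prod, x = math.exp(-lam), 1.0, 0
    while True:
        prod *= random.random()
        if prod <= limit:
            return x
        x += 1

counts = [poisson_draw(random.gammavariate(k, theta)) for _ in range(n)]
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / n
# Expect mean near k*theta = 6 and variance near k*theta*(1 + theta) = 24,
# i.e. clear overdispersion relative to a pure Poisson (variance = mean).
```

An inverse Gaussian mixing distribution, as considered in the paper, would replace `random.gammavariate` while keeping the same mixing structure.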
Numerical Simulation of the Detonation of Condensed Explosives
NASA Astrophysics Data System (ADS)
Wang, Cheng; Ye, Ting; Ning, Jianguo
The detonation process of a condensed explosive was simulated using a finite difference method. Euler equations were applied to describe the detonation flow field, an ignition-and-growth model for the chemical reaction, and Jones-Wilkins-Lee (JWL) equations of state for the states of the explosive and detonation products. Based on the simple mixture rule that assumes the reacting explosive to be a mixture of the reactant and product components, 1D and 2D codes were developed to simulate the detonation process of the high explosive PBX9404. The numerical results are in good agreement with the experimental results, which demonstrates that the finite difference method, mixture rule and chemical reaction model proposed in this paper are adequate and feasible.
A New Self-Consistent Field Model of Polymer/Nanoparticle Mixture
NASA Astrophysics Data System (ADS)
Chen, Kang; Li, Hui-Shu; Zhang, Bo-Kai; Li, Jian; Tian, Wen-De
2016-02-01
The field-theoretical method is efficient in predicting the assembled structures of polymeric systems. However, it is challenging to generalize this method to study polymer/nanoparticle mixtures due to their multi-scale nature. Here, we develop a new field-based model which unifies the nanoparticle description with the polymer field within self-consistent field theory. Instead of being an “ensemble-averaged” continuous distribution, the particle density in the final morphology can represent individual particles located at preferred positions. The discreteness of the particle density allows our model to properly address the polymer-particle interface and the excluded-volume interaction. We use this model to study the simplest system: nanoparticles immersed in a dense homopolymer solution. The flexibility of tuning the interfacial details allows our model to capture rich phenomena such as bridging aggregation and depletion attraction. Insights are obtained on the enthalpic and/or entropic origin of the structural variation due to the competition between depletion and interfacial interaction. This approach is readily extendable to the study of more complex polymer-based nanocomposites or biology-related systems, such as dendrimer/drug encapsulation and membrane/particle assembly.
Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S
2007-07-09
A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. 
This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures
NASA Astrophysics Data System (ADS)
Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.
2017-10-01
Two-step approximate models of the chemical kinetics of detonation combustion of (i) a one-fuel mixture (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air) are developed; the models for multi-fuel mixtures are proposed for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. The advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of detonation waves in stoichiometric and fuel-rich methane-air mixtures and a stoichiometric methane-oxygen mixture. The dominant size of the detonation cell, determined in the calculations, is in good agreement with all known experimental data.
ERIC Educational Resources Information Center
Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk
2008-01-01
Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…
Connolly, John; Sebastià, Maria-Teresa; Kirwan, Laura; Finn, John Anthony; Llurba, Rosa; Suter, Matthias; Collins, Rosemary P; Porqueddu, Claudio; Helgadóttir, Áslaug; Baadshaug, Ole H; Bélanger, Gilles; Black, Alistair; Brophy, Caroline; Čop, Jure; Dalmannsdóttir, Sigridur; Delgado, Ignacio; Elgersma, Anjo; Fothergill, Michael; Frankow-Lindberg, Bodil E; Ghesquiere, An; Golinski, Piotr; Grieu, Philippe; Gustavsson, Anne-Maj; Höglind, Mats; Huguenin-Elie, Olivier; Jørgensen, Marit; Kadziuliene, Zydre; Lunnan, Tor; Nykanen-Kurki, Paivi; Ribas, Angela; Taube, Friedhelm; Thumm, Ulrich; De Vliegher, Alex; Lüscher, Andreas
2018-03-01
Grassland diversity can support sustainable intensification of grassland production through increased yields, reduced inputs and limited weed invasion. We report the effects of diversity on weed suppression from 3 years of a 31-site continental-scale field experiment. At each site, 15 grassland communities comprising four monocultures and 11 four-species mixtures based on a wide range of species' proportions were sown at two densities and managed by cutting. Forage species were selected according to two crossed functional traits, "method of nitrogen acquisition" and "pattern of temporal development". Across sites, years and sown densities, annual weed biomass in mixtures and monocultures was 0.5 and 2.0 t DM ha-1 (7% and 33% of total biomass respectively). Over 95% of mixtures had weed biomass lower than the average of monocultures, and in two-thirds of cases, lower than in the most suppressive monoculture (transgressive suppression). Suppression was significantly transgressive for 58% of site-years. Transgressive suppression by mixtures was maintained across years, independent of site productivity. Based on models, average weed biomass in mixture over the whole experiment was 52% less (95% confidence interval: 30%-75%) than in the most suppressive monoculture. Transgressive suppression of weed biomass was significant in each year across all mixtures and for each mixture. Weed biomass was consistently low across all mixtures and years and was in some cases significantly, but not substantially, different from that in the equiproportional mixture. The average variability (standard deviation) of annual weed biomass within a site was much lower for mixtures (0.42) than for monocultures (1.77). Synthesis and applications. Weed invasion can be diminished through a combination of forage species selected for complementarity and persistence traits in systems designed to reduce reliance on fertiliser nitrogen.
In this study, effects of diversity on weed suppression were consistently strong across mixtures varying widely in species' proportions and over time. The level of weed biomass did not vary greatly across mixtures varying widely in proportions of sown species. These diversity benefits in intensively managed grasslands are relevant for the sustainable intensification of agriculture and, importantly, are achievable through practical farm-scale actions.
Information Geometry for Landmark Shape Analysis: Unifying Shape Representation and Deformation
Peter, Adrian M.; Rangarajan, Anand
2010-01-01
Shape matching plays a prominent role in the comparison of similar structures. We present a unifying framework for shape matching that uses mixture models to couple both the shape representation and deformation. The theoretical foundation is drawn from information geometry wherein information matrices are used to establish intrinsic distances between parametric densities. When a parameterized probability density function is used to represent a landmark-based shape, the modes of deformation are automatically established through the information matrix of the density. We first show that given two shapes parameterized by Gaussian mixture models (GMMs), the well-known Fisher information matrix of the mixture model is also a Riemannian metric (actually, the Fisher-Rao Riemannian metric) and can therefore be used for computing shape geodesics. The Fisher-Rao metric has the advantage of being an intrinsic metric and invariant to reparameterization. The geodesic—computed using this metric—establishes an intrinsic deformation between the shapes, thus unifying both shape representation and deformation. A fundamental drawback of the Fisher-Rao metric is that it is not available in closed form for the GMM. Consequently, shape comparisons are computationally very expensive. To address this, we develop a new Riemannian metric based on generalized ϕ-entropy measures. In sharp contrast to the Fisher-Rao metric, the new metric is available in closed form. Geodesic computations using the new metric are considerably more efficient. We validate the performance and discriminative capabilities of these new information geometry-based metrics by pairwise matching of corpus callosum shapes. We also study the deformations of fish shapes that have various topological properties. 
A comprehensive comparative analysis is also provided using other landmark-based distances, including the Hausdorff distance, the Procrustes metric, landmark-based diffeomorphisms, and the bending energies of the thin-plate (TPS) and Wendland splines. PMID:19110497
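To make the Fisher information idea concrete with a minimal, hypothetical example (a single Gaussian rather than the paper's mixture case, for which no closed form exists), the Fisher information matrix in (μ, σ) coordinates is diag(1/σ², 2/σ²) and can be verified by Monte Carlo averaging of score outer products:

```python
import random

random.seed(7)

# Single Gaussian N(mu, sigma^2): the Fisher information matrix in
# (mu, sigma) coordinates is diag(1/sigma^2, 2/sigma^2) in closed form.
# For a Gaussian mixture no such closed form exists, which is the drawback
# the article addresses; here we verify only the single-Gaussian case.
mu, sigma = 0.5, 2.0

def score(x):
    # gradient of log N(x; mu, sigma^2) with respect to (mu, sigma)
    d_mu = (x - mu) / sigma**2
    d_sigma = ((x - mu) ** 2 - sigma**2) / sigma**3
    return d_mu, d_sigma

n = 200_000
fim = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(n):
    s = score(random.gauss(mu, sigma))
    for i in range(2):
        for j in range(2):
            fim[i][j] += s[i] * s[j] / n
# Expect fim ~ [[1/sigma^2, 0], [0, 2/sigma^2]] = [[0.25, 0], [0, 0.5]]
```

For a GMM the same Monte Carlo estimate is what makes geodesic computation expensive, motivating the paper's closed-form ϕ-entropy metric.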
Raman spectroscopy and imaging to detect contaminants for food safety applications
NASA Astrophysics Data System (ADS)
Chao, Kuanglin; Qin, Jianwei; Kim, Moon S.; Peng, Yankun; Chan, Diane; Cheng, Yu-Che
2013-05-01
This study presents the use of Raman chemical imaging for the screening of dry milk powder for the presence of chemical contaminants and Raman spectroscopy for quantitative assessment of chemical contaminants in liquid milk. For image-based screening, melamine was mixed into dry milk at concentrations (w/w) between 0.2% and 10.0%, and images of the mixtures were analyzed by a spectral information divergence algorithm. Ammonium sulfate, dicyandiamide, and urea were each separately mixed into dry milk at concentrations (w/w) between 0.5% and 5.0%, and an algorithm based on self-modeling mixture analysis was applied to these sample images. The contaminants were successfully detected and the spatial distribution of the contaminants within the sample mixtures was visualized using these algorithms. Liquid milk mixtures were prepared with melamine at concentrations between 0.04% and 0.30%, with ammonium sulfate and with urea at concentrations between 0.1% and 10.0%, and with dicyandiamide at concentrations between 0.1% and 4.0%. Analysis of the Raman spectra from the liquid mixtures showed linear relationships between the Raman intensities and the chemical concentrations. Although further studies are necessary, Raman chemical imaging and spectroscopy show promise for use in detecting and evaluating contaminants in food ingredients.
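The reported linear intensity-concentration relationship is the basis of such quantification. A minimal calibration sketch (all numbers below are fabricated for illustration, not measured Raman data) fits the line by ordinary least squares and inverts it to predict concentration:

```python
# Hypothetical calibration: fit intensity = a*concentration + b by
# ordinary least squares, then invert to predict concentration.
conc = [0.04, 0.10, 0.20, 0.30]          # % (w/w), illustrative
intensity = [12.1, 30.5, 60.2, 90.8]     # arbitrary units, made up

n = len(conc)
mx = sum(conc) / n
my = sum(intensity) / n
a = sum((x - mx) * (y - my) for x, y in zip(conc, intensity)) / \
    sum((x - mx) ** 2 for x in conc)
b = my - a * mx

def predict_conc(i):
    # invert the fitted line to estimate concentration from intensity
    return (i - b) / a
```

In practice the fit would be done per analyte and per Raman band, with validation against independent samples.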
Bleka, Øyvind; Storvik, Geir; Gill, Peter
2016-03-01
We have released a software named EuroForMix to analyze STR DNA profiles in a user-friendly graphical user interface. The software implements a model to explain the allelic peak height on a continuous scale in order to carry out weight-of-evidence calculations for profiles which could be from a mixture of contributors. Through a properly parameterized model we are able to do inference on mixture proportions, the peak height properties, stutter proportion and degradation. In addition, EuroForMix includes models for allele drop-out, allele drop-in and sub-population structure. EuroForMix supports two inference approaches for likelihood ratio calculations. The first approach uses maximum likelihood estimation of the unknown parameters. The second approach is Bayesian based which requires prior distributions to be specified for the parameters involved. The user may specify any number of known and unknown contributors in the model, however we find that there is a practical computing time limit which restricts the model to a maximum of four unknown contributors. EuroForMix is the first freely open source, continuous model (accommodating peak height, stutter, drop-in, drop-out, population substructure and degradation), to be reported in the literature. It therefore serves an important purpose to act as an unrestricted platform to compare different solutions that are available. The implementation of the continuous model used in the software showed close to identical results to the R-package DNAmixtures, which requires a HUGIN Expert license to be used. An additional feature in EuroForMix is the ability for the user to adapt the Bayesian inference framework by incorporating their own prior information. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Theory of anomalous critical-cluster content in high-pressure binary nucleation.
Kalikmanov, V I; Labetski, D G
2007-02-23
Nucleation experiments in binary (a-b) mixtures, when component a is supersaturated and b (carrier gas) is undersaturated, reveal that for some mixtures at high pressures the a content of the critical cluster dramatically decreases with pressure contrary to expectations based on classical nucleation theory. We show that this phenomenon is a manifestation of the dominant role of the unlike interactions at high pressures resulting in the negative partial molar volume of component a in the vapor phase beyond the compensation pressure. The analysis is based on the pressure nucleation theorem for multicomponent systems which is invariant to a nucleation model.
Hosoya, Haruo; Hyvärinen, Aapo
2017-07-01
Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.
Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis
2005-07-25
analysis. Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery. … spatial resolution permits different materials present in the area covered by a single pixel. The linear mixture model says that a pixel reflectance … in r. In the linear mixture model, r is considered as the linear mixture of m1, m2, …, mP as r = Mα + n (1), where n is included to account for …
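A minimal numerical sketch of the linear mixture model r = Mα + n, with made-up signatures and abundances: given the signature matrix M, the abundance vector α of a noisy pixel can be recovered by least squares (the report's constrained factorizations go further, but the forward model is the same):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 4-band sensor, P = 2 endmember materials.
M = np.array([[0.8, 0.1],
              [0.6, 0.2],
              [0.3, 0.7],
              [0.1, 0.9]])           # columns are signatures m1, m2
alpha_true = np.array([0.3, 0.7])    # abundance fractions (sum to 1)
noise = rng.normal(0.0, 1e-3, size=4)
r = M @ alpha_true + noise           # the linear mixture model

# Unconstrained least-squares inversion for the abundances
alpha_hat, *_ = np.linalg.lstsq(M, r, rcond=None)
```

Constrained approaches (nonnegativity, sum-to-one) replace this unconstrained inversion but keep the same r = Mα + n generative model.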
Microstructure and hydrogen bonding in water-acetonitrile mixtures.
Mountain, Raymond D
2010-12-16
The connection of hydrogen bonding between water and acetonitrile in determining the microheterogeneity of the liquid mixture is examined using NPT molecular dynamics simulations. Mixtures for six, rigid, three-site models for acetonitrile and one water model (SPC/E) were simulated to determine the amount of water-acetonitrile hydrogen bonding. Only one of the six acetonitrile models (TraPPE-UA) was able to reproduce both the liquid density and the experimental estimates of hydrogen bonding derived from Raman scattering of the CN stretch band or from NMR quadrupole relaxation measurements. A simple modification of the acetonitrile model parameters for the models that provided poor estimates produced hydrogen-bonding results consistent with experiments for two of the models. Of these, only one of the modified models also accurately determined the density of the mixtures. The self-diffusion coefficient of liquid acetonitrile provided a final winnowing of the modified model and the successful, unmodified model. The unmodified model is provisionally recommended for simulations of water-acetonitrile mixtures.
Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong
2018-01-10
This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.
NASA Astrophysics Data System (ADS)
Akasaka, Ryo
This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. Therefore, the model can be applied to mixtures for which experimental data are limited. Vapor-liquid equilibria (VLE) of the following seven mixtures have been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2-tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations of calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for the design and simulation of heat pumps and refrigeration systems using these mixtures as working fluids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelanti, Marica, E-mail: marica.pelanti@ensta-paristech.fr; Shyue, Keh-Ming, E-mail: shyue@ntu.edu.tw
2014-02-15
We model liquid–gas flows with cavitation by a variant of the six-equation single-velocity two-phase model with stiff mechanical relaxation of Saurel–Petitpas–Berry (Saurel et al., 2009) [9]. In our approach we employ phasic total energy equations instead of the phasic internal energy equations of the classical six-equation system. This alternative formulation allows us to easily design a simple numerical method that ensures consistency with mixture total energy conservation at the discrete level and agreement of the relaxed pressure at equilibrium with the correct mixture equation of state. Temperature and Gibbs free energy exchange terms are included in the equations as relaxation terms to model heat and mass transfer and hence liquid–vapor transition. The algorithm uses a high-resolution wave propagation method for the numerical approximation of the homogeneous hyperbolic portion of the model. In two dimensions a fully-discretized scheme based on a hybrid HLLC/Roe Riemann solver is employed. Thermo-chemical terms are handled numerically via a stiff relaxation solver that forces thermodynamic equilibrium at liquid–vapor interfaces under metastable conditions. We present numerical results of sample tests in one and two space dimensions that show the ability of the proposed model to describe cavitation mechanisms and evaporation wave dynamics.
A Mixture Modeling Framework for Differential Analysis of High-Throughput Data
Taslim, Cenny; Lin, Shili
2014-01-01
The inventions of microarray and next-generation sequencing technologies have revolutionized research in genomics; these platforms have generated massive amounts of data on gene expression, methylation, and protein-DNA interactions. A common theme among a number of biological problems using high-throughput technologies is differential analysis. Despite the common theme, different data types have their own unique features, creating a “moving target” scenario. As such, methods specifically designed for one data type may not lead to satisfactory results when applied to another data type. To meet this challenge, so that not only currently existing data types but also data from future problems, platforms, or experiments can be analyzed, we propose a mixture modeling framework that is flexible enough to automatically adapt to any moving target. More specifically, the approach considers several classes of mixture models and essentially provides a model-based procedure whose model is adaptive to the particular data being analyzed. We demonstrate the utility of the methodology by applying it to three types of real data: gene expression, methylation, and ChIP-seq. We also carried out simulations to gauge the performance and showed that the approach can be more efficient than any individual model without inflating type I error. PMID:25057284
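As a toy instance of a mixture-based differential call (the component means, variance, and weights below are invented for illustration, not fitted values from the paper), the posterior probability that a feature's score comes from the "differential" component follows from Bayes' rule:

```python
import math

# Hypothetical two-component setup: a null component for non-differential
# features and a shifted component for differential ones.
w0, w1 = 0.9, 0.1          # mixing weights (assumed known here)
mu0, mu1, s = 0.0, 3.0, 1.0

def normpdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def posterior_differential(z):
    # Posterior probability that a feature with score z is differential
    a = w1 * normpdf(z, mu1, s)
    b = w0 * normpdf(z, mu0, s)
    return a / (a + b)
```

The framework in the paper generalizes this by selecting among several component families so that the mixture adapts to whichever data type is analyzed.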
Performance of Raphidocelis subcapitata exposed to heavy metal mixtures.
Expósito, Nora; Kumar, Vikas; Sierra, Jordi; Schuhmacher, Marta; Giménez Papiol, Gemma
2017-12-01
Microalgae growth inhibition assays are candidates for reference ecotoxicological assays and are a fundamental part of the strategy to reduce the use of fish and other animal models in aquatic toxicology. In the present work, the performance of Raphidocelis subcapitata exposed to heavy metals following standardized growth inhibition assays was assessed in three different scenarios: 1) dilutions of single heavy metals, 2) an artificial mixture of heavy metals at levels similar to those found in natural rivers and, 3) natural samples containing known mixtures of contaminants (heavy metals). Chemical speciation of heavy metals was estimated with Eh-pH diagrams and the Visual MINTEQ software; heavy metal and free heavy metal ion concentrations were used as input data, together with microalgae growth inhibition, for the Dr. Fit software. The final goal was to assess the suitability of ecotoxicological tests based on the growth inhibition of microalgae cultures, and of the mathematical models based on these results, for regulatory and decision-making purposes. The toxicity of a given heavy metal is not determined by its chemical speciation alone; other chemical and biological interactions play an important role in the final toxicity. Raphidocelis subcapitata 48-h EC50 values for the tested heavy metals (especially Cu and Zn) were in agreement with previous studies when ion metal bioavailability was assumed to be 100%. Nevertheless, the calculated growth inhibition was not in agreement with the inhibition obtained on exposure to the artificial mixture of heavy metals or the natural sample. Interactions between heavy metal ions and the compounds of the culture media and/or the natural sample determine heavy metal bioavailability, and eventually their toxicity. More research is needed to face the challenge posed by pollutant mixtures as they occur in natural environments, and to make microalgae-based assays suitable for pollution management and regulatory purposes.
Copyright © 2017 Elsevier B.V. All rights reserved.
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-09-03
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm: the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables are estimated with a Kalman filter and a particle filter, respectively, which improves computational efficiency relative to using a particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture components based on the observed data. This improves the estimation accuracy of the clock offset and skew, thereby achieving time synchronization. The time synchronization performance of the algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms.
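The idea of letting a Dirichlet process mixture choose the number of Gaussian components from observed delay data can be sketched with scikit-learn's truncated variational DPM. This is a stand-in, not the paper's inference scheme, and the two delay regimes below are invented for illustration:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# synthetic non-deterministic delays drawn from two latent regimes (hypothetical values)
delays = np.concatenate([rng.normal(2.0, 0.2, 400),
                         rng.normal(5.0, 0.5, 200)]).reshape(-1, 1)

# truncated Dirichlet-process mixture: start with 10 candidate components;
# the stick-breaking prior shrinks the weights of unneeded ones towards zero
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(delays)

# count the components that carry real probability mass
effective = int(np.sum(dpm.weights_ > 0.05))
```

With well-separated delay regimes, `effective` typically recovers the true number of modes without it being specified in advance, which is the behavior the abstract attributes to the DPM delay model.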
Bromaghin, Jeffrey F.; Evenson, D.F.; McLain, T.H.; Flannery, B.G.
2011-01-01
Fecundity is a vital population characteristic that is directly linked to the productivity of fish populations. Historic data from Yukon River (Alaska) Chinook salmon Oncorhynchus tshawytscha suggest that length‐adjusted fecundity differs among populations within the drainage and either is temporally variable or has declined. Yukon River Chinook salmon have been harvested in large‐mesh gill‐net fisheries for decades, and a decline in fecundity was considered a potential evolutionary response to size‐selective exploitation. The implications for fishery conservation and management led us to further investigate the fecundity of Yukon River Chinook salmon populations. Matched observations of fecundity, length, and genotype were collected from a sample of adult females captured from the multipopulation spawning migration near the mouth of the Yukon River in 2008. These data were modeled by using a new mixture model, which was developed by extending the conditional maximum likelihood mixture model that is commonly used to estimate the composition of multipopulation mixtures based on genetic data. The new model facilitates maximum likelihood estimation of stock‐specific fecundity parameters without first using individual assignment to a putative population of origin, thus avoiding potential biases caused by assignment error. The hypothesis that fecundity of Chinook salmon has declined was not supported; this result implies that fecundity exhibits high interannual variability. However, length‐adjusted fecundity estimates decreased as migratory distance increased, and fecundity was more strongly dependent on fish size for populations spawning in the middle and upper portions of the drainage. These findings provide insights into potential constraints on reproductive investment imposed by long migrations and warrant consideration in fisheries management and conservation. 
The new mixture model extends the utility of genetic markers to new applications and can be easily adapted to study any observable trait or condition that may vary among populations.
Different Approaches to Covariate Inclusion in the Mixture Rasch Model
ERIC Educational Resources Information Center
Li, Tongyun; Jiao, Hong; Macready, George B.
2016-01-01
The present study investigates different approaches to adding covariates and their impact on fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
NASA Astrophysics Data System (ADS)
Bhowmik, S.; Stoop, M.; Krishnamurthy, R.
2017-07-01
Based on the reality of "prebiotic clutter," we herein present an alternate model for the pre-RNA to RNA transition, which starts not with a homogeneous-backbone system, but rather with mixtures of heterogeneous-backbone, chimeric "pre-RNA/RNA."
A Volume-Fraction Based Two-Phase Constitutive Model for Blood
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Rui; Massoudi, Mehrdad; Hund, S.J.
2008-06-01
Mechanically-induced blood trauma such as hemolysis and thrombosis often occurs at microscopic channels, steps, and crevices within cardiovascular devices. A predictive mathematical model based on a broad understanding of hemodynamics at the micro scale is needed to mitigate these effects, and this is the motivation of this research project. Platelet transport and surface deposition are important in thrombosis. Microfluidic experiments have previously revealed a significant impact of red blood cell (RBC)-plasma phase separation on platelet transport [5], whereby the local platelet concentration can be enhanced by a non-uniform distribution of RBCs in blood flowing through a capillary tube and sudden expansion. However, current platelet deposition models have either ignored RBCs in the fluid entirely by assuming a zero sample hematocrit or treated them as evenly distributed. As a result, those models often underestimated platelet advection and deposition to certain areas [2]. The current study aims to develop a two-phase blood constitutive model that can predict phase separation in an RBC-plasma mixture at the micro scale. The model is based on the theory of interacting continua, i.e., mixture theory. The volume fraction is treated as a field variable in this model, which allows the prediction of concentration as well as velocity profiles of both the RBC and plasma phases. The results will be used as input to subsequent platelet deposition models.
Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors
ERIC Educational Resources Information Center
Guerra-Peña, Kiero; Steinley, Douglas
2016-01-01
Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…
Statistical Modeling of Single Target Cell Encapsulation
Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan
2011-01-01
High-throughput drop-on-demand systems for the separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types are an emerging method in biotechnology with broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes and estimation of the relevant parameters, and enable reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with a high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems. PMID:21814548
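The randomness of droplet encapsulation described above is commonly captured by Poisson statistics. A minimal sketch of that reasoning (my own illustration, not the authors' exact model, with made-up loading values) is:

```python
import math

def p_single_target(lam, target_frac):
    """Probability a droplet holds exactly one cell and that cell is a target.

    Assumes cell arrivals are Poisson with mean `lam` cells per droplet and a
    fraction `target_frac` of cells are targets, so (by Poisson thinning) the
    target and non-target counts are independent Poissons.
    """
    lam_t = target_frac * lam          # mean target cells per droplet
    lam_o = (1 - target_frac) * lam    # mean non-target cells per droplet
    # exactly one target AND zero non-targets
    return lam_t * math.exp(-lam_t) * math.exp(-lam_o)

def p_exactly_one_cell(lam):
    return lam * math.exp(-lam)  # Poisson pmf at k = 1

# the single-cell yield lam * e^-lam is maximised at lam = 1 (about 36.8%),
# which a coarse grid search over the loading parameter confirms
best = max((p_exactly_one_cell(l / 100), l / 100) for l in range(1, 301))
```

This makes concrete why dilute loading trades throughput for single-cell purity: at low `lam` most droplets are empty, while at high `lam` multi-cell droplets dominate.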
Conditional Density Estimation with HMM Based Support Vector Machines
NASA Astrophysics Data System (ADS)
Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang
Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry a latent assumption that the probability density is Gaussian, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but also to classification. We applied this model to denoising ECG data. The proposed method has the potential to be applied to other time series, such as stock market return prediction.
Risk assessment of occupational exposure to heavy metal mixtures: a study protocol.
Omrane, Fatma; Gargouri, Imed; Khadhraoui, Moncef; Elleuch, Boubaker; Zmirou-Navier, Denis
2018-03-05
Sfax is a very industrialized city located in the southern region of Tunisia where heavy metals (HMs) pollution is now an established matter of fact. The health of its residents, mainly those engaged in industrial metals-based activities, is under threat. Indeed, such workers are exposed to a variety of HM mixtures, and this exposure has cumulative properties. Whereas current HM exposure assessment is mainly carried out using direct air monitoring approaches, the present study aims to assess health risks associated with chronic occupational exposure to HMs in industry, using a modeling approach that will be validated later on. To this end, two questionnaires were used. The first was an identification/descriptive questionnaire aimed at identifying, for each company: the specific activities, materials used, manufactured products, and number of employees exposed. The second related to the job-task of the exposed persons, workplace characteristics (dimensions, ventilation, etc.), type of metals, and emission configuration in space and time. Indoor air HM concentrations were predicted based on the mathematical models generally used to estimate occupational exposure to volatile substances (such as solvents). Later on, and in order to validate the adopted model, air monitoring will be carried out, as well as some biological monitoring aimed at assessing HM excretion in the urine of workers volunteering to participate. Lastly, an interaction-based hazard index (HI_int) and a decision support tool will be used to predict the cumulative risk assessment for HM mixtures. One hundred sixty-one persons working in the 5 participating companies have been identified. Of these, 110 are directly engaged with HMs in the course of the manufacturing process. This model-based prediction of occupational exposure represents an alternative tool that is both time-saving and cost-effective in comparison with direct air monitoring approaches.
Following validation of the different models according to job processes, via comparison with direct measurements and exploration of correlations with biological monitoring, these estimates will allow a cumulative risk characterization.
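The additive core of such a cumulative screen is the hazard index, the sum of per-substance hazard quotients. A minimal sketch follows; the concentrations and exposure limits are hypothetical placeholders, and the study's interaction-based index HI_int adds pairwise interaction weightings on top of this additive baseline:

```python
def hazard_index(exposures, limits):
    """Additive hazard index: sum of hazard quotients C_i / OEL_i.

    `exposures` and `limits` are dicts keyed by metal, in the same units
    (e.g. mg/m3). HI > 1 flags a potential cumulative risk even when every
    individual hazard quotient is below 1.
    """
    return sum(exposures[m] / limits[m] for m in exposures)

# hypothetical air concentrations and limits, for illustration only
exposures = {"Pb": 0.04, "Cd": 0.002, "Ni": 0.1}   # mg/m3 (made up)
limits    = {"Pb": 0.05, "Cd": 0.005, "Ni": 1.0}   # mg/m3 (made up)

hi = hazard_index(exposures, limits)
needs_action = hi > 1.0   # here 0.8 + 0.4 + 0.1 = 1.3, so the mixture is flagged
```

Note that each metal on its own is below its limit; only the mixture-level index exceeds 1, which is exactly the cumulative effect the protocol targets.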
Detailed finite element method modeling of evaporating multi-component droplets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diddens, Christian, E-mail: C.Diddens@tue.nl
The evaporation of sessile multi-component droplets is modeled with an axisymmetric finite element method. The model comprises the coupled processes of mixture evaporation, multi-component flow with composition-dependent fluid properties, and thermal effects. Based on representative examples of water–glycerol and water–ethanol droplets, regular and chaotic examples of solutal Marangoni flows are discussed. Furthermore, the relevance of the substrate thickness for the evaporative cooling of volatile binary mixture droplets is pointed out. It is shown how the evaporation of the more volatile component can drastically decrease the interface temperature, so that ambient vapor of the less volatile component condenses on the droplet. Finally, results of this model are compared with corresponding results of a lubrication theory model, showing that the application of lubrication theory can cause considerable errors even for moderate contact angles of 40°.
Style consistent classification of isogenous patterns.
Sarkar, Prateek; Nagy, George
2005-01-01
In many applications of pattern recognition, patterns appear together in groups (fields) that have a common origin. For example, a printed word is usually a field of character patterns printed in the same font. A common origin induces consistency of style in features measured on patterns. The features of patterns co-occurring in a field are statistically dependent because they share the same, albeit unknown, style. Style constrained classifiers achieve higher classification accuracy by modeling such dependence among patterns in a field. Effects of style consistency on the distributions of field-features (concatenation of pattern features) can be modeled by hierarchical mixtures. Each field derives from a mixture of styles, while, within a field, a pattern derives from a class-style conditional mixture of Gaussians. Based on this model, an optimal style constrained classifier processes entire fields of patterns rendered in a consistent but unknown style. In a laboratory experiment, style constrained classification reduced errors on fields of printed digits by nearly 25 percent over singlet classifiers. Longer fields favor our classification method because they furnish more information about the underlying style.
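The gain from classifying whole fields rather than isolated patterns can be seen in a toy version of the hierarchical mixture described above. The one-dimensional features, the two "styles" (standing in for fonts), and the class means below are all invented for illustration:

```python
import math
from itertools import product

SIGMA = 0.5
# class-conditional feature means per style (hypothetical "fonts"): style -> class -> mean
MEANS = {"A": {0: 0.0, 1: 2.0}, "B": {0: 2.0, 1: 4.0}}

def npdf(x, mu, sigma=SIGMA):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def field_classify(xs):
    """Jointly label a field: marginalise the shared style, maximise over labels."""
    best, best_labels = -1.0, None
    for labels in product([0, 1], repeat=len(xs)):
        lik = sum(0.5 * math.prod(npdf(x, MEANS[s][c]) for x, c in zip(xs, labels))
                  for s in MEANS)
        if lik > best:
            best, best_labels = lik, labels
    return best_labels

# A lone pattern at x = 2.0 is ambiguous: class 1 in style A and class 0 in
# style B give identical marginal densities, so a singlet classifier ties.
tie = abs(sum(0.5 * npdf(2.0, MEANS[s][0]) for s in MEANS)
          - sum(0.5 * npdf(2.0, MEANS[s][1]) for s in MEANS)) < 1e-12

# In a field with x = 0.0 (clearly class 0 in style A), the shared style
# disambiguates the second pattern to class 1.
labels = field_classify([0.0, 2.0])
```

The first pattern reveals the field's style, and that shared context resolves the otherwise ambiguous second pattern, which is the mechanism behind the reported error reduction on longer fields.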
Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression
NASA Astrophysics Data System (ADS)
Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli
2018-06-01
Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
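The GMR step, deriving p(y|x) from a joint Gaussian mixture, can be written out explicitly for a two-component, two-dimensional case. The parameters here are fixed by hand rather than learned with Baum-Welch, purely to show the conditioning algebra:

```python
import math

# hand-picked joint mixture over (x, y): one tuple per component
COMPONENTS = [
    # (pi_k, mu_x, mu_y, var_x, cov_xy, var_y)
    (0.5, -1.0, -2.0, 0.25, 0.2, 0.5),
    (0.5,  1.0,  2.0, 0.25, 0.2, 0.5),
]

def npdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmr_mean(x):
    """Conditional mean E[y | x] under the joint Gaussian mixture.

    Each component contributes its Gaussian conditional mean
    mu_y + (cov_xy / var_x) * (x - mu_x), weighted by the component's
    responsibility for x under the marginal p(x).
    """
    resp = [pi * npdf(x, mx, vx) for (pi, mx, my, vx, cxy, vy) in COMPONENTS]
    total = sum(resp)
    cond = [my + (cxy / vx) * (x - mx) for (pi, mx, my, vx, cxy, vy) in COMPONENTS]
    return sum(r * c for r, c in zip(resp, cond)) / total

y_hat = gmr_mean(1.0)   # dominated by the second component, so close to its mu_y
```

The full GMR predictive distribution is itself a mixture (one conditional Gaussian per component), which is what lets the HMM-GMR combination represent multimodal and heteroscedastic streamflow rather than a single regression line.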
Selby-Pham, Sophie N B; Howell, Kate S; Dunshea, Frank R; Ludbey, Joel; Lutz, Adrian; Bennett, Louise
2018-04-15
A diet rich in phytochemicals confers benefits for health by reducing the risk of chronic diseases via regulation of oxidative stress and inflammation (OSI). For optimal protective bio-efficacy, the time required for phytochemicals and their metabolites to reach maximal plasma concentrations (T_max) should be synchronised with the time of increased OSI. A statistical model has been reported to predict the T_max of individual phytochemicals based on molecular mass and lipophilicity. We report the application of the model for predicting the absorption profile of an uncharacterised phytochemical mixture, herein referred to as the 'functional fingerprint'. First, chemical profiles of phytochemical extracts were acquired using liquid chromatography mass spectrometry (LC-MS); then the molecular features of the respective components were used to predict their plasma absorption maxima, based on molecular mass and lipophilicity. This method of 'functional fingerprinting' of plant extracts represents a novel tool for understanding and optimising the health efficacy of plant extracts. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Osipov, Viatcheslav; Muratov, Cyrill; Hafiychuk, Halyna; Ponizovskya-Devine, Ekaterina; Smelyanskiy, Vadim; Mathias, Donovan; Lawrence, Scott; Werkheiser, Mary
2011-01-01
We analyze the data of purposeful rupture experiments with LOx and LH2 tanks, the Hydrogen-Oxygen Vertical Impact (HOVI) tests that were performed to clarify the ignition mechanisms and the explosive power of cryogenic H2/Ox mixtures under different conditions, and to elucidate the puzzling source of the initial formation of flames near the intertank section during the Challenger disaster. We carry out a physics-based analysis of general explosion scenarios for cryogenic gaseous H2/Ox mixtures and determine their realizability conditions, using well-established simplified models from detonation and deflagration theory. We study the features of aerosol H2/Ox mixture combustion and show, in particular, that aerosols intensify the deflagration flames and can induce detonation for any ignition mechanism. We propose a cavitation-induced mechanism of self-ignition of cryogenic H2/Ox mixtures that may be realized when gaseous H2 and Ox flows are mixed with a liquid Ox turbulent stream, as occurred in all HOVI tests. We present an overview of the HOVI tests to draw conclusions on the risk of strong explosions in possible liquid rocket incidents and provide a semi-quantitative interpretation of the HOVI data based on aerosol combustion. We uncover the most dangerous situations and discuss the foreseeable risks which can arise in space missions and lead to tragic outcomes. Our analysis relates only to unconfined mixtures that are likely to arise as a result of liquid propellant space vehicle incidents.
NASA Astrophysics Data System (ADS)
Wei, Haiqiao; Zhao, Wanhui; Zhou, Lei; Chen, Ceyuan; Shu, Gequn
2018-03-01
Large eddy simulation coupled with the linear eddy model (LEM) is employed for the simulation of n-heptane spray flames to investigate the low-temperature ignition and combustion process in a constant-volume combustion vessel under diesel-engine relevant conditions. Parametric studies are performed to give a comprehensive understanding of the ignition processes. A non-reacting case is first carried out to validate the present model by comparing the predicted results with the experimental data from the Engine Combustion Network (ECN). Good agreement is observed in terms of liquid and vapour penetration length, as well as the mixture fraction distributions at different times and different axial locations. For the reacting cases, a flame index is introduced to distinguish between premixed and non-premixed combustion. A reaction region (RR) parameter is used to investigate the ignition and combustion characteristics and to distinguish the different combustion stages. Results show that a two-stage combustion process can be identified in spray flames, and the different ignition positions in the mixture fraction versus RR space are well described at low and high initial ambient temperatures. At an initial temperature of 850 K, the first-stage ignition is initiated in the fuel-lean region, followed by reactions in fuel-rich regions; the high-temperature reaction then occurs mainly where the mixture concentration is near the stoichiometric mixture fraction. At an initial temperature of 1000 K, by contrast, the first-stage ignition occurs in the fuel-rich region first and then moves towards even richer regions, after which the high-temperature reactions move back to the stoichiometric mixture fraction region. For all of the initial temperatures considered, high-temperature ignition kernels are initiated in regions richer than the stoichiometric mixture fraction. As the initial ambient temperature increases, the high-temperature ignition kernels move towards richer mixture regions. After the spray flame becomes quasi-steady, most of the heat is released in the stoichiometric mixture fraction regions. In addition, combustion mode analysis based on key intermediate species illustrates three-mode combustion processes in diesel spray flames.
Pumpable/injectable phosphate-bonded ceramics
Singh, Dileep; Wagh, Arun S.; Perry, Lamar; Jeong, Seung-Young
2001-01-01
A pumpable ceramic composition is provided comprising an inorganic oxide, potassium phosphate, and an oxide coating material. Also provided is a method for preparing pumpable ceramic-based waste forms comprising selecting inorganic oxides based on solubility, surface area and morphology criteria; mixing the selected oxides with phosphate solution and waste to form a first mixture; combining an additive to the first mixture to create a second mixture; adding water to the second mixture to create a reactive mixture; homogenizing the reactive mixture; and allowing the reactive mixture to cure.
Keromnes, Alan; Metcalfe, Wayne K.; Heufer, Karl A.; ...
2013-03-12
The oxidation of syngas mixtures was investigated experimentally and simulated with an updated chemical kinetic model. Ignition delay times for H2/CO/O2/N2/Ar mixtures have been measured using two rapid compression machines and shock tubes at pressures from 1 to 70 bar, over a temperature range of 914–2220 K, and at equivalence ratios from 0.1 to 4.0. Results show a strong dependence of ignition times on temperature and pressure at the end of compression; ignition delays decrease with increasing temperature, pressure, and equivalence ratio. The reactivity of the syngas mixtures was found to be governed by hydrogen chemistry for CO concentrations lower than 50% in the fuel mixture. For higher CO concentrations, an inhibiting effect of CO was observed. Flame speeds were measured in helium for syngas mixtures with a high CO content and at elevated pressures of 5 and 10 atm using the spherically expanding flame method. A detailed chemical kinetic mechanism for hydrogen and H2/CO (syngas) mixtures has been updated; rate constants have been adjusted to reflect new experimental information obtained at high pressures and new rate constant values recently published in the literature. Experimental results for ignition delay times and flame speeds have been compared with predictions using our newly revised chemical kinetic mechanism, and good agreement was observed. In the mechanism validation, particular emphasis is placed on predicting experimental data at high pressures (up to 70 bar) and intermediate- to high-temperature conditions, which are particularly important for applications in internal combustion engines and gas turbines. The reaction sequence H2 + HȮ2 ↔ Ḣ + H2O2, followed by H2O2(+M) ↔ ȮH + ȮH(+M), was found to play a key role in hydrogen ignition under high-pressure and intermediate-temperature conditions. The rate constant for H2 + HȮ2 showed strong sensitivity to high-pressure ignition times and has considerable uncertainty, based on literature values. As a result, a rate constant for this reaction is recommended based on available literature values and on our mechanism validation.
Sun, Haoyu; Pan, Yongzheng; Gu, Yue; Lin, Zhifen
2018-07-15
Cross-phenomena, in which the concentration-response curve (CRC) for a mixture crosses the CRC for the reference model, have been identified in many studies and are expressed as a heterogeneous pattern of joint toxic action. However, mechanistic explanations of the cross-phenomenon have thus far been extremely insufficient. In this study, a time-dependent cross-phenomenon was observed: the cross-concentration range between the CRC for the mixture of sulfamethoxypyridazine (SMP) and (Z-)-4-Bromo-5-(bromomethylene)-2(5H)-furanone (C30) acting on the bioluminescence of Aliivibrio fischeri (A. fischeri) and the CRC for the independent action model with 95% confidence bands shifted over time from the low-concentration to the higher-concentration region, indicating that the joint toxic action of the mixture changed with increases in both concentration and time. By investigating the time-dependent hormetic effects of SMP and C30 (measuring the expression of protein mRNA, simulating the bioluminescent reaction, and analyzing the toxic action), the underlying mechanism was found to be as follows: SMP and C30 acted on the quorum sensing (QS) system of A. fischeri, which induced low-concentration stimulatory effects and high-concentration inhibitory effects. In the low-concentration region, the stimulatory effects of SMP and C30 made the mixture produce a synergistic stimulation of the bioluminescence; thus, the joint toxic action exhibited antagonism. In the high-concentration region, the inhibitory effects of SMP and C30 in the mixture caused a double block in the loop circuit of the QS system; thus, the joint toxic action exhibited synergism. With the increase of time, these stimulatory and inhibitory effects of SMP and C30 were changed by the variation of the QS system at different growth phases, resulting in the time-dependent cross-phenomenon. This study proposes a QS-based mechanism for the time-dependent cross-phenomenon, which may provide new insight into mechanistic investigations of time-dependent cross-phenomena and benefit the environmental risk assessment of mixtures. Copyright © 2018 Elsevier B.V. All rights reserved.
Toribo, S.G.; Gray, B.R.; Liang, S.
2011-01-01
The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
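The marginal likelihood at the heart of the N-mixture model, summing the latent site abundance N out of replicated binomial counts, can be sketched with a maximum-likelihood grid search. The Bayesian hierarchical fit discussed above would add priors and MCMC on top of this same likelihood; the simulation settings below are illustrative:

```python
import math, random
from functools import lru_cache

random.seed(7)

def rpois(lam):
    # Knuth's Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# simulate replicated counts: N_i ~ Poisson(5), c_ij ~ Binomial(N_i, p=0.5)
SITES, REPS, NMAX = 100, 4, 30
counts = []
for _ in range(SITES):
    N = rpois(5.0)
    counts.append(tuple(sum(random.random() < 0.5 for _ in range(N))
                        for _ in range(REPS)))

@lru_cache(maxsize=None)
def binom_pmf(k, n, p):
    if k > n:
        return 0.0
    return math.exp(math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                    + k * math.log(p) + (n - k) * math.log(1 - p))

def loglik(lam, p):
    # marginal likelihood: the latent abundance N is summed out at each site
    prior = [math.exp(-lam + n * math.log(lam) - math.lgamma(n + 1))
             for n in range(NMAX + 1)]
    ll = 0.0
    for site in counts:
        s = sum(prior[n] * math.prod(binom_pmf(c, n, p) for c in site)
                for n in range(NMAX + 1))
        ll += math.log(s + 1e-300)
    return ll

# crude grid MLE for (lambda, p)
_, lam_hat, p_hat = max((loglik(l / 2, q / 100), l / 2, q / 100)
                        for l in range(4, 21) for q in range(20, 85, 5))
```

The product lam_hat * p_hat (the expected count) is estimated far more precisely than either factor alone, which illustrates the abundance-detection trade-off behind the sensitivity to small detection probabilities reported in the abstract.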
In situ gas analysis for high pressure applications using property measurements
NASA Astrophysics Data System (ADS)
Moeller, J.; Span, R.; Fieback, T.
2013-10-01
As the production, distribution, and storage of renewable-energy-based fuels are usually performed under high pressures, and as there is a lack of in situ high-pressure gas analysis instruments on the market, the aim of this work was to develop a method for in situ high-pressure gas analysis of biogas and hydrogen-containing gas mixtures. The analysis is based on in situ measurements of optical, thermophysical, and electromagnetic properties of gas mixtures with newly developed high-pressure sensors. This article describes the calculation of compositions from the measured properties, which is carried out iteratively by using highly accurate equations of state for gas mixtures. The validation of the method consisted of the generation and measurement of several mixtures, of which three are presented herein: a first mixture of 64.9 mol. % methane, 17.1 mol. % carbon dioxide, 9 mol. % helium, and 9 mol. % ethane at 323 K and 423 K in a pressure range from 2.5 MPa to 17 MPa; a second mixture of 93.0 mol. % methane, 4.0 mol. % propane, 2.0 mol. % carbon dioxide, and 1.0 mol. % nitrogen at 303 K, 313 K, and 323 K in a pressure range from 1.2 MPa to 3 MPa; and a third mixture of 64.9 mol. % methane, 30.1 mol. % carbon dioxide, and 5.0 mol. % nitrogen at 303 K, 313 K, and 323 K in a pressure range from 2.5 MPa to 4 MPa. The analysis of the tested gas mixtures showed that, with measured density, velocity of sound, and relative permittivity, the composition can be determined with deviations below 1.9 mol. %, in most cases even below 1 mol. %. Comparing the calculated compositions with the generated gas mixtures, the deviations were in the range of the combined uncertainty of the measurements and property models.
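The core inversion, recovering composition from a measured bulk property, can be illustrated with an ideal-gas toy for a binary CH4/CO2 mixture. The actual method iterates over highly accurate mixture equations of state (e.g. GERG-type models); the ideal-gas law below is only a low-pressure stand-in for the concept:

```python
R = 8.314462618                        # J/(mol K), molar gas constant
M_CH4, M_CO2 = 16.043e-3, 44.010e-3    # kg/mol

def density_ideal(x_ch4, p, T):
    """Ideal-gas density of a CH4/CO2 mixture at pressure p [Pa], temperature T [K]."""
    M_mix = x_ch4 * M_CH4 + (1 - x_ch4) * M_CO2
    return p * M_mix / (R * T)

def composition_from_density(rho, p, T):
    """Invert a measured density for the CH4 mole fraction (binary mixture)."""
    M_mix = rho * R * T / p
    return (M_mix - M_CO2) / (M_CH4 - M_CO2)

# round trip: synthesise a "measurement" at x = 0.65, then recover x from it
rho = density_ideal(0.65, 1.0e5, 300.0)
x_recovered = composition_from_density(rho, 1.0e5, 300.0)
```

With more than two components, one measured property no longer pins down the composition, which is why the method combines density, speed of sound, and relative permittivity and solves iteratively.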
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher-order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis, where all sources are assumed non-Gaussian. PMID:25811988
Process Dissociation and Mixture Signal Detection Theory
ERIC Educational Resources Information Center
DeCarlo, Lawrence T.
2008-01-01
The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely…
ERIC Educational Resources Information Center
Li, Ming; Harring, Jeffrey R.
2017-01-01
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis may not only improve the ability of the mixture model to clearly differentiate between subjects but also make interpretation of latent group membership more…
ERIC Educational Resources Information Center
de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.
2010-01-01
We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…
Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images
NASA Astrophysics Data System (ADS)
Yao, Shoukui; Qin, Xiaojuan
2018-02-01
Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable, and how to recognize ships with fuzzy features remains an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of ship target images are calculated, and the GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is assigned to the trained GMMs for recognition. Because of the scale, rotation, and translation invariance of Hu moments and the powerful feature-space description capability of GMMs, the GMMs-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large set of simulated images show that our approach is effective in distinguishing different ship types and obtains satisfactory ship recognition performance.
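The pipeline, moment-invariant features followed by per-class mixture likelihoods, can be sketched on synthetic binary silhouettes. For brevity each class model below is a one-component GMM (a plain Gaussian), and axis-aligned rectangles stand in for ship silhouettes; only the first Hu invariant is used:

```python
import math

def hu1(img):
    """First Hu invariant h1 = eta20 + eta02 of a binary image (list of rows)."""
    pts = [(x, y) for y, row in enumerate(img) for x, v in enumerate(row) if v]
    m00 = len(pts)
    xbar = sum(x for x, _ in pts) / m00
    ybar = sum(y for _, y in pts) / m00
    mu20 = sum((x - xbar) ** 2 for x, _ in pts)
    mu02 = sum((y - ybar) ** 2 for _, y in pts)
    return (mu20 + mu02) / m00 ** 2   # eta_pq = mu_pq / m00^2 for p + q = 2

def rect(w, h):
    return [[1] * w for _ in range(h)]

# two "ship types" at several scales: compact (aspect 1) vs elongated (aspect 4);
# h1 is scale- and translation-invariant, so each class clusters tightly
compact   = [hu1(rect(a, a)) for a in (8, 10, 12, 14)]
elongated = [hu1(rect(4 * a, a)) for a in (4, 5, 6, 7)]

def gauss_fit(xs):
    # one-component GMM per class: mean and (floored) variance
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs) + 1e-8
    return mu, var

def gauss_loglik(x, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

models = {"compact": gauss_fit(compact), "elongated": gauss_fit(elongated)}

def classify(feature):
    # assign the feature to the class model with the highest likelihood
    return max(models, key=lambda c: gauss_loglik(feature, *models[c]))

correct = sum(classify(f) == "compact" for f in compact) \
        + sum(classify(f) == "elongated" for f in elongated)
```

Elongated shapes always yield a larger h1 than compact ones regardless of scale, which is the invariance property the abstract relies on; a real system would use all seven Hu moments and multi-component GMMs.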
Chiappini, Massimiliano; Eiser, Erika; Sciortino, Francesco
2017-01-01
A new gel-forming colloidal system based on a binary mixture of fd-viruses and gold nanoparticles functionalized with complementary DNA single strands has recently been introduced. Upon quenching below the DNA melt temperature, such a system forms a highly porous gel state that may be developed into a new functional material of tunable porosity. In order to shed light on the gelation mechanism, we introduce a model closely mimicking the experimental one and explore its equilibrium phase diagram via Monte Carlo simulations. Specifically, we model the system as a binary mixture of hard rods and hard spheres mutually interacting via a short-range square-well attractive potential. In the experimental conditions, we find evidence of a phase separation occurring either via nucleation-and-growth or via spinodal decomposition. The spinodal decomposition leads to the formation of small clusters of bonded rods and spheres whose further diffusion and aggregation leads to the formation of a percolating network in the system. Our results are consistent with the hypothesis that the mixture of DNA-coated fd-viruses and gold nanoparticles undergoes a non-equilibrium gelation via an arrested spinodal decomposition mechanism.
de Oliveira, Tiago E.; Netz, Paulo A.; Kremer, Kurt; ...
2016-05-03
We present a coarse-graining strategy that we test for aqueous mixtures. The method uses pair-wise cumulative coordination as a target function within an iterative Boltzmann inversion (IBI)-like protocol. We name this method coordination iterative Boltzmann inversion (C–IBI). While the underlying coarse-grained model is still structure based and, thus, preserves pair-wise solution structure, our method also reproduces the solvation thermodynamics of binary and ternary mixtures. In addition, we observe much faster convergence with C–IBI than with IBI. To validate the robustness of the method, we apply C–IBI to test cases of solvation thermodynamics of aqueous urea and of triglycine in aqueous urea.
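The IBI-style update at the heart of such protocols can be sketched in a few lines: the pair potential is corrected by kT·ln(g_current/g_target) until the model reproduces the target structure. In the sketch below, the simulation step is replaced by a toy low-density closure g(r) = exp(-U/kT), so this illustrates only the generic update rule, not C–IBI's coordination-based target function.

```python
import numpy as np

kT = 1.0
r = np.linspace(0.5, 3.0, 100)
g_target = 1.0 + 0.5 * np.exp(-(r - 1.0) ** 2 / 0.1)  # toy target RDF

def model_rdf(U):
    # stand-in for running a simulation: low-density closure g(r) = exp(-U/kT)
    return np.exp(-U / kT)

U = np.zeros_like(r)  # initial guess: no interaction
for _ in range(10):
    g = model_rdf(U)
    U = U + kT * np.log(g / g_target)  # iterative Boltzmann inversion update

print(np.max(np.abs(model_rdf(U) - g_target)))  # residual structure mismatch
```

In a real workflow each iteration requires a fresh coarse-grained simulation; C–IBI replaces g(r) in the target with the cumulative coordination number to improve thermodynamic consistency.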
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimations. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the usage of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal
2017-04-01
The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signature of a biotracer as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on linear mixing assumptions for the CSSI signatures of sources in the sediment, without accounting for the potential effects of source biotracer concentration. Here we evaluated the effect of FA concentration in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land use sources. Soil samples from the land use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using MixSIAR, a Bayesian isotopic mixing model, both with and without concentration dependence. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions, based on aggregated FA concentrations of the sources, biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the effect of source FA concentration on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture.
The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source remains observable even after equilibrium. Therefore, inclusion of the FA concentrations of the sources in the IMM formulation should be standard procedure for accurate estimation of source contributions. The post-model correction approach that currently dominates CSSI fingerprinting causes bias, especially if the FA concentrations of the sources differ substantially.
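The difference between concentration-independent and concentration-dependent mixing can be written in one line: the mixture signature is a tracer-concentration-weighted average of the source signatures. The numbers below are hypothetical, chosen only to show how a tracer-rich source pulls the mixture signature toward itself.

```python
def mixture_signature(fractions, concentrations, deltas):
    """Concentration-dependent mixing:
    delta_mix = sum(f_i * C_i * d_i) / sum(f_i * C_i)."""
    num = sum(f * c * d for f, c, d in zip(fractions, concentrations, deltas))
    den = sum(f * c for f, c in zip(fractions, concentrations))
    return num / den

# two equal-mass sources; source 2 carries 3x the FA concentration
print(mixture_signature([0.5, 0.5], [1.0, 3.0], [-30.0, -26.0]))  # pulled toward source 2
# equal concentrations recover the concentration-independent (plain linear) result
print(mixture_signature([0.5, 0.5], [1.0, 1.0], [-30.0, -26.0]))
```

Ignoring the concentration weights when they actually differ is exactly the source of the bias quantified in the study.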
Khezri, Abdolrahman; Fraser, Thomas W. K.; Nourizadeh-Lillabadi, Rasoul; Kamstra, Jorke H.; Berg, Vidar; Zimmer, Karin E.; Ropstad, Erik
2017-01-01
Persistent organic pollutants (POPs) are widespread in the environment and some may be neurotoxic. As we are exposed to complex mixtures of POPs, we aimed to investigate how a POP mixture based on Scandinavian human blood data affects behaviour and neurodevelopment during early life in zebrafish. Embryos/larvae were exposed to a series of sub-lethal doses and behaviour was examined at 96 h post fertilization (hpf). In order to determine the sensitivity window to the POP mixture, exposure models of 6 to 48 and 48 to 96 hpf were used. The expression of genes related to neurological development was also assessed. Results indicate that the POP mixture increases the swimming speed of larval zebrafish following exposure between 48 to 96 hpf. This behavioural effect was associated with the perfluorinated compounds, and more specifically with perfluorooctanesulfonic acid (PFOS). The expression of genes related to the stress response, GABAergic, dopaminergic, histaminergic, serotoninergic, cholinergic systems and neuronal maintenance, were altered. However, there was little overlap in those genes that were significantly altered by the POP mixture and PFOS. Our findings show that the POP mixture and PFOS can have a similar effect on behaviour, yet alter the expression of genes relevant to neurological development differently. PMID:28146072
Rafal Podlaski; Francis A. Roesch
2013-01-01
This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures used to estimate the parameters of mixture distributions, and analysed a variety of mixture models to approximate empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
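The Bayesian mixture above is fitted by Gibbs sampling; a minimal maximum-likelihood analogue, EM for a two-component univariate normal mixture, makes the "posterior probability of group membership" of the E-step explicit. Everything below is a toy sketch on simulated data, not the authors' model (no random effects, no heteroscedastic components).

```python
import numpy as np

def em_two_normal(x, iters=200):
    """EM for a two-component univariate normal mixture."""
    mu = np.array([x.min(), x.max()])  # crude initial split
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior probability of component membership per record
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        n_k = resp.sum(axis=0)
        pi = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return pi, mu, sigma

# simulate records from a 70/30 mixture of "healthy" and "diseased" distributions
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(2.0, 0.5, 700), rng.normal(5.0, 0.5, 300)])
pi, mu, sigma = em_two_normal(x)
print(pi.round(2), mu.round(2))
```

The E-step responsibilities play the same role as the posterior probabilities of putative mastitis in the abstract; the Bayesian treatment additionally integrates over parameter uncertainty and random effects.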
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups, under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results on our simulation experiments show improved performance, requiring both fewer iterations and less computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel
2017-05-01
Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. 
We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
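One of the selection criteria evaluated in the review, BIC, can be illustrated with a short sketch: fit Gaussian mixtures with an increasing number of components to simulated two-cluster data and keep the number that minimizes BIC. This uses scikit-learn's GaussianMixture on invented data; it illustrates the criterion, not the review's own simulation design.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# two latent "tactics", each individual summarized by two traits
data = np.vstack([rng.normal([0.0, 0.0], 0.3, (150, 2)),
                  rng.normal([2.0, 2.0], 0.3, (150, 2))])

# fit k = 1..4 components and select the number of clusters by BIC (lower is better)
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(data).bic(data)
        for k in range(1, 5)}
best_k = min(bics, key=bics.get)
print(best_k)
```

As the review notes, criteria of this kind behave well when clusters are well separated and sample sizes are adequate; with overlapping clusters or Bernoulli-type responses, selection becomes much less reliable.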
Allie-Ebrahim, Tariq; Zhu, Qingyu; Bräuer, Pierre; Moggridge, Geoff D; D'Agostino, Carmine
2017-06-21
The Maxwell-Stefan model is a popular diffusion model originally developed for diffusion of gases, which can be considered thermodynamically ideal mixtures, although its application has been extended to diffusion in non-ideal liquid mixtures as well. A drawback of the model is that it requires the Maxwell-Stefan diffusion coefficients, which are not based on measurable quantities and must instead be estimated. As a result, numerous estimation methods, such as the Darken model, have been proposed for these diffusion coefficients. However, the Darken model was derived, and is only well defined, for binary systems. It has been extended to ternary systems in two proposed forms, one by R. Krishna and J. M. van Baten, Ind. Eng. Chem. Res., 2005, 44, 6939-6947 and the other by X. Liu, T. J. H. Vlugt and A. Bardow, Ind. Eng. Chem. Res., 2011, 50, 10350-10358. In this paper, the two forms have been analysed against the ideal ternary system of methanol/butan-1-ol/propan-1-ol using experimental values of self-diffusion coefficients. In particular, using pulsed gradient stimulated echo nuclear magnetic resonance (PGSTE-NMR) we have measured the self-diffusion coefficients in various methanol/butan-1-ol/propan-1-ol mixtures. These experimental self-diffusion coefficients were then used as the input data required for the Darken model. The predictions of the two proposed multicomponent forms of the model were then compared to experimental values of mutual diffusion coefficients for the ideal alcohol ternary system. This experiment-based approach showed that Liu's model gives better predictions than that of Krishna and van Baten, although it was only accurate to within 26%.
Nonetheless, the multicomponent Darken model in conjunction with self-diffusion measurements from PGSTE-NMR represents an attractive method for a rapid estimation of mutual diffusion in multicomponent systems, especially when compared to exhaustive MD simulations.
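For the binary case, the Darken relation underlying both multicomponent extensions is simply a mole-fraction-weighted combination of the two self-diffusion coefficients, D_AB = x_A·D*_B + x_B·D*_A. The sketch below uses invented self-diffusion values purely for illustration; it does not reproduce either ternary extension.

```python
def darken_binary(x_a, d_self_a, d_self_b):
    """Binary Darken estimate of the mutual (Maxwell-Stefan) diffusion
    coefficient from the two self-diffusion coefficients D*_A and D*_B."""
    x_b = 1.0 - x_a
    return x_a * d_self_b + x_b * d_self_a

# hypothetical self-diffusion coefficients, in units of 1e-9 m^2/s
print(darken_binary(0.25, 2.0, 1.0))
```

The infinite-dilution limits behave as expected: as x_A approaches 0, the estimate tends to D*_A (tracer A diffusing in B), and vice versa. The two multicomponent forms differ in how they generalize this weighting to three or more species.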
Biogas and methane yield in response to co- and separate digestion of biomass wastes.
Adelard, Laetitia; Poulsen, Tjalfe G; Rakotoniaina, Volana
2015-01-01
The impact of co-digestion as opposed to separate digestion, on biogas and methane yield (apparent synergetic effects) was investigated for three biomass materials (pig manure, cow manure and food waste) under mesophilic conditions over a 36 day period. In addition to the three biomass materials (digested separately), 13 biomass mixtures (co-digested) were used. Two approaches for modelling biogas and methane yield during co-digestion, based on volatile solids concentration and ultimate gas and methane potentials, were evaluated. The dependency of apparent synergetic effects on digestion time and biomass mixture composition was further assessed using measured cumulative biogas and methane yields and specific biogas and methane generation rates. Results indicated that it is possible, based on known volatile solids concentration and ultimate biogas or methane yields for a set of biomass materials digested separately, to accurately estimate gas yields for biomass mixtures made from these materials using calibrated models. For the biomass materials considered here, modelling indicated that the addition of pig manure is the main cause of synergetic effects. Co-digestion generally resulted in improved ultimate biogas and methane yields compared to separate digestion. Biogas and methane production was furthermore significantly higher early (0-7 days) and to some degree also late (above 20 days) in the digestion process during co-digestion. © The Author(s) 2014.
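The additive (no-synergy) baseline that such yield models are calibrated against can be sketched directly: the expected methane yield of a mixture is the volatile-solids-weighted combination of the separately measured ultimate yields. All input values below are hypothetical; measured yields above this baseline would indicate the apparent synergetic effects discussed.

```python
def additive_mixture_yield(mass_fractions, vs_contents, ultimate_yields):
    """Expected methane yield per unit VS of a mixture, assuming no synergy:
    each material contributes in proportion to the VS it brings."""
    vs_mix = sum(w * vs for w, vs in zip(mass_fractions, vs_contents))
    total = sum(w * vs * y for w, vs, y in zip(mass_fractions, vs_contents, ultimate_yields))
    return total / vs_mix

# hypothetical inputs: pig manure, cow manure, food waste
# (mass fractions, VS fraction of wet mass, ultimate yield in mL CH4 per g VS)
y_expected = additive_mixture_yield([0.4, 0.4, 0.2], [0.25, 0.20, 0.30],
                                    [350.0, 250.0, 450.0])
print(y_expected)
```

Comparing measured cumulative yields of a co-digested mixture against this weighted baseline, over time, is how the study separates genuine synergy from simple blending of feedstocks.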
Theoretical Thermodynamics of Mixtures at High Pressures
NASA Technical Reports Server (NTRS)
Hubbard, W. B.
1985-01-01
The development of an understanding of the chemistry of mixtures of metallic hydrogen and abundant, higher-Z material such as oxygen, carbon, etc., is important for understanding of fundamental processes of energy release, differentiation, and development of atmospheric abundances in the Jovian planets. It provides a significant theoretical base for the interpretation of atmospheric elemental abundances to be provided by atmospheric entry probes in coming years. Significant differences are found when non-perturbative approaches such as Thomas-Fermi-Dirac (TFD) theory are used. Mapping of the phase diagrams of such binary mixtures in the pressure range from approx. 10 Mbar to approx. 1000 Mbar, using results from three-dimensional TFD calculations, is undertaken. Derivation of a general and flexible thermodynamic model for such binary mixtures in the relevant pressure range was facilitated by the following breakthrough: there exists an accurate and fairly simple thermodynamic representation of a liquid two-component plasma (TCP) in which the Helmholtz free energy is represented as a suitable linear combination of terms dependent only on density and terms which depend only on the ion coupling parameter. It is found that the crystal energies of mixtures of H-He, H-C, and H-O can be satisfactorily reproduced by the same type of model, except that an effective, density-dependent ionic charge must be used in place of the actual total ionic charge.
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
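A useful fact when reasoning about this prior is that, given the precision α, the expected number of clusters among n observations has a closed form: E[K] = sum over i = 1..n of α/(α + i - 1). Matching this expectation to prior beliefs about the level of clustering is one common way to calibrate a prior for α (a generic sketch, not necessarily the paper's specific construction).

```python
def expected_clusters(alpha, n):
    """E[K | alpha, n] = sum_{i=1}^{n} alpha / (alpha + i - 1)
    under a Dirichlet process with precision parameter alpha."""
    return sum(alpha / (alpha + i - 1) for i in range(1, n + 1))

print(expected_clusters(1.0, 100))   # roughly log(n) clusters for alpha = 1
print(expected_clusters(10.0, 100))  # larger alpha implies more clusters
```

The sensitivity described in the abstract is visible here: modest changes in α translate into very different expected levels of clustering, which is why the choice of prior for α matters.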
Rafal Podlaski; Francis Roesch
2014-01-01
In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio, E-mail: drpqam@cid.csic.es
2013-06-01
Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are exposed daily to mixtures of chemicals disrupting thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for potential additive or synergistic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures. The exception was the mixture of MMI (TPO inhibitor) and KClO4 (NIS inhibitor) dosed at a fixed ratio of EC10, which yielded similar CA and RA predictions, making it difficult to draw a conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergistic or additive effects of mixtures of chemicals on thyroid function. • Zebrafish as an alternative model for testing the effect of mixtures of goitrogens.
• Concentration addition seems to better predict the effect of mixtures of goitrogens.
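The two reference models compared in this study can be written compactly. Under concentration addition (CA), the mixture concentration producing a given effect level satisfies 1/EC_mix = sum of p_i/EC_i (where p_i are the mixture fractions); under response addition (RA), effects of independently acting components combine as E_mix = 1 - product of (1 - E_i). A sketch with invented numbers:

```python
def ca_ec_mix(fractions, ec_values):
    """Concentration addition: effect concentration of the mixture,
    1 / EC_mix = sum(p_i / EC_i)."""
    return 1.0 / sum(p / ec for p, ec in zip(fractions, ec_values))

def ra_effect(effects):
    """Response addition (independent action): E_mix = 1 - prod(1 - E_i)."""
    remaining = 1.0
    for e in effects:
        remaining *= 1.0 - e
    return 1.0 - remaining

# hypothetical 50:50 mixture of two similarly acting inhibitors (EC50s of 2 and 8)
print(ca_ec_mix([0.5, 0.5], [2.0, 8.0]))
# two dissimilarly acting components, each causing a 10% effect on its own
print(ra_effect([0.10, 0.10]))
```

The study's finding that CA predicted five of six mixtures well, including dissimilarly acting ones, is what motivates extending CA to constituents sharing a common apical endpoint.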
Molenaar, Dylan; de Boeck, Paul
2018-06-01
In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.
2011-01-01
Background: The combinatorial library strategy of using multiple candidate ligands in mixtures as library members is ideal in terms of cost and efficiency, but needs special screening methods to estimate the affinities of candidate ligands in such mixtures. Herein, a new method to screen candidate ligands present in unknown molar quantities in mixtures was investigated. Results: The proposed method involves preparing a processed-mixture-for-screening (PMFS) with each mixture sample and an exogenous reference ligand, initiating competitive binding among ligands from the PMFS to a target immobilized on magnetic particles, recovering target-ligand complexes in equilibrium by magnetic force, extracting and concentrating bound ligands, and analyzing ligands in the PMFS and the concentrated extract by chromatography. The relative affinity of each candidate ligand to its reference ligand is estimated via an approximation equation assuming that (a) the candidate ligand and its reference ligand bind to the same site(s) on the target, (b) their chromatographic peak areas are over five times their intercepts of linear response but within their linear ranges, and (c) their binding ratios are below 10%. These prerequisites are met by optimizing primarily the quantity of the target used and the PMFS composition ratio. The new method was tested using the competitive binding of biotin derivatives from mixtures to streptavidin immobilized on magnetic particles as a model. Each mixture sample, containing a limited number of candidate biotin derivatives with moderate differences in their molar quantities, was prepared via parallel combinatorial synthesis (PCS) without purification, or via the pooling of individual compounds. Some purified biotin derivatives were used as reference ligands.
The method showed resistance to variations in chromatographic quantification sensitivity and concentration ratios; the optimized conditions used to validate the approximation equation could be applied to different mixture samples. Relative affinities of candidate biotin derivatives with unknown molar quantities in each mixture sample were consistent with those estimated by a homogeneous method using their purified counterparts as samples. Conclusions: This new method is robust and effective for each mixture possessing a limited number of candidate ligands whose molar quantities have moderate differences, and its integration with PCS holds promise for routine practice of the mixture-based library strategy. PMID:21545719
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guevara-Carrion, Gabriela; Janzen, Tatjana; Muñoz-Muñoz, Y. Mauricio
Mutual diffusion coefficients of all 20 binary liquid mixtures that can be formed out of methanol, ethanol, acetone, benzene, cyclohexane, toluene, and carbon tetrachloride without a miscibility gap are studied at ambient conditions of temperature and pressure in the entire composition range. The considered mixtures show a varying mixing behavior from almost ideal to strongly non-ideal. Predictive molecular dynamics simulations employing the Green-Kubo formalism are carried out. Radial distribution functions are analyzed to gain an understanding of the liquid structure influencing the diffusion processes. It is shown that cluster formation in mixtures containing one alcoholic component has a significant impact on the diffusion process. The estimation of the thermodynamic factor from experimental vapor-liquid equilibrium data is investigated, considering three excess Gibbs energy models, i.e., Wilson, NRTL, and UNIQUAC. It is found that the Wilson model yields the thermodynamic factor that best suits the simulation results for the prediction of the Fick diffusion coefficient. Four semi-empirical methods for the prediction of the self-diffusion coefficients and nine predictive equations for the Fick diffusion coefficient are assessed and it is found that methods based on local composition models are more reliable. Finally, the shear viscosity and thermal conductivity are predicted and in most cases favorably compared with experimental literature values.
Cost-effectiveness model for a specific mixture of prebiotics in The Netherlands.
Lenoir-Wijnkoop, I; van Aalderen, W M C; Boehm, G; Klaassen, D; Sprikkelman, A B; Nuijten, M J C
2012-02-01
The objective of this study was to assess the cost-effectiveness of the use of prebiotics for the primary prevention of atopic dermatitis in The Netherlands. A model was constructed using decision analytical techniques. The model was developed to estimate the health economic impact of prebiotic preventive disease management of atopic dermatitis. Data sources used include published literature, clinical trials, official price/tariff lists, and national population statistics. The comparator was no supplementation with prebiotics. The primary perspective for conducting the economic evaluation was based on the situation in The Netherlands in 2009. The results show that the use of prebiotic infant formula (IMMUNOFORTIS®) leads to an additional cost of € 51 and an increase in Quality Adjusted Life Years (QALY) of 0.108, when compared with no prebiotics. Consequently, the use of infant formula with a specific mixture of prebiotics results in an incremental cost-effectiveness ratio (ICER) of € 472. The sensitivity analyses show that the ICER remains far below the threshold of € 20,000/QALY in all analyses. This study shows that the favourable health benefit of the use of a specific mixture of prebiotics results in positive short- and long-term health economic benefits. In addition, this study demonstrates that the use of infant formula with a specific mixture of prebiotics is a highly cost-effective way of preventing atopic dermatitis in The Netherlands.
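The headline figure follows directly from the reported increments: ICER = incremental cost divided by incremental QALYs. A one-line check of the study's numbers:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

# reported increments: +51 euro and +0.108 QALY versus no prebiotics
print(round(icer(51.0, 0.108)))  # roughly 472 euro per QALY
```

Since 472 is two orders of magnitude below the € 20,000/QALY willingness-to-pay threshold, the cost-effectiveness conclusion is robust to sizeable parameter variation, consistent with the sensitivity analyses.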
Transient thermohydraulic heat pipe modeling
NASA Astrophysics Data System (ADS)
Hall, Michael L.; Doster, Joseph M.
Many space-based reactor designs employ heat pipes as a means of conveying heat. In these designs, thermal radiation is the principal means of rejecting waste heat from the reactor system, making it desirable to operate at high temperatures. Lithium is generally the working fluid of choice, as it undergoes a liquid-vapor transformation at the preferred operating temperature. The nature of remote startup, restart, and reaction to threats necessitates an accurate, detailed transient model of heat pipe operation. We outline a model of the vapor core region of the heat pipe, which is part of a larger model of the entire heat pipe thermal response. The vapor core is modeled using the area-averaged Navier-Stokes equations in one dimension, which take into account the effects of mass, energy and momentum transfer. The core model is single phase (gaseous) but contains two components: lithium gas and a noncondensible vapor. The vapor core model consists of the continuity equations for the mixture and the noncondensible, as well as mixture equations for internal energy and momentum.
MARMOT Phase-Field Model for the U-Si System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aagesen, Larry Kenneth; Schwen, Daniel
2016-09-01
A phase-field model for the U-Si system has been implemented in MARMOT. The free energies for the phases relevant to accident-tolerant fuel applications (U3Si2, USi, U3Si, and liquid) were implemented as free energy materials within MARMOT. A new three-phase phase-field model based on the concepts of the Kim-Kim-Suzuki two-phase model was developed and implemented in the MOOSE phase-field module. Key features of this model are that two-phase interfaces are stable with respect to formation of the third phase, and that arbitrary phase free energies can be used. The model was validated using a simplified three-phase system and the U-Si system. In the U-Si system, the model correctly reproduced three-phase coexistence in a U3Si2-liquid-USi system at the eutectic temperature, solidification of a three-phase mixture below the eutectic temperature, and complete melting of a three-phase mixture above the eutectic temperature.
Screening level mixture risk assessment of pharmaceuticals in STP effluents.
Backhaus, Thomas; Karlsson, Maja
2014-02-01
We modeled the ecotoxicological risks of the pharmaceutical mixtures emitted from STP effluents into the environment. The classic mixture toxicity concept of Concentration Addition was used to calculate the total expected risk of the analytically determined mixtures, compare the expected impact of seven effluent streams and pinpoint the most sensitive group of species. The risk quotient of a single, randomly selected pharmaceutical is often more than a factor of 1000 lower than the mixture risk, clearly indicating the need to systematically analyse the overall risk of all pharmaceuticals present. The MCR, the ratio between the total mixture risk and the risk of the most risky compound, varies between 1.2 and 4.2, depending on the actual scenario and species group under consideration. The mixture risk quotients, based on acute data and an assessment factor of 1000, regularly exceed 1, indicating a potential risk for the environment, depending on the dilution in the recipient stream. The top 10 mixture components explain more than 95% of the mixture risk in all cases. A mixture toxicity assessment cannot go beyond the underlying single substance data. The lack of data on the chronic toxicity of most pharmaceuticals, as well as the very few data available on in vivo fish toxicity, has to be regarded as a major knowledge gap in this context. On the other hand, ignoring Independent Action or even using the sum of individual risk quotients as a rough approximation of Concentration Addition does not have a major impact on the final risk estimate. Copyright © 2013 Elsevier Ltd. All rights reserved.
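At the screening level described above, Concentration Addition reduces to summing individual risk quotients, and the MCR relates the total to the riskiest single compound. A minimal sketch, with made-up concentrations and PNECs rather than the study's data:

```python
# Screening-level Concentration Addition sketch (hypothetical inputs).
# RQ_i = measured environmental concentration / predicted no-effect concentration.
def mixture_risk(concs, pnecs):
    rqs = [c / p for c, p in zip(concs, pnecs)]
    total = sum(rqs)        # CA approximation: summed risk quotients
    mcr = total / max(rqs)  # maximum cumulative ratio: total vs. riskiest compound
    return total, mcr

concs = [0.5, 0.1, 0.02]  # ug/L, hypothetical effluent concentrations
pnecs = [0.2, 0.4, 0.5]   # ug/L, hypothetical PNECs
total, mcr = mixture_risk(concs, pnecs)
print(total > 1.0)  # True: the summed quotients flag a potential risk
```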
ERIC Educational Resources Information Center
Liu, Junhui
2012-01-01
The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…
Effects of three veterinary antibiotics and their binary mixtures on two green alga species.
Carusso, S; Juárez, A B; Moretton, J; Magdaleno, A
2018-03-01
The individual and combined toxicities of chlortetracycline (CTC), oxytetracycline (OTC) and enrofloxacin (ENF) have been examined in two green algae representative of the freshwater environment, the international standard strain Pseudokirchneriella subcapitata and the native strain Ankistrodesmus fusiformis. The toxicities of the three antibiotics and their mixtures were similar in both strains, although low concentrations of ENF and CTC + ENF were more toxic in A. fusiformis than in the standard strain. The toxicological interactions of binary mixtures were predicted using the two classical models of additivity, Concentration Addition (CA) and Independent Action (IA), and compared to the experimentally determined toxicities over a range of concentrations between 0.1 and 10 mg L⁻¹. The CA model predicted the inhibition of algal growth in the three mixtures in P. subcapitata, and in the CTC + OTC and CTC + ENF mixtures in A. fusiformis. However, this model underestimated the experimental results obtained in the OTC + ENF mixture in A. fusiformis. The IA model did not predict the experimental toxicological effects of the three mixtures in either strain. The sum of the toxic units (TU) for the mixtures was calculated. According to these values, the binary mixtures CTC + ENF and OTC + ENF showed an additive effect, and the CTC + OTC mixture showed antagonism in P. subcapitata, whereas the three mixtures showed synergistic effects in A. fusiformis. Although A. fusiformis was isolated from a polluted river, it showed a sensitivity similar to that of P. subcapitata when exposed to binary mixtures of antibiotics. Copyright © 2017 Elsevier Ltd. All rights reserved.
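The two classical additivity models compared above can be sketched as follows; the formulas are the standard CA and IA expressions, and the EC50s and effect levels are hypothetical, not values from this study:

```python
# Standard Concentration Addition and Independent Action predictions (sketch).
def ca_mixture_ec50(fractions, ec50s):
    """CA: mixture EC50 from component EC50s and mixture fractions p_i."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def ia_mixture_effect(effects):
    """IA: combined effect 1 - prod(1 - E_i) from individual effects in [0, 1]."""
    survival = 1.0
    for e in effects:
        survival *= (1.0 - e)
    return 1.0 - survival

print(ca_mixture_ec50([0.5, 0.5], [2.0, 4.0]))  # mixture EC50 for a 1:1 blend, ~2.67 mg/L
print(ia_mixture_effect([0.2, 0.3]))            # combined growth inhibition, ~0.44
```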
NASA Astrophysics Data System (ADS)
Parikh, H. M.; Carlton, A. G.; Zhang, H.; Kamens, R.; Vizuete, W.
2011-12-01
Secondary organic aerosol (SOA) is simulated for 6 outdoor smog chamber experiments using a SOA model based on a kinetic chemical mechanism in conjunction with a volatility basis set (VBS) approach. The experiments include toluene, a non-SOA-forming hydrocarbon mixture, diesel exhaust or meat cooking emissions and NOx, and are performed under varying conditions of relative humidity. SOA formation from toluene is modeled using a condensed kinetic aromatic mechanism that includes partitioning of lumped semi-volatile products into the particle organic phase and incorporates particle aqueous-phase chemistry to describe uptake of glyoxal and methylglyoxal. Modeling using the kinetic mechanism alone, along with primary organic aerosol (POA) from diesel exhaust (DE)/meat cooking (MC), fails to simulate the rapid SOA formation in the early hours of the experiments. Inclusion of a VBS approach with the kinetic mechanism to characterize the emissions and chemistry of the complex mixture of intermediate volatility organic compounds (IVOCs) from DE/MC substantially improves SOA predictions when compared with observed data. The VBS model includes photochemical aging of IVOCs and evaporation of POA after dilution. The relative contribution of SOA mass from DE/MC is as high as 95% in the morning, but substantially decreases after mid-afternoon. For high humidity experiments, the aqueous-phase SOA fraction dominates the total SOA mass at the end of the day (approximately 50%). In summary, the combined kinetic and VBS approach provides a new and improved framework for semi-explicitly modeling SOA from VOC precursors, one that can be applied to complex emission mixtures comprising hundreds of individual chemical species.
Kong, Fanhui; Chen, Yeh-Fong
2016-07-01
By examining the outcome trajectories of dropout patients with different dropout reasons in schizophrenia trials, we note that although patients are recruited under the same protocol and have comparable baseline characteristics, they may respond differently even to the same treatment. Some patients show consistent improvement while others experience only temporary relief. This creates different patient subpopulations characterized by their response and dropout patterns. At the same time, those who continue to improve seem to be more likely to complete the study, while those who experience only temporary relief have a higher chance of dropping out. Such a phenomenon appears to be quite general in schizophrenia clinical trials. This simultaneous inhomogeneity in both patient response and dropout patterns creates a scenario of missing not at random and therefore results in biases when we use statistical methods based on the missing at random assumption to test treatment efficacy. In this paper, we propose to use the latent class growth mixture model, which is a special case of the latent mixture model, to conduct the statistical analyses in such situations. This model allows us to take the inhomogeneity among subpopulations into consideration to make more accurate inferences on the treatment effect at any visit time. Compared with conventional statistical methods such as the mixed-effects model for repeated measures, we demonstrate through simulations that the proposed latent mixture model approach gives better control of the Type I error rate in testing the treatment effect. Published 2016. This article is a U.S. Government work and is in the public domain in the USA. Copyright © 2016 John Wiley & Sons, Ltd.
Coronado, M; Segadães, A M; Andrés, A
2015-12-15
This work describes the leaching behavior of potentially hazardous metals from three different clay-based industrial ceramic products (wall bricks, roof tiles, and face bricks) containing foundry sand dust and Waelz slag as alternative raw materials. For each product, ten mixtures were defined by mixture design of experiments and the leaching of As, Ba, Cd, Cr, Cu, Mo, Ni, Pb, and Zn was evaluated in pressed specimens fired simulating the three industrial ceramic processes. The results showed that, despite the chemical, mineralogical and processing differences, only chromium and molybdenum were not fully immobilized during ceramic processing. Their leaching was modeled as polynomial equations, functions of the raw materials contents, and plotted as response surfaces. This made evident that Cr and Mo leaching from the fired products is not only dependent on the corresponding contents and the basicity of the initial mixtures, but is also clearly related to the mineralogical composition of the fired products, namely the amount of the glassy phase, which depends on both the major oxide contents and the firing temperature. Copyright © 2015 Elsevier B.V. All rights reserved.
DOT National Transportation Integrated Search
2015-03-01
Mixture proportioning is routinely a matter of using a recipe based on a previously produced concrete, rather than adjusting the : proportions based on the needs of the mixture and the locally available materials. As budgets grow tighter and increasi...
Diversifying mechanisms in the on-farm evolution of crop mixtures.
Thomas, Mathieu; Thépot, Stéphanie; Galic, Nathalie; Jouanne-Pin, Sophie; Remoué, Carine; Goldringer, Isabelle
2015-06-01
While modern agriculture relies on genetic homogeneity, diversifying practices associated with seed exchange and seed recycling may allow crops to adapt to their environment. This socio-genetic model is an original experimental evolution design referred to as on-farm dynamic management of crop diversity. Investigating such a model can help in understanding how evolutionary mechanisms shape crop diversity exposed to diverse agro-environments. We studied a French farmer-led initiative where a mixture of four wheat landraces called 'Mélange de Touselles' (MDT) was created and circulated within a farmers' network. The 15 sampled MDT subpopulations were simultaneously submitted to diverse environments (e.g. altitude, rainfall) and diverse farmers' practices (e.g. field size, sowing and harvesting date). Twenty-one space-time samples of 80 individuals each were genotyped using 17 microsatellite markers and characterized for their heading date in a 'common-garden' experiment. Gene polymorphism was studied using four markers located in earliness genes. An original network-based approach was developed to depict the particular and complex genetic structure of the landraces composing the mixture. Rapid differentiation among populations within the mixture was detected, larger at the phenotypic and gene levels than at the neutral genetic level, indicating potential divergent selection. We identified two interacting selection processes, variation in the mixture component frequencies and evolution of within-variety diversity, that shaped the standing variability available within the mixture. These results confirmed that diversifying practices and environments maintain genetic diversity and allow for crop evolution in the context of global change. Including concrete measurements of farmers' practices is critical to disentangle crop evolution processes. © 2015 John Wiley & Sons Ltd.
Mooneyham, T.; Jeyaratnam, J.; Schultz, T. W.; Pöch, G.
2011-01-01
Four ethyl α-halogenated acetates were tested in (1) sham and (2) nonsham combinations and (3) with a nonreactive nonpolar narcotic. Ethyl iodoacetate (EIAC), ethyl bromoacetate (EBAC), ethyl chloroacetate (ECAC), and ethyl fluoroacetate (EFAC), each considered to be an SN2-H-polar soft electrophile, were selected for testing based on their differences in electro(nucleo)philic reactivity and time-dependent toxicity (TDT). Agent reactivity was assessed using the model nucleophile glutathione, with EIAC and EBAC showing rapid reactivity, ECAC being less reactive, and EFAC lacking reactivity at ≤250 mM. The model nonpolar narcotic, 3-methyl-2-butanone (3M2B), was not reactive. Toxicity of the agents alone and in mixture was assessed using the Microtox acute toxicity test at three exposure durations: 15, 30 and 45 min. Two of the agents alone (EIAC and EBAC) had TDT values >100%. In contrast, ECAC (74 to 99%) and EFAC (9 to 12%) had partial TDT, whereas 3M2B completely lacked TDT (<0%). In mixture testing, sham combinations of each agent showed a combined effect consistent with predicted effects for dose-addition at each time point, as judged by EC50 dose-addition quotient values. Mixture toxicity results for nonsham ethyl acetate combinations were variable, with some mixtures being inconsistent with the predicted effects for dose-addition and/or independence. The ethyl acetate–3M2B combinations were somewhat more toxic than predicted for dose-addition, a finding differing from that observed previously for α-halogenated acetonitriles with 3M2B. PMID:21452006
General Blending Models for Data From Mixture Experiments
Brown, L.; Donev, A. N.; Bissett, A. C.
2015-01-01
We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
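A minimal sketch of the kind of model the paper unifies, Scheffé's quadratic canonical polynomial, with illustrative (made-up) coefficients; the Becker and general blending forms extend this same structure:

```python
# Scheffé quadratic mixture model: y = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j,
# evaluated under the mixture constraint sum x_i = 1 (coefficients are illustrative).
from itertools import combinations

def scheffe_quadratic(x, linear, binary):
    assert abs(sum(x) - 1.0) < 1e-9, "mixture proportions must sum to 1"
    y = sum(b * xi for b, xi in zip(linear, x))
    for (i, j), bij in zip(combinations(range(len(x)), 2), binary):
        y += bij * x[i] * x[j]
    return y

# Three components: pure-blend responses (1, 2, 3); blending terms b12, b13, b23.
print(scheffe_quadratic([0.5, 0.25, 0.25], [1.0, 2.0, 3.0], [4.0, 0.0, -2.0]))  # 2.125
```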
Fusion and Gaussian mixture based classifiers for SONAR data
NASA Astrophysics Data System (ADS)
Kotari, Vikas; Chang, KC
2011-06-01
Underwater mines are inexpensive and highly effective weapons. They are difficult to detect and classify. Hence, detection and classification of underwater mines are essential for the safety of naval vessels. This necessitates the formulation of highly efficient classifiers and detection techniques. Current techniques primarily focus on signals from one source. Data fusion is known to increase the accuracy of detection and classification. In this paper, we formulated a fusion-based classifier and a Gaussian mixture model (GMM) based classifier for classification of underwater mines. The emphasis has been on sound navigation and ranging (SONAR) signals due to their extensive use in current naval operations. The classifiers have been tested on real SONAR data obtained from the University of California Irvine (UCI) repository. The performance of both the GMM-based and fusion-based classifiers clearly demonstrates their superior classification accuracy over conventional single-source cases and validates our approach.
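As a hedged sketch of the idea behind a GMM-based classifier (not the paper's implementation), the simplest case fits one Gaussian per class and assigns a sample to the class with the highest likelihood; a full GMM classifier uses several components per class. All features and parameters below are made up:

```python
# Class-conditional Gaussian classifier sketch (one component per class;
# a GMM classifier generalizes this to multiple components per class).
import math

def gaussian_logpdf(x, mean, var):
    return -0.5 * (math.log(2.0 * math.pi * var) + (x - mean) ** 2 / var)

def classify(x, class_params):
    """class_params: {label: (mean, var)}; returns the maximum-likelihood label."""
    return max(class_params, key=lambda c: gaussian_logpdf(x, *class_params[c]))

params = {"mine": (5.0, 1.0), "rock": (2.0, 1.0)}  # hypothetical echo features
print(classify(4.0, params))  # mine: 4.0 is closer to the mine-class mean
```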
Mazurek, Monica A
2002-12-01
This article describes a chemical characterization approach for complex organic compound mixtures associated with fine atmospheric particles of diameters less than 2.5 μm (PM2.5). It relates molecular- and bulk-level chemical characteristics of the complex mixture to atmospheric chemistry and to emission sources. Overall, the analytical approach describes the organic complex mixtures in terms of a chemical mass balance (CMB). Here, the complex mixture is related to a bulk elemental measurement (total carbon) and is broken down systematically into functional groups and molecular compositions. The CMB and molecular-level information can be used to understand the sources of the atmospheric fine particles through conversion of chromatographic data and by incorporation into receptor-based CMB models. Once described and quantified within a mass balance framework, the chemical profiles for aerosol organic matter can be applied to existing air quality issues. Examples include understanding health effects of PM2.5 and defining and controlling key sources of anthropogenic fine particles. Overall, the organic aerosol compositional data provide chemical information needed for effective PM2.5 management.
Generation of two-dimensional binary mixtures in complex plasmas
NASA Astrophysics Data System (ADS)
Wieben, Frank; Block, Dietmar
2016-10-01
Complex plasmas are an excellent model system for strong coupling phenomena. Under certain conditions the dust particles immersed in the plasma form crystals which can be analyzed in terms of structure and dynamics. Previous experiments focused mostly on monodisperse particle systems, whereas dusty plasmas in nature and technology are polydisperse. Thus, binary mixtures are a first and important step towards experiments in polydisperse systems. Recent experiments on binary mixtures under microgravity conditions observed phase separation of particle species with different radii even for small size disparities. This contradicts several numerical studies of 2D binary mixtures. Therefore, dedicated experiments are required to gain more insight into the physics of polydisperse systems. In this contribution, first ground-based experiments on two-dimensional binary mixtures are presented. Particular attention is paid to the requirements for the generation of such systems, which involve the consideration of the temporal evolution of the particle properties. Furthermore, the structure of these two-component crystals is analyzed and compared to simulations. This work was supported by the Deutsche Forschungsgemeinschaft DFG in the framework of the SFB TR24 Greifswald Kiel, Project A3b.
High Enthalpy Effects on Two Boundary Layer Disturbances in Supersonic and Hypersonic Flow
2012-05-01
Reshotko [37], and Reda [73]. These reviews discuss how a number of different flow features and geometry can affect the transition location including the...
The species enthalpy is defined as h_s = c_{v,s} T + P_s/ρ_s + e_{v,s} + h°_s = c_{p,s} T + e_{v,s} + h°_s, where c_{p,s} is the specific heat at constant pressure of...
derived from the Lewis number, Le = κ/(ρ c_p D), where c_p and κ are based on the gas mixture. The mixture value of c_p is determined using a mass...
Mixed-up trees: the structure of phylogenetic mixtures.
Matsen, Frederick A; Mossel, Elchanan; Steel, Mike
2008-05-01
In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how "common" non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning "mixed branch repulsion" on trees larger than quartet trees under the CFN model.
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models
Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
Wang, Quan-Ying; Sun, Jing-Yue; Xu, Xing-Jian; Yu, Hong-Wen
2018-06-20
Because of the extensive use of Cu-based fungicides, the accumulation of Cu in agricultural soil has been widely reported. However, little is known about the bioavailability of Cu derived from different fungicides in soil. This paper investigated both the distribution behaviors of Cu from two commonly used fungicides (Bordeaux mixture and copper oxychloride) during the aging process and the toxicological effects of Cu on earthworms. Copper nitrate was selected as a comparison during the aging process. The distribution process of exogenous Cu into different soil fractions involved an initial rapid retention (the first 8 weeks) and a following slow continuous retention. Moreover, Cu mainly moved from the exchangeable and carbonate fractions to the Fe-Mn oxide-bound fraction during the aging process. The Elovich model fit the available Cu aging process well, and the transformation rate was in the order Cu(NO3)2 > Bordeaux mixture > copper oxychloride. On the other hand, the biological responses of earthworms showed that catalase activities and malondialdehyde contents of the copper oxychloride-treated earthworms were significantly higher than those of Bordeaux mixture-treated earthworms. Also, body Cu loads of earthworms from the different Cu compound-spiked soils were in the following order: copper oxychloride > Bordeaux mixture. Thus, the bioavailability of Cu from copper oxychloride in soil was significantly higher than that of Bordeaux mixture, and different Cu compounds should be taken into consideration when studying the bioavailability of Cu-based fungicides in soil. Copyright © 2018 Elsevier Inc. All rights reserved.
New approach in direct-simulation of gas mixtures
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren
1991-01-01
Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.
Investigation of Dalton and Amagat's laws for gas mixtures with shock propagation
NASA Astrophysics Data System (ADS)
Wayne, Patrick; Trueba Monje, Ignacio; Yoo, Jason H.; Truman, C. Randall; Vorobieff, Peter
2016-11-01
Two common models describing gas mixtures are Dalton's Law and Amagat's Law (also known as the laws of partial pressures and partial volumes, respectively). Our work is focused on determining the suitability of these models for predicting the effects of shock propagation through gas mixtures. Experiments are conducted at the Shock Tube Facility at the University of New Mexico (UNM). To validate experimental data, possible sources of uncertainty associated with the experimental setup are identified and analyzed. The gaseous mixture of interest consists of a prescribed combination of disparate gases: helium and sulfur hexafluoride (SF6). The equations of state (EOS) considered are the ideal gas EOS for helium, and a virial EOS for SF6. The values for the properties provided by these EOS are then used to model shock propagation through the mixture in accordance with Dalton's and Amagat's laws. Results of the modeling are compared with experiment to determine which law produces better agreement for the mixture. This work is funded by NNSA Grant DE-NA0002913.
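The two laws can be sketched for the ideal-gas limit (hypothetical helium/SF6 amounts; the study itself uses a virial EOS for SF6, for which the two laws diverge):

```python
# Illustrative ideal-gas comparison of Dalton's and Amagat's laws.
R = 8.314  # J/(mol K), universal gas constant

def dalton_pressure(moles, T, V):
    """Dalton: each component fills the whole volume; total P is the sum of partial pressures."""
    return sum(n * R * T / V for n in moles)

def amagat_volume(moles, T, P):
    """Amagat: each component is at the mixture pressure; total V is the sum of partial volumes."""
    return sum(n * R * T / P for n in moles)

moles = [1.0, 0.5]  # mol He, mol SF6 (hypothetical)
P = dalton_pressure(moles, 300.0, 0.01)
V = amagat_volume(moles, 300.0, P)
print(abs(V - 0.01) < 1e-9)  # True: for ideal gases the two laws coincide
```

With a non-ideal EOS (such as the virial EOS for SF6 mentioned above), the partial-pressure and partial-volume constructions no longer agree, which is what makes the experimental comparison meaningful.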
Lawson, Andrew B; Choi, Jungsoon; Cai, Bo; Hossain, Monir; Kirby, Russell S; Liu, Jihong
2012-09-01
We develop a new Bayesian two-stage space-time mixture model to investigate the effects of air pollution on asthma. The proposed two-stage mixture model allows for the identification of temporal latent structure as well as the estimation of the effects of covariates on health outcomes. In the paper, we also consider spatial misalignment of exposure and health data. A simulation study is conducted to assess the performance of the two-stage mixture model. We apply our statistical framework to a county-level ambulatory care asthma data set in the US state of Georgia for the years 1999-2008.
Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.
Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun
2015-03-13
In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A), on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.
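The coded levels mentioned above (-2 to +2) map linearly to actual factor values. A minimal sketch, with a hypothetical center point and step size rather than the study's settings:

```python
# Mapping coded factorial levels to actual mixture parameters (hypothetical values).
def decode(coded, center, step):
    """Coded level (-2..+2) -> actual value: center + coded * step."""
    return center + coded * step

# e.g. w/cm centered at 0.40 with a step of 0.02 per coded unit:
print(decode(-1, 0.40, 0.02))  # ~0.38, a fractional factorial point
print(decode(+2, 0.40, 0.02))  # ~0.44, an axial point
```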
Some comments on thermodynamic consistency for equilibrium mixture equations of state
Grove, John W.
2018-03-28
We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.
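The mass-fraction averaging that underlies these equilibrium mixture closures can be sketched as follows, with hypothetical two-component data (the equilibrium condition itself, pressure/temperature or volume/temperature matching, is what closes the system):

```python
# Mass-fraction averaging of component properties for an equilibrium mixture
# EOS closure (hypothetical two-component data).
def mixture_average(mass_fractions, values):
    assert abs(sum(mass_fractions) - 1.0) < 1e-9
    return sum(y * v for y, v in zip(mass_fractions, values))

y = [0.3, 0.7]        # component mass fractions
e = [2.0e5, 5.0e4]    # component specific internal energies, J/kg
v = [1.0e-3, 4.0e-4]  # component specific volumes, m^3/kg

print(mixture_average(y, e))  # mixture specific internal energy, ~9.5e4 J/kg
print(mixture_average(y, v))  # mixture specific volume, ~5.8e-4 m^3/kg
```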
3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van
2014-09-15
Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on the single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the effective particle size of a mixture of ice habits, the ice water amount, and spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Different from many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of individual ice habits.
Consideration of some dilute-solution phenomena based on an expression for the Gibbs free energy
NASA Astrophysics Data System (ADS)
Jonah, D. A.
1986-07-01
Rigorous expressions, based on the Lennard-Jones (6-12) potential, are presented for the Gibbs and Helmholtz free energies of a dilute mixture. These expressions give the free energy of the mixture in terms of the thermodynamic properties of the pure solvent, thereby providing a convenient means of correlating dilute mixture behavior with that of the pure solvent. Expressions for the following dilute binary solution properties are derived: Henry's constant, limiting activity coefficients with their derivatives, solid solubilities in supercritical gases, and mixed second virial coefficients. The Henry's constant expression suggests a linear temperature dependence; application to solubility data for various gases in methane and water shows good agreement between theory and experiment. In the thermodynamic modeling of supercritical fluid extraction, we have demonstrated how to predict new solubility-pressure isotherms from a given isotherm, with encouraging results. The mixed second virial coefficient expression has also been applied to experimental data; the agreement with theory is good.
Batch anaerobic digestion of synthetic military base food waste and cardboard mixtures.
Asato, Caitlin M; Gonzalez-Estrella, Jorge; Jerke, Amber C; Bang, Sookie S; Stone, James J; Gilcrease, Patrick C
2016-09-01
Austere US military bases typically dispose of solid wastes, including large fractions of food waste (FW) and corrugated cardboard (CCB), by open dumping, landfilling, or burning. Anaerobic digestion (AD) offers an opportunity to reduce pollution and recover useful energy. This study aimed to evaluate the rates and yields of AD for FW-CCB mixtures. Batch AD was analyzed at substrate concentrations of 1-50 g total chemical oxygen demand (COD) L(-1) using response surface methodology. At low concentrations, higher proportions of FW were correlated with faster specific methanogenic activities and greater final methane yields; however, FW concentrations ≥18.75 g COD L(-1) caused inhibition. Digestion of mixtures with ≥75% CCB occurred slowly but achieved methane yields >70%. Greater shifts in microbial communities were observed at higher substrate concentrations. Statistical models of methane yield and specific methanogenic activity indicated that FW and CCB exhibited no considerable interactions as substrates for AD. Copyright © 2016 Elsevier Ltd. All rights reserved.
Campbell, Kieran R; Yau, Christopher
2017-03-15
Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic, non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference, based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of genes driving the bifurcation process. We additionally propose an empirical-Bayes-like extension that deals with the high levels of zero inflation in single-cell RNA-seq data and quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses.
Efficient implicit LES method for the simulation of turbulent cavitating flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Schmidt, Steffen J.; Hickel, Stefan
2016-07-01
We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.
Human Language Technology: Opportunities and Challenges
2005-01-01
because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ... to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using ... maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-01-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
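The comparison this abstract describes, a two-component normal mixture versus a single normal on the log10 scale, can be sketched numerically. The following is an illustrative reconstruction with simulated phenotype data, not the authors' interval-mapping code: the phenotype here has no QTL effect at all, yet the fitted mixture still scores at least as well as the single normal, which is precisely the source of the spurious LOD peaks.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(10.0, 2.0, size=200)  # no true QTL: one normal population

# Log-likelihood of the single-normal (null) model at its MLE
ll_single = stats.norm.logpdf(y, y.mean(), y.std()).sum()

# Two-component normal mixture fitted by a basic EM loop
w, mu, sd = 0.5, np.array([9.0, 11.0]), np.array([2.0, 2.0])
for _ in range(200):
    p1 = w * stats.norm.pdf(y, mu[0], sd[0])
    p2 = (1 - w) * stats.norm.pdf(y, mu[1], sd[1])
    r = p1 / (p1 + p2)                      # E-step: responsibilities
    w = r.mean()                            # M-step: weight, means, sds
    mu = np.array([np.average(y, weights=r), np.average(y, weights=1 - r)])
    sd = np.sqrt(np.array([np.average((y - mu[0])**2, weights=r),
                           np.average((y - mu[1])**2, weights=1 - r)]))
ll_mix = np.log(w * stats.norm.pdf(y, mu[0], sd[0])
                + (1 - w) * stats.norm.pdf(y, mu[1], sd[1])).sum()

lod = (ll_mix - ll_single) / np.log(10)  # LOD: log10 likelihood ratio
```

Because the single normal is a special case of the mixture, the fitted LOD is essentially never negative even with no QTL present; standard interval mapping therefore relies on genotype information to keep this baseline inflation in check.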
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.
Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported, since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs lie on the boundary of the design simplex: its vertices, edges, or faces. In this work, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.
Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong
2014-06-01
Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However, most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies.
Finally, although many factors are known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) to evaluate mixtures (including dependencies), and (3) to identify determinants of VOC exposure.
METHODS: VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999-2001) and the National Health and Nutrition Examination Survey (NHANES; 1999-2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods.
Specific Aim 1. To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model's goodness of fit. Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data.
Specific Aim 2. Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture's components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations.
Specific Aim 3. Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs).
RESULTS: Specific Aim 1. Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10(-4), and 13% of all participants had risk levels above 10(-2). Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models. DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture distributions, adaptively selecting the number of needed clusters, and providing uncertainty estimates. Although the results apply to the RIOPA data set, GEV distributions and mixture models appear more broadly applicable. These models can be used to simulate VOC distributions, which are neither normally nor lognormally distributed, and they accurately represent the highest exposures, which may have the greatest health significance.
Specific Aim 2. Four VOC mixtures were identified and apportioned by PMF; they represented gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection byproducts, and cleaning products and odorants. The last mixture (cleaning products and odorants) accounted for the largest fraction of an individual's total exposure (an average of 42% across RIOPA participants). Often, a single compound dominated a mixture, but the mixture fractions were heterogeneous; that is, the fractions of the compounds changed with the concentration of the mixture. Three VOC mixtures were identified by toxicologic mode of action and represented VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10(-3) for about 10% of RIOPA participants. The dependency structures of the VOC mixtures in the RIOPA data set fitted Gumbel (two mixtures) and t copulas (four mixtures). These copula types emphasize dependencies found in the upper and lower tails of a distribution. The copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions.
Specific Aim 3. In an analysis focused on the home environment and the outdoor (close to home) environment, home VOC concentrations dominated personal exposures (66% to 78% of the total exposure, depending on the VOC); this was largely the result of the amount of time participants spent at home and the fact that indoor concentrations were much higher than outdoor concentrations for most VOCs. In a different analysis focused on the sources inside the home and outside (but close to the home), it was assumed that 100% of VOCs from outside sources would penetrate the home. Outdoor VOC sources accounted for 5% (d-limonene) to 81% (carbon tetrachloride [CTC]) of the total exposure. Personal exposure and indoor measurements had similar determinants, depending on the VOC. Gasoline-related VOCs (e.g., benzene and methyl tert-butyl ether [MTBE]) were associated with city, residences with attached garages, pumping gas, wind speed, and home air exchange rate (AER). Odorant- and cleaning-related VOCs (e.g., 1,4-DCB and chloroform) also were associated with city, and with a residence's AER, size, and family members showering. Dry-cleaning- and industry-related VOCs (e.g., tetrachloroethylene [or perchloroethylene, PERC] and trichloroethylene [TCE]) were associated with city, type of water supply to the home, and visits to the dry cleaner. These and other relationships were significant; they explained from 10% to 40% of the variance in the measurements and are consistent with known emission sources and those reported in the literature. Outdoor concentrations of VOCs had only two determinants in common: city and wind speed. Overall, personal exposure was dominated by the home setting, although a large fraction of indoor VOC concentrations was due to outdoor sources. City of residence, personal activities, household characteristics, and meteorology were significant determinants.
Concentrations in RIOPA were considerably lower than levels in the nationally representative NHANES for all VOCs except MTBE and 1,4-DCB. Differences between RIOPA and NHANES results can be explained by contrasts between the sampling designs and staging in the two studies, and by differences in the demographics, smoking, employment, occupations, and home locations. (ABSTRACT TRUNCATED)
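The extreme-value fitting of Specific Aim 1 can be sketched with synthetic data. The values below are simulated stand-ins, not RIOPA measurements: a three-parameter GEV is fitted to the top 10% of simulated "exposures", mirroring the approach the abstract describes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=2000)   # stand-in exposure data

top = np.sort(x)[-x.size // 10:]                    # highest 10% of values
# Three-parameter GEV fit (note scipy's sign convention for the shape)
shape, loc, scale = stats.genextreme.fit(top)
q99 = stats.genextreme.ppf(0.99, shape, loc, scale)  # extreme upper quantile
```

In practice the fitted GEV's upper quantiles would then be compared against those implied by a lognormal fit to the full data, which, per the abstract, tends to underestimate both the level and the likelihood of the extremes.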
Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten
2017-11-01
Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because neonicotinoids share similar modes of action, concentration addition was the predicted model of cumulative mixture toxicity. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity.
Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, the development of better toxicity models for neonicotinoid mixture exposures, and the consideration of mixture effects when setting water quality guidelines for this class of pesticides. Environ Toxicol Chem 2017;36:3091-3101. © 2017 SETAC.
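The concentration-addition null model the study tests against has a simple closed form. The LC50 values below are taken from the abstract, while the equal-proportion mixture ratio is an illustrative assumption, not the paper's tested ratio.

```python
# Single-compound LC50s (ug/L) from the abstract
lc50 = {"imidacloprid": 4.63, "clothianidin": 5.93, "thiamethoxam": 55.34}

def ca_mixture_lc50(proportions, lc50s):
    """Predicted mixture LC50 under strict concentration addition:
    1 / sum_i(p_i / LC50_i), with p_i the proportion of compound i."""
    return 1.0 / sum(p / l for p, l in zip(proportions, lc50s))

# Equal-proportion ternary mixture (illustrative ratio)
pred = ca_mixture_lc50([1 / 3, 1 / 3, 1 / 3], list(lc50.values()))  # ~7.45
```

Synergism, as observed in the study, means the real mixture is lethal at concentrations below such an additive prediction.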
Narukawa, Masaki; Nohara, Katsuhito
2018-04-01
This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.
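The zero-truncation step, conditioning the count distribution on at least one visit, is straightforward to write down. The sketch below shows only that building block with an assumed Poisson rate; it omits the paper's mixing over unobserved individual heterogeneity (gamma or inverse Gaussian).

```python
import numpy as np
from scipy import stats

def zt_poisson_pmf(k, lam):
    """Pmf of a Poisson(lam) truncated at zero: P(K = k | K > 0)."""
    k = np.asarray(k)
    return stats.poisson.pmf(k, lam) / (1.0 - np.exp(-lam))

# Sanity check: the truncated pmf sums to 1 over k = 1, 2, ...
total = zt_poisson_pmf(np.arange(1, 200), 2.5).sum()
```

In the mixture version, the Poisson rate itself varies across respondents, which produces the long right tail typical of trip-count data.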
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.
Prior-knowledge-based spectral mixture analysis for impervious surface mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jinshui; He, Chunyang; Zhou, Yuyu
2014-01-03
In this study, we developed a prior-knowledge-based spectral mixture analysis (PKSMA) to map impervious surfaces by using endmembers derived separately for high- and low-density urban regions. First, an urban area was categorized into high- and low-density urban areas using a multi-step classification method. Next, in high-density urban areas that were assumed to have only vegetation and impervious surfaces (ISs), the vegetation-impervious (V-I) model was used in a spectral mixture analysis (SMA) with three endmembers: vegetation, high albedo, and low albedo. In low-density urban areas, the vegetation-impervious-soil (V-I-S) model was used in an SMA with four endmembers: high albedo, low albedo, soil, and vegetation. The fractions of IS with high and low albedo in each pixel were combined to produce the final IS map. The root mean-square error (RMSE) of the IS map produced using PKSMA was about 11.0%, compared to 14.52% using four-endmember SMA. Particularly in high-density urban areas, PKSMA (RMSE = 6.47%) showed better performance than four-endmember SMA (RMSE = 15.91%). The results indicate that PKSMA can improve IS mapping compared to traditional SMA by using appropriately selected endmembers and is particularly strong in high-density urban areas.
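The per-pixel unmixing step of an SMA can be sketched as constrained least squares. The endmember spectra below are made-up four-band values for the three-endmember (V-I) case, not data from the study.

```python
import numpy as np
from scipy.optimize import nnls

# Columns of E are endmember spectra over 4 bands (illustrative values):
# vegetation, high-albedo impervious, low-albedo impervious
E = np.array([[0.05, 0.60, 0.08],
              [0.08, 0.65, 0.09],
              [0.45, 0.70, 0.10],
              [0.30, 0.72, 0.11]])

true_fracs = np.array([0.3, 0.5, 0.2])
pixel = E @ true_fracs                 # synthetic noise-free mixed pixel

fracs, resid = nnls(E, pixel)          # non-negative least-squares unmixing
is_frac = fracs[1] + fracs[2]          # IS fraction = high + low albedo
```

With noisy real imagery the recovered fractions are only approximate, and a sum-to-one constraint is often enforced by appending an extra row of ones to E.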
Numerical simulation of asphalt mixtures fracture using continuum models
NASA Astrophysics Data System (ADS)
Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz
2018-01-01
The paper considers numerical models of fracture processes in semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components composing the material, i.e. asphalt, aggregate and the air voids. The model directly captures the random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.
Chemical Mixture Risk Assessment Additivity-Based Approaches
This PowerPoint presentation covers additivity-based chemical mixture risk assessment methods. Basic concepts, theory, and example calculations are included. Several slides discuss the use of "common adverse outcomes" in analyzing phthalate mixtures.
Introduction to the special section on mixture modeling in personality assessment.
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.
Predicting the shock compression response of heterogeneous powder mixtures
NASA Astrophysics Data System (ADS)
Fredenburg, D. A.; Thadhani, N. N.
2013-06-01
A model framework for predicting the dynamic shock-compression response of heterogeneous powder mixtures using readily obtained measurements from quasi-static tests is presented. Low-strain-rate compression data are first analyzed to determine the region of the bulk response over which particle rearrangement does not contribute to compaction. This region is then fit to determine the densification modulus of the mixture, σD, a newly defined parameter describing the resistance of the mixture to yielding. The measured densification modulus, reflective of the diverse yielding phenomena that occur at the meso-scale, is implemented into a rate-independent formulation of the P-α model, which is combined with an isobaric equation of state to predict the low- and high-stress dynamic compression response of heterogeneous powder mixtures. The framework is applied to two metal + metal-oxide (thermite) powder mixtures, and good agreement between the model and experiment is obtained for all mixtures at stresses near and above those required to reach full density. At lower stresses, rate dependencies of the constituents, specifically those of the matrix constituent, determine the ability of the model to predict the measured response in the incomplete-compaction regime.
D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.
Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W
2005-12-01
Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
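The D-optimality idea, choosing support points that maximize the determinant of the information matrix, can be illustrated for the simplest possible case: a straight-line dose-response along the mixture ray. This toy sketch is not the paper's nonlinear threshold model, whose optimal designs additionally depend on prior slope and threshold values, as the abstract notes.

```python
import numpy as np
from itertools import combinations

candidates = np.linspace(0.0, 1.0, 11)   # candidate total doses on the ray

def det_info(doses):
    """det(X'X) for the model b0 + b1*dose; larger means lower-variance
    parameter estimates, hence more power."""
    X = np.column_stack([np.ones_like(doses), doses])
    return np.linalg.det(X.T @ X)

# D-optimal two-point design among the candidates
best = max(combinations(candidates, 2), key=lambda d: det_info(np.array(d)))
```

For the straight line the criterion pushes the two support points to the ends of the dose range; for a nonlinear threshold model the analogous computation replaces X with the Jacobian of the model evaluated at the prior parameter guess.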
Space-time latent component modeling of geo-referenced health data.
Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun
2010-08-30
Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Konishi, C.
2014-12-01
A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e., gravel, sand, and clay). A well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle, with the rest of the volume considered as that of the large particle. This simple approach has been commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of another large particle; i.e., only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of considering a gravel-sand-clay mixture model. In the proposed model, I consider the three volume fractions of each component instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosities of the two cases, one in which sand is the smaller particle and the other in which sand is the larger particle, can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as a weighted average of the two cases, weighted by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation.
Furthermore, elastic properties are obtainable from the general Hashin-Shtrikman-Walpole bounds. The results predicted by this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: Part of this study was accomplished with a subsidy from the River Environment Fund of Japan.
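The final prediction chain, volume fractions to effective grain size and effective porosity and then permeability via Kozeny-Carman, can be sketched as follows. All numerical values are illustrative assumptions, and 180 is the usual empirical Kozeny-Carman constant, not a value from the abstract.

```python
import numpy as np

def kozeny_carman(phi_eff, d_eff, c=180.0):
    """Permeability (m^2) from effective porosity and effective grain
    diameter (m) via the Kozeny-Carman relation."""
    return (d_eff**2 / c) * phi_eff**3 / (1.0 - phi_eff)**2

fracs = np.array([0.05, 0.75, 0.20])   # gravel, sand, clay volume fractions
sizes = np.array([1e-2, 2e-4, 1e-6])   # representative grain diameters (m)
d_eff = 1.0 / np.sum(fracs / sizes)    # harmonic (specific-surface) average

k = kozeny_carman(phi_eff=0.25, d_eff=d_eff)
# Note the harmonic average is dominated by the finest (clay) fraction, so
# the choice of effective grain size strongly controls the predicted k
```

The permeability grows steeply with effective porosity (as phi^3/(1-phi)^2) and with the square of the effective grain diameter, which is why small changes in the clay fraction move the prediction so much.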
NASA Astrophysics Data System (ADS)
Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe
2017-08-01
Objective. Functional near-infrared spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCIs). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test, via simulations, a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare it to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of the activation region, expansions of the activation region, and combined contractions and shifts of the activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation-pattern-change simulation: 99% versus <54% two-choice classification accuracy. Significance. We believe GMMAC will be useful for clinical fNIRS-based brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
ERIC Educational Resources Information Center
Henderson, Craig E.; Dakof, Gayle A.; Greenbaum, Paul E.; Liddle, Howard A.
2010-01-01
Objective: We used growth mixture modeling to examine heterogeneity in treatment response in a secondary analysis of 2 randomized controlled trials testing multidimensional family therapy (MDFT), an established evidence-based therapy for adolescent drug abuse and delinquency. Method: The first study compared 2 evidence-based adolescent substance…
Qosa, Hisham; LeVine, Harry; Keller, Jeffrey N; Kaddoumi, Amal
2014-09-01
Senile amyloid plaques are one of the diagnostic hallmarks of Alzheimer's disease (AD). However, the severity of clinical symptoms of AD is weakly correlated with the plaque load. AD symptom severity is reported to be more strongly correlated with the level of soluble amyloid-β (Aβ) assemblies. Formation of soluble Aβ assemblies is stimulated by monomeric Aβ accumulation in the brain, which has been related to its faulty cerebral clearance. Studies tend to focus on the neurotoxicity of specific Aβ species. There are relatively few studies investigating toxic effects of Aβ on the endothelial cells of the blood-brain barrier (BBB). We hypothesized that a soluble Aβ pool more closely resembling the in vivo situation, composed of a mixture of Aβ40 monomer and Aβ42 oligomer, would exert higher toxicity against hCMEC/D3 cells as an in vitro BBB model than either component alone. We observed that, in addition to a disruptive effect on endothelial cell integrity due to enhancement of the paracellular permeability of the hCMEC/D3 monolayer, the Aβ mixture significantly decreased monomeric Aβ transport across the cell culture model. Consistent with its effect on Aβ transport, Aβ mixture treatment for 24 h resulted in LRP1 down-regulation and RAGE up-regulation in hCMEC/D3 cells. The individual Aβ species separately failed to alter Aβ clearance or the cell-based BBB model integrity. Our study offers, for the first time, evidence that a mixture of soluble Aβ species, at nanomolar concentrations, disrupts endothelial cell integrity and its own transport across an in vitro model of the BBB. Copyright © 2014 Elsevier B.V. All rights reserved.
A numerical study of granular dam-break flow
NASA Astrophysics Data System (ADS)
Pophet, N.; Rébillout, L.; Ozeren, Y.; Altinakar, M.
2017-12-01
Accurate prediction of granular flow behavior is essential to optimize mitigation measures for hazardous natural granular flows such as landslides, debris flows and tailings-dam break flows. So far, most successful models for these types of flows focus on either pure granular flows or flows of saturated grain-fluid mixtures by employing a constant friction model or more complex rheological models. These saturated models often produce non-physical results when they are applied to simulate flows of partially saturated mixtures. Therefore, more advanced models are needed. A numerical model was developed for granular flow employing constant friction and μ(I) rheology (Jop et al., J. Fluid Mech. 2005) coupled with a groundwater flow model for seepage flow. The granular flow is simulated by solving a mixture model using the Finite Volume Method (FVM). The Volume-of-Fluid (VOF) technique is used to capture the free surface motion. The constant friction and μ(I) rheological models are incorporated in the mixture model. The seepage flow is modeled by solving the Richards equation. A framework is developed to couple these two solvers in OpenFOAM. The model was validated and tested by reproducing laboratory experiments of partially and fully channelized dam-break flows of dry and initially saturated granular material. To obtain appropriate parameters for the rheological models, a series of simulations with different sets of rheological parameters was performed. The simulation results obtained from the constant friction and μ(I) rheological models are compared with laboratory experiments for the granular free-surface position, front position and velocity field during the flows. The numerical predictions indicate that the proposed model is promising in predicting the dynamics of the flow and deposition process. The proposed model may provide more reliable insight than previously used fully saturated mixture models when saturated and partially saturated portions of the granular mixture coexist.
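The μ(I) rheology the model employs has a simple closed form: the effective friction coefficient rises from a static value μ_s toward a limiting value μ_2 as the inertial number I = γ̇d/√(P/ρ_s) grows. A minimal sketch follows, assuming the standard glass-bead parameter fit from the μ(I) literature rather than the values calibrated in this study:

```python
import math

# mu(I) granular rheology: effective friction grows from MU_S (quasi-static)
# to MU_2 (rapid flow) with the inertial number I. Parameter values are the
# commonly quoted glass-bead fit and are illustrative only.
MU_S, MU_2, I0 = 0.382, 0.643, 0.279

def inertial_number(shear_rate, d, pressure, rho_s):
    # I = gamma_dot * d / sqrt(P / rho_s), with grain diameter d,
    # confining pressure P and grain density rho_s
    return shear_rate * d / math.sqrt(pressure / rho_s)

def mu_I(I):
    # mu(I) = mu_s + (mu_2 - mu_s) / (1 + I0 / I)
    if I <= 0.0:
        return MU_S  # quasi-static limit
    return MU_S + (MU_2 - MU_S) / (1.0 + I0 / I)

def effective_viscosity(shear_rate, d, pressure, rho_s):
    # In a pressure-dependent viscoplastic formulation, the apparent
    # viscosity is eta = mu(I) * P / gamma_dot
    I = inertial_number(shear_rate, d, pressure, rho_s)
    return mu_I(I) * pressure / shear_rate
```

In a mixture-model FVM solver of this kind, `effective_viscosity` is what would be evaluated per cell from the local strain rate and pressure; the friction coefficient equals the midpoint of μ_s and μ_2 exactly at I = I0.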