Process Dissociation and Mixture Signal Detection Theory
ERIC Educational Resources Information Center
DeCarlo, Lawrence T.
2008-01-01
The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely analyzed study. The results suggest that a process other than recollection may be involved in the process dissociation procedure.
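The mixture signal detection idea summarized above can be sketched numerically. This is an illustrative sketch only: the parameter names (`d_attend`, `d_lapse`) and the two-component equal-variance form are my simplification, not DeCarlo's exact notation. The old-item "yes" rate is a probability-weighted mixture of two normal detection components, and a dual-process-style all-or-none component corresponds to letting one sensitivity grow very large.

```python
from scipy.stats import norm

def mixture_sdt_hit_rate(lam, d_attend, d_lapse, c):
    # Old-item "yes" rate under a two-component mixture SDT model:
    # with probability lam the item is processed with sensitivity d_attend,
    # otherwise with sensitivity d_lapse; c is the response criterion.
    return lam * norm.sf(c - d_attend) + (1 - lam) * norm.sf(c - d_lapse)

def false_alarm_rate(c):
    # New-item "yes" rate: the noise distribution is standard normal.
    return norm.sf(c)

# With lam = 1 this collapses to ordinary equal-variance SDT; letting
# d_attend grow without bound mimics an all-or-none (dual-process) component.
```

With `lam = 1` the hit rate reduces to the usual single-distribution SDT prediction, which is the sense in which the dual process model appears as a special case.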
NASA Astrophysics Data System (ADS)
Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.
2017-03-01
We have developed and implemented in software a mathematical model of the nonstationary separation processes that proceed in cascades of gas centrifuges during the separation of multicomponent isotope mixtures. Using this model, the parameters of the separation process for germanium isotopes have been calculated. It has been shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters during the separation of multicomponent isotope mixtures.
Molenaar, Dylan; de Boeck, Paul
2018-06-01
In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.
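A minimal numerical sketch of the response-mixture idea described above, assuming a logistic mixing weight in log response time and two 1PL-style item response curves. All names and functional forms here are illustrative simplifications, not the authors' exact specification.

```python
import numpy as np

def response_prob(theta, b_fast, b_slow, log_rt, gamma0, gamma1):
    # Two-process response mixture: the probability of using the "fast"
    # process is logistic in log response time; each process is a
    # 1PL-style item response curve with its own difficulty parameter.
    w = 1.0 / (1.0 + np.exp(-(gamma0 + gamma1 * log_rt)))  # P(fast process)
    p_fast = 1.0 / (1.0 + np.exp(-(theta - b_fast)))
    p_slow = 1.0 / (1.0 + np.exp(-(theta - b_slow)))
    return w * p_fast + (1.0 - w) * p_slow
```

Because the weight `w` varies with the response time of each individual item, the item characteristics become heterogeneous within a subject across response times, which is the effect the response mixture model accounts for.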
ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J
2014-07-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
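The gamma-Poisson construction underlying the NB process can be checked with a quick simulation. This sketch only illustrates the textbook marginalization (a gamma-distributed Poisson rate yields negative binomial counts), not the full nonparametric process machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Negative binomial counts arise by marginalizing a gamma-distributed
# Poisson rate: lambda ~ Gamma(shape=r, scale=p/(1-p)) and
# n | lambda ~ Poisson(lambda) give n ~ NB(r, p), with mean r*p/(1-p).
r, p = 3.0, 0.4
lam = rng.gamma(shape=r, scale=p / (1 - p), size=200_000)
n = rng.poisson(lam)
print(n.mean())  # should be close to r*p/(1-p) = 2.0
```

The overdispersion of the resulting counts (variance exceeding the mean) is exactly what makes the NB family useful for count modeling where a plain Poisson is too rigid.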
Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien
2012-01-01
Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powders over the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.
Numerical study of underwater dispersion of dilute and dense sediment-water mixtures
NASA Astrophysics Data System (ADS)
Chan, Ziying; Dao, Ho-Minh; Tan, Danielle S.
2018-05-01
As part of the nodule-harvesting process, sediment tailings are released underwater. Because the tailings cloud the water for a long period while settling, this presents a significant environmental and ecological concern. One possible solution is to release a mixture of sediment tailings and seawater, with the aim of reducing the settling duration as well as the amount of spreading. In this paper, we present results of numerical simulations using the smoothed particle hydrodynamics (SPH) method to model the release of a fixed volume of pre-mixed sediment-water mixture into a larger body of quiescent water. The sediment-water mixture and the “clean” water are modeled as two different fluids, with the concentration-dependent bulk properties of the sediment-water mixture adjusted according to the initial solids concentration. This numerical model was validated in a previous study, which indicated significant differences in the dispersion and settling process between dilute and dense mixtures, and suggested that a dense mixture may be preferable. For this study, we investigate a wider range of volumetric concentrations with the aim of determining the optimum volumetric concentration, as well as its overall effectiveness compared to the original process (100% sediment).
Phylogenetic mixtures and linear invariants for equal input models.
Casanellas, Marta; Steel, Mike
2017-04-01
The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants') on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-)linear algebra, our results build on a classic result first established by James Lake (Mol Biol Evol 4:167-191, 1987).
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 that allows for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols, such as multiple observer sampling, removal sampling, and capture-recapture, produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as Mb and Mh, and other classes of models that are only possible to describe within the multinomial N-mixture framework.
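As an illustration of how a sampling protocol induces multinomial cell probabilities, here is a sketch for removal sampling; the function name is mine, but the first-capture probability formula is the standard one.

```python
import numpy as np

def removal_cell_probs(p, K):
    # K-occasion removal protocol: an individual is first captured on
    # occasion k with probability p * (1-p)^(k-1); the remaining mass
    # (1-p)^K is "never captured", giving a (K+1)-cell multinomial.
    pi = np.array([p * (1.0 - p) ** (k - 1) for k in range(1, K + 1)])
    return pi, (1.0 - p) ** K

pi, p0 = removal_cell_probs(0.3, 3)   # pi = [0.3, 0.21, 0.147], p0 = 0.343
```

Conditioning the observed capture frequencies on a latent abundance N (itself given, say, a Poisson prior) then yields the multinomial N-mixture likelihood described in the abstract.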
Cluster kinetics model for mixtures of glassformers
NASA Astrophysics Data System (ADS)
Brenskelle, Lisa A.; McCoy, Benjamin J.
2007-10-01
For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.
Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.
Yu, Kezi; Quirk, J Gerald; Djurić, Petar M
2017-01-01
In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with and without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.
On Boiling of Crude Oil under Elevated Pressure
NASA Astrophysics Data System (ADS)
Pimenova, Anastasiya V.; Goldobin, Denis S.
2016-02-01
We construct a thermodynamic model for theoretical calculation of the boiling process of multicomponent mixtures of hydrocarbons (e.g., crude oil). The model governs the kinetics of the mixture composition in the course of the distillation process along with the boiling temperature increase. The model relies heavily on the theory of dilute solutions of gases in liquids. Importantly, our results are applicable for modelling the process under elevated pressure (while the empirical models for oil cracking are not scalable to the case of extreme pressure), such as in an oil field heated by lava intrusions.
Flash-point prediction for binary partially miscible mixtures of flammable solvents.
Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng
2008-05-30
Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.
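For fully miscible ideal mixtures, the classical Le Chatelier-style baseline that flash-point models of this kind extend can be sketched as follows. The Antoine constants used in the sanity check are hypothetical, and this sketch deliberately omits the activity coefficients and the liquid-liquid phase split that the paper's model addresses.

```python
def antoine(A, B, C, T):
    # Vapour pressure from the Antoine equation (T in deg C); the pressure
    # units cancel in the ratio used below.
    return 10.0 ** (A - B / (T + C))

def mixture_flash_point(x, antoine_params, T_fp_pure, lo=-50.0, hi=200.0):
    # Le Chatelier-style flash point of an ideal, fully miscible mixture:
    # solve sum_i x_i * P_i(T) / P_i(T_fp,i) = 1 for T by bisection.
    def g(T):
        return sum(
            xi * antoine(*ap, T) / antoine(*ap, Tfp)
            for xi, ap, Tfp in zip(x, antoine_params, T_fp_pure)
        ) - 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Sanity check with hypothetical Antoine constants: a "mixture" of a single
# component must return that component's own flash point.
T = mixture_flash_point([1.0], [(8.0, 1600.0, 230.0)], [11.0])
```

For partially miscible pairs such as methanol+octane, the composition of each liquid phase must first be resolved before applying a rule of this kind, which is the complication the paper's model handles.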
Prediction of the properties anhydrite construction mixtures based on neural network approach
NASA Astrophysics Data System (ADS)
Fedorchuk, Y. M.; Zamyatin, N. V.; Smirnov, G. V.; Rusina, O. N.; Sadenova, M. A.
2017-08-01
The article considers the application of a neural network modeling approach for predicting, from the components of anhydrite mixtures, the properties of construction products based on fluoranhydrite, as part of managing the associated technological processes.
The nonlinear model for emergence of stable conditions in gas mixture in force field
NASA Astrophysics Data System (ADS)
Kalutskov, Oleg; Uvarova, Liudmila
2016-06-01
The case of M-component liquid evaporation from a straight cylindrical capillary into an N-component gas mixture in the presence of external forces was reviewed. It is assumed that the gas mixture is not ideal. Stable states in the gas phase can form during the evaporation process for certain model parameter values because of the nonlinearity of the initial mass transfer equations. The critical concentrations of the resulting gas mixture components (the critical component concentrations at which the stable states occur in the mixture) were determined mathematically for the case of single-component fluid evaporation into a two-component atmosphere. It was concluded that this equilibrium concentration ratio of the mixture components can be achieved through the influence of external forces on the mass transfer processes. This is one of the ways to create sustainable gas clusters that can be used effectively in modern nanotechnology.
Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution
NASA Astrophysics Data System (ADS)
Baldacchino, Tara; Worden, Keith; Rowson, Jennifer
2017-02-01
A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.
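The robustness argument for the Student-t can be illustrated with a toy comparison. This shows only the location-estimation intuition (heavy tails down-weight outliers), not the full variational mixture-of-experts machinery.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# 5% gross outliers contaminate a standard-normal sample; the Gaussian
# location estimate (the sample mean) is pulled toward them, while the
# Student-t maximum-likelihood location stays near the bulk of the data.
data = np.concatenate([rng.normal(0.0, 1.0, 500), np.full(25, 30.0)])

mean_gauss = data.mean()                 # biased by the outliers
df, loc_t, scale_t = stats.t.fit(data)   # robust location estimate
```

The same mechanism operates inside the proposed model: Student-t gates and experts keep individual outlying observations from dominating the fitted regression surfaces.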
Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin
2018-01-01
The joint toxicity of chemical mixtures has emerged as a popular topic, particularly regarding the additive and potential synergistic actions of environmental mixtures. We investigated the 24 h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and the 96 h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models, which make different assumptions about the mode of toxic action in toxicodynamic processes, through single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the mode of toxic action may depend on the combinations and concentrations of the tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions were observed for the Cd-Pb mixtures, and slight antagonistic interactions were observed for the Cu-Zn mixtures. These results illustrated that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures.
Numerical simulation of asphalt mixtures fracture using continuum models
NASA Astrophysics Data System (ADS)
Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz
2018-01-01
The paper considers numerical models of fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.
Adsorption Processes of Lead Ions on the Mixture Surface of Bentonite and Bottom Sediments.
Hegedűsová, Alžbeta; Hegedűs, Ondrej; Tóth, Tomáš; Vollmannová, Alena; Andrejiová, Alena; Šlosár, Miroslav; Mezeyová, Ivana; Pernyeszi, Tímea
2016-12-01
The adsorption of contaminants plays an important role in the process of their elimination from a polluted environment. This work describes the issue of loading the environment with lead Pb(II) and the resulting negative impact it has on plants and living organisms. It also focuses on bentonite as a natural adsorbent and on the adsorption process of Pb(II) ions on a mixture of bentonite and bottom sediment from the water reservoir in Kolíňany (SR). The equilibrium and kinetic experimental data were evaluated using the Langmuir isotherm, the pseudo-first- and pseudo-second-order kinetic rate equations, and the intraparticle and surface diffusion models. The Langmuir isotherm model was successfully used to characterize the lead-ion adsorption equilibrium on the mixture of bentonite and bottom sediment. The pseudo-second-order model together with the intraparticle and surface (film) diffusion models could be simultaneously fitted to the experimental kinetic data.
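The pseudo-second-order fit mentioned above is usually carried out on the linearized form t/q_t = 1/(k2*qe^2) + t/qe. A generic sketch with synthetic data (not the paper's measurements):

```python
import numpy as np

def pseudo_second_order_fit(t, qt):
    # Linearized pseudo-second-order kinetics: t/q_t = 1/(k2*qe^2) + t/qe.
    # A straight-line fit of t/q_t against t gives slope = 1/qe and
    # intercept = 1/(k2*qe^2), hence k2 = slope^2 / intercept.
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept
    return qe, k2

# Noise-free synthetic uptake curve generated from qe = 2.5, k2 = 0.1;
# the fit should recover both parameters.
qe_true, k2_true = 2.5, 0.1
t = np.arange(1.0, 11.0)
qt = qe_true ** 2 * k2_true * t / (1.0 + qe_true * k2_true * t)
qe, k2 = pseudo_second_order_fit(t, qt)
```

With real data, the quality of the straight-line fit of t/q_t versus t is the usual diagnostic for whether the pseudo-second-order model is appropriate.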
Karabatsos, George
2017-02-01
Much of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by the data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone, menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and the model's predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution for selected functionals and values of covariates. The software is illustrated through a BNP regression analysis of real data.
Wan, Wai-Yin; Chan, Jennifer S K
2009-08-01
For time series of count data, correlated measurements, clustering, and excessive zeros occur simultaneously in biomedical applications. Ignoring such effects might lead to misleading treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally developed for modelling positive continuous data and was extended to handle count data. These models are motivated by evaluating the trend development of new tumour counts for bladder cancer patients, as well as by identifying useful covariates which affect the count level. The models are implemented using Bayesian methods with Markov chain Monte Carlo (MCMC) algorithms and are assessed using the deviance information criterion (DIC).
Nielsen, J D; Dean, C B
2008-09-01
A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.
López, Alejandro; Coll, Andrea; Lescano, Maia; Zalazar, Cristina
2017-05-05
In this work, the suitability of the UV/H2O2 process for degradation of a commercial herbicide mixture was studied. Glyphosate, the most widely used herbicide in the world, was mixed with other herbicides that have residual activity, such as 2,4-D and atrazine. Modeling of the process response with respect to specific operating conditions, such as the initial pH and the initial H2O2 to total organic carbon molar ratio, was assessed by response surface methodology (RSM). Results showed that a second-order polynomial regression model could describe and predict the system behavior well within the tested experimental region. It also correctly explained the variability in the experimental data. Experimental values were in good agreement with the modeled ones, confirming the significance of the model and highlighting the success of RSM for UV/H2O2 process modeling. Phytotoxicity evolution throughout the photolytic degradation process was checked through germination tests, indicating that the phytotoxicity of the herbicide mixture was significantly reduced after the treatment. The end point for the treatment at the operating conditions for maximum TOC conversion was also identified.
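A second-order RSM model of the kind described is an ordinary least-squares fit of a full quadratic in the factors. A generic sketch follows; associating the two factors with initial pH and the oxidant-to-TOC ratio is only by analogy with the abstract, and the data here are synthetic.

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    # Full second-order response surface in two factors:
    # y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Noise-free sanity check on a 3x3 factorial design: the fit must
# recover the generating coefficients.
X = np.array([[a, b] for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)])
true_coef = np.array([1.0, 2.0, -1.0, 0.5, 0.3, 0.2])
y = np.column_stack([np.ones(9), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]]) @ true_coef
coef = fit_quadratic_rsm(X, y)
```

Once fitted, the stationary point of the quadratic surface gives the candidate optimum operating conditions, which is how RSM is typically used to locate, e.g., maximum TOC conversion.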
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Zhili; Shneider, Mikhail N.
2010-03-15
This paper presents experimental measurements and a computational model of sodium plasma decay processes in a mixture of sodium and argon, using radar resonance-enhanced multiphoton ionization (REMPI), i.e., coherent microwave Rayleigh scattering from the REMPI plasma. A single laser beam resonantly ionizes the sodium atoms via a 2+1 REMPI process; the beam ionizes only the sodium atoms, with negligible ionization of argon. Coherent microwave scattering measures, in situ, the total electron number in the laser-induced plasma. Since the sodium ions decay by recombination with electrons, microwave scattering directly measures the plasma decay processes of the sodium ions. A theoretical plasma dynamic model, including REMPI of the sodium and electron avalanche ionization (EAI) of sodium and argon in the gas mixture, has been developed. It confirms that the EAI of argon is several orders of magnitude weaker than the REMPI of sodium. The theoretical prediction for the decay of the sodium plasma in the mixture matches the experimental measurement.
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Although fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimates, since yield calculation typically requires many SPICE simulations and circuit SPICE simulation accounts for the largest proportion of the time spent in yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over both the design variables and the process variables: SPICE simulation provides a set of sample points, on which the mixture surrogate model is trained with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the calculation of failure rate. Based on the model, we developed a further accelerated algorithm to enhance the speed of yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
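The surrogate-based yield loop can be sketched as follows; a toy quadratic function stands in for the expensive SPICE margin simulation, and a plain least-squares quadratic surrogate stands in for the paper's lasso-trained mixture surrogate (all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def spice_margin(v):
    """Stand-in for an expensive SPICE read-margin simulation (hypothetical):
    margin degrades quadratically with process-variable deviations."""
    return 1.0 - 0.05 * np.sum(v**2, axis=-1)

# Train a cheap surrogate from a limited budget of "SPICE" runs.
train = rng.normal(size=(200, 4))                 # 4 process variables
ytrain = spice_margin(train)
feats = lambda v: np.column_stack([np.ones(len(v)), v, v**2])
w, *_ = np.linalg.lstsq(feats(train), ytrain, rcond=None)

# Monte Carlo failure-rate estimate using the surrogate instead of SPICE.
mc = rng.normal(size=(200_000, 4))
fail_rate = np.mean(feats(mc) @ w < 0.0)          # failure: margin below zero
```

The speed-up comes from evaluating the surrogate, not SPICE, inside the Monte Carlo (or importance-sampling) loop; the same surrogate can then be re-fit as design variables change during optimization.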
A stochastic evolutionary model generating a mixture of exponential distributions
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2016-02-01
Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
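A two-component exponential mixture of the kind the urn model generates can be fitted with a standard EM loop; a minimal numpy sketch on synthetic survival times (the rates and mixing weight are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic survival times from a two-component exponential mixture
# (rates 0.2 and 2.0, mixing weight 0.6) -- hypothetical data.
n = 4000
z = rng.random(n) < 0.6
x = np.where(z, rng.exponential(1/0.2, n), rng.exponential(1/2.0, n))

# EM for a K=2 exponential mixture.
w, lam = np.array([0.5, 0.5]), np.array([0.1, 1.0])
for _ in range(200):
    dens = w * lam * np.exp(-np.outer(x, lam))       # (n, 2) component densities
    resp = dens / dens.sum(axis=1, keepdims=True)    # E-step: responsibilities
    w = resp.mean(axis=0)                            # M-step: mixing weights
    lam = resp.sum(axis=0) / (resp * x[:, None]).sum(axis=0)  # M-step: rates
```

The corresponding survival function is the weighted sum of exponential survival curves, which is the form matched against the query data in the paper.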
Modeling of active transmembrane transport in a mixture theory framework.
Ateshian, Gerard A; Morrison, Barclay; Hung, Clark T
2010-05-01
This study formulates governing equations for active transport across semi-permeable membranes within the framework of the theory of mixtures. In mixture theory, which models the interactions of any number of fluid and solid constituents, a supply term appears in the conservation of linear momentum to describe momentum exchanges among the constituents. In past applications, this momentum supply was used to model frictional interactions only, thereby describing passive transport processes. In this study, it is shown that active transport processes, which impart momentum to solutes or solvent, may also be incorporated in this term. By projecting the equation of conservation of linear momentum along the normal to the membrane, a jump condition is formulated for the mechano-electrochemical potential of fluid constituents which is generally applicable to nonequilibrium processes involving active transport. The resulting relations are simple and easy to use, and address an important need in the membrane transport literature.
Estimating Lion Abundance using N-mixture Models for Social Species
Belant, Jerrold L.; Bled, Florent; Wilton, Clay M.; Fyumagwa, Robert; Mwampeta, Stanslaus B.; Beyer, Dean E.
2016-01-01
Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170–551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species. PMID:27786283
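The core of an N-mixture model is marginalizing the latent site abundance N out of a binomial observation model; a minimal grid-search sketch (the counts and grids are hypothetical, and the paper's hierarchical group-response layer is omitted):

```python
from math import comb, exp, factorial, log

def site_lik(y, lam, p, n_max=60):
    """N-mixture likelihood for one site's repeated counts y:
    N ~ Poisson(lam), each count ~ Binomial(N, p), N marginalized out."""
    tot = 0.0
    for N in range(max(y), n_max + 1):
        pois = exp(-lam) * lam**N / factorial(N)
        binom = 1.0
        for c in y:
            binom *= comb(N, c) * p**c * (1 - p)**(N - c)
        tot += pois * binom
    return tot

# Hypothetical survey: 3 repeated counts at each of 2 sites.
data = [[4, 6, 5], [2, 3, 2]]
grid = [(2 + 0.5*i, 0.05*j) for i in range(36) for j in range(2, 19)]
lam_hat, p_hat = max(grid,
                     key=lambda g: sum(log(site_lik(y, *g)) for y in data))
```

Real analyses replace the grid with MCMC or numerical optimization and let lam and p depend on covariates such as landcover and luminosity.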
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphics processing units to accelerate computation. The computational advantages of our algorithms are demonstrated on various simulated data examples of moderate dimension, in which we compare our stochastic search with a Markov chain Monte Carlo algorithm. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm both in computing times and in the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
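The Dirichlet process prior underlying such mixtures induces the Chinese restaurant process over cluster assignments; a minimal sketch of drawing a partition from it (this shows only the prior, not the authors' shotgun stochastic search):

```python
import random

def crp_partition(n, alpha, rng=random.Random(42)):
    """Draw a partition of n items from the Chinese restaurant process,
    the clustering prior implied by a Dirichlet process with concentration alpha."""
    tables = []   # tables[k] = number of items currently in cluster k
    assign = []
    for i in range(n):
        # Item i joins existing cluster k with prob tables[k]/(i + alpha),
        # or opens a new cluster with prob alpha/(i + alpha).
        r = rng.random() * (i + alpha)
        acc = 0.0
        for k, size in enumerate(tables):
            acc += size
            if r < acc:
                tables[k] += 1
                assign.append(k)
                break
        else:
            tables.append(1)
            assign.append(len(tables) - 1)
    return assign, tables

assign, tables = crp_partition(500, alpha=2.0)
```

In the full model, each cluster drawn this way carries its own decomposable graph and covariance matrix; the stochastic search explores assignments and graphs jointly.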
A 3-Component Mixture of Rayleigh Distributions: Properties and Estimation in Bayesian Framework
Aslam, Muhammad; Tahir, Muhammad; Hussain, Zawar; Al-Zahrani, Bander
2015-01-01
To study the lifetimes of certain engineering processes, a lifetime model that can accommodate the nature of such processes is desired. Mixture models of underlying lifetime distributions are intuitively more appropriate and appealing for modeling the heterogeneous nature of a process than simple models. This paper studies a 3-component mixture of Rayleigh distributions from a Bayesian perspective. A censored sampling environment is considered owing to its popularity in reliability theory and survival analysis. Expressions for the Bayes estimators and their posterior risks are derived under different scenarios. For the case that no or little prior information is available, elicitation of hyperparameters is given. To examine, numerically, the performance of the Bayes estimators using non-informative and informative priors under different loss functions, we have simulated their statistical properties for different sample sizes and test termination times. In addition, to highlight the practical significance, an illustrative example based on real-life engineering data is also given. PMID:25993475
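For intuition, the Rayleigh scale admits a conjugate inverse-gamma update on sigma-squared; a complete-data sketch with known component labels (the priors and scales are illustrative, and the mixture-label uncertainty handled in the paper via censored sampling is ignored here):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical complete-data setting: component labels known, so each
# Rayleigh scale gets its own conjugate inverse-gamma update on sigma^2.
sigmas = [1.0, 2.0, 4.0]
samples = [rng.rayleigh(s, 500) for s in sigmas]

def posterior_mean_sigma2(x, a0=2.0, b0=1.0):
    """IG(a0, b0) prior on sigma^2; the Rayleigh likelihood
    prop. to (sigma^2)^(-n) exp(-sum(x^2)/(2 sigma^2)) gives
    posterior IG(a0 + n, b0 + sum(x^2)/2); return its mean."""
    a = a0 + len(x)
    b = b0 + np.sum(x**2) / 2
    return b / (a - 1)

est = [np.sqrt(posterior_mean_sigma2(x)) for x in samples]
```

With unknown labels, the same update appears inside each Gibbs sweep, weighted by the latent component memberships.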
M. M. Clark; T. H. Fletcher; R. R. Linn
2010-01-01
The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...
NASA Astrophysics Data System (ADS)
Wei, Haiqiao; Zhao, Wanhui; Zhou, Lei; Chen, Ceyuan; Shu, Gequn
2018-03-01
Large eddy simulation coupled with the linear eddy model (LEM) is employed for the simulation of n-heptane spray flames to investigate the low-temperature ignition and combustion process in a constant-volume combustion vessel under diesel-engine-relevant conditions. Parametric studies are performed to give a comprehensive understanding of the ignition processes. The non-reacting case is first carried out to validate the present model by comparing the predicted results with the experimental data from the Engine Combustion Network (ECN). Good agreement is observed in terms of liquid and vapour penetration length, as well as the mixture fraction distributions at different times and different axial locations. For the reacting cases, the flame index is introduced to distinguish between premixed and non-premixed combustion. A reaction region (RR) parameter is used to investigate the ignition and combustion characteristics, and to distinguish the different combustion stages. Results show that a two-stage combustion process can be identified in spray flames, and different ignition positions in the mixture fraction versus RR space are well described at low and high initial ambient temperatures. At an initial temperature of 850 K, the first-stage ignition is initiated in the fuel-lean region, followed by reactions in fuel-rich regions; high-temperature reaction then occurs mainly at locations with mixture concentration around the stoichiometric mixture fraction. At an initial temperature of 1000 K, the first-stage ignition occurs in the fuel-rich region first and then moves towards still richer regions, after which the high-temperature reactions move back to the stoichiometric mixture fraction region. For all of the initial temperatures considered, high-temperature ignition kernels are initiated in regions richer than the stoichiometric mixture fraction, and increasing the initial ambient temperature moves the high-temperature ignition kernels towards richer mixture regions. After the spray flame becomes quasi-steady, most heat is released in the stoichiometric mixture fraction regions. In addition, combustion mode analysis based on key intermediate species illustrates three-mode combustion processes in diesel spray flames.
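The flame index used above is conventionally computed from the alignment of fuel and oxidizer mass-fraction gradients (Takeno-style); a 1-D toy sketch with synthetic profiles, not the paper's LES fields:

```python
import numpy as np

# Takeno-style flame index: sign of grad(Y_fuel) . grad(Y_oxidizer)
# distinguishes premixed (>0) from non-premixed (<0) burning.
x = np.linspace(0.0, 1.0, 101)
Y_f = np.exp(-((x - 0.4) / 0.1) ** 2)            # synthetic fuel mass fraction
Y_o = 1.0 / (1.0 + np.exp((x - 0.5) / 0.05))     # oxidizer decays across the front
fi = np.gradient(Y_f, x) * np.gradient(Y_o, x)
modes = np.sign(fi)                              # +1 premixed-like, -1 diffusion-like
```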
Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong
2018-01-10
This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.
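The generalized partial credit side of the two-class mixture has a compact closed form for the category probabilities; a minimal sketch (the parameter values are invented for illustration):

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Generalized partial credit model: P(X = k | theta) for one item with
    discrimination a and step parameters b (length m); returns m+1 probabilities,
    with P(k) proportional to exp(sum_{j<=k} a*(theta - b_j))."""
    steps = np.concatenate([[0.0], np.cumsum(a * (theta - np.asarray(b)))])
    expnum = np.exp(steps - steps.max())   # subtract max for numerical stability
    return expnum / expnum.sum()

# Hypothetical 5-category Likert item; here the middle category is treated
# as just another ordered step, as in the mixture's first class.
p = gpcm_probs(theta=0.5, a=1.2, b=[-1.5, -0.5, 0.5, 1.5])
```

The mixture's second class would instead route the middle category through an IRTree branch for willingness to respond.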
Premixed flame propagation in combustible particle cloud mixtures
NASA Technical Reports Server (NTRS)
Seshadri, K.; Yang, B.
1993-01-01
The structures of premixed flames propagating in combustible systems containing uniformly distributed volatile fuel particles in an oxidizing gas mixture are analyzed. The experimental results show that steady flame propagation occurs even if the initial equivalence ratio of the combustible mixture, based on the gaseous fuel available in the particles, phi(u), is substantially larger than unity. A model is developed to explain these experimental observations. In the model it is presumed that the fuel particles vaporize first to yield a gaseous fuel of known chemical composition, which then reacts with oxygen in a one-step overall process. It is shown that the interplay of vaporization kinetics and the oxidation process can result in steady flame propagation in combustible mixtures where the value of phi(u) is substantially larger than unity. This prediction is in agreement with experimental observations.
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
Weaker Ligands Can Dominate an Odor Blend due to Syntopic Interactions
2013-01-01
Most odors in natural environments are mixtures of several compounds. Perceptually, these can blend into a new “perfume,” or some components may dominate as elements of the mixture. In order to understand such mixture interactions, it is necessary to study the events at the olfactory periphery, down to the level of single-odorant receptor cells. Does a strong ligand present at a low concentration outweigh the effect of weak ligands present at high concentrations? We used the fruit fly receptor dOr22a and a banana-like odor mixture as a model system. We show that an intermediate ligand at an intermediate concentration alone elicits the neuron’s blend response, despite the presence of both weaker ligands at higher concentration, and of better ligands at lower concentration in the mixture. Because all of these components, when given alone, elicited significant responses, this reveals specific mixture processing already at the periphery. By measuring complete dose–response curves we show that these mixture effects can be fully explained by a model of syntopic interaction at a single-receptor binding site. Our data have important implications for how odor mixtures are processed in general, and what preprocessing occurs before the information reaches the brain. PMID:23315042
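The single-binding-site (syntopic) competition model described above has a compact closed form: occupancy-weighted efficacies over total occupancy. A sketch with invented constants showing how an intermediate ligand can set the blend response:

```python
def syntopic_response(conc, K, eff, r_max=1.0):
    """Single-site (syntopic) competition model: all ligands compete for the
    same receptor site, so the mixture response is the occupancy-weighted
    sum of efficacies divided by (1 + total occupancy drive)."""
    drive = sum(c / k * e for c, k, e in zip(conc, K, eff))
    occ = sum(c / k for c, k in zip(conc, K))
    return r_max * drive / (1.0 + occ)

# Hypothetical blend stand-in: strong ligand at low concentration,
# weak ligand at high concentration, intermediate ligand in between.
K    = [0.01, 1.0, 100.0]   # dissociation constants (strong -> weak)
eff  = [1.0, 0.9, 0.3]      # efficacies
conc = [0.001, 5.0, 50.0]
blend = syntopic_response(conc, K, eff)
mid_alone = syntopic_response([0.0, 5.0, 0.0], K, eff)
```

With these (illustrative) numbers, the intermediate ligand alone nearly reproduces the full blend response, mirroring the dOr22a finding.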
Song, Mingkai; Cui, Linlin; Kuang, Han; Zhou, Jingwei; Yang, Pengpeng; Zhuang, Wei; Chen, Yong; Liu, Dong; Zhu, Chenjie; Chen, Xiaochun; Ying, Hanjie; Wu, Jinglan
2018-08-10
An intermittent simulated moving bed (3F-ISMB) operation scheme, an extension of 3W-ISMB to the non-linear adsorption region, has been introduced for separation of a glucose, lactic acid and acetic acid ternary mixture. This work focuses on exploring the feasibility of the proposed process theoretically and experimentally. Firstly, the real 3F-ISMB model, coupled with the transport dispersive model (TDM) and the Modified-Langmuir isotherm, was established to build up the separation parameter plane. Subsequently, three operating conditions were selected from the plane to run the 3F-ISMB unit, and the experimental results were used to verify the model. Afterwards, the influences of the various flow rates on the separation performances were investigated systematically by means of the validated 3F-ISMB model. The intermittently retained component, lactic acid, was finally obtained with a purity of 98.5%, a recovery of 95.5% and an average concentration of 38 g/L. The proposed 3F-ISMB process can efficiently separate a mixture with low selectivity into three fractions.
Functional mixture regression.
Yao, Fang; Fu, Yuejiao; Lee, Thomas C M
2011-04-01
In functional linear models (FLMs), the relationship between the scalar response and the functional predictor process is often assumed to be identical for all subjects. Motivated by both practical and methodological considerations, we relax this assumption and propose a new class of functional regression models that allow the regression structure to vary for different groups of subjects. By projecting the predictor process onto its eigenspace, the new functional regression model is simplified to a framework that is similar to classical mixture regression models. This leads to the proposed approach, named functional mixture regression (FMR). The estimation of FMR can be readily carried out using existing software implemented for functional principal component analysis and mixture regression. The practical necessity and performance of FMR are illustrated through applications to a longevity analysis of female medflies and a human growth study. Theoretical investigations concerning the consistent estimation and prediction properties of FMR, along with simulation experiments illustrating its empirical properties, are presented in the supplementary material available at Biostatistics online. The corresponding results demonstrate that the proposed approach could potentially achieve substantial gains over traditional FLMs.
NASA Astrophysics Data System (ADS)
Gulliver, Eric A.
The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed-particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop-and-roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and were free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations.
Control chart analysis showed that Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered particle-size distributions or mixture composition. Control charts based on tessellation measurements were used for direct, quantitative comparisons between real and simulated mixtures. Four sets of simulated and real mixtures were examined. Data from real mixtures matched simulated data when the samples were well mixed and the particle size distributions and volume fractions of the components were identical. Analysis of mixture components that occupied less than approximately 10 vol% of the mixture was not practical unless the particle size of the component was extremely small and excellent-quality, high-resolution compositional micrographs of the real sample were available. These methods of analysis should allow future researchers to systematically evaluate and predict the impact and importance of variables such as component volume fraction and component particle size distribution as they pertain to the uniformity of powder mixture microstructures.
Silva-Fernandes, Talita; Duarte, Luís Chorão; Carvalheiro, Florbela; Loureiro-Dias, Maria Conceição; Fonseca, César; Gírio, Francisco
2015-05-01
This work studied the processing of biomass mixtures containing three lignocellulosic materials largely available in Southern Europe: eucalyptus residues (ER), wheat straw (WS) and olive tree pruning (OP). The mixtures were chemically characterized, and their pretreatment by autohydrolysis was evaluated over a severity factor (logR0) range of 1.73 to 4.24. A simple modeling strategy was used to optimize the autohydrolysis conditions based on the chemical characterization of the liquid fraction, and the solid fraction was characterized to quantify its polysaccharide and lignin content. The pretreatment conditions for maximal saccharide recovery in the liquid fraction fell in the severity range (logR0) of 3.65-3.72, independently of the mixture tested, which suggests that autohydrolysis can effectively process mixtures of lignocellulosic materials for further biochemical conversion processes.
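The severity factor used above is commonly the Overend-Chornet form, logR0 = log10(t * exp((T - 100)/14.75)) for an isothermal step; a small sketch checking which holding schedules land near the reported optimum (the schedules are illustrative, not the paper's):

```python
import math

def log_severity(t_min, T_celsius, omega=14.75, T_ref=100.0):
    """Severity factor logR0 = log10( t * exp((T - Tref)/omega) ) for an
    isothermal autohydrolysis step (Overend-Chornet form, omega = 14.75)."""
    return math.log10(t_min * math.exp((T_celsius - T_ref) / omega))

# Which schedules land near the reported optimal logR0 of 3.65-3.72?
s180 = log_severity(20, 180)   # 20 min at 180 degrees C
s190 = log_severity(10, 190)   # 10 min at 190 degrees C
```

Both illustrative schedules sit close to logR0 of about 3.65, showing how different time-temperature pairs can deliver the same severity.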
Constituent bioconcentration in rainbow trout exposed to a complex chemical mixture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linder, G.; Bergman, H.L.; Meyer, J.S.
1984-09-01
Classically, aquatic contaminant fate models predicting a chemical's bioconcentration factor (BCF) are based upon single-compound-derived models, yet such BCF predictions may deviate from observed BCFs when physicochemical interactions or biological responses to complex chemical mixture exposures are not adequately considered in the predictive model. Rainbow trout were exposed to oil-shale retort waters. The study was designed to model the potential biological effects produced by exposure to complex chemical mixtures such as solid waste leachates, agricultural runoff, and industrial process waste waters. Chromatographic analysis of aqueous and nonaqueous liquid-liquid reservoir components yielded differences in mixed-extraction-solvent HPLC profiles of whole fish exposed for 1 and 3 weeks to the highest dilution of the complex chemical mixture when compared to their corresponding control, yet subsequent whole-fish extractions at 6, 9, 12, and 15 weeks into exposure demonstrated no qualitative differences between control and exposed fish. Liver extractions and deproteinized bile samples from exposed fish were qualitatively different from their corresponding controls. These findings support the projected NOEC of 0.0045% dilution, even though the differences in bioconcentration profiles suggest hazard assessment strategies may be useful in evaluating environmental fate processes associated with complex chemical mixtures. 12 references, 4 figures, 2 tables.
Analyzing gene expression time-courses based on multi-resolution shape mixture model.
Li, Ying; He, Ye; Zhang, Yu
2016-11-01
Biological processes are dynamic molecular processes over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biology and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which explores patterns of change over time in gene expression at different resolutions. The proposed multi-resolution shape mixture model algorithm is a probabilistic framework that offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of the proposed algorithm on yeast time-course gene expression profiles, compared with several popular clustering methods for gene expression profiles. The gene groups identified by the different methods were evaluated by enrichment analysis of biological pathways and of known protein-protein interactions from experimental evidence. The gene groups identified by our proposed algorithm have stronger biological significance. Our model provides a novel perspective and an alternative tool for visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request.
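Multi-resolution features of the kind described can be illustrated with a plain Haar decomposition of short time courses; a sketch on synthetic rising/falling profiles (the paper's fractal features and mixture clustering are richer than this):

```python
import numpy as np

def haar_features(ts):
    """Multi-resolution Haar decomposition of a length-2^k time course:
    concatenates detail coefficients from every scale plus the overall mean,
    so shape change is captured separately at each resolution."""
    feats, cur = [], np.asarray(ts, float)
    while len(cur) > 1:
        a, b = cur[0::2], cur[1::2]
        feats.extend((a - b) / np.sqrt(2))   # details: change at this scale
        cur = (a + b) / np.sqrt(2)           # averages passed to coarser scale
    feats.append(cur[0])
    return np.array(feats)

# Hypothetical expression time courses: rising vs. falling profiles.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 8)
rising = [t + rng.normal(0, 0.05, 8) for _ in range(10)]
falling = [1 - t + rng.normal(0, 0.05, 8) for _ in range(10)]
F = np.array([haar_features(x) for x in rising + falling])
```

Any mixture-model clustering applied to F then groups profiles by shape across resolutions rather than by raw pointwise values.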
Modeling the chemistry of complex petroleum mixtures.
Quann, R J
1998-01-01
Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
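The structure-oriented lumping idea, molecules as count vectors of structural increments plus reaction rules that edit those vectors, can be sketched minimally (the group names and the rule below are illustrative, not the actual increment set):

```python
from collections import Counter

# Illustrative structural increments; a molecule (or isomer set) is a
# vector of counts over these groups.
GROUPS = ["aromatic_ring", "naphthenic_ring", "CH2_chain", "CH3_branch", "S_atom"]

def molecule(**counts):
    """Build a count vector over the structural groups."""
    return Counter({g: counts.get(g, 0) for g in GROUPS})

toluene_like = molecule(aromatic_ring=1, CH3_branch=1)
decalin_like = molecule(naphthenic_ring=2)

def dealkylate(m):
    """Example reaction rule: strip one methyl branch if present,
    returning the product vector; None means the rule does not apply."""
    if m["CH3_branch"] > 0:
        out = m.copy()
        out["CH3_branch"] -= 1
        return out
    return None

product = dealkylate(toluene_like)
```

Applying a library of such rules across thousands of vectors is what lets the reaction network be constructed automatically.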
Compact determination of hydrogen isotopes
Robinson, David
2017-04-06
Scanning calorimetry of a confined, reversible hydrogen sorbent material has been previously proposed as a method to determine the compositions of unknown mixtures of diatomic hydrogen isotopologues and helium. Application of this concept could provide greater process knowledge during the handling of these gases. Previously published studies have focused on mixtures that do not include tritium. This paper focuses on modeling to predict the effect of tritium in mixtures of the isotopologues on a calorimetry scan. Furthermore, the model predicts that tritium can be measured with a sensitivity comparable to that observed for hydrogen-deuterium mixtures, and that under some conditions it may be possible to determine the atomic fractions of all three isotopes in a gas mixture.
Liaw, Horng-Jang; Wang, Tzu-Ai
2007-03-06
Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. In salt distillation, a salt dissolved in the liquid is used to separate close-boiling or azeotropic systems, and the addition of salts to a liquid may also reduce its fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. The modified model was verified by comparison with experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in predicting the flash points of these mixtures. The experimental results confirm markedly greater increases in flash point upon addition of inorganic salts than upon supplementation with equivalent quantities of water. Based on this evidence, the model appears applicable to the assessment of the fire and explosion hazard of solvent/salt mixtures and, further, the addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
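Mixture flash-point models of this family are typically built on a Le Chatelier-type closure: the mixture flash point is the temperature at which the flammable vapor contributions of the components sum to one. A minimal sketch for an ideal binary mixture follows; activity coefficients (where the salt effect would enter) are set to 1, and the vapor-pressure form and all constants are hypothetical, chosen only to make the solve well behaved.

```python
import math

def psat(A, B, T):
    # Toy two-parameter correlation, ln P = A - B / T (T in kelvin).
    return math.exp(A - B / T)

SOLVENT_1 = (10.0, 3000.0, 290.0)  # (A, B, pure-component flash point in K)
SOLVENT_2 = (10.5, 3600.0, 330.0)  # hypothetical second solvent

def flash_point(x1, lo=200.0, hi=400.0):
    """Solve sum_i x_i * Psat_i(T) / Psat_i(Tfp_i) = 1 for T by bisection."""
    def g(T):
        total = 0.0
        for x, (A, B, Tfp) in ((x1, SOLVENT_1), (1.0 - x1, SOLVENT_2)):
            total += x * psat(A, B, T) / psat(A, B, Tfp)
        return total - 1.0
    flo = g(lo)
    for _ in range(80):  # g increases with T, so bisection converges
        mid = 0.5 * (lo + hi)
        if flo * g(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, g(mid)
    return 0.5 * (lo + hi)

tfp_mix = flash_point(0.5)  # lies between the two pure flash points
```

A dissolved salt would lower the solvent's effective vapor pressure (activity coefficient below 1 for the solvent), pushing the solved flash point upward, which is the trend the study reports.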
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process in which the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and of the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
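The overdispersion mechanism is easy to demonstrate by simulation: draw renewal durations from a two-component exponential mixture (hyperexponential) and count events in a fixed window. Because the duration coefficient of variation exceeds 1, the counts spread out more than a Poisson process would. All parameter values below are illustrative.

```python
import random

def hyperexp_duration(p, lam_fast, lam_slow, rng):
    # With probability p the duration is Exp(lam_fast), else Exp(lam_slow).
    lam = lam_fast if rng.random() < p else lam_slow
    return rng.expovariate(lam)

def renewal_count(window, p, lam_fast, lam_slow, rng):
    # Number of renewals completed inside [0, window].
    t, n = 0.0, 0
    while True:
        t += hyperexp_duration(p, lam_fast, lam_slow, rng)
        if t > window:
            return n
        n += 1

rng = random.Random(1)
counts = [renewal_count(10.0, 0.5, 5.0, 0.2, rng) for _ in range(20000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# The index of dispersion var/mean exceeds 1, unlike a Poisson process.
```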
Statistical Modeling of Single Target Cell Encapsulation
Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan
2011-01-01
High-throughput drop-on-demand systems for the separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types are an emerging method in biotechnology with broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes and estimation of the relevant parameters, and enable reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with a high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems. PMID:21814548
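Random encapsulation of this kind is commonly described by Poisson loading statistics. The sketch below (our own illustration, not the paper's fitted model) uses Poisson thinning: if the total cell count per droplet is Poisson(λ) and each cell is independently a target with probability f, the target count per droplet is Poisson(λf), so the single-target probability peaks at λf = 1.

```python
import math

def p_single_target(cells_per_droplet, target_fraction):
    """P(exactly one target cell in a droplet) under Poisson loading.

    By Poisson thinning, the target count is Poisson(lambda * f),
    so P(k=1) = (lambda*f) * exp(-lambda*f).
    """
    lt = cells_per_droplet * target_fraction
    return lt * math.exp(-lt)

# With 10% target cells, scanning illustrative loading densities shows the
# single-target probability is maximized where lambda * f = 1.
best_p, best_lam = max(
    (p_single_target(lam, 0.1), lam) for lam in (1, 5, 10, 20)
)
```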
NASA Astrophysics Data System (ADS)
Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany
2016-07-01
Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). In contrast, the univariate CWT failed to simultaneously determine the quaternary mixture and was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. Different wavelet families were tested during the CWT calculations. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and the absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed by both cross-validation and external validation sets. Both methods were successfully applied to the determination of the studied drugs in pharmaceutical formulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in the transport processes of ionized gases. The agreement of the VSS model transport properties with those determined by the ab initio collision integral fits was found to be within 6% over the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented, for the specified temperature range.
NASA Astrophysics Data System (ADS)
Lim, Jun-Wei; Beh, Hoe-Guan; Ching, Dennis Ling Chuan; Ho, Yeek-Chia; Baloo, Lavania; Bashir, Mohammed J. K.; Wee, Seng-Kew
2017-11-01
The present study provides an insight into the optimization of a glucose and sucrose mixture to enhance the denitrification process. Central Composite Design was applied to design the batch experiments, with the glucose and sucrose doses each measured as a carbon-to-nitrogen (C:N) ratio and the response being the percentage removal of nitrate-nitrogen (NO3⁻-N). Results showed that a polynomial regression model of NO3⁻-N removal was successfully derived, capable of describing the interactive relationship between glucose and sucrose that influenced the denitrification process. Furthermore, the presence of glucose was found to have a more consequential effect on NO3⁻-N removal than sucrose. The optimum carbon-source mixture to achieve complete removal of NO3⁻-N required less glucose (C:N ratio of 1.0:1.0) than sucrose (C:N ratio of 2.4:1.0). At the optimum glucose and sucrose mixture, the activated sludge showed faster acclimation towards the glucose used to perform the denitrification process. Upon later acclimation with sucrose, the glucose uptake rate of the activated sludge abated. Therefore, it is vital to optimize the added carbon-source mixture to ensure the rapid and complete removal of NO3⁻-N via the denitrification process.
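The polynomial regression model produced by a Central Composite Design is a full quadratic response surface in the two factors. A minimal sketch of such a fit follows, using synthetic, noise-free data with assumed coefficients (the variable names g and s stand in for the glucose and sucrose C:N ratios; none of the numbers come from the study).

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.uniform(0.5, 3.0, 30)   # glucose C:N ratios (synthetic design points)
s = rng.uniform(0.5, 3.0, 30)   # sucrose C:N ratios

# Assumed "true" coefficients of y = b0 + b1*g + b2*s + b12*g*s + b11*g^2 + b22*s^2
true = np.array([20.0, 60.0, 15.0, -8.0, -12.0, -2.0])

X = np.column_stack([np.ones_like(g), g, s, g * s, g**2, s**2])
y = X @ true                    # noise-free response for clarity
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit recovers them
```

The cross-term coefficient b12 is what quantifies the glucose-sucrose interaction the abstract refers to; with real (noisy) removal data the fit is identical except that `y` carries measurement error.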
Mixture optimization for mixed gas Joule-Thomson cycle
NASA Astrophysics Data System (ADS)
Detlor, J.; Pfotenhauer, J.; Nellis, G.
2017-12-01
An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters. This study expands on prior research by exploring higher heat rejection temperatures and lower pressure ratios. A mixture optimization model has been developed which determines an optimal three-component mixture by maximizing the minimum isothermal enthalpy change, ΔhT, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed gas JT system with load temperatures down to 110 K and supply temperatures above room temperature for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the percentage of the heat exchanger that operates in the two-phase range in order to begin the process of selecting a mixture for experimental investigation.
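The max-min selection criterion can be sketched as a grid search over three-component compositions: score each candidate by the minimum isothermal enthalpy difference over the load-to-supply temperature range, then keep the composition with the largest such minimum. The `delta_h_T` function below is a hypothetical smooth surrogate; a real tool would evaluate it from an equation of state or a property package.

```python
def delta_h_T(x, T):
    # Toy stand-in for the isothermal enthalpy change (arbitrary units).
    a, b, c = x  # mole fractions of the three components
    return 30*a + 50*b + 20*c - 0.0002 * (T - (250*a + 150*b + 350*c))**2

temps = range(110, 301, 10)   # load temperature 110 K up to ~room temperature
n = 10                        # composition grid in steps of 0.1
best_score, best_x = float("-inf"), None
for i in range(n + 1):
    for j in range(n + 1 - i):
        x = (i / n, j / n, (n - i - j) / n)        # fractions sum to 1
        score = min(delta_h_T(x, T) for T in temps)  # worst-case enthalpy gap
        if score > best_score:
            best_score, best_x = score, x
```

Maximizing the worst-case ΔhT guards against a pinch point: the refrigeration effect at every temperature in the recuperator is at least `best_score`.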
Comparing single- and dual-process models of memory development.
Hayes, Brett K; Dunn, John C; Joubert, Amy; Taylor, Robert
2017-11-01
This experiment examined single-process and dual-process accounts of the development of visual recognition memory. The participants, 6-7-year-olds, 9-10-year-olds and adults, were presented with a list of pictures which they encoded under shallow or deep conditions. They then made recognition and confidence judgments about a list containing old and new items. We replicated the main trends reported by Ghetti and Angelini in that recognition hit rates increased from 6 to 9 years of age, with larger age changes following deep than shallow encoding. Formal versions of the dual-process high-threshold signal detection model and several single-process models (equal-variance signal detection, unequal-variance signal detection, mixture signal detection) were fit to the developmental data. The unequal-variance and mixture signal detection models gave a better account of the data than either of the other models. A state-trace analysis found evidence for only one underlying memory process across the age range tested. These results suggest that single-process memory models based on memory strength are a viable alternative to dual-process models for explaining memory development. © 2016 John Wiley & Sons Ltd.
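The baseline against which these models are compared is equal-variance signal detection, where sensitivity d' and criterion c follow directly from the hit and false-alarm rates. A minimal sketch is below; the unequal-variance and mixture variants generalize it by adding a signal-variance or mixing parameter, and the rates used here are made up for illustration.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse standard-normal CDF)

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance SDT: d' = z(H) - z(F), c = -(z(H) + z(F)) / 2."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative rates (e.g. a deep-encoding condition), not the study's data.
d, c = sdt_measures(0.85, 0.20)
```

Fitting the competing models then amounts to predicting the full confidence-rating ROC rather than a single (H, F) point, with extra parameters for the signal distribution's variance (unequal variance) or the proportion of attended items (mixture).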
ERIC Educational Resources Information Center
Hunnicutt, Sally S.; Grushow, Alexander; Whitnell, Rob
2017-01-01
The principles of process-oriented guided inquiry learning (POGIL) are applied to a binary solid-liquid mixtures experiment. Over the course of two learning cycles, students predict, measure, and model the phase diagram of a mixture of fatty acids. The enthalpy of fusion of each fatty acid is determined from the results. This guided inquiry…
NASA Astrophysics Data System (ADS)
Yu, Zhitao; Miller, Franklin; Pfotenhauer, John M.
2017-12-01
Both a numerical and an analytical model of the heat and mass transfer processes in a CO2/N2 mixture-gas de-sublimating cross-flow finned duct heat exchanger system are developed to predict the heat transferred from the mixture gas to liquid nitrogen and the de-sublimating rate of CO2 in the mixture gas. The mixture-gas outlet temperature, liquid nitrogen outlet temperature, CO2 mole fraction, temperature distribution and de-sublimating rate of CO2 through the whole heat exchanger were computed using both the numerical and the analytical model. The numerical model is built using EES (Engineering Equation Solver) [1]. Based on the simulations, a cross-flow finned duct heat exchanger can be designed and fabricated to validate the models. The performance of the heat exchanger is evaluated as a function of dimensionless variables, such as the ratio of the mass flow rate of liquid nitrogen to the mass flow rate of the inlet flue gas.
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution, and demonstrates that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
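The gamma-mixed Poisson baseline is easy to simulate: a Poisson count whose rate is itself gamma distributed is negative binomial, with variance mu + mu²/shape > mu. A minimal sketch with illustrative parameters follows; the paper's alternative mixing distributions (e.g. inverse Gaussian) would simply replace the gamma draw.

```python
import math
import random

def poisson_draw(lam, rng):
    # Knuth's multiplication method; adequate for the modest rates used here.
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

rng = random.Random(42)
shape, scale = 2.0, 1.5  # gamma mean 3.0 -> count mean ~3.0, variance ~7.5
counts = [poisson_draw(rng.gammavariate(shape, scale), rng)
          for _ in range(20000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# Theory for these parameters: var = mean + mean^2 / shape = 3 + 4.5 = 7.5.
```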
Komilis, Dimitrios; Evangelou, Alexandros; Voudrias, Evangelos
2011-09-01
The management of dewatered wastewater sludge is a major issue worldwide. Sludge disposal to landfills is not sustainable, and thus alternative treatment techniques are being sought. The objective of this work was to determine optimal mixing ratios of dewatered sludge with other organic amendments in order to maximize the degradability of the mixtures during composting. This objective was achieved using mixture experimental design principles. An additional objective was to study the impact of the initial C/N ratio and moisture content on the co-composting process of dewatered sludge. The composting process was monitored through measurements of O2 uptake rates, CO2 evolution, temperature profile and solids reduction. Eight (8) runs were performed in 100 L insulated air-tight bioreactors under a dynamic air flow regime. The initial mixtures were prepared using dewatered wastewater sludge, mixed paper wastes, food wastes, tree branches and sawdust at various initial C/N ratios and moisture contents. According to empirical modeling, mixtures of sludge and food waste at a 1:1 ratio (w/w, wet weight) maximize degradability. Structural amendments should be maintained below 30% to reach thermophilic temperatures. The initial C/N ratio and initial moisture content of the mixture were not found to influence the decomposition process. The bio-C/bio-N ratio started from around 10 for all runs, decreased during the middle of the process and increased to up to 20 at the end of the process. The solid carbon reduction of the mixtures without the branches ranged from 28% to 62%, whilst solid N reductions ranged from 30% to 63%. Respiratory quotients had a decreasing trend throughout the composting process. Copyright © 2011 Elsevier Ltd. All rights reserved.
Baldovin-Stella stochastic volatility process and Wiener process mixtures
NASA Astrophysics Data System (ADS)
Peirano, P. P.; Challet, D.
2012-08-01
Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a powerful and consistent way to build a model describing the time evolution of a financial index. We first make the model fully explicit by using Student distributions instead of power-law-truncated Lévy distributions, show that its analytic tractability extends to the larger class of symmetric generalized hyperbolic distributions, and provide a full computation of their multivariate characteristic functions; more generally, we show that the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The basic Baldovin and Stella model, while mimicking well volatility relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect or some time-reversal asymmetries. We discuss how to modify the dynamics of this process in order to reproduce real data more accurately.
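One concrete instance of the Wiener-mixture representation is a variance mixture: a return built as sqrt(V)·Z with Z standard normal and V inverse-gamma is Student-t distributed. The sketch below demonstrates the resulting heavy tails by simulation; the degrees of freedom and sample size are illustrative choices of ours.

```python
import random

rng = random.Random(7)
nu = 12.0  # Student degrees of freedom (illustrative; finite kurtosis needs nu > 4)

def student_return():
    # V ~ InvGamma(nu/2, nu/2), so sqrt(V) * Z ~ Student-t with nu d.o.f.
    v = (nu / 2.0) / rng.gammavariate(nu / 2.0, 1.0)
    return (v ** 0.5) * rng.gauss(0.0, 1.0)

xs = [student_return() for _ in range(100000)]
m2 = sum(x * x for x in xs) / len(xs)
m4 = sum(x ** 4 for x in xs) / len(xs)
# Excess kurtosis is positive (theory: 6 / (nu - 4) = 0.75 here), unlike the
# zero excess kurtosis of a plain Wiener increment.
excess_kurtosis = m4 / m2 ** 2 - 3.0
```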
Hegazy, Maha A; Lotfy, Hayam M; Mowaka, Shereen; Mohamed, Ekram Hany
2016-07-05
Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study on the efficiency of continuous wavelet transform (CWT) as a signal processing tool in univariate regression and a pre-processing tool in multivariate analysis using partial least square (CWT-PLS) was conducted. These were applied to complex spectral signals of ternary and quaternary mixtures. CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). While, the univariate CWT failed to simultaneously determine the quaternary mixture components and was able to determine only PAR and PAP, the ternary mixtures of DRO, CAF, and PAR and CAF, PAR, and PAP. During the calculations of CWT, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. While for the development of the CWT-PLS model a calibration set was prepared by means of an orthogonal experimental design and their absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices and validation was performed by both cross validation and external validation sets. Both methods were successfully applied for determination of the studied drugs in pharmaceutical formulations. Copyright © 2016 Elsevier B.V. All rights reserved.
Kumar, Keshav
2018-03-01
Excitation-emission matrix fluorescence (EEMF) and total synchronous fluorescence spectroscopy (TSFS) are two fluorescence techniques commonly used for the analysis of multifluorophoric mixtures. The two techniques are conceptually different and provide certain advantages over each other. Manual analysis of such large volumes of highly correlated EEMF and TSFS data to develop a calibration model is difficult. Partial least squares (PLS) analysis can handle large EEMF and TSFS data sets by finding the factors that maximize the correlation between the spectral and concentration information for each fluorophore. However, applying PLS analysis to the entire data set often does not provide a robust calibration model and requires a suitable pre-processing step. The present work evaluates the application of genetic algorithm (GA) analysis prior to PLS analysis on EEMF and TSFS data sets to improve the precision and accuracy of the calibration model. The GA essentially combines the advantages of stochastic methods with those of deterministic approaches and can find the set of EEMF and TSFS variables that correlate well with the concentration of each fluorophore present in a multifluorophoric mixture. The utility of GA-assisted PLS analysis is validated using (i) EEMF data sets acquired for dilute aqueous mixtures of four biomolecules and (ii) TSFS data sets acquired for dilute aqueous mixtures of four carcinogenic polycyclic aromatic hydrocarbons (PAHs). It is shown that the GA significantly improves the accuracy and precision of the PLS calibration models developed for both the EEMF and TSFS data sets. Hence, the GA should be considered a useful pre-processing technique when developing EEMF and TSFS calibration models.
Locatelli, Fernando F; Fernandez, Patricia C; Villareal, Francis; Muezzinoglu, Kerem; Huerta, Ramon; Galizia, C. Giovanni; Smith, Brian H.
2012-01-01
Experience-related plasticity is an essential component of networks involved in early olfactory processing. However, the mechanisms and functions of plasticity in these neural networks are not well understood. We studied nonassociative plasticity by evaluating responses to two pure odors (A and X) and their binary mixture using calcium imaging of odor-elicited activity in output neurons of the honey bee antennal lobe. Unreinforced exposure to A or X produced no change in the neural response elicited by the pure odors. However, exposure to one odor (e.g. A) caused the response to the mixture to become more similar to that of the other component (X). We also show in behavioral analyses that unreinforced exposure to A caused the mixture to become perceptually more similar to X. These results suggest that nonassociative plasticity modifies neural networks in such a way that it affects local competitive interactions among mixture components. We used a computational model to evaluate the most likely targets for modification. Hebbian modification of synapses from inhibitory local interneurons to projection neurons most reliably produces the observed shift in response to the mixture. These results are consistent with a model in which the antennal lobe acts to filter olfactory information according to its relevance for performing a particular task. PMID:23167675
NASA Astrophysics Data System (ADS)
Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.
2018-06-01
We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet process Gaussian mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ~10^4-10^5 Mpc^3, corresponding to ~10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ϱ_net^(-6). Localizations improve with the addition of further detectors to the network. Our Dirichlet process Gaussian mixture model can be adopted for localizing events detected during future gravitational-wave observing runs and used to facilitate prompt multimessenger follow-up.
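A Dirichlet process Gaussian mixture can be approximated with scikit-learn's truncated variational implementation, which prunes unneeded components automatically so the number of clusters need not be fixed in advance. The sketch below uses synthetic 3-D "posterior samples" from two well-separated blobs as a stand-in for real sky-position and distance samples.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
samples = np.vstack([
    rng.normal(loc=(0, 0, 0), scale=0.3, size=(500, 3)),
    rng.normal(loc=(5, 5, 5), scale=0.3, size=(500, 3)),
])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                  # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(samples)

# Components whose posterior weight stays non-negligible are the ones the
# DP actually "used"; the rest are shrunk toward zero weight.
active = int((dpgmm.weights_ > 0.05).sum())
```

Credible volumes then follow by evaluating the fitted density on a grid and integrating the region above a probability threshold.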
Dunne, Lawrence J; Manos, George
2018-03-13
Although crucial for designing separation processes, little is known experimentally about multi-component adsorption isotherms in comparison with pure single components. Very few binary mixture adsorption isotherms are to be found in the literature, and information about isotherms over a wide range of gas-phase compositions, mechanical pressures and temperatures is lacking. Here, we present a quasi-one-dimensional statistical mechanical model of binary mixture adsorption in metal-organic frameworks (MOFs) treated exactly by a transfer matrix method in the osmotic ensemble. The experimental parameter space may be very complex, and investigations into multi-component mixture adsorption may be guided by theoretical insights. The approach successfully models breathing structural transitions induced by adsorption, giving a good account of the shape of adsorption isotherms of CO2 and CH4 in MIL-53(Al). Binary mixture isotherms and co-adsorption phase diagrams are also calculated and found to give a good description of the experimental trends in these properties; because this behaviour is reproduced over a wide range of model parameters, it appears to be generic to MOFs. Finally, a study is made of the influence of mechanical pressure on the shape of CO2 and CH4 adsorption isotherms in MIL-53(Al). Quite modest mechanical pressures can induce significant changes to isotherm shapes in MOFs, with implications for binary mixture separation processes. This article is part of the theme issue 'Modern theoretical chemistry'. © 2018 The Author(s).
NASA Astrophysics Data System (ADS)
Dunne, Lawrence J.; Manos, George
2018-03-01
Although crucial for designing separation processes, little is known experimentally about multi-component adsorption isotherms in comparison with pure single components. Very few binary mixture adsorption isotherms are to be found in the literature, and information about isotherms over a wide range of gas-phase compositions, mechanical pressures and temperatures is lacking. Here, we present a quasi-one-dimensional statistical mechanical model of binary mixture adsorption in metal-organic frameworks (MOFs) treated exactly by a transfer matrix method in the osmotic ensemble. The experimental parameter space may be very complex, and investigations into multi-component mixture adsorption may be guided by theoretical insights. The approach successfully models breathing structural transitions induced by adsorption, giving a good account of the shape of adsorption isotherms of CO2 and CH4 in MIL-53(Al). Binary mixture isotherms and co-adsorption phase diagrams are also calculated and found to give a good description of the experimental trends in these properties; because this behaviour is reproduced over a wide range of model parameters, it appears to be generic to MOFs. Finally, a study is made of the influence of mechanical pressure on the shape of CO2 and CH4 adsorption isotherms in MIL-53(Al). Quite modest mechanical pressures can induce significant changes to isotherm shapes in MOFs, with implications for binary mixture separation processes. This article is part of the theme issue 'Modern theoretical chemistry'.
Parameters modelling of amaranth grain processing technology
NASA Astrophysics Data System (ADS)
Derkanosova, N. M.; Shelamova, S. A.; Ponomareva, I. N.; Shurshikova, G. V.; Vasilenko, O. A.
2018-03-01
The article presents a technique for calculating the composition of a multicomponent bakery mixture for the production of enriched products, taking into account the variability of nutrient content, ensuring that technological requirements are fulfilled and, at the same time, considering consumer preferences. The results of modelling and the analysis of optimal solutions are given for the example of calculating the composition of a three-component mixture of wheat flour, rye flour and an enriching component, whole-hulled amaranth flour, applied to the technology of bread made from a mixture of rye and wheat flour with a liquid leaven.
Hyperspectral target detection using heavy-tailed distributions
NASA Astrophysics Data System (ADS)
Willis, Chris J.
2009-09-01
One promising approach to target detection in hyperspectral imagery exploits a statistical mixture model to represent scene content at a pixel level. The process then goes on to look for pixels which are rare, when judged against the model, and marks them as anomalies. It is assumed that military targets will themselves be rare and therefore likely to be detected amongst these anomalies. For the typical assumption of multivariate Gaussianity for the mixture components, the presence of the anomalous pixels within the training data will have a deleterious effect on the quality of the model. In particular, the derivation process itself is adversely affected by the attempt to accommodate the anomalies within the mixture components. This will bias the statistics of at least some of the components away from their true values and towards the anomalies. In many cases this will result in a reduction in the detection performance and an increased false alarm rate. This paper considers the use of heavy-tailed statistical distributions within the mixture model. Such distributions are better able to account for anomalies in the training data within the tails of their distributions, and the balance of the pixels within their central masses. This means that an improved model of the majority of the pixels in the scene may be produced, ultimately leading to a better anomaly detection result. The anomaly detection techniques are examined using both synthetic data and hyperspectral imagery with injected anomalous pixels. A range of results is presented for the baseline Gaussian mixture model and for models accommodating heavy-tailed distributions, for different parameterizations of the algorithms. These include scene understanding results, anomalous pixel maps at given significance levels and Receiver Operating Characteristic curves.
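The mixture-model anomaly-detection pipeline can be sketched with a Gaussian mixture: fit the model to scene pixels, score every pixel by model log-likelihood, and flag the lowest-scoring fraction as anomalies. For clarity the sketch fits only to background pixels; in practice the model is trained on the full scene including the anomalies, which is exactly the contamination problem that motivates the paper's heavy-tailed components. The data are synthetic, and sklearn offers no multivariate-t mixture out of the box, so the heavy-tailed variant would need a custom EM loop.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
background = rng.normal(size=(2000, 5))           # bulk "pixel spectra"
targets = rng.normal(loc=6.0, size=(10, 5))       # injected anomalous pixels
pixels = np.vstack([background, targets])

# Fit the mixture to the (clean) background, then score the full scene.
gmm = GaussianMixture(n_components=3, random_state=0).fit(background)
scores = gmm.score_samples(pixels)                # per-pixel log-likelihood
threshold = np.quantile(scores, 0.01)             # flag the rarest ~1%
flagged = np.where(scores < threshold)[0]         # indices of anomalies
```

Replacing each Gaussian component with a multivariate-t adds a degrees-of-freedom parameter that lets the tails absorb contaminating anomalies during training, which is the paper's proposed remedy.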
Automatic detection of key innovations, rate shifts, and diversity-dependence on phylogenetic trees.
Rabosky, Daniel L
2014-01-01
A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes.
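The compound Poisson shift process at the heart of the method can be sketched in a few lines: shift events occur along the tree at a fixed rate per unit branch length, and each event draws a new diversification rate. The tree is abstracted to its total branch length here, and all numbers (rates, lognormal parameters) are illustrative, not the priors used by the actual method.

```python
import random

def simulate_regimes(total_length, shift_rate, rng, root_rate=0.1):
    """Return (duration, speciation_rate) pieces along the branch length.

    Shift event waiting times are exponential with rate `shift_rate`; each
    shift draws a fresh rate (here from an illustrative lognormal).
    """
    pieces, t, rate = [], 0.0, root_rate
    while True:
        wait = rng.expovariate(shift_rate)
        if t + wait >= total_length:
            pieces.append((total_length - t, rate))   # final regime
            return pieces
        pieces.append((wait, rate))
        t += wait
        rate = rng.lognormvariate(-2.0, 0.5)          # new regime rate

rng = random.Random(5)
pieces = simulate_regimes(100.0, 0.05, rng)
n_regimes = len(pieces)  # number of shifts + 1
```

The reversible-jump MCMC in the paper explores exactly this space, proposing to add or delete shift events (changing `n_regimes`) and to perturb the per-regime rate parameters.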
Automatic Detection of Key Innovations, Rate Shifts, and Diversity-Dependence on Phylogenetic Trees
Rabosky, Daniel L.
2014-01-01
A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes. PMID:24586858
A numerical study of granular dam-break flow
NASA Astrophysics Data System (ADS)
Pophet, N.; Rébillout, L.; Ozeren, Y.; Altinakar, M.
2017-12-01
Accurate prediction of granular flow behavior is essential to optimize mitigation measures for hazardous natural granular flows such as landslides, debris flows and tailings-dam break flows. So far, most successful models for these types of flows focus on either pure granular flows or flows of saturated grain-fluid mixtures by employing a constant friction model or more complex rheological models. These saturated models often produce non-physical results when they are applied to simulate flows of partially saturated mixtures. Therefore, more advanced models are needed. A numerical model was developed for granular flow employing constant friction and μ(I) rheology (Jop et al., J. Fluid Mech. 2005) coupled with a groundwater flow model for seepage flow. The granular flow is simulated by solving a mixture model using the Finite Volume Method (FVM). The Volume-of-Fluid (VOF) technique is used to capture the free surface motion. The constant friction and μ(I) rheological models are incorporated in the mixture model. The seepage flow is modeled by solving the Richards equation. A framework is developed to couple these two solvers in OpenFOAM. The model was validated and tested by reproducing laboratory experiments of partially and fully channelized dam-break flows of dry and initially saturated granular material. To obtain appropriate parameters for the rheological models, a series of simulations with different sets of rheological parameters was performed. The simulation results obtained from the constant friction and μ(I) rheological models are compared with laboratory experiments for the granular free-surface profile, front position and velocity field during the flows. The numerical predictions indicate that the proposed model is promising for predicting the dynamics of the flow and the deposition process. The proposed model may provide more reliable insight than the previously assumed saturated mixture models when saturated and partially saturated portions of a granular mixture co-exist.
Spectral mixture modeling: Further analysis of rock and soil types at the Viking Lander sites
NASA Technical Reports Server (NTRS)
Adams, John B.; Smith, Milton O.
1987-01-01
A new image processing technique was applied to Viking Lander multispectral images. Spectral endmembers were defined that included soil, rock and shade. Mixtures of these endmembers were found to account for nearly all the spectral variance in a Viking Lander image.
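The endmember mixing idea can be illustrated with a toy linear unmixing calculation. The endmember spectra below are made-up numbers, not Viking Lander data; the sketch only shows how fractions of soil, rock and shade could be recovered from a mixed pixel by least squares with a sum-to-one constraint.

```python
import numpy as np

# Hypothetical endmember spectra over 4 bands (rows: soil, rock, shade).
E = np.array([[0.30, 0.40, 0.50, 0.55],
              [0.20, 0.22, 0.25, 0.26],
              [0.02, 0.02, 0.03, 0.03]])

true_fractions = np.array([0.6, 0.3, 0.1])
pixel = true_fractions @ E  # noise-free mixed spectrum of one pixel

# Solve for the fractions, appending a sum-to-one constraint as an extra row.
A = np.vstack([E.T, np.ones(3)])
b = np.append(pixel, 1.0)
fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With noise-free data the fractions are recovered exactly; real pixels would need noise handling and non-negativity constraints.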
Redman, Aaron D; Parkerton, Thomas F; Butler, Josh David; Letinski, Daniel J; Frank, Richard A; Hewitt, L Mark; Bartlett, Adrienne J; Gillis, Patricia Leigh; Marentette, Julie R; Parrott, Joanne L; Hughes, Sarah A; Guest, Rodney; Bekele, Asfaw; Zhang, Kun; Morandi, Garrett; Wiseman, Steve B; Giesy, John P
2018-06-14
Oil sand operations in Alberta, Canada will eventually include returning treated process-affected waters to the environment. Organic constituents in oil sand process-affected water (OSPW) represent complex mixtures of nonionic and ionic (e.g. naphthenic acids) compounds, and compositions can vary spatially and temporally, which has impeded development of water quality benchmarks. To address this challenge, it was hypothesized that solid phase microextraction fibers coated with polydimethylsiloxane (PDMS) could be used as a biomimetic extraction (BE) to measure bioavailable organics in OSPW. Organic constituents of OSPW were assumed to contribute additively to toxicity, and partitioning to PDMS was assumed to be predictive of accumulation in target lipids, which were the presumed site of action. This method was tested using toxicity data for individual model compounds, defined mixtures, and organic mixtures extracted from OSPW. Toxicity was correlated with BE data, which supports the use of this method in hazard assessments of acute lethality to aquatic organisms. A species sensitivity distribution (SSD), based on target lipid model and BE values, was similar to SSDs based on residues in tissues for both nonionic and ionic organics. BE was shown to be an analytical tool that accounts for bioaccumulation of organic compound mixtures from which toxicity can be predicted, with the potential to aid in the development of water quality guidelines.
Boussinesq approximation of the Cahn-Hilliard-Navier-Stokes equations.
Vorobev, Anatoliy
2010-11-01
We use the Cahn-Hilliard approach to model the slow dissolution dynamics of binary mixtures. An important peculiarity of the Cahn-Hilliard-Navier-Stokes equations is the necessity to use the full continuity equation even for a binary mixture of two incompressible liquids due to dependence of mixture density on concentration. The quasicompressibility of the governing equations brings a short time-scale (quasiacoustic) process that may not affect the slow dynamics but may significantly complicate the numerical treatment. Using the multiple-scale method we separate the physical processes occurring on different time scales and, ultimately, derive the equations with the filtered-out quasiacoustics. The derived equations represent the Boussinesq approximation of the Cahn-Hilliard-Navier-Stokes equations. This approximation can be further employed as a universal theoretical model for an analysis of slow thermodynamic and hydrodynamic evolution of the multiphase systems with strongly evolving and diffusing interfacial boundaries, i.e., for the processes involving dissolution/nucleation, evaporation/condensation, solidification/melting, polymerization, etc.
Plechawska, Małgorzata; Polańska, Joanna
2009-01-01
This article presents a method for processing mass spectrometry data. Mass spectra are modelled with Gaussian mixture models: every peak of the spectrum is represented by a single Gaussian whose parameters describe the location, height and width of the corresponding peak. A custom implementation of the expectation-maximisation (EM) algorithm was used to perform all calculations. Errors were estimated with a virtual mass spectrometer, a tool originally designed to generate sets of spectra with defined parameters.
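A minimal, hand-rolled EM fit of two Gaussian components to synthetic data illustrates the peak-fitting idea described above. This is not the authors' implementation; the data and initialization scheme are invented.

```python
import math, random

def em_two_gaussians(xs, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM (illustrative)."""
    xs = sorted(xs)
    n = len(xs)
    mu = [xs[n // 4], xs[3 * n // 4]]   # crude quartile initialization
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: update weights, means, standard deviations
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sd[k] = math.sqrt(max(var, 1e-6))
    return w, mu, sd

# Two synthetic "peaks" at m/z-like locations 100 and 120
rng = random.Random(0)
data = [rng.gauss(100.0, 2.0) for _ in range(300)] + \
       [rng.gauss(120.0, 3.0) for _ in range(300)]
w, mu, sd = em_two_gaussians(data)
```

A real spectrum would use one Gaussian per peak, with the number of components chosen by a model-selection criterion.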
Image segmentation using hidden Markov Gauss mixture models.
Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M
2007-07-01
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criteria and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameter and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.
Li, Jia; Xu, Zhenming; Zhou, Yaohe
2008-05-30
Traditionally, mixed metals from waste printed circuit boards (PCBs) are sent to smelters to refine pure copper, and some valuable metals present at low concentrations in PCBs (aluminum, zinc and tin) are lost during smelting. A new method that uses a roll-type electrostatic separator (RES) to recover these low-content metals from waste PCBs is presented in this study. A theoretical model, established from the computed electric field and an analysis of the forces acting on the particles, was implemented as a MATLAB program designed to simulate the process of separating mixed metal particles. Electrical, material and mechanical factors were analyzed to optimize the operating parameters of the separator. The experimental results of separating copper and aluminum particles by RES agreed well with the computer simulation results. The model can also be used to simulate the separation of other metal particles (tin, zinc, etc.) during the recycling of waste PCBs by RES.
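The force-balance simulation idea can be sketched as a toy trajectory integration: a particle leaving the roll falls under gravity while a horizontal electrostatic force deflects it toward the collector. All values are hypothetical; a conducting particle is simply given a larger specific electrostatic acceleration than a poorly charging one.

```python
def deflection(q_over_m_E, dt=1e-4, drop_height=0.1):
    """Integrate a toy particle trajectory with explicit Euler steps:
    constant horizontal electrostatic acceleration q*E/m, vertical gravity."""
    g = 9.81
    x = y = vx = vy = 0.0
    while y > -drop_height:
        vx += q_over_m_E * dt
        vy += -g * dt
        x += vx * dt
        y += vy * dt
    return x  # horizontal deflection when the particle reaches the bin level

# A strongly charged (metal) particle lands farther out than a weakly
# charged (nonmetal) one, which is the basis of the separation.
d_metal = deflection(5.0)      # hypothetical q*E/m in m/s^2
d_nonmetal = deflection(0.5)
```

The real model would add image-charge detachment, roll rotation and particle size effects; the sketch only shows why trajectories diverge.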
Gaussian mixture models as flux prediction method for central receivers
NASA Astrophysics Data System (ADS)
Grobler, Annemarie; Gauché, Paul; Smit, Willie
2016-05-01
Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
Kinetic model for the vibrational energy exchange in flowing molecular gas mixtures. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Offenhaeuser, F.
1987-01-01
The present study is concerned with the development of a computational model for the description of the vibrational energy exchange in flowing gas mixtures, taking into account a given number of energy levels for each vibrational degree of freedom. It is possible to select an arbitrary number of energy levels. The presented model uses values in the range from 10 to approximately 40. The distribution of energy with respect to these levels can differ from the equilibrium distribution. The kinetic model developed can be employed for arbitrary gaseous mixtures with an arbitrary number of vibrational degrees of freedom for each type of gas. The application of the model to CO2-H2O-N2-O2-He mixtures is discussed. The obtained relations can be utilized in a study of the suitability of radiation-related transitional processes, involving the CO2 molecule, for laser applications. It is found that the computational results provided by the model agree very well with experimental data obtained for a CO2 laser. Possibilities for the activation of a 16-micron and 14-micron laser are considered.
Kinetic model of water disinfection using peracetic acid including synergistic effects.
Flores, Marina J; Brandi, Rodolfo J; Cassano, Alberto E; Labas, Marisol D
2016-01-01
The disinfection efficiencies of a commercial mixture of peracetic acid against Escherichia coli were studied in laboratory scale experiments. The joint and separate action of two disinfectant agents, hydrogen peroxide and peracetic acid, were evaluated in order to observe synergistic effects. A kinetic model for each component of the mixture and for the commercial mixture was proposed. Through simple mathematical equations, the model describes different stages of attack by disinfectants during the inactivation process. Based on the experiments and the kinetic parameters obtained, it could be established that the efficiency of hydrogen peroxide was much lower than that of peracetic acid alone. However, the contribution of hydrogen peroxide was very important in the commercial mixture. It should be noted that this improvement occurred only after peracetic acid had initiated the attack on the cell. This synergistic effect was successfully explained by the proposed scheme and was verified by experimental results. Besides providing a clearer mechanistic understanding of water disinfection, such models may improve our ability to design reactors.
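A generic way to express the joint action with synergy described above is a pseudo-first-order inactivation law with an interaction term. This is an illustrative sketch, not the paper's staged kinetic model; all rate constants are invented.

```python
import math

def survival_fraction(t, c_paa, c_h2o2,
                      k_paa=0.8, k_h2o2=0.05, k_syn=0.3):
    """Toy pseudo-first-order inactivation: ln(N/N0) = -k_total * t,
    with an added synergy term proportional to both concentrations.
    All rate constants are hypothetical."""
    k_total = k_paa * c_paa + k_h2o2 * c_h2o2 + k_syn * c_paa * c_h2o2
    return math.exp(-k_total * t)

alone = survival_fraction(5.0, 1.0, 0.0)    # peracetic acid only
mixture = survival_fraction(5.0, 1.0, 1.0)  # commercial-mixture analogue
```

The synergy term makes the mixture more lethal than the sum of the individual first-order effects, mirroring the observation that H2O2 contributes strongly only alongside peracetic acid.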
Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture
Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang
2016-01-01
The remaining useful life (RUL) prediction of lithium-ion batteries is closely related to their capacity degeneration trajectories. Due to self-recharging and capacity regeneration, the trajectories have the property of multimodality. Traditional prediction models such as support vector machines (SVM) or Gaussian process regression (GPR) cannot accurately characterize this multimodality. This paper proposes a novel RUL prediction method based on the Gaussian process mixture (GPM). It can handle multimodality by fitting different segments of a trajectory with different GPR models separately, so that the small differences among these segments can be revealed. The method is demonstrated to be effective by the predictive results of experiments on two commercial rechargeable 18650-type lithium-ion batteries provided by NASA. The performance comparison among the models illustrates that the GPM is more accurate than the SVM and the GPR. In addition, the GPM can yield a predictive confidence interval, which makes the prediction more reliable than that of traditional models. PMID:27632176
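The GPR baseline mentioned above can be sketched in a few lines. This is an illustrative RBF-kernel regression on a toy signal, not the paper's GPM implementation; hyperparameters are invented.

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=1.0, noise=1e-6):
    """Minimal GP regression posterior mean with an RBF kernel."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))  # jitter for stability
    K_s = k(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    return K_s @ alpha

# Toy "capacity-like" signal; a GPM would fit separate GPRs per segment.
x = np.linspace(0, 5, 20)
y = np.sin(x)
pred = gp_predict(x, y, np.array([2.5]))
```

A mixture of such regressors, with a gating model over segments, is the essence of the GPM approach.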
Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.
Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J
2018-05-24
Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
Direct Importance Estimation with Gaussian Mixture Models
NASA Astrophysics Data System (ADS)
Yamada, Makoto; Sugiyama, Masashi
The ratio of two probability densities is called the importance, and its estimation has recently gathered a great deal of attention since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
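The density-ratio idea behind (GM-)KLIEP can be illustrated in the degenerate one-component case: fit a Gaussian to samples from each density and take the ratio of the fitted densities. This is a sketch of the concept only, not the KLIEP algorithm itself, which models the ratio directly rather than the two densities.

```python
import math, random

def fit_gaussian(xs):
    """Moment-matched single Gaussian (stand-in for a fitted GMM)."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return mu, math.sqrt(var)

def pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

rng = random.Random(3)
numer = [rng.gauss(1.0, 1.0) for _ in range(5000)]  # samples from p(x)
denom = [rng.gauss(0.0, 1.0) for _ in range(5000)]  # samples from q(x)

mu_p, sd_p = fit_gaussian(numer)
mu_q, sd_q = fit_gaussian(denom)

def importance(x):
    return pdf(x, mu_p, sd_p) / pdf(x, mu_q, sd_q)

# Sanity check: the expectation of w(x) = p(x)/q(x) under q is 1.
mean_w = sum(importance(x) for x in denom) / len(denom)
```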
Memoized Online Variational Inference for Dirichlet Process Mixture Models
2014-06-27
…the stick-breaking process [7], which places artificially large mass on the final component. It is more efficient and broadly applicable than an alternative truncation…
Mazel, Vincent; Busignies, Virginie; Duca, Stéphane; Leclerc, Bernard; Tchoreloff, Pierre
2011-05-30
In the pharmaceutical industry, tablets are obtained by the compaction of two or more components which have different physical properties and compaction behaviours. Therefore, it could be interesting to predict the physical properties of the mixture using the single-component results. In this paper, we have focused on the prediction of the compressibility of binary mixtures using the Kawakita model. Microcrystalline cellulose (MCC) and L-alanine were compacted alone and mixed at different weight fractions. The volume reduction, as a function of the compaction pressure, was acquired during the compaction process ("in-die") and after elastic recovery ("out-of-die"). For the pure components, the Kawakita model is well suited to the description of the volume reduction. For binary mixtures, an original approach for the prediction of the volume reduction without using the effective Kawakita parameters was proposed and tested. The good agreement between experimental and predicted data proved that this model was efficient to predict the volume reduction of MCC and L-alanine mixtures during compaction experiments. Copyright © 2011 Elsevier B.V. All rights reserved.
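The Kawakita model mentioned above, C = abP/(1 + bP), is commonly fitted through its standard linearization P/C = P/a + 1/(ab). A sketch with synthetic, noise-free data (parameter values invented, not the MCC/L-alanine results):

```python
# Kawakita equation: C = a*b*P / (1 + b*P), where C is the degree of
# volume reduction and P the compaction pressure.
a_true, b_true = 0.7, 0.05
pressures = [25.0, 50.0, 100.0, 150.0, 200.0]
C = [a_true * b_true * p / (1 + b_true * p) for p in pressures]

# Linearized form: P/C = P/a + 1/(a*b); fit a least-squares line.
xs, ys = pressures, [p / c for p, c in zip(pressures, C)]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
a_est = 1 / slope
b_est = 1 / (a_est * intercept)
```

With experimental data the same fit would be applied to measured in-die or out-of-die volume reductions; the paper's contribution is predicting the binary-mixture curve from the single-component parameters.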
Modeling CO2 mass transfer in amine mixtures: PZ-AMP and PZ-MDEA.
Puxty, Graeme; Rowland, Robert
2011-03-15
The most common method of carbon dioxide (CO2) capture is the absorption of CO2 into a falling thin film of an aqueous amine solution. Modeling of mass transfer during CO2 absorption is an important way to gain insight into the underlying processes that are occurring. In this work a new software tool has been used to model CO2 absorption into aqueous piperazine (PZ) and binary mixtures of PZ with 2-amino-2-methyl-1-propanol (AMP) or methyldiethanolamine (MDEA). The tool solves partial differential and simultaneous equations describing diffusion and chemical reaction, automatically derived from reactions written using chemical notation. It has been demonstrated that, by using reactions that are chemically plausible, the mass transfer in binary mixtures can be fully described by combining the chemical reactions and their associated parameters determined for single amines. The observed enhanced mass transfer in binary mixtures can be explained through chemical interactions occurring in the mixture without the need to resort to additional reactions or unusual transport phenomena such as the "shuttle mechanism".
Numerical Simulation of the Detonation of Condensed Explosives
NASA Astrophysics Data System (ADS)
Wang, Cheng; Ye, Ting; Ning, Jianguo
The detonation process of a condensed explosive was simulated using a finite difference method. The Euler equations were applied to describe the detonation flow field, an ignition-and-growth model for the chemical reaction, and the Jones-Wilkins-Lee (JWL) equation of state for the states of the explosive and the detonation products. Based on a simple mixture rule that treats the reacting explosive as a mixture of reactant and product components, 1D and 2D codes were developed to simulate the detonation process of the high explosive PBX9404. The numerical results are in good agreement with the experimental results, which demonstrates that the finite difference method, mixture rule and chemical reaction model proposed in this paper are adequate and feasible.
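The JWL equation of state referenced above has the closed form p = A(1 - w/(R1*V))exp(-R1*V) + B(1 - w/(R2*V))exp(-R2*V) + w*E/V, with V the relative volume and E the internal energy per unit volume. The sketch below evaluates it with commonly cited PBX-9404-type parameters (in Mbar); treat the exact values as illustrative.

```python
import math

def jwl_pressure(V, E, A=8.524, B=0.1802, R1=4.6, R2=1.3, omega=0.38):
    """JWL equation of state for detonation products.
    Parameter values here are illustrative PBX-9404-type constants (Mbar)."""
    return (A * (1 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * E / V)

# Pressure drops as the products expand (V grows) at fixed energy density.
p1 = jwl_pressure(1.0, 0.102)
p2 = jwl_pressure(2.0, 0.102)
```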
Modeling Grade IV Gas Emboli using a Limited Failure Population Model with Random Effects
NASA Technical Reports Server (NTRS)
Thompson, Laura A.; Conkin, Johnny; Chhikara, Raj S.; Powell, Michael R.
2002-01-01
Venous gas emboli (VGE) (gas bubbles in venous blood) are associated with an increased risk of decompression sickness (DCS) in hypobaric environments. A high grade of VGE can be a precursor to serious DCS. In this paper, we model time to Grade IV VGE considering a subset of individuals assumed to be immune from experiencing VGE. Our data contain monitoring test results from subjects undergoing up to 13 denitrogenation test procedures prior to exposure to a hypobaric environment. The onset time of Grade IV VGE is recorded as contained within certain time intervals. We fit a parametric (lognormal) mixture survival model to the interval- and right-censored data to account for the possibility of a subset of "cured" individuals who are immune to the event. Our model contains random subject effects to account for correlations between repeated measurements on a single individual. Model assessments and cross-validation indicate that this limited failure population mixture model is an improvement over a model that does not account for the potential of a fraction of cured individuals. We also evaluated some alternative mixture models. Predictions from the best fitted mixture model indicate that the actual process is reasonably approximated by a limited failure population model.
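The limited-failure-population idea is a two-part survival function: a cured fraction pi never experiences the event, and the remainder follows (here) a lognormal time-to-event distribution, S(t) = pi + (1 - pi) * S_lognormal(t). A minimal sketch with invented parameters and no random effects:

```python
import math

def cure_model_survival(t, cure_prob, mu, sigma):
    """Survival under a limited failure population (cure) model:
    a fraction `cure_prob` is immune; the rest is lognormal(mu, sigma)."""
    z = (math.log(t) - mu) / sigma
    lognormal_surv = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return cure_prob + (1 - cure_prob) * lognormal_surv

# The survival curve plateaus at the cure fraction instead of reaching 0.
s_early = cure_model_survival(1.0, 0.3, mu=1.0, sigma=0.5)
s_late = cure_model_survival(1e6, 0.3, mu=1.0, sigma=0.5)
```

The plateau is what distinguishes this model from an ordinary lognormal fit, and it is why it handles subjects who never show Grade IV VGE.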
Multiscale Numerical Methods for Non-Equilibrium Plasma
2015-08-01
…current paper reports on the implementation of a numerical solver on Graphics Processing Units (GPUs) to model reactive gas mixtures with detailed… Governing equations: The flow is modeled as a mixture of gas species while neglecting viscous effects. The chemical reactions taking place between the gas components are modeled in great detail. The set of Euler equations for a reactive gas mixture can be written as ∂Q/∂t + ∇·F̄ = Ω̇ (1), where Q…
Application of hierarchical Bayesian unmixing models in river sediment source apportionment
NASA Astrophysics Data System (ADS)
Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Zou Kuzyk, Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pacsal; Semmens, Brice
2016-04-01
Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes.
Illustrative examples using geochemical and compound specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling process, (3) deriving and using informative priors in sediment fingerprinting context and (4) transparency of the process and replication of model results by other users.
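A stripped-down version of the unmixing problem helps fix ideas: one tracer, two sources, and a grid posterior over the mixing proportion under a flat prior. This toy omits everything that makes MixSIAR useful (multiple tracers, hierarchy, fixed and random effects), and all numbers are invented.

```python
import math

# Two-source, one-tracer toy: the mixture tracer value is
# m = f*s1 + (1-f)*s2 + Gaussian noise. Grid posterior over f, flat prior.
s1, s2, sigma = 10.0, 2.0, 0.5   # hypothetical source signatures and noise sd
observed = 7.0                    # hypothetical sediment-mixture tracer value

grid = [i / 1000 for i in range(1001)]
like = [math.exp(-0.5 * ((observed - (f * s1 + (1 - f) * s2)) / sigma) ** 2)
        for f in grid]
Z = sum(like)
posterior_mean = sum(f * L for f, L in zip(grid, like)) / Z
```

The exact mass-balance solution is f = (7 - 2)/(10 - 2) = 0.625; the posterior mean recovers it while also carrying the uncertainty that a point estimate hides.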
Habib, Basant A; AbouGhaly, Mohamed H H
2016-06-01
This study aims to illustrate the applicability of combined mixture-process variable (MPV) design and modeling for optimization of nanovesicular systems. The D-optimal experimental plan studied the influence of three mixture components (MCs) and two process variables (PVs) on lercanidipine transfersomes. The MCs were phosphatidylcholine (A), sodium glycocholate (B) and lercanidipine hydrochloride (C), while the PVs were glycerol amount in the hydration mixture (D) and sonication time (E). The studied responses were Y1: particle size, Y2: zeta potential and Y3: entrapment efficiency percent (EE%). Polynomial equations were used to study the influence of MCs and PVs on each response. Response surface methodology and multiple response optimization were applied to optimize the formulation with the goals of minimizing Y1 and maximizing Y2 and Y3. The obtained polynomial models had prediction R² values of 0.645, 0.947 and 0.795 for Y1, Y2 and Y3, respectively. Contour, Piepel's response trace, perturbation, and interaction plots were drawn for response representation. The optimized formulation, A: 265 mg, B: 10 mg, C: 40 mg, D: zero g and E: 120 s, had a desirability of 0.9526. The actual response values for the optimized formulation were within the two-sided 95% prediction intervals and were close to the predicted values with a maximum percent deviation of 6.2%. This indicates the validity of combined MPV design and modeling for optimization of transfersomal formulations as an example of nanovesicular systems.
Hosoya, Haruo; Hyvärinen, Aapo
2017-07-01
Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.
NASA Astrophysics Data System (ADS)
Xie, M.; Agus, S. S.; Schanz, T.; Kolditz, O.
2004-12-01
This paper presents an upscaling concept of swelling/shrinking processes of a compacted bentonite/sand mixture, which also applies to swelling of porous media in general. A constitutive approach for highly compacted bentonite/sand mixture is developed accordingly. The concept is based on the diffuse double layer theory and connects microstructural properties of the bentonite as well as chemical properties of the pore fluid with swelling potential. Main factors influencing the swelling potential of bentonite, i.e. variation of water content, dry density, chemical composition of pore fluid, as well as the microstructures and the amount of swelling minerals are taken into account. According to the proposed model, porosity is divided into interparticle and interlayer porosity. Swelling is the potential of interlayer porosity increase, which reveals itself as volume change in the case of free expansion, or turns to be swelling pressure in the case of constrained swelling. The constitutive equations for swelling/shrinking are implemented in the software GeoSys/RockFlow as a new chemo-hydro-mechanical model, which is able to simulate isothermal multiphase flow in bentonite. Details of the mathematical and numerical multiphase flow formulations, as well as the code implementation are described. The proposed model is verified using experimental data of tests on a highly compacted bentonite/sand mixture. Comparison of the 1D modelling results with the experimental data evidences the capability of the proposed model to satisfactorily predict free swelling of the material under investigation.
NASA Astrophysics Data System (ADS)
Agaoglu, B.; Scheytt, T. J.; Copty, N. K.
2011-12-01
This study examines the mechanistic processes governing multiphase flow of a water-cosolvent-NAPL system in saturated porous media. Laboratory batch and column flushing experiments were conducted to determine the equilibrium properties of pure NAPL and synthetically prepared NAPL mixtures as well as NAPL recovery mechanisms for different water-ethanol contents. The effect of contact time was investigated by considering different steady and intermittent flow velocities. A modified version of a multiphase flow simulator (UTCHEM) was used to compare the multiphase model simulations with the column experiment results. The effects of employing different grid geometries (1D, 2D, 3D), heterogeneity and different initial NAPL saturation configurations were also examined in the model. It is shown that the change in velocity affects the mass transfer rate between phases as well as the ultimate NAPL recovery percentage. The experiments with slow flow rate flushing of pure NAPL and the 3D UTCHEM simulations gave similar effluent concentrations and NAPL cumulative recoveries. The results were less consistent for fast non-equilibrium flow conditions. The dissolution process from the NAPL mixture into the water-ethanol flushing solutions was found to be more complex than the dissolution expressions incorporated in the numerical model. The dissolution rate of individual organic compounds (namely toluene and benzene) from a mixture NAPL into the ethanol-water flushing solution is found not to correlate with their equilibrium solubility values. The implications of this controlled experimental and modeling study for field cosolvent remediation applications are discussed.
Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel
2017-05-01
Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. 
We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
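The contrast between a single normal random-effects distribution and latent clusters can be sketched in a few lines; the cluster means, spreads, and sample sizes below are invented for illustration and are not taken from the studies above:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Two latent clusters of individual growth slopes ("tactics"):
# a slow-growing and a fast-growing group (invented values).
slopes = np.concatenate([
    rng.normal(0.5, 0.1, 100),   # slow tactic
    rng.normal(2.0, 0.1, 100),   # fast tactic
]).reshape(-1, 1)

# A mixed model assumes one normal distribution of random slopes;
# a two-component mixture recovers the bimodal structure instead.
gm = GaussianMixture(n_components=2, random_state=0).fit(slopes)
means = sorted(gm.means_.ravel())
print(np.round(means, 2))  # close to the true cluster means [0.5, 2.0]
```

In practice the number of components would be chosen with selection criteria such as AIC, BIC, or bootstrap methods, as the abstract describes.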
NASA Astrophysics Data System (ADS)
Torres Astorga, Romina; Velasco, Hugo; Dercon, Gerd; Mabit, Lionel
2017-04-01
Soil erosion and associated sediment transportation and deposition processes are key environmental problems in Central Argentinian watersheds. Several land use practices - such as intensive grazing and crop cultivation - are considered likely to significantly increase land degradation and soil/sediment erosion processes. Characterized by highly erodible soils, the sub-catchment Estancia Grande (12.3 km²), located 23 km northeast of San Luis, has been investigated using sediment source fingerprinting techniques to identify critical hot spots of land degradation. The authors created 4 artificial mixtures using known quantities of the most representative sediment sources of the studied catchment. The first mixture was made using four rotation crop soil sources. The second and third mixtures were created using different proportions of 4 different soil sources, including soils from a feedlot, a rotation crop, a walnut forest and a grazing soil. The last tested mixture contained the same sources as the third mixture but with the addition of a fifth soil source (i.e. a native bank soil). The Energy Dispersive X-Ray Fluorescence (EDXRF) analytical technique was used to reconstruct the source sediment proportions of the original mixtures. Besides using traditional methods of fingerprint selection, such as the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA), the authors used the actual source proportions in the mixtures and selected, from the subset of tracers that passed the statistical tests, specific elemental tracers that were in agreement with the expected mixture contents. The selection process ended with testing in a mixing model all possible combinations of the reduced number of tracers obtained. Alkaline earth metals, especially strontium (Sr) and barium (Ba), were identified as the most effective fingerprints and provided a reduced Mean Absolute Error (MAE) of approximately 2% when reconstructing the 4 artificial mixtures.
This study demonstrates that the EDXRF fingerprinting approach performed very well in reconstructing our original mixtures especially in identifying and quantifying the contribution of the 4 rotation crop soil sources in the first mixture.
Predicting mixed-gas adsorption equilibria on activated carbon for precombustion CO2 capture.
García, S; Pis, J J; Rubiera, F; Pevida, C
2013-05-21
We present experimentally measured adsorption isotherms of CO2, H2, and N2 on a phenol-formaldehyde resin-based activated carbon, which had been previously synthesized for the separation of CO2 in a precombustion capture process. The single component adsorption isotherms were measured in a magnetic suspension balance at three different temperatures (298, 318, and 338 K) and over a large range of pressures (from 0 to 3000-4000 kPa). These values cover the temperature and pressure conditions likely to be found in a precombustion capture scenario, where CO2 needs to be separated from a CO2/H2/N2 gas stream at high pressure (~1000-1500 kPa) and with a high CO2 concentration (~20-40 vol %). Data on the pure component isotherms were correlated using the Langmuir, Sips, and dual-site Langmuir (DSL) models, i.e., a two-, three-, and four-parameter model, respectively. By using the pure component isotherm fitting parameters, adsorption equilibrium was then predicted for multicomponent gas mixtures by the extended models. The DSL model was formulated considering the energetic site-matching concept, recently addressed in the literature. Experimental gas-mixture adsorption equilibrium data were calculated from breakthrough experiments conducted in a lab-scale fixed-bed reactor and compared with the predictions from the models. Breakthrough experiments were carried out at a temperature of 318 K and five different pressures (300, 500, 1000, 1500, and 2000 kPa) where two different CO2/H2/N2 gas mixtures were used as the feed gas in the adsorption step. The DSL model was found to be the one that most accurately predicted the CO2 adsorption equilibrium in the multicomponent mixture. The results presented in this work highlight the importance of performing experimental measurements of mixture adsorption equilibria, as they are of utmost importance to discriminate between models and to correctly select the one that most closely reflects the actual process.
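The pure-component fitting step described above can be sketched with the simplest of the three models, the two-parameter Langmuir isotherm, fitted by nonlinear least squares; the loading data and true parameters below are synthetic stand-ins, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_max, b):
    """Single-site Langmuir isotherm: q = q_max * b * p / (1 + b * p)."""
    return q_max * b * p / (1.0 + b * p)

# Synthetic CO2 loadings (pressure in kPa, loading in mol/kg) with a little
# noise; the true parameters q_max = 8.0 and b = 0.002 are invented.
p = np.array([100.0, 300.0, 500.0, 1000.0, 1500.0, 2000.0, 3000.0])
q = langmuir(p, 8.0, 0.002)
q += 0.05 * np.random.default_rng(1).standard_normal(p.size)

(q_max_fit, b_fit), _ = curve_fit(langmuir, p, q, p0=[5.0, 0.001])
print(round(q_max_fit, 1), round(b_fit, 4))  # recovers roughly 8.0 and 0.002
```

The Sips and dual-site Langmuir models used in the paper follow the same pattern with three and four fitted parameters, respectively.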
Structure of turbulent non-premixed flames modeled with two-step chemistry
NASA Technical Reports Server (NTRS)
Chen, J. H.; Mahalingam, S.; Puri, I. K.; Vervisch, L.
1992-01-01
Direct numerical simulations of turbulent diffusion flames modeled with finite-rate, two-step chemistry, A + B → I, A + I → P, were carried out. A detailed analysis of the turbulent flame structure reveals the complex nature of the penetration of various reactive species across two reaction zones in mixture fraction space. Due to this two-zone structure, these flames were found to be robust, resisting extinction over the parameter ranges investigated. As in single-step computations, the mixture fraction dissipation rate and the mixture fraction were found to be statistically correlated. Simulations involving unequal molecular diffusivities suggest that the small-scale mixing process and, hence, the turbulent flame structure are sensitive to the Schmidt number.
Effect of inorganic salts on the volatility of organic acids.
Häkkinen, Silja A K; McNeill, V Faye; Riipinen, Ilona
2014-12-02
Particulate phase reactions between organic and inorganic compounds may significantly alter aerosol chemical properties, for example, by suppressing particle volatility. Here, chemical processing upon drying of aerosols comprised of organic (acetic, oxalic, succinic, or citric) acid/monovalent inorganic salt mixtures was assessed by measuring the evaporation of the organic acid molecules from the mixture using a novel approach combining a chemical ionization mass spectrometer coupled with a heated flow tube inlet (TPD-CIMS) with kinetic model calculations. For reference, the volatility, i.e. saturation vapor pressure and vaporization enthalpy, of the pure succinic and oxalic acids was also determined and found to be in agreement with previous literature. Comparison between the kinetic model and experimental data suggests significant particle phase processing forming low-volatility material such as organic salts. The results were similar for both ammonium sulfate and sodium chloride mixtures, and relatively more processing was observed with low initial aerosol organic molar fractions. The magnitude of low-volatility organic material formation at an atmospherically relevant pH range indicates that the observed phenomenon is not only significant in laboratory conditions but is also of direct atmospheric relevance.
Nonparametric Bayesian inference for mean residual life functions in survival analysis.
Poynor, Valerie; Kottas, Athanasios
2018-01-19
Modeling and inference for survival analysis problems typically revolve around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in the reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples. © The Author 2018. Published by Oxford University Press. All rights reserved.
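The defining relation m(t) = E[X − t | X > t] = ∫_t^∞ S(u) du / S(t) can be evaluated directly for a gamma mixture; the weights, shapes, and scales below are illustrative constants, and the time-dependent DP mixture weights of the actual model are replaced by fixed ones:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

# Two-component gamma mixture standing in for the survival distribution
# (weights, shapes, and scales are invented for illustration).
w = [0.4, 0.6]
shapes, scales = [2.0, 5.0], [1.0, 2.0]

def surv(t):
    """Mixture survival function S(t)."""
    return sum(wi * gamma.sf(t, a, scale=s)
               for wi, a, s in zip(w, shapes, scales))

def mrl(t):
    """Mean residual life: integral of S(u) from t to infinity, over S(t)."""
    integral, _ = quad(surv, t, np.inf)
    return integral / surv(t)

print(round(mrl(0.0), 2))  # m(0) is the mixture mean: 0.4*2 + 0.6*10 = 6.8
```

Evaluating mrl on a grid of t values traces out the flexible MRL shapes that the mixture structure allows.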
NASA Astrophysics Data System (ADS)
Hess, Julian; Wang, Yongqi
2016-11-01
A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure, described by a pressure diffusion equation, and the hypoplastic material behavior, obeying a transport equation, are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, thereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow, in order to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane by means of the depth-integrated model. The results presented give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within the Project Number WA 2610/3-1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moges, Edom; Demissie, Yonas; Li, Hong-Yi
2016-04-01
In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are evidenced by the diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad catchment has a homogeneous response, making the single model adequate to capture it.
A coupled chemo-thermo-hygro-mechanical model of concrete at high temperature and failure analysis
NASA Astrophysics Data System (ADS)
Li, Xikui; Li, Rongtao; Schrefler, B. A.
2006-06-01
A hierarchical mathematical model for analyses of coupled chemo-thermo-hygro-mechanical behaviour in concretes at high temperature is presented. The concretes are modelled as unsaturated deforming reactive porous media filled with two immiscible pore fluids, i.e. the gas mixture and the liquid mixture, in immiscible-miscible levels. The thermo-induced desalination process is particularly integrated into the model. The chemical effects of both the desalination and the dehydration processes on the material damage and the degradation of the material strength are taken into account. The mathematical model consists of a set of coupled, partial differential equations governing the mass balance of the dry air, the mass balance of the water species, the mass balance of the matrix components dissolved in the liquid phases, the enthalpy (energy) balance and momentum balance of the whole medium mixture. The governing equations, the state equations for the model and the constitutive laws used in the model are given. A mixed weak form for the finite element solution procedure is formulated for the numerical simulation of chemo-thermo-hygro-mechanical behaviours. Special considerations are given to spatial discretization of hyperbolic equation with non-self-adjoint operator nature. Numerical results demonstrate the performance and the effectiveness of the proposed model and its numerical procedure in reproducing coupled chemo-thermo-hygro-mechanical behaviour in concretes subjected to fire and thermal radiation.
NASA Astrophysics Data System (ADS)
Tararykov, A. V.; Garyaev, A. B.
2017-11-01
The possibility of increasing the energy efficiency of production processes by converting the initial fuel (natural gas) into synthesized fuel using the heat of the exhaust gases of plants involved in production is considered. Possible applications of this technology are given. A mathematical model of the heat and mass transfer processes occurring in a thermochemical reactor is developed, taking into account the nonequilibrium nature of the chemical reactions of fuel conversion. The possibility of using microchannel reaction elements and facilities for methane conversion in order to intensify the process and reduce the overall dimensions of plants is considered. The features of heat and mass transfer processes under flow conditions in microchannel reaction elements are described. Additions have been made to the mathematical model that make it possible to use it for microchannel installations. With the help of the mathematical model, distributions of the mixture parameters along the length of the reaction element of the reactor (temperature, concentrations of the reacting components, velocity, and heat fluxes) are obtained. The calculations take into account the change in the thermophysical properties of the mixture, the type of the catalytic element, the rate of the reactions, the heat exchange processes by radiation, and the longitudinal heat transfer along the flow of the reacting mixture. The reliability of the results of the mathematical model is confirmed by comparison with the experimental data obtained by Grasso G., Schaefer G., Schuurman Y., Mirodatos C., Kuznetsov V.V., and Vitovsky O.V. on similar installations.
Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.
Shireman, Emilie; Steinley, Douglas; Brusco, Michael J
2017-02-01
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
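A minimal sketch of the comparison, using scikit-learn's GaussianMixture, whose init_params and n_init options stand in for the initialization strategies the abstract evaluates; the data and settings are invented:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Three well-separated bivariate Gaussian components (invented data).
X = np.vstack([rng.normal(-5, 1, (200, 2)),
               rng.normal(0, 1, (200, 2)),
               rng.normal(5, 1, (200, 2))])

# Two of the available initialization strategies; n_init restarts EM from
# several starting values and keeps the best solution found, which is the
# standard guard against locally optimal solutions.
bounds = {}
for init in ("kmeans", "random"):
    gm = GaussianMixture(n_components=3, init_params=init,
                         n_init=5, random_state=0).fit(X)
    bounds[init] = gm.lower_bound_  # per-sample log-likelihood at convergence
print(bounds)
```

Comparing the converged log-likelihoods (and the speed of reaching them) across strategies mirrors the evaluation criteria used in the study.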
The calculation of the phase equilibrium of the multicomponent hydrocarbon systems
NASA Astrophysics Data System (ADS)
Molchanov, D. A.
2018-01-01
The development of simulations of hydrocarbon-mixture filtration processes has led to the use of cubic equations of state of the van der Waals type to describe the thermodynamic properties of natural fluids under real thermobaric conditions. Binary hydrocarbon systems make it possible to qualitatively simulate the fluids of different reservoir types, which enables experimental study of their filtration features. Exploitation of gas-condensate reservoirs shows that various two-phase filtration regimes can exist, including a self-oscillatory one, which occurs at certain values of mixture composition, temperature, and pressure drop. Determining these values requires plotting the phase diagram of the model mixture. A software package has been created to calculate the vapor-liquid equilibrium of binary systems using a cubic equation of state of the van der Waals type. Phase diagrams of gas-condensate model mixtures have been calculated.
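The root-finding step at the core of such vapor-liquid equilibrium calculations can be sketched with the original van der Waals equation and approximate CO2 constants; the values are illustrative, and production codes use more accurate cubic EOS variants and mixing rules:

```python
import numpy as np

# van der Waals constants for CO2 (SI units; approximate):
# a in Pa*m^6/mol^2, b in m^3/mol.
a, b, R = 0.3640, 4.267e-5, 8.314

def vdw_volume_roots(T, p):
    """Real molar-volume roots of p = RT/(V - b) - a/V**2, i.e. of the
    cubic p*V^3 - (p*b + R*T)*V^2 + a*V - a*b = 0."""
    roots = np.roots([p, -(p * b + R * T), a, -a * b])
    return np.sort(roots[np.abs(roots.imag) < 1e-12].real)

# Below the vdW critical point of CO2 (about 304 K), the two-phase region
# yields three real roots: liquid, unstable middle, and vapor.
V = vdw_volume_roots(260.0, 3.0e6)
print(V.size)  # 3
print(V[0] > b, V[-1] < R * 260.0 / 3.0e6)  # liquid > b; vapor < ideal-gas V
```

Phase-diagram construction then iterates on pressure (or composition) until the liquid and vapor roots satisfy the equal-fugacity condition.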
Cowell, Robert G
2018-05-04
Current models for single-source and mixture samples, and the probabilistic genotyping software based on them that is used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
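The PGF-plus-DFT idea can be sketched with a distribution whose generating function is known in closed form; the Poisson PGF below merely stands in for the amplicon-number PGF of the paper's collection/amplification model:

```python
import numpy as np

# Recover a pmf from its probability generating function (PGF) by
# evaluating the PGF at the N-th roots of unity and applying an inverse
# DFT. A Poisson(4) PGF, G(s) = exp(lam*(s - 1)), is used for
# illustration; N must exceed the support that carries appreciable mass.
lam, N = 4.0, 64
s = np.exp(-2j * np.pi * np.arange(N) / N)  # roots of unity
G = np.exp(lam * (s - 1.0))                 # PGF values G(s_k)
pmf = np.fft.ifft(G).real                   # pmf[n] ~= P(X = n) for n < N

print(round(pmf[4], 6))  # e**-4 * 4**4 / 4! = 0.195367
```

The same inversion works for any PGF that can be evaluated numerically, which is what makes the approach efficient for compound collection/amplification models.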
Comparison of numerical simulation and experimental data for steam-in-place sterilization
NASA Technical Reports Server (NTRS)
Young, Jack H.; Lasher, William C.
1993-01-01
A complex problem involving convective flow of a binary mixture containing a condensable vapor and noncondensable gas in a partially enclosed chamber was modelled and results compared to transient experimental values. The finite element model successfully predicted transport processes in dead-ended tubes with inside diameters of 0.4 to 1.0 cm. When buoyancy driven convective flow was dominant, temperature and mixture compositions agreed with experimental data. Data from 0.4 cm tubes indicate diffusion to be the primary air removal method in small diameter tubes and the diffusivity value in the model to be too large.
A Novel Calibration-Minimum Method for Prediction of Mole Fraction in Non-Ideal Mixture.
Shibayama, Shojiro; Kaneko, Hiromasa; Funatsu, Kimito
2017-04-01
This article proposes a novel concentration prediction model that requires little training data and is useful for rapid process understanding. Process analytical technology is currently popular, especially in the pharmaceutical industry, for enhancement of process understanding and process control. A calibration-free method, iterative optimization technology (IOT), was proposed to predict pure component concentrations, because calibration methods, such as partial least squares, require a large number of training samples, leading to high costs. However, IOT cannot be applied to concentration prediction in non-ideal mixtures because its basic equation is derived from the Beer-Lambert law, which does not hold for non-ideal mixtures. We propose a novel method that realizes prediction of pure component concentrations in mixtures from a small number of training samples, assuming that spectral changes arising from molecular interactions can be expressed as a function of concentration. The proposed method is named IOT with virtual molecular interaction spectra (IOT-VIS) because it takes the spectral change into account as a virtual spectrum x_nonlin,i. Two case studies confirmed that the predictive accuracy of IOT-VIS was the highest among the existing IOT methods.
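The Beer-Lambert starting point of IOT, in the ideal-mixture case, amounts to expressing a mixture spectrum as a concentration-weighted sum of pure-component spectra and inverting by least squares; the spectra below are random stand-ins for real measurements:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
# Random nonnegative stand-ins for two pure-component spectra
# (rows: components, columns: 50 wavelengths).
S = np.abs(rng.normal(size=(2, 50)))

# Beer-Lambert law for an ideal mixture: the mixture spectrum is a
# mole-fraction-weighted sum of the pure-component spectra.
x_true = np.array([0.3, 0.7])
mix = x_true @ S

# Recover the mole fractions by nonnegative least squares and renormalize.
x_hat, _ = nnls(S.T, mix)
x_hat /= x_hat.sum()
print(np.round(x_hat, 3))  # recovers [0.3, 0.7]
```

IOT-VIS extends this linear picture by adding a concentration-dependent virtual spectrum term to absorb non-ideal interaction effects.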
NASA Astrophysics Data System (ADS)
Baadj, S.; Harrache, Z.; Belasri, A.
2013-12-01
The aim of this work is to highlight, through numerical modeling, the chemical and the electrical characteristics of xenon chloride mixture in XeCl* (308 nm) excimer lamp created by a dielectric barrier discharge. A temporal model, based on the Xe/Cl2 mixture chemistry, the circuit and the Boltzmann equations, is constructed. The effects of operating voltage, Cl2 percentage in the Xe/Cl2 gas mixture, dielectric capacitance, as well as gas pressure on the 308-nm photon generation, under typical experimental operating conditions, have been investigated and discussed. The importance of charged and excited species, including the major electronic and ionic processes, is also demonstrated. The present calculations show clearly that the model predicts the optimal operating conditions and describes the electrical and chemical properties of the XeCl* exciplex lamp.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guevara-Carrion, Gabriela; Janzen, Tatjana; Muñoz-Muñoz, Y. Mauricio
Mutual diffusion coefficients of all 20 binary liquid mixtures that can be formed out of methanol, ethanol, acetone, benzene, cyclohexane, toluene, and carbon tetrachloride without a miscibility gap are studied at ambient conditions of temperature and pressure in the entire composition range. The considered mixtures show a varying mixing behavior from almost ideal to strongly non-ideal. Predictive molecular dynamics simulations employing the Green-Kubo formalism are carried out. Radial distribution functions are analyzed to gain an understanding of the liquid structure influencing the diffusion processes. It is shown that cluster formation in mixtures containing one alcoholic component has a significant impact on the diffusion process. The estimation of the thermodynamic factor from experimental vapor-liquid equilibrium data is investigated, considering three excess Gibbs energy models, i.e., Wilson, NRTL, and UNIQUAC. It is found that the Wilson model yields the thermodynamic factor that best suits the simulation results for the prediction of the Fick diffusion coefficient. Four semi-empirical methods for the prediction of the self-diffusion coefficients and nine predictive equations for the Fick diffusion coefficient are assessed, and it is found that methods based on local composition models are more reliable. Finally, the shear viscosity and thermal conductivity are predicted and in most cases compare favorably with experimental literature values.
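The Green-Kubo route to a self-diffusion coefficient integrates the velocity autocorrelation function over time; in this sketch an exponentially decaying model VACF (illustrative reduced units and decay rate) replaces one computed from an MD trajectory:

```python
import numpy as np
from scipy.integrate import trapezoid

# Green-Kubo: the self-diffusion coefficient (per Cartesian component) is
# the time integral of the velocity autocorrelation function (VACF).
v2 = 1.0      # <v_x(0)^2>, illustrative reduced units
gamma = 2.0   # decay rate of the model VACF

t = np.linspace(0.0, 10.0, 10001)
vacf = v2 * np.exp(-gamma * t)

D = trapezoid(vacf, t)   # D = integral of <v_x(0) v_x(t)> dt
print(round(D, 3))       # analytic value v2/gamma = 0.5
```

With real trajectory data the VACF is averaged over particles and time origins, and the integral's plateau is monitored to judge convergence.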
Deposition efficiency optimization in cold spraying of metal-ceramic powder mixtures
NASA Astrophysics Data System (ADS)
Klinkov, S. V.; Kosarev, V. F.
2017-10-01
In the present paper, results of optimization of the cold spray deposition process of a metal-ceramic powder mixture involving impacts of ceramic particles onto coating surface are reported. In the optimization study, a two-probability model was used to take into account the surface activation induced by the ceramic component of the mixture. The dependence of mixture deposition efficiency on the concentration and size of ceramic particles was analysed to identify the ranges of both parameters in which the effect due to ceramic particles on the mixture deposition efficiency was positive. The dependences of the optimum size and concentration of ceramic particles, and also the maximum gain in deposition efficiency, on the probability of adhesion of metal particles to non-activated coating surface were obtained.
Scalable Topic Modeling: Online Learning, Diagnostics, and Recommendation
2017-03-01
Approaches to developing alternative and predictive toxicology based on PBPK/PD and QSAR modeling.
Yang, R S; Thomas, R S; Gustafson, D L; Campain, J; Benjamin, S A; Verhaar, H J; Mumtaz, M M
1998-01-01
Systematic toxicity testing, using conventional toxicology methodologies, of single chemicals and chemical mixtures is highly impractical because of the immense numbers of chemicals and chemical mixtures involved and the limited scientific resources. Therefore, the development of unconventional, efficient, and predictive toxicology methods is imperative. Using carcinogenicity as an end point, we present approaches for developing predictive tools for toxicologic evaluation of chemicals and chemical mixtures relevant to environmental contamination. Central to the approaches presented is the integration of physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) and quantitative structure-activity relationship (QSAR) modeling with focused mechanistically based experimental toxicology. In this development, molecular and cellular biomarkers critical to the carcinogenesis process are evaluated quantitatively between different chemicals and/or chemical mixtures. Examples presented include the integration of PBPK/PD and QSAR modeling with a time-course medium-term liver foci assay, molecular biology and cell proliferation studies, Fourier transform infrared spectroscopic analyses of DNA changes, and cancer modeling to assess and attempt to predict the carcinogenicity of the series of 12 chlorobenzene isomers. Also presented is an ongoing effort to develop and apply a similar approach to chemical mixtures using in vitro cell culture (Syrian hamster embryo cell transformation assay and human keratinocytes) methodologies and in vivo studies. The promise and pitfalls of these developments are elaborated. When successfully applied, these approaches may greatly reduce the animal usage, personnel, resources, and time required to evaluate the carcinogenicity of chemicals and chemical mixtures. PMID:9860897
Dynamic modeling the composting process of the mixture of poultry manure and wheat straw.
Petric, Ivan; Mustafić, Nesib
2015-09-15
Due to a lack of understanding of the complex nature of the composting process, there is a need for a valuable tool that can help to improve the prediction of process performance and also its optimization. Therefore, the main objective of this study is to develop a comprehensive mathematical model of the composting process based on microbial kinetics. The model incorporates two different microbial populations that metabolize the organic matter in two different substrates. The model was validated by comparison of the model and experimental data obtained from composting a mixture of poultry manure and wheat straw. Comparison of simulation results and experimental data for five dynamic state variables (organic matter conversion, oxygen concentration, carbon dioxide concentration, substrate temperature and moisture content) showed that the model predicts the process performance very well. According to the simulation results, the optimum values for air flow rate and ambient air temperature are 0.43 L min⁻¹ kg⁻¹ OM and 28 °C, respectively. On the basis of a sensitivity analysis, the maximum organic matter conversion is the most sensitive of the three objective functions. Among the twelve examined parameters, μmax,1 is the most influential parameter and X1 is the least influential parameter. Copyright © 2015 Elsevier Ltd. All rights reserved.
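The microbial-kinetics core of such composting models can be sketched as a Monod-type ODE system; the single population, single substrate, and all parameter values below are simplifications invented for illustration, not the paper's two-population model or its estimates:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Monod-type kinetics for one microbial population on one substrate
# (units: hours and g/L; all values invented for illustration).
mu_max, K_s, Y, k_d = 0.1, 5.0, 0.5, 0.01

def rhs(t, y):
    S, X = y
    growth = mu_max * S / (K_s + S) * X   # Monod growth rate
    return [-growth / Y,                  # substrate consumption
            growth - k_d * X]             # biomass growth minus decay

sol = solve_ivp(rhs, (0.0, 200.0), [50.0, 1.0])  # S0 = 50 g/L, X0 = 1 g/L
S_end, X_end = sol.y[:, -1]
print(S_end < 1.0, X_end > 0.0)  # substrate nearly exhausted after 200 h
```

The full model couples two such population/substrate pairs to oxygen, carbon dioxide, temperature, and moisture balances.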
A Student’s t Mixture Probability Hypothesis Density Filter for Multi-Target Tracking with Outliers
Liu, Zhuowei; Chen, Shuxin; Wu, Hao; He, Renke; Hao, Lin
2018-01-01
In multi-target tracking, the outliers-corrupted process and measurement noises can reduce the performance of the probability hypothesis density (PHD) filter severely. To solve the problem, this paper proposed a novel PHD filter, called Student’s t mixture PHD (STM-PHD) filter. The proposed filter models the heavy-tailed process noise and measurement noise as a Student’s t distribution as well as approximates the multi-target intensity as a mixture of Student’s t components to be propagated in time. Then, a closed PHD recursion is obtained based on Student’s t approximation. Our approach can make full use of the heavy-tailed characteristic of a Student’s t distribution to handle the situations with heavy-tailed process and the measurement noises. The simulation results verify that the proposed filter can overcome the negative effect generated by outliers and maintain a good tracking accuracy in the simultaneous presence of process and measurement outliers. PMID:29617348
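The motivation for the Student's t noise model is its heavy tails; a quick comparison of tail probabilities (the threshold and degrees of freedom are chosen arbitrarily for illustration) shows how much more plausible a large outlier is under a Student's t than under a Gaussian:

```python
from scipy.stats import norm, t as student_t

# Tail probability of a "5-sigma" event under a Gaussian vs. a Student's t
# with 3 degrees of freedom.
threshold, dof = 5.0, 3.0
p_gauss = norm.sf(threshold)
p_student = student_t.sf(threshold, df=dof)

# Heavy tails make large outliers far more plausible, which is what lets a
# Student's t noise model absorb outliers instead of being skewed by them.
print(p_student / p_gauss > 1e3)  # Student's t: orders of magnitude larger
```

This is the distributional property the STM-PHD filter exploits for both the process and measurement noises.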
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myint, P. C.; Hao, Y.; Firoozabadi, A.
2015-03-27
Thermodynamic property calculations of mixtures containing carbon dioxide (CO2) and water, including brines, are essential in theoretical models of many natural and industrial processes. The properties of greatest practical interest are density, solubility, and enthalpy. Many models for density and solubility calculations have been presented in the literature, but there exists only one study, by Spycher and Pruess, that has compared theoretical molar enthalpy predictions with experimental data [1]. In this report, we recommend two different models for enthalpy calculations: the CPA equation of state by Li and Firoozabadi [2], and the CO2 activity coefficient model by Duan and Sun [3]. We show that the CPA equation of state, which has been demonstrated to provide good agreement with density and solubility data, also accurately calculates molar enthalpies of pure CO2, pure water, and both CO2-rich and aqueous (H2O-rich) mixtures of the two species. It is applicable to a wider range of conditions than the Spycher and Pruess model. In aqueous sodium chloride (NaCl) mixtures, we show that Duan and Sun’s model yields accurate results for the partial molar enthalpy of CO2. It can be combined with another model for the brine enthalpy to calculate the molar enthalpy of H2O-CO2-NaCl mixtures. We conclude by explaining how the CPA equation of state may be modified to further improve agreement with experiments. This generalized CPA is the basis of our future work on this topic.
The influence of surface-active agents in gas mixture on the intensity of jet condensation
NASA Astrophysics Data System (ADS)
Yezhov, YV; Okhotin, VS
2017-11-01
The report presents a methodology for calculating contact condensation of steam from a steam-gas mixture onto a stream of water, taking into account the mass flow of steam through the phase boundary, the change in turbulent transport properties near the interface, and their connection to interface perturbations caused by the surface tension of the mixture. It also presents a method for calculating the surface tension at the interface between water and a mixture of fluorocarbon vapor and water vapor, based on previously established analytical methods for the surface tension of simple one-component liquid-vapor systems. The resulting analytical relation for the surface tension of the mixture is a function of temperature and of the volume concentration of the fluorocarbon gas in the mixture, and holds for all sizes of gas molecules. On a newly built experimental stand, verification experiments were performed on the surface tension of the pure substances (water with steam, and C3F8 liquid with C3F8 vapor), and the first experimental data were obtained on the surface tension at the interface between water and a mixture of water vapor and fluorocarbon C3F8. These data allow refinement of the two constants used in the calculation model for the surface tension of the mixture. An experimental study of jet condensation was carried out with different gases flowing into the condensation zone. The condensation process was monitored by measuring the flow of water leaving the nozzle and the condensate formed. When C3F8 was supplied, condensation was noticeably intensified compared with the condensation of pure water vapor. The calculation results agree satisfactorily with the experimental data on the surface tension of the mixture and on steam condensation from the steam-gas mixture.
Analysis of the calculation results shows that the presence of surfactants in the condensation zone affects both the partial vapor pressure at the interfacial surface and the thermal conductivity of the liquid jet. The first effect degrades the condensation process; the second intensifies it. There is therefore an optimum concentration of surfactant additive in the vapor at which condensation is maximal. The developed design methodology for contact condensation can be used to evaluate these optimum conditions and their practical effect in field studies.
Tian, Liang; Russell, Alan; Anderson, Iver
2014-01-03
Deformation processed metal–metal composites (DMMCs) are high-strength, high-electrical-conductivity composites developed by severe plastic deformation of two ductile metal phases. The extraordinarily high strength of DMMCs is underestimated by the rule of mixtures (volumetric weighted average) of conventionally work-hardened metals. A dislocation-density-based strain-gradient-plasticity model is proposed to relate the strain-gradient effect to the geometrically necessary dislocations emanating from the interface and thereby better predict the strength of DMMCs. The model prediction was compared with our experimental findings for Cu–Nb, Cu–Ta, and Al–Ti DMMC systems to verify the applicability of the new model. The results show that this model predicts the strength of DMMCs better than the rule-of-mixtures model. The strain-gradient effect, responsible for the exceptionally high strength of heavily cold worked DMMCs, is dominant at large deformation strain since its characteristic microstructural length is comparable with the intrinsic material length.
Diversifying mechanisms in the on-farm evolution of crop mixtures.
Thomas, Mathieu; Thépot, Stéphanie; Galic, Nathalie; Jouanne-Pin, Sophie; Remoué, Carine; Goldringer, Isabelle
2015-06-01
While modern agriculture relies on genetic homogeneity, diversifying practices associated with seed exchange and seed recycling may allow crops to adapt to their environment. This socio-genetic model is an original experimental evolution design referred to as on-farm dynamic management of crop diversity. Investigating such a model can help in understanding how evolutionary mechanisms shape crop diversity under diverse agro-environments. We studied a French farmer-led initiative in which a mixture of four wheat landraces called 'Mélange de Touselles' (MDT) was created and circulated within a farmers' network. The 15 sampled MDT subpopulations were simultaneously subjected to diverse environments (e.g. altitude, rainfall) and diverse farmers' practices (e.g. field size, sowing and harvesting date). Twenty-one space-time samples of 80 individuals each were genotyped using 17 microsatellite markers and characterized for their heading date in a 'common-garden' experiment. Gene polymorphism was studied using four markers located in earliness genes. An original network-based approach was developed to depict the particular and complex genetic structure of the landraces composing the mixture. Rapid differentiation among populations within the mixture was detected, larger at the phenotypic and gene levels than at the neutral genetic level, indicating potential divergent selection. We identified two interacting selection processes, variation in the mixture component frequencies and evolution of within-variety diversity, that shaped the standing variability available within the mixture. These results confirmed that diversifying practices and environments maintain genetic diversity and allow for crop evolution in the context of global change. Including concrete measurements of farmers' practices is critical to disentangle crop evolution processes. © 2015 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, E.K.H.; Funkenbusch, P.D.
1993-06-01
Hot isostatic pressing (HIP) of powder mixtures (containing differently sized components) and of composite powders is analyzed. Recent progress, including development of a simple scheme for estimating radial distribution functions, has made modeling of these systems practical. Experimentally, powders containing bimodal or continuous size distributions are observed to hot isostatically press to a higher density under identical processing conditions and to show large differences in the densification rate as a function of density when compared with the monosize powders usually assumed for modeling purposes. Modeling correctly predicts these trends and suggests that they can be partially, but not entirely, attributed to initial packing density differences. Modeling also predicts increased deformation in the smaller particles within a mixture. This effect has also been observed experimentally and is associated with microstructural changes, such as preferential recrystallization of small particles. Finally, consolidation of a composite mixture containing hard, but deformable, inclusions has been modeled for comparison with existing experimental data. Modeling results match both the densification and microstructural observations reported experimentally. Densification is retarded due to contacts between the reinforcing particles, which support a significant portion of the applied pressure. In addition, partitioning of deformation between soft matrix and hard inclusion powders results in increased deformation of the softer material.
Bioethanol production optimization: a thermodynamic analysis.
Alvarez, Víctor H; Rivera, Elmer Ccopa; Costa, Aline C; Filho, Rubens Maciel; Wolf Maciel, Maria Regina; Aznar, Martín
2008-03-01
In this work, the phase equilibrium of binary mixtures for bioethanol production by continuous extractive process was studied. The process is composed of four interlinked units: fermentor, centrifuge, cell treatment unit, and flash vessel (ethanol-congener separation unit). A proposal for modeling the vapor-liquid equilibrium in binary mixtures found in the flash vessel has been considered. This approach uses the Predictive Soave-Redlich-Kwong equation of state, with original and modified molecular parameters. The congeners considered were acetic acid, acetaldehyde, furfural, methanol, and 1-pentanol. The results show that the introduction of new molecular parameters r and q in the UNIFAC model gives more accurate predictions for the concentration of the congener in the gas phase for binary and ternary systems.
Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase
Lu, Kelin; Zhou, Rui
2016-01-01
A sensor fusion methodology for the Gaussian mixtures model is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, the duplicate information is removed by considering the first order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves the tracking accuracy in ballistic target tracking in the re-entry phase applications. PMID:27537883
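Track-to-track fusion under unknown cross-correlation between local estimates is commonly handled by covariance intersection; the sketch below illustrates that general idea for two local tracks with diagonal covariances. It is not the paper's own fusion rule (which removes first-order redundant information between the local tracks), and all state and covariance values are made up for illustration.

```python
# Hedged sketch of track-to-track fusion by covariance intersection (CI),
# a standard rule for fusing two estimates whose cross-correlation is
# unknown. Diagonal covariances only; values are illustrative.

def covariance_intersection(xa, Pa, xb, Pb, n_grid=101):
    """Fuse (xa, Pa) and (xb, Pb); pick the omega minimizing total variance."""
    best = None
    for i in range(n_grid):
        w = i / (n_grid - 1)
        # Information (inverse-variance) combination, per component.
        info = [w / pa + (1 - w) / pb for pa, pb in zip(Pa, Pb)]
        P = [1.0 / v for v in info]
        x = [p * (w * a / pa + (1 - w) * b / pb)
             for p, a, pa, b, pb in zip(P, xa, Pa, xb, Pb)]
        if best is None or sum(P) < best[0]:
            best = (sum(P), x, P)
    return best[1], best[2]

xa, Pa = [1.0, 0.0], [1.0, 4.0]   # local track A: state and variances
xb, Pb = [1.2, 0.1], [4.0, 1.0]   # local track B
x, P = covariance_intersection(xa, Pa, xb, Pb)
```

Because each track is precise in a different component, the fused total variance is smaller than either input's, without ever assuming the tracks are independent.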
Estimating Mixture of Gaussian Processes by Kernel Smoothing
Huang, Mian; Li, Runze; Wang, Hansheng; Yao, Weixin
2014-01-01
When the functional data are not homogeneous, e.g., there exist multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this paper, we propose a new estimation procedure for the Mixture of Gaussian Processes, to incorporate both functional and inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. However, the key difference is that smoothed structures are imposed for both the mean and covariance functions. The model is shown to be identifiable, and can be estimated efficiently by a combination of the ideas from EM algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset. PMID:24976675
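The estimation strategy described above, an EM loop whose M-step re-estimates component mean curves by kernel smoothing of responsibility-weighted data, can be sketched in a heavily simplified form: two components, a common grid, a fixed noise variance, and no covariance smoothing or functional principal component analysis. All names and values below are illustrative, not the paper's implementation.

```python
# Simplified sketch of EM for a two-component mixture of smooth curves,
# with Nadaraya-Watson kernel smoothing inside the M-step. Assumes a
# shared sampling grid and known noise sd (0.1); purely illustrative.
import math, random

random.seed(1)
grid = [i / 20 for i in range(21)]

# Simulate 30 curves from two smooth mean functions plus i.i.d. noise.
curves, labels = [], []
for _ in range(30):
    z = int(random.random() < 0.5)
    mean = (lambda x: math.sin(2 * math.pi * x)) if z else (lambda x: 1.0 - x)
    curves.append([mean(x) + random.gauss(0, 0.1) for x in grid])
    labels.append(z)

def smooth(ys, h=0.08):
    """Gaussian-kernel (Nadaraya-Watson) smoothing on the common grid."""
    out = []
    for x0 in grid:
        w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in grid]
        out.append(sum(wi * yi for wi, yi in zip(w, ys)) / sum(w))
    return out

# Initialize means with the first curve and the curve farthest from it.
d = [sum((a - b) ** 2 for a, b in zip(curves[0], y)) for y in curves]
mu = [smooth(curves[0]), smooth(curves[d.index(max(d))])]
for _ in range(20):
    resp = []
    for y in curves:                      # E-step (noise sd fixed at 0.1)
        ll = [-sum((yi - mi) ** 2 for yi, mi in zip(y, m)) / 0.02 for m in mu]
        top = max(ll)
        p = [math.exp(l - top) for l in ll]
        resp.append([pi / sum(p) for pi in p])
    for k in range(2):                    # M-step: smoothed weighted mean
        wsum = sum(r[k] for r in resp)
        mu[k] = smooth([sum(r[k] * y[j] for r, y in zip(resp, curves)) / wsum
                        for j in range(len(grid))])

assign = [0 if r[0] > r[1] else 1 for r in resp]
```

With well-separated mean curves the responsibilities become essentially hard, and the recovered cluster labels match the simulated ones up to label swapping.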
Evaluation of Thermodynamic Models for Predicting Phase Equilibria of CO2 + Impurity Binary Mixture
NASA Astrophysics Data System (ADS)
Shin, Byeong Soo; Rho, Won Gu; You, Seong-Sik; Kang, Jeong Won; Lee, Chul Soo
2018-03-01
For the design and operation of CO2 capture and storage (CCS) processes, equation of state (EoS) models are used for phase equilibrium calculations. Reliability of an EoS model plays a crucial role, and many variations of EoS models have been reported and continue to be published. The prediction of phase equilibria for CO2 mixtures containing SO2, N2, NO, H2, O2, CH4, H2S, Ar, and H2O is important for CO2 transportation because the captured gas normally contains small amounts of impurities even though it is purified in advance. For the design of pipelines in deep sea or arctic conditions, flow assurance and safety are considered priority issues, and highly reliable calculations are required. In this work, predictive Soave-Redlich-Kwong, cubic plus association, Groupe Européen de Recherches Gazières (GERG-2008), perturbed-chain statistical associating fluid theory, and non-random lattice fluids hydrogen bond EoS models were compared regarding performance in calculating phase equilibria of CO2-impurity binary mixtures and with the collected literature data. No single EoS could cover the entire range of systems considered in this study. Weaknesses and strong points of each EoS model were analyzed, and recommendations are given as guidelines for safe design and operation of CCS processes.
Effect of Inorganic Salts on the Volatility of Organic Acids
2014-01-01
Particulate phase reactions between organic and inorganic compounds may significantly alter aerosol chemical properties, for example, by suppressing particle volatility. Here, chemical processing upon drying of aerosols comprised of organic (acetic, oxalic, succinic, or citric) acid/monovalent inorganic salt mixtures was assessed by measuring the evaporation of the organic acid molecules from the mixture using a novel approach combining a chemical ionization mass spectrometer coupled with a heated flow tube inlet (TPD-CIMS) with kinetic model calculations. For reference, the volatility, i.e. saturation vapor pressure and vaporization enthalpy, of the pure succinic and oxalic acids was also determined and found to be in agreement with previous literature. Comparison between the kinetic model and experimental data suggests significant particle phase processing forming low-volatility material such as organic salts. The results were similar for both ammonium sulfate and sodium chloride mixtures, and relatively more processing was observed with low initial aerosol organic molar fractions. The magnitude of low-volatility organic material formation at an atmospherically relevant pH range indicates that the observed phenomenon is not only significant in laboratory conditions but is also of direct atmospheric relevance. PMID:25369247
Song, Mingkai; Jiao, Pengfei; Qin, Taotao; Jiang, Kangkang; Zhou, Jingwei; Zhuang, Wei; Chen, Yong; Liu, Dong; Zhu, Chenjie; Chen, Xiaochun; Ying, Hanjie; Wu, Jinglan
2017-10-01
An innovative, benign process for recovering lactic acid from its fermentation broth is proposed, using a novel hyper-cross-linked meso-micropore resin and water as eluent. This work focuses on modeling the competitive adsorption behaviors of a glucose, lactic acid, and acetic acid ternary mixture and on exploring the adsorption mechanism. The characterization results showed that the resin had a large BET surface area and a specific pore structure with hydrophobic properties. From analysis of the physicochemical properties of the solutes and the resin, the separation mechanism is proposed to be a combination of hydrophobic effects and size exclusion. Subsequently, three chromatographic models were applied to predict the competitive breakthrough curves of the ternary mixture under different operating conditions. Pore diffusion was the major limiting factor for the adsorption process, consistent with the BET results. The novel HD-06 resin is a promising adsorbent for a future SMB continuous separation process. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baadj, S.; Harrache, Z., E-mail: zharrache@yahoo.com; Belasri, A.
2013-12-15
The aim of this work is to highlight, through numerical modeling, the chemical and electrical characteristics of a xenon chloride mixture in a XeCl* (308 nm) excimer lamp created by a dielectric barrier discharge. A temporal model, based on the Xe/Cl2 mixture chemistry, the circuit equations, and the Boltzmann equation, is constructed. The effects of operating voltage, Cl2 percentage in the Xe/Cl2 gas mixture, dielectric capacitance, and gas pressure on the 308-nm photon generation, under typical experimental operating conditions, have been investigated and discussed. The importance of charged and excited species, including the major electronic and ionic processes, is also demonstrated. The present calculations show clearly that the model predicts the optimal operating conditions and describes the electrical and chemical properties of the XeCl* exciplex lamp.
Development of a Scale-up Tool for Pervaporation Processes
Thiess, Holger; Strube, Jochen
2018-01-01
In this study, an engineering tool for the design and optimization of pervaporation processes is developed based on physico-chemical modelling coupled with laboratory/mini-plant experiments. The model incorporates the solution-diffusion-mechanism, polarization effects (concentration and temperature), axial dispersion, pressure drop and the temperature drop in the feed channel due to vaporization of the permeating components. The permeance, being the key model parameter, was determined via dehydration experiments on a mini-plant scale for the binary mixtures ethanol/water and ethyl acetate/water. A second set of experimental data was utilized for the validation of the model for two chemical systems. The industrially relevant ternary mixture, ethanol/ethyl acetate/water, was investigated close to its azeotropic point and compared to a simulation conducted with the determined binary permeance data. Experimental and simulation data proved to agree very well for the investigated process conditions. In order to test the scalability of the developed engineering tool, large-scale data from an industrial pervaporation plant used for the dehydration of ethanol was compared to a process simulation conducted with the validated physico-chemical model. Since the membranes employed in both mini-plant and industrial scale were of the same type, the permeance data could be transferred. The comparison of the measured and simulated data proved the scalability of the derived model. PMID:29342956
The practice of quality-associated costing: application to transfusion manufacturing processes.
Trenchard, P M; Dixon, R
1997-01-01
This article applies the new method of quality-associated costing (QAC) to the mixture of processes that create red cell and plasma products from whole blood donations. The article compares QAC with two commonly encountered but arbitrary models and illustrates the invalidity of clinical cost-benefit analysis based on these models. The first, an "isolated" cost model, seeks to allocate each whole process cost to only one product class. The other is a "shared" cost model, and it seeks to allocate an approximately equal share of all process costs to all associated products.
NASA Astrophysics Data System (ADS)
Ushakov, Anton; Orlov, Alexey; Sovach, Victor P.
2018-03-01
This article presents the results of research on filling a gas centrifuge cascade for separation of a multicomponent isotope mixture with process gas at various feed flow rates, using a mathematical model of the nonstationary hydraulic and separation processes occurring in the cascade. The objective is to determine the transient behavior of nickel isotopes in the cascade during filling. It is shown that the isotope concentrations in the cascade stages after filling depend on the variable parameters and are not equal to the concentrations in the initial isotope mixture (the cascade feed flow), contrary to an assumption used by earlier researchers when modeling nonstationary processes such as the approach to steady-state isotope concentrations in the cascade. The article presents the physical laws governing the isotope distribution across the cascade stages after filling, and shows that by varying the cascade parameters (feed flow rate, feed stage number, or number of cascade stages) it is possible to change the isotope concentrations in the output flows (light or heavy fraction) so as to shorten the subsequent approach to steady-state isotope concentrations in the cascade.
Using partially labeled data for normal mixture identification with application to class definition
NASA Technical Reports Server (NTRS)
Shahshahani, Behzad M.; Landgrebe, David A.
1992-01-01
The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples are considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and unsupervised learning processes.
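The EM scheme described above can be illustrated in a deliberately simplified 1-D form: a two-component normal mixture with one component per class, where labeled samples enter the E-step with their responsibility fixed at their known class while unlabeled samples receive soft responsibilities. All data and parameter values are synthetic and illustrative; the paper's model allows several components per class.

```python
# Hedged 1-D sketch of semi-supervised EM for a normal mixture: labeled
# points have fixed (hard) responsibilities, unlabeled points get soft
# ones. One component per class for simplicity; values illustrative.
import math, random

random.seed(0)
labeled = [(random.gauss(0, 1), 0) for _ in range(20)] + \
          [(random.gauss(4, 1), 1) for _ in range(20)]
unlabeled = [random.gauss(0, 1) for _ in range(100)] + \
            [random.gauss(4, 1) for _ in range(100)]

mu, var, pi = [0.5, 3.0], [1.0, 1.0], [0.5, 0.5]   # crude starting values

def pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

for _ in range(50):
    # E-step: labeled points keep responsibility 1 for their known class.
    resp = [(x, [1.0 - c, float(c)]) for x, c in labeled]
    for x in unlabeled:
        p = [pi[k] * pdf(x, mu[k], var[k]) for k in range(2)]
        s = sum(p)
        resp.append((x, [pk / s for pk in p]))
    # M-step: responsibility-weighted updates of pi, means, variances.
    for k in range(2):
        w = sum(r[k] for _, r in resp)
        pi[k] = w / len(resp)
        mu[k] = sum(x * r[k] for x, r in resp) / w
        var[k] = sum(r[k] * (x - mu[k]) ** 2 for x, r in resp) / w
```

The labeled samples anchor each component to its class, so the means converge near the true values 0 and 4 even from poor starting points.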
A simple approach to polymer mixture miscibility.
Higgins, Julia S; Lipson, Jane E G; White, Ronald P
2010-03-13
Polymeric mixtures are important materials, but the control and understanding of mixing behaviour poses problems. The original Flory-Huggins theoretical approach, using a lattice model to compute the statistical thermodynamics, provides the basic understanding of the thermodynamic processes involved but is deficient in describing most real systems, and has little or no predictive capability. We have developed an approach using a lattice integral equation theory, and in this paper we demonstrate that this not only describes well the literature data on polymer mixtures but allows new insights into the behaviour of polymers and their mixtures. The characteristic parameters obtained by fitting the data have been successfully shown to be transferable from one dataset to another, to be able to correctly predict behaviour outside the experimental range of the original data and to allow meaningful comparisons to be made between different polymer mixtures.
Pattern analysis of community health center location in Surabaya using spatial Poisson point process
NASA Astrophysics Data System (ADS)
Kusumaningrum, Choriah Margareta; Iriawan, Nur; Winahju, Wiwiek Setya
2017-11-01
A community health center (puskesmas) is one of the health service facilities closest to the community, providing healthcare at the sub-district level as one of the government-mandated community health clinics located across Indonesia. An increasing number of puskesmas does not by itself guarantee the fulfillment of basic health service needs in a region; ideally, a puskesmas should serve at most 30,000 people. The number of puskesmas in Surabaya indicates an unbalanced spread across the city. This research aims to analyze the spread of puskesmas in Surabaya using a spatial Poisson point process model in order to identify effective locations for Surabaya's puskesmas. The analysis showed that the distribution pattern of puskesmas in Surabaya is a non-homogeneous Poisson process and can be approximated by a mixture Poisson model. Based on the model estimated with a Bayesian mixture approach coupled with MCMC, none of the examined puskesmas characteristics had a significant influence as factors for deciding on the addition of a health center at a given location. Factors related to the areas of the sub-districts should instead be considered as covariates when deciding whether to add puskesmas in Surabaya.
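Non-homogeneous Poisson point patterns like the one fitted above are commonly simulated by thinning: draw a homogeneous process at the maximum intensity, then keep each point with probability λ(x, y)/λmax. The sketch below shows that standard construction on the unit square; the intensity surface and all numbers are made up for illustration.

```python
# Hedged sketch: simulate a non-homogeneous Poisson point process on the
# unit square by thinning a homogeneous one. Intensity is illustrative.
import math, random

random.seed(7)

def intensity(x, y):
    # A single Gaussian bump peaking (at value 50) near (0.3, 0.7).
    return 50.0 * math.exp(-((x - 0.3) ** 2 + (y - 0.7) ** 2) / 0.05)

lam_max = 50.0

def poisson(lam):
    """Knuth's algorithm for a Poisson(lam) draw."""
    L, k, prod = math.exp(-lam), 0, random.random()
    while prod > L:
        k += 1
        prod *= random.random()
    return k

n = poisson(lam_max)                    # homogeneous candidate count
points = []
for _ in range(n):
    x, y = random.random(), random.random()
    if random.random() < intensity(x, y) / lam_max:   # thinning step
        points.append((x, y))
```

The retained points cluster where the intensity is high, mimicking the uneven facility pattern the abstract describes.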
ERIC Educational Resources Information Center
Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.
2008-01-01
Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models using the Bayesian information criterion to compare multiple models and identify the…
Wang, Quan-Ying; Sun, Jing-Yue; Xu, Xing-Jian; Yu, Hong-Wen
2018-06-20
Because of the extensive use of Cu-based fungicides, the accumulation of Cu in agricultural soil has been widely reported. However, little is known about the bioavailability of Cu derived from different fungicides in soil. This paper investigated both the distribution behaviors of Cu from two commonly used fungicides (Bordeaux mixture and copper oxychloride) during the aging process and the toxicological effects of Cu on earthworms. Copper nitrate was included as a comparison during the aging process. The distribution of exogenous Cu into different soil fractions involved an initial rapid retention (the first 8 weeks) followed by a slow continuous retention. Moreover, Cu mainly moved from the exchangeable and carbonate fractions to the Fe-Mn oxide-bound fraction during aging. The Elovich model fit the aging of available Cu well, and the transformation rate was in the order Cu(NO3)2 > Bordeaux mixture > copper oxychloride. On the other hand, the biological responses of earthworms showed that catalase activities and malondialdehyde contents of the copper oxychloride-treated earthworms were significantly higher than those of the Bordeaux mixture-treated earthworms. Body Cu loads of earthworms from the different Cu-spiked soils were also in the order copper oxychloride > Bordeaux mixture. Thus, the bioavailability of Cu from copper oxychloride in soil was significantly higher than that from Bordeaux mixture, and the specific Cu compound should be taken into consideration when studying the bioavailability of Cu-based fungicides in soil. Copyright © 2018 Elsevier Inc. All rights reserved.
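The Elovich model mentioned above describes decelerating kinetics of the form q(t) = (1/β)·ln(1 + αβt); the sketch below evaluates it on an arithmetic time grid to show the rapid initial retention followed by a slow continuous phase. The parameter values are hypothetical, not fitted to the study's data.

```python
# Sketch of the Elovich equation for decelerating aging/sorption
# kinetics: q(t) = (1/beta) * ln(1 + alpha*beta*t). Values illustrative.
import math

def elovich(t, alpha, beta):
    """Cumulative transformed amount at time t under Elovich kinetics."""
    return math.log(1.0 + alpha * beta * t) / beta

alpha, beta = 2.0, 0.5          # hypothetical rate and deceleration constants
times = [1, 2, 3, 4, 5]
q = [elovich(t, alpha, beta) for t in times]
increments = [b - a for a, b in zip(q, q[1:])]   # shrinking step-to-step gains
```

The strictly shrinking increments mirror the two-phase pattern in the abstract: fast retention early, then a slow continuous phase.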
An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.
Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei
2013-05-01
Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
Mixture and odorant processing in the olfactory systems of insects: a comparative perspective.
Clifford, Marie R; Riffell, Jeffrey A
2013-11-01
Natural olfactory stimuli are often complex mixtures of volatiles, of which the identities and ratios of constituents are important for odor-mediated behaviors. Despite this importance, the mechanism by which the olfactory system processes this complex information remains an area of active study. In this review, we describe recent progress in how odorants and mixtures are processed in the brain of insects. We use a comparative approach toward contrasting olfactory coding and the behavioral efficacy of mixtures in different insect species, and organize these topics around four sections: (1) Examples of the behavioral efficacy of odor mixtures and the olfactory environment; (2) mixture processing in the periphery; (3) mixture coding in the antennal lobe; and (4) evolutionary implications and adaptations for olfactory processing. We also include pertinent background information about the processing of individual odorants and comparative differences in wiring and anatomy, as these topics have been richly investigated and inform the processing of mixtures in the insect olfactory system. Finally, we describe exciting studies that have begun to elucidate the role of the processing of complex olfactory information in evolution and speciation.
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
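One way to see why the prior on the precision parameter matters: under a Dirichlet process with precision α, the expected number of clusters among n observations has the closed form E[K] = Σ from i = 1 to n of α/(α + i − 1), which grows roughly as α·log(n). A small sketch of that standard formula (not the paper's prior-construction method):

```python
# Expected number of clusters under a Dirichlet process / Chinese
# restaurant process with precision alpha, for n observations.

def expected_clusters(alpha, n):
    """E[K] = sum_{i=1}^{n} alpha / (alpha + i - 1)."""
    return sum(alpha / (alpha + i) for i in range(n))

# Modest changes in alpha shift the implied prior level of clustering a lot.
ek = {a: expected_clusters(a, 100) for a in (0.1, 1.0, 10.0)}
```

With n = 100, α = 1 implies roughly 5 expected clusters, while α = 0.1 and α = 10 imply far fewer and far more, respectively, which is why inference about clustering is sensitive to the prior on α.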
Spatially explicit dynamic N-mixture models
Zhao, Qing; Royle, Andy; Boomer, G. Scott
2017-01-01
Knowledge of demographic parameters such as survival, reproduction, emigration, and immigration is essential to understand metapopulation dynamics. Traditionally the estimation of these demographic parameters requires intensive data from marked animals. The development of dynamic N-mixture models makes it possible to estimate demographic parameters from count data of unmarked animals, but the original dynamic N-mixture model does not distinguish emigration and immigration from survival and reproduction, limiting its ability to explain important metapopulation processes such as movement among local populations. In this study we developed a spatially explicit dynamic N-mixture model that estimates survival, reproduction, emigration, local population size, and detection probability from count data under the assumption that movement only occurs among adjacent habitat patches. Simulation studies showed that the inference of our model depends on detection probability, local population size, and the implementation of robust sampling design. Our model provides reliable estimates of survival, reproduction, and emigration when detection probability is high, regardless of local population size or the type of sampling design. When detection probability is low, however, our model only provides reliable estimates of survival, reproduction, and emigration when local population size is moderate to high and robust sampling design is used. A sensitivity analysis showed that our model is robust against the violation of the assumption that movement only occurs among adjacent habitat patches, suggesting wide applications of this model. Our model can be used to improve our understanding of metapopulation dynamics based on count data that are relatively easy to collect in many systems.
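The data structure a dynamic N-mixture model is fitted to can be sketched by simulating the latent abundance process (binomial survival plus Poisson recruitment) and binomial detection. This minimal version covers a single site, omits the paper's spatial movement component, and uses hypothetical parameter values.

```python
# Illustrative simulation of dynamic N-mixture data: latent abundance
# evolves by survival + recruitment; observed counts are binomial
# thinnings of the latent abundance. One site, no movement; values
# are hypothetical.
import math, random

random.seed(42)
phi, gamma, p = 0.8, 2.0, 0.6   # survival prob, recruitment rate, detection

def binom(n, q):
    return sum(random.random() < q for _ in range(n))

def poisson(lam):
    """Knuth's algorithm for a Poisson(lam) draw."""
    L, k, prod = math.exp(-lam), 0, random.random()
    while prod > L:
        k += 1
        prod *= random.random()
    return k

N = [poisson(10.0)]                      # initial latent abundance
for t in range(9):
    survivors = binom(N[-1], phi)        # each animal survives w.p. phi
    recruits = poisson(gamma)            # new animals enter w.p.p. gamma
    N.append(survivors + recruits)
counts = [binom(n, p) for n in N]        # counts systematically miss animals
```

The model's job is the inverse problem: recover phi, gamma, p, and the latent N series from repeated counts alone, which is why detection probability and sampling design drive how well that works.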
Mathematical modeling of a single stage ultrasonically assisted distillation process.
Mahdi, Taha; Ahmad, Arshad; Ripin, Adnan; Abdullah, Tuan Amran Tuan; Nasef, Mohamed M; Ali, Mohamad W
2015-05-01
The ability of sonication phenomena in facilitating separation of azeotropic mixtures presents a promising approach for the development of more intensified and efficient distillation systems than conventional ones. To expedite the much-needed development, a mathematical model of the system based on conservation principles, vapor-liquid equilibrium and sonochemistry was developed in this study. The model that was founded on a single stage vapor-liquid equilibrium system and enhanced with ultrasonic waves was coded using MATLAB simulator and validated with experimental data for ethanol-ethyl acetate mixture. The effects of both ultrasonic frequency and intensity on the relative volatility and azeotropic point were examined, and the optimal conditions were obtained using genetic algorithm. The experimental data validated the model with a reasonable accuracy. The results of this study revealed that the azeotropic point of the mixture can be totally eliminated with the right combination of sonication parameters and this can be utilized in facilitating design efforts towards establishing a workable ultrasonically intensified distillation system. Copyright © 2014 Elsevier B.V. All rights reserved.
Raeissi, Sona; Haghbakhsh, Reza; Florusse, Louw J; Peters, Cor J
Mixtures of carbon dioxide and secondary butyl alcohol at high pressures are of interest for a range of industrial applications. It is therefore important to have trustworthy experimental data on the high-pressure phase behavior of this mixture over a wide range of temperatures, and an accurate thermodynamic model is necessary for the optimal design and operation of processes. In this study, bubble points of binary mixtures of CO2 + secondary butyl alcohol were measured using a synthetic method. Measurements covered CO2 mole fractions of (0.10 to 0.57) and temperatures from (293 to 370) K, with pressures reaching up to 11 MPa. The experimental data were modelled by the cubic plus association (CPA) equation of state (EoS), as well as the simpler Soave-Redlich-Kwong (SRK) EoS. Predictive and correlative modes were considered for both models. In the predictive mode, the CPA performs better than the SRK because it also accounts for association.
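As a side note, the SRK EoS mentioned above requires only the critical temperature, critical pressure, and acentric factor of each component. A minimal sketch of its temperature-dependent attraction parameter and covolume for pure CO2 (the critical constants are standard textbook values, not data from the paper):

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def srk_params(T, Tc, Pc, omega):
    """Soave-Redlich-Kwong a(T) and b from critical constants."""
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc))) ** 2
    a = 0.42748 * (R * Tc) ** 2 / Pc * alpha
    b = 0.08664 * R * Tc / Pc
    return a, b

# CO2 critical constants (textbook values): Tc in K, Pc in Pa
Tc, Pc, omega = 304.13, 7.377e6, 0.224
a, b = srk_params(300.0, Tc, Pc, omega)
print(a, b)
```

The CPA EoS adds an association (hydrogen-bonding) contribution on top of such a cubic term, which is why it handles the alcohol better in predictive mode.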
NASA Astrophysics Data System (ADS)
Zohdi, T. I.
2017-07-01
A key part of emerging advanced additive manufacturing methods is the deposition of specialized particulate mixtures of materials on substrates. For example, in many cases these materials are polydisperse powder mixtures whereby one set of particles is chosen with the objective to electrically, thermally or mechanically functionalize the overall mixture material and another set of finer-scale particles serves as an interstitial filler/binder. Often, achieving controllable, precise, deposition is difficult or impossible using mechanical means alone. It is for this reason that electromagnetically-driven methods are being pursued in industry, whereby the particles are ionized and an electromagnetic field is used to guide them into place. The goal of this work is to develop a model and simulation framework to investigate the behavior of a deposition as a function of an applied electric field. The approach develops a modular discrete-element type method for the simulation of the particle dynamics, which provides researchers with a framework to construct computational tools for this growing industry.
NASA Astrophysics Data System (ADS)
Graham, R. A.
2012-10-01
Disturbed geology within a several-km-diameter surface area of sedimentary Carrizo Sandstone near Uvalde, Texas, indicates the presence of a partially buried meteorite impact crater. Identification of its impact origin is supported by detailed studies, but quartz grains recovered from distances of about 100 km from the structure also show planar deformation features (PDFs). While PDFs are recognized as arising uniquely from impact processes, quantitative interpretation requires extension of Hugoniot materials models to more realistic grain-level mixture models. Carrizo sandstone is a porous mixture of fine quartz and goethite. At impact pressures of tens of GPa, goethite separates into hematite and water vapor upon release of impact pressure. Samples from six different locations up to 50 km from the impact site preserve characteristic features resulting from mixtures of goethite, its water vapor, hematite and quartz. Spheroids resulting from local radial acceleration of mixed-density, hot products are common at various sites. Local hydrodynamic instabilities cause similar effects.
Coronado, M; Segadães, A M; Andrés, A
2015-12-15
This work describes the leaching behavior of potentially hazardous metals from three different clay-based industrial ceramic products (wall bricks, roof tiles, and face bricks) containing foundry sand dust and Waelz slag as alternative raw materials. For each product, ten mixtures were defined by mixture design of experiments, and the leaching of As, Ba, Cd, Cr, Cu, Mo, Ni, Pb, and Zn was evaluated in pressed specimens fired simulating the three industrial ceramic processes. The results showed that, despite the chemical, mineralogical and processing differences, only chromium and molybdenum were not fully immobilized during ceramic processing. Their leaching was modeled as polynomial equations, functions of the raw materials contents, and plotted as response surfaces. This made evident that Cr and Mo leaching from the fired products depends not only on the corresponding contents and the basicity of the initial mixtures, but is also clearly related to the mineralogical composition of the fired products, namely the amount of glassy phase, which in turn depends on both the major oxide contents and the firing temperature. Copyright © 2015 Elsevier B.V. All rights reserved.
Rath, Swagat S; Nayak, Pradeep; Mukherjee, P S; Roy Chaudhury, G; Mishra, B K
2012-03-01
The global crisis of hazardous electronic waste (E-waste) is on the rise due to the increasing usage and disposal of electronic devices. An environmentally benign process was developed to treat E-waste, consisting of thermal plasma treatment followed by recovery of metal values through mineral acid leaching. In the thermal step, the E-waste was melted to recover the metal values as a metallic mixture, which was then subjected to acid leaching in the presence of a depolarizer. The leach liquor mainly contained copper, as the other elements such as Al and Fe were mostly in alloy form according to the XRD and phase-diagram studies. A response surface model was used to optimize the leaching conditions. Leaching efficiencies of more than 90% at room temperature were observed for Cu, Ni and Co with HCl as the solvent, whereas Fe and Al showed less than 40% efficiency. Copyright © 2011 Elsevier Ltd. All rights reserved.
A Study of Cavitation-Ignition Bubble Combustion
NASA Technical Reports Server (NTRS)
Nguyen, Quang-Viet; Jacqmin, David A.
2005-01-01
We present the results of an experimental and computational study of the physics and chemistry of cavitation-ignition bubble combustion (CIBC), a process that occurs when combustible gaseous mixtures are ignited by the high temperatures found inside a rapidly collapsing bubble. The CIBC process was modeled using a time-dependent compressible fluid-dynamics code that includes finite-rate chemistry. The model predicts that gas-phase reactions within the bubble produce CO and other gaseous by-products of combustion. In addition, heat and mechanical energy release through a bubble volume-expansion phase are also predicted by the model. We experimentally demonstrate the CIBC process using an ultrasonically excited cavitation flow reactor with various hydrocarbon-air mixtures in liquid water. Low concentrations (< 160 ppm) of carbon monoxide (CO) emissions from the ultrasonic reactor were measured and found to be proportional to the acoustic excitation power. The results of the model were consistent with the measured experimental results. Based on the experimental findings, the computational model, and previous reports of the "micro-diesel effect" in industrial hydraulic systems, we conclude that CIBC is indeed possible and exists in ultrasonically- and hydrodynamically-induced cavitation. Finally, estimates of the utility of the CIBC process as a means of powering an idealized heat engine are also presented.
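The high temperatures invoked above are often estimated with a simple adiabatic-compression argument: for an ideal gas, a bubble collapsing from radius R0 to R heats up as T = T0 (R0/R)^(3(γ-1)). The sketch below uses illustrative numbers, not values from the study.

```python
def collapse_temperature(T0, r_ratio, gamma=1.4):
    """Peak gas temperature for adiabatic collapse of a spherical bubble
    compressed from radius R0 to R (r_ratio = R0/R), ideal gas assumed.
    Volume scales as radius cubed, hence the factor 3 in the exponent."""
    return T0 * r_ratio ** (3.0 * (gamma - 1.0))

# A 10-fold radius reduction of an air-like bubble starting at 300 K
# already exceeds typical hydrocarbon autoignition temperatures.
print(round(collapse_temperature(300.0, 10.0)))
```

Real collapses are not perfectly adiabatic (heat and mass transfer intervene), which is one reason the study couples the fluid dynamics to finite-rate chemistry instead.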
Sacristan, C J; Dupont, T; Sicot, O; Leclaire, P; Verdière, K; Panneton, R; Gong, X L
2016-10-01
The acoustic properties of an air-saturated, macroscopically inhomogeneous aluminum foam are studied in the equivalent-fluid approximation. A reference sample, built by forcing a conically shaped, highly compressible melamine foam into a rigid tube of constant diameter, is studied first. In this process, a radial compression varying with depth is applied. With the help of an assumption on the compressed pore geometry, the properties of the reference sample can be modelled everywhere in the thickness, and the classical transfer matrix method can be used as a theoretical reference. In the mixture approach, the material is viewed as a mixture of two known materials placed in a patchwork configuration, with the proportions of each varying with depth; the properties are derived from a mixing law. For the reference sample, the classical transfer matrix method is used to validate the experimental results, and these results in turn validate the mixture approach. The mixture approach is then used to characterize a porous aluminum for which only the properties of the external faces are known. A porosity profile is needed and is obtained from a simulated annealing optimization process.
Yan, Luchun; Liu, Jiemin; Jiang, Shen; Wu, Chuandong; Gao, Kewei
2017-07-13
The olfactory evaluation function (e.g., odor intensity rating) of an e-nose is one of the most challenging issues in research on odor pollution monitoring. But an odor is normally produced by a set of stimuli, and odor interactions among constituents significantly influence the mixture's odor intensity. This study investigated the odor interaction principle in odor mixtures of aldehydes and of esters. A modified vector model (MVM) was then proposed, and it successfully demonstrated the similarity of the odor interaction pattern among odorants of the same type. Based on this regular interaction pattern, and unlike the determined empirical models of conventional approaches that fit only a specific odor mixture, the MVM distinctly simplifies the odor intensity prediction of odor mixtures. Furthermore, the MVM also provides a way of directly converting constituents' chemical concentrations to their mixture's odor intensity. By combining the MVM with the usual data-processing algorithms of an e-nose, a new e-nose system was established for odor intensity rating. Compared with instrumental analysis and a human assessor, it exhibited good accuracy in both quantitative analysis (Pearson correlation coefficient of 0.999 for individual aldehydes (n = 12), 0.996 for their binary mixtures (n = 36) and 0.990 for their ternary mixtures (n = 60)) and odor intensity assessment (Pearson correlation coefficient of 0.980 for individual aldehydes (n = 15), 0.973 for their binary mixtures (n = 24), and 0.888 for their ternary mixtures (n = 25)). Thus, the observed regular interaction pattern is considered an important foundation for accelerating extensive application of olfactory evaluation in odor pollution monitoring.
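For background, the classical vector model from which a modified vector model is typically derived treats the two component intensities as vectors separated by an interaction angle, so the mixture intensity follows the law of cosines. The sketch below shows only that basic rule; the angle value is illustrative and the MVM's modifications are not reproduced here.

```python
import math

def vector_model_intensity(i1, i2, angle_deg):
    """Perceived intensity of a binary mixture under the classical vector
    model: component intensities add like vectors separated by an
    interaction angle (law of cosines)."""
    a = math.radians(angle_deg)
    return math.sqrt(i1 ** 2 + i2 ** 2 + 2.0 * i1 * i2 * math.cos(a))

# With a 90 degree angle the components combine like orthogonal vectors,
# and the mixture is weaker than the arithmetic sum of the components.
print(vector_model_intensity(3.0, 4.0, 90.0))
```

Smaller angles model weaker suppression (at 0 degrees the intensities simply add), which is how the interaction pattern of an odorant class can be encoded in a single parameter.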
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehl, M; Kukkadapu, G; Kumar, K
The use of gasoline in homogeneous charge compression ignition (HCCI) engines and in dual-fuel diesel-gasoline engines has increased the need to understand its compression ignition processes under engine-like conditions. These processes need to be studied under well-controlled conditions in order to quantify low temperature heat release and to provide fundamental validation data for chemical kinetic models. With this in mind, an experimental campaign has been undertaken in a rapid compression machine (RCM) to measure the ignition of gasoline mixtures over a wide range of compression temperatures and for different compression pressures. By measuring the pressure history during ignition, information on the first-stage ignition (when observed) and second-stage ignition is captured, along with information on the phasing of the heat release. Heat release processes during ignition are important because gasoline is known to exhibit low-temperature, intermediate-temperature and high-temperature heat release. In an HCCI engine, the occurrence of low-temperature and intermediate-temperature heat release can be exploited to obtain higher load operation and has become a topic of much interest for engine researchers. Consequently, it is important to understand these processes under well-controlled conditions. A four-component gasoline surrogate model (including n-heptane, iso-octane, toluene, and 2-pentene) has been developed to simulate real gasolines, and an appropriate surrogate mixture of the four components has been developed to simulate the specific gasoline used in the RCM experiments. This chemical kinetic surrogate model was then used to simulate the RCM experimental results for real gasoline. The experimental and modeling results covered ultra-lean to stoichiometric mixtures, compressed temperatures of 640-950 K, and compression pressures of 20 and 40 bar.
The agreement between the experiments and model is encouraging in terms of first-stage (when observed) and second-stage ignition delay times and of heat release rate. The experimental and computational results are used to gain insight into low and intermediate temperature processes during gasoline ignition.
Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.
Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry
2016-09-01
Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and it is compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway and it is shown that the multi-compartment model fits better the experimental data. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software. Contact: konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
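The accept/reject idea at the core of ABC can be shown with plain rejection sampling; the ABC-SMC algorithm of the paper adds a sequence of shrinking tolerances and, here, DPM transition kernels on top of this. A toy sketch for inferring a Gaussian mean (all names and settings are illustrative):

```python
import random

def abc_rejection(observed_mean, n_obs, prior, simulate, eps, n_draws, seed=0):
    """Rejection ABC: keep parameter draws whose simulated summary
    statistic lands within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior(rng)
        sim_mean = simulate(theta, n_obs, rng)
        if abs(sim_mean - observed_mean) < eps:
            accepted.append(theta)
    return accepted

prior = lambda rng: rng.uniform(-5.0, 5.0)  # flat prior on the mean
simulate = lambda mu, n, rng: sum(rng.gauss(mu, 1.0) for _ in range(n)) / n

post = abc_rejection(observed_mean=2.0, n_obs=50, prior=prior,
                     simulate=simulate, eps=0.3, n_draws=2000, seed=1)
print(len(post), sum(post) / len(post))
```

Rejection ABC wastes most draws when the tolerance is tight, which motivates the SMC layer and well-chosen transition kernels discussed in the abstract.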
CO2 diffusion in champagne wines: a molecular dynamics study.
Perret, Alexandre; Bonhommeau, David A; Liger-Belair, Gérard; Cours, Thibaud; Alijah, Alexander
2014-02-20
Although diffusion is considered as the main physical process responsible for the nucleation and growth of carbon dioxide bubbles in sparkling beverages, the role of each type of molecule in the diffusion process remains unclear. In the present study, we have used the TIP5P and SPC/E water models to perform force field molecular dynamics simulations of CO2 molecules in water and in a water/ethanol mixture respecting Champagne wine proportions. CO2 diffusion coefficients were computed by applying the generalized Fick's law for the determination of multicomponent diffusion coefficients, a law that simplifies to the standard Fick's law in the case of champagnes. The CO2 diffusion coefficients obtained in pure water and water/ethanol mixtures composed of TIP5P water molecules were always found to exceed the coefficients obtained in mixtures composed of SPC/E water molecules, a trend that was attributed to a larger propensity of SPC/E water molecules to form hydrogen bonds. Despite the fact that the SPC/E model is more accurate than the TIP5P model for computing water self-diffusion and CO2 diffusion in pure water, the diffusion coefficients of CO2 molecules in the water/ethanol mixture are in much better agreement with the experimental values of 1.4-1.5 × 10⁻⁹ m²/s obtained for Champagne wines when the TIP5P model is employed. This difference was deemed to rely on the larger propensity of SPC/E water molecules to maintain the hydrogen-bonded network between water molecules and form new hydrogen bonds with ethanol, although statistical issues cannot be completely excluded. The remarkable agreement between the theoretical CO2 diffusion coefficients obtained within the TIP5P water/ethanol mixture and the experimental data specific to Champagne wines makes us infer that the diffusion coefficient in these emblematic hydroalcoholic sparkling beverages is expected to remain roughly constant whatever their proportions of sugars, glycerol, or peptides.
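As background, diffusion coefficients in molecular dynamics studies such as this one are commonly extracted from the Einstein relation, D = MSD/(6t) in three dimensions. The sketch below substitutes a simple lattice random walk for the MD trajectory, so the numbers are illustrative only, not Champagne values.

```python
import random

def estimate_D(n_walkers, n_steps, dt, step, seed=0):
    """Estimate a 3-D diffusion coefficient from the mean-squared
    displacement via the Einstein relation D = MSD / (6 t)."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_walkers):
        x = y = z = 0.0
        for _ in range(n_steps):
            axis = rng.randrange(3)           # pick a random axis
            d = step if rng.random() < 0.5 else -step
            if axis == 0:
                x += d
            elif axis == 1:
                y += d
            else:
                z += d
        msd += x * x + y * y + z * z
    msd /= n_walkers
    return msd / (6.0 * n_steps * dt)

# For this walk the theoretical value is step**2 / (6 * dt).
D = estimate_D(n_walkers=2000, n_steps=200, dt=1.0, step=1.0, seed=1)
print(D)
```

In an MD analysis the same estimator is applied to atomic trajectories, with the MSD averaged over molecules and time origins before taking the long-time slope.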
NASA Astrophysics Data System (ADS)
Miyamoto, H.; Shoji, Y.; Akasaka, R.; Lemmon, E. W.
2017-10-01
Natural working fluid mixtures, including combinations of CO2, hydrocarbons, water, and ammonia, are expected to have applications in energy conversion processes such as heat pumps and organic Rankine cycles. However, the available literature data, much of which were published between 1975 and 1992, do not incorporate the recommendations of the Guide to the Expression of Uncertainty in Measurement. Therefore, new and more reliable thermodynamic property measurements obtained with state-of-the-art technology are required. The goal of the present study was to obtain accurate vapor-liquid equilibrium (VLE) properties for complex mixtures based on two different gases with significant variations in their boiling points. Precise VLE data were measured with a recirculation-type apparatus with a 380 cm³ equilibration cell and two windows allowing observation of the phase behavior. This cell was equipped with recirculating and expansion loops that were immersed in temperature-controlled liquid and air baths, respectively. Following equilibration, the composition of the sample in each loop was ascertained by gas chromatography. VLE data were acquired for CO2/ethanol and CO2/isopentane binary mixtures within the temperature range from 300 K to 330 K and at pressures up to 7 MPa. These data were used to fit interaction parameters in a Helmholtz energy mixture model. Comparisons were made with the available literature data and values calculated by thermodynamic property models.
Uma, R N; Manjula, G; Meenambal, T
2007-04-01
The reaction rates and activation energy in aerobic composting processes for yard waste were determined using specifically designed reactors. Different mixture ratios were fixed before the commencement of the process. The C/N ratio was found to be optimum for a mixture ratio of 1:6, containing one part of coir pith to six parts of other waste, which included yard waste, yeast sludge, poultry yard waste and a decomposing culture (Pleurotosis). The path of stabilization of the wastes was continuously monitored by observing various parameters such as temperature, pH, electrical conductivity, COD and VS at regular time intervals. Kinetic analysis was done to determine the reaction rates and activation energy for the optimum mixture ratio under forced aeration conditions. The results of the analysis clearly indicated that the temperature dependence of the reaction rates followed the Arrhenius equation. The temperature coefficients were also determined. The degradation of the organic fraction of the yard waste could be predicted using a first-order reaction model.
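The Arrhenius temperature dependence reported above can be checked by regressing ln k on 1/T, since k = A exp(-Ea/(R T)) is linear in those coordinates. A sketch with synthetic, noise-free rate constants (the A and Ea values are illustrative, not the study's results):

```python
import math

R = 8.314  # J/(mol K)

def fit_arrhenius(temps, ks):
    """Least-squares fit of ln k = ln A - Ea/(R T); returns (A, Ea)."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope * R  # A, Ea

# Synthetic noise-free data: A = 1e6 per day, Ea = 55 kJ/mol (illustrative)
A_true, Ea_true = 1.0e6, 55_000.0
temps = [298.0, 308.0, 318.0, 328.0]
ks = [A_true * math.exp(-Ea_true / (R * t)) for t in temps]
A_hat, Ea_hat = fit_arrhenius(temps, ks)
print(A_hat, Ea_hat)
```

With real composting data the same regression yields the activation energy, and its slope quality indicates how well the Arrhenius equation actually holds over the measured temperature range.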
An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes
ERIC Educational Resources Information Center
Kaplan, David
2008-01-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…
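As a concrete illustration of the simplest of the four methods, the manifest Markov model, the transition probabilities can be estimated directly as row-normalized transition counts from observed stage sequences. The stage labels and data below are made up for illustration.

```python
from collections import Counter

def transition_matrix(sequences, states):
    """MLE of a first-order Markov transition matrix from observed
    stage sequences: row-normalized transition counts."""
    counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    mat = {}
    for s in states:
        row_total = sum(counts[(s, t)] for t in states)
        mat[s] = {t: (counts[(s, t)] / row_total if row_total else 0.0)
                  for t in states}
    return mat

# Three short hypothetical developmental sequences over two stages
seqs = [["pre", "pre", "post"],
        ["pre", "post", "post"],
        ["pre", "pre", "pre"]]
P = transition_matrix(seqs, ["pre", "post"])
print(P["pre"], P["post"])
```

The latent Markov and mixture latent Markov extensions replace these observed stages with latent classes, but the transition-matrix machinery is the same.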
A bidimensional finite mixture model for longitudinal data subject to dropout.
Spagnoli, Alessandra; Marino, Maria Francesca; Alfò, Marco
2018-06-05
In longitudinal studies, subjects may be lost to follow up and, thus, present incomplete response sequences. When the mechanism underlying the dropout is nonignorable, we need to account for dependence between the longitudinal and the dropout process. We propose to model such a dependence through discrete latent effects, which are outcome-specific and account for heterogeneity in the univariate profiles. Dependence between profiles is introduced by using a probability matrix to describe the corresponding joint distribution. In this way, we separately model dependence within each outcome and dependence between outcomes. The major feature of this proposal, when compared with standard finite mixture models, is that it allows the nonignorable dropout model to properly nest its ignorable counterpart. We also discuss the use of an index of (local) sensitivity to nonignorability to investigate the effects that assumptions about the dropout process may have on model parameter estimates. The proposal is illustrated via the analysis of data from a longitudinal study on the dynamics of cognitive functioning in the elderly. Copyright © 2018 John Wiley & Sons, Ltd.
Modelling and calculation of flotation process in one-dimensional formulation
NASA Astrophysics Data System (ADS)
Amanbaev, Tulegen; Tilleuov, Gamidulla; Tulegenova, Bibigul
2016-08-01
Within the framework of the mechanics of multiphase media, a mathematical model of the flotation process in a dispersed mixture of liquid, solid and gas phases is constructed, taking into account the degree of mineralization of the bubble surfaces. Application of the model is demonstrated for one-dimensional stationary flotation, and it is shown that the equations describing the ascent of the bubbles are singularly perturbed (stiff). The effects of bubble size and concentration and of the volumetric content of dispersed particles on the flotation process are analyzed.
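Stiffness of the kind noted above punishes explicit integrators: the step size is forced far below what accuracy alone would require. A toy contrast between explicit and implicit Euler on the linear test equation y' = -λy (illustrative only, not the flotation model itself):

```python
def euler_explicit(lam, y0, h, n):
    """Explicit Euler for y' = -lam*y: y_{k+1} = (1 - h*lam) * y_k."""
    y = y0
    for _ in range(n):
        y = y + h * (-lam * y)
    return y

def euler_implicit(lam, y0, h, n):
    """Implicit Euler: solve y_{k+1} = y_k + h*(-lam*y_{k+1})."""
    y = y0
    for _ in range(n):
        y = y / (1.0 + h * lam)
    return y

# For h*lam = 5 the explicit scheme oscillates and diverges,
# while the implicit scheme decays like the true solution exp(-lam*t).
lam, y0, h, n = 50.0, 1.0, 0.1, 40
print(abs(euler_explicit(lam, y0, h, n)), euler_implicit(lam, y0, h, n))
```

This is why stiff systems like the bubble-ascent equations are normally handled with implicit or specialized stiff solvers.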
NASA Astrophysics Data System (ADS)
Zhang, Hui-Yong; Li, Jun-Ming; Sun, Ji-Liang; Wang, Bu-Xuan
2016-01-01
A theoretical model is developed for condensation heat transfer of binary refrigerant mixtures in mini-tubes with diameters of about 1.0 mm. Condensation heat transfer of R410A and of R32/R134a mixtures at different mass fluxes and saturation temperatures is analyzed, assuming an annular flow pattern. The results indicate that there exists a maximum interface temperature at the beginning of the condensation process for azeotropic and zeotropic mixtures, and that the vapor quality corresponding to this maximum increases with mass flux. The effects of mass flux, heat flux, surface tension and tube diameter are analyzed. As expected, the condensation heat transfer coefficients increase with mass flux and vapor quality, and increase faster in the high vapor quality region. The effects of heat flux and surface tension are found to be less pronounced than that of tube diameter. The characteristics of condensation heat transfer of zeotropic mixtures are consistent with those of azeotropic refrigerant mixtures, and the condensation heat transfer coefficients increase with the concentration of the less volatile component in the binary mixtures.
The GA sulfur-iodine water-splitting process - A status report
NASA Technical Reports Server (NTRS)
Besenbruch, G. E.; Chiger, H. D.; Mccorkle, K. H.; Norman, J. H.; Rode, J. S.; Schuster, J. R.; Trester, P. W.
1981-01-01
The development of a sulfur-iodine thermal water-splitting cycle is described. The process features a 50% thermal efficiency and all-liquid and gas handling. Basic chemical investigations comprised the development of multitemperature and multistage sulfuric acid boost reactors, definition of the phase behavior of HI/I2/H2O/H3PO4 mixtures, and development of a liquid-phase decomposition process for hydrogen iodide. Initial process engineering studies have led to a 47% efficiency, with improvements of 2% projected, followed by coupling of high-temperature solar concentrators to the splitting process to reduce power requirements. Conceptual flowsheets developed from bench models are provided; materials investigations have concentrated on candidates that can withstand the corrosive mixtures at temperatures up to 400 K, with Hastelloy C-276 exhibiting the best properties for containment and heat exchange to I2.
NASA Astrophysics Data System (ADS)
Giorgio, Ivan; Andreaus, Ugo; Madeo, Angela
2016-03-01
A model of a mixture of bone tissue and bioresorbable material with voids was used to numerically analyze the physiological balance between the processes of bone growth and resorption and of artificial material resorption in a plate-like sample. The adopted model was derived from a theory for the behavior of porous solids in which the matrix material is linearly elastic and the interstices are void of material. The specimen, consisting of a region of living bone tissue and a region of bioresorbable material, was subjected to different in-plane loading conditions, namely pure bending and shear. Ranges of load magnitudes were identified within which physiological states become possible. Furthermore, the consequences of applying the different loading conditions were examined at the end of the remodeling process. In particular, the maximum values of bone and material mass densities, and the extents of the zones where bone is reconstructed, were identified and compared for the two load conditions. From a practical viewpoint, the results offer guidance, during surgical planning and later rehabilitation, on the choice of graft porosity, graft material characteristics and the initial tissue/bioresorbable-material mixture, and later, during healing and remodeling, on optimal loading conditions.
NASA Astrophysics Data System (ADS)
Diachkovskii, A. S.; Zykova, A. I.; Ishchenko, A. N.; Kasimov, V. Z.; Rogaev, K. S.; Sidorov, A. D.
2017-11-01
This paper describes a software package for exploring the interior ballistics processes occurring in a shot scheme with bulk charges of pasty propellant substances under various loading schemes. As the mathematical model, a quasi-one-dimensional model of a polydisperse mixture of non-deformable particles and a carrier gas phase is used. The form of the governing equations allows the model to describe a broad class of interior ballistics processes. Features of the approach are illustrated by calculating the ignition period for a charge of tubular propellant.
A hydrodynamic model for granular material flows including segregation effects
NASA Astrophysics Data System (ADS)
Gilberg, Dominik; Klar, Axel; Steiner, Konrad
2017-06-01
The simulation of granular flows including segregation effects in large industrial processes using particle methods is accurate but very time-consuming. To overcome the long computation times, a macroscopic model is a natural choice. Therefore, we couple a mixture-theory-based segregation model to a hydrodynamic model of Navier-Stokes type describing the flow behavior of the granular material. The granular flow model is a hybrid model derived from kinetic theory and a soil-mechanical approach to cover the regime of fast dilute flow as well as slow dense flow, where the density of the granular material is close to the maximum packing density. Originally, the segregation model was formulated by Thornton and Gray for idealized avalanches; it is modified and adapted here into the form preferred for the coupling. In the final coupled model the segregation process depends on the local state of the granular system; in turn, the granular system changes as differently mixed regions of the granular material differ, for example, in packing density. The modeling focuses on dry granular flows of two particle types differing only in size, but the approach can easily be extended to arbitrary granular mixtures of different particle sizes and densities. To solve the coupled system a finite volume approach is used, and the model is tested by simulating the rotational mixing of small and large particles in a tumbler.
Poisson Mixture Regression Models for Heart Disease Prediction.
Mufudza, Chipo; Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart prediction over all models as it both clusters individuals into high or low risk category and predicts rate to heart disease componentwise given clusters available. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression model.
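To make the mixture idea concrete, a two-component Poisson mixture (without the regression covariates used in the paper) can be fitted by the EM algorithm in a few lines. The rates, weights, and sample size below are synthetic, chosen only to give two well-separated risk groups.

```python
import math
import random

def poisson_pmf(y, lam):
    """P(Y = y) for a Poisson(lam), computed on the log scale."""
    return math.exp(-lam + y * math.log(lam) - math.lgamma(y + 1))

def em_poisson_mixture(data, lam1, lam2, w=0.5, iters=200):
    """EM for a two-component Poisson mixture; returns (w, lam1, lam2)."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        r = []
        for y in data:
            p1 = w * poisson_pmf(y, lam1)
            p2 = (1.0 - w) * poisson_pmf(y, lam2)
            r.append(p1 / (p1 + p2))
        # M-step: reweighted mixing proportion and component means
        s = sum(r)
        w = s / len(data)
        lam1 = sum(ri * y for ri, y in zip(r, data)) / s
        lam2 = sum((1 - ri) * y for ri, y in zip(r, data)) / (len(data) - s)
    return w, lam1, lam2

rng = random.Random(2)

def poisson_draw(lam):
    # Knuth's algorithm, adequate for these small rates
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Synthetic low- and high-rate groups (rates 2 and 12, weights 0.4 / 0.6)
data = [poisson_draw(2.0) if rng.random() < 0.4 else poisson_draw(12.0)
        for _ in range(1500)]
w, l1, l2 = em_poisson_mixture(data, lam1=1.0, lam2=8.0)
print(w, l1, l2)
```

The fitted responsibilities are what cluster individuals into low- and high-risk groups; concomitant-variable models let the mixing weight w depend on covariates instead of being a constant.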
Falchetto, Augusto Cannone; Moon, Ki Hoon; Wistuba, Michael P
2014-09-02
The use of recycled materials in pavement construction has seen, over the years, a significant increase closely associated with substantial economic and environmental benefits. During the past decades, many transportation agencies have evaluated the effect of adding Reclaimed Asphalt Pavement (RAP), and, more recently, Recycled Asphalt Shingles (RAS) on the performance of asphalt pavement, while limits were proposed on the amount of recycled materials which can be used. In this paper, the effect of adding RAP and RAS on the microstructural and low temperature properties of asphalt mixtures is investigated using digital image processing (DIP) and modeling of rheological data obtained with the Bending Beam Rheometer (BBR). Detailed information on the internal microstructure of asphalt mixtures is acquired based on digital images of small beam specimens and numerical estimations of spatial correlation functions. It is found that RAP increases the autocorrelation length (ACL) of the spatial distribution of aggregates, asphalt mastic and air voids phases, while an opposite trend is observed when RAS is included. Analogical and semi empirical models are used to back-calculate binder creep stiffness from mixture experimental data. Differences between back-calculated results and experimental data suggest limited or partial blending between new and aged binder.
NASA Astrophysics Data System (ADS)
Walko, R. L.; Ashby, T.; Cotton, W. R.
2017-12-01
The fundamental role of atmospheric aerosols in the process of cloud droplet nucleation is well known, and there is ample evidence that the concentration, size, and chemistry of aerosols can strongly influence microphysical, thermodynamic, and ultimately dynamic properties and evolution of clouds and convective systems. With the increasing availability of observation- and model-based environmental representations of different types of anthropogenic and natural aerosols, there is increasing need for models to be able to represent which aerosols nucleate and which do not in supersaturated conditions. However, this is a very complex process that involves competition for water vapor between multiple aerosol species (chemistries) and different aerosol sizes within each species. Attempts have been made to parameterize the nucleation properties of mixtures of different aerosol species, but it is very difficult or impossible to represent all possible mixtures that may occur in practice. As part of a modeling study of the impact of anthropogenic and natural aerosols on hurricanes, we developed an ultra-efficient aerosol bin model to represent nucleation in a high-resolution atmospheric model that explicitly represents cloud- and subcloud-scale vertical motion. The bin model is activated at any time and location in a simulation where supersaturation occurs and is potentially capable of activating new cloud droplets. The bins are populated from the aerosol species that are present at the given time and location and by multiple sizes from each aerosol species according to a characteristic size distribution, and the chemistry of each species is represented by its absorption or adsorption characteristics. The bin model is integrated in time increments that are smaller than that of the atmospheric model in order to temporally resolve the peak supersaturation, which determines the total nucleated number. 
Even though on the order of 100 bins are typically utilized, this leads only to a 10 or 20% increase in overall computational cost due to the efficiency of the bin model. This method is highly versatile in that it automatically accommodates any possible number and mixture of different aerosol species. Applications of this model to simulations of Typhoon Nuri will be presented.
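The bin-populating step can be sketched by discretizing an assumed lognormal aerosol number size distribution into log-spaced diameter bins. Function name and parameter values are illustrative; the actual model draws species and size distributions from the simulation state.

```python
import numpy as np
from math import erf, log, sqrt

def populate_bins(n_total, median_d, sigma_g, nbins=100, dmin=1e-9, dmax=1e-5):
    """Partition a lognormal number size distribution (geometric median
    diameter median_d in m, geometric std sigma_g) into log-spaced bins."""
    edges = np.logspace(np.log10(dmin), np.log10(dmax), nbins + 1)
    # lognormal CDF evaluated at the bin edges
    cdf = np.array([0.5 * (1 + erf((log(d) - log(median_d))
                                   / (sqrt(2) * log(sigma_g)))) for d in edges])
    return n_total * np.diff(cdf), edges   # particles per bin, bin edges

counts, edges = populate_bins(n_total=1e9, median_d=1e-7, sigma_g=1.8)
```

Because the bins are populated from a CDF difference, the total particle number is preserved regardless of the bin count, which is what makes ~100 bins cheap to maintain.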
Prediction of U-Mo dispersion nuclear fuels with Al-Si alloy using artificial neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Susmikanti, Mike, E-mail: mike@batan.go.id; Sulistyo, Jos, E-mail: soj@batan.go.id
2014-09-30
Dispersion nuclear fuels, consisting of U-Mo particles dispersed in an Al-Si matrix, are being developed as fuel for research reactors. The equilibrium relationship for a mixture component can be expressed in a phase diagram, and it is important to determine whether a mixture component is in the equilibrium phase or in another phase. The purpose of this research is to build a model of the phase diagram, so that it can be determined whether the mixture component is in the stable or the melting condition. An artificial neural network (ANN) is a modeling tool for processes involving multivariable non-linear relationships. The objective of the present work is to develop code, based on artificial neural network models, for the equilibrium relationship of U-Mo in an Al-Si matrix. This model can be used to predict the type of resulting mixture and whether a given point lies in the equilibrium phase or in another phase region. The equilibrium model data for prediction and modeling were generated from experimental data. An artificial neural network with the resilient backpropagation method was chosen to predict the dispersion of the U-Mo nuclear fuel in the Al-Si matrix. The code was built with functions in MATLAB; for simulations using the ANN, the Levenberg-Marquardt method was also used for optimization. The resulting artificial neural network is able to predict whether a point is in the equilibrium phase or in another phase region.
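The resilient-backpropagation idea, adapting a per-weight step size from the sign of the gradient rather than its magnitude, can be sketched on a toy two-region classification problem. This is a minimal Rprop- variant on a logistic model with invented data and function names, not the study's MATLAB ANN.

```python
import numpy as np

def rprop_fit(X, y, steps=300):
    """Rprop-: per-weight step sizes grow while the gradient keeps its sign
    and shrink when it flips; only the gradient's sign drives the update."""
    rng = np.random.default_rng(1)
    w = rng.normal(0, 0.1, X.shape[1])
    delta = np.full_like(w, 0.1)                   # per-weight step sizes
    g_prev = np.zeros_like(w)
    for _ in range(steps):
        p = 1 / (1 + np.exp(-np.clip(X @ w, -500, 500)))   # sigmoid output
        g = X.T @ (p - y) / len(y)                 # logistic-loss gradient
        s = np.sign(g) * np.sign(g_prev)
        delta = np.where(s > 0, np.minimum(delta * 1.2, 0.5),    # same sign: speed up
                np.where(s < 0, np.maximum(delta * 0.5, 1e-6),   # flip: back off
                         delta))
        w -= np.sign(g) * delta
        g_prev = g
    return w

# toy "phase diagram": the class is which side of a line a composition falls on
rng = np.random.default_rng(0)
X = np.c_[rng.uniform(-1, 1, (200, 2)), np.ones(200)]   # 2 features + bias column
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w = rprop_fit(X, y)
accuracy = ((1 / (1 + np.exp(-np.clip(X @ w, -500, 500))) > 0.5) == y).mean()
```

The sign-only update is what makes Rprop robust to badly scaled inputs, which is a common motivation for choosing it on experimental phase data.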
Biomedically relevant chemical and physical properties of coal combustion products.
Fisher, G L
1983-01-01
The evaluation of the potential public and occupational health hazards of developing and existing combustion processes requires a detailed understanding of the physical and chemical properties of effluents available for human and environmental exposures. These processes produce complex mixtures of gases and aerosols which may interact synergistically or antagonistically with biological systems. Because of the physicochemical complexity of the effluents, the biomedically relevant properties of these materials must be carefully assessed. Subsequent to release from combustion sources, environmental interactions further complicate assessment of the toxicity of combustion products. This report provides an overview of the biomedically relevant physical and chemical properties of coal fly ash. Coal fly ash is presented as a model complex mixture for health and safety evaluation of combustion processes. PMID:6337824
Component spectra extraction from terahertz measurements of unknown mixtures.
Li, Xian; Hou, D B; Huang, P J; Cai, J H; Zhang, G X
2015-10-20
The aim of this work is to extract component spectra from unknown mixtures in the terahertz region. To that end, a method, hard modeling factor analysis (HMFA), was applied to resolve terahertz spectral matrices collected from the unknown mixtures. This method does not require any expertise of the user and allows the consideration of nonlinear effects such as peak variations or peak shifts. It describes the spectra using a peak-based nonlinear mathematical model and builds the component spectra automatically by recombining the resolved peaks through correlation analysis. Meanwhile, modifications to the method were made to take the features of terahertz spectra into account and to deal with the artificial baseline problem that troubles the extraction process of some terahertz spectra. To validate the proposed method, simulated wideband terahertz spectra of binary and ternary systems and experimental terahertz absorption spectra of amino acid mixtures were tested. In each test, not only could the number of pure components be correctly predicted, but the identified pure spectra also showed good similarity to the true spectra. Moreover, the proposed method associates molecular motions with the component extraction, making the identification process more physically meaningful and interpretable compared to other methods. The results indicate that the HMFA method with the modifications can be a practical tool for identifying component terahertz spectra in completely unknown mixtures. This work reports a solution to this kind of problem in the terahertz region for the first time, to the best of the authors' knowledge, and represents a significant advance toward exploring physical and chemical mechanisms of unknown complex systems by terahertz spectroscopy.
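The peak-based idea can be sketched in its simplest linear form: once component spectra are expressed as sums of peaks, mixture proportions follow from least squares. The Gaussian peaks below are synthetic; HMFA additionally fits peak positions and widths nonlinearly and groups peaks by correlated behaviour across spectra, which this sketch omits.

```python
import numpy as np

nu = np.linspace(0.2, 3.0, 500)                     # frequency axis (THz)

def peak(center, width):
    return np.exp(-0.5 * ((nu - center) / width) ** 2)

# two "pure components", each a sum of Gaussian peaks (illustrative positions)
comp_a = peak(0.8, 0.05) + 0.6 * peak(1.9, 0.08)
comp_b = 0.9 * peak(1.2, 0.06) + peak(2.4, 0.07)
mixture = 0.7 * comp_a + 0.3 * comp_b               # synthetic mixture spectrum

# with peak-based component models in hand, the mixture proportions follow
# from ordinary least squares on the modeled spectra
A = np.c_[comp_a, comp_b]
coef, *_ = np.linalg.lstsq(A, mixture, rcond=None)
```

The hard part HMFA solves is constructing `comp_a` and `comp_b` from the mixtures alone; this final recombination step is then linear.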
Automated deconvolution of structured mixtures from heterogeneous tumor genomic data
Roman, Theodore; Xie, Lu
2017-01-01
With increasing appreciation for the extent and importance of intratumor heterogeneity, much attention in cancer research has focused on profiling heterogeneity on a single patient level. Although true single-cell genomic technologies are rapidly improving, they remain too noisy and costly at present for population-level studies. Bulk sequencing remains the standard for population-scale tumor genomics, creating a need for computational tools to separate contributions of multiple tumor clones and assorted stromal and infiltrating cell populations to pooled genomic data. All such methods are limited to coarse approximations of only a few cell subpopulations, however. In prior work, we demonstrated the feasibility of improving cell type deconvolution by taking advantage of substructure in genomic mixtures via a strategy called simplicial complex unmixing. We improve on past work by introducing enhancements to automate learning of substructured genomic mixtures, with specific emphasis on genome-wide copy number variation (CNV) data, as well as the ability to process quantitative RNA expression data, and heterogeneous combinations of RNA and CNV data. We introduce methods for dimensionality estimation to better decompose mixture model substructure; fuzzy clustering to better identify substructure in sparse, noisy data; and automated model inference methods for other key model parameters. We further demonstrate their effectiveness in identifying mixture substructure in true breast cancer CNV data from the Cancer Genome Atlas (TCGA). Source code is available at https://github.com/tedroman/WSCUnmix PMID:29059177
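The deconvolution setting can be sketched as a linear mixing problem: bulk measurements are weighted sums of per-population profiles. The sketch below recovers fractions when the profiles are known; the cited method's contribution is learning both factors, plus their simplicial substructure, from the mixtures alone. All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, k = 1000, 3
C = rng.gamma(2.0, 1.0, (k, n_genes))          # profiles of 3 cell populations
F = np.array([[0.6, 0.3, 0.1],                 # per-sample mixture fractions
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
B = F @ C                                      # bulk samples are linear mixtures

# with known component profiles, fractions follow from least squares via the
# pseudoinverse (C has full row rank, so C @ pinv(C) = I)
F_hat = B @ np.linalg.pinv(C)
assert np.allclose(F_hat, F, atol=1e-6)
```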
Trends in long-period seismicity related to magmatic fluid compositions
Morrissey, M.M.; Chouet, B.A.
2001-01-01
Sound speeds and densities are calculated for three different types of fluids: a gas-gas mixture, an ash-gas mixture, and a bubbly liquid. These fluid properties are used to calculate the impedance contrast (Z) and crack stiffness (C) in the fluid-driven crack model (Chouet: J. Geophys. Res., 91 (1986) 13,967; 101 (1988) 4375; A seismic model for the source of long-period events and harmonic tremor. In: Gasparini, P., Scarpa, R., Aki, K. (Eds.), Volcanic Seismology, IAVCEI Proceedings in Volcanology, Springer, Berlin, 3133). The fluid-driven crack model describes the far-field spectra of long-period (LP) events as modes of resonance of the crack. Results from our calculations demonstrate that ash-laden gas mixtures have fluid-to-solid density ratios comparable to, and fluid-to-solid velocity ratios lower than, bubbly liquids; a 20% gas-volume fraction yields values of Qr-1 similar to those for a rectangular crack. As with gas-gas and ash-gas mixtures, an increase in mass fraction narrows the bandwidth of the dominant mode and shifts the spectra to lower frequencies. Including energy losses due to dissipative processes in a bubbly liquid increases attenuation. Attenuation may also be higher in ash-gas mixtures and foams if the effects of momentum and mass transfer between the phases were considered in the calculations. © 2001 Elsevier Science B.V. All rights reserved.
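The striking acoustics of such two-phase fluids can be illustrated with Wood's relation for the low-frequency sound speed of a homogeneous mixture, 1/(ρ_m c_m²) = Σ φ_i/(ρ_i c_i²) with ρ_m = Σ φ_i ρ_i. This standard result, not the paper's full calculation, already shows why a bubbly liquid is acoustically much "slower" than either phase; the phase properties below are illustrative round numbers.

```python
import numpy as np

def wood_sound_speed(phis, rhos, cs):
    """Wood's relation: mixture compressibility is the volume-weighted sum of
    phase compressibilities, while density is the volume-weighted mean."""
    phis, rhos, cs = map(np.asarray, (phis, rhos, cs))
    rho_m = np.dot(phis, rhos)
    compressibility = np.sum(phis / (rhos * cs ** 2))
    return 1.0 / np.sqrt(rho_m * compressibility)

# water with 1% gas by volume: high density from the liquid, high
# compressibility from the gas, hence a very low mixture sound speed
c_mix = wood_sound_speed([0.99, 0.01], [1000.0, 1.2], [1500.0, 340.0])
```

The mixture takes its density from the liquid but its compressibility from the gas, so `c_mix` falls far below the sound speed of either pure phase, the effect that drives the low impedance contrasts discussed in the abstract.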
Development of a new continuous process for mixing of complex non-Newtonian fluids
NASA Astrophysics Data System (ADS)
Migliozzi, Simona; Mazzei, Luca; Sochon, Bob; Angeli, Panagiota; Thames Multiphase Team; Coral Project Collaboration
2017-11-01
Design of new continuous mixing operations poses many challenges, especially when dealing with highly viscous non-Newtonian fluids. Knowledge of the complex rheological behaviour of the working mixture is crucial for the development of an efficient process. In this work, we investigate the mixing performance of two different static mixers and the effects of the mixture rheology on the manufacturing of novel non-aqueous-based oral care products using experimental and computational fluid dynamics methods. The two liquid phases employed, i.e. a carbomer suspension in polyethylene glycol and glycerol, start to form a gel when they mix. We studied the structure evolution of the liquid mixture using time-resolved rheometry, and we obtained viscosity rheograms at different phase ratios from pressure drop measurements in a customized mini-channel. The numerical results and rheological model were validated with experimental measurements carried out in a specifically designed setup. EPSRC-CORAL.
Nagai, Takashi; De Schamphelaere, Karel A C
2016-11-01
The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-d fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC. © 2016 SETAC.
Cubarsi, R; Carrió, M M; Villaverde, A
2005-09-01
The in vivo proteolytic digestion of bacterial inclusion bodies (IBs) and the kinetic analysis of the resulting protein fragments is an interesting approach to investigating the molecular organization of these unconventional protein aggregates. In this work, we describe a set of mathematical instruments useful for such analysis and for the interpretation of observed data. These methods combine numerical estimation of the digestion rate and approximation of its high-order derivatives, modelling of fragmentation events from a mixture of Poisson processes associated with differentiated protein species, differential equation techniques for estimating the mixture parameters, an iterative predictor-corrector algorithm for describing the flow diagram along the cascade process, and least squares procedures with minimum variance estimates. The models are formulated, compared with data, and successively refined to better match experimental observations. By applying these procedures, as well as improved versions of the formerly developed equations, it has been possible to model, for two kinds of bacterially produced aggregation-prone recombinant proteins, the cascade digestion process, which has revealed intriguing features of the IB-forming polypeptides.
Collective effects in models for interacting molecular motors and motor-microtubule mixtures
NASA Astrophysics Data System (ADS)
Menon, Gautam I.
2006-12-01
Three problems in the statistical mechanics of models for an assembly of molecular motors interacting with cytoskeletal filaments are reviewed. First, a description of the hydrodynamical behaviour of density-density correlations in fluctuating ratchet models for interacting molecular motors is outlined. Numerical evidence indicates that the scaling properties of dynamical behaviour in such models belong to the KPZ universality class. Second, the generalization of such models to include boundary injection and removal of motors is provided. In common with known results for the asymmetric exclusion processes, simulations indicate that such models exhibit sharp boundary driven phase transitions in the thermodynamic limit. In the third part of this paper, recent progress towards a continuum description of pattern formation in mixtures of motors and microtubules is described, and a non-equilibrium “phase-diagram” for such systems discussed.
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
The structure of particle cloud premixed flames
NASA Technical Reports Server (NTRS)
Seshadri, K.; Berlad, A. L.
1992-01-01
The structure of premixed flames propagating in combustible systems containing uniformly distributed volatile fuel particles in an oxidizing gas mixture is analyzed. This analysis is motivated by experiments conducted at NASA Lewis Research Center on the structure of flames propagating in combustible mixtures of lycopodium particles and air. Several interesting modes of flame propagation were observed in these experiments depending on the number density and the initial size of the fuel particles. The experimental results show that steady flame propagation occurs even if the initial equivalence ratio of the combustible mixture based on the gaseous fuel available in the particles, φ_u, is substantially larger than unity. A model is developed to explain these experimental observations. In the model, it is presumed that the fuel particles vaporize first to yield a gaseous fuel of known chemical composition, which then reacts with oxygen in a one-step overall process. The activation energy of the chemical reaction is presumed to be large, as is the activation energy characterizing the kinetics of vaporization. The equations governing the structure of the flame were integrated numerically. It is shown that the interplay of vaporization kinetics and the oxidation process can result in steady flame propagation in combustible mixtures where the value of φ_u is substantially larger than unity. This prediction is in agreement with experimental observations.
Using a multinomial tree model for detecting mixtures in perceptual detection
Chechile, Richard A.
2014-01-01
In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
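The shrinkage point can be illustrated with the simplest possible case, a binomial rate: the posterior mean under a uniform prior, (k+1)/(n+2), shrinks the MLE k/n toward 1/2 and has lower mean squared error for small n. This is a toy Monte Carlo in the spirit of the paper's simulations, not the 6P/PD model itself.

```python
import numpy as np

rng = np.random.default_rng(42)
p_true, n, reps = 0.5, 10, 20000
k = rng.binomial(n, p_true, reps)        # simulated hit counts per observer

mle = k / n                              # maximum likelihood estimate
post_mean = (k + 1) / (n + 2)            # posterior mean, uniform (Beta(1,1)) prior

mse_mle = np.mean((mle - p_true) ** 2)
mse_post = np.mean((post_mean - p_true) ** 2)
assert mse_post < mse_mle                # shrinkage wins for small n
```

The same logic underlies the paper's pooling adjustment: pulling each observer's estimate toward a common value trades a little bias for a larger reduction in variance.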
Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry
Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna
2015-01-01
Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction, and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite the highlighted potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach has so far not been developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models, and the parameters of the fragment models are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and demonstrate the improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the algorithm to real proteomic datasets of low and high resolution. PMID:26230717
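The partitioning step can be sketched as splitting the signal wherever the intensity stays near zero for a run of samples, so that each fragment is small enough for a separate Gaussian mixture decomposition. Function name and thresholds are illustrative; the published algorithm chooses split points more carefully.

```python
import numpy as np

def partition_spectrum(intensity, eps=1e-3, min_gap=5):
    """Split a spectrum into fragments at runs of >= min_gap samples whose
    intensity falls below eps; returns half-open (start, stop) index pairs."""
    fragments, start, run = [], None, 0
    for i, v in enumerate(intensity):
        if v < eps:
            run += 1
            if run >= min_gap and start is not None:
                fragments.append((start, i - run + 1))   # close at first quiet sample
                start = None
        else:
            if start is None:
                start = i                                # open a new fragment
            run = 0
    if start is not None:
        fragments.append((start, len(intensity)))
    return fragments

# two well-separated peaks -> two fragments to decompose independently
x = np.linspace(0, 10, 200)
signal = np.exp(-((x - 2) / 0.2) ** 2) + np.exp(-((x - 7) / 0.3) ** 2)
fragments = partition_spectrum(signal)
```

After this step, each fragment's Gaussian mixture can be fitted (e.g., by EM) in isolation, and the fragment parameters concatenated into the whole-spectrum model, which is the aggregation the abstract describes.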
Chemical kinetic models for combustion of hydrocarbons and formation of nitric oxide
NASA Technical Reports Server (NTRS)
Jachimowski, C. J.; Wilson, C. H.
1980-01-01
The formation of nitrogen oxides NOx during combustion of methane, propane, and a jet fuel, JP-4, was investigated in a jet stirred combustor. The results of the experiments were interpreted using reaction models in which the nitric oxide (NO) forming reactions were coupled to the appropriate hydrocarbon combustion reaction mechanisms. Comparison between the experimental data and the model predictions reveals that the CH + N2 reaction process has a significant effect on NO formation especially in stoichiometric and fuel rich mixtures. Reaction models were assembled that predicted nitric oxide levels that were in reasonable agreement with the jet stirred combustor data and with data obtained from a high pressure (5.9 atm (0.6 MPa)), prevaporized, premixed, flame tube type combustor. The results also suggested that the behavior of hydrocarbon mixtures, like JP-4, may not be significantly different from that of pure hydrocarbons. Application of the propane combustion and nitric oxide formation model to the analysis of NOx emission data reported for various aircraft gas turbines showed the contribution of the various nitric oxide forming processes to the total NOx formed.
Roush, W B; Boykin, D; Branton, S L
2004-08-01
A mixture experiment, a variant of response surface methodology, was designed to determine the proportion of time to feed broiler starter (23% protein), grower (20% protein), and finisher (18% protein) diets to optimize production and processing variables based on a total production time of 48 d. Mixture designs are useful for proportion problems where the components of the experiment (i.e., the lengths of time the diets were fed) add up to unity (48 d). The experiment was conducted with day-old male Ross x Ross broiler chicks. The birds were placed 50 per pen in each of 60 pens. The experimental design was a 10-point augmented simplex-centroid (ASC) design with 6 replicates of each point. Each design point represented the portion(s) of the 48 d that each of the diets was fed. Formulation of the diets was based on NRC standards. At 49 d, each pen of birds was evaluated for production data including BW, feed conversion, and cost of feed consumed. Then, 6 birds were randomly selected from each pen for processing data. Processing variables included live weight, hot carcass weight, dressing percentage, fat pad percentage, and breast yield (pectoralis major and pectoralis minor weights). Production and processing data were fit to simplex regression models. Model terms determined not to be significant (P > 0.05) were removed. The models were found to be statistically adequate for analysis of the response surfaces. A compromise solution was calculated based on optimal constraints designated for the production and processing data. The results indicated that broilers fed a starter and finisher diet for 30 and 18 d, respectively, would meet the production and processing constraints. Trace plots showed that the production and processing variables were not very sensitive to the grower diet.
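The design itself is easy to reproduce: a simplex-centroid design (pure blends, binary 50:50 blends, and the overall centroid) augmented with axial points midway between each vertex and the centroid gives the 10 runs for three diet-phase proportions. This is a generic construction sketch, not the study's exact run sheet; day allocations would follow by multiplying each proportion by 48.

```python
from itertools import combinations
from fractions import Fraction

def augmented_simplex_centroid(q=3):
    """Design points of a q-component augmented simplex-centroid mixture
    design, as exact fractions summing to 1."""
    pts = []
    for r in range(1, q + 1):                  # simplex-centroid part:
        for idx in combinations(range(q), r):  # all equal blends of r components
            p = [Fraction(0)] * q
            for i in idx:
                p[i] = Fraction(1, r)
            pts.append(p)
    centroid = [Fraction(1, q)] * q
    for i in range(q):                         # axial augmentation: midway
        p = [Fraction(1, 2) * (c + (1 if j == i else 0))   # between vertex i
             for j, c in enumerate(centroid)]              # and the centroid
        pts.append(p)
    return pts

design = augmented_simplex_centroid(3)
assert len(design) == 10                       # the 10-point ASC design
assert all(sum(p) == 1 for p in design)        # mixture constraint
```

For q = 3 the axial points are (2/3, 1/6, 1/6) and its permutations, which for a 48-d total correspond to 32, 8, and 8 d.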
Using the Gamma-Poisson Model to Predict Library Circulations.
ERIC Educational Resources Information Center
Burrell, Quentin L.
1990-01-01
Argues that the gamma mixture of Poisson processes, for all its perceived defects, can be used to make predictions regarding future library book circulations of a quality adequate for general management requirements. The use of the model is extensively illustrated with data from two academic libraries. (Nine references) (CLB)
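The model's predictive distribution has a closed form: a gamma mixture of Poisson processes yields negative binomial circulation counts. A minimal sketch (integer shape parameter assumed so that `comb()` applies; the parameter values are illustrative, not from the library data):

```python
from math import comb

def neg_binom_pmf(k, alpha, beta):
    """P(K = k) when K is Poisson with a gamma(alpha, beta) rate:
    C(alpha + k - 1, k) * (beta/(beta+1))**alpha * (1/(beta+1))**k."""
    return (comb(alpha + k - 1, k)
            * (beta / (beta + 1)) ** alpha
            * (1 / (beta + 1)) ** k)

alpha, beta = 2, 1.0                          # mean circulation rate alpha/beta
probs = [neg_binom_pmf(k, alpha, beta) for k in range(200)]
mean = sum(k * p for k, p in zip(range(200), probs))
```

Predicting next-period circulation for a title class then amounts to reading off this distribution, which is the kind of "adequate for general management" forecast the abstract argues for.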
USDA-ARS?s Scientific Manuscript database
Much processing of cotton fibrous materials accompanies heat treatments. Despite their critical influence on the properties of the material, the structural responses of cotton fiber to elevated temperatures remain uncertain. This study demonstrated that modeling the temperature dependence of the fib...
[New method of mixed gas infrared spectrum analysis based on SVM].
Bai, Peng; Xie, Wen-Jun; Liu, Jun-Hua
2007-07-01
A new method of infrared spectrum analysis based on the support vector machine (SVM) for mixture gases was proposed. The kernel function in SVM was used to map the seriously overlapping absorption spectra into a high-dimensional space; after the transformation, the high-dimensional data could still be processed in the original space, so a regression calibration model was established. The regression calibration model was then applied to analyze the concentrations of the component gases. It was also shown that the regression calibration model with SVM can be used for component recognition of gas mixtures. The method was applied to the analysis of different data samples. Factors that affect the model, such as the scan interval, the wavelength range, the kernel function, and the penalty coefficient C, are discussed. Experimental results show that the maximum mean absolute error of the component concentrations is 0.132% and that the component recognition accuracy is higher than 94%. The problems of overlapping absorption spectra, of using the same method for qualitative and quantitative analysis, and of a limited number of training samples were solved. The method is promising, in both theory and application, for other mixture gas infrared spectrum analyses.
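The kernel idea can be sketched with kernel ridge regression as a stand-in for SVM regression: the same RBF kernel implicitly maps overlapping spectra into a space where a linear calibration model suffices, while all computation stays in the original space. The "spectral" features below are synthetic nonlinear functions of concentration, not real IR data.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic 3-gas mixtures: features are nonlinear in concentration to mimic
# seriously overlapping absorption bands (purely illustrative)
C = rng.uniform(0, 1, (80, 3))                      # training concentrations
X = np.c_[C, np.sin(3 * C), C[:, [0]] * C[:, [1]]]  # overlapped "spectral" features
y = C[:, 0]                                         # target: first component

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# kernel ridge regression: linear model in the kernel-induced feature space,
# solved entirely through the 80x80 kernel matrix
K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)
pred = K @ alpha
```

An SVM regression would replace the squared-loss solve with an epsilon-insensitive loss and the penalty coefficient C mentioned in the abstract, but the kernel mapping is the shared ingredient.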
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
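The Poisson binomial N-mixture likelihood for one site marginalizes the latent abundance N out of repeated counts. A minimal sketch (real fits add covariates, products over sites, and numerical safeguards); the sanity check uses the thinning property that a single Poisson(λ) count observed with detection probability p is marginally Poisson(λp).

```python
from math import comb, exp, factorial

def site_likelihood(counts, lam, p, N_max=150):
    """Likelihood of repeated counts y_1..y_J at one site:
    sum over latent N of Poisson(N; lam) * prod_j Binomial(y_j | N, p)."""
    total = 0.0
    for N in range(max(counts), N_max + 1):
        pois = exp(-lam) * lam ** N / factorial(N)
        binom = 1.0
        for y in counts:
            binom *= comb(N, y) * p ** y * (1 - p) ** (N - y)
        total += pois * binom
    return total

# single-visit marginal must match the thinned Poisson distribution
lam, p, y = 5.0, 0.4, 2
marginal = site_likelihood([y], lam, p)
poisson_thinned = exp(-lam * p) * (lam * p) ** y / factorial(y)
assert abs(marginal - poisson_thinned) < 1e-10
```

With only one visit, λ and p are confounded through their product λp, which is exactly why repeated visits, and checks like Kéry's, are needed for identifiability.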
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J.; Lai, W.H.; Chung, K.
2008-08-15
Two sets of experiments were performed to achieve a strong overdriven state in a weaker mixture by propagating an overdriven detonation wave via a deflagration-to-detonation transition (DDT) process. First, preliminary experiments with a propane/oxygen mixture were used to evaluate the attenuation of the overdriven detonation wave in the DDT process. Next, experiments were performed wherein a propane/oxygen mixture was separated from a propane/air mixture by a thin diaphragm to observe the transmission of an overdriven detonation wave. Based on the characteristic relations, a simple wave intersection model was used to calculate the state of the transmitted detonation wave. The results showed that a rarefaction effect must be included to ensure that there is no overestimate of the post-transmission wave properties when the incident detonation wave is overdriven. The strength of the incident overdriven detonation wave plays an important role in the wave transmission process. The experimental results showed that a transmitted overdriven detonation wave occurs instantaneously with a strong incident overdriven detonation wave. The near-CJ state of the incident wave leads to a transmitted shock wave, and the transition to the overdriven detonation wave then occurs downstream. The attenuation process for the overdriven detonation wave decaying to a near-CJ state occurs in all tests. After the attenuation process, an unstable detonation wave was observed in most tests. This may be attributed to the increase in the cell width in the attenuation process that exceeds the detonability cell width limit.
Model of Fluidized Bed Containing Reacting Solids and Gases
NASA Technical Reports Server (NTRS)
Bellan, Josette; Lathouwers, Danny
2003-01-01
A mathematical model has been developed for describing the thermofluid dynamics of a dense, chemically reacting mixture of solid particles and gases. As used here, "dense" signifies having a large volume fraction of particles, as for example in a bubbling fluidized bed. The model is intended especially for application to fluidized beds that contain mixtures of carrier gases, biomass undergoing pyrolysis, and sand. So far, the design of fluidized beds and other gas/solid industrial processing equipment has been based on empirical correlations derived from laboratory- and pilot-scale units. The present mathematical model is a product of continuing efforts to develop a computational capability for optimizing the designs of fluidized beds and related equipment on the basis of first principles. Such a capability could eliminate the need for expensive, time-consuming predesign testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trushkin, A. N.; Kochetov, I. V.
A kinetic model of toluene decomposition in nonequilibrium low-temperature plasma generated by a pulse-periodic discharge operating in a mixture of nitrogen and oxygen is developed. The results of numerical simulation of the plasma-chemical conversion of toluene are presented; the main processes responsible for C₆H₅CH₃ decomposition are identified; the contribution of each process to total toluene removal is determined; and the intermediate and final products of C₆H₅CH₃ decomposition are identified. It is shown that toluene in pure nitrogen is mostly decomposed in reactions with metastable N₂(A³Σᵤ⁺) and N₂(a′¹Σᵤ⁻) molecules. In the presence of oxygen, in the N₂:O₂ gas mixture, the largest contribution to C₆H₅CH₃ removal is made by the hydroxyl radical OH, which is generated in this mixture exclusively through plasma-chemical reactions between toluene and oxygen decomposition products. Numerical simulation showed the existence of an optimum oxygen concentration in the mixture at which toluene removal is maximal at a fixed energy deposition.
An Infinite Mixture Model for Coreference Resolution in Clinical Notes
Liu, Sijia; Liu, Hongfang; Chaudhary, Vipin; Li, Dingcheng
2016-01-01
It is widely acknowledged that natural language processing is indispensable for processing electronic health records (EHRs). However, poor performance in relation detection tasks, such as coreference (linguistic expressions pertaining to the same entity or event), may affect the quality of EHR processing. Hence, there is a critical need to advance research on relation detection from EHRs. Most clinical coreference resolution systems are based on either supervised machine learning or rule-based methods, and the need for a manually annotated corpus hampers their use at large scale. In this paper, we present an infinite mixture model method using definite sampling to resolve coreferent relations among mentions in clinical notes, together with a similarity measure function for determining coreferent relations. Our system achieved a 0.847 F-measure on the i2b2 2011 coreference corpus. These promising results, and the unsupervised nature of the method, make it possible to apply the system in big-data clinical settings. PMID:27595047
Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures
Majidi, Behzad; Taghavi, Seyed Mohammad; Fafard, Mario; Ziegler, Donald P.; Alamdari, Houshang
2016-01-01
Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger's model is developed using the discrete element method (DEM) in YADE, an open-source DEM software package. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained are then used to estimate the Burger's model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model, and very good agreement was observed between the experimental data and the simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model, and coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates in the size range of 0.297–0.595 mm (−30 + 50 mesh) to pitch. Adding up to 30% of coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both the storage and loss moduli, while it has no meaningful effect on the phase angle of pitch. PMID:28773459
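The Burger's (Burgers) model used for the contact law has a closed-form frequency response of the kind a DSR measures. The sketch below (illustrative parameter values, not the calibrated pitch values from the paper) computes the storage and loss moduli from the model's complex compliance: a Maxwell element (G1, η1) in series with a Kelvin-Voigt element (G2, η2) gives J*(ω) = 1/G1 + 1/(iωη1) + 1/(G2 + iωη2), and G*(ω) = 1/J*(ω).

```python
from math import pi

def burgers_moduli(omega, g1, eta1, g2, eta2):
    # Complex compliance of the Burgers model: a Maxwell element
    # (spring g1, dashpot eta1) in series with a Kelvin-Voigt
    # element (spring g2, dashpot eta2)
    j_star = 1 / g1 + 1 / (1j * omega * eta1) + 1 / (g2 + 1j * omega * eta2)
    g_star = 1 / j_star              # complex shear modulus G* = 1/J*
    return g_star.real, g_star.imag  # storage modulus G', loss modulus G''

# Illustrative parameters (Pa, Pa*s) -- NOT the values fitted in the paper
g1, eta1, g2, eta2 = 1e4, 5e3, 2e3, 1e3
for f in (0.1, 1.0, 60.0):
    gp, gpp = burgers_moduli(2 * pi * f, g1, eta1, g2, eta2)
    print(f"{f:6.1f} Hz: G' = {gp:10.1f} Pa, G'' = {gpp:10.1f} Pa")
```

The storage modulus rises with frequency toward the glassy limit G1, reproducing the qualitative DSR behavior the paper calibrates against.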
High-quality poly-dispersed mixtures applied in additive 3D technologies.
NASA Astrophysics Data System (ADS)
Gerasimov, M. D.; Brazhnik, Yu V.; Gorshkov, P. S.; Latyshev, S. S.
2018-03-01
The paper describes a new mixer design for obtaining high-quality poly-dispersed powders used in additive 3D technologies. It also considers a new principle for mixing dry powder particles that ensures a distribution of the particles in the total volume that is close to ideal. The paper presents a mathematical model of mixer operation that provides for quality assessment of the finished mixtures, and it reports experimental results and the rational values obtained for the mixer's process parameters.
Choi, Sun; Birarda, Giovanni
2017-08-03
During the natural drying process, all solutions and suspensions tend to form so-called "coffee-ring" deposits. This phenomenon has thus far been interpreted through the hydrodynamics of evaporating fluids. In this study, however, applying Fourier transform infrared imaging (FTIRI) makes it possible to observe the segregation and separation of a protein mixture at the "ring", suggesting a new way to interpret the coffee-ring effect of solutions. The results explore the dynamic process that leads to ring formation for model plasma proteins such as BGG (bovine γ-globulin), BSA (bovine serum albumin), and Hfib (human fibrinogen), and also report the segregation at the ring deposits of the two model proteins BGG and BSA, which can be explained only by an energy kinetic model. The investigation suggests that the coffee-ring effect of a solute in an evaporating solution drop is driven by an energy gradient created by changes in the particle-water-air interfacial energy configuration.
Tan, Shih-Wei; Lai, Shih-Wen
2012-01-01
Characterization and modeling of metal-semiconductor-metal (MSM) GaAs diodes fabricated by simultaneously evaporating SiO2 and Pd as a mixture electrode (called M-MSM diodes), compared with similar diodes fabricated by evaporating Pd alone as the electrode (called Pd-MSM diodes), are reported. The barrier height (φb) and the Richardson constant (A*) were extracted; considering carrier transport over the metal-semiconductor barrier, the thermionic-emission process describes the current transport of the Pd-MSM diodes well. When carrier transport over both the metal-semiconductor barrier and the insulator-semiconductor barrier is considered simultaneously, the thermionic-emission process also describes the current transport of the M-MSM diodes well. At higher applied voltages, carrier recombination is additionally taken into account. A composite-current (CC) model is developed to validate these concepts, and the calculated results are in good agreement with the experimental ones. PMID:23226352
Direct regeneration of recycled cathode material mixture from scrapped LiFePO4 batteries
NASA Astrophysics Data System (ADS)
Li, Xuelei; Zhang, Jin; Song, Dawei; Song, Jishun; Zhang, Lianqi
2017-03-01
A new green recycling process (named the direct regeneration process) for the cathode material mixture from scrapped LiFePO4 batteries is designed for the first time. Through this direct regeneration process, a high-purity cathode material mixture (LiFePO4 + acetylene black), an anode material mixture (graphite + acetylene black), and other by-products (shell, Al foil, Cu foil, electrolyte solvent, etc.) are recycled from scrapped LiFePO4 batteries in high yield. The recycled cathode material mixture, without acid leaching, is then directly regenerated with Li2CO3. The direct regeneration procedure from 600 to 800 °C is investigated in detail. The cathode material mixture regenerated at 650 °C displays excellent physical, chemical, and electrochemical performance, meeting the reuse requirements for mid-range Li-ion batteries. The results indicate that this green direct regeneration process, with low cost and high added value, is feasible.
Processing of odor mixtures in the zebrafish olfactory bulb.
Tabor, Rico; Yaksi, Emre; Weislogel, Jan-Marek; Friedrich, Rainer W
2004-07-21
Components of odor mixtures often are not perceived individually, suggesting that neural representations of mixtures are not simple combinations of the representations of the components. We studied odor responses to binary mixtures of amino acids and food extracts at different processing stages in the olfactory bulb (OB) of zebrafish. Odor-evoked input to the OB was measured by imaging Ca2+ signals in afferents to olfactory glomeruli. Activity patterns evoked by mixtures were predictable within narrow limits from the component patterns, indicating that mixture interactions in the peripheral olfactory system are weak. OB output neurons, the mitral cells (MCs), were recorded extra- and intracellularly and responded to odors with stimulus-dependent temporal firing rate modulations. Responses to mixtures of amino acids often were dominated by one of the component responses. Responses to mixtures of food extracts, in contrast, were more distinct from both component responses. These results show that mixture interactions can result from processing in the OB. Moreover, our data indicate that mixture interactions in the OB become more pronounced with increasing overlap of input activity patterns evoked by the components. Emerging from these results are rules of mixture interactions that may explain behavioral data and provide a basis for understanding the processing of natural odor stimuli in the OB.
Integrated Main Propulsion System Performance Reconstruction Process/Models
NASA Technical Reports Server (NTRS)
Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael
2013-01-01
The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pei, Yuanjiang; Som, Sibendu; Pomraning, Eric
2015-12-01
An n-dodecane spray flame (Spray A from the Engine Combustion Network) was simulated using a detailed combustion model along with a dynamic structure LES model to evaluate its performance at engine-relevant conditions and to understand the transient behavior of this turbulent flame. The liquid spray was treated with a traditional Lagrangian method and the gas-phase reaction was modeled using a detailed combustion model. A 103-species skeletal mechanism was used for the n-dodecane chemical kinetic model. Significantly different flame structures and ignition processes are observed for the LES compared to those of the RANS predictions. The LES data suggest that the first ignition initiates in a lean mixture and propagates to a rich mixture, and that the main ignition happens in a rich mixture, preferably less than 0.14 in mixture fraction space. LES was observed to have multiple simultaneous ignition spots in the mixing layer, while the main ignition initiates in a clearly asymmetric fashion. The temporal flame development also indicates that the flame stabilization mechanism is auto-ignition controlled and modulated by flame propagation. Soot predictions by LES agree much better with experiments than those of RANS, both qualitatively and quantitatively. Multiple LES realizations were performed to understand realization-to-realization variation and to establish best practices for ensemble-averaging diesel spray flames. The relevance index analysis suggests that an average of 2 and 5 realizations can reach 99% similarity to the target average of 16 realizations on the temperature and mixture fraction fields, respectively. However, more realizations are necessary for the OH and soot mass fractions due to their high fluctuations.
Pei, Yuanjiang; Som, Sibendu; Pomraning, Eric; ...
2015-10-14
An n-dodecane spray flame (Spray A from the Engine Combustion Network) was simulated using a δ function combustion model along with a dynamic structure large eddy simulation (LES) model to evaluate its performance at engine-relevant conditions and to understand the transient behavior of this turbulent flame. The liquid spray was treated with a traditional Lagrangian method and the gas-phase reaction was modeled using a δ function combustion model. A 103-species skeletal mechanism was used for the n-dodecane chemical kinetic model. Significantly different flame structures and ignition processes are observed for the LES compared to those of the Reynolds-averaged Navier-Stokes (RANS) predictions. The LES data suggest that the first ignition initiates in a lean mixture and propagates to a rich mixture, and that the main ignition happens in the rich mixture, preferably less than 0.14 in mixture fraction space. LES was observed to have multiple simultaneous ignition spots in the mixing layer, while the main ignition initiates in a clearly asymmetric fashion. The temporal flame development also indicates that the flame stabilization mechanism is auto-ignition controlled. Soot predictions by LES agree much better with experiments than those of RANS, both qualitatively and quantitatively. Multiple LES realizations were performed to understand realization-to-realization variation and to establish best practices for ensemble-averaging diesel spray flames. The relevance index analysis suggests that an average of 5 and 6 realizations can reach 99% similarity to the target average of 16 realizations on the mixture fraction and temperature fields, respectively. However, more realizations are necessary for the hydroxyl (OH) and soot mass fractions due to their high fluctuations.
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie
2013-01-01
Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. 
In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906
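The concentration addition (CA) and independent action (IA) predictions compared in the study can be written down compactly for the simplest case of full-efficacy Hill curves with slope 1 (a simplification; the partial efficacies encountered in the study are precisely what required the GCA model). IA multiplies the unaffected fractions, while CA finds the effect level x at which the toxic units Σᵢ cᵢ/EC(x,i) sum to one. A minimal sketch with invented EC50 values:

```python
def hill_effect(c, ec50):
    # Full-efficacy Hill curve with slope 1
    return c / (ec50 + c)

def ia_prediction(doses, ec50s):
    # Independent action: 1 minus the product of unaffected fractions
    unaffected = 1.0
    for c, e in zip(doses, ec50s):
        unaffected *= 1.0 - hill_effect(c, e)
    return 1.0 - unaffected

def ca_prediction(doses, ec50s, tol=1e-12):
    # Concentration addition: bisect on the effect x so that the toxic
    # units sum(c_i / EC_{x,i}) equal 1, where EC_{x,i} = ec50 * x/(1-x)
    lo, hi = 0.0, 1.0 - 1e-15
    while hi - lo > tol:
        x = (lo + hi) / 2
        units = sum(c * (1 - x) / (e * x) for c, e in zip(doses, ec50s))
        if units > 1:   # the predicted effect is higher than x
            lo = x
        else:
            hi = x
    return (lo + hi) / 2

# Sham-mixture check: splitting one chemical into two halves must give
# back the single-chemical effect under CA (but not under IA)
single = hill_effect(10.0, 5.0)                 # one chemical at dose 10
mixed = ca_prediction([5.0, 5.0], [5.0, 5.0])   # same chemical, split 5 + 5
assert abs(mixed - single) < 1e-6
```

The sham-mixture identity is the defining property of CA; IA gives a higher value here (0.75 versus 2/3), illustrating why the two models diverge for dissimilarly acting chemicals.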
Mollenhauer, Robert; Brewer, Shannon K.
2017-01-01
Failure to account for variable detection across survey conditions constrains progress in stream ecology and can lead to erroneous stream fish management and conservation decisions. In addition to the risk that variable detection confounds long-term stream fish population trends, reliable abundance estimates across a wide range of survey conditions are fundamental to establishing species–environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and CPUE remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates that were related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys.
Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the hierarchical framework. We demonstrate the application of this contemporary population estimation method to address a longstanding stream fish management issue. We also detail the advantages and trade-offs of hierarchical population estimation methods relative to CPUE and estimation methods that model each site separately.
Process for producing an activated carbon adsorbent with integral heat transfer apparatus
NASA Technical Reports Server (NTRS)
Jones, Jack A. (Inventor); Yavrouian, Andre H. (Inventor)
1996-01-01
A process for producing an integral adsorbent-heat exchanger apparatus useful in ammonia refrigerant heat pump systems. In one embodiment, the process wets an activated carbon particles-solvent mixture with a binder-solvent mixture, presses the binder wetted activated carbon mixture on a metal tube surface and thereafter pyrolyzes the mixture to form a bonded activated carbon matrix adjoined to the tube surface. The integral apparatus can be easily and inexpensively produced by the process in large quantities.
O’Donnell, Katherine M.; Thompson, Frank R.; Semlitsch, Raymond D.
2015-01-01
Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model’s potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3–5 surveys each spring and fall 2010–2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. 
We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase population parameter estimate reliability. PMID:25775182
ERIC Educational Resources Information Center
Henson, James M.; Reise, Steven P.; Kim, Kevin H.
2007-01-01
The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) [times] 3 (exogenous latent mean difference) [times] 3 (endogenous latent mean difference) [times] 3 (correlation between factors) [times] 3 (mixture proportions) factorial design. In addition, the efficacy of several…
Separation process using microchannel technology
Tonkovich, Anna Lee [Dublin, OH; Perry, Steven T [Galloway, OH; Arora, Ravi [Dublin, OH; Qiu, Dongming [Bothell, WA; Lamont, Michael Jay [Hilliard, OH; Burwell, Deanna [Cleveland Heights, OH; Dritz, Terence Andrew [Worthington, OH; McDaniel, Jeffrey S [Columbus, OH; Rogers, Jr; William, A [Marysville, OH; Silva, Laura J [Dublin, OH; Weidert, Daniel J [Lewis Center, OH; Simmons, Wayne W [Dublin, OH; Chadwell, G Bradley [Reynoldsburg, OH
2009-03-24
The disclosed invention relates to a process and apparatus for separating a first fluid from a fluid mixture comprising the first fluid. The process comprises: (A) flowing the fluid mixture into a microchannel separator in contact with a sorption medium, the fluid mixture being maintained in the microchannel separator until at least part of the first fluid is sorbed by the sorption medium; removing non-sorbed parts of the fluid mixture from the microchannel separator; and (B) desorbing the first fluid from the sorption medium and removing the desorbed first fluid from the microchannel separator. The process and apparatus are suitable for separating nitrogen or methane from a fluid mixture comprising nitrogen and methane, and may be used for rejecting nitrogen in the upgrading of sub-quality methane.
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. This model provides a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia.
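A two-component normal mixture of the kind fitted in the paper can be estimated with the EM algorithm, the standard route to the maximum likelihood estimates. The sketch below uses synthetic data rather than the stock market and rubber price series: the E-step computes each point's posterior responsibility for component 1, and the M-step updates the mixing proportion, means, and standard deviations with those weights.

```python
import random
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def em_two_normals(data, iters=100):
    # Crude initialization from the data range
    mu1, mu2 = min(data), max(data)
    s1 = s2 = (max(data) - min(data)) / 4 or 1.0
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = [w * normal_pdf(x, mu1, s1) /
             (w * normal_pdf(x, mu1, s1) + (1 - w) * normal_pdf(x, mu2, s2))
             for x in data]
        # M-step: weighted updates of proportion, means, and std devs
        n1 = sum(r)
        n2 = len(data) - n1
        w = n1 / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1)
        s2 = sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2)
    return w, (mu1, s1), (mu2, s2)

# Synthetic data: equal mixture of N(-2, 1) and N(3, 1)
random.seed(1)
data = ([random.gauss(-2.0, 1.0) for _ in range(1000)] +
        [random.gauss(3.0, 1.0) for _ in range(1000)])
w, (mu1, s1), (mu2, s2) = em_two_normals(data)
```

Each EM iteration is guaranteed not to decrease the likelihood, which is why the procedure converges to a (local) maximum likelihood solution.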
Chemistry of the outer planets: Investigations of the chemical nature of the atmosphere of Titan
NASA Technical Reports Server (NTRS)
Scattergood, Thomas W.
1985-01-01
It is clear from the experiments that a variety of complex organic molecules can be produced by lightning in a Titan-like gas mixture. The dominant products were found to be acetylene and hydrogen cyanide, with smaller amounts of many other species. Any aerosol produced by lightning-initiated processes will consist of a complex mixture of organic compounds, many of which should be easily identified by pyrolytic gas chromatography. Work will continue to expand the database of molecules produced by lightning and other processes in order to assist in the design of appropriate analytical instruments for the upcoming Saturn/Titan mission and other planetary probes.
Mikolajczyk, Rafael T; Kauermann, Göran; Sagel, Ulrich; Kretzschmar, Mirjam
2009-08-01
We created a mixture model based on Poisson processes to assess the extent of cross-transmission of multidrug-resistant pathogens in the hospital. We propose a 2-component mixture of Poisson processes to describe the time series of detected cases of colonization: the first component describes the admission process of patients with colonization, and the second describes the cross-transmission. The data set used to illustrate the method consists of the routinely collected records for methicillin-resistant Staphylococcus aureus (MRSA), imipenem-resistant Pseudomonas aeruginosa, and multidrug-resistant Acinetobacter baumannii over a period of 3 years in a German tertiary care hospital. For MRSA and multidrug-resistant A. baumannii, cross-transmission was estimated to be responsible for more than 80% of cases; for imipenem-resistant P. aeruginosa, it was estimated to be responsible for 59% of cases. For new cases observed within a window of less than 28 days for MRSA and multidrug-resistant A. baumannii, or 40 days for imipenem-resistant P. aeruginosa, there was a 50% or greater probability that the cause was cross-transmission. The proposed method offers a practical way to assess the extent of cross-transmission, which can be of clinical use. It can be applied using freely available software (the package FlexMix in R) and requires relatively little data.
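The two-component mixture the authors fit with the FlexMix package in R can be sketched in Python for count data. The toy example below uses synthetic counts, not hospital surveillance data: a low-rate component plays the role of the admission process and a high-rate component the cross-transmission process, and each count's posterior responsibility for the high-rate component is the analogue of the per-case cross-transmission probability discussed above. (The actual model is a mixture of Poisson processes over time; this sketch reduces it to a mixture of Poisson counts.)

```python
import random
from math import exp, factorial

def pois_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

def rpois(lam):
    # Knuth's algorithm for Poisson random variates
    L = exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def em_two_poissons(counts, iters=200):
    lam1, lam2, w = 1.0, float(max(counts)), 0.5
    for _ in range(iters):
        # E-step: posterior probability each count came from component 2
        r = [(1 - w) * pois_pmf(k, lam2) /
             (w * pois_pmf(k, lam1) + (1 - w) * pois_pmf(k, lam2))
             for k in counts]
        # M-step: reweighted mixing proportion and rates
        n2 = sum(r)
        w = 1 - n2 / len(counts)
        lam1 = sum((1 - ri) * k for ri, k in zip(r, counts)) / (len(counts) - n2)
        lam2 = sum(ri * k for ri, k in zip(r, counts)) / n2
    return w, lam1, lam2

# Synthetic counts: ~70% "admission" at rate 1, ~30% "cross-transmission"
# at rate 9 (rates invented for illustration)
random.seed(7)
counts = [rpois(1.0) if random.random() < 0.7 else rpois(9.0)
          for _ in range(2000)]
w, lam1, lam2 = em_two_poissons(counts)
```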
NASA Astrophysics Data System (ADS)
Lee, H.-H.; Chen, S.-H.; Kleeman, M. J.; Zhang, H.; DeNero, S. P.; Joe, D. K.
2015-11-01
The source-oriented Weather Research and Forecasting chemistry model (SOWC) was modified to include warm cloud processes and applied to investigate how aerosol mixing states influence fog formation and optical properties in the atmosphere. SOWC tracks a 6-dimensional chemical variable (X, Z, Y, Size Bins, Source Types, Species) through an explicit simulation of atmospheric chemistry and physics. A source-oriented cloud condensation nuclei module was implemented into the SOWC model to simulate warm clouds using the modified two-moment Purdue Lin microphysics scheme. The Goddard shortwave and longwave radiation schemes were modified to interact with source-oriented aerosols and cloud droplets so that aerosol direct and indirect effects could be studied. The enhanced SOWC model was applied to study a fog event that occurred on 17 January 2011 in the Central Valley of California. Tule fog occurred because an atmospheric river effectively advected high moisture into the Central Valley and nighttime drainage flow brought cold air from the mountains into the valley. The SOWC model produced reasonable liquid water path, spatial distribution, and duration of fog events. The inclusion of aerosol-radiation interaction only slightly modified simulation results, since cloud optical thickness dominated the radiation budget in fog events. The source-oriented mixture representation of particles reduced cloud droplet number relative to the internal mixture approach, which artificially coats hydrophobic particles with hygroscopic components. The fraction of aerosols activating into CCN at a supersaturation of 0.5% in the Central Valley decreased from 94% in the internal mixture model to 80% in the source-oriented model. This increased the surface energy flux by 3–5 W m⁻² and the surface temperature by as much as 0.25 K in the daytime.
Mixedness determination of rare earth-doped ceramics
NASA Astrophysics Data System (ADS)
Czerepinski, Jennifer H.
The lack of chemical uniformity in a powder mixture, such as clustering of a minor component, can lead to deterioration of materials properties. A method to determine powder mixture quality is to correlate the chemical homogeneity of a multi-component mixture with its particle size distribution and mixing method. This is applicable to rare earth-doped ceramics, which require at least 1-2 nm dopant ion spacing to optimize optical properties. Mixedness simulations were conducted for random heterogeneous mixtures of Nd-doped LaF3 mixtures using the Concentric Shell Model of Mixedness (CSMM). Results indicate that when the host to dopant particle size ratio is 100, multi-scale concentration variance is optimized. In order to verify results from the model, experimental methods that probe a mixture at the micro, meso, and macro scales are needed. To directly compare CSMM results experimentally, an image processing method was developed to calculate variance profiles from electron images. An in-lens (IL) secondary electron image is subtracted from the corresponding Everhart-Thornley (ET) secondary electron image in a Field-Emission Scanning Electron Microscope (FESEM) to produce two phases and pores that can be quantified with 50 nm spatial resolution. A macro was developed to quickly analyze multi-scale compositional variance from these images. Results for a 50:50 mixture of NdF3 and LaF3 agree with the computational model. The method has proven to be applicable only for mixtures with major components and specific particle morphologies, but the macro is useful for any type of imaging that produces excellent phase contrast, such as confocal microscopy. Fluorescence spectroscopy was used as an indirect method to confirm computational results for Nd-doped LaF3 mixtures. Fluorescence lifetime can be used as a quantitative method to indirectly measure chemical homogeneity when the limits of electron microscopy have been reached. 
Fluorescence lifetime represents the compositional fluctuations of a dopant on the nanoscale while accounting for billions of particles in a fast, non-destructive manner. This study shows how small-scale fluctuations in homogeneity limit the optimization of optical properties, which can be improved by proper selection of particle size and mixing method.
Degradation of hydroxycinnamic acid mixtures in aqueous sucrose solutions by the Fenton process.
Nguyen, Danny M T; Zhang, Zhanying; Doherty, William O S
2015-02-11
The degradation efficiencies and behaviors of caffeic acid (CaA), p-coumaric acid (pCoA), and ferulic acid (FeA) in aqueous sucrose solutions containing a mixture of these hydroxycinnamic acids (HCAs) were studied under the Fenton oxidation process. Central composite design and multiresponse surface methodology were used to evaluate and optimize the interactive effects of the process parameters. Four quadratic polynomial models were developed: one for the degradation of each individual acid in the mixture and one for the total HCAs degraded. Sucrose was the most influential parameter affecting the total amount of HCA degraded; under the conditions studied there was a <0.01% loss of sucrose in all reactions. At the optimal process parameter values, degradation of a 200 mg/L HCA mixture was 77% in water (pH 4.73, 25.15 °C) and 57% in sucrose solution (13 mass %, pH 5.39, 35.98 °C). Regression analysis showed goodness of fit between the experimental results and the predicted values. The degradation behavior of CaA differed from those of pCoA and FeA, with further CaA degradation observed at increasing sucrose concentration and decreasing solution pH. The differences (established using UV/vis and ATR-FTIR spectroscopy) arose because, unlike the other acids, CaA formed a complex with Fe(III), or with Fe(III) hydrogen-bonded to sucrose, and coprecipitated with lepidocrocite, an iron oxyhydroxide.
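The quadratic polynomial models produced by a central composite design are typically fitted by ordinary least squares on an expanded design matrix. A minimal sketch with two hypothetical coded factors and an assumed true surface (not the study's actual parameters or data):

```python
import numpy as np

rng = np.random.default_rng(1)
# 30 hypothetical runs with two coded factors in [-1, 1]
X = rng.uniform(-1, 1, size=(30, 2))
a, b = X[:, 0], X[:, 1]
# Assumed "true" quadratic response surface plus measurement noise
y = 50 + 8*a - 5*b + 2*a*b - 6*a**2 - 3*b**2 + rng.normal(0, 0.1, 30)

# Full quadratic model: intercept, linear, interaction, pure quadratic terms
D = np.column_stack([np.ones_like(a), a, b, a*b, a**2, b**2])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
```

The fitted coefficients recover the assumed surface; contour plots of the fitted polynomial are then used, as in the abstract, to locate the optimum.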
General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures.
Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel
2015-04-07
This paper opens a new door to macroscopic modeling for thermal and chemical non-equilibrium. In a game-changing approach, we discard conventional theories and practices stemming from the separation of internal energy modes and the Landau-Teller relaxation equation. Instead, we solve the fundamental microscopic equations in their moment forms but seek only optimum representations for the microscopic state distribution function that provides converged and time accurate solutions for certain macroscopic quantities at all times. The modeling makes no ad hoc assumptions or simplifications at the microscopic level and includes all possible collisional and radiative processes; it therefore retains all non-equilibrium fluid physics. We formulate the thermal and chemical non-equilibrium macroscopic equations and rate coefficients in a coupled and unified fashion for gases undergoing completely general transitions. All collisional partners can have internal structures and can change their internal energy states after transitions. The model is based on the reconstruction of the state distribution function. The internal energy space is subdivided into multiple groups in order to better describe non-equilibrium state distributions. The logarithm of the distribution function in each group is expressed as a power series in internal energy based on the maximum entropy principle. The method of weighted residuals is applied to the microscopic equations to obtain macroscopic moment equations and rate coefficients succinctly to any order. The model's accuracy depends only on the assumed expression of the state distribution function and the number of groups used and can be self-checked for accuracy and convergence. We show that the macroscopic internal energy transfer, similar to mass and momentum transfers, occurs through nonlinear collisional processes and is not a simple relaxation process described by, e.g., the Landau-Teller equation. 
Unlike the classical vibrational energy relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.
NASA Astrophysics Data System (ADS)
Dang-Long, T.; Quang-Tuyen, T.; Shiratori, Y.
2016-06-01
Produced from the organic matter of wastes (bio-wastes) through fermentation, biogas is composed mainly of CH4 and CO2 and can be considered a secondary energy carrier derived from solar energy. Generating electricity from biogas through the electrochemical process in fuel cells is a state-of-the-art technology that offers higher energy conversion efficiency, without harmful emissions, than combustion in heat engines. Benefiting from its high operating temperature, which enables direct internal reforming and activates the electrochemical reactions to increase overall system efficiency, a solid oxide fuel cell (SOFC) system operated on biogas is a promising distributed power generator for rural applications, reducing the environmental burden of greenhouse gases and bio-wastes. CO2 reforming of CH4 and electrochemical oxidation of the produced syngas (H2-CO mixture) are the two main reaction processes within the porous anode material of an SOFC. Here, the catalytic and electrochemical behavior of a Ni-ScSZ (scandia-stabilized zirconia) anode in feeds of CH4-CO2 mixtures as simulated biogas at 800 °C was evaluated. The results showed that CO2 strongly influenced both reaction processes. Increasing the CO2 partial pressure decreased the anode overvoltage, although the open-circuit voltage dropped. In addition, simulation based on a power-law model for an equimolar CH4-CO2 mixture revealed that the coking hazard could be suppressed along the fuel flow channel under both open-circuit and closed-circuit conditions.
Predicting herbicide mixture effects on multiple algal species using mixture toxicity models.
Nagai, Takashi
2017-10-01
The validity of the application of mixture toxicity models, concentration addition and independent action, to a species sensitivity distribution (SSD) for calculation of a multisubstance potentially affected fraction was examined in laboratory experiments. Toxicity assays of herbicide mixtures using 5 species of periphytic algae were conducted. Two mixture experiments were designed: a mixture of 5 herbicides with similar modes of action and a mixture of 5 herbicides with dissimilar modes of action, corresponding to the assumptions of the concentration addition and independent action models, respectively. Experimentally obtained mixture effects on 5 algal species were converted to the fraction of affected (>50% effect on growth rate) species. The predictive ability of the concentration addition and independent action models with direct application to SSD depended on the mode of action of chemicals. That is, prediction was better for the concentration addition model than the independent action model for the mixture of herbicides with similar modes of action. In contrast, prediction was better for the independent action model than the concentration addition model for the mixture of herbicides with dissimilar modes of action. Thus, the concentration addition and independent action models could be applied to SSD in the same manner as for a single-species effect. The present study to validate the application of the concentration addition and independent action models to SSD supports the usefulness of the multisubstance potentially affected fraction as the index of ecological risk. Environ Toxicol Chem 2017;36:2624-2630. © 2017 SETAC.
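The two reference models can be written down directly for log-logistic concentration-response curves. In this sketch (hypothetical parameters, not the study's herbicides), independent action combines the individual effects probabilistically, while concentration addition solves for the effect level at which the toxic units sum to one:

```python
import numpy as np
from scipy.optimize import brentq

def effect(c, ec50, slope):
    """Log-logistic concentration-response: fractional effect at concentration c."""
    return c**slope / (c**slope + ec50**slope)

def inv_effect(E, ec50, slope):
    """Concentration of one chemical alone that produces effect E."""
    return ec50 * (E / (1 - E))**(1 / slope)

def ia_mixture(conc, ec50, slope):
    """Independent action: 1 - product of the individual 'unaffected' fractions."""
    E = [effect(c, e, s) for c, e, s in zip(conc, ec50, slope)]
    return 1 - np.prod([1 - e for e in E])

def ca_mixture(conc, ec50, slope):
    """Concentration addition: solve sum(c_i / EC_E,i) = 1 for the mixture effect E."""
    f = lambda E: sum(c / inv_effect(E, e, s)
                      for c, e, s in zip(conc, ec50, slope)) - 1
    return brentq(f, 1e-9, 1 - 1e-9)

# Two identical chemicals, each dosed at half its EC50 (slope 1)
e_ca = ca_mixture([0.5, 0.5], [1.0, 1.0], [1.0, 1.0])
e_ia = ia_mixture([0.5, 0.5], [1.0, 1.0], [1.0, 1.0])
```

For this symmetric case CA gives exactly the EC50-level effect (0.5), while IA gives 5/9, illustrating how the two models diverge even for identical components.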
Wilde, Marcelo L; Schneider, Mandy; Kümmerer, Klaus
2017-04-01
Pharmaceuticals do not occur isolated in the environment but in multi-component mixtures, and they may exhibit antagonistic, synergistic, or additive behavior. Knowledge on this is still scarce. The situation is even more complicated if effluents or potable water are treated by oxidative processes or such transformations occur in the environment. Thus, determining the fate and effects of parent compounds, metabolites, and transformation products (TPs) formed by transformation and degradation processes in the environment is needed. This study investigated the fate and preliminary ecotoxicity of the phenothiazine pharmaceuticals Promazine (PRO), Promethazine (PRM), Chlorpromazine (CPR), and Thioridazine (THI), as single compounds and as components of the mixtures resulting from their treatment by the Fenton process. The Fenton process was carried out at pH 7 using 0.5-2 mg L-1 of [Fe2+]0 and 1-12.5 mg L-1 of [H2O2]0 at a fixed [Fe2+]0:[H2O2]0 ratio of 1:10 (w:w). No complete mineralization was achieved. Constitutional isomers and some metabolite-like TPs formed were suggested based on their UHPLC-HRMSn data. A degradation pathway was proposed considering interconnected mechanisms such as sulfoxidation, hydroxylation, N-dealkylation, and dechlorination steps. Aerobic biodegradation tests (OECD 301 D and OECD 301 F) were applied to the parent compounds separately, to the mixture of parent compounds, and to the cocktail of TPs present after treatment by the Fenton process. The samples were not readily biodegradable. However, LC-MS analysis revealed that abiotic transformations, such as hydrolysis, and autocatalytic transformations occurred. Initial ecotoxicity tests toward Vibrio fischeri on the individual compounds showed a reduction in toxicity of PRM and CPR by the treatment process, whereas PRO showed an increase in acute luminescence inhibition and THI a stable luminescence inhibition.
For the effects of the mixture components, a reduction in toxicity by the Fenton process was predicted by the concentration addition and independent action models. Copyright © 2017 Elsevier B.V. All rights reserved.
Contaminant source identification using semi-supervised machine learning
NASA Astrophysics Data System (ADS)
Vesselinov, Velimir V.; Alexandrov, Boian S.; O'Malley, Daniel
2018-05-01
Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).
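The decomposition step can be illustrated with an off-the-shelf NMF. This is only the factorization core; the NMFk-specific semi-supervised clustering and the estimation of the number of groundwater types are not shown, and the data below are synthetic:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
# Two hypothetical "source" geochemical signatures over 6 constituents
S = rng.uniform(0.1, 1.0, size=(2, 6))
# 40 observed samples, each an unknown non-negative blend of the two sources
W = rng.dirichlet([1, 1], size=40)
X = W @ S  # observed mixture concentrations

# Factor X ~ W_est @ S_est with non-negativity constraints
model = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
W_est = model.fit_transform(X)   # estimated mixing ratios
S_est = model.components_        # estimated source signatures
err = np.linalg.norm(X - W_est @ S_est) / np.linalg.norm(X)
```

Because the synthetic data are exactly rank-2 and non-negative, the factorization reconstructs the mixtures almost perfectly; NMFk's additional clustering step is what makes the number of sources itself an output rather than an input.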
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Pasquini, Benedetta; Cooley, Scott K.
In recent years, multivariate optimization has played an increasing role in analytical method development. ICH guidelines recommend using statistical design of experiments to identify the design space, in which multivariate combinations of composition variables and process variables have been demonstrated to provide quality results. For a microemulsion electrokinetic chromatography (MEEKC) method, the performance of the electrophoretic run depends on the proportions of the mixture components (MCs) of the microemulsion and on the values of the process variables (PVs). In the present work, for the first time in the literature, a mixture-process variable (MPV) approach was applied to optimize a MEEKC method for the analysis of coenzyme Q10 (Q10), ascorbic acid (AA), and folic acid (FA) contained in nutraceuticals. The MCs (buffer, surfactant-cosurfactant, oil) and the PVs (voltage, buffer concentration, buffer pH) were simultaneously varied according to an MPV experimental design. A 62-run MPV design was generated using the I-optimality criterion, assuming a 46-term MPV model allowing for special-cubic blending of the MCs, quadratic effects of the PVs, and some MC-PV interactions. The obtained data were used to develop MPV models that express the performance of an electrophoretic run (measured as the peak efficiencies of Q10, AA, and FA) in terms of the MCs and PVs. Contour and perturbation plots were drawn for each of the responses. Finally, the MPV models and criteria for the peak efficiencies were used to develop the design space and an optimal subregion (i.e., the settings of the MCs and PVs that satisfy the respective criteria), as well as a unique optimal combination of MCs and PVs.
Cabasso, Israel; Korngold, Emmanuel
1988-01-01
A membrane permeation process for dehydrating a mixture of organic liquids, such as alcohols, or close-boiling, heat-sensitive mixtures. The process comprises causing a component of the mixture to selectively sorb into one side of sulfonated ion-exchange polyalkene (e.g., polyethylene) membranes and selectively diffuse or flow therethrough, and then desorbing the component into a gas or liquid phase on the other side of the membranes.
Loman, Abdullah Al; Ju, Lu-Kwang
2016-05-01
Soy protein is a well-known nutritional supplement in proteinaceous food and animal feed. However, soybeans contain complex carbohydrates. Selective carbohydrate removal by enzymes could increase the protein content and reduce the indigestibility of soy products for inclusion in animal feed. Complete hydrolysis of soy flour carbohydrates is challenging due to the presence of proteins and different types of non-structural polysaccharides. This study was designed to guide the design of the complex enzyme mixture required for hydrolysis of all types of soy flour carbohydrates. Enzyme broths from Aspergillus niger, Aspergillus aculeatus, and Trichoderma reesei fermentations were evaluated for soy carbohydrate hydrolysis. The resultant hydrolysate was measured for solubilized carbohydrate by both total carbohydrate and reducing sugar analyses. Conversion data obtained after 48 h of hydrolysis were first fitted with models to determine the maximum fractions of carbohydrate hydrolyzable by each enzyme group, i.e., cellulase, xylanase, pectinase, and α-galactosidase. Kinetic models were then developed to describe the increasing conversions over time under different enzyme activities and process conditions. The models showed high fidelity in predicting soy carbohydrate hydrolysis over broad ranges of soy flour loading (5-25%) and enzyme activities: per g soy flour, cellulase, 0.04-30 FPU; xylanase, 3.5-618 U; pectinase, 0.03-120 U; and α-galactosidase, 0.01-60 U. The models are valuable in guiding the development and production of optimal enzyme mixtures for hydrolysis of all types of carbohydrates present in soy flour and in optimizing the design and operation of the hydrolysis reactor and process. Copyright © 2016 Elsevier Inc. All rights reserved.
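The abstract does not spell out the kinetic form used; as a generic sketch, a conversion-versus-time curve approaching a maximum hydrolyzable fraction can be fitted with a first-order form X(t) = Xmax(1 - e^(-kt)). The data points below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def conversion(t, x_max, k):
    """First-order approach to a maximum hydrolyzable fraction x_max."""
    return x_max * (1 - np.exp(-k * t))

# Hypothetical conversion-vs-time data (time in h, conversion as fraction)
t = np.array([0, 2, 4, 8, 12, 24, 48], float)
x = np.array([0.0, 0.18, 0.31, 0.47, 0.55, 0.63, 0.65])

# Fit the maximum hydrolyzable fraction and the rate constant
(p_xmax, p_k), _ = curve_fit(conversion, t, x, p0=(0.6, 0.1))
```

Fitting x_max separately for each enzyme group, as the study does, separates "how much is hydrolyzable at all" from "how fast it is hydrolyzed" under given activities.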
Multiscale Constitutive Modeling of Asphalt Concrete
NASA Astrophysics Data System (ADS)
Underwood, Benjamin Shane
Multiscale modeling of asphalt concrete has become a popular technique for gaining improved insight into the physical mechanisms that affect the material's behavior and ultimately its performance. This type of modeling considers asphalt concrete not as a homogeneous mass but as an assemblage of materials at different characteristic length scales. For proper modeling, these characteristic scales should be functionally definable and should have known properties. Thus far, research in this area has not focused significant attention on functionally defining what the characteristic scales within asphalt concrete should be. Instead, many have made assumptions about the characteristic scales, and even the characteristic behaviors of these scales, with little to no support. This research addresses these shortcomings by directly evaluating the microstructure of the material and uses the results to create materials at the different characteristic length scales as they exist within the asphalt concrete mixture. The objectives of this work are to: (1) develop mechanistic models for the linear viscoelastic (LVE) and damage behaviors of asphalt concrete at different length scales, and (2) develop a mechanistic, mechanistic/empirical, or phenomenological formulation to link the different length scales into a model capable of predicting the effects of microstructural changes on the linear viscoelastic behavior of asphalt concrete mixture, i.e., a microstructure association model for asphalt concrete mixture. Through the microstructural study it is found that asphalt concrete mixture can be considered a build-up of three different phases: asphalt mastic, fine aggregate matrix (FAM), and the coarse aggregate particles. The asphalt mastic is found to exist as a homogeneous material throughout the mixture and FAM, and the filler content within this material is consistent with the volumetric averaged concentration, which can be calculated from the job mix formula.
It is also found that the maximum aggregate size of the FAM is mixture dependent but consistent with a gradation parameter from the Bailey method of mixture design. Mechanistic modeling at these different length scales reveals that although many consider asphalt concrete to be an LVE material, it is in fact only quasi-LVE, because it shows some tendencies that are inconsistent with LVE theory. Asphalt FAM and asphalt mastic show similar nonlinear tendencies, although the exact magnitude of the effect differs. These tendencies can be ignored for damage modeling at the mixture and FAM scales as long as the effects are consistently ignored, but it is found that they must be accounted for in mastic and binder damage modeling. The viscoelastic continuum damage (VECD) model is used for damage modeling in this research. To aid in characterization and application of the VECD model for cyclic testing, a simplified version (S-VECD) is rigorously derived and verified. Through the modeling efforts at each scale, various factors affecting the fundamental and engineering properties at each scale are observed and documented. A microstructure association model that accounts for particle interaction through physico-chemical processes and for the effects of aggregate structuralization is developed to link the moduli at each scale. This model is shown to be capable of upscaling the mixture modulus from either the experimentally determined mastic modulus or the FAM modulus. Finally, an initial attempt at upscaling the damage and nonlinearity phenomena is shown.
Binary gas mixture adsorption-induced deformation of microporous carbons by Monte Carlo simulation.
Cornette, Valeria; de Oliveira, J C Alexandre; Yelpo, Víctor; Azevedo, Diana; López, Raúl H
2018-07-15
Considering the thermodynamic grand potential for more than one adsorbate in an isothermal system, we generalize the model of adsorption-induced deformation of microporous carbons developed by Kowalczyk et al. [1]. We report a comprehensive study of the effects of adsorption-induced deformation of carbonaceous amorphous porous materials due to adsorption of carbon dioxide, methane and their mixtures. The adsorption process is simulated by using the Grand Canonical Monte Carlo (GCMC) method and the calculations are then used to analyze experimental isotherms for the pure gases and mixtures with different molar fraction in the gas phase. The pore size distribution determined from an experimental isotherm is used for predicting the adsorption-induced deformation of both pure gases and their mixtures. The volumetric strain (ε) predictions from the GCMC method are compared against relevant experiments with good agreement found in the cases of pure gases. Copyright © 2018 Elsevier Inc. All rights reserved.
Detailed finite element method modeling of evaporating multi-component droplets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diddens, Christian, E-mail: C.Diddens@tue.nl
The evaporation of sessile multi-component droplets is modeled with an axisymmetric finite element method. The model comprises the coupled processes of mixture evaporation, multi-component flow with composition-dependent fluid properties, and thermal effects. Based on representative examples of water–glycerol and water–ethanol droplets, regular and chaotic examples of solutal Marangoni flows are discussed. Furthermore, the relevance of the substrate thickness for the evaporative cooling of volatile binary mixture droplets is pointed out. It is shown how the evaporation of the more volatile component can drastically decrease the interface temperature, so that ambient vapor of the less volatile component condenses on the droplet. Finally, results of this model are compared with corresponding results of a lubrication theory model, showing that the application of lubrication theory can cause considerable errors even for moderate contact angles of 40°.
The mechanics of cellular compartmentalization as a model for tumor spreading
NASA Astrophysics Data System (ADS)
Fritsch, Anatol; Pawlizak, Steve; Zink, Mareike; Kaes, Josef A.
2012-02-01
Based on a recently developed surgical method of Michael Höckel, which makes use of cellular confinement to compartments in the human body, we study the mechanics of the process of cell segregation. Compartmentalization is a fundamental process of cellular organization and occurs during embryonic development. A simple model system can demonstrate the process of compartmentalization: when two populations of suspended cells are mixed, the mixture will eventually segregate into two phases, whereas mixtures of the same cell type will not. In the 1960s, Malcolm S. Steinberg formulated the so-called differential adhesion hypothesis, which explains the segregation in the model system and the process of compartmentalization by differences in surface tension and adhesiveness of the interacting cells. We are interested in the extent to which the same physical principles affect tumor growth and spreading between compartments. For our studies, we use healthy and cancerous breast cell lines of different malignancy as well as primary cells from human cervix carcinoma. We apply a set of techniques to study their mechanical properties and interactions. The Optical Stretcher is used for whole-cell rheology, while cell-cell adhesion forces are directly measured with a modified AFM. In combination with 3D segregation experiments in droplet cultures, we try to clarify the role of surface tension in tumor spreading.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kopčić, Nina, E-mail: nkopcic@fkit.hr; Vuković Domanovac, Marija; Kučić, Dajana
Highlights: • Apple and tobacco waste mixture was efficiently composted during 22 days. • Physical–chemical and microbiological properties of the mixture were suitable for the process. • Evaluation of the selected mathematical model showed good prediction of the temperature. • The temperature curve was a “mirror image” of the oxygen concentration curve. • The peak temperature occurred 9.5 h after the peak oxygen consumption. - Abstract: An efficient composting process requires a set of adequate parameters, among which the physical–chemical properties of the composting substrate play the key role. By combining different types of biodegradable solid waste, it is possible to obtain a substrate suitable for the microorganisms in the composting process. In this work the composting of an apple and tobacco solid waste mixture (1:7, dry weight) was explored. The aim of the work was to investigate the efficiency of biodegradation of the given mixture and to characterize the resulting raw compost. Composting was conducted in a 24 L thermally insulated column reactor at an airflow rate of 1.1 L min-1. During 22 days several parameters were closely monitored: temperature and mass of the substrate, volatile solids content, C/N ratio and pH-value of the mixture, and oxygen consumption. The composting of the apple and tobacco waste resulted in high degradation of the volatile solids (53.1%). During the experiment 1.76 kg of oxygen was consumed and the C/N ratio of the product was 11.6. The obtained temperature curve was almost a “mirror image” of the oxygen concentration curve, while the peak temperature occurred 9.5 h after the peak oxygen consumption.
Mixture EMOS model for calibrating ensemble forecasts of wind speed.
Baran, S; Lerch, S
2016-03-01
Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where the model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-Range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the 8-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and improved accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics published by John Wiley & Sons Ltd.
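The heart of the mixture EMOS approach is a weighted TN + LN predictive density whose parameters are chosen by optimizing a proper scoring rule. A stripped-down sketch follows: log score only, component parameters held fixed, and synthetic observations, whereas the real model links all parameters to the ensemble forecasts and also uses the CRPS:

```python
import numpy as np
from scipy.stats import truncnorm, lognorm
from scipy.optimize import minimize_scalar

def mixture_pdf(y, w, mu, sigma, m, s):
    """Weighted mixture: w * TN_0(mu, sigma) + (1 - w) * LN(m, s)."""
    a = (0 - mu) / sigma  # standardized truncation point at zero
    tn = truncnorm.pdf(y, a, np.inf, loc=mu, scale=sigma)
    ln = lognorm.pdf(y, s, scale=np.exp(m))
    return w * tn + (1 - w) * ln

# Synthetic "observed wind speeds": 40% from a near-TN regime, 60% from LN
rng = np.random.default_rng(3)
y = np.concatenate([np.abs(rng.normal(8.0, 1.5, 400)),
                    rng.lognormal(0.8, 0.3, 600)])

# Estimate the mixture weight by minimizing the log score (neg. log-likelihood),
# with the component parameters held fixed for simplicity
nll = lambda w: -np.sum(np.log(mixture_pdf(y, w, 8.0, 1.5, 0.8, 0.3)))
res = minimize_scalar(nll, bounds=(0.01, 0.99), method="bounded")
w_hat = res.x
```

In the full model the same minimization runs over all distribution parameters simultaneously, on a rolling training window, which is what yields calibration that adapts to the ensemble.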
The effect of air entrapment on the performance of squeeze film dampers: Experiments and analysis
NASA Astrophysics Data System (ADS)
Diaz Briceno, Sergio Enrique
Squeeze film dampers (SFDs) are an effective means to introduce the required damping in rotor-bearing systems. They are a standard application in jet engines and are commonly used in industrial compressors. Yet, lack of understanding of their operation has confined the design of SFDs to a costly trial-and-error process based on prior experience. The main factor deterring the success of analytical models for the prediction of SFDs' performance lies in the modeling of the dynamic film rupture. Usually, the cavitation models developed for journal bearings are applied to SFDs. Yet, the characteristic motion of the SFD results in the entrapment of air into the oil film, thus producing a bubbly mixture that cannot be represented by these models. In this work, an extensive experimental study establishes qualitatively and, for the first time, quantitatively the differences between operation with vapor cavitation and with air entrainment. The experiments show that most operating conditions lead to air entrainment and demonstrate the paramount effect it has on the performance of SFDs, evidencing the limitation of currently available models. Further experiments address the operation of SFDs with controlled bubbly mixtures. These experiments bolster the possibility of modeling air entrapment by representing the lubricant as a homogeneous mixture of air and oil and provide a reliable database for benchmarking such a model. An analytical model is developed based on a homogeneous mixture assumption, in which the bubbles are described by the Rayleigh-Plesset equation. Good agreement is obtained between this model and the measurements performed in the SFD operating with controlled mixtures. A complementary analytical model is devised to estimate the amount of air entrained from the balance of axial flows in the film.
A combination of the analytical models for prediction of the air volume fraction and of the hydrodynamic pressures renders promising results for prediction of the performance of SFDs with freely entrained air. The results of this work are of immediate engineering applicability. Furthermore, they represent a firm step toward advancing the understanding of the effects of air entrapment on the performance of SFDs.
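The homogeneous-mixture idea above treats the bubbly air-oil film as a single fluid whose effective properties depend on the local air volume fraction, which in turn varies with film pressure as the gas phase compresses. A minimal sketch under stated assumptions (the oil density, reference pressure, and isothermal ideal-gas compression are illustrative choices, not values from the thesis):

```python
def mixture_density(beta, rho_oil=870.0, rho_air=1.2):
    """Density (kg/m^3) of a homogeneous air-oil mixture at air volume
    fraction beta; property values here are illustrative."""
    return (1.0 - beta) * rho_oil + beta * rho_air

def volume_fraction(p, beta0, p0):
    """Air volume fraction after isothermal compression of the gas phase
    from reference state (p0, beta0) to pressure p (ideal gas: V ~ p0/p)."""
    gas = beta0 * p0 / p                 # relative gas volume at pressure p
    return gas / ((1.0 - beta0) + gas)   # renormalize against fixed liquid volume
```

As the squeeze motion raises the film pressure, the air fraction (and hence the mixture compressibility) drops, which is the mechanism such models use to smooth out the pressure field relative to a pure-oil cavitation model.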
Understanding the ignition mechanism of high-pressure spray flames
Dahms, Rainer N.; Paczko, Günter A.; Skeen, Scott A.; ...
2016-10-25
A conceptual model for turbulent ignition in high-pressure spray flames is presented. The model is motivated by first-principles simulations and optical diagnostics applied to the Sandia n-dodecane experiment. The Lagrangian flamelet equations are combined with full LLNL kinetics (2755 species; 11,173 reactions) to resolve all time and length scales and chemical pathways of the ignition process at engine-relevant pressures and turbulence intensities unattainable using classic DNS. The first-principles value of the flamelet equations is established by a novel chemical explosive mode-diffusion time scale analysis of the fully-coupled chemical and turbulent time scales. Contrary to conventional wisdom, this analysis reveals that the high Damköhler number limit, a key requirement for the validity of the flamelet derivation from the reactive Navier–Stokes equations, applies during the entire ignition process. Corroborating Rayleigh-scattering and formaldehyde PLIF measurements with simultaneous schlieren imaging of mixing and combustion are presented. Our combined analysis establishes a characteristic temporal evolution of the ignition process. First, a localized first-stage ignition event consistently occurs in the highest-temperature mixture regions. This initiates, owing to the intense scalar dissipation, a turbulent cool flame wave propagating from this ignition spot through the entire flow field. This wave significantly decreases the ignition delay of lower-temperature mixture regions in comparison to their homogeneous reference. This explains the experimentally observed formaldehyde formation across the entire spray head prior to high-temperature ignition, which consistently occurs first in a broad range of rich mixture regions. There, the combination of the first-stage ignition delay, shortened by the cool flame wave, and the subsequent delay until second-stage ignition becomes minimal.
A turbulent flame subsequently propagates rapidly through the entire mixture over time scales consistent with experimental observations. As a result, we demonstrate that neglecting turbulence-chemistry interactions fundamentally fails to capture the key features of this ignition process.
Odourant dominance in olfactory mixture processing: what makes a strong odourant?
Schubert, Marco; Sandoz, Jean-Christophe; Galizia, Giovanni; Giurfa, Martin
2015-01-01
The question of how animals process stimulus mixtures remains controversial, as opposing views propose that mixtures are processed analytically, as the sum of their elements, or holistically, as unique entities different from their elements. Overshadowing is a widespread phenomenon that can help decide between these alternatives. In overshadowing, an individual trained with a binary mixture learns one element better at the expense of the other. Although element salience (learning success) has been suggested as a main explanation for overshadowing, the mechanisms underlying this phenomenon remain unclear. We studied olfactory overshadowing in honeybees to uncover the mechanisms underlying olfactory-mixture processing. We provide, to our knowledge, the most comprehensive dataset on overshadowing to date, based on 90 experimental groups involving more than 2700 bees trained either with six odourants or with their resulting 15 binary mixtures. We found that bees process olfactory mixtures analytically and that salience alone cannot predict overshadowing. After normalizing learning success, we found that an unexpected feature, the generalization profile of an odourant, was determinant for overshadowing. Odourants that induced less generalization enhanced their distinctiveness and became dominant in the mixture. Our study thus uncovers features that determine odourant dominance within olfactory mixtures and allows this phenomenon to be related to differences in neural activity at both the receptor and the central level in the insect nervous system. PMID:25652840
Controllability of control and mixture weakly dependent siphons in S3PR
NASA Astrophysics Data System (ADS)
Hong, Liang; Chao, Daniel Y.
2013-08-01
Deadlocks in a flexible manufacturing system modelled by Petri nets arise from insufficiently marked siphons. Monitors are added to control these siphons and avoid deadlocks, but this renders the system too complicated, since the total number of monitors grows exponentially. Li and Zhou propose to add monitors only to elementary siphons while controlling the other (strongly or weakly) dependent siphons by adjusting control depth variables. To avoid generating new siphons, the control arcs are ended at source transitions of process nets. This disturbs the original model more and hence loses more live states. Negative terms in the controllability make the control policy for weakly dependent siphons rather conservative. We earlier studied the controllability of strongly dependent siphons and proposed to add monitors in the order of basic, compound, control, partial mixture and full mixture (strongly dependent) siphons to reduce the number of mixed integer programming iterations and redundant monitors. This article further investigates the controllability of siphons derived from weakly 2-compound siphons. We discover that the controllability for weakly and strongly compound siphons is similar, but that this no longer holds for control and mixture siphons. Some control and mixture siphons derived from strongly 2-compound siphons are not redundant; for those derived from weakly 2-compound siphons, by contrast, all control and mixture siphons are redundant. They need not follow the conservative policy proposed by Li and Zhou. Thus, we can adopt the maximally permissive control policy even though new siphons are generated.
Daniels, Carter W; Sanabria, Federico
2017-03-01
The distribution of latencies and interresponse times (IRTs) of rats was compared between two fixed-interval (FI) schedules of food reinforcement (FI 30 s and FI 90 s) and between two levels of food deprivation. Computational modeling revealed that latencies and IRTs were well described by mixture probability distributions embodying two-state Markov chains. Analysis of these models revealed that only a subset of latencies is sensitive to the periodicity of reinforcement, and that prefeeding only reduces the size of this subset. The distribution of IRTs suggests that behavior in FI schedules is organized in bouts that lengthen and ramp up in frequency with proximity to reinforcement. Prefeeding slowed the lengthening of bouts and increased the time between bouts. When concatenated, the latency and IRT models adequately reproduced sigmoidal FI response functions. These findings suggest that behavior in FI schedules fluctuates in and out of schedule control; an account of such fluctuation suggests that timing and motivation are dissociable components of FI performance. These mixture-distribution models also provide novel insights into the motivational, associative, and timing processes expressed in FI performance. These processes may be obscured, however, when performance in timing tasks is analyzed in terms of mean response rates.
On the characterization of flowering curves using Gaussian mixture models.
Proïa, Frédéric; Pernet, Alix; Thouroude, Tatiana; Michel, Gilles; Clotault, Jérémy
2016-08-07
In this paper, we develop a statistical methodology applied to the characterization of flowering curves using Gaussian mixture models. Our study relies on a set of rosebush flowering data, and Gaussian mixture models are mainly used to quantify the reblooming properties of each bush. In this regard, we also suggest our own selection criterion to take into account the lack of symmetry of most of the flowering curves. Three classes are created on the basis of a principal component analysis conducted on a set of reblooming indicators, and a subclassification is made using a longitudinal k-means algorithm, which also highlights the role played by the precocity of flowering. In this way, we obtain an overview of the correlations between the features we decided to retain for each curve. In particular, the results suggest a lack of correlation between reblooming and flowering precocity. The pertinent indicators obtained in this study are a first step toward understanding the environmental and genetic control of these biological processes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Style consistent classification of isogenous patterns.
Sarkar, Prateek; Nagy, George
2005-01-01
In many applications of pattern recognition, patterns appear together in groups (fields) that have a common origin. For example, a printed word is usually a field of character patterns printed in the same font. A common origin induces consistency of style in features measured on patterns. The features of patterns co-occurring in a field are statistically dependent because they share the same, albeit unknown, style. Style constrained classifiers achieve higher classification accuracy by modeling such dependence among patterns in a field. Effects of style consistency on the distributions of field-features (concatenation of pattern features) can be modeled by hierarchical mixtures. Each field derives from a mixture of styles, while, within a field, a pattern derives from a class-style conditional mixture of Gaussians. Based on this model, an optimal style constrained classifier processes entire fields of patterns rendered in a consistent but unknown style. In a laboratory experiment, style constrained classification reduced errors on fields of printed digits by nearly 25 percent over singlet classifiers. Longer fields favor our classification method because they furnish more information about the underlying style.
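The field-classification idea above can be sketched directly: the likelihood of a whole field is marginalized over the shared, unknown style, so a pattern that is ambiguous in isolation can be disambiguated by its field-mates. This is a minimal sketch under simplified assumptions (1-D features and a single Gaussian per class-style pair, rather than the paper's full class-style conditional mixtures), and the exhaustive search over label sequences is only practical for short fields:

```python
import itertools
import math

def field_log_lik(field, labels, styles, comp_log_pdf):
    """log p(field | labels), marginalizing over the shared unknown style.

    styles maps a style id to its log prior; comp_log_pdf(x, c, s) is the
    log density of pattern features x under class c rendered in style s.
    """
    per_style = [log_prior + sum(comp_log_pdf(x, c, s) for x, c in zip(field, labels))
                 for s, log_prior in styles.items()]
    m = max(per_style)
    return m + math.log(sum(math.exp(v - m) for v in per_style))  # log-sum-exp

def classify_field(field, classes, styles, comp_log_pdf):
    """Jointly label every pattern in the field under style consistency."""
    return max(itertools.product(classes, repeat=len(field)),
               key=lambda labels: field_log_lik(field, labels, styles, comp_log_pdf))
```

For example, with two hypothetical styles whose class means overlap, an observation sitting exactly on a class boundary is resolved by a co-occurring pattern that reveals the field's style.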
High affinity ligands from in vitro selection: Complex targets
Morris, Kevin N.; Jensen, Kirk B.; Julin, Carol M.; Weil, Michael; Gold, Larry
1998-01-01
Human red blood cell membranes were used as a model system to determine if the systematic evolution of ligands by exponential enrichment (SELEX) methodology, an in vitro protocol for isolating high-affinity oligonucleotides that bind specifically to virtually any single protein, could be used with a complex mixture of potential targets. Ligands to multiple targets were generated simultaneously during the selection process, and the binding affinities of these ligands for their targets are comparable to those found in similar experiments against pure targets. A secondary selection scheme, deconvolution-SELEX, facilitates rapid isolation of the ligands to targets of special interest within the mixture. SELEX provides high-affinity compounds for multiple targets in a mixture and might allow a means for dissecting complex biological systems. PMID:9501188
Self-ignition of S.I. engine model fuels: A shock tube investigation at high pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fieweger, K.; Blumenthal, R.; Adomeit, G.
1997-06-01
The self-ignition of several spark-ignition (SI) engine fuels (iso-octane, methanol, methyl tert-butyl ether and three different mixtures of iso-octane and n-heptane), mixed with air, was investigated experimentally under relevant engine conditions by the shock tube technique. Typical modes of the self-ignition process were registered cinematographically. For temperatures relevant to piston engine combustion, the self-ignition process always starts as an inhomogeneous, deflagrative mild ignition. This instant is defined by the ignition delay time, τ_defl. The deflagration process in most cases is followed by a secondary explosion (DDT). This transition defines a second ignition delay time, τ_DDT, which is a suitable approximation for the chemical ignition delay time if the change of the thermodynamic conditions of the unburned test gas due to deflagration is taken into account. For iso-octane at p = 40 bar, an NTC (negative temperature coefficient) behavior connected with a two-step (cool flame) self-ignition at low temperatures was observed. This process was very pronounced for rich and less pronounced for stoichiometric mixtures. The τ_DDT delays of the stoichiometric mixtures were shortened by the primary deflagration process in the temperature range between 800 and 1,000 K. Various mixtures of iso-octane and n-heptane were investigated. The results show a strong influence of the n-heptane fraction in the mixture, both on the ignition delay time and on the mode of self-ignition. The self-ignition of methanol and MTBE (methyl tert-butyl ether) is characterized by a very pronounced initial deflagration. For temperatures below 900 K (methanol: 800 K), no secondary explosion occurs. Taking into account the pressure increase due to deflagration, the measured delays τ_DDT of the secondary explosion are shortened by up to one order of magnitude.
NASA Astrophysics Data System (ADS)
Tikhomirov, S. G.; Pyatakov, Y. V.; Karmanova, O. V.; Maslov, A. A.
2018-03-01
The vulcanization kinetics of elastomers were studied using a truck tyre tread rubber compound. A formal kinetic scheme of the vulcanization of rubbers with a sulfur-accelerator curing system was used, which generalizes the set of reactions occurring in the curing process. A mathematical model is developed for determining the thermal parameters of the vulcanizable mixture, comprising algorithms for solving the direct and inverse problems for the system of equations of heat conduction and the kinetics of the curing process. The performance of the model is confirmed by the results of numerical experiments on model examples.
Pore-scale modeling of phase change in porous media
NASA Astrophysics Data System (ADS)
Juanes, Ruben; Cueto-Felgueroso, Luis; Fu, Xiaojing
2017-11-01
One of the main open challenges in pore-scale modeling is the direct simulation of flows involving multicomponent mixtures with complex phase behavior. Reservoir fluid mixtures are often described through cubic equations of state, which makes diffuse interface, or phase field theories, particularly appealing as a modeling framework. What is still unclear is whether equation-of-state-driven diffuse-interface models can adequately describe processes where surface tension and wetting phenomena play an important role. Here we present a diffuse interface model of single-component, two-phase flow (a van der Waals fluid) in a porous medium under different wetting conditions. We propose a simplified Darcy-Korteweg model that is appropriate to describe flow in a Hele-Shaw cell or a micromodel, with a gap-averaged velocity. We study the ability of the diffuse-interface model to capture capillary pressure and the dynamics of vaporization/condensation fronts, and show that the model reproduces pressure fluctuations that emerge from abrupt interface displacements (Haines jumps) and from the break-up of wetting films.
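The role of the cubic equation of state mentioned above is that below the critical temperature it yields a non-monotonic pressure-volume isotherm, whose unstable branch the diffuse-interface model regularizes into a two-phase (liquid-vapor) interface. A minimal sketch for a van der Waals fluid; the a and b constants default to textbook values for CO2 and are used purely for illustration:

```python
def vdw_pressure(v, T, a=0.3640, b=4.267e-5, R=8.314):
    """van der Waals pressure (Pa) at molar volume v (m^3/mol) and T (K).

    Default a (Pa m^6/mol^2) and b (m^3/mol) are textbook CO2 values,
    used here only to illustrate sub- vs super-critical isotherms.
    """
    return R * T / (v - b) - a / v ** 2
```

Below the critical temperature (about 304 K for these constants) the isotherm rises between the spinodal points, signaling liquid-vapor coexistence; well above it the isotherm decreases monotonically and only a single phase exists.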
Supercritical separation process for complex organic mixtures
Chum, Helena L.; Filardo, Giuseppe
1990-01-01
A process is disclosed for separating low molecular weight components from complex aqueous organic mixtures. The process includes preparing a separation solution of supercritical carbon dioxide with an effective amount of an entrainer to modify the solvation power of the supercritical carbon dioxide and extract preselected low molecular weight components. The separation solution is maintained at a temperature of at least about 70 °C and a pressure of at least about 1,500 psi. The separation solution is then contacted with the organic mixtures while maintaining the temperature and pressure as above until the mixtures and solution reach equilibrium to extract the preselected low molecular weight components from the organic mixtures. Finally, the entrainer/extracted components portion of the equilibrium mixture is isolated from the separation solution.
Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC
ERIC Educational Resources Information Center
Depaoli, Sarah
2012-01-01
Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…
Vidal, T; Gigot, C; de Vallavieille-Pope, C; Huber, L; Saint-Jean, S
2018-06-08
Growing cultivars differing by their disease resistance level together (cultivar mixtures) can reduce the propagation of diseases. Although architectural characteristics of cultivars are little considered in mixture design, they could have an effect on disease, in particular through spore dispersal by rain splash, which occurs over short distances. The objective of this work was to assess the impact of plant height of wheat cultivars in mixtures on splash dispersal of Zymoseptoria tritici, which causes septoria tritici leaf blotch. We used a modelling approach involving an explicit description of canopy architecture and splash dispersal processes. The dispersal model computed raindrop interception by a virtual canopy as well as the production, transport and interception of splash droplets carrying inoculum. We designed 3-D virtual canopies composed of susceptible and resistant plants, according to field measurements at the flowering stage. In numerical experiments, we tested different heights of virtual cultivars making up binary mixtures to assess the influence of this architectural trait on dispersal patterns of spore-carrying droplets. Inoculum interception decreased exponentially with the height relative to the main inoculum source (lower diseased leaves of susceptible plants), and little inoculum was intercepted further than 40 cm above the inoculum source. Consequently, tall plants intercepted less inoculum than smaller ones. Plants with twice the standard height intercepted 33% less inoculum than standard-height plants. In cases where the height of susceptible plants was doubled, inoculum interception by resistant leaves was 40% higher. This physical barrier to spore-carrying droplet trajectories reduced inoculum interception by tall susceptible plants and was modulated by plant height differences between cultivars of a binary mixture.
These results suggest that mixture effects on spore dispersal could be modulated by an adequate choice of architectural characteristics of cultivars. In particular, even small differences in plant height could reduce spore dispersal.
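The exponential decline of interception with height reported above can be sketched as a one-parameter model. The decay length below is a hypothetical value chosen to be consistent with the observation that little inoculum travels more than about 40 cm above the source; it is not a parameter fitted by the authors:

```python
import math

def intercepted_fraction(height_cm, decay_cm=10.0):
    """Relative inoculum interception at a given height above the source,
    assuming exponential decay. decay_cm is a hypothetical decay length
    (not a fitted value from the study)."""
    return math.exp(-height_cm / decay_cm)
```

With a 10 cm decay length, interception at 40 cm is exp(-4), under 2% of the source-level value, which matches the qualitative picture of splash dispersal acting over short vertical distances.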
Solubility and Phase Behavior of CL20 and RDX in Supercritical Carbon Dioxide
2004-12-01
with Enhanced mass transfer (SAS-EMTM) are potential green processes for producing ultrafine particles. In these processes, the material to be...particulated will be dissolved (solubilized) into an environmentally benign solvent such as supercritical carbon dioxide and then condensed to ultrafine...particles by reducing the pressure and temperature of the mixture. Theoretical and/or predictive models are required for process simulation and to
Human Language Technology: Opportunities and Challenges
2005-01-01
because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ...to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using...maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with
Mathematical modeling of a radio-frequency path for IEEE 802.11ah based wireless sensor networks
NASA Astrophysics Data System (ADS)
Tyshchenko, Igor; Cherepanov, Alexander; Dmitrii, Vakhnin; Popova, Mariia
2017-09-01
This article discusses the process of creating a mathematical model of a radio-frequency path for IEEE 802.11ah based wireless sensor networks using MATLAB Simulink CAD tools. In addition, it describes the perturbing effects that occur and the determination of the presence of a useful signal in the received mixture.
A study of finite mixture model: Bayesian approach on financial time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-07-01
Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model combines several distributions to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because its asymptotic properties provide remarkable results. In addition, the Bayesian method also shows consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is studied using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results showed a negative relationship between rubber prices and stock market prices for all selected countries.
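The component-selection step can be illustrated with a minimal univariate Gaussian mixture fitted by EM (a frequentist stand-in for the paper's Bayesian fit) and compared across k by BIC. This is a generic sketch, not the authors' code:

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def fit_gmm(data, k, iters=100):
    """Minimal EM for a univariate k-component Gaussian mixture.
    Deterministic quantile initialization keeps the sketch reproducible."""
    n = len(data)
    srt = sorted(data)
    mus = [srt[(j + 1) * n // (k + 1)] for j in range(k)]
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / n) or 1.0
    sigmas, ws = [sd] * k, [1.0 / k] * k
    for _ in range(iters):
        resp = []                               # E-step: responsibilities
        for x in data:
            ps = [ws[j] * norm_pdf(x, mus[j], sigmas[j]) for j in range(k)]
            tot = sum(ps) or 1e-300
            resp.append([p / tot for p in ps])
        for j in range(k):                      # M-step: weighted updates
            nj = sum(r[j] for r in resp) or 1e-300
            ws[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-6)
    loglik = sum(math.log(sum(ws[j] * norm_pdf(x, mus[j], sigmas[j]) for j in range(k)) + 1e-300)
                 for x in data)
    return loglik, ws, mus, sigmas

def bic(loglik, k, n):
    """BIC = p ln n - 2 ln L; lower is better. A univariate k-component
    mixture has 3k - 1 free parameters (weights, means, sds)."""
    return (3 * k - 1) * math.log(n) - 2.0 * loglik
```

On clearly bimodal data, the two-component fit gains far more log-likelihood than the extra three parameters cost, so BIC selects k = 2.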
Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth
2011-01-01
Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, affecting the non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. 
Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
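The core of the extension described above is replacing the binomial detection model with a beta-binomial, whose extra parameter induces correlation among individual detections. A short sketch of the pmf and the resulting variance inflation (generic formulas, not the authors' code):

```python
import math

def beta_binom_pmf(y, n, a, b):
    """P(Y = y) for Y ~ Beta-Binomial(n, a, b): a binomial whose success
    probability is itself Beta(a, b), inducing correlated detections."""
    return math.comb(n, y) * math.exp(
        math.lgamma(y + a) + math.lgamma(n - y + b) - math.lgamma(n + a + b)
        + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def overdispersion(n, a, b):
    """Ratio of beta-binomial to binomial variance at the same mean p."""
    rho = 1.0 / (a + b + 1.0)          # within-survey correlation
    return 1.0 + (n - 1.0) * rho
```

With intraclass correlation ρ = 1/(a+b+1), Var(Y) = n p (1−p) [1 + (n−1)ρ], which reduces to the binomial at ρ = 0; this extra variance is exactly what a plain binomial mixture model misattributes to abundance.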
A competitive binding model predicts the response of mammalian olfactory receptors to mixtures
NASA Astrophysics Data System (ADS)
Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay
Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system, we need methods to predict responses to complex mixtures from single-odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions in which only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts responses of 15 (of 18 tested) receptors to within 10-30% of the observed values, for mixtures with 2, 3 and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions lead to additional nonlinearities observed in mixture responses, such as suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
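The functional form described (one molecule bound at a time, so odorants compete for the same site) can be sketched as follows; parameter names and values are illustrative, not the paper's fitted values:

```python
def mixture_response(concs, ec50s, efficacies, f_max=1.0):
    """Receptor response to an odorant mixture under competitive binding:
    only one molecule occupies the site at a time, so odorants compete.
    All parameter values here are hypothetical."""
    drive = sum(e * c / k for c, k, e in zip(concs, ec50s, efficacies))
    occupancy = sum(c / k for c, k in zip(concs, ec50s))
    return f_max * drive / (1.0 + occupancy)
```

The shared denominator is what produces the nonlinearities: the mixture response is sub-additive relative to the summed single-odorant responses, and a strong binder with low efficacy suppresses the response to a co-presented agonist (antagonism).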
Lubrication model for evaporation of binary sessile drops
NASA Astrophysics Data System (ADS)
Williams, Adam; Sáenz, Pedro; Karapetsas, George; Matar, Omar; Sefiane, Khellil; Valluri, Prashant
2017-11-01
Evaporation of a binary-mixture sessile drop from a solid substrate is a highly dynamic and complex process, with flow driven by both thermal and solutal Marangoni stresses. Experiments on ethanol/water drops have identified chaotic regimes on both the surface and the interior of the droplet, while mixture composition has also been seen to govern drop wettability. Using a lubrication-type approach, we present a finite element model for the evaporation of an axisymmetric binary drop deposited on a heated substrate. We consider a thin drop with a moving contact line, also taking into account the commonly ignored effects of inertia, which drive interfacial instability. We derive evolution equations for the film height, the temperature and the concentration field, considering that the mixture comprises two ideally mixed volatile components with a surface tension linearly dependent on both temperature and concentration. The properties of the mixture, such as viscosity, also vary locally with concentration. We explore the parameter space to examine the resultant effects on wetting and evaporation, where we find qualitative agreement with experiments in both areas. This enables us to understand the nature of the instabilities that spontaneously emerge over the drop lifetime. EPSRC - EP/K00963X/1.
Steingroever, Helen; Pachur, Thorsten; Šmíra, Martin; Lee, Michael D
2018-06-01
The Iowa Gambling Task (IGT) is one of the most popular experimental paradigms for comparing complex decision-making across groups. Most commonly, IGT behavior is analyzed using frequentist tests to compare performance across groups, and to compare inferred parameters of cognitive models developed for the IGT. Here, we present a Bayesian alternative based on Bayesian repeated-measures ANOVA for comparing performance, and a suite of three complementary model-based methods for assessing the cognitive processes underlying IGT performance. The three model-based methods involve Bayesian hierarchical parameter estimation, Bayes factor model comparison, and Bayesian latent-mixture modeling. We illustrate these Bayesian methods by applying them to test the extent to which differences in intuitive versus deliberate decision style are associated with differences in IGT performance. The results show that intuitive and deliberate decision-makers behave similarly on the IGT, and the modeling analyses consistently suggest that both groups of decision-makers rely on similar cognitive processes. Our results challenge the notion that individual differences in intuitive and deliberate decision styles have a broad impact on decision-making. They also highlight the advantages of Bayesian methods, especially their ability to quantify evidence in favor of the null hypothesis, and that they allow model-based analyses to incorporate hierarchical and latent-mixture structures.
Crépet, Amélie; Albert, Isabelle; Dervin, Catherine; Carlin, Frédéric
2007-01-01
A normal distribution and a mixture model of two normal distributions in a Bayesian approach using prevalence and concentration data were used to establish the distribution of contamination of the food-borne pathogenic bacterium Listeria monocytogenes in unprocessed and minimally processed fresh vegetables. A total of 165 prevalence studies, including 15 studies with concentration data, were taken from the scientific literature and from technical reports and used for statistical analysis. The predicted mean of the normal distribution of the logarithms of viable L. monocytogenes per gram of fresh vegetables was −2.63 log viable L. monocytogenes organisms/g, and its standard deviation was 1.48 log viable L. monocytogenes organisms/g. These values were determined by considering one contaminated sample in prevalence studies in which samples are in fact negative; this deliberate overestimation is necessary to complete the calculations. With the mixture model, the predicted mean of the distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −3.38 log viable L. monocytogenes organisms/g and its standard deviation was 1.46 log viable L. monocytogenes organisms/g. The probabilities of fresh unprocessed and minimally processed vegetables being contaminated at concentrations higher than 1, 2, and 3 log viable L. monocytogenes organisms/g were 1.44, 0.63, and 0.17%, respectively. Introducing a sensitivity rate of 80 or 95% in the mixture model had a small effect on the estimation of the contamination. In contrast, introducing a low sensitivity rate (40%) resulted in marked differences, especially for high percentiles. There was a significantly lower estimation of contamination in the papers and reports of 2000 to 2005 than in those of 1988 to 1999, and a lower estimation of contamination of leafy salads than of sprouts and other vegetables. The value of the mixture model for the estimation of microbial contamination is discussed.
PMID:17098926
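The exceedance probabilities described above follow directly from the fitted distribution of log counts. A minimal sketch using the single-normal estimates quoted in the abstract (μ = −2.63, σ = 1.48 log organisms/g); note that the reported exceedance percentages come from the mixture model, so these single-normal figures will differ from them:

```python
from math import erf, sqrt

def normal_tail_prob(x, mu, sigma):
    """P(X > x) for X ~ Normal(mu, sigma), via the error function."""
    z = (x - mu) / sigma
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Single-normal parameters from the study (log viable organisms/g):
mu, sigma = -2.63, 1.48

# Probability that contamination exceeds 1, 2, or 3 log organisms/g:
for threshold in (1.0, 2.0, 3.0):
    p = normal_tail_prob(threshold, mu, sigma)
    print(f"P(> {threshold:.0f} log organisms/g) = {p:.4%}")
```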
NASA Astrophysics Data System (ADS)
Abdelmalak, M. M.; Bulois, C.; Mourgues, R.; Galland, O.; Legland, J.-B.; Gruber, C.
2016-08-01
Cohesion and friction coefficient are fundamental parameters for scaling brittle deformation in laboratory models of geological processes. However, they are commonly not experimental variables, whereas (1) rocks range from cohesion-less to strongly cohesive and from low to high friction and (2) strata exhibit substantial cohesion and friction contrasts. This brittle paradox implies that the effects of brittle properties on processes involving brittle deformation cannot be tested in laboratory models. Solving this paradox requires dry granular materials with tunable and controllable brittle properties. In this paper, we describe dry mixtures of fine-grained cohesive, high-friction silica powder (SP) and low-cohesion, low-friction glass microspheres (GM) that fulfill this requirement. We systematically estimated the cohesions and friction coefficients of mixtures of variable proportions using two independent methods: (1) a classic Hubbert-type shear box to determine the extrapolated cohesion (C) and friction coefficient (μ), and (2) direct measurements of the tensile strength (T0) and the height (H) of open fractures to calculate the true cohesion (C0). The measured values of cohesion increase from 100 Pa for pure GM to 600 Pa for pure SP, with a sub-linear dependence of the cohesion on the GM content of the mixture. The two independent cohesion measurements, from shear tests and tension/extension tests, yield very similar values of extrapolated cohesion (C), showing that both methods are robust and can be used independently. The measured friction coefficients increase from 0.5 for pure GM to 1.05 for pure SP. These granular material mixtures now allow testing (1) the effects of cohesion and friction coefficient in homogeneous laboratory models and (2) the effect of brittle layering on brittle deformation, as demonstrated by preliminary experiments. The brittle properties thus become, at last, experimental variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrows, Susannah M.; Ogunro, O.; Frossard, Amanda
2014-12-19
The presence of a large fraction of organic matter in primary sea spray aerosol (SSA) can strongly affect its cloud condensation nuclei activity and interactions with marine clouds. Global climate models require new parameterizations of the SSA composition in order to improve the representation of these processes. Existing proposals for such a parameterization use remotely-sensed chlorophyll-a concentrations as a proxy for the biogenic contribution to the aerosol. However, both observations and theoretical considerations suggest that existing relationships with chlorophyll-a, derived from observations at only a few locations, may not be representative for all ocean regions. We introduce a novel framework for parameterizing the fractionation of marine organic matter into SSA based on a competitive Langmuir adsorption equilibrium at bubble surfaces. Marine organic matter is partitioned into classes with differing molecular weights, surface excesses, and Langmuir adsorption parameters. The classes include a lipid-like mixture associated with labile dissolved organic carbon (DOC), a polysaccharide-like mixture associated primarily with semi-labile DOC, a protein-like mixture with concentrations intermediate between lipids and polysaccharides, a processed mixture associated with recalcitrant surface DOC, and a deep abyssal humic-like mixture. Box model calculations have been performed for several cases of organic adsorption to illustrate the underlying concepts. We then apply the framework to output from a global marine biogeochemistry model, by partitioning total dissolved organic carbon into several classes of macromolecule. Each class is represented by model compounds with physical and chemical properties based on existing laboratory data. This allows us to globally map the predicted organic mass fraction of the nascent submicron sea spray aerosol.
Predicted relationships between chlorophyll-a and organic fraction are similar to existing empirical parameterizations, but can vary between biologically productive and non-productive regions, and seasonally within a given region. Major uncertainties include the bubble film thickness at bursting and the variability of organic surfactant activity in the ocean, which is poorly constrained. In addition, marine colloids and cooperative adsorption of polysaccharides may make important contributions to the aerosol, but are not included here. This organic fractionation framework is an initial step towards a closer linking of ocean biogeochemistry and aerosol chemical composition in Earth system models. Future work should focus on improving constraints on model parameters through new laboratory experiments or through empirical fitting to observed relationships in the real ocean and atmosphere, as well as on atmospheric implications of the variable composition of organic matter in sea spray.
Modeling of First-Passage Processes in Financial Markets
NASA Astrophysics Data System (ADS)
Inoue, Jun-Ichi; Hino, Hikaru; Sazuka, Naoya; Scalas, Enrico
2010-03-01
In this talk, we attempt a microscopic model of the first-passage process (or first-exit process) of the BUND future using a minority game with market history. We find that the minority game with an appropriate history length generates the same first-passage properties as the BTP future (middle- and long-term Italian government bonds with fixed interest rates); namely, both first-passage time distributions have a crossover at a specific time scale, as is the case for the Mittag-Leffler function. We also provide a macroscopic (phenomenological) model of the first-passage process of the BTP future and show analytically that the first-passage time distribution of a simple mixture of normal compound Poisson processes does not have such a crossover.
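A first-passage process of this kind can be illustrated with a minimal sketch: a plain ±1 random walk (an assumed stand-in, not the minority-game or compound-Poisson machinery of the paper), recording the first time the walk exits a symmetric barrier:

```python
import random

def first_passage_time(barrier, p_up=0.5, max_steps=10_000, rng=random):
    """Steps of a simple +/-1 random walk until |position| >= barrier."""
    pos = 0
    for t in range(1, max_steps + 1):
        pos += 1 if rng.random() < p_up else -1
        if abs(pos) >= barrier:
            return t
    return max_steps  # censored at max_steps

random.seed(0)
times = [first_passage_time(barrier=5) for _ in range(2000)]
mean_fpt = sum(times) / len(times)
# For a symmetric walk, the expected exit time from [-b, b] is b^2 steps.
print(f"mean first-passage time ~ {mean_fpt:.1f} steps")
```

The empirical mean should sit near barrier² = 25 steps, the classical result for a symmetric walk; the paper's point is that the empirical distribution's shape (crossover or not), rather than its mean, discriminates between candidate models.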
Quantitative analysis of multiple sclerosis: a feasibility study
NASA Astrophysics Data System (ADS)
Li, Lihong; Li, Xiang; Wei, Xinzhou; Sturm, Deborah; Lu, Hongbing; Liang, Zhengrong
2006-03-01
Multiple Sclerosis (MS) is an inflammatory and demyelinating disorder of the central nervous system with a presumed immune-mediated etiology. For treatment of MS, the measurements of white matter (WM), gray matter (GM), and cerebral spinal fluid (CSF) are often used in conjunction with clinical evaluation to provide a more objective measure of MS burden. In this paper, we apply a new unifying automatic mixture-based algorithm for segmentation of brain tissues to quantitatively analyze MS. The method takes into account the following effects that commonly appear in MR imaging: 1) The MR data is modeled as a stochastic process with an inherent inhomogeneity effect of smoothly varying intensity; 2) A new partial volume (PV) model is built in establishing the maximum a posterior (MAP) segmentation scheme; 3) Noise artifacts are minimized by a priori Markov random field (MRF) penalty indicating neighborhood correlation from tissue mixture. The volumes of brain tissues (WM, GM) and CSF are extracted from the mixture-based segmentation. Experimental results of feasibility studies on quantitative analysis of MS are presented.
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
Normal mixture distribution models have been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR), and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distribution model. First, we fit the model to the observed data, an application of normal mixture distributions in empirical finance. Second, we apply the model in risk analysis, evaluating VaR and CVaR with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating VaR and CVaR, capturing the stylized facts of non-normality and leptokurtosis in the return distribution.
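Given a fitted normal mixture, VaR is the lower-tail quantile of the mixture CDF and CVaR is the tail expectation. A sketch of that computation; the weights and component parameters below are hypothetical, not the FBMKLCI estimates:

```python
from math import erf, exp, pi, sqrt

def phi(z):   # standard normal pdf
    return exp(-0.5 * z * z) / sqrt(2 * pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1 + erf(z / sqrt(2)))

def mixture_cdf(x, w, mu, sigma):
    return sum(wi * Phi((x - mi) / si) for wi, mi, si in zip(w, mu, sigma))

def var_cvar(alpha, w, mu, sigma, lo=-10.0, hi=10.0):
    """Lower-tail VaR (alpha-quantile) of a normal mixture by bisection,
    and CVaR from the closed-form normal partial expectation."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if mixture_cdf(mid, w, mu, sigma) < alpha:
            lo = mid
        else:
            hi = mid
    q = (lo + hi) / 2
    # E[X 1{X<=q}] for each normal component is mu*Phi(z) - sigma*phi(z)
    tail_mean = sum(wi * (mi * Phi((q - mi) / si) - si * phi((q - mi) / si))
                    for wi, mi, si in zip(w, mu, sigma))
    return q, tail_mean / alpha

# Hypothetical two-component mixture of monthly returns:
w, mu, sigma = [0.8, 0.2], [0.01, -0.02], [0.04, 0.10]
VaR, CVaR = var_cvar(0.05, w, mu, sigma)
print(f"5% VaR = {VaR:.4f}, 5% CVaR = {CVaR:.4f}")
```

With a single component the routine reduces to the usual normal VaR/CVaR, which is a convenient sanity check on an implementation.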
Kernel Regression Estimation of Fiber Orientation Mixtures in Diffusion MRI
Cabeen, Ryan P.; Bastin, Mark E.; Laidlaw, David H.
2016-01-01
We present and evaluate a method for kernel regression estimation of fiber orientations and associated volume fractions for diffusion MR tractography and population-based atlas construction in clinical imaging studies of brain white matter. This is a model-based image processing technique in which representative fiber models are estimated from collections of component fiber models in model-valued image data. This extends prior work in nonparametric image processing and multi-compartment processing to provide computational tools for image interpolation, smoothing, and fusion with fiber orientation mixtures. In contrast to related work on multi-compartment processing, this approach is based on directional measures of divergence and includes data-adaptive extensions for model selection and bilateral filtering. This is useful for reconstructing complex anatomical features in clinical datasets analyzed with the ball-and-sticks model, and our framework’s data-adaptive extensions are potentially useful for general multi-compartment image processing. We experimentally evaluate our approach with both synthetic data from computational phantoms and in vivo clinical data from human subjects. With synthetic data experiments, we evaluate performance based on errors in fiber orientation, volume fraction, compartment count, and tractography-based connectivity. With in vivo data experiments, we first show improved scan-rescan reproducibility and reliability of quantitative fiber bundle metrics, including mean length, volume, streamline count, and mean volume fraction. We then demonstrate the creation of a multi-fiber tractography atlas from a population of 80 human subjects. In comparison to single tensor atlasing, our multi-fiber atlas shows more complete features of known fiber bundles and includes reconstructions of the lateral projections of the corpus callosum and complex fronto-parietal connections of the superior longitudinal fasciculus I, II, and III. PMID:26691524
Case Studies in Modelling, Control in Food Processes.
Glassey, J; Barone, A; Montague, G A; Sabou, V
This chapter discusses the importance of modelling and control in increasing food process efficiency and ensuring product quality. Various approaches to both modelling and control in food processing are set in the context of the specific challenges in this industrial sector and latest developments in each area are discussed. Three industrial case studies are used to demonstrate the benefits of advanced measurement, modelling and control in food processes. The first case study illustrates the use of knowledge elicitation from expert operators in the process for the manufacture of potato chips (French fries) and the consequent improvements in process control to increase the consistency of the resulting product. The second case study highlights the economic benefits of tighter control of an important process parameter, moisture content, in potato crisp (chips) manufacture. The final case study describes the use of NIR spectroscopy in ensuring effective mixing of dry multicomponent mixtures and pastes. Practical implementation tips and infrastructure requirements are also discussed.
Cojocaru, C; Khayet, M; Zakrzewska-Trznadel, G; Jaworska, A
2009-08-15
The factorial design of experiments and desirability function approach have been applied for multi-response optimization of a pervaporation separation process. Two aqueous organic solutions were considered as model mixtures: water/acetonitrile and water/ethanol. Two responses were employed in the multi-response optimization of pervaporation: total permeate flux and organic selectivity. The effects of three experimental factors (feed temperature, initial concentration of the organic compound in the feed solution, and downstream pressure) on the pervaporation responses have been investigated. The experiments were performed according to a 2^3 full factorial experimental design. The factorial models were obtained from the experimental design and validated statistically by analysis of variance (ANOVA). The spatial representations of the response functions were drawn together with the corresponding contour line plots. The factorial models were used to develop the overall desirability function, and overlap contour plots were presented to identify the desirability zone and determine the optimum point. The optimal operating conditions were found to be, for the water/acetonitrile mixture, a feed temperature of 55 degrees C, an initial concentration of 6.58%, and a downstream pressure of 13.99 kPa, and for the water/ethanol mixture a feed temperature of 55 degrees C, an initial concentration of 4.53%, and a downstream pressure of 9.57 kPa. Under these optimum conditions an improvement of both the total permeate flux and the selectivity was observed experimentally.
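The desirability step can be sketched as follows, using the one-sided "larger is better" Derringer-Suich form; the scaled flux and selectivity readings below are hypothetical, for illustration only:

```python
def desirability_larger_is_better(y, low, target, weight=1.0):
    """One-sided desirability: 0 at or below `low`, 1 at or above `target`,
    a power ramp in between (weight=1 gives a linear ramp)."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities: one poor response
    (d close to 0) drags the overall score toward 0."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical responses already scaled to their acceptable ranges [0, 1]:
d_flux = desirability_larger_is_better(0.82, 0.0, 1.0)
d_sel = desirability_larger_is_better(0.64, 0.0, 1.0)
D = overall_desirability([d_flux, d_sel])
print(f"overall desirability D = {D:.3f}")
```

In the study's workflow, each fitted factorial model predicts a response over the factor space, each prediction is mapped to a desirability, and the factor settings maximizing the geometric mean D are taken as the optimum.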
Parametric identification of the process of preparing ceramic mixture as an object of control
NASA Astrophysics Data System (ADS)
Galitskov, Stanislav; Nazarov, Maxim; Galitskov, Konstantin
2017-10-01
Manufacture of ceramic materials and products largely depends on the preparation of the clay raw material. The main step here is mixing, which in industrial production is mostly done in cross-compound continuous clay mixers with steam humidification. The authors identified features of the dynamics of this technological stage, which is itself a non-linear control object with distributed parameters. When solving practical automation tasks for a certain class of ceramic materials production, it is important to perform parametric identification of the moving clay. In this paper the task is solved with computational models approximated to a particular section of the clay mixer along its length. The research introduces a methodology of computational experiments as applied to the designed computational model. Parametric identification of the dynamic links was carried out from transient characteristics. The experiments showed that the control object in question is highly non-stationary. The results are oriented toward synthesizing a multidimensional automatic control system for the preparation of ceramic mixtures with specified humidity and temperature values under major disturbances to the technological process.
Monitoring and modeling of ultrasonic wave propagation in crystallizing mixtures
NASA Astrophysics Data System (ADS)
Marshall, T.; Challis, R. E.; Tebbutt, J. S.
2002-05-01
The utility of ultrasonic compression wave techniques for monitoring crystallization processes is investigated in a study of the seeded crystallization of copper II sulfate pentahydrate from aqueous solution. Simple models are applied to predict crystal yield, crystal size distribution and the changing nature of the continuous phase. A scattering model is used to predict the ultrasonic attenuation as crystallization proceeds. Experiments confirm that modeled attenuation is in agreement with measured results.
Theoretical Thermodynamics of Mixtures at High Pressures
NASA Technical Reports Server (NTRS)
Hubbard, W. B.
1985-01-01
The development of an understanding of the chemistry of mixtures of metallic hydrogen and abundant, higher-Z material such as oxygen, carbon, etc., is important for understanding of fundamental processes of energy release, differentiation, and development of atmospheric abundances in the Jovian planets. It provides a significant theoretical base for the interpretation of atmospheric elemental abundances to be provided by atmospheric entry probes in coming years. Significant differences are found when non-perturbative approaches such as Thomas-Fermi-Dirac (TFD) theory are used. Mapping of the phase diagrams of such binary mixtures in the pressure range from approx. 10 Mbar to approx. 1000 Mbar, using results from three-dimensional TFD calculations, is undertaken. Derivation of a general and flexible thermodynamic model for such binary mixtures in the relevant pressure range was facilitated by the following breakthrough: there exists an accurate and fairly simple thermodynamic representation of a liquid two-component plasma (TCP) in which the Helmholtz free energy is represented as a suitable linear combination of terms dependent only on density and terms which depend only on the ion coupling parameter. It is found that the crystal energies of mixtures of H-He, H-C, and H-O can be satisfactorily reproduced by the same type of model, except that an effective, density-dependent ionic charge must be used in place of the actual total ionic charge.
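The free-energy decomposition described above can be sketched in symbols. This is a hedged reconstruction: the symbols F_e, f, ρ, Γ, Z_eff, and a (the ion-sphere radius) are assumed notation, not taken from the paper:

```latex
% Helmholtz free energy of the liquid two-component plasma (assumed notation):
% a density-only term plus an ionic term depending only on the coupling
% parameter \Gamma, with an effective, density-dependent ionic charge.
F(\rho, T) \approx F_{\mathrm{e}}(\rho) + N k_{B} T \, f(\Gamma),
\qquad
\Gamma = \frac{Z_{\mathrm{eff}}(\rho)^{2} e^{2}}{a \, k_{B} T}
```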
Supercritical separation process for complex organic mixtures
Chum, H.L.; Filardo, G.
1990-10-23
A process is disclosed for separating low molecular weight components from complex aqueous organic mixtures. The process includes preparing a separation solution of supercritical carbon dioxide with an effective amount of an entrainer to modify the solvation power of the supercritical carbon dioxide and extract preselected low molecular weight components. The separation solution is maintained at a temperature of at least about 70°C and a pressure of at least about 1,500 psi. The separation solution is then contacted with the organic mixtures while maintaining the temperature and pressure as above until the mixtures and solution reach equilibrium, to extract the preselected low molecular weight components from the organic mixtures. Finally, the entrainer/extracted components portion of the equilibrium mixture is isolated from the separation solution. 1 fig.
Process for separating nitrogen from methane using microchannel process technology
Tonkovich, Anna Lee [Marysville, OH; Qiu, Dongming [Dublin, OH; Dritz, Terence Andrew [Worthington, OH; Neagle, Paul [Westerville, OH; Litt, Robert Dwayne [Westerville, OH; Arora, Ravi [Dublin, OH; Lamont, Michael Jay [Hilliard, OH; Pagnotto, Kristina M [Cincinnati, OH
2007-07-31
The disclosed invention relates to a process for separating methane or nitrogen from a fluid mixture comprising methane and nitrogen, the process comprising: (A) flowing the fluid mixture into a microchannel separator, the microchannel separator comprising a plurality of process microchannels containing a sorption medium, the fluid mixture being maintained in the microchannel separator until at least part of the methane or nitrogen is sorbed by the sorption medium, and removing non-sorbed parts of the fluid mixture from the microchannel separator; and (B) desorbing the methane or nitrogen from the sorption medium and removing the desorbed methane or nitrogen from the microchannel separator. The process is suitable for upgrading methane from coal mines, landfills, and other sub-quality sources.
Influence of apple pomace inclusion on the process of animal feed pelleting.
Maslovarić, Marijana D; Vukmirović, Đuro; Pezo, Lato; Čolović, Radmilo; Jovanović, Rade; Spasevski, Nedeljka; Tolimir, Nataša
2017-08-01
Apple pomace (AP) is the main by-product of apple juice production. Large amounts of this material disposed into landfills can cause serious environmental problems. One of the solutions is to utilise AP as animal feed. The aim of this study was to investigate the impact of dried AP inclusion into model mixtures made from conventional feedstuffs on pellet quality and pellet press performance. Three model mixtures, with different ratios of maize, sunflower meal and AP, were pelleted. Response surface methodology (RSM) was applied when designing the experiment. The simultaneous and interactive effects of apple pomace share (APS) in the mixtures, die thickness (DT) of the pellet press and initial moisture content of the mixtures (M), on pellet quality and production parameters were investigated. Principal component analysis (PCA) and standard score (SS) analysis were applied for comprehensive analysis of the experimental data. The increase in APS led to an improvement of pellet quality parameters: pellet durability index (PDI), hardness (H) and proportion of fines in pellets. The increase in DT and M resulted in pellet quality improvement. The increase in DT and APS resulted in higher energy consumption of the pellet press. APS was the most influential variable for PDI and H calculation, while APS and DT were the most influential variables in the calculation of pellet press energy consumption. PCA showed that the first two principal components could be considered sufficient for data representation. In conclusion, addition of dried AP to feed model mixtures significantly improved the quality of the pellets.
Compressive strength and hydration processes of concrete with recycled aggregates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koenders, Eduardus A.B., E-mail: e.a.b.koenders@coc.ufrj.br; Microlab, Delft University of Technology; Pepe, Marco, E-mail: mapepe@unisa.it
2014-02-15
This paper deals with the correlation between the time evolution of the degree of hydration and the compressive strength of Recycled Aggregate Concrete (RAC) for different water to cement ratios and initial moisture conditions of the Recycled Concrete Aggregates (RCAs). Particularly, the influence of such moisture conditions is investigated by monitoring the hydration process and determining the compressive strength development of fully dry or fully saturated recycled aggregates in four RAC mixtures. Hydration processes are monitored via temperature measurements in hardening concrete samples and the time evolution of the degree of hydration is determined through a 1D hydration and heat flow model. The effect of the initial moisture condition of RCAs employed in the considered concrete mixtures clearly emerges from this study. In fact, a novel conceptual method is proposed to predict the compressive strength of RAC-systems, from the initial mixture parameters and the hardening conditions. -- Highlights: •The concrete industry is more and more concerned with sustainability issues. •The use of recycled aggregates is a promising solution to enhance sustainability. •Recycled aggregates affect both hydration processes and compressive strength. •A fundamental approach is proposed to unveil the influence of recycled aggregates. •Some experimental comparisons are presented to validate the proposed approach.
NASA Astrophysics Data System (ADS)
Fomin, P. A.
2018-03-01
Two-step approximate models of the chemical kinetics of detonation combustion of (i) a single hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures whose reaction products contain carbon particles. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. The models can also be used for calculating the thermodynamic parameters of a mixture in a state of chemical equilibrium.
Discrete element modelling of bedload transport
NASA Astrophysics Data System (ADS)
Loyer, A.; Frey, P.
2011-12-01
Discrete element modelling (DEM) has been widely used in solid mechanics and in granular physics. In this type of modelling, each individual particle is taken into account and intergranular interactions are modelled with simple laws (e.g. Coulomb friction). Gravity and contact forces make it possible to solve the dynamical behaviour of the system. DEM is useful for modelling configurations and accessing parameters not directly available in laboratory experimentation, hence the term "numerical experimentation" sometimes used to describe DEM. DEM was used to model bedload transport experiments performed at the particle scale with spherical glass beads in a steep and narrow flume. Bedload is the larger material that is transported on the bed of stream channels. It has a great geomorphic impact. The physical processes ruling bedload transport, and more generally coarse-particle/fluid systems, are poorly known, arguably because granular interactions have been somewhat neglected. An existing DEM code (PFC3D) already computing granular interactions was used. We implemented basic hydrodynamic forces to model the fluid interactions (buoyancy, drag, lift). The idea was to use the minimum number of ingredients needed to match the experimental results. Experiments were performed with one-size and two-size mixtures of coarse spherical glass beads entrained by a shallow turbulent and supercritical water flow down a steep channel with a mobile bed. The particle diameters were 4 and 6 mm, the channel width 6.5 mm (about the same width as the coarser particles) and the channel inclination was typically 10%. The water flow rate and the particle rate were kept constant at the upstream entrance and adjusted to obtain bedload transport equilibrium. Flows were filmed from the side by a high-speed camera. Image processing algorithms made it possible to determine the position, velocity and trajectory of both smaller and coarser particles.
Modelled and experimental particle velocity and concentration depth profiles were compared in the case of the one-size mixture. The turbulent fluid velocity profile was prescribed and attached to the variable upper bedline. Provided the upper bedline was calculated with a refined space and time resolution, a fair agreement between DEM and experiments was reached. Experiments with two-size mixtures were designed to study vertical grain size sorting or segregation patterns. Sorting is arguably the reason why the predictive capacity of bedload formulations remains so poor. Modelling of the two-size mixture was also performed and gave promising qualitative results.
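The hydrodynamic coupling mentioned above can be sketched at its simplest: one particle, gravity, buoyancy, and an assumed linear drag (an illustration of the force balance only; actual DEM codes add contact forces, lift, and non-linear drag, and the drag coefficient below is hypothetical):

```python
# Assumed parameters: a 6 mm glass bead settling in water under linear drag.
rho_p, rho_f = 2500.0, 1000.0    # particle / fluid density, kg/m^3
d = 6e-3                          # particle diameter, m
g = 9.81                          # gravity, m/s^2
vol = 3.141592653589793 * d**3 / 6
m = rho_p * vol                   # particle mass, kg
k_drag = 0.05                     # assumed linear drag coefficient, kg/s

# Explicit Euler integration of m*dv/dt = (rho_p - rho_f)*vol*g - k_drag*v
v, dt = 0.0, 1e-4
for _ in range(20000):            # 2 s of simulated time
    f = (rho_p - rho_f) * vol * g - k_drag * v
    v += f / m * dt
print(f"settling velocity ~ {v:.4f} m/s")
```

The velocity relaxes to the terminal value where drag balances submerged weight; in a full DEM run this per-particle update is performed alongside the contact-force resolution at every time step.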
Modeling and analysis of personal exposures to VOC mixtures using copulas
Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart
2014-01-01
Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs), with the aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction, which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of fitted models is evaluated using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often a single compound dominated the mixture; however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration.
Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10^-3 for about 10% of RIOPA participants. Factors affecting the likelihood of high concentration mixtures included city, participant ethnicity, and house air exchange rates. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy, and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that poorly fit lognormal distributions and that represent the greatest risks. PMID:24333991
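The copula idea, coupling arbitrary marginals through a shared dependency structure, can be sketched with a Gaussian copula (used here as a simple pedagogical stand-in; the study found Gumbel and t copulas, which emphasize tail dependence, fit best) and assumed lognormal marginals:

```python
import numpy as np

rng = np.random.default_rng(42)
n, rho = 5000, 0.7

# 1. Correlated standard normals carry the dependency (the copula part).
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# 2. Mapping each margin through Phi and then the lognormal quantile
#    collapses to exp(mu + sigma * z); other marginals need an explicit ppf.
mu = np.array([0.5, -0.2])     # assumed log-scale means of two VOCs
sigma = np.array([1.0, 0.8])   # assumed log-scale standard deviations
x = np.exp(mu + sigma * z)     # dependent lognormal "concentrations"

r = np.corrcoef(np.log(x[:, 0]), np.log(x[:, 1]))[0, 1]
print(f"log-scale correlation ~ {r:.2f}")  # close to the copula's rho
```

The same construction extends to more components and to other copula families by replacing the correlated-normal step with draws from the chosen copula.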
ERIC Educational Resources Information Center
Duarte, B. P. M.; Coelho Pinheiro, M. N.; Silva, D. C. M.; Moura, M. J.
2006-01-01
The experiment described is an excellent opportunity to apply theoretical concepts of distillation, thermodynamics of mixtures and process simulation at laboratory scale, and simultaneously enhance the ability of students to operate, control and monitor complex units.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dang-Long, T., E-mail: 3TE14098G@kyushu-u.ac.jp; Quang-Tuyen, T., E-mail: tran.tuyen.quang.314@m.kyushu-u.ac.jp; Shiratori, Y., E-mail: shiratori.yusuke.500@m.kyushu-u.ac.jp
2016-06-03
Produced from the organic matter of wastes (bio-wastes) through a fermentation process, biogas is mainly composed of CH4 and CO2 and can be considered a secondary energy carrier derived from solar energy. Generating electricity from biogas through the electrochemical process in fuel cells is a state-of-the-art technology with higher energy conversion efficiency and no harmful emissions compared to the combustion process in heat engines. Benefiting from high operating temperature, through direct internal reforming and the activation of electrochemical reactions that increase overall system efficiency, a solid oxide fuel cell (SOFC) system operated with biogas is a promising candidate for distributed power generation in rural applications, reducing the environmental problems caused by greenhouse gases and bio-wastes. CO2 reforming of CH4 and electrochemical oxidation of the produced syngas (H2-CO mixture) are the two main reaction processes within the porous anode material of an SOFC. Here the catalytic and electrochemical behavior of a Ni-ScSZ (scandia-stabilized zirconia) anode fed with CH4-CO2 mixtures as simulated biogas at 800 °C was evaluated. The results showed that CO2 strongly influences both reaction processes. An increase in CO2 partial pressure decreased the anode overvoltage, although the open-circuit voltage dropped. In addition, simulations based on a power-law model for an equimolar CH4-CO2 mixture revealed that the coking hazard could be suppressed along the fuel flow channel under both open-circuit and closed-circuit conditions.
NASA Astrophysics Data System (ADS)
Ou, Yihong; Du, Yang; Jiang, Xingsheng; Wang, Dong; Liang, Jianjun
2010-04-01
The study of the special phenomena, occurrence process and control mechanism of gasoline-air mixture thermal ignition in underground oil depots is of important academic and applied value for enriching the scientific theory of explosion safety, developing protective technology against fire and decreasing the number of fire accidents. In this paper, the thermal ignition process of a gasoline-air mixture in a model underground oil-depot tunnel was investigated using both experiments and numerical simulation. The calculated results were validated against the experimental data. Five stages of the thermal ignition course are defined and accurately described for the first time: a slow oxidation stage, a rapid oxidation stage, a fire stage, a flameout stage and a quench stage. According to their order of magnitude in concentration, the species are divided into six categories, which lays the foundation for explosion-proof design based on the roles of the different species. The influence of spatial scale on thermal ignition in small-scale spaces was identified: ignition is inhibited because wall reflection causes the fluid to recirculate and changes the distribution of heat and mass, so that the progress of the chemical reactions throughout the space is also changed. The novel mathematical model established in this paper, which unifies chemical kinetics and thermodynamics, provides a supplementary means for analyzing the process and mechanism of thermal ignition.
A novel optimization algorithm for MIMO Hammerstein model identification under heavy-tailed noise.
Jin, Qibing; Wang, Hehe; Su, Qixin; Jiang, Beiyan; Liu, Qie
2018-01-01
In this paper, we study the system identification of multi-input multi-output (MIMO) Hammerstein processes under typical heavy-tailed noise. To the best of our knowledge, there is no general analytical method to solve this identification problem. Motivated by this, we propose a general identification method based on a Gaussian-Mixture Distribution intelligent optimization algorithm (GMDA). The nonlinear part of the Hammerstein process is modeled by a Radial Basis Function (RBF) neural network, and the identification problem is converted into an optimization problem. To overcome the drawbacks of analytical identification methods in the presence of heavy-tailed noise, a meta-heuristic optimizer, the Cuckoo Search (CS) algorithm, is used. To improve its performance on this identification problem, the Gaussian-Mixture Distribution (GMD) and GMD sequences are introduced into the standard CS algorithm. Numerical simulations for different MIMO Hammerstein models are carried out, and the simulation results verify the effectiveness of the proposed GMDA. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
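A Hammerstein process is a static nonlinearity feeding a linear dynamic block. As a minimal, hypothetical sketch (single-channel, with an illustrative polynomial nonlinearity and first-order dynamics; the paper's MIMO/RBF formulation is considerably richer, and all parameter names below are assumptions):

```python
def hammerstein_step(u, y_prev, a=0.7, b=0.3, c1=1.0, c2=0.5):
    """One simulation step of a toy Hammerstein process:
    static part v = c1*u + c2*u**2, then linear dynamic part
    y[k] = a*y[k-1] + b*v[k]."""
    v = c1 * u + c2 * u ** 2
    return a * y_prev + b * v

def simulate(inputs, y0=0.0):
    """Run the toy process over an input sequence."""
    y, out = y0, []
    for u in inputs:
        y = hammerstein_step(u, y)
        out.append(y)
    return out
```

Identification then amounts to searching over the nonlinearity and the dynamic coefficients so that simulated outputs match measured ones, which is the optimization problem that a meta-heuristic such as GMDA addresses.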
Lifetime of Feshbach dimers in a Fermi-Fermi mixture of 6Li and 40K
NASA Astrophysics Data System (ADS)
Jag, M.; Cetina, M.; Lous, R. S.; Grimm, R.; Levinsen, J.; Petrov, D. S.
2016-12-01
We present a joint experimental and theoretical investigation of the lifetime of weakly bound dimers formed near narrow interspecies Feshbach resonances in mass-imbalanced Fermi-Fermi systems, considering the specific example of a mixture of 6Li and 40K atoms. Our work addresses the central question of the increase in the stability of the dimers resulting from Pauli suppression of collisional losses, which is a well-known effect in mass-balanced fermionic systems near broad resonances. We present measurements of the spontaneous dissociation of dimers in dilute samples, and of the collisional losses in dense samples arising from both dimer-dimer processes and from atom-dimer processes. We find that all loss processes are suppressed close to the Feshbach resonance. Our general theoretical approach for fermionic mixtures near narrow Feshbach resonances provides predictions for the suppression of collisional decay as a function of the detuning from resonance, and we find excellent agreement with the experimental benchmarks provided by our 40K-6Li system. We finally present model calculations for other Feshbach-resonant Fermi-Fermi systems, which are of interest for experiments in the near future.
Honeybees Learn Odour Mixtures via a Selection of Key Odorants
Reinhard, Judith; Sinclair, Michael; Srinivasan, Mandyam V.; Claudianos, Charles
2010-01-01
Background The honeybee has to detect, process and learn numerous complex odours from her natural environment on a daily basis. Most of these odours are floral scents, which are mixtures of dozens of different odorants. To date, it is still unclear how the bee brain unravels the complex information contained in scent mixtures. Methodology/Principal Findings This study investigates learning of complex odour mixtures in honeybees using a simple olfactory conditioning procedure, the Proboscis-Extension-Reflex (PER) paradigm. Restrained honeybees were trained to three scent mixtures composed of 14 floral odorants each, and then tested with the individual odorants of each mixture. Bees did not respond to all odorants of a mixture equally: They responded well to a selection of key odorants, which were unique for each of the three scent mixtures. Bees showed less or very little response to the other odorants of the mixtures. The bees' response to mixtures composed of only the key odorants was as good as to the original mixtures of 14 odorants. A mixture composed of the other, non-key odorants elicited a significantly lower response. Neither an odorant's volatility nor its molecular structure, nor learning efficiencies for individual odorants, affected whether an odorant became a key odorant for a particular mixture. Odorant concentration had a positive effect, with odorants at high concentration likely to become key odorants. Conclusions/Significance Our study suggests that the brain processes complex scent mixtures by predominantly learning information from selected key odorants. Our observations on key odorant learning lend significant support to previous work on olfactory learning and mixture processing in honeybees. PMID:20161714
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shneider, Mikhail N.; Zhang Zhili; Miles, Richard B.
2008-07-15
Resonant enhanced multiphoton ionization (REMPI) and electron avalanche ionization (EAI) are measured simultaneously in Ar:Xe mixtures at different partial pressures of the mixture components. A simple theory for combined REMPI+EAI in gas mixtures is developed. It is shown that the REMPI electrons seed the avalanche process, and thus the avalanche process amplifies the REMPI signal. Possible applications are discussed.
Estimation and Model Selection for Finite Mixtures of Latent Interaction Models
ERIC Educational Resources Information Center
Hsu, Jui-Chen
2011-01-01
Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to handle if unobserved population heterogeneity exists in the endogenous latent variables of the nonlinear structural equation models. The current study estimates a mixture of latent interaction…
Scale Mixture Models with Applications to Bayesian Inference
NASA Astrophysics Data System (ADS)
Qin, Zhaohui S.; Damien, Paul; Walker, Stephen
2003-11-01
Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixture of uniform distributions.
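One classical construction of this kind represents a standard normal as a scale mixture of uniforms: X = U·sqrt(V) with U ~ Uniform(-1, 1) and V ~ Gamma(shape 3/2, scale 2). A stdlib-only sketch of the generative step (this is the textbook normal case, not the heteroscedastic or skewed extensions treated in the paper):

```python
import math
import random

def sample_normal_via_uniform_mixture(n, seed=1):
    """Draw n approximately N(0, 1) samples using the
    scale-mixture-of-uniforms representation:
    X = U * sqrt(V), U ~ Uniform(-1, 1), V ~ Gamma(3/2, scale=2)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = rng.gammavariate(1.5, 2.0)   # mixing (scale) variable
        u = rng.uniform(-1.0, 1.0)       # uniform kernel
        out.append(u * math.sqrt(v))
    return out
```

Conditioning on V turns awkward likelihoods into flat (uniform) ones, which is what makes Gibbs-style Bayesian inference convenient for this family.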
NASA Astrophysics Data System (ADS)
Istomin, V. A.
2018-05-01
The software package Planet Atmosphere Investigator of Non-equilibrium Thermodynamics (PAINeT) has been developed for studying the non-equilibrium effects associated with electronic excitation, chemical reactions and ionization. These studies are necessary for modeling processes in shock tubes, in high-enthalpy flows, in nozzles or jet engines, in combustion and explosion processes, and in modern plasma-chemical and laser technologies. The advantages and capabilities of the package are stated. Within the framework of the package, calculations based on kinetic theory approximations (one-temperature and state-to-state approaches) are carried out, and the limits of applicability of a simplified description of shock-heated air flows, or of any other mixture chosen by the user, are given. Using kinetic theory algorithms, a numerical calculation of the heat fluxes and relaxation terms can be performed, which is necessary for further comparison of engineering simulations with experimental data. The influence of state-to-state distributions over electronic energy levels on the thermal conductivity and diffusion coefficients, heat fluxes and diffusion velocities of the components of various gas mixtures behind shock waves is studied. Using the software package, the accuracy of different approximations of the kinetic theory of gases is estimated. As an example, a state-resolved ionized atomic mixture of N/N+/O/O+/e- is considered. It is shown that the state-resolved diffusion coefficients of neutral and ionized species vary from level to level. By comparing the results of engineering applications with those given by PAINeT, recommendations for adequate model selection are proposed.
Issa Hamoud, Houeida; Finqueneisel, Gisèle; Azambre, Bruno
2017-06-15
In this study, the removal of binary mixtures of dyes with similar (Orange II/Acid Green 25) or opposite charges (Orange II/Malachite Green) was investigated either by simple adsorption on ceria or by the heterogeneous Fenton reaction in the presence of H2O2. First, the CeO2 nanocatalyst, with a high specific surface area (269 m²/g) and small crystal size (5 nm), was characterized using XRD, Raman spectroscopy and N2 physisorption at 77 K. The adsorption of single dyes was studied from both thermodynamic and kinetic viewpoints. It is shown that the adsorption of dyes on the ceria surface is highly pH-dependent and follows a pseudo-second-order kinetic model. Adsorption isotherms fit the Langmuir model well, with complete monolayer coverage and, at pH 3, a higher affinity towards Orange II than towards the other dyes. For the (Orange II/Acid Green 25) mixture, both the amounts of dyes adsorbed on the ceria surface and the discoloration rates measured in Fenton experiments were decreased by comparison with the single dyes, owing to competition for adsorption on the same surface Ce(x+) sites and competition for reaction with hydroxyl radicals, respectively. The behavior of the (Orange II/Malachite Green) mixture is markedly different. Dyes with opposite charges undergo paired adsorption on ceria as well as homogeneous and heterogeneous coagulation/flocculation processes, but can also be removed by the heterogeneous Fenton process. Copyright © 2016 Elsevier Ltd. All rights reserved.
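The two fitted models mentioned in the abstract have simple closed forms; a hedged sketch (parameter names and the test values are purely illustrative, not the paper's fitted constants):

```python
def langmuir_q(c_eq, q_max, k_l):
    """Langmuir isotherm: adsorbed amount q at equilibrium
    concentration c_eq, with monolayer capacity q_max and
    affinity constant k_l: q = q_max*k_l*c / (1 + k_l*c)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

def pseudo_second_order_q(t, q_e, k2):
    """Integrated pseudo-second-order kinetic model:
    q(t) = k2*q_e**2*t / (1 + k2*q_e*t), approaching the
    equilibrium uptake q_e at long contact times."""
    return k2 * q_e ** 2 * t / (1.0 + k2 * q_e * t)
```

At c_eq = 1/k_l the Langmuir model gives exactly half-coverage (q = q_max/2), a convenient sanity check when fitting isotherm data.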
Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J
2010-09-17
In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models for various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, the excess molar volume (V(E)). In the present study, we use a set of mixture descriptors that we earlier designed specifically to account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen-bond and thermodynamic descriptors are the most important in determining excess molar volume (V(E)), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary, and possibly even more complex, mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Astuti, Ani Budi; Iriawan, Nur; Irhamah; Kuswanto, Heri
2017-12-01
Bayesian mixture modeling requires a stage in which the most appropriate number of mixture components is identified, so that the resulting mixture model fits the data in a data-driven way. Reversible Jump Markov Chain Monte Carlo (RJMCMC), a combination of the reversible jump (RJ) concept and Markov Chain Monte Carlo (MCMC), has been used by several researchers to solve the problem of identifying the number of mixture components when it is not known with certainty. In its application, RJMCMC uses birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm must be developed further according to the case observed. The purpose of this study is to assess the performance of a developed RJMCMC algorithm in identifying the unknown number of mixture components in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in a Bayesian normal mixture model for the Indonesian microarray data, in which the number of mixture components is not known with certainty.
Protein and gene model inference based on statistical modeling in k-partite graphs.
Gerster, Sarah; Qeli, Ermir; Ahrens, Christian H; Bühlmann, Peter
2010-07-06
One of the major goals of proteomics is the comprehensive and accurate description of a proteome. Shotgun proteomics, the method of choice for the analysis of complex protein mixtures, requires that experimentally observed peptides are mapped back to the proteins they were derived from. This process is also known as protein inference. We present Markovian Inference of Proteins and Gene Models (MIPGEM), a statistical model based on clearly stated assumptions to address the problem of protein and gene model inference for shotgun proteomics data. In particular, we are dealing with dependencies among peptides and proteins using a Markovian assumption on k-partite graphs. We are also addressing the problems of shared peptides and ambiguous proteins by scoring the encoding gene models. Empirical results on two control datasets with synthetic mixtures of proteins and on complex protein samples of Saccharomyces cerevisiae, Drosophila melanogaster, and Arabidopsis thaliana suggest that the results with MIPGEM are competitive with existing tools for protein inference.
Campbell, Kieran R; Yau, Christopher
2017-03-15
Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference, based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of the genes driving the bifurcation process. We additionally propose an empirical-Bayes-like extension that deals with the high levels of zero-inflation in single-cell RNA-seq data and quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses.
Premixing quality and flame stability: A theoretical and experimental study
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.; Heywood, J. B.; Tabaczynski, R. J.
1979-01-01
Models for predicting flame ignition and blowout in a combustor primary zone are presented. A correlation for the blowoff velocity of premixed turbulent flames is developed using the basic quantities of turbulent flow and the laminar flame speed. A statistical model employing a Monte Carlo calculation procedure is developed to account for nonuniformities in a combustor primary zone. An overall kinetic rate equation is used to describe the fuel oxidation process. The model is used to predict the lean ignition and blowout limits of premixed turbulent flames; the effects of mixture nonuniformity on the lean ignition limit are explored using an assumed distribution of fuel-air ratios. Data on the effects of variations in inlet temperature, reference velocity and mixture uniformity on the lean ignition and blowout limits of gaseous propane-air flames are presented.
NASA Astrophysics Data System (ADS)
Zhang, Yu; Li, Fei; Zhang, Shengkai; Zhu, Tingting
2017-04-01
Synthetic Aperture Radar (SAR) is significantly important for polar remote sensing since it can provide continuous observations day and night and in all weather. SAR can be used to extract surface roughness information, characterized by the variance of dielectric properties across different polarization channels, which makes it possible to observe different ice types and the surface structure for deformation analysis. In November 2016, the 33rd Chinese National Antarctic Research Expedition (CHINARE) cruise set sail into the Antarctic sea-ice zone. An accurate spatial distribution of leads in the sea-ice zone is essential for route planning in ship navigation. In this study, the semantic relationship between leads and sea-ice categories is described by a Conditional Random Field (CRF) model, and leads characteristics are modeled by statistical distributions in SAR imagery. In the proposed algorithm, a mixture-statistical-distribution-based CRF is developed by considering the contextual information and the statistical characteristics of sea ice, improving leads detection in Sentinel-1A dual-polarization SAR imagery. The unary and pairwise potentials in the CRF model are constructed by integrating the posterior probabilities estimated from the statistical distributions. For mixture-distribution parameter estimation, the Method of Logarithmic Cumulants (MoLC) is exploited to estimate the parameters of each single distribution, and an iterative Expectation Maximization (EM) algorithm calculates the parameters of the mixture-distribution-based CRF model. In the posterior probability inference, a graph-cut energy minimization method is adopted for the initial leads detection. Post-processing procedures, including an aspect-ratio constraint and spatial smoothing, are utilized to improve the visual result.
The proposed method is validated on Sentinel-1A SAR C-band Extra Wide Swath (EW) Ground Range Detected (GRD) imagery with a pixel spacing of 40 meters near the Prydz Bay area, East Antarctica. The main contributions are as follows: 1) a mixture-statistical-distribution-based CRF algorithm has been developed for leads detection from Sentinel-1A dual-polarization images; 2) an assessment of the proposed mixture-distribution-based CRF method against a single-distribution-based CRF algorithm has been presented; 3) preferable parameter sets, including the statistical distributions, the aspect-ratio threshold and the spatial smoothing window size, have been provided. In the future, the proposed algorithm will be developed for operational processing of the Sentinel series of data sets, owing to its low computational cost and high accuracy in leads detection.
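The EM step for estimating mixture-distribution parameters can be illustrated with a deliberately simplified stand-in: a 1-D two-component Gaussian mixture fitted in pure Python (the actual pipeline above uses SAR-specific distributions initialized via MoLC, which this sketch omits):

```python
import math

def em_two_gaussians(xs, iters=50):
    """Minimal EM for a 1-D two-component Gaussian mixture.
    Initializes the means at the data extremes; returns the
    mixing weights, means and variances."""
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each x
        resp = []
        for x in xs:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step: re-estimate weights, means and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return pi, mu, var
```

In the CRF setting, the resulting posterior responsibilities would feed the unary potentials rather than a simple hard clustering.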
QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.
Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng
2018-05-01
Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half-effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and the mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model, with a coefficient of determination of 0.9366 and a root mean square error of 0.1345, predicted the 45 mixture toxicities, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting the non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.
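The concentration addition baseline against which such QSAR models are compared has a simple closed form: for a mixture in which component i contributes fraction p_i of the total concentration, 1/EC50_mix = Σ p_i/EC50_i. A minimal sketch (the values in the usage check are illustrative, not the study's data):

```python
def ca_ec50(fractions, ec50s):
    """Concentration-addition (CA) prediction of a mixture EC50
    from single-compound EC50s; `fractions` are the concentration
    fractions p_i of each component (summing to 1)."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))
```

Deviations of the observed mixture EC50 from this CA prediction are what the abstract labels synergism (more toxic than predicted) or antagonism (less toxic).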
NASA Astrophysics Data System (ADS)
Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco
2017-04-01
Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm.
This process is repeated until a threshold in the objective function is met or insufficient changes are produced in successive iterations.
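The unit-circle constraint on the mixing weights can be illustrated in a few lines: with weights (cos θ, sin θ) applied to two independent standard fields, the mixture keeps unit marginal variance and the same covariance structure. This sketch shows only the weight constraint, not the forward model, conditioning, or the interpolation of solutions around the circle:

```python
import math

def mix_on_circle(z1, z2, theta):
    """Combine two independent standard random fields with weights
    (cos(theta), sin(theta)). Because cos^2 + sin^2 = 1, the mixed
    field retains unit marginal variance."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * a + s * b for a, b in zip(z1, z2)]
```

Sweeping θ over equally spaced points on the circle then yields the family of candidate fields among which the objective function is minimized.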
Tomei, M Concetta; Mosca Angelucci, Domenica; Ademollo, Nicoletta; Daugulis, Andrew J
2015-03-01
Solid phase extraction performed with commercial polymer beads to treat soil contaminated by chlorophenols (4-chlorophenol, 2,4-dichlorophenol and pentachlorophenol), as single compounds and in a mixture, has been investigated in this study. Soil-water-polymer partition tests were conducted to determine the relative affinities of the single compounds in soil-water and polymer-water pairs. Subsequent soil extraction tests were performed with Hytrel 8206, the polymer showing the highest affinity for the tested chlorophenols. The factors examined were polymer type, moisture content, and contamination level. Increased moisture content (up to 100%) improved the extraction efficiency for all three compounds. Extraction tests at this upper level of moisture content showed removal efficiencies ≥70% for all the compounds and their ternary mixture within 24 h of contact time, in contrast to the weeks and months normally required for conventional ex situ remediation processes. A dynamic model characterizing the rate and extent of decontamination was also formulated, calibrated and validated with the experimental data. The proposed model, based on the simplified approach of "lumped parameters" for the mass transfer coefficients, provided very good predictions of the experimental data for the absorptive removal of contaminants from soil at different individual solute levels. Parameters evaluated from calibration by fitting of single-compound data have been successfully applied to predict the mixture data, with differences between experimental and predicted data in all cases being ≤3%. Copyright © 2014 Elsevier Ltd. All rights reserved.
Evaluating Mixture Modeling for Clustering: Recommendations and Cautions
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2011-01-01
This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…
Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C
2017-01-01
Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, we developed in the present study a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7d) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; Dissolved Organic Carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmental (i.e. realistic) metal concentration ratios. Statistical analysis of mixture effects revealed that observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) in 85% of the treatments with less than 20% error. Moreover, the MMBM predicted the chronic toxicity of the ternary Ni-Zn-Pb mixture at least as accurately as the toxicity of the individual metal treatments (RMSE: mixture = 16; Zn only = 18; Ni only = 17; Pb only = 23).
Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on metal mixture toxicity during chronic exposure and could be used in metal risk assessment frameworks. Copyright © 2016 Elsevier Ltd. All rights reserved.
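The independent action (IA) reference model used in the MMBM combines single-metal effects multiplicatively: the fraction unaffected by the mixture is the product of the fractions unaffected by each metal alone. A minimal sketch (the fractional effects in the usage check are illustrative):

```python
def ia_mixture_inhibition(inhibitions):
    """Independent action (IA): predicted fractional effect of a
    mixture from single-stressor fractional effects, assuming
    strictly non-interacting modes of action:
    E_mix = 1 - prod(1 - E_i)."""
    unaffected = 1.0
    for f in inhibitions:
        unaffected *= (1.0 - f)
    return 1.0 - unaffected
```

In the MMBM, each single-metal fractional effect would itself come from the metal-specific chronic bioavailability (biotic ligand) model before being combined this way.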
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
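The generative step of a gaussian scale mixture is easy to sketch: draw a gaussian sample and multiply it by the square root of a mixer variable. Here the mixer is exponential, which yields a heavy-tailed (Laplace) marginal resembling filter responses to natural images; the paper's model instead assigns mixers probabilistically across sets of filters, which this sketch omits:

```python
import math
import random

def sample_gsm(n, seed=7):
    """Sample n values from a gaussian scale mixture:
    x = sqrt(v) * g with g ~ N(0, 1) and mixer v ~ Exponential(1).
    The marginal is heavier-tailed than a Gaussian."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = rng.expovariate(1.0)     # mixer (scale) variable
        g = rng.gauss(0.0, 1.0)      # local gaussian variable
        out.append(math.sqrt(v) * g)
    return out
```

Dividing each sample by an estimate of its mixer recovers an approximately gaussian variable, which is the intuition behind the divisive-normalization connection mentioned in the abstract.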
Effect of rheological parameters on curing rate during NBR injection molding
NASA Astrophysics Data System (ADS)
Kyas, Kamil; Stanek, Michal; Manas, David; Skrobak, Adam
2013-04-01
In this work, the non-isothermal injection molding process of an NBR rubber mixture was modeled using the finite element method, with the Isayev-Deng curing kinetic model and a generalized Newtonian model with Carreau-WLF viscosity, in order to understand the effect of the volume flow rate, the index of non-Newtonian behavior and the relaxation time on the temperature profile and curing rate. It was found that, for the specific geometry and processing conditions, an increase in the relaxation time or in the index of non-Newtonian behavior increases the curing rate, due to the viscous dissipation taking place at the walls of the flow domain.
Pyrolysis process for producing fuel gas
NASA Technical Reports Server (NTRS)
Serio, Michael A. (Inventor); Kroo, Erik (Inventor); Wojtowicz, Marek A. (Inventor); Suuberg, Eric M. (Inventor)
2007-01-01
Solid waste resource recovery in space is effected by pyrolysis processing, to produce light gases as the main products (CH4, H2, CO2, CO, H2O, NH3) and a reactive carbon-rich char as the main byproduct. Significant amounts of liquid products are formed under less severe pyrolysis conditions, and are cracked almost completely to gases as the temperature is raised. A primary pyrolysis model for the composite mixture is based on an existing model for whole biomass materials, and an artificial neural network models the changes in gas composition with the severity of the pyrolysis conditions.
Evaluation of parameters of color profile models of LCD and LED screens
NASA Astrophysics Data System (ADS)
Zharinov, I. O.; Zharinov, O. O.
2017-12-01
The purpose of the research relates to the problem of parametric identification of the color profile model of LCD (liquid crystal display) and LED (light emitting diode) screens. The color profile model of a screen is based on Grassmann's law of additive color mixture. Mathematically, the problem is to evaluate the unknown parameters (numerical coefficients) of the matrix transformation between different color spaces. Several methods for evaluating these screen-profile coefficients were developed, based either on processing colorimetric measurements or on processing technical documentation data.
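Grassmann's law makes the screen model linear, so the unknown parameters form a 3×3 matrix mapping (linearized) channel intensities to tristimulus values. A sketch with a placeholder matrix (the coefficients below are illustrative, not a measured profile of any actual screen):

```python
# Illustrative 3x3 profile matrix: rows give the X, Y, Z weights
# of the R, G, B channels (placeholder values, not a real profile).
M = [[0.41, 0.36, 0.18],
     [0.21, 0.72, 0.07],
     [0.02, 0.12, 0.95]]

def rgb_to_xyz(rgb, matrix=M):
    """Linear screen model per Grassmann's law: tristimulus values
    XYZ are a matrix transform of the channel drive values."""
    return [sum(matrix[i][j] * rgb[j] for j in range(3))
            for i in range(3)]
```

Additivity is what makes identification tractable: the response to a mixture equals the sum of the single-channel responses, so the nine coefficients can be solved from as few as three measured (RGB, XYZ) pairs, e.g. by least squares over the colorimetric measurements.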
Adamovich, Igor V; Li, Ting; Lempert, Walter R
2015-08-13
This work describes the kinetic mechanism of coupled molecular energy transfer and chemical reactions in low-temperature air, H2-air and hydrocarbon-air plasmas sustained by nanosecond pulse discharges (single-pulse or repetitive pulse burst). The model incorporates electron impact processes, state-specific N2 vibrational energy transfer, reactions of excited electronic species of N2, O2, N and O, and 'conventional' chemical reactions (Konnov mechanism). Effects of diffusion and conduction heat transfer, energy coupled to the cathode layer, and gasdynamic compression/expansion are incorporated as quasi-zero-dimensional corrections. The model is exercised using a combination of freeware (Bolsig+) and commercial software (ChemKin-Pro). The model predictions are validated using time-resolved measurements of temperature and N2 vibrational level populations in nanosecond pulse discharges in air in plane-to-plane and sphere-to-sphere geometry; temperature and OH number density after nanosecond pulse burst discharges in lean H2-air, CH4-air and C2H4-air mixtures; and temperature after the nanosecond pulse discharge burst during plasma-assisted ignition of lean H2-air mixtures, showing good agreement with the data. The model predictions for OH number density in lean C3H8-air mixtures differ from the experimental results, over-predicting its absolute value and failing to predict the transient OH rise and decay after the discharge burst. The agreement with the data for C3H8-air is improved considerably if a different conventional hydrocarbon chemistry reaction set (LLNL methane-n-butane flame mechanism) is used. The results of mechanism validation demonstrate its applicability for analysis of plasma chemical oxidation and ignition of low-temperature H2-air, CH4-air and C2H4-air mixtures using nanosecond pulse discharges.
Kinetic modeling of low-temperature plasma-excited propane-air mixtures demonstrates the need for development of a more accurate 'conventional' chemistry mechanism. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Communication: Concepts and Processes.
ERIC Educational Resources Information Center
DeVito, Joseph A.
A mixture of theoretical and practical essays points up the purposes of, barriers to, and means of facilitating communication. Four models of how people communicate are presented. A series of essays describing communication messages and channels include considerations of "gobbledygook," nonverbal communication by touch, smell, or body movement,…
Modification of Gaussian mixture models for data classification in high energy physics
NASA Astrophysics Data System (ADS)
Štěpánek, Michal; Franc, Jiří; Kůs, Václav
2015-01-01
In high energy physics, we deal with the demanding task of separating signal from background. The Model Based Clustering method involves the estimation of distribution mixture parameters via the Expectation-Maximization algorithm in the training phase and the application of Bayes' rule in the testing phase. Modifications of the algorithm such as weighting, missing data processing, and overtraining avoidance are discussed. Due to the strong dependence of the algorithm on initialization, genetic optimization techniques such as mutation, elitism, parasitism, and rank selection of individuals are also employed. Data pre-processing plays a significant role in the subsequent combination of final discriminants in order to improve signal separation efficiency. Moreover, the results of top quark separation from the Tevatron collider are compared with those of standard multivariate techniques in high energy physics. Results from this study have been used in the measurement of the inclusive top pair production cross section employing the full DØ Tevatron Run II dataset (9.7 fb-1).
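The EM-then-Bayes pipeline described above can be sketched in miniature. The following pure-Python example (illustrative 1-D data, not DØ features) fits a two-component Gaussian mixture by Expectation-Maximization and then classifies points with Bayes' rule:

```python
import math, random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_gaussians(data, iters=200):
    """Fit a two-component 1-D Gaussian mixture by Expectation-Maximization."""
    mu = [min(data), max(data)]   # crude initialization
    sigma = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = math.sqrt(max(var, 1e-9))
    return w, mu, sigma

def classify(x, w, mu, sigma):
    """Bayes' rule: assign x to the component with the larger posterior."""
    p = [w[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
    return 0 if p[0] >= p[1] else 1

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(300)] + \
       [random.gauss(5.0, 1.0) for _ in range(300)]
w, mu, sigma = em_two_gaussians(data)
```

The abstract's modifications (event weighting, genetic initialization) would extend exactly the E- and M-steps shown here.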
Rasch Mixture Models for DIF Detection
Strobl, Carolin; Zeileis, Achim
2014-01-01
Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a new score distribution for the Rasch mixture model is introduced here. It ensures the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study, and its application is illustrated in a study of verbal aggression. PMID:29795819
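The building block of such models can be sketched directly: each latent class carries its own item difficulties, and a person's response vector is evaluated under a weighted mixture of class-specific Rasch likelihoods. The difficulties and weights below are illustrative, and this sketch omits the paper's score-distribution specification:

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a correct response for ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def response_likelihood(responses, theta, difficulties):
    """Likelihood of a 0/1 response vector under one set of item difficulties."""
    L = 1.0
    for x, b in zip(responses, difficulties):
        p = rasch_p(theta, b)
        L *= p if x == 1 else (1.0 - p)
    return L

# Two latent classes differing only in item difficulties (DIF), equal weights
class_difficulties = {"class1": [-1.0, 0.0, 1.0], "class2": [1.0, 0.0, -1.0]}
responses = [1, 1, 0]
theta = 0.0
mix_L = 0.5 * response_likelihood(responses, theta, class_difficulties["class1"]) \
      + 0.5 * response_likelihood(responses, theta, class_difficulties["class2"])
```

A pattern that matches one class's difficulty ordering is more likely under that class, which is the latent structure the mixture is meant to recover.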
Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data
ERIC Educational Resources Information Center
Kim, Su-Young; Kim, Jee-Seon
2012-01-01
This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…
2011-01-01
Background: The combinatorial library strategy of using multiple candidate ligands in mixtures as library members is ideal in terms of cost and efficiency, but needs special screening methods to estimate the affinities of candidate ligands in such mixtures. Herein, a new method to screen candidate ligands present in unknown molar quantities in mixtures was investigated. Results: The proposed method involves preparing a processed-mixture-for-screening (PMFS) with each mixture sample and an exogenous reference ligand, initiating competitive binding among ligands from the PMFS to a target immobilized on magnetic particles, recovering target-ligand complexes in equilibrium by magnetic force, extracting and concentrating bound ligands, and analyzing ligands in the PMFS and the concentrated extract by chromatography. The relative affinity of each candidate ligand to its reference ligand is estimated via an approximation equation assuming that (a) the candidate ligand and its reference ligand bind to the same site(s) on the target, (b) their chromatographic peak areas are over five times their intercepts of linear response but within their linear ranges, and (c) their binding ratios are below 10%. These prerequisites are met primarily by optimizing the quantity of the target used and the PMFS composition ratio. The new method was tested using the competitive binding of biotin derivatives from mixtures to streptavidin immobilized on magnetic particles as a model. Each mixture sample, containing a limited number of candidate biotin derivatives with moderate differences in their molar quantities, was prepared via parallel-combinatorial-synthesis (PCS) without purification, or via the pooling of individual compounds. Some purified biotin derivatives were used as reference ligands.
The method showed resistance to variations in chromatographic quantification sensitivity and concentration ratios; the optimized conditions for validating the approximation equation could be applied to different mixture samples. Relative affinities of candidate biotin derivatives with unknown molar quantities in each mixture sample were consistent with those estimated by a homogeneous method using their purified counterparts as samples. Conclusions: This new method is robust and effective for each mixture possessing a limited number of candidate ligands whose molar quantities have moderate differences, and its integration with PCS promises to make routine practice of the mixture-based library strategy feasible. PMID:21545719
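The double-ratio logic of such a screen can be illustrated in a few lines. The function below is a hypothetical form, not the paper's exact approximation equation: it assumes both ligands compete for the same sites with low binding ratios, so the bound/input peak-area ratio is proportional to affinity and shared dilution and recovery factors cancel. All peak areas are invented for illustration:

```python
def relative_affinity(area_candidate_pmfs, area_candidate_bound,
                      area_ref_pmfs, area_ref_bound):
    """Illustrative relative affinity of a candidate vs. the reference ligand,
    from chromatographic peak areas of the input (PMFS) and the bound extract.
    (Hypothetical double-ratio form, not the paper's approximation equation.)"""
    candidate_ratio = area_candidate_bound / area_candidate_pmfs
    reference_ratio = area_ref_bound / area_ref_pmfs
    return candidate_ratio / reference_ratio

# Hypothetical peak areas: candidate binds relatively more than the reference
k_rel = relative_affinity(1200.0, 90.0, 1000.0, 50.0)
```

Because only ratios enter, the estimate is insensitive to overall quantification sensitivity, which mirrors the robustness the abstract reports.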
Mixture Modeling: Applications in Educational Psychology
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Hodis, Flaviu A.
2016-01-01
Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…
DEEP ATTRACTOR NETWORK FOR SINGLE-MICROPHONE SPEAKER SEPARATION.
Chen, Zhuo; Luo, Yi; Mesgarani, Nima
2017-03-01
Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and the unknown number of sources in the mixture. We propose a novel deep learning framework for single-channel speech separation that creates attractor points in the high-dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model differs from prior works in that it implements end-to-end training and does not depend on the number of sources in the mixture. Two strategies are explored at test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real time. We evaluated our system on the Wall Street Journal dataset and show a 5.49% improvement over the previous state-of-the-art methods.
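The fixed-attractor test-time strategy reduces to assigning each time-frequency bin to its most similar attractor. A toy sketch with hypothetical 2-D embeddings (real embeddings are high-dimensional and learned by the network):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def assign_masks(embeddings, attractors):
    """Assign each time-frequency bin to the attractor with the largest
    similarity (dot product), yielding one binary mask per source."""
    masks = [[0] * len(embeddings) for _ in attractors]
    for i, e in enumerate(embeddings):
        best = max(range(len(attractors)), key=lambda k: dot(e, attractors[k]))
        masks[best][i] = 1
    return masks

# Toy embeddings for 4 T-F bins and two fixed attractor points (hypothetical)
embeddings = [(0.9, 0.1), (0.8, 0.3), (0.1, 0.95), (0.2, 0.7)]
attractors = [(1.0, 0.0), (0.0, 1.0)]
masks = assign_masks(embeddings, attractors)
```

Each mask is then applied to the mixture spectrogram to reconstruct one source; the K-means strategy differs only in estimating the attractors from the test embeddings instead of fixing them.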
Reschke, Thomas; Zherikova, Kseniya V; Verevkin, Sergey P; Held, Christoph
2016-03-01
Benzoic acid is a model compound for drug substances in pharmaceutical research. Process design requires information about the thermodynamic phase behavior of benzoic acid and its mixtures with water and organic solvents. This work addresses the phase equilibria that determine stability and solubility. Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) was used to model the phase behavior of aqueous and organic solutions containing benzoic acid and chlorobenzoic acids. Absolute vapor pressures of benzoic acid and 2-, 3-, and 4-chlorobenzoic acid, from the literature and from our own measurements, were used to determine pure-component PC-SAFT parameters. Two binary interaction parameters between water and/or benzoic acid were used to model vapor-liquid and liquid-liquid equilibria of water and/or benzoic acid between 280 and 413 K. The PC-SAFT parameters and one binary interaction parameter were used to model the aqueous solubility of the chlorobenzoic acids. Additionally, the solubility of benzoic acid in organic solvents was predicted without using binary parameters. All results showed that the pure-component parameters for benzoic acid and the chlorobenzoic acids allowed satisfactory modeling of the phase equilibria. The modeling approach established in this work is a further step toward screening solubility and predicting the whole phase region of mixtures containing pharmaceuticals. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
The single-zone numerical model of homogeneous charge compression ignition engine performance
NASA Astrophysics Data System (ADS)
Fedyanov, E. A.; Itkis, E. M.; Kuzmin, V. N.; Shumskiy, S. N.
2017-02-01
A single-zone model of methane-air mixture combustion in a Homogeneous Charge Compression Ignition (HCCI) engine was developed. Initial modeling efforts resulted in the selection of the detailed kinetic reaction mechanism most appropriate for the conditions of the HCCI process. The model was then extended to simulate the performance of a four-stroke engine and supplemented with physically reasonable adjusting functions. Validation of the calculations against experimental data showed acceptable agreement.
Comparative Analysis of InSAR Digital Surface Models for Test Area Bucharest
NASA Astrophysics Data System (ADS)
Dana, Iulia; Poncos, Valentin; Teleaga, Delia
2010-03-01
This paper presents the results of the interferometric processing of ERS Tandem, ENVISAT and TerraSAR-X data for digital surface model (DSM) generation. The selected test site is Bucharest (Romania), a built-up area characterized by the usual complex urban pattern: a mixture of buildings with different height levels, paved roads, vegetation, and water bodies. First, the DSMs were generated following the standard interferometric processing chain. Then, the accuracy of the DSMs was analyzed against the SPOT HRS model (30 m resolution at the equator). A DSM derived by optical stereoscopic processing of SPOT 5 HRG data and the SRTM DSM (3 arc seconds resolution at the equator) were also included in the comparative analysis.
Bari, Quazi H; Koenig, Albert
2012-11-01
The aeration rate is a key process control parameter in the forced-aeration composting process because it greatly affects physico-chemical parameters such as temperature and moisture content, and indirectly influences the biological degradation rate. In this study, the effect of a constant airflow rate on vertical temperature distribution and organic waste degradation in the composting mass is analyzed using a previously developed mathematical model of the composting process. The model was applied to analyze the effect of two ambient conditions, namely hot and cold, and four airflow rates (1.5, 3.0, 4.5, and 6.0 m³ m⁻² h⁻¹) on the temperature distribution and organic waste degradation in a given waste mixture. The typical waste mixture had 59% moisture content and 96% volatile solids; however, the proportions could be varied as required. The results suggested that the model can be used efficiently to analyze composting under variable ambient and operating conditions. A lower airflow rate of around 1.5-3.0 m³ m⁻² h⁻¹ was found to be suitable for cold ambient conditions, while a higher airflow rate of around 4.5-6.0 m³ m⁻² h⁻¹ was preferable for hot ambient conditions. The model is flexible in application, allowing changes to any input parameter within a realistic range, and can be widely used for conceptual process design, studies of the effect of ambient conditions, optimization studies in existing composting plants, and process control. Copyright © 2012 Elsevier Ltd. All rights reserved.
Local Solutions in the Estimation of Growth Mixture Models
ERIC Educational Resources Information Center
Hipp, John R.; Bauer, Daniel J.
2006-01-01
Finite mixture models are well known to have poorly behaved likelihood functions featuring singularities and multiple optima. Growth mixture models may suffer from fewer of these problems, potentially benefiting from the structure imposed on the estimated class means and covariances by the specified growth model. As demonstrated here, however,…
Modeling chemical vapor deposition of silicon dioxide in microreactors at atmospheric pressure
NASA Astrophysics Data System (ADS)
Konakov, S. A.; Krzhizhanovskaya, V. V.
2015-01-01
We developed a multiphysics mathematical model for the simulation of silicon dioxide Chemical Vapor Deposition (CVD) from a tetraethyl orthosilicate (TEOS) and oxygen mixture in a microreactor at atmospheric pressure. Microfluidics is a promising technology with numerous applications in chemical synthesis due to its high heat and mass transfer efficiency and well-controlled flow parameters. Experimental studies of CVD microreactor technology are slow and expensive, and analytical solution of the governing equations is impossible due to the complexity of the intertwined non-linear physical and chemical processes, so computer simulation is the most effective tool for the design and optimization of microreactors. Our computational fluid dynamics model employs mass, momentum and energy balance equations for a laminar transient flow of a chemically reacting gas mixture at low Reynolds number. Simulation results show the influence of microreactor configuration and process parameters on SiO2 deposition rate and uniformity. We simulated three microreactors with central channel diameters of 5, 10, and 20 micrometers, varying the gas flow rate in the range of 5-100 microliters per hour and the temperature in the range of 300-800 °C. For each microchannel diameter we found an optimal set of process parameters providing the best quality of deposited material. The model will be used for optimization of the microreactor configuration and technological parameters to facilitate the experimental stage of this research.
Patterned surfaces in the drying of films composed of water, polymer, and alcohol
NASA Astrophysics Data System (ADS)
Fichot, Julie; Heyd, Rodolphe; Josserand, Christophe; Chourpa, Igor; Gombart, Emilie; Tranchant, Jean-Francois; Saboungi, Marie-Louise
2012-12-01
A study of the complex drying dynamics of polymeric mixtures with optical microscopy and gravimetric measurement is presented. Droplet formation is observed, followed by a collapse that leads to the residual craters in the dried film. The process is followed in situ under well-defined temperature and hygrometric conditions to determine the origin and nature of these droplets and craters. The drying process is usually completed within 1 h. The observations are explained using a simple diffusion model based on experimental results collected from mass and optical measurements as well as Raman confocal microspectrometry. Although the specific polymeric mixtures used here are of interest to the cosmetic industry, the general conclusions reached can apply to other polymeric aqueous solutions with applications to commercial and artistic painting.
Improved materials and processes of dispenser cathodes
NASA Astrophysics Data System (ADS)
Longo, R. T.; Sundquist, W. F.; Adler, E. A.
1984-08-01
Several process variables affecting the final electron emission properties of impregnated dispenser cathodes were investigated. In particular, the influence of billet porosity, impregnant composition and purity, and osmium-ruthenium coating were studied. Work function and cathode evaporation data were used to evaluate cathode performance and to formulate a model of cathode activation and emission. Results showed that sorted tungsten powder can be reproducibly fabricated into cathode billets. Billet porosity was observed to have the least effect on cathode performance. Use of the 4:1:1 aluminate mixture resulted in lower work functions than did use of the 5:3:2 mixture. Under similar drawout conditions, the coated cathodes showed superior emission relative to uncoated cathodes. In actual Pierce gun structures under accelerated life test, the influence of impregnated sulfur is clearly shown to reduce cathode performance.
Three Boundary Conditions for Computing the Fixed-Point Property in Binary Mixture Data.
van Maanen, Leendert; Couto, Joaquina; Lebreton, Mael
2016-01-01
The notion of "mixtures" has become pervasive in the behavioral and cognitive sciences, due to the success of dual-process theories of cognition. However, providing support for such dual-process theories is not trivial, as it crucially requires properties in the data that are specific to a mixture of cognitive processes. In theory, one such property could be the fixed-point property of binary mixture data, applied, for instance, to response times. In that case, the fixed-point property entails that response time distributions obtained in an experiment in which the mixture proportion is manipulated would have a common density point. In the current article, we discuss the application of the fixed-point property and identify three boundary conditions under which the fixed-point property will not be interpretable. Under Boundary condition 1, a finding in support of the fixed point will be moot because of a lack of difference between conditions. Boundary condition 2 refers to the case in which the extreme conditions are so different that a mixture may display bimodality; in this case, a mixture hypothesis is clearly supported, yet the fixed point may not be found. Under Boundary condition 3 the fixed point may also not be present, yet a mixture might still exist but be occluded due to additional changes in behavior. Finding the fixed-point property provides strong support for a dual-process account, yet the boundary conditions that we identify should be considered before making inferences about underlying psychological processes.
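The common-density-point claim can be verified numerically: a binary mixture density is linear in the mixture proportion p, so wherever the two base densities are equal, every mixture has the same density. A minimal sketch with Gaussian base distributions (parameter values are illustrative):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, p, mu1=0.45, mu2=0.75, sigma=0.08):
    """Binary RT mixture: proportion p of process 1, (1 - p) of process 2."""
    return p * normal_pdf(x, mu1, sigma) + (1 - p) * normal_pdf(x, mu2, sigma)

# With equal sigmas the base densities are equal at the midpoint of the means,
# so every mixture proportion yields the same density there: the fixed point.
t_star = (0.45 + 0.75) / 2
densities = [mixture_pdf(t_star, p) for p in (0.2, 0.5, 0.8)]
```

Away from t_star the densities diverge across proportions, which is exactly why Boundary condition 1 (no difference between conditions) makes a fixed-point finding uninformative.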
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-02
... 1117-AA66 Chemical Mixtures Containing Listed Forms of Phosphorus and Change in Application Process... establish those chemical mixtures containing red phosphorus or hypophosphorous acid and its salts (hereinafter ``regulated phosphorus'') that shall automatically qualify for exemption from the…
Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.
Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten
2017-10-01
Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
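The vMF density that underlies the model can be written down directly for the three-dimensional unit sphere; the mean direction and concentration below are illustrative:

```python
import math

def vmf_pdf_3d(x, mu, kappa):
    """von Mises-Fisher density on the unit sphere in R^3:
    f(x) = kappa / (4*pi*sinh(kappa)) * exp(kappa * mu . x),
    where mu is the mean direction and kappa the concentration."""
    c3 = kappa / (4.0 * math.pi * math.sinh(kappa))
    return c3 * math.exp(kappa * sum(a * b for a, b in zip(mu, x)))

mu = (0.0, 0.0, 1.0)
# Density peaks at the mean direction and falls off with angular distance
p_mean = vmf_pdf_3d((0.0, 0.0, 1.0), mu, kappa=5.0)
p_side = vmf_pdf_3d((1.0, 0.0, 0.0), mu, kappa=5.0)
```

As kappa approaches 0 the density tends to the uniform value 1/(4*pi), and a vMF mixture replaces the Gaussian components with densities of this form; fMRI time series live on a much higher-dimensional hypersphere, where only the normalizing constant changes.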
Kazemi, Pezhman; Khalid, Mohammad Hassan; Pérez Gago, Ana; Kleinebudde, Peter; Jachowicz, Renata; Szlęk, Jakub; Mendyk, Aleksander
2017-01-01
Dry granulation using roll compaction is a typical unit operation for producing solid dosage forms in the pharmaceutical industry. Dry granulation is commonly used if the powder mixture is sensitive to heat and moisture and has poor flow properties. The output of roll compaction is compacted ribbons that exhibit different properties based on the adjusted process parameters. These ribbons are then milled into granules and finally compressed into tablets. The properties of the ribbons directly affect the granule size distribution (GSD) and the quality of final products; thus, it is imperative to study the effect of roll compaction process parameters on GSD. The understanding of how the roll compactor process parameters and material properties interact with each other will allow accurate control of the process, leading to the implementation of quality by design practices. Computational intelligence (CI) methods have a great potential for being used within the scope of quality by design approach. The main objective of this study was to show how the computational intelligence techniques can be useful to predict the GSD by using different process conditions of roll compaction and material properties. Different techniques such as multiple linear regression, artificial neural networks, random forest, Cubist and k-nearest neighbors algorithm assisted by sevenfold cross-validation were used to present generalized models for the prediction of GSD based on roll compaction process setting and material properties. The normalized root-mean-squared error and the coefficient of determination (R2) were used for model assessment. The best fit was obtained by Cubist model (normalized root-mean-squared error =3.22%, R2=0.95). Based on the results, it was confirmed that the material properties (true density) followed by compaction force have the most significant effect on GSD. PMID:28176905
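The two error measures used for model assessment take only a few lines to reproduce. This sketch assumes range-normalization of the RMSE, one common convention that may differ from the authors' exact definition; the data values are invented:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def nrmse(y_true, y_pred):
    """Root-mean-squared error normalized by the observed range, in percent."""
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return 100.0 * mse ** 0.5 / (max(y_true) - min(y_true))

# Illustrative observed vs. predicted granule-size fractions
y_true = [10.0, 20.0, 30.0, 40.0]
y_pred = [12.0, 19.0, 29.0, 42.0]
```

In a sevenfold cross-validation these statistics would be computed on each held-out fold and averaged to compare the candidate models.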
Isolation of Precursor Cells from Waste Solid Fat Tissue
NASA Technical Reports Server (NTRS)
Byerly, Diane; Sognier, Marguerite A.
2009-01-01
A process for isolating tissue-specific progenitor cells exploits solid fat tissue obtained as waste from such elective surgical procedures as abdominoplasties (tummy tucks) and breast reductions. Until now, a painful and risky process of aspiration of bone marrow has been used to obtain a limited number of tissue-specific progenitor cells. The present process yields more tissue-specific progenitor cells and involves much less pain and risk for the patient. The process includes separation of fat from skin, mincing of the fat into small pieces, and forcing a fat-saline mixture through a sieve. The mixture is then digested with collagenase type I in an incubator. After centrifugation, tissue-specific progenitor cells are recovered and placed in a tissue-culture medium in flasks or Petri dishes. The tissue-specific progenitor cells can be used for such purposes as (1) generating three-dimensional tissue-equivalent models for studying bone loss and muscle atrophy (among other deficiencies) and, ultimately, (2) generating replacements for tissues lost by the fat donor because of injury or disease.
Computational Modeling of Seismic Wave Propagation Velocity-Saturation Effects in Porous Rocks
NASA Astrophysics Data System (ADS)
Deeks, J.; Lumley, D. E.
2011-12-01
Compressional and shear velocities of seismic waves propagating in porous rocks vary as a function of the fluid mixture and its distribution in pore space. Although it has been possible to place theoretical upper and lower bounds on the velocity variation with fluid saturation, predicting the actual velocity response of a given rock with fluid type and saturation remains an unsolved problem. In particular, we are interested in predicting the velocity-saturation response to various mixtures of fluids with pressure and temperature, as a function of the spatial distribution of the fluid mixture and the seismic wavelength. This effect is often termed "patchy saturation" in the rock physics community. The ability to accurately predict seismic velocities for various fluid mixtures and spatial distributions in the pore space of a rock is useful for fluid detection, hydrocarbon exploration and recovery, CO2 sequestration and monitoring of many subsurface fluid-flow processes. We create digital rock models with various fluid mixtures, saturations and spatial distributions. We use finite difference modeling to propagate elastic waves of varying frequency content through these digital rock and fluid models to simulate a given lab or field experiment. The resulting waveforms can be analyzed to determine seismic traveltimes, velocities, amplitudes, attenuation and other wave phenomena for variable rock models of fluid saturation and spatial fluid distribution, and variable wavefield spectral content. We show that we can reproduce most of the published effects of velocity-saturation variation, including validating the Voigt and Reuss theoretical bounds, as well as the Hill "patchy saturation" curve. We also reproduce what has been previously identified as Biot dispersion, but which in our models is often seen to be wave multi-pathing and broadband spectral effects.
Furthermore, we find that in addition to the dominant seismic wavelength and average fluid patch size, the smoothness of the fluid patches is a critical factor in determining the velocity-saturation response; this is a result that we have not seen discussed in the literature. Most importantly, we can reproduce all of these effects using full elastic wavefield scattering, without the need to resort to more complicated squirt-flow or poroelastic models. This is important because the physical properties and parameters we need to model full elastic wave scattering, and predict a velocity-saturation curve, are often readily available for the projects we undertake; this is not the case for poroelastic or squirt-flow models. We can predict this velocity-saturation curve for a specific rock type, fluid mixture distribution and wavefield spectrum.
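The Voigt and Reuss bounds and the Hill average referenced above can be computed directly for the effective modulus of a fluid mixture. A minimal sketch with illustrative bulk moduli (the values are assumed, not from this study):

```python
def voigt(moduli, fractions):
    """Voigt (upper) bound: volume-weighted arithmetic mean of moduli."""
    return sum(f * m for f, m in zip(fractions, moduli))

def reuss(moduli, fractions):
    """Reuss (lower) bound: harmonic mean; for fluids this is Wood's equation."""
    return 1.0 / sum(f / m for f, m in zip(fractions, moduli))

def hill(moduli, fractions):
    """Voigt-Reuss-Hill average, often used for intermediate 'patchy' mixing."""
    return 0.5 * (voigt(moduli, fractions) + reuss(moduli, fractions))

# Illustrative bulk moduli in GPa for water and gas, at 50/50 saturation
K = [2.25, 0.04]
f = [0.5, 0.5]
bounds = (reuss(K, f), hill(K, f), voigt(K, f))
```

Fine-scale (uniform) fluid mixing drives the effective modulus toward the Reuss bound, while coarse patches relative to the wavelength push it toward the Voigt bound, which is the velocity-saturation behavior the simulations explore.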
Blocking and the detection of odor components in blends.
Hosler, J S; Smith, B H
2000-09-01
Recent studies of olfactory blocking have revealed that binary odorant mixtures are not always processed as though they give rise to mixture-unique configural properties. When animals are conditioned to one odorant (A) and then conditioned to a mixture of that odorant with a second (X), the ability to learn or express the association of X with reinforcement appears to be reduced relative to animals that were not preconditioned to A. A recent model of odor-based response patterns in the insect antennal lobe predicts that the strength of the blocking effect will be related to the perceptual similarity between the two odorants, i.e. greater similarity should increase the blocking effect. Here, we test that model in the honeybee Apis mellifera by first establishing a generalization matrix for three odorants and then testing for blocking between all possible combinations of them. We confirm earlier findings demonstrating the occurrence of the blocking effect in olfactory learning of compound stimuli. We show that the occurrence and the strength of the blocking effect depend on the odorants used in the experiment. In addition, we find very good agreement between our results and the model, and less agreement between our results and an alternative model recently proposed to explain the effect.
NASA Astrophysics Data System (ADS)
Aretusini, S.; Mittempergher, S.; Spagnuolo, E.; Di Toro, G.; Gualtieri, A.; Plümper, O.
2015-12-01
Slipping zones in shallow sections of megathrusts and large landslides are often made of smectite and quartz gouge mixtures. Experiments aimed at investigating the frictional processes operating at high slip rates (>1 m/s) may unravel the mechanics of these natural phenomena. Here we present a new dataset obtained with two rotary shear apparatuses (ROSA, Padua University; SHIVA, INGV-Rome). Experiments were performed at room humidity and temperature on four mixtures of smectite (Ca-montmorillonite) and quartz with 68, 50, 25, and 0 wt% smectite. The gouges were slid for 3 m at a normal stress of 5 MPa and slip rates V from 300 µm/s to 1.5 m/s. Temperature during the experiments was monitored with four thermocouples and modeled with COMSOL Multiphysics. In smectite-rich mixtures, the friction coefficient µ evolved with slip according to three slip-rate regimes: in regime 1 (V<0.1 m/s), initial slip-weakening was followed by slip-strengthening; in regime 2 (0.1
Induction of Adipocyte Differentiation by Polybrominated Diphenyl Ethers (PBDEs) in 3T3-L1 Cells
Tung, Emily W. Y.; Boudreau, Adèle; Wade, Michael G.; Atlas, Ella
2014-01-01
Polybrominated diphenyl ethers (PBDEs) are a class of brominated flame retardants that were extensively used in commercial products. PBDEs are ubiquitous environmental contaminants that are both lipophilic and bioaccumulative. Effects of PBDEs on adipogenesis were studied in the 3T3-L1 preadipocyte cell model in the presence and absence of a known adipogenic agent, dexamethasone (DEX). A PBDE mixture designed to mimic body burden of North Americans was tested, in addition to the technical mixture DE-71 and the individual congener BDE-47. The mixture, DE-71, and BDE-47 all induced adipocyte differentiation as assessed by markers for terminal differentiation [fatty acid binding protein 4 (aP2) and perilipin] and lipid accumulation. Characterization of the differentiation process in response to PBDEs indicated that adipogenesis induced by a minimally effective dose of DEX was enhanced by these PBDEs. Moreover, C/EBPα, PPARγ, and LXRα were induced late in the differentiation process. Taken together, these data indicate that adipocyte differentiation is induced by PBDEs; they act in the absence of glucocorticoid and enhance glucocorticoid-mediated adipogenesis. PMID:24722056
Rabbit Neonates and Human Adults Perceive a Blending 6-Component Odor Mixture in a Comparable Manner
Sinding, Charlotte; Thomas-Danguin, Thierry; Chambault, Adeline; Béno, Noelle; Dosne, Thibaut; Chabanet, Claire; Schaal, Benoist; Coureaud, Gérard
2013-01-01
Young and adult mammals are constantly exposed to chemically complex stimuli. The olfactory system allows for a dual processing of relevant information from the environment either as single odorants in mixtures (elemental perception) or as mixtures of odorants as a whole (configural perception). However, it seems that human adults have certain limits in elemental perception of odor mixtures, as suggested by their inability to identify each odorant in mixtures of more than 4 components. Here, we explored some of these limits by evaluating the perception of three 6-odorant mixtures in human adults and newborn rabbits. Using free-sorting tasks in humans, we investigated the configural or elemental perception of these mixtures, or of 5-component sub-mixtures, or of the 6-odorant mixtures with modified odorants' proportion. In rabbit pups, the perception of the same mixtures was evaluated by measuring the orocephalic sucking response to the mixtures or their components after conditioning to one of these stimuli. The results revealed that one mixture, previously shown to carry the specific odor of red cordial in humans, was indeed configurally processed in humans and in rabbits while the two other 6-component mixtures were not. Moreover, in both species, such configural perception was specific not only to the 6 odorants included in the mixture but also to their respective proportion. Interestingly, rabbit neonates also responded to each odorant after conditioning to the red cordial mixture, which demonstrates their ability to perceive elements in addition to configuration in this complex mixture. Taken together, the results provide new insights related to the processing of relatively complex odor mixtures in mammals and the inter-species conservation of certain perceptual mechanisms; the results also revealed some differences in the expression of these capacities between species putatively linked to developmental and ecological constraints. PMID:23341948
PROCESS OF PRODUCING SHAPED PLUTONIUM
Anicetti, R.J.
1959-08-11
A process is presented for producing and casting high purity plutonium metal in one step from plutonium tetrafluoride. The process comprises heating a mixture of the plutonium tetrafluoride with calcium while the mixture is in contact with and defined as to shape by a material obtained by firing a mixture consisting of calcium oxide and from 2 to 10% by its weight of calcium fluoride at from 1260 to 1370 deg C.
Evaluating differential effects using regression interactions and regression mixture models
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results to those from an interaction term in linear regression. The research questions each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described, and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, while regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
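The contrast the abstract draws can be made concrete with a small simulation: a two-class regression mixture fit by EM recovers class-specific slopes that a pooled regression without the right interaction term would average away. This is a minimal illustrative sketch with simulated data, not the software or models used in the paper; all parameter values are invented.

```python
import numpy as np

def fit_regression_mixture(x, y, n_iter=200):
    """EM for a two-class regression mixture: within class k,
    y = b0_k + b1_k * x + N(0, sigma_k^2)."""
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    # crude initialization at opposite slopes; real software would use
    # many random starts to avoid local maxima
    betas = np.array([[0.0, 1.0], [0.0, -1.0]])
    sigmas = np.array([1.0, 1.0])
    pis = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities from class-specific normal densities
        dens = np.empty((n, 2))
        for k in range(2):
            resid = y - X @ betas[k]
            dens[:, k] = pis[k] * np.exp(-0.5 * (resid / sigmas[k]) ** 2) / sigmas[k]
        R = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares and weighted variance per class
        for k in range(2):
            w = R[:, k]
            betas[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            resid = y - X @ betas[k]
            sigmas[k] = np.sqrt((w * resid ** 2).sum() / w.sum())
            pis[k] = w.mean()
    return betas, sigmas, pis

# simulated differential effect: two latent classes with different slopes
rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)
in_class_a = rng.random(n) < 0.5
y = np.where(in_class_a, 1.0 + 2.0 * x, 1.0 - 1.0 * x) + rng.normal(scale=0.3, size=n)

betas, sigmas, pis = fit_regression_mixture(x, y)
slopes = sorted(betas[:, 1])
```

A single linear regression of y on x here would estimate a slope near the average of the two class slopes, which describes neither class; the mixture separates them, at the cost of the identification and sample-size issues the paper discusses.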
Hong, Ki-Bae; Park, Yooheon; Suh, Hyung Joo
2016-04-01
This study investigated the sleep-promoting effects of combined γ-aminobutyric acid (GABA) and 5-hydroxytryptophan (5-HTP) by examining neuronal processes governing mRNA-level alterations, as well as assessing neuromodulator concentrations, in a fruit fly model. Behavioral assays were used to measure subjective nighttime activity, sleep episodes, and total duration of subjective nighttime sleep in caffeine-treated flies given each amino acid alone or the GABA/5-HTP mixture, and real-time PCR and HPLC analyses were used to probe the signaling pathway. Subjective nighttime activity and sleep patterns of individual flies significantly decreased with 1% GABA treatment in conjunction with 0.1% 5-HTP treatment (p<0.001). Furthermore, the GABA/5-HTP mixture produced significant between-group differences in sleep patterns (40%, p<0.017) and significantly induced subjective nighttime sleep in the awake model (p<0.003). These results were related to transcript levels of the GABAB receptor (GABAB-R1) and the serotonin receptor (5-HT1A) relative to the control group. In addition, the GABA/5-HTP mixture significantly increased GABA levels 1 h and 12 h after treatment (2.1-fold and 1.2-fold higher than the control, respectively) and also increased 5-HTP levels (0 h: 1.01 μg/protein; 12 h: 3.45 μg/protein). We thus demonstrated that the GABA/5-HTP mixture modulates subjective nighttime activity, sleep episodes, and total duration of subjective nighttime sleep to a greater extent than single administration of either amino acid, and that this modulation occurs via GABAergic and serotonergic signaling. Copyright © 2016 Elsevier Inc. All rights reserved.
Nonlinear Structured Growth Mixture Models in Mplus and OpenMx
ERIC Educational Resources Information Center
Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne
2010-01-01
Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…
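The kind of data structure a GMM targets can be sketched by simulating two latent classes with distinct linear trajectories, each subject drawing an intercept and slope around its class means. The class proportions and growth-factor means below are invented for illustration; fitting such models is the job of software like Mplus or OpenMx, not this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(5)  # five measurement occasions

def simulate_gmm(n, class_probs, means):
    """Simulate a two-class linear growth mixture: each subject draws a
    latent class, then an intercept/slope around that class's means,
    plus occasion-level residual noise."""
    cls = rng.choice(len(class_probs), size=n, p=class_probs)
    data = np.empty((n, len(t)))
    for i, k in enumerate(cls):
        icpt = rng.normal(means[k][0], 0.5)
        slope = rng.normal(means[k][1], 0.2)
        data[i] = icpt + slope * t + rng.normal(0.0, 0.3, len(t))
    return cls, data

# hypothetical "stable" vs "increasing" trajectory classes
cls, y = simulate_gmm(500, [0.7, 0.3], [(1.0, 0.1), (0.5, 1.2)])
```

Averaging the two classes together would suggest a single moderate growth rate; the mixture formulation instead lets each class follow its own latent curve, which is the point of combining LCMs with finite mixtures.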
The Potential of Growth Mixture Modelling
ERIC Educational Resources Information Center
Muthen, Bengt
2006-01-01
The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…
NASA Astrophysics Data System (ADS)
Lee, Hsiang-He; Chen, Shu-Hua; Kleeman, Michael J.; Zhang, Hongliang; DeNero, Steven P.; Joe, David K.
2016-07-01
The source-oriented Weather Research and Forecasting chemistry model (SOWC) was modified to include warm cloud processes and was applied to investigate how aerosol mixing states influence fog formation and optical properties in the atmosphere. SOWC tracks a 6-D chemical variable (X, Z, Y, size bins, source types, species) through an explicit simulation of atmospheric chemistry and physics. A source-oriented cloud condensation nuclei module was implemented into the SOWC model to simulate warm clouds using the modified two-moment Purdue Lin microphysics scheme. The Goddard shortwave and longwave radiation schemes were modified to interact with source-oriented aerosols and cloud droplets so that aerosol direct and indirect effects could be studied. The enhanced SOWC model was applied to study a fog event that occurred on 17 January 2011 in the Central Valley of California. Tule fog occurred because an atmospheric river effectively advected high moisture into the Central Valley and nighttime drainage flow brought cold air from the mountains into the valley. The SOWC model produced reasonable liquid water paths and the spatial distribution and duration of fog events. The inclusion of aerosol-radiation interaction only slightly modified simulation results, since cloud optical thickness dominated the radiation budget in fog events. The source-oriented mixture representation of particles reduced cloud droplet number relative to the internal mixture approach, which artificially coats hydrophobic particles with hygroscopic components. The fraction of aerosols activating into cloud condensation nuclei (CCN) at a supersaturation of 0.5% in the Central Valley decreased from 94% in the internal mixture model to 80% in the source-oriented model. This increased surface energy flux by 3-5 W m-2 and surface temperature by as much as 0.25 K in the daytime.
Response Times to Gustatory–Olfactory Flavor Mixtures: Role of Congruence
Shepard, Timothy G.; Veldhuizen, Maria G.
2015-01-01
A mixture of perceptually congruent gustatory and olfactory flavorants (sucrose and citral) was previously shown to be detected faster than predicted by a model of probability summation that assumes stochastically independent processing of the individual gustatory and olfactory signals. This outcome suggests substantial integration of the signals. Does substantial integration also characterize responses to mixtures of incongruent flavorants? Here, we report simple response times (RTs) to detect brief pulses of 3 possible flavorants: monosodium glutamate, MSG (gustatory: “umami” quality), citral (olfactory: citrus quality), and a mixture of MSG and citral (gustatory–olfactory). Each stimulus (and, on a fraction of trials, water) was presented orally through a computer-operated, automated flow system, and subjects were instructed to press a button as soon as they detected any of the 3 non-water stimuli. Unlike responses previously found to the congruent mixture of sucrose and citral, responses here to the incongruent mixture of MSG and citral took significantly longer (RTs were greater) and showed lower detection rates than the values predicted by probability summation. This outcome suggests that the integration of gustatory and olfactory flavor signals is less extensive when the component flavors are perceptually incongruent rather than congruent, perhaps because incongruent flavors are less familiar. PMID:26304508
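The probability-summation benchmark described above can be sketched as an independent race between the two channels: the mixture is detected when the faster channel finishes, so F_mix(t) = 1 - (1 - F_gust(t)) * (1 - F_olf(t)). A minimal simulation under assumed shifted-exponential detection-time distributions (the parameters are illustrative, not fitted to the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# hypothetical detection-time distributions (ms) for the two channels;
# shifted exponentials are a common RT caricature, not the study's model
rt_gust = 300 + rng.exponential(200, n)  # gustatory channel (e.g., MSG)
rt_olf = 350 + rng.exponential(250, n)   # olfactory channel (e.g., citral)

# probability summation with stochastically independent channels:
# the mixture is detected when the faster channel finishes (a race)
rt_race = np.minimum(rt_gust, rt_olf)

print(rt_gust.mean(), rt_olf.mean(), rt_race.mean())
```

The race prediction is necessarily at least as fast as either channel alone. The study's finding is that observed RTs to the incongruent MSG-citral mixture were *slower* than this benchmark, whereas the congruent sucrose-citral mixture had previously beaten it, suggesting integration rather than independent processing for congruent flavors.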
Montiel-González, Zeuz; Escobar, Salvador; Nava, Rocío; del Río, J. Antonio; Tagüeña-Martínez, Julia
2016-01-01
Current research on porous silicon includes the construction of complex structures with luminescent and/or photonic properties. However, their preparation with both characteristics is still challenging. Recently, our group reported a possible method to achieve this by adding an oxidant mixture to the electrolyte used to produce porous silicon. This mixture can chemically modify the microstructure by changing the thickness and surface passivation of the pore walls. In this work, we prepared a series of samples (with and without the oxidant mixture) and evaluated the structural differences through their scanning electron micrographs and their optical properties determined by spectroscopic ellipsometry. The results showed that ellipsometry is sensitive to slight variations in the porous silicon structure caused by changes in their preparation. The fitting process, based on models constructed from the features observed in the micrographs, allowed us to see that the major effect of the oxidant mixture is on samples of high porosity, where surface oxidation strongly contributes to skeleton thinning during the electrochemical etching. This suggests the existence of a porosity threshold for the action of the oxidant mixture. These results could have a significant impact on the design of complex porous silicon structures for different optoelectronic applications. PMID:27097767
Montiel-González, Zeuz; Escobar, Salvador; Nava, Rocío; del Río, J Antonio; Tagüeña-Martínez, Julia
2016-04-21
Current research on porous silicon includes the construction of complex structures with luminescent and/or photonic properties. However, their preparation with both characteristics is still challenging. Recently, our group reported a possible method to achieve this by adding an oxidant mixture to the electrolyte used to produce porous silicon. This mixture can chemically modify the microstructure by changing the thickness and surface passivation of the pore walls. In this work, we prepared a series of samples (with and without the oxidant mixture) and evaluated the structural differences through their scanning electron micrographs and their optical properties determined by spectroscopic ellipsometry. The results showed that ellipsometry is sensitive to slight variations in the porous silicon structure caused by changes in their preparation. The fitting process, based on models constructed from the features observed in the micrographs, allowed us to see that the major effect of the oxidant mixture is on samples of high porosity, where surface oxidation strongly contributes to skeleton thinning during the electrochemical etching. This suggests the existence of a porosity threshold for the action of the oxidant mixture. These results could have a significant impact on the design of complex porous silicon structures for different optoelectronic applications.
Rock Content Influence on Soil Hydraulic Properties
NASA Astrophysics Data System (ADS)
Parajuli, K.; Sadeghi, M.; Jones, S. B.
2015-12-01
Soil hydraulic properties, including the soil water retention curve (SWRC) and the hydraulic conductivity function, are important characteristics of soil affecting a variety of soil properties and processes. Hydraulic properties are commonly measured for sieved soils (i.e., particles < 2 mm), but many natural soils include rock fragments of varying size that alter bulk hydraulic properties. Relatively few studies have addressed this important problem using physically-based concepts. Motivated by this knowledge gap, we set out to describe the hydraulic properties of binary mixtures (i.e., rock fragment inclusions in a soil matrix) based on the individual properties of the rock and soil. As a first step, special attention was devoted to the SWRC, where the impact of rock content was quantified using laboratory experiments for six different mixing ratios of soil matrix and rock. The SWRC for each mixture was obtained from water mass and water potential measurements. The resulting data yielded a family of SWRCs indicating how the SWRC of the mixture is related to those of the individual media, i.e., soil and rock. A consistent model was also developed to describe the hydraulic properties of the mixture as a function of the individual properties of the rock and soil matrix. Key words: Soil hydraulic properties, rock content, binary mixture, experimental data.
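One simple physically-based starting point for a binary-mixture SWRC is a volume-weighted combination of the component retention curves. This is an illustrative additivity assumption, not necessarily the consistent model the abstract refers to, and the van Genuchten parameters below are invented:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention curve; h is suction head (positive, cm)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def mixture_swrc(h, f_rock, soil_params, rock_params):
    """Volume-weighted SWRC for a soil/rock binary mixture -- a simple
    additivity assumption over the component curves."""
    return ((1.0 - f_rock) * van_genuchten(h, *soil_params)
            + f_rock * van_genuchten(h, *rock_params))

h = np.logspace(0, 4, 50)       # suction from 1 to 10^4 cm
soil = (0.05, 0.45, 0.02, 1.6)  # illustrative loam-like parameters
rock = (0.01, 0.05, 0.10, 2.0)  # low-porosity rock fragments

theta_20 = mixture_swrc(h, 0.2, soil, rock)  # 20 vol% rock
theta_50 = mixture_swrc(h, 0.5, soil, rock)  # 50 vol% rock
```

Under this assumption, increasing rock content lowers water content across the whole suction range, producing the kind of family of curves the experiments describe; deviations of measured curves from this weighted sum would indicate interaction effects between the two media.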
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
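The capture-recapture application can be sketched directly for a single zero-truncated Poisson component: fit lambda to the observed (nonzero) counts, then apply the Horvitz-Thompson estimator N_hat = n / (1 - exp(-lambda_hat)). The article's results concern mixtures of such densities; this one-component sketch, with invented parameter values, just shows the mechanics:

```python
import math
import random

random.seed(42)

def rpois(lam):
    """Poisson draw via Knuth's multiplication method (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

def zt_poisson_mle(y):
    """MLE of lambda for a zero-truncated Poisson, by bisection on the
    moment equation  lambda / (1 - exp(-lambda)) = mean(y)."""
    ybar = sum(y) / len(y)
    lo, hi = 1e-9, ybar  # truncated mean exceeds lambda, so lambda < ybar
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid / (1.0 - math.exp(-mid)) < ybar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# capture-recapture style simulation: N = 1000 individuals with counts
# ~ Poisson(2); individuals never captured (count 0) are unobserved
counts = [rpois(2.0) for _ in range(1000)]
observed = [c for c in counts if c > 0]

lam_hat = zt_poisson_mle(observed)
# Horvitz-Thompson estimator of the unknown population size
n_hat = len(observed) / (1.0 - math.exp(-lam_hat))
print(len(observed), lam_hat, n_hat)
```

The article's equivalence result means one may fit the theoretically convenient mixture-of-truncated-densities form and map the answer back to the more interpretable truncated-mixture form, with the two Horvitz-Thompson estimates agreeing.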
On the Theory of Reactive Mixtures for Modeling Biological Growth
Ateshian, Gerard A.
2013-01-01
Mixture theory, which can combine continuum theories for the motion and deformation of solids and fluids with general principles of chemistry, is well suited for modeling the complex responses of biological tissues, including tissue growth and remodeling, tissue engineering, mechanobiology of cells and a variety of other active processes. A comprehensive presentation of the equations of reactive mixtures of charged solid and fluid constituents is lacking in the biomechanics literature. This study provides the conservation laws and entropy inequality, as well as interface jump conditions, for reactive mixtures consisting of a constrained solid mixture and multiple fluid constituents. The constituents are intrinsically incompressible and may carry an electrical charge. The interface jump condition on the mass flux of individual constituents is shown to define a surface growth equation, which predicts deposition or removal of material points from the solid matrix, complementing the description of volume growth described by the conservation of mass. A formulation is proposed for the reference configuration of a body whose material point set varies with time. State variables are defined which can account for solid matrix volume growth and remodeling. Constitutive constraints are provided on the stresses and momentum supplies of the various constituents, as well as the interface jump conditions for the electrochemical potential of the fluids. Simplifications appropriate for biological tissues are also proposed, which help reduce the governing equations into a more practical format. It is shown that explicit mechanisms of growth-induced residual stresses can be predicted in this framework. PMID:17206407
Development of PBPK Models for Gasoline in Adult and ...
Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of
Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S
2007-07-09
A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
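The composite design described above can be sketched by crossing two simplex-centroid designs. For three components, a simplex-centroid design has 7 blends (3 pure components, 3 binary midpoints, 1 ternary centroid), so the crossed design spans 7 × 7 = 49 extraction/mobile-phase combinations before the split-plot blocking is applied. A sketch (component labels are taken from the abstract; the helper function is ours):

```python
from itertools import combinations

def simplex_centroid(ncomp):
    """Design points of a simplex-centroid design: for every non-empty
    subset of components, an equal-proportion blend of that subset."""
    points = []
    for r in range(1, ncomp + 1):
        for subset in combinations(range(ncomp), r):
            pt = [0.0] * ncomp
            for i in subset:
                pt[i] = 1.0 / r
            points.append(tuple(pt))
    return points

extraction = simplex_centroid(3)  # ethyl acetate / ethanol / dichloromethane
mobile = simplex_centroid(3)      # methanol / acetonitrile / MAW pseudo-component
# the composite design crosses every extraction blend with every mobile phase
composite = [(e, m) for e in extraction for m in mobile]
print(len(extraction), len(mobile), len(composite))
```

Multiplying the two 7-term special cubic models likewise yields a 49-term complementary model, which is why the cumulative probability graph was needed to prune it to the reported 20-term reduced model.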
Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures
NASA Astrophysics Data System (ADS)
Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.
2017-10-01
Two-step approximate models of the chemical kinetics of detonation combustion are developed for (i) a one-fuel gaseous mixture (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air); the models for multi-fuel mixtures are proposed here for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. The advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of detonation waves in stoichiometric and fuel-rich methane-air mixtures and a stoichiometric methane-oxygen mixture. The dominant detonation cell size determined in the calculations is in good agreement with all known experimental data.
ERIC Educational Resources Information Center
Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk
2008-01-01
Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…
Method for producing catalysts from coal
Farcasiu, Malvina; Derbyshire, Frank; Kaufman, Phillip B.; Jagtoyen, Marit
1998-01-01
A method for producing catalysts from coal is provided comprising mixing an aqueous alkali solution with the coal, heating the aqueous mixture to treat the coal, drying the now-heated aqueous mixture, reheating the mixture to form carbonized material, cooling the mixture, removing excess alkali from the carbonized material, and recovering the carbonized material, wherein the entire process is carried out in controlled atmospheres, and the carbonized material is a hydrocracking or hydrodehalogenation catalyst for liquid phase reactions. The invention also provides for a one-step method for producing catalysts from coal comprising mixing an aqueous alkali solution with the coal to create a mixture, heating the aqueous mixture from an ambient temperature to a predetermined temperature at a predetermined rate, cooling the mixture, and washing the mixture to remove excess alkali from the treated and carbonized material, wherein the entire process is carried out in a controlled atmosphere.
Method for producing catalysts from coal
Farcasiu, M.; Derbyshire, F.; Kaufman, P.B.; Jagtoyen, M.
1998-02-24
A method for producing catalysts from coal is provided comprising mixing an aqueous alkali solution with the coal, heating the aqueous mixture to treat the coal, drying the now-heated aqueous mixture, reheating the mixture to form carbonized material, cooling the mixture, removing excess alkali from the carbonized material, and recovering the carbonized material, wherein the entire process is carried out in controlled atmospheres, and the carbonized material is a hydrocracking or hydrodehalogenation catalyst for liquid phase reactions. The invention also provides for a one-step method for producing catalysts from coal comprising mixing an aqueous alkali solution with the coal to create a mixture, heating the aqueous mixture from an ambient temperature to a predetermined temperature at a predetermined rate, cooling the mixture, and washing the mixture to remove excess alkali from the treated and carbonized material, wherein the entire process is carried out in a controlled atmosphere. 1 fig.
Chemical mixtures in the environment are often the result of a dynamic process. When dose-response data are available on random samples throughout the process, equivalence testing can be used to determine whether the mixtures are sufficiently similar based on a pre-specified biol...
Process for removing cadmium from scrap metal
Kronberg, J.W.
1995-04-11
A process is described for the recovery of a metal, in particular, cadmium contained in scrap, in a stable form. The process comprises the steps of mixing the cadmium-containing scrap with an ammonium carbonate solution, preferably at least a stoichiometric amount of ammonium carbonate, and/or free ammonia, and an oxidizing agent to form a first mixture so that the cadmium will react with the ammonium carbonate to form a water-soluble ammine complex; evaporating the first mixture so that ammine complex dissociates from the first mixture leaving carbonate ions to react with the cadmium and form a second mixture that includes cadmium carbonate; optionally adding water to the second mixture to form a third mixture; adjusting the pH of the third mixture to the acid range whereby the cadmium carbonate will dissolve; and adding at least a stoichiometric amount of sulfide, preferably in the form of hydrogen sulfide or an aqueous ammonium sulfide solution, to the third mixture to precipitate cadmium sulfide. This mixture of sulfide is then preferably digested by heating to facilitate precipitation of large particles of cadmium sulfide. The scrap may be divided by shredding or breaking up to expose additional surface area. Finally, the precipitated cadmium sulfide can be mixed with glass formers and vitrified for permanent disposal. 2 figures.
Process for removing cadmium from scrap metal
Kronberg, J.W.
1994-01-01
A process for the recovery of a metal, in particular, cadmium contained in scrap, in a stable form. The process comprises the steps of mixing the cadmium-containing scrap with an ammonium carbonate solution, preferably at least a stoichiometric amount of ammonium carbonate, and/or free ammonia, and an oxidizing agent to form a first mixture so that the cadmium will react with the ammonium carbonate to form a water-soluble ammine complex; evaporating the first mixture so that ammine complex dissociates from the first mixture leaving carbonate ions to react with the cadmium and form a second mixture that includes cadmium carbonate; optionally adding water to the second mixture to form a third mixture; adjusting the pH of the third mixture to the acid range whereby the cadmium carbonate will dissolve; and adding at least a stoichiometric amount of sulfide, preferably in the form of hydrogen sulfide or an aqueous ammonium sulfide solution, to the third mixture to precipitate cadmium sulfide. This mixture of sulfide is then preferably digested by heating to facilitate precipitation of large particles of cadmium sulfide. The scrap may be divided by shredding or breaking up to expose additional surface area. Finally, the precipitated cadmium sulfide can be mixed with glass formers and vitrified for permanent disposal.
Process for removing cadmium from scrap metal
Kronberg, James W.
1995-01-01
A process for the recovery of a metal, in particular, cadmium contained in scrap, in a stable form. The process comprises the steps of mixing the cadmium-containing scrap with an ammonium carbonate solution, preferably at least a stoichiometric amount of ammonium carbonate, and/or free ammonia, and an oxidizing agent to form a first mixture so that the cadmium will react with the ammonium carbonate to form a water-soluble ammine complex; evaporating the first mixture so that ammine complex dissociates from the first mixture leaving carbonate ions to react with the cadmium and form a second mixture that includes cadmium carbonate; optionally adding water to the second mixture to form a third mixture; adjusting the pH of the third mixture to the acid range whereby the cadmium carbonate will dissolve; and adding at least a stoichiometric amount of sulfide, preferably in the form of hydrogen sulfide or an aqueous ammonium sulfide solution, to the third mixture to precipitate cadmium sulfide. This mixture of sulfide is then preferably digested by heating to facilitate precipitation of large particles of cadmium sulfide. The scrap may be divided by shredding or breaking up to expose additional surface area. Finally, the precipitated cadmium sulfide can be mixed with glass formers and vitrified for permanent disposal.
Cohen, M.R.; Gal, E.
1993-04-13
A process and system are described for simultaneously removing sulfur oxides (by means of a solid sulfur oxide acceptor on a porous carrier), nitrogen oxides (by means of ammonia gas), and particulate matter (by filtration) from a gaseous mixture, and for regenerating the loaded solid sulfur oxide acceptor. Finely divided solid sulfur oxide acceptor, dispersed on a porous carrier material with a particle size up to about 200 microns, is entrained in the gaseous mixture to deplete its sulfur oxides. In the process, the gaseous mixture is optionally pre-filtered to remove particulate matter, and the finely divided solid sulfur oxide acceptor is thereafter injected into the gaseous mixture.
Creissen, Henry E.; Jorgensen, Tove H.; Brown, James K.M.
2016-01-01
Crop variety mixtures have the potential to increase yield stability in highly variable and unpredictable environments, yet knowledge of the specific mechanisms underlying enhanced yield stability has been limited. Ecological processes in genetically diverse crops were investigated by conducting field trials with winter barley varieties (Hordeum vulgare), grown as monocultures or as three-way mixtures in fungicide-treated and untreated plots at three sites. Mixtures achieved yields comparable to the best-performing monocultures whilst enhancing yield stability, despite being subject to multiple predicted and unpredicted abiotic and biotic stresses including brown rust (Puccinia hordei) and lodging. There was compensation through competitive release: the most competitive variety overyielded in mixtures, thereby compensating for less competitive varieties. Facilitation, which reduced lodging, was also identified as an important ecological process within mixtures. This study indicates that crop varietal mixtures have the capacity to stabilise productivity even when environmental conditions and stresses are not predicted in advance. Varietal mixtures provide a means of increasing crop genetic diversity without the need for extensive breeding efforts. They may confer enhanced resilience to environmental stresses and thus be a desirable component of future cropping systems for sustainable arable farming. PMID:27375312
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.
2017-12-01
Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical species. Numerous geochemical constituents and processes may need to be simulated in these models, which makes such analyses extremely challenging. Here, we demonstrate a new contaminant source identification approach that decomposes the observed mixtures using the Nonnegative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the number of groundwater types and (b) the original geochemical concentrations of the contaminant sources from measured geochemical mixtures with unknown mixing ratios, without any additional site information. We also demonstrate how NMFk can be extended to perform uncertainty quantification and experimental design related to real-world site characterization. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).
The NMFk algorithm has been extensively tested on synthetic datasets; NMFk analyses have been actively performed on real-world data collected at the Los Alamos National Laboratory (LANL) groundwater sites related to Chromium and RDX contamination.
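The core of such a blind source separation can be sketched with plain multiplicative-update NMF. This is a simplified stand-in for NMFk, which additionally performs semi-supervised clustering and selects the number of sources; the end-member signatures and mixing ratios below are invented for illustration:

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Minimal multiplicative-update NMF: X (m x n) ~= W (m x k) @ H (k x n)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)   # update source signatures
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)   # update mixing ratios
    return W, H

# Two hypothetical end-member "groundwater types" (columns: geochemical species)
H_true = np.array([[1.0, 0.1, 0.5, 0.0],
                   [0.0, 0.8, 0.2, 1.0]])
mix = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])  # unknown mixing ratios
X = mix @ H_true                                       # observed well mixtures

W, H = nmf(X, k=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(round(err, 4))
```

With exact rank-2 nonnegative data the factorization converges to a near-perfect reconstruction; the harder, NMFk-specific problem is deciding k itself from noisy field data.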
Investigation of pressure drop in capillary tube for mixed refrigerant Joule-Thomson cryocooler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ardhapurkar, P. M.; Sridharan, Arunkumar; Atrey, M. D.
2014-01-29
A capillary tube is commonly used in small-capacity refrigeration and air-conditioning systems. It is also a preferred expansion device in mixed refrigerant Joule-Thomson (MR J-T) cryocoolers, since it is inexpensive and simple in configuration. However, the flow inside a capillary tube is complex, since the flashing process that occurs there is metastable. A mixture of refrigerants such as nitrogen, methane, ethane, propane and iso-butane expands below its inversion temperature in the capillary tube of an MR J-T cryocooler and reaches cryogenic temperatures. The mass flow rate of the refrigerant mixture circulating through the capillary tube depends on the pressure difference across it. There are many empirical correlations which predict the pressure drop across a capillary tube. However, they have not been tested for refrigerant mixtures or for the operating conditions of the cryocooler. The present paper assesses the existing empirical correlations for predicting the overall pressure drop across the capillary tube of the MR J-T cryocooler. The empirical correlations refer to homogeneous as well as separated flow models. Experiments are carried out to measure the overall pressure drop across the capillary tube for the cooler. Three different compositions of refrigerant mixture are used to study the pressure drop variations. The predicted overall pressure drop across the capillary tube is compared with the experimentally obtained value. The predictions obtained using the homogeneous model show a better match with the experimental results than those of the separated flow models.
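A minimal homogeneous-model estimate of the frictional pressure drop, the class of model the paper found to perform best, might look like the sketch below (Fanning-Blasius friction factor, constant vapor quality; all numbers are illustrative, not the paper's measurements):

```python
def homogeneous_dp(G, D, L, x, rho_l, rho_g, mu):
    """Frictional pressure drop (Pa) of a two-phase flow treated as a single
    pseudo-fluid (homogeneous model) with constant quality x along the tube."""
    rho_m = 1.0 / (x / rho_g + (1.0 - x) / rho_l)  # homogeneous mixture density
    Re = G * D / mu                                # mixture Reynolds number
    f = 0.079 * Re ** -0.25                        # Blasius (Fanning) friction factor
    return 2.0 * f * G ** 2 * L / (D * rho_m)

# Illustrative values for a J-T capillary (mass flux kg/m2s, bore m, length m):
dp = homogeneous_dp(G=1000.0, D=0.9e-3, L=2.0, x=0.2,
                    rho_l=500.0, rho_g=10.0, mu=1.0e-5)
print(round(dp / 1e5, 2), "bar")
```

A real assessment would integrate along the tube as quality and properties evolve; this constant-property form only shows how the mixture density enters the friction term.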
Efficient and robust relaxation procedures for multi-component mixtures including phase transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Ee, E-mail: eehan@math.uni-bremen.de; Hantke, Maren, E-mail: maren.hantke@ovgu.de; Müller, Siegfried, E-mail: mueller@igpm.rwth-aachen.de
We consider a thermodynamically consistent multi-component model in multiple dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from a lack of efficiency and robustness, resulting in very costly computations that in general only allow for one-dimensional calculations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further, we introduce a novel iterative method to treat the mass transfer for a three-component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three-component mixture, providing a unique and admissible equilibrium state.
Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis
2005-07-25
Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery. The spatial resolution of hyperspectral imagery permits different materials to be present in the area covered by a single pixel. The linear mixture model states that a pixel reflectance r is a linear mixture of the endmember signatures m1, m2, …, mP: r = Mα + n (1), where M = [m1 m2 … mP], α contains the mixing proportions, and n is included to account for noise.
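Equation (1) can be exercised with a toy unconstrained least-squares inversion; the constrained factorizations the report investigates additionally impose nonnegativity and sum-to-one conditions on α. The endmember spectra below are fabricated:

```python
import numpy as np

# Linear mixture model r = M @ alpha + n: columns of M are endmember
# signatures, alpha holds the (unknown) abundances, n is additive noise.
M = np.array([[0.90, 0.10],
              [0.50, 0.40],
              [0.10, 0.80]])                 # 3 bands, 2 endmembers
alpha_true = np.array([0.7, 0.3])
rng = np.random.default_rng(1)
r = M @ alpha_true + 0.01 * rng.standard_normal(3)   # one noisy pixel

# Unconstrained least-squares abundance estimate:
alpha_hat, *_ = np.linalg.lstsq(M, r, rcond=None)
print(np.round(alpha_hat, 2))
```

Unconstrained estimates can go negative or fail to sum to one on noisy pixels, which is precisely why constrained matrix factorization is investigated.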
Study on length distribution of ramie fibers
USDA-ARS?s Scientific Manuscript database
The extra-long length of ramie fibers and the high variation in fiber length has a negative impact on the spinning processes. In order to better study the feature of ramie fiber length, in this research, the probability density function of the mixture model applied in the characterization of cotton...
Microstructure and hydrogen bonding in water-acetonitrile mixtures.
Mountain, Raymond D
2010-12-16
The connection of hydrogen bonding between water and acetonitrile in determining the microheterogeneity of the liquid mixture is examined using NPT molecular dynamics simulations. Mixtures for six, rigid, three-site models for acetonitrile and one water model (SPC/E) were simulated to determine the amount of water-acetonitrile hydrogen bonding. Only one of the six acetonitrile models (TraPPE-UA) was able to reproduce both the liquid density and the experimental estimates of hydrogen bonding derived from Raman scattering of the CN stretch band or from NMR quadrupole relaxation measurements. A simple modification of the acetonitrile model parameters for the models that provided poor estimates produced hydrogen-bonding results consistent with experiments for two of the models. Of these, only one of the modified models also accurately determined the density of the mixtures. The self-diffusion coefficient of liquid acetonitrile provided a final winnowing of the modified model and the successful, unmodified model. The unmodified model is provisionally recommended for simulations of water-acetonitrile mixtures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rath, Swagat S., E-mail: swagat.rath@gmail.com; Nayak, Pradeep; Mukherjee, P.S.
2012-03-15
The global crisis of hazardous electronic waste (E-waste) is on the rise due to the increasing usage and disposal of electronic devices. A process was developed to treat E-waste in an environmentally benign manner. The process consisted of thermal plasma treatment followed by recovery of metal values through mineral acid leaching. In the thermal process, the E-waste was melted to recover the metal values as a metallic mixture. The metallic mixture was subjected to acid leaching in the presence of a depolarizer. The leached liquor mainly contained copper, as the other elements like Al and Fe were mostly in alloy form as per the XRD and phase-diagram studies. A response surface model was used to optimize the leaching conditions. More than 90% leaching efficiency at room temperature was observed for Cu, Ni and Co with HCl as the solvent, whereas Fe and Al showed less than 40% efficiency.
Research on cylinder processes of gasoline homogenous charge compression ignition (HCCI) engine
NASA Astrophysics Data System (ADS)
Cofaru, Corneliu
2017-10-01
This paper is designed to develop an HCCI engine starting from a spark-ignition engine platform. The test engine was a single-cylinder, four-stroke unit fitted with a carburetor. The results of experimental research on this version were used as a baseline for the next phase of the work. The engine was then modified for an HCCI configuration: the carburetor was replaced by a direct fuel injection system in order to control precisely the fuel mass per cycle, taking into account the measured intake air mass. To ensure that the air-fuel mixture auto-ignites, the compression ratio was increased from 9.7 to 11.5. The combustion process in HCCI regime is governed by the chemical kinetics of the mixture of air, fuel, re-inducted or trapped exhaust gases, and fresh charge. To modify the quantity of trapped burnt gases, the gas exchange system was changed from fixed timing to variable valve timing. To analyze the processes taking place in the HCCI engine and to synthesize a control system, a model of the system which takes into account the engine configuration and operational parameters is needed. The cylinder processes were simulated on a virtual model. The experimental work focused on determining the parameters which control the combustion timing of the HCCI engine so as to obtain the best energetic and ecological parameters.
Optimizing surface finishing processes through the use of novel solvents and systems
NASA Astrophysics Data System (ADS)
Quillen, M.; Holbrook, P.; Moore, J.
2007-03-01
As the semiconductor industry continues to implement the ITRS (International Technology Roadmap for Semiconductors) node targets that go beyond 45 nm [1], the need for improved cleanliness between repeated process steps continues to grow. Wafer cleaning challenges cover many applications such as Cu/low-k integration, where trade-offs must be made between dielectric damage and residue from plasma etching and CMP, or moisture uptake by aqueous cleaning products. [2-5] Some surface-sensitive processes use the Marangoni tool design [6], where a conventional solvent such as IPA (isopropanol) combines with water to provide improved physical properties such as reduced contact angle and surface tension. This paper introduces the use of alternative solvents and their mixtures, compared to pure IPA, in removing ionics, moisture, and particles using immersion bench-chemistry models of various processes. A novel Eastman proprietary solvent, Eastman methyl acetate, is observed to provide improvement in ionic removal, moisture capture, and particle removal, as compared to conventional IPA. [7] These benefits may be improved relative to pure IPA simply by the addition of various additives. Some physical properties of the mixtures were found to be relatively unchanged even as measured performance improved. This report presents our attempts to characterize and optimize these benefits through the use of laboratory models.
Sterilization of fermentation vessels by ethanol/water mixtures
Wyman, Charles E.
1999-02-09
A method for sterilizing process fermentation vessels with a concentrated alcohol and water mixture integrated in a fuel alcohol or other alcohol production facility. Hot, concentrated alcohol is drawn from a distillation or other purification stage and sprayed into the empty fermentation vessels. This sterilizing alcohol/water mixture should be of a sufficient concentration, preferably higher than 12% alcohol by volume, to be toxic to undesirable microorganisms. Following sterilization, this sterilizing alcohol/water mixture can be recovered back into the same distillation or other purification stage from which it was withdrawn. The process of this invention has its best application in, but is not limited to, batch fermentation processes, wherein the fermentation vessels must be emptied, cleaned, and sterilized following completion of each batch fermentation process.
NASA Astrophysics Data System (ADS)
Guillevic, Myriam; Pascale, Céline; Mutter, Daniel; Wettstein, Sascha; Niederhauser, Bernhard
2017-04-01
In the framework of METAS' AtmoChem-ECV project, new facilities are currently being developed to generate reference gas mixtures for water vapour at concentrations measured in the high troposphere and polar regions, in the range 1-20 µmol/mol (ppm). The generation method is dynamic (the mixture is produced continuously over time) and SI-traceable (i.e. the amount-of-substance fraction in mole per mole is traceable to the definition of the SI units). The generation process is composed of three successive steps. The first step is to purify the matrix gas, nitrogen or synthetic air. Second, this matrix gas is spiked with the pure substance using a permeation technique: a permeation device contains a few grams of pure water in liquid form and loses it linearly over time by permeation through a membrane. In a third step, to reach the desired concentration, the high-concentration mixture exiting the permeation chamber is diluted with a chosen flow of matrix gas in one or two subsequent dilution steps. All flows are piloted by mass flow controllers. All parts in contact with the gas mixture are passivated using coated surfaces, to reduce adsorption/desorption processes as much as possible. The mixture can then be used directly to calibrate an analyser. The standard mixture produced by METAS' dynamic setup was injected into a chilled mirror from MBW Calibration AG, the designated institute for absolute humidity calibration in Switzerland. The chilled mirror used, model 373LX, is able to measure frost point and sample pressure and therefore calculate the water vapour concentration. This intercomparison of the two systems was performed in the range 4-18 ppm water vapour in synthetic air, at two different pressure levels, 1013.25 hPa and 2000 hPa. We present here METAS' dynamic setup, its uncertainty budget and the first results of the intercomparison with MBW's chilled mirror.
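The dynamic-generation arithmetic, converting the permeation rate to a molar flow and dividing by the total molar flow of matrix gas, can be sketched as follows. The permeation rate, carrier flow, and dilution factor are illustrative round numbers, not METAS' actual values:

```python
M_H2O = 18.015e-3              # molar mass of water, kg/mol
q_perm = 5e-9 / 60.0           # permeation loss: 5 ug/min, expressed in kg/s
n_h2o = q_perm / M_H2O         # molar flow of water, mol/s

Vm = 22.414e-3                 # molar volume, m3/mol (0 degC, 1013.25 hPa)
Q_carrier = 0.05e-3 / 60.0     # 0.05 L/min matrix gas through the chamber, m3/s
x1 = n_h2o / (Q_carrier / Vm)  # amount fraction exiting the permeation chamber

x = x1 / 10.0                  # one tenfold dilution with additional matrix gas
print(f"{x * 1e6:.1f} umol/mol")
```

Cascading one or two such dilution stages is what lets a fixed permeation device cover the whole 1-20 µmol/mol target range.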
NASA Astrophysics Data System (ADS)
Akasaka, Ryo
This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. Therefore, the model can be applied to mixtures where experimental data is limited. Vapor-liquid equilibrium (VLE) of the following seven mixtures have been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2- tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations in calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for design and simulations of heat pumps and refrigeration systems using the mixtures as working fluid.
Vaiopoulou, Eleni; Misiti, Teresa M; Pavlostathis, Spyros G
2015-03-01
A commercial naphthenic acids (NAs) mixture (TCI Chemicals) and five model NA compounds were ozonated in a semibatch mode. Ozonation of 25 and 35 mg/L NA mixtures followed pseudo first-order kinetics (k_obs = 0.11 ± 0.008 min⁻¹; r² = 0.989) with a residual NAs concentration of about 5 mg/L. Ozone reacted preferentially with NAs of higher cyclicity and molecular weight and decreased both cyclicity and the acute Microtox® toxicity by 3.3-fold. The ozone reactivity with acyclic and monocyclic model NAs varied and depended on other structural features, such as branching and the presence of tertiary or quaternary carbons. Batch aerobic degradation of the unozonated NA mixture using a NA-enriched culture resulted in 83% NA removal and a 6.7-fold decrease in toxicity, whereas a combination of ozonation and biodegradation resulted in 89% NA removal and a 15-fold decrease in toxicity. Thus, ozonation of NA-bearing waste streams coupled with biodegradation is an effective treatment process. Copyright © 2014 Elsevier Ltd. All rights reserved.
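One way to read the reported fit is a pseudo first-order decay toward the residual concentration, C(t) = C_res + (C0 − C_res)·exp(−k_obs·t). A quick sketch using the abstract's fitted numbers (the exact functional form used in the paper is an assumption here):

```python
import math

def na_conc(t_min, c0=25.0, c_res=5.0, k_obs=0.11):
    """NA concentration (mg/L) after t_min minutes of ozonation, assuming
    pseudo first-order decay of the degradable fraction toward c_res."""
    return c_res + (c0 - c_res) * math.exp(-k_obs * t_min)

half_life = math.log(2) / 0.11   # minutes for the degradable fraction to halve
print(round(na_conc(30.0), 2), round(half_life, 1))
```

After 30 min the 25 mg/L mixture is essentially at the ~5 mg/L residual plateau, consistent with ozone sparing the least reactive acyclic structures.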
3D PIC-MCC simulations of discharge inception around a sharp anode in nitrogen/oxygen mixtures
NASA Astrophysics Data System (ADS)
Teunissen, Jannis; Ebert, Ute
2016-08-01
We investigate how photoionization, electron avalanches and space charge affect the inception of nanosecond pulsed discharges. Simulations are performed with a 3D PIC-MCC (particle-in-cell, Monte Carlo collision) model with adaptive mesh refinement for the field solver. This model, whose source code is available online, is described in the first part of the paper. Then we present simulation results in a needle-to-plane geometry, using different nitrogen/oxygen mixtures at atmospheric pressure. In these mixtures non-local photoionization is important for the discharge growth. The typical length scale for this process depends on the oxygen concentration. With 0.2% oxygen the discharges grow quite irregularly, due to the limited supply of free electrons around them. With 2% or more oxygen the development is much smoother. An almost spherical ionized region can form around the electrode tip, which increases in size with the electrode voltage. Eventually this inception cloud destabilizes into streamer channels. In our simulations, discharge velocities are almost independent of the oxygen concentration. We discuss the physical mechanisms behind these phenomena and compare our simulations with experimental observations.
Ignition in an Atomistic Model of Hydrogen Oxidation.
Alaghemandi, Mohammad; Newcomb, Lucas B; Green, Jason R
2017-03-02
Hydrogen is a potential substitute for fossil fuels that would reduce the combustive emission of carbon dioxide. However, the low ignition energy needed to initiate oxidation imposes constraints on the efficiency and safety of hydrogen-based technologies. Microscopic details of the combustion processes, ephemeral transient species, and complex reaction networks are necessary to control and optimize the use of hydrogen as a commercial fuel. Here, we report estimates of the ignition time of hydrogen-oxygen mixtures over a wide range of equivalence ratios from extensive reactive molecular dynamics simulations. These data show that the shortest ignition time corresponds to a fuel-lean mixture with an equivalence ratio of 0.5, where the numbers of hydrogen and oxygen molecules in the initial mixture are identical, in good agreement with a recent chemical kinetic model. We find that two signatures in the simulation data precede ignition at pressures above 200 MPa. First, there is a peak in hydrogen peroxide that signals ignition is imminent in about 100 ps. Second, we find a strong anticorrelation between the ignition time and the rate of energy dissipation, suggesting the role of thermal feedback in stimulating ignition.
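The fuel-lean optimum can be checked directly from the definition of the equivalence ratio for 2 H2 + O2 → 2 H2O:

```python
def equivalence_ratio(n_h2, n_o2):
    """phi = (fuel/oxidizer) / (fuel/oxidizer)_stoichiometric; for
    2 H2 + O2 -> 2 H2O the stoichiometric H2:O2 mole ratio is 2."""
    return (n_h2 / n_o2) / 2.0

# Equal numbers of H2 and O2 molecules give the fuel-lean phi = 0.5
# that the simulations found to ignite fastest.
print(equivalence_ratio(1000, 1000))   # equal molecule counts
print(equivalence_ratio(2000, 1000))   # stoichiometric mixture
```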
Different Approaches to Covariate Inclusion in the Mixture Rasch Model
ERIC Educational Resources Information Center
Li, Tongyun; Jiao, Hong; Macready, George B.
2016-01-01
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J
2017-10-05
A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.
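The central idea, weighting each powder's contribution by its volumetric occupancy at the applied compaction pressure, can be illustrated with a generic exponential compressibility law. This stands in for the paper's actual equations, which the abstract does not give; all parameters are invented:

```python
import math

def specific_volume(P, v0, v_inf, p0):
    """Specific volume (cm3/g) of a powder at pressure P (MPa), assuming a
    simple exponential approach from loose volume v0 to limiting volume v_inf."""
    return v_inf + (v0 - v_inf) * math.exp(-P / p0)

powders = {  # hypothetical (v0, v_inf, p0) parameters per excipient
    "MCC":      (1.60, 0.68, 60.0),
    "mannitol": (1.10, 0.66, 90.0),
}
w = {"MCC": 0.5, "mannitol": 0.5}   # mass fractions in the binary blend

P = 150.0  # applied compaction pressure, MPa
v_mix = sum(w[k] * specific_volume(P, *powders[k]) for k in powders)
occupancy = {k: w[k] * specific_volume(P, *powders[k]) / v_mix for k in powders}
print(round(v_mix, 3), {k: round(f, 2) for k, f in occupancy.items()})
```

Because the occupancies are evaluated at pressure rather than fixed at the loose-powder state, the component that densifies less retains a larger share of the mixture volume, which is the mechanism behind the non-additive mixture behaviours the model captures.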
Processes of Heat Transfer in Rheologically Unstable Mixtures of Organic Origin
NASA Astrophysics Data System (ADS)
Tkachenko, S. I.; Pishenina, N. V.; Rumyantseva, T. Yu.
2014-05-01
The dependence of the coefficient of heat transfer from the heat-exchange surface to a rheologically unstable organic mixture on the thermohydrodynamic state of the mixture and its prehistory has been established. A method for multivariant investigation of the process of heat transfer in compound organic mixtures has been proposed; this method makes it possible to evaluate the character and peculiarities of change in the rheological structure of the mixture as functions of the thermohydrodynamic conditions of its treatment. It has been shown that the intensity of heat transfer in a biotechnological system for the production of energy carriers can be evaluated at the design stage by multivariant investigation of the heat-transfer intensity in rheologically unstable organic mixtures, taking their prehistory into account.
Pyrolysis processing for solid waste resource recovery
NASA Technical Reports Server (NTRS)
Wojtowicz, Marek A. (Inventor); Serio, Michael A. (Inventor); Kroo, Erik (Inventor); Suuberg, Eric M. (Inventor)
2007-01-01
Solid waste resource recovery in space is effected by pyrolysis processing, to produce light gases as the main products (CH4, H2, CO2, CO, H2O, NH3) and a reactive carbon-rich char as the main byproduct. Significant amounts of liquid products are formed under less severe pyrolysis conditions, and are cracked almost completely to gases as the temperature is raised. A primary pyrolysis model for the composite mixture is based on an existing model for whole biomass materials, and an artificial neural network models the changes in gas composition with the severity of pyrolysis conditions.
NASA Astrophysics Data System (ADS)
Hieu, Nguyen Huu
2017-09-01
Pervaporation is a promising process for the final step of ethanol biofuel production. In this study, a mathematical model was developed based on the resistance-in-series model, and a simulation was carried out using the specialized simulation software COMSOL Multiphysics to describe a tubular-type pervaporation module with membranes for the dehydration of ethanol solution. The membrane permeances, operating conditions, and feed conditions used in the simulation were taken from experimental data previously reported in the literature. The simulated temperature and density profiles of pure water and the ethanol-water mixture were validated against existing published data.
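At its simplest, a resistance-in-series description combines the liquid boundary-layer and membrane resistances into one overall mass-transfer coefficient. A sketch with invented coefficients (the cited experimental permeances are not reproduced here):

```python
# Resistance-in-series view of pervaporation water transport: resistances
# (1/k) of the liquid boundary layer and the membrane add in series.
k_bl  = 2.0e-4   # boundary-layer mass-transfer coefficient, m/s (assumed)
k_mem = 5.0e-5   # membrane permeance on the same basis, m/s (assumed)

k_ov = 1.0 / (1.0 / k_bl + 1.0 / k_mem)   # overall coefficient
dC = 40.0                                  # driving concentration difference, mol/m3
flux = k_ov * dC                           # water flux through the membrane, mol/(m2 s)
print(f"{k_ov:.2e} {flux:.2e}")
```

Because the smaller coefficient dominates the series sum, the membrane here controls the flux; a module-scale model like the one in the paper integrates such a local flux expression along the tube as the feed concentration and temperature change.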
Characterizing Dissolved Gases in Cryogenic Liquid Fuels
NASA Astrophysics Data System (ADS)
Richardson, Ian A.
Pressure-density-temperature-composition (P-ρ-T-x) measurements of cryogenic fuel mixtures are a historical challenge due to the difficulties of maintaining cryogenic temperatures and precisely isolating a mixture sample. For decades NASA has used helium to pressurize liquid hydrogen propellant tanks to maintain tank pressure and reduce boil-off. This process causes helium gas to dissolve into the liquid hydrogen, creating a cryogenic mixture with thermodynamic properties that differ from those of pure liquid hydrogen. This can lead to inefficiencies in fuel storage and instabilities in fluid flow. As NASA plans for longer missions to Mars and beyond, small inefficiencies such as dissolved helium in liquid propellant become significant. Traditional NASA models are unable to account for dissolved helium due to a lack of the fundamental property measurements necessary for the development of a mixture Equation Of State (EOS). The first P-ρ-T-x measurements of helium-hydrogen mixtures, obtained using a retrofitted single-sinker densimeter, magnetic suspension microbalance, and calibrated gas chromatograph, are presented in this research. These measurements were used to develop the first multi-phase EOS for helium-hydrogen mixtures, which was implemented into NASA's Generalized Fluid System Simulation Program (GFSSP) to determine the significance of mixture non-idealities. It was revealed that having dissolved helium in the propellant does not have a significant effect on the tank pressurization rate but does affect the rate at which the propellant temperature rises. P-ρ-T-x measurements are also conducted on methane-ethane mixtures with dissolved nitrogen gas to simulate the conditions of the hydrocarbon seas of Saturn's moon Titan. Titan is the only known celestial body in the solar system besides Earth with stable liquid seas accessible on the surface. The P-ρ-T-x measurements are used to develop solubility models to aid in the design of the Titan Submarine.
NASA is currently designing the submarine to explore the depths of Titan's methane-ethane seas to study the evolution of hydrocarbons in the universe and provide a pathfinder for future submersible designs. In addition, effervescence and freezing liquid line measurements on various liquid methane-ethane compositions with dissolved gaseous nitrogen are presented from 1.5 bar to 4.5 bar and temperatures from 92 K to 96 K to improve simulations of the conditions of the seas. These measurements will be used to validate sea property and bubble incipience models for the Titan Submarine design.
Sovány, Tamás; Papós, Kitti; Kása, Péter; Ilič, Ilija; Srčič, Stane; Pintye-Hódi, Klára
2013-06-01
The importance of in silico modeling in the pharmaceutical industry is continuously increasing. The aim of the present study was the development of a neural network model for prediction of the postcompressional properties of scored tablets based on the application of existing data sets from our previous studies. Some important process parameters and physicochemical characteristics of the powder mixtures were used as training factors to achieve the best applicability in a wide range of possible compositions. The results demonstrated that, after some pre-processing of the factors, an appropriate prediction performance could be achieved. However, because of the poor extrapolation capacity, broadening of the training data range appears necessary.
Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors
ERIC Educational Resources Information Center
Guerra-Peña, Kiero; Steinley, Douglas
2016-01-01
Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…
Response properties in the adsorption-desorption model on a triangular lattice
NASA Astrophysics Data System (ADS)
Šćepanović, J. R.; Stojiljković, D.; Jakšić, Z. M.; Budinski-Petković, Lj.; Vrhovac, S. B.
2016-06-01
The out-of-equilibrium dynamical processes during the reversible random sequential adsorption (RSA) of objects of various shapes on a two-dimensional triangular lattice are studied numerically by means of Monte Carlo simulations. We focus on the influence of the order of the symmetry axis of the shape on the response of the reversible RSA model to sudden perturbations of the desorption probability Pd. We provide a detailed discussion of the significance of collective events for governing the time behavior of the coverage for shapes with different rotational symmetries. We calculate the two-time density-density correlation function C(t, tw) for various waiting times tw and show that longer memory of the initial state persists for the more symmetrical shapes. Our model displays nonequilibrium dynamical effects such as aging. We find that the correlation function C(t, tw) for all objects scales as a function of the single variable ln(tw)/ln(t). We also study the short-term memory effects in two-component mixtures of extended objects and give a detailed analysis of the contribution to the densification kinetics coming from each mixture component. We observe a weakening of correlation features for the deposition processes in multicomponent systems.
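The reversible adsorption-desorption dynamics can be illustrated with a far simpler relative of the model above: monomers on a one-dimensional lattice, where adsorption attempts always succeed on empty sites and occupied sites desorb with probability Pd. Detailed balance then gives a steady-state coverage of 1/(1 + Pd); the triangular-lattice, extended-shape model in the paper is much richer, but the response of the coverage to Pd is the same basic mechanism.

```python
# Toy reversible RSA: monomers on a 1-D lattice, adsorption probability 1
# on empty sites, desorption probability Pd on occupied sites.
import random

random.seed(42)
L, Pd, steps = 2000, 0.1, 400_000
occupied = [False] * L
for _ in range(steps):
    i = random.randrange(L)
    if not occupied[i]:
        occupied[i] = True             # adsorption attempt succeeds
    elif random.random() < Pd:
        occupied[i] = False            # desorption with probability Pd
coverage = sum(occupied) / L
print(round(coverage, 3))   # steady state near 1/(1 + Pd) ≈ 0.909
```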
Proportioning and performance evaluation of self-consolidating concrete
NASA Astrophysics Data System (ADS)
Wang, Xuhao
A well-proportioned self-consolidating concrete (SCC) mixture can be achieved by controlling the aggregate system, paste quality, and paste quantity. The work presented in this dissertation involves an effort to study and improve particle packing of the concrete system and reduce the paste quantity while maintaining concrete quality and performance. This dissertation is composed of four papers resulting from the study: (1) Assessing Particle Packing Based Self-Consolidating Concrete Mix Design; (2) Using Paste-To-Voids Volume Ratio to Evaluate the Performance of Self-Consolidating Concrete Mixtures; (3) Image Analysis Applications on Assessing Static Stability and Flowability of Self-Consolidating Concrete; and (4) Using Ultrasonic Wave Propagation to Monitor Stiffening Process of Self-Consolidating Concrete. Tests were conducted on a large matrix of SCC mixtures that were designed for cast-in-place bridge construction. The mixtures were made with different aggregate types, sizes, and different cementitious materials. In Paper 1, a modified particle-packing-based mix design method, originally proposed by Brouwers (2005), was applied to the design of SCC mixes. Using this method, a large matrix of SCC mixes was designed to have a particle distribution modulus (q) ranging from 0.23 to 0.29. Fresh properties (such as flowability, passing ability, segregation resistance, yield stress, viscosity, set time, and formwork pressure) and hardened properties (such as compressive strength, surface resistance, shrinkage, and air structure) of these concrete mixes were experimentally evaluated. In Paper 2, a concept based on the paste-to-voids volume ratio (Vpaste/Vvoids) was employed to assess the performance of SCC mixtures. The relationship between excess paste theory and Vpaste/Vvoids was investigated. The workability, flow properties, compressive strength, shrinkage, and surface resistivity of SCC mixtures were determined at various ages.
Statistical analyses, including response surface models and Tukey Honestly Significant Difference (HSD) tests, were conducted to relate the mix design parameters to the concrete performance. The work discussed in Paper 3 applied a digital image processing (DIP) method, associated with a MATLAB algorithm, to evaluate cross-sectional images of SCC. Parameters such as the inter-particle spacing between coarse aggregate particles and the average mortar-to-aggregate ratio, defined as the average mortar thickness index (MTI), were derived from the DIP method and applied to evaluate static stability and to develop statistical models to predict the flowability of SCC mixtures. The last paper investigated technologies available to monitor the changing properties of a fresh mixture, particularly for use with SCC. A number of techniques were used to monitor setting time, stiffening, and formwork pressure of SCC mixtures: longitudinal (P-wave) ultrasonic wave propagation, penetrometer-based setting time, semi-adiabatic calorimetry, and formwork pressure measurement. The first study demonstrated that the concrete mixes designed using the modified Brouwers mix design algorithm and particle packing concept had the potential to reduce SCM content by up to 20% compared with existing SCC mix proportioning methods while still maintaining good performance. The second paper concluded that the slump flow of the SCC mixtures increased with Vpaste/Vvoids at a given viscosity of mortar. Compressive strength increases with increasing Vpaste/Vvoids up to a point (~150%), after which the strength becomes independent of Vpaste/Vvoids and even decreases slightly. Vpaste/Vvoids has little effect on the shrinkage of SCC mixtures, while SCC mixtures tend to have a higher shrinkage than conventional concrete for a given Vpaste/Vvoids. Vpaste/Vvoids also has little effect on the surface resistivity of SCC mixtures; the paste quality tends to have the dominant effect.
Statistical analysis is an efficient tool for identifying the significance of influence factors on concrete performance. In the third paper, the proposed DIP method and MATLAB algorithm were successfully used to derive inter-particle spacing and MTI and to quantitatively evaluate static stability in hardened SCC samples. These parameters can be applied to overcome the limitations and challenges of existing theoretical frameworks and to construct statistical models, associated with rheological parameters, to predict the flowability of SCC mixtures. The outcome of this study is of practical value, providing an efficient and useful tool for designing mixture proportions of SCC. The last paper compared several concrete performance measurement techniques; the P-wave test and calorimetric measurements can be used efficiently to monitor the stiffening and setting of SCC mixtures.
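The paste-to-voids volume ratio from Paper 2 is a simple quotient; the sketch below just makes the arithmetic concrete with invented volumes, landing at the ~150% level near which the strength plateau is reported.

```python
# Illustrative computation of the paste-to-voids volume ratio.
# Vvoids is the void volume of the compacted aggregate skeleton and
# Vpaste the cement-paste volume; both numbers are invented here.
V_paste = 0.30   # m^3 of paste per m^3 of concrete (assumed)
V_voids = 0.20   # m^3 of inter-aggregate voids per m^3 (assumed)
ratio = V_paste / V_voids
print(f"Vpaste/Vvoids = {ratio:.0%}")   # prints "Vpaste/Vvoids = 150%"
```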
Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan
2016-01-01
This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability.
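A small simulation sketch of the data-generating side of the study: responses from a two-class mixture of 2PL IRT models, with class-2 item difficulties drifted relative to class 1. All parameter values are invented, and the estimation step (fitting a unidimensional IRT model to these data) is not shown.

```python
# Generate dichotomous responses from a two-class mixture 2PL model
# with item-difficulty drift between classes. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 1000, 21
pi_2 = 0.4                                  # assumed proportion in class 2
cls = rng.random(n_persons) < pi_2
theta = rng.normal(np.where(cls, -0.5, 0.5), 1.0)   # class-specific ability means

a = rng.uniform(0.8, 2.0, n_items)          # item discriminations
b1 = rng.normal(0.0, 1.0, n_items)          # difficulties in class 1
b2 = b1 + rng.normal(0.3, 0.1, n_items)     # class-2 difficulties show drift

b = np.where(cls[:, None], b2, b1)          # per-person item difficulties
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # 2PL response probabilities
responses = (rng.random((n_persons, n_items)) < p).astype(int)
print(responses.shape)
```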
Numerical investigation of spray ignition of a multi-component fuel surrogate
NASA Astrophysics Data System (ADS)
Backer, Lara; Narayanaswamy, Krithika; Pepiot, Perrine
2014-11-01
Simulating turbulent spray ignition, an important process in engine combustion, is challenging, since it combines the complexity of multi-scale, multiphase turbulent flow modeling with the need for an accurate description of chemical kinetics. In this work, we use direct numerical simulation to investigate the influence of the evaporation model on the ignition characteristics of a multi-component fuel surrogate, injected as droplets into a turbulent environment. The fuel is represented as a mixture of several components, each one representative of a different chemical class. A reduced kinetic scheme for the mixture is extracted from a well-validated detailed chemical mechanism and integrated into the multiphase turbulent reactive flow solver NGA. Comparisons are made between a single-component evaporation model, in which the evaporating gas has the same composition as the liquid droplet, and a multi-component model, in which component segregation does occur. In particular, the corresponding production of radical species, which are characteristic of the ignition of individual fuel components, is thoroughly analyzed.
NASA Astrophysics Data System (ADS)
Winters, C.; Eckert, Z.; Yin, Z.; Frederickson, K.; Adamovich, I. V.
2018-01-01
This work presents the results of number density measurements of metastable Ar atoms and ground state H atoms in diluted mixtures of H2 and O2 with Ar, as well as ground state O atoms in diluted H2-O2-Ar, CH4-O2-Ar, C3H8-O2-Ar, and C2H4-O2-Ar mixtures excited by a repetitive nanosecond pulse discharge. The measurements have been made in a nanosecond pulse, double dielectric barrier discharge plasma sustained in a flow reactor between two plane electrodes encapsulated within dielectric material, at an initial temperature of 500 K and pressures ranging from 300 Torr to 700 Torr. The metastable Ar atom number density distribution in the afterglow is measured by tunable diode laser absorption spectroscopy and used to characterize plasma uniformity. The temperature rise in the reacting flow is measured by Rayleigh scattering. H atom and O atom number densities are measured by two-photon absorption laser-induced fluorescence. The results are compared with kinetic model predictions, showing good agreement, with the exception of extremely lean mixtures. O atoms and H atoms in the plasma are produced mainly during quenching of electronically excited Ar atoms generated by electron impact. In H2-Ar and O2-Ar mixtures, the atoms decay by three-body recombination. In H2-O2-Ar, CH4-O2-Ar, and C3H8-O2-Ar mixtures, O atoms decay in a reaction with OH, generated during the H atom reaction with HO2, with the latter produced by three-body H atom recombination with O2. The net process of O atom decay is O + H → OH, such that the decay rate is controlled by the amount of H atoms produced in the discharge. In extremely lean mixtures of propane and ethylene with O2-Ar, the model underpredicts the O atom decay rate. At these conditions, when fuel is completely oxidized by the end of the discharge burst, the net process of O atom decay, O + O → O2, becomes nearly independent of H atom number density.
Lack of agreement with the data at these conditions is likely due to diffusion of H atoms from the partially oxidized regions near the side walls of the reactor into the plasma. Although significant fractions of hydrogen and hydrocarbon fuels are oxidized by O atoms produced in the plasma, chain branching remains a minor effect at these relatively low temperature conditions.
Solubility modeling of refrigerant/lubricant mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michels, H.H.; Sienel, T.H.
1996-12-31
A general model for predicting the solubility properties of refrigerant/lubricant mixtures has been developed based on applicable theory for the excess Gibbs energy of non-ideal solutions. In our approach, flexible thermodynamic forms are chosen to describe the properties of both the gas and liquid phases of refrigerant/lubricant mixtures. After an extensive study of models for describing non-ideal liquid effects, the Wohl-suffix equations, which have been extensively utilized in the analysis of hydrocarbon mixtures, have been developed into a general form applicable to mixtures where one component is a POE lubricant. In the present study we have analyzed several POEs where structural and thermophysical property data were available. Data were also collected from several sources on the solubility of refrigerant/lubricant binary pairs. We have developed a computer code (NISC), based on the Wohl model, that predicts dew point or bubble point conditions over a wide range of composition and temperature. Our present analysis covers mixtures containing up to three refrigerant molecules and one lubricant. The present code can be used to analyze the properties of R-410a and R-407c in mixtures with a POE lubricant. Comparisons with other models, such as the Wilson or modified Wilson equations, indicate that the Wohl-suffix equations yield more reliable predictions for HFC/POE mixtures.
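The Wohl-suffix equations are one family of excess-Gibbs-energy models; as a much simpler relative, the sketch below computes a bubble-point pressure for a binary refrigerant/lubricant pair from modified Raoult's law with a one-parameter Margules activity model. The vapor pressures and the Margules constant are invented for illustration; this is not the model implemented in the NISC code.

```python
# Bubble-point pressure of a binary liquid: P = sum_i gamma_i * x_i * Psat_i,
# with one-parameter Margules activity coefficients. Values are invented.
import math

def bubble_pressure(x1, Psat1, Psat2, A):
    """Bubble-point pressure (same units as Psat) for liquid composition x1."""
    x2 = 1.0 - x1
    g1 = math.exp(A * x2 ** 2)     # Margules activity coefficients
    g2 = math.exp(A * x1 ** 2)
    return g1 * x1 * Psat1 + g2 * x2 * Psat2

# volatile refrigerant (Psat1) + essentially non-volatile lubricant (Psat2)
P = bubble_pressure(x1=0.3, Psat1=10.0, Psat2=0.01, A=0.8)
print(round(P, 3))
```

The positive Margules constant (A > 0) raises the bubble pressure above the ideal-solution value, the kind of non-ideal liquid effect the Wohl-suffix model captures in more flexible form.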
Process for forming shaped group III-V semiconductor nanocrystals, and product formed using process
Alivisatos, A. Paul; Peng, Xiaogang; Manna, Liberato
2001-01-01
A process for the formation of shaped Group III-V semiconductor nanocrystals comprises contacting the semiconductor nanocrystal precursors with a liquid medium comprising a binary mixture of phosphorus-containing organic surfactants capable of promoting the growth of either spherical semiconductor nanocrystals or rod-like semiconductor nanocrystals, whereby the shape of the semiconductor nanocrystals formed in said binary mixture of surfactants is controlled by adjusting the ratio of the surfactants in the binary mixture.
Process for forming shaped group II-VI semiconductor nanocrystals, and product formed using process
Alivisatos, A. Paul; Peng, Xiaogang; Manna, Liberato
2001-01-01
A process for the formation of shaped Group II-VI semiconductor nanocrystals comprises contacting the semiconductor nanocrystal precursors with a liquid medium comprising a binary mixture of phosphorus-containing organic surfactants capable of promoting the growth of either spherical semiconductor nanocrystals or rod-like semiconductor nanocrystals, whereby the shape of the semiconductor nanocrystals formed in said binary mixture of surfactants is controlled by adjusting the ratio of the surfactants in the binary mixture.
Modeling and simulation of large scale stirred tank
NASA Astrophysics Data System (ADS)
Neuville, John R.
The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process by the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants, which had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington; West Valley in New York; and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top and bottom regions of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full-scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham plastic. Particles in the mixture have an abrasive characteristic that causes excessive erosion of internal vessel components at higher impeller speeds.
The goal for this mixing process is to ensure that the agitation of the vessel is adequate to produce a homogeneous mixture but not so vigorous that it produces excessive erosion of internal components. The main findings reported by this study were: (1) Careful consideration of the fluid yield stress characteristic is required to make predictions of fluid flow behavior. Laminar models can predict flow patterns and stagnant regions in the tank until full movement of the flow field occurs. Power curves and flow patterns were developed for the full-scale mixing model to show the differences in expected performance of the mixing process for a broad range of fluids that exhibit Herschel-Bulkley and Bingham plastic flow behavior. (2) The impeller power demand is independent of the flow model selection for turbulent flow fields in the region of the impeller. The laminar models slightly overpredicted the agitator impeller power demand produced by the turbulent models. (3) The CFD results show that the power number produced by the mixing system is independent of size: the 40-gallon model produced the same power number results as the 9300-gallon model for the same process conditions. (4) CFD results show that the scale-up of fluid motion in a 40-gallon tank should compare with fluid motion at full scale (9300 gallons) by maintaining constant impeller tip speed.
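Finding (3), the scale-independence of the power number, and finding (4), scale-up at constant tip speed, both follow from the standard definition Np = P / (rho * N^3 * D^5) for turbulent stirring. The sketch below uses invented values for Np, the slurry density, and the impeller diameters; it is an illustration of the dimensional relationship, not the DWPF geometry.

```python
# Power number relationship for a stirred tank (turbulent regime):
# P = Np * rho * N^3 * D^5. All numerical values are invented.
import math

def impeller_power(Np, rho, N, D):
    """Agitator power demand, W, from power number Np, density rho (kg/m^3),
    rotation rate N (rev/s), and impeller diameter D (m)."""
    return Np * rho * N ** 3 * D ** 5

Np, rho = 1.3, 1200.0            # assumed power number and slurry density
D_small, D_full = 0.25, 1.5      # assumed pilot and full-scale impeller diameters
tip_speed = 3.0                  # m/s, held constant across scales (finding 4)

N_small = tip_speed / (math.pi * D_small)   # rev/s at pilot scale
N_full = tip_speed / (math.pi * D_full)     # rev/s at full scale
P_small = impeller_power(Np, rho, N_small, D_small)
P_full = impeller_power(Np, rho, N_full, D_full)
print(round(P_small, 1), round(P_full, 1))
```

Back-computing Np from either scale's power returns the same value, which is what "the power number is independent of size" means operationally.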
NASA Astrophysics Data System (ADS)
Zhou, Ya-Tong; Fan, Yu; Chen, Zi-Yi; Sun, Jian-Cheng
2017-05-01
The contribution of this work is twofold: (1) A multimodality prediction method for chaotic time series based on the Gaussian process mixture (GPM) model is proposed, which employs a divide-and-conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can fit the chaotic time series more precisely. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo-inputs, accelerating model learning based on these pseudo-inputs. Experiments on Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality predictions but also prediction confidence intervals. SHC-EM outperforms traditional variational learning in terms of both prediction accuracy and speed, and is also more robust to noise. Supported by the National Natural Science Foundation of China under Grant No 60972106, the China Postdoctoral Science Foundation under Grant No 2014M561053, the Humanity and Social Science Foundation of the Ministry of Education of China under Grant No 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No E2016202341.
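The sketch below is a toy divide-and-conquer stand-in for the GPM idea: partition the input space (here crudely with k-means, not the paper's SHC-EM algorithm) and fit one Gaussian process per partition, routing each query to its modality and reporting a confidence estimate alongside the prediction. The series, kernel, and noise level are all invented for illustration.

```python
# Divide-and-conquer regression: k-means partition + one GP per partition.
# A toy stand-in for a Gaussian process mixture, not the SHC-EM method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 6 * np.pi, 300)
series = np.sin(t) + 0.05 * rng.normal(size=t.size)   # toy stand-in series
X, y = t[:-1, None], series[1:]                       # one-step-ahead pairs

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # crude "modalities"
models = {k: GaussianProcessRegressor(alpha=1e-2).fit(X[km.labels_ == k],
                                                      y[km.labels_ == k])
          for k in (0, 1)}

x_new = np.array([[2.0]])
k = int(km.predict(x_new)[0])                          # route query to a modality
mean, std = models[k].predict(x_new, return_std=True)  # prediction + confidence
print(float(mean[0]), float(std[0]))
```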
Process and apparatus for igniting a burner in an inert atmosphere
Coolidge, Dennis W.; Rinker, Franklin G.
1994-01-01
According to this invention there is provided a process and apparatus for the ignition of a pilot burner in an inert atmosphere without substantially contaminating the inert atmosphere. The process includes the steps of providing a controlled amount of combustion air for a predetermined interval of time to the combustor, then substantially simultaneously providing a controlled mixture of fuel and air to the pilot burner and to a flame generator. The controlled mixture of fuel and air to the flame generator is then periodically energized to produce a secondary flame. With the secondary flame, the controlled mixture of fuel and air to the pilot burner and the combustion air is ignited to produce a pilot burner flame. The pilot burner flame is then used to ignite a mixture of main fuel and combustion air to produce a main burner flame. The main burner flame is then used to ignite a mixture of process-derived fuel and combustion air to produce products of combustion for use as an inert gas in a heat treatment process.
Toribo, S.G.; Gray, B.R.; Liang, S.
2011-01-01
The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
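A simulation sketch of the N-mixture setup described above: latent site abundances are Poisson, replicated counts are binomial thinnings, and under the closed-site assumption the covariance between replicates separates abundance from detection. The moment estimator below is a crude stand-in for the Bayesian fit discussed in the abstract; all parameter values are invented.

```python
# N-mixture data generation and a naive moment estimator.
# Identities used (closed sites, j != k):
#   E[y] = lam * p   and   Cov(y_j, y_k) = lam * p^2,
# so p = Cov/E[y] and lam = E[y]^2/Cov.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_reps, lam, p = 2000, 4, 5.0, 0.4
N = rng.poisson(lam, n_sites)                            # latent abundance per site
counts = rng.binomial(N[:, None], p, (n_sites, n_reps))  # replicated surveys

C = np.cov(counts.T)                                     # replicate covariance matrix
off = C[~np.eye(n_reps, dtype=bool)].mean()              # ≈ lam * p^2
mean = counts.mean()                                     # ≈ lam * p
p_hat, lam_hat = off / mean, mean ** 2 / off
print(round(p_hat, 2), round(lam_hat, 1))
```

When the closure assumption fails (the pseudo-replication case studied in the paper), the replicate covariance no longer equals lam * p^2, which is exactly why the estimates become biased.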
Role of amino acids and their Maillard mixtures with ribose in the biosilicification process
NASA Astrophysics Data System (ADS)
Kolb, Vera M.; Liesch, Patrick J.
2006-08-01
The mode of preservation of organic materials on the early Earth, Mars, or other extraterrestrial objects, and during space transport on objects such as meteors, is one of NASA's interests. This is especially true for bio-organic materials, which could indicate life, past or present. The finding of such materials preserved in some ancient rocks, for example, could be interpreted as a biosignature. We have developed an experimental model for silicification in which we synthesized silica gels by reacting sodium silicate solution with various amino acids and with their mixtures with sugars, so-called Maillard mixtures. Our results indicate that these organic materials cause rapid and massive polymerization of silica. Such a process may encrust organics or small organisms and thus preserve them. We have studied the gels we synthesized by infrared (IR) spectroscopy and have detected a small amount of the organic material in the silica gel. The gels were distinct in each case and aged differently. In some cases, gel-sol-gel transformations were observed, which may be important for the transport of both gels and organics under prebiotic conditions. The gels obtained from the Maillard mixtures differ from those obtained from the amino acids alone. Deuteration of the gels was performed in an attempt to resolve the bands in the Si-O-Si and Si-O-C region.
Three Boundary Conditions for Computing the Fixed-Point Property in Binary Mixture Data
Couto, Joaquina; Lebreton, Mael
2016-01-01
The notion of “mixtures” has become pervasive in behavioral and cognitive sciences, due to the success of dual-process theories of cognition. However, providing support for such dual-process theories is not trivial, as it crucially requires properties in the data that are specific to a mixture of cognitive processes. In theory, one such property could be the fixed-point property of binary mixture data, applied, for instance, to response times. In that case, the fixed-point property entails that response time distributions obtained in an experiment in which the mixture proportion is manipulated would have a common density point. In the current article, we discuss the application of the fixed-point property and identify three boundary conditions under which the fixed-point property will not be interpretable. In Boundary condition 1, a finding in support of the fixed point will be moot because of a lack of difference between conditions. Boundary condition 2 refers to the case in which the extreme conditions are so different that a mixture may display bimodality. In this case, a mixture hypothesis is clearly supported, yet the fixed point may not be found. In Boundary condition 3, the fixed point may also not be present, yet a mixture might still exist but be occluded due to additional changes in behavior. Finding the fixed-point property provides strong support for a dual-process account, yet the boundary conditions that we identify should be considered before making inferences about underlying psychological processes. PMID:27893868
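The fixed-point property is easy to verify numerically: if f1 and f2 are the two component densities, every mixture p*f1 + (1 - p)*f2 passes through the same density value wherever f1(x) = f2(x), whatever the mixing proportion p. The Gaussian component densities below are invented stand-ins for fast- and slow-process response time distributions.

```python
# Fixed-point property of binary mixtures: at any x* with f1(x*) = f2(x*),
# p*f1(x*) + (1-p)*f2(x*) = f1(x*) for every mixing proportion p.
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

f1 = lambda x: normal_pdf(x, 0.5, 0.2)   # invented "fast process" RT density
f2 = lambda x: normal_pdf(x, 1.1, 0.2)   # invented "slow process" RT density
x_star = 0.8                             # equal sigmas: densities cross midway

densities = [p * f1(x_star) + (1 - p) * f2(x_star) for p in (0.2, 0.5, 0.8)]
print([round(d, 6) for d in densities])  # identical values: the fixed point
```

Boundary condition 1 corresponds to f1 ≈ f2 everywhere (the crossing is uninformative), and Boundary condition 2 to components so far apart that bimodality, not the crossing point, is the visible signature.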
ERIC Educational Resources Information Center
Li, Ming; Harring, Jeffrey R.
2017-01-01
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…
ERIC Educational Resources Information Center
de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.
2010-01-01
We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…
Sterilization of fermentation vessels by ethanol/water mixtures
Wyman, C.E.
1999-02-09
A method is described for sterilizing process fermentation vessels with a concentrated alcohol and water mixture integrated in a fuel alcohol or other alcohol production facility. Hot, concentrated alcohol is drawn from a distillation or other purification stage and sprayed into the empty fermentation vessels. This sterilizing alcohol/water mixture should be of a sufficient concentration, preferably higher than 12% alcohol by volume, to be toxic to undesirable microorganisms. Following sterilization, the sterilizing alcohol/water mixture can be recovered back into the same distillation or other purification stage from which it was withdrawn. The process of this invention has its best application in, but is not limited to, batch fermentation processes, wherein the fermentation vessels must be emptied, cleaned, and sterilized following completion of each batch fermentation process.
Separation of organic azeotropic mixtures by pervaporation. Final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, R.W.
1991-12-01
Distillation is a commonly used separation technique in the petroleum refining and chemical processing industries. However, there are a number of potential separations involving azeotropic and close-boiling organic mixtures that cannot be separated efficiently by distillation. Pervaporation is a membrane-based process that uses selective permeation through membranes to separate liquid mixtures. Because the separation process is not affected by the relative volatility of the mixture components being separated, pervaporation can be used to separate azeotropes and close-boiling mixtures. Our results showed that pervaporation membranes can be used to separate azeotropic mixtures efficiently, a result that is not achievable with simple distillation. The membranes were 5-10 times more permeable to one of the components of the mixture, concentrating it in the permeate stream. For example, the membrane was 10 times more permeable to ethanol than methyl ethyl ketone, producing 60% ethanol permeate from an azeotropic mixture of ethanol and methyl ethyl ketone containing 18% ethanol. For the ethyl acetate/water mixture, the membranes showed a very high selectivity to water (>300) and the permeate was 50-100 times enriched in water relative to the feed. The membranes had permeate fluxes on the order of 0.1-1 kg/(m^2·h) in the operating range of 55-70 °C. Higher fluxes were obtained by increasing the operating temperature.
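The enrichment figures quoted above follow roughly from a simple ideal weighting of the feed composition by permeability. The sketch below applies that weighting to the 18% ethanol feed with a 10:1 selectivity; it lands near 69%, the same order as the reported 60% permeate (a real membrane is not perfectly ideal, and coupling effects reduce the enrichment).

```python
# Ideal binary permeate composition: each component's permeation rate is
# taken as proportional to (permeability x feed fraction). Illustrative only.
def permeate_fraction(x_feed, selectivity):
    """Permeate fraction of the fast-permeating component for a binary feed."""
    return selectivity * x_feed / (selectivity * x_feed + (1 - x_feed))

x_perm = permeate_fraction(x_feed=0.18, selectivity=10.0)
print(round(100 * x_perm, 1))   # prints 68.7, same order as the reported 60%
```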
Rafal Podlaski; Francis A. Roesch
2013-01-01
This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures for estimating the parameters of mixture distributions and analysed a variety of mixture models to approximate empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
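The two-component mixture machinery above can be sketched with a plain EM fit. This is an illustrative simplification (maximum likelihood, no random effects, no heteroscedastic residuals, and no Gibbs sampling), not the Bayesian model the abstract describes; the simulated "healthy"/"diseased" means are made-up numbers:

```python
import numpy as np

def norm_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def em_two_normal(x, iters=300):
    """EM for a two-component univariate normal mixture.

    pm is the probability of membership in component 1 (here,
    'diseased'); r holds the posterior membership probabilities."""
    mu = np.quantile(x, [0.25, 0.75])     # crude initial component means
    sd = np.array([x.std(), x.std()])
    pm = 0.5
    for _ in range(iters):
        # E-step: posterior probability each record is 'diseased'
        p0 = (1 - pm) * norm_pdf(x, mu[0], sd[0])
        p1 = pm * norm_pdf(x, mu[1], sd[1])
        r = p1 / (p0 + p1)
        # M-step: update mixing weight, means, standard deviations
        pm = r.mean()
        mu = np.array([np.average(x, weights=1 - r),
                       np.average(x, weights=r)])
        sd = np.sqrt(np.array([
            np.average((x - mu[0]) ** 2, weights=1 - r),
            np.average((x - mu[1]) ** 2, weights=r),
        ]))
    return pm, mu, sd, r

# Simulated scores: 70% 'healthy' around 2, 30% 'diseased' around 6.
rng = np.random.default_rng(1)
diseased = rng.random(5000) < 0.3
scores = np.where(diseased,
                  rng.normal(6.0, 1.0, 5000),
                  rng.normal(2.0, 1.0, 5000))
pm, mu, sd, post = em_two_normal(scores)
```

With well-separated components, the posterior probabilities in `post` play the same role as the posterior probabilities of putative mastitis estimated in the study.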
Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series
NASA Astrophysics Data System (ADS)
Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth
2017-12-01
The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GP modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
Saad, Muhammad; Tahir, Hajira
2017-05-01
The contemporary problems concerning water purification could be resolved by using nanosorbents. The present study emphasizes the synthesis of γ-Fe₂O₃-activated carbon nanocomposites (γ-Fe₂O₃-NP-AC) by the sol-gel method. Their composition and surface morphology were studied by FTIR, EDS, SEM and XRD techniques. They were then employed for the selective removal of a binary mixture of dyes, reactive red 223 (RR) and malachite green (MG), by an ultrasonic-assisted adsorption method. Sonication is the act of applying sound energy to agitate particles in a sample; ultrasonic frequencies (>20 kHz) were used to agitate the experimental solutions in the current studies. Response surface methodology based on a five-factor central composite design (CCD) was employed to investigate the optimum parameters of adsorption. The optimum operating parameters (OOP), including sonication time, solution pH, amount of adsorbent, and concentrations of RR and MG, were estimated for the selective removal of the dye mixture. At the OOP conditions for RR, the removal of RR and MG was 92.12% and 10.05%, respectively, while at the OOP for MG, the removal of MG and RR from the mixture was 85.32% and 32.13%, respectively. The mechanisms of adsorption of RR and MG on γ-Fe₂O₃-NP-AC are also illustrated. The significance of the RR-γ-Fe₂O₃-NP-AC and MG-γ-Fe₂O₃-NP-AC adsorption models was affirmed by an ANOVA test. Pareto plots for the selective removal of RR and MG from the binary mixture also confirm the significance of the factors. Isothermal studies showed that RR adsorption followed the Langmuir isotherm model whereas MG adsorption followed the Freundlich model. Thermodynamic studies suggested the spontaneous nature of the adsorption processes. Kinetic models were employed to study the kinetics of the process. The system followed the pseudo-second-order, intra-particle diffusion and Elovich models, as indicated by the R² values of the respective models. A comparison with previously reported methods revealed that the proposed method is among the most efficient for eliminating RR and MG dyes from aqueous media. The current study should therefore be useful in reducing the toxicity of RR- and MG-contaminated effluent. Copyright © 2016 Elsevier B.V. All rights reserved.
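The Langmuir and Freundlich fits mentioned above are commonly performed via their linearized forms; a minimal sketch on synthetic (not the paper's) equilibrium data, where Ce is the residual dye concentration and qe the amount adsorbed:

```python
import numpy as np

# Synthetic Langmuir-type equilibrium data (illustrative numbers):
# qe = qmax * KL * Ce / (1 + KL * Ce) with qmax = 50, KL = 0.3.
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
qe = 50.0 * 0.3 * Ce / (1.0 + 0.3 * Ce)

# Langmuir linearisation: Ce/qe = Ce/qmax + 1/(KL*qmax)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax, KL = 1.0 / slope, slope / intercept

# Freundlich linearisation: ln qe = ln KF + (1/n) ln Ce
fslope, fintercept = np.polyfit(np.log(Ce), np.log(qe), 1)
KF, n = np.exp(fintercept), 1.0 / fslope
```

Comparing the R² of the two linearized fits is the usual way of deciding which isotherm a dye follows, as done in the study.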
Cáceres, Rafaela; Coromina, Narcís; Malińska, Krystyna; Martínez-Farré, F Xavier; López, Marga; Soliva, Montserrat; Marfà, Oriol
2016-12-01
Next-generation waste management systems should apply product-oriented bioconversion processes that produce composts or biofertilisers of a desired quality that can be sold in high-priced markets such as horticulture. Natural acidification linked to nitrification can be promoted during composting. If nitrification is enhanced, compost with a suitable pH can be obtained for use in horticultural substrates. Green waste compost (GW) represents a potentially suitable product for use in growing medium mixtures. However, its low N content provides very limited slow-release nitrogen fertilization for suitable plant growth, so GW should be composted with a complementary N-rich raw material such as the solid fraction of cattle slurry (SFCS). Therefore, it is important to determine how very different or extreme proportions of the two materials in the mixture can limit or otherwise affect the nitrification process. The objectives of this work were two-fold: (a) to assess the changes in chemical and physicochemical parameters during the prolonged composting of extreme mixtures of green waste (GW) and the solid fraction of cattle slurry (SFCS), and the feasibility of using the composts as growing media; and (b) to check for nitrification during composting in two different extreme mixtures of GW and SFCS and to describe the conditions under which this process can be maintained and its consequences. The physical and physicochemical properties of both composts obtained indicated that they were appropriate for use as ingredients in horticultural substrates. The nitrification process occurred in both mixtures in the medium-late thermophilic stage of the composting process. In particular, its feasibility has been demonstrated in the mixtures with a low N content. Nitrification led to the inversion of each mixture's initial pH. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cabeza, R
1995-03-01
The dual nature of the Japanese writing system was used to investigate two assumptions of the processing view of memory transfer: (1) that both perceptual and conceptual processing can contribute to the same memory test (mixture assumption) and (2) that both can be broken into more specific processes (subdivision assumption). Supporting the mixture assumption, a word fragment completion test based on ideographic kanji characters (kanji fragment completion test) was affected by both perceptual (hiragana/kanji script shift) and conceptual (levels-of-processing) study manipulations. The conceptual effect appears tied to the lexical/semantic nature of the kanji fragments, because it did not occur with the use of meaningless hiragana fragments. The mixture assumption is also supported by an effect of study script on an implicit conceptual test (sentence completion), and the subdivision assumption is supported by a crossover dissociation between hiragana and kanji fragment completion as a function of study script.
Rafal Podlaski; Francis Roesch
2014-01-01
In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...
A general mixture model and its application to coastal sandbar migration simulation
NASA Astrophysics Data System (ADS)
Liang, Lixin; Yu, Xiping
2017-04-01
A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for the numerical solutions. The model is validated against suspended sediment motion in steady open channel flows, in both equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones in a vertical 2D framework, coupled with the VOF method for describing the water-air free surface and a model for topography change. The bed load transport rate and suspended load entrainment rate are both determined by the seabed shear stress, which is obtained from the boundary-layer-resolving mixture model. The simulation results indicate that, under small-amplitude regular waves, erosion occurs on the sandbar slope facing against the wave propagation direction, while deposition dominates on the slope facing the direction of wave propagation, indicating an onshore migration tendency. The computational results also show that the suspended load makes a significant contribution to topography change in the surf zone, a contribution that has usually been neglected in previous research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio, E-mail: drpqam@cid.csic.es
2013-06-01
Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures. The exception was the mixture of MMI (a TPO inhibitor) and KClO₄ (an NIS inhibitor) dosed at a fixed ratio of EC₁₀, which yielded similar CA and RA predictions, making it difficult to reach a conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as an alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to better predict the effect of mixtures of goitrogens.
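The CA and RA mixture predictions compared in this study have simple standard forms; a minimal sketch with hypothetical effect concentrations and effect levels (not the paper's measurements):

```python
import numpy as np

def ec_mix_concentration_addition(fractions, ecx):
    """EC_x of a mixture of similarly acting chemicals (CA):
    1 / EC_mix = sum_i p_i / EC_i, where p_i is the mixture
    fraction of constituent i and EC_i its individual EC_x."""
    return 1.0 / np.sum(np.asarray(fractions) / np.asarray(ecx))

def effect_response_addition(effects):
    """Joint effect of dissimilarly acting chemicals (RA,
    independent action): E = 1 - prod_i (1 - E_i)."""
    effects = np.asarray(effects)
    return 1.0 - np.prod(1.0 - effects)

# Illustrative binary mixture, 50:50 by fraction (hypothetical numbers):
ec_mix = ec_mix_concentration_addition([0.5, 0.5], [10.0, 40.0])
joint = effect_response_addition([0.10, 0.10])
```

Comparing observed mixture effects against both predictions, as done for the IT4C endpoint, indicates which concept better describes a given pair of disruptors.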
Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.
Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong
2018-03-01
The traffic safety research has developed spatiotemporal models to explore the variations in the spatial pattern of crash risk over time. Many studies observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research for the comparison of different temporal treatments and their interaction with spatial component. This study developed four spatiotemporal models with varying complexity due to the different temporal treatments such as (I) linear time trend; (II) quadratic time trend; (III) Autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction which allows greater flexibility compared to the traditional linear space-time interaction. The mixture component allows the accommodation of global space-time interaction as well as the departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of mixture models based on the diverse criteria pertaining to goodness-of-fit, cross-validation and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification which was evidently more complex due to the addition of information borrowed from neighboring years, but this addition of parameters allowed significant advantage at posterior deviance which subsequently benefited overall fit to crash data. The Base models were also developed to study the comparison between the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models due to the advantages of much lower deviance. 
For the cross-validation comparison of predictive accuracy, the linear time trend model was judged best, as it recorded the highest value of log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data as for model development. Under each criterion, observed crash counts were compared with three types of data: Bayesian estimates, normal predictions, and model replications. The linear model again performed best in most scenarios, except one case using model-replicated data and two cases involving prediction without random effects. These phenomena indicate the mediocre performance of the linear trend when random effects are excluded from evaluation, which may be due to the flexible mixture space-time interaction efficiently absorbing residual variability that escapes the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as the mixture models generated more precise estimated crash counts across all four models, suggesting that the advantages of the mixture component at model fit carry over to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of random-effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Edinçliler, Ayşe; Baykal, Gökhan; Saygili, Altug
2010-06-01
Use of the processed used tires in embankment construction is becoming an accepted way of beneficially recycling scrap tires due to shortages of natural mineral resources and increasing waste disposal costs. Using these used tires in construction requires an awareness of the properties and the limitations associated with their use. The main objective of this paper is to assess the different processing techniques on the mechanical properties of used tires-sand mixtures to improve the engineering properties of the available soil. In the first part, a literature study on the mechanical properties of the processed used tires such as tire shreds, tire chips, tire buffings and their mixtures with sand are summarized. In the second part, large-scale direct shear tests are performed to evaluate shear strength of tire crumb-sand mixtures where information is not readily available in the literature. The test results with tire crumb were compared with the other processed used tire-sand mixtures. Sand-used tire mixtures have higher shear strength than that of the sand alone and the shear strength parameters depend on the processing conditions of used tires. Three factors are found to significantly affect the mechanical properties: normal stress, processing techniques, and the used tire content. Copyright 2009. Published by Elsevier Ltd.
Fourier transform infrared spectroscopy for Kona coffee authentication.
Wang, Jun; Jun, Soojin; Bittenbender, H C; Gautz, Loren; Li, Qing X
2009-06-01
Kona coffee, the variety of "Kona typica" grown in the north and south districts of Kona-Island, carries a unique stamp of the region of Big Island of Hawaii, U.S.A. The excellent quality of Kona coffee makes it among the best coffee products in the world. Fourier transform infrared (FTIR) spectroscopy integrated with an attenuated total reflectance (ATR) accessory and multivariate analysis was used for qualitative and quantitative analysis of ground and brewed Kona coffee and blends made with Kona coffee. The calibration set of Kona coffee consisted of 10 different blends of Kona-grown original coffee mixture from 14 different farms in Hawaii and a non-Kona-grown original coffee mixture from 3 different sampling sites in Hawaii. Derivative transformations (1st and 2nd), mathematical enhancements such as mean centering and variance scaling, multivariate regressions by partial least square (PLS), and principal components regression (PCR) were implemented to develop and enhance the calibration model. The calibration model was successfully validated using 9 synthetic blend sets of 100% Kona coffee mixture and its adulterant, 100% non-Kona coffee mixture. There were distinct peak variations of ground and brewed coffee blends in the spectral "fingerprint" region between 800 and 1900 cm⁻¹. The PLS-2nd derivative calibration model based on brewed Kona coffee with mean centering data processing showed the highest degree of accuracy with the lowest standard error of calibration value of 0.81 and the highest R² value of 0.999. The model was further validated by quantitative analysis of commercial Kona coffee blends. Results demonstrate that FTIR can be a rapid alternative to authenticate Kona coffee, which only needs very quick and simple sample preparations.
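One of the two calibration approaches named here, principal components regression (PCR), can be sketched on synthetic two-component "spectra". The mean-centering step matches the abstract, while the data, peak positions, and component count below are illustrative assumptions:

```python
import numpy as np

def pcr_fit(X, y, n_comp=2):
    """Principal components regression: project mean-centred spectra
    onto the leading PCs, regress the property on the scores, and
    map the coefficients back to wavenumber space."""
    xmean, ymean = X.mean(axis=0), y.mean()
    U, s, Vt = np.linalg.svd(X - xmean, full_matrices=False)
    scores = U[:, :n_comp] * s[:n_comp]
    b, *_ = np.linalg.lstsq(scores, y - ymean, rcond=None)
    return Vt[:n_comp].T @ b, xmean, ymean

def pcr_predict(Xnew, coef, xmean, ymean):
    return (Xnew - xmean) @ coef + ymean

# Synthetic 'spectra' in the fingerprint region: linear mixtures of
# two made-up pure-component bands, with the 'Kona' fraction as the
# property to calibrate.
rng = np.random.default_rng(0)
wavenumbers = np.linspace(800, 1900, 200)
pure_a = np.exp(-((wavenumbers - 1100) / 60.0) ** 2)
pure_b = np.exp(-((wavenumbers - 1600) / 80.0) ** 2)
frac = rng.uniform(0, 1, 30)                       # 'Kona' fraction
X = np.outer(frac, pure_a) + np.outer(1 - frac, pure_b)
X += rng.normal(0, 1e-3, X.shape)                  # instrument noise
coef, xm, ym = pcr_fit(X, frac, n_comp=2)
pred = pcr_predict(X, coef, xm, ym)
```

A real calibration would validate on independent blend sets, as the study does with its nine synthetic Kona/non-Kona blends.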
NASA Astrophysics Data System (ADS)
Orekhova, T. N.; Nosov, O. A.; Prokopenko, V. S.; Kachaev, A. E.
2018-03-01
The article describes improvements to the design of pneumatic mixers aimed at producing homogeneous disperse systems; resource and energy savings play an important role for the enterprises that use this type of equipment in their technological chains.
Computational study of sheath structure in oxygen containing plasmas at medium pressures
NASA Astrophysics Data System (ADS)
Hrach, Rudolf; Novak, Stanislav; Ibehej, Tomas; Hrachova, Vera
2016-09-01
Plasma mixtures containing active species are used in many plasma-assisted material treatment technologies. The analysis of such systems is rather difficult, as both physical and chemical processes affect plasma properties. A combination of experimental and computational approaches is best suited, especially at higher pressures and/or in chemically active plasmas. The first part of our study of argon-oxygen mixtures was based on experimental results obtained in the positive column of a DC glow discharge. The plasma was analysed by the macroscopic kinetic approach, which is based on the set of chemical reactions in the discharge. The result of this model is the time evolution of the number densities of each species. In the second part of the contribution, a detailed analysis of the processes taking place during the interaction of oxygen-containing plasma with immersed substrates was performed, with the results of the first model serving as input parameters. The method used was the particle simulation technique applied to multicomponent plasma. The sheath structure and the fluxes of charged particles to the substrates were analysed as functions of plasma pressure, plasma composition and surface geometry.
Modeling sports highlights using a time-series clustering framework and model interpretation
NASA Astrophysics Data System (ADS)
Radhakrishnan, Regunathan; Otsuka, Isao; Xiong, Ziyou; Divakaran, Ajay
2005-01-01
In our past work on sports highlights extraction, we have shown the utility of detecting audience reaction using an audio classification framework. The audio classes in the framework were chosen based on intuition. In this paper, we present a systematic way of identifying the key audio classes for sports highlights extraction using a time series clustering framework. We treat the low-level audio features as a time series and model the highlight segments as "unusual" events against a background "usual" process. The set of audio classes to characterize the sports domain is then identified by analyzing the consistent patterns in each of the clusters output by the time series clustering framework. The distribution of features from the training data so obtained for each of the key audio classes is parameterized by a Minimum Description Length Gaussian Mixture Model (MDL-GMM). We also interpret the meaning of each of the mixture components of the MDL-GMM for the key audio class (the "highlight" class) that is correlated with highlight moments. Our results show that the "highlight" class is a mixture of audience cheering and commentator's excited speech. Furthermore, we show that the precision-recall performance for highlights extraction based on this "highlight" class is better than that of our previous approach, which uses only audience cheering as the key highlight class.
Process for the separation of components from gas mixtures
Merriman, J.R.; Pashley, J.H.; Stephenson, M.J.; Dunthorn, D.I.
1973-10-01
A process is described for the removal from gaseous mixtures of a desired component selected from oxygen, iodine, methyl iodide, and the lower oxides of carbon, nitrogen, and sulfur. The gaseous mixture is contacted with a liquid fluorocarbon in an absorption zone maintained at superatmospheric pressure to preferentially absorb the desired component in the fluorocarbon. Unabsorbed constituents of the gaseous mixture are withdrawn from the absorption zone. Liquid fluorocarbon enriched in the desired component is withdrawn separately from the zone, following which the desired component is recovered from the fluorocarbon absorbent. (Official Gazette)
Structure-reactivity modeling using mixture-based representation of chemical reactions.
Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre
2017-09-01
We describe a novel approach to reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenated product and reactant descriptors or the difference between the descriptors of products and reactants. This reaction representation does not require explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control applicability domain approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
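The "two mixtures" representation can be sketched with toy fragment counts standing in for SiRMS simplex descriptors. The fragment labels below are hypothetical; the point is only the concatenation-versus-difference encoding of a reaction:

```python
from collections import Counter

def mixture_descriptor(molecules):
    """Descriptor of a mixture = summed fragment counts over its
    components (a stand-in for SiRMS simplex counts; 'fragments'
    here are just illustrative strings)."""
    d = Counter()
    for frags in molecules:
        d.update(frags)
    return d

def reaction_vectors(reactants, products, keys):
    """Two encodings of a reaction: concatenated reactant and
    product descriptors, and their product-minus-reactant
    difference.  Neither needs a labeled reaction center."""
    r = mixture_descriptor(reactants)
    p = mixture_descriptor(products)
    concat = [r[k] for k in keys] + [p[k] for k in keys]
    diff = [p[k] - r[k] for k in keys]
    return concat, diff

# Toy E2-like reaction: substrate + base -> alkene + conjugate acid.
reactants = [["C-Br", "C-H"], ["O-H"]]
products = [["C=C"], ["O-H", "H-Br"]]
keys = sorted(set(mixture_descriptor(reactants))
              | set(mixture_descriptor(products)))
concat, diff = reaction_vectors(reactants, products, keys)
```

In the difference vector, broken bonds appear with negative counts and formed bonds with positive counts, while spectator fragments cancel to zero.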
Direct injection analysis of fatty and resin acids in papermaking process waters by HPLC/MS.
Valto, Piia; Knuutinen, Juha; Alén, Raimo
2011-04-01
A novel HPLC-atmospheric pressure chemical ionization/MS (HPLC-APCI/MS) method was developed for the rapid analysis of selected fatty and resin acids typically present in papermaking process waters. A mixture of palmitic, stearic, oleic, linolenic, and dehydroabietic acids was separated on a commercial HPLC column (a modified stationary C₁₈ phase) using gradient elution with methanol/0.15% formic acid (pH 2.5) as the mobile phase. The internal standard (myristic acid) method was used to calculate the calibration correlation coefficients and to quantify the results. In a thorough measurement of quality parameters, a mixture of these model acids in aqueous media, as well as in six different paper machine process waters, was quantitatively determined. The measured quality parameters, such as selectivity, linearity, precision, and accuracy, clearly indicated that, compared with traditional gas chromatographic techniques, the simple method developed provides faster chromatographic analysis with almost real-time monitoring of these acids. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
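The internal-standard quantitation step can be sketched as a linear fit of response ratio against concentration ratio. The paper uses myristic acid as the internal standard (IS); the peak areas and concentrations below are illustrative numbers, not the paper's data:

```python
import numpy as np

# Calibration standards: known analyte/IS concentration ratios and
# the corresponding measured peak-area ratios (hypothetical values).
conc_ratio = np.array([0.1, 0.5, 1.0, 2.0, 5.0])       # C_analyte / C_IS
resp_ratio = np.array([0.12, 0.61, 1.18, 2.41, 5.95])  # A_analyte / A_IS

slope, intercept = np.polyfit(conc_ratio, resp_ratio, 1)

def quantify(area_analyte, area_is, c_is):
    """Analyte concentration in an unknown sample, given its peak
    areas and the IS concentration spiked into it."""
    return (area_analyte / area_is - intercept) / slope * c_is

# Round trip on a point generated from the fitted line:
c_check = quantify((slope * 2.0 + intercept) * 100.0, 100.0, 10.0)
```

Dividing by the IS area cancels injection-to-injection variability, which is what makes the ratio calibration more robust than raw peak areas.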
Supercritical fluid extraction. Principles and practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
McHugh, M.A.; Krukonis, V.J.
This book is a presentation of the fundamentals and applications of supercritical fluid solvents (SCF). The authors cover virtually every facet of SCF technology: the history of SCF extraction, its underlying thermodynamic principles, process principles, industrial applications, and analysis of SCF research and development efforts. The thermodynamic principles governing SCF extraction are covered in depth. The often complex three-dimensional pressure-temperature-composition (PTx) phase diagrams for SCF-solute mixtures are constructed in a coherent step-by-step manner using the more familiar two-dimensional Px diagrams. The experimental techniques used to obtain high-pressure phase behavior information are described in detail and the advantages and disadvantages of each technique are explained. Finally, the equations used to model SCF-solute mixtures are developed, and modeling results are presented to highlight the correlational strengths of a cubic equation of state.
Yuan, Dawei; Rao, Kripa; Varanasi, Sasidhar; Relue, Patricia
2012-08-01
A system that incorporates a packed bed reactor for isomerization of xylose and a hollow fiber membrane fermentor (HFMF) for sugar fermentation by yeast was developed for facile recovery of the xylose isomerase enzyme pellets and reuse of the cartridge loaded with yeast. Fermentation in the HFMF of pre-isomerized poplar hydrolysate, produced using ionic liquid pretreatment, resulted in ethanol yields equivalent to those of model sugar mixtures of xylose and glucose. By recirculating model sugar mixtures containing partially isomerized xylose through the packed bed and the HFMF connected in series, 39 g/l ethanol was produced within 10 h with 86.4% xylose utilization. The modular nature of this configuration has the potential for easy scale-up of the simultaneous isomerization and fermentation process without significant capital costs. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandler, S.I.
1986-01-01
The objective of the work is to use the generalized van der Waals theory, as derived earlier (''The Generalized van der Waals Partition Function I. Basic Theory'' by S.I. Sandler, Fluid Phase Equilibria 19, 233 (1985)), to: (1) understand the molecular-level assumptions inherent in current thermodynamic models; (2) use theory and computer simulation studies to test these assumptions; and (3) develop new, improved thermodynamic models based on better molecular-level assumptions. From such a fundamental study, thermodynamic models will be developed that will be applicable to mixtures of molecules of widely different size and functionality, as occurs in the processing of heavy oils, coal liquids and other synthetic fuels. An important aspect of our work is to reduce our fundamental theoretical developments to engineering practice through extensive testing and evaluation with experimental data on real mixtures. During the first year of this project important progress was made in the areas specified in the original proposal, as well as in several subsidiary areas identified as the work progressed. Some of this work has been written up and submitted for publication. Manuscripts acknowledging DOE support, together with a very brief description, are listed herein.
Nano-particle dynamics during capillary suction.
Kuijpers, C J; Huinink, H P; Tomozeiu, N; Erich, S J F; Adan, O C G
2018-07-01
Due to the increased use of nanoparticles in everyday applications, there is a need for theoretical descriptions of particle transport and attachment in porous media. It should be possible to develop a one-dimensional model to describe nanoparticle retention during capillary transport of liquid mixtures in porous media. Water-glycerol-nanoparticle mixtures were prepared and their penetration into porous Al₂O₃ samples of varying pore size was measured using NMR imaging. The liquid and particle fronts can be measured by utilizing T₂ relaxation effects from the paramagnetic nanoparticles. A good agreement between the experimental data and the particle retention predicted by the developed theory is found. Using the model, the binding constant for Fe₂O₃ nanoparticles on sintered Al₂O₃ samples and the maximum surface coverage are determined. Furthermore, we show that the penetrating liquid front follows a square-root-of-time behavior, as predicted by Darcy's law. However, scaling with the liquid parameters is no longer sufficient to map different liquid mixtures onto a single master curve. The Darcy model should be extended to address the two formed domains (with and without particles) and their interaction, to give an accurate prediction for the penetrating liquid front. Copyright © 2018 Elsevier Inc. All rights reserved.
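The square-root-of-time front behavior predicted by Darcy's law can be checked with a one-parameter fit. The data below are synthetic, with an illustrative prefactor and units, not the NMR measurements from the paper:

```python
import numpy as np

# Darcy/Washburn behaviour predicts a front position x(t) = A*sqrt(t).
# Synthetic front positions with measurement noise:
rng = np.random.default_rng(0)
t = np.linspace(1.0, 900.0, 30)                           # time, s
front = 1.5 * np.sqrt(t) + rng.normal(0.0, 0.5, t.size)   # position, mm

# Least-squares fit of x = A * sqrt(t) through the origin:
A = np.sum(np.sqrt(t) * front) / np.sum(t)
```

Plotting `front` against `sqrt(t)` and checking linearity is the standard diagnostic; deviations from a single master curve across liquid mixtures are what motivate the extended two-domain model discussed above.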
Development of briquette fuel from cashew shells and rice husk mixture
NASA Astrophysics Data System (ADS)
Yohana, Eflita; Arijanto, Kalyana, Ivan Edgar; Lazuardi, Andy
2017-01-01
In Indonesia, large amounts of biomass are available from cashew plantations and rice fields and constitute one of the raw material sources for thermal energy. Annually, 130,052 tons of whole cashews can produce cashew shells with a total energy content of 4.933 × 10^9 kcal. In addition, 49 million tons of rice are produced annually in Indonesia, yielding 7.5-10 million tons of rice husks with a total energy content of 2.64 × 10^13 kcal. The purpose of this research is to study biomass briquettes made from a mixture of cashew shells and rice husks with polyvinyl acetate (PVA) as the adhesive. The mixture ratio of cashew shells to rice husks is varied over 75:25, 50:50, and 25:75 by weight. Briquettes are made in a cylindrical mold and pressed using a hydraulic press machine at pressures of 2,500, 5,000, and 7,500 kg/m2. Results show that the briquette with a 75:25 mixture ratio has good resistance to pressure. A model is used to relate density to briquetting pressure; it shows that the briquette has a low compressibility of 0.13. The heating value of the briquettes is further enhanced by a torrefaction treatment, which produces briquettes with a heating value of 6,712 kcal/kg, on par with sub-bituminous coal according to the ASTM D 388 standard classification.
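The abstract does not state the functional form of its density-pressure model, so the sketch below assumes one common choice, a power law rho = rho0 * P^k, whose exponent plays the role of a compressibility constant. The data points are synthetic, generated with k = 0.13 purely to show how the fit would be done in log space.

```python
import numpy as np

# Hedged sketch: fit a power-law density-pressure model rho = rho0 * P**k.
# The functional form and the data are assumptions for illustration only;
# the abstract reports only the compressibility value (~0.13).
P = np.array([2500.0, 5000.0, 7500.0])   # pressing pressures [kg/m^2]
rho = 450.0 * P ** 0.13                  # synthetic densities [kg/m^3] (invented)

# Linear least squares in log space: ln(rho) = ln(rho0) + k * ln(P)
k_fit, ln_rho0 = np.polyfit(np.log(P), np.log(rho), 1)
rho0 = np.exp(ln_rho0)
```

A small exponent means density rises only weakly with pressure, consistent with the reported low compressibility.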
Response Times to Gustatory-Olfactory Flavor Mixtures: Role of Congruence.
Shepard, Timothy G; Veldhuizen, Maria G; Marks, Lawrence E
2015-10-01
A mixture of perceptually congruent gustatory and olfactory flavorants (sucrose and citral) was previously shown to be detected faster than predicted by a model of probability summation that assumes stochastically independent processing of the individual gustatory and olfactory signals. This outcome suggests substantial integration of the signals. Does substantial integration also characterize responses to mixtures of incongruent flavorants? Here, we report simple response times (RTs) to detect brief pulses of 3 possible flavorants: monosodium glutamate, MSG (gustatory: "umami" quality), citral (olfactory: citrus quality), and a mixture of MSG and citral (gustatory-olfactory). Each stimulus (and, on a fraction of trials, water) was presented orally through a computer-operated, automated flow system, and subjects were instructed to press a button as soon as they detected any of the 3 non-water stimuli. Unlike responses previously found to the congruent mixture of sucrose and citral, responses here to the incongruent mixture of MSG and citral took significantly longer (RTs were greater) and showed lower detection rates than the values predicted by probability summation. This outcome suggests that the integration of gustatory and olfactory flavor signals is less extensive when the component flavors are perceptually incongruent rather than congruent, perhaps because incongruent flavors are less familiar. © The Author 2015. Published by Oxford University Press. All rights reserved.
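The probability-summation benchmark used above is the independent race model: the mixture is detected by whichever unisensory channel finishes first, so predicted detection within any deadline t is P = 1 - (1 - Pg)(1 - Po). The simulation below is a generic sketch of that benchmark; the RT distributions and parameters are hypothetical, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hedged sketch of the independent-race (probability summation) benchmark.
# Shifted-exponential channel RTs with invented parameters.
rt_gust = rng.exponential(scale=0.40, size=n) + 0.25   # gustatory channel RT [s]
rt_olf = rng.exponential(scale=0.35, size=n) + 0.25    # olfactory channel RT [s]

# Race model: the mixture is detected by whichever channel finishes first.
rt_race = np.minimum(rt_gust, rt_olf)

# Detection probability within a deadline t predicted by probability summation:
# P = 1 - (1 - Pg)(1 - Po), assuming stochastically independent channels.
t = 0.8
p_g = (rt_gust <= t).mean()
p_o = (rt_olf <= t).mean()
p_pred = 1 - (1 - p_g) * (1 - p_o)
p_emp = (rt_race <= t).mean()
```

Observed mixture RTs slower than the race prediction, as found here for the incongruent MSG + citral mixture, indicate less integration than even independent parallel processing would produce.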
An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models
ERIC Educational Resources Information Center
Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol
2016-01-01
The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…
ERIC Educational Resources Information Center
Liu, Junhui
2012-01-01
The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…
Analyses of the chemical composition of complex DBP mixtures, produced by different drinking water treatment processes, are essential to generate toxicity data required for assessing their risks to humans. For mixture risk assessments, whole mixture toxicology studies generally a...
NASA Astrophysics Data System (ADS)
Ballarini, E.; Graupner, B.; Bauer, S.
2015-12-01
For deep geological repositories of high-level radioactive waste (HLRW), bentonite and sand-bentonite mixtures are investigated as buffer materials to form a sealing layer. This sealing layer surrounds the canisters and experiences an initial drying due to the heat produced by the HLRW and a successive re-saturation with fluid from the host rock. These complex thermal, hydraulic and mechanical processes interact and were investigated in laboratory column experiments using MX-80 clay pellets as well as a mixture of 35% sand and 65% bentonite. The aim of this study is both to understand the individual processes taking place in the buffer materials and to identify the key physical parameters that determine the material behavior under heating and hydrating conditions. To this end, detailed and process-oriented numerical modelling was applied to the experiments, simulating heat transport, multiphase flow and mechanical effects from swelling. For both columns, the same set of parameters was assigned to the experimental set-up (i.e. insulation, heater and hydration system), while the parameters of the buffer material were adapted during model calibration. A good fit between model results and data was achieved for temperature, relative humidity, water intake and swelling pressure, thus explaining the material behavior. The key variables identified by the model are the permeability and relative permeability, the water retention curve and the thermal conductivity of the buffer material. The different hydraulic and thermal behavior of the two buffer materials observed in the laboratory was well reproduced by the numerical model.
Effects of three veterinary antibiotics and their binary mixtures on two green alga species.
Carusso, S; Juárez, A B; Moretton, J; Magdaleno, A
2018-03-01
The individual and combined toxicities of chlortetracycline (CTC), oxytetracycline (OTC) and enrofloxacin (ENF) have been examined in two green algae representative of the freshwater environment, the international standard strain Pseudokichneriella subcapitata and the native strain Ankistrodesmus fusiformis. The toxicities of the three antibiotics and their mixtures were similar in both strains, although low concentrations of ENF and CTC + ENF were more toxic in A. fusiformis than in the standard strain. The toxicological interactions of binary mixtures were predicted using the two classical models of additivity: Concentration Addition (CA) and Independent Action (IA), and compared to the experimentally determined toxicities over a range of concentrations between 0.1 and 10 mg L -1 . The CA model predicted the inhibition of algal growth in the three mixtures in P. subcapitata, and in the CTC + OTC and CTC + ENF mixtures in A. fusiformis. However, this model underestimated the experimental results obtained in the OTC + ENF mixture in A. fusiformis. The IA model did not predict the experimental toxicological effects of the three mixtures in either strain. The sum of the toxic units (TU) for the mixtures was calculated. According to these values, the binary mixtures CTC + ENF and OTC + ENF showed an additive effect, and the CTC + OTC mixture showed antagonism in P. subcapitata, whereas the three mixtures showed synergistic effects in A. fusiformis. Although A. fusiformis was isolated from a polluted river, it showed a similar sensitivity with respect to P. subcapitata when it was exposed to binary mixtures of antibiotics. Copyright © 2017 Elsevier Ltd. All rights reserved.
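The two additivity references used in the study have standard closed forms: Independent Action combines fractional effects like independent probabilities, while Concentration Addition finds the effect level at which the toxic units sum to one. The sketch below assumes log-logistic concentration-response curves; the EC50s, slope, and mixture concentrations are invented, not the paper's values.

```python
import numpy as np
from scipy.optimize import brentq

# Hedged sketch of the two classical additivity models (CA and IA) for a binary
# mixture. Log-logistic dose-response is assumed; all numbers are hypothetical.
def effect(c, ec50, slope):
    """Fraction of growth inhibition at concentration c (log-logistic)."""
    return 1.0 / (1.0 + (ec50 / c) ** slope)

ec50_a, ec50_b, slope = 1.0, 4.0, 2.0   # mg/L (assumed)
c_a, c_b = 0.5, 2.0                     # component concentrations in the mixture

# Independent Action (IA): effects combine like independent probabilities.
e_ia = 1 - (1 - effect(c_a, ec50_a, slope)) * (1 - effect(c_b, ec50_b, slope))

# Concentration Addition (CA): solve for the effect level x at which the toxic
# units sum to 1:  c_a/ECx_a + c_b/ECx_b = 1, with ECx = ec50*(x/(1-x))**(1/slope).
def tu_sum(x):
    ecx_a = ec50_a * (x / (1 - x)) ** (1 / slope)
    ecx_b = ec50_b * (x / (1 - x)) ** (1 / slope)
    return c_a / ecx_a + c_b / ecx_b - 1

e_ca = brentq(tu_sum, 1e-9, 1 - 1e-9)
```

With these assumed numbers each component contributes 0.5 toxic units, so CA predicts exactly the 50% effect level; departures of measured effects from such predictions are what the study classifies as additivity, synergism, or antagonism.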
Anomalous neuronal responses to fluctuated inputs
NASA Astrophysics Data System (ADS)
Hosaka, Ryosuke; Sakai, Yutaka
2015-10-01
The irregular firing of a cortical neuron is thought to result from a highly fluctuating drive that is generated by the balance of excitatory and inhibitory synaptic inputs. A previous study reported anomalous responses of the Hodgkin-Huxley neuron to fluctuating inputs, in which the irregularity of the output spike trains is inversely related to the irregularity of the input. In the current study, we investigated the origin of these anomalous responses with the Hindmarsh-Rose neuron model, map-based models, and a simple mixture of interspike interval distributions. First, we specified the parameter regions for the bifurcations in the Hindmarsh-Rose model, and we confirmed that the model reproduced the anomalous responses in the dynamics of the saddle-node and subcritical Hopf bifurcations. For both bifurcations, the Hindmarsh-Rose model shows bistability between the resting state and the repetitive firing state, which indicates that this bistability is the origin of the anomalous input-output relationship. Similarly, the map-based model that contained bistability reproduced the anomalous responses, while the model without bistability did not. These results were supported by the additional finding that the anomalous responses were reproduced by mimicking bistable firing with a mixture of two different interspike interval distributions. Decorrelation of spike trains is important for neural information processing, and irregular firing is key to such decorrelation. Our results indicate that irregular firing can emerge from fluctuating drives, even weak ones, under conditions involving bistability. The anomalous responses, therefore, contribute to efficient processing in the brain.
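The paper's final manipulation, mimicking bistable firing with a mixture of two interspike interval (ISI) distributions, can be sketched directly: mixing regular short ISIs (repetitive firing) with occasional long ISIs (resting episodes) drives the coefficient of variation (CV) well above that of either regime alone. The component distributions and parameters below are hypothetical, chosen only to reproduce the qualitative effect.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged sketch: irregular spiking from a two-component ISI mixture that mimics
# bistable firing. Both component distributions and all parameters are invented.
def mixed_isi_cv(p_rest, n=100_000):
    """CV of ISIs drawn from a mixture of 'resting' and 'firing' components."""
    from_rest = rng.random(n) < p_rest
    isi = np.where(from_rest,
                   rng.exponential(200.0, n),   # long ISIs during resting episodes [ms]
                   rng.gamma(10.0, 1.0, n))     # regular short ISIs while firing [ms]
    return isi.std() / isi.mean()

cv_pure_firing = mixed_isi_cv(0.0)   # repetitive firing alone: low CV (regular)
cv_bistable = mixed_isi_cv(0.2)      # mixture: CV well above 1 (highly irregular)
```

Even a modest resting fraction makes the pooled ISI distribution heavy-tailed, which is the mechanism by which bistability lets weak fluctuating drives produce irregular output.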
Separating Iso-Propanol-Toluene mixture by azeotropic distillation
NASA Astrophysics Data System (ADS)
Iqbal, Asma; Ahmad, Syed Akhlaq
2018-05-01
The separation of the Iso-Propanol-Toluene azeotropic mixture using acetone as an entrainer has been simulated in the Aspen Plus software package using rigorous methods. Vapor-liquid equilibrium calculations for the binary system are done using the UNIQUAC-RK model, which gives good agreement with the experimental data reported in the literature. The effects of the reflux ratio (RR), distillate-to-feed molar ratio (D/F), feed stage, solvent feed stage, total number of stages, and solvent feed temperature on the product purities and recoveries are studied to obtain the optimum values that give the maximum purity and recovery of products. The configuration consists of 20 theoretical stages with an equimolar feed of the binary mixture. The desired separation has been achieved with feed and entrainer feed stages of 15 and 12, reflux ratios of 2.5 and 4.0, and D/F ratios of 0.75 and 0.54, respectively, in the two columns. The simulation results thus obtained are useful for setting up the optimal column configuration of the azeotropic distillation process.
Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco
2016-05-01
The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10-37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). 
In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This "calibration curve" was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.
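The study's DPM was implemented in R; a rough Python stand-in (not the authors' implementation) is scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior, which likewise avoids fixing the number of components in advance and exposes a single concentration parameter playing a role analogous to the one the authors calibrate. The 1-D "uptake" data below are synthetic, mimicking a hot lesion on a uniform background.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Rough stand-in for the paper's R-based DPM segmentation. Synthetic 1-D voxel
# "uptake" values: background plus a hot lesion (TBR = 8); all numbers invented.
background = rng.normal(1.0, 0.2, size=2000)
lesion = rng.normal(8.0, 1.0, size=300)
uptake = np.concatenate([background, lesion]).reshape(-1, 1)

dpm = BayesianGaussianMixture(
    n_components=10,                                   # upper bound, pruned by the prior
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,                    # analogous to the DPM's key parameter
    random_state=0,
).fit(uptake)

labels = dpm.predict(uptake)
# Voxels assigned to high-mean components form the segmented "lesion".
high = dpm.means_.ravel() > 4.0
n_lesion = int(np.isin(labels, np.where(high)[0]).sum())
```

No number of classes is chosen beforehand, which mirrors the advantage the authors emphasize; their contribution is relating the one influential parameter to the ROI uptake variance via a calibration curve.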
NASA Astrophysics Data System (ADS)
Rao, R. R.
2015-12-01
Aerosol radiative forcing estimates with high certainty are required in climate change studies. Estimating aerosol radiative forcing from the chemical composition of aerosols is often impractical, as chemical composition data with radiative properties are not widely available. In this study we instead estimate radiative forcing from ground-based spectral radiation flux measurements combined with a radiative transfer (RT) model. Spectral flux was measured using an ASD spectroradiometer covering 350-1050 nm at 3 nm resolution on around 54 clear-sky days, during which the AOD ranged from about 0.1 to 0.7. Simultaneous measurements of black carbon, ranging from about 1.5 to 8 µg/m³, were made using an Aethalometer (Magee Scientific). All measurements were made on the campus of the Indian Institute of Science, in the heart of Bangalore city. The primary study examined the sensitivity of the spectral flux to changes in the mass concentration of individual aerosol species (Optical Properties of Aerosols and Clouds, OPAC, classified aerosol species) using the SBDART RT model. This allowed us to clearly distinguish the regions of influence of different aerosol species on the spectral flux. Following this, a new technique was introduced to estimate an optically equivalent mixture of aerosol species for the given location: an iterative process in which the mixture of aerosol species is changed in the OPAC model and the RT model is rerun until a mixture is obtained that mimics the measured spectral flux to within 2-3%. Using this optically equivalent aerosol mixture and the RT model, the aerosol radiative forcing is estimated. The new method is limited to clear-sky scenes, and its accuracy in deriving an optically equivalent aerosol mixture decreases as the diffuse component of the flux increases.
Our analysis also showed that direct component of spectral flux is more sensitive to different aerosol species than total spectral flux which was also supported by our observed data.
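The iterative search described above, adjusting species loadings until the modeled spectrum matches the measurement within 2-3%, has a simple linear toy analogue: if each species contributed a fixed basis spectrum, the mixture could be recovered by non-negative least squares. This is a sketch only; the real procedure runs SBDART/OPAC, which is nonlinear, and every spectrum and loading below is invented.

```python
import numpy as np
from scipy.optimize import nnls

# Toy analogue (not the authors' SBDART/OPAC procedure): treat the measured
# spectral signature as a non-negative combination of per-species basis spectra.
wl = np.linspace(350, 1050, 200)                        # wavelength grid [nm]
soot = np.exp(-(wl - 400) ** 2 / (2 * 80.0 ** 2))       # hypothetical basis spectra
dust = np.exp(-(wl - 900) ** 2 / (2 * 120.0 ** 2))
sulfate = np.full_like(wl, 0.5)
basis = np.column_stack([soot, dust, sulfate])

true_mix = np.array([2.0, 0.5, 1.0])                    # "true" species loadings (invented)
measured = basis @ true_mix                             # stands in for the measured flux

est_mix, _ = nnls(basis, measured)                      # recover the mixture
rel_err = np.abs(basis @ est_mix - measured).max() / measured.max()
```

The 2-3% convergence criterion in the paper corresponds here to requiring the relative spectral residual to fall below that threshold.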
The effect of feed composition on anaerobic co-digestion of animal-processing by-products.
Hidalgo, D; Martín-Marroquín, J M; Corona, F
2018-06-15
Four streams and their mixtures have been considered for anaerobic co-digestion, all of them generated during pig carcasses processing or in related industrial activities: meat flour (MF), process water (PW), pig manure (PM) and glycerin (GL). Biochemical methane potential assays were conducted at 37 °C to evaluate the effects of the substrate mix ratio on methane generation and process behavior. The results show that the co-digestion of these products favors the anaerobic fermentation process when limiting the amount of meat flour in the mixture to co-digest, which should not exceed 10%. The ratio of other tested substrates is less critical, because different mixtures reach similar values of methane generation. The presence in the mixture of process water contributes to a quick start of the digester, something very interesting when operating an industrial reactor. The analysis of the fraction digested reveals that the four analyzed streams can be, a priori, suitable for agronomic valorization once digested. Copyright © 2017. Published by Elsevier Ltd.
General Blending Models for Data From Mixture Experiments
Brown, L.; Donev, A. N.; Bissett, A. C.
2015-01-01
We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
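For context, the classical Scheffé quadratic mixture model that the proposed general blending models extend has the form y = Σ b_i x_i + Σ_{i<j} b_ij x_i x_j with Σ x_i = 1 (no intercept). The sketch below fits that classical model by least squares on an invented simplex-lattice design; the design and coefficients are illustrative, not from the article.

```python
import numpy as np

# Hedged sketch of the classical Scheffé quadratic mixture model (the baseline
# the article generalizes). Design points and coefficients are invented.
X = np.array([                       # {3,2} simplex-lattice design, 3 components
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
], dtype=float)

def scheffe_terms(X):
    """Model matrix: linear blending terms plus pairwise nonlinear blending terms."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

b_true = np.array([10.0, 20.0, 15.0, 8.0, -4.0, 6.0])   # invented coefficients
y = scheffe_terms(X) @ b_true                            # noiseless responses

b_hat, *_ = np.linalg.lstsq(scheffe_terms(X), y, rcond=None)
```

The cross-product coefficients b_ij capture synergistic or antagonistic blending; the article's contribution is a richer family of such terms that stays simple when the data allow.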
Separation processes using expulsion from dilute supercritical solutions
Cochran, Jr., Henry D.
1993-01-01
A process for separating isotopes, as well as other mixtures, that utilizes the behavior of dilute repulsive or weakly attractive components of the mixture as the critical point of the solvent is approached.
A New Model for Simulating Gas Metal Arc Welding based on Phase Field Model
NASA Astrophysics Data System (ADS)
Jiang, Yongyue; Li, Li; Zhao, Zhijiang
2017-11-01
Many physical processes occur in gas metal arc welding (GMAW), such as metal melting, multiphase fluid flow, heat and mass transfer, and the thermocapillary (Marangoni) effect, so the weld pool should be treated as a mixture system. In this paper, building on previous work, we propose a new model to simulate GMAW comprising the Navier-Stokes equations, a phase field model, and an energy equation. Unlike most previous work, we incorporate the thermocapillary effect into the phase field model through a mixture energy, which differs from the volume-of-fluid (VOF) method widely used for GMAW. We also consider gravity, electromagnetic force, surface tension, the buoyancy effect, and arc pressure in the momentum equation. Spray transfer, especially projected transfer, in GMAW is computed as a numerical example with a continuous finite element method and a modified midpoint scheme. A pulsed welding current is used in the numerical example, and the simulated metal transfer fits the theory of GMAW well. Compared with high-speed photography data and the VOF model, the accuracy and stability of the model and scheme are validated, and the new model achieves higher precision.
CFD Analysis of nanofluid forced convection heat transport in laminar flow through a compact pipe
NASA Astrophysics Data System (ADS)
Yu, Kitae; Park, Cheol; Kim, Sedon; Song, Heegun; Jeong, Hyomin
2017-08-01
In the present paper, developing laminar forced convection flows were numerically investigated using a water-Al2O3 nanofluid through a circular compact pipe of 4.5 mm diameter. Each model has a steady state and uniform heat flux (UHF) at the wall. All numerical experiments were carried out at Re = 1050, and the nanofluid models were defined by the alumina volume fraction. Single-phase fluid models were defined through calculations of the nanofluid physical and thermal properties; a two-phase (mixture granular) model was also run with a 100 nm particle diameter. All flow simulations were performed with FLUENT. The results show that the Nusselt number and heat transfer rate improve as the Al2O3 volume fraction increases.
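A single-phase nanofluid model of the kind described above needs effective mixture properties as functions of volume fraction. Two standard closures, used here as an assumption since the abstract does not name its correlations, are Maxwell's model for effective thermal conductivity and Brinkman's model for effective viscosity.

```python
# Hedged sketch: effective nanofluid properties vs. particle volume fraction phi,
# using the Maxwell and Brinkman closures (assumed, not necessarily the paper's).
def k_maxwell(phi, k_f, k_p):
    """Effective thermal conductivity of a dilute particle suspension (Maxwell)."""
    return k_f * (k_p + 2 * k_f + 2 * phi * (k_p - k_f)) / (k_p + 2 * k_f - phi * (k_p - k_f))

def mu_brinkman(phi, mu_f):
    """Effective dynamic viscosity of a dilute suspension (Brinkman)."""
    return mu_f / (1 - phi) ** 2.5

k_water, k_al2o3 = 0.6, 40.0     # W/(m K), approximate room-temperature values
mu_water = 1.0e-3                # Pa s, approximate

k_1pct = k_maxwell(0.01, k_water, k_al2o3)   # conductivity at 1 vol%
k_4pct = k_maxwell(0.04, k_water, k_al2o3)   # conductivity at 4 vol%
```

Rising conductivity with volume fraction is the mechanism behind the reported Nusselt-number improvement, though the accompanying viscosity increase raises pumping cost.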
Energy-Efficient Bioalcohol Recovery by Gel Stripping
NASA Astrophysics Data System (ADS)
Godbole, Rutvik; Ma, Lan; Hedden, Ronald
2014-03-01
Design of energy-efficient processes for recovering butanol and ethanol from dilute fermentations is a key challenge facing the biofuels industry due to the high energy consumption of traditional multi-stage distillation processes. Gel stripping is an alternative purification process by which a dilute alcohol is stripped from the fermentation product by passing it through a packed bed containing particles of a selectively absorbent polymeric gel material. The gel must be selective for the alcohol, while swelling to a reasonable degree in dilute alcohol-water mixtures. To accelerate materials optimization, a combinatorial approach is taken to screen a matrix of copolymer gels having orthogonal gradients in crosslinker concentration and hydrophilicity. Using a combination of swelling in pure solvents, the selectivity and distribution coefficients of alcohols in the gels can be predicted based upon multi-component extensions of Flory-Rehner theory. Predictions can be validated by measuring swelling in water/alcohol mixtures and conducting HPLC analysis of the external liquid. 95%+ removal of butanol from dilute aqueous solutions has been demonstrated, and a mathematical model of the unsteady-state gel stripping process has been developed. NSF CMMI Award 1335082.
Investigating the principles of recrystallization from glyceride melts.
Windbergs, Maike; Strachan, Clare J; Kleinebudde, Peter
2009-01-01
Different lipids were melted and resolidified as model systems to gain deeper insight into the principles of recrystallization processes in lipid-based dosage forms. Solid-state characterization was performed on the samples with differential scanning calorimetry and X-ray powder diffraction. Several recrystallization processes could be identified during storage of the lipid layers. Pure triglycerides, which generally crystallize from the melt to the metastable alpha-form and then recrystallize to the stable beta-form over time, showed chain-length-dependent behavior during storage: with increasing chain length, the recrystallization to the stable beta-form was decelerated. Partial glycerides exhibited a more complex recrystallization behavior because these substances are less homogeneous. Mixtures of a long-chain triglyceride and a partial glyceride showed evidence of some interaction between the two components, as the partial glyceride hindered the recrystallization of the triglyceride to the stable beta-form. In addition, the extent of this phenomenon depended on the amount of partial glyceride in the mixture. Based on these results, changes in glyceride-based solid dosage forms during processing and storage can be better understood.
Lo, Kenneth
2011-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
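The record above combines a Box-Cox transformation with heavy-tailed t mixture components fitted by EM. As a simplified stand-in (not the authors' model, which selects the transformation jointly with multivariate t components), the sketch below Box-Cox-transforms skewed 1-D data and then fits an ordinary normal mixture; the synthetic two-cluster log-normal data are invented.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simplified stand-in for the t-mixture-with-Box-Cox approach: transform first,
# then fit a plain normal mixture. Data: two right-skewed (log-normal) clusters.
x = np.concatenate([rng.lognormal(0.0, 0.3, 500),
                    rng.lognormal(1.5, 0.3, 500)])

xt, lam = stats.boxcox(x)            # ML estimate of the Box-Cox parameter
gm = GaussianMixture(n_components=2, random_state=0).fit(xt.reshape(-1, 1))
labels = gm.predict(xt.reshape(-1, 1))

# Classification accuracy up to label swapping (ground truth is known here).
truth = np.repeat([0, 1], 500)
acc = max((labels == truth).mean(), (labels != truth).mean())
```

The paper's point is that doing the transformation and mixture fit jointly, with t rather than normal components, additionally yields robustness to outliers that this two-step normal-mixture shortcut lacks.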
Lo, Kenneth; Gottardo, Raphael
2012-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
REFLEAK: NIST Leak/Recharge Simulation Program for Refrigerant Mixtures
National Institute of Standards and Technology Data Gateway
SRD 73 NIST REFLEAK: NIST Leak/Recharge Simulation Program for Refrigerant Mixtures (PC database for purchase) REFLEAK estimates composition changes of zeotropic mixtures in leak and recharge processes.
Rivero, Javier; Henríquez-Hernández, Luis Alberto; Luzardo, Octavio P; Pestano, José; Zumbado, Manuel; Boada, Luis D; Valerón, Pilar F
2016-03-30
Organochlorine pesticides (OCs) have been associated with breast cancer development and progression, but the mechanisms underlying this phenomenon are not well known. In this work, we evaluated the effects exerted on normal human mammary epithelial cells (HMEC) by the OC mixtures most frequently detected in healthy women (H-mixture) and in women diagnosed with breast cancer (BC-mixture), as identified in a previous case-control study developed in Spain. Cytotoxicity and gene expression profile of human kinases (n=68) and non-kinases (n=26) were tested at concentrations similar to those described in the serum of those cases and controls. Although both mixtures caused a down-regulation of genes involved in the ATP binding process, our results clearly indicate that both mixtures may exert a very different effect on the gene expression profile of HMEC. Thus, while the BC-mixture up-regulated the expression of oncogenes associated with breast cancer (GFRA1 and BHLHB8), the H-mixture down-regulated the expression of tumor suppressor genes (EPHA4 and EPHB2). Our results indicate that the composition of the OC mixture could play a role in the initiation processes of breast cancer. In addition, the present results suggest that subtle changes in the composition and levels of pollutants involved in environmentally relevant mixtures might induce very different biological effects, which explains, at least partially, why some mixtures seem to be more carcinogenic than others. Nonetheless, our findings confirm that environmentally relevant pollutants may modulate the expression of genes closely related to carcinogenic processes in the breast, reinforcing the role exerted by the environment in the regulation of genes involved in breast carcinogenesis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.
Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei
2018-01-01
Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifiers (UMIs). Despite these technological advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. In particular, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that, overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretation, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available on www.pitt.edu/∼wec47/singlecell.html. Contact: wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online.
Risk assessment of occupational exposure to heavy metal mixtures: a study protocol.
Omrane, Fatma; Gargouri, Imed; Khadhraoui, Moncef; Elleuch, Boubaker; Zmirou-Navier, Denis
2018-03-05
Sfax is a heavily industrialized city in southern Tunisia where heavy metal (HM) pollution is now well established. The health of its residents, mainly those engaged in industrial metal-based activities, is under threat. Indeed, such workers are exposed to a variety of HM mixtures, and this exposure is cumulative. Whereas current HM exposure assessment is mainly carried out using direct air monitoring approaches, the present study aims to assess health risks associated with chronic occupational exposure to HMs in industry, using a modeling approach that will be validated later on. To this end, two questionnaires were used. The first was an identification/descriptive questionnaire aimed at identifying, for each company, the specific activities, materials used, manufactured products, and number of employees exposed. The second related to the job-task of the exposed persons, workplace characteristics (dimensions, ventilation, etc.), type of metals, and emission configuration in space and time. Indoor air HM concentrations were predicted based on the mathematical models generally used to estimate occupational exposure to volatile substances (such as solvents). Later on, in order to validate the adopted model, air monitoring will be carried out, as well as biological monitoring aimed at assessing HM excretion in the urine of workers volunteering to participate. Lastly, an interaction-based hazard index (HIint) and a decision support tool will be used to predict the cumulative risk for HM mixtures. One hundred sixty-one persons working in the five participating companies have been identified. Of these, 110 are directly engaged with HMs in the course of the manufacturing process. This model-based prediction of occupational exposure represents an alternative tool that is both time-saving and cost-effective in comparison with direct air monitoring approaches.
Following validation of the different models according to job processes, via comparison with direct measurements and exploration of correlations with biological monitoring, these estimates will allow a cumulative risk characterization.
2008-03-01
significant role they play in the ecosystem. The cyanobacteria distinguished in the model are the bloom-forming species found in the tidal, freshwater...phytoplankton that produce an annual bloom in the saline portions of the bay and tributaries. Diatoms are distinguished by their requirement of silica as...represent the mixture that characterizes saline waters during summer and autumn and fresh waters year round. Non-bloom-forming diatoms comprise a
Assessment of the Content of Fluorescent Tracer in Granular Feed Mixture.
Matuszek, Dominika B; Wojtkiewicz, Krystian
2018-05-03
Background: This paper describes the use of UV-induced fluorescence to evaluate the share of tracer in a feed mixture. Methods: Three substances were used: Tinopal, Rhodamine B, and Uranine. Tracer in the form of maize or kardi was added to chicken feed before the mixing process. Grains used in the process were ground in a mill with sieve mesh sizes of 4 and 6 mm. The drawn samples of the mixture were illuminated with UV radiation to make the tracer grains fluoresce, and photographs were taken with a digital camera. The acquired images were analyzed with a computer program based on the RGB color model to obtain the percentage share of tracer. Results: It was observed that, in the case of kardi grains, the proposed method gives results deviating significantly from the verification method. Conclusions: Only the tests using maize with an average particle diameter of 2.4 mm, tinted with a Rhodamine B solution, led to acceptable results (consistent with the predetermined verification level).
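The RGB-based share estimate described above can be sketched in a few lines. This is a minimal illustration, not the authors' exact algorithm: the rule that tracer pixels glow red under UV (matching Rhodamine B) and the channel thresholds are illustrative assumptions, and the "image" is a synthetic list of RGB tuples rather than a camera photograph.

```python
# Hedged sketch of an RGB tracer-share estimate. The thresholds and the
# red-dominance rule are assumptions for illustration only.

def tracer_share(pixels, r_min=180, dominance=1.5):
    """Percentage of pixels classified as fluorescent tracer.

    pixels: iterable of (R, G, B) tuples in 0-255. A pixel counts as
    tracer if its red channel is bright (>= r_min) and clearly
    dominates the green and blue channels.
    """
    tracer = total = 0
    for r, g, b in pixels:
        total += 1
        if r >= r_min and r >= dominance * max(g, b, 1):
            tracer += 1
    return 100.0 * tracer / total if total else 0.0

# Tiny synthetic "photo": 3 glowing tracer pixels out of 10.
image = [(220, 40, 60)] * 3 + [(70, 60, 50)] * 7
print(round(tracer_share(image), 1))  # 30.0
```

In practice the pixel list would come from decoding the digital photograph; the classification rule would be calibrated against the verification method mentioned in the abstract.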
Nonnormality and Divergence in Posttreatment Alcohol Use
Witkiewitz, Katie; van der Maas, Han L. J.; Hufford, Michael R.; Marlatt, G. Alan
2007-01-01
Alcohol lapses are the modal outcome following treatment for alcohol use disorders, yet many alcohol researchers have encountered limited success in the prediction and prevention of relapse. One hypothesis is that lapses are unpredictable; another possibility is that the complexity of the relapse process is not captured by traditional statistical methods. Data from Project Matching Alcohol Treatments to Client Heterogeneity (Project MATCH), a multisite alcohol treatment study, were reanalyzed with two statistical methodologies: catastrophe and two-part growth mixture modeling. Drawing on previous investigations of self-efficacy as a dynamic predictor of relapse, the current study revisits the self-efficacy matching hypothesis, which was not statistically supported in Project MATCH. Results from both the catastrophe and growth mixture analyses demonstrated a dynamic relationship between self-efficacy and drinking outcomes. The growth mixture analyses provided evidence in support of the original matching hypothesis: Individuals with lower self-efficacy who received cognitive behavior therapy drank far less frequently than did those with low self-efficacy who received motivational therapy. These results highlight the dynamical nature of the relapse process and the importance of the use of methodologies that accommodate this complexity when evaluating treatment outcomes. PMID:17516769
Bayesian nonparametric regression with varying residual density
Pati, Debdeep; Dunson, David B.
2013-01-01
We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking (PSB) scale mixtures and symmetrized PSB (sPSB) location-scale mixtures. Both priors restrict the residual density to be symmetric about zero, with the sPSB prior more flexible in allowing multimodal densities. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function under the sPSB prior, generalizing existing theory focused on parametric residual distributions. The PSB and sPSB priors are generalized to allow residual densities to change nonparametrically with predictors through incorporating Gaussian processes in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally-adaptive manner. Posterior computation relies on an efficient data augmentation exact block Gibbs sampler. The methods are illustrated using simulated and real data applications. PMID:24465053
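The probit stick-breaking priors above rest on the standard stick-breaking construction: each weight is a fraction (here, the probit of a Gaussian variable) of the stick left over by the earlier components. The following is a minimal sketch of that construction under assumed choices, a truncation level K and independent N(0,1) stick variables, not the paper's full predictor-dependent prior.

```python
# Truncated probit stick-breaking sketch: w_k = Phi(a_k) * prod_{j<k}(1 - Phi(a_j)).
# K and the N(0,1) stick variables are illustrative assumptions.
import math
import random

def probit(x):
    """Standard normal CDF, Phi(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def psb_weights(K=20, seed=0):
    """Draw K mixture weights from a truncated probit stick-breaking process."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(K - 1):
        v = probit(rng.gauss(0.0, 1.0))  # fraction broken off the remaining stick
        weights.append(remaining * v)
        remaining *= (1.0 - v)
    weights.append(remaining)            # last component takes what is left
    return weights

w = psb_weights()
print(abs(sum(w) - 1.0) < 1e-12)  # True: the weights sum to one by construction
```

In the predictor-dependent extension described in the abstract, each stick variable would be a Gaussian process evaluated at the predictor value rather than an independent draw.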
Mixed-up trees: the structure of phylogenetic mixtures.
Matsen, Frederick A; Mossel, Elchanan; Steel, Mike
2008-05-01
In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how "common" non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning "mixed branch repulsion" on trees larger than quartet trees under the CFN model.
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models
Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
Modeling of non-thermal plasma in flammable gas mixtures
NASA Astrophysics Data System (ADS)
Napartovich, A. P.; Kochetov, I. V.; Leonov, S. B.
2008-07-01
An idea of using plasma-assisted methods of fuel ignition is based on non-equilibrium generation of chemically active species that speed up the combustion process. It is believed that the gain in energy consumed for combustion acceleration by plasmas is due to the non-equilibrium nature of discharge plasma, which allows radicals to be produced in above-equilibrium amounts. Evidently, the size of the effect is strongly dependent on the initial temperature, pressure, and composition of the mixture. Of particular interest is a comparison between thermal ignition of a fuel-air mixture and non-thermal plasma initiation of the combustion. Mechanisms of thermal ignition in various fuel-air mixtures have been studied for years, and a number of different mechanisms are known that agree with experiments at various conditions. The problem is how to reconcile the thermal chemistry approach with an essentially non-equilibrium plasma description. The electric discharge produces well above-equilibrium amounts of chemically active species: atoms, radicals, and ions. The point is that despite the excess concentrations of a number of species, the total concentration of these species is far below the concentrations of the initial gas mixture. Therefore, rate coefficients for reactions of these discharge-produced species with other gas mixture components are well-known quantities controlled by the translational temperature, which can be calculated from the energy balance equation taking into account numerous processes initiated by plasma. A numerical model was developed combining the traditional approach of thermal combustion chemistry with an advanced description of the plasma kinetics based on solution of the electron Boltzmann equation. This approach allows us to describe self-consistently a strongly non-equilibrium electric discharge in a chemically unstable (ignited) gas.
Equations of pseudo-one-dimensional gas dynamics were solved in parallel with a system of thermal chemistry equations, kinetic equations for charged particles (electrons, positive and negative ions), and the electric circuit equation. The electric circuit comprises the power supply, a ballast resistor connected in series with the discharge, and a capacitor. Rate coefficients for electron-assisted reactions were calculated by solving the two-term spherical-harmonic expansion of the Boltzmann equation. Such an approach allows us to describe the influence of thermal chemistry reactions (burning) on the discharge characteristics. Results of a comparison between the discharge and thermal ignition effects for mixtures of hydrogen or ethylene with dry air will be reported. Effects of acceleration of ignition by discharge plasma will be analyzed. In particular, the role of singlet oxygen, produced effectively in the discharge, in speeding up ignition will be discussed.
Thresholding functional connectomes by means of mixture modeling.
Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F
2018-05-01
Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered as a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations remains an open research problem though. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose to use alternative thresholding strategy based on the model fit using pseudo-False Discovery Rates derived on the basis of the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to the alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project. 
The sparse connectomes obtained from mixture modeling are further discussed in the light of previous knowledge of the functional architecture of the visual system in humans. We also demonstrate that, using our method, we are able to extract similar information on the group level as can be achieved with permutation testing, even though the two methods are not equivalent. With both methods, we obtain functional decoupling between the two hemispheres in the higher-order areas of the visual cortex during visual stimulation as compared to the resting state, which is in line with previous studies suggesting lateralization in visual processing. However, as opposed to permutation testing, our approach does not require inference at the cohort level and can be used for creating sparse connectomes at the level of a single subject.
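The thresholding idea above can be sketched with a two-component one-dimensional Gaussian mixture: a null component of weak edge weights near zero plus a signal component of reliable connections, fitted by EM, with edges kept when their posterior probability of belonging to the signal component is high. The synthetic data, the crude initialisation, and the 0.9 posterior cutoff are illustrative assumptions, not the paper's exact procedure (which uses pseudo-False Discovery Rates on the empirical null).

```python
# Minimal EM sketch for mixture-model thresholding of edge weights.
# Data, initialisation, and the 0.9 cutoff are illustrative assumptions.
import math
import random

def gauss_pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

def em_two_gauss(x, iters=200):
    """EM for a two-component 1-D Gaussian mixture; returns (pi, mu, var)."""
    xs = sorted(x)
    mu = [xs[len(xs) // 4], xs[3 * len(xs) // 4]]   # crude quartile init
    m = sum(x) / len(x)
    var = [sum((xi - m) ** 2 for xi in x) / len(x)] * 2
    pi = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for xi in x:                                 # E-step: responsibilities
            p = [pi[k] * gauss_pdf(xi, mu[k], var[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        for k in (0, 1):                             # M-step: update parameters
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2
                         for r, xi in zip(resp, x)) / nk + 1e-9
    return pi, mu, var

rng = random.Random(0)
edges = [rng.gauss(0.0, 0.08) for _ in range(400)]   # null: weak/unreliable edges
edges += [rng.gauss(0.5, 0.08) for _ in range(100)]  # signal: real connections
pi, mu, var = em_two_gauss(edges)
hi = max(range(2), key=lambda k: mu[k])              # the high-mean component
kept = [e for e in edges
        if pi[hi] * gauss_pdf(e, mu[hi], var[hi]) /
           sum(pi[k] * gauss_pdf(e, mu[k], var[k]) for k in (0, 1)) > 0.9]
print(len(kept))  # roughly the 100 planted connections survive the threshold
```

The per-subject connectome would supply the edge weights in place of the synthetic draws, which is what makes the resulting sparse connectome subject-specific.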
New approach in direct-simulation of gas mixtures
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren
1991-01-01
Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.
Investigation of Dalton and Amagat's laws for gas mixtures with shock propagation
NASA Astrophysics Data System (ADS)
Wayne, Patrick; Trueba Monje, Ignacio; Yoo, Jason H.; Truman, C. Randall; Vorobieff, Peter
2016-11-01
Two common models describing gas mixtures are Dalton's Law and Amagat's Law (the laws of partial pressures and partial volumes, respectively). Our work is focused on determining the suitability of these models for predicting the effects of shock propagation through gas mixtures. Experiments are conducted at the Shock Tube Facility at the University of New Mexico (UNM). To validate the experimental data, possible sources of uncertainty associated with the experimental setup are identified and analyzed. The gaseous mixture of interest consists of a prescribed combination of two disparate gases: helium and sulfur hexafluoride (SF6). The equations of state (EOS) considered are the ideal gas EOS for helium, and a virial EOS for SF6. The property values provided by these EOS are then used to model shock propagation through the mixture in accordance with Dalton's and Amagat's laws. Results of the modeling are compared with experiment to determine which law produces better agreement for the mixture. This work is funded by NNSA Grant DE-NA0002913.
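The distinction between the two laws can be sketched numerically. For ideal gases, Dalton's sum of partial pressures and Amagat's sum of partial volumes give identical mixture pressures; a difference appears once a non-ideal EOS is used for one component, as for SF6 here. The truncated virial EOS and the SF6 second virial coefficient below are illustrative assumptions, not the study's fitted values.

```python
# Hedged sketch: Dalton vs. Amagat mixture pressure for He (ideal) and
# SF6 (truncated virial EOS). The B value for SF6 is a rough assumption.
R = 8.314          # J/(mol K)
T = 298.0          # K
V = 0.05           # m^3, total mixture volume
n_he, n_sf6 = 1.0, 1.0        # mol of each component
B_he, B_sf6 = 0.0, -2.8e-4    # m^3/mol; B_sf6 is an assumed, approximate figure

def pressure(n, vol, B):
    """Truncated virial EOS: p = (nRT/V) * (1 + B*n/V); B=0 is ideal."""
    return (n * R * T / vol) * (1.0 + B * n / vol)

def volume(n, P, B):
    """Invert the EOS for V at fixed (n, P, T) by bisection."""
    lo, hi = 1e-3, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if pressure(n, mid, B) > P:
            lo = mid          # too compressed: pressure too high, grow V
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Dalton: each gas fills the whole volume V; the total pressure is the sum.
p_dalton = pressure(n_he, V, B_he) + pressure(n_sf6, V, B_sf6)

# Amagat: find the P at which the component volumes at (P, T) sum to V.
lo, hi = 1.0, 1e7
for _ in range(200):
    P = 0.5 * (lo + hi)
    if volume(n_he, P, B_he) + volume(n_sf6, P, B_sf6) > V:
        lo = P                # total volume too large: raise the pressure
    else:
        hi = P
p_amagat = 0.5 * (lo + hi)

print(round(p_dalton), round(p_amagat))  # close, but not identical
```

With B_sf6 = 0 the two results coincide exactly, which is the ideal-gas degeneracy the shock-tube experiments are designed to break.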
Model of experts for decision support in the diagnosis of leukemia patients.
Corchado, Juan M; De Paz, Juan F; Rodríguez, Sara; Bajo, Javier
2009-07-01
Recent advances in the field of biomedicine, specifically in the field of genomics, have led to an increase in the information available for conducting expression analysis. Expression analysis is a technique used in transcriptomics, a branch of genomics that deals with the study of messenger ribonucleic acid (mRNA) and the extraction of the information contained in the genes. This increase in information is reflected in the exon arrays, which require the use of new techniques in order to extract the information. The purpose of this study is to provide a tool based on a mixture of experts model that allows the analysis of the information contained in the exon arrays, from which automatic classifications for decision support in diagnoses of leukemia patients can be made. The proposed model integrates several cooperative algorithms characterized by their efficiency for data processing, filtering, classification and knowledge extraction. The Cancer Institute of the University of Salamanca is making an effort to develop tools to automate the evaluation of data and to facilitate the analysis of information. This proposal is a step forward in this direction and the first step toward the development of a mixture of experts tool that integrates different cognitive and statistical approaches to deal with the analysis of exon arrays. The mixture of experts model presented within this work provides great capacity for learning and adaptation to the characteristics of the problem in consideration, using novel algorithms in each of the stages of the analysis process that can be easily configured and combined, and provides results that notably improve those provided by the existing methods for exon array analysis. The material used consists of data from exon arrays provided by the Cancer Institute that contain samples from leukemia patients. The methodology used consists of a system based on a mixture of experts.
Each one of the experts incorporates novel artificial intelligence techniques that improve the process of carrying out various tasks such as pre-processing, filtering, classification and extraction of knowledge. This article will detail the manner in which individual experts are combined so that together they generate a system capable of extracting knowledge, thus permitting patients to be classified in an automatic and efficient manner that is also comprehensible for medical personnel. The system has been tested in a real setting and has been used for classifying patients who suffer from different forms of leukemia at various stages. Personnel from the Cancer Institute supervised and participated throughout the testing period. Preliminary results are promising, notably improving the results obtained with previously used tools. The medical staff from the Cancer Institute considers the tools that have been developed to be positive and very useful in a supporting capacity for carrying out their daily tasks. Additionally the mixture of experts supplies a tool for the extraction of necessary information in order to explain the associations that have been made in simple terms. That is, it permits the extraction of knowledge for each classification made and generalized in order to be used in subsequent classifications. This allows for a large amount of learning and adaptation within the proposed system.
Lawson, Andrew B; Choi, Jungsoon; Cai, Bo; Hossain, Monir; Kirby, Russell S; Liu, Jihong
2012-09-01
We develop a new Bayesian two-stage space-time mixture model to investigate the effects of air pollution on asthma. The proposed two-stage mixture model allows for the identification of temporal latent structure as well as the estimation of the effects of covariates on health outcomes. In the paper, we also consider spatial misalignment of exposure and health data. A simulation study is conducted to assess the performance of the two-stage mixture model. We apply our statistical framework to a county-level ambulatory care asthma data set in the US state of Georgia for the years 1999-2008.
Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.
Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun
2015-03-13
In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and to investigate extensions of the statistical models, a factorial design was employed to identify the relative significance of the primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A), on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and to carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.
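The run layout described above (fractional factorial points in coded units, axial points at the extremes, and replicated center points) is a central composite design, which can be generated generically. Note the study used eight axial runs rather than the textbook 2k = 10, presumably because one variable (binder type) is categorical; the sketch below follows the textbook layout, and the defining relation chosen for the half fraction is an assumption.

```python
# Generic half-fraction central composite design in coded units.
# The defining relation (last factor = product of the others) is an
# illustrative choice, not necessarily the one used in the study.
from itertools import product
from math import prod

def central_composite(k=5, alpha=2.0, n_center=4):
    """Return 2^(k-1) factorial points, 2k axial points at +/-alpha,
    and n_center replicated center points, as coded-unit vectors."""
    factorial = [list(base) + [prod(base)]
                 for base in product((-1, 1), repeat=k - 1)]
    axial = []
    for j in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[j] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

design = central_composite()
print(len(design))  # 16 factorial + 10 axial + 4 center = 30 runs
```

Each coded vector would then be mapped back to physical values of binder content, VMA dosage, w/cm, and S/A before batching the trial mixtures.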
Some comments on thermodynamic consistency for equilibrium mixture equations of state
Grove, John W.
2018-03-28
We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.
Liao, Lifu; Yang, Jing; Yuan, Jintao
2007-05-15
A new spectrophotometric titration method coupled with chemometrics for the simultaneous determination of mixtures of weak acids has been developed. In this method, the titrant is a mixture of sodium hydroxide and an acid-base indicator, and the indicator is used to monitor the titration process. During titration, both the added volume of titrant and the solution acidity at each titration point can be obtained simultaneously from an absorption spectrum by a least-squares algorithm, and then the concentration of each component in the mixture can be obtained from the titration curves by principal component regression. The method needs only the information in the absorbance spectra to obtain the analytical results, and is free of volumetric measurements. The analyses are independent of the titration end point and do not need accurate values of the dissociation constants of the indicator and the acids. The method has been applied to the simultaneous determination of mixtures of benzoic acid and salicylic acid, and mixtures of phenol, o-chlorophenol and p-chlorophenol, with satisfactory results.
Separation processes using expulsion from dilute supercritical solutions
Cochran, H.D. Jr.
1993-04-20
A process is described for separating isotopes as well as other mixtures by utilizing the behavior of dilute repulsive or weakly attractive elements of the mixtures as the critical point of the solvent is approached.
Phenol removal pretreatment process
Hames, Bonnie R.
2004-04-13
A process for removing phenols from an aqueous solution is provided, which comprises the steps of contacting a mixture comprising the solution and a metal oxide, forming a phenol metal oxide complex, and removing the complex from the mixture.
Estimating occupancy and abundance using aerial images with imperfect detection
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.
2017-01-01
Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data of sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated detection probability of sea otters to be 0.76, the same as visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
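The core of the binomial N-mixture framework above is a likelihood that sums out the latent abundance at each site. A minimal sketch under assumed inputs: the replicate counts, the Poisson truncation bound, and the grid-search fitting are all illustrative, not the authors' data or estimation machinery.

```python
# Minimal binomial N-mixture likelihood with a coarse grid search.
# The counts and grid are hypothetical; real analyses use numerical
# optimisers and covariates on lambda and p.
import math

def nmix_loglik(counts, lam, p, n_max=60):
    """Log-likelihood of a binomial N-mixture model.

    counts[i] is the tuple of replicate counts at site i (one count per
    image); latent abundance N_i ~ Poisson(lam), and each replicate
    count is Binomial(N_i, p). N_i is summed out up to n_max."""
    ll = 0.0
    for y in counts:
        site = 0.0
        for n in range(max(y), n_max + 1):
            lik = math.exp(-lam) * lam ** n / math.factorial(n)  # Poisson(n)
            for yt in y:
                lik *= math.comb(n, yt) * p ** yt * (1.0 - p) ** (n - yt)
            site += lik
        ll += math.log(site)
    return ll

# Hypothetical replicate counts from pairs of overlapping images at 6 sites.
counts = [(4, 5), (3, 4), (6, 5), (2, 3), (5, 5), (4, 3)]

# Coarse grid search for the maximum-likelihood (lambda, p).
grid = [(lam, p) for lam in range(1, 16) for p in [i / 20 for i in range(1, 20)]]
lam_hat, p_hat = max(grid, key=lambda g: nmix_loglik(counts, g[0], g[1]))
print(lam_hat, p_hat)
```

The temporal replication that makes this likelihood identifiable is exactly what the overlapping aerial images supply: several counts of the same latent N_i at different instants.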
Archambeau, Cédric; Verleysen, Michel
2007-01-01
A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.
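The latent-variable view of the Student-t used above has a compact generative form: a Student-t draw is a Gaussian draw divided by the square root of an independent chi-square scale variable. The sketch below illustrates only that scale-mixture representation and the heavier tails it produces; the degrees of freedom and sample size are arbitrary choices, and the variational machinery of the paper is not reproduced.

```python
# Student-t as a Gaussian scale mixture: t = z / sqrt(u/df),
# z ~ N(0,1), u ~ chi-square(df) (Gamma(df/2, scale=2)).
import math
import random

def t_sample(df, rng):
    """One Student-t(df) draw via the Gaussian scale-mixture construction."""
    z = rng.gauss(0.0, 1.0)
    u = rng.gammavariate(df / 2.0, 2.0)  # chi-square with df degrees of freedom
    return z / math.sqrt(u / df)

rng = random.Random(1)
draws = [t_sample(3, rng) for _ in range(20000)]
# Heavier tails than a Gaussian: far more |x| > 3 exceedances
# (about 0.06 for df=3 versus about 0.003 for N(0,1)).
tail = sum(abs(x) > 3 for x in draws) / len(draws)
print(tail > 0.01)  # True
```

It is this extra scale variable per observation that lets a t-mixture down-weight outliers, and whose posterior correlations with the indicator variables the paper refuses to factorize away.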
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-01-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
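The LOD score described above is the log10 likelihood ratio of a two-component normal mixture (with per-individual mixing weights given by genotype probabilities) against a single normal. A crude sketch, assuming a backcross with an informative nearby marker; the genotype weights, sample size, and short EM loop are illustrative, not a full interval-mapping engine:

```python
import numpy as np
from scipy.stats import norm

def lod_score(y, w, iters=100):
    """LOD at a putative QTL: log10 likelihood ratio of a two-component
    normal mixture (weights w = P(genotype | markers)) vs. a single normal."""
    mu = np.array([y.mean() - y.std(), y.mean() + y.std()])
    sig = y.std()
    for _ in range(iters):                    # EM under fixed weights w
        dens = np.vstack([(1 - w) * norm.pdf(y, mu[0], sig),
                          w * norm.pdf(y, mu[1], sig)])
        r = dens / dens.sum(axis=0)
        mu = (r * y).sum(axis=1) / r.sum(axis=1)
        sig = np.sqrt((r * (y - mu[:, None]) ** 2).sum() / y.size)
    mix = (1 - w) * norm.pdf(y, mu[0], sig) + w * norm.pdf(y, mu[1], sig)
    l1 = np.log10(mix).sum()
    l0 = np.log10(norm.pdf(y, y.mean(), y.std())).sum()
    return l1 - l0

rng = np.random.default_rng(11)
g = rng.integers(0, 2, 200)                   # latent QTL genotypes
y = 1.0 * g + rng.normal(0, 1, 200)           # phenotypes with a QTL effect
w = np.where(g == 1, 0.9, 0.1)                # genotype probs from a nearby marker
lod = lod_score(y, w)
```

The spurious-peak problem arises when `w` is nearly uninformative (e.g., 0.5 everywhere between widely spaced markers): the free mixture can still out-fit the single normal for skewed phenotypes even with no QTL present.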
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.
Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported, since they have just as many design points as the number of parameters; thus, they lack the degrees of freedom to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs lie on the boundary of the design simplex: vertices, edges, or faces. To address these limitations, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.
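For a quadratic Scheffé model in three components, the minimally supported {3,2} simplex-lattice design is saturated (six points, six parameters), and its D-criterion can be computed directly. The sketch below also augments the design with the overall centroid, an interior point of the kind the proposed strategy adds; the specific augmentation point is an illustrative choice, not the paper's method:

```python
import numpy as np
from itertools import combinations

def scheffe_quadratic(X):
    """Model matrix for Scheffe's quadratic mixture model in q components:
    columns x_i plus all pairwise products x_i * x_j."""
    q = X.shape[1]
    cross = [X[:, i] * X[:, j] for i, j in combinations(range(q), 2)]
    return np.column_stack([X] + cross)

# Minimally supported design: 3 vertices + 3 edge midpoints of the simplex
pts = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                [.5, .5, 0], [.5, 0, .5], [0, .5, .5]], float)
F = scheffe_quadratic(pts)
d_crit = np.linalg.det(F.T @ F)        # D-criterion: det of information matrix

# Adding an interior point (here, the centroid) buys a residual degree of
# freedom for lack-of-fit testing and strictly increases the information
aug = np.vstack([pts, [1 / 3, 1 / 3, 1 / 3]])
G = scheffe_quadratic(aug)
d_aug = np.linalg.det(G.T @ G)
```

Because the information matrix of the augmented design is F'F plus a rank-one update, its determinant can only grow, so interior points never cost D-efficiency in absolute terms (though per-point efficiency may drop).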
NASA Astrophysics Data System (ADS)
Hashib, S. Abd; Rosli, H.; Suzihaque, M. U. H.; Zaki, N. A. Md; Ibrahim, U. K.
2017-06-01
This study evaluates the ability of a spray dryer to produce full cream milk powder at different inlet temperatures and the effectiveness of an empirical model in interpreting the drying process data. A lab-scale spray dryer was used to dry full cream milk into powder at inlet temperatures from 100 to 160°C with a constant pump speed of 4 rpm. The Peleg empirical model was chosen to represent the drying data as a mathematical equation. Specifically, the research was carried out to determine the equilibrium moisture content of full cream milk powder at various inlet temperatures and to evaluate how well the Peleg empirical model equation describes the moisture sorption curves for full cream milk. Two conditions were set for the experiments: in the first condition (C1), the milk powder was further dried in an oven at 98°C to 100°C, while in the second condition (C2) the milk powder was mixed with different salt solutions: magnesium chloride (MgCl), potassium nitrite (KNO2), sodium nitrite (NaNO2), and ammonium sulfate ((NH4)2SO4). For C1, the optimum temperature was 160°C, with an equilibrium moisture content of 3.16 (weight, dry basis) and the slowest sorption rate (dM/dt) of 0.0743 (weight, dry basis)/hr. For C2, the best temperature for the mixture of dry samples with MgCl was 115°C, with an equilibrium moisture content of -78.079 (weight, dry basis) and a sorption rate of 0.01 (weight, dry basis)/hr. The best temperature for the mixture of milk powder with KNO2 was also 115°C, with an equilibrium moisture content of -83.9645 (weight, dry basis) and a sorption rate of 0.0008 (weight, dry basis)/hr. For the mixture of dry samples with NaNO2, the best temperature was 160°C, with an equilibrium moisture content of 84.1306 (weight, dry basis) and a sorption rate of 0.0013 (weight, dry basis)/hr. Lastly, for the mixture of dry samples with (NH4)2SO4, the best temperature was 115°C, with an equilibrium moisture content of -83.8778 (weight, dry basis) and a sorption rate of 0.0021 (weight, dry basis)/hr. The best temperature was selected based on the lowest moisture content formed and the slowest sorption rate.
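The Peleg model used above is M(t) = M0 + t / (k1 + k2·t), so the equilibrium moisture content is M0 + 1/k2 as t → ∞ and the initial sorption rate is 1/k1. A sketch of fitting it with synthetic sorption data (parameter values are illustrative, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def peleg(t, m0, k1, k2):
    """Peleg model: M(t) = M0 + t / (k1 + k2 * t)."""
    return m0 + t / (k1 + k2 * t)

# Synthetic sorption data; true parameters (2.0, 4.0, 0.25) are illustrative
t = np.linspace(0.1, 50, 40)                 # time, hr
m_obs = peleg(t, 2.0, 4.0, 0.25) \
    + np.random.default_rng(0).normal(0, 0.02, t.size)

(m0, k1, k2), _ = curve_fit(peleg, t, m_obs, p0=[1.0, 1.0, 0.1],
                            bounds=(0, np.inf))

me = m0 + 1.0 / k2        # equilibrium moisture content as t -> infinity
rate0 = 1.0 / k1          # initial sorption rate, dM/dt at t = 0
```

Fitting k1 and k2 per inlet temperature and comparing the derived Me and dM/dt values is the comparison the study performs across conditions C1 and C2.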
NASA Astrophysics Data System (ADS)
Genovese, Mariangela; Napoli, Ettore
2013-05-01
The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and impossible to run on a general-purpose CPU under real-time constraints. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances also result from the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard-cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
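A simplified per-pixel sketch of the Stauffer-Grimson style GMM update that underlies this kind of background segmentation may help make the per-frame arithmetic concrete; the OpenCV version adds refinements (adaptive component counts, shadow detection), and all parameter values here are illustrative:

```python
import numpy as np

class PixelGMM:
    """Per-pixel Stauffer-Grimson style background GMM (simplified sketch)."""

    def __init__(self, k=3, alpha=0.05, match_sigma=2.5, bg_thresh=0.7):
        self.w = np.full(k, 1.0 / k)    # component weights
        self.mu = np.zeros(k)           # component means
        self.var = np.full(k, 900.0)    # component variances
        self.alpha, self.ms, self.T = alpha, match_sigma, bg_thresh

    def update(self, x):
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        if d.min() < self.ms:                       # matched an existing mode
            j = int(np.argmin(d))
            m = np.zeros_like(self.w)
            m[j] = 1.0
            self.w = (1 - self.alpha) * self.w + self.alpha * m
            self.mu[j] += self.alpha * (x - self.mu[j])
            self.var[j] += self.alpha * ((x - self.mu[j]) ** 2 - self.var[j])
            self.var[j] = max(self.var[j], 4.0)     # variance floor
        else:                                       # replace weakest component
            j = int(np.argmin(self.w))
            self.mu[j], self.var[j], self.w[j] = x, 900.0, 0.05
        self.w /= self.w.sum()
        return self.is_foreground(x)

    def is_foreground(self, x):
        # Background = highest-ranked modes by w / sigma up to total weight T
        order = np.argsort(-self.w / np.sqrt(self.var))
        cum, bg = 0.0, []
        for j in order:
            bg.append(j)
            cum += self.w[j]
            if cum > self.T:
                break
        return not any(abs(x - self.mu[j]) < self.ms * np.sqrt(self.var[j])
                       for j in bg)

pix = PixelGMM()
for _ in range(300):                 # a static background at intensity 100
    pix.update(100.0)
moving = pix.is_foreground(200.0)    # a sudden bright object
static = pix.is_foreground(100.0)
```

Every pixel of a 1080p frame runs this update 60 times per second, which is why the paper's hardware-level optimization of these few multiplies, divides, and compares pays off.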
Automatic Control of the Concrete Mixture Homogeneity in Cycling Mixers
NASA Astrophysics Data System (ADS)
Anatoly Fedorovich, Tikhonov; Drozdov, Anatoly
2018-03-01
The article describes the factors affecting concrete mixture quality related to the moisture content of aggregates, since the effectiveness of concrete mixture production is largely determined by the availability of quality management tools at all stages of the technological process. It is established that unaccounted-for moisture in aggregates adversely affects the concrete mixture homogeneity and, accordingly, the strength of building structures. A new control method and an automatic control system for the concrete mixture homogeneity in the technological process of mixing components are proposed, in which the kneading-and-mixing machinery is governed by an automatic control system with operational monitoring of homogeneity. Theoretical underpinnings of the control of mixture homogeneity are presented, which relate homogeneity to a change in the frequency of vibrodynamic oscillations of the mixer body. The structure of the technical means of the automatic control system for regulating the supply of water is determined depending on the change in the concrete mixture homogeneity during the continuous mixing of components. The following technical means for establishing automatic control have been chosen: vibro-acoustic sensors, remote terminal units, electropneumatic control actuators, etc. To identify the quality indicator of automatic control, a structural flowchart with transfer functions that determine the ACS operation in the transient dynamic mode is offered.
Mixture of autoregressive modeling orders and its implication on single trial EEG classification
Atyabi, Adham; Shic, Frederick; Naples, Adam
2016-01-01
Autoregressive (AR) models are among the most commonly utilized feature types in Electroencephalogram (EEG) studies, since they offer better resolution, smoother spectra, and applicability to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders poorly represent the signal, while higher orders increase noise. Conventional methods for estimating the modeling order include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond to the operator's thoughts more quickly and accurately. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional approaches utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC, and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3, and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
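A minimal sketch of the conventional baseline the article benchmarks against: fit AR(p) by least squares and select the order minimizing AIC. The simulated AR(2) signal and the search range are illustrative, not taken from the EEG datasets:

```python
import numpy as np

def ar_aic(x, p):
    """Fit AR(p) by least squares; return (AIC, coefficients)."""
    n = len(x)
    # Column j holds the lag-(j+1) regressor x[t-1-j] for t = p .. n-1
    X = np.column_stack([x[p - 1 - j : n - 1 - j] for j in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    m = len(y)
    return m * np.log(rss / m) + 2 * (p + 1), coef

rng = np.random.default_rng(42)
e = rng.normal(0, 1, 2100)
x = np.zeros(2100)
for t in range(2, 2100):                      # simulate an AR(2) signal
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]
x = x[100:]                                   # drop burn-in

aics = {p: ar_aic(x, p)[0] for p in range(1, 11)}
best_p = min(aics, key=aics.get)
```

The article's point is that on real EEG no single `best_p` of this kind captures the signal as well as a mixture of orders; criteria such as AIC merely trade the bias of underfitting against the noise of overfitting.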
Modelling stock order flows with non-homogeneous intensities from high-frequency data
NASA Astrophysics Data System (ADS)
Gorshenin, Andrey K.; Korolev, Victor Yu.; Zeifman, Alexander I.; Shorgin, Sergey Ya.; Chertok, Andrey V.; Evstafyev, Artem I.; Korchagin, Alexander Yu.
2013-10-01
A micro-scale model is proposed for the evolution of an information system such as the limit order book in financial markets. Within this model, the flows of orders (claims) are described by doubly stochastic Poisson processes, taking account of the stochastic character of the intensities of buy and sell orders that determine the price discovery mechanism. The proposed multiplicative model of stochastic intensities makes it possible to analyze the characteristics of the order flows as well as the instantaneous proportion of the forces of buyers and sellers, that is, the imbalance process, without modelling the external information background. The proposed model also makes it possible to link the micro-scale (high-frequency) dynamics of the limit order book with macro-scale models of stock price processes of the form of subordinated Wiener processes by means of limit theorems of probability theory and, hence, to use the normal variance-mean mixture models of the corresponding heavy-tailed distributions. The approach can be useful in different areas with similar properties (e.g., in plasma physics).
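A doubly stochastic Poisson (Cox) order flow can be sketched by Lewis-Shedler thinning against an upper bound on the random intensity. The intensity form below, a random-amplitude seasonal pattern, is an illustrative assumption, not the paper's multiplicative model:

```python
import numpy as np

def simulate_cox(intensity, lam_max, horizon, rng):
    """Simulate a doubly stochastic Poisson (Cox) process on [0, horizon]
    by thinning against the bound lam_max >= intensity(t)."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate from rate-lam_max process
        if t > horizon:
            break
        if rng.random() < intensity(t) / lam_max:
            events.append(t)                  # accepted order arrival
    return np.array(events)

rng = np.random.default_rng(1)
amp = rng.uniform(0.5, 1.5)                   # random environment (illustrative)
lam = lambda t: 50.0 * amp * (1.0 + 0.5 * np.sin(2 * np.pi * t))
orders = simulate_cox(lam, lam_max=50.0 * 1.5 * 1.5, horizon=10.0, rng=rng)
```

Conditioning on the random intensity path and letting the cumulative intensity act as a random clock is what connects such counting processes to subordinated Wiener models and, in the limit, to normal variance-mean mixtures.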
NASA Astrophysics Data System (ADS)
Musakaev, N. G.; Khasanov, M. K.; Rafikova, G. R.
2018-03-01
The problem of the replacement of methane in its hydrate by carbon dioxide in a porous medium is considered. A gas-exchange kinetics scheme is proposed in which the intensity of the process is limited by the diffusion of CO2 through the hydrate layer formed between the gas mixture flow and the CH4 hydrate. The dynamics of the main parameters of the process are numerically investigated, and the main characteristic stages of the process are determined.
Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun
2017-03-01
In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate the combined toxicity of these substances when they occur in a mixture. The values of effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; concentration combinations of the three different UV-filters in a mixture were determined by the fraction of components based on EC25 values predicted by the concentration addition (CA) model. The interaction between the UV-filters was also assessed by the model deviation ratio (MDR), using observed and predicted toxicity values obtained from mixture-exposure tests and the CA model. The results indicated that observed ECx,mix (e.g., EC10,mix, EC25,mix, or EC50,mix) values obtained from mixture-exposure tests were higher than the predicted ECx,mix values calculated by the CA model. MDR values were also less than a factor of 1.0 in mixtures of the three different UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. Our findings will provide important information for hazard or risk assessment of organic UV-filters when they exist together in the aquatic environment. To better understand the mixture toxicity and the interaction of components in a mixture, further studies of various combinations of mixture components are also required. Copyright © 2016 Elsevier Inc. All rights reserved.
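Under concentration addition, the predicted mixture ECx follows a harmonic-mean-style formula over the component ECx values, and the MDR compares the prediction with the observed value. A short worked sketch; all numbers are hypothetical, not the study's:

```python
import numpy as np

# Concentration addition (CA): with mixture fractions p_i, the predicted
# mixture ECx satisfies  ECx_mix = 1 / sum_i(p_i / ECx_i).
ec50 = np.array([30.0, 60.0, 120.0])  # hypothetical single-compound EC50s, ug/L
frac = np.array([0.5, 0.3, 0.2])      # mixture fractions (sum to 1)

ec50_mix_pred = 1.0 / np.sum(frac / ec50)

# Model deviation ratio: MDR = predicted / observed mixture EC50
ec50_mix_obs = 55.0                   # hypothetical observed mixture EC50
mdr = ec50_mix_pred / ec50_mix_obs    # MDR < 1 suggests antagonism
```

An observed EC50 above the CA prediction (MDR < 1) means the mixture is less toxic than additivity implies, which is the antagonism the study reports.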
NASA Astrophysics Data System (ADS)
Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard
2016-08-01
Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.
NASA Astrophysics Data System (ADS)
Artemov, V. I.; Minko, K. B.; Yan'kov, G. G.; Kiryukhin, A. V.
2016-05-01
A mathematical model was developed for numerical analysis of heat and mass transfer processes in the experimental section of the air condenser (ESAC) created by the Scientific Production Company (SPC) "Turbocon" and mounted on the territory of the All-Russia Thermal Engineering Institute. The simulations were performed using the authors' CFD code ANES. The models were verified against experimental data obtained in tests of the ESAC. The capability of the proposed models to calculate the processes in the steam-air mixture and cooling air, and of the algorithms to take into account the maldistribution across the various rows of the tube bundle, was shown. Data are presented on the influence of the temperature and flow rate of the cooling air on the pressure in the upper header of the ESAC, the effective heat transfer coefficient, the steam flow distribution by tube rows, and the dimensions of the ineffectively operating zones of the tube bundle for two schemes of steam-air mixture flow (one-pass and two-pass). It was shown that the pressure behind the turbine (in the upper header) increases significantly with increasing steam flow rate, decreasing cooling air flow rate, and rising cooling air temperature, and that the maximum value of the heat transfer coefficient is fully determined by the flow rate of the cooling air. Furthermore, the steam flow rate corresponding to the maximum heat transfer coefficient substantially depends on the ambient temperature. An analysis of the effectiveness of the considered schemes of internal coolant flow showed that the two-pass scheme is more effective because it provides lower pressure in the upper header, despite the fact that its hydraulic resistance at a fixed flow rate of steam-air mixture is considerably higher than that of the one-pass scheme. This result is a consequence of the fact that, in the two-pass scheme, the condensation process involves a larger internal tube surface, resulting in lower values of Δt (the temperature difference between the internal and external coolant) for a given heat load.
NASA Astrophysics Data System (ADS)
Priti; Gangwar, Reetesh Kumar; Srivastava, Rajesh
2018-04-01
A collisional radiative (C-R) model has been developed to diagnose rf-generated Ar-O2 (0%-5%) mixture plasma at low temperatures. Since in such plasmas the most dominant process is electron impact excitation, we considered several electron impact fine-structure transitions in the argon atom from its ground as well as excited states. The cross-sections for these transitions have been obtained using the reliable fully relativistic distorted wave theory. Processes which account for the coupling of argon with the oxygen molecules have been further added to the model. We couple our model to the optical spectroscopic measurements reported by Jogi et al. [J. Phys. D: Appl. Phys. 47, 335206 (2014)]. The plasma parameters, viz. the electron density (ne) and the electron temperature (Te), as a function of O2 concentration have been obtained using thirteen intense emission lines out of the 3p⁵4p → 3p⁵4s transitions observed in their spectroscopic measurements. It is found that as the content of O2 in Ar increases from 0% to 5%, Te increases in the range 0.85-1.7 eV, while the electron density decreases from 2.76 × 10¹² to 2.34 × 10¹¹ cm⁻³. The Ar 3p⁵4s (1sᵢ) fine-structure level populations at our extracted plasma parameters are found to be in very good agreement with those obtained from the measurements. Furthermore, we have estimated the individual contributions coming from the ground state, the 1sᵢ manifolds, and cascade contributions to the population of the radiating Ar 3p⁵4p (2pᵢ) states as a function of a trace amount of O2. Such information is very useful for understanding the importance of various processes occurring in the plasma.
Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten
2017-11-01
Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because of the similar modes of action of neonicotinoids, concentration-additive cumulative mixture toxicity was the predicted model. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity. 
Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, the development of better toxicity models for neonicotinoid mixture exposures, and the consideration of mixture effects when setting water quality guidelines for this class of pesticides. Environ Toxicol Chem 2017;36:3091-3101. © 2017 SETAC.
NASA Astrophysics Data System (ADS)
Tovbin, Yu. K.
2018-06-01
An analysis is presented of one of the key concepts of the physical chemistry of condensed phases: the self-consistency of the theory in describing the rates of the elementary stages of reversible processes and the equilibrium distribution of components in a reaction mixture. Self-consistency requires that, by equating the rates of the forward and backward reactions, we obtain the same equation for the equilibrium distribution of reaction mixture components that follows directly from equilibrium theory. Ideal reaction systems always have this property, since the theory is of a one-particle character. Problems arise when considering the interparticle interactions responsible for the nonideal behavior of real systems. The Eyring and Temkin approaches to describing nonideal reaction systems are compared. Conditions for the self-consistency of the theory for mono- and bimolecular processes are considered within the lattice gas model for different types of interparticle potentials, degrees of deviation from the equilibrium state, allowance for the internal motions of molecules in condensed phases, and the electronic polarization of the reagent environment. The inapplicability of the concept of an activated-complex coefficient for reaching self-consistency is demonstrated. It is also shown that one-particle approximations for treating intermolecular interactions do not yield a self-consistent theory for condensed phases; at a minimum, short-range order correlations must be considered.
Heat transfer degradation during condensation of non-azeotropic mixtures
NASA Astrophysics Data System (ADS)
Azzolin, M.; Berto, A.; Bortolin, S.; Del Col, D.
2017-11-01
International organizations call for a reduction in the production and use of HFCs in the coming years. Binary or ternary blends of hydrofluorocarbons (HFCs) and hydrofluoroolefins (HFOs) are emerging as possible substitutes for high Global Warming Potential (GWP) fluids currently employed in some refrigeration and air-conditioning applications. In some cases, these mixtures are non-azeotropic and thus, during phase change at constant pressure, they present a temperature glide that, for some blends, can be higher than 10 K. Such temperature variation during phase change could lead to a better match between the refrigerant and water temperature profiles in a condenser, thus reducing the exergy losses associated with the heat transfer process. Nevertheless, the additional mass transfer resistance which occurs during the phase change of zeotropic mixtures leads to heat transfer degradation. Therefore, the design of a condenser working with a zeotropic mixture poses the problem of how to extend the correlations developed for pure fluids to the condensation of mixtures. Experimental data are very helpful in the assessment of design procedures. In the present paper, heat transfer coefficients have been measured during condensation of zeotropic mixtures of HFC and HFO fluids. Tests have been carried out in the test rig available at the Two Phase Heat Transfer Lab of the University of Padova. During the condensation tests, heat is subtracted from the mixture using cold water, and the heat transfer coefficient is obtained from the measurement of the heat flux on the water side together with direct measurements of the wall temperature and saturation temperature. Tests have been performed at a mean saturation temperature of 40°C. The present experimental database is used to assess predictive correlations for condensation of mixtures, providing valuable information on the applicability of available models.
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.
Parameters of Solidifying Mixtures Transporting at Underground Ore Mining
NASA Astrophysics Data System (ADS)
Golik, Vladimir; Dmitrak, Yury
2017-11-01
The article is devoted to the problem of providing mining enterprises with solidifying filling mixtures in underground mining. The results of analytical studies using data from foreign and domestic practice of delivering solidifying mixtures to stopes are given. On the basis of experimental practice, the parameters of transportation of solidifying filling mixtures are given, with an increase in their quality achieved through the effect of vibration in the pipeline. The mechanism of the delivery process and the procedure for determining the parameters of the forced oscillations of the pipeline, the characteristics of the transporting processes, the rigidity of the elastic elements of the pipeline section supports, and the magnitude of the vibrator's driving force are detailed. It is determined that the quality of solidifying filling mixtures can be increased through the rational use of technical resources during the transportation of mixtures, as a result of which the mixtures are characterized by a more even distribution of the aggregate. The algorithm for calculating the parameters of vibratory pipeline transport of solidifying filling mixtures can be in demand in the design of underground mining technology for mineral deposits.
Introduction to the special section on mixture modeling in personality assessment.
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.
Guo, Cheng-Long; Cao, Hong-Xia; Pei, Hong-Shan; Guo, Fei-Qiang; Liu, Da-Meng
2015-04-01
A multiphase mixture model was developed for revealing the interaction mechanism between biochemical reactions and transfer processes in the entrapped-cell photobioreactor packed with gel granules containing Rhodopseudomonas palustris CQK 01. The effects of different operation parameters, including operation temperature, influent medium pH value, and porosity of the packed bed, on substrate concentration distribution characteristics and photo-hydrogen production performance were investigated. The results showed that the model predictions were in good agreement with the experimental data reported. Moreover, an operation temperature of 30 °C and an influent medium pH value of 7 were the most suitable conditions for photo-hydrogen production by biodegrading substrate. In addition, a lower porosity of the packed bed was beneficial for enhancing photo-hydrogen production performance, owing to the improvement in the amount of substrate transferred into gel granules caused by the increased specific area for substrate transfer in the elemental volume. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dendrimersomes Exhibit Lamellar-to-Sponge Phase Transitions.
Wilner, Samantha E; Xiao, Qi; Graber, Zachary T; Sherman, Samuel E; Percec, Virgil; Baumgart, Tobias
2018-05-15
Lamellar to nonlamellar membrane shape transitions play essential roles in key cellular processes, such as membrane fusion and fission, and occur in response to external stimuli, including drug treatment and heat. A subset of these transitions can be modeled by means of thermally inducible amphiphile assemblies. We previously reported on mixtures of hydrogenated, fluorinated, and hybrid Janus dendrimers (JDs) that self-assemble into complex dendrimersomes (DMSs), including dumbbells, and serve as promising models for understanding the complexity of biological membranes. Here we show, by means of a variety of complementary techniques, that DMSs formed by single JDs or by mixtures of JDs undergo a thermally induced lamellar-to-sponge transition. Consistent with the formation of a three-dimensional bilayer network, we show that DMSs become more permeable to water-soluble fluorophores after transitioning to the sponge phase. These DMSs may be useful not only in modeling isotropic membrane rearrangements of biological systems but also in drug delivery since nonlamellar delivery vehicles can promote endosomal disruption and cargo release.
Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee
2015-01-01
Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequence of constraining the residual variances on class enumeration (finding the true number of latent classes) and on parameter estimates under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions were made. PMID:26139512
Predicting the shock compression response of heterogeneous powder mixtures
NASA Astrophysics Data System (ADS)
Fredenburg, D. A.; Thadhani, N. N.
2013-06-01
A model framework for predicting the dynamic shock-compression response of heterogeneous powder mixtures using readily obtained measurements from quasi-static tests is presented. Low-strain-rate compression data are first analyzed to determine the region of the bulk response over which particle rearrangement does not contribute to compaction. This region is then fit to determine the densification modulus of the mixture, σD, a newly defined parameter describing the resistance of the mixture to yielding. The measured densification modulus, reflective of the diverse yielding phenomena that occur at the meso-scale, is incorporated into a rate-independent formulation of the P-α model, which is combined with an isobaric equation of state to predict the low- and high-stress dynamic compression response of heterogeneous powder mixtures. The framework is applied to two metal + metal-oxide (thermite) powder mixtures, and good agreement between model and experiment is obtained for all mixtures at stresses near and above those required to reach full density. At lower stresses, rate dependencies of the constituents, and specifically those of the matrix constituent, determine the ability of the model to predict the measured response in the incomplete-compaction regime.
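The rate-independent P-α concept can be illustrated with Herrmann's classical quadratic compaction curve (a generic textbook form of the P-α model, not this paper's specific formulation or parameter values):

```python
def p_alpha_distension(P, alpha0, Pe, Ps):
    """Herrmann-style rate-independent P-alpha compaction curve.

    The distension alpha = (porous specific volume)/(solid specific volume)
    decays from alpha0 at the elastic limit pressure Pe to 1 (full density)
    at the compaction pressure Ps.
    """
    if P <= Pe:
        return alpha0
    if P >= Ps:
        return 1.0
    return 1.0 + (alpha0 - 1.0) * ((Ps - P) / (Ps - Pe)) ** 2
```

With an initial distension of 1.4 and Pe = 0.1, Ps = 2.0 GPa (placeholder values), the curve decreases monotonically from 1.4 to 1 over the compaction interval.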
Mesoporous metal oxides and processes for preparation thereof
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suib, Steven L.; Poyraz, Altug Suleyman
A process for preparing a mesoporous metal oxide, i.e., a transition metal oxide, a lanthanide metal oxide, a post-transition metal oxide, or a metalloid oxide. The process comprises providing an acidic mixture comprising a metal precursor, an interface modifier, a hydrotropic ion precursor, and a surfactant; and heating the acidic mixture at a temperature and for a period of time sufficient to form the mesoporous metal oxide. A mesoporous metal oxide prepared by the above process. A method of controlling nano-sized wall crystallinity and mesoporosity in mesoporous metal oxides. The method comprises providing an acidic mixture comprising a metal precursor, an interface modifier, a hydrotropic ion precursor, and a surfactant; and heating the acidic mixture at a temperature and for a period of time sufficient to control nano-sized wall crystallinity and mesoporosity in the mesoporous metal oxides. Mesoporous metal oxides and a method of tuning structural properties of mesoporous metal oxides.
D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.
Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W
2005-12-01
Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
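A brute-force sketch conveys the D-optimality idea for a threshold model. Everything below is illustrative (a hinge-shaped threshold model, assumed prior guesses, and an arbitrary dose grid), not the authors' chlorpyrifos:carbaryl design:

```python
import numpy as np
from itertools import combinations

def info_matrix(doses, b1, delta):
    """Fisher information (up to sigma^2) for the threshold model
    y = b0 + b1*max(0, d - delta) + e, built from gradient outer products
    with respect to (b0, b1, delta)."""
    M = np.zeros((3, 3))
    for d in doses:
        g = np.array([1.0,
                      max(0.0, d - delta),
                      -b1 if d > delta else 0.0])
        M += np.outer(g, g)
    return M

def d_optimal_design(grid, n_support, b1, delta):
    """Exhaustively pick the n_support doses on the grid that maximize
    det(information) under the assumed (b1, delta) prior guesses."""
    best, best_det = None, -1.0
    for doses in combinations(grid, n_support):
        det = np.linalg.det(info_matrix(doses, b1, delta))
        if det > best_det:
            best, best_det = doses, det
    return best, best_det
```

As the abstract notes, the result depends on the prior specification of the slope and threshold: a nonsingular design must straddle the assumed threshold, with at least two support doses above it.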
NASA Astrophysics Data System (ADS)
Konishi, C.
2014-12-01
A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e., gravel, sand, and clay). The well-known sand-clay (bimodal) mixture model treats the clay content as the volume fraction of the small particle and the rest of the volume as that of the large particle. This simple approach has been commonly accepted and validated by many previous studies. However, a collection of laboratory measurements of permeability and grain-size distribution for unconsolidated samples shows the impact of the presence of another large particle: even a few percent of gravel particles increases the permeability of a sample significantly. This observation cannot be explained by the bimodal mixture model and suggests the need for a gravel-sand-clay mixture model. The proposed model considers the volume fractions of all three components instead of only the clay content. Sand can be either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosity of the two cases, one where sand is the smaller particle and one where it is the larger particle, can each be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can coexist in one sample; the total porosity of the mixed sample is therefore calculated as a weighted average of the two cases, with weights given by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation.
Furthermore, elastic properties are obtained from the general Hashin-Shtrikman-Walpole bounds. The predictions of this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: A part of this study was accomplished with a subsidy from the River Environment Fund of Japan.
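The final step of the workflow, permeability from effective porosity and effective grain size, can be sketched as follows. The harmonic (specific-surface) grain-size average is a common choice assumed here, not necessarily the author's exact definition:

```python
def kozeny_carman(phi_eff, d_eff, c=180.0):
    """Permeability [m^2] from the Kozeny-Carman equation,
    k = d^2 * phi^3 / (c * (1 - phi)^2), with the usual c = 180
    for an equivalent grain diameter d."""
    return d_eff ** 2 * phi_eff ** 3 / (c * (1.0 - phi_eff) ** 2)

def effective_grain_size(volume_fractions, grain_sizes):
    """Specific-surface (harmonic) average grain size of a mixture:
    1/d_eff = sum(f_i / d_i) / sum(f_i). Small grains dominate, which
    is why a little clay sharply reduces permeability."""
    s = sum(f / d for f, d in zip(volume_fractions, grain_sizes))
    return sum(volume_fractions) / s
```

For example, an even mix of 1 mm sand and 10 μm clay-sized grains has an effective grain size near 20 μm, an order of magnitude closer to the fine fraction.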
Best, Virginia; Mason, Christine R.; Swaminathan, Jayaganesh; Roverud, Elin; Kidd, Gerald
2017-01-01
In many situations, listeners with sensorineural hearing loss demonstrate reduced spatial release from masking compared to listeners with normal hearing. This deficit is particularly evident in the “symmetric masker” paradigm in which competing talkers are located to either side of a central target talker. However, there is some evidence that reduced target audibility (rather than a spatial deficit per se) under conditions of spatial separation may contribute to the observed deficit. In this study a simple “glimpsing” model (applied separately to each ear) was used to isolate the target information that is potentially available in binaural speech mixtures. Intelligibility of these glimpsed stimuli was then measured directly. Differences between normally hearing and hearing-impaired listeners observed in the natural binaural condition persisted for the glimpsed condition, despite the fact that the task no longer required segregation or spatial processing. This result is consistent with the idea that the performance of listeners with hearing loss in the spatialized mixture was limited by their ability to identify the target speech based on sparse glimpses, possibly as a result of some of those glimpses being inaudible. PMID:28147587
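The glimpsing idea, retaining only the time-frequency cells where the target dominates the maskers, can be sketched on magnitude spectrograms. This is a generic ideal-binary-mask construction; the local criterion `lc_db` and its per-ear application are assumptions, not the study's exact implementation:

```python
import numpy as np

def glimpse_mask(target_mag, masker_mag, lc_db=0.0):
    """Ideal time-frequency 'glimpse' mask: keep cells where the target
    magnitude exceeds the masker magnitude by the local criterion lc_db."""
    eps = 1e-12                                   # avoid log of zero
    snr_db = 20.0 * np.log10((target_mag + eps) / (masker_mag + eps))
    return (snr_db > lc_db).astype(float)

def apply_glimpses(mixture_mag, target_mag, masker_mag, lc_db=0.0):
    """Zero out the masked cells of the mixture, leaving only glimpses."""
    return mixture_mag * glimpse_mask(target_mag, masker_mag, lc_db)
```

Resynthesizing speech from only these glimpsed cells yields the kind of sparse stimuli whose intelligibility the study measures directly.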
NASA Astrophysics Data System (ADS)
Wasilewska, Marta
2017-10-01
This paper presents a comparison of the skid resistance of wearing courses made of SMA (Stone Mastic Asphalt) mixtures that differ in the polishing resistance of their coarse aggregate. Dolomite, limestone, granite and trachybasalt were taken for investigation. The SMA mixtures have the same nominal aggregate size (11 mm) and very similar aggregate particle-size distributions in the mineral mixtures. The tested SMA11 mixtures were designed according to EN 13108-5 and the Polish National Specification WT-2:2014. Evaluation of the skid resistance was performed using the FAP (Friction After Polishing) test equipment, also known as the Wehner/Schulze machine. This laboratory method enables comparison of the skid resistance of different types of mixtures under specified conditions simulating polishing processes. Tests were performed both on specimens made of each coarse aggregate and on the SMA11 mixtures containing these aggregates. The friction coefficient μm was measured before and during the polishing process, up to 180,000 passes of the polishing head. Comparison of the results showed differences in sensitivity to polishing among the mixtures, which depend on the petrographic properties of the rock used to produce the aggregate. Limestone and dolomite tend to have a fairly uniform texture with low hardness, which makes these rock types susceptible to rapid polishing. This resulted in lower coefficients of friction for the SMA11 mixtures with limestone and dolomite than for the other test mixtures. These significant differences were already registered at the beginning of the polishing process: the limestone aggregate had a lower μm value before the process started than the trachybasalt and granite aggregates had after its completion. Despite the differences in structure and mineralogical composition between the granite and trachybasalt, only slightly different values of the friction coefficient were obtained at the end of polishing.
Images of the surface were taken with an optical microscope for a better understanding of the phenomena occurring on the specimen surface. The results may provide valuable information for selecting aggregates for asphalt mixtures at the design stage and for the maintenance of existing road pavements.
Adaptive Gaussian mixture models for pre-screening in GPR data
NASA Astrophysics Data System (ADS)
Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.
2011-06-01
Due to the large amount of data generated by vehicle-mounted ground penetrating radar (GPR) antenna arrays, advanced feature extraction and classification can only be performed on a small subset of the data during real-time operation. As a result, most GPR-based landmine detection systems implement "pre-screening" algorithms to process all of the data generated by the antenna array and identify locations with anomalous signatures for more advanced processing. These pre-screening algorithms must be computationally efficient and obtain a high probability of detection, but may permit a false-alarm rate higher than the total system requirement. Many approaches to pre-screening have previously been proposed, including linear prediction coefficients, the LMS algorithm, and CFAR-based approaches. Similar pre-screening techniques have also been developed in the field of video processing to identify anomalous behavior or anomalous objects. One such algorithm, an online k-means approximation to an adaptive Gaussian mixture model (GMM), is particularly well suited to pre-screening in GPR data due to its computational efficiency, its non-linear nature, and the relevance of the logic underlying the algorithm to GPR processing. In this work we explore the application of this adaptive GMM-based anomaly detection approach from the video processing literature to pre-screening in GPR data. Results with the ARA Nemesis landmine detection system demonstrate significant pre-screening performance improvements compared to alternative approaches, and indicate that the proposed algorithm is a complementary technique to existing methods.
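The flavor of the online k-means approximation to an adaptive GMM can be conveyed by a scalar sketch in the style of the Stauffer-Grimson background model from the video literature. The class name, parameter values, and anomaly flag are illustrative, not the paper's implementation:

```python
import numpy as np

class OnlineGMM:
    """Adaptive Gaussian mixture for one scalar data stream (e.g., one GPR
    channel). An online k-means approximation updates only the component
    that best matches each incoming sample."""
    def __init__(self, k=3, alpha=0.05, match_sigma=2.5):
        self.k, self.alpha, self.match_sigma = k, alpha, match_sigma
        self.mu = np.zeros(k)
        self.var = np.ones(k)
        self.w = np.full(k, 1.0 / k)

    def update(self, x):
        """Update the model with sample x; return True if x is anomalous."""
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        j = int(np.argmin(d))
        matched = d[j] < self.match_sigma
        m = np.zeros(self.k)
        if matched:
            # move the matched component toward the sample
            m[j] = 1.0
            self.mu[j] += self.alpha * (x - self.mu[j])
            self.var[j] += self.alpha * ((x - self.mu[j]) ** 2 - self.var[j])
            self.var[j] = max(self.var[j], 1e-6)   # floor against collapse
        else:
            # replace the least probable component with the new observation
            j = int(np.argmin(self.w))
            self.mu[j], self.var[j], m[j] = x, 10.0, 1.0
        self.w = (1 - self.alpha) * self.w + self.alpha * m
        self.w /= self.w.sum()
        return not bool(matched)
```

Because each update touches one component with O(k) arithmetic, the model meets the computational budget of a pre-screener while still adapting to slowly varying background returns.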
A chemical reactor network for oxides of nitrogen emission prediction in gas turbine combustor
NASA Astrophysics Data System (ADS)
Hao, Nguyen Thanh
2014-06-01
This study presents the use of a new chemical reactor network (CRN) model and non-uniform injectors to predict NOx emissions in a gas turbine combustor. The CRN uses information from Computational Fluid Dynamics (CFD) combustion analysis with two injectors of CH4-air mixture. The injectors have different lean equivalence ratios, and they control the fuel flow to stabilize combustion and adjust the combustor's equivalence ratio. Non-uniform injection is applied to improve the burning process of the turbine combustor. The results of the new CRN for NOx prediction in the gas turbine combustor show very good agreement with experimental data from the Korea Electric Power Research Institute.
Simplex-centroid mixture formulation for optimised composting of kitchen waste.
Abdullah, N; Chin, N L
2010-11-01
Composting is a good recycling method to fully utilise the organic wastes present in kitchen waste, owing to the high content of nutritious matter within the waste. In the present study, the optimised mixture proportions of kitchen waste containing vegetable scraps (V), fish processing waste (F) and newspaper (N) or onion peels (O) were determined by applying the simplex-centroid mixture design method to achieve the initial moisture content and carbon-to-nitrogen (CN) ratio desired for an effective composting process. The best mixture for blends with newspaper was 48.5% V, 17.7% F and 33.7% N, while for blends with onion peels it was 44.0% V, 19.7% F and 36.2% O. The predicted responses from these mixture proportions fell within the acceptable limits of 50% to 65% moisture content and a CN ratio of 20-40, and were also validated experimentally. Copyright 2010 Elsevier Ltd. All rights reserved.
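For reference, the blends of a simplex-centroid design are easy to enumerate: one equal-proportion blend for every nonempty subset of components. This is the generic construction; the study additionally fits response models over these points and screens against the moisture and CN targets:

```python
from itertools import combinations

def simplex_centroid_design(n_components):
    """All 2**q - 1 design points of a q-component simplex-centroid design:
    pure components, binary 50/50 blends, ternary thirds, ..., up to the
    overall centroid."""
    points = []
    for r in range(1, n_components + 1):
        for subset in combinations(range(n_components), r):
            p = [0.0] * n_components
            for i in subset:
                p[i] = 1.0 / r
            points.append(tuple(p))
    return points
```

For the three-component V/F/N blend this gives 7 runs: three pure components, three binary midpoints, and the centroid.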
NASA Astrophysics Data System (ADS)
Narayanan, Vineed; Venkatarathnam, G.
2018-03-01
Nitrogen-hydrocarbon mixtures are widely used as refrigerants in J-T refrigerators operating with mixtures, as well as in natural gas liquefiers. The Peng-Robinson equation of state has traditionally been used to simulate these cryogenic processes. Multi-parameter Helmholtz energy equations are now preferred for determining the properties of natural gas. They have, however, been used only to predict vapour-liquid equilibria, and not the vapour-liquid-liquid equilibria that can occur in mixtures used in cryogenic mixed-refrigerant processes. In this paper the vapour-liquid equilibria of binary mixtures of nitrogen-methane, nitrogen-ethane, nitrogen-propane and nitrogen-isobutane, and of three-component mixtures of nitrogen-methane-ethane and nitrogen-methane-propane, have been studied with the Peng-Robinson and the Helmholtz energy equations of state of NIST REFPROP and compared with experimental data available in the literature.
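A minimal illustration of the cubic-EOS side of the comparison: the Peng-Robinson compressibility factor for a pure component, using the standard textbook constants. Mixture mixing rules, fugacity-based VLE flashes, and the multi-parameter Helmholtz formulation of REFPROP are beyond this sketch:

```python
import numpy as np

R = 8.314462618  # J/(mol K)

def pr_z_factor(T, P, Tc, Pc, omega):
    """Vapour-phase compressibility factor from the Peng-Robinson EOS for a
    pure component: solve the cubic in Z and take the largest real root."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # Z^3 - (1 - B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B ** 2 - 2.0 * B,
              -(A * B - B ** 2 - B ** 3)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()
```

At near-ambient conditions the result should sit very close to the ideal-gas value Z = 1; at cryogenic mixed-refrigerant conditions the vapour and liquid roots separate, which is where the EOS comparison in the paper matters.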
Thermodynamic model effects on the design and optimization of natural gas plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diaz, S.; Zabaloy, M.; Brignole, E.A.
1999-07-01
The design and optimization of natural gas plants is carried out on the basis of process simulators. The physical property package is generally based on cubic equations of state. Phase equilibrium conditions, thermodynamic functions, equilibrium phase separations, work, and heat are computed by rigorous thermodynamics. The aim of this work is to analyze the NGL turboexpansion process and identify the process computations that are most sensitive to the accuracy of model predictions. Three equations of state, PR, SRK and the Peneloux modification, are used to study the effect of property predictions on process calculations and plant optimization. It is shown that turboexpander plants have moderate sensitivity with respect to phase equilibrium computations, but that higher accuracy is required for the prediction of enthalpy and turboexpansion work. The effect of modeling CO2 solubility is also critical in mixtures with high CO2 content in the feed.
ASHEE: a compressible, Equilibrium-Eulerian model for volcanic ash plumes
NASA Astrophysics Data System (ADS)
Cerminara, M.; Esposti Ongaro, T.; Berselli, L. C.
2015-10-01
A new fluid-dynamic model is developed to numerically simulate the non-equilibrium dynamics of polydisperse gas-particle mixtures forming volcanic plumes. Starting from the three-dimensional N-phase Eulerian transport equations (Neri et al., 2003) for a mixture of gases and solid dispersed particles, we adopt an asymptotic expansion strategy to derive a compressible version of the first-order non-equilibrium model (Ferry and Balachandar, 2001), valid for low-concentration regimes (particle volume fraction less than 10^-3) and particle Stokes numbers (St, i.e., the ratio between the particle relaxation time and the characteristic flow time) not exceeding about 0.2. The new model, called ASHEE (ASH Equilibrium Eulerian), is significantly faster than the N-phase Eulerian model while retaining the capability to describe gas-particle non-equilibrium effects. Direct numerical simulations accurately reproduce the dynamics of isotropic, compressible turbulence in the subsonic regime. For gas-particle mixtures, the model describes the main features of density fluctuations and the preferential concentration and clustering of particles by turbulence, verifying its reliability and suitability for the numerical simulation of high-Reynolds-number, high-temperature regimes in the presence of a dispersed phase. Large-eddy simulations of forced plumes reproduce their observed averaged and instantaneous flow properties. In particular, the self-similar Gaussian radial profile and the development of large-scale coherent structures are reproduced, including the rate of turbulent mixing and entrainment of atmospheric air. Application to large-eddy simulation of the injection of the eruptive mixture into a stratified atmosphere captures some of the important features of turbulent volcanic plumes, including air entrainment, buoyancy reversal, and maximum plume height.
For very fine particles (St → 0, when non-equilibrium effects are negligible) the model reduces to the so-called dusty-gas model. However, coarse particles partially decouple from the gas phase within eddies (thus modifying the turbulent structure) and preferentially concentrate at the eddy periphery, eventually being lost from the plume margins due to the concurrent effect of gravity. By these mechanisms, gas-particle non-equilibrium processes are able to influence the large-scale behavior of volcanic plumes.
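The validity criterion St ≲ 0.2 can be checked with a two-line estimate based on Stokes drag. The ash and gas property values in the test are merely plausible placeholders, not simulation parameters from the paper:

```python
def particle_relaxation_time(rho_p, d_p, mu_g):
    """Stokes relaxation time of a small spherical particle:
    tau_p = rho_p * d_p^2 / (18 * mu_g)."""
    return rho_p * d_p ** 2 / (18.0 * mu_g)

def stokes_number(rho_p, d_p, mu_g, tau_flow):
    """St = tau_p / tau_flow; St -> 0 recovers the dusty-gas limit,
    St up to ~0.2 is the stated range of the equilibrium-Eulerian model."""
    return particle_relaxation_time(rho_p, d_p, mu_g) / tau_flow
```

For a 10 μm ash particle of density 2500 kg/m^3 in air-like gas and a 1 s eddy turnover time, St is of order 10^-3, comfortably in the equilibrium-Eulerian regime; millimetre-scale coarse particles push St past the limit.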
NASA Technical Reports Server (NTRS)
Wessman, Carol A.; Archer, Steven R.; Asner, Gregory P.; Bateson, C. Ann
2004-01-01
Replacement of grasslands and savannas by shrublands and woodlands has been widely reported in tropical, temperate and high-latitude rangelands worldwide (Archer 1994). These changes in vegetation structure may reflect historical shifts in climate and land use, and are likely to influence biodiversity, productivity, above- and belowground carbon and nitrogen sequestration, and biophysical aspects of land surface-atmosphere interactions. The goal of our proposed research is to investigate how changes in the relative abundance of herbaceous and woody vegetation affect carbon and nitrogen dynamics across heterogeneous savannas and shrub/woodlands. By linking actual land-cover composition (derived through spectral mixture analysis of AVIRIS, TM, and AVHRR imagery) with a process-based ecosystem model, we will generate explicit predictions of the C and N storage in plants and soils resulting from changes in vegetation structure. Our specific objectives are to (1) continue development and test applications of spectral mixture analysis across grassland-to-woodland transitions; (2) quantify temporal changes in plant and soil C and N storage and turnover for remote sensing and process model parameterization and verification; and (3) couple landscape fraction maps to an ecosystem simulation model to observe biogeochemical dynamics under changing landscape structure and climatological forcings.
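At its core, spectral mixture analysis is a per-pixel linear inversion: each pixel spectrum is modeled as a weighted sum of endmember spectra (e.g., woody cover, herbaceous cover, bare soil). The unconstrained least-squares sketch below illustrates the idea; operational SMA typically adds sum-to-one and non-negativity constraints and careful endmember selection, and the endmember values here are invented:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral mixture analysis: solve pixel ~= endmembers @ fractions
    in the least-squares sense. endmembers has shape (n_bands, n_endmembers);
    the returned vector holds the estimated cover fractions."""
    fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return fractions
```

Applying this band-by-band across an AVIRIS or TM scene yields the landscape fraction maps that the proposal couples to the ecosystem simulation model.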
Wojcik, Pawel Jerzy; Pereira, Luís; Martins, Rodrigo; Fortunato, Elvira
2014-01-13
An efficient mathematical strategy in the field of solution-processed electrochromic (EC) films is outlined as a combination of experimental work, modeling, and information extraction from massive computational data via statistical software. Design of Experiments (DOE) was used for statistical multivariate analysis and prediction of mixtures through a multiple regression model, as well as for the optimization of a five-component sol-gel precursor subject to complex constraints. This approach significantly reduces the number of experiments to be performed, from 162 in the full factorial (L=3) approach and 72 in the extreme vertices (D=2) approach down to only 30 runs, while still maintaining high accuracy of the analysis. By carrying out a finite number of experiments, the empirical modeling in this study shows reasonably good prediction ability in terms of overall EC performance. An optimized ink formulation was employed in a prototype of a passive EC matrix fabricated in order to test this optically active material system together with a solid-state electrolyte for prospective application in EC displays. Coupling DOE with chromogenic material formulation shows the potential to maximize the capabilities of these systems and ensures increased productivity in many potential solution-processed electrochemical applications.