ERIC Educational Resources Information Center
Henson, James M.; Reise, Steven P.; Kim, Kevin H.
2007-01-01
The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) × 3 (exogenous latent mean difference) × 3 (endogenous latent mean difference) × 3 (correlation between factors) × 3 (mixture proportions) factorial design. In addition, the efficacy of several…
NASA Astrophysics Data System (ADS)
Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng
2016-11-01
The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.
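As a concrete illustration of one presumed shape evaluated in such studies, the sketch below (Python; all numerical values are hypothetical) builds a joint PDF from independent, moment-matched beta marginals for the mixture fraction and progress variable. This independence assumption is one common presumed form, not necessarily the one favored by the paper.

```python
# Presumed joint PDF sketch: independent beta marginals for mixture
# fraction Z and progress variable c, each moment-matched to (mean, var).
import numpy as np
from scipy.stats import beta

def beta_params(mean, var):
    k = mean * (1 - mean) / var - 1.0   # requires var < mean * (1 - mean)
    return mean * k, (1 - mean) * k

aZ, bZ = beta_params(0.3, 0.02)         # hypothetical moments for Z
ac, bc = beta_params(0.6, 0.05)         # hypothetical moments for c
Z, c = np.meshgrid(np.linspace(0.01, 0.99, 99), np.linspace(0.01, 0.99, 99))
joint = beta.pdf(Z, aZ, bZ) * beta.pdf(c, ac, bc)   # independence assumption
print(joint.max())
```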
Poisson Mixture Regression Models for Heart Disease Prediction.
Mufudza, Chipo; Erol, Hamza
2016-01-01
Early heart disease control can be achieved with highly efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its lower Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise within the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks component-wise using Poisson mixture regression models.
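A minimal sketch of the two-component structure described above, fit by EM on synthetic data. The use of scikit-learn's PoissonRegressor for the weighted M-step and all coefficient values are our own assumptions, not the authors' implementation; the concomitant-variable and zero-inflated variants are not shown.

```python
# EM for a two-component Poisson mixture regression (illustrative sketch).
import numpy as np
from scipy.stats import poisson
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
z = rng.random(n) < 0.4                              # latent component labels
y = rng.poisson(np.exp(np.where(z, 0.2 + 0.8 * x, 1.5 - 0.5 * x)))
X = x.reshape(-1, 1)

w0 = rng.random(n)
weights = np.column_stack([w0, 1 - w0])              # random initial responsibilities
for _ in range(50):
    # M-step: weighted Poisson regressions and mixing proportions
    models = [PoissonRegressor(alpha=0.0).fit(X, y, sample_weight=weights[:, k])
              for k in range(2)]
    pi_k = weights.mean(axis=0)
    # E-step: responsibilities from the component Poisson densities
    dens = np.column_stack([poisson.pmf(y, m.predict(X)) for m in models])
    weights = pi_k * dens + 1e-300
    weights /= weights.sum(axis=1, keepdims=True)
print(pi_k, [(m.intercept_, m.coef_) for m in models])
```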
ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J
2014-07-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamic subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess high sensitivity.
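The following sketch illustrates the core idea under invented kinetics: two subpopulations share a one-state ODE dx/dt = k - x but differ in the rate constant k, and the mixture likelihood over noisy single-cell readouts is maximized directly. The model, parameter values, and optimizer choice are assumptions for illustration only, not the paper's pathway model.

```python
# ODE-constrained mixture sketch: two subpopulations, shared ODE form,
# different rate constants; readouts scatter around subpopulation means.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize
from scipy.stats import norm

t_obs = np.array([0.5, 1.0, 2.0, 4.0])

def traj(k):
    sol = solve_ivp(lambda t, x: k - x, (0.0, 4.0), [0.0], t_eval=t_obs)
    return sol.y[0]

rng = np.random.default_rng(1)
z = rng.random(200) < 0.3                             # latent subpopulation labels
data = np.where(z[:, None], traj(2.0), traj(0.5)) + rng.normal(0, 0.1, (200, 4))

def negloglik(theta):
    k1, k2, w_logit, log_sd = theta
    w, sd = 1 / (1 + np.exp(-w_logit)), np.exp(log_sd)
    l1 = norm.logpdf(data, traj(k1), sd).sum(axis=1)  # per-cell log-lik, class 1
    l2 = norm.logpdf(data, traj(k2), sd).sum(axis=1)
    m = np.maximum(l1, l2)                            # log-sum-exp for stability
    return -np.sum(m + np.log(w * np.exp(l1 - m) + (1 - w) * np.exp(l2 - m)))

fit = minimize(negloglik, x0=[1.5, 0.8, 0.0, -1.0], method="Nelder-Mead")
print(fit.x)
```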
Introduction to the special section on mixture modeling in personality assessment.
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.
Archambeau, Cédric; Verleysen, Michel
2007-01-01
A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.
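A small sketch of why Student-t components confer robustness: posterior responsibilities under a two-component t mixture, computed with scipy. All parameter values are hypothetical, and the paper's variational machinery is not reproduced.

```python
# Posterior responsibilities under a two-component Student-t mixture;
# heavy tails keep the outlier from dominating either component.
import numpy as np
from scipy.stats import t

x = np.array([-1.2, 0.1, 0.8, 15.0])            # 15.0 is an outlier
weights = np.array([0.5, 0.5])
locs, scales, df = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), 3.0

dens = weights * t.pdf(x[:, None], df, loc=locs, scale=scales)
resp = dens / dens.sum(axis=1, keepdims=True)   # P(component | x)
print(resp.round(3))
```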
Latent Transition Analysis with a Mixture Item Response Theory Measurement Model
ERIC Educational Resources Information Center
Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian
2010-01-01
A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…
Estimation and Model Selection for Finite Mixtures of Latent Interaction Models
ERIC Educational Resources Information Center
Hsu, Jui-Chen
2011-01-01
Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to proceed when unobserved population heterogeneity exists in the endogenous latent variables of nonlinear structural equation models. The current study estimates a mixture of latent interaction…
Mixture Distribution Latent State-Trait Analysis: Basic Ideas and Applications
ERIC Educational Resources Information Center
Courvoisier, Delphine S.; Eid, Michael; Nussbeck, Fridtjof W.
2007-01-01
Extensions of latent state-trait models for continuous observed variables to mixture latent state-trait models with and without covariates of change are presented that can separate individuals differing in their occasion-specific variability. An empirical application to the repeated measurement of mood states (N = 501) revealed that a model with 2…
ERIC Educational Resources Information Center
Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo
2012-01-01
Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…
Lamont, Andrea E.; Vermunt, Jeroen K.; Van Horn, M. Lee
2016-01-01
Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we test the effects of violating an implicit assumption often made in these models – i.e., that independent variables in the model are not directly related to latent classes. Results indicated that the major risk of failing to model the relationship between predictor and latent class was an increase in the probability of selecting additional latent classes and biased class proportions. Additionally, this study tests whether regression mixture models can detect a piecewise relationship between a predictor and outcome. Results suggest that these models are able to detect piecewise relations, but only when the relationship between the latent class and the predictor is included in model estimation. We illustrate the implications of making this assumption through a re-analysis of applied data examining heterogeneity in the effects of family resources on academic achievement. We compare previous results (which assumed no relation between independent variables and latent class) to the model where this assumption is lifted. Implications and analytic suggestions for conducting regression mixture models based on these findings are noted. PMID:26881956
Gaussian Mixture Model of Heart Rate Variability
Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario
2012-01-01
Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have also been made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters. PMID:22666386
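A minimal sketch of the modelling step on synthetic RR-interval data, assuming scikit-learn's GaussianMixture as a stand-in for the authors' estimation procedure; all numbers are illustrative.

```python
# Fitting a 3-component Gaussian mixture to synthetic RR intervals,
# mirroring the finding that three Gaussians suffice for HRV.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
rr = np.concatenate([rng.normal(0.80, 0.02, 600),   # synthetic RR intervals (s)
                     rng.normal(0.95, 0.04, 300),
                     rng.normal(1.10, 0.03, 100)])
gmm = GaussianMixture(n_components=3, random_state=0).fit(rr.reshape(-1, 1))
print(gmm.weights_, gmm.means_.ravel(), np.sqrt(gmm.covariances_).ravel())
```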
Mollenhauer, Robert; Brewer, Shannon K.
2017-01-01
Failure to account for variable detection across survey conditions constrains progress in stream ecology and can lead to erroneous stream fish management and conservation decisions. In addition to variable detection confounding long-term stream fish population trends, reliable abundance estimates across a wide range of survey conditions are fundamental to establishing species–environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and catch per unit effort (CPUE) remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates that were related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys. Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the hierarchical framework. We demonstrate the application of this contemporary population estimation method to address a longstanding stream fish management issue. We also detail the advantages and trade-offs of hierarchical population estimation methods relative to CPUE and estimation methods that model each site separately.
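The multinomial detection structure at the heart of these models can be sketched for a three-pass removal survey as follows (all values hypothetical): the probability a fish is first captured on pass j is p(1 - p)^(j - 1), and a site's counts are one multinomial draw over the passes plus a "never caught" cell.

```python
# Multinomial cell probabilities for a 3-pass removal survey.
import numpy as np

rng = np.random.default_rng(3)
p, N = 0.4, 120                                   # detection prob., true abundance
pi = np.array([p * (1 - p) ** j for j in range(3)])
cells = np.append(pi, 1 - pi.sum())               # last cell: never caught
counts = rng.multinomial(N, cells)
print(cells.round(3), counts)                     # pass 1..3 catches + missed
```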
Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots
ERIC Educational Resources Information Center
Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.
2013-01-01
Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…
ERIC Educational Resources Information Center
Bauer, Daniel J.; Curran, Patrick J.
2004-01-01
Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…
NASA Astrophysics Data System (ADS)
Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling
2012-10-01
This article considers that the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our studies, we also assume that the backorder rate is dependent on the length of lead time through the amount of shortages and let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions, and then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution free procedure to solve the problem. Furthermore, we develop an algorithm procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are also given to illustrate the results.
A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.
Carreau, Julie; Bengio, Yoshua
2009-07-01
In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y, with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
Spurious Latent Classes in the Mixture Rasch Model
ERIC Educational Resources Information Center
Alexeev, Natalia; Templin, Jonathan; Cohen, Allan S.
2011-01-01
Mixture Rasch models have been used to study a number of psychometric issues such as goodness of fit, response strategy differences, strategy shifts, and multidimensionality. Although these models offer the potential for improving understanding of the latent variables being measured, under some conditions overextraction of latent classes may…
Heterogeneous mixture distributions for multi-source extreme rainfall
NASA Astrophysics Data System (ADS)
Ouarda, T.; Shin, J.; Lee, T. S.
2013-12-01
Mixture distributions have been used to model hydro-meteorological variables showing mixture distributional characteristics, e.g. bimodality. Homogeneous mixture (HOM) distributions (e.g. Normal-Normal and Gumbel-Gumbel) have traditionally been applied to hydro-meteorological variables. However, there is no reason to restrict the mixture distribution to a combination of one identical type. It might be beneficial to characterize the statistical behavior of hydro-meteorological variables through heterogeneous mixture (HTM) distributions such as Normal-Gamma. In the present work, we focus on assessing the suitability of HTM distributions for the frequency analysis of hydro-meteorological variables. To estimate the parameters of HTM distributions, a meta-heuristic algorithm (genetic algorithm) is employed to maximize the likelihood function. A number of distributions are compared, including the Gamma-Extreme value type-one (EV1) HTM distribution, the EV1-EV1 HOM distribution, and the EV1 distribution. The proposed distribution models are applied to annual maximum precipitation data in South Korea. The Akaike Information Criterion (AIC), the root mean squared error (RMSE) and the log-likelihood are used as measures of goodness-of-fit of the tested distributions. Results indicate that the HTM distribution (Gamma-EV1) presents the best fit. The HTM distribution shows significant improvement in the estimation of quantiles corresponding to the 20-year return period. It is shown that extreme rainfall in the coastal region of South Korea presents strong heterogeneous mixture distributional characteristics. Results indicate that HTM distributions are a good alternative for the frequency analysis of hydro-meteorological variables when disparate statistical characteristics are present.
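A sketch of fitting such a heterogeneous Gamma + Gumbel (EV1) mixture by direct likelihood maximization; scipy's Nelder-Mead stands in for the paper's genetic algorithm, and the data and starting values are synthetic.

```python
# MLE of a heterogeneous Gamma + Gumbel (EV1) mixture.
import numpy as np
from scipy.stats import gamma, gumbel_r
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = np.concatenate([gamma.rvs(2.0, scale=30.0, size=300, random_state=rng),
                    gumbel_r.rvs(loc=150.0, scale=25.0, size=200, random_state=rng)])

def nll(th):
    w = 1 / (1 + np.exp(-th[0]))                  # mixing weight in (0, 1)
    f = (w * gamma.pdf(x, np.exp(th[1]), scale=np.exp(th[2]))
         + (1 - w) * gumbel_r.pdf(x, loc=th[3], scale=np.exp(th[4])))
    return -np.log(f + 1e-300).sum()

fit = minimize(nll, x0=[0.0, np.log(2), np.log(20), 100.0, np.log(10)],
               method="Nelder-Mead", options={"maxiter": 5000})
print(fit.x)
```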
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2016-03-01
Different chemometric models were applied for the quantitative analysis of amoxicillin (AMX) and flucloxacillin (FLX) in their binary mixtures, namely, partial least squares (PLS), spectral residual augmented classical least squares (SRACLS), concentration residual augmented classical least squares (CRACLS) and artificial neural networks (ANNs). All methods were applied with and without a variable selection procedure (genetic algorithm, GA). The methods were used for the quantitative analysis of the drugs in laboratory-prepared mixtures and a real market sample by handling the UV spectral data. Robust and simpler models were obtained by applying GA. The proposed methods were found to be rapid, simple and to require no preliminary separation steps.
Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.
2012-01-01
Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require extremely low failure rates to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations. Yield calculation typically requires a large number of SPICE simulations, and these circuit simulations account for the largest share of the computation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model based on the design variables and process variables. The model is constructed by running SPICE simulations to obtain a set of sample points, which are then used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speedups to the calculation of the failure rate. Based on the model, we develop a further accelerated algorithm to enhance the speed of the yield calculation. The approach is suitable for high-dimensional process variables and multi-performance applications.
An introduction to mixture item response theory models.
De Ayala, R J; Santiago, S Y
2017-02-01
Mixture item response theory (IRT) allows one to address situations that involve a mixture of latent subpopulations that are qualitatively different but within which a measurement model based on a continuous latent variable holds. In this modeling framework, one can characterize students both by their location on a continuous latent variable and by their latent class membership. For example, in a study of risky youth behavior this approach would make it possible to estimate an individual's propensity to engage in risky youth behavior (i.e., on a continuous scale) and to use these estimates to identify youth who might be at the greatest risk given their class membership. Mixture IRT can be used with binary response data (e.g., true/false, agree/disagree, endorsement/not endorsement, correct/incorrect, presence/absence of a behavior), Likert response scales, partial credit scoring, nominal scales, or rating scales. In the following, we present mixture IRT modeling and two examples of its use. Data needed to reproduce analyses in this article are available as supplemental online materials at http://dx.doi.org/10.1016/j.jsp.2016.01.002.
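A bare-bones numeric sketch of the two-class mixture idea described above, using a mixture Rasch structure: class-conditional likelihoods of one binary response pattern and the resulting class posteriors. All difficulties, weights, and the ability value are hypothetical.

```python
# Likelihood of one binary response pattern under a two-class mixture
# Rasch model: classes share the logistic form but differ in difficulties.
import numpy as np

def p_correct(theta, b):
    return 1 / (1 + np.exp(-(theta - b)))

x = np.array([1, 1, 0, 1, 0])                     # one person's item responses
b_class = np.array([[-1.0, -0.5, 0.0, 0.5, 1.0],  # item difficulties, class 1
                    [ 0.5,  1.0, 1.5, 2.0, 2.5]]) # item difficulties, class 2
pi_class, theta = np.array([0.6, 0.4]), 0.3       # class weights, person ability

p = p_correct(theta, b_class)                     # (2, 5) success probabilities
lik = (p**x * (1 - p)**(1 - x)).prod(axis=1)      # class-conditional likelihoods
post = pi_class * lik / (pi_class * lik).sum()    # P(class | responses, theta)
print(lik, post.round(3))
```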
ERIC Educational Resources Information Center
Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.
2011-01-01
Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…
NASA Astrophysics Data System (ADS)
Khan, F.; Pilz, J.; Spöck, G.
2017-12-01
Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. They further affect the hydrological conditions and, if not properly taken into account, will yield misleading results. In this study we modeled the spatial dependence structure between climate variables, including maximum temperature, minimum temperature and precipitation, in the Monsoon-dominated region of Pakistan. Six meteorological stations were considered for temperature and four for precipitation. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine and Student t-copula models. Under the copula models, multivariate mixture normal distributions were used as marginals for temperature and gamma distributions for precipitation. A comparison was made between C-Vine, D-Vine and Student t-copula by means of observed and simulated spatial dependence structures to choose an appropriate model for the climate data. The results show that all copula models performed well; however, there are subtle differences in their performances. The copula models captured the patterns of spatial dependence structures between climate variables at multiple meteorological sites, but the t-copula showed poor performance in reproducing the dependence structure with respect to magnitude. Important statistics of the observed data were closely approximated, except for the maximum values of temperature and the minimum values of minimum temperature. Probability density functions of the simulated data closely follow those of the observed data for all variables. C- and D-Vines are better tools for modelling the dependence between variables, although Student t-copulas compete closely for precipitation. Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan, Mixture models, EM algorithm.
ERIC Educational Resources Information Center
Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David
2012-01-01
Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…
Sparse covariance estimation in heterogeneous samples
Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian
2015-01-01
Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogenous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era. PMID:26925189
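A crude finite-dimensional analogue of this idea in Python, assuming scikit-learn: cluster first, then estimate a sparse, cluster-specific conditional independence structure with the graphical lasso. The paper's infinite mixture and hidden Markov machinery is not reproduced here.

```python
# Cluster-specific sparse precision matrices (finite, two-step analogue).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(8)
X = np.vstack([rng.multivariate_normal([0, 0, 0], np.eye(3), 150),
               rng.multivariate_normal([3, 3, 3],
                                       [[1, .8, 0], [.8, 1, 0], [0, 0, 1]], 150)])
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
for k in range(2):
    gl = GraphicalLasso(alpha=0.1).fit(X[labels == k])
    print(k, np.round(gl.precision_, 2))   # cluster-specific structure
```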
NASA Astrophysics Data System (ADS)
Shang, De-Yi; Zhong, Liang-Cai
2017-01-01
Our novel models for fluids' variable physical properties are improved and reported systematically in this work to enhance the theoretical and practical value of studies of convection heat and mass transfer. The framework consists of three models: (1) a temperature parameter model, (2) a polynomial model, and (3) a weighted-sum model, for treating the temperature-dependent physical properties of gases, the temperature-dependent physical properties of liquids, and the concentration- and temperature-dependent physical properties of vapour-gas mixtures, respectively. Each model comprises two related components: basic physical property equations and theoretical similarity equations for the physical property factors. The former, as the foundation of the latter, is based on typical experimental data and physical analysis; the latter is built up by similarity analysis and mathematical derivation from the basic physical property equations. These models enable smooth simulation and treatment of fluids' variable physical properties and so help assure the theoretical and practical value of studies of convection heat and mass transfer. In particular, studies of heat and mass transfer in film condensation of vapour-gas mixtures have so far been lacking, and erroneous heat transfer results have appeared in widespread studies on related topics owing to improper treatment of the concentration- and temperature-dependent physical properties of vapour-gas mixtures. The present physical property models have special advantages for resolving such difficult issues.
Toribo, S.G.; Gray, B.R.; Liang, S.
2011-01-01
The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
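For reference, the single-site integrated likelihood of the (non-Bayesian) Poisson N-mixture model can be written in a few lines; the truncation bound K and the example values are arbitrary.

```python
# Integrated likelihood for one site in a Poisson N-mixture model:
# sum over latent abundance N of Binomial(y | N, p) * Poisson(N | lam),
# truncated at a large K.
import numpy as np
from scipy.stats import binom, poisson

def site_lik(y, lam, p, K=200):
    Ns = np.arange(max(y), K + 1)                 # N cannot be below max count
    prior = poisson.pmf(Ns, lam)
    det = np.prod([binom.pmf(yt, Ns, p) for yt in y], axis=0)
    return np.sum(prior * det)

print(site_lik(y=[3, 5, 2], lam=10.0, p=0.4))
```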
Screening and clustering of sparse regressions with finite non-Gaussian mixtures.
Zhang, Jian
2017-06-01
This article proposes a method to address the problem that can arise when covariates in a regression setting are not Gaussian, which may give rise to approximately mixture-distributed errors, or when a true mixture of regressions produced the data. The method begins with non-Gaussian mixture-based marginal variable screening, followed by fitting a full but relatively smaller mixture regression model to the selected data with the help of a new penalization scheme. Under certain regularity conditions, the new screening procedure is shown to possess a sure screening property even when the population is heterogeneous. We further prove that there exists an elbow point in the associated scree plot which results in a consistent estimator of the set of active covariates in the model. By simulations, we demonstrate that the new procedure can substantially improve the performance of the existing procedures in the context of variable screening and data clustering. By applying the proposed procedure to motif data analysis in molecular biology, we demonstrate that the new method holds promise in practice.
Internal structure of shock waves in disparate mass mixtures
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.
1992-01-01
The detailed flow structure of a normal shock wave for a gas mixture is investigated using the direct-simulation Monte Carlo method. A variable diameter hard-sphere (VDHS) model is employed to investigate the effect of different viscosity temperature exponents (VTE) for each species in a gas mixture. Special attention is paid to the irregular behavior in the density profiles which was previously observed in a helium-xenon experiment. It is shown that the VTE can have substantial effects in the prediction of the structure of shock waves. The variable hard-sphere model of Bird shows good agreement, but with some limitations, with the experimental data if a common VTE is chosen properly for each case. The VDHS model shows better agreement with the experimental data without adjusting the VTE. The irregular behavior of the light-gas component in shock waves of disparate mass mixtures is observed not only in the density profile, but also in the parallel temperature profile. The strength of the shock wave, the type of molecular interactions, and the mole fraction of heavy species have substantial effects on the existence and structure of the irregularities.
Ahmadzadeh, S Mohammad Hassan
2014-01-01
Mixtures of silicone elastomer and silicone oil were prepared and the values of their Young's moduli, E, determined in compression. The mixtures had volume fractions, ϕ, of silicone oil in the range of 0–0.73. Measurements were made, under displacement control, for strain rates, ε̇, in the range of 0.04–3.85 s⁻¹. The behaviour of E as a function of ϕ and ε̇ was investigated using a response surface model. The effects of the two variables were independent for the silicones used in this investigation. As a result, the dependence of E values (measured in MPa) on ϕ and ε̇ (s⁻¹) could be represented by E = 0.57 − 0.75ϕ + 0.01 ln(ε̇). This means that these silicones can be mixed to give materials with E values in the range of about 0.02–0.57 MPa, which includes E values for many biological tissues. Thus, the mixtures can be used for making models for training health-care professionals and may be useful in some research applications as model tissues that do not exhibit biological variability. PMID:24951628
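A worked evaluation of the fitted response-surface equation, transcribed directly from the formula above:

```python
# E = 0.57 - 0.75*phi + 0.01*ln(strain_rate), E in MPa, strain rate in 1/s.
import numpy as np

def youngs_modulus(phi, strain_rate):
    return 0.57 - 0.75 * phi + 0.01 * np.log(strain_rate)

print(youngs_modulus(0.5, 1.0))   # ~0.195 MPa at 50 vol% oil, 1 s^-1
```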
Mixture modelling for cluster analysis.
McLachlan, G J; Chang, S U
2004-10-01
Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
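The outright-clustering rule described above in scikit-learn terms (synthetic data; the choice of library is ours): fit a g-component Gaussian mixture, then assign each observation to its highest-posterior component.

```python
# MAP cluster assignment from fitted mixture posteriors.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
post = gmm.predict_proba(X)          # posterior probability of each component
labels = post.argmax(axis=1)         # equivalent to gmm.predict(X)
print(labels[:5], post[:2].round(3))
```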
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
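A generative sketch of the gaussian scale mixture itself: local gaussian variables share a multiplicative mixer, which leaves raw responses roughly uncorrelated but induces dependence between their energies. The mixer values and probabilities below are hypothetical.

```python
# Gaussian scale mixture: x = mixer * g, mixer shared across filters.
import numpy as np

rng = np.random.default_rng(6)
n_patches, n_filters = 1000, 4
mixers = rng.choice([0.5, 1.0, 3.0], size=n_patches, p=[0.5, 0.3, 0.2])
g = rng.normal(size=(n_patches, n_filters))       # local gaussian variables
x = mixers[:, None] * g                           # observed filter responses
print(np.corrcoef(x[:, 0], x[:, 1])[0, 1])        # ~0: raw responses
print(np.corrcoef(x[:, 0]**2, x[:, 1]**2)[0, 1])  # >0: mixer couples energies
```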
New approach in direct-simulation of gas mixtures
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren
1991-01-01
Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.
ERIC Educational Resources Information Center
Aryadoust, Vahid
2015-01-01
The present study uses a mixture Rasch model to examine latent differential item functioning in English as a foreign language listening tests. Participants (n = 250) took a listening and lexico-grammatical test and completed the metacognitive awareness listening questionnaire comprising problem solving (PS), planning and evaluation (PE), mental…
Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong
2018-01-10
This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.
Apparatus and method for controlling autotroph cultivation
Fuxman, Adrian M; Tixier, Sebastien; Stewart, Gregory E; Haran, Frank M; Backstrom, Johan U; Gerbrandt, Kelsey
2013-07-02
A method includes receiving at least one measurement of a dissolved carbon dioxide concentration of a mixture of fluid containing an autotrophic organism. The method also includes determining an adjustment to one or more manipulated variables using the at least one measurement. The method further includes generating one or more signals to modify the one or more manipulated variables based on the determined adjustment. The one or more manipulated variables could include a carbon dioxide flow rate, an air flow rate, a water temperature, and an agitation level for the mixture. At least one model relates the dissolved carbon dioxide concentration to the one or more manipulated variables, and the adjustment could be determined by using the at least one model to drive the dissolved carbon dioxide concentration to at least one target that optimizes a goal function. The goal function could be to optimize biomass growth rate, nutrient removal and/or lipid production.
Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.
Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun
2015-03-13
In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.
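For illustration, the 16-run core of such a design can be generated as a standard half-fraction, with the fifth factor aliased to the four-way interaction. The paper's actual generator and aliasing structure are not specified, so this construction is an assumption.

```python
# 2^(5-1) fractional factorial core (16 runs): four free factors at +/-1
# and the fifth as their product (generator E = ABCD).
import itertools
import numpy as np

runs = np.array(list(itertools.product([-1, 1], repeat=4)))
design = np.column_stack([runs, runs.prod(axis=1)])   # columns A..D, E = ABCD
print(design.shape)   # (16, 5)
```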
Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.
Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J
2018-05-24
Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
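A toy two-state instance of this framework, reduced to the response-time margin: the processing tree contributes the state weights and each state contributes a Gaussian RT component. All parameter values are hypothetical, and the full joint treatment of discrete responses is omitted.

```python
# Two-state generalized processing tree sketch: state probabilities come
# from tree parameter d; response times are state-specific Gaussians.
import numpy as np
from scipy.stats import norm

d = 0.7                                                 # P(detection state)
mu, sd = np.array([0.6, 1.1]), np.array([0.15, 0.30])   # RT params per state

def rt_density(rt):
    # mixture over latent states: detect (fast) vs guess (slow)
    return d * norm.pdf(rt, mu[0], sd[0]) + (1 - d) * norm.pdf(rt, mu[1], sd[1])

print(rt_density(np.array([0.5, 0.9, 1.3])).round(3))
```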
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2014-03-01
Different chemometric models were applied for the quantitative analysis of amlodipine (AML), valsartan (VAL) and hydrochlorothiazide (HCT) in a ternary mixture, namely, partial least squares (PLS) as a traditional chemometric model and artificial neural networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (genetic algorithm, GA) and a data compression procedure (principal component analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and a pharmaceutical dosage form via handling the UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
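A minimal PLS-1 calibration sketch mirroring the 15/10 calibration-validation split, assuming scikit-learn's PLSRegression and random stand-in spectra (real UV absorbance matrices and concentration values would replace them).

```python
# PLS-1 calibration on UV spectra: predict one drug's concentration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
X_cal = rng.random((15, 120))     # 15 calibration spectra, 120 wavelengths
y_cal = rng.uniform(5, 25, 15)    # e.g. AML concentration (hypothetical units)
X_val = rng.random((10, 120))     # 10 validation mixtures

pls = PLSRegression(n_components=3).fit(X_cal, y_cal)
print(pls.predict(X_val).ravel())
```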
Flash-point prediction for binary partially miscible mixtures of flammable solvents.
Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng
2008-05-30
Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.
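A heavily simplified sketch of the flash-point criterion for the fully miscible, ideal-solution case, sum_i x_i * Psat_i(T) / Psat_i(T_fp,i) = 1, solved by root finding. The paper's partially miscible treatment and activity-coefficient models are not shown, and all Antoine constants and pure-component flash points below are placeholders, not real data.

```python
# Ideal-solution flash-point criterion for a miscible binary mixture.
import numpy as np
from scipy.optimize import brentq

def psat(T, A, B, C):                 # Antoine equation, arbitrary units
    return 10 ** (A - B / (T + C))

antoine = [(7.0, 1500.0, 230.0), (6.9, 1400.0, 220.0)]   # hypothetical constants
T_fp = [285.0, 300.0]                 # pure-component flash points (K), hypothetical
x = [0.4, 0.6]                        # liquid mole fractions

def criterion(T):
    return sum(xi * psat(T, *a) / psat(tf, *a)
               for xi, a, tf in zip(x, antoine, T_fp)) - 1.0

print(brentq(criterion, 250.0, 350.0))   # mixture flash point (K)
```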
Variable mixture ratio performance through nitrogen augmentation
NASA Technical Reports Server (NTRS)
Beichel, R.; Obrien, C. J.; Bair, E. K.
1988-01-01
High/variable mixture ratio O2/H2 candidate engine cycles are examined for earth-to-orbit vehicle application. Engine performance and power balance information are presented for the candidate cycles relative to chamber pressure, bulk density, and mixture ratio. Included in the cycle screening are concepts where a third fluid (liquid nitrogen) is used to achieve a variable mixture ratio over the trajectory from liftoff to earth orbit. The third fluid cycles offer a very low risk, fully reusable, low operation cost alternative to high/variable mixture ratio bipropellant cycles. Variable mixture ratio engines with extendible nozzle are slightly lower performing than a single mixture ratio engine (MR = 7:1) with extendible nozzle. Dual expander engines (MR = 7:1) have slightly better performance than the single mixture ratio engine. Dual fuel dual expander engines offer a 16 percent improvement over the single mixture ratio engine.
Carroll, Rachel; Lawson, Andrew B; Kirby, Russell S; Faes, Christel; Aregay, Mehreteab; Watjou, Kevin
2017-01-01
Many types of cancer have an underlying spatiotemporal distribution. Spatiotemporal mixture modeling can offer a flexible approach to risk estimation via the inclusion of latent variables. In this article, we examine the application and benefits of using four different spatiotemporal mixture modeling methods in the modeling of cancer of the lung and bronchus as well as "other" respiratory cancer incidences in the state of South Carolina. Of the methods tested, no single method outperforms the other methods; which method is best depends on the cancer under consideration. The lung and bronchus cancer incidence outcome is best described by the univariate modeling formulation, whereas the "other" respiratory cancer incidence outcome is best described by the multivariate modeling formulation. Spatiotemporal multivariate mixture methods can aid in the modeling of cancers with small and sparse incidences when including information from a related, more common type of cancer.
Habib, Basant A; AbouGhaly, Mohamed H H
2016-06-01
This study aims to illustrate the applicability of combined mixture-process variable (MPV) design and modeling for optimization of nanovesicular systems. The D-optimal experimental plan studied the influence of three mixture components (MCs) and two process variables (PVs) on lercanidipine transfersomes. The MCs were phosphatidylcholine (A), sodium glycocholate (B) and lercanidipine hydrochloride (C), while the PVs were glycerol amount in the hydration mixture (D) and sonication time (E). The studied responses were Y1: particle size, Y2: zeta potential and Y3: entrapment efficiency percent (EE%). Polynomial equations were used to study the influence of MCs and PVs on each response. Response surface methodology and multiple response optimization were applied to optimize the formulation with the goals of minimizing Y1 and maximizing Y2 and Y3. The obtained polynomial models had prediction R² values of 0.645, 0.947 and 0.795 for Y1, Y2 and Y3, respectively. Contour, Piepel's response trace, perturbation, and interaction plots were drawn for responses representation. The optimized formulation, A: 265 mg, B: 10 mg, C: 40 mg, D: zero g and E: 120 s, had desirability of 0.9526. The actual response values for the optimized formulation were within the two-sided 95% prediction intervals and were close to the predicted values with maximum percent deviation of 6.2%. This indicates the validity of combined MPV design and modeling for optimization of transfersomal formulations as an example of nanovesicular systems.
Gordillo, Belén; Rodríguez-Pulido, Francisco J; González-Miret, M Lourdes; Quijada-Morín, Natalia; Rivas-Gonzalo, Julián C; García-Estévez, Ignacio; Heredia, Francisco J; Escribano-Bailón, M Teresa
2015-09-09
The combined effect of anthocyanin-flavanol-flavonol ternary interactions on the colorimetric and chemical stability of malvidin-3-glucoside has been studied. Model solutions with fixed malvidin-3-glucoside/(+)-catechin ratio (MC) and variable quercetin-3-β-D-glucoside concentration (MC+Q) and solutions with fixed malvidin-3-glucoside/quercetin-3-β-D-glucoside ratio (MQ) and variable (+)-catechin concentration (MQ+C) were tested at levels closer to those existing in wines. Color variations during storage were evaluated by differential colorimetry. Changes in the anthocyanin concentration were monitored by HPLC-DAD. CIELAB color-difference formulas were demonstrated to be of practical interest to assess the stronger and more stable interaction of quercetin-3-β-D-glucoside with the MC binary mixture than (+)-catechin with the MQ mixture. The results imply that MC+Q ternary solutions kept their intensity and bluish tonalities for a longer time in comparison to MQ+C solutions. The stability of malvidin-3-glucoside improves when the concentration of quercetin-3-β-D-glucoside increases in MC+Q mixtures, whereas the addition of (+)-catechin in MQ+C mixtures resulted in an opposite effect.
Duarte, Adam; Adams, Michael J.; Peterson, James T.
2018-01-01
Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.
In vitro screening for population variability in toxicity of pesticide-containing mixtures
Abdo, Nour; Wetmore, Barbara A.; Chappell, Grace A.; Shea, Damian; Wright, Fred A.; Rusyna, Ivan
2016-01-01
Population-based human in vitro models offer exceptional opportunities for evaluating the potential hazard and mode of action of chemicals, as well as variability in responses to toxic insults among individuals. This study was designed to test the hypothesis that comparative population genomics with an efficient in vitro experimental design can be used for evaluation of the potential for hazard, mode of action, and the extent of population variability in responses to chemical mixtures. We selected 146 lymphoblast cell lines from 4 ancestrally and geographically diverse human populations based on the availability of genome sequence and basal RNA-seq data. Cells were exposed to two pesticide mixtures – an environmental surface water sample comprised primarily of organochlorine pesticides and a laboratory-prepared mixture of 36 currently used pesticides – in concentration-response format and evaluated for cytotoxicity. On average, the two mixtures exhibited a similar range of in vitro cytotoxicity and showed considerable inter-individual variability across screened cell lines. However, when in vitro-to-in vivo extrapolation (IVIVE) coupled with reverse dosimetry was employed to convert the in vitro cytotoxic concentrations to oral equivalent doses and compare them to the upper bound of predicted human exposure, we found that the nominally more cytotoxic chlorinated pesticide mixture is expected to have a greater margin of safety (more than 5 orders of magnitude) than the current-use pesticide mixture (less than 2 orders of magnitude), due primarily to differences in exposure predictions. Multivariate genome-wide association mapping revealed an association between the toxicity of the current-use pesticide mixture and a polymorphism (rs1947825) in C17orf54. We conclude that a combination of in vitro human population-based cytotoxicity screening followed by dosimetric adjustment and comparative population genomics analyses enables quantitative evaluation of human health hazard from complex environmental mixtures. Additionally, such an approach yields testable hypotheses regarding potential toxicity mechanisms. PMID:26386728
NASA Astrophysics Data System (ADS)
Yu, Zhitao; Miller, Franklin; Pfotenhauer, John M.
2017-12-01
Both numerical and analytical models of the heat and mass transfer processes in a CO2/N2 mixture gas de-sublimating cross-flow finned duct heat exchanger system are developed to predict the heat transferred from the mixture gas to liquid nitrogen and the de-sublimating rate of CO2 in the mixture gas. The mixture gas outlet temperature, liquid nitrogen outlet temperature, CO2 mole fraction, temperature distribution, and de-sublimating rate of CO2 through the whole heat exchanger were computed using both the numerical and analytical models. The numerical model is built using EES (Engineering Equation Solver) [1]. Guided by the simulations, a cross-flow finned duct heat exchanger can be designed and fabricated to validate the models. The performance of the heat exchanger is evaluated as a function of dimensionless variables, such as the ratio of the mass flow rate of liquid nitrogen to the mass flow rate of inlet flue gas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
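In the VHS/VSS family the viscosity follows a power law in temperature, so one calibration step can be framed as fitting the temperature exponent. The sketch below fits the VHS exponent ω to synthetic viscosity data in log space; the reference values are placeholders, not the LAURA/DPLR collision-integral data used in the study.

```python
import numpy as np

def fit_vhs_omega(T, mu, T_ref=273.0):
    """Fit the VHS temperature exponent omega in mu(T) = mu_ref*(T/T_ref)**omega
    by linear least squares in log space."""
    x = np.log(np.asarray(T) / T_ref)
    y = np.log(np.asarray(mu))
    omega, log_mu_ref = np.polyfit(x, y, 1)
    return omega, np.exp(log_mu_ref)

# synthetic reference viscosities standing in for collision-integral data
T = np.linspace(1000.0, 20000.0, 20)
mu = 1.8e-5 * (T / 273.0) ** 0.74
omega, mu_ref = fit_vhs_omega(T, mu)
print(f"omega = {omega:.3f}, mu_ref = {mu_ref:.3e} Pa*s")
```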
Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth
2011-01-01
Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). Correlated behaviour, which leads to non-independent detections of individuals, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
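A brief simulation illustrates the overdispersion mechanism: drawing a survey-level detection probability from a beta distribution induces the beta-binomial counts that the extended model targets. The parameter values below are illustrative, not estimates from the manatee data.

```python
import numpy as np
from scipy import stats

def betabinom_params(p, rho):
    """Map mean detection p and intra-survey correlation rho
    to beta-binomial shape parameters (a, b)."""
    a = p * (1.0 - rho) / rho
    b = (1.0 - p) * (1.0 - rho) / rho
    return a, b

rng = np.random.default_rng(7)
N, p, rho = 50, 0.4, 0.2            # true abundance, detection, correlation
a, b = betabinom_params(p, rho)

# correlated detections: each survey draws its own detection probability
p_t = rng.beta(a, b, size=1000)
counts = rng.binomial(N, p_t)

print("binomial variance     :", N * p * (1 - p))
print("beta-binomial variance:", stats.betabinom(N, a, b).var())
print("simulated variance    :", counts.var())
```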
Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete
Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun
2015-01-01
In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of −1 to +1, eight axial mixtures were prepared at extreme values of −2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model. PMID:28787990
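For readers unfamiliar with this layout, a generic central-composite-style design builder is sketched below in coded units. Note that it produces 2k = 10 axial runs for five factors, whereas the study used eight, so treat this as the general pattern rather than the paper's exact design.

```python
import itertools
import numpy as np

def central_composite(k, alpha=2.0, n_center=4):
    """Central-composite-style design in coded units: a 2**(k-1)
    fractional factorial core (last factor aliased with the product of
    the others), 2k axial runs at +/-alpha, and n_center center points."""
    base = np.array(list(itertools.product([-1, 1], repeat=k - 1)), float)
    frac = np.column_stack([base, base.prod(axis=1)])
    axial = np.zeros((2 * k, k))
    for j in range(k):
        axial[2 * j, j] = -alpha
        axial[2 * j + 1, j] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([frac, axial, center])

D = central_composite(k=5)
print(D.shape)   # (16 + 10 + 4, 5): 16 factorial, 10 axial, 4 center runs
```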
Three tests and three corrections: Comment on Koen and Yonelinas (2010)
Jang, Yoonhee; Mickes, Laura; Wixted, John T.
2012-01-01
The slope of the z-transformed receiver-operating characteristic (zROC) in recognition memory experiments is usually less than 1, which has long been interpreted to mean that the variance of the target distribution is greater than the variance of the lure distribution. The greater variance of the target distribution could arise because the different items on a list receive different increments in memory strength during study (the “encoding variability” hypothesis). In a test of that interpretation, J. Koen and A. Yonelinas (2010, K&Y) attempted to further increase encoding variability to see if it would further decrease the slope of the zROC. To do so, they presented items on a list for two different durations and then mixed the weak and strong targets together. After performing three tests on the mixed-strength data, K&Y concluded that encoding variability does not explain why the slope of the zROC is typically less than one. However, we show that their tests have no bearing on the encoding variability account. Instead, they bear on the mixture-UVSD model that corresponds to their experimental design. On the surface, the results reported by K&Y appear to be inconsistent with the predictions of the mixture-UVSD model (though they were taken to be inconsistent with the predictions of the encoding variability hypothesis). However, all three of the tests they performed contained errors. When those errors are corrected, the same three tests show that their data support, rather than contradict, the mixture-UVSD model (but they still have no bearing on the encoding variability hypothesis). PMID:22390323
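A quick sketch of the core measurement at issue, the zROC slope: transform cumulative hit and false-alarm rates to z scores and fit a line. The rates below are invented to give a typical slope near 0.8; under the UVSD model the reciprocal of the slope estimates the target/lure standard deviation ratio.

```python
import numpy as np
from scipy import stats

# cumulative hit/false-alarm rates across confidence criteria (illustrative)
hits = np.array([0.50, 0.66, 0.79, 0.89, 0.95])
fas  = np.array([0.07, 0.16, 0.31, 0.50, 0.69])

zH, zF = stats.norm.ppf(hits), stats.norm.ppf(fas)
slope, intercept = np.polyfit(zF, zH, 1)

# under the UVSD model the slope estimates sigma_lure / sigma_target
print(f"zROC slope = {slope:.2f}, sigma_target/sigma_lure = {1/slope:.2f}")
```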
NASA Astrophysics Data System (ADS)
Abdelmalak, M. M.; Bulois, C.; Mourgues, R.; Galland, O.; Legland, J.-B.; Gruber, C.
2016-08-01
Cohesion and friction coefficient are fundamental parameters for scaling brittle deformation in laboratory models of geological processes. However, they are commonly not experimental variables, whereas (1) rocks range from cohesion-less to strongly cohesive and from low friction to high friction and (2) strata exhibit substantial cohesion and friction contrasts. This brittle paradox implies that the effects of brittle properties on processes involving brittle deformation cannot be tested in laboratory models. Solving this paradox requires the use of dry granular materials of tunable and controllable brittle properties. In this paper, we describe dry mixtures of fine-grained cohesive, high friction silica powder (SP) and low-cohesion, low friction glass microspheres (GM) that fulfill this requirement. We systematically estimated the cohesions and friction coefficients of mixtures of variable proportions using two independent methods: (1) a classic Hubbert-type shear box to determine the extrapolated cohesion (C) and friction coefficient (μ), and (2) direct measurements of the tensile strength (T0) and the height (H) of open fractures to calculate the true cohesion (C0). The measured values of cohesion increase from 100 Pa for pure GM to 600 Pa for pure SP, with a sub-linear trend of the cohesion with the mixture GM content. The two independent cohesion measurement methods, from shear tests and tension/extensional tests, yield very similar results of extrapolated cohesion (C) and show that both are robust and can be used independently. The measured values of friction coefficients increase from 0.5 for pure GM to 1.05 for pure SP. The use of these granular material mixtures now allows testing (1) the effects of cohesion and friction coefficient in homogeneous laboratory models and (2) the effect of brittle layering on brittle deformation, as demonstrated by preliminary experiments. Therefore, the brittle properties become, at last, experimental variables.
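The shear-box estimates reduce to fitting a Coulomb failure line. A minimal sketch with invented peak-stress data, chosen near the reported SP-end values purely for illustration:

```python
import numpy as np

# peak shear stress tau at several normal stresses sigma_n (illustrative, Pa)
sigma_n = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
tau     = np.array([480.0, 690.0, 910.0, 1110.0, 1320.0])

# Coulomb failure line: tau = C + mu * sigma_n
mu, C = np.polyfit(sigma_n, tau, 1)
print(f"extrapolated cohesion C = {C:.0f} Pa, friction coefficient mu = {mu:.2f}")
```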
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2016-02-01
Two advanced, accurate and precise chemometric methods are developed for the simultaneous determination of amlodipine besylate (AML) and atorvastatin calcium (ATV) in the presence of their acidic degradation products in tablet dosage forms. The first method was Partial Least Squares (PLS-1) and the second was Artificial Neural Networks (ANN). PLS was compared to ANN models with and without variable selection procedure (genetic algorithm (GA)). For proper analysis, a 5-factor 5-level experimental design was established resulting in 25 mixtures containing different ratios of the interfering species. Fifteen mixtures were used as calibration set and the other ten mixtures were used as validation set to validate the prediction ability of the suggested models. The proposed methods were successfully applied to the analysis of pharmaceutical tablets containing AML and ATV. The methods indicated the ability of the mentioned models to solve the highly overlapped spectra of the quinary mixture, yet using inexpensive and easy to handle instruments like the UV-VIS spectrophotometer.
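As a sketch of the calibration/validation workflow (not the authors' data), the snippet below simulates overlapping spectra under a Beer's-law idealization, fits a PLS-1 style model for a single analyte with scikit-learn, and reports RMSEP on the validation mixtures. The spectra, noise level, and component count are stand-ins.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# stand-in UV spectra: absorbances are linear combinations of five
# overlapping component spectra plus noise (Beer's law idealization)
pure = rng.random((5, 100))               # component spectra, 100 wavelengths
C_cal = rng.uniform(0, 1, (15, 5))        # 15 calibration mixtures
C_val = rng.uniform(0, 1, (10, 5))        # 10 validation mixtures
A_cal = C_cal @ pure + rng.normal(0, 0.01, (15, 100))
A_val = C_val @ pure + rng.normal(0, 0.01, (10, 100))

# PLS-1 style: calibrate one analyte (column 0) at a time
pls = PLSRegression(n_components=5).fit(A_cal, C_cal[:, 0])
rmsep = mean_squared_error(C_val[:, 0], pls.predict(A_val).ravel()) ** 0.5
print(f"RMSEP for analyte 1 on the validation set: {rmsep:.4f}")
```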
Padgett, Mark C; Tick, Geoffrey R; Carroll, Kenneth C; Burke, William R
2017-03-01
The influence of chemical structure on NAPL mixture nonideality evolution, rate-limited dissolution, and contaminant mass flux was examined. The variability of measured and UNIFAC-modeled NAPL activity coefficients as a function of mole fraction was compared for two NAPL mixtures containing structurally different contaminants of concern, toluene (TOL) or trichloroethene (TCE), within a hexadecane (HEXDEC) matrix. The results showed that dissolution from the NAPL mixtures transitioned from ideality for mole fractions >0.05 to nonideality as mole fractions decreased. In particular, TCE generally exhibited more ideal dissolution behavior except at lower mole fractions, which may indicate greater structural/polarity similarity between the two compounds. Raoult's Law and UNIFAC generally under-predicted the batch experiment results for TOL:HEXDEC mixtures, especially for mole fractions ≤0.05. The dissolution rate coefficients were similar for both TOL and TCE over all mole fractions tested. Mass flux reduction (MFR) analysis showed that more efficient removal behavior occurred for TOL and TCE at larger mole fractions compared to the lower initial mole fraction mixtures (i.e. <0.2). However, compared to TOL, TCE generally exhibited more efficient removal behavior over all mole fractions tested, which may have been the result of structural and molecular property differences between the compounds. Activity coefficient variability as a function of mole fraction was quantified through regression analysis and incorporated into dissolution modeling analyses for the dynamic flushing experiments. TOL elution concentrations were modeled (predicted) reasonably well using ideal and equilibrium assumptions, but the TCE elution concentrations could not be predicted using the ideal model. Rather, the dissolution modeling demonstrated that TCE elution was better described by the nonideal model, whereby the NAPL-phase activity coefficient varied as a function of COC mole fraction. For dynamic column flushing experiments, dissolution rate kinetics can vary significantly with changes in NAPL volume and surface area. However, under conditions whereby NAPL volume and area are not significantly altered during dissolution, mixture nonideality effects may have a greater relative control on dissolution (elution) and MFR behavior compared to kinetic rate limitations.
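The ideal baseline being tested here is Raoult's-law-style effective solubility, where the dissolved concentration scales with the NAPL mole fraction and an activity coefficient captures departures from ideality. A minimal sketch, with an approximate TCE pure-phase solubility and an illustrative activity coefficient, neither taken from the study:

```python
def effective_solubility(x, S, gamma=1.0):
    """Raoult's-law style effective solubility C_eq = x * gamma * S.
    x: NAPL mole fraction, S: pure-phase aqueous solubility (mg/L),
    gamma: NAPL-phase activity coefficient (1.0 recovers ideal behavior)."""
    return x * gamma * S

S_TCE = 1100.0                       # mg/L, approximate pure-phase solubility
for x in (0.5, 0.05, 0.005):
    ideal = effective_solubility(x, S_TCE)
    nonideal = effective_solubility(x, S_TCE, gamma=1.4)   # illustrative gamma
    print(f"x = {x:5.3f}: ideal {ideal:7.1f}, nonideal {nonideal:7.1f} mg/L")
```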
NASA Technical Reports Server (NTRS)
Liu, T.-M.; Davy, W. C.
1974-01-01
The nonequilibrium axisymmetric stagnation point boundary layer over an ablating graphite surface is considered. The external stream is a high temperature mixture of hydrogen and helium. Variable thermodynamic and transport properties are assumed. The Lennard-Jones potential model is used to calculate the transport coefficients of each species. Although mixture rules are used for the viscosity of the gas mixture, the weighting functions are more sophisticated than those commonly employed. For the conductivity of the mixture, generalized Wassiljewa coefficients are used. Seven species with 28 dissociation/recombination reactions are considered. Hansen's model for the dissociation rate constants is employed. The recombination rate constants are obtained by invoking detailed balance principles assisted by the JANAF thermodynamic data and the Hansen-Pearson thermodynamic data for C3.
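For context, the commonly employed baseline that such weighting functions refine is Wilke's mixing rule, in which each species' viscosity is weighted by composition-dependent functions φij. A sketch with illustrative hydrogen/helium values (the paper's actual weighting functions are more sophisticated):

```python
import numpy as np

def wilke_viscosity(x, mu, M):
    """Wilke's mixing rule for gas-mixture viscosity.
    x: mole fractions, mu: pure-species viscosities, M: molar masses."""
    x, mu, M = map(np.asarray, (x, mu, M))
    mu_mix = 0.0
    for i in range(len(x)):
        phi = (1 + np.sqrt(mu[i] / mu) * (M / M[i]) ** 0.25) ** 2 \
              / np.sqrt(8 * (1 + M[i] / M))
        mu_mix += x[i] * mu[i] / np.sum(x * phi)
    return mu_mix

# hydrogen/helium mixture with illustrative high-temperature viscosities (Pa*s)
print(wilke_viscosity(x=[0.85, 0.15], mu=[2.0e-5, 4.5e-5], M=[2.016, 4.003]))
```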
Hallquist, Michael N; Wright, Aidan G C
2014-01-01
Over the past 75 years, the study of personality and personality disorders has been informed considerably by an impressive array of psychometric instruments. Many of these tests draw on the perspective that personality features can be conceptualized in terms of latent traits that vary dimensionally across the population. A purely trait-oriented approach to personality, however, might overlook heterogeneity that is related to similarities among subgroups of people. This article describes how factor mixture modeling (FMM), which incorporates both categories and dimensions, can be used to represent person-oriented and trait-oriented variability in the latent structure of personality. We provide an overview of different forms of FMM that vary in the degree to which they emphasize trait- versus person-oriented variability. We also provide practical guidelines for applying FMM to personality data, and we illustrate model fitting and interpretation using an empirical analysis of general personality dysfunction.
A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing
NASA Astrophysics Data System (ADS)
Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.
2018-05-01
Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.
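The paper's central closure property can be illustrated directly: if each endmember follows a GMM, the linear mixing model yields a pixel distribution that is again a GMM, with one component per combination of endmember components. A minimal sketch under that premise, ignoring additive noise, with made-up two-band numbers:

```python
import itertools
import numpy as np

def mixed_pixel_gmm(abundances, endmember_gmms):
    """Combine per-endmember GMMs into the GMM of a mixed pixel under
    the linear mixing model y = sum_j a_j * e_j (additive noise omitted).
    endmember_gmms: one list of (weight, mean, cov) tuples per endmember."""
    components = []
    for combo in itertools.product(*endmember_gmms):
        w = np.prod([c[0] for c in combo])
        mean = sum(a * c[1] for a, c in zip(abundances, combo))
        cov = sum(a ** 2 * c[2] for a, c in zip(abundances, combo))
        components.append((w, mean, cov))
    return components

# two endmembers in two bands: one bimodal, one unimodal (made-up numbers)
gmm1 = [(0.6, np.array([0.2, 0.8]), 0.01 * np.eye(2)),
        (0.4, np.array([0.3, 0.7]), 0.02 * np.eye(2))]
gmm2 = [(1.0, np.array([0.9, 0.1]), 0.01 * np.eye(2))]
for w, m, _ in mixed_pixel_gmm([0.5, 0.5], [gmm1, gmm2]):
    print(f"weight {w:.2f}, mean {m}")
```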
Modeling and analysis of personal exposures to VOC mixtures using copulas
Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart
2014-01-01
Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs) with aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often a single compound dominated the mixture; however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration. Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10⁻³ for about 10% of RIOPA participants. Factors affecting the likelihood of high concentration mixtures included city, participant ethnicity, and house air exchange rates. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy, and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that poorly fit lognormal distributions and that represent the greatest risks. PMID:24333991
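As a sketch of the copula machinery, using the Gaussian member of the candidate family (the study found the Gumbel and t copulas, which capture tail dependence, fit the VOC mixtures better), the dependence structure can be estimated on rank-transformed margins regardless of their marginal shapes:

```python
import numpy as np
from scipy import stats

def fit_gaussian_copula(X):
    """Fit a Gaussian copula: rank-transform each margin to normal
    scores and estimate their correlation matrix."""
    n = X.shape[0]
    U = stats.rankdata(X, axis=0) / (n + 1.0)   # pseudo-observations in (0,1)
    Z = stats.norm.ppf(U)
    return np.corrcoef(Z, rowvar=False)

# two dependent exposures with very different margins (lognormal, exponential)
rng = np.random.default_rng(3)
latent = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=500)
X = np.column_stack([np.exp(latent[:, 0]),
                     stats.expon.ppf(stats.norm.cdf(latent[:, 1]))])

print(fit_gaussian_copula(X))   # recovers the ~0.7 dependence
```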
Finite-deformation phase-field chemomechanics for multiphase, multicomponent solids
NASA Astrophysics Data System (ADS)
Svendsen, Bob; Shanthraj, Pratheek; Raabe, Dierk
2018-03-01
The purpose of this work is the development of a framework for the formulation of geometrically non-linear inelastic chemomechanical models for a mixture of multiple chemical components diffusing among multiple transforming solid phases. The focus here is on general model formulation. No specific model or application is pursued in this work. To this end, basic balance and constitutive relations from non-equilibrium thermodynamics and continuum mixture theory are combined with a phase-field-based description of multicomponent solid phases and their interfaces. Solid phase modeling is based in particular on a chemomechanical free energy and stress relaxation via the evolution of phase-specific concentration fields, order-parameter fields (e.g., related to chemical ordering, structural ordering, or defects), and local internal variables. At the mixture level, differences or contrasts in phase composition and phase local deformation in phase interface regions are treated as mixture internal variables. In this context, various phase interface models are considered. In the equilibrium limit, phase contrasts in composition and local deformation in the phase interface region are determined via bulk energy minimization. On the chemical side, the equilibrium limit of the current model formulation reduces to a multicomponent, multiphase, generalization of existing two-phase binary alloy interface equilibrium conditions (e.g., KKS). On the mechanical side, the equilibrium limit of one interface model considered represents a multiphase generalization of Reuss-Sachs conditions from mechanical homogenization theory. Analogously, other interface models considered represent generalizations of interface equilibrium conditions consistent with laminate and sharp-interface theory. In the last part of the work, selected existing models are formulated within the current framework as special cases and discussed in detail.
ERIC Educational Resources Information Center
Connell, Arin M.; Dishion, Thomas J.; Deater-Deckard, Kirby
2006-01-01
This 4-year study of 698 young adolescents examined the covariates of early onset substance use from Grade 6 through Grade 9. The youth were randomly assigned to a family-centered Adolescent Transitions Program (ATP) condition. Variable-centered (zero-inflated Poisson growth model) and person-centered (latent growth mixture model) approaches were…
Suppressor Variables and Multilevel Mixture Modelling
ERIC Educational Resources Information Center
Darmawan, I Gusti Ngurah; Keeves, John P.
2006-01-01
A major issue in educational research involves taking into consideration the multilevel nature of the data. Since the late 1980s, attempts have been made to model social science data that conform to a nested structure. Among other models, two-level structural equation modelling or two-level path modelling and hierarchical linear modelling are two…
Identification of degenerate neuronal systems based on intersubject variability.
Noppeney, Uta; Penny, Will D; Price, Cathy J; Flandin, Guillaume; Friston, Karl J
2006-04-15
Group studies implicitly assume that all subjects activate one common system to sustain a particular cognitive task. Intersubject variability is generally treated as well-behaved and uninteresting noise. However, intersubject variability might result from subjects engaging different degenerate neuronal systems that are each sufficient for task performance. This would produce a multimodal distribution of intersubject variability. We have explored this idea with the help of Gaussian mixture modeling and Bayesian model comparison procedures. We illustrate our approach using a crossmodal priming paradigm, in which subjects perform a semantic decision on environmental sounds or their spoken names that were preceded by a semantically congruent or incongruent picture or written name. All subjects consistently activated the superior temporal gyri bilaterally, the left fusiform gyrus and the inferior frontal sulcus. Comparing one- and two-component Gaussian mixture models of the unexplained residuals provided very strong evidence for two groups with distinct activation patterns: 6 subjects exhibited additional activations in the superior temporal sulci bilaterally and the right superior frontal and central sulcus; 11 subjects showed increased activation in the striate and the right inferior parietal cortex. These results suggest that semantic decisions on auditory-visual compound stimuli might be accomplished by two overlapping degenerate neuronal systems.
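The one- versus two-component comparison can be prototyped quickly. The sketch below scores both models with BIC on stand-in residual summaries; the study itself used Bayesian model evidence, and the group sizes here merely echo the reported 6/11 split for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# stand-in per-subject residual summaries with two latent subgroups
rng = np.random.default_rng(42)
residuals = np.concatenate([rng.normal(-1.0, 0.5, (6, 1)),
                            rng.normal(1.5, 0.5, (11, 1))])

for k in (1, 2):
    gm = GaussianMixture(n_components=k, n_init=10, random_state=0)
    gm.fit(residuals)
    print(f"{k}-component GMM: BIC = {gm.bic(residuals):.1f}")
# the lower BIC points to the supported structure (here, two groups)
```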
Spatio-temporal Bayesian model selection for disease mapping
Carroll, R; Lawson, AB; Faes, C; Kirby, RS; Aregay, M; Watjou, K
2016-01-01
Spatio-temporal analysis of small area health data often involves choosing a fixed set of predictors prior to the final model fit. In this paper, we propose a spatio-temporal approach of Bayesian model selection to implement model selection for certain areas of the study region as well as certain years in the study time line. Here, we examine the usefulness of this approach by way of a large-scale simulation study accompanied by a case study. Our results suggest that a special case of the model selection methods, a mixture model allowing a weight parameter to indicate if the appropriate linear predictor is spatial, spatio-temporal, or a mixture of the two, offers the best option to fitting these spatio-temporal models. In addition, the case study illustrates the effectiveness of this mixture model within the model selection setting by easily accommodating lifestyle, socio-economic, and physical environmental variables to select a predominantly spatio-temporal linear predictor. PMID:28070156
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Pasquini, Benedetta; Cooley, Scott K.
In recent years, multivariate optimization has played an increasing role in analytical method development. ICH guidelines recommend using statistical design of experiments to identify the design space, in which multivariate combinations of composition variables and process variables have been demonstrated to provide quality results. Considering a microemulsion electrokinetic chromatography (MEEKC) method, the performance of the electrophoretic run depends on the proportions of mixture components (MCs) of the microemulsion and on the values of process variables (PVs). In the present work, for the first time in the literature, a mixture-process variable (MPV) approach was applied to optimize a MEEKC method for the analysis of coenzyme Q10 (Q10), ascorbic acid (AA), and folic acid (FA) contained in nutraceuticals. The MCs (buffer, surfactant-cosurfactant, oil) and the PVs (voltage, buffer concentration, buffer pH) were simultaneously changed according to a MPV experimental design. A 62-run MPV design was generated using the I-optimality criterion, assuming a 46-term MPV model allowing for special-cubic blending of the MCs, quadratic effects of the PVs, and some MC-PV interactions. The obtained data were used to develop MPV models that express the performance of an electrophoretic run (measured as peak efficiencies of Q10, AA, and FA) in terms of the MCs and PVs. Contour and perturbation plots were drawn for each of the responses. Finally, the MPV models and criteria for the peak efficiencies were used to develop the design space and an optimal subregion (i.e., the settings of the MCs and PVs that satisfy the respective criteria), as well as a unique optimal combination of MCs and PVs.
Cowell, Robert G
2018-05-04
Current models for single-source and mixture samples, and the probabilistic genotyping software based on them for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms.
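The numerical trick is generic: evaluate the PGF at the N-th roots of unity and recover the probabilities with an inverse FFT. The sketch below applies it to a simple branching model of PCR in which each molecule is copied with some per-cycle efficiency; this toy model and its parameters are assumptions for illustration, not the paper's collection-and-amplification model.

```python
import numpy as np

def amplicon_pmf(n0, cycles, p_amp, N=1 << 12):
    """PMF of amplicon counts after PCR via the probability generating
    function evaluated at the N-th roots of unity, inverted by FFT.

    Toy branching model: each molecule is copied with probability p_amp
    per cycle, so the per-molecule, per-cycle PGF is
    f(s) = s * ((1 - p_amp) + p_amp * s), composed over cycles and
    raised to the power n0 for n0 starting molecules."""
    s = np.exp(-2j * np.pi * np.arange(N) / N)
    g = s.copy()
    for _ in range(cycles):
        g = g * ((1 - p_amp) + p_amp * g)
    G = g ** n0
    pmf = np.fft.ifft(G).real       # PGF coefficients = probabilities
    return np.clip(pmf, 0.0, None)

pmf = amplicon_pmf(n0=5, cycles=8, p_amp=0.85)
k = np.arange(len(pmf))
print("mean amplicon count:", (k * pmf).sum())   # ~= 5 * 1.85**8
```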
Influence of apple pomace inclusion on the process of animal feed pelleting.
Maslovarić, Marijana D; Vukmirović, Đuro; Pezo, Lato; Čolović, Radmilo; Jovanović, Rade; Spasevski, Nedeljka; Tolimir, Nataša
2017-08-01
Apple pomace (AP) is the main by-product of apple juice production. Large amounts of this material disposed into landfills can cause serious environmental problems. One of the solutions is to utilise AP as animal feed. The aim of this study was to investigate the impact of dried AP inclusion into model mixtures made from conventional feedstuffs on pellet quality and pellet press performance. Three model mixtures, with different ratios of maize, sunflower meal and AP, were pelleted. Response surface methodology (RSM) was applied when designing the experiment. The simultaneous and interactive effects of apple pomace share (APS) in the mixtures, die thickness (DT) of the pellet press and initial moisture content of the mixtures (M), on pellet quality and production parameters were investigated. Principal component analysis (PCA) and standard score (SS) analysis were applied for comprehensive analysis of the experimental data. The increase in APS led to an improvement of pellet quality parameters: pellet durability index (PDI), hardness (H) and proportion of fines in pellets. The increase in DT and M resulted in pellet quality improvement. The increase in DT and APS resulted in higher energy consumption of the pellet press. APS was the most influential variable for PDI and H calculation, while APS and DT were the most influential variables in the calculation of pellet press energy consumption. PCA showed that the first two principal components could be considered sufficient for data representation. In conclusion, addition of dried AP to feed model mixtures significantly improved the quality of the pellets.
Origin and Function of Tuning Diversity in Macaque Visual Cortex
Goris, Robbe L.T.; Simoncelli, Eero P.; Movshon, J. Anthony
2016-01-01
Neurons in visual cortex vary in their orientation selectivity. We measured responses of V1 and V2 cells to orientation mixtures and fit them with a model whose stimulus selectivity arises from the combined effects of filtering, suppression, and response nonlinearity. The model explains the diversity of orientation selectivity with neuron-to-neuron variability in all three mechanisms, of which variability in the orientation bandwidth of linear filtering is the most important. The model also accounts for the cells' diversity of spatial frequency selectivity. Tuning diversity is matched to the needs of visual encoding. The orientation content found in natural scenes is diverse, and neurons with different selectivities are adapted to different stimulus configurations. Single orientations are better encoded by highly selective neurons, while orientation mixtures are better encoded by less selective neurons. A diverse population of neurons therefore provides better overall discrimination capabilities for natural images than any homogeneous population. PMID:26549331
Prediction of pesticide toxicity in Midwest streams
Shoda, Megan E.; Stone, Wesley W.; Nowell, Lisa H.
2016-01-01
The occurrence of pesticide mixtures is common in stream waters of the United States, and the impact of multiple compounds on aquatic organisms is not well understood. Watershed Regressions for Pesticides (WARP) models were developed to predict Pesticide Toxicity Index (PTI) values in unmonitored streams in the Midwest and are referred to as WARP-PTI models. The PTI is a tool for assessing the relative toxicity of pesticide mixtures to fish, benthic invertebrates, and cladocera in stream water. One hundred stream sites in the Midwest were sampled weekly in May through August 2013, and the highest calculated PTI for each site was used as the WARP-PTI model response variable. Watershed characteristics that represent pesticide sources and transport were used as the WARP-PTI model explanatory variables. Three WARP-PTI models—fish, benthic invertebrates, and cladocera—were developed that include watershed characteristics describing toxicity-weighted agricultural use intensity, land use, agricultural management practices, soil properties, precipitation, and hydrologic properties. The models explained between 41 and 48% of the variability in the measured PTI values. WARP-PTI model evaluation with independent data showed reasonable performance with no clear bias. The models were applied to streams in the Midwest to demonstrate extrapolation for a regional assessment to indicate vulnerable streams and to guide more intensive monitoring.
Bromaghin, Jeffrey F.; Evenson, D.F.; McLain, T.H.; Flannery, B.G.
2011-01-01
Fecundity is a vital population characteristic that is directly linked to the productivity of fish populations. Historic data from Yukon River (Alaska) Chinook salmon Oncorhynchus tshawytscha suggest that length‐adjusted fecundity differs among populations within the drainage and either is temporally variable or has declined. Yukon River Chinook salmon have been harvested in large‐mesh gill‐net fisheries for decades, and a decline in fecundity was considered a potential evolutionary response to size‐selective exploitation. The implications for fishery conservation and management led us to further investigate the fecundity of Yukon River Chinook salmon populations. Matched observations of fecundity, length, and genotype were collected from a sample of adult females captured from the multipopulation spawning migration near the mouth of the Yukon River in 2008. These data were modeled by using a new mixture model, which was developed by extending the conditional maximum likelihood mixture model that is commonly used to estimate the composition of multipopulation mixtures based on genetic data. The new model facilitates maximum likelihood estimation of stock‐specific fecundity parameters without first using individual assignment to a putative population of origin, thus avoiding potential biases caused by assignment error. The hypothesis that fecundity of Chinook salmon has declined was not supported; this result implies that fecundity exhibits high interannual variability. However, length‐adjusted fecundity estimates decreased as migratory distance increased, and fecundity was more strongly dependent on fish size for populations spawning in the middle and upper portions of the drainage. These findings provide insights into potential constraints on reproductive investment imposed by long migrations and warrant consideration in fisheries management and conservation. The new mixture model extends the utility of genetic markers to new applications and can be easily adapted to study any observable trait or condition that may vary among populations.
Lagrange thermodynamic potential and intrinsic variables for He-3 He-4 dilute solutions
NASA Technical Reports Server (NTRS)
Jackson, H. W.
1983-01-01
For a two-fluid model of dilute solutions of He-3 in liquid He-4, a thermodynamic potential is constructed that provides a Lagrangian for deriving equations of motion by a variational procedure. This Lagrangian is defined for uniform velocity fields as a (negative) Legendre transform of total internal energy, and its primary independent variables, together with their thermodynamic conjugates, are identified. Here, similarities between relations in classical physics and quantum statistical mechanics serve as a guide for developing an alternate expression for this function that reveals its character as the difference between apparent kinetic energy and intrinsic internal energy. When the He-3 concentration in the mixtures tends to zero, this expression reduces to Zilsel's formula for the Lagrangian for pure liquid He-4. An investigation of properties of the intrinsic internal energy leads to the introduction of intrinsic chemical potentials along with other intrinsic variables for the mixtures. Explicit formulas for these variables are derived for a noninteracting elementary excitation model of the fluid. Using these formulas and others also derived from quantum statistical mechanics, another equivalent expression for the Lagrangian is generated.
Numerical investigation of a helicopter combustion chamber using LES and tabulated chemistry
NASA Astrophysics Data System (ADS)
Auzillon, Pierre; Riber, Eléonore; Gicquel, Laurent Y. M.; Gicquel, Olivier; Darabiha, Nasser; Veynante, Denis; Fiorina, Benoît
2013-01-01
This article presents Large Eddy Simulations (LES) of a realistic aeronautical combustor device: the chamber CTA1 designed by TURBOMECA. Under nominal operating conditions, experiments show hot spots on the combustor walls in the vicinity of the injectors. These high temperature regions disappear when the fuel stream equivalence ratio is modified. In order to account for detailed chemistry effects within LES, the numerical simulation uses the recently developed turbulent combustion model F-TACLES (Filtered TAbulated Chemistry for LES). The principle of this model is first to generate a lookup table in which thermochemical variables are computed from a set of filtered laminar unstrained premixed flamelets. To model the interactions between the flame and the turbulence at the subgrid scale, a flame wrinkling analytical model is introduced and the Filtered Density Function (FDF) of the mixture fraction is modeled by a β function. Filtered thermochemical quantities are stored as a function of three coordinates: the filtered progress variable, the filtered mixture fraction and the mixture fraction subgrid scale variance. The chemical lookup table is then coupled with the LES using a mathematical formalism that ensures an accurate prediction of the flame dynamics. The numerical simulation of the CTA1 chamber with the F-TACLES turbulent combustion model reproduces the temperature fields observed in experiments fairly well. In particular, the influence of the fuel stream equivalence ratio on the flame position is well captured.
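The presumed-β step can be made concrete: given the resolved mixture-fraction mean and subgrid variance, the β shape parameters follow from moment matching, and any flamelet quantity is filtered by integrating against that PDF. A sketch with an invented Gaussian-bump "temperature" profile standing in for a real flamelet table:

```python
import numpy as np
from scipy import stats, integrate

def beta_filtered(phi, z_mean, z_var):
    """Filter a flamelet quantity phi(Z) with a presumed beta FDF whose
    shape parameters match the resolved mean and subgrid variance
    (requires z_var < z_mean * (1 - z_mean))."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0
    a, b = z_mean * gamma, (1.0 - z_mean) * gamma
    pdf = stats.beta(a, b).pdf
    val, _ = integrate.quad(lambda z: phi(z) * pdf(z), 0.0, 1.0)
    return val

# invented flamelet profile: temperature peaking near Z = 0.3
phi = lambda z: 300.0 + 1800.0 * np.exp(-((z - 0.3) / 0.1) ** 2)
print(phi(0.3))                                     # unfiltered peak value
print(beta_filtered(phi, z_mean=0.3, z_var=0.005))  # subgrid mixing lowers it
```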
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Saumyadip; Abraham, John
2012-07-01
The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds-averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and conserved scalars is evaluated. For unimodal distributions, it is observed that functions that use two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher-moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
Ab Initio Studies of Shock-Induced Chemical Reactions of Inter-Metallics
NASA Astrophysics Data System (ADS)
Zaharieva, Roussislava; Hanagud, Sathya
2009-06-01
Shock-induced and shock-assisted chemical reactions of intermetallic mixtures are studied by many researchers, using both experimental and theoretical techniques. The theoretical studies are primarily at continuum scales. The model frameworks include mixture theories and meso-scale models of grains of porous mixtures. The reaction models vary from equilibrium thermodynamic models to several non-equilibrium thermodynamic models. The shock effects are primarily studied using appropriate conservation equations and numerical techniques to integrate the equations. All these models require material constants from experiments and estimates of transition states. Thus, the objective of this paper is to present studies based on ab initio techniques. The ab initio studies, to date, use ab initio molecular dynamics. This paper presents a study that uses shock pressures and associated temperatures as starting variables. Intermetallic mixtures are then modeled as slabs. The required shock stresses are created by straining the lattice. Then, ab initio binding energy calculations are used to examine the stability of the reactions. Binding energies are obtained for different strain components superimposed on uniform compression and finite temperatures. Then, vibrational frequencies and nudged elastic band techniques are used to study reactivity and transition states. Examples include Ni and Al.
Pressure and Chemical Potential: Effects Hydrophilic Soils Have on Adsorption and Transport
NASA Astrophysics Data System (ADS)
Bennethum, L. S.; Weinstein, T.
2003-12-01
Using the assumption that the thermodynamic properties of a fluid are affected by its proximity to the solid phase, a theoretical model has been developed based on upscaling and fundamental thermodynamic principles (termed Hybrid Mixture Theory). The theory indicates that Darcy's law and the Darcy-scale chemical potential (which determines the rate of adsorption and diffusion) need to be modified in order to apply to hydrophilic soils. In this talk we examine the Darcy-scale definition of pressure and chemical potential, especially as it applies to hydrophilic soils. To arrive at our model, we used hybrid mixture theory, first pioneered by Hassanizadeh and Gray in 1979. The technique involves averaging the field equations (i.e. conservation of mass, momentum balance, energy balance, etc.) to obtain macroscopic field equations, where each field variable is defined precisely in terms of its microscale counterpart. To close the system consistently with classical thermodynamics, the entropy inequality is exploited in the sense of Coleman and Noll. With the exceptions that the macroscale field variables are defined precisely in terms of their microscale counterparts and that microscopic interfacial equations can also be treated in a similar manner, the resulting system of equations is consistent with those derived using classical mixture theory. Hence the terminology, Hybrid Mixture Theory.
One-dimensional pore pressure diffusion of different grain-fluid mixtures
NASA Astrophysics Data System (ADS)
von der Thannen, Magdalena; Kaitna, Roland
2015-04-01
During the release and flow of fully saturated debris, non-hydrostatic fluid pressure can build up and dissipate during the event. This excess fluid pressure has a strong influence on the flow and deposition behaviour of debris flows. Therefore, we investigate the influence of mixture composition on the dissipation of non-hydrostatic fluid pressures. For this we use a cylindrical pipe of acrylic glass with pore water pressure sensors installed at different heights and measure the evolution of the pore water pressure over time. Several mixtures with variable content of fine sediment (silt and clay) and variable content of coarse sediment (with fixed relative fractions of grains between 2 and 32 mm) are tested. For the fines, two types of clay (smectite and kaolinite) and loam (Stoober Lehm) are used. The analysis is based on the one-dimensional consolidation theory, which uses a diffusion coefficient D to model the decay of excess fluid pressure over time. Starting from artificially induced super-hydrostatic fluid pressures, we find dissipation coefficients ranging from 10⁻⁵ m²/s for liquid mixtures to 10⁻⁸ m²/s for viscous mixtures. The results for kaolinite and smectite are quite similar. For our limited number of mixtures, the effect of fines content is more pronounced than the effect of different amounts of coarse particles.
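The one-dimensional consolidation picture can be reproduced with a few lines of explicit finite differences; the sketch below relaxes an initial excess-pressure profile with a drained top and a sealed base. The geometry, boundary choices, and initial pressure are assumptions for illustration, with D taken from the liquid end of the measured range.

```python
import numpy as np

def diffuse_excess_pressure(u0, D, dz, dt, steps):
    """Explicit finite-difference solution of du/dt = D * d2u/dz2 for
    excess pore pressure u(z, t): drained (u = 0) top boundary,
    no-flux base. Stable for D * dt / dz**2 <= 0.5."""
    r = D * dt / dz ** 2
    assert r <= 0.5, "reduce dt for stability"
    u = u0.copy()
    for _ in range(steps):
        u[0] = 0.0                 # drained top (hydrostatic reference)
        u[-1] = u[-2]              # impermeable base
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

z = np.linspace(0.0, 0.5, 51)                   # 0.5 m column, dz = 0.01 m
u0 = np.full_like(z, 1000.0)                    # initial excess pressure, Pa
u = diffuse_excess_pressure(u0, D=1e-5, dz=0.01, dt=2.0, steps=500)
print(u[::10])                                  # decay toward hydrostatic
```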
Chen, Yun; Yang, Hui
2016-01-01
In the era of big data, there is increasing interest in clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges for the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require assumptions about the data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
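A small sketch of the first ingredient, pairwise mutual information as a nonlinear dependence measure: histogram-based MI flags a quadratic relationship that linear correlation misses. The binning scheme is a simplified stand-in, and the downstream DP clustering is omitted.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mi_matrix(X, bins=10):
    """Pairwise mutual information between columns of X, estimated by
    discretizing each variable into equal-frequency bins."""
    n, p = X.shape
    cuts = [np.quantile(X[:, j], np.linspace(0, 1, bins + 1)[1:-1])
            for j in range(p)]
    q = np.column_stack([np.digitize(X[:, j], cuts[j]) for j in range(p)])
    M = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            M[i, j] = mutual_info_score(q[:, i], q[:, j])
    return M

# a nonlinear dependence that linear correlation misses
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
X = np.column_stack([x, x ** 2 + 0.05 * rng.normal(size=2000),
                     rng.normal(size=2000)])
print(np.round(mi_matrix(X), 2))   # high MI(x, x^2) despite near-zero correlation
```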
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-09-03
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two kinds of variables are estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, enabling accurate time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms.
NASA Astrophysics Data System (ADS)
Naguib, Ibrahim A.; Darwish, Hany W.
2012-02-01
A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, outlining the underlying algorithm of each and indicating their inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (genetic algorithm, GA). To project the comparison in a sensible way, the methods are used for the stability-indicating quantitative analysis of mixtures of mebeverine hydrochloride and sulpiride in binary mixtures as a case study, in the presence of their reported impurities and degradation products (summing up to 6 components), in raw materials and pharmaceutical dosage form, via handling the UV spectral data. For proper analysis, a 6-factor 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR (without GA) and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results illustrate the problem of nonlinearity and how models like SVR and ANN can handle it. The methods indicate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the 6-component mixtures, yet using cheap and easy-to-handle instruments like the UV spectrophotometer.
Modeling Working Memory Tasks on the Item Level
ERIC Educational Resources Information Center
Luo, Dasen; Chen, Guopeng; Zen, Fanlin; Murray, Bronwyn
2010-01-01
Item responses to Digit Span and Letter-Number Sequencing were analyzed to develop a better-refined model of the two working memory tasks using the finite mixture (FM) modeling method. Models with ordinal latent traits were found to better account for the independent sources of the variability in the tasks than those with continuous traits, and…
CLUSTERING SOUTH AFRICAN HOUSEHOLDS BASED ON THEIR ASSET STATUS USING LATENT VARIABLE MODELS
McParland, Damien; Gormley, Isobel Claire; McCormick, Tyler H.; Clark, Samuel J.; Kabudula, Chodziwadziwa Whiteson; Collinson, Mark A.
2014-01-01
The Agincourt Health and Demographic Surveillance System has since 2001 conducted a biannual household asset survey in order to quantify household socio-economic status (SES) in a rural population living in northeast South Africa. The survey contains binary, ordinal and nominal items. In the absence of income or expenditure data, the SES landscape in the study population is explored and described by clustering the households into homogeneous groups based on their asset status. A model-based approach to clustering the Agincourt households, based on latent variable models, is proposed. In the case of modeling binary or ordinal items, item response theory models are employed. For nominal survey items, a factor analysis model, similar in nature to a multinomial probit model, is used. Both model types have an underlying latent variable structure—this similarity is exploited and the models are combined to produce a hybrid model capable of handling mixed data types. Further, a mixture of the hybrid models is considered to provide clustering capabilities within the context of mixed binary, ordinal and nominal response data. The proposed model is termed a mixture of factor analyzers for mixed data (MFA-MD). The MFA-MD model is applied to the survey data to cluster the Agincourt households into homogeneous groups. The model is estimated within the Bayesian paradigm, using a Markov chain Monte Carlo algorithm. Intuitive groupings result, providing insight to the different socio-economic strata within the Agincourt region. PMID:25485026
Variability-aware compact modeling and statistical circuit validation on SRAM test array
NASA Astrophysics Data System (ADS)
Qiao, Ying; Spanos, Costas J.
2016-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated in a collaborating foundry's 28nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and simulation results on SRAM writability performance closely match the measured distributions. The proposed statistical compact model parameter extraction methodology also has the potential of predicting non-Gaussian behavior in statistical circuit performances through mixtures of Gaussian distributions.
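The closing remark, approximating non-Gaussian statistical behavior with a mixture of Gaussians, can be sketched as follows; the skewed threshold-voltage sample is simulated, not foundry data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Assumed bimodal/skewed threshold-voltage sample (V): bulk plus a tail.
vt = np.concatenate([rng.normal(0.45, 0.02, 900),
                     rng.normal(0.52, 0.04, 100)]).reshape(-1, 1)

# Choose the number of Gaussian components by BIC.
best = min((GaussianMixture(k, random_state=0).fit(vt) for k in (1, 2, 3)),
           key=lambda m: m.bic(vt))
print("components chosen by BIC:", best.n_components)
```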
Dudásová, Dorota; Rune Flåten, Geir; Sjöblom, Johan; Øye, Gisle
2009-09-15
The transmission profiles of one- to three-component particle suspension mixtures were analyzed by multivariate methods such as principal component analysis (PCA) and partial least-squares regression (PLS). The particles mimic the solids present in oil-field-produced water. Kaolin and silica represent solids of reservoir origin, FeS is a product of bacterial metabolic activity, and Fe3O4 a corrosion product (e.g., from pipelines). All particles were coated with crude oil surface-active components to imitate particles in real systems. The effects of different variables (concentration, temperature, and coating) on suspension stability were studied with a Turbiscan Lab Expert. The transmission profiles over 75 min represent the overall water quality, while the transmission during the first 15.5 min gives information on suspension behavior during a representative hold time in the separator. The behavior of the mixed particle suspensions was compared to that of the single particle suspensions, and models describing the systems were built. The findings are summarized as follows: silica seems to dominate the mixture properties in the binary suspensions toward enhanced separation. Over 75 min, temperature and concentration are the most significant variables, while over 15.5 min, concentration is the only significant variable. Models for prediction of transmission spectra from run parameters, as well as of particle type from transmission profiles (inverse calibration), give a reasonable description of the relationships. In ternary particle mixtures, silica is not dominant, and over 75 min the significant variables for the mixture (temperature and coating) are more similar to those for single kaolin and FeS/Fe3O4 suspensions. On the other hand, over 15.5 min the coating is the most significant variable, similar to that for silica (at 15.5 min). The model for prediction of transmission spectra from run parameters gives good estimates of the transmission profiles. Although the model for prediction of particle type from transmission parameters is able to predict some particles, further improvement is required before all particles are consistently correctly classified. Cross-validation was done for both models and estimation errors are reported.
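A small sketch of the chemometric workflow under stated assumptions (synthetic transmission profiles, coded design factors), showing PCA summarization of the profiles and a PLS model from run parameters to profiles.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
# Coded design: concentration, temperature, coating for 20 runs (assumed).
Xdesign = rng.uniform(-1, 1, size=(20, 3))
t = np.linspace(0, 75, 50)                          # minutes
# Transmission drifts with concentration; temperature modulates the rate.
profiles = (60 + 10 * Xdesign[:, [0]] * (t / 75)
            + 3 * Xdesign[:, [1]] * np.sqrt(t / 75)
            + rng.normal(0, 0.5, size=(20, t.size)))

scores = PCA(n_components=2).fit_transform(profiles)
pls = PLSRegression(n_components=2).fit(Xdesign, profiles)
print("PC1 spread:", round(float(np.ptp(scores[:, 0])), 2))
print("PLS R^2:", round(pls.score(Xdesign, profiles), 3))
```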
ERIC Educational Resources Information Center
Lee, HwaYoung; Beretvas, S. Natasha
2014-01-01
Conventional differential item functioning (DIF) detection methods (e.g., the Mantel-Haenszel test) can be used to detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is not typically fully explained by an observed variable. True sources of DIF may include unobserved, latent variables, such as…
Origin and Function of Tuning Diversity in Macaque Visual Cortex.
Goris, Robbe L T; Simoncelli, Eero P; Movshon, J Anthony
2015-11-18
Neurons in visual cortex vary in their orientation selectivity. We measured responses of V1 and V2 cells to orientation mixtures and fit them with a model whose stimulus selectivity arises from the combined effects of filtering, suppression, and response nonlinearity. The model explains the diversity of orientation selectivity with neuron-to-neuron variability in all three mechanisms, of which variability in the orientation bandwidth of linear filtering is the most important. The model also accounts for the cells' diversity of spatial frequency selectivity. Tuning diversity is matched to the needs of visual encoding. The orientation content found in natural scenes is diverse, and neurons with different selectivities are adapted to different stimulus configurations. Single orientations are better encoded by highly selective neurons, while orientation mixtures are better encoded by less selective neurons. A diverse population of neurons therefore provides better overall discrimination capabilities for natural images than any homogeneous population. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gudmundsson, E.; Ehlmann, B. L.; Mustard, J. F.; Hiroi, T.; Poulet, F.
2012-12-01
Two radiative transfer theories, the Hapke and Shkuratov models, have been used to estimate the mineralogic composition of laboratory mixtures of anhydrous mafic minerals from reflected near-infrared light, accurately modeling abundances to within 10%. For this project, we tested the efficacy of the Hapke model for determining the composition of mixtures (weight fraction, particle diameter) containing hydrous minerals, including phyllosilicates. Modal mineral abundances for some binary mixtures were modeled to within +/-10% of actual values, but other mixtures showed higher inaccuracies (up to 25%). Consequently, a sensitivity analysis of selected input and model parameters was performed. We first examined the shape of the model's error function (RMS error between modeled and measured spectra) over a large range of endmember weight fractions and particle diameters and found that there was a single global minimum for each mixture (rather than multiple local minima). The minimum was sensitive to modeled particle diameter but comparatively insensitive to modeled endmember weight fraction. Derivation of the endmembers' k optical-constant spectra using the Hapke model showed differences from the Shkuratov-derived optical constants originally used. Model runs with different sets of optical constants suggest that slight differences in the optical constants used significantly affect the accuracy of model predictions. Even for mixtures where abundance was modeled correctly, particle diameter agreed inconsistently with sieved particle sizes and varied greatly among individual mixtures within a suite. Particle diameter was highly sensitive to the optical constants, possibly indicating that changes in modeled path length (proportional to particle diameter) compensate for changes in the k optical constant. Alternatively, it may not be appropriate to model path length and particle diameter with the same proportionality for all materials. Across mixtures, RMS error increased in proportion to the fraction of the darker endmember. Analyses are ongoing, and further studies will investigate the effects of sample hydration, permitted variability in particle size, assumed photometric functions, and the use of different wavelength ranges on model results. Such studies will advance understanding of how best to apply radiative transfer modeling to geologically complex planetary surfaces. Corresponding authors: eyjolfur88@gmail.com, ehlmann@caltech.edu
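The error-surface scan can be illustrated with a deliberately simplified stand-in: a two-endmember linear (areal) mixing model in place of the nonlinear Hapke model, with assumed endmember spectra. Only the idea of gridding RMS error and locating a global minimum carries over.

```python
import numpy as np

rng = np.random.default_rng(5)
wl = np.linspace(1.0, 2.5, 100)                     # microns
e1 = 0.6 + 0.2 * np.sin(3 * wl)                     # endmember reflectances
e2 = 0.3 + 0.1 * np.cos(2 * wl)                     # (assumed shapes)
measured = 0.35 * e1 + 0.65 * e2 + 0.005 * rng.normal(size=wl.size)

# Scan RMS error over the weight fraction and find the global minimum.
fractions = np.linspace(0, 1, 201)
rms = [np.sqrt(np.mean((f * e1 + (1 - f) * e2 - measured) ** 2))
       for f in fractions]
print("best-fit fraction:", fractions[int(np.argmin(rms))])
```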
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
Computational Modeling of Seismic Wave Propagation Velocity-Saturation Effects in Porous Rocks
NASA Astrophysics Data System (ADS)
Deeks, J.; Lumley, D. E.
2011-12-01
Compressional and shear velocities of seismic waves propagating in porous rocks vary as a function of the fluid mixture and its distribution in pore space. Although it has been possible to place theoretical upper and lower bounds on the velocity variation with fluid saturation, predicting the actual velocity response of a given rock with fluid type and saturation remains an unsolved problem. In particular, we are interested in predicting the velocity-saturation response to various mixtures of fluids with pressure and temperature, as a function of the spatial distribution of the fluid mixture and the seismic wavelength. This effect is often termed "patchy saturation" in the rock physics community. The ability to accurately predict seismic velocities for various fluid mixtures and spatial distributions in the pore space of a rock is useful for fluid detection, hydrocarbon exploration and recovery, CO2 sequestration, and monitoring of many subsurface fluid-flow processes. We create digital rock models with various fluid mixtures, saturations and spatial distributions. We use finite difference modeling to propagate elastic waves of varying frequency content through these digital rock and fluid models to simulate a given lab or field experiment. The resulting waveforms can be analyzed to determine seismic traveltimes, velocities, amplitudes, attenuation and other wave phenomena for variable rock models of fluid saturation and spatial fluid distribution, and variable wavefield spectral content. We show that we can reproduce most of the published effects of velocity-saturation variation, including validating the Voigt and Reuss theoretical bounds, as well as the Hill "patchy saturation" curve. We also reproduce what has previously been identified as Biot dispersion but in our models is often seen to be wave multi-pathing and broadband spectral effects. Furthermore, we find that in addition to the dominant seismic wavelength and average fluid patch size, the smoothness of the fluid patches is a critical factor in determining the velocity-saturation response; this is a result that we have not seen discussed in the literature. Most importantly, we can reproduce all of these effects using full elastic wavefield scattering, without the need to resort to more complicated squirt-flow or poroelastic models. This is important because the physical properties and parameters needed to model full elastic wave scattering, and to predict a velocity-saturation curve, are often readily available for the projects we undertake; this is not the case for poroelastic or squirt-flow models. We can predict this velocity-saturation curve for a specific rock type, fluid mixture distribution and wavefield spectrum.
Vera, José Fernando; de Rooij, Mark; Heiser, Willem J
2014-11-01
In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented. © 2014 The British Psychological Society.
Mixed and Mixture Regression Models for Continuous Bounded Responses Using the Beta Distribution
ERIC Educational Resources Information Center
Verkuilen, Jay; Smithson, Michael
2012-01-01
Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…
ERIC Educational Resources Information Center
Pek, Jolynn; Chalmers, R. Philip; Kok, Bethany E.; Losardo, Diane
2015-01-01
Structural equation mixture models (SEMMs), when applied as a semiparametric model (SPM), can adequately recover potentially nonlinear latent relationships without their specification. This SPM is useful for exploratory analysis when the form of the latent regression is unknown. The purpose of this article is to help users familiar with structural…
The Cusp Catastrophe Model as Cross-Sectional and Longitudinal Mixture Structural Equation Models
Chow, Sy-Miin; Witkiewitz, Katie; Grasman, Raoul P. P. P.; Maisto, Stephen A.
2015-01-01
Catastrophe theory (Thom, 1972, 1993) is the study of the many ways in which continuous changes in a system’s parameters can result in discontinuous changes in one or several outcome variables of interest. Catastrophe theory–inspired models have been used to represent a variety of change phenomena in the realm of social and behavioral sciences. Despite their promise, widespread applications of catastrophe models have been impeded, in part, by difficulties in performing model fitting and model comparison procedures. We propose a new modeling framework for testing one kind of catastrophe model — the cusp catastrophe model — as a mixture structural equation model (MSEM) when cross-sectional data are available; or alternatively, as an MSEM with regime-switching (MSEM-RS) when longitudinal panel data are available. The proposed models and the advantages offered by this alternative modeling framework are illustrated using two empirical examples and a simulation study. PMID:25822209
Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb
2014-10-01
Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke, mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models with expected tariff scores calculated mathematically. Linear regression was reported for comparison purposes. Impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approaches improve fit over the entire range of EQ-5D. Mean average error is 10% and 5% lower compared with the linear model, respectively. Root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These lead to differences in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model. This substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range. © The Author(s) 2013.
p-adic stochastic hidden variable model
NASA Astrophysics Data System (ADS)
Khrennikov, Andrew
1998-03-01
We propose a stochastic hidden variable model in which the hidden variables have a p-adic probability distribution ρ(λ), while the conditional probability distributions P(U,λ), U=A,A',B,B', are ordinary probabilities defined on the basis of the Kolmogorov measure-theoretic axiomatics. The frequency definition of p-adic probability is quite similar to the ordinary frequency definition of probability: p-adic frequency probability is defined as the limit of relative frequencies ν_n, but in the p-adic metric. We study a model with p-adic stochastics at the level of the hidden-variable description; responses of macroapparatuses, of course, have to be described by ordinary stochastics. Thus our model describes a mixture of p-adic stochastics of the microworld and ordinary stochastics of macroapparatuses. In this model the probabilities for physical observables are ordinary probabilities. At the same time, Bell's inequality is violated.
Bayesian Variable Selection for Hierarchical Gene-Environment and Gene-Gene Interactions
Liu, Changlu; Ma, Jianzhong; Amos, Christopher I.
2014-01-01
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene by gene interactions, and gene by environment interactions in the same model. Our approach incorporates the natural hierarchical structure between the main effects and interaction effects into a mixture model, such that our methods tend to remove irrelevant interaction effects more effectively, resulting in more robust and parsimonious models. We consider both strong and weak hierarchical models. For a strong hierarchical model, both main effects of the interacting factors must be present for the interaction to be considered in model development, while for a weak hierarchical model, only one of the two main effects is required for the interaction to be evaluated. Our simulation results show that the proposed strong and weak hierarchical mixture models work well in controlling false positive rates and provide a powerful approach for identifying predisposing effects and interactions in gene-environment interaction studies, outperforming, in most of the simulated scenarios, a naive model that does not impose this hierarchical constraint. We illustrated our approach using data for lung cancer and cutaneous melanoma. PMID:25154630
Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel
2017-05-01
Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
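A compact sketch of the core idea, under the assumption that individuals follow linear trajectories and cluster in their slopes: fit a per-individual slope, then let a Gaussian mixture with BIC selection recover the latent clusters. This is illustrative, not the paper's simulation design.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
t = np.arange(8)                                    # 8 measurement occasions
slopes = np.r_[rng.normal(0.2, 0.05, 60),           # "slow" tactic (assumed)
               rng.normal(1.0, 0.10, 40)]           # "fast" tactic (assumed)
Y = slopes[:, None] * t + rng.normal(0, 0.3, (100, t.size))

# Per-individual OLS slope, then a mixture over slopes with BIC selection.
b = np.polyfit(t, Y.T, 1)[0].reshape(-1, 1)
fits = {k: GaussianMixture(k, random_state=0).fit(b) for k in (1, 2, 3)}
k_best = min(fits, key=lambda k: fits[k].bic(b))
print("clusters selected:", k_best)
```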
Two-Part and Related Regression Models for Longitudinal Data
Farewell, V.T.; Long, D.L.; Tom, B.D.M.; Yiu, S.; Su, L.
2017-01-01
Statistical models that involve a two-part mixture distribution are applicable in a variety of situations. Frequently, the two parts are a model for the binary response variable and a model for the outcome variable that is conditioned on the binary response. Two common examples are zero-inflated or hurdle models for count data and two-part models for semicontinuous data. Recently, there has been particular interest in the use of these models for the analysis of repeated measures of an outcome variable over time. The aim of this review is to consider motivations for the use of such models in this context and to highlight the central issues that arise with their use. We examine two-part models for semicontinuous and zero-heavy count data, and we also consider models for count data with a two-part random effects distribution. PMID:28890906
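A minimal two-part model sketch for semicontinuous data, assuming a logistic occurrence part and a log-normal intensity part; the covariate and coefficients are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
x = rng.normal(size=n)
p_pos = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))          # part 1: P(y > 0)
y = np.where(rng.uniform(size=n) < p_pos,
             np.exp(1.0 + 0.5 * x + rng.normal(0, 0.4, n)), 0.0)

X = sm.add_constant(x)
part1 = sm.Logit((y > 0).astype(int), X).fit(disp=0)   # occurrence model
pos = y > 0
part2 = sm.OLS(np.log(y[pos]), X[pos]).fit()           # intensity model
# E[y | x] combines both parts; the log-normal mean includes sigma^2 / 2.
mu = part2.predict(X) + 0.5 * part2.scale
print("mean E[y]:", float(np.mean(part1.predict(X) * np.exp(mu))))
```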
Managing Potato Biodiversity to Cope with Frost Risk in the High Andes: A Modeling Perspective
Condori, Bruno; Hijmans, Robert J.; Ledent, Jean Francois; Quiroz, Roberto
2014-01-01
Austral summer frosts in the Andean highlands are ubiquitous throughout the crop cycle, causing yield losses. In spite of the existing warming trend, climate change models forecast high variability, including freezing temperatures. As the potato center of origin, the region has a rich biodiversity which includes a set of frost-resistant genotypes. Four contrasting potato genotypes, representing this genetic variability, were considered in the present study: two species of frost-resistant native potatoes (the bitter Solanum juzepczukii, var. Luki, and the non-bitter Solanum ajanhuiri, var. Ajanhuiri) and two commercial frost-susceptible genotypes (Solanum tuberosum ssp. tuberosum var. Alpha and Solanum tuberosum ssp. andigenum var. Gendarme). The objective of the study was to conduct a comparative growth analysis of the four genotypes and to model their agronomic response under frost events, including an assessment of their performance under contrasting Andean agroecological conditions. Independent subsets of data from four field experiments were used to parameterize, calibrate and validate a potato growth model. The validated model was used to ascertain the importance of biodiversity, represented by the four genotypes tested, as constituents of germplasm mixtures in single plots used by local farmers, a coping strategy in the face of climate variability. Scenarios with a frost routine incorporated in the model were also constructed. Luki and Ajanhuiri were the most frost-resistant varieties, whereas Alpha was the most susceptible. Luki and Ajanhuiri, as monocultures, outperformed the yield obtained with the mixtures under severe frosts. These results highlight the role played by local frost-tolerant varieties and the importance of management (e.g. clean seed, strategic watering) in attaining the yields reported in our experiments. Mixtures of local and introduced potatoes can thus not only provide the products demanded by the markets but also reduce the impact of frosts and thus the vulnerability of the system to abiotic stressors. PMID:24497912
Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.
Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong
2018-03-01
Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies have observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research comparing different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity based on different temporal treatments: (I) linear time trend; (II) quadratic time trend; (III) autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility than the traditional linear space-time interaction. The mixture component accommodates global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation, and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was evidently more complex due to the information borrowed from neighboring years, but the added parameters yielded a significant advantage in posterior deviance, which benefited the overall fit to the crash data. Base models were also developed to compare the proposed mixture against the traditional space-time component for each temporal model. The mixture models consistently outperformed the corresponding Base models owing to much lower deviance. For the cross-validation comparison of predictive accuracy, the linear time trend model was judged the best, as it recorded the highest log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data as for model development. Under each criterion, observed crash counts were compared with three types of data containing Bayesian estimated, normal predicted, and model replicated ones. The linear model again performed best in most scenarios, except one case using model replicated data and two cases involving prediction without random effects. These phenomena indicate the mediocre performance of the linear trend when random effects are excluded from evaluation, which might be due to the flexible mixture space-time interaction that can efficiently absorb the residual variability escaping the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as the mixture models generated more precise estimated crash counts across all four models, suggesting that the advantages of the mixture component at model fit transfer to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of the random effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an
2013-01-01
Soil water retention parameters are critical to quantifying flow and solute transport in the vadose zone, and the presence of rock fragments markedly increases their variability; a novel method for determining the water retention parameters of soil-gravel mixtures is therefore required. The procedure to generate such a model is based first on determining the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on integrating this relationship with existing analytical equations for water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine the WRCs of soil-gravel mixtures obtained by mixing a clay loam soil with shale clasts or pebbles of three size groups at various gravel contents. The data showed that, for mixtures with the same kind of gravel within one size group, the effective saturation was linearly related to gravel content and related by a power law to the bulk density of the samples at any pressure head. Revised formulas for the water retention properties of the soil-gravel mixtures are proposed to establish water retention curved-surface models of power-linear functions and power functions. Analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable using either the measured data of a separate gravel size group or those of all three gravel size groups spanning a large size range. Furthermore, the regression parameters of the curved surfaces for soil-gravel mixtures with a large range of gravel contents could be determined from the water retention data of mixtures with two representative gravel contents or bulk densities. Such revised water retention models are potentially applicable in regional or large-scale field investigations of significantly heterogeneous media, where various gravel sizes and different gravel contents are present. PMID:23555040
Spatiotemporal multivariate mixture models for Bayesian model selection in disease mapping.
Lawson, A B; Carroll, R; Faes, C; Kirby, R S; Aregay, M; Watjou, K
2017-12-01
It is often the case that researchers wish to simultaneously explore the behavior of and estimate overall risk for multiple, related diseases with varying rarity while accounting for potential spatial and/or temporal correlation. In this paper, we propose a flexible class of multivariate spatio-temporal mixture models to fill this role. Further, these models offer flexibility with the potential for model selection as well as the ability to accommodate lifestyle, socio-economic, and physical environmental variables with spatial, temporal, or both structures. Here, we explore the capability of this approach via a large-scale simulation study and examine a motivating data example involving three cancers in South Carolina. The results, which focus on four model variants, suggest that all models possess the ability to recover the simulation ground truth and display improved model fit over two baseline Knorr-Held spatio-temporal interaction model variants in a real data application.
Protein construct storage: Bayesian variable selection and prediction with mixtures.
Clyde, M A; Parmigiani, G
1998-07-01
Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
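A hedged sketch of prediction under model uncertainty: enumerate submodels over a few hypothetical storage factors, approximate posterior model probabilities with BIC weights, and average the predictions. This illustrates the principle, not the authors' prior specification.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 60
F = rng.uniform(-1, 1, size=(n, 3))     # hypothetical factors: temp, pH, additive
activity = 2.0 - 1.5 * F[:, 0] + 0.8 * F[:, 1] + rng.normal(0, 0.3, n)

x_new = np.array([0.2, -0.1, 0.5])      # candidate storage condition (invented)
bics, preds = [], []
for k in range(1, 4):
    for subset in itertools.combinations(range(3), k):
        X = sm.add_constant(F[:, subset])
        fit = sm.OLS(activity, X).fit()
        bics.append(fit.bic)
        preds.append(fit.predict(np.r_[1.0, x_new[list(subset)]].reshape(1, -1)))
# BIC-based approximate posterior model weights, normalized for stability.
w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()
print("BMA prediction:", float(np.dot(w, np.concatenate(preds))))
```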
N-mix for fish: estimating riverine salmonid habitat selection via N-mixture models
Som, Nicholas A.; Perry, Russell W.; Jones, Edward C.; De Juilio, Kyle; Petros, Paul; Pinnix, William D.; Rupert, Derek L.
2018-01-01
Models that formulate mathematical linkages between fish use and habitat characteristics are applied for many purposes. For riverine fish, these linkages are often cast as resource selection functions with variables including depth and velocity of water and distance to nearest cover. Ecologists are now recognizing the role that detection plays in observing organisms, and failure to account for imperfect detection can lead to spurious inference. Herein, we present a flexible N-mixture model to associate habitat characteristics with the abundance of riverine salmonids while simultaneously estimating detection probability. Our formulation has the added benefits of accounting for demographic variation and can generate probabilistic statements regarding intensity of habitat use. In addition to the conceptual benefits, model application to data from the Trinity River, California, yields interesting results. Detection was estimated to vary among surveyors, but there was little spatial or temporal variation. Additionally, a weaker effect of water depth on resource selection is estimated than that reported by previous studies not accounting for detection probability. N-mixture models show great promise for applications to riverine resource selection.
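The basic N-mixture likelihood can be written down compactly. The sketch below (simulated sites and visits, a truncation point K for the sum over latent abundance) follows the standard Royle-type formulation rather than the authors' extended model.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(9)
n_sites, n_visits, lam_true, p_true = 80, 4, 5.0, 0.4
N = rng.poisson(lam_true, n_sites)                  # latent abundance per site
y = rng.binomial(N[:, None], p_true, (n_sites, n_visits))  # repeated counts

K = 50                                              # truncation for the sum over N
def nll(theta):
    lam = np.exp(theta[0])                          # abundance rate > 0
    p = 1 / (1 + np.exp(-theta[1]))                 # detection in (0, 1)
    Ns = np.arange(K + 1)
    prior = stats.poisson.pmf(Ns, lam)              # P(N = n)
    ll = 0.0
    for i in range(n_sites):
        lik_n = prior * np.prod(
            stats.binom.pmf(y[i][:, None], Ns[None, :], p), axis=0)
        ll += np.log(lik_n.sum())
    return -ll

res = optimize.minimize(nll, x0=[np.log(3.0), 0.0])
print("lambda-hat:", round(float(np.exp(res.x[0])), 2),
      "p-hat:", round(float(1 / (1 + np.exp(-res.x[1]))), 2))
```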
López, Alejandro; Coll, Andrea; Lescano, Maia; Zalazar, Cristina
2017-05-05
In this work, the suitability of the UV/H2O2 process for degrading a commercial herbicide mixture was studied. Glyphosate, the most widely used herbicide in the world, was mixed with other herbicides that have residual activity, such as 2,4-D and atrazine. Modeling of the process response with respect to specific operating conditions, namely the initial pH and the initial H2O2-to-total-organic-carbon molar ratio, was assessed by response surface methodology (RSM). Results showed that a second-order polynomial regression model could describe and predict the system behavior well within the tested experimental region, and it correctly explained the variability in the experimental data. Experimental values were in good agreement with the modeled ones, confirming the significance of the model and highlighting the success of RSM for modeling the UV/H2O2 process. Phytotoxicity evolution throughout the photolytic degradation process was checked through germination tests, indicating that the phytotoxicity of the herbicide mixture was significantly reduced after the treatment. The end point for the treatment at the operating conditions for maximum TOC conversion was also identified.
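An illustrative RSM fit under stated assumptions (synthetic response surface, invented factor ranges): a second-order polynomial in initial pH and the H2O2/TOC molar ratio, the two operating variables named above.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(10)
pH = rng.uniform(3, 9, 30)
ratio = rng.uniform(5, 50, 30)                      # H2O2 : TOC (assumed range)
X = np.column_stack([pH, ratio])
# Assumed response surface with an interior optimum, plus noise.
toc_conv = (80 - 2.0 * (pH - 5.5) ** 2 - 0.03 * (ratio - 30) ** 2
            + rng.normal(0, 1.5, 30))

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, toc_conv)
print("R^2:", round(rsm.score(X, toc_conv), 3))
print("predicted conversion at pH 5.5, ratio 30:",
      round(float(rsm.predict([[5.5, 30.0]])[0]), 1))
```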
Human Language Technology: Opportunities and Challenges
2005-01-01
because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ... to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with
Investigation of Profiles of Risk Factors for Adolescent Psychopathology: A Person-Centered Approach
ERIC Educational Resources Information Center
Parra, Gilbert R.; DuBois, David L.; Sher, Kenneth J.
2006-01-01
Latent variable mixture modeling was used to identify subgroups of adolescents with distinct profiles of risk factors from individual, family, peer, and broader contextual domains. Data were drawn from the National Longitudinal Study of Adolescent Health. Four-class models provided the most theoretically meaningful solutions for both 7th (n = 907;…
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
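A short sketch of the diagnostic step, with simulated gamma-mixed Poisson counts: fit Poisson and negative binomial by maximum likelihood and compare log-likelihoods. The inverse Gaussian mixture discussed above would replace the gamma mixing distribution analogously.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(11)
# Gamma-mixed Poisson counts are negative binomial marginally (assumed data).
counts = rng.poisson(rng.gamma(shape=2.0, scale=2.0, size=500))

lam = counts.mean()                                 # Poisson MLE
ll_pois = stats.poisson.logpmf(counts, lam).sum()

def nb_nll(theta):
    r = np.exp(theta[0])                            # dispersion r > 0
    p = 1 / (1 + np.exp(-theta[1]))                 # 0 < p < 1
    return -stats.nbinom.logpmf(counts, r, p).sum()

res = optimize.minimize(nb_nll, x0=[0.0, 0.0])
print("Poisson logL:", round(ll_pois, 1), " NB logL:", round(-res.fun, 1))
```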
Growth Modeling with Non-Ignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial
Muthén, Bengt; Asparouhov, Tihomir; Hunter, Aimee; Leuchter, Andrew
2011-01-01
This paper uses a general latent variable framework to study a series of models for non-ignorable missingness due to dropout. Non-ignorable missing data modeling acknowledges that missingness may depend on not only covariates and observed outcomes at previous time points as with the standard missing at random (MAR) assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework using the Mplus program. A flexible new model uses an extended pattern-mixture approach where missingness is a function of latent dropout classes in combination with growth mixture modeling using latent trajectory classes. A new selection model allows not only an influence of the outcomes on missingness, but allows this influence to vary across latent trajectory classes. Recommendations are given for choosing models. The missing data models are applied to longitudinal data from STAR*D, the largest antidepressant clinical trial in the U.S. to date. Despite the importance of this trial, STAR*D growth model analyses using non-ignorable missing data techniques have not been explored until now. The STAR*D data are shown to feature distinct trajectory classes, including a low class corresponding to substantial improvement in depression, a minority class with a U-shaped curve corresponding to transient improvement, and a high class corresponding to no improvement. The analyses provide a new way to assess drug efficiency in the presence of dropout. PMID:21381817
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and values of GOF above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain-size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for the source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
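A minimal sketch of the unmixing step under stated assumptions (synthetic tracer signatures, no particle-size correction): estimate source proportions by least squares with non-negativity bounds and a weighted sum-to-one constraint.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(12)
n_tracers, n_sources = 9, 5
S = rng.uniform(10, 100, size=(n_tracers, n_sources))   # source signatures
true_prop = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
mix = S @ true_prop + rng.normal(0, 0.5, n_tracers)     # measured mixture

# Enforce sum-to-one with a heavily weighted extra equation (common trick).
A = np.vstack([S, 1e3 * np.ones(n_sources)])
b = np.append(mix, 1e3)
sol = lsq_linear(A, b, bounds=(0, 1))
print("estimated proportions:", sol.x.round(3))
```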
Effects of additional data on Bayesian clustering.
Yamazaki, Keisuke
2017-10-01
Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of the estimation of the latent variable. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and additional data might be less accurate due to having a higher-dimensional parameter. The present paper presents a theoretical analysis of the accuracy of such a model and clarifies which factor has the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.
PLUME-MoM 1.0: a new 1-D model of volcanic plumes based on the method of moments
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.
2015-05-01
In this paper a new mathematical model for volcanic plumes, named PLUME-MoM, is presented. The model describes the steady-state 1-D dynamics of the plume in a 3-D coordinate system, accounting for continuous variability in the particle distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes, ranging from a few microns up to several centimeters and more. Proper description of this multiparticle nature is crucial for quantifying changes in grain-size distribution along the plume and, therefore, for better characterizing the source conditions of ash dispersal models. The new model is based on the method of moments, which allows description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of properties of the continuous size distribution of the particles. This is achieved by formulating fundamental transport equations for the multiparticle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number as well as of the mass distribution expressed in the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on discretizing the mixture into N discrete phases, shows that the new model obtains the same results at a significantly lower computational cost (particularly when a large number of discrete phases is adopted). Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables investigation of the response of four key output variables (mean and standard deviation (SD) of the grain-size distribution at the top of the plume, plume height, and amount of mass lost by the plume during ascent) to changes in the main input parameters (mean and SD) characterizing the pyroclastic mixture at the base of the plume. Results show that, for the range of parameters investigated, the grain-size distribution at the top of the plume is remarkably similar to that at the base, and that the plume height is only weakly affected by the parameters of the grain distribution.
Mixture toxicity revisited from a toxicogenomic perspective.
Altenburger, Rolf; Scholz, Stefan; Schmitt-Jansen, Mechthild; Busch, Wibke; Escher, Beate I
2012-03-06
The advent of new genomic techniques has raised expectations that central questions of mixture toxicology, such as the mechanisms of low-dose interactions, can now be answered. This review provides an overview of experimental studies from the past decade that address diagnostic and/or mechanistic questions regarding the combined effects of chemical mixtures using toxicogenomic techniques. From 2002 to 2011, 41 studies were published with a focus on mixture toxicity assessment. Primarily, multiplexed quantification of gene transcripts was performed, though metabolomic and proteomic analyses of joint exposures have also been undertaken. It is now standard to explicitly state criteria for selecting concentrations and to provide insight into data transformation and statistical treatment with respect to minimizing sources of undue variability. Bioinformatic analysis of toxicogenomic data, by contrast, is still a field with diverse and rapidly evolving tools. The reported combined-effect assessments are discussed in the light of established toxicological dose-response and mixture toxicity models. Receptor-based assays seem to be the most advanced toward establishing quantitative relationships between exposure and biological responses. Often, transcriptomic responses are discussed based on the presence or absence of signals, where the interpretation may remain ambiguous due to methodological problems. The majority of mixture studies are designed to compare the recorded mixture outcome only against the responses of the individual components. This stands in stark contrast to our existing understanding of joint biological activity at the levels of chemical target interactions and apical combined effects. By joining established mixture effect models with toxicokinetic and -dynamic thinking, we suggest a conceptual framework that may help to overcome the current limitation of providing mainly anecdotal evidence on mixture effects. To achieve this we suggest (i) designing studies to establish quantitative relationships between dose and time dependency of responses, and (ii) adopting mixture toxicity models. Moreover, (iii) utilization of novel bioinformatic tools and (iv) stress response concepts could be productive for translating multiple responses into hypotheses on the relationships between general stress and specific toxicity reactions of organisms.
Bechshøft, T Ø; Sonne, C; Dietz, R; Born, E W; Muir, D C G; Letcher, R J; Novak, M A; Henchey, E; Meyer, J S; Jenssen, B M; Villanger, G D
2012-07-01
The multivariate relationship between hair cortisol, whole-blood thyroid hormones, and the complex mixtures of organohalogen contaminant (OHC) levels measured in subcutaneous adipose of 23 East Greenland polar bears (eight males and 15 females, all sampled between 1999 and 2001) was analyzed using projection to latent structures (PLS) regression modeling. In the resulting PLS model, the most important variables with a negative influence on cortisol levels were BDE-99 in particular, but also CB-180 and -201, BDE-153, and CB-170/190. The most important variables with a positive influence on cortisol were CB-66/95, α-HCH, and TT3, as well as heptachlor epoxide, dieldrin, BDE-47, and p,p'-DDD. Although statistical modeling does not necessarily fully explain biological cause-effect relationships, these relationships indicate that (1) the hypothalamic-pituitary-adrenal (HPA) axis in East Greenland polar bears is likely to be affected by OHC contaminants and (2) the association between OHCs and cortisol may be linked with the hypothalamic-pituitary-thyroid (HPT) axis. Copyright © 2012 Elsevier Inc. All rights reserved.
Accuracy of latent-variable estimation in Bayesian semi-supervised learning.
Yamazaki, Keisuke
2015-09-01
Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in unsupervised learning, and one concern is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of latent-variable estimation. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It was shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.
Riahi, Siavash; Hadiloo, Farshad; Milani, Seyed Mohammad R; Davarkhah, Nazila; Ganjali, Mohammad R; Norouzi, Parviz; Seyfi, Payam
2011-05-01
The prediction accuracies of different chemometric methods were compared when applied to ordinary UV spectra and first-order derivative spectra. Principal component regression (PCR) and partial least squares with one dependent variable (PLS1) and two dependent variables (PLS2) were applied to spectral data of a pharmaceutical formulation containing pseudoephedrine (PDP) and guaifenesin (GFN). The ability of derivative spectra to resolve the overlapping spectra of chlorpheniramine maleate was evaluated when multivariate methods were adopted for the analysis of two-component mixtures without any chemical pretreatment. The chemometric models were tested on an external validation dataset and finally applied to the analysis of pharmaceuticals. Significant advantages were found in the analysis of the real samples when the calibration models from derivative spectra were used. It should also be mentioned that the proposed method is simple and rapid, requires no preliminary separation steps, and can be used easily for the analysis of these compounds, especially in quality-control laboratories. Copyright © 2011 John Wiley & Sons, Ltd.
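A minimal sketch of the workflow described above, using scikit-learn PLS and a Savitzky-Golay first derivative; the synthetic two-band spectra, concentration ranges, and component counts below are hypothetical stand-ins for the PDP/GFN data, not the study's measurements:

```python
# Sketch: PLS calibration on ordinary vs. first-derivative UV spectra.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
wl = np.linspace(220, 320, 201)                      # wavelengths (nm)
pure1 = np.exp(-0.5 * ((wl - 257) / 12) ** 2)        # two overlapping bands
pure2 = np.exp(-0.5 * ((wl - 274) / 15) ** 2)
C = rng.uniform(0.1, 1.0, size=(40, 2))              # concentrations
X = C @ np.vstack([pure1, pure2]) + rng.normal(0, 0.005, (40, len(wl)))

# First-derivative spectra via Savitzky-Golay smoothing/differentiation
Xd = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

for name, data in [("ordinary", X), ("derivative", Xd)]:
    pls = PLSRegression(n_components=3)              # PLS2: both analytes at once
    r2 = cross_val_score(pls, data, C, cv=5).mean()
    print(f"{name:>10s} spectra: mean CV R^2 = {r2:.3f}")
```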
Gas dynamics and mixture formation in swirled flows with precession of air flow
NASA Astrophysics Data System (ADS)
Tretyakov, V. V.; Sviridenkov, A. A.
2017-10-01
The effect of precessing air flow on mixture formation in the wake of the front swirler devices of combustion chambers is considered. Visual observations have shown that the shape of the atomized jet is highly variable in time and shows signs of precessing motion. Experimental data on the distribution of the velocity and concentration fields of the droplet fuel in the working volume of the flame tube of a typical combustion chamber were obtained. The flow calculation method consisted of integrating the complete system of Reynolds equations, written in Euler variables and closed with the two-parameter k-ε turbulence model. Calculation of the concentration fields of droplet and vapor fuel is based on models for the disintegration of fuel jets into droplets, the fragmentation of droplets, and the analysis of the motion and evaporation of individual droplets in the air flow. Comparison of the calculation results with experimental data showed good agreement.
Growth Mixture Modeling of Academic Achievement in Children of Varying Birth Weight Risk
Espy, Kimberly Andrews; Fang, Hua; Charak, David; Minich, Nori; Taylor, H. Gerry
2009-01-01
The extremes of birth weight and preterm birth are known to result in a host of adverse outcomes, yet studies to date largely have used cross-sectional designs and variable-centered methods to understand long-term sequelae. Growth mixture modeling (GMM) that utilizes an integrated person- and variable-centered approach was applied to identify latent classes of achievement from a cohort of school-age children born at varying birth weights. GMM analyses revealed two latent achievement classes for calculation, problem-solving, and decoding abilities. The classes differed substantively and persistently in proficiency and in growth trajectories. Birth weight was a robust predictor of class membership for the two mathematics achievement outcomes and a marginal predictor of class membership for decoding. Neither visuospatial-motor skills nor environmental risk at study entry added to class prediction for any of the achievement skills. Among children born preterm, neonatal medical variables predicted class membership uniquely beyond birth weight. More generally, GMM is useful in revealing coherence in the developmental patterns of academic achievement in children of varying weight at birth, and is well suited to investigations of sources of heterogeneity. PMID:19586210
Variable selection in a flexible parametric mixture cure model with interval-censored data.
Scolas, Sylvie; El Ghouch, Anouar; Legrand, Catherine; Oulhaj, Abderrahim
2016-03-30
In standard survival analysis, it is generally assumed that every individual will someday experience the event of interest. However, this is not always the case, as some individuals may not be susceptible to this event. Also, in medical studies it is frequent that patients come to scheduled interviews and that the time to the event is only known to lie between two visits. That is, the data are interval-censored with a cure fraction. Variable selection in such a setting is of particular interest. Covariates impacting survival are not necessarily the same as those impacting the probability of experiencing the event. The objective of this paper is to develop a parametric but flexible statistical model to analyze data that are interval-censored and include a fraction of cured individuals when the number of potential covariates may be large. We use the parametric mixture cure model with an accelerated failure time regression model for the survival, along with the extended generalized gamma for the error term. To overcome the issue of non-stable and non-continuous variable selection procedures, we extend the adaptive LASSO to our model. By means of simulation studies, we show good performance of our method and discuss the behavior of the estimates with varying cure and censoring proportions. Lastly, our proposed method is illustrated with a real dataset studying the time until conversion to mild cognitive impairment, a possible precursor of Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
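The mixture cure likelihood for interval-censored data can be sketched as follows; this toy version swaps the paper's extended generalized gamma for a Weibull AFT latency and plain maximum likelihood for the adaptive LASSO, and all data are simulated:

```python
# Minimal sketch of a mixture cure model for interval-censored data:
# S_pop(t|x) = pi(x) + (1 - pi(x)) S_u(t|x), logistic cure probability.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, x, L, R):
    """x: covariate (n,); L, R: interval bounds, R = inf if right-censored."""
    b0, b1, a0, a1, log_k = theta
    pi_cure = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))   # P(cured | x)
    scale = np.exp(a0 + a1 * x)                      # Weibull AFT scale
    k = np.exp(log_k)                                # Weibull shape > 0
    S = lambda t: np.exp(-(np.clip(t, 0, None) / scale) ** k)  # latency survival
    Spop = lambda t: pi_cure + (1 - pi_cure) * S(t)  # population survival
    obs = np.isfinite(R)                             # event seen in (L, R]
    ll = np.where(obs,
                  np.log(Spop(L) - Spop(np.where(obs, R, L)) + 1e-300),
                  np.log(Spop(L) + 1e-300))
    return -ll.sum()

# Toy interval-censored data (hypothetical)
rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
cured = rng.random(n) < 1 / (1 + np.exp(-(-0.5 + 1.0 * x)))
t = np.where(cured, np.inf, rng.weibull(1.5, n) * np.exp(1.0 - 0.3 * x))
L = np.floor(t); R = L + 1                           # yearly scheduled visits
R[t > 5] = np.inf; L[t > 5] = 5                      # administrative censoring

fit = minimize(neg_log_lik, x0=np.zeros(5), args=(x, L, R), method="Nelder-Mead")
print(np.round(fit.x, 3))
```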
Kumar, Keshav
2018-03-01
Excitation-emission matrix fluorescence (EEMF) and total synchronous fluorescence spectroscopy (TSFS) are two fluorescence techniques that are commonly used for the analysis of multifluorophoric mixtures. These two techniques are conceptually different and provide certain advantages over each other. The manual analysis of such highly correlated, large-volume EEMF and TSFS data sets for developing a calibration model is difficult. Partial least squares (PLS) analysis can analyze large volumes of EEMF and TSFS data by finding important factors that maximize the correlation between the spectral and concentration information for each fluorophore. However, the application of PLS analysis to entire data sets often does not provide a robust calibration model and requires a suitable pre-processing step. The present work evaluates the application of genetic algorithm (GA) analysis prior to PLS analysis of EEMF and TSFS data sets for improving the precision and accuracy of the calibration model. The GA essentially combines the advantages of stochastic methods with those of deterministic approaches and can find the set of EEMF and TSFS variables that correlate well with the concentration of each of the fluorophores present in multifluorophoric mixtures. The utility of GA-assisted PLS analysis is successfully validated using (i) EEMF data sets acquired for dilute aqueous mixtures of four biomolecules and (ii) TSFS data sets acquired for dilute aqueous mixtures of four carcinogenic polycyclic aromatic hydrocarbons (PAHs). In the present work, it is shown that by using the GA it is possible to significantly improve the accuracy and precision of the PLS calibration models developed for both EEMF and TSFS data sets. Hence, GA analysis can be considered a useful pre-processing technique when developing EEMF and TSFS calibration models.
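A minimal sketch of GA-driven variable selection ahead of PLS, with cross-validated R² as the fitness; the toy data, population size, and GA settings are illustrative, not those of the study:

```python
# Sketch of GA-based variable selection followed by PLS calibration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 80)); beta = np.zeros(80); beta[10:20] = 1.0
y = X @ beta + rng.normal(0, 0.5, 60)                # only 10 channels matter

def fitness(mask):
    """Cross-validated R^2 of a PLS model on the selected channels."""
    if mask.sum() < 2:
        return -np.inf
    pls = PLSRegression(n_components=min(2, int(mask.sum())))
    return cross_val_score(pls, X[:, mask], y, cv=5).mean()

pop = rng.random((30, 80)) < 0.3                     # initial random masks
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    pop = pop[np.argsort(scores)[::-1]]
    elite = pop[:10]                                 # truncation selection
    kids = []
    for _ in range(len(pop) - len(elite)):
        a, b = elite[rng.integers(10)], elite[rng.integers(10)]
        cut = rng.integers(1, 79)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        child ^= rng.random(80) < 0.01               # bit-flip mutation
        kids.append(child)
    pop = np.vstack([elite, np.array(kids)])

best = pop[0]
print("selected channels:", np.flatnonzero(best), "CV R^2:", fitness(best))
```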
NASA Technical Reports Server (NTRS)
Gernhardt, Michael L.; Abercromby, Andrew F.
2009-01-01
This slide presentation reviews the use of variable pressure suits, intermittent recompression, and Nitrox breathing mixtures to allow for multiple short extravehicular activities (EVAs) at different locations in a day. This new operational concept of multiple short EVAs requires short purge times and shorter prebreathes to assure rapid egress with minimal loss of vehicular air. Preliminary analysis has begun to evaluate the potential benefits of intermittent recompression and Nitrox breathing mixtures when used with variable pressure suits to enable reduced purge and prebreathe durations.
Interfacial tension and vapor-liquid equilibria in the critical region of mixtures
NASA Technical Reports Server (NTRS)
Moldover, Michael R.; Rainwater, James C.
1988-01-01
In the critical region, the concept of two-scale-factor universality can be used to accurately predict the surface tension between near-critical vapor and liquid phases from the singularity in the thermodynamic properties of the bulk fluid. In the present work, this idea is generalized to binary mixtures and is illustrated using the data of Hsu et al. (1985) for CO2 + n-butane. The pressure-temperature-composition-density data for coexisting, near-critical phases of the mixtures are fitted with a thermodynamic potential comprising a sum of a singular term and nonsingular terms. The nonuniversal amplitudes characterizing the singular term for the mixtures are obtained from the amplitudes for the pure components by interpolation in a space of thermodynamic 'field' variables. The interfacial tensions predicted for the mixtures from the singular term are within 10 percent of the data on three isotherms in the pressure range (Pc - P)/Pc < 0.5. This difference is comparable to the combined experimental and model errors.
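For reference, the generic scaling relations behind two-scale-factor universality can be written as below; the numeric amplitude combination is the commonly quoted three-dimensional Ising value, and the paper's mixture-specific interpolation in field variables is not reproduced here:

```latex
% Generic critical-point scaling forms assumed by two-scale-factor
% universality (sketch; values are the commonly quoted 3-D Ising ones).
\begin{align*}
  \sigma &= \sigma_0\, t^{\mu}, \qquad t = (T_c - T)/T_c, \qquad \mu = (d-1)\nu \approx 1.26,\\
  \xi    &= \xi_0\, t^{-\nu}, \qquad \nu \approx 0.63 \quad (d = 3),\\
  \frac{\sigma_0\,\xi_0^{\,2}}{k_B T_c} &\approx \text{universal constant (commonly quoted near } 0.37\text{)}.
\end{align*}
```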
Scale Reliability Evaluation with Heterogeneous Populations
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling approach for scale reliability evaluation in heterogeneous populations is discussed. The method can be used for point and interval estimation of reliability of multicomponent measuring instruments in populations representing mixtures of an unknown number of latent classes or subpopulations. The procedure is helpful also…
Exclusion probabilities and likelihood ratios with applications to mixtures.
Slooten, Klaas-Jan; Egeland, Thore
2016-01-01
The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework, including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered, and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
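The donor/non-donor duality can be checked numerically in a toy discrete evidence model; the identity used below (non-donor exceedance probabilities as donor-side averages of 1/LR) is the one the abstract describes, with RMNE arising as the special case LR > 0:

```python
# Numerical check of P(LR >= t | H_d) = E[ 1{LR >= t} / LR | H_p ]
# in a toy discrete evidence model (all outcome probabilities made up).
import numpy as np

rng = np.random.default_rng(3)
K = 12
p_hp = rng.dirichlet(np.ones(K))       # P(evidence outcome | donor)
p_hd = rng.dirichlet(np.ones(K))       # P(evidence outcome | non-donor)
p_hp[0] = 0.0                          # outcome 0 excludes the contributor
p_hp /= p_hp.sum()
lr = p_hp / p_hd                       # LR attached to each outcome

for t in (1e-12, 0.5, 2.0, 10.0):
    lhs = p_hd[lr >= t].sum()                        # direct non-donor tail
    sel = (lr >= t) & (lr > 0)
    rhs = (p_hp[sel] / lr[sel]).sum()                # donor-side average of 1/LR
    print(f"t={t:g}: P(LR>=t|Hd)={lhs:.4f}  E[1/LR; LR>=t|Hp]={rhs:.4f}")

print("RMNE (random man not excluded):", p_hd[lr > 0].sum())
```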
Modeling the phase behavior of H2S+n-alkane binary mixtures using the SAFT-VR+D approach.
dos Ramos, M Carolina; Goff, Kimberly D; Zhao, Honggang; McCabe, Clare
2008-08-07
A statistical associating fluid theory for potentials of variable range has recently been developed to model dipolar fluids (SAFT-VR+D) [Zhao and McCabe, J. Chem. Phys. 2006, 125, 104504]. The SAFT-VR+D equation explicitly accounts for dipolar interactions and their effect on the thermodynamics and structure of a fluid by using the generalized mean spherical approximation (GMSA) to describe a reference fluid of dipolar square-well segments. In this work, we apply the SAFT-VR+D approach to real mixtures of dipolar fluids. In particular, we examine the high-pressure phase diagram of hydrogen sulfide + n-alkane binary mixtures. Hydrogen sulfide is modeled as an associating spherical molecule with four off-center sites to mimic hydrogen bonding and an embedded dipole moment (μ) to describe the polarity of H2S. The n-alkane molecules are modeled as spherical segments tangentially bonded together to form chains of length m, as in the original SAFT-VR approach. By using simple Lorentz-Berthelot combining rules, the theoretical predictions from the SAFT-VR+D equation are found to be in excellent overall agreement with experimental data. In particular, the theory is able to accurately describe the different types of phase behavior observed for these mixtures as the molecular weight of the alkane is varied: type III phase behavior, according to the classification scheme of Scott and van Konynenburg, for the H2S+methane system; type IIA (with the presence of azeotropy) for the H2S+ethane and H2S+propane mixtures; and type I phase behavior for mixtures of H2S and longer n-alkanes up to n-decane. The theory is also able to predict, in a qualitative manner, the solubility of hydrogen sulfide in heavy n-alkanes.
Bayesian kernel machine regression for estimating the health effects of multi-pollutant mixtures.
Bobb, Jennifer F; Valeri, Linda; Claus Henn, Birgit; Christiani, David C; Wright, Robert O; Mazumdar, Maitreyi; Godleski, John J; Coull, Brent A
2015-07-01
Because humans are invariably exposed to complex chemical mixtures, estimating the health effects of multi-pollutant exposures is of critical concern in environmental epidemiology, and to regulatory agencies such as the U.S. Environmental Protection Agency. However, most health effects studies focus on single agents or consider simple two-way interaction models, in part because we lack the statistical methodology to more realistically capture the complexity of mixed exposures. We introduce Bayesian kernel machine regression (BKMR) as a new approach to study mixtures, in which the health outcome is regressed on a flexible function of the mixture (e.g. air pollution or toxic waste) components that is specified using a kernel function. In high-dimensional settings, a novel hierarchical variable selection approach is incorporated to identify important mixture components and account for the correlated structure of the mixture. Simulation studies demonstrate the success of BKMR in estimating the exposure-response function and in identifying the individual components of the mixture responsible for health effects. We demonstrate the features of the method through epidemiology and toxicology applications. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
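The kernel-machine idea can be illustrated with a non-Bayesian stand-in: kernel ridge regression recovers a flexible exposure-response surface h(z), though it omits BKMR's hierarchical variable selection and posterior inference; the data and hyperparameters below are hypothetical:

```python
# Non-Bayesian stand-in for the kernel-machine representation in BKMR:
# the exposure-response surface h(z) is expressed through a Gaussian
# (RBF) kernel; sklearn's KernelRidge replaces the hierarchical
# Bayesian fit and variable-selection layer of the actual method.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(4)
Z = rng.normal(size=(200, 4))                        # 4 mixture components
h = np.sin(Z[:, 0]) + 0.5 * Z[:, 1] * Z[:, 2]        # nonlinear + interaction
y = h + rng.normal(0, 0.3, 200)                      # health outcome

model = KernelRidge(kernel="rbf", gamma=0.3, alpha=0.5).fit(Z, y)

# Component-wise summary: vary one exposure, hold the others at their median
grid = np.linspace(-2, 2, 9)
for j in range(4):
    Zq = np.tile(np.median(Z, axis=0), (len(grid), 1))
    Zq[:, j] = grid
    effect = model.predict(Zq)
    print(f"component {j}: range of fitted h = {np.ptp(effect):.2f}")
```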
NASA Astrophysics Data System (ADS)
Robertson, K. M.; Milliken, R. E.; Li, S.
2016-10-01
Quantitative mineral abundances of lab derived clay-gypsum mixtures were estimated using a revised Hapke VIS-NIR and Shkuratov radiative transfer model. Montmorillonite-gypsum mixtures were used to test the effectiveness of the model in distinguishing between subtle differences in minor absorption features that are diagnostic of mineralogy in the presence of strong H2O absorptions that are not always diagnostic of distinct phases or mineral abundance. The optical constants (k-values) for both endmembers were determined from bi-directional reflectance spectra measured in RELAB as well as on an ASD FieldSpec3 in a controlled laboratory setting. Multiple size fractions were measured in order to derive a single k-value from optimization of the optical path length in the radiative transfer models. It is shown that with careful experimental conditions, optical constants can be accurately determined from powdered samples using a field spectrometer, consistent with previous studies. Variability in the montmorillonite hydration level increased the uncertainties in the derived k-values, but estimated modal abundances for the mixtures were still within 5% of the measured values. Results suggest that the Hapke model works well in distinguishing between hydrated phases that have overlapping H2O absorptions and it is able to detect gypsum and montmorillonite in these simple mixtures where they are present at levels of ∼10%. Care must be taken however to derive k-values from a sample with appropriate H2O content relative to the modeled spectra. These initial results are promising for the potential quantitative analysis of orbital remote sensing data of hydrated minerals, including more complex clay and sulfate assemblages such as mudstones examined by the Curiosity rover in Gale crater.
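A sketch of the intimate-mixture step in a Hapke-type model: endmember single-scattering albedos combine weighted by mass fraction/(density × grain size), and reflectance follows from the mixture albedo. Isotropic scattering, no opposition effect, and hypothetical endmember values are assumed:

```python
# Hapke-style intimate mixing of single-scattering albedos (SSA) and
# the resulting radiance factor, for isotropic scatterers.
import numpy as np

def H(x, w):
    """Chandrasekhar H-function, Hapke's rational approximation."""
    return (1 + 2 * x) / (1 + 2 * x * np.sqrt(1 - w))

def radiance_factor(w, mu0, mu):
    """I/F for isotropic single-particle scattering, no opposition effect."""
    return (w / 4) * mu0 / (mu0 + mu) * H(mu0, w) * H(mu, w)

# Hypothetical endmember properties at one wavelength
w_mont, w_gyp = 0.85, 0.97          # SSAs: montmorillonite, gypsum
rho = np.array([2.35, 2.31])        # grain densities (g/cm^3)
d = np.array([45e-4, 45e-4])        # grain sizes (cm)

for f_gyp in (0.0, 0.1, 0.5, 1.0):  # gypsum mass fraction
    M = np.array([1 - f_gyp, f_gyp])
    weight = (M / (rho * d)) / (M / (rho * d)).sum()   # cross-section weights
    w_mix = weight @ np.array([w_mont, w_gyp])
    iof = radiance_factor(w_mix, np.cos(0.0), np.cos(np.radians(30)))
    print(f"gypsum {f_gyp:4.0%}: w_mix = {w_mix:.3f}, I/F = {iof:.3f}")
```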
On the measurement of stability in over-time data.
Kenny, D A; Campbell, D T
1989-06-01
In this article, autoregressive models and growth curve models are compared. Autoregressive models are useful because they allow for random change, permit scores to increase or decrease, and do not require strong assumptions about the level of measurement. Three previously presented designs for estimating stability are described: (a) time-series, (b) simplex, and (c) two-wave, one-factor methods. A two-wave, multiple-factor model also is presented, in which the variables are assumed to be caused by a set of latent variables. The factor structure does not change over time and so the synchronous relationships are temporally invariant. The factors do not cause each other and have the same stability. The parameters of the model are the factor loading structure, each variable's reliability, and the stability of the factors. We apply the model to two data sets. For eight cognitive skill variables measured at four times, the 2-year stability is estimated to be .92 and the 6-year stability is .83. For nine personality variables, the 3-year stability is .68. We speculate that for many variables there are two components: one component that changes very slowly (the trait component) and another that changes very rapidly (the state component); thus each variable is a mixture of trait and state. Circumstantial evidence supporting this view is presented.
Assessing the external validity of algorithms to estimate EQ-5D-3L from the WOMAC.
Kiadaliri, Aliasghar A; Englund, Martin
2016-10-04
The use of mapping algorithms has been suggested as a solution for predicting health utilities when no preference-based measure is included in a study. However, the validity and predictive performance of these algorithms are highly variable, and hence assessing their accuracy and validity before using them in a new setting is important. The aim of the current study was to assess the predictive accuracy of three mapping algorithms for estimating the EQ-5D-3L from the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) among Swedish people with knee disorders. Two of these algorithms were developed using ordinary least squares (OLS) models and one using a mixture model. Data from 1078 subjects, mean (SD) age 69.4 (7.2) years, with frequent knee pain and/or knee osteoarthritis from the Malmö Osteoarthritis study in Sweden were used. The algorithms' performance was assessed using mean error, mean absolute error, and root mean squared error. Two types of prediction were estimated for the mixture model: weighted average (WA) and conditional on estimated component (CEC). The overall mean was overpredicted by one OLS model and underpredicted by the two other algorithms (P < 0.001). All predictions but the CEC predictions of the mixture model had a narrower range than the observed scores (22 to 90%). All algorithms suffered from overprediction for severe health states and underprediction for mild health states, to a lesser extent for the mixture model. While the mixture model outperformed the OLS models at the extremes of the EQ-5D-3L distribution, it underperformed around the center of the distribution. While the algorithm based on the mixture model reflected the distribution of EQ-5D-3L data more accurately than the OLS models, all algorithms suffered from systematic bias. This calls for caution in applying these mapping algorithms in a new setting, particularly in samples with milder knee problems than the original sample. Assessing the impact of the choice of algorithm on cost-effectiveness studies through sensitivity analysis is recommended.
A Bootstrap Algorithm for Mixture Models and Interval Data in Inter-Comparisons
2001-07-01
parametric bootstrap. The present algorithm will be applied to a thermometric inter-comparison, where data cannot be assumed to be normally distributed. ... (experimental methods used in each laboratory) often imply that the statistical assumptions are not satisfied, as for example in several thermometric ... (triangular). Indeed, in thermometric experiments these three probabilistic models can represent several common stochastic variabilities.
Person Re-Identification via Distance Metric Learning With Latent Variables.
Sun, Chong; Wang, Dong; Lu, Huchuan
2017-01-01
In this paper, we propose an effective person re-identification method with latent variables, which represents a pedestrian as a mixture of a holistic model and a number of flexible models. Three types of latent variables are introduced to model uncertain factors in the re-identification problem: vertical misalignments, horizontal misalignments, and leg posture variations. The distance between two pedestrians can be determined by minimizing a given distance function with respect to the latent variables, and then used to conduct the re-identification task. In addition, we develop a latent metric learning method for learning an effective metric matrix, which can be solved in an iterative manner: once the latent information is specified, the metric matrix can be obtained based on some typical metric learning methods; with the computed metric matrix, the latent variables can be determined by exhaustively searching the state space. Finally, extensive experiments are conducted on seven databases to evaluate the proposed method. The experimental results demonstrate that our method achieves better performance than other competing algorithms.
Martin, Jean-Charles; Berton, Amélie; Ginies, Christian; Bott, Romain; Scheercousse, Pierre; Saddi, Alessandra; Gripois, Daniel; Landrier, Jean-François; Dalemans, Daniel; Alessi, Marie-Christine; Delplanque, Bernadette
2015-09-01
We assessed the atheroprotective efficiency of modified dairy fats in hyperlipidemic hamsters. A systems biology approach was implemented to reveal and quantify the dietary fat-related components of the disease. Three modified dairy fats (40% energy) were prepared from regular butter by mixing with a plant oil mixture, by removing cholesterol alone, or by removing cholesterol in combination with reducing saturated fatty acids. A plant oil mixture and a regular butter were used as control diets. The atherosclerosis severity (aortic cholesteryl-ester level) was higher in the regular butter-fed hamsters than in the other four groups (P < 0.05). Eighty-seven of the 1,666 variables measured from multiplatform analysis were found to be strongly associated with the disease. When aggregated into 10 biological clusters combined into a multivariate predictive equation, these 87 variables explained 81% of the disease variability. The biological cluster "regulation of lipid transport and metabolism" appeared central to atherogenic development relative to diets. The "vitamin E metabolism" cluster was the main driver of atheroprotection with the best performing transformed dairy fat. Under conditions that promote atherosclerosis, the impact of dairy fats on atherogenesis could be greatly ameliorated by technological modifications. Our modeling approach allowed for identifying and quantifying the contribution of complex factors to atherogenic development in each dietary setup. Copyright © 2015 the American Physiological Society.
Inadequacy representation of flamelet-based RANS model for turbulent non-premixed flame
NASA Astrophysics Data System (ADS)
Lee, Myoungkyu; Oliver, Todd; Moser, Robert
2017-11-01
Stochastic representations for model inadequacy in RANS-based models of non-premixed jet flames are developed and explored. Flamelet-based RANS models are attractive for engineering applications relative to higher-fidelity methods because of their low computational cost. However, the various assumptions inherent in such models introduce errors that can significantly affect the accuracy of computed quantities of interest. In this work, we develop an approach to represent the inadequacy of the flamelet-based RANS model. In particular, we pose a physics-based, stochastic PDE for the triple correlation of the mixture fraction. This additional uncertain state variable is then used to construct perturbations of the PDF of the instantaneous mixture fraction, which is used to obtain an uncertain perturbation of the flame temperature. A hydrogen-air non-premixed jet flame is used to demonstrate the representation of the inadequacy of the flamelet-based RANS model. This work was supported by the DARPA EQUiPS (Enabling Quantification of Uncertainty in Physical Systems) program.
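The baseline presumed-PDF step that such an inadequacy representation perturbs can be sketched as follows: a beta PDF is assumed for the instantaneous mixture fraction, parameterized by the RANS mean and variance, and a flamelet profile is averaged over it (the piecewise-linear temperature profile below is a placeholder, not the paper's hydrogen-air flamelet table):

```python
# Standard presumed beta-PDF averaging of a flamelet profile.
import numpy as np
from scipy.stats import beta
from scipy.integrate import simpson

def beta_pdf_params(z_mean, z_var):
    """Beta shape parameters matching the first two moments."""
    g = z_mean * (1 - z_mean) / z_var - 1.0          # requires z_var < z(1-z)
    return z_mean * g, (1 - z_mean) * g

def T_flamelet(z, z_st=0.3, T_ad=2400.0, T_0=300.0):
    """Toy piecewise-linear flamelet temperature peaking at stoichiometry."""
    return np.where(z < z_st, T_0 + (T_ad - T_0) * z / z_st,
                    T_ad - (T_ad - T_0) * (z - z_st) / (1 - z_st))

z = np.linspace(1e-6, 1 - 1e-6, 2001)
a, b = beta_pdf_params(z_mean=0.3, z_var=0.02)
T_mean = simpson(T_flamelet(z) * beta.pdf(z, a, b), x=z)   # PDF-weighted mean
print(f"a = {a:.2f}, b = {b:.2f}, mean temperature = {T_mean:.0f} K")
```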
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to the choice of prior on p and the substantially different estimates of abundance that result.
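A minimal sketch of the N-mixture marginal likelihood with a Poisson mixing distribution, maximized over λ and p on simulated counts; the truncation bound K and the data-generating values are illustrative:

```python
# N-mixture marginal likelihood: N_i ~ Poisson(lambda), repeat counts
# y_it ~ Binomial(N_i, p), with N_i summed out up to a bound K.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def neg_log_lik(theta, y, K=100):
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    N = np.arange(K + 1)                       # truncated support for N_i
    prior = poisson.pmf(N, lam)                # mixing distribution
    ll = 0.0
    for counts in y:                           # one row of repeat counts per site
        like_N = np.prod(binom.pmf(counts[:, None], N[None, :], p), axis=0)
        ll += np.log(prior @ like_N + 1e-300)  # marginalize over N_i
    return -ll

# Simulated data: 50 sites, 3 visits, lambda = 4, p = 0.4
rng = np.random.default_rng(5)
N_true = rng.poisson(4, size=50)
y = rng.binomial(N_true[:, None], 0.4, size=(50, 3))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
print("lambda-hat:", np.exp(fit.x[0]), "p-hat:", 1 / (1 + np.exp(-fit.x[1])))
```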
Rahmati, Nazanin Fatemeh; Mazaheri Tehrani, Mostafa
2014-09-01
Emulsifiers of different structures and functionalities are important ingredients usually used in baking cakes with satisfactory properties. In this study, three emulsifiers, distilled glycerol monostearate (DGMS), lecithin, and sorbitan monostearate (SMS), were used to bake seven eggless cakes containing soy milk, and optimization was performed using a mixture experimental design to produce an eggless cake sample with optimized properties. Physical properties of the cake batters (viscosity, specific gravity, and stability), cake quality parameters (moisture loss, density, specific volume, volume index, contour, symmetry, color, and texture), and sensory attributes of the eggless cakes were analyzed to investigate the functional potential of the emulsifiers, and the results were compared with those of a control cake containing egg. In almost all cases the emulsifiers significantly changed the properties of the eggless cakes compared with the control cake. Given the models of the different response variables (except for some properties) and their high R(2) values (99.51-100), it could be concluded that the models obtained by the mixture design fitted the studied responses significantly well.
NASA Astrophysics Data System (ADS)
Gulliver, Eric A.
The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop-and-roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and were free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations. Control chart analysis showed that Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered particle-size distributions or mixture composition. Control charts based on tessellation measurements were used for direct, quantitative comparisons between real and simulated mixtures. Four sets of simulated and real mixtures were examined. Data from real mixtures matched simulated data when the samples were well mixed and the particle size distributions and volume fractions of the components were identical. Analysis of mixture components that occupied less than approximately 10 vol% of the mixture was not practical unless the particle size of the component was extremely small and excellent-quality, high-resolution compositional micrographs of the real sample were available. These methods of analysis should allow future researchers to systematically evaluate and predict the impact and importance of variables such as component volume fraction and component particle size distribution as they pertain to the uniformity of powder mixture microstructures.
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables include elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
PLUME-MoM 1.0: A new integral model of volcanic plumes based on the method of moments
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.
2015-08-01
In this paper a new integral mathematical model for volcanic plumes, named PLUME-MoM, is presented. The model describes the steady-state dynamics of a plume in a 3-D coordinate system, accounting for continuous variability in particle size distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes ranging from a few microns up to several centimeters and more. A proper description of such a multi-particle nature is crucial when quantifying changes in grain-size distribution along the plume and, therefore, for better characterization of source conditions of ash dispersal models. The new model is based on the method of moments, which allows for a description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of parameters of the continuous size distribution of the particles. This is achieved by formulation of fundamental transport equations for the multi-particle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number, as well as of the mass distribution expressed in terms of the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on the discretization of the mixture in N discrete phases, shows that the new model allows for the same results to be obtained with a significantly lower computational cost (particularly when a large number of discrete phases is adopted). Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables the investigation of the response of four key output variables (mean and standard deviation of the grain-size distribution at the top of the plume, plume height and amount of mass lost by the plume during the ascent) to changes in the main input parameters (mean and standard deviation) characterizing the pyroclastic mixture at the base of the plume. Results show that, for the range of parameters investigated and without considering interparticle processes such as aggregation or comminution, the grain-size distribution at the top of the plume is remarkably similar to that at the base and that the plume height is only weakly affected by the parameters of the grain distribution. The adopted approach can be potentially extended to the consideration of key particle-particle effects occurring in the plume including particle aggregation and fragmentation.
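The bookkeeping behind the method of moments can be sketched as follows: the continuous grain-size distribution in Krumbein φ units is carried through a few raw moments, from which the mean and SD are recovered; the Gaussian-in-φ mass distribution here is a toy choice, not PLUME-MoM's transport solution:

```python
# Raw moments of a grain-size mass distribution in Krumbein phi units.
import numpy as np

phi = np.linspace(-6.0, 10.0, 4001)              # phi = -log2(diameter / 1 mm)
dphi = phi[1] - phi[0]
mass_pdf = np.exp(-0.5 * ((phi - 1.0) / 1.5) ** 2)
mass_pdf /= (mass_pdf * dphi).sum()              # normalize the distribution

def moment(k):
    """k-th raw moment of the mass distribution in phi units."""
    return (phi ** k * mass_pdf * dphi).sum()

M0, M1, M2 = moment(0), moment(1), moment(2)
mean_phi = M1 / M0
sd_phi = np.sqrt(M2 / M0 - mean_phi ** 2)
print(f"recovered mean = {mean_phi:.3f} phi, SD = {sd_phi:.3f} phi")
# PLUME-MoM advances such moments along the plume axis via transport
# equations, instead of tracking N discretized particle classes.
```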
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrows, Susannah M.; Ogunro, O.; Frossard, Amanda
2014-12-19
The presence of a large fraction of organic matter in primary sea spray aerosol (SSA) can strongly affect its cloud condensation nuclei activity and interactions with marine clouds. Global climate models require new parameterizations of the SSA composition in order to improve the representation of these processes. Existing proposals for such a parameterization use remotely-sensed chlorophyll-a concentrations as a proxy for the biogenic contribution to the aerosol. However, both observations and theoretical considerations suggest that existing relationships with chlorophyll-a, derived from observations at only a few locations, may not be representative for all ocean regions. We introduce a novel framework for parameterizing the fractionation of marine organic matter into SSA based on a competitive Langmuir adsorption equilibrium at bubble surfaces. Marine organic matter is partitioned into classes with differing molecular weights, surface excesses, and Langmuir adsorption parameters. The classes include a lipid-like mixture associated with labile dissolved organic carbon (DOC), a polysaccharide-like mixture associated primarily with semi-labile DOC, a protein-like mixture with concentrations intermediate between lipids and polysaccharides, a processed mixture associated with recalcitrant surface DOC, and a deep abyssal humic-like mixture. Box model calculations have been performed for several cases of organic adsorption to illustrate the underlying concepts. We then apply the framework to output from a global marine biogeochemistry model, by partitioning total dissolved organic carbon into several classes of macromolecule. Each class is represented by model compounds with physical and chemical properties based on existing laboratory data. This allows us to globally map the predicted organic mass fraction of the nascent submicron sea spray aerosol. Predicted relationships between chlorophyll-a and organic fraction are similar to existing empirical parameterizations, but can vary between biologically productive and non-productive regions, and seasonally within a given region. Major uncertainties include the bubble film thickness at bursting and the variability of organic surfactant activity in the ocean, which is poorly constrained. In addition, marine colloids and cooperative adsorption of polysaccharides may make important contributions to the aerosol, but are not included here. This organic fractionation framework is an initial step towards a closer linking of ocean biogeochemistry and aerosol chemical composition in Earth system models. Future work should focus on improving constraints on model parameters through new laboratory experiments or through empirical fitting to observed relationships in the real ocean and atmosphere, as well as on atmospheric implications of the variable composition of organic matter in sea spray.
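The core of the adsorption step can be sketched with the competitive Langmuir isotherm, theta_i = K_i C_i / (1 + sum_j K_j C_j); the class names follow the abstract, while the Langmuir coefficients and concentrations are hypothetical placeholders rather than the framework's calibrated values:

```python
# Competitive Langmuir coverages of organic classes at a bubble surface.
import numpy as np

classes = ["lipid-like", "protein-like", "polysaccharide-like",
           "processed", "humic-like"]
K = np.array([1e4, 3e3, 1e2, 5e1, 1e1])       # Langmuir coefficients (L/mol), hypothetical
C = np.array([1e-6, 2e-6, 1e-5, 4e-5, 2e-5])  # seawater concentrations (mol/L), hypothetical

theta = K * C / (1 + np.sum(K * C))           # competitive Langmuir isotherm
for name, th in zip(classes, theta):
    print(f"{name:>20s}: coverage = {th:.3f}")
print(f"{'uncovered':>20s}: {1 - theta.sum():.3f}")
```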
Modern Methods for Modeling Change in Obesity Research in Nursing.
Sereika, Susan M; Zheng, Yaguang; Hu, Lu; Burke, Lora E
2017-08-01
Persons receiving treatment for weight loss often demonstrate heterogeneity in lifestyle behaviors and health outcomes over time. Traditional repeated measures approaches focus on the estimation and testing of an average temporal pattern, ignoring the interindividual variability about the trajectory. An alternate person-centered approach, group-based trajectory modeling, can be used to identify distinct latent classes of individuals following similar trajectories of behavior or outcome change as a function of age or time and can be expanded to include time-invariant and time-dependent covariates and outcomes. Another latent class method, growth mixture modeling, builds on group-based trajectory modeling to investigate heterogeneity within the distinct trajectory classes. In this applied methodologic study, group-based trajectory modeling for analyzing changes in behaviors or outcomes is described and contrasted with growth mixture modeling. An illustration of group-based trajectory modeling is provided using calorie intake data from a single-group, single-center prospective study for weight loss in adults who are either overweight or obese.
Effect of stirring on the safety of flammable liquid mixtures.
Liaw, Horng-Jang; Gerbaud, Vincent; Chen, Chan-Cheng; Shu, Chi-Min
2010-05-15
Flash point is the most important variable used to characterize the fire and explosion hazard of liquids. The models developed to date for predicting the flash point of partially miscible mixtures are all based on the assumption of liquid-liquid equilibrium. In real-world environments, however, the liquid-liquid equilibrium assumption does not always hold, such as in the collection or accumulation of waste solvents without stirring, where complete stirring for a period of time is usually needed to ensure the liquid phases are in equilibrium. This study investigated the effect of stirring on the flash-point behavior of binary partially miscible mixtures. Two series of partially miscible binary mixtures were employed to elucidate the effect of stirring. The first series comprised aqueous-organic mixtures, including water+1-butanol, water+2-butanol, water+isobutanol, water+1-pentanol, and water+octane; the second series comprised mixtures of two flammable solvents: methanol+decane, methanol+2,2,4-trimethylpentane, and methanol+octane. Results reveal that for binary aqueous-organic solutions the flash-point values of unstirred mixtures were located between those of the completely stirred mixtures and those of the flammable component. Therefore, risk assessment could be based on the flash-point value of the flammable component. However, to ensure safety, it is suggested that such mixtures be completely stirred before handling to reduce the risk. Copyright (c) 2010 Elsevier B.V. All rights reserved.
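For context, the fully stirred (vapor-liquid equilibrium) baseline that such measurements are compared against is a Le Chatelier-type flash-point condition of the kind used in Liaw-type mixture models; in hedged form:

```latex
% Le Chatelier-type flash-point condition for a completely stirred,
% miscible mixture (sketch of the Liaw-type equilibrium model): the
% mixture flash point T_fp solves
\[
  \sum_i \frac{x_i \, \gamma_i \, P_i^{\mathrm{sat}}(T_{\mathrm{fp}})}
              {P_i^{\mathrm{sat}}(T_{\mathrm{fp},i})} = 1 ,
\]
% where x_i are liquid mole fractions, gamma_i activity coefficients,
% P_i^sat vapor pressures, and T_fp,i the pure-component flash points.
```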
Mixture experiment methods in the development and optimization of microemulsion formulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furlanetto, Sandra; Cirri, Marzia; Piepel, Gregory F.
2011-06-25
Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, allowing for improving their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly water-soluble hypoglycaemic agent) as a model drug. First, the area of stable microemulsion (ME) formations was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil, and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of the ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfactant (a mixture of Tween 20 and Transcutol, 1:1 v/v), 5% oil (Labrafac Hydro) and 17% aqueous (water). The stable region of MEs was identified using mixture experiment methods for the first time.
O’Donnell, Katherine M.; Thompson, Frank R.; Semlitsch, Raymond D.
2015-01-01
Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model’s potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3–5 surveys each spring and fall 2010–2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase population parameter estimate reliability. PMID:25775182
Glyph-based analysis of multimodal directional distributions in vector field ensembles
NASA Astrophysics Data System (ADS)
Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger
2015-04-01
Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.
Iverson, R.M.
2003-01-01
Models that employ a fixed rheology cannot yield accurate interpretations or predictions of debris-flow motion, because the evolving behavior of debris flows is too complex to be represented by any rheological equation that uniquely relates stress and strain rate. Field observations and experimental data indicate that debris behavior can vary from nearly rigid to highly fluid as a consequence of temporal and spatial variations in pore-fluid pressure and mixture agitation. Moreover, behavior can vary if debris composition changes as a result of grain-size segregation and gain or loss of solid and fluid constituents in transit. An alternative to fixed-rheology models is provided by a Coulomb mixture theory model, which can represent variable interactions of solid and fluid constituents in heterogeneous debris-flow surges with high-friction, coarse-grained heads and low-friction, liquefied tails. © 2003 Millpress.
Igne, Benoit; Shi, Zhenqi; Drennen, James K; Anderson, Carl A
2014-02-01
The impact of raw material variability on the prediction ability of a near-infrared calibration model was studied. Calibrations, developed from a quaternary mixture design comprising theophylline anhydrous, lactose monohydrate, microcrystalline cellulose, and soluble starch, were challenged by intentional variation of raw material properties. A design with two theophylline physical forms, three lactose particle sizes, and two starch manufacturers was created to test model robustness. Further challenges to the models were accomplished through environmental conditions. Along with full-spectrum partial least squares (PLS) modeling, variable selection by dynamic backward PLS and genetic algorithms was utilized in an effort to mitigate the effects of raw material variability. In addition to evaluating models based on their prediction statistics, prediction residuals were analyzed by analyses of variance and model diagnostics (Hotelling's T(2) and Q residuals). Full-spectrum models were significantly affected by lactose particle size. Models developed by selecting variables gave lower prediction errors and proved to be a good approach to limit the effect of changing raw material characteristics. Hotelling's T(2) and Q residuals provided valuable information that was not detectable when studying only prediction trends. Diagnostic statistics were demonstrated to be critical in the appropriate interpretation of the prediction of quality parameters. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
Mixture EMOS model for calibrating ensemble forecasts of wind speed.
Baran, S; Lerch, S
2016-03-01
Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-Range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and improved accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics published by John Wiley & Sons Ltd.
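A toy version of the TN/LN mixture EMOS density can be sketched as follows; the real model links each component's parameters to the full ensemble and optimizes proper scoring rules such as the CRPS over a rolling window, whereas this sketch uses only the ensemble mean and maximizes the training log score on simulated data:

```python
# Mixture EMOS sketch: weighted truncated-normal / log-normal density
# with ensemble-driven location parameters, fit by log-score maximization.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import truncnorm, lognorm

def mixture_pdf(y, ensmean, theta):
    a0, a1, log_s, b0, b1, log_sig, wr = theta
    mu = a0 + a1 * ensmean                     # TN location from ensemble mean
    s = np.exp(log_s)
    m = b0 + b1 * ensmean                      # LN log-scale location
    sig = np.exp(log_sig)
    w = 1 / (1 + np.exp(-wr))                  # mixture weight in (0, 1)
    tn = truncnorm.pdf(y, (0 - mu) / s, np.inf, loc=mu, scale=s)  # truncated at 0
    ln = lognorm.pdf(y, s=sig, scale=np.exp(m))
    return w * tn + (1 - w) * ln

def neg_log_score(theta, y, ensmean):
    return -np.log(mixture_pdf(y, ensmean, theta) + 1e-300).sum()

rng = np.random.default_rng(6)
ensmean = rng.gamma(4, 2, size=400)            # toy ensemble means
y = np.abs(0.8 * ensmean + rng.normal(0, 1.5, 400))  # verifying wind speeds

fit = minimize(neg_log_score, x0=[0.5, 0.8, 0.5, 0.5, 0.05, -0.5, 0.0],
               args=(y, ensmean), method="Nelder-Mead",
               options={"maxiter": 5000})
print("fitted parameters:", np.round(fit.x, 3))
```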
Using cure models for analyzing the influence of pathogens on salmon survival
Ray, Adam R; Perry, Russell W.; Som, Nicholas A.; Bartholomew, Jerri L
2014-01-01
Parasites and pathogens influence the size and stability of wildlife populations, yet many population models ignore the population-level effects of pathogens. Standard survival analysis methods (e.g., accelerated failure time models) are used to assess how survival rates are influenced by disease. However, they assume that each individual is equally susceptible and will eventually experience the event of interest; this assumption is not typically satisfied with regard to pathogens of wildlife populations. In contrast, mixture cure models, which comprise logistic regression and survival analysis components, allow for different covariates to be entered into each part of the model and provide better predictions of survival when a fraction of the population is expected to survive a disease outbreak. We fitted mixture cure models to the host–pathogen dynamics of Chinook Salmon Oncorhynchus tshawytscha and Coho Salmon O. kisutch and the myxozoan parasite Ceratomyxa shasta. Total parasite concentration, water temperature, and discharge were used as covariates to predict the observed parasite-induced mortality in juvenile salmonids collected as part of a long-term monitoring program in the Klamath River, California. The mixture cure models predicted the observed total mortality well, but some of the variability in observed mortality rates was not captured by the models. Parasite concentration and water temperature were positively associated with total mortality and the mortality rate of both Chinook Salmon and Coho Salmon. Discharge was positively associated with total mortality for both species but only affected the mortality rate for Coho Salmon. The mixture cure models provide insights into how daily survival rates change over time in Chinook Salmon and Coho Salmon after they become infected with C. shasta.
Controllability of control and mixture weakly dependent siphons in S3PR
NASA Astrophysics Data System (ADS)
Hong, Liang; Chao, Daniel Y.
2013-08-01
Deadlocks in a flexible manufacturing system modelled by Petri nets arise from insufficiently marked siphons. Monitors are added to control these siphons to avoid deadlocks, but this renders the system overly complicated since the total number of monitors grows exponentially. Li and Zhou propose adding monitors only to elementary siphons while controlling the other (strongly or weakly) dependent siphons by adjusting control depth variables. To avoid generating new siphons, the control arcs are ended at the source transitions of the process nets. This disturbs the original model more and hence loses more live states. Negative terms in the controllability make the control policy for weakly dependent siphons rather conservative. We earlier studied the controllability of strongly dependent siphons and proposed adding monitors in the order of basic, compound, control, partial-mixture and full-mixture (strongly dependent) siphons to reduce the number of mixed integer programming iterations and redundant monitors. This article further investigates the controllability of siphons derived from weakly 2-compound siphons. We discover that the controllability for weakly and strongly compound siphons is similar, but that this no longer holds for control and mixture siphons. Some control and mixture siphons derived from strongly 2-compound siphons are not redundant; this is no longer so for those derived from weakly 2-compound siphons, where all control and mixture siphons are redundant. They need not follow the conservative policy proposed by Li and Zhou; thus, we can adopt a maximally permissive control policy even though new siphons are generated.
Spherically symmetric Einstein-aether perfect fluid models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coley, Alan A.; Latta, Joey; Leon, Genly
We investigate spherically symmetric cosmological models in Einstein-aether theory with a tilted (non-comoving) perfect fluid source. We use a 1+3 frame formalism and adopt the comoving aether gauge to derive the evolution equations, which form a well-posed system of first order partial differential equations in two variables. We then introduce normalized variables. The formalism is particularly well-suited for numerical computations and the study of the qualitative properties of the models, which are also solutions of Horava gravity. We study the local stability of the equilibrium points of the resulting dynamical system corresponding to physically realistic inhomogeneous cosmological models and astrophysical objects, with values for the parameters which are consistent with current constraints. In particular, we consider dust models in (β−) normalized variables, derive a reduced (closed) evolution system, and obtain the general evolution equations for the spatially homogeneous Kantowski-Sachs models using appropriate bounded normalized variables. We then analyse these models, with special emphasis on the future asymptotic behaviour for different values of the parameters. Finally, we investigate static models for a mixture of a (necessarily non-tilted) perfect fluid with a barotropic equation of state and a scalar field.
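The local stability analysis follows the standard dynamical-systems recipe of linearizing at equilibrium points; a toy two-dimensional system (not the paper's Einstein-aether equations) illustrates the workflow:

```python
# Generic equilibrium-point stability check: solve for fixed points,
# evaluate the Jacobian there, and inspect eigenvalue real parts.
# The system below is illustrative only.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Matrix([x * (1 - x) - x * y,            # toy evolution equations
               y * (x - sp.Rational(1, 2))])
eq_points = sp.solve(f, [x, y], dict=True)
J = f.jacobian([x, y])
for p in eq_points:
    eigs = J.subs(p).eigenvals()
    stable = all(sp.re(ev) < 0 for ev in eigs)
    print(p, dict(eigs), 'stable' if stable else 'not asymptotically stable')
```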
Roush, W B; Boykin, D; Branton, S L
2004-08-01
A mixture experiment, a variant of response surface methodology, was designed to determine the proportion of time to feed broiler starter (23% protein), grower (20% protein), and finisher (18% protein) diets to optimize production and processing variables based on a total production time of 48 d. Mixture designs are useful for proportion problems where the components of the experiment (i.e., the lengths of time the diets were fed) sum to a fixed total (here, 48 d). The experiment was conducted with day-old male Ross x Ross broiler chicks. The birds were placed 50 per pen in each of 60 pens. The experimental design was a 10-point augmented simplex-centroid (ASC) design with 6 replicates of each point. Each design point represented the portion(s) of the 48 d that each of the diets was fed. Formulation of the diets was based on NRC standards. At 49 d, each pen of birds was evaluated for production data including BW, feed conversion, and cost of feed consumed. Then, 6 birds were randomly selected from each pen for processing data. Processing variables included live weight, hot carcass weight, dressing percentage, fat pad percentage, and breast yield (pectoralis major and pectoralis minor weights). Production and processing data were fit to simplex regression models. Model terms determined not to be significant (P > 0.05) were removed. The models were found to be statistically adequate for analysis of the response surfaces. A compromise solution was calculated based on optimal constraints designated for the production and processing data. The results indicated that broilers fed a starter and finisher diet for 30 and 18 d, respectively, would meet the production and processing constraints. Trace plots showed that the production and processing variables were not very sensitive to the grower diet.
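A Scheffé-type simplex regression of the kind fit here has no intercept and uses the component proportions plus their pairwise products as regressors; a minimal sketch with made-up responses at ASC-style design points:

```python
# Scheffé quadratic mixture model sketch (illustrative numbers, not the
# broiler data): proportions of starter/grower/finisher time sum to 1.
import numpy as np

X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
              [1/3, 1/3, 1/3], [2/3, 1/6, 1/6],
              [1/6, 2/3, 1/6], [1/6, 1/6, 2/3]])     # simplex design points
y = np.array([2.10, 1.95, 1.88, 2.02, 1.97, 1.90,
              1.96, 2.04, 1.94, 1.91])               # e.g., feed conversion
x1, x2, x3 = X.T
design = np.column_stack([x1, x2, x3,                # linear blending terms
                          x1*x2, x1*x3, x2*x3])      # nonlinear blending terms
coef, *_ = np.linalg.lstsq(design, y, rcond=None)    # no intercept in Scheffé form
print(np.round(coef, 3))
```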
Multilevel Mixture Kalman Filter
NASA Astrophysics Data System (ADS)
Guo, Dong; Wang, Xiaodong; Chen, Rong
2004-12-01
The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning by sampling from the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in a lower-level sampling space, until reaching the desired sampling space. Such a multilevel sampling scheme can be used in conjunction with delayed estimation methods, such as the delayed-sample method, resulting in the delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.
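A compact scalar-state sketch of the underlying (single-level) mixture Kalman filter recursion is shown below, with a two-valued indicator choosing the observation gain; the paper's multilevel scheme would replace the direct indicator draw with hierarchical sampling, and all model settings here are illustrative:

```python
# Single-level mixture Kalman filter sketch: per-particle Kalman updates
# with the discrete indicator sampled from its optimal proposal.
import numpy as np

rng = np.random.default_rng(1)
a, q, r = 0.9, 0.5, 0.2                  # state transition, state/obs noise variances
c_vals = np.array([1.0, -1.0])           # observation gain per indicator value
T, N = 50, 200                           # time steps, particles

y = rng.normal(size=T)                   # stand-in observations
m, P = np.zeros(N), np.ones(N)           # per-particle Kalman mean/variance
logw = np.zeros(N)
for t in range(T):
    m_pred, P_pred = a * m, a * a * P + q
    # predictive likelihood of y[t] under each indicator value, shape (2, N)
    like = np.stack([np.exp(-0.5 * (y[t] - c * m_pred) ** 2
                            / (c * c * P_pred + r))
                     / np.sqrt(2 * np.pi * (c * c * P_pred + r))
                     for c in c_vals])
    probs = like / like.sum(axis=0)                 # optimal proposal over indicators
    s = (rng.random(N) < probs[1]).astype(int)      # sample indicator per particle
    c = c_vals[s]
    logw += np.log(0.5 * like.sum(axis=0))          # SIS weight update (uniform prior)
    K = c * P_pred / (c * c * P_pred + r)           # Kalman gain given indicator
    m = m_pred + K * (y[t] - c * m_pred)
    P = (1 - c * K) * P_pred
w = np.exp(logw - logw.max()); w /= w.sum()
print('posterior state mean estimate:', np.sum(w * m))
```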
Zhang, Xia; Hu, Changqin
2017-09-08
Penicillins are typical of complex ionic samples, which can contain a large number of degradation-related impurities (DRIs) with different polarities and charge properties. It is often a challenge to develop selective and robust high performance liquid chromatography (HPLC) methods for the efficient separation of all DRIs. In this study, an analytical quality by design (AQbD) approach was proposed for stability-indicating method development of cloxacillin. Rules relating the structures, retention behavior, and UV characteristics of penicillins and their impurities were summarized and served as useful prior knowledge. Through quality risk assessment and a screening design, 3 critical process parameters (CPPs) were defined, including 2 mixture variables (MVs) and 1 process variable (PV). A combined mixture-process variable (MPV) design was conducted to evaluate the 3 CPPs simultaneously, and a response surface methodology (RSM) was used to achieve the optimal experiment parameters. A dual gradient elution was performed to change buffer pH, mobile-phase type and strength simultaneously. The design spaces (DSs) were evaluated using Monte Carlo simulation to estimate the probability of meeting the specifications of the critical quality attributes (CQAs). A Plackett-Burman design was performed to test the robustness around the working points and to decide the normal operating ranges (NORs). Finally, validation was performed following International Conference on Harmonisation (ICH) guidelines. To our knowledge, this is the first study of using an MPV design and dual gradient elution to develop HPLC methods and improve separations for complex ionic samples. Copyright © 2017 Elsevier B.V. All rights reserved.
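The Monte Carlo design-space step can be sketched generically (the coefficients, covariance, and resolution specification below are hypothetical, not the cloxacillin method's values): propagate uncertainty in the fitted response-surface coefficients and report the probability that a CQA meets its specification at a candidate operating point:

```python
# Generic Monte Carlo design-space check with hypothetical model terms.
import numpy as np

rng = np.random.default_rng(2)
beta = np.array([1.8, 0.6, -0.4])         # fitted response-surface coefficients
cov = np.diag([0.02, 0.01, 0.01])         # their estimated covariance

def p_meeting_spec(x, n_sim=10_000, spec=1.5):
    """P(predicted resolution >= spec) at operating point x."""
    draws = rng.multivariate_normal(beta, cov, size=n_sim)
    resolution = draws @ x
    return np.mean(resolution >= spec)

# scan two candidate working points (intercept, pH code, gradient code)
for x in ([1.0, 0.5, 0.2], [1.0, -0.5, 0.8]):
    print(x, round(p_meeting_spec(np.asarray(x)), 3))
```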
Polizzotti, Brian D; Thomson, Lindsay M; O'Connell, Daniel W; McGowan, Francis X; Kheir, John N
2014-08-01
Tissue hypoxia is a final common pathway that leads to cellular injury and death in a number of critical illnesses. Intravenous injections of self-assembling, lipid-based oxygen microbubbles (LOMs) can be used to deliver oxygen gas, preventing organ injury and death from systemic hypoxemia. However, current formulations exhibit high polydispersity indices (which may lead to microvascular obstruction) and poor shelf-lives, limiting the translational capacity of LOMs. In this study, we report our efforts to optimize LOM formulations using a mixture response surface methodology (mRSM). We study the effect of changing excipient proportions (the independent variables) on microbubble diameter and product loss (the dependent variables). By using mRSM analysis, the experimental data were fit using a reduced Scheffé linear mixture model. We demonstrate that formulations manufactured from 1,2-distearoyl-sn-glycero-3-phosphocholine, corn syrup, and water produce micron-sized microbubbles with low polydispersity indices, and decreased product loss (relative to previously described formulations) when stored at room temperature over a 30-day period. Optimized LOMs were subsequently tested for their oxygen-releasing ability and found to have similar release kinetics as prior formulations. © 2014 Wiley Periodicals, Inc.
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
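As a rough hands-on counterpart to the BNP infinite-mixture idea (a variational stand-in, not the package's own menu-driven MCMC machinery), a truncated Dirichlet process mixture for density estimation can be approximated with scikit-learn:

```python
# Dirichlet-process mixture density estimate via variational inference,
# sketching the BNP infinite-mixture idea on simulated data.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 0.5, 300),
                    rng.normal(1, 1.0, 700)]).reshape(-1, 1)

dpm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type='dirichlet_process',
    weight_concentration_prior=1.0,                    # DP concentration parameter
    max_iter=500, random_state=0).fit(x)
print('effective clusters:', np.sum(dpm.weights_ > 0.01))
```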
NASA Astrophysics Data System (ADS)
Bala, N.; Napiah, M.; Kamaruddin, I.; Danlami, N.
2018-04-01
In this study, the contents of polyethylene, polypropylene and nanosilica in nanocomposite-modified asphalt mixtures were modelled and optimized to obtain the quantities giving the longest fatigue life. Response Surface Methodology (RSM) was applied for the optimization based on a Box-Behnken design (BBD). Interaction effects of the independent variables (polymer and nanosilica contents) on fatigue life were evaluated. The results indicate that the individual effects of polymer and nanosilica content are both important, although nanosilica content has the more significant effect on fatigue life. Also, the mean error of the optimization results is less than 5% for all responses, indicating that the predicted values agree with the experimental results. Furthermore, it was concluded that RSM optimization is a very effective approach for designing asphalt mixtures with high-performance properties.
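The BBD/RSM workflow reduces to fitting a quadratic response surface to coded factors and maximizing it; the factor codes and fatigue-life values below are hypothetical:

```python
# Quadratic response-surface sketch for two coded factors
# (values are hypothetical, not the study's measurements).
import numpy as np
from scipy.optimize import minimize

X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], float)
y = np.array([900, 1100, 1000, 1350, 1500, 1480,
              1200, 1400, 1150, 1300], float)        # fatigue life (cycles)

def features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2], axis=-1)

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
res = minimize(lambda x: -features(np.asarray(x)) @ coef,   # maximize fitted surface
               x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print('optimum (coded):', np.round(res.x, 2), 'predicted life:', round(-res.fun))
```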
NASA Astrophysics Data System (ADS)
Xie, Dexuan; Jiang, Yi
2018-05-01
This paper reports a nonuniform ionic size nonlocal Poisson-Fermi double-layer model (nuNPF) and a uniform ionic size nonlocal Poisson-Fermi double-layer model (uNPF) for an electrolyte mixture of multiple ionic species, variable voltages on electrodes, and variable induced charges on boundary segments. The finite element solvers of nuNPF and uNPF are developed and applied to typical double-layer tests defined on a rectangular box, a hollow sphere, and a hollow rectangle with a charged post. Numerical results show that nuNPF can significantly improve the quality of the ionic concentrations and electric fields generated from uNPF, implying that the effect of nonuniform ion sizes is a key consideration in modeling the double-layer structure.
Mixture models for protein structure ensembles.
Hirsch, Michael; Habeck, Michael
2008-10-01
Protein structure ensembles provide important insight into the dynamics and function of a protein and contain information that is not captured with a single static structure. However, it is not clear a priori to what extent the variability within an ensemble is caused by internal structural changes. Additional variability results from overall translations and rotations of the molecule. And most experimental data do not provide information to relate the structures to a common reference frame. To report meaningful values of intrinsic dynamics, structural precision, conformational entropy, etc., it is therefore important to disentangle local from global conformational heterogeneity. We consider the task of disentangling local from global heterogeneity as an inference problem. We use probabilistic methods to infer from the protein ensemble missing information on reference frames and stable conformational sub-states. To this end, we model a protein ensemble as a mixture of Gaussian probability distributions of either entire conformations or structural segments. We learn these models from a protein ensemble using the expectation-maximization algorithm. Our first model can be used to find multiple conformers in a structure ensemble. The second model partitions the protein chain into locally stable structural segments or core elements and less structured regions typically found in loops. Both models are simple to implement and contain only a single free parameter: the number of conformers or structural segments. Our models can be used to analyse experimental ensembles, molecular dynamics trajectories and conformational change in proteins. The Python source code for protein ensemble analysis is available from the authors upon request.
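The conformer-finding model is, at its core, EM fitting of a Gaussian mixture over (superposed) conformation vectors; a sketch on synthetic coordinates, with the number of conformers as the single free parameter:

```python
# EM-fitted Gaussian mixture over flattened conformations (synthetic
# coordinates; real use would first superpose the structures).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
n_atoms = 20
state_a = rng.normal(size=3 * n_atoms)               # two underlying conformers
state_b = state_a + rng.normal(0, 1.0, 3 * n_atoms)
ensemble = np.vstack([s + rng.normal(0, 0.1, (50, 3 * n_atoms))
                      for s in (state_a, state_b)])

gmm = GaussianMixture(n_components=2, covariance_type='spherical',
                      random_state=0).fit(ensemble)  # EM under the hood
labels = gmm.predict(ensemble)
print('members per conformer:', np.bincount(labels))
```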
Mixture experiment methods in the development and optimization of microemulsion formulations.
Furlanetto, S; Cirri, M; Piepel, G; Mennini, N; Mura, P
2011-06-25
Microemulsion formulations represent an interesting delivery vehicle for lipophilic drugs, improving their solubility and dissolution properties. This work developed effective microemulsion formulations using glyburide (a very poorly-water-soluble hypoglycaemic agent) as a model drug. First, the region of stable microemulsion (ME) formation was identified using a new approach based on mixture experiment methods. A 13-run mixture design was carried out in an experimental region defined by constraints on three components: aqueous, oil and surfactant/cosurfactant. The transmittance percentage (at 550 nm) of ME formulations (indicative of their transparency and thus of their stability) was chosen as the response variable. The results obtained using the mixture experiment approach corresponded well with those obtained using the traditional approach based on pseudo-ternary phase diagrams. However, the mixture experiment approach required far less experimental effort than the traditional approach. A subsequent 13-run mixture experiment, in the region of stable MEs, was then performed to identify the optimal formulation (i.e., having the best glyburide dissolution properties). Percent drug dissolved and dissolution efficiency were selected as the responses to be maximized. The ME formulation optimized via the mixture experiment approach consisted of 78% surfactant/cosurfactant (a mixture of Tween 20 and Transcutol, 1:1, v/v), 5% oil (Labrafac Hydro) and 17% aqueous phase (water). The stable region of MEs was identified using mixture experiment methods for the first time. Copyright © 2011 Elsevier B.V. All rights reserved.
Appropriate statistical analyses are critical for evaluating interactions of mixtures with a common mode of action, as is often the case for cumulative risk assessments. Our objective is to develop analyses for use when a response variable is ordinal, and to test for interaction...
ERIC Educational Resources Information Center
Donoghue, John R.
A Monte Carlo study compared the usefulness of six variable weighting methods for cluster analysis. Data were 100 bivariate observations from 2 subgroups, generated according to a finite normal mixture model. Subgroup size, within-group correlation, within-group variance, and distance between subgroup centroids were manipulated. Of the clustering…
Modeling eutrophic lakes: From mass balance laws to ordinary differential equations
NASA Astrophysics Data System (ADS)
Marasco, Addolorata; Ferrara, Luciano; Romano, Antonio
Starting from integral balance laws, a model based on nonlinear ordinary differential equations (ODEs) describing the evolution of the phosphorus cycle in a lake is proposed. After showing that the usual homogeneous model is not compatible with the mixture theory, we prove that an ODE model still holds for the mean values of the state variables, provided that the nonhomogeneous fields involved satisfy suitable conditions. In this model the trophic state of a lake is described by the mean densities of phosphorus in water and sediments, and by the phytoplankton biomass. All the quantities appearing in the model can be experimentally evaluated. To propose restoration programs, the evolution of these state variables toward stable steady state conditions is analyzed. Moreover, the local stability analysis is performed with respect to all the model parameters. Some numerical simulations and a real application to Lake Varese conclude the paper.
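A toy two-compartment version of such an ODE system (water-column and sediment phosphorus, with hypothetical rate constants) integrates directly with scipy:

```python
# Toy phosphorus-cycle ODEs (hypothetical rates, not the paper's model):
# water-column P exchanges with sediment P under constant external loading.
import numpy as np
from scipy.integrate import solve_ivp

load, settle, release, burial = 2.0, 0.3, 0.05, 0.01

def rhs(t, p):
    pw, ps = p                      # mean P density in water, sediments
    return [load - settle * pw + release * ps,
            settle * pw - (release + burial) * ps]

sol = solve_ivp(rhs, (0.0, 1000.0), [5.0, 50.0])
print('steady state approx:', np.round(sol.y[:, -1], 2))
```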
NASA Astrophysics Data System (ADS)
Hutter, Kolumban; Schneider, Lukas
2010-06-01
This article points to some critical issues connected with the theoretical formulation of the thermodynamics of solid-fluid mixtures of frictional materials. It is our view that a complete thermodynamic exploitation of the second law of thermodynamics is necessary to obtain the proper parameterizations of the constitutive quantities in such theories. These issues are explained in detail in a recently published book by Schneider and Hutter (Solid-Fluid Mixtures of Frictional Materials in Geophysical and Geotechnical Context, 2009), which we wish to advertise with these notes. The model is a saturated mixture of an arbitrary number of solid and fluid constituents which may be compressible or density preserving, which exhibit visco-frictional (visco-hypoplastic) behavior, but are all subject to the same temperature. Mass exchange between the constituents may account for particle size separation and phase changes due to fragmentation and abrasion. Destabilization of a saturated soil mass from the pre- and the post-critical phases of a catastrophic motion from initiation to deposition is modeled by symmetric tensorial variables which are related to the rate independent parts of the constituent stress tensors.
Diversifying mechanisms in the on-farm evolution of crop mixtures.
Thomas, Mathieu; Thépot, Stéphanie; Galic, Nathalie; Jouanne-Pin, Sophie; Remoué, Carine; Goldringer, Isabelle
2015-06-01
While modern agriculture relies on genetic homogeneity, diversifying practices associated with seed exchange and seed recycling may allow crops to adapt to their environment. This socio-genetic model is an original experimental evolution design referred to as on-farm dynamic management of crop diversity. Investigating such a model can help in understanding how evolutionary mechanisms shape crop diversity under diverse agro-environments. We studied a French farmer-led initiative where a mixture of four wheat landraces called 'Mélange de Touselles' (MDT) was created and circulated within a farmers' network. The 15 sampled MDT subpopulations were simultaneously exposed to diverse environments (e.g. altitude, rainfall) and diverse farmers' practices (e.g. field size, sowing and harvesting date). Twenty-one space-time samples of 80 individuals each were genotyped using 17 microsatellite markers and characterized for their heading date in a 'common-garden' experiment. Gene polymorphism was studied using four markers located in earliness genes. An original network-based approach was developed to depict the particular and complex genetic structure of the landraces composing the mixture. Rapid differentiation among populations within the mixture was detected, larger at the phenotypic and gene levels than at the neutral genetic level, indicating potential divergent selection. We identified two interacting selection processes, variation in the mixture component frequencies and evolution of within-variety diversity, that together shaped the standing variability available within the mixture. These results confirmed that diversifying practices and environments maintain genetic diversity and allow for crop evolution in the context of global change. Including concrete measurements of farmers' practices is critical to disentangle crop evolution processes. © 2015 John Wiley & Sons Ltd.
Mooneyham, T.; Jeyaratnam, J.; Schultz, T. W.; Pöch, G.
2011-01-01
Four ethyl α-halogenated acetates were tested in (1) sham and (2) nonsham combinations and (3) with a nonreactive nonpolar narcotic. Ethyl iodoacetate (EIAC), ethyl bromoacetate (EBAC), ethyl chloroacetate (ECAC), and ethyl fluoroacetate (EFAC), each considered to be an SN2-H-polar soft electrophile, were selected for testing based on their differences in electro(nucleo)philic reactivity and time-dependent toxicity (TDT). Agent reactivity was assessed using the model nucleophile glutathione, with EIAC and EBAC showing rapid reactivity, ECAC being less reactive, and EFAC lacking reactivity at ≤250 mM. The model nonpolar narcotic, 3-methyl-2-butanone (3M2B), was not reactive. Toxicity of the agents alone and in mixture was assessed using the Microtox acute toxicity test at three exposure durations: 15, 30 and 45 min. Two of the agents alone (EIAC and EBAC) had TDT values >100%. In contrast, ECAC (74 to 99%) and EFAC (9 to 12%) had partial TDT, whereas 3M2B completely lacked TDT (<0%). In mixture testing, sham combinations of each agent showed a combined effect consistent with predicted effects for dose-addition at each time point, as judged by EC50 dose-addition quotient values. Mixture toxicity results for nonsham ethyl acetate combinations were variable, with some mixtures being inconsistent with the predicted effects for dose-addition and/or independence. The ethyl acetate–3M2B combinations were somewhat more toxic than predicted for dose-addition, a finding differing from that observed previously for α-halogenated acetonitriles with 3M2B. PMID:21452006
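The dose-addition benchmark used to judge the sham combinations follows from concentration addition; a minimal sketch with hypothetical EC50 values shows the calculation:

```python
# Dose-addition (concentration addition) prediction for a fixed-ratio
# binary mixture; EC50s here are hypothetical, not the Microtox values.
ec50 = {'EBAC': 0.8, 'ECAC': 5.0}      # mM, hypothetical potencies
frac = {'EBAC': 0.5, 'ECAC': 0.5}      # mixture fractions

# 1 / EC50_mix = sum_i f_i / EC50_i
ec50_mix = 1.0 / sum(frac[a] / ec50[a] for a in ec50)
print(f'predicted mixture EC50: {ec50_mix:.2f} mM')

# dose-addition quotient: observed vs predicted EC50 (~1 implies additivity)
observed = 1.5                          # hypothetical observed mixture EC50
print('quotient:', round(observed / ec50_mix, 2))
```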
NASA Astrophysics Data System (ADS)
Iverson, Richard M.; Denlinger, Roger P.
2001-01-01
Rock avalanches, debris flows, and related phenomena consist of grain-fluid mixtures that move across three-dimensional terrain. In all these phenomena the same basic forces govern motion, but differing mixture compositions, initial conditions, and boundary conditions yield varied dynamics and deposits. To predict motion of diverse grain-fluid masses from initiation to deposition, we develop a depth-averaged, three-dimensional mathematical model that accounts explicitly for solid- and fluid-phase forces and interactions. Model input consists of initial conditions, path topography, basal and internal friction angles of solid grains, viscosity of pore fluid, mixture density, and a mixture diffusivity that controls pore pressure dissipation. Because these properties are constrained by independent measurements, the model requires little or no calibration and yields readily testable predictions. In the limit of vanishing Coulomb friction due to persistent high fluid pressure the model equations describe motion of viscous floods, and in the limit of vanishing fluid stress they describe one-phase granular avalanches. Analysis of intermediate phenomena such as debris flows and pyroclastic flows requires use of the full mixture equations, which can simulate interaction of high-friction surge fronts with more-fluid debris that follows. Special numerical methods (described in the companion paper) are necessary to solve the full equations, but exact analytical solutions of simplified equations provide critical insight. An analytical solution for translational motion of a Coulomb mixture accelerating from rest and descending a uniform slope demonstrates that steady flow can occur only asymptotically. A solution for the asymptotic limit of steady flow in a rectangular channel explains why shear may be concentrated in narrow marginal bands that border a plug of translating debris. Solutions for static equilibrium of source areas describe conditions of incipient slope instability, and other static solutions show that nonuniform distributions of pore fluid pressure produce bluntly tapered vertical profiles at the margins of deposits. Simplified equations and solutions may apply in additional situations identified by a scaling analysis. Assessment of dimensionless scaling parameters also reveals that miniature laboratory experiments poorly simulate the dynamics of full-scale flows in which fluid effects are significant. Therefore large geophysical flows can exhibit dynamics not evident at laboratory scales.
Application of fuzzy logic in multicomponent analysis by optodes.
Wollenweber, M; Polster, J; Becker, T; Schmidt, H L
1997-01-01
Fuzzy logic can be a useful tool for the determination of substrate concentrations using optode arrays in combination with flow injection analysis, UV-VIS spectroscopy and kinetics. The transient diffuse reflectance spectra in the visible wavelength region from four optodes were evaluated for the simultaneous determination of ampicillin and penicillin in artificial mixtures. The discrimination of the samples was achieved by changing the composition of the receptor gel and the working pH. Different pre-processing algorithms were applied to the data to reduce the spectral information to a few analyte-specific variables. These variables were used to develop the fuzzy model. After calibration, the model was validated with an independent test data set.
Li, Xingang; Gao, Yujie; Ding, Hui
2013-10-01
The lead removal from the metallic mixture of waste printed circuit boards by vacuum distillation was optimized using experimental design, and a mathematical model was established to elucidate the removal mechanism. The variables studied in lead evaporation consisted of the chamber pressure, heating temperature, heating time, particle size and initial mass. The low-level chamber pressure was fixed at 0.1 Pa as the operation pressure. The application of two-level factorial design generated a first-order polynomial that agreed well with the data for evaporation efficiency of lead. The heating temperature and heating time exhibited significant effects on the efficiency, which was validated by means of the copper-lead mixture experiments. The optimized operating conditions within the region studied were the chamber pressure of 0.1 Pa, heating temperature of 1023 K and heating time of 120 min. After the conditions were employed to remove lead from the metallic mixture of waste printed circuit boards, the efficiency was 99.97%. The mechanism of the effects was elucidated by mathematical modeling that deals with evaporation, mass transfer and condensation, and can be applied to a wider range of metal removal by vacuum distillation. Copyright © 2013 Elsevier Ltd. All rights reserved.
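The first-order polynomial from a two-level factorial design can be recovered by ordinary least squares on coded factors; the efficiencies below are hypothetical, not the reported measurements:

```python
# Two-level factorial sketch for heating temperature and time (coded -1/+1);
# with four runs and four terms this saturated design fits exactly.
import numpy as np

temp = np.array([-1, 1, -1, 1])           # heating temperature code
time = np.array([-1, -1, 1, 1])           # heating time code
eff = np.array([62.0, 88.0, 75.0, 99.5])  # evaporation efficiency, % (hypothetical)

design = np.column_stack([np.ones(4), temp, time, temp * time])
coef, *_ = np.linalg.lstsq(design, eff, rcond=None)
print(dict(zip(['mean', 'temp', 'time', 'temp:time'], np.round(coef, 2))))
```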
Asymptotic modeling of flows of a mixture of two monoatomic gases in a coplanar microchannel
NASA Astrophysics Data System (ADS)
Gatignol, Renée; Croizet, Cédric
2016-11-01
Gas mixtures are present in a number of microsystems, such as heat exchangers, propulsion systems, and so on. This paper aims to describe some basic physical phenomena of flows of a mixture of two monoatomic gases in a coplanar microchannel. Gas flows are described by the Navier-Stokes-Fourier equations with coupling terms, and with first order boundary conditions for the velocities and the temperatures on the microchannel walls. With the small parameter equal to the ratio of the transverse and longitudinal lengths, an asymptotic model was presented at the 29th Symposium on Rarefied Gas Dynamics. It corresponds to a low Mach number and a low to moderate Knudsen number. First-order differential equations for mass, momentum and energy have been written. For each species, the pressure depends only on the longitudinal variable and the temperature is equal to the wall temperature (the two walls have the same temperature). Both pressures are solutions of ordinary differential equations. Results are given on the longitudinal profile of both pressures and on the longitudinal velocities, for different binary mixtures, and for the cases of isothermal and thermal regimes. Asymptotic solutions are compared to DSMC simulations in the same configuration: they are roughly in agreement.
Realized Volatility Analysis in A Spin Model of Financial Markets
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
We calculate the realized volatility of returns in the spin model of financial markets and examine the returns standardized by the realized volatility. We find that the moments of the standardized returns agree with the theoretical values for standard normal variables. This is the first evidence that the return distributions of the spin financial markets are consistent with a finite-variance mixture of normal distributions, as is also observed empirically in real financial markets.
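The calculation itself is compact; a sketch on simulated intraday returns (not the spin-model output) computes realized volatility and checks the moments of the standardized returns against the standard normal:

```python
# Realized volatility and standardized returns on simulated data:
# standardized returns should look approximately N(0, 1).
import numpy as np

rng = np.random.default_rng(5)
n_days, n_intraday = 500, 100
sigma = np.exp(rng.normal(0, 0.3, n_days))            # stochastic daily volatility
intra = rng.normal(0, sigma[:, None] / np.sqrt(n_intraday),
                   (n_days, n_intraday))              # intraday returns
daily = intra.sum(axis=1)
rv = np.sqrt((intra ** 2).sum(axis=1))                # realized volatility
z = daily / rv                                        # standardized returns
print('mean %.3f  var %.3f  kurtosis %.3f'
      % (z.mean(), z.var(), ((z - z.mean()) ** 4).mean() / z.var() ** 2))
```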
A model for predicting thermal properties of asphalt mixtures from their constituents
NASA Astrophysics Data System (ADS)
Keller, Merlin; Roche, Alexis; Lavielle, Marc
Numerous theoretical and experimental approaches have been developed to predict the effective thermal conductivity of composite materials such as polymers, foams, epoxies, soils and concrete, but none of these models has been applied to asphalt concrete. This study attempts to develop a model that predicts the thermal conductivity of asphalt concrete from its constituents, which would benefit the asphalt industry by reducing the cost and time of laboratory testing: testing would no longer be required if a pavement mix with the desired thermal properties could be created at the design stage by selecting the correct constituents. Six existing predictive models were investigated for applicability to asphalt mixtures, and four standard mathematical techniques were used to develop a regression model to predict the effective thermal conductivity. The effective thermal conductivities of 81 asphalt specimens were used as the response variables, and the thermal conductivities and volume fractions of their constituents were used as the predictors. The statistical analyses showed that the measured thermal conductivities of the mixtures are affected by the bitumen and aggregate content, but not by the air content. Conversely, the predictions of some of the investigated models are highly sensitive to air voids, but not to bitumen and/or aggregate content. Additionally, comparison of the experimental with the analytical data showed that none of the existing models gave satisfactory results; on the other hand, two regression models (Exponential 1* and Linear 3*) are promising for asphalt concrete.
Evaluation of flamelet/progress variable model for laminar pulverized coal combustion
NASA Astrophysics Data System (ADS)
Wen, Xu; Wang, Haiou; Luo, Yujuan; Luo, Kun; Fan, Jianren
2017-08-01
In the present work, the flamelet/progress variable (FPV) approach based on two mixture fractions is formulated for pulverized coal combustion and then evaluated in laminar counterflow coal flames under different operating conditions through both a priori and a posteriori analyses. Two mixture fractions, Zvol and Zchar, are defined to characterize the mixing between the oxidizer and the volatile matter/char reaction products. A coordinate transformation is conducted to map the flamelet solutions from a unit triangle space (Zvol, Zchar) to a unit square space (Z, X) so that a more stable solution can be achieved. To consider the heat transfers between the coal particle phase and the gas phase, the total enthalpy is introduced as an additional manifold. As a result, the thermo-chemical quantities are parameterized as a function of the mixture fraction Z, the mixing parameter X, the normalized total enthalpy Hnorm, and the reaction progress variable YPV. The validity of the flamelet chemtable and the selected trajectory variables is first evaluated in a priori tests by comparing the tabulated quantities with the results obtained from numerical simulations with detailed chemistry. The comparisons show that the major species mass fractions can be predicted by the FPV approach in all combustion regions for all operating conditions, while the CO and H2 mass fractions are over-predicted in the premixed flame reaction zone. The a posteriori study shows that overall good agreement between the FPV results and those obtained from detailed chemistry simulations can be achieved, although the coal particle ignition is predicted to be slightly earlier. Overall, the validity of the FPV approach for laminar pulverized coal combustion is confirmed and its performance in turbulent pulverized coal combustion will be tested in future work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pattarino, Franco; Piepel, Gregory F.; Rinaldi, Maurizio
The Foglio Bonda et al. (2016) (henceforth FB) paper discussed the use of mixture experiment design and modeling methods to study how the proportions of three components in an extemporaneous oral suspension affected the mean diameter of drug particles (the response variable of interest). The three components were itraconazole (ITZ), Tween 20 (TW20), and Methocel® E5 (E5). After publication of the FB paper, the second author of this corrigendum (not an author of the original paper) contacted the corresponding author to point out some errors as well as insufficient explanations in parts of the paper. This corrigendum was prepared to address these issues. The authors of the original paper apologize for any inconveniences to readers.
Variable selection for distribution-free models for longitudinal zero-inflated count responses.
Chen, Tian; Wu, Pan; Tang, Wan; Zhang, Hui; Feng, Changyong; Kowalski, Jeanne; Tu, Xin M
2016-07-20
Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm, rather than the exception, in the era of patient-centered outcomes research. Such moderation analysis in general creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts render them a great choice for addressing these important and timely issues in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.
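The SCAD penalty enters through its univariate thresholding operator (Fan and Li's rule, conventionally with a = 3.7); the sketch below shows the operator itself rather than the paper's estimating-equation machinery:

```python
# SCAD univariate thresholding operator (Fan & Li), the building block of
# SCAD-penalized estimation; z is an unpenalized coefficient estimate.
import numpy as np

def scad_threshold(z, lam, a=3.7):
    z = np.asarray(z, float)
    soft = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)    # |z| <= 2*lam: soft threshold
    mid = ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)    # 2*lam < |z| <= a*lam
    return np.where(np.abs(z) <= 2 * lam, soft,
                    np.where(np.abs(z) <= a * lam, mid, z)) # |z| > a*lam: no shrinkage

print(scad_threshold([0.5, 1.5, 3.0, 8.0], lam=1.0))
```

Small estimates are set to zero, moderate ones are shrunk, and large ones are left untouched, which is what gives SCAD its oracle-type behavior relative to the uniformly biased LASSO.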
Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong
2014-06-01
Emission sources of volatile organic compounds (VOCs*) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) evaluate mixtures (including dependencies), and (3) identify determinants of VOC exposure. METHODS VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999-2001) and the National Health and Nutrition Examination Survey (NHANES; 1999-2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods. Specific Aim 1. 
To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model's goodness of fit. Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data. Specific Aim 2. Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture's components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations. Specific Aim 3. Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs). Specific Aim 1. Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10(-4), and 13% of all participants had risk levels above 10(-2). Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models. DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture distributions, adaptively selecting the number of needed clusters, and providing uncertainty estimates. Although the results apply to the RIOPA data set, GEV distributions and mixture models appear more broadly applicable. These models can be used to simulate VOC distributions, which are neither normally nor lognormally distributed, and they accurately represent the highest exposures, which may have the greatest health significance. Specific Aim 2. Four VOC mixtures were identified and apportioned by PMF; they represented gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection byproducts, and cleaning products and odorants. The last mixture (cleaning products and odorants) accounted for the largest fraction of an individual's total exposure (average of 42% across RIOPA participants). Often, a single compound dominated a mixture but the mixture fractions were heterogeneous; that is, the fractions of the compounds changed with the concentration of the mixture. 
Three VOC mixtures were identified by toxicologic mode of action and represented VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10(-3) for about 10% of RIOPA participants. The dependency structures of the VOC mixtures in the RIOPA data set fitted Gumbel (two mixtures) and t copulas (four mixtures). These copula types emphasize dependencies found in the upper and lower tails of a distribution. The copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions. Specific Aim 3. In an analysis focused on the home environment and the outdoor (close to home) environment, home VOC concentrations dominated personal exposures (66% to 78% of the total exposure, depending on VOC); this was largely the result of the amount of time participants spent at home and the fact that indoor concentrations were much higher than outdoor concentrations for most VOCs. In a different analysis focused on the sources inside the home and outside (but close to the home), it was assumed that 100% of VOCs from outside sources would penetrate the home. Outdoor VOC sources accounted for 5% (d-limonene) to 81% (carbon tetrachloride [CTC]) of the total exposure. Personal exposure and indoor measurements had similar determinants depending on the VOC. Gasoline-related VOCs (e.g., benzene and methyl tert-butyl ether [MTBE]) were associated with city, residences with attached garages, pumping gas, wind speed, and home air exchange rate (AER). Odorant and cleaning-related VOCs (e.g., 1,4-DCB and chloroform) also were associated with city, and a residence's AER, size, and family members showering. Dry-cleaning and industry-related VOCs (e.g., tetrachloroethylene [or perchloroethylene, PERC] and trichloroethylene [TCE]) were associated with city, type of water supply to the home, and visits to the dry cleaner. These and other relationships were significant, they explained from 10% to 40% of the variance in the measurements, and are consistent with known emission sources and those reported in the literature. Outdoor concentrations of VOCs had only two determinants in common: city and wind speed. Overall, personal exposure was dominated by the home setting, although a large fraction of indoor VOC concentrations were due to outdoor sources. City of residence, personal activities, household characteristics, and meteorology were significant determinants. Concentrations in RIOPA were considerably lower than levels in the nationally representative NHANES for all VOCs except MTBE and 1,4-DCB. Differences between RIOPA and NHANES results can be explained by contrasts between the sampling designs and staging in the two studies, and by differences in the demographics, smoking, employment, occupations, and home locations. (ABSTRACT TRUNCATED)
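The extreme value workflow of Specific Aim 1 can be sketched with scipy; the exposures below are simulated stand-ins for the RIOPA measurements, and the quantile mapping assumes the GEV was fit to the top 10% of values:

```python
# GEV vs lognormal fit to the top 10% of simulated exposures
# (stand-in data, not RIOPA measurements).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
expo = rng.lognormal(mean=0.0, sigma=1.2, size=2000)   # simulated VOC exposures
tail = np.sort(expo)[-200:]                            # top 10% of values

gev_params = stats.genextreme.fit(tail)
logn_params = stats.lognorm.fit(expo, floc=0)
print('GEV shape/loc/scale:', np.round(gev_params, 3))
# overall 99.9th percentile corresponds to the 99th percentile within the
# top-10% tail: (0.999 - 0.9) / 0.1 = 0.99
print('empirical:', np.quantile(expo, 0.999).round(2),
      'GEV tail:', stats.genextreme.ppf(0.99, *gev_params).round(2),
      'lognormal:', stats.lognorm.ppf(0.999, *logn_params).round(2))
```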
Speech Enhancement Using Gaussian Scale Mixture Models
Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.
2011-01-01
This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR), and those reconstructed from the estimated log-spectra produced a lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
Competency criteria and the class inclusion task: modeling judgments and justifications.
Thomas, H; Horton, J J
1997-11-01
Preschool-age children's class inclusion task responses were modeled as mixtures of different probability distributions. The main idea: different response strategies are equivalent to different probability distributions. A child is assigned cognitive strategy s if p(s) = P(child uses strategy s, given the child's observed score X = x) is the largest among the candidate strategies. The general approach is widely applicable to many settings. Both judgment and justification questions were asked. Judgment response strategies identified were subclass comparison, guessing, and inclusion logic. Children's justifications lagged their judgments in development. Although justification responses may be useful, C. J. Brainerd was largely correct: if a single response variable is to be selected, a judgments variable is likely the preferable one. But the process must be modeled to identify cognitive strategies, as B. Hodkin has demonstrated.
On the Theory of Reactive Mixtures for Modeling Biological Growth
Ateshian, Gerard A.
2013-01-01
Mixture theory, which can combine continuum theories for the motion and deformation of solids and fluids with general principles of chemistry, is well suited for modeling the complex responses of biological tissues, including tissue growth and remodeling, tissue engineering, mechanobiology of cells and a variety of other active processes. A comprehensive presentation of the equations of reactive mixtures of charged solid and fluid constituents is lacking in the biomechanics literature. This study provides the conservation laws and entropy inequality, as well as interface jump conditions, for reactive mixtures consisting of a constrained solid mixture and multiple fluid constituents. The constituents are intrinsically incompressible and may carry an electrical charge. The interface jump condition on the mass flux of individual constituents is shown to define a surface growth equation, which predicts deposition or removal of material points from the solid matrix, complementing the description of volume growth described by the conservation of mass. A formulation is proposed for the reference configuration of a body whose material point set varies with time. State variables are defined which can account for solid matrix volume growth and remodeling. Constitutive constraints are provided on the stresses and momentum supplies of the various constituents, as well as the interface jump conditions for the electrochemical potential of the fluids. Simplifications appropriate for biological tissues are also proposed, which help reduce the governing equations into a more practical format. It is shown that explicit mechanisms of growth-induced residual stresses can be predicted in this framework. PMID:17206407
Robust encoding of stimulus identity and concentration in the accessory olfactory system.
Arnson, Hannah A; Holy, Timothy E
2013-08-14
Sensory systems represent stimulus identity and intensity, but in the neural periphery these two variables are typically intertwined. Moreover, stable detection may be complicated by environmental uncertainty; stimulus properties can differ over time and circumstance in ways that are not necessarily biologically relevant. We explored these issues in the context of the mouse accessory olfactory system, which specializes in detection of chemical social cues and infers myriad aspects of the identity and physiological state of conspecifics from complex mixtures, such as urine. Using mixtures of sulfated steroids, key constituents of urine, we found that spiking responses of individual vomeronasal sensory neurons encode both individual compounds and mixtures in a manner consistent with a simple model of receptor-ligand interactions. Although typical neurons did not accurately encode concentration over a large dynamic range, from population activity it was possible to reliably estimate the log-concentration of pure compounds over several orders of magnitude. For binary mixtures, simple models failed to accurately segment the individual components, largely because of the prevalence of neurons responsive to both components. By accounting for such overlaps during model tuning, we show that, from neuronal firing, one can accurately estimate log-concentration of both components, even when tested across widely varying concentrations. With this foundation, the difference of logarithms, log A - log B = log A/B, provides a natural mechanism to accurately estimate concentration ratios. Thus, we show that a biophysically plausible circuit model can reconstruct concentration ratios from observed neuronal firing, representing a powerful mechanism to separate stimulus identity from absolute concentration.
Disentangling the effects of low pH and metal mixture toxicity on macroinvertebrate diversity
Fornaroli, Riccardo; Ippolito, Alessio; Tolkkinen, Mari J.; Mykrä, Heikki; Muotka, Timo; Balistrieri, Laurie S.; Schmidt, Travis S.
2018-01-01
One of the primary goals of biological assessment of streams is to identify which of a suite of chemical stressors is limiting their ecological potential. Elevated metal concentrations in streams are often associated with low pH, yet the effects of these two potentially limiting factors of freshwater biodiversity are rarely considered to interact beyond the effects of pH on metal speciation. Using a dataset from two continents, a biogeochemical model of the toxicity of metal mixtures (Al, Cd, Cu, Pb, Zn) and quantile regression, we addressed the relative importance of both pH and metals as limiting factors for macroinvertebrate communities. Current environmental quality standards for metals proved to be protective of stream macroinvertebrate communities and were used as a starting point to assess metal mixture toxicity. A model of metal mixture toxicity accounting for metal interactions was a better predictor of macroinvertebrate responses than a model considering individual metal toxicity. We showed that the direct limiting effect of pH on richness was of the same magnitude as that of chronic metal toxicity, independent of its influence on the availability and toxicity of metals. By accounting for the direct effect of pH on macroinvertebrate communities, we were able to determine that acidic streams supported less diverse communities than neutral streams even when metals were below no-effect thresholds. Through a multivariate quantile model, we untangled the limiting effect of both pH and metals and predicted the maximum diversity that could be expected at other sites as a function of these variables. This model can be used to identify which of the two stressors is more limiting to the ecological potential of running waters.
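The limiting-factor analysis rests on upper-quantile regression; a minimal sketch with simulated site data (variable names and effect sizes are hypothetical) models the 90th-percentile richness ceiling as a function of pH and a cumulative metal toxicity score:

```python
# Upper-quantile regression sketch (simulated site data, not the
# two-continent dataset): richness ceiling vs pH and metal mixture toxicity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({'ph': rng.uniform(4.5, 8.0, n),
                   'metal_tox': rng.exponential(0.5, n)})
# limiting-factor style data: observed richness falls below a ceiling
ceiling = 5 + 4 * (df.ph - 4.5) - 6 * df.metal_tox
df['richness'] = np.clip(ceiling * rng.uniform(0, 1, n), 0, None).round()

q90 = smf.quantreg('richness ~ ph + metal_tox', df).fit(q=0.9)
print(q90.params)
```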
USDA-ARS?s Scientific Manuscript database
Adding plant diversity to forage systems may help growers deal with increasing fertilizer costs and a more variable climate. Maintaining highly diverse forage mixtures in forage-livestock production is difficult and may warrant a closer reexamination of simpler grass-legume mixtures to achieve simi...
Effects of a Culturally Adapted HIV Prevention Intervention in Haitian Youth
Malow, Robert M.; Stein, Judith A.; McMahon, Robert C.; Dévieux, Jessy G.; Rosenberg, Rhonda; Jean-Gilles, Michèle
2009-01-01
This study assessed the impact of an 8-week community-based translation of Becoming a Responsible Teen (BART), an HIV intervention that has been shown to be effective in other at-risk adolescent populations. A sample of Haitian adolescents living in the Miami area was randomized to a general health education control group (N = 101) or the BART intervention (N = 145), which is based on the information-motivation-behavior (IMB) model. Improvement in various IMB components (i.e., attitudinal, knowledge, and behavioral skills variables) related to condom use was assessed 1 month after the intervention. Longitudinal structural equation models using a mixture of latent and measured multi-item variables indicated that the intervention significantly and positively impacted all IMB variables tested in the model. These BART intervention-linked changes reflected greater knowledge, greater intentions to use condoms in the future, higher safer sex self-efficacy, an improved attitude about condom use and an enhanced ability to use condoms after the 8-week intervention. PMID:19286123
A Bayesian approach for convex combination of two Gumbel-Barnett copulas
NASA Astrophysics Data System (ADS)
Fernández, M.; González-López, V. A.
2013-10-01
In this paper, a new Bayesian approach was applied to model the dependence between two variables of interest in public policy: "Gonorrhea Rates per 100,000 Population" and "400% Federal Poverty Level and over," with a small number of paired observations (one pair for each U.S. state). We use a mixture of Gumbel-Barnett copulas, suitable for representing situations with weak and negative dependence, which is the case treated here. The methodology even allows prediction of the dependence between the variables from one year to the next, showing whether there was any alteration in the dependence.
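The Gumbel-Barnett copula is commonly written C(u, v) = uv exp(-theta ln u ln v) with theta in (0, 1], which permits negative dependence. A minimal sketch of a convex combination of two such copulas follows; the weight w and the two theta values are illustrative placeholders, not the posterior estimates from the paper's Bayesian analysis.

```python
import numpy as np

def gumbel_barnett_cdf(u, v, theta):
    """Gumbel-Barnett copula C(u, v) = u * v * exp(-theta * ln(u) * ln(v)),
    theta in (0, 1]; captures weak and negative dependence."""
    return u * v * np.exp(-theta * np.log(u) * np.log(v))

def mixture_cdf(u, v, w, theta1, theta2):
    # Convex combination of two Gumbel-Barnett copulas; w is the mixture weight.
    return (w * gumbel_barnett_cdf(u, v, theta1)
            + (1 - w) * gumbel_barnett_cdf(u, v, theta2))

u, v = 0.3, 0.7
print(mixture_cdf(u, v, w=0.4, theta1=0.2, theta2=0.9))
print(u * v)  # independence copula, for comparison
```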
A Diffuse Interface Model with Immiscibility Preservation
Tiwari, Arpit; Freund, Jonathan B.; Pantano, Carlos
2013-01-01
A new, simple, and computationally efficient interface capturing scheme based on a diffuse interface approach is presented for simulation of compressible multiphase flows. Multi-fluid interfaces are represented using field variables (interface functions) with associated transport equations that are augmented, with respect to an established formulation, to enforce a selected interface thickness. The resulting interface region can be set just thick enough to be resolved by the underlying mesh and numerical method, yet thin enough to provide an efficient model for dynamics of well-resolved scales. A key advance in the present method is that the interface regularization is asymptotically compatible with the thermodynamic mixture laws of the mixture model upon which it is constructed. It incorporates first-order pressure and velocity non-equilibrium effects while preserving interface conditions for equilibrium flows, even within the thin diffused mixture region. We first quantify the improved convergence of this formulation in some widely used one-dimensional configurations, then show that it enables fundamentally better simulations of bubble dynamics. Demonstrations include both a spherical bubble collapse, which is shown to maintain excellent symmetry despite the Cartesian mesh, and a jetting bubble collapse adjacent to a wall. Comparisons show that without the new formulation the jet is suppressed by numerical diffusion leading to qualitatively incorrect results. PMID:24058207
1996-01-01
We developed and evaluated a total toxic units modeling approach for predicting mean toxicity as measured in laboratory tests for Great Lakes sediments containing complex mixtures of environmental contaminants (e.g., polychlorinated biphenyls, polycyclic aromatic hydrocarbons, pesticides, chlorinated dioxins, and metals). The approach incorporates equilibrium partitioning and organic carbon control of bioavailability for organic contaminants and acid volatile sulfide (AVS) control for metals, and includes toxic equivalency for planar organic chemicals. A toxic unit is defined as the ratio of the estimated pore-water concentration of a contaminant to the chronic toxicity of that contaminant, as estimated by U.S. Environmental Protection Agency Ambient Water Quality Criteria (AWQC). The toxic unit models we developed assume complete additivity of contaminant effects, are completely mechanistic in form, and were evaluated without any a posteriori modification of either the models or the data from which the models were developed and against which they were tested. A linear relationship between mean toxicity and total toxic units, which included toxicity attributable to both iron and un-ionized ammonia, accounted for about 88% of the observed variability in mean toxicity; a quadratic relationship accounted for almost 94%. Exclusion of either bioavailability components (i.e., equilibrium partitioning control of organic contaminants and AVS control of metals) or iron from the model substantially decreased its ability to predict mean toxicity. A model based solely on un-ionized ammonia accounted for about 47% of the variability in mean toxicity. We found the toxic unit approach to be a viable method for assessing and ranking the relative potential toxicity of contaminated sediments.
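A toy calculation of the toxic unit bookkeeping described above might look like the following sketch; the contaminant names and numbers are invented for illustration, and the real study derives pore-water concentrations from equilibrium partitioning and AVS rather than taking them as given.

```python
# Hypothetical pore-water concentrations and chronic AWQC values (same units);
# all numbers are placeholders, not the study's measurements.
porewater = {"PCB": 0.8, "PAH": 3.0, "Cd": 0.05, "ammonia": 1.2}
awqc      = {"PCB": 0.4, "PAH": 6.0, "Cd": 0.10, "ammonia": 2.0}

# One toxic unit = pore-water concentration / chronic criterion; the additivity
# assumption means total toxicity is the plain sum across contaminants.
toxic_units = {k: porewater[k] / awqc[k] for k in porewater}
total_tu = sum(toxic_units.values())
print(toxic_units, "total:", round(total_tu, 2))
```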
Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong
2015-01-01
INTRODUCTION
Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However, most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) evaluate mixtures (including dependencies), and (3) identify determinants of VOC exposure.
METHODS
VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999–2001) and the National Health and Nutrition Examination Survey (NHANES; 1999–2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods.
Specific Aim 1: To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model’s goodness of fit. Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi-parametric Dirichlet process mixture (DPM) of normal distributions for three individual VOCs (chloroform, 1,4-DCB, and styrene). Goodness of fit for these full distribution models was also evaluated using simulated data.
Specific Aim 2: Mixtures in the RIOPA VOC data set were identified using positive matrix factorization (PMF) and by toxicologic mode of action. Dependency structures of a mixture’s components were examined using mixture fractions and were modeled using copulas, which address correlations of multiple components across their entire distributions. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) were evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks were calculated for mixtures, and results from copulas and multivariate lognormal models were compared with risks based on RIOPA observations.
Specific Aim 3: Exposure determinants were identified using stepwise regressions and linear mixed-effects models (LMMs).
RESULTS
Specific Aim 1: Extreme value exposures in RIOPA typically were best fitted by three-parameter generalized extreme value (GEV) distributions, and sometimes by the two-parameter Gumbel distribution. In contrast, lognormal distributions significantly underestimated both the level and likelihood of extreme values. Among the VOCs measured in RIOPA, 1,4-dichlorobenzene (1,4-DCB) was associated with the greatest cancer risks; for example, for the highest 10% of measurements of 1,4-DCB, all individuals had risk levels above 10(-4), and 13% of all participants had risk levels above 10(-2). Of the full-distribution models, the finite mixture of normal distributions with two to four clusters and the DPM of normal distributions had superior performance in comparison with the lognormal models. DPM distributions provided slightly better fit than the finite mixture distributions; the advantages of the DPM model were avoiding certain convergence issues associated with the finite mixture distributions, adaptively selecting the number of needed clusters, and providing uncertainty estimates. Although the results apply to the RIOPA data set, GEV distributions and mixture models appear more broadly applicable. These models can be used to simulate VOC distributions, which are neither normally nor lognormally distributed, and they accurately represent the highest exposures, which may have the greatest health significance.
Specific Aim 2: Four VOC mixtures were identified and apportioned by PMF; they represented gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection byproducts, and cleaning products and odorants. The last mixture (cleaning products and odorants) accounted for the largest fraction of an individual’s total exposure (average of 42% across RIOPA participants).
Often, a single compound dominated a mixture but the mixture fractions were heterogeneous; that is, the fractions of the compounds changed with the concentration of the mixture. Three VOC mixtures were identified by toxicologic mode of action and represented VOCs associated with hematopoietic, liver, and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10(-3) for about 10% of RIOPA participants. The dependency structures of the VOC mixtures in the RIOPA data set fitted Gumbel (two mixtures) and t copulas (four mixtures). These copula types emphasize dependencies found in the upper and lower tails of a distribution. The copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy and performed better than multivariate lognormal distributions.
Specific Aim 3: In an analysis focused on the home environment and the outdoor (close to home) environment, home VOC concentrations dominated personal exposures (66% to 78% of the total exposure, depending on VOC); this was largely the result of the amount of time participants spent at home and the fact that indoor concentrations were much higher than outdoor concentrations for most VOCs. In a different analysis focused on the sources inside the home and outside (but close to the home), it was assumed that 100% of VOCs from outside sources would penetrate the home. Outdoor VOC sources accounted for 5% (d-limonene) to 81% (carbon tetrachloride [CTC]) of the total exposure. Personal exposure and indoor measurements had similar determinants depending on the VOC. Gasoline-related VOCs (e.g., benzene and methyl tert-butyl ether [MTBE]) were associated with city, residences with attached garages, pumping gas, wind speed, and home air exchange rate (AER). Odorant and cleaning-related VOCs (e.g., 1,4-DCB and chloroform) also were associated with city, and a residence’s AER, size, and family members showering. Dry-cleaning and industry-related VOCs (e.g., tetrachloroethylene [or perchloroethylene, PERC] and trichloroethylene [TCE]) were associated with city, type of water supply to the home, and visits to the dry cleaner. These and other relationships were significant, explained from 10% to 40% of the variance in the measurements, and were consistent with known emission sources and those reported in the literature. Outdoor concentrations of VOCs had only two determinants in common: city and wind speed. Overall, personal exposure was dominated by the home setting, although a large fraction of indoor VOC concentrations were due to outdoor sources. City of residence, personal activities, household characteristics, and meteorology were significant determinants. Concentrations in RIOPA were considerably lower than levels in the nationally representative NHANES for all VOCs except MTBE and 1,4-DCB. Differences between RIOPA and NHANES results can be explained by contrasts between the sampling designs and staging in the two studies, and by differences in the demographics, smoking, employment, occupations, and home locations. A portion of these differences is due to the nature of the convenience (RIOPA) and representative (NHANES) sampling strategies used in the two studies.
CONCLUSIONS
Accurate models for exposure data, which can feature extreme values, multiple modes, data below the MDL, heterogeneous interpollutant dependency structures, and other complex characteristics, are needed to estimate exposures and risks and to develop control and management guidelines and policies.
Conventional and novel statistical methods were applied to data drawn from two large studies to understand the nature and significance of VOC exposures. Both extreme value distributions and mixture models were found to provide excellent fit to single VOC compounds (univariate distributions), and copulas may be the method of choice for VOC mixtures (multivariate distributions), especially for the highest exposures, which fit parametric models poorly and which may represent the greatest health risk. The identification of exposure determinants, including the influence of both certain activities (e.g., pumping gas) and environments (e.g., residences), provides information that can be used to manage and reduce exposures. The results obtained using the RIOPA data set add to our understanding of VOC exposures, and further investigations using a more representative population and a wider suite of VOCs are suggested to extend and generalize results. PMID:25145040
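For readers who want to reproduce the tail-fitting step on their own data, a minimal sketch with scipy is below, fitting a three-parameter GEV to the top 10% of a synthetic exposure sample; the data are simulated, not RIOPA measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
exposures = rng.lognormal(mean=1.0, sigma=1.0, size=2000)  # synthetic VOC data

# Fit a three-parameter GEV to the top 10% of exposures, mirroring Specific Aim 1.
top = np.sort(exposures)[int(0.9 * exposures.size):]
shape, loc, scale = stats.genextreme.fit(top)
print("GEV params:", shape, loc, scale)
print("GEV 99th percentile of the tail:",
      stats.genextreme.ppf(0.99, shape, loc, scale))

# Lognormal fit to the full sample, for contrast with the GEV tail estimate.
mu, sigma = np.log(exposures).mean(), np.log(exposures).std()
print("lognormal 99.9th percentile:",
      stats.lognorm.ppf(0.999, sigma, scale=np.exp(mu)))
```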
NASA Astrophysics Data System (ADS)
Sánchez, Clara I.; Hornero, Roberto; Mayo, Agustín; García, María
2009-02-01
Diabetic Retinopathy is one of the leading causes of blindness and vision defects in developed countries. Early detection and diagnosis are crucial to avoid visual complications. Microaneurysms are the first ocular signs of the presence of this ocular disease. Their detection is of paramount importance for the development of a computer-aided diagnosis technique which permits a prompt diagnosis of the disease. However, the detection of microaneurysms in retinal images is a difficult task due to the wide variability that these images usually present in screening programs. We propose a statistical approach based on mixture model-based clustering and logistic regression which is robust to the changes in the appearance of retinal fundus images. The method is evaluated on the public database proposed by the Retinopathy Online Challenge in order to obtain an objective performance measure and to allow a comparative study with other proposed algorithms.
Solvation of decane and benzene in mixtures of 1-octanol and N,N-dimethylformamide
NASA Astrophysics Data System (ADS)
Kustov, A. V.; Smirnova, N. L.
2016-09-01
The heats of dissolution of decane and benzene in a model system of octanol-1 (OctOH) and N,N-dimethylformamide (DMF) at 308 K are measured using a variable temperature calorimeter equipped with an isothermal shell. Standard enthalpies are determined and standard heat capacities of dissolution in the temperature range of 298-318 K are calculated using data obtained in [1, 2]. The state of hydrocarbon molecules in a binary mixture is studied in terms of the enhanced coordination model (ECM). Benzene is shown to be preferentially solvated by DMF over the range of physiological temperatures. The solvation shell of decane is found to be strongly enriched with 1-octanol. It is obvious that although both hydrocarbons are nonpolar, the presence of the aromatic π-system in benzene leads to drastic differences in their solvation in a lipid-protein medium.
Rodea-Palomares, Ismael; Gonzalez-Pleiter, Miguel; Gonzalo, Soledad; Rosal, Roberto; Leganes, Francisco; Sabater, Sergi; Casellas, Maria; Muñoz-Carpena, Rafael; Fernández-Piñas, Francisca
2016-01-01
The ecological impacts of emerging pollutants such as pharmaceuticals are not well understood. The lack of experimental approaches for the identification of pollutant effects in realistic settings (that is, low doses, complex mixtures, and variable environmental conditions) supports the widespread perception that these effects are often unpredictable. To address this, we developed a novel screening method (GSA-QHTS) that couples the computational power of global sensitivity analysis (GSA) with the experimental efficiency of quantitative high-throughput screening (QHTS). We present a case study where GSA-QHTS allowed for the identification of the main pharmaceutical pollutants (and their interactions), driving biological effects of low-dose complex mixtures at the microbial population level. The QHTS experiments involved the integrated analysis of nearly 2700 observations from an array of 180 unique low-dose mixtures, representing the most complex and data-rich experimental mixture effect assessment of main pharmaceutical pollutants to date. An ecological scaling-up experiment confirmed that this subset of pollutants also affects typical freshwater microbial community assemblages. Contrary to our expectations and challenging established scientific opinion, the bioactivity of the mixtures was not predicted by the null mixture models, and the main drivers that were identified by GSA-QHTS were overlooked by the current effect assessment scheme. Our results suggest that current chemical effect assessment methods overlook a substantial number of ecologically dangerous chemical pollutants and introduce a new operational framework for their systematic identification. PMID:27617294
Pulley, Simon; Foster, Ian; Collins, Adrian L
2017-06-01
The objective classification of sediment source groups is at present an under-investigated aspect of source tracing studies, which has the potential to statistically improve discrimination between sediment sources and reduce uncertainty. This paper investigates this potential using three different source group classification schemes. The first classification scheme was simple surface and subsurface groupings (Scheme 1). The tracer signatures were then used in a two-step cluster analysis to identify the sediment source groupings naturally defined by the tracer signatures (Scheme 2). The cluster source groups were then modified by splitting each one into a surface and subsurface component to suit catchment management goals (Scheme 3). The schemes were tested using artificial mixtures of sediment source samples. Controlled corruptions were made to some of the mixtures to mimic the potential causes of tracer non-conservatism present when using tracers in natural fluvial environments. It was determined how accurately the known proportions of sediment sources in the mixtures were identified after unmixing modelling using the three classification schemes. The cluster analysis derived source groups (2) significantly increased tracer variability ratios (inter-/intra-source group variability) (up to 2122%, median 194%) compared to the surface and subsurface groupings (1). As a result, the composition of the artificial mixtures was identified an average of 9.8% more accurately on the 0-100% contribution scale. It was found that the cluster groups could be reclassified into a surface and subsurface component (3) with no significant increase in composite uncertainty (a 0.1% increase over Scheme 2). The far smaller effects of simulated tracer non-conservatism for the cluster analysis based schemes (2 and 3) were primarily attributed to the increased inter-group variability producing a far larger sediment source signal than the non-conservatism noise (1). Modified cluster analysis based classification methods have the potential to reduce composite uncertainty significantly in future source tracing studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.
2013-01-01
Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738
NASA Astrophysics Data System (ADS)
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
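A bare-bones version of the workflow described here, assuming a made-up quadratic response surface with a single two-factor interaction and normally distributed input uncertainty, could look like this sketch; the coefficients and input distributions are placeholders, not the study's fitted regression-rate model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quadratic response surface with a two-factor interaction (illustrative
# coefficients, not the study's fitted values).
beta = {"b0": 1.0, "x1": 0.30, "x2": -0.10, "x1x2": 0.05,
        "x1sq": -0.02, "x2sq": 0.01}

def regression_rate(x1, x2):
    return (beta["b0"] + beta["x1"] * x1 + beta["x2"] * x2
            + beta["x1x2"] * x1 * x2
            + beta["x1sq"] * x1**2 + beta["x2sq"] * x2**2)

# Monte Carlo simulation: propagate input uncertainty through the surface
# to obtain a dispersed regression-rate distribution.
n = 100_000
x1 = rng.normal(2.0, 0.10, n)   # e.g., an operational condition, with uncertainty
x2 = rng.normal(0.5, 0.05, n)   # e.g., a mixture fraction, with uncertainty
r = regression_rate(x1, x2)
print(f"mean rate {r.mean():.3f}, 95% interval "
      f"[{np.percentile(r, 2.5):.3f}, {np.percentile(r, 97.5):.3f}]")
```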
Characterization of a nose-only inhalation exposure system for hydrocarbon mixtures and jet fuels.
Martin, Sheppard A; Tremblay, Raphael T; Brunson, Kristyn F; Kendrick, Christine; Fisher, Jeffrey W
2010-04-01
A directed-flow nose-only inhalation exposure system was constructed to support development of physiologically based pharmacokinetic (PBPK) models for complex hydrocarbon mixtures, such as jet fuels. Due to the complex nature of the aerosol and vapor-phase hydrocarbon exposures, care was taken to investigate the chamber hydrocarbon stability, vapor and aerosol droplet compositions, and droplet size distribution. Two generation systems for aerosolizing fuel and hydrocarbons were compared and characterized for use with either jet fuels or a simple mixture of eight hydrocarbons. Total hydrocarbon concentration was monitored via online gas chromatography (GC). Aerosol/vapor (A/V) ratios, and total and individual hydrocarbon concentrations, were determined using adsorbent tubes analyzed by thermal desorption-gas chromatography-mass spectrometry (TDS-GC-MS). Droplet size distribution was assessed via seven-stage cascade impactor. Droplet mass median aerodynamic diameter (MMAD) was between 1 and 3 μm, depending on the generator and mixture utilized. A/V hydrocarbon concentrations ranged from approximately 200 to 1300 mg/m(3), with between 20% and 80% aerosol content, depending on the mixture. The aerosolized hydrocarbon mixtures remained stable during the 4-h exposure periods, with coefficients of variation (CV) of less than 10% for the total hydrocarbon concentrations. There was greater variability in the measurement of individual hydrocarbons in the A/V phase. In conclusion, modern analytical chemistry instruments allow for improved descriptions of inhalation exposures of rodents to aerosolized fuel.
Zhang, Jingyang; Chaloner, Kathryn; McLinden, James H.; Stapleton, Jack T.
2013-01-01
Reconciling two quantitative ELISA tests for an antibody to an RNA virus, in a situation without a gold standard and where false negatives may occur, is the motivation for this work. False negatives occur when access of the antibody to the binding site is blocked. Based on the mechanism of the assay, a mixture of four bivariate normal distributions is proposed with the mixture probabilities depending on a two-stage latent variable model including the prevalence of the antibody in the population and the probabilities of blocking on each test. There is prior information on the prevalence of the antibody, and also on the probability of false negatives, and so a Bayesian analysis is used. The dependence between the two tests is modeled to be consistent with the biological mechanism. Bayesian decision theory is utilized for classification. The proposed method is applied to the motivating data set to classify the data into two groups: those with and those without the antibody. Simulation studies describe the properties of the estimation and the classification. Sensitivity to the choice of the prior distribution is also addressed by simulation. The same model with two levels of latent variables is applicable in other testing procedures such as quantitative polymerase chain reaction tests where false negatives occur when there is a mutation in the primer sequence. PMID:23592433
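One plausible way to wire the two-stage latent structure into mixture weights (prevalence and per-test blocking probabilities) is sketched below; the parameterization, component means, and covariance are invented for illustration, and the paper's exact model and priors are not reproduced.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def mixture_weights(prev, b1, b2):
    """Illustrative two-stage parameterization: antibody prevalence `prev`,
    independent blocking probabilities b1, b2 on the two tests."""
    w_pos = prev * np.array([(1 - b1) * (1 - b2),   # detected on both tests
                             (1 - b1) * b2,         # blocked on test 2
                             b1 * (1 - b2),         # blocked on test 1
                             b1 * b2])              # blocked on both
    # Doubly blocked samples look like true negatives, so pool them.
    return np.append(w_pos[:3], w_pos[3] + (1 - prev))

# Hypothetical component means (log ELISA readings) and a shared covariance.
means = [np.array(m) for m in ([3, 3], [3, 0], [0, 3], [0, 0])]
cov = np.eye(2)

def mixture_pdf(x, prev=0.85, b1=0.05, b2=0.05):
    w = mixture_weights(prev, b1, b2)
    return sum(wi * mvn.pdf(x, mean=mi, cov=cov) for wi, mi in zip(w, means))

print(mixture_pdf([3.1, 2.8]))
```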
Breakdown and Limit of Continuum Diffusion Velocity for Binary Gas Mixtures from Direct Simulation
NASA Astrophysics Data System (ADS)
Martin, Robert Scott; Najmabadi, Farrokh
2011-05-01
This work investigates the breakdown of the continuum relations for diffusion velocity in inert binary gas mixtures. Values of the relative diffusion velocities for components of a gas mixture may be calculated using Chapman-Enskog theory and occur not only due to concentration gradients, but also pressure and temperature gradients in the flow as described by Hirschfelder. Because Chapman-Enskog theory employs a linear perturbation around equilibrium, it is expected to break down when the velocity distribution deviates significantly from equilibrium. This breakdown of the overall flow has long been an area of interest in rarefied gas dynamics. By comparing the continuum values to results from Bird's DS2V Monte Carlo code, we propose a new limit on the continuum approach specific to binary gases. To remove the confounding influence of an inconsistent molecular model, we also present the application of the variable soft sphere (VSS) model used in DS2V to the continuum diffusion velocity calculation. Fitting sample asymptotic curves to the breakdown, a limit, Vmax, that is a fraction of an analytically derived limit resulting from the kinetic temperature of the mixture is proposed. With an expected deviation of only 2% between the physical values and continuum calculations within ±Vmax/4, we suggest this as a conservative estimate on the range of applicability for the continuum theory.
Erickson, Marilyn C; Liao, Jean; Jiang, Xiuping; Doyle, Michael P
2014-11-01
Two separate studies were conducted to address the condition and the type of feedstocks used during composting of dairy manure. In each study, physical (temperature), chemical (ammonia, volatile acids, and pH), and biological (Salmonella, Listeria monocytogenes, and Escherichia coli O157:H7) parameters were monitored during composting in bioreactors to assess the degree to which they were affected by the experimental variables and, ultimately, the ability of the chemical and physical parameters to predict the fate of pathogens during composting. Compost mixtures that contained either aged dairy manure or pine needles had reduced heat generation; therefore, pathogen reduction took longer than if fresh manure or carbon amendments of wheat straw or peanut hulls were used. Based on regression models derived from these results, ammonia concentration and heat were the primary factors affecting the degree of pathogen inactivation in compost mixtures formulated to an initial carbon-nitrogen (C:N) ratio of 40:1, whereas the pH of the compost mixture along with the amount of heat exposure were most influential in compost mixtures formulated to an initial C:N ratio of 30:1. Further studies are needed to validate these models so that criteria in addition to time and temperature can be used to evaluate the microbiological safety of composted manures.
Barata, Carlos; Markich, Scott J; Baird, Donald J; Taylor, Graeme; Soares, Amadeu M V M
2002-10-02
To date, studies on genetic variability in the tolerance of aquatic biota to chemicals have focused on exposure to single chemicals. In the field, metals occur as elemental mixtures, and thus it is essential to study whether the genetic consequences of exposure to such mixtures differs from response to single chemicals. This study determined the feeding responses of three Daphnia magna Straus clones exposed to Cd and Zn, both individually and as mixtures. Tolerance to mixtures of Cd and Zn was expressed as the proportional feeding depression of D. magna to Cd at increasing zinc concentrations. A quantitative genetic analysis revealed that genotype and genotype x environmental factors governed population responses to mixtures of both metals. More specifically, genetic variation in tolerance to sublethal levels of Cd decreased at those Zn concentrations where there were no effects on feeding, and increased again at Zn concentrations that affected feeding. The existence of genotype x environmental interactions indicated that the genetic consequences of exposing D. magna to mixtures of Cd and Zn cannot be predicted from the animals' response to single metals alone. Therefore, current ecological risk assessment methodologies for predicting the effects of chemical mixtures may wish to incorporate the concept of genetic variability. Furthermore, exposure to low and moderate concentrations of Zn increased the sublethal tolerance to Cd. This induction of tolerance to Cd by Zn was also observed for D. magna fed algae pre-loaded with both metals. Furthermore, in only one clone, physiological acclimatization to zinc also induced tolerance to cadmium. These results suggest that the feeding responses of D. magna may be related to gut poisoning induced by the release of metals from algae under low pH conditions. In particular, both induction of metallothionein synthesis by Zn and competition between Zn and Cd ions for uptake at target sites on the gut wall may be involved in determining sublethal responses to mixtures of both metals.
Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction
NASA Astrophysics Data System (ADS)
Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob
2018-04-01
Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but it also can generate new trajectory samples. It is an excellent application to help coaches and players decide when and where to shoot. Its structure is particularly suitable for dealing with time series problems. BLSTM receives forward and backward information at the same time, while stacking multiple BLSTMs further increases the learning ability of the model. Combined with BLSTMs, MDN is used to generate a multi-modal distribution of outputs. Thus, the proposed model can, in principle, represent arbitrary conditional probability distributions of output variables. We tested our model with two experiments on three-pointer datasets from NBA SportVu data. In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of the convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.
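For one-dimensional outputs, the MDN head described above reduces to a Gaussian mixture negative log-likelihood; a minimal numpy sketch of that loss is below, covering only the mixture layer, not the stacked BLSTM that produces its inputs.

```python
import numpy as np

def mdn_nll(pi_logits, mu, log_sigma, y):
    """Negative log-likelihood of a 1-D Gaussian mixture density network head.
    pi_logits, mu, log_sigma: (batch, K) arrays produced by the network;
    y: (batch,) targets. Sketch of the MDN output layer only."""
    # log-softmax over mixture weights
    log_pi = pi_logits - np.log(np.exp(pi_logits).sum(axis=1, keepdims=True))
    sigma = np.exp(log_sigma)
    log_norm = (-0.5 * ((y[:, None] - mu) / sigma) ** 2
                - log_sigma - 0.5 * np.log(2 * np.pi))
    # log-sum-exp over mixture components, then average over the batch
    joint = log_pi + log_norm
    m = joint.max(axis=1, keepdims=True)
    return -np.mean(m.squeeze(1) + np.log(np.exp(joint - m).sum(axis=1)))

rng = np.random.default_rng(0)
B, K = 4, 3
print(mdn_nll(rng.normal(size=(B, K)), rng.normal(size=(B, K)),
              0.1 * rng.normal(size=(B, K)), rng.normal(size=B)))
```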
NASA Technical Reports Server (NTRS)
Hallidy, William H. (Inventor); Chin, Robert C. (Inventor)
1999-01-01
The present invention is a system for chemometric analysis for the extraction of the individual component fluorescence spectra and fluorescence lifetimes from a target mixture. The present invention combines a processor with an apparatus for generating an excitation signal to transmit toward a target mixture and an apparatus for detecting the signal emitted from the target mixture. The present invention extracts the individual fluorescence spectrum and fluorescence lifetime measurements from the frequency and wavelength data acquired from the emitted signal. The present invention uses an iterative solution that first requires the initialization of several decision variables and the initial approximation determinations of intermediate matrices. The iterative solution checks the decision variables for convergence to determine whether further approximations are necessary. If the solution converges, the present invention then determines the reduced best fit error for the analysis of the individual fluorescence lifetime and the fluorescence spectrum before extracting the individual fluorescence lifetime and fluorescence spectrum from the emitted signal of the target mixture.
Experimentally Derived Mechanical and Flow Properties of Fine-grained Soil Mixtures
NASA Astrophysics Data System (ADS)
Schneider, J.; Peets, C. S.; Flemings, P. B.; Day-Stirrat, R. J.; Germaine, J. T.
2009-12-01
As silt content in mudrocks increases, compressibility linearly decreases and permeability exponentially increases. We prepared mixtures of natural Boston Blue Clay (BBC) and synthetic silt in the ratios of 100:0, 86:14, 68:32, and 50:50, respectively. To recreate natural conditions yet remove variability and soil disturbance, we resedimented all mixtures to a total stress of 100 kPa. We then loaded them to approximately 2.3 MPa in a CRS (constant-rate-of-strain) uniaxial consolidation device. The analyses show that the higher the silt content in the mixture, the stiffer the material is. Compression index as well as liquid and plastic limits linearly decrease with increasing silt content. Vertical permeability increases exponentially with porosity as well as with silt content. Fabric alignment determined through High Resolution X-ray Texture Goniometry (HRXTG), expressed as maximum pole density (m.r.d.), decreases with silt content at a given stress. However, this relationship is not linear; instead, there are two clusters: the mixtures with higher clay contents (100:0, 86:14) have m.r.d. around 3.9 and mixtures with higher silt contents (68:32, 50:50) have m.r.d. around 2.5. Specific surface area (SSA) measurements show a positive correlation to the total clay content. The amount of silt added to the clay reduces specific surface area, grain orientation, and fabric alignment; thus, it affects compression and fluid flow behavior on a micro- and macroscale. Our results are comparable with previous studies of kaolinite/silt mixtures (Konrad & Samson [2000], Wagg & Konrad [1990]). We are studying this behavior to understand how fine-grained rocks consolidate. This problem is important to practical and fundamental programs. For example, these sediments can potentially act as either a tight gas reservoir or a seal for hydrocarbons or geologic storage of CO2. This study also provides a systematic approach for developing models of permeability and compressibility behavior needed as inputs for basin modeling.
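The exponential permeability-porosity relationship reported here is linear in log-permeability, so a one-line least-squares fit suffices; the sketch below uses invented values in place of the study's CRS measurements.

```python
import numpy as np

# Hypothetical data: permeability k grows exponentially with porosity n,
# so log10(k) is linear in n (values are illustrative, not the study's).
porosity = np.array([0.35, 0.40, 0.45, 0.50, 0.55])
perm_m2  = np.array([2e-18, 8e-18, 3e-17, 1.1e-16, 4e-16])

slope, intercept = np.polyfit(porosity, np.log10(perm_m2), 1)
print(f"log10 k = {slope:.2f} * n + {intercept:.2f}")
print("predicted k at n = 0.42:", 10 ** (slope * 0.42 + intercept), "m^2")
```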
Caenorhabditis elegans Pheromones Regulate Multiple Complex Behaviors
Edison, Arthur S.
2009-01-01
Summary of recent advances: A family of small molecules called ascarosides act as pheromones to control multiple behaviors in the nematode Caenorhabditis elegans. At picomolar concentrations, a synergistic mixture of at least three ascarosides produced by hermaphrodites causes male-specific attraction. At higher concentrations, the same ascarosides, perhaps in a different mixture, induce the developmentally arrested stage known as dauer. The production of ascarosides is strongly dependent on environmental conditions, although relatively little is known about the major variables and mechanisms of their regulation. Thus, male mating and dauer formation are linked through a common set of small molecules whose expression is sensitive to a given microenvironment, suggesting a model by which ascarosides regulate the overall life cycle of C. elegans. PMID:19665885
Cocchi, Marina; Manfredini, Matteo; Marchetti, Andrea; Pigani, Laura; Seeber, Renato; Tassi, Lorenzo; Ulrici, Alessandro; Vignali, Moris; Zanardi, Chiara; Zannini, Paolo
2002-03-01
Measurements of the refractive index n for the binary mixtures 2-chloroethanol + 2-methoxyethanol in the temperature range 0 ≤ t/°C ≤ 70 have been carried out with the purpose of checking the capability of empirical models to express this physical quantity as a function of temperature and volume fraction, both separately and together, i.e., in an expression with two independent variables. Furthermore, the experimental data have been used to calculate excess properties such as the excess refractive index, the excess molar refraction, and the excess Kirkwood parameter δg over the whole composition range. The quantities obtained have been discussed and interpreted in terms of the type and nature of the specific intermolecular interactions between the components.
NASA Astrophysics Data System (ADS)
Zamuraev, V. P.; Kalinina, A. P.
2018-03-01
The paper presents the results of numerical modeling of transonic region formation in a flat channel. Hydrogen flows into the channel through holes in the wall, and a jet of compressed air is localized downstream of the holes. The transonic region is formed by the burning of the heterogeneous hydrogen-air mixture, treated in the framework of simplified chemical kinetics. An interesting feature of the resulting regime is that the distribution of Mach numbers is qualitatively similar to the case of pulse-periodic energy sources. This mode is a favorable prerequisite for effective fuel combustion in the expanding part of the channel when fuel is injected into that part.
Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N
2014-06-20
The multiple dual mode (MDM) counter-current chromatography separation processes consist of a succession of two isocratic counter-current steps and are characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, an improved MDM method based on variable duration of alternating phase elution steps has been developed and validated, and the MDM separation processes with variable duration of phase elution steps are analyzed. Based on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of the MDM separation are analyzed: (1) one-step solute elution, where the separation is conducted so that the sample is transferred forward and back with upper and lower phases inside the column until the desired separation of the components is reached, and then each individual component elutes entirely within one step; and (2) multi-step solute elution, where the fractions of individual components are collected over several steps. It is demonstrated that proper selection of the duration of individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with hexane/ethyl acetate/methanol/water solvent systems. The experimental results are compared to the predictions of the theory, and a good agreement between theory and experiment has been demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
Cosmological tachyon condensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bilic, Neven; Tupper, Gary B.; Viollier, Raoul D.
2009-07-15
We consider the prospects for dark matter/energy unification in k-essence type cosmologies. General mappings are established between the k-essence scalar field, the hydrodynamic and braneworld descriptions. We develop an extension of the general relativistic dust model that incorporates the effects of both pressure and the associated acoustic horizon. Applying this to a tachyon model, we show that this inhomogeneous 'variable Chaplygin gas' does evolve into a mixed system containing cold-dark-matter-like gravitational condensate in significant quantities. Our methods can be applied to any dark energy model, as well as to mixtures of dark energy and traditional dark matter.
Optimized mixed Markov models for motif identification
Huang, Weichun; Umbach, David M; Ohler, Uwe; Li, Leping
2006-01-01
Background: Identifying functional elements, such as transcription factor binding sites, is a fundamental step in reconstructing gene regulatory networks and remains a challenging issue, largely due to limited availability of training samples. Results: We introduce a novel and flexible model, the Optimized Mixture Markov model (OMiMa), and related methods to allow adjustment of model complexity for different motifs. In comparison with other leading methods, OMiMa can incorporate more than NNSplice's pairwise dependencies; OMiMa avoids model over-fitting better than the Permuted Variable Length Markov Model (PVLMM); and OMiMa requires smaller training samples than the Maximum Entropy Model (MEM). Testing on both simulated and actual data (regulatory cis-elements and splice sites), we found OMiMa's performance superior to the other leading methods in terms of prediction accuracy, required size of training data or computational time. Our OMiMa system, to our knowledge, is the only motif finding tool that incorporates automatic selection of the best model. OMiMa is freely available at [1]. Conclusion: Our optimized mixture of Markov models represents an alternative to the existing methods for modeling dependent structures within a biological motif. Our model is conceptually simple and effective, and can improve prediction accuracy and/or computational speed over other leading methods. PMID:16749929
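For concreteness, a toy first-order Markov motif scorer is sketched below; it illustrates the model family OMiMa optimizes over (start and transition probabilities with pseudocounts), not the OMiMa algorithm or its automatic model selection.

```python
import numpy as np

BASES = "ACGT"
idx = {b: i for i, b in enumerate(BASES)}

def train(seqs, pseudo=1.0):
    """Estimate log start and transition probabilities with pseudocounts."""
    start = np.full(4, pseudo)
    trans = np.full((4, 4), pseudo)
    for s in seqs:
        start[idx[s[0]]] += 1
        for a, b in zip(s, s[1:]):
            trans[idx[a], idx[b]] += 1
    return (np.log(start / start.sum()),
            np.log(trans / trans.sum(axis=1, keepdims=True)))

def score(s, log_start, log_trans):
    """Log-likelihood of a sequence under the first-order Markov motif model."""
    ll = log_start[idx[s[0]]]
    for a, b in zip(s, s[1:]):
        ll += log_trans[idx[a], idx[b]]
    return ll

motifs = ["ACGTAC", "ACGTTC", "ACGAAC"]   # made-up training motifs
lp0, lpT = train(motifs)
print(score("ACGTAC", lp0, lpT) - score("TTTTTT", lp0, lpT))  # log-odds style
```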
Lim, Jongguk; Kim, Giyoung; Mo, Changyeun; Kim, Moon S; Chao, Kuanglin; Qin, Jianwei; Fu, Xiaping; Baek, Insuck; Cho, Byoung-Kwan
2016-05-01
Illegal use of nitrogen-rich melamine (C3H6N6) to boost perceived protein content of food products such as milk, infant formula, frozen yogurt, pet food, biscuits, and coffee drinks has caused serious food safety problems. Conventional methods to detect melamine in foods, such as enzyme-linked immunosorbent assay (ELISA), high-performance liquid chromatography (HPLC), and gas chromatography-mass spectrometry (GC-MS), are sensitive but they are time-consuming, expensive, and labor-intensive. In this research, near-infrared (NIR) hyperspectral imaging combined with the regression coefficients of a partial least squares regression (PLSR) model was used to detect melamine particles in milk powders easily and quickly. NIR hyperspectral reflectance imaging data in the spectral range of 990-1700 nm were acquired from melamine-milk powder mixture samples prepared at various concentrations ranging from 0.02% to 1%. PLSR models were developed to correlate the spectral data (independent variables) with melamine concentration (dependent variable) in melamine-milk powder mixture samples. PLSR models applying various pretreatment methods were used to reconstruct the two-dimensional PLS images. PLS images were converted to binary images to detect the suspected melamine pixels in milk powder. As the melamine concentration was increased, the number of suspected melamine pixels in the binary images also increased. These results suggested that NIR hyperspectral imaging and the PLSR model can be regarded as an effective tool to detect melamine particles in milk powders. Copyright © 2016 Elsevier B.V. All rights reserved.
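The PLSR step maps spectra to concentration; a minimal sklearn sketch on synthetic spectra is below, where the regression output plays the role of the per-pixel score used to build the two-dimensional PLS images (the spectral signature and noise model are invented).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_bands = 60, 200  # stand-ins for mixture samples and NIR bands

# Synthetic spectra: melamine concentration perturbs a fixed spectral signature.
conc = rng.uniform(0.0002, 0.01, n_samples)   # 0.02%..1% expressed as fractions
signature = rng.normal(size=n_bands)
spectra = (conc[:, None] * signature
           + 0.01 * rng.normal(size=(n_samples, n_bands)))

# Fit PLSR: spectra are the independent variables, concentration the dependent one.
pls = PLSRegression(n_components=5)
pls.fit(spectra, conc)

pred = pls.predict(spectra).ravel()
print("R^2:", 1 - np.var(pred - conc) / np.var(conc))
```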
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Rotello, Caren M.; Ratcliff, Roger
2012-01-01
Koen and Yonelinas (2010; K&Y) reported that mixing classes of targets that had short (weak) or long (strong) study times had no impact on zROC slope, contradicting the predictions of the encoding variability hypothesis. We show that they actually derived their predictions from a mixture unequal-variance signal detection (UVSD) model, which…
Quantiles for Finite Mixtures of Normal Distributions
ERIC Educational Resources Information Center
Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.
2006-01-01
Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
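Because a finite normal mixture has no closed-form quantile function, one standard route is to invert the mixture CDF numerically; a short sketch with scipy follows, using arbitrary example weights, means, and standard deviations.

```python
from scipy.optimize import brentq
from scipy.stats import norm

def mixture_quantile(p, weights, means, sds):
    """Quantile of a finite normal mixture: invert the mixture CDF numerically.
    Note the contrast with a linear combination of normal *variables*, which is
    itself normal; a mixture of normal *densities* generally is not."""
    cdf = lambda x: sum(w * norm.cdf(x, m, s)
                        for w, m, s in zip(weights, means, sds)) - p
    lo = min(means) - 10 * max(sds)
    hi = max(means) + 10 * max(sds)
    return brentq(cdf, lo, hi)

# Example: median of a two-component mixture.
print(mixture_quantile(0.5, [0.3, 0.7], [0.0, 2.0], [1.0, 0.5]))
```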
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duraes, L.; Portugal, A.; Plaksin, I.
2009-12-28
In this work, the radial combustion in thin circular samples of stoichiometric and over-aluminized Fe2O3/Al mixtures is studied. Two confinement materials are tested: stainless steel and PVC. The combustion front profiles are registered by digital video-chrono-photography. The radial geometry allows an easy detection of sample heterogeneities, via the circularity distortions of the combustion front profiles. The influence of the Al content in the mixtures and the type of confinement on the combustion propagation dynamics is analyzed. Additionally, an asymmetry parameter of the combustion front profiles is defined and statistically treated via ANOVA. Although the type of confinement contributes more than the mixture composition to the variability of the asymmetry parameter, they both have a weak influence. The main source of variability is the intrinsic variations of the samples, which are due to their heterogeneous character.
A Two-length Scale Turbulence Model for Single-phase Multi-fluid Mixing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwarzkopf, J. D.; Livescu, D.; Baltzer, J. R.
2015-09-08
A two-length scale, second moment turbulence model (Reynolds averaged Navier-Stokes, RANS) is proposed to capture a wide variety of single-phase flows, spanning from incompressible flows with single fluids and mixtures of different density fluids (variable density flows) to flows over shock waves. The two-length scale model was developed to address an inconsistency present in single-length scale models, e.g., the inability to match both variable density homogeneous Rayleigh-Taylor turbulence and Rayleigh-Taylor induced turbulence, as well as the inability to match both homogeneous shear and free shear flows. The two-length scale model focuses on separating the decay and transport length scales, as the two physical processes are generally different in inhomogeneous turbulence. This allows reasonable comparisons with statistics and spreading rates over such a wide range of turbulent flows using a common set of model coefficients. The specific canonical flows considered for calibrating the model include homogeneous shear, single-phase incompressible shear driven turbulence, variable density homogeneous Rayleigh-Taylor turbulence, Rayleigh-Taylor induced turbulence, and shocked isotropic turbulence. The second moment model is shown to compare reasonably well with direct numerical simulations (DNS), experiments, and theory in most cases. The model was then applied to variable density shear layer and shock tube data and is shown to be in reasonable agreement with DNS and experiments. Additionally, the importance of using DNS to calibrate and assess RANS type turbulence models is highlighted.
Míguez, J M; Piñeiro, M M; Algaba, J; Mendiboure, B; Torré, J P; Blas, F J
2015-11-05
The high-pressure phase diagrams of the tetrahydrofuran(1) + carbon dioxide(2), + methane(2), and + water(2) mixtures are examined using the SAFT-VR approach. The carbon dioxide molecule is modeled as two spherical segments tangentially bonded, water is modeled as a spherical segment with four associating sites to represent the hydrogen bonding, methane is represented as an isolated sphere, and tetrahydrofuran is represented as a chain of m tangentially bonded spherical segments. Dispersive interactions are modeled using the square-well intermolecular potential. In addition, two different molecular model mixtures are developed to take into account the subtle balance between water-tetrahydrofuran hydrogen-bonding interactions. The polar and quadrupolar interactions present in water, tetrahydrofuran, and carbon dioxide are treated in an effective way via square-well potentials of variable range. The optimized intermolecular parameters are taken from the works of Giner et al. (Fluid Phase Equil. 2007, 255, 200), Galindo and Blas (J. Phys. Chem. B 2002, 106, 4503), Patel et al. (Ind. Eng. Chem. Res. 2003, 42, 3809), and Clark et al. (Mol. Phys. 2006, 104, 3561) for tetrahydrofuran, carbon dioxide, methane, and water, respectively. The phase diagrams of the binary mixtures exhibit different types of phase behavior according to the classification of van Konynenburg and Scott, corresponding to types I, III, and VI phase behavior for the tetrahydrofuran(1) + carbon dioxide(2), + methane(2), and + water(2) binary mixtures, respectively. This last type is characterized by the presence of a Bancroft point, positive azeotropy, and the so-called closed-loop curves that represent regions of liquid-liquid immiscibility in the phase diagram. The system exhibits lower critical solution temperatures (LCSTs), which denote the lower limit of immiscibility, together with upper critical solution temperatures (UCSTs). This behavior is explained in terms of competition between the incompatibility with the alkyl parts of the tetrahydrofuran ring and the hydrogen bonding between water and the ether group. A minimum number of unlike interaction parameters are fitted to give the optimal representation of the most representative features of the binary phase diagrams. In the particular case of tetrahydrofuran(1) + water(2), two sets of intermolecular potential model parameters are proposed to describe accurately either the hypercritical point associated with the closed-loop liquid-liquid immiscibility region or the location of the mixture lower- and upper-critical end-points. The theory is not only able to predict the type of phase behavior of each mixture, but also provides a reasonably good description of the global phase behavior whenever experimental data are available.
Nagai, Takashi; De Schamphelaere, Karel A C
2016-11-01
The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-d fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.
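To make the two competing models concrete, the sketch below contrasts a concentration addition (CA) prediction with an independent action (IA) prediction for a binary metal mixture, assuming two-parameter log-logistic concentration-response curves on the free-ion activity scale; the EC50s, slopes, and mixture concentrations are hypothetical, not values from the study.

```python
from scipy.optimize import brentq

# Two-parameter log-logistic survival curve: S(c) = 1 / (1 + (c / ec50)**slope)
def survival(c, ec50, slope):
    return 1.0 / (1.0 + (c / ec50) ** slope)

def inverse_conc(s, ec50, slope):
    # concentration of the single metal that alone yields survival s
    return ec50 * ((1.0 - s) / s) ** (1.0 / slope)

params = {"Zn": (1.0, 2.0), "Cu": (0.2, 1.5)}  # hypothetical (ec50, slope)
mix = {"Zn": 0.6, "Cu": 0.1}                    # hypothetical free-ion activities

def ca_prediction(mix, params):
    # CA: solve sum_i c_i / EC_i(s) = 1 for the common effect level s
    f = lambda s: sum(c / inverse_conc(s, *params[m]) for m, c in mix.items()) - 1.0
    return brentq(f, 1e-9, 1.0 - 1e-9)

def ia_prediction(mix, params):
    # IA: survival probabilities under independent mechanisms multiply
    out = 1.0
    for m, c in mix.items():
        out *= survival(c, *params[m])
    return out

print(ca_prediction(mix, params), ia_prediction(mix, params))
```

Whichever model predicts the lower survival at the same concentrations is the more conservative; the abstract reports that CA generally errs on the conservative side, which is what makes it suitable for first-tier screening.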
A comparison of spectral mixture analysis and NDVI for ascertaining ecological variables
NASA Technical Reports Server (NTRS)
Wessman, Carol A.; Bateson, C. Ann; Curtiss, Brian; Benning, Tracy L.
1993-01-01
In this study, we compare the performance of spectral mixture analysis to the Normalized Difference Vegetation Index (NDVI) in detecting change in a grassland across topographically induced nutrient gradients and different management schemes. The Konza Prairie Research Natural Area, Kansas, is a relatively homogeneous tallgrass prairie in which change in vegetation productivity occurs with respect to topographic position in each watershed. The area is the site of long-term studies of the influence of fire and grazing on tallgrass production and was the site of the First ISLSCP (International Satellite Land Surface Climatology Project) Field Experiment (FIFE) from 1987 to 1989. Vegetation indices such as NDVI are commonly used with imagery collected in a few (fewer than 10) spectral bands. However, the use of only two bands (e.g. NDVI) does not adequately account for the complex of signals making up most surface reflectance. Influences from background spectral variation and spatial heterogeneity may confound the direct relationship with biological or biophysical variables. High-dimensional multispectral data allow the application of techniques such as derivative analysis and spectral curve fitting, thereby increasing the probability of successfully modeling the reflectance from mixed surfaces. The higher number of bands permits unmixing of a greater number of surface components, separating the vegetation signal for further analyses relevant to biological variables.
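As a minimal illustration of why more bands help, the sketch below contrasts a two-band NDVI with a least-squares linear unmixing of one mixed pixel into endmember fractions; the band count, band assignments, and endmember spectra are hypothetical.

```python
import numpy as np

# hypothetical endmember reflectance spectra over 4 bands
# (columns: green vegetation, soil, litter)
E = np.array([[0.05, 0.20, 0.15],
              [0.08, 0.25, 0.18],
              [0.45, 0.30, 0.22],
              [0.50, 0.35, 0.25]])

pixel = np.array([0.12, 0.15, 0.33, 0.38])  # mixed-pixel reflectance

# linear spectral mixture analysis: pixel ~ E @ fractions
fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)

# NDVI uses only two bands (assume band 1 = red, band 2 = NIR)
red, nir = pixel[1], pixel[2]
ndvi = (nir - red) / (nir + red)
print(fractions, ndvi)
```

A full analysis would constrain the fractions to be non-negative and sum to one, but even this unconstrained version shows how the extra bands separate the vegetation signal from the soil and litter backgrounds that NDVI lumps together.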
Solubility and bioavailability improvement of pazopanib hydrochloride.
Herbrink, Maikel; Groenland, Stefanie L; Huitema, Alwin D R; Schellens, Jan H M; Beijnen, Jos H; Steeghs, Neeltje; Nuijen, Bastiaan
2018-06-10
The anti-cancer drug pazopanib hydrochloride (PZH) has a very low aqueous solubility and a variable oral bioavailability. A new pharmaceutical formulation with an improved solubility may enhance the bioavailability and reduce the variability. A broad selection of polymer excipients was tested for their compatibility and solubilizing properties by conventional microscopic, thermal and spectrometric techniques. A wet milling and mixing technique was used to produce homogeneous powder mixtures. The dissolution properties of the formulation were tested by a pH-switch dissolution model. The final formulation was tested in vivo in cancer patients following a dose escalation design. Of the tested mixture formulations, the one containing the co-block polymer Soluplus® in an 8:1 ratio with PZH performed best in terms of in vitro dissolution properties. The in vivo results indicated that 300 mg of the developed formulation yields exposure (379 μg/mL·h; 36.7% CV) similar to previously reported values for the standard PZH formulation (Votrient®) at the approved dose of 800 mg, but with lower variability. Furthermore, the expected plasma Ctrough level (27.2 μg/mL) exceeds the defined therapeutic efficacy threshold of 20 μg/mL. Copyright © 2018 Elsevier B.V. All rights reserved.
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
NASA Astrophysics Data System (ADS)
Tien Bui, Dieu; Hoang, Nhat-Duc
2017-09-01
In this study, a probabilistic model, named BayGmmKda, is proposed for flood susceptibility assessment in a study area in central Vietnam. The new model is a Bayesian framework constructed from a combination of a Gaussian mixture model (GMM), radial-basis-function Fisher discriminant analysis (RBFDA), and a geographic information system (GIS) database. In the Bayesian framework, the GMM is used to model the data distribution of flood-influencing factors in the GIS database, whereas RBFDA is used to construct a latent variable that aims at enhancing model performance. The posterior probabilistic output of the BayGmmKda model is used as a flood susceptibility index. Experimental results showed that the proposed hybrid framework is superior to benchmark models, including the adaptive neuro-fuzzy inference system and the support vector machine. To facilitate model implementation, a BayGmmKda software program was developed in MATLAB. The program can accurately establish a flood susceptibility map for the study region. Local authorities can then overlay this susceptibility map onto various land-use maps for land-use planning or management.
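The BayGmmKda software itself is in MATLAB and includes the RBFDA latent variable; as a rough sketch of just the Bayesian class-conditional GMM idea, one GMM per class can be fitted and combined by Bayes' rule. All data and factor choices below are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# hypothetical flood-influencing factors (e.g., slope, elevation, rainfall)
X_flood = rng.normal(loc=0.5, size=(300, 3))   # sites labeled as flooded
X_dry = rng.normal(loc=-0.5, size=(300, 3))    # sites labeled as non-flooded

# one GMM per class models p(x | class)
g_flood = GaussianMixture(n_components=3, random_state=0).fit(X_flood)
g_dry = GaussianMixture(n_components=3, random_state=0).fit(X_dry)

def susceptibility(x, prior_flood=0.5):
    # Bayes' rule: posterior probability of flooding given the factors
    lf = np.exp(g_flood.score_samples(x)) * prior_flood
    ld = np.exp(g_dry.score_samples(x)) * (1.0 - prior_flood)
    return lf / (lf + ld)

x_new = rng.normal(size=(5, 3))
print(susceptibility(x_new))  # values near 1 indicate high susceptibility
```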
Influence of oil type on the amounts of acrylamide generated in a model system and in French fries.
Mestdagh, Frédéric J; De Meulenaer, Bruno; Van Poucke, Christof; Detavernier, Christ'l; Cromphout, Caroline; Van Peteghem, Carlos
2005-07-27
Acrylamide formation was studied by use of a new heating methodology based on a closed stainless steel tubular reactor. Different artificial potato powder mixtures were homogenized and subsequently heated in the reactor. This procedure was first tested for its repeatability. With this experimental setup it was possible to study the acrylamide formation mechanism in the different mixtures while eliminating some variable physical and chemical factors of the frying process, such as heat flux, water evaporation from the food, and oil ingress into it. As a first application of this optimized heating concept, the influence of the type of deep-frying oil on acrylamide formation was investigated. The results obtained from the experiments with the tubular reactor were compared with standardized French fry preparation tests. In both cases, no significant difference in acrylamide formation could be found between the various heating oils applied. Consequently, the origin of the deep-frying vegetable oils did not seem to affect acrylamide formation in potatoes during frying. Surprisingly, however, when artificial mixtures did not contain vegetable oil, significantly lower concentrations of acrylamide were detected compared to oil-containing mixtures.
Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry
Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna
2015-01-01
Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite the potential advantages of mixture modeling of mass spectra of peptide/protein mixtures highlighted in several papers presenting preliminary results, the mixture modeling approach had so far not been developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper, we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the fragment mixture models are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing peak detection algorithms and demonstrate the improvement in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
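A compact sketch of the partition-fit-aggregate idea follows; the baseline threshold, resampling size, and per-fragment component count are illustrative choices, not the authors' algorithm, and non-negative intensities are assumed.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_spectrum_gmm(mz, intensity, n_components=3, thresh=0.02, seed=0):
    # 1) partition: cut the signal wherever intensity falls near the baseline
    active = intensity >= thresh * intensity.max()
    edges = np.flatnonzero(np.diff(active.astype(int))) + 1
    rng = np.random.default_rng(seed)
    means, sigmas, weights = [], [], []
    for seg in np.split(np.arange(mz.size), edges):
        if not active[seg[0]] or seg.size < 5:
            continue  # skip baseline fragments
        # 2) fit each fragment: resample m/z proportional to intensity for EM
        p = intensity[seg] / intensity[seg].sum()
        sample = rng.choice(mz[seg], size=1000, p=p)[:, None]
        g = GaussianMixture(n_components=n_components, random_state=0).fit(sample)
        # 3) aggregate: weight fragment components by the fragment's total signal
        means.extend(g.means_.ravel())
        sigmas.extend(np.sqrt(g.covariances_.ravel()))
        weights.extend(g.weights_ * intensity[seg].sum())
    w = np.asarray(weights)
    return np.asarray(means), np.asarray(sigmas), w / w.sum()
```

Because each fragment is decomposed independently, the EM fits stay small and fast, which is what makes whole-spectrum modeling tractable.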
Statistical modeling of natural backgrounds in hyperspectral LWIR data
NASA Astrophysics Data System (ADS)
Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph
2016-09-01
Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications, including remote material identification and rare target detection. While statistical models of surface reflectance in the visible and near-infrared regimes have been well studied, models for temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance, which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR may be difficult to separate when imaged in the LWIR.
Jović, Ozren; Smrečki, Neven; Popović, Zora
2016-04-01
A novel quantitative prediction and variable selection method called interval ridge regression (iRR) is studied in this work. The method is applied to six data sets of FTIR, two data sets of UV-vis and one data set of DSC. The results show that models built with ridge regression on the optimal variables selected with iRR significantly outperform models built with ridge regression on all variables in both calibration (6 out of 9 cases) and validation (2 out of 9 cases). In this study, iRR is also compared with interval partial least squares regression (iPLS). iRR outperformed iPLS in validation (insignificantly in 6 out of 9 cases and significantly in 1 out of 9 cases for p<0.05). iRR can also be a fast alternative to iPLS, especially when the degree of complexity of the analyzed system is unknown, i.e., when the upper limit on the number of latent variables is not easily estimated for iPLS. Adulteration of hempseed (H) oil, a well-known health-beneficial nutrient, is studied in this work by mixing it with cheap and widely used oils such as soybean (So) oil, rapeseed (R) oil and sunflower (Su) oil. Binary mixture sets of hempseed oil with these three oils (HSo, HR and HSu) and a ternary mixture set of H oil, R oil and Su oil (HRSu) were considered. The obtained accuracy indicates that, using iRR on FTIR and UV-vis data, each particular oil can be very successfully quantified (RMSEP<1.2% in all 8 cases). This means that FTIR-ATR coupled with iRR can very rapidly and effectively determine the level of adulteration in adulterated hempseed oil (R(2)>0.99). Copyright © 2015 Elsevier B.V. All rights reserved.
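The abstract does not spell out the full iRR procedure, so the sketch below shows only the general interval-selection idea: score each contiguous block of wavelengths with cross-validated ridge regression and keep the best block. The interval count and regularization strength are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def interval_ridge(X, y, n_intervals=20, alpha=1.0):
    """Score each contiguous variable interval with ridge regression and
    return the interval whose variables give the lowest CV error."""
    intervals = np.array_split(np.arange(X.shape[1]), n_intervals)
    scores = []
    for cols in intervals:
        mse = -cross_val_score(Ridge(alpha=alpha), X[:, cols], y,
                               scoring="neg_mean_squared_error", cv=5).mean()
        scores.append(mse)
    best = int(np.argmin(scores))
    return intervals[best], scores
```

Unlike iPLS, no number of latent variables has to be chosen; only the ridge penalty alpha is tuned, which is the speed advantage the abstract alludes to.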
Shah, Nirmal; Seth, Avinashkumar; Balaraman, R; Sailor, Girish; Javia, Ankur; Gohil, Dipti
2018-04-01
The objective of this work was to utilize a potential of microemulsion for the improvement in oral bioavailability of raloxifene hydrochloride, a BCS class-II drug with 2% bioavailability. Drug-loaded microemulsion was prepared by water titration method using Capmul MCM C8, Tween 20, and Polyethylene glycol 400 as oil, surfactant, and co-surfactant respectively. The pseudo-ternary phase diagram was constructed between oil and surfactants mixture to obtain appropriate components and their concentration ranges that result in large existence area of microemulsion. D-optimal mixture design was utilized as a statistical tool for optimization of microemulsion considering oil, S mix , and water as independent variables with percentage transmittance and globule size as dependent variables. The optimized formulation showed 100 ± 0.1% transmittance and 17.85 ± 2.78 nm globule size which was identically equal with the predicted values of dependent variables given by the design expert software. The optimized microemulsion showed pronounced enhancement in release rate compared to plain drug suspension following diffusion controlled release mechanism by the Higuchi model. The formulation showed zeta potential of value -5.88 ± 1.14 mV that imparts good stability to drug loaded microemulsion dispersion. Surface morphology study with transmission electron microscope showed discrete spherical nano sized globules with smooth surface. In-vivo pharmacokinetic study of optimized microemulsion formulation in Wistar rats showed 4.29-fold enhancements in bioavailability. Stability study showed adequate results for various parameters checked up to six months. These results reveal the potential of microemulsion for significant improvement in oral bioavailability of poorly soluble raloxifene hydrochloride.
Variable Screening for Cluster Analysis.
ERIC Educational Resources Information Center
Donoghue, John R.
Inclusion of irrelevant variables in a cluster analysis adversely affects subgroup recovery. This paper examines using moment-based statistics to screen variables; only variables that pass the screening are then used in clustering. Normal mixtures are analytically shown often to possess negative kurtosis. Two related measures, "m" and…
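The abstract is truncated, so the exact screening statistics ("m" and its companion) are not reproduced here; the sketch below only illustrates the moment-based idea of keeping variables whose negative excess kurtosis hints at mixture structure, with a hypothetical cutoff.

```python
import numpy as np
from scipy.stats import kurtosis

def screen_variables(X, cutoff=-0.3):
    """Keep variables whose excess kurtosis is clearly negative, a moment-based
    signature of a well-separated two-component normal mixture."""
    k = kurtosis(X, axis=0, fisher=True, bias=False)  # excess kurtosis per column
    return np.flatnonzero(k < cutoff), k
```

Variables failing the screen would be dropped before clustering, which is the paper's remedy for the damage irrelevant variables do to subgroup recovery.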
Mating compatibility in the parasitic protist Trypanosoma brucei.
Peacock, Lori; Ferris, Vanessa; Bailey, Mick; Gibson, Wendy
2014-02-21
Genetic exchange has been described in several kinetoplastid parasites, but the most well-studied mating system is that of Trypanosoma brucei, the causative organism of African sleeping sickness. Sexual reproduction takes place in the salivary glands (SG) of the tsetse vector and involves meiosis and production of haploid gametes. Few genetic crosses have been carried out to date and consequently there is little information about the mating compatibility of different trypanosomes. In other single-celled eukaryotes, mating compatibility is typically determined by a system of two or more mating types (MT). Here we investigated the MT system in T. brucei. We analysed a large series of F1, F2 and back crosses by pairwise co-transmission of red and green fluorescent cloned cell lines through experimental tsetse flies. To analyse each cross, trypanosomes were cloned from fly SG containing a mixture of both parents, and genotyped by microsatellites and molecular karyotype. To investigate mating compatibility at the level of individual cells, we directly observed the behaviour of SG-derived gametes in intra- or interclonal mixtures of red and green fluorescent trypanosomes ex vivo. Hybrid progeny were found in all F1 and F2 crosses and most of the back crosses. The success of individual crosses was highly variable as judged by the number of hybrid clones produced, suggesting a range of mating compatibilities among F1 progeny. As well as hybrids, large numbers of recombinant genotypes resulting from intraclonal mating (selfers) were found in some crosses. In ex vivo mixtures, red and green fluorescent trypanosome gametes were observed to pair up and interact via their flagella in both inter- and intraclonal combinations. While yellow hybrid trypanosomes were frequently observed in interclonal mixtures, such evidence of cytoplasmic exchange was rare in the intraclonal mixtures. The outcomes of individual crosses, particularly back crosses, were variable in numbers of both hybrid and selfer clones produced, and do not readily fit a simple two MT model. From comparison of the behaviour of trypanosome gametes in inter- and intraclonal mixtures, we infer that mating compatibility is controlled at the level of gamete fusion.
Two-Phase Flow Model and Experimental Validation for Bubble Augmented Waterjet Propulsion Nozzle
NASA Astrophysics Data System (ADS)
Choi, J.-K.; Hsiao, C.-T.; Wu, X.; Singh, S.; Jayaprakash, A.; Chahine, G.
2011-11-01
The concept of thrust augmentation through bubble injection into a waterjet has been the subject of many patents and publications over the past several decades, and there is simplified computational and experimental evidence of thrust increase. In this work, we present more rigorous numerical and experimental studies aimed at investigating two-phase waterjet propulsion systems. The numerical model is based on a Lagrangian-Eulerian method, which considers the bubbly mixture flow both at the microscopic level, where individual bubble dynamics are tracked, and at the macroscopic level, where bubbles are collectively described by the local void fraction of the mixture. DYNAFLOW's unsteady RANS solver, 3DYNAFS-Vis, is used to solve the macro-level variable density mixture medium, and a fully unsteady two-way coupling between this and the bubble dynamics/tracking code 3DYNAFS-DSM is utilized. Validation studies using measurements in a half 3-D experimental setup composed of divergent and convergent sections are presented. The bubbles are visualized, PIV measurements of the flow are taken, bubble size and behavior are observed, and the measured flow field data are used to validate the models. Thrust augmentation as high as 50% could be confirmed both by predictions and by experiments. This work was supported by the Office of Naval Research under contract N00014-07-C-0427, monitored by Dr. Ki-Han Kim.
Numerical Modeling of Cavitating Venturi: A Flow Control Element of Propulsion System
NASA Technical Reports Server (NTRS)
Majumdar, Alok; Saxon, Jeff (Technical Monitor)
2002-01-01
In a propulsion system, the propellant flow and mixture ratio can be controlled either by variable area flow control valves or by passive flow control elements such as cavitating venturis. Cavitating venturis maintain a constant propellant flowrate for fixed inlet conditions (pressure and temperature) over a wide range of outlet pressures, thereby maintaining constant engine thrust and mixture ratio. The flowrate through the venturi reaches a constant value and becomes independent of outlet pressure when the pressure at the throat becomes equal to the vapor pressure. In order to develop a numerical model of a propulsion system, it is necessary to model cavitating venturis in the propellant feed systems. This paper presents a finite volume flow network model of a cavitating venturi. The venturi was discretized into a number of control volumes, and the mass, momentum and energy conservation equations in each control volume were solved simultaneously to calculate the one-dimensional pressure, density, flowrate and temperature distributions. The numerical model predicted cavitation at the throat as the outlet pressure was gradually reduced. Once cavitation starts, no change in flowrate is found with further reduction of the downstream pressure. The numerical predictions have been compared with test data and with an empirical equation based on Bernoulli's equation.
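The choking behavior can be illustrated with a much simpler relation than the paper's finite volume network: an idealized Bernoulli estimate, assuming no diffuser pressure recovery so that the throat pressure tracks the outlet pressure until it is clamped at the vapor pressure. All numbers below are illustrative.

```python
import math

def venturi_massflow(p_in, p_out, p_vap, rho, a_throat, cd=0.98):
    """Idealized cavitating venturi: mdot = Cd * A * sqrt(2 * rho * dp)."""
    p_throat = max(p_out, p_vap)  # cavitation clamps the throat pressure
    return cd * a_throat * math.sqrt(2.0 * rho * (p_in - p_throat))

# water through a 5 mm diameter throat; once p_out drops below p_vap,
# further reductions no longer change the flowrate (the venturi is choked)
area = math.pi * 0.0025 ** 2
print(venturi_massflow(10e5, 3e5, 2.3e3, 998.0, area))   # not choked
print(venturi_massflow(10e5, 1e3, 2.3e3, 998.0, area))   # choked at p_throat = p_vap
```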
Admixture analysis of age at onset in first episode bipolar disorder.
Nowrouzi, Behdin; McIntyre, Roger S; MacQueen, Glenda; Kennedy, Sidney H; Kennedy, James L; Ravindran, Arun; Yatham, Lakshmi; De Luca, Vincenzo
2016-09-01
Many studies have used admixture analysis to separate age-at-onset (AAO) subgroups in bipolar disorder, but none of them examined first episode patients. The purpose of this study was to investigate the influence of clinical variables on AAO in first episode bipolar patients. Admixture analysis was applied to identify the model best fitting the observed AAO distribution of a sample of 194 patients with a DSM-IV diagnosis of bipolar disorder, and a finite mixture model was applied to assess the effect of clinical covariates on AAO. Using the BIC method, the model best fitting the observed AAO distribution was a mixture of three normal distributions. We identified three AAO groups: early age-at-onset (EAO) (µ=18.0, σ=2.88), intermediate age-at-onset (IAO) (µ=28.7, σ=3.5), and late age-at-onset (LAO) (µ=47.3, σ=7.8), comprising 69%, 22%, and 9% of the sample, respectively. Our first episode sample distribution model was significantly different from those of most other studies that applied mixture analysis. The main limitation is that our sample may have inadequate statistical power to detect clinical associations with the AAO subgroups. This study confirms that bipolar disorder can be classified into three groups based on the AAO distribution. The data reported in our paper provide more insight into the diagnostic heterogeneity of bipolar disorder across the three AAO subgroups. Copyright © 2016 Elsevier B.V. All rights reserved.
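A brief sketch of the model-choice step is given below: normal mixtures with one to five components are fitted to synthetic AAO data and compared by BIC. The synthetic sample simply reuses the component means, SDs, and proportions reported above, so only the data-generation step is artificial.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# synthetic ages at onset from the three reported components (n = 194)
aao = np.concatenate([rng.normal(18.0, 2.88, 134),          # EAO, 69%
                      rng.normal(28.7, 3.50, 43),           # IAO, 22%
                      rng.normal(47.3, 7.80, 17)])[:, None] # LAO, 9%

fits = [GaussianMixture(k, random_state=0).fit(aao) for k in range(1, 6)]
bics = [m.bic(aao) for m in fits]
best = fits[int(np.argmin(bics))]
print(bics)
print(best.means_.ravel(), best.weights_)  # ideally recovers three components
```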
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models, or the use of external information via informative priors or penalized likelihoods may help. © 2017 by the Ecological Society of America.
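For readers unfamiliar with the model class, a minimal sketch of the Poisson binomial N-mixture likelihood is given below, marginalizing the latent site abundances by truncated summation; the simulated counts and truncation limit are illustrative, and this is not the screening code used in the study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def neg_log_lik(theta, y, n_max=200):
    """Binomial N-mixture: y_ij ~ Binomial(N_i, p), N_i ~ Poisson(lambda)."""
    lam = np.exp(theta[0])                      # abundance rate
    p = 1.0 / (1.0 + np.exp(-theta[1]))         # detection probability
    N = np.arange(n_max + 1)
    prior = poisson.pmf(N, lam)
    ll = 0.0
    for site in y:                              # sites are independent
        cond = np.prod(binom.pmf(site[:, None], N, p), axis=0)
        ll += np.log(np.sum(cond * prior) + 1e-300)
    return -ll

rng = np.random.default_rng(2)
N_true = rng.poisson(5.0, size=50)                     # 50 sites
y = rng.binomial(N_true[:, None], 0.4, size=(50, 3))   # 3 visits per site
fit = minimize(neg_log_lik, x0=[np.log(3.0), 0.0], args=(y,))
print(np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1])))  # lambda, p
```

Flat ridges in this likelihood (lambda inflating while p deflates) are exactly the identifiability failures the abstract reports for negative-binomial mixtures.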
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the usage of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
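The sketch below illustrates only the simplest ingredient of such a bias analysis: attenuation under classical error and its method-of-moments correction. It ignores the autocorrelation, Berkson components, and mixed-model structure that the paper handles, and all variances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, err_sd = 2000, 0.7
x = rng.normal(0.0, 1.0, n)              # true exposure
w = x + rng.normal(0.0, err_sd, n)       # observed exposure, classical error
y = 0.5 * x + rng.normal(0.0, 1.0, n)    # outcome with true slope 0.5

beta_naive = np.cov(w, y)[0, 1] / np.var(w)          # attenuated estimate
reliability = (np.var(w) - err_sd**2) / np.var(w)    # needs known error variance
beta_corrected = beta_naive / reliability            # method-of-moments fix
print(beta_naive, beta_corrected)
```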
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as Mb and Mh, and other classes of models that are only possible to describe within the multinomial N-mixture framework.
The Cramér-Rao Bounds and Sensor Selection for Nonlinear Systems with Uncertain Observations.
Wang, Zhiguo; Shen, Xiaojing; Wang, Ping; Zhu, Yunmin
2018-04-05
This paper considers the problems of the posterior Cramér-Rao bound and sensor selection for multi-sensor nonlinear systems with uncertain observations. In order to effectively overcome the difficulties caused by uncertainty, we investigate two methods to derive the posterior Cramér-Rao bound. The first method is based on the recursive formula of the Cramér-Rao bound and the Gaussian mixture model. However, it requires computing a complex integral based on the joint probability density function of the sensor measurements and the target state, and its computational burden is relatively high, especially in large sensor networks. Inspired by the idea of the expectation-maximization algorithm, the second method introduces some 0-1 latent variables to deal with the Gaussian mixture model. Since the regularity condition of the posterior Cramér-Rao bound is not satisfied for the discrete uncertain system, we use continuous variables to approximate the discrete latent variables. A new Cramér-Rao bound can then be obtained by a limiting process applied to the Cramér-Rao bound of the continuous system. This avoids the complex integral and reduces the computational burden. Based on the new posterior Cramér-Rao bound, the optimal solution of the sensor selection problem can be derived analytically, so it can be used to deal with sensor selection in large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.
Dynamics relationship between stock prices and economic variables in Malaysia
NASA Astrophysics Data System (ADS)
Chun, Ooi Po; Arsad, Zainudin; Huen, Tan Bee
2014-07-01
Knowledge of the linkages between stock prices and macroeconomic variables is essential for the formulation of effective monetary policy. This study investigates the relationship between stock prices in Malaysia (KLCI) and four selected macroeconomic variables, namely the industrial production index (IPI), quasi money supply (MS2), real exchange rate (REXR) and the 3-month Treasury bill (TRB). The variables used in this study are monthly data from 1996 to 2012. A vector error correction (VEC) model and the Kalman filter (KF) technique are utilized to assess the impact of the macroeconomic variables on stock prices. The results from the cointegration test revealed that the stock prices and macroeconomic variables are cointegrated. In contrast to the constant estimates of the static VEC model, the KF estimates noticeably exhibit time-varying attributes over the entire sample period. The varying estimates of the impact coefficients should better reflect the changing economic environment. Surprisingly, IPI is negatively related to the KLCI, with the estimates of the impact slowly increasing and becoming positive in recent years. TRB is found to be generally negatively related to the KLCI, with the impact fluctuating around the constant estimate of the VEC model. The KF estimates for REXR and MS2 show a mixture of positive and negative impacts on the KLCI. The coefficients of the error correction term (ECT) are negative for the majority of the sample period, signifying that the stock prices responded to stabilize any short-term deviation in the economic system. The findings from the KF model indicate that any implication based on the usual static model may lead to authorities implementing less appropriate policies.
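One standard way to obtain KF estimates of this kind is to model the regression coefficients as a random walk; a bare-bones sketch follows, with the state and observation noise variances (q, r) as illustrative tuning values rather than those of the study.

```python
import numpy as np

def kf_time_varying_regression(y, X, q=1e-4, r=1.0):
    """State space: beta_t = beta_{t-1} + w_t,  y_t = X_t beta_t + v_t."""
    T, k = X.shape
    beta, P = np.zeros(k), np.eye(k)
    Q, path = q * np.eye(k), np.zeros((T, k))
    for t in range(T):
        P = P + Q                      # predict: random-walk coefficients
        H = X[t:t + 1]                 # 1 x k observation row
        S = H @ P @ H.T + r            # innovation variance
        K = (P @ H.T) / S              # Kalman gain
        beta = beta + (K * (y[t] - H @ beta)).ravel()
        P = P - K @ H @ P              # update
        path[t] = beta
    return path                        # time-varying impact coefficients
```

Applied to monthly KLCI changes regressed on the four macroeconomic variables, the rows of the returned path would trace out exactly the kind of time-varying impact estimates described above.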
Improved materials and processes of dispenser cathodes
NASA Astrophysics Data System (ADS)
Longo, R. T.; Sundquist, W. F.; Adler, E. A.
1984-08-01
Several process variables affecting the final electron emission properties of impregnated dispenser cathodes were investigated. In particular, the influence of billet porosity, impregnant composition and purity, and osmium-ruthenium coating was studied. Work function and cathode evaporation data were used to evaluate cathode performance and to formulate a model of cathode activation and emission. Results showed that sorted tungsten powder can be reproducibly fabricated into cathode billets. Billet porosity was observed to have the least effect on cathode performance. Use of the 4:1:1 aluminate mixture resulted in lower work functions than did use of the 5:3:2 mixture. Under similar drawout conditions, the coated cathodes showed superior emission relative to uncoated cathodes. In actual Pierce gun structures under accelerated life testing, sulfur in the impregnant is clearly shown to reduce cathode performance.
NASA Astrophysics Data System (ADS)
Lee, H.-H.; Chen, S.-H.; Kleeman, M. J.; Zhang, H.; DeNero, S. P.; Joe, D. K.
2015-11-01
The source-oriented Weather Research and Forecasting chemistry model (SOWC) was modified to include warm cloud processes and applied to investigate how aerosol mixing states influence fog formation and optical properties in the atmosphere. SOWC tracks a 6-dimensional chemical variable (X, Z, Y, Size Bins, Source Types, Species) through an explicit simulation of atmospheric chemistry and physics. A source-oriented cloud condensation nuclei module was implemented into the SOWC model to simulate warm clouds using the modified two-moment Purdue Lin microphysics scheme. The Goddard shortwave and longwave radiation schemes were modified to interact with source-oriented aerosols and cloud droplets so that aerosol direct and indirect effects could be studied. The enhanced SOWC model was applied to study a fog event that occurred on 17 January 2011, in the Central Valley of California. Tule fog occurred because an atmospheric river effectively advected high moisture into the Central Valley and nighttime drainage flow brought cold air from mountains into the valley. The SOWC model produced reasonable liquid water path, spatial distribution and duration of fog events. The inclusion of aerosol-radiation interaction only slightly modified simulation results since cloud optical thickness dominated the radiation budget in fog events. The source-oriented mixture representation of particles reduced cloud droplet number relative to the internal mixture approach that artificially coats hydrophobic particles with hygroscopic components. The fraction of aerosols activating into CCN at a supersaturation of 0.5 % in the Central Valley decreased from 94 % in the internal mixture model to 80 % in the source-oriented model. This increased surface energy flux by 3-5 W m-2 and surface temperature by as much as 0.25 K in the daytime.
Behavior of complex mixtures in aquatic environments: a synthesis of PNL ecological research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fickeisen, D.H.; Vaughan, B.E.
1984-06-01
The term complex mixture has recently been applied to energy-related process streams, products and wastes that typically contain hundreds or thousands of individual organic compounds, like petroleum or synthetic fuel oils, but it is more generally applicable. A six-year program of ecological research has focused on four areas important to understanding the environmental behavior of complex mixtures: physicochemical variables, individual organism responses, ecosystem-level determinations, and metabolism. Of these areas, physicochemical variables and organism responses were intensively studied; system-level determinations and metabolism represent more recent directions. Chemical characterization was integrated throughout all areas of the program, and state-of-the-art methods were applied. 155 references, 35 figures, 4 tables.
NASA Astrophysics Data System (ADS)
Ushakov, Anton; Orlov, Alexey; Sovach, Victor P.
2018-03-01
This article presents the results of research on the filling of a gas centrifuge cascade, used for separation of a multicomponent isotope mixture, with process gas at various feed flow rates. A mathematical model of the nonstationary hydraulic and separation processes occurring in the gas centrifuge cascade was used. The objective of the research is to determine the regularities of the transient behavior of nickel isotopes in the cascade during its filling. It is shown that the isotope concentrations in the cascade stages after filling depend on the variable parameters and are not equal to the concentrations in the initial isotope mixture (or in the feed flow of the cascade), although this equality had been assumed by earlier researchers when modeling such nonstationary processes as the approach to steady-state isotope concentrations in the cascade. The article describes the physical regularities of the isotope distribution in the cascade stages after filling. It is shown that by varying the cascade parameters (feed flow rate, feed stage number or number of cascade stages) it is possible to change the isotope concentrations in the cascade output flows (light or heavy fraction) and thereby reduce the duration of the subsequent approach to steady-state isotope concentrations in the cascade.
Connolly, John; Sebastià, Maria-Teresa; Kirwan, Laura; Finn, John Anthony; Llurba, Rosa; Suter, Matthias; Collins, Rosemary P; Porqueddu, Claudio; Helgadóttir, Áslaug; Baadshaug, Ole H; Bélanger, Gilles; Black, Alistair; Brophy, Caroline; Čop, Jure; Dalmannsdóttir, Sigridur; Delgado, Ignacio; Elgersma, Anjo; Fothergill, Michael; Frankow-Lindberg, Bodil E; Ghesquiere, An; Golinski, Piotr; Grieu, Philippe; Gustavsson, Anne-Maj; Höglind, Mats; Huguenin-Elie, Olivier; Jørgensen, Marit; Kadziuliene, Zydre; Lunnan, Tor; Nykanen-Kurki, Paivi; Ribas, Angela; Taube, Friedhelm; Thumm, Ulrich; De Vliegher, Alex; Lüscher, Andreas
2018-03-01
Grassland diversity can support sustainable intensification of grassland production through increased yields, reduced inputs and limited weed invasion. We report the effects of diversity on weed suppression from 3 years of a 31-site continental-scale field experiment. At each site, 15 grassland communities comprising four monocultures and 11 four-species mixtures based on a wide range of species' proportions were sown at two densities and managed by cutting. Forage species were selected according to two crossed functional traits, "method of nitrogen acquisition" and "pattern of temporal development". Across sites, years and sown densities, annual weed biomass in mixtures and monocultures was 0.5 and 2.0 t DM ha-1 (7% and 33% of total biomass, respectively). Over 95% of mixtures had weed biomass lower than the average of monocultures, and in two-thirds of cases, lower than in the most suppressive monoculture (transgressive suppression). Suppression was significantly transgressive for 58% of site-years. Transgressive suppression by mixtures was maintained across years, independent of site productivity. Based on models, average weed biomass in mixture over the whole experiment was 52% less (95% confidence interval: 30%-75%) than in the most suppressive monoculture. Transgressive suppression of weed biomass was significant in each year across all mixtures and for each mixture. Weed biomass was consistently low across all mixtures and years and was in some cases significantly, but not largely, different from that in the equiproportional mixture. The average variability (standard deviation) of annual weed biomass within a site was much lower for mixtures (0.42) than for monocultures (1.77). Synthesis and applications. Weed invasion can be diminished through a combination of forage species selected for complementarity and persistence traits in systems designed to reduce reliance on fertiliser nitrogen. In this study, effects of diversity on weed suppression were consistently strong across mixtures varying widely in species' proportions and over time. The level of weed biomass did not vary greatly across mixtures varying widely in proportions of sown species. These diversity benefits in intensively managed grasslands are relevant for the sustainable intensification of agriculture and, importantly, are achievable through practical farm-scale actions.
Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin
2018-01-01
The joint toxicity of chemical mixtures has emerged as a popular topic, particularly the additive and potentially synergistic actions of environmental mixtures. We investigated the 24-h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and the 96-h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models, which make different assumptions about the mode of toxic action in toxicodynamic processes, through single and binary metal mixture tests. Results showed that the CA and IA models had varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the mode of toxic action may depend on the combinations and concentrations of the tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated synergistic interactions for the Cu-Cd and Cu-Pb mixtures, no interaction for the Cd-Pb mixtures, and slight antagonistic interactions for the Cu-Zn mixtures. These results illustrate that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie
2013-01-01
Humans are concomitantly exposed to numerous chemicals, and an infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment, the mathematical prediction of mixture effects using knowledge on single chemicals is therefore desirable. We investigated the pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First, we measured the effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single-chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency-adjusted mixture containing five pesticides. Predictions of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for the effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose-response curve. Regarding effects on progesterone and estradiol, some chemicals had stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency-adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906
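For reference, one common closed form of GCA for Hill-slope-1 concentration-response curves lets partial agonists enter with their own maximal effects, which is what keeps the prediction defined where CA's inverse dose-response does not exist; the parameters below are illustrative.

```python
import numpy as np

def gca_effect(conc, alpha, ec50):
    """GCA for Hill slope 1: E = sum(a_i * c_i / K_i) / (1 + sum(c_i / K_i))."""
    conc, alpha, ec50 = map(np.asarray, (conc, alpha, ec50))
    scaled = conc / ec50
    return np.sum(alpha * scaled) / (1.0 + np.sum(scaled))

# hypothetical: a full agonist (alpha = 1.0) mixed with a partial agonist (0.4)
print(gca_effect(conc=[1.0, 5.0], alpha=[1.0, 0.4], ec50=[2.0, 3.0]))
```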
A Study of Quasar Selection in the Supernova Fields of the Dark Energy Survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tie, S. S.; Martini, P.; Mudd, D.
In this paper, we present a study of quasar selection using the supernova fields of the Dark Energy Survey (DES). We used a quasar catalog from an overlapping portion of the SDSS Stripe 82 region to quantify the completeness and efficiency of selection methods involving color, probabilistic modeling, variability, and combinations of color/probabilistic modeling with variability. In all cases, we considered only objects that appear as point sources in the DES images. We examine color selection methods based on the Wide-field Infrared Survey Explorer (WISE) mid-IR W1-W2 color, a mixture of WISE and DES colors (g - i and i - W1), and a mixture of Vista Hemisphere Survey and DES colors (g - i and i - K). For probabilistic quasar selection, we used XDQSO, an algorithm that employs an empirical multi-wavelength flux model of quasars to assign quasar probabilities. Our variability selection uses the multi-band χ2-probability that sources are constant in the DES Year 1 griz-band light curves. The completeness and efficiency are calculated relative to an underlying sample of point sources that are detected in the required selection bands and pass our data quality and photometric error cuts. We conduct our analyses at two magnitude limits, i < 19.8 mag and i < 22 mag. For the subset of sources with W1 and W2 detections, the W1-W2 color or XDQSOz method combined with variability gives the highest completenesses of >85% for both i-band magnitude limits and efficiencies of >80% to the bright limit and >60% to the faint limit; however, the giW1 and giW1+variability methods give the highest quasar surface densities. The XDQSOz method and combinations of W1W2/giW1/XDQSOz with variability are among the better selection methods when both high completeness and high efficiency are desired. We also present the OzDES Quasar Catalog of 1263 spectroscopically confirmed quasars from three years of OzDES observation in the 30 deg2 of the DES supernova fields. The catalog includes quasars with redshifts up to z ~ 4 and brighter than i = 22 mag, although the catalog is not complete up to this magnitude limit.
Schmiege, Sarah J; Bryan, Angela D
2016-04-01
Justice-involved adolescents engage in high levels of risky sexual behavior and substance use, and understanding potential relationships among these constructs is important for effective HIV/STI prevention. A regression mixture modeling approach was used to determine whether subgroups could be identified based on the regression of two indicators of sexual risk (condom use and frequency of intercourse) on three measures of substance use (alcohol, marijuana and hard drugs). Three classes were observed among n = 596 adolescents on probation: none of the substances predicted outcomes for approximately 18 % of the sample; alcohol and marijuana use were predictive for approximately 59 % of the sample, and marijuana use and hard drug use were predictive in approximately 23 % of the sample. Demographic, individual difference, and additional sexual and substance use risk variables were examined in relation to class membership. Findings are discussed in terms of understanding profiles of risk behavior among at-risk youth.
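Regression mixture models of this kind are usually fitted in specialized latent-variable software; the sketch below is a bare-bones EM algorithm for a k-class mixture of linear regressions that conveys the core idea (class-specific slopes of a risk outcome on substance-use predictors), with all dimensions and seeds arbitrary.

```python
import numpy as np

def regression_mixture_em(X, y, k=3, iters=200, seed=0):
    """EM for y | class j ~ Normal(X beta_j, sigma2_j) with class weights pi."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])          # add intercept
    beta = rng.normal(size=(k, d + 1))
    sigma2, pi = np.ones(k), np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities from per-class Gaussian log-likelihoods
        resid = y[:, None] - Xb @ beta.T          # n x k residual matrix
        logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * sigma2)
                - 0.5 * resid**2 / sigma2)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per class
        for j in range(k):
            Xw = Xb * r[:, j:j + 1]
            beta[j] = np.linalg.solve(Xw.T @ Xb, Xw.T @ y)
            sigma2[j] = np.sum(r[:, j] * (y - Xb @ beta[j])**2) / r[:, j].sum()
        pi = r.mean(axis=0)
    return beta, sigma2, pi, r   # r gives each individual's class membership
```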
A Latent Growth Mixture Modeling Approach to PTSD Symptoms in Rape Victims.
Armour, Cherie; Shevlin, Mark; Elklit, Ask; Mroczek, Dan
2012-03-01
The research literature has suggested that longitudinal changes in posttraumatic stress disorder (PTSD) could be adequately described in terms of one universal trajectory, with individual differences in baseline levels (intercept) and rate of change (slope) being negligible. However, not everyone who has experienced a trauma is diagnosed with PTSD, and symptom severity levels differ between individuals exposed to similar traumas. The current study employed the latent growth mixture modeling technique to test for multiple trajectories using data from a sample of Danish rape victims (N = 255). In addition, the analysis aimed to determine whether a number of explanatory variables could differentiate between the trajectories (age, acute stress disorder [ASD], and perceived social support). The results supported the existence of two PTSD trajectories. ASD was found to be the only significant predictor of the trajectory characterized by high initial levels of PTSD symptomatology. The present findings confirmed the existence of multiple trajectories with regard to PTSD symptomatology in a way that may be useful to clinicians working with this population.
Critically evaluating the theory and performance of Bayesian analysis of macroevolutionary mixtures
Moore, Brian R.; Höhna, Sebastian; May, Michael R.; Rannala, Bruce; Huelsenbeck, John P.
2016-01-01
Bayesian analysis of macroevolutionary mixtures (BAMM) has recently taken the study of lineage diversification by storm. BAMM estimates the diversification-rate parameters (speciation and extinction) for every branch of a study phylogeny and infers the number and location of diversification-rate shifts across branches of a tree. Our evaluation of BAMM reveals two major theoretical errors: (i) the likelihood function (which estimates the model parameters from the data) is incorrect, and (ii) the compound Poisson process prior model (which describes the prior distribution of diversification-rate shifts across branches) is incoherent. Using simulation, we demonstrate that these theoretical issues cause statistical pathologies; posterior estimates of the number of diversification-rate shifts are strongly influenced by the assumed prior, and estimates of diversification-rate parameters are unreliable. Moreover, the inability to correctly compute the likelihood or to correctly specify the prior for rate-variable trees precludes the use of Bayesian approaches for testing hypotheses regarding the number and location of diversification-rate shifts using BAMM. PMID:27512038
Torres-Carvajal, Omar; Schulte, James A; Cadle, John E
2006-04-01
The South American iguanian lizard genus Stenocercus includes 54 species occurring mostly in the Andes and adjacent lowland areas from northern Venezuela and Colombia to central Argentina at elevations of 0-4000m. Small taxon or character sampling has characterized all phylogenetic analyses of Stenocercus, which has long been recognized as sister taxon to the Tropidurus Group. In this study, we use mtDNA sequence data to perform phylogenetic analyses that include 32 species of Stenocercus and 12 outgroup taxa. Monophyly of this genus is strongly supported by maximum parsimony and Bayesian analyses. Evolutionary relationships within Stenocercus are further analyzed with a Bayesian implementation of a general mixture model, which accommodates variability in the pattern of evolution across sites. These analyses indicate a basal split of Stenocercus into two clades, one of which receives very strong statistical support. In addition, we test previous hypotheses using non-parametric and parametric statistical methods, and provide a phylogenetic classification for Stenocercus.
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
Quadrature Moments Method for the Simulation of Turbulent Reactive Flows
NASA Technical Reports Server (NTRS)
Raman, Venkatramanan; Pitsch, Heinz; Fox, Rodney O.
2003-01-01
A sub-filter model for reactive flows, namely the DQMOM model, was formulated for Large Eddy Simulation (LES) using the filtered mass density function (FDF). Transport equations required to determine the location and size of the delta-peaks were then formulated for a 2-peak decomposition of the FDF. The DQMOM scheme was implemented in an existing structured-grid LES solver. Simulations of a scalar shear layer based on an experimental configuration showed that the first and second moments of both reactive and inert scalars are in good agreement with a conventional Lagrangian scheme that evolves the same FDF. Comparisons with LES simulations performed using a laminar-chemistry assumption for the reactive scalar show that the new method provides vast improvements at minimal computational cost. Currently, the DQMOM model is being implemented for use with the progress variable/mixture fraction model of Pierce. Comparisons with experimental results and LES simulations using a single environment for the progress variable are planned. Future studies will aim at understanding the effect of increasing the number of environments on predictions.
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
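For readers who want to see the estimation machinery behind abstracts like this one, here is a self-contained EM sketch for maximum likelihood fitting of a two-component univariate normal mixture. The simulated "returns" and all parameter values are hypothetical stand-ins, not the paper's data.

```python
# EM for a two-component univariate normal mixture (maximum likelihood).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-0.02, 0.05, 400),   # hypothetical "volatile" regime
                    rng.normal(0.01, 0.02, 600)])   # hypothetical "calm" regime

# Initialize weights, means, standard deviations.
w, mu, sd = np.array([0.5, 0.5]), np.array([-0.05, 0.05]), np.array([0.1, 0.1])
for _ in range(200):
    # E-step: responsibilities of each component for each observation.
    dens = w * norm.pdf(x[:, None], mu, sd)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted maximum likelihood updates.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights", w.round(3), "means", mu.round(4), "sds", sd.round(4))
```

Each iteration provably does not decrease the observed-data likelihood, which is why EM is the standard fitting routine for mixtures of this kind.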
Evaluation of I-FIT results and machine variability using MnRoad test track mixtures.
DOT National Transportation Integrated Search
2017-06-01
The Illinois Flexibility Index Test (I-FIT) was developed to distinguish between different mixtures in terms of potential cracking. Several machines were manufactured and are currently available to perform the I-FIT. This report presents the result...
Evaluation of a locally homogeneous model of spray evaporation
NASA Technical Reports Server (NTRS)
Shearer, A. J.; Faeth, G. M.
1979-01-01
A model of spray evaporation is presented which employs a second-order turbulence model in conjunction with the locally homogeneous flow approximation, which implies infinitely fast interphase transport rates. Measurements to test the model were completed for single-phase constant- and variable-density jets, as well as an evaporating spray in stagnant air. Profiles of mean velocity, composition, temperature and drop size distribution, as well as velocity fluctuations and Reynolds stress, were measured within the spray. Predictions were in agreement with measurements in the single-phase flows and also with many characteristics of the spray, e.g. flow width, radial profiles of mean and turbulent quantities, and the axial rate of decay of mean velocity and mixture fraction.
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Gray, H. L.
1983-01-01
Efforts in support of the development of multicrop production monitoring capability are reported. In particular, segment level proportion estimation techniques based upon a mixture model were investigated. Efforts have dealt primarily with evaluation of current techniques and development of alternative ones. A comparison of techniques is provided on both simulated and LANDSAT data along with an analysis of the quality of profile variables obtained from LANDSAT data.
A smooth mixture of Tobits model for healthcare expenditure.
Keane, Michael; Stavrunova, Olena
2011-09-01
This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females. Copyright © 2011 John Wiley & Sons, Ltd.
Predicting herbicide mixture effects on multiple algal species using mixture toxicity models.
Nagai, Takashi
2017-10-01
The validity of applying the mixture toxicity models, concentration addition and independent action, to a species sensitivity distribution (SSD) for calculation of a multisubstance potentially affected fraction was examined in laboratory experiments. Toxicity assays of herbicide mixtures using 5 species of periphytic algae were conducted. Two mixture experiments were designed: a mixture of 5 herbicides with similar modes of action and a mixture of 5 herbicides with dissimilar modes of action, corresponding to the assumptions of the concentration addition and independent action models, respectively. Experimentally obtained mixture effects on the 5 algal species were converted to the fraction of affected (>50% effect on growth rate) species. The predictive ability of the concentration addition and independent action models with direct application to the SSD depended on the mode of action of the chemicals: prediction was better for the concentration addition model for the mixture of herbicides with similar modes of action, and better for the independent action model for the mixture of herbicides with dissimilar modes of action. Thus, the concentration addition and independent action models could be applied to the SSD in the same manner as for a single-species effect. The present study, validating the application of the concentration addition and independent action models to SSDs, supports the usefulness of the multisubstance potentially affected fraction as an index of ecological risk. Environ Toxicol Chem 2017;36:2624-2630. © 2017 SETAC.
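The two reference models named above have simple closed forms, sketched below under the common assumption of two-parameter log-logistic dose-response curves (all EC50s, slopes, and concentrations are hypothetical): independent action multiplies the probabilities of no effect, while concentration addition solves for the effect level at which the summed toxic units equal one.

```python
# Mixture effect prediction under concentration addition (CA) and
# independent action (IA), assuming two-parameter log-logistic
# dose-response curves for each component (hypothetical parameters).
import numpy as np
from scipy.optimize import brentq

ec50 = np.array([1.0, 3.0, 0.5, 2.0, 4.0])    # EC50s, ug/L (hypothetical)
slope = np.array([1.2, 0.9, 1.5, 1.1, 1.0])   # Hill slopes (hypothetical)
conc = 0.3 * ec50                              # the tested mixture

def effect(c, ec50, slope):
    return 1.0 / (1.0 + (ec50 / c) ** slope)

# IA: combine independently acting components via probabilities of no effect.
e_ia = 1.0 - np.prod(1.0 - effect(conc, ec50, slope))

# CA: solve sum_i c_i / EC_{x,i} = 1 for the mixture effect x, where
# EC_{x,i} = EC50_i * (x / (1 - x))**(1 / slope_i) for a log-logistic curve.
def ca_residual(x):
    ec_x = ec50 * (x / (1.0 - x)) ** (1.0 / slope)
    return np.sum(conc / ec_x) - 1.0

e_ca = brentq(ca_residual, 1e-9, 1 - 1e-9)    # residual changes sign on (0, 1)
print(f"CA effect: {e_ca:.3f}, IA effect: {e_ia:.3f}")
```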
Banerjee, D; Dalmonte, M; Müller, M; Rico, E; Stebler, P; Wiese, U-J; Zoller, P
2012-10-26
Using a Fermi-Bose mixture of ultracold atoms in an optical lattice, we construct a quantum simulator for a U(1) gauge theory coupled to fermionic matter. The construction is based on quantum links which realize continuous gauge symmetry with discrete quantum variables. At low energies, quantum link models with staggered fermions emerge from a Hubbard-type model which can be quantum simulated. This allows us to investigate string breaking as well as the real-time evolution after a quench in gauge theories, which are inaccessible to classical simulation methods.
Montgomery, Katherine L; Vaughn, Michael G; Thompson, Sanna J; Howard, Matthew O
2013-11-01
Research on juvenile offenders has largely treated this population as a homogeneous group. However, recent findings suggest that this at-risk population may be considerably more heterogeneous than previously believed. This study compared mixture regression analyses with standard regression techniques in an effort to explain how known factors such as distress, trauma, and personality are associated with drug abuse among juvenile offenders. Researchers recruited 728 juvenile offenders from Missouri juvenile correctional facilities for participation in this study. Researchers investigated past-year substance use in relation to the following variables: demographic characteristics (gender, ethnicity, age, familial use of public assistance), antisocial behavior, and mental illness symptoms (psychopathic traits, psychiatric distress, and prior trauma). Results indicated that both the standard and mixture regression approaches identified significant variables related to past-year substance use in this population; however, the mixture regression methods provided greater specificity. Mixture regression analytic methods may help policy makers and practitioners better understand and intervene with the substance-related subgroups of juvenile offenders.
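A generic version of the mixture regression technique contrasted with standard regression above is sketched here: EM for a two-component mixture of linear regressions, with synthetic covariates standing in for the study's predictors (the paper's exact specification is not reproduced).

```python
# EM for a two-component mixture of linear regressions:
# y | x, class k  ~  Normal(x @ beta_k, sigma_k^2), with mixing weights w_k.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 728
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 covariates
z = rng.random(n) < 0.4                                     # latent subgroup labels
beta_true = np.where(z[:, None], [2.0, 1.5, 0.0], [0.5, -0.5, 1.0])
y = (X * beta_true).sum(axis=1) + rng.normal(0, 0.5, n)

K = 2
w = np.full(K, 1.0 / K)
beta = rng.normal(size=(K, X.shape[1]))
sigma = np.ones(K)
for _ in range(100):
    # E-step: responsibilities of each regression component.
    dens = np.stack([w[k] * norm.pdf(y, X @ beta[k], sigma[k]) for k in range(K)], axis=1)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per component, then weight update.
    for k in range(K):
        W = r[:, k]
        XtW = X.T * W
        beta[k] = np.linalg.solve(XtW @ X, XtW @ y)
        resid = y - X @ beta[k]
        sigma[k] = np.sqrt((W * resid ** 2).sum() / W.sum())
    w = r.mean(axis=0)

print("weights", w.round(2))
print("coefficients\n", beta.round(2))
```

The per-component coefficient vectors are what give mixture regression the "greater specificity" noted above: each latent subgroup gets its own covariate effects instead of one population-averaged fit.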
Aquatic exposures of chemical mixtures in urban environments: Approaches to impact assessment.
de Zwart, Dick; Adams, William; Galay Burgos, Malyka; Hollender, Juliane; Junghans, Marion; Merrington, Graham; Muir, Derek; Parkerton, Thomas; De Schamphelaere, Karel A C; Whale, Graham; Williams, Richard
2018-03-01
Urban regions of the world are expanding rapidly, placing additional stress on water resources. Urban water bodies serve many purposes, from washing and sources of drinking water to transport and conduits for storm drainage and effluent discharge. These water bodies receive chemical emissions arising from single or multiple point sources and from diffuse sources, which can be continuous, intermittent, or seasonal. Thus, aquatic organisms in these water bodies are exposed to temporally and compositionally variable mixtures. We have delineated source-specific signatures of these mixtures for diffuse urban runoff and urban point source exposure scenarios to support risk assessment and management of these mixtures. The first step in a tiered approach to assessing chemical exposure has been developed based on the event mean concentration concept, with chemical concentrations in runoff defined by the volumes of water leaving each surface and the chemical exposure mixture profiles for different urban scenarios. Although generalizations can be made about the chemical composition of urban sources and event mean exposure predictions for initial prioritization, such modeling needs to be complemented with biological monitoring data. It is highly unlikely that the current paradigm of routine regulatory chemical monitoring alone will provide a realistic appraisal of urban aquatic chemical mixture exposures. Future consideration is also needed of the role of nonchemical stressors in such highly modified urban water bodies. Environ Toxicol Chem 2018;37:703-714. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
Kelley, Mary E.; Anderson, Stewart J.
2008-01-01
The aim of the paper is to produce a methodology that will allow users of ordinal scale data to more accurately model the distribution of ordinal outcomes in which some subjects are susceptible to exhibiting the response and some are not (i.e., the dependent variable exhibits zero inflation). This situation occurs with ordinal scales in which there is an anchor that represents the absence of the symptom or activity, such as “none”, “never” or “normal”, and is particularly common when measuring abnormal behavior, symptoms, and side effects. Due to the unusually large number of zeros, traditional statistical tests of association can be non-informative. We propose a mixture model for ordinal data with a built-in probability of non-response that allows modeling of the range (e.g., severity) of the scale, while simultaneously modeling the presence/absence of the symptom. Simulations show that the model is well behaved, and a likelihood ratio test can be used to choose between the zero-inflated and the traditional proportional odds model. The model, however, does have minor restrictions on the nature of the covariates that must be satisfied for the model to be identifiable. The method is particularly relevant for public health research, such as large epidemiological surveys, where more careful documentation of the reasons for response may be difficult. PMID:18351711
Evaluation of alfalfa-tall fescue mixtures across multiple environments
USDA-ARS?s Scientific Manuscript database
Binary grass-legume mixtures can benefit forage production systems in different ways, helping growers cope with both increasing input costs (e.g., N fertilizer, herbicides) and potentially more variable weather. The main objective of this study was to evaluate alfalfa (Medicago sativa L.) and tall f...
Temporal and spatial patterns in vegetation and atmospheric properties from AVIRIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, D.A.; Green, R.O.; Adams, J.B.
1997-12-01
Little research has focused on the use of imaging spectrometry for change detection. In this paper, the authors apply Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data to the monitoring of seasonal changes in atmospheric water vapor, liquid water, and surface cover in the vicinity of Jasper Ridge, CA, for three dates in 1992. Apparent surface reflectance was retrieved and water vapor and liquid water mapped by using a radiative-transfer-based inversion that accounts for spatially variable atmospheres. Spectral mixture analysis (SMA) was used to model reflectance data as mixtures of green vegetation (GV), nonphotosynthetic vegetation (NPV), soil, and shade. Temporal and spatial patterns in endmember fractions and liquid water were compared to the normalized difference vegetation index (NDVI). The reflectance retrieval algorithm was tested by using a temporally invariant target.
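The SMA step described above amounts to a constrained linear inversion; a minimal sketch follows, with hypothetical endmember spectra, using nonnegative least squares and a heavily weighted row of ones to impose the sum-to-one constraint on the fractions.

```python
# Linear spectral mixture analysis (SMA): express a pixel spectrum as a
# nonnegative, sum-to-one combination of endmember spectra (GV, NPV, soil,
# shade). Endmember values here are hypothetical stand-ins, not AVIRIS data.
import numpy as np
from scipy.optimize import nnls

bands = 6
E = np.array([[0.05, 0.04, 0.03, 0.40, 0.25, 0.12],    # green vegetation
              [0.15, 0.18, 0.22, 0.30, 0.35, 0.38],    # non-photosynthetic veg.
              [0.10, 0.14, 0.18, 0.24, 0.30, 0.34],    # soil
              [0.00, 0.00, 0.00, 0.00, 0.00, 0.00]]).T  # shade (zero reflectance)

# Synthetic pixel: a known mixture plus a little sensor noise.
pixel = 0.5 * E[:, 0] + 0.2 * E[:, 1] + 0.2 * E[:, 2] + 0.1 * E[:, 3]
pixel += np.random.default_rng(3).normal(0, 0.002, bands)

# Enforce sum-to-one by appending a heavily weighted constraint row.
rho = 100.0
A = np.vstack([E, rho * np.ones((1, E.shape[1]))])
b = np.append(pixel, rho)
fractions, rmse = nnls(A, b)
print("endmember fractions:", fractions.round(3))
```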
Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC
ERIC Educational Resources Information Center
Depaoli, Sarah
2012-01-01
Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…
A two-component Bayesian mixture model to identify implausible gestational age.
Mohammadian-Khoshnoud, Maryam; Moghimbeigi, Abbas; Faradmal, Javad; Yavangi, Mahnaz
2016-01-01
Background: Birth weight and gestational age are two important variables in obstetric research. The primary measure of gestational age is based on a mother's recall of her last menstrual period. This recall may cause random or systematic errors. Therefore, the objective of this study is to utilize a Bayesian mixture model to identify implausible gestational ages. Methods: In this cross-sectional study, medical documents of 502 preterm infants born and hospitalized in Hamadan Fatemieh Hospital from 2009 to 2013 were gathered. Preterm infants were classified as less than 28 weeks and 28 to 31 weeks. A two-component Bayesian mixture model was utilized to identify implausible gestational ages; the first component shows the probability of correct classification and the second the probability of incorrect classification of gestational ages. The data were analyzed through OpenBUGS 3.2.2 and the 'coda' package of R 3.1.1. Results: The means (SD) of the second component for less than 28 weeks and for 28 to 31 weeks were 1179 (0.0123) and 1620 (0.0074), respectively. These values were larger than the means of the first component for both groups, which were 815.9 (0.0123) and 1061 (0.0074), respectively. Conclusion: Errors in recording the gestational ages of these two groups of preterm infants included recording gestational ages lower than the actual values at birth. Therefore, developing scientific methods to correct these errors is essential for providing desirable health services and producing accurate health indicators.
NASA Astrophysics Data System (ADS)
Izquierdo, Germán; Blanquet-Jaramillo, Roberto C.; Sussman, Roberto A.
2018-01-01
The quasi-local scalar variables approach is applied to a spherically symmetric inhomogeneous Lemaître-Tolman-Bondi metric containing a mixture of non-relativistic cold dark matter and coupled dark energy with constant equation of state. The quasi-local coupling term considered is proportional to the quasi-local cold dark matter energy density and a quasi-local Hubble-factor-like scalar via a coupling constant α. The autonomous numerical system obtained from the evolution equations is classified for different choices of the free parameters: the adiabatic constant of the dark energy w and α. When the energy flows from the matter to the dark energy, the past attractor lies in a non-physical region of the energy-density phase space (the dark energy density would become negative in the past), which makes the coupling term non-physical in that case. On the other hand, if the energy flux goes from the dark energy to the dark matter, the past attractor lies in a physical region. The system is also numerically solved for some interesting initial profiles leading to different configurations: an ever-expanding mixture; a scenario where the dark energy is completely consumed by the non-relativistic matter by means of the coupling term; a scenario where the dark energy disappears in the inner layers while the outer layers expand as a mixture of both sources; and, finally, a structure-formation toy-model scenario, where the inner shells containing the mixture collapse while the outer shells expand.
A study of finite mixture model: Bayesian approach on financial time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-07-01
Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical approach for fitting the mixture model. Bayesian methods are widely used because their asymptotic properties provide remarkable results; they also show a consistency property, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is selected using the Bayesian Information Criterion; identifying the number of components is important because a misspecified number may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.
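The component-selection step described above can be sketched with scikit-learn's GaussianMixture (a maximum likelihood stand-in for the paper's Bayesian fit): fit k = 1 through 5 components and keep the k with the lowest BIC. The bivariate "price change" data are synthetic placeholders.

```python
# Choosing the number of mixture components with the Bayesian Information
# Criterion (BIC); a lower BIC indicates a better fit/complexity trade-off.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Hypothetical bivariate (rubber price change, stock index change) data
# drawn from two regimes.
data = np.vstack([rng.multivariate_normal([0.01, -0.02], [[1, -0.6], [-0.6, 1]], 300),
                  rng.multivariate_normal([-0.03, 0.04], [[0.5, -0.2], [-0.2, 0.5]], 200)])

best_k, best_bic = None, np.inf
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(data)
    bic = gm.bic(data)
    print(f"k={k}: BIC={bic:.1f}")
    if bic < best_bic:
        best_k, best_bic = k, bic
print("selected number of components:", best_k)
```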
A Volume-Fraction Based Two-Phase Constitutive Model for Blood
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Rui; Massoudi, Mehrdad; Hund, S.J.
2008-06-01
Mechanically-induced blood trauma such as hemolysis and thrombosis often occurs at microscopic channels, steps and crevices within cardiovascular devices. A predictive mathematical model based on a broad understanding of hemodynamics at the micro scale is needed to mitigate these effects, and is the motivation of this research project. Platelet transport and surface deposition is important in thrombosis. Microfluidic experiments have previously revealed a significant impact of red blood cell (RBC)-plasma phase separation on platelet transport [5], whereby localized platelet concentration can be enhanced by a non-uniform distribution of RBCs in blood flow through a capillary tube and sudden expansion. However, current platelet deposition models either totally ignore RBCs in the fluid by assuming a zero sample hematocrit or treat them as being evenly distributed. As a result, those models often underestimate platelet advection and deposition to certain areas [2]. The current study aims to develop a two-phase blood constitutive model that can predict phase separation in a RBC-plasma mixture at the micro scale. The model is based on a sophisticated theory known as the theory of interacting continua, i.e., mixture theory. The volume fraction is treated as a field variable in this model, which allows the prediction of concentration as well as velocity profiles of both the RBC and plasma phases. The results will be used as the input of successive platelet deposition models.
A competitive binding model predicts the response of mammalian olfactory receptors to mixtures
NASA Astrophysics Data System (ADS)
Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay
Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system we need methods to predict responses to complex mixtures from single-odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions in which only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts responses of 15 (of 18 tested) receptors to within 10-30% of the observed values, for mixtures with 2, 3 and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions lead to additional nonlinearities observed in mixture responses, such as suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
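A common algebraic form of the competitive binding idea described above is sketched below with hypothetical binding constants and efficacies: because all odorants compete for the receptor's single binding site, their occupancy terms share a denominator, which produces the sub-additive mixture responses the abstract refers to.

```python
# Competitive binding model for receptor responses to odorant mixtures:
# molecules compete for one binding site, so all occupancy terms share a
# denominator. Parameters below are hypothetical.
import numpy as np

def single_response(c, K, e):
    """Dose-response of one odorant: efficacy e, half-max constant K."""
    return e * (c / K) / (1.0 + c / K)

def mixture_response(c, K, e):
    """Competitive binding: the shared denominator couples all odorants."""
    x = c / K
    return np.sum(e * x) / (1.0 + np.sum(x))

K = np.array([2.0, 10.0, 0.5])   # binding constants (hypothetical units)
e = np.array([1.0, 0.6, 0.9])    # efficacies
c = np.array([1.0, 5.0, 0.2])    # mixture concentrations

print("sum of single responses:", sum(single_response(ci, Ki, ei)
                                      for ci, Ki, ei in zip(c, K, e)))
print("competitive-binding mixture response:", mixture_response(c, K, e))
```

Comparing the two printed values shows the model's key qualitative prediction: the mixture response is smaller than the sum of the single-odorant responses.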
Advanced dielectric continuum model of preferential solvation
NASA Astrophysics Data System (ADS)
Basilevsky, Mikhail; Odinokov, Alexey; Nikitina, Ekaterina; Grigoriev, Fedor; Petrov, Nikolai; Alfimov, Mikhail
2009-01-01
A continuum model for solvation effects in binary solvent mixtures is formulated in terms of the density functional theory. The presence of two variables, namely, the dimensionless solvent composition y and the dimensionless total solvent density z, is an essential feature of binary systems. Their coupling, hidden in the structure of the local dielectric permittivity function, is postulated at the phenomenological level. Local equilibrium conditions are derived by a variation in the free energy functional expressed in terms of the composition and density variables. They appear as a pair of coupled equations defining y and z as spatial distributions. We consider the simplest spherically symmetric case of the Born-type ion immersed in the benzene/dimethylsulfoxide (DMSO) solvent mixture. The profiles of y(R) and z(R) along the radius R, which measures the distance from the ion center, are found in molecular dynamics (MD) simulations. It is shown that for a given solute ion z(R) does not depend significantly on the composition variable y. A simplified solution is then obtained by inserting z(R), found in the MD simulation for the pure DMSO, in the single equation which defines y(R). In this way composition dependences of the main solvation effects are investigated. The local density augmentation appears as a peak of z(R) at the ion boundary. It is responsible for the fine solvation effects missing when the ordinary solvation theories, in which z = 1, are applied. These phenomena, studied for negative ions, reproduce consistently the simulation results. For positive ions the simulation shows that z ≫ 1 (z = 5-6 at the maximum of the z peak), which means that an extremely dense solvation shell is formed. In such a situation the continuum description fails to be valid within a consistent parametrization.
Biotic and abiotic variables influencing plant litter breakdown in streams: a global study.
Boyero, Luz; Pearson, Richard G; Hui, Cang; Gessner, Mark O; Pérez, Javier; Alexandrou, Markos A; Graça, Manuel A S; Cardinale, Bradley J; Albariño, Ricardo J; Arunachalam, Muthukumarasamy; Barmuta, Leon A; Boulton, Andrew J; Bruder, Andreas; Callisto, Marcos; Chauvet, Eric; Death, Russell G; Dudgeon, David; Encalada, Andrea C; Ferreira, Verónica; Figueroa, Ricardo; Flecker, Alexander S; Gonçalves, José F; Helson, Julie; Iwata, Tomoya; Jinggut, Tajang; Mathooko, Jude; Mathuriau, Catherine; M'Erimba, Charles; Moretti, Marcelo S; Pringle, Catherine M; Ramírez, Alonso; Ratnarajah, Lavenia; Rincon, José; Yule, Catherine M
2016-04-27
Plant litter breakdown is a key ecological process in terrestrial and freshwater ecosystems. Streams and rivers, in particular, contribute substantially to global carbon fluxes. However, there is little information available on the relative roles of different drivers of plant litter breakdown in fresh waters, particularly at large scales. We present a global-scale study of litter breakdown in streams to compare the roles of biotic, climatic and other environmental factors on breakdown rates. We conducted an experiment in 24 streams encompassing latitudes from 47.8° N to 42.8° S, using litter mixtures of local species differing in quality and phylogenetic diversity (PD), and alder (Alnus glutinosa) to control for variation in litter traits. Our models revealed that breakdown of alder was driven by climate, with some influence of pH, whereas variation in breakdown of litter mixtures was explained mainly by litter quality and PD. Effects of litter quality and PD and stream pH were more positive at higher temperatures, indicating that different mechanisms may operate at different latitudes. These results reflect global variability caused by multiple factors, but unexplained variance points to the need for expanded global-scale comparisons. © 2016 The Author(s).
Biotic and abiotic variables influencing plant litter breakdown in streams: a global study
Pearson, Richard G.; Hui, Cang; Gessner, Mark O.; Pérez, Javier; Alexandrou, Markos A.; Graça, Manuel A. S.; Cardinale, Bradley J.; Albariño, Ricardo J.; Arunachalam, Muthukumarasamy; Barmuta, Leon A.; Boulton, Andrew J.; Bruder, Andreas; Callisto, Marcos; Chauvet, Eric; Death, Russell G.; Dudgeon, David; Encalada, Andrea C.; Ferreira, Verónica; Figueroa, Ricardo; Flecker, Alexander S.; Gonçalves, José F.; Helson, Julie; Iwata, Tomoya; Jinggut, Tajang; Mathooko, Jude; Mathuriau, Catherine; M'Erimba, Charles; Moretti, Marcelo S.; Pringle, Catherine M.; Ramírez, Alonso; Ratnarajah, Lavenia; Rincon, José; Yule, Catherine M.
2016-01-01
Plant litter breakdown is a key ecological process in terrestrial and freshwater ecosystems. Streams and rivers, in particular, contribute substantially to global carbon fluxes. However, there is little information available on the relative roles of different drivers of plant litter breakdown in fresh waters, particularly at large scales. We present a global-scale study of litter breakdown in streams to compare the roles of biotic, climatic and other environmental factors on breakdown rates. We conducted an experiment in 24 streams encompassing latitudes from 47.8° N to 42.8° S, using litter mixtures of local species differing in quality and phylogenetic diversity (PD), and alder (Alnus glutinosa) to control for variation in litter traits. Our models revealed that breakdown of alder was driven by climate, with some influence of pH, whereas variation in breakdown of litter mixtures was explained mainly by litter quality and PD. Effects of litter quality and PD and stream pH were more positive at higher temperatures, indicating that different mechanisms may operate at different latitudes. These results reflect global variability caused by multiple factors, but unexplained variance points to the need for expanded global-scale comparisons. PMID:27122551
Model selection for clustering of pharmacokinetic responses.
Guerra, Rui P; Carvalho, Alexandra M; Mateus, Paulo
2018-08-01
Pharmacokinetics comprises the study of drug absorption, distribution, metabolism and excretion over time. Clinical pharmacokinetics, focusing on therapeutic management, offers important insights towards personalised medicine through the study of efficacy and toxicity of drug therapies. This study is hampered by subjects' high variability in drug blood concentration when starting a therapy with the same drug dosage. Clustering of pharmacokinetic responses has been addressed recently as a way to stratify subjects and provide different drug doses for each stratum. This clustering method, however, is not able to automatically determine the correct number of clusters, relying on a user-defined parameter for collapsing clusters that are closer than a given heuristic threshold. We aim to use information-theoretical approaches to achieve parameter-free model selection. We propose two model selection criteria for clustering pharmacokinetic responses, founded on the Minimum Description Length and on the Normalised Maximum Likelihood. Experimental results show the ability of the model selection schemes to unveil the correct number of clusters underlying the mixture of pharmacokinetic responses. In this work we devised two model selection criteria to determine the number of clusters in a mixture of pharmacokinetic curves, advancing over previous works. A cost-efficient parallel implementation in Java of the proposed method is publicly available for the community. Copyright © 2018 Elsevier B.V. All rights reserved.
Hyper-Spectral Image Analysis With Partially Latent Regression and Spatial Markov Dependencies
NASA Astrophysics Data System (ADS)
Deleforge, Antoine; Forbes, Florence; Ba, Sileye; Horaud, Radu
2015-09-01
Hyper-spectral data can be analyzed to recover physical properties at large planetary scales. This involves resolving inverse problems which can be addressed within machine learning, with the advantage that, once a relationship between physical parameters and spectra has been established in a data-driven fashion, the learned relationship can be used to estimate physical parameters for new hyper-spectral observations. Within this framework, we propose a spatially-constrained and partially-latent regression method which maps high-dimensional inputs (hyper-spectral images) onto low-dimensional responses (physical parameters such as the local chemical composition of the soil). The proposed regression model comprises two key features. Firstly, it combines a Gaussian mixture of locally-linear mappings (GLLiM) with a partially-latent response model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. Secondly, spatial constraints are introduced in the model through a Markov random field (MRF) prior which provides a spatial structure to the Gaussian-mixture hidden variables. Experiments conducted on a database composed of remotely sensed observations collected from the planet Mars by the Mars Express orbiter demonstrate the effectiveness of the proposed model.
Angeli, Nicole F; Lundgren, Ian F; Pollock, Clayton G; Hillis-Starr, Zandy M; Fitzgerald, Lee A
2018-03-01
Population size is widely used as a unit of ecological analysis, yet estimating population size requires accounting for observed and latent heterogeneity influencing the dispersion of individuals across landscapes. In newly established populations, such as when animals are translocated for conservation, dispersal and availability of resources influence patterns of abundance. We developed a process to estimate population size using N-mixture models and spatial models for newly established and dispersing populations. We used our approach to estimate the population size of critically endangered St. Croix ground lizards (Ameiva polops) five years after translocation of 57 individuals to Buck Island, an offshore island of St. Croix, United States Virgin Islands. Estimates of population size incorporated abiotic variables, dispersal limits, and the operative environmental temperature available to the lizards to account for low species detection. Operative environmental temperature and distance from the translocation site were always important in fitting the N-mixture model, indicating effects of dispersal and species biology on estimates of population size. We found that the population is increasing its range across the island by 5-10% every six months. We spatially interpolated site-specific abundance from the N-mixture model to the entire island and estimated 1,473 (95% CI, 940-1,802) St. Croix ground lizards on Buck Island in 2013, consistent with survey results. This represents a 26-fold increase since the translocation. We predicted the future dispersal of the lizards to all habitats on Buck Island, with the potential for the population to increase by another five times in the future. Incorporating biologically relevant covariates as explicit parameters in population models can improve predictions of population size and the future spread of species introduced to new localities. © 2018 by the Ecological Society of America.
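The core of an N-mixture model of the kind used above (without the study's covariates) is the marginal likelihood of repeated site counts, sketched here with simulated data: latent abundance is Poisson, detection is binomial, and the latent count is summed out up to a truncation bound.

```python
# Core likelihood of a basic N-mixture model (Royle-type): repeated counts
# y[i, j] at site i are Binomial(N_i, p) with latent abundance
# N_i ~ Poisson(lam); N_i is marginalized up to a truncation bound n_max.
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

rng = np.random.default_rng(5)
true_lam, true_p, n_sites, n_visits = 6.0, 0.3, 60, 4
N = rng.poisson(true_lam, n_sites)
y = rng.binomial(N[:, None], true_p, (n_sites, n_visits))

def neg_log_lik(theta, y, n_max=80):
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))  # keep params in range
    ns = np.arange(n_max + 1)
    prior = poisson.pmf(ns, lam)                        # P(N = n)
    ll = 0.0
    for yi in y:                                        # marginalize N per site
        lik_n = prior * np.prod(binom.pmf(yi[:, None], ns, p), axis=0)
        ll += np.log(lik_n.sum())
    return -ll

fit = minimize(neg_log_lik, x0=[np.log(3.0), 0.0], args=(y,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
print(f"lambda_hat={lam_hat:.2f} (true {true_lam}), p_hat={p_hat:.2f} (true {true_p})")
```

The published analysis additionally lets lambda and p depend on covariates such as operative temperature and distance from the translocation site; the sketch keeps them constant for brevity.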
Analysis of Spin Financial Market by GARCH Model
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2013-08-01
A spin model is used for simulations of financial markets. To determine return volatility in the spin financial market we use the GARCH model often used for volatility estimation in empirical finance. We apply the Bayesian inference performed by the Markov Chain Monte Carlo method to the parameter estimation of the GARCH model. It is found that volatility determined by the GARCH model exhibits "volatility clustering" also observed in the real financial markets. Using volatility determined by the GARCH model we examine the mixture-of-distribution hypothesis (MDH) suggested for the asset return dynamics. We find that the returns standardized by volatility are approximately standard normal random variables. Moreover we find that the absolute standardized returns show no significant autocorrelation. These findings are consistent with the view of the MDH for the return dynamics.
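For reference, a compact GARCH(1,1) maximum likelihood estimator of the sort used to extract volatility above can be written in a few lines; the return series here is simulated, and the parameter values are illustrative rather than taken from the paper.

```python
# GARCH(1,1) volatility estimation by maximum likelihood, as used to
# standardize returns when testing the mixture-of-distributions hypothesis.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
T, omega_t, alpha_t, beta_t = 3000, 0.05, 0.08, 0.90
r = np.empty(T)
h = omega_t / (1 - alpha_t - beta_t)          # start at stationary variance
for t in range(T):                            # simulate a GARCH(1,1) path
    r[t] = np.sqrt(h) * rng.standard_normal()
    h = omega_t + alpha_t * r[t] ** 2 + beta_t * h

def neg_log_lik(params, r):
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                         # enforce positivity/stationarity
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * h) + r ** 2 / h)

fit = minimize(neg_log_lik, x0=[0.1, 0.05, 0.8], args=(r,), method="Nelder-Mead")
omega, alpha, beta = fit.x
print(f"omega={omega:.3f}, alpha={alpha:.3f}, beta={beta:.3f}")
# Standardized returns r_t / sqrt(h_t) should look approximately N(0, 1),
# which is the MDH-style check described in the abstract.
```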
The Abelian Higgs model on Optical Lattice?
NASA Astrophysics Data System (ADS)
Meurice, Yannick; Tsai, Shan-Wen; Bazavov, Alexei; Zhang, Jin
2015-03-01
We study the Lattice Gauge Theory of the U(1)-Higgs model in 1+1 dimensions in the strongly coupled regime. We discuss the plaquette corrections to the effective theory where link variables are integrated out. We discuss matching with the second-order perturbation theory effective Hamiltonian for various Bose-Hubbard models. This correspondence can be exploited for building a lattice gauge theory simulator on optical lattices. We propose to implement the quantum rotors which appear in the Hamiltonian formulation using Bose mixtures or p-orbitals. Recent progress on magnetic effects in 2+1 dimensions will be discussed. Supported by the Army Research Office of the Department of Defense under Award Number W911NF-13-1-0119.
Evaluation of Student Performance through a Multidimensional Finite Mixture IRT Model.
Bacci, Silvia; Bartolucci, Francesco; Grilli, Leonardo; Rampichini, Carla
2017-01-01
In the Italian academic system, a student can enroll for an exam immediately after the end of the teaching period or can postpone it; in the second case the exam result is missing. We propose an approach for the evaluation of student performance throughout the course of study, accounting also for nonattempted exams. The approach is based on an item response theory model that includes two discrete latent variables representing student performance and priority in selecting the exams to take. We explicitly account for nonignorable missing observations, as the indicators of attempted exams also contribute to measuring performance (within-item multidimensionality). The model also allows for individual covariates in its structural part.
Surface complexation modeling of Cu(II) adsorption on mixtures of hydrous ferric oxide and kaolinite
Lund, Tracy J; Koretsky, Carla M; Landry, Christopher J; Schaller, Melinda S; Das, Soumya
2008-01-01
Background The application of surface complexation models (SCMs) to natural sediments and soils is hindered by a lack of consistent models and data for large suites of metals and minerals of interest. Furthermore, the surface complexation approach has mostly been developed and tested for single solid systems. Few studies have extended the SCM approach to systems containing multiple solids. Results Cu adsorption was measured on pure hydrous ferric oxide (HFO), pure kaolinite (from two sources) and in systems containing mixtures of HFO and kaolinite over a wide range of pH, ionic strength, sorbate/sorbent ratios and, for the mixed solid systems, using a range of kaolinite/HFO ratios. Cu adsorption data measured for the HFO and kaolinite systems was used to derive diffuse layer surface complexation models (DLMs) describing Cu adsorption. Cu adsorption on HFO is reasonably well described using a 1-site or 2-site DLM. Adsorption of Cu on kaolinite could be described using a simple 1-site DLM with formation of a monodentate Cu complex on a variable charge surface site. However, for consistency with models derived for weaker sorbing cations, a 2-site DLM with a variable charge and a permanent charge site was also developed. Conclusion Component additivity predictions of speciation in mixed mineral systems based on DLM parameters derived for the pure mineral systems were in good agreement with measured data. Discrepancies between the model predictions and measured data were similar to those observed for the calibrated pure mineral systems. The results suggest that quantifying specific interactions between HFO and kaolinite in speciation models may not be necessary. However, before the component additivity approach can be applied to natural sediments and soils, the effects of aging must be further studied and methods must be developed to estimate reactive surface areas of solid constituents in natural samples. PMID:18783619
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit it to the real data. Second, we present its application in risk analysis, where we use the model to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the returns distribution.
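Given fitted parameters of a two-component normal mixture, VaR and CVaR follow from the mixture CDF and a closed-form tail expectation, as sketched below with hypothetical weights, means, and standard deviations (the FBMKLCI estimates themselves are not reproduced).

```python
# VaR and CVaR of a two-component normal mixture of returns.
# Parameters are hypothetical; losses are the negative lower tail.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

w  = np.array([0.85, 0.15])       # component weights
mu = np.array([0.008, -0.020])    # component means (e.g. monthly returns)
sd = np.array([0.030, 0.090])     # component standard deviations

def mix_cdf(x):
    return np.sum(w * norm.cdf((x - mu) / sd))

alpha = 0.05
q = brentq(lambda x: mix_cdf(x) - alpha, -1.0, 1.0)   # lower alpha-quantile
var = -q

# CVaR: each normal component has tail expectation mu*Phi(z) - sd*phi(z),
# with z = (q - mu)/sd; the mixture tail mean is their weighted sum / alpha.
z = (q - mu) / sd
tail_mean = np.sum(w * (mu * norm.cdf(z) - sd * norm.pdf(z))) / alpha
cvar = -tail_mean
print(f"VaR(5%) = {var:.4f}, CVaR(5%) = {cvar:.4f}")
```

The heavy-tailed second component is what lets the mixture capture the leptokurtosis mentioned above: it inflates CVaR well beyond what a single normal with the same overall variance would give.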
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moges, Edom; Demissie, Yonas; Li, Hong-Yi
2016-04-01
In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied for two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are witnessed through diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad has a homogeneous catchment response, making the single model adequate to capture it.
Geophysics and Nanosciences: Nano to Micro to Meso to Macro Scale Swelling Soils
NASA Astrophysics Data System (ADS)
Cushman, J.
2003-04-01
We use statistical mechanical simulations of nanoporous materials to motivate a choice of independent constitutive variables for a multiscale mixture theory of swelling soils. A video will illustrate the structural behavior of fluids in nanopores when they are adsorbed from a bulk phase vapor to form capillaries on the nanoscale. These simulations suggest that when a swelling soil is very dry, the full strain tensor for the liquid phase should be included in the list of independent variables in any mixture theory. We use this information to develop a three-scale (micro, meso, macro) mixture theory for swelling soils. For a simplified case, we present the underlying multiscale field equations and constitutive theory, solve the resultant well posed system numerically, and present some graphical results for a drying and shrinking body.
[Mix 10] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2011-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 4] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2011-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 5] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2011-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 13] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2007-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 3] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2010-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 14] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2007-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 7] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2004-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 16] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2004-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 6] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2007-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 12] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2004-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 1] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2010-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 2] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2010-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 8] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2007-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 11] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2007-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 15] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2004-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
[Mix 9] HMACP mixture design : combined gradation.
DOT National Transportation Integrated Search
2007-01-01
Getting Started: Begin with the Combined Gradations or Summary sheet. Here you will select dependent variables such as specification year, mix type, asphalt content, combined aggregate, and others. These variables will affect the calculation of other ...
NASA Astrophysics Data System (ADS)
Fomin, P. A.
2018-03-01
Two-step approximate models of the chemical kinetics of detonation combustion of (i) a single hydrocarbon fuel CnHm (for example, methane, propane, or cyclohexane) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, or benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures whose reaction products contain carbon molecules. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. The models can be used to calculate the thermodynamic parameters of a mixture in a state of chemical equilibrium.
Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien
2012-01-01
Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powders over the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
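Two of the classical mixing rules compared above, Lichtenecker's logarithmic rule and the Maxwell Garnett formula for spherical inclusions, are short enough to sketch directly; the complex permittivity values below are illustrative only, not the paper's measurements.

```python
# Two classical effective-permittivity mixture rules for a dielectric host
# with inclusions of volume fraction f. Permittivity values are illustrative.
import numpy as np

def lichtenecker(eps_incl, eps_host, f):
    """Logarithmic mixing: ln(eps_eff) = f*ln(eps_i) + (1 - f)*ln(eps_h)."""
    return np.exp(f * np.log(eps_incl) + (1 - f) * np.log(eps_host))

def maxwell_garnett(eps_incl, eps_host, f):
    """Maxwell Garnett formula for spherical inclusions in a host matrix."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

eps_host = 2.6 - 0.02j     # e.g. a stearic-acid-like matrix (illustrative)
eps_incl = 30.0 - 15.0j    # lossy, metal-like inclusions (illustrative)
for f in (0.1, 0.3, 0.5):
    print(f"f={f}: Lichtenecker {lichtenecker(eps_incl, eps_host, f):.2f}, "
          f"Maxwell Garnett {maxwell_garnett(eps_incl, eps_host, f):.2f}")
```

Comparing the two rules over a range of volume fractions illustrates why the paper reports model-dependent errors: the rules diverge increasingly as the inclusion fraction grows.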
Discrete element modelling of bedload transport
NASA Astrophysics Data System (ADS)
Loyer, A.; Frey, P.
2011-12-01
Discrete element modelling (DEM) has been widely used in solid mechanics and in granular physics. In this type of modelling, each individual particle is taken into account and intergranular interactions are modelled with simple laws (e.g. Coulomb friction). Gravity and contact forces then determine the dynamical behaviour of the system. DEM is interesting for modelling configurations and accessing parameters not directly available in laboratory experimentation, hence the term "numerical experimentation" sometimes used to describe DEM. DEM was used to model bedload transport experiments performed at the particle scale with spherical glass beads in a steep and narrow flume. Bedload is the coarser material that is transported on the bed of stream channels; it has a great geomorphic impact. Physical processes ruling bedload transport and, more generally, coarse-particle/fluid systems are poorly known, arguably because granular interactions have been somewhat neglected. An existing DEM code (PFC3D) already computing granular interactions was used. We implemented basic hydrodynamic forces to model the fluid interactions (buoyancy, drag, lift). The idea was to use the minimum number of ingredients to match the experimental results. Experiments were performed with one-size and two-size mixtures of coarse spherical glass beads entrained by a shallow turbulent and supercritical water flow down a steep channel with a mobile bed. The particle diameters were 4 and 6 mm, the channel width 6.5 mm (about the same width as the coarser particles) and the channel inclination was typically 10%. The water flow rate and the particle rate were kept constant at the upstream entrance and adjusted to obtain bedload transport equilibrium. Flows were filmed from the side by a high-speed camera. Image processing algorithms made it possible to determine the position, velocity and trajectory of both smaller and coarser particles. Modelled and experimental particle velocity and concentration depth profiles were compared in the case of the one-size mixture. The turbulent fluid velocity profile was prescribed and attached to the variable upper bedline. Provided the upper bedline was calculated with a refined space and time resolution, a fair agreement between DEM and experiments was reached. Experiments with two-size mixtures were designed to study vertical grain-size sorting or segregation patterns. Sorting is arguably the reason why the predictive capacity of bedload formulations remains so poor. Modelling of the two-size mixture was also performed and gave promising qualitative results.
Generation of a mixture model ground-motion prediction equation for Northern Chile
NASA Astrophysics Data System (ADS)
Haendel, A.; Kuehn, N. M.; Scherbaum, F.
2012-12-01
In probabilistic seismic hazard analysis (PSHA), empirically derived ground motion prediction equations (GMPEs) are usually applied to estimate the ground motion at a site of interest as a function of source-, path- and site-related predictor variables. Because GMPEs are derived from limited datasets, they are not expected to give entirely accurate estimates or to reflect the whole range of possible future ground motion, thus giving rise to epistemic uncertainty in the hazard estimates. This is especially true for regions without an indigenous GMPE, where foreign models have to be applied. The choice of appropriate GMPEs can then dominate the overall uncertainty in hazard assessments. In order to quantify this uncertainty, the set of ground motion models used in a modern PSHA has to capture (in SSHAC language) the center, body, and range of the possible ground motion at the site of interest. This was traditionally done within a logic tree framework in which existing (or only slightly modified) GMPEs occupy the branches of the tree and the branch weights describe the degree of belief of the analyst in their applicability. This approach invites the problem of combining GMPEs of very different quality and hence of potentially overestimating epistemic uncertainty. Some recent hazard analyses have therefore resorted to using a small number of high-quality GMPEs as backbone models, from which the full distribution of GMPEs for the logic tree (to capture the full range of possible ground motion uncertainty) is subsequently generated by scaling (in a general sense). In the present study, a new approach is proposed to determine an optimized backbone model as weighted components of a mixture model. In doing so, each GMPE is assumed to reflect the generation mechanism (e.g. in terms of stress drop, propagation properties, etc.) for at least a fraction of possible ground motions in the area of interest. The combination of different models into a mixture model, which is learned from observed ground motion data in the region of interest, then transfers information from other regions to the target region in a data-driven way. The backbone model is learned by comparing the model predictions to observations from the target region: for each observation and each model, the likelihood of the observation given a certain GMPE is calculated. Mixture weights can then be assigned using the expectation-maximization (EM) algorithm or Bayesian inference. The new method is used to generate a backbone reference model for Northern Chile, an area for which no dedicated GMPE exists. Strong motion recordings from the target area are used to learn the backbone model from a set of 10 GMPEs developed for different subduction zones of the world. Mixture models are formed separately for interface and intraslab events. The ability of the resulting backbone models to describe ground motions in Northern Chile is then compared to the predictive performance of their constituent models.
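The EM update for the mixture weights is particularly simple in this setting because the component GMPEs are fixed and only the weights are learned; a sketch with a synthetic likelihood matrix follows (observation and model counts are placeholders, not the study's dataset).

```python
# EM for the mixture weights of a backbone model built from fixed GMPEs.
# L[n, k] holds the likelihood of observation n under candidate GMPE k;
# only the weights w are learned. Values below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_models = 500, 10
comp = rng.integers(0, n_models, n_obs)   # which model "explains" each record
L = rng.random((n_obs, n_models)) * 0.2
L[np.arange(n_obs), comp] += 1.0          # the explaining model gets high likelihood

w = np.full(n_models, 1.0 / n_models)     # start from uniform weights
for _ in range(200):
    resp = w * L                          # E-step: unnormalized responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    w = resp.mean(axis=0)                 # M-step: new mixture weights

print("learned weights:", w.round(3))
```

Because the component densities never change, each iteration reduces to one responsibility normalization and one averaging step, so the procedure converges quickly even for large strong-motion datasets.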
NASA Astrophysics Data System (ADS)
Juesas, P.; Ramasso, E.
2016-12-01
Condition monitoring aims at ensuring system safety, which is a fundamental requirement for industrial applications and has become an inescapable social demand. This objective is attained by instrumenting the system and developing data analytics methods, such as statistical models, able to turn data into relevant knowledge. One difficulty is to correctly estimate the parameters of those methods from time-series data. This paper suggests the use of the Weighted Distribution Theory together with the Expectation-Maximization algorithm to improve parameter estimation in statistical models with latent variables, with an application to health monitoring under uncertainty. The improvement of estimates is made possible by incorporating uncertain and possibly noisy prior knowledge on latent variables in a sound manner. The latent variables are exploited to build a degradation model of a dynamical system represented as a sequence of discrete states. Examples on Gaussian Mixture Models and Hidden Markov Models (HMM) with discrete and continuous outputs are presented on both simulated data and benchmarks using the turbofan engine datasets. A focus on the application of a discrete HMM to health monitoring under uncertainty emphasizes the interest of the proposed approach in the presence of different operating conditions and fault modes. It is shown that the proposed model exhibits high robustness in the presence of noisy and uncertain priors.
Scale Mixture Models with Applications to Bayesian Inference
NASA Astrophysics Data System (ADS)
Qin, Zhaohui S.; Damien, Paul; Walker, Stephen
2003-11-01
Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J
2010-09-17
In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we developed a QSPR model of an excess thermodynamic property of binary mixtures, i.e. excess molar volume (V(E)). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V(E)), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri
2017-12-01
Bayesian mixture modeling requires, as one of its stages, identification of the most appropriate number of mixture components, so that the resulting mixture model fits the data through a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) is a combination of the reversible jump (RJ) concept and the Markov Chain Monte Carlo (MCMC) concept, used by some researchers to solve the problem of identifying the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the concepts of birth/death and split-merge with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm needs to be developed according to the case under observation. The purpose of this study is to assess the performance of a developed RJMCMC algorithm in identifying the number of mixture components, when this number is not known with certainty, in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model, where the number of mixture components for the Indonesian microarray data is not known in advance.
Response properties in the adsorption-desorption model on a triangular lattice
NASA Astrophysics Data System (ADS)
Šćepanović, J. R.; Stojiljković, D.; Jakšić, Z. M.; Budinski-Petković, Lj.; Vrhovac, S. B.
2016-06-01
The out-of-equilibrium dynamical processes during the reversible random sequential adsorption (RSA) of objects of various shapes on a two-dimensional triangular lattice are studied numerically by means of Monte Carlo simulations. We focus on the influence of the order of the symmetry axis of the shape on the response of the reversible RSA model to sudden perturbations of the desorption probability Pd. We provide a detailed discussion of the significance of collective events for governing the time coverage behavior of shapes with different rotational symmetries. We calculate the two-time density-density correlation function C(t, tw) for various waiting times tw and show that longer memory of the initial state persists for the more symmetrical shapes. Our model displays nonequilibrium dynamical effects such as aging. We find that the correlation function C(t, tw) for all objects scales as a function of the single variable ln(tw)/ln(t). We also study the short-term memory effects in two-component mixtures of extended objects and give a detailed analysis of the contribution to the densification kinetics coming from each mixture component. We observe a weakening of correlation features for the deposition processes in multicomponent systems.
Yamaura, Yuichi; Kery, Marc; Royle, Andy
2016-01-01
Community N-mixture abundance models for replicated counts provide a powerful and novel framework for drawing inferences related to species abundance within communities subject to imperfect detection. To assess the performance of these models, and to compare them to related community occupancy models in situations with marginal information, we used simulation to examine the effects of mean abundance (λ̄: 0.1, 0.5, 1, 5), detection probability (p̄: 0.1, 0.2, 0.5), and number of sampling sites (n_site: 10, 20, 40) and visits (n_visit: 2, 3, 4) on the bias and precision of species-level parameters (mean abundance and covariate effect) and a community-level parameter (species richness). Bias and imprecision of estimates decreased when any of the four variables (λ̄, p̄, n_site, n_visit) increased. Detection probability p̄ was most important for the estimates of mean abundance, while λ̄ was most influential for covariate effect and species richness estimates. For all parameters, increasing n_site was more beneficial than increasing n_visit. Minimal conditions for obtaining adequate performance of community abundance models were n_site ≥ 20, p̄ ≥ 0.2, and λ̄ ≥ 0.5. At lower abundance, the performance of community abundance and community occupancy models as species richness estimators was comparable. We then used additive partitioning analysis to show that raw species counts can overestimate β diversity both for species richness and for the Shannon index, while community abundance models yielded better estimates. Community N-mixture abundance models thus have great potential for use in community ecology and conservation applications, provided that replicated counts are available.
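For readers unfamiliar with N-mixture models, the site-level likelihood marginalizes the latent abundance N out of the repeated counts. A minimal sketch, assuming Poisson abundance and binomial detection with a truncated sum over N; all parameter values are hypothetical:

```python
import numpy as np
from scipy.stats import poisson, binom

def nmix_site_loglik(counts, lam, p, n_max=200):
    """Log-likelihood of repeated counts at one site under an N-mixture
    model: N ~ Poisson(lam), y_j | N ~ Binomial(N, p).  The latent
    abundance N is marginalized by a truncated summation up to n_max."""
    n_vals = np.arange(max(counts), n_max + 1)
    log_prior = poisson.logpmf(n_vals, lam)
    # conditional log-likelihood of all visits given each candidate N
    log_cond = sum(binom.logpmf(y, n_vals, p) for y in counts)
    # log-sum-exp over the latent abundance for numerical stability
    terms = log_prior + log_cond
    m = np.max(terms)
    return m + np.log(np.sum(np.exp(terms - m)))

# example: three visits to one site, hypothetical lambda and p
print(nmix_site_loglik([2, 1, 3], lam=5.0, p=0.4))
```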
Johnson, B. Thomas
1989-01-01
Traditional single-species toxicity tests and multiple-component laboratory-scale microcosm assays were combined to assess the toxicological hazard of diesel oil, a model complex mixture, to a model aquatic environment. The immediate impact of diesel oil dosed on a freshwater community was studied in a model pond microcosm over 14 days: a 7-day dosage and a 7-day recovery period. A multicomponent laboratory microcosm was designed to monitor the biological effects of diesel oil (1·0 mg litre−1) on four components: water, sediment (soil + microbiota), plants (aquatic macrophytes and algae), and animals (zooplanktonic and zoobenthic invertebrates). To determine the sensitivity of each part of the community to diesel oil contamination and how this model community recovered when the oil dissipated, limnological, toxicological, and microbiological variables were considered. Our model revealed these significant occurrences during the spill period: first, a community production and respiration perturbation, characterized in the water column by a decrease in dissolved oxygen and redox potential and a concomitant increase in alkalinity and conductivity; second, marked changes in the sediment microbiota that included bacterial heterotrophic dominance and a high heterotrophic index (0·6), increased bacterial productivity, and marked increases in the numbers of saprophytic bacteria (10×) and bacterial oil degraders (1000×); and third, column water that was acutely toxic (100% mortality) to two model taxa: Selenastrum capricornutum and Daphnia magna. Following the simulated clean-up procedure to remove the oil slick, the recovery period of this freshwater microcosm was characterized by a return to control values. This experimental design emphasized monitoring toxicological responses in an aquatic microcosm; hence, we proposed the term 'toxicosm' to describe this approach to aquatic toxicological hazard evaluation. The value of the toxicosm as a toxicological tool for screening aquatic contaminants was demonstrated using diesel oil as a model complex mixture.
QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.
Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng
2018-05-01
Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and the mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model, with a coefficient of determination of 0.9366 and a root mean square error of 0.1345, predicted the 45 mixture toxicities, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.
Evaluating Mixture Modeling for Clustering: Recommendations and Cautions
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2011-01-01
This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…
Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution
NASA Astrophysics Data System (ADS)
Baldacchino, Tara; Worden, Keith; Rowson, Jennifer
2017-02-01
A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.
Dynamic control of a homogeneous charge compression ignition engine
Duffy, Kevin P [Metamora, IL; Mehresh, Parag [Peoria, IL; Schuh, David [Peoria, IL; Kieser, Andrew J [Morton, IL; Hergart, Carl-Anders [Peoria, IL; Hardy, William L [Peoria, IL; Rodman, Anthony [Chillicothe, IL; Liechty, Michael P [Chillicothe, IL
2008-06-03
A homogeneous charge compression ignition engine is operated by compressing a charge mixture of air, exhaust and fuel in a combustion chamber to an autoignition condition of the fuel. The engine may facilitate a transition from a first combination of speed and load to a second combination of speed and load by changing the charge mixture and compression ratio. This may be accomplished in a consecutive engine cycle by adjusting both a fuel injector control signal and a variable valve control signal away from a nominal variable valve control signal. Thereafter, in one or more subsequent engine cycles, more sluggish adjustments are made to at least one of a geometric compression ratio control signal and an exhaust gas recirculation control signal to allow the variable valve control signal to be readjusted back toward its nominal variable valve control signal setting. By readjusting the variable valve control signal back toward its nominal setting, the engine will be ready for another transition to a new combination of engine speed and load.
Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C
2017-01-01
Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, in the present study we developed a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7-day) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; dissolved organic carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmental (i.e. realistic) metal concentration ratios. Statistical analysis of mixture effects revealed that the observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) with less than 20% error in 85% of the treatments. Moreover, the MMBM predicted chronic toxicity of the ternary Ni-Zn-Pb mixture at least as accurately as the toxicity of the individual metal treatments (RMSE: mixture = 16; Zn only = 18; Ni only = 17; Pb only = 23). Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on metal mixture toxicity during chronic exposure and could be used in metal risk assessment frameworks. Copyright © 2016 Elsevier Ltd. All rights reserved.
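The two reference models compared above have simple closed forms. As a hedged sketch with illustrative numbers only (not the study's calibrated MMBM): independent action combines fractional effects multiplicatively, while concentration addition combines toxic units.

```python
import numpy as np

def ia_effect(effects):
    """Independent action (IA): fractional mixture effect from the
    fractional effects of the components, E = 1 - prod(1 - E_i)."""
    effects = np.asarray(effects, dtype=float)
    return 1.0 - np.prod(1.0 - effects)

def ca_ec50(fractions, ec50s):
    """Concentration addition (CA): EC50 of a mixture with concentration
    fractions f_i and single-substance EC50_i values,
    EC50_mix = 1 / sum_i(f_i / EC50_i)."""
    f = np.asarray(fractions, dtype=float)
    e = np.asarray(ec50s, dtype=float)
    return 1.0 / np.sum(f / e)

# illustrative three-metal example (hypothetical values, arbitrary units)
print(ia_effect([0.20, 0.30, 0.10]))
print(ca_ec50([0.5, 0.3, 0.2], [1.2, 0.8, 2.5]))
```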
NASA Astrophysics Data System (ADS)
Schölzel, C.; Friederichs, P.
2008-10-01
Probability distributions of multivariate random variables are generally more complex than their univariate counterparts, owing to possible nonlinear dependence between the random variables. One approach to this problem is the use of copulas, which have become popular over recent years, especially in fields like econometrics, finance, risk management, and insurance. Since this newly emerging field includes various practices, a controversial discussion, and a vast literature, it is difficult to get an overview. The aim of this paper is therefore to provide a brief overview of copulas for application in meteorology and climate research. We examine the advantages and disadvantages compared to alternative approaches such as mixture models, summarize the current problem of goodness-of-fit (GOF) tests for copulas, and discuss the connection with multivariate extremes. An application to station data shows the simplicity and the capabilities, as well as the limitations, of this approach. Observations of daily precipitation and temperature are fitted to a bivariate model, demonstrating that copulas are a valuable complement to the commonly used methods.
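As a brief illustration of the copula idea discussed above, the sketch below samples a bivariate model from a Gaussian copula with arbitrary margins. The gamma and normal margins standing in for precipitation and temperature are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np
from scipy import stats

def sample_gaussian_copula(rho, n, marginal_ppfs, seed=0):
    """Draw n samples from a bivariate model built from a Gaussian copula
    with correlation rho and arbitrary marginal quantile functions."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    u = stats.norm.cdf(z)                     # uniform margins on [0, 1]
    return np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(marginal_ppfs)])

# hypothetical margins: gamma-like precipitation, normal temperature
samples = sample_gaussian_copula(
    rho=0.4, n=1000,
    marginal_ppfs=[stats.gamma(a=2.0, scale=3.0).ppf,
                   stats.norm(loc=10.0, scale=4.0).ppf])
print(samples.mean(axis=0))
```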
Pdf modeling for premixed turbulent combustion based on the properties of iso-concentration surfaces
NASA Technical Reports Server (NTRS)
Vervisch, L.; Kollmann, W.; Bray, K. N. C.; Mantel, T.
1994-01-01
In premixed turbulent flames the presence of intense mixing zones located in front of and behind the flame surface leads to a requirement to study the behavior of iso-concentration surfaces defined for all values of the progress variable (equal to unity in burnt gases and to zero in fresh mixtures). To support this study, some theoretical and mathematical tools devoted to level surfaces are first developed. Then a database of direct numerical simulations of turbulent premixed flames is generated and used to investigate the internal structure of the flame brush, and a new pdf model based on the properties of iso-surfaces is proposed.
Accounting for Heaping in Retrospectively Reported Event Data – A Mixture-Model Approach
Bar, Haim Y.; Lillard, Dean R.
2012-01-01
When event data are retrospectively reported, more temporally distal events tend to get “heaped” on even multiples of reporting units. Heaping may introduce a type of attenuation bias because it causes researchers to mismatch time-varying right-hand side variables. We develop a model-based approach to estimate the extent of heaping in the data, and how it affects regression parameter estimates. We use smoking cessation data as a motivating example, but our method is general. It facilitates the use of retrospective data from the multitude of cross-sectional and longitudinal studies worldwide that collect and potentially could collect event data. PMID:22733577
Saldaña, Erick; Siche, Raúl; da Silva Pinto, Jair Sebastião; de Almeida, Marcio Aurélio; Selani, Miriam Mabel; Rios-Mera, Juan; Contreras-Castillo, Carmen J
2018-02-01
This study aims to simultaneously optimize the lipid profile and instrumental hardness of low-fat mortadella. For lipid mixture optimization, the overlapping of surface boundaries was used to select the quantities of canola, olive, and fish oils in order to maximize PUFAs, specifically the long-chain n-3 fatty acids (eicosapentaenoic acid, EPA, and docosahexaenoic acid, DHA), using the minimum content of fish oil. Increased quantities of canola oil were associated with higher PUFA/SFA ratios. The presence of fish oil, even in small amounts, was effective in improving the nutritional quality of the mixture, showing lower n-6/n-3 ratios and significant levels of EPA and DHA. Thus, the optimal lipid mixture was composed of 20, 30 and 50% fish, olive and canola oils, respectively, which presents PUFA/SFA (2.28) and n-6/n-3 (2.30) ratios within the recommendations of a healthy diet. Once the lipid mixture was optimized, the components of the pre-emulsion used as fat replacer in the mortadella, namely the lipid mixture (LM), sodium alginate (SA), and milk protein concentrate (PC), were studied to optimize hardness and springiness to target ranges of 13-16 N and 0.86-0.87, respectively. Results showed that springiness was not significantly affected by these variables. However, as the concentration of the three components increased, hardness decreased. Through the desirability function, the optimal proportions were found to be 30% LM, 0.5% SA, and 0.5% PC. This study showed that the pre-emulsion decreases the hardness of mortadella. In addition, response surface methodology was efficient for modeling the lipid mixture and hardness, resulting in a product with improved texture and lipid quality.
NASA Astrophysics Data System (ADS)
Saghafian, Amirreza; Pitsch, Heinz
2012-11-01
A compressible flamelet/progress variable approach (CFPV) has been devised for high-speed flows. Temperature is computed from the transported total energy and tabulated species mass fractions and the source term of the progress variable is rescaled with pressure and temperature. The combustion is thus modeled by three additional scalar equations and a chemistry table that is computed in a pre-processing step. Three-dimensional direct numerical simulation (DNS) databases of reacting supersonic turbulent mixing layer with detailed chemistry are analyzed to assess the underlying assumptions of CFPV. Large eddy simulations (LES) of the same configuration using the CFPV method have been performed and compared with the DNS results. The LES computations are based on the presumed subgrid PDFs of mixture fraction and progress variable, beta function and delta function respectively, which are assessed using DNS databases. The flamelet equation budget is also computed to verify the validity of CFPV method for high-speed flows.
NASA Astrophysics Data System (ADS)
Lei, Ting; Zuend, Andreas; Cheng, Yafang; Su, Hang; Wang, Weigang; Ge, Maofa
2018-01-01
Hygroscopic growth factors of organic surrogate compounds representing biomass burning and mixed organic-inorganic aerosol particles exhibit variability during dehydration experiments depending on their chemical composition, which we observed using a hygroscopicity tandem differential mobility analyzer (HTDMA). We observed that levoglucosan and humic acid aerosol particles release water upon dehumidification in the range from 90 to 5 % relative humidity (RH). However, 4-hydroxybenzoic acid aerosol particles remain in the solid state upon dehumidification and exhibit a slight shrinkage in size at higher RH compared to the dry size. For example, the measured growth factor of 4-hydroxybenzoic acid aerosol particles is ~0.96 at 90 % RH. The measurements were accompanied by RH-dependent thermodynamic equilibrium calculations using the Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients (AIOMFAC) model and the Extended Aerosol Inorganics Model (E-AIM), the Zdanovskii-Stokes-Robinson (ZSR) relation, and a fitted hygroscopicity expression. We observed several effects of organic components on the hygroscopicity behavior of mixtures containing ammonium sulfate (AS), in relation to the different mass fractions of organic compounds: (1) a shift of the efflorescence relative humidity (ERH) of ammonium sulfate to higher RH due to the presence of 25 wt % levoglucosan in the mixture; (2) a distinct efflorescence transition at 25 % RH for mixtures consisting of 25 wt % 4-hydroxybenzoic acid, compared to the ERH at 35 % for organic-free AS particles; (3) an indication of a liquid-to-solid phase transition of 4-hydroxybenzoic acid in the mixed particles during dehydration; and (4) no significant effect of the humic acid component on the efflorescence of AS in mixed aerosol particles. In addition, consideration of a composition-dependent degree of dissolution of crystalline AS (solid-liquid equilibrium) in the AIOMFAC and E-AIM models leads to relatively good agreement between the models and the observed growth factors, as well as the ERH of AS in the mixed system. The use of the ZSR relation leads to good agreement with measured diameter growth factors of aerosol particles containing humic acid and ammonium sulfate. Lastly, two distinct mixtures of organic surrogate compounds, including levoglucosan, 4-hydroxybenzoic acid, and humic acid, were used to represent the average water-soluble organic carbon (WSOC) fractions observed during the wet and dry seasons in the central Amazon Basin. A comparison of the organic fraction's hygroscopicity parameter for the simple mixtures, e.g., κ ≈ 0.12 to 0.15 for the wet-season mixture in the 90 to 40 % RH range, shows good agreement with field data for the wet season in the Amazon Basin (WSOC κ ≈ 0.14±0.06 at 90 % RH). This suggests that laboratory-generated mixtures containing organic surrogate compounds and ammonium sulfate can be used to mimic, in a simplified manner, the chemical composition of ambient aerosols from the Amazon Basin for the purpose of RH-dependent hygroscopicity studies.
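The ZSR relation used above takes a particularly simple form for diameter growth factors: water uptakes of the components add by dry volume fraction. A minimal sketch, with hypothetical component growth factors loosely standing in for ammonium sulfate and levoglucosan:

```python
import numpy as np

def zsr_growth_factor(volume_fractions, growth_factors):
    """ZSR-type mixing rule for the hygroscopic diameter growth factor of
    a mixed particle: gf_mix = (sum_i eps_i * gf_i**3)**(1/3), where
    eps_i is the dry volume fraction of component i at a given RH."""
    eps = np.asarray(volume_fractions, dtype=float)
    gf = np.asarray(growth_factors, dtype=float)
    return float(np.sum(eps * gf**3) ** (1.0 / 3.0))

# hypothetical 90 % RH example: 75 % inorganic salt, 25 % organic
print(zsr_growth_factor([0.75, 0.25], [1.7, 1.4]))
```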
Stability of faults with heterogeneous friction properties and effective normal stress
NASA Astrophysics Data System (ADS)
Luo, Yingdi; Ampuero, Jean-Paul
2018-05-01
Abundant geological, seismological and experimental evidence of the heterogeneous structure of natural faults motivates the theoretical and computational study of the mechanical behavior of heterogeneous frictional fault interfaces. Fault zones are composed of a mixture of materials with contrasting strength, which may affect the spatial variability of seismic coupling, the location of high-frequency radiation and the diversity of slip behavior observed in natural faults. To develop a quantitative understanding of the effect of strength heterogeneity on the mechanical behavior of faults, here we investigate a fault model with spatially variable frictional properties and pore pressure. Conceptually, this model may correspond to two rough surfaces in contact along discrete asperities, the space in between being filled by compressed gouge. The asperities have different permeability than the gouge matrix and may be hydraulically sealed, resulting in different pore pressure. We consider faults governed by rate-and-state friction, with mixtures of velocity-weakening and velocity-strengthening materials and contrasts of effective normal stress. We systematically study the diversity of slip behaviors generated by this model through multi-cycle simulations and linear stability analysis. The fault can be either stable without spontaneous slip transients, or unstable with spontaneous rupture. When the fault is unstable, slip can rupture either part or the entire fault. In some cases the fault alternates between these behaviors throughout multiple cycles. We determine how the fault behavior is controlled by the proportion of velocity-weakening and velocity-strengthening materials, their relative strength and other frictional properties. We also develop, through heuristic approximations, closed-form equations to predict the stability of slip on heterogeneous faults. Our study shows that a fault model with heterogeneous materials and pore pressure contrasts is a viable framework to reproduce the full spectrum of fault behaviors observed in natural faults: from fast earthquakes, to slow transients, to stable sliding. In particular, this model constitutes a building block for models of episodic tremor and slow slip events.
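For context, the rate-and-state framework mentioned above is commonly written as μ = μ0 + a ln(V/V0) + b ln(V0 θ/Dc), with instability of steady sliding requiring velocity weakening (b > a) and a loading stiffness below k_c = (b − a)σ/Dc. The sketch below encodes these textbook relations with hypothetical parameter values; it is not the authors' multi-cycle simulation code or their heterogeneous-fault stability criterion.

```python
import numpy as np

def rsf_friction(v, theta, mu0=0.6, a=0.008, b=0.012, v0=1e-6, dc=1e-4):
    """Rate-and-state friction coefficient:
    mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc)."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)

def critical_stiffness(sigma_eff, a=0.008, b=0.012, dc=1e-4):
    """Spring-block stability threshold: steady sliding is unstable when
    the loading stiffness k < k_c = (b - a) * sigma_eff / dc, which
    requires velocity weakening (b > a); returns 0 otherwise."""
    return max(b - a, 0.0) * sigma_eff / dc

# hypothetical effective normal stress of 50 MPa; result in Pa/m
print(critical_stiffness(sigma_eff=50e6))
```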
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nie, Xiaomeng; Guo, Yongquan
2016-01-15
The structures and the optical and electric properties of europium-doped CuIn{sub 1−x}Eu{sub x}Te{sub 2} have been studied systematically using powder X-ray diffraction (XRD), scanning electron microscopy (SEM) with energy dispersive spectroscopy (EDS), ultraviolet-visible spectrophotometry (UV-vis), and the standard four-probe method. The studies reveal that minor europium doping into CuIn{sub 1−x}Eu{sub x}Te{sub 2} can still stabilize the chalcopyrite structure in a solid solution up to x=0.1. The lattice parameters increase with increasing europium content in CuIn{sub 1−x}Eu{sub x}Te{sub 2} due to the size effect at the In site. The structural refinement confirms that Eu partly substitutes for In and occupies the 4b crystal position. SEM morphologies show that europium doping into CuIn{sub 1−x}Eu{sub x}Te{sub 2} can refine the grains from a largely agglomerated state to a uniformly separated state. The electrical resistivities of single-phase CuIn{sub 1−x}Eu{sub x}Te{sub 2} follow a mixture model of hopping conductivity and variable range hopping conductivity. The absorption band gaps of CuIn{sub 1−x}Eu{sub x}Te{sub 2} at room temperature tend to increase with increasing Eu content. CuIn{sub 1−x}Eu{sub x}Te{sub 2} might be a good candidate for photovoltaic cells. - Graphical abstract: CuIn{sub 0.9}Eu{sub 0.1}Te{sub 2} follows a mixture of hopping conductivity and variable range hopping conductivity mechanisms. - Highlights: • Novel europium-doped CuIn{sub 1−x}Eu{sub x}Te{sub 2}. • Potential application for devices and solar cells. • A mixture of hopping and variable range hopping conductivity mechanisms.
NASA Astrophysics Data System (ADS)
Hadley, Brian Christopher
This dissertation assessed remotely sensed data and geospatial modeling technique(s) to map the spatial distribution of total above-ground biomass present on the surface of the Savannah River National Laboratory's (SRNL) Mixed Waste Management Facility (MWMF) hazardous waste landfill. Ordinary least squares (OLS) regression, regression kriging, and tree-structured regression were employed to model the empirical relationship between in-situ measured Bahia (Paspalum notatum Flugge) and Centipede [Eremochloa ophiuroides (Munro) Hack.] grass biomass against an assortment of explanatory variables extracted from fine spatial resolution passive optical and LIDAR remotely sensed data. Explanatory variables included: (1) discrete channels of visible, near-infrared (NIR), and short-wave infrared (SWIR) reflectance, (2) spectral vegetation indices (SVI), (3) spectral mixture analysis (SMA) modeled fractions, (4) narrow-band derivative-based vegetation indices, and (5) LIDAR derived topographic variables (i.e. elevation, slope, and aspect). Results showed that a linear combination of the first- (1DZ_DGVI), second- (2DZ_DGVI), and third-derivative of green vegetation indices (3DZ_DGVI) calculated from hyperspectral data recorded over the 400--960 nm wavelengths of the electromagnetic spectrum explained the largest percentage of statistical variation (R2 = 0.5184) in the total above-ground biomass measurements. In general, the topographic variables did not correlate well with the MWMF biomass data, accounting for less than five percent of the statistical variation. It was concluded that tree-structured regression represented the optimum geospatial modeling technique due to a combination of model performance and efficiency/flexibility factors.
Rasch Mixture Models for DIF Detection
Strobl, Carolin; Zeileis, Achim
2014-01-01
Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a new score distribution for the Rasch mixture model is introduced here. It ensures the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study, and its application is illustrated in a study of verbal aggression. PMID:29795819
Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data
ERIC Educational Resources Information Center
Kim, Su-Young; Kim, Jee-Seon
2012-01-01
This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…
Flame-conditioned turbulence modeling for reacting flows
NASA Astrophysics Data System (ADS)
Macart, Jonathan F.; Mueller, Michael E.
2017-11-01
Conventional approaches to turbulence modeling in reacting flows rely on unconditional averaging or filtering, that is, consideration of the momentum equations only in physical space, implicitly assuming that the flame only weakly affects the turbulence, aside from a variation in density. Conversely, for scalars, which are strongly coupled to the flame structure, their evolution equations are often projected onto a reduced-order manifold, that is, conditionally averaged or filtered, on a flame variable such as a mixture fraction or progress variable. Such approaches include Conditional Moment Closure (CMC) and related variants. However, recent observations from Direct Numerical Simulation (DNS) have indicated that the flame can strongly affect turbulence in premixed combustion at low Karlovitz number. In this work, a new approach to turbulence modeling for reacting flows is investigated in which conditionally averaged or filtered equations are evolved for the momentum. The conditionally-averaged equations for the velocity and its covariances are derived, and budgets are evaluated from DNS databases of turbulent premixed planar jet flames. The most important terms in these equations are identified, and preliminary closure models are proposed.
Mixture Modeling: Applications in Educational Psychology
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Hodis, Flaviu A.
2016-01-01
Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…
Dynamic modeling the composting process of the mixture of poultry manure and wheat straw.
Petric, Ivan; Mustafić, Nesib
2015-09-15
Due to a lack of understanding of the complex nature of the composting process, there is a need for a tool that can help improve the prediction of the process performance as well as its optimization. Therefore, the main objective of this study is to develop a comprehensive mathematical model of the composting process based on microbial kinetics. The model incorporates two different microbial populations that metabolize the organic matter in two different substrates. The model was validated by comparing model predictions with experimental data obtained from the composting of a mixture of poultry manure and wheat straw. Comparison of simulation results and experimental data for five dynamic state variables (organic matter conversion, oxygen concentration, carbon dioxide concentration, substrate temperature and moisture content) showed that the model gives very good predictions of the process performance. According to the simulation results, the optimum values for the air flow rate and the ambient air temperature are 0.43 l min(-1) kg(-1) OM and 28 °C, respectively. On the basis of a sensitivity analysis, the maximum organic matter conversion is the most sensitive among the three objective functions. Among the twelve examined parameters, μmax,1 is the most influential parameter and X1 is the least influential. Copyright © 2015 Elsevier Ltd. All rights reserved.
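The paper's full model is considerably richer (temperature, moisture and oxygen balances), but the core idea of two microbial populations metabolizing two substrates can be sketched with Monod kinetics. All rate constants below are hypothetical placeholders, not the calibrated values from the study:

```python
from scipy.integrate import solve_ivp

def composting_rhs(t, y, mu_max=(0.3, 0.2), ks=(20.0, 40.0),
                   yield_=0.4, kd=0.02):
    """Hypothetical two-population Monod kinetics: each biomass X_i grows
    on its own substrate S_i and decays at a first-order rate kd."""
    s1, s2, x1, x2 = y
    mu1 = mu_max[0] * s1 / (ks[0] + s1)
    mu2 = mu_max[1] * s2 / (ks[1] + s2)
    return [-mu1 * x1 / yield_,          # substrate 1 consumption
            -mu2 * x2 / yield_,          # substrate 2 consumption
            (mu1 - kd) * x1,             # growth/decay of population 1
            (mu2 - kd) * x2]             # growth/decay of population 2

# integrate over 240 h from a hypothetical initial state [S1, S2, X1, X2]
sol = solve_ivp(composting_rhs, (0.0, 240.0), [100.0, 150.0, 1.0, 1.0])
print(sol.y[:, -1])
```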
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larkin, Andrew; Department of Statistics, Oregon State University; Superfund Research Center, Oregon State University
2013-03-01
Polycyclic aromatic hydrocarbons (PAHs) are present in the environment as complex mixtures with components that have diverse carcinogenic potencies and mostly unknown interactive effects. Non-additive PAH interactions have been observed in regulation of cytochrome P450 (CYP) gene expression in the CYP1 family. To better understand and predict biological effects of complex mixtures, such as environmental PAHs, an 11 gene input-1 gene output fuzzy neural network (FNN) was developed for predicting PAH-mediated perturbations of dermal Cyp1b1 transcription in mice. Input values were generalized using fuzzy logic into low, medium, and high fuzzy subsets, and sorted using k-means clustering to create Mamdanimore » logic functions for predicting Cyp1b1 mRNA expression. Model testing was performed with data from microarray analysis of skin samples from FVB/N mice treated with toluene (vehicle control), dibenzo[def,p]chrysene (DBC), benzo[a]pyrene (BaP), or 1 of 3 combinations of diesel particulate extract (DPE), coal tar extract (CTE) and cigarette smoke condensate (CSC) using leave-one-out cross-validation. Predictions were within 1 log{sub 2} fold change unit of microarray data, with the exception of the DBC treatment group, where the unexpected down-regulation of Cyp1b1 expression was predicted but did not reach statistical significance on the microarrays. Adding CTE to DPE was predicted to increase Cyp1b1 expression, whereas adding CSC to CTE and DPE was predicted to have no effect, in agreement with microarray results. The aryl hydrocarbon receptor repressor (Ahrr) was determined to be the most significant input variable for model predictions using back-propagation and normalization of FNN weights. - Highlights: ► Tested a model to predict PAH mixture-mediated changes in Cyp1b1 expression ► Quantitative predictions in agreement with microarrays for Cyp1b1 induction ► Unexpected difference in expression between DBC and other treatments predicted ► Model predictions for combining PAH mixtures in agreement with microarrays ► Predictions highly dependent on aryl hydrocarbon receptor repressor expression.« less
A fractal approach to dynamic inference and distribution analysis
van Rooij, Marieke M. J. W.; Nash, Bertha A.; Rajaraman, Srinivasan; Holden, John G.
2013-01-01
Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods. PMID:23372552
Barillot, Romain; Combes, Didier; Chevalier, Valérie; Fournier, Christian; Escobar-Gutiérrez, Abraham J.
2012-01-01
Background and aims Light interception is a key factor driving the functioning of wheat–pea intercrops. The sharing of light is related to the canopy structure, which results from the architectural parameters of the mixed species. In the present study, we characterized six contrasting pea genotypes and identified architectural parameters whose range of variability leads to various levels of light sharing within virtual wheat–pea mixtures. Methodology Virtual plants were derived from magnetic digitizations performed during the growing cycle in a greenhouse experiment. Plant mock-ups were used as inputs of a radiative transfer model in order to estimate light interception in virtual wheat–pea mixtures. The turbid medium approach, extended to well-mixed canopies, was used as a framework for assessing the effects of leaf area index (LAI) and mean leaf inclination on light sharing. Principal results Three groups of pea genotypes were distinguished: (i) early and leafy cultivars, (ii) late semi-leafless cultivars and (iii) low-development semi-leafless cultivars. Within open canopies, light sharing was well described by the turbid medium approach and was therefore determined by the architectural parameters that compose LAI and foliage inclination. When canopy closure started, the turbid medium approach was unable to properly infer light partitioning because of the vertical structure of the canopy. This was related to the architectural parameters that determine the height of pea genotypes. Light capture was therefore affected by the development of leaflets, the number of branches and phytomers, as well as internode length. Conclusions This study provides information on pea architecture and identifies parameters whose variability can be used to drive light sharing within wheat–pea mixtures. These results could be used to design the architecture of pea ideotypes adapted to multi-specific stands with respect to light competition. PMID:23240074
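Under the turbid medium approach extended to well-mixed canopies, light partitioning has a simple closed form: total interception follows Beer's law on the summed k·LAI, and each species' share is proportional to its own k·LAI. A minimal sketch with hypothetical extinction coefficients, not the study's digitized-plant radiative transfer model:

```python
import numpy as np

def light_partitioning(k, lai):
    """Light sharing in a well-mixed bi-specific canopy under the turbid
    medium analogy: total interception follows Beer's law on sum(k*LAI),
    and species i captures a share proportional to k_i * LAI_i."""
    k = np.asarray(k, dtype=float)
    lai = np.asarray(lai, dtype=float)
    k_lai = k * lai
    total = 1.0 - np.exp(-np.sum(k_lai))
    return total, total * k_lai / np.sum(k_lai)

# hypothetical wheat and pea extinction coefficients and LAIs
total, per_species = light_partitioning(k=[0.5, 0.7], lai=[2.0, 1.5])
print(total, per_species)
```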
NASA Astrophysics Data System (ADS)
Mori, Shintaro; Hisakado, Masato
2015-05-01
We propose a finite-size scaling analysis method for binary stochastic processes X(t) ∈ {0, 1} based on the second moment correlation length ξ for the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model to represent the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of the variables taking 1 among the most recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal scaling functions for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit r → ∞, where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition, with the critical exponents β = 1 and ν|| = 2.
Local Solutions in the Estimation of Growth Mixture Models
ERIC Educational Resources Information Center
Hipp, John R.; Bauer, Daniel J.
2006-01-01
Finite mixture models are well known to have poorly behaved likelihood functions featuring singularities and multiple optima. Growth mixture models may suffer from fewer of these problems, potentially benefiting from the structure imposed on the estimated class means and covariances by the specified growth model. As demonstrated here, however,…
An olfactory cocktail party: figure-ground segregation of odorants in rodents.
Rokni, Dan; Hemmelder, Vivian; Kapoor, Vikrant; Murthy, Venkatesh N
2014-09-01
In odorant-rich environments, animals must be able to detect specific odorants of interest against variable backgrounds. However, studies have found that both humans and rodents are poor at analyzing the components of odorant mixtures, suggesting that olfaction is a synthetic sense in which mixtures are perceived holistically. We found that mice could be easily trained to detect target odorants embedded in unpredictable and variable mixtures. To relate the behavioral performance to neural representation, we imaged the responses of olfactory bulb glomeruli to individual odors in mice expressing the Ca(2+) indicator GCaMP3 in olfactory receptor neurons. The difficulty of segregating the target from the background depended strongly on the extent of overlap between the glomerular responses to target and background odors. Our study indicates that the olfactory system has powerful analytic abilities that are constrained by the limits of combinatorial neural representation of odorants at the level of the olfactory receptors.
Variable-temperature cryogenic trap for the separation of gas mixtures
NASA Technical Reports Server (NTRS)
Des Marais, D. J.
1978-01-01
The paper describes a continuous variable-temperature U-shaped cold trap which can both purify vacuum-line combustion products for subsequent stable isotopic analysis and isolate the methane and ethane constituents of natural gases. The canister containing the trap is submerged in liquid nitrogen, and, as the gas cools, the gas mixture components condense sequentially according to their relative vapor pressures. After the about 12 min required for the bottom of the trap to reach the liquid-nitrogen temperature, passage of electric current through the resistance wire wrapped around the tubing covering the U-trap permits distillation of successive gas components at optimal temperatures. Data on the separation achieved for two mixtures, the first being typical vacuum-line combustion products of geochemical samples such as rocks and the second being natural gas, are presented, and the thermal behavior and power consumption are reported.
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder used to code any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
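A toy sketch of the threshold-driven selection idea: try a large-block DCT coder first and recurse into quadrants where the distortion exceeds a threshold. This simplified version keeps a fixed set of low-frequency coefficients and omits the vector quantization stage of the actual MBC method; the block size, threshold, and the assumption that image dimensions are multiples of the block size are illustrative choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, keep=8):
    """Toy DCT coder: keep only the keep x keep lowest-frequency
    coefficients and reconstruct (small blocks pass through losslessly)."""
    c = dctn(block, norm='ortho')
    mask = np.zeros_like(c)
    mask[:keep, :keep] = 1.0
    return idctn(c * mask, norm='ortho')

def mixture_block_code(img, size=16, thresh=25.0, min_size=4):
    """Recursive mixture-of-coders sketch: code each size x size block
    with the large-block coder, and split into quadrants whenever the
    mean squared distortion exceeds thresh."""
    out = np.empty(img.shape, dtype=float)
    for i in range(0, img.shape[0], size):
        for j in range(0, img.shape[1], size):
            blk = img[i:i + size, j:j + size].astype(float)
            rec = code_block(blk)
            if np.mean((blk - rec) ** 2) > thresh and size > min_size:
                rec = mixture_block_code(blk, size // 2, thresh, min_size)
            out[i:i + size, j:j + size] = rec
    return out

rng = np.random.default_rng(0)
rec = mixture_block_code(rng.normal(128.0, 30.0, size=(64, 64)))
```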
Clustering and variable selection in the presence of mixed variable types and missing data.
Storlie, C B; Myers, S M; Katusic, S K; Weaver, A L; Voigt, R G; Croarkin, P E; Stoeckel, R E; Port, J D
2018-05-17
We consider the problem of model-based clustering in the presence of many correlated, mixed continuous, and discrete variables, some of which may have missing values. Discrete variables are treated with a latent continuous variable approach, and the Dirichlet process is used to construct a mixture model with an unknown number of components. Variable selection is also performed to identify the variables that are most influential for determining cluster membership. The work is motivated by the need to cluster patients thought to potentially have autism spectrum disorder on the basis of many cognitive and/or behavioral test scores. There are a modest number of patients (486) in the data set along with many (55) test score variables (many of which are discrete valued and/or missing). The goal of the work is to (1) cluster these patients into similar groups to help identify those with similar clinical presentation and (2) identify a sparse subset of tests that inform the clusters in order to eliminate unnecessary testing. The proposed approach compares very favorably with other methods via simulation of problems of this type. The results of the autism spectrum disorder analysis suggested 3 clusters to be most likely, while only 4 test scores had high (>0.5) posterior probability of being informative. This will result in much more efficient and informative testing. The need to cluster observations on the basis of many correlated, continuous/discrete variables with missing values is a common problem in the health sciences as well as in many other disciplines. Copyright © 2018 John Wiley & Sons, Ltd.
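The Dirichlet process construction mentioned above is often implemented via (truncated) stick-breaking, which turns Beta draws into mixture weights for an effectively unbounded number of components. A minimal sketch of that construction only, not the authors' full sampler or variable-selection procedure:

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking weights of a Dirichlet process mixture:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j)."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    return v * remaining

w = stick_breaking(alpha=1.0, n_atoms=50, rng=np.random.default_rng(0))
print(w[:5], w.sum())    # weights decay quickly; the sum approaches 1
```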
The Gaussian Graphical Model in Cross-Sectional and Time-Series Data.
Epskamp, Sacha; Waldorp, Lourens J; Mõttus, René; Borsboom, Denny
2018-04-16
We discuss the Gaussian graphical model (GGM; an undirected network of partial correlation coefficients) and detail its utility as an exploratory data analysis tool. The GGM shows which variables predict one another, allows for sparse modeling of covariance structures, and may highlight potential causal relationships between observed variables. We describe its utility in three kinds of psychological data sets: data sets in which consecutive cases are assumed independent (e.g., cross-sectional data), temporally ordered data sets (e.g., n = 1 time series), and a mixture of the two (e.g., n > 1 time series). In time-series analysis, the GGM can be used to model the residual structure of a vector-autoregression (VAR) analysis, also termed graphical VAR. Two network models can then be obtained: a temporal network and a contemporaneous network. When analyzing data from multiple subjects, a GGM can also be formed on the covariance structure of stationary means, the between-subjects network. We discuss the interpretation of these models and propose estimation methods to obtain these networks, which we implement in the R packages graphicalVAR and mlVAR. The methods are showcased in two empirical examples, and simulation studies on these methods are included in the supplementary materials.
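The GGM edge weights are the partial correlations obtained by standardizing the precision (inverse covariance) matrix, ρ_ij = −κ_ij/√(κ_ii κ_jj). A minimal sketch on a hypothetical 3-variable covariance (the paper's own implementations are the R packages named above):

```python
import numpy as np

def partial_correlations(cov):
    """Gaussian graphical model edge weights: invert the covariance to
    get the precision matrix K, then standardize via
    rho_ij = -K_ij / sqrt(K_ii * K_jj), with a zero diagonal."""
    prec = np.linalg.inv(cov)
    d = np.sqrt(np.diag(prec))
    rho = -prec / np.outer(d, d)
    np.fill_diagonal(rho, 0.0)
    return rho

# hypothetical 3-variable covariance matrix
cov = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
print(partial_correlations(cov))
```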
NASA Astrophysics Data System (ADS)
Serventi, Giovanna; Carli, Cristian; Sgavetti, Maria
2015-07-01
Among remote sensing techniques for detecting a planet's mineralogical composition, visible and near-infrared (VNIR) reflectance spectroscopy is a powerful tool, because crystal field absorption bands are related to particular transition metals in well-defined crystal structures, e.g., Fe2+ in the M1 and M2 sites of olivine (OL) or pyroxene (PX). Although OL, PX and their mixtures have been widely studied, plagioclase (PL), often considered a spectroscopically transparent mineral, has been poorly analyzed. In this work we quantitatively investigate the influence of the plagioclase absorption band on the absorption bands of Fe,Mg minerals using the Modified Gaussian Model - MGM (Sunshine, J.M. et al. [1990]. J. Geophys. Res. 95, 6955-6966). We consider three plagioclase compositions of varying FeO wt.% content and five mafic end-members: (1) 56% orthopyroxene and 44% clinopyroxene, (2) 28% olivine and 72% orthopyroxene, (3) 30% orthopyroxene and 70% olivine, (4) 100% olivine and (5) 100% orthopyroxene, at two different particle sizes. The spectral parameters considered here are band depth, band center, band width, c0 (the continuum intercept) and c1 (the continuum offset). In particular, we show the variation of the plagioclase and composite (plagioclase-olivine) band spectral parameters versus the volumetric iron content related to the plagioclase abundance in the mixtures. Generally, increasing the vol. FeO% due to the PL: (1) the 1250 nm band deepens with a linear trend in mixtures with pyroxenes, while it weakens in mixtures with olivine, with the trend shifting from parabolic to linear as the olivine content in the end-member increases; (2) the 1250 nm band center moves towards longer wavelengths with a linear trend in pyroxene-rich mixtures and a parabolic trend in olivine-rich mixtures; and (3) the 1250 nm band clearly widens with a linear trend in olivine-free mixtures, while the widening is only slight in olivine-rich mixtures. We also outline how spectral parameters can be ambiguous, leading to an incorrect mineralogical interpretation. Furthermore, we show the presence of an asymmetry of the plagioclase band towards the IR region, which can be resolved by adding a Gaussian in the 1600-1800 nm spectral region.
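For orientation, MGM-type models describe log reflectance as a continuum plus superposed absorption bands. The sketch below uses ordinary Gaussians over wavelength as a simplification (the actual MGM of Sunshine et al. uses modified Gaussians), with hypothetical band parameters loosely evoking a plagioclase band near 1250 nm:

```python
import numpy as np

def mgm_like_reflectance(wavelength, c0, c1, bands):
    """Simplified MGM-style model: natural-log reflectance as a linear
    continuum plus Gaussian absorptions,
    ln R(lam) = c0 + c1*lam + sum_i s_i * exp(-(lam - mu_i)^2 / (2 w_i^2)),
    with negative band strengths s_i producing absorptions."""
    lnr = c0 + c1 * wavelength
    for strength, center, width in bands:
        lnr += strength * np.exp(-(wavelength - center) ** 2
                                 / (2.0 * width ** 2))
    return np.exp(lnr)

wl = np.linspace(400.0, 2500.0, 500)       # wavelength grid [nm]
# hypothetical bands: a PL band near 1250 nm, an OPX band near 900 nm
r = mgm_like_reflectance(wl, c0=-0.1, c1=-1e-5,
                         bands=[(-0.3, 1250.0, 150.0),
                                (-0.5, 900.0, 90.0)])
print(r.min(), r.max())
```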
NASA Astrophysics Data System (ADS)
Lee, Hsiang-He; Chen, Shu-Hua; Kleeman, Michael J.; Zhang, Hongliang; DeNero, Steven P.; Joe, David K.
2016-07-01
The source-oriented Weather Research and Forecasting chemistry model (SOWC) was modified to include warm cloud processes and was applied to investigate how aerosol mixing states influence fog formation and optical properties in the atmosphere. SOWC tracks a 6-D chemical variable (X, Z, Y, size bins, source types, species) through an explicit simulation of atmospheric chemistry and physics. A source-oriented cloud condensation nuclei module was implemented into the SOWC model to simulate warm clouds using the modified two-moment Purdue Lin microphysics scheme. The Goddard shortwave and long-wave radiation schemes were modified to interact with source-oriented aerosols and cloud droplets so that aerosol direct and indirect effects could be studied. The enhanced SOWC model was applied to study a fog event that occurred on 17 January 2011, in the Central Valley of California. Tule fog occurred because an atmospheric river effectively advected high moisture into the Central Valley and nighttime drainage flow brought cold air from mountains into the valley. The SOWC model produced reasonable liquid water path, spatial distribution and duration of fog events. The inclusion of aerosol-radiation interaction only slightly modified simulation results since cloud optical thickness dominated the radiation budget in fog events. The source-oriented mixture representation of particles reduced cloud droplet number relative to the internal mixture approach that artificially coats hydrophobic particles with hygroscopic components. The fraction of aerosols activating into cloud condensation nuclei (CCN) at a supersaturation of 0.5 % in the Central Valley decreased from 94 % in the internal mixture model to 80 % in the source-oriented model. This increased surface energy flux by 3-5 W m-2 and surface temperature by as much as 0.25 K in the daytime.
Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.
Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten
2017-10-01
Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
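For reference, the vMF density on the unit (d−1)-sphere is f(x) = C_d(κ) exp(κ μᵀx) with C_d(κ) = κ^{d/2−1} / ((2π)^{d/2} I_{d/2−1}(κ)). A minimal, numerically stable sketch of the log-density (not the letter's collapsed MCMC sampler):

```python
import numpy as np
from scipy.special import ive

def vmf_logpdf(x, mu, kappa):
    """Log-density of the von Mises-Fisher distribution for unit vectors
    x and mean direction mu in d dimensions.  Uses the exponentially
    scaled Bessel function ive(nu, k) = I_nu(k) * exp(-k) for stability
    at the large concentrations typical of fMRI time series."""
    d = len(mu)
    nu = d / 2.0 - 1.0
    log_c = (nu * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi)
             - np.log(ive(nu, kappa)) - kappa)
    return log_c + kappa * np.dot(mu, x)

mu = np.array([1.0, 0.0, 0.0])            # mean direction on the 2-sphere
x = np.array([0.8, 0.6, 0.0])             # already unit length
print(vmf_logpdf(x, mu, kappa=10.0))
```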
An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points
NASA Astrophysics Data System (ADS)
Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun
2014-05-01
Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method in order to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. As a result, all equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
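The solver described above couples the EoS-specific critical-point equations to a damped Newton-Raphson iteration that updates all variables simultaneously; a generic sketch of such a damped iteration in Python (the residual F and Jacobian J for the SRK or PR critical-point conditions are assumed to be supplied by the caller, and the simple step-halving rule here only illustrates the role of the damping coefficient):

import numpy as np

def damped_newton(F, J, x0, tol=1e-8, max_iter=50):
    # F(x): residual vector of the nonlinear system; J(x): its Jacobian matrix
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        step = np.linalg.solve(J(x), -f)   # full Newton step for all variables at once
        lam = 1.0                          # damping coefficient
        while lam > 1e-4 and np.linalg.norm(F(x + lam * step)) >= np.linalg.norm(f):
            lam *= 0.5                     # halve the step until the residual shrinks
        x = x + lam * step
    return x

# Toy usage on x0^2 + x1^2 = 2, x0 - x1 = 0, whose root is (1, 1)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(damped_newton(F, J, [2.0, 0.5]))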
Malzert-Fréon, A; Hennequin, D; Rault, S
2010-11-01
Lipidic nanoparticles (NP), formulated from a phase inversion temperature process, have been studied with chemometric techniques to emphasize the influence of the four major components (Solutol®, Labrasol®, Labrafac®, water) on their average diameter and their size distribution. Typically, these NP present a monodisperse size lower than 200 nm, as determined by dynamic light scattering measurements. From the application of the partial least squares (PLS) regression technique to the experimental data collected during definition of the feasibility zone, it was established that the NP present a core-shell structure in which Labrasol® is well encapsulated and contributes to the structuring of the NP. Even though this solubility enhancer is regarded as a pure surfactant in the literature, it appears that the oil moieties of this macrogolglyceride mixture significantly influence its properties. Furthermore, the results have shown that the PLS technique can also be used to predict sizes for given relative proportions of components, and it was established that, from a mixture design, the quantitative mixture composition required to reach a targeted size and a targeted polydispersity index (PDI) can be easily predicted. Hence, statistical models can be a useful tool to control and optimize the size characteristics of NP. © 2010 Wiley-Liss, Inc. and the American Pharmacists Association
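A minimal sketch of this kind of composition-to-size prediction with partial least squares, using scikit-learn; the compositions (mass fractions of Solutol, Labrasol, Labrafac and water) and the size/PDI responses below are hypothetical placeholders, not the study's data:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

X = np.array([[0.25, 0.15, 0.10, 0.50],
              [0.30, 0.10, 0.10, 0.50],
              [0.20, 0.20, 0.15, 0.45],
              [0.35, 0.05, 0.10, 0.50],
              [0.25, 0.10, 0.15, 0.50],
              [0.30, 0.15, 0.05, 0.50]])
Y = np.array([[150, 0.15],    # columns: diameter (nm), PDI
              [120, 0.10],
              [190, 0.22],
              [105, 0.08],
              [170, 0.18],
              [130, 0.12]])

pls = PLSRegression(n_components=2).fit(X, Y)
# Predicted size and PDI for a new candidate formulation
print(pls.predict([[0.28, 0.12, 0.10, 0.50]]))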
Cluster kinetics model for mixtures of glassformers
NASA Astrophysics Data System (ADS)
Brenskelle, Lisa A.; McCoy, Benjamin J.
2007-10-01
For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.
Landrum, Peter F; Chapman, Peter M; Neff, Jerry; Page, David S
2012-04-01
Experimental designs for evaluating complex mixture toxicity in aquatic environments can be highly variable and, if not appropriate, can produce and have produced data that are difficult or impossible to interpret accurately. We build on and synthesize recent critical reviews of mixture toxicity using lessons learned from 4 case studies, ranging from binary to more complex mixtures of primarily polycyclic aromatic hydrocarbons and petroleum hydrocarbons, to provide guidance for evaluating the aquatic toxicity of complex mixtures of organic chemicals. Two fundamental requirements include establishing a dose-response relationship and determining the causative agent (or agents) of any observed toxicity. Meeting these 2 requirements involves ensuring appropriate exposure conditions and measurement endpoints, considering modifying factors (e.g., test conditions, test organism life stages and feeding behavior, chemical transformations, mixture dilutions, sorbing phases), and correctly interpreting dose-response relationships. Specific recommendations are provided. Copyright © 2011 SETAC.
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features by the MBF suggests domain adaptation, i.e., changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.
Performance on perceptual word identification is mediated by discrete states.
Swagman, April R; Province, Jordan M; Rouder, Jeffrey N
2015-02-01
We contrast predictions from discrete-state models of all-or-none information loss with signal-detection models of graded strength for the identification of briefly flashed English words. Previous assessments have focused on whether ROC curves are straight or not, which is a test of a discrete-state model where detection leads to the highest confidence response with certainty. We, along with many others, argue that this certainty assumption is too constraining and, consequently, that the straight-line ROC test is too stringent. Instead, we assess a core property of discrete-state models, conditional independence, where the pattern of responses depends only on which state is entered. The conditional independence property implies that confidence ratings are a mixture of detect-state and guess-state responses, and that stimulus strength factors, the duration of the flashed word in this report, affect only the probability of entering a state and not the responses conditional on a state. To assess this mixture property, 50 participants saw words presented briefly on a computer screen at three variable flash durations, followed by either a two-alternative confidence-ratings task or a yes-no confidence-ratings task. Comparable discrete-state and signal-detection models were fit to the data for each participant and task. The discrete-state models outperformed the signal-detection models for 90% of participants in the two-alternative task and for 68% of participants in the yes-no task. We conclude that discrete-state models are viable for predicting performance across stimulus conditions in a perceptual word identification task.
Evaluating differential effects using regression interactions and regression mixture models
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results with those from using an interaction term in linear regression. The research questions which each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, whereas regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
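A regression mixture of the kind discussed can be sketched with a short EM loop; this two-class linear-regression mixture (with hypothetical simulated data, not the paper's) shows how the slope is allowed to differ across latent classes rather than across levels of an observed moderator:

import numpy as np
from scipy.stats import norm

def regression_mixture_em(x, y, n_iter=200, seed=1):
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    beta = rng.normal(size=(2, 2))            # intercept and slope per latent class
    sigma = np.array([y.std(), y.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: class responsibilities for each observation
        dens = np.stack([pi[k] * norm.pdf(y, X @ beta[k], sigma[k]) for k in range(2)], axis=1)
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares within each class
        for k in range(2):
            w = gamma[:, k]
            beta[k] = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
            resid = y - X @ beta[k]
            sigma[k] = np.sqrt(np.sum(w * resid**2) / w.sum())
        pi = gamma.mean(axis=0)
    return pi, beta, sigma

# Hypothetical data: 40% of respondents follow a steep slope, 60% a shallow one
rng = np.random.default_rng(0)
x = rng.normal(size=300)
cls = rng.random(300) < 0.4
y = np.where(cls, 1 + 2.0 * x, 1 - 0.5 * x) + rng.normal(scale=0.5, size=300)
print(regression_mixture_em(x, y)[:2])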
Nonlinear Structured Growth Mixture Models in M"plus" and OpenMx
ERIC Educational Resources Information Center
Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne
2010-01-01
Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…
The Potential of Growth Mixture Modelling
ERIC Educational Resources Information Center
Muthen, Bengt
2006-01-01
The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…
Jammed Limit of Bijel Structure Formation
Welch, P. M.; Lee, M. N.; Parra-Vasquez, A. N. G.; ...
2017-11-02
Over the past decade, methods to control microstructure in heterogeneous mixtures by arresting spinodal decomposition via the addition of colloidal particles have led to an entirely new class of bicontinuous materials known as bijels. We present a new model for the development of these materials that lends itself to both numerical and analytical evaluation. This model reveals that a single dimensionless parameter, capturing both chemical and environmental variables, dictates the dynamics and the ultimate structure formed in bijels. We also demonstrate that this parameter must fall within a fixed range in order for jamming to occur during spinodal decomposition, as well as show that known experimental trends for the characteristic domain sizes and time scales of formation are recovered by this model.
A multi agent model for the limit order book dynamics
NASA Astrophysics Data System (ADS)
Bartolozzi, M.
2010-11-01
In the present work we introduce a novel multi-agent model that aims to reproduce the dynamics of a double-auction market at the microscopic time scale through a faithful simulation of the matching mechanics in the limit order book. The agents follow a noisy decision-making process in which their actions are related to a stochastic variable, the market sentiment, which we define as a mixture of public and private information. The model, despite making just a few basic assumptions about the trading strategies of the agents, is able to reproduce several empirical features of the high-frequency dynamics of the market microstructure, related not only to the price movements but also to the deposition of orders in the book.
Stability of the accelerated expansion in nonlinear electrodynamics
NASA Astrophysics Data System (ADS)
Sharif, M.; Mumtaz, Saadia
2017-02-01
This paper is devoted to the phase space analysis of an isotropic and homogeneous model of the universe by taking a noninteracting mixture of the electromagnetic and viscous radiating fluids whose viscous pressure satisfies a nonlinear version of the Israel-Stewart transport equation. We establish an autonomous system of equations by introducing normalized dimensionless variables. In order to analyze the stability of the system, we find corresponding critical points for different values of the parameters. We also evaluate the power-law scale factor whose behavior indicates different phases of the universe in this model. It is concluded that the bulk viscosity as well as electromagnetic field enhances the stability of the accelerated expansion of the isotropic and homogeneous model of the universe.
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
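For the plain (single-component) zero-truncated Poisson case, the Horvitz-Thompson population-size estimate mentioned above reduces to a few lines; a sketch assuming capture counts with mean greater than one (the NPMLE construction for the mixture case is beyond this snippet):

import numpy as np
from scipy.optimize import brentq

def fit_ztp(counts):
    # counts: capture frequencies of the *observed* individuals, all >= 1
    counts = np.asarray(counts, dtype=float)
    mean_obs = counts.mean()
    # Zero-truncated Poisson mean is lam / (1 - exp(-lam)); invert it for lam
    f = lambda lam: lam / (1.0 - np.exp(-lam)) - mean_obs
    lam = brentq(f, 1e-8, 10.0 * mean_obs)
    p0 = np.exp(-lam)                  # estimated probability of never being observed
    N_hat = len(counts) / (1.0 - p0)   # Horvitz-Thompson population-size estimate
    return lam, N_hat

print(fit_ztp([1, 1, 2, 1, 3, 2, 1, 4, 1, 2]))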
Mixture Toxicity of Nickel and Microplastics with Different Functional Groups on Daphnia magna.
Kim, Dokyung; Chae, Yooeun; An, Youn-Joo
2017-11-07
In recent years, discarded plastic has become an increasingly prevalent pollutant in aquatic ecosystems. These plastic wastes decompose into microplastics, which pose not only a direct threat to aquatic organisms but also an indirect threat via adsorption of other aquatic pollutants. In this study, we investigated the toxicities of variable and fixed combinations of two types of microplastics [one coated with a carboxyl group (PS-COOH) and the other lacking this functional group (PS)] with the heavy metal nickel (Ni) on Daphnia magna and calculated mixture toxicity using a toxic unit model. We found that toxicity of Ni in combination with either of the two microplastics differed from that of Ni alone. Furthermore, in general, we observed that immobilization of D. magna exposed to Ni combined with PS-COOH was higher than that of D. magna exposed to Ni combined with PS. Collectively, the results of our study indicate that the toxic effects of microplastics and pollutants may vary depending on the specific properties of the pollutant and microplastic functional groups, and further research on the mixture toxicity of various combinations of microplastics and pollutants is warranted.
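The toxic unit model referenced here is, under concentration addition, a simple summation; a sketch with hypothetical concentrations and EC50 values (not the study's measurements):

def toxic_units(concentrations, ec50s):
    # TU_i = C_i / EC50_i; a mixture sum near 1 is expected to immobilize
    # about half the test organisms if the components act similarly
    return sum(c / e for c, e in zip(concentrations, ec50s))

# Hypothetical example: Ni plus two microplastic types (mg/L versus their EC50s)
print(toxic_units([2.0, 10.0, 15.0], [5.0, 40.0, 60.0]))   # -> 0.9 TU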
Experimental determination of useful resistance value during pasta dough kneading
NASA Astrophysics Data System (ADS)
Podgornyj, Yu I.; Martynova, T. G.; Skeeba, V. Yu; Kosilov, A. S.; Chernysheva, A. A.; Skeeba, P. Yu
2017-10-01
A large variety of materials is produced in the form of dry powders or low-humidity granulated masses in the modern market, creating a need to develop new manufacturing machinery and to renew the existing facilities involved in the production of various loose mixtures. One task in upgrading this machinery is enhancing its performance. Because experimental research is not feasible on full-scale samples, an experimental installation was constructed; the article contains its kinematic scheme and 3D model. The angle of the kneading blade, the volume of the loose mixture, the rotating frequency and the number of double passes of the work member were chosen as the experimental variables. A two-stage experimental technique, covering the rotary and the reciprocating movement of the work member, is proposed. Processing of the experimental data yields correlations between the load characteristics of the mixer work member and the blade angle, the mixture volume and the work member rotating frequency, allowing loads to be recalculated for machines of this type.
Development of PBPK Models for Gasoline in Adult and ...
Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of
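The competitive metabolic inhibition assumed when joining the hydrocarbon sub-models has the standard Michaelis-Menten form; a sketch with hypothetical kinetic constants (the actual PBPK model embeds this rate in mass-balance ODEs for each tissue compartment):

def hepatic_rate(c_liver, vmax, km, i):
    # Metabolism of constituent i with competitive inhibition by the others:
    # v_i = Vmax_i * C_i / (Km_i * (1 + sum_j C_j / Km_j) + C_i)
    inhibition = sum(c_liver[j] / km[j] for j in range(len(c_liver)) if j != i)
    return vmax[i] * c_liver[i] / (km[i] * (1.0 + inhibition) + c_liver[i])

# Hypothetical two-constituent example (free liver concentrations in mg/L)
print(hepatic_rate([1.0, 2.0], vmax=[10.0, 8.0], km=[0.5, 1.0], i=0))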
NASA Technical Reports Server (NTRS)
Goldstein, D.; Magnotti, F.; Chinitz, W.
1983-01-01
Reaction rates in turbulent, reacting flows are reviewed. Assumed probability density function (pdf) modeling of reaction rates is investigated in relation to a three-variable pdf employing a 'most likely pdf' model. Chemical kinetic mechanisms treating hydrogen-air combustion are studied. Perfectly stirred reactor modeling of flame-stabilizing recirculation regions was used to investigate the stable flame regions for silane, hydrogen, methane, and propane, and for certain mixtures thereof. It is concluded that, in general, silane can be counted upon to stabilize flames only when the overall fuel-air ratio is close to or greater than unity. For lean flames, silane may tend to destabilize the flame. Other factors favoring stable flames are high initial reactant temperatures and system pressure.
Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S
2007-07-09
A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
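A sketch of a three-component simplex-centroid design and a saturated Scheffé special cubic fit; the responses below are hypothetical placeholders, and the paper's composite design additionally crosses two such simplices and adds a split-plot error structure, which is omitted here:

import numpy as np

design = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],         # pure components
    [.5, .5, 0], [.5, 0, .5], [0, .5, .5],   # binary blends
    [1/3, 1/3, 1/3],                         # overall centroid
])

def special_cubic_terms(x):
    x1, x2, x3 = x
    return [x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*x2*x3]

y = np.array([8, 11, 6, 14, 9, 12, 16])      # e.g. number of detected peaks per run

Z = np.array([special_cubic_terms(row) for row in design])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None) # saturated fit: 7 runs, 7 coefficients
print(coef)

# Predict the response at an axial check point of the simplex
print(np.array(special_cubic_terms([2/3, 1/6, 1/6])) @ coef)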
Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures
NASA Astrophysics Data System (ADS)
Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.
2017-10-01
Two-step approximate models of the chemical kinetics of detonation combustion of (i) a one-fuel mixture (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air) are developed; the models for multi-fuel mixtures are proposed here for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. The advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of the detonation wave in stoichiometric and fuel-rich methane-air mixtures and in a stoichiometric methane-oxygen mixture. The dominant size of the detonation cell, determined in the calculations, is in good agreement with all known experimental data.
ERIC Educational Resources Information Center
Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk
2008-01-01
Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…
Abdollahi, Yadollah; Sairi, Nor Asrina; Said, Suhana Binti Mohd; Abouzari-lotf, Ebrahim; Zakaria, Azmi; Sabri, Mohd Faizul Bin Mohd; Islam, Aminul; Alias, Yatimah
2015-11-05
It is believed that 80% of industrial carbon dioxide can be controlled by separation and storage technologies that use blended ionic-liquid absorbers. Among the blended absorbers, the mixture of water, N-methyldiethanolamine (MDEA) and guanidinium trifluoromethane sulfonate (gua) has shown superior stripping qualities. However, the blended solution exhibits a high viscosity that raises the cost of the separation process. In this work, fabrication of the blend was planned, controlled and optimized: the blend's components and the operating temperature were modeled and optimized as effective input variables to minimize viscosity as the final output, using a back-propagation artificial neural network (ANN). The modeling was carried out with four mathematical algorithms, each with its own experimental design, to obtain the optimum topology using the root mean squared error (RMSE), R-squared (R(2)) and absolute average deviation (AAD). The final model (QP-4-8-1), with the minimum RMSE and AAD and the highest R(2), was selected to guide the fabrication of the blended solution. The model was applied to obtain the optimum initial levels of the input variables, which spanned temperature 303-323 K, x[gua] 0-0.033, x[MDEA] 0.3-0.4, and x[H2O] 0.7-1.0. Moreover, the model yielded the relative importance ordering of the variables, x[gua] > temperature > x[MDEA] > x[H2O]; none of the variables was negligible in the fabrication. Furthermore, the model predicted the optimum points of the variables for minimizing the viscosity, which were validated by further experiments. The validated results confirmed the model's suitability for scheduling the fabrication. Accordingly, the ANN succeeded in modeling the initial components of blended solutions used as CO2-capture absorbers in separation technologies, in a form amenable to industrial scale-up. Copyright © 2015 Elsevier B.V. All rights reserved.
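A minimal sketch of this kind of ANN surrogate plus grid-based minimization using scikit-learn; the 4-8-1 layout mirrors the reported QP-4-8-1 topology, but the training data below are a synthetic stand-in for the measured viscosities, and the true response surface will differ:

import numpy as np
from itertools import product
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Inputs: temperature (K), x[gua], x[MDEA], x[H2O] over the stated ranges
X = rng.uniform([303, 0.0, 0.3, 0.7], [323, 0.033, 0.4, 1.0], size=(60, 4))
y = 50 - 0.5 * (X[:, 0] - 303) + 900 * X[:, 1] + 40 * X[:, 2]   # stand-in viscosity

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

# Search the operating window for the minimum predicted viscosity
grid = np.array(list(product(np.linspace(303, 323, 5), np.linspace(0, 0.033, 5),
                             np.linspace(0.3, 0.4, 5), np.linspace(0.7, 1.0, 5))))
pred = model.predict(grid)
print(grid[pred.argmin()], pred.min())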
Bak, N; Ebdrup, B H; Oranje, B; Fagerlund, B; Jensen, M H; Düring, S W; Nielsen, M Ø; Glenthøj, B Y; Hansen, L K
2017-01-01
Deficits in information processing and cognition are among the most robust findings in schizophrenia patients. Previous efforts to translate group-level deficits into clinically relevant and individualized information have, however, been unsuccessful, possibly because biologically distinct disease subgroups exist. We applied machine learning algorithms to measures of electrophysiology and cognition to identify potential subgroups of schizophrenia. Next, we explored subgroup differences regarding treatment response. Sixty-six antipsychotic-naive first-episode schizophrenia patients and sixty-five healthy controls underwent extensive electrophysiological and neurocognitive test batteries. Patients were assessed on the Positive and Negative Syndrome Scale (PANSS) before and after 6 weeks of monotherapy with the relatively selective D2 receptor antagonist amisulpride (280.3±159 mg per day). A reduced principal component space based on 19 electrophysiological variables and 26 cognitive variables was used as input for a Gaussian mixture model to identify subgroups of patients. With support vector machines, we explored the relation between PANSS subscores and the identified subgroups. We identified two statistically distinct subgroups of patients. We found no significant baseline psychopathological differences between these subgroups, but the effect of treatment in the groups was predicted with an accuracy of 74.3% (P=0.003). In conclusion, electrophysiology and cognition data may be used to classify subgroups of schizophrenia patients. The two distinct subgroups, which we identified, were psychopathologically inseparable before treatment, yet their response to dopaminergic blockade was predicted with significant accuracy. This proof of principle encourages further endeavors to apply data-driven, multivariate and multimodal models to facilitate progress from symptom-based psychiatry toward individualized treatment regimens. PMID:28398342
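A loose analogue of this pipeline (dimension reduction, Gaussian mixture subgrouping, then an SVM relating clinical scores to the subgroups) can be sketched with scikit-learn; all data below are random placeholders, and the paper's actual feature construction and validation scheme are more involved:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = rng.normal(size=(66, 45))      # 19 electrophysiological + 26 cognitive vars
panss_change = rng.normal(size=66)        # hypothetical treatment-response scores

Z = PCA(n_components=10).fit_transform(features)            # reduced component space
subgroup = GaussianMixture(n_components=2, random_state=0).fit_predict(Z)

# How well do response scores separate the data-driven subgroups?
acc = cross_val_score(SVC(), panss_change.reshape(-1, 1), subgroup, cv=5)
print(acc.mean())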
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Sang D.; Borodin, Oleg; Seo, D. M.
Electrolytes with the salt lithium bis(fluorosulfonyl)imide (LiFSI) have been evaluated relative to comparable electrolytes with other lithium salts. Acetonitrile (AN) has been used as a model electrolyte solvent. The information obtained from the thermal phase behavior, solvation/ionic association interactions, quantum chemical (QC) calculations and molecular dynamics (MD) simulations (with an APPLE&P many-body polarizable force field for the LiFSI salt) of the (AN)n-LiFSI mixtures provides detailed insight into the coordination interactions of the FSI- anions and the wide variability noted in the electrolyte transport properties (i.e., viscosity and ionic conductivity).
Orlandini, S; Pasquini, B; Caprini, C; Del Bubba, M; Squarcialupi, L; Colotta, V; Furlanetto, S
2016-09-30
A comprehensive strategy involving the use of a mixture-process variable (MPV) approach and Quality by Design principles has been applied in the development of a capillary electrophoresis method for the simultaneous determination of the anti-inflammatory drug diclofenac and its five related substances. The selected operative mode consisted of microemulsion electrokinetic chromatography with the addition of methyl-β-cyclodextrin. The critical process parameters included both the mixture components (MCs) of the microemulsion and the process variables (PVs). The MPV approach allowed the simultaneous investigation of the effects of MCs and PVs on the critical resolution between diclofenac and its 2-deschloro-2-bromo analogue and on analysis time. MPV experiments were used both in the screening phase and in the Response Surface Methodology, making it possible to draw MC and PV contour plots and to find important interactions between MCs and PVs. Robustness testing was carried out by MPV experiments and validation was performed following International Conference on Harmonisation guidelines. The method was applied to a real sample of diclofenac gastro-resistant tablets. Copyright © 2016 Elsevier B.V. All rights reserved.
Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis
2005-07-25
analysis. Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery. The spatial resolution permits different materials to be present in the area covered by a single pixel. The linear mixture model says that a pixel reflectance r can be considered as the linear mixture of the endmember signatures m1, m2, ..., mP: r = Mα + n (1), where M = [m1, m2, ..., mP], α holds the mixing proportions, and n is included to account for noise.
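Given the endmember matrix M, Eq. (1) can be inverted pixel-by-pixel with nonnegativity-constrained least squares; a sketch with a hypothetical 4-band, 3-material example (scipy's nnls enforces the nonnegativity of the abundances; full additivity, if desired, is imposed afterwards):

import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember signatures (rows: bands, columns: materials)
M = np.array([[0.20, 0.60, 0.10],
              [0.35, 0.55, 0.20],
              [0.50, 0.30, 0.40],
              [0.60, 0.10, 0.70]])
abund_true = np.array([0.5, 0.3, 0.2])
r = M @ abund_true + 0.01 * np.random.default_rng(0).normal(size=4)   # noisy pixel

alpha, resid = nnls(M, r)        # min ||M a - r||  subject to  a >= 0
print(alpha / alpha.sum())       # renormalize so the abundances sum to one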
Microstructure and hydrogen bonding in water-acetonitrile mixtures.
Mountain, Raymond D
2010-12-16
The role of hydrogen bonding between water and acetonitrile in determining the microheterogeneity of the liquid mixture is examined using NPT molecular dynamics simulations. Mixtures of six rigid, three-site models for acetonitrile and one water model (SPC/E) were simulated to determine the amount of water-acetonitrile hydrogen bonding. Only one of the six acetonitrile models (TraPPE-UA) was able to reproduce both the liquid density and the experimental estimates of hydrogen bonding derived from Raman scattering of the CN stretch band or from NMR quadrupole relaxation measurements. A simple modification of the acetonitrile model parameters for the models that provided poor estimates produced hydrogen-bonding results consistent with experiments for two of the models. Of these, only one of the modified models also accurately reproduced the density of the mixtures. The self-diffusion coefficient of liquid acetonitrile provided a final winnowing between the remaining modified model and the successful, unmodified model. The unmodified model is provisionally recommended for simulations of water-acetonitrile mixtures.
NASA Astrophysics Data System (ADS)
Akasaka, Ryo
This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. Therefore, the model can be applied to mixtures where experimental data is limited. Vapor-liquid equilibrium (VLE) of the following seven mixtures have been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2- tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations in calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for design and simulations of heat pumps and refrigeration systems using the mixtures as working fluid.
NASA Astrophysics Data System (ADS)
Zhang, Hongda; Han, Chao; Ye, Taohong; Ren, Zhuyin
2016-03-01
A method of chemistry tabulation combined with a presumed probability density function (PDF) is applied to simulate piloted premixed jet burner flames with high Karlovitz number using large eddy simulation. Thermo-chemistry states are tabulated by a combination of the auto-ignition and extended auto-ignition models. To evaluate the capability of the proposed tabulation method to represent the thermo-chemistry states at different fresh-gas temperatures, an a priori study is conducted by performing idealised transient one-dimensional premixed flame simulations. A presumed PDF is used to describe the interaction of turbulence and flame, with a beta PDF modelling the reaction progress variable distribution. Two presumed PDF models, a Dirichlet distribution and independent beta distributions, are applied to represent the interaction between the two mixture fractions associated with the three inlet streams. Comparisons of statistical results show that the two presumed PDF models for the two mixture fractions are both capable of predicting temperature and major species profiles; however, they have a significant effect on the predictions of intermediate species. An analysis of the thermo-chemical state-space representation of the sub-grid scale (SGS) combustion model is performed by comparing correlations between the carbon monoxide mass fraction and temperature. The SGS combustion model based on the proposed chemistry tabulation can reasonably capture the peak value and trend of intermediate species. Model extensions to adequately predict the peak location of intermediate species are discussed.
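The presumed beta-PDF step amounts to a one-dimensional quadrature of a tabulated quantity against a beta density whose shape parameters follow from the resolved mean and variance; a minimal sketch (the toy phi(c) stands in for a real flamelet table entry):

import numpy as np
from scipy.stats import beta

def presumed_beta_mean(phi, c_mean, c_var, n=400):
    # Shape parameters from the mean and variance; requires 0 < var < mean*(1-mean)
    g = c_mean * (1 - c_mean) / c_var - 1.0
    a, b = c_mean * g, (1 - c_mean) * g
    c = np.linspace(1e-6, 1 - 1e-6, n)
    w = beta.pdf(c, a, b)
    return (phi(c) * w).sum() / w.sum()   # weighted mean on a uniform grid

# Example: mean of a toy table entry phi(c) = c*(1-c) at given SGS moments
print(presumed_beta_mean(lambda c: c * (1 - c), c_mean=0.6, c_var=0.05))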
NASA Astrophysics Data System (ADS)
Iqbal, S.; Benim, A. C.; Fischer, S.; Joos, F.; Kluß, D.; Wiedermann, A.
2016-10-01
Turbulent reacting flows in a generic swirl gas turbine combustor model are investigated both numerically and experimentally. An emphasis is placed upon external flue gas recirculation, a promising technology for increasing the efficiency of the carbon capture and storage process that, however, can change the combustion behaviour significantly. A further emphasis is placed upon the investigation of alternative fuels such as biogas and syngas in comparison to conventional natural gas. The flames are simulated using the open-source CFD software OpenFOAM. In the numerical simulations, a laminar flamelet model based on mixture fraction and reaction progress variable is adopted. As turbulence model, the SST model is used within a URANS concept. Computational results are compared with the experimental data, where a fair agreement is observed.
Different Approaches to Covariate Inclusion in the Mixture Rasch Model
ERIC Educational Resources Information Center
Li, Tongyun; Jiao, Hong; Macready, George B.
2016-01-01
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J
2017-10-05
A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.
Brody, Gene H.; Lei, Man-Kit; Chae, David H.; Yu, Tianyi; Kogan, Steven M.; Beach, Steven R. H.
2013-01-01
This study was designed to examine the prospective relations of perceived racial discrimination with allostatic load (AL), along with a possible buffer of the association. A sample of 331 African Americans in the rural South provided assessments of perceived discrimination from ages 16 to 18 years. When youths were 18, caregivers reported parental emotional support, and youths assessed peer emotional support. AL and potential confounder variables were assessed when youths were 20. Latent Growth Mixture Modeling identified two perceived discrimination classes: high and stable and low and increasing. Adolescents in the high and stable class evinced heightened AL even with confounder variables controlled. The racial discrimination to AL link was not significant for young adults who received high emotional support. PMID:24673162
NASA Astrophysics Data System (ADS)
Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.
2016-07-01
Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac-distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, does not take into account the interaction between turbulence and chemical kinetics, nor the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames, and H2-air supersonic combustion.
Which metric of ambient ozone to predict daily mortality?
NASA Astrophysics Data System (ADS)
Moshammer, Hanns; Hutter, Hans-Peter; Kundi, Michael
2013-02-01
It is well known that ozone concentration is associated with daily cause-specific mortality, but which ozone metric best predicts the daily variability in mortality? We performed a time series analysis on daily deaths (all causes, respiratory and cardiovascular causes, as well as deaths in the elderly, 65+) in Vienna for the years 1991-2009. We controlled for seasonal and long-term trend, day of the week, temperature and humidity, using the same basic model for all pollutant metrics. Model fit was best for the same-day variability of ozone concentration (calculated as the difference between the daily hourly maximum and minimum) and for the hourly maximum; of these, the variability displayed the more linear dose-response function. The maximum 8-h moving average and the daily mean value performed less well. Nitrogen dioxide (daily mean), in comparison, performed better when previous-day values were assessed. Same-day ozone and previous-day nitrogen dioxide effect estimates did not confound each other. Variability in daily ozone levels, or peak ozone levels, seems to be a better proxy of a complex reactive secondary pollutant mixture than daily average ozone levels in the Middle European setting. If confirmed, this finding would have implications for the setting of legally binding limit values.
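A sketch of the kind of time-series regression involved, as a Poisson GLM with day-of-week and trigonometric season terms (statsmodels); all data below are synthetic placeholders, and the paper's actual confounder control (e.g. its temperature and trend smoothers) may differ:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 365 * 3
df = pd.DataFrame({
    "deaths": rng.poisson(20, n),              # daily death counts
    "o3_range": rng.gamma(4, 10, n),           # daily hourly max minus min ozone
    "temp": rng.normal(12, 8, n),
    "rh": rng.uniform(30, 90, n),
    "t": np.arange(n),
})
df["dow"] = df["t"] % 7

model = smf.glm(
    "deaths ~ o3_range + temp + rh + C(dow) + t"
    " + np.sin(2*np.pi*t/365) + np.cos(2*np.pi*t/365)",
    data=df, family=sm.families.Poisson()).fit()
print(model.params["o3_range"])   # log relative rate per unit of same-day ozone range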
Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors
ERIC Educational Resources Information Center
Guerra-Peña, Kiero; Steinley, Douglas
2016-01-01
Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…
Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan
2016-01-01
This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.
Using the tabulated diffusion flamelet model ADF-PCM to simulate a lifted methane-air jet flame
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michel, Jean-Baptiste; Colin, Olivier; Angelberger, Christian
2009-07-15
Two formulations of a turbulent combustion model based on the approximated diffusion flame presumed conditional moment (ADF-PCM) approach [J.-B. Michel, O. Colin, D. Veynante, Combust. Flame 152 (2008) 80-99] are presented. The aim is to describe autoignition and combustion in nonpremixed and partially premixed turbulent flames, while accounting for complex chemistry effects at a low computational cost. The starting point is the computation of approximate diffusion flames by solving the flamelet equation for the progress variable only, reading all chemical terms such as reaction rates or mass fractions from an FPI-type look-up table built from autoigniting PSR calculations using complex chemistry. These flamelets are then used to generate a turbulent look-up table where mean values are estimated by integration over presumed probability density functions. Two different versions of ADF-PCM are presented, differing by the probability density functions used to describe the evolution of the stoichiometric scalar dissipation rate: a Dirac function centered on the mean value for the basic ADF-PCM formulation, and a lognormal function for the improved formulation referenced ADF-PCM{chi}. The turbulent look-up table is read in the CFD code in the same manner as for PCM models. The developed models have been implemented into the compressible RANS CFD code IFP-C3D and applied to the simulation of the Cabra et al. experiment of a lifted methane jet flame [R. Cabra, J. Chen, R. Dibble, A. Karpetis, R. Barlow, Combust. Flame 143 (2005) 491-506]. The ADF-PCM{chi} model accurately reproduces the experimental lift-off height, while it is underpredicted by the basic ADF-PCM model. The ADF-PCM{chi} model shows a very satisfactory reproduction of the experimental mean and fluctuating values of major species mass fractions and temperature, while ADF-PCM yields noticeable deviations. Finally, a comparison of the experimental conditional probability densities of the progress variable for a given mixture fraction with model predictions is performed, showing that ADF-PCM{chi} reproduces the experimentally observed bimodal shape and its dependency on the mixture fraction, whereas ADF-PCM cannot retrieve this shape.
Some issues in the simulation of two-phase flows: The relative velocity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gräbel, J.; Hensel, S.; Ueberholz, P.
In this paper we compare numerical approximations for solving the Riemann problem for a hyperbolic two-phase flow model in two-dimensional space. The model is based on mixture parameters of state where the relative velocity between the two-phase systems is taken into account. This relative velocity appears as a main discontinuous flow variable through the complete wave structure and cannot be recovered correctly by some numerical techniques when simulating the associated Riemann problem. Simulations are validated by comparing the results of the numerical calculation qualitatively with OpenFOAM software. Simulations also indicate that OpenFOAM is unable to resolve the relative velocity associated with the Riemann problem.
Solubility modeling of refrigerant/lubricant mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michels, H.H.; Sienel, T.H.
1996-12-31
A general model for predicting the solubility properties of refrigerant/lubricant mixtures has been developed based on applicable theory for the excess Gibbs energy of non-ideal solutions. In our approach, flexible thermodynamic forms are chosen to describe the properties of both the gas and liquid phases of refrigerant/lubricant mixtures. After an extensive study of models for describing non-ideal liquid effects, the Wohl-suffix equations, which have been extensively utilized in the analysis of hydrocarbon mixtures, have been developed into a general form applicable to mixtures where one component is a POE lubricant. In the present study we have analyzed several POEs where structural and thermophysical property data were available. Data were also collected from several sources on the solubility of refrigerant/lubricant binary pairs. We have developed a computer code (NISC), based on the Wohl model, that predicts dew point or bubble point conditions over a wide range of composition and temperature. Our present analysis covers mixtures containing up to three refrigerant molecules and one lubricant. The present code can be used to analyze the properties of R-410a and R-407c in mixtures with a POE lubricant. Comparisons with other models, such as the Wilson or modified Wilson equations, indicate that the Wohl-suffix equations yield more reliable predictions for HFC/POE mixtures.
NASA Astrophysics Data System (ADS)
Hu, Yong; Olguin, Hernan; Gutheil, Eva
2017-05-01
A spray flamelet/progress variable approach is developed for use in spray combustion with partly pre-vaporised liquid fuel, where a laminar spray flamelet library accounts for evaporation within the laminar flame structures. For this purpose, the standard spray flamelet formulation for pure evaporating liquid fuel and oxidiser is extended by a chemical reaction progress variable in both the turbulent spray flame model and the laminar spray flame structures, in order to account for the effect of pre-vaporised liquid fuel for instance through use of a pilot flame. This new approach is combined with a transported joint probability density function (PDF) method for the simulation of a turbulent piloted ethanol/air spray flame, and the extension requires the formulation of a joint three-variate PDF depending on the gas phase mixture fraction, the chemical reaction progress variable, and gas enthalpy. The molecular mixing is modelled with the extended interaction-by-exchange-with-the-mean (IEM) model, where source terms account for spray evaporation and heat exchange due to evaporation as well as the chemical reaction rate for the chemical reaction progress variable. This is the first formulation using a spray flamelet model considering both evaporation and partly pre-vaporised liquid fuel within the laminar spray flamelets. Results with this new formulation show good agreement with the experimental data provided by A.R. Masri, Sydney, Australia. The analysis of the Lagrangian statistics of the gas temperature and the OH mass fraction indicates that partially premixed combustion prevails near the nozzle exit of the spray, whereas further downstream, the non-premixed flame is promoted towards the inner rich-side of the spray jet since the pilot flame heats up the premixed inner spray zone. In summary, the simulation with the new formulation considering the reaction progress variable shows good performance, greatly improving the standard formulation, and it provides new insight into the local structure of this complex spray flame.
Keever, Allison; McGowan, Conor P.; Ditchkoff, Stephen S.; Acker, S.A.; Grand, James B.; Newbolt, Chad H.
2017-01-01
Automated cameras have become increasingly common for monitoring wildlife populations and estimating abundance. Most analytical methods, however, fail to account for incomplete and variable detection probabilities, which biases abundance estimates. Methods which do account for detection have not been thoroughly tested, and those that have been tested were compared to other methods of abundance estimation. The goal of this study was to evaluate the accuracy and effectiveness of the N-mixture method, which explicitly incorporates detection probability, to monitor white-tailed deer (Odocoileus virginianus) by using camera surveys and a known, marked population to collect data and estimate abundance. Motion-triggered camera surveys were conducted at Auburn University’s deer research facility in 2010. Abundance estimates were generated using N-mixture models and compared to the known number of marked deer in the population. We compared abundance estimates generated from a decreasing number of survey days used in analysis and by time periods (DAY, NIGHT, SUNRISE, SUNSET, CREPUSCULAR, ALL TIMES). Accurate abundance estimates were generated using 24 h of data and nighttime only data. Accuracy of abundance estimates increased with increasing number of survey days until day 5, and there was no improvement with additional data. This suggests that, for our system, 5-day camera surveys conducted at night were adequate for abundance estimation and population monitoring. Further, our study demonstrates that camera surveys and N-mixture models may be a highly effective method for estimation and monitoring of ungulate populations.
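The single-site core of an N-mixture likelihood, marginalizing the unobserved abundance N out of repeated counts, fits in a few lines; a sketch with hypothetical 5-day camera counts (real analyses sum this log-likelihood over many sites and often add covariates on abundance and detection):

import numpy as np
from scipy.stats import binom, poisson
from scipy.optimize import minimize

def nmix_nll(params, counts, K=200):
    # N ~ Poisson(lam) once per site; y_t ~ Binomial(N, p) for each survey day
    lam = np.exp(params[0])
    p = 1.0 / (1.0 + np.exp(-params[1]))
    N = np.arange(counts.max(), K + 1)            # feasible abundances up to bound K
    loglik_N = poisson.logpmf(N, lam) + sum(binom.logpmf(y, N, p) for y in counts)
    m = loglik_N.max()                            # log-sum-exp over N
    return -(m + np.log(np.exp(loglik_N - m).sum()))

counts = np.array([31, 28, 35, 30, 26])
fit = minimize(nmix_nll, x0=[np.log(40.0), 0.0], args=(counts,))
print(np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1])))   # abundance, detection prob.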
Zhang, Mengliang; Harrington, Peter de B
2015-01-01
A multivariate partial least-squares (PLS) method was applied to the quantification of two complex polychlorinated biphenyl (PCB) commercial mixtures, Aroclor 1254 and 1260, in a soil matrix. PCBs in soil samples were extracted by headspace solid-phase microextraction (SPME) and determined by gas chromatography/mass spectrometry (GC/MS). Decachlorinated biphenyl (deca-CB) was used as the internal standard. After baseline correction, four data representations, extracted ion chromatograms (EIC) for Aroclor 1254, EIC for Aroclor 1260, EIC for both Aroclors, and two-way data sets, were constructed for PLS-1 and PLS-2 calibrations and evaluated with respect to quantitative prediction accuracy. The PLS model was optimized with respect to the number of latent variables using cross-validation of the calibration data set. The method was validated with certified soil samples and real field soil samples, and the predicted concentrations for both Aroclors using the EIC data sets agreed with the certified values. The linear range of the method was from 10 μg kg(-1) to 1000 μg kg(-1) for both Aroclor 1254 and 1260 in soil matrices, and the detection limit was 4 μg kg(-1) for Aroclor 1254 and 6 μg kg(-1) for Aroclor 1260. This holistic approach for the determination of mixtures in complex samples has broad application to environmental forensics and modeling. Copyright © 2014 Elsevier Ltd. All rights reserved.
Integrated presentation of ecological risk from multiple stressors
NASA Astrophysics Data System (ADS)
Goussen, Benoit; Price, Oliver R.; Rendal, Cecilie; Ashauer, Roman
2016-10-01
Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by constructing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic.
Chaplin, M. J.; Vickery, C. M.; Simon, S.; Davis, L.; Denyer, M.; Lockey, R.; Stack, M. J.; O'Connor, M. J.; Bishop, K.; Gough, K. C.; Maddison, B. C.; Thorne, L.; Spiropoulos, J.
2015-01-01
Current European Commission (EC) surveillance regulations require discriminatory testing of all transmissible spongiform encephalopathy (TSE)-positive small ruminant (SR) samples in order to classify them as bovine spongiform encephalopathy (BSE) or non-BSE. This requires a range of tests, including characterization by bioassay in mouse models. Since 2005, naturally occurring BSE has been identified in two goats. It has also been demonstrated that more than one distinct TSE strain can coinfect a single animal in natural field situations. This study assesses the ability of the statutory methods as listed in the regulation to identify BSE in a blinded series of brain samples, in which ovine BSE and distinct isolates of scrapie are mixed at various ratios ranging from 99% to 1%. Additionally, these current statutory tests were compared with a new in vitro discriminatory method, which uses serial protein misfolding cyclic amplification (sPMCA). Western blotting consistently detected 50% BSE within a mixture, but at higher dilutions it had variable success. The enzyme-linked immunosorbent assay (ELISA) method consistently detected BSE only when it was present as 99% of the mixture, with variable success at higher dilutions. Bioassay and sPMCA reported BSE in all samples where it was present, down to 1%. sPMCA also consistently detected the presence of BSE in mixtures at 0.1%. While bioassay is the only validated method that allows comprehensive phenotypic characterization of an unknown TSE isolate, the sPMCA assay appears to offer a fast and cost-effective alternative for the screening of unknown isolates when the purpose of the investigation is solely to determine the presence or absence of BSE. PMID:26041899
Variable Weight Fractional Collisions for Multiple Species Mixtures
2017-08-28
[Presentation slides; only fragments are recoverable.] Particle methods represent the velocity distribution function (VDF) as a set of delta functions and compute collisions between discrete velocities, which leaves the tail of the VDF poorly resolved even though the tail is critical to inelastic collisions. Variable particle weights provide extra degrees of freedom, extending the dynamic range of the discretized VDF from continuum toward discrete regimes.
Process Dissociation and Mixture Signal Detection Theory
ERIC Educational Resources Information Center
DeCarlo, Lawrence T.
2008-01-01
The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely…
ERIC Educational Resources Information Center
Li, Ming; Harring, Jeffrey R.
2017-01-01
Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…
ERIC Educational Resources Information Center
de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.
2010-01-01
We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…
Prediction of biodegradability of aromatics in water using QSAR modeling.
Cvetnic, Matija; Juretic Perisic, Daria; Kovacic, Marin; Kusic, Hrvoje; Dermadi, Jasna; Horvat, Sanja; Bolanca, Tomislav; Marin, Vedrana; Karamanis, Panaghiotis; Loncaric Bozic, Ana
2017-05-01
The study was aimed at developing models for predicting the biodegradability of aromatic water pollutants. For that purpose, 36 single-benzene-ring compounds, with different types, numbers and positions of substituents, were used. The biodegradability was estimated as the ratio of the biochemical (BOD₅) and chemical (COD) oxygen demand values determined for the parent compounds ((BOD₅/COD)₀), as well as for their reaction mixtures at the half-life of the UV-C/H₂O₂ process ((BOD₅/COD)t1/2). The models correlating biodegradability with the molecular structure characteristics of the studied pollutants were derived using quantitative structure-activity relationship (QSAR) principles and tools. After derivation and calibration on the training set and subsequent testing on the test set, 3- and 5-variable models were selected as the most predictive for (BOD₅/COD)₀ and (BOD₅/COD)t1/2, respectively, according to the values of the statistical parameters R² and Q². The 3-variable model predicting (BOD₅/COD)₀ possessed R² = 0.863 and Q² = 0.799 for the training set and R² = 0.710 for the test set, while the 5-variable model predicting (BOD₅/COD)t1/2 possessed R² = 0.886 and Q² = 0.788 for the training set and R² = 0.564 for the test set. The selected models are interpretable and transparent, reflecting key structural features that influence the targeted biodegradability, and can be correlated with the degradation mechanisms of the studied compounds under UV-C/H₂O₂. Copyright © 2017 Elsevier Inc. All rights reserved.
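The reported R² and Q² statistics can be reproduced in principle as follows; this sketch assumes an ordinary least-squares model on three hypothetical descriptors and leave-one-out cross-validation for Q², which is one common convention and not necessarily the exact protocol of the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(36, 3))     # three hypothetical descriptors, 36 compounds
y = X @ np.array([0.4, -0.2, 0.3]) + 0.5 + rng.normal(scale=0.05, size=36)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                                           # R2, training set
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)  # cross-validated Q2
print(f"R2 = {r2:.3f}, Q2 = {q2:.3f}")
```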
Analysis of human scream and its impact on text-independent speaker verification.
Hansen, John H L; Nandwana, Mahesh Kumar; Shokouhi, Navid
2017-04-01
A scream is defined as a sustained, high-energy vocalization that lacks phonological structure. This lack of phonological structure is what distinguishes a scream from other forms of loud vocalization, such as a "yell." This study investigates the acoustic aspects of screams and addresses those that are known to prevent standard speaker identification systems from recognizing the identity of screaming speakers. It is well established that speaker variability due to changes in vocal effort and the Lombard effect contributes to degraded performance in automatic speech systems (i.e., speech recognition, speaker identification, diarization, etc.). However, previous research in the general area of speaker variability has concentrated on human speech production, whereas less is known about non-speech vocalizations. The UT-NonSpeech corpus is developed here to investigate speaker verification from scream samples. This study considers a detailed analysis in terms of fundamental frequency, spectral peak shift, frame energy distribution, and spectral tilt. It is shown that traditional speaker recognition based on the Gaussian mixture model-universal background model (GMM-UBM) framework is unreliable when evaluated with screams.
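A bare-bones GMM-UBM verification score of the kind the study evaluates can be sketched as below; real systems MAP-adapt the UBM means to the enrollment data rather than refitting from scratch, and the random feature matrices here merely stand in for MFCC features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
ubm_feats = rng.normal(size=(5000, 13))            # pooled background "MFCC" frames
spk_feats = rng.normal(loc=0.3, size=(500, 13))    # enrollment frames, one speaker
test_feats = rng.normal(loc=0.3, size=(200, 13))   # test utterance frames

ubm = GaussianMixture(n_components=64, covariance_type="diag",
                      random_state=0).fit(ubm_feats)
# crude speaker model: refit starting from the UBM parameters
# (full systems use MAP adaptation of the UBM means instead)
spk = GaussianMixture(n_components=64, covariance_type="diag", random_state=0,
                      means_init=ubm.means_, weights_init=ubm.weights_)
spk.fit(spk_feats)

# verification score: average log-likelihood ratio over test frames
score = spk.score(test_feats) - ubm.score(test_feats)
print("LLR score:", score)
```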
Rafal Podlaski; Francis A. Roesch
2013-01-01
This study assessed the usefulness of various methods for choosing the initial values of the numerical procedures for estimating the parameters of mixture distributions, and analysed a variety of mixture models for approximating empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
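A minimal Gibbs sampler for a two-component normal mixture of this kind, stripped of the additive genetic and permanent environmental effects and of heteroscedastic residuals, might look as follows; the priors and starting values are illustrative assumptions.

```python
import numpy as np

def gibbs_two_normal(y, n_iter=2000, seed=0):
    """Gibbs sampler for y ~ (1-Pm)*N(mu0, s2) + Pm*N(mu1, s2).
    Simplified: shared residual variance, vague priors, and no genetic
    or permanent-environment random effects."""
    rng = np.random.default_rng(seed)
    n = len(y)
    mu = np.array([y.mean() - y.std(), y.mean() + y.std()])
    s2, pm = y.var(), 0.5
    draws = []
    for it in range(n_iter):
        # 1) component labels from posterior membership probabilities
        logp1 = np.log(pm) - 0.5 * (y - mu[1]) ** 2 / s2
        logp0 = np.log(1.0 - pm) - 0.5 * (y - mu[0]) ** 2 / s2
        z = (rng.random(n) < 1.0 / (1.0 + np.exp(logp0 - logp1))).astype(int)
        # 2) component means (normal posteriors under a flat prior)
        for k in (0, 1):
            nk = (z == k).sum()
            ybar = y[z == k].mean() if nk > 0 else y.mean()
            mu[k] = rng.normal(ybar, np.sqrt(s2 / max(nk, 1)))
        # 3) shared residual variance (inverse-gamma posterior)
        resid = y - mu[z]
        s2 = 1.0 / rng.gamma(0.5 * n + 1.0, 1.0 / (0.5 * resid @ resid + 1e-6))
        # 4) mixing proportion Pm (beta posterior)
        pm = rng.beta(1 + z.sum(), 1 + n - z.sum())
        if it >= n_iter // 2:
            draws.append((mu.copy(), s2, pm))
    return draws

# synthetic somatic-cell-score-like data: 70% "healthy", 30% "diseased"
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(2.5, 1.0, 700), rng.normal(5.5, 1.0, 300)])
draws = gibbs_two_normal(y)
print(np.mean([d[2] for d in draws]))   # posterior mean of Pm, near 0.3
```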
Love, Tanzy Mt; Thurston, Sally W; Davidson, Philip W
2017-04-01
The Seychelles Child Development Study is a research project with the objective of examining associations between prenatal exposure to low doses of methylmercury from maternal fish consumption and children's developmental outcomes. Whether methylmercury has neurotoxic effects at low doses remains unclear and recommendations for pregnant women and children to reduce fish intake may prevent a substantial number of people from receiving sufficient nutrients that are abundant in fish. The primary findings of the Seychelles Child Development Study are inconsistent with adverse associations between methylmercury from fish consumption and neurodevelopmental outcomes. However, whether there are subpopulations of children who are particularly sensitive to this diet is an open question. Secondary analysis from this study found significant interactions between prenatal methylmercury levels and both caregiver IQ and income on 19-month IQ. These results are sensitive to the categories chosen for these covariates and are difficult to interpret collectively. In this paper, we estimate effect modification of the association between prenatal methylmercury exposure and 19-month IQ using a general formulation of mixture regression. Our mixture regression model creates a latent categorical group membership variable which interacts with methylmercury in predicting the outcome. We also fit the same outcome model when in addition the latent variable is assumed to be a parametric function of three distinct socioeconomic measures. Bayesian methods allow group membership and the regression coefficients to be estimated simultaneously and our approach yields a principled choice of the number of distinct subpopulations. The results show three groups with different response patterns between prenatal methylmercury exposure and 19-month IQ in this population.
Denlinger, R.P.; Iverson, R.M.
2001-01-01
Numerical solutions of the equations describing flow of variably fluidized Coulomb mixtures predict key features of dry granular avalanches and water-saturated debris flows measured in physical experiments. These features include time-dependent speeds, depths, and widths of flows as well as the geometry of resulting deposits. Three-dimensional (3-D) boundary surfaces strongly influence flow dynamics because transverse shearing and cross-stream momentum transport occur where topography obstructs or redirects motion. Consequent energy dissipation can cause local deceleration and deposition, even on steep slopes. Velocities of surge fronts and other discontinuities that develop as flows cross 3-D terrain are predicted accurately by using a Riemann solution algorithm. The algorithm employs a gravity wave speed that accounts for different intensities of lateral stress transfer in regions of extending and compressing flow and in regions with different degrees of fluidization. Field observations and experiments indicate that flows in which fluid plays a significant role typically have high-friction margins with weaker interiors partly fluidized by pore pressure. Interaction of the strong perimeter and weak interior produces relatively steep-sided, flat-topped deposits. To simulate these effects, we compute pore pressure distributions using an advection-diffusion model with enhanced diffusivity near flow margins. Although challenges remain in evaluating pore pressure distributions in diverse geophysical flows, Riemann solutions of the depth-averaged 3-D Coulomb mixture equations provide a powerful tool for interpreting and predicting flow behavior. They provide a means of modeling debris flows, rock avalanches, pyroclastic flows, and related phenomena without invoking and calibrating rheological parameters that have questionable physical significance.
Rafal Podlaski; Francis Roesch
2014-01-01
In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...
A general mixture model and its application to coastal sandbar migration simulation
NASA Astrophysics Data System (ADS)
Liang, Lixin; Yu, Xiping
2017-04-01
A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles separately: a modified k-ɛ model is used to describe the fluid turbulence, while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is used for the numerical solution. The model is validated against suspended sediment motion in steady open channel flows, in both equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture all agree well with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones in a vertical 2D framework, coupled with a VOF method for the description of the water-air free surface and a topography change model. The bed-load transport rate and suspended-load entrainment rate are both determined by the sea bed shear stress, which is obtained from the boundary-layer-resolving mixture model. The simulation results indicate that, under small-amplitude regular waves, erosion occurred on the sandbar slope facing against the wave propagation direction, while deposition dominated on the slope facing towards wave propagation, indicating an onshore migration tendency. The results also show that the suspended load makes a large contribution to the topography change in the surf zone, a contribution that has usually been neglected in previous studies.
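For orientation, a commonly used drift-flux form of the mixture equations referred to above is sketched below, with u_sm the slip velocity of the sediment relative to the mixture; the exact closure terms in the paper may differ.

\[
\frac{\partial \rho_m}{\partial t} + \nabla\cdot(\rho_m \mathbf{u}_m) = 0,
\qquad
\frac{\partial \alpha_s}{\partial t} + \nabla\cdot(\alpha_s \mathbf{u}_m)
 = -\,\nabla\cdot(\alpha_s \mathbf{u}_{sm}) + \nabla\cdot\left(\nu_t\,\nabla\alpha_s\right),
\]

where \(\rho_m = \alpha_s \rho_s + (1-\alpha_s)\rho_f\) is the mixture density, \(\alpha_s\) the sediment volume fraction, and \(\nu_t\) a turbulent diffusivity.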
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio, E-mail: drpqam@cid.csic.es
2013-06-01
Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are exposed daily to mixtures of chemicals disrupting thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised great concern about potential additive or synergistic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals that impair thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures. The exception was the mixture of MMI (TPO inhibitor) and KClO₄ (NIS inhibitor) dosed at a fixed ratio of EC₁₀, for which the CA and RA predictions were similar and no conclusive result could be obtained. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergistic or additive effects of chemical mixtures on thyroid function. • Zebrafish as an alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to better predict the effect of mixtures of goitrogens.
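The two additivity concepts can be made concrete with a short sketch; the Hill dose-response form, the EC50s and the slopes are hypothetical, and the CA effect level is found by locating the effect x at which the toxic units sum to one.

```python
import numpy as np

def hill(c, ec50, slope):
    """Fractional effect (0-1) of a single chemical, Hill dose-response."""
    return c ** slope / (c ** slope + ec50 ** slope)

def predict_ia(concs, ec50s, slopes):
    """Response (independent) addition: E = 1 - prod_i (1 - E_i)."""
    return 1.0 - np.prod([1.0 - hill(c, e, s)
                          for c, e, s in zip(concs, ec50s, slopes)])

def predict_ca(concs, ec50s, slopes):
    """Concentration addition: the effect x at which toxic units sum to 1,
    using ECx_i = EC50_i * (x / (1 - x)) ** (1 / slope_i)."""
    for x in np.linspace(1e-4, 0.999, 5000):
        ecx = [e * (x / (1.0 - x)) ** (1.0 / s) for e, s in zip(ec50s, slopes)]
        if sum(c / v for c, v in zip(concs, ecx)) <= 1.0:
            return x
    return 0.999

# hypothetical binary mixture of a TPO inhibitor and a NIS inhibitor
print(predict_ca([1.0, 2.0], [2.0, 5.0], [1.5, 2.0]))
print(predict_ia([1.0, 2.0], [2.0, 5.0], [1.5, 2.0]))
```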
NASA Astrophysics Data System (ADS)
Chen, Yi; Ma, Yong; Lu, Zheng; Peng, Bei; Chen, Qin
2011-08-01
In the field of anti-illicit drug applications, many suspicious mixture samples may consist of several drug components—for example, a mixture of methamphetamine, heroin, and amoxicillin—which makes spectral identification very difficult. A terahertz spectroscopic quantitative analysis method using an adaptive-range micro-genetic algorithm with a variable internal population (ARVIPɛμGA) is proposed. Five mixture cases are discussed using ARVIPɛμGA-driven quantitative terahertz spectroscopic analysis in this paper. The simulation results agree with previous experimental results and with results obtained using other experimental and numerical techniques, suggesting that the proposed technique has potential applications for terahertz spectral identification of drug mixture components.
Molenaar, Dylan; de Boeck, Paul
2018-06-01
In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.
Patino, Reynaldo; VanLandeghem, Matthew M.; Goodbred, Steven L.; Orsak, Erik; Jenkins, Jill A.; Echols, Kathy R.; Rosen, Michael R.; Torres, Leticia
2015-01-01
Adult male Common Carp were sampled in 2007/08 over a full reproductive cycle at Lake Mead National Recreation Area. Sites sampled included a stream dominated by treated wastewater effluent, a lake basin receiving the streamflow, an upstream lake basin (reference), and a site below Hoover Dam. Individual body burdens for 252 contaminants were measured, and biological variables assessed included physiological [plasma vitellogenin (VTG), estradiol-17β (E2), 11-ketotestosterone (11KT)] and organ [gonadosomatic index (GSI)] endpoints. Patterns in contaminant composition and biological condition were determined by Principal Component Analysis, and their associations modeled by Principal Component Regression. Three spatially distinct but temporally stable gradients of contaminant distribution were recognized: a contaminant mixture typical of wastewaters (PBDEs, methyl triclosan, galaxolide), PCBs, and DDTs. Two spatiotemporally variable patterns of biological condition were recognized: a primary pattern consisting of reproductive condition variables (11KT, E2, GSI), and a secondary pattern including general condition traits (condition factor, hematocrit, fork length). VTG was low in all fish, indicating low estrogenic activity of water at all sites. Wastewater contaminants associated negatively with GSI, 11KT and E2; PCBs associated negatively with GSI and 11KT; and DDTs associated positively with GSI and 11KT. Regression of GSI on sex steroids revealed a novel, nonlinear association between these variables. Inclusion of sex steroids in the GSI regression on contaminants rendered wastewater contaminants nonsignificant in the model and reduced the influence of PCBs and DDTs. Thus, the influence of contaminants on GSI may have been partially driven by organismal modes-of-action that include changes in sex steroid production. The positive association of DDTs with 11KT and GSI suggests that lifetime, sub-lethal exposures to DDTs have effects on male carp opposite of those reported by studies where exposure concentrations were relatively high. Lastly, this study highlighted advantages of multivariate/multiple regression approaches for exploring associations between complex contaminant mixtures and gradients and reproductive condition in wild fishes.
Munro, Sarah A; Lund, Steven P; Pine, P Scott; Binder, Hans; Clevert, Djork-Arné; Conesa, Ana; Dopazo, Joaquin; Fasold, Mario; Hochreiter, Sepp; Hong, Huixiao; Jafari, Nadereh; Kreil, David P; Łabaj, Paweł P; Li, Sheng; Liao, Yang; Lin, Simon M; Meehan, Joseph; Mason, Christopher E; Santoyo-Lopez, Javier; Setterquist, Robert A; Shi, Leming; Shi, Wei; Smyth, Gordon K; Stralis-Pavese, Nancy; Su, Zhenqiang; Tong, Weida; Wang, Charles; Wang, Jian; Xu, Joshua; Ye, Zhan; Yang, Yong; Yu, Ying; Salit, Marc
2014-09-25
There is a critical need for standard approaches to assess, report and compare the technical performance of genome-scale differential gene expression experiments. Here we assess technical performance with a proposed standard 'dashboard' of metrics derived from analysis of external spike-in RNA control ratio mixtures. These control ratio mixtures with defined abundance ratios enable assessment of diagnostic performance of differentially expressed transcript lists, limit of detection of ratio (LODR) estimates and expression ratio variability and measurement bias. The performance metrics suite is applicable to analysis of a typical experiment, and here we also apply these metrics to evaluate technical performance among laboratories. An interlaboratory study using identical samples shared among 12 laboratories with three different measurement processes demonstrates generally consistent diagnostic power across 11 laboratories. Ratio measurement variability and bias are also comparable among laboratories for the same measurement process. We observe different biases for measurement processes using different mRNA-enrichment protocols.
NASA Astrophysics Data System (ADS)
Harris, Jennifer; Grindrod, Peter
2017-04-01
At present, martian meteorites represent the only samples of Mars available for study in terrestrial laboratories. However, these samples have never been definitively tied to source locations on Mars, meaning that the fundamental geological context is missing. The goal of this work is to link the bulk mineralogical analyses of martian meteorites to the surface geology of Mars through spectral mixture analysis of hyperspectral imagery. Hapke radiative transfer modelling has been shown to provide accurate (within 5 - 10% absolute error) mineral abundance values from laboratory-derived hyperspectral measurements of binary [1] and ternary [2] mixtures of plagioclase, pyroxene and olivine. These three minerals form the vast bulk of the SNC meteorites [3] and the bedrock of the Amazonian provinces on Mars that are inferred to be the source regions for these meteorites based on isotopic ages. Spectral unmixing through the Hapke model could be used to quantitatively analyse the martian surface and pinpoint the exact craters from which the SNC meteorites originated. However, the Hapke model is complex, with numerous variables, many of which are determinable under laboratory conditions but not from remote measurements of a planetary surface. Using binary and ternary spectral mixtures and martian meteorite spectra from the RELAB spectral library, the accuracy of Hapke abundance estimation is investigated in the face of increasing constraints and simplifications to simulate CRISM data. Constraints and simplifications include reduced spectral resolution, additional noise, unknown endmembers and unknown particle physical characteristics. CRISM operates at two spectral resolutions: the Full Resolution Targeted (FRT) mode, with which it has imaged approximately 2% of the martian surface, and the lower-resolution MultiSpectral Survey mode (MSP), with which it has covered the vast majority of the surface. On resampling the RELAB spectral mixtures to these two wavelength ranges, it was found that the Hapke abundance results at the lower spectral resolution were just as accurate (within 7% absolute error) as at the higher resolution. Further results, taking into account additional noise from both instrument and atmospheric sources, the potential presence of minor amounts of accessory minerals, and the selection of appropriate spectral endmembers where the exact endmembers present are unknown, will be presented. References [1] Mustard, J. F., Pieters, C. M., Quantitative abundance estimates from bidirectional reflectance measurements, Journal of Geophysical Research, Vol. 92, B4, E617-E626, 1987. [2] Li, S., Milliken, R. E., Estimating the modal mineralogy of eucrite and diogenite meteorites using visible-near infrared reflectance spectroscopy, Meteoritics and Planetary Science, Vol. 50, 11, 1821-1850, 2015. [3] Hutchinson, R., Meteorites: A petrologic, chemical and isotopic synthesis, Cambridge University Press, 2004.
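Once reflectance spectra have been converted to single-scattering albedo via the Hapke model, mixing becomes approximately linear, so the abundance-retrieval step can be sketched with a simple non-negative least-squares inversion; the endmember matrix below is random stand-in data, not RELAB spectra.

```python
import numpy as np
from scipy.optimize import nnls

# endmember "spectra" (rows: wavelengths; cols: plagioclase, pyroxene, olivine)
rng = np.random.default_rng(4)
E = np.abs(rng.normal(size=(50, 3))) + 0.5
true_f = np.array([0.6, 0.3, 0.1])
mixed = E @ true_f + rng.normal(scale=0.005, size=50)   # noisy mixed spectrum

f, resid = nnls(E, mixed)        # non-negative abundance estimates
f = f / f.sum()                  # renormalize to fractional abundances
print(f)                         # close to [0.6, 0.3, 0.1]
```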
DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.
Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei
2018-01-01
Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifiers (UMI). Despite these technological advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. In particular, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available on www.pitt.edu/∼wec47/singlecell.html. wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
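As a simplified stand-in for the approach (a multinomial mixture fitted by EM, without the Dirichlet prior that DIMM-SC places on the per-cluster gene proportions), the following sketch clusters UMI count vectors:

```python
import numpy as np

def multinomial_mixture_em(X, k, n_iter=100, seed=0):
    """EM for a multinomial mixture over UMI count vectors X (cells x genes)."""
    rng = np.random.default_rng(seed)
    n, g = X.shape
    pi = np.full(k, 1.0 / k)
    theta = rng.dirichlet(np.ones(g), size=k)       # per-cluster gene proportions
    for _ in range(n_iter):
        # E-step: log responsibilities (multinomial log-likelihood up to a constant)
        logr = np.log(pi) + X @ np.log(theta.T)
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and gene proportions
        pi = r.mean(axis=0)
        theta = (r.T @ X) + 1e-8                    # pseudo-count keeps log finite
        theta /= theta.sum(axis=1, keepdims=True)
    return r.argmax(axis=1), pi, theta

# toy data: two cell populations with reversed gene-expression profiles
rng = np.random.default_rng(5)
p1 = np.ones(20); p1[:10] = 4.0; p1 /= p1.sum()
p2 = p1[::-1].copy()
a = rng.multinomial(500, p1, size=100)
b = rng.multinomial(500, p2, size=100)
labels, pi, theta = multinomial_mixture_em(np.vstack([a, b]), k=2)
print(np.bincount(labels))          # two clusters of roughly 100 cells each
```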
To kill a kangaroo: understanding the decision to pursue high-risk/high-gain resources.
Jones, James Holland; Bird, Rebecca Bliege; Bird, Douglas W
2013-09-22
In this paper, we attempt to understand hunter-gatherer foraging decisions about prey that vary in both the mean and variance of energy return using an expected utility framework. We show that for skewed distributions of energetic returns, the standard linear variance discounting (LVD) model for risk-sensitive foraging can produce quite misleading results. In addition to creating difficulties for the LVD model, the skewed distributions characteristic of hunting returns create challenges for estimating probability distribution functions required for expected utility. We present a solution using a two-component finite mixture model for foraging returns. We then use detailed foraging returns data based on focal follows of individual hunters in Western Australia hunting for high-risk/high-gain (hill kangaroo) and relatively low-risk/low-gain (sand monitor) prey. Using probability densities for the two resources estimated from the mixture models, combined with theoretically sensible utility curves characterized by diminishing marginal utility for the highest returns, we find that the expected utility of the sand monitors greatly exceeds that of kangaroos despite the fact that the mean energy return for kangaroos is nearly twice as large as that for sand monitors. We conclude that the decision to hunt hill kangaroos does not arise simply as part of an energetic utility-maximization strategy and that additional social, political or symbolic benefits must accrue to hunters of this highly variable prey.
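The core argument can be reproduced numerically: with strongly skewed returns and a concave utility, expected utility can rank the low-risk prey above the high-mean prey. All distributions, the utility scale and the LVD discount constant below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# skewed, high-risk/high-gain returns: usually zero, occasionally very large
kangaroo = np.where(rng.random(n) < 0.05, rng.gamma(4.0, 15000.0, n), 0.0)
# low-risk/low-gain returns: modest but reliable
monitor = rng.gamma(4.0, 400.0, n)

def utility(x, scale=2000.0):
    """Concave utility: diminishing marginal utility of large returns."""
    return 1.0 - np.exp(-x / scale)

for name, r in [("kangaroo", kangaroo), ("monitor", monitor)]:
    lvd = r.mean() - 1e-4 * r.var()   # linear variance discounting, arbitrary k
    print(name, "mean:", r.mean(), "EU:", utility(r).mean(), "LVD:", lvd)
# despite the kangaroo's roughly twofold mean return, its expected utility
# is far lower, mirroring the paper's conclusion
```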
Fast Algorithms for Estimating Mixture Parameters
1989-08-30
The investigation is a two-year project, with the first year sponsored by the Army Research Office and the second year by the National Science Foundation. Numerical testing of the accelerated fixed-point method was completed; the work on relaxation methods will be done under the sponsorship of the National Science Foundation during the coming year. Keywords: fast algorithms; mixture distributions; random variables. (KR)
Honey-Based Mixtures Used in Home Medicine by Nonindigenous Population of Misiones, Argentina
Kujawska, Monika; Zamudio, Fernando; Hilgert, Norma I.
2012-01-01
Honey-based mixtures used in home medicine by the nonindigenous population of Misiones, Argentina. Medicinal mixtures are an underinvestigated issue in the ethnomedical literature concerning Misiones, one of the most bioculturally diverse provinces of Argentina. The new culturally sensitive policy of the Provincial Health System is a response to cultural practices based on the medicinal use of plant and animal products in the home medicine of the local population. Honey-based medicinal formulas were investigated through interviews with 39 farmers of mixed cultural (Criollos) and Polish origins in northern Misiones. Fifty plant species and 8 animal products are employed in honey-based medicines. Plants are the most dominant and variable elements of the mixtures. Most of the mixtures are food medicines. The role of honey in more than 90% of the formulas is perceived as therapeutic. The ecological distribution of taxa and the cultural aspects of the mixtures are discussed, particularly the European and American influences that have shaped the character of multispecies medicinal recipes. PMID:22315632
A stochastic evolutionary model generating a mixture of exponential distributions
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2016-02-01
Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
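A short sketch of the mixture-of-exponentials survival function and its recovery from data; the synthetic "lifetimes" and the two-component form are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def mix_exp_survival(t, w, lam1, lam2):
    """Two-component exponential mixture survival function."""
    return w * np.exp(-lam1 * t) + (1 - w) * np.exp(-lam2 * t)

# synthetic "query lifetime" data from a heterogeneous population
rng = np.random.default_rng(7)
life = np.where(rng.random(20000) < 0.7,
                rng.exponential(2.0, 20000), rng.exponential(30.0, 20000))
t = np.linspace(0, 100, 200)
s_emp = np.array([(life > ti).mean() for ti in t])   # empirical survival curve

popt, _ = curve_fit(mix_exp_survival, t, s_emp, p0=[0.5, 1.0, 0.1],
                    bounds=([0, 0, 0], [1, np.inf, np.inf]))
print(popt)   # roughly w = 0.7, lam1 = 0.5, lam2 = 1/30
```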
Structure-reactivity modeling using mixture-based representation of chemical reactions.
Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre
2017-09-01
We describe a novel approach to reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating product and reactant descriptors or taking the difference between the descriptors of products and reactants. This reaction representation does not require explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy is suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It is demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
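The mixture encoding can be sketched as follows; representing each mixture as the sum of its members' descriptor vectors is an assumption made for illustration (SiRMS defines its own mixture descriptors), as are the toy fragment counts.

```python
import numpy as np

def reaction_vector(reactant_descs, product_descs, mode="diff"):
    """Encode a reaction from per-molecule descriptor vectors.
    The mixture of reactants / mixture of products is represented here by
    the sum of the member descriptors; no reaction-center labeling needed."""
    r = np.sum(reactant_descs, axis=0)
    p = np.sum(product_descs, axis=0)
    if mode == "diff":
        return p - r                       # products minus reactants
    return np.concatenate([r, p])          # concatenated representation

# hypothetical 4-dimensional fragment-count descriptors
reactants = [np.array([2, 0, 1, 3]), np.array([0, 1, 0, 1])]
products = [np.array([1, 1, 1, 2]), np.array([1, 0, 0, 2])]
print(reaction_vector(reactants, products, "diff"))
print(reaction_vector(reactants, products, "concat"))
```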
The influence of mixed tree plantations on the nutrition of individual species: a review.
Richards, Anna E; Forrester, David I; Bauhus, Jürgen; Scherer-Lorenzen, Michael
2010-09-01
Productivity of tree plantations is a function of the supply, capture and efficiency of use of resources, as outlined in the Production Ecology Equation. Species interactions in mixed-species stands can influence each of these variables. The importance of resource-use efficiency in determining forest productivity has been clearly demonstrated in monocultures; however, substantial knowledge gaps remain for mixtures. This review examines how the physiology and morphology of a given species can vary depending on whether it grows in a mixture or monoculture. We outline how physiological and morphological shifts within species, resulting from interactions in mixtures, may influence the three variables of the Production Ecology Equation, with an emphasis on nutrient resources [nitrogen (N) and phosphorus (P)]. These include (i) resource availability, including soil nutrient mineralization, N₂ fixation and litter decomposition; (ii) proportion of resources captured, resulting from shifts in spatial, temporal and chemical patterns of root dynamics; (iii) resource-use efficiency. We found that more than 50% of mixed-species studies report a shift to greater above-ground nutrient content of species grown in mixtures compared to monocultures, indicating an increase in the proportion of resources captured from a site. Secondly, a meta-analysis showed that foliar N concentrations significantly increased for a given species in a mixture containing N₂-fixing species, compared to a monoculture, suggesting higher rates of photosynthesis and greater resource-use efficiency. Significant shifts in N- and P-use efficiencies of a given species, when grown in a mixture compared to a monoculture, occurred in over 65% of studies where resource-use efficiency could be calculated. Such shifts can result from changes in canopy photosynthetic capacities, changes in carbon allocation or changes to foliar nutrient residence times of species in a mixture. We recommend that future research focus on individual species' changes, particularly with respect to resource-use efficiency (including nutrients, water and light), when trees are grown in mixtures compared to monocultures. A better understanding of processes responsible for changes to tree productivity in mixed-species tree plantations can improve species, and within-species, selection so that the long-term outcome of mixtures is more predictable.
Utilization of Variable Consumption Biofuel in Diesel Engine
NASA Astrophysics Data System (ADS)
Markov, V. A.; Kamaltdinov, V. G.; Savastenko, A. A.
2018-01-01
The depletion of oil fields and the deteriorating environmental situation lead to the need to search for new alternative sources of energy, and greater use of alternative fuels in internal combustion engines is necessary. The advantages of using fuels of vegetable origin as engine fuels are shown. Diesel engine operation on mixtures of petroleum diesel fuel and rapeseed oil is investigated. A delivery system for mixture biofuel with a control system for the fuel composition is considered. Experimental results for this mixture-biofuel delivery system are presented.
Advertising and Irreversible Opinion Spreading in Complex Social Networks
NASA Astrophysics Data System (ADS)
Candia, Julián
Irreversible opinion spreading phenomena are studied on small-world and scale-free networks by means of the magnetic Eden model, a nonequilibrium kinetic model for the growth of binary mixtures in contact with a thermal bath. In this model, the opinion of an individual is affected by those of their acquaintances, but opinion changes (analogous to spin flips in an Ising-like model) are not allowed. We focus on the influence of advertising, which is represented by external magnetic fields. The interplay and competition between temperature and fields lead to order-disorder transitions, which are found to also depend on the link density and the topology of the complex network substrate. The effects of advertising campaigns with variable duration, as well as the most cost-effective strategies to achieve consensus within different scenarios, are also discussed.
An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models
ERIC Educational Resources Information Center
Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol
2016-01-01
The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…
ERIC Educational Resources Information Center
Liu, Junhui
2012-01-01
The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…
Effects of three veterinary antibiotics and their binary mixtures on two green alga species.
Carusso, S; Juárez, A B; Moretton, J; Magdaleno, A
2018-03-01
The individual and combined toxicities of chlortetracycline (CTC), oxytetracycline (OTC) and enrofloxacin (ENF) were examined in two green algae representative of the freshwater environment: the international standard strain Pseudokirchneriella subcapitata and the native strain Ankistrodesmus fusiformis. The toxicities of the three antibiotics and their mixtures were similar in both strains, although low concentrations of ENF and CTC + ENF were more toxic to A. fusiformis than to the standard strain. The toxicological interactions of the binary mixtures were predicted using the two classical models of additivity, Concentration Addition (CA) and Independent Action (IA), and compared to the experimentally determined toxicities over a range of concentrations between 0.1 and 10 mg L⁻¹. The CA model predicted the inhibition of algal growth by the three mixtures in P. subcapitata, and by the CTC + OTC and CTC + ENF mixtures in A. fusiformis. However, this model underestimated the experimental results obtained for the OTC + ENF mixture in A. fusiformis. The IA model did not predict the experimental toxicological effects of the three mixtures in either strain. The sum of toxic units (TU) was calculated for each mixture. According to these values, the binary mixtures CTC + ENF and OTC + ENF showed an additive effect and the CTC + OTC mixture showed antagonism in P. subcapitata, whereas all three mixtures showed synergistic effects in A. fusiformis. Although A. fusiformis was isolated from a polluted river, it showed a sensitivity similar to that of P. subcapitata when exposed to binary mixtures of antibiotics. Copyright © 2017 Elsevier Ltd. All rights reserved.
Numerical Analysis of an Impinging Jet Reactor for the CVD and Gas-Phase Nucleation of Titania
NASA Technical Reports Server (NTRS)
Gokoglu, Suleyman A.; Stewart, Gregory D.; Collins, Joshua; Rosner, Daniel E.
1994-01-01
We model a cold-wall, atmospheric-pressure impinging jet reactor to study the CVD and gas-phase nucleation of TiO2 from a dilute titanium tetra-iso-propoxide (TTIP)/oxygen source gas mixture in nitrogen. The mathematical model uses the computational code FIDAP and complements our recent asymptotic theory for high-activation-energy gas-phase reactions in thin chemically reacting sublayers. The numerical predictions highlight deviations from ideality in various regions inside the experimental reactor. Model predictions of deposition rates and the onset of gas-phase nucleation compare favorably with experiments. Although variable-property effects on deposition rates are not significant (approximately 11 percent at 1000 K), the reduction in deposition rates due to Soret transport is substantial (approximately 75 percent at 1000 K).
NASA Astrophysics Data System (ADS)
Gasparini, N. M.; Bras, R. L.; Tucker, G. E.
2003-04-01
An alluvial channel's slope and bed texture are intimately linked. Along with fluvial discharge, these variables are the key players in setting alluvial transport rates. We know that both channel slope and mean grain size usually decrease downstream, but how sensitive are these variables to tectonic changes? Are basin concavity and downstream fining drastically disrupted during transitions from one tectonic regime to another? We explore these questions using the CHILD numerical landscape evolution model to generate alluvial networks composed of a sand and gravel mixture. The steady-state and transient patterns of both channel slope and sediment texture are investigated. The steady-state patterns in slope and sediment texture are verified independently by solving the erosion equations under equilibrium conditions, i.e. the case when the erosion rate is equal to the uplift rate across the entire landscape. The inclusion of surface texture as a free parameter (as opposed to just channel slope) leads to some surprising results. In all cases, an increase in uplift rate results in channel beds which are finer at equilibrium (for a given drainage area). Higher uplift rates imply larger equilibrium transport rates; this leads to finer channels that have a smaller critical shear stress to entrain material, and therefore more material can be transported for a given discharge (and channel slope). Changes in equilibrium slopes are less intuitive. An increase in uplift rates can cause channel slopes to increase, remain the same, or decrease, depending on model parameter values. In the surprising case in which equilibrium channel slopes decrease with increasing uplift rates, we suggest that surface texture changes more than compensate for the required increase in transport rates, causing channel slopes to decrease. These results highlight the important role of sediment grain size in determining transport rates and caution us against ignoring this important variable in fluvial networks.
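The texture feedback invoked here can be illustrated with a standard bedload formula: lowering the grain size lowers the critical Shields stress, so the same shear stress moves far more sediment. The Meyer-Peter and Mueller form below is a generic choice, not necessarily the transport law used in CHILD.

```python
import numpy as np

def mpm_transport(tau, d, rho_s=2650.0, rho=1000.0, g=9.81):
    """Meyer-Peter & Mueller bedload transport rate per unit width (m^2/s)."""
    r = rho_s / rho - 1.0
    tau_star = tau / ((rho_s - rho) * g * d)        # Shields stress for grain size d
    qb_star = 8.0 * max(tau_star - 0.047, 0.0) ** 1.5
    return qb_star * np.sqrt(r * g * d ** 3)

# same shear stress, two bed textures: the finer bed transports far more sediment
for d in (0.002, 0.02):                             # 2 mm sand-gravel vs 2 cm gravel
    print(d, mpm_transport(tau=15.0, d=d))
```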
Creissen, Henry E.; Jorgensen, Tove H.; Brown, James K.M.
2016-01-01
Crop variety mixtures have the potential to increase yield stability in highly variable and unpredictable environments, yet knowledge of the specific mechanisms underlying enhanced yield stability has been limited. Ecological processes in genetically diverse crops were investigated by conducting field trials with winter barley varieties (Hordeum vulgare), grown as monocultures or as three-way mixtures in fungicide treated and untreated plots at three sites. Mixtures achieved yields comparable to the best performing monocultures whilst enhancing yield stability despite being subject to multiple predicted and unpredicted abiotic and biotic stresses including brown rust (Puccinia hordei) and lodging. There was compensation through competitive release because the most competitive variety overyielded in mixtures thereby compensating for less competitive varieties. Facilitation was also identified as an important ecological process within mixtures by reducing lodging. This study indicates that crop varietal mixtures have the capacity to stabilise productivity even when environmental conditions and stresses are not predicted in advance. Varietal mixtures provide a means of increasing crop genetic diversity without the need for extensive breeding efforts. They may confer enhanced resilience to environmental stresses and thus be a desirable component of future cropping systems for sustainable arable farming. PMID:27375312
Quality improvement of melt extruded laminar systems using mixture design.
Hasa, D; Perissutti, B; Campisi, B; Grassi, M; Grabnar, I; Golob, S; Mian, M; Voinovich, D
2015-07-30
This study investigates the application of melt extrusion for the development of an oral retard formulation with a precise drug release over time. Since adjusting the formulation appears to be of the utmost importance in achieving the desired drug release patterns, different formulations of laminar extrudates were prepared according to the principles of experimental design, using a design for mixtures to assess the influence of formulation composition on the in vitro drug release from the extrudates after 1 h and after 8 h. The effect of each component on the two response variables was also studied. Ternary mixtures of theophylline (model drug), lactose monohydrate and microcrystalline wax (as thermoplastic binder) were extruded in a lab-scale vertical ram extruder in the absence of solvents, at a temperature below the melting point of the binder (so that the crystalline state of the drug could be maintained), through a rectangular die to obtain suitable laminar systems. Using the desirability approach together with a reliability study to ensure formulation quality, a very restricted optimal zone was defined within the experimental domain. Among the mixture components, the variation of the microcrystalline wax content had the most significant overall influence on the in vitro drug release. The formulation theophylline:lactose:wax, 57:14:29 (by weight), selected on the basis of the desirability zone, was subsequently used for in vivo studies. The plasma profile, obtained after oral administration of the laminar extruded system in hard gelatine capsules, revealed the typical trend of an oral retard formulation. The application of the mixture experimental design associated with a desirability function made it possible to optimize the extruded system and to determine the composition space that ensures final product quality. Copyright © 2015 Elsevier B.V. All rights reserved.
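A sketch of the desirability calculation that underlies such optimization; the Derringer-Suits linear form, the release targets and the equal weighting are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def desirability_smaller(y, target, upper):
    """Derringer-Suits desirability for a 'smaller is better' response."""
    return np.clip((upper - y) / (upper - target), 0.0, 1.0)

def desirability_larger(y, lower, target):
    """Desirability for a 'larger is better' response."""
    return np.clip((y - lower) / (target - lower), 0.0, 1.0)

# hypothetical targets: limited release at 1 h, near-complete release at 8 h
release_1h, release_8h = 18.0, 85.0          # percent released (illustrative)
d1 = desirability_smaller(release_1h, target=15.0, upper=30.0)
d8 = desirability_larger(release_8h, lower=70.0, target=90.0)
print((d1 * d8) ** 0.5)                      # overall desirability, geometric mean
```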
Tardif, Antoine; Shipley, Bill; Bloor, Juliette M. G.; Soussana, Jean-François
2014-01-01
Background and Aims The biomass-ratio hypothesis states that ecosystem properties are driven by the characteristics of dominant species in the community. In this study, the hypothesis was operationalized as community-weighted means (CWMs) of monoculture values and tested for predicting the decomposition of multispecies litter mixtures along an abiotic gradient in the field. Methods Decomposition rates (mg g−1 d−1) of litter from four herb species were measured using litter-bed experiments with the same soil at three sites in central France along a correlated climatic gradient of temperature and precipitation. All possible combinations from one to four species mixtures were tested over 28 weeks of incubation. Observed mixture decomposition rates were compared with those predicted by the biomass-ratio hypothesis. Variability of the prediction errors was compared with the species richness of the mixtures, across sites, and within sites over time. Key Results Both positive and negative prediction errors occurred. Despite this, the biomass-ratio hypothesis was true as an average claim for all sites (r = 0·91) and for each site separately, except for the climatically intermediate site, which showed mainly synergistic deviations. Variability decreased with increasing species richness and in less favourable climatic conditions for decomposition. Conclusions Community-weighted mean values provided good predictions of mixed-species litter decomposition, converging to the predicted values with increasing species richness and in climates less favourable to decomposition. Under a context of climate change, abiotic variability would be important to take into account when predicting ecosystem processes. PMID:24482152
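Operationally, the biomass-ratio prediction is just a community-weighted mean of the monoculture rates; a two-line sketch with hypothetical numbers:

```python
import numpy as np

# monoculture decomposition rates (mg g^-1 d^-1) and mixture biomass fractions
mono_rates = np.array([4.2, 3.1, 2.5, 1.8])      # hypothetical values for 4 herbs
fractions = np.array([0.40, 0.30, 0.20, 0.10])   # proportions in the litter mixture

cwm_prediction = fractions @ mono_rates           # biomass-ratio prediction
observed = 3.4                                    # hypothetical measured mixture rate
print(cwm_prediction, observed - cwm_prediction)  # positive error => synergy
```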
General Blending Models for Data From Mixture Experiments
Brown, L.; Donev, A. N.; Bissett, A. C.
2015-01-01
We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
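As an illustration of the Scheffé side of this unification, the quadratic Scheffé mixture model can be fit by least squares on the component proportions and their pairwise products, with no intercept because the proportions sum to one. A sketch on synthetic data; the coefficients and data are arbitrary:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Synthetic three-component mixture data: proportions sum to one
X = rng.dirichlet(alpha=[1, 1, 1], size=30)
y = 4*X[:, 0] + 2*X[:, 1] + 6*X[:, 2] + 5*X[:, 0]*X[:, 1] + rng.normal(0, 0.1, 30)

# Scheffé quadratic model: y = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j (no intercept)
pairs = list(combinations(range(X.shape[1]), 2))
F = np.column_stack([X] + [X[:, i]*X[:, j] for i, j in pairs])
coef, *_ = np.linalg.lstsq(F, y, rcond=None)
print("linear terms:", coef[:3], "blending terms:", coef[3:])
```

The Becker models replace the product blending terms with alternative nonlinear blending functions; the class proposed in the article nests both forms.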
Lo, Kenneth
2011-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
Lo, Kenneth; Gottardo, Raphael
2012-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
Measuring Variable Refractive Indices Using Digital Photos
ERIC Educational Resources Information Center
Lombardi, S.; Monroy, G.; Testa, I.; Sassi, E.
2010-01-01
A new procedure for performing quantitative measurements in teaching optics is presented. Application of the procedure to accurately measure the rate of change of the variable refractive index of a water-denatured alcohol mixture is described. The procedure can also be usefully exploited for measuring the constant refractive index of distilled…
Multi-stage continuous (chemostat) culture fermentation (MCCF) with variable fermentor volumes was carried out to study the utilization of glucose and xylose for ethanol production by means of mixed sugar fermentation (MSF). Variable fermentor volumes were used to enable enhanced sugar u...
Using factorial experimental design to evaluate the separation of plastics by froth flotation.
Salerno, Davide; Jordão, Helga; La Marca, Floriana; Carvalho, M Teresa
2018-03-01
This paper proposes the use of factorial experimental design as a standard experimental method in the application of froth flotation to plastic separation, instead of the commonly used OVAT method (manipulation of one variable at a time). Furthermore, as is common practice in minerals flotation, the parameters of the kinetic model were used as process responses rather than the recovery of plastics in the separation products. To explain and illustrate the proposed methodology, a set of 32 experimental tests was performed using mixtures of two polymers with approximately the same density, PVC and PS (with mineral charges), with particle size ranging from 2 to 4 mm. The manipulated variables were frother concentration, air flow rate and pH. A three-level full factorial design was conducted. Models establishing the relationships between the manipulated variables (and their interactions) and the responses (first-order kinetic model parameters) were built. The Corrected Akaike Information Criterion was used to select the best-fit model, and an analysis of variance (ANOVA) was conducted to identify the statistically significant terms of the model. It was shown that froth flotation can be used to efficiently separate PVC from PS with mineral charges by reducing the floatability of PVC, which largely depends on the action of pH. Within the tested interval, this is the factor that most affects the flotation rate constants. The results obtained show that the pure error may be of the same magnitude as the sum of squares of the errors, suggesting that there is significant variability within the same experimental conditions. Thus, special care is needed when evaluating and generalizing the process. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Oroza, C.; Zheng, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.
2016-12-01
We present a structured, analytical approach to optimize ground-sensor placements based on time-series remotely sensed (LiDAR) data and machine-learning algorithms. We focused on catchments within the Merced and Tuolumne river basins, covered by the JPL Airborne Snow Observatory LiDAR program. First, we used a Gaussian mixture model to identify representative sensor locations in the space of independent variables for each catchment. Multiple independent variables that govern the distribution of snow depth were used, including elevation, slope, and aspect. Second, we used a Gaussian process to estimate the areal distribution of snow depth from the initial set of measurements. This is a covariance-based model that also estimates the areal distribution of model uncertainty based on the independent variable weights and autocorrelation. The uncertainty raster was used to strategically add sensors to minimize model uncertainty. We assessed the temporal accuracy of the method using LiDAR-derived snow-depth rasters collected in water-year 2014. In each area, optimal sensor placements were determined using the first available snow raster for the year. The accuracy in the remaining LiDAR surveys was compared to 100 configurations of sensors selected at random. We found the accuracy of the model from the proposed placements to be higher and more consistent in each remaining survey than the average random configuration. We found that a relatively small number of sensors can be used to accurately reproduce the spatial patterns of snow depth across the basins, when placed using spatial snow data. Our approach also simplifies sensor placement. At present, field surveys are required to identify representative locations for such networks, a process that is labor intensive and provides limited guarantees on the networks' representation of catchment independent variables.
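A minimal sketch of the first step, assuming hypothetical terrain rasters and scikit-learn's GaussianMixture; the authors' actual feature set and preprocessing may differ:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical terrain attributes per pixel: elevation (m), slope (deg), aspect (deg)
terrain = np.column_stack([rng.uniform(1500, 3500, 5000),
                           rng.uniform(0, 45, 5000),
                           rng.uniform(0, 360, 5000)])

# Fit a GMM in standardized attribute space; one sensor per mixture component
z = StandardScaler().fit_transform(terrain)
gmm = GaussianMixture(n_components=10, random_state=0).fit(z)

# Pick, for each component, the real pixel closest to the component mean
idx = [np.argmin(np.linalg.norm(z - mu, axis=1)) for mu in gmm.means_]
print("representative pixel indices:", idx)
```

The second stage would fit a Gaussian process to snow-depth observations at these pixels and add sensors where the predictive variance is largest.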
Disease Mapping of Zero-excessive Mesothelioma Data in Flanders
Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel
2016-01-01
Purpose: To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods: The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in the literature. Results: The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model, such that the hurdle mixture model is not necessary. Conclusions: Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID:27908590
Disease mapping of zero-excessive mesothelioma data in Flanders.
Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S; Faes, Christel
2017-01-01
To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero inflation, and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion, and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in the literature. The results indicate that hurdle models with a random effects term accounting for extra variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model, such that the hurdle mixture model is not necessary. Models taking into account zero inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. Copyright © 2016 Elsevier Inc. All rights reserved.
Modeling Philippine Stock Exchange Composite Index Using Time Series Analysis
NASA Astrophysics Data System (ADS)
Gayo, W. S.; Urrutia, J. D.; Temple, J. M. F.; Sandoval, J. R. D.; Sanglay, J. E. A.
2015-06-01
This study was conducted to develop a time series model of the Philippine Stock Exchange Composite Index (PSEi) and its volatility using a finite mixture of ARIMA models with conditional variance equations such as the ARCH, GARCH, EGARCH, TARCH and PARCH models. The study also aimed to find out the reasons behind the behavior of PSEi, that is, which of the economic variables (Consumer Price Index, crude oil price, foreign exchange rate, gold price, interest rate, money supply, price-earnings ratio, Producers' Price Index and terms of trade) can be used in projecting future values of PSEi; this was examined using the Granger Causality Test. The findings showed that the best time series model for the Philippine Stock Exchange Composite Index is ARIMA(1,1,5)-ARCH(1). Consumer Price Index, crude oil price and foreign exchange rate were concluded to Granger-cause the Philippine Stock Exchange Composite Index.
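Models of this family can be fit, for example, with the Python arch package (assumed available); the sketch below fits an AR(1) mean with an ARCH(1) variance equation to synthetic returns, rather than the paper's ARIMA(1,1,5)-ARCH(1) specification on PSEi data:

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)
# Synthetic daily returns standing in for differenced log-index values
returns = rng.standard_normal(1000)

# AR(1) mean equation with an ARCH(1) conditional variance equation
am = arch_model(returns, mean='AR', lags=1, vol='ARCH', p=1)
res = am.fit(disp='off')
print(res.params)
print("AIC:", res.aic)  # compare candidate mean/variance specifications
```

Candidate specifications (GARCH, EGARCH, etc.) would be compared on information criteria in the same way.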
Mixed-up trees: the structure of phylogenetic mixtures.
Matsen, Frederick A; Mossel, Elchanan; Steel, Mike
2008-05-01
In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how "common" non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning "mixed branch repulsion" on trees larger than quartet trees under the CFN model.
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models
Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
NASA Astrophysics Data System (ADS)
Hendarto, Eko; Suwarno
2018-02-01
There are huge amounts of organic waste from traditional markets that may pollute the environment. In general, these wastes are utilized for making compost and liquid fertilizer for plants. The use of liquid fertilizer made from the organic wastes of traditional markets opens up opportunities for the cultivation of Setaria grass (Setaria splendida Stapf), which is required by ruminant farms. This research was conducted to evaluate the best water-to-fertilizer mixture in terms of its effectiveness on growth variables, using an experimental method with a Completely Randomized Design. The treatments were six doses of mixtures, namely 0, 10, 20, 30, 40 and 50 liters of water, each of which was mixed with 10 liters of liquid fertilizer. The variables measured were plant height, number of tillers, number of leaves, and canopy. The results of the study showed that the doses of water in the fertilizer did not produce any significant differences (P > 0.05) in any of the variables studied; however, the linear equation showed that greater proportions of water in the fertilizer tended to decrease the growth of Setaria grass. The suggested use of water in the liquid fertilizer mixture is therefore no more than 30 l of water per 10 l of fertilizer.
M3FT-16OR0203052-Test Design for FeCrAl Alloy Tube Irradiation in HFIR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terrani, Kurt A.; Petrie, Christian M.
2016-05-01
This calculation summarizes thermal analyses of a flexible rabbit design for irradiating a variety of pressurized water reactor (PWR) cladding materials (stainless steel, iron-chromium aluminum [FeCrAl], Zircaloy, and Inconel) with variable dimensions at a temperature of 350 °C in the flux trap of the High Flux Isotope Reactor (HFIR). The design can accommodate standard cladding for outer diameters (ODs) of approximately 9.50 mm with thickness ranging from 0.30 mm to 0.70 mm. The length is generally between 10 and 50 mm. The specimens contain moly inserts with a variable OD that provides the heat flux necessary to achieve the design temperature with such a small fixed gas gap. The primary outer containment is an Al-6061 housing with a slightly enlarged inner diameter (ID) of 9.60 mm. The specimen temperature is controlled by determining a helium/argon gas mixture specific to the as-built specimen and housing. Variables that affect the required gas mixture are the cladding material (thermal expansion, density, heat generation rate), cladding OD, housing ID, and cladding ID. This calculation documents the analyses performed to determine required gas mixtures for a variety of scenarios.
Investigation of Dalton and Amagat's laws for gas mixtures with shock propagation
NASA Astrophysics Data System (ADS)
Wayne, Patrick; Trueba Monje, Ignacio; Yoo, Jason H.; Truman, C. Randall; Vorobieff, Peter
2016-11-01
Two common models describing gas mixtures are Dalton's Law and Amagat's Law (also known as the laws of partial pressures and partial volumes, respectively). Our work is focused on determining the suitability of these models for predicting the effects of shock propagation through gas mixtures. Experiments are conducted at the Shock Tube Facility at the University of New Mexico (UNM). To validate experimental data, possible sources of uncertainty associated with the experimental setup are identified and analyzed. The gaseous mixture of interest consists of a prescribed combination of disparate gases: helium and sulfur hexafluoride (SF6). The equations of state (EOS) considered are the ideal gas EOS for helium, and a virial EOS for SF6. The property values provided by these EOS are then used to model shock propagation through the mixture in accordance with Dalton's and Amagat's laws. Results of the modeling are compared with experiment to determine which law produces better agreement for the mixture. This work is funded by NNSA Grant DE-NA0002913.
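For a strictly ideal gas the two laws coincide, so differences only appear with non-ideal equations of state. The sketch below contrasts the two bookkeeping schemes using van der Waals equations of state with approximate textbook constants; it is an illustration only, not the study's analysis (which uses an ideal-gas EOS for helium and a virial EOS for SF6):

```python
import numpy as np
from scipy.optimize import brentq

R = 0.083145  # L bar / (mol K)

# Approximate van der Waals constants (a in L^2 bar mol^-2, b in L mol^-1)
VDW = {'He': (0.0346, 0.0238), 'SF6': (7.857, 0.0879)}

def p_vdw(n, V, T, a, b):
    """van der Waals pressure of n moles occupying volume V at temperature T."""
    return n*R*T/(V - n*b) - a*(n/V)**2

def v_vdw(n, P, T, a, b):
    """Volume of n moles at pressure P (root of the vdW equation)."""
    return brentq(lambda V: p_vdw(n, V, T, a, b) - P, n*b*1.001, 1e4)

n = {'He': 0.5, 'SF6': 0.5}   # moles of each component
V, T = 10.0, 300.0            # total volume (L) and temperature (K)

# Dalton: each component fills the whole volume; total pressure is the sum
p_dalton = sum(p_vdw(n[g], V, T, *VDW[g]) for g in n)

# Amagat: each component feels the full pressure; partial volumes sum to V
p_amagat = brentq(lambda P: sum(v_vdw(n[g], P, T, *VDW[g]) for g in n) - V,
                  0.1, 100.0)

print(f"Dalton: {p_dalton:.3f} bar, Amagat: {p_amagat:.3f} bar")
```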
Lawson, Andrew B; Choi, Jungsoon; Cai, Bo; Hossain, Monir; Kirby, Russell S; Liu, Jihong
2012-09-01
We develop a new Bayesian two-stage space-time mixture model to investigate the effects of air pollution on asthma. The proposed two-stage mixture model allows for the identification of temporal latent structure as well as the estimation of the effects of covariates on health outcomes. In the paper, we also consider spatial misalignment of the exposure and health data. A simulation study is conducted to assess the performance of the two-stage mixture model. We apply our statistical framework to a county-level ambulatory care asthma data set in the US state of Georgia for the years 1999-2008.
Some comments on thermodynamic consistency for equilibrium mixture equations of state
Grove, John W.
2018-03-28
We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.
Etilé, Fabrice; Sharma, Anurag
2015-09-01
This study compares the impact of sugar-sweetened beverages (SSBs) tax between moderate and high consumers in Australia. The key methodological contribution is that price response heterogeneity is identified while controlling for censoring of consumption at zero and endogeneity of expenditure by using a finite mixture instrumental variable Tobit model. The SSB price elasticity estimates show a decreasing trend across increasing consumption quantiles, from -2.3 at the median to -0.2 at the 95th quantile. Although high consumers of SSBs have a less elastic demand for SSBs, their very high consumption levels imply that a tax would achieve higher reduction in consumption and higher health gains. Our results also suggest that an SSB tax would represent a small fiscal burden for consumers whatever their pre-policy level of consumption, and that an excise tax should be preferred to an ad valorem tax. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Stribling, Roscoe; Miller, Stanley L.
1987-01-01
Simulated prebiotic atmospheres containing either CH4, CO, or CO2, in addition to N2, H2O, and variable amounts of H2, were subjected to the spark from a high-frequency Tesla coil, and the energy yields for the syntheses of HCN and H2CO were estimated from periodic (every two days) measurements of the compound concentrations. The mixtures with CH4 were found to yield the highest amounts of HCN, whereas the CO mixtures produced the highest yields of H2CO. These results model atmospheric corona discharges. From the yearly energy yields calculated and the corona discharge available on the earth, the yearly production rate of HCN was estimated; using data on the HCN production rates and the experimental rates of decomposition of amino acids through the submarine vents, the steady state amino acid production rate in the primitive ocean was calculated to be about 10 nmoles/sq cm per year.
Retrieving Tract Variables From Acoustics: A Comparison of Different Machine Learning Strategies.
Mitra, Vikramjit; Nam, Hosung; Espy-Wilson, Carol Y; Saltzman, Elliot; Goldstein, Louis
2010-09-13
Many different studies have claimed that articulatory information can be used to improve the performance of automatic speech recognition systems. Unfortunately, such articulatory information is not readily available in typical speaker-listener situations. Consequently, such information has to be estimated from the acoustic signal in a process which is usually termed "speech-inversion." This study aims to propose and compare various machine learning strategies for speech inversion: Trajectory mixture density networks (TMDNs), feedforward artificial neural networks (FF-ANN), support vector regression (SVR), autoregressive artificial neural network (AR-ANN), and distal supervised learning (DSL). Further, using a database generated by the Haskins Laboratories speech production model, we test the claim that information regarding constrictions produced by the distinct organs of the vocal tract (vocal tract variables) is superior to flesh-point information (articulatory pellet trajectories) for the inversion process.
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-01-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-06-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.
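The LOD score described here is the base-10 logarithm of the likelihood ratio between a two-component normal mixture and a single normal fit. A self-contained sketch with a fixed 0.5 mixing weight (as in a backcross) and a simple EM loop; all data are simulated:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
# Hypothetical phenotypes at a putative QTL; mixing weight 0.5 as in a backcross
y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(0.8, 1.0, 100)])

def loglik_single(y):
    return norm.logpdf(y, y.mean(), y.std()).sum()

def loglik_mixture(y, w=0.5, n_iter=200):
    """EM for a two-component normal mixture with equal variances."""
    mu1, mu2, sd = y.min(), y.max(), y.std()
    for _ in range(n_iter):
        p1 = w * norm.pdf(y, mu1, sd)
        p2 = (1 - w) * norm.pdf(y, mu2, sd)
        r = p1 / (p1 + p2)                      # E-step: responsibilities
        mu1 = (r * y).sum() / r.sum()           # M-step: component means
        mu2 = ((1 - r) * y).sum() / (1 - r).sum()
        var = (r*(y - mu1)**2 + (1 - r)*(y - mu2)**2).sum() / len(y)
        sd = np.sqrt(var)
    return np.log(w*norm.pdf(y, mu1, sd) + (1 - w)*norm.pdf(y, mu2, sd)).sum()

lod = (loglik_mixture(y) - loglik_single(y)) / np.log(10)
print(f"LOD = {lod:.2f}")  # a mixture never fits worse, hence the spurious-peak risk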
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.
Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.
Silva, Emília; Daam, Michiel A; Cerejeira, Maria José
2015-05-01
Although pesticide regulatory tools are mainly based on individual substances, aquatic ecosystems are usually exposed to multiple pesticides resulting from their use on the variety of crops within the catchment of a river. This study estimated the impact of pesticide mixtures measured in surface waters between 2002 and 2008 within three important Portuguese river basins ('Mondego', 'Sado' and 'Tejo') on primary producers, arthropods and fish by toxic pressure calculation. Species sensitivity distributions (SSDs), in combination with mixture toxicity models, were applied. Reflecting the differences in the responses of the taxonomic groups as well as in the pesticide exposures that these organisms experience, variable acute multi-substance potentially affected fractions (msPAFs) were obtained. The median msPAF for primary producers and arthropods in surface waters of all river basins exceeded 5%, the cut-off value used in the prospective SSD approach for deriving individual environmental quality standards. A ranking procedure identified various photosystem II-inhibiting herbicides, with oxadiazon having the relatively largest toxic effects on primary producers, while the organophosphorus insecticides chlorfenvinphos and chlorpyrifos and the organochlorine endosulfan had the largest effects on arthropods and fish, respectively. These results support compliance with European legislation with regard to ecological risk assessment and management of pesticides in surface waters. Copyright © 2015. Published by Elsevier B.V.
Borhan, Farrah Payyadhah; Abd Gani, Siti Salwa; Shamsuddin, Rosnah
2014-01-01
Okara, a soybean waste from tofu and soymilk production, was utilised as a natural antioxidant in a soap formulation for stratum corneum application. A D-optimal mixture design was employed to investigate the influence of the main compositions of okara soap, containing different fatty acids and oils (virgin coconut oil A (24-28% w/w), olive oil B (15-20% w/w), palm oil C (6-10% w/w), castor oil D (15-20% w/w), cocoa butter E (6-10% w/w), and okara F (2-7% w/w)), prepared by the saponification process, on the response: the hardness of the soap. The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for okara soap hardness in terms of the six design factors considered in this study. Results revealed that the best mixture was the formulation that included 26.537% A, 19.999% B, 9.998% C, 16.241% D, 7.633% E, and 7.000% F. The results proved that differences in the levels of fatty acids and oils in the formulation significantly affect the hardness of the soap. Depending on the desirable levels of those six variables, the creation of okara-based soap with properties better than those of commercial ones is possible.
Mixture of autoregressive modeling orders and its implication on single trial EEG classification
Atyabi, Adham; Shic, Frederick; Naples, Adam
2016-01-01
Autoregressive (AR) models are among the most commonly utilized feature types in Electroencephalogram (EEG) studies, as they offer better resolution, smoother spectra and applicability to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders poorly represent the signal, while higher orders increase noise. Conventional methods for estimating the modeling order include the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond more promptly and correctly to the operator's thoughts. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional methods utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
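The conventional criteria mentioned (AIC, BIC) can be computed by fitting a range of AR orders, for example with statsmodels (assumed available); the mixture-of-orders machinery itself is beyond a snippet:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(4)
# Synthetic single-channel "EEG" segment: a stationary AR(3) process
x = np.zeros(512)
for t in range(3, 512):
    x[t] = 1.2*x[t-1] - 0.6*x[t-2] + 0.2*x[t-3] + rng.standard_normal()

# Conventional order selection: fit a range of orders, compare AIC/BIC
for p in (2, 3, 5, 8, 13):
    res = AutoReg(x, lags=p).fit()
    print(f"order {p:2d}: AIC={res.aic:8.1f}  BIC={res.bic:8.1f}")
```

An order-mixture feature vector, in the blind-mixture baseline sense, would simply concatenate the coefficient sets fitted at several of these orders.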
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haslinger, Jaroslav, E-mail: hasling@karlin.mff.cuni.cz; Stebel, Jan, E-mail: stebel@math.cas.cz
2011-04-15
We study the shape optimization problem for the paper machine headbox which distributes a mixture of water and wood fibers in the paper making process. The aim is to find a shape which a priori ensures the given velocity profile on the outlet part. The mathematical formulation leads to the optimal control problem in which the control variable is the shape of the domain representing the header, the state problem is represented by the generalized Navier-Stokes system with nontrivial boundary conditions. This paper deals with numerical aspects of the problem.
2001-08-08
…entropy inequality with independent variables consistent with several natural systems, and apply the resulting constitutive theory near equilibrium. Cited works include L. S. Bennethum and J. H. Cushman, Multiscale, hybrid mixture theory for swelling systems - I: Balance laws, and - II: Constitutive theory, International Journal of Engineering Science, 34(2):125-145, 1996.
Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun
2017-03-01
In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate the combined toxicity of these substances when they occur in a mixture. The effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; the concentration combinations of the three different UV-filters in a mixture were determined from the fractions of the components, based on EC25 values predicted by the concentration addition (CA) model. The interactions among the UV-filters were also assessed by the model deviation ratio (MDR), using observed and predicted toxicity values obtained from the mixture-exposure tests and the CA model. The results of this study indicated that the observed ECx_mix (e.g., EC10mix, EC25mix, or EC50mix) values obtained from mixture-exposure tests were higher than the predicted values calculated by the CA model. MDR values were also less than a factor of 1.0 in mixtures of the three different UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. Our findings will provide important information for hazard or risk assessment of organic UV-filters when they occur together in the aquatic environment. To better understand mixture toxicity and the interaction of components in a mixture, further studies of various combinations of mixture components are also required. Copyright © 2016 Elsevier Inc. All rights reserved.
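A sketch of the underlying arithmetic, the CA-predicted mixture EC50 and the MDR; all numerical values below are assumed placeholders, not the reported measurements:

```python
import numpy as np

# Hypothetical single-compound EC50s (mg/L) for the three UV-filters
ec50 = np.array([0.32, 0.65, 1.10])     # assumed, for illustration only
frac = np.array([0.4, 0.3, 0.3])        # fraction of each compound in the mixture

# Concentration addition (Loewe): 1 / EC50_mix = sum_i (frac_i / EC50_i)
ec50_mix_pred = 1.0 / np.sum(frac / ec50)

# Model deviation ratio: predicted over observed mixture EC50
ec50_mix_obs = 0.75                     # assumed observed value
mdr = ec50_mix_pred / ec50_mix_obs
print(f"CA-predicted EC50_mix = {ec50_mix_pred:.3f}, MDR = {mdr:.2f}")
# MDR < 1 suggests antagonism; MDR substantially > 1 suggests synergism
```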
NASA Astrophysics Data System (ADS)
Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard
2016-08-01
Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.
NASA Technical Reports Server (NTRS)
Chappell, Lori J.; Cucinotta, Francis A.
2011-01-01
Radiation risks are estimated in a competing-risk formalism where age or time-after-exposure estimates of increased risks for cancer and circulatory diseases are folded with a probability to survive to a given age. The survival function, also called the life-table, changes with calendar year, gender, smoking status and other demographic variables. An outstanding problem in risk estimation is the method of risk transfer between an exposed population and a second population where risks are to be estimated. Approaches used to transfer risks are based on: 1) multiplicative risk transfer models, where risks are proportional to background disease rates; and 2) additive risk transfer models, where risks are independent of background rates. In addition, a mixture model is often considered, where the multiplicative and additive transfer assumptions are given weighted contributions. We studied the influence of the survival probability on the risk of exposure-induced cancer and circulatory disease morbidity and mortality in the multiplicative transfer model and the mixture model. Risks for never-smokers (NS) compared to the average U.S. population are estimated to be reduced by between 30% and 60%, depending on model assumptions. Lung cancer is the major contributor to the reduction for NS, with additional contributions from circulatory diseases and cancers of the stomach, liver, bladder, oral cavity, esophagus, colon, a portion of the solid cancer remainder, and leukemia. Greater improvements in risk estimates for NS are possible, and would depend on improved understanding of risk transfer models and on elucidating the role of space radiation in the various stages of disease formation (e.g., initiation, promotion, and progression).
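A minimal sketch of the three transfer options; the baseline rates and risk coefficients below are hypothetical placeholders, not published values:

```python
def transferred_excess_rate(baseline_rate, err, ear, v=0.5):
    """Excess disease rate transferred to a target population.

    Multiplicative transfer scales the target population's baseline rate by
    the excess relative risk (ERR); additive transfer applies the excess
    absolute rate (EAR) directly; the mixture model takes a weighted
    combination with weight v. All inputs here are illustrative placeholders.
    """
    multiplicative = err * baseline_rate
    additive = ear
    return v * multiplicative + (1.0 - v) * additive

# Never-smoker lung-cancer baseline is far below the population average,
# so the multiplicative component shrinks the projected radiation risk
avg = transferred_excess_rate(baseline_rate=60e-5, err=0.5, ear=20e-5)
ns = transferred_excess_rate(baseline_rate=15e-5, err=0.5, ear=20e-5)
print(f"excess rate, average population: {avg:.6f}; never-smokers: {ns:.6f}")
```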
Modeling field-scale cosolvent flooding for DNAPL source zone remediation
NASA Astrophysics Data System (ADS)
Liang, Hailian; Falta, Ronald W.
2008-02-01
A three-dimensional, compositional, multiphase flow simulator was used to model a field-scale test of DNAPL removal by cosolvent flooding. The DNAPL at this site was tetrachloroethylene (PCE), and the flooding solution was an ethanol/water mixture, with up to 95% ethanol. The numerical model, UTCHEM, accounts for the equilibrium phase behavior and multiphase flow of a ternary ethanol-PCE-water system. Simulations of enhanced cosolvent flooding using a kinetic interphase mass transfer approach show that when a very high concentration of alcohol is injected, the DNAPL/water/alcohol mixture forms a single phase and local mass transfer limitations become irrelevant. The field simulations were carried out in three steps. At the first level, a simple uncalibrated layered model is developed. This model is capable of roughly reproducing the production well concentrations of alcohol, but not of PCE. A more refined (but uncalibrated) permeability model is able to accurately simulate the breakthrough concentrations of injected alcohol from the production wells, but is unable to accurately predict the PCE removal. The final model uses a calibration of the initial PCE distribution to get good matches with the PCE effluent curves from the extraction wells. It is evident that the effectiveness of DNAPL source zone remediation is mainly affected by characteristics of the spatial heterogeneity of porous media and the variable (and unknown) DNAPL distribution. The inherent uncertainty in the DNAPL distribution at real field sites means that some form of calibration of the initial contaminant distribution will almost always be required to match contaminant effluent breakthrough curves.
Modeling field-scale cosolvent flooding for DNAPL source zone remediation.
Liang, Hailian; Falta, Ronald W
2008-02-19
A three-dimensional, compositional, multiphase flow simulator was used to model a field-scale test of DNAPL removal by cosolvent flooding. The DNAPL at this site was tetrachloroethylene (PCE), and the flooding solution was an ethanol/water mixture, with up to 95% ethanol. The numerical model, UTCHEM, accounts for the equilibrium phase behavior and multiphase flow of a ternary ethanol-PCE-water system. Simulations of enhanced cosolvent flooding using a kinetic interphase mass transfer approach show that when a very high concentration of alcohol is injected, the DNAPL/water/alcohol mixture forms a single phase and local mass transfer limitations become irrelevant. The field simulations were carried out in three steps. At the first level, a simple uncalibrated layered model is developed. This model is capable of roughly reproducing the production well concentrations of alcohol, but not of PCE. A more refined (but uncalibrated) permeability model is able to accurately simulate the breakthrough concentrations of injected alcohol from the production wells, but is unable to accurately predict the PCE removal. The final model uses a calibration of the initial PCE distribution to get good matches with the PCE effluent curves from the extraction wells. It is evident that the effectiveness of DNAPL source zone remediation is mainly affected by characteristics of the spatial heterogeneity of porous media and the variable (and unknown) DNAPL distribution. The inherent uncertainty in the DNAPL distribution at real field sites means that some form of calibration of the initial contaminant distribution will almost always be required to match contaminant effluent breakthrough curves.
Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten
2017-11-01
Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because of the similar modes of action of neonicotinoids, concentration-additive cumulative mixture toxicity was the predicted model. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity. Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, the development of better toxicity models for neonicotinoid mixture exposures, and the consideration of mixture effects when setting water quality guidelines for this class of pesticides. Environ Toxicol Chem 2017;36:3091-3101. © 2017 SETAC.
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.
Numerical simulation of asphalt mixtures fracture using continuum models
NASA Astrophysics Data System (ADS)
Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz
2018-01-01
The paper considers numerical models of fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.
Sepulveda, L; Troncoso, F; Contreras, E; Palma, C
2008-09-01
The purpose of this study is to investigate the adsorption by peat of four reactive textile dyes with the following commercial names: Yellow CIBA WR 200% (Y), Dark Blue CIBA WR (DB), Navy CIBA WB (N), and Red CIBA WB 150% (R), used in a cotton-polyester fabric finishing plant. The decolorization levels obtained varied between 5% and 30%, and the most significant variables were pH and ionic strength. Equilibrium studies were carried out at pH 2.8 and a temperature of 25 degrees C. Maximum adsorption capacities were between 15 and 20 mg g(-1). Experimental data were fitted to the Langmuir model. The equilibrium studies for bisolute systems used DB-R and Y-N mixtures. The extended Langmuir model indicated that there is competition for adsorption sites but no interaction between dyes. The results of the kinetic adsorption studies on monosolute and bisolute systems were fitted to the film-pore diffusion, variable diffusivity and quasi-stationary models. They showed that the diffusivity coefficients varied between 2.0 x 10(-8) and 8.5 x 10(-8) cm2 s(-1) when the variable diffusivity mass transfer model (VDM) was used, and the effective diffusion coefficient was fitted between 3.3 x 10(-7) and 56.0 x 10(-7) cm2 s(-1) for the film-pore diffusion model (FPDM). The root-mean-square relative error varied between 0.8% and 47.0% for the VDM and FPDM models, respectively.
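A sketch of the Langmuir fitting step using scipy's curve_fit; the data points are invented to lie in the reported capacity range (15-20 mg/g), not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, b):
    """Langmuir isotherm: adsorbed amount versus equilibrium concentration."""
    return q_max * b * c_eq / (1.0 + b * c_eq)

# Hypothetical equilibrium data: concentration (mg/L), adsorbed amount (mg/g)
c_eq = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
q = np.array([4.1, 7.0, 11.8, 14.9, 17.2, 18.4])

(q_max, b), _ = curve_fit(langmuir, c_eq, q, p0=[18.0, 0.05])
print(f"q_max = {q_max:.1f} mg/g, b = {b:.3f} L/mg")
```

The extended Langmuir model for the bisolute case divides each dye's term by one plus the sum of b*C over both dyes, which is what encodes competition for the same sites.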
Predicting the shock compression response of heterogeneous powder mixtures
NASA Astrophysics Data System (ADS)
Fredenburg, D. A.; Thadhani, N. N.
2013-06-01
A model framework for predicting the dynamic shock-compression response of heterogeneous powder mixtures using readily obtained measurements from quasi-static tests is presented. Low-strain-rate compression data are first analyzed to determine the region of the bulk response over which particle rearrangement does not contribute to compaction. This region is then fit to determine the densification modulus of the mixture, σD, a newly defined parameter describing the resistance of the mixture to yielding. The measured densification modulus, reflective of the diverse yielding phenomena that occur at the meso-scale, is implemented into a rate-independent formulation of the P-α model, which is combined with an isobaric equation of state to predict the low- and high-stress dynamic compression response of heterogeneous powder mixtures. The framework is applied to two metal + metal-oxide (thermite) powder mixtures, and good agreement between the model and experiment is obtained for all mixtures at stresses near and above those required to reach full density. At lower stresses, rate dependencies of the constituents, and specifically those of the matrix constituent, determine the ability of the model to predict the measured response in the incomplete compaction regime.
D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.
Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W
2005-12-01
Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
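A toy version of the D-criterion search, using a linear-in-parameters quadratic response along the mixture ray instead of the paper's nonlinear threshold model (which would additionally require prior slope and threshold values to linearize the information matrix):

```python
import numpy as np
from itertools import combinations

def d_criterion(doses, n_per_dose):
    """log det of the information matrix X'WX for a quadratic dose-response
    model along a fixed-ratio mixture ray (illustrative sketch only)."""
    X = np.column_stack([np.ones_like(doses), doses, doses**2])
    W = np.diag(n_per_dose)
    sign, logdet = np.linalg.slogdet(X.T @ W @ X)
    return logdet if sign > 0 else -np.inf

candidates = np.linspace(0.0, 1.0, 11)          # total-dose levels on the ray
best = max(combinations(candidates, 3),
           key=lambda d: d_criterion(np.array(d), np.full(3, 10.0)))
print("best 3-point design on the ray:", best)
```

Maximizing the determinant of the information matrix minimizes the generalized variance of the parameter estimates, which is the source of the increased power to detect departures from additivity.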
NGS-based likelihood ratio for identifying contributors in two- and three-person DNA mixtures.
Chan Mun Wei, Joshua; Zhao, Zicheng; Li, Shuai Cheng; Ng, Yen Kaow
2018-06-01
DNA fingerprinting, also known as DNA profiling, serves as a standard procedure in forensics to identify a person by the short tandem repeat (STR) loci in their DNA. By comparing the STR loci between DNA samples, practitioners can calculate a probability of match to identify the contributors of a DNA mixture. Most existing methods are based on the 13 core STR loci identified by the Federal Bureau of Investigation (FBI). Forensic analyses of DNA mixtures based on these loci are highly variable in their procedures, and suffer from subjectivity as well as bias in complex mixture interpretation. With the emergence of next-generation sequencing (NGS) technologies, the sequencing of billions of DNA molecules can be parallelized, greatly increasing throughput and reducing the associated costs. This allows the creation of new techniques that incorporate more loci to enable complex mixture interpretation. In this paper, we propose a likelihood-ratio computation that uses NGS data for DNA testing on mixed samples. We have applied the method to 4480 simulated DNA mixtures, which consist of various mixture proportions of whole-genome sequencing data from 8 unrelated individuals. The results confirm the feasibility of utilizing NGS data in DNA mixture interpretation. We observed an average likelihood ratio as high as 285,978 for two-person mixtures. Using our method, all 224 identity tests for two-person and three-person mixtures were correctly identified. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Konishi, C.
2014-12-01
A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e. gravel, sand, and clay). A well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particles and considers the rest of the volume to be that of the large particles. This simple approach has been commonly accepted and has been validated by many studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of another large particle; i.e. only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of a gravel-sand-clay mixture model. In the proposed model, I consider the three volume fractions of each component instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosities of the two cases, one in which sand is the smaller particle and the other in which sand is the larger particle, can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as a weighted average of the two cases, with the volume fractions of gravel and clay as weights. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation. Furthermore, elastic properties are obtainable from the general Hashin-Shtrikman-Walpole bounds. The predictions of this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: A part of this study was accomplished with a subsidy from the River Environment Fund of Japan.
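A sketch of the final permeability step via the Kozeny-Carman equation; the packed-bed constant 180 and the illustrative porosity and grain sizes are assumptions, not values calibrated in the study:

```python
import numpy as np

def kozeny_carman(phi_eff, d_eff):
    """Kozeny-Carman permeability (m^2) from effective porosity and an
    effective grain diameter d_eff (m); 180 is the common packed-bed
    constant and may differ from the calibration used in the study."""
    return (d_eff**2 / 180.0) * phi_eff**3 / (1.0 - phi_eff)**2

# Illustration: a few percent of gravel raises the effective grain size,
# sharply increasing permeability even at similar effective porosity
for d_mm, label in [(0.2, "sand"), (0.5, "sand + a little gravel")]:
    k = kozeny_carman(phi_eff=0.30, d_eff=d_mm * 1e-3)
    print(f"{label:25s} k = {k:.2e} m^2")
```

Because permeability scales with the square of the effective grain diameter, the strong sensitivity to small gravel fractions follows directly.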
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
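A simplified cousin of GMIS can be sketched as plain importance sampling with a Gaussian-mixture proposal fitted to posterior samples; here the samples of a toy conjugate problem stand in for DREAM output, and GMIS proper would use bridge sampling rather than this one-sided estimator:

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Toy inference problem: unknown mean, N(0, 2^2) prior, unit-variance likelihood
data = rng.normal(1.0, 1.0, 50)

def log_post(th):
    """Unnormalized log posterior: log likelihood + log prior."""
    return (norm.logpdf(data[:, None], th, 1.0).sum(axis=0)
            + norm.logpdf(th, 0.0, 2.0))

# Stand-in for MCMC output: samples from the (here, known Gaussian) posterior
post_var = 1.0 / (50.0 + 0.25)
samples = rng.normal(post_var * data.sum(), np.sqrt(post_var), 4000)

# Fit a Gaussian mixture to the posterior samples and importance-sample it
gmm = GaussianMixture(n_components=3, random_state=0).fit(samples.reshape(-1, 1))
theta, _ = gmm.sample(20000)
lw = log_post(theta.ravel()) - gmm.score_samples(theta)   # log importance weights

# Evidence = mean importance weight, computed with the log-sum-exp trick
log_evidence = lw.max() + np.log(np.exp(lw - lw.max()).mean())
print(f"log evidence ~= {log_evidence:.3f}")
```

The mixture proposal matters: because it closely tracks the posterior, the importance weights stay well behaved, which is the property GMIS exploits.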
Tracking the visual focus of attention for a varying number of wandering people.
Smith, Kevin; Ba, Sileye O; Odobez, Jean-Marc; Gatica-Perez, Daniel
2008-07-01
We define and address the problem of finding the visual focus of attention for a varying number of wandering people (VFOA-W), that is, determining where a person is looking when their movement is unconstrained. VFOA-W estimation is a new and important problem with implications for behavior understanding and cognitive science, as well as real-world applications. One such application, which we present in this article, monitors the attention passers-by pay to an outdoor advertisement. Our approach to the VFOA-W problem proposes a multi-person tracking solution based on a dynamic Bayesian network that simultaneously infers the (variable) number of people in a scene, their body locations, their head locations, and their head pose. For efficient inference in the resulting large variable-dimensional state-space, we propose a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampling scheme, as well as a novel global observation model which determines the number of people in the scene and localizes them. We propose a Gaussian Mixture Model (GMM) and Hidden Markov Model (HMM)-based VFOA-W model which uses head pose and location information to determine people's focus state. Our models are evaluated for tracking performance and for the ability to recognize people looking at an outdoor advertisement, with results indicating good performance on sequences where a moderate number of people pass in front of an advertisement.
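As an illustration of the GMM piece only (the full VFOA-W model also uses head location and an HMM for temporal smoothing), a two-component mixture on head-pan angles can separate a tight "focused" cluster from a broad "unfocused" one. All data below are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical head-pan angles (degrees): near 0 when facing the ad, broad otherwise
focused = rng.normal(0, 8, 300)
unfocused = rng.uniform(-90, 90, 300)
pan = np.concatenate([focused, unfocused]).reshape(-1, 1)

# Two-component GMM: the tighter component is interpreted as "focused"
gmm = GaussianMixture(n_components=2, random_state=0).fit(pan)
tight = np.argmin(gmm.covariances_.ravel())

test = np.array([[3.0], [55.0]])
print(gmm.predict_proba(test)[:, tight])  # P(focused) for each test angle
```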
A numerical study of granular dam-break flow
NASA Astrophysics Data System (ADS)
Pophet, N.; Rébillout, L.; Ozeren, Y.; Altinakar, M.
2017-12-01
Accurate prediction of granular flow behavior is essential to optimize mitigation measures for hazardous natural granular flows such as landslides, debris flows and tailings-dam break flows. So far, most successful models for these types of flows focus on either pure granular flows or flows of saturated grain-fluid mixtures, employing a constant friction model or more complex rheological models. These saturated models often produce non-physical results when they are applied to simulate flows of partially saturated mixtures. Therefore, more advanced models are needed. A numerical model was developed for granular flow employing constant-friction and μ(I) rheologies (Jop et al., J. Fluid Mech. 2005), coupled with a groundwater flow model for seepage flow. The granular flow is simulated by solving a mixture model using the Finite Volume Method (FVM). The Volume-of-Fluid (VOF) technique is used to capture the free surface motion. The constant-friction and μ(I) rheological models are incorporated in the mixture model. The seepage flow is modeled by solving the Richards equation. A framework is developed to couple the two solvers in OpenFOAM. The model was validated and tested by reproducing laboratory experiments of partially and fully channelized dam-break flows of dry and initially saturated granular material. To obtain appropriate parameters for the rheological models, a series of simulations with different sets of rheological parameters was performed. The simulation results obtained from the constant-friction and μ(I) rheological models are compared with laboratory experiments for the granular free-surface position, front position, and velocity field during the flows. The numerical predictions indicate that the proposed model is promising in predicting the dynamics of the flow and deposition process. The proposed model may provide more reliable insight than previous models that assume a saturated mixture when saturated and partially saturated portions of the granular mixture co-exist.
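For reference, the μ(I) friction law mentioned above has a compact closed form. The sketch below evaluates it with the glass-bead coefficients commonly quoted from Jop et al., which may differ from the values calibrated in this study.

```python
import numpy as np

def mu_I(I, mu_s=0.38, mu_2=0.64, I_0=0.279):
    """mu(I) friction law; coefficients are the commonly quoted
    glass-bead values, not this study's calibration."""
    return mu_s + (mu_2 - mu_s) / (1.0 + I_0 / np.maximum(I, 1e-12))

def inertial_number(gamma_dot, d, p, rho_s):
    """I = shear rate * grain diameter / sqrt(pressure / grain density)."""
    return gamma_dot * d / np.sqrt(p / rho_s)

I = inertial_number(gamma_dot=10.0, d=0.5e-3, p=1e3, rho_s=2500.0)
print(f"I = {I:.4f}, mu(I) = {mu_I(I):.3f}")
```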
Abdelrahman, Ahmed I.; Dai, Sheng; Thickett, Stuart C.; Ornatsky, Olga; Bandura, Dmitry; Baranov, Vladimir; Winnik, Mitchell A.
2009-01-01
We describe the synthesis and characterization of metal-encoded polystyrene microspheres by multiple-stage dispersion polymerization with diameters on the order of 2 µm and a very narrow size distribution. Different lanthanides were loaded into these microspheres through the addition of a mixture of LnCl3 salts and excess acrylic acid or acetoacetylethyl methacrylate (AAEM) dissolved in ethanol to the reaction after about 10% conversion of styrene, i.e., well after the particle nucleation stage was complete. Individual microspheres contain ca. 10^6-10^8 chelated lanthanide ions, of either a single element or a mixture of elements. These microspheres were characterized one-by-one utilizing a novel mass cytometer with an inductively coupled plasma (ICP) ionization source and time-of-flight (TOF) mass spectrometry detection. Microspheres containing a range of different metals at different levels of concentration were synthesized to meet the requirements of binary encoding and enumeration encoding protocols. With four different metals at five levels of concentration, we could achieve a variability of 624, and the strategy we report should allow one to obtain much larger variability. To demonstrate the usefulness of element-encoded beads for highly multiplexed immunoassays, we carried out a proof-of-principle model bioassay involving conjugation of mouse IgG to the surface of La and Tm containing particles, and its detection by an anti-mouse IgG bearing a metal-chelating polymer with Pr. PMID:19807075
Abdelrahman, Ahmed I; Dai, Sheng; Thickett, Stuart C; Ornatsky, Olga; Bandura, Dmitry; Baranov, Vladimir; Winnik, Mitchell A
2009-10-28
We describe the synthesis and characterization of metal-encoded polystyrene microspheres by multiple-stage dispersion polymerization with diameters on the order of 2 µm and a very narrow size distribution. Different lanthanides were loaded into these microspheres through the addition of a mixture of lanthanide salts (LnCl3) and excess acrylic acid (AA) or acetoacetylethyl methacrylate (AAEM) dissolved in ethanol to the reaction after about 10% conversion of styrene, that is, well after the particle nucleation stage was complete. Individual microspheres contain ca. 10^6-10^8 chelated lanthanide ions, of either a single element or a mixture of elements. These microspheres were characterized one-by-one utilizing a novel mass cytometer with an inductively coupled plasma (ICP) ionization source and time-of-flight (TOF) mass spectrometry detection. Microspheres containing a range of different metals at different levels of concentration were synthesized to meet the requirements of binary encoding and enumeration encoding protocols. With four different metals at five levels of concentration, we could achieve a variability of 624, and the strategy we report should allow one to obtain much larger variability. To demonstrate the usefulness of element-encoded beads for highly multiplexed immunoassays, we carried out a proof-of-principle model bioassay involving conjugation of mouse IgG to the surface of La and Tm containing particles and its detection by an anti-mouse IgG bearing a metal-chelating polymer with Pr.
Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo
2011-03-04
Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we reported an overview on these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied.
Houghten, Richard A; Ganno, Michelle L; McLaughlin, Jay P; Dooley, Colette T; Eans, Shainnel O; Santos, Radleigh G; LaVoi, Travis; Nefzi, Adel; Welmaker, Greg; Giulianotti, Marc A; Toll, Lawrence
2016-01-11
The hypothesis of the current study is that simultaneous direct in vivo testing of thousands to millions of systematically arranged mixture-based libraries will facilitate the identification of enhanced individual compounds. Individual compounds identified from such libraries may have increased specificity and decreased side effects early in the discovery phase. Testing began by screening ten diverse scaffolds as single mixtures (ranging from 17,340 to 4,879,681 compounds) for analgesia directly in the mouse tail-withdrawal model. The "all-X" mixture representing the library TPI-1954 was found to produce significant antinociception and lacked respiratory depression and hyperlocomotor effects in the Comprehensive Laboratory Animal Monitoring System (CLAMS). The TPI-1954 library is a pyrrolidine bis-piperazine and totals 738,192 compounds. This library has 26 functionalities at each of the first three positions of diversity, each represented by a mixture of 28,392 compounds (26 × 26 × 42), and 42 functionalities at the fourth, each represented by a mixture of 19,915 compounds (26 × 26 × 26). The 120 resulting mixtures representing the four variable positions were screened directly in vivo in the mouse 55 °C warm-water tail-withdrawal assay (ip administration). The 120 samples were then ranked in terms of their antinociceptive activity, and 54 individual compounds were synthesized. Nine of the individual compounds produced dose-dependent antinociception equivalent to morphine. In practical terms, one would not expect multi-exponential jumps in activity in moving from the all-X mixture, to the positional-scanning libraries, to the individual compounds; rather, because of the systematic formatting, one would anticipate steady increases in activity as the complexity of the mixtures is reduced, and this is in fact what we see in the current study. One of the final individual compounds identified, TPI 2213-17, lacked significant respiratory depression, locomotor impairment, or sedation. Our results represent an example of this unique approach for screening large mixture-based libraries directly in vivo to rapidly identify individual compounds.
Mixture theory-based poroelasticity as a model of interstitial tissue growth
Cowin, Stephen C.; Cardoso, Luis
2011-01-01
This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues. PMID:22184481
Mixture theory-based poroelasticity as a model of interstitial tissue growth.
Cowin, Stephen C; Cardoso, Luis
2012-01-01
This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues.
NASA Astrophysics Data System (ADS)
Cibulka, I.; Fontaine, J.-C.; Sosnkowska-Kehiaian, K.; Kehiaian, H. V.
This document is part of Subvolume A 'Binary Liquid Systems of Nonelectrolytes I' of Volume 26 'Heats of Mixing, Vapor-Liquid Equilibrium, and Volumetric Properties of Mixtures and Solutions' of Landolt-Börnstein Group IV 'Physical Chemistry'. It contains the Chapter 'Vapor-Liquid Equilibrium in the Mixture 1,1-Difluoroethane C2H4F2 + C4H8 2-Methylpropene (EVLM1131, LB5730_E)' providing data from direct measurement of pressure and mole fraction in vapor phase at variable mole fraction in liquid phase and constant temperature.
A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.
Chen, D G; Pounds, J G
1998-12-01
The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where Ymin and Ymax represent the minimal and the maximal observed toxic responses. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text]. In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin, as well as new experimental data from our laboratory for mixtures of mercury and cadmium.
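A minimal sketch of the transform-both-sides idea: the same Box-Cox transform is applied to the observed responses and to the model predictions, and the transform parameter lambda is estimated jointly with the dose-response parameters. The four-parameter logistic used here and all data are illustrative stand-ins for the isobologram model, whose exact formula is elided above.

```python
import numpy as np
from scipy.optimize import least_squares

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

# Hypothetical dose-response data, responses bounded between ymin and ymax
dose = np.array([0.5, 1, 2, 4, 8, 16, 32.0])
resp = np.array([0.09, 0.12, 0.20, 0.35, 0.55, 0.68, 0.74])

def model(d, ymin, ymax, ed50, slope):
    """Logistic dose-response rescaled between ymin and ymax."""
    return ymin + (ymax - ymin) / (1.0 + (ed50 / d)**slope)

def residuals(theta):
    ymin, ymax, ed50, slope, lam = theta
    # "Transform both sides": same Box-Cox transform on data and model
    return boxcox(resp, lam) - boxcox(model(dose, ymin, ymax, ed50, slope), lam)

fit = least_squares(residuals, x0=[0.05, 0.8, 5.0, 1.0, 0.5],
                    bounds=([0.01, 0.3, 0.1, 0.1, -2], [0.2, 1.5, 100, 5, 2]))
print("ymin, ymax, ED50, slope, lambda =", np.round(fit.x, 3))
```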
A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.
Chen, D G; Pounds, J G
1998-01-01
The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where Ymin and Ymax represent the minimal and the maximal observed toxic responses. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text]. In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin, as well as new experimental data from our laboratory for mixtures of mercury and cadmium. PMID:9860894
López Aca, Viviana; Gonzalez, Patricia Verónica; Carriquiriborde, Pedro
2018-05-09
The need for ecotoxicological information on local species has recently been highlighted as a priority issue in Latin America. In addition, little information is available on the concentration distances between lethal and sublethal effects, or on the effect of mixtures at these two levels of analysis. Chlorpyrifos (CPF) is an organophosphate insecticide broadly used in soybean crops, which have dramatically expanded in Latin America and other regions of the world. The aim of the present study was to evaluate lethal and sublethal effects of CPF, singly or in mixtures, on the inland "Pejerrey" (Odontesthes bonariensis) under laboratory conditions. Bioassays were performed using 15-30 d post-hatch Pejerrey larvae. Six toxicity tests were run to estimate the average inter-assay dose-response curve of CPF and another six to assess the effects of mixtures of CPF with endosulfan (EN) or lambda-cyhalothrin (LC) at three toxic unit (TU) proportions (25:75, 50:50, 75:25). In addition, four assays were performed to describe the average inter-assay dose-response inhibition curve of acetylcholinesterase (AChE) for CPF alone and two to assess the mixtures. The estimated 96 h-LC50 for CPF was 2.26 ± 1.11 µg/L and the incipiency value was 0.048 ± 0.012 µg/L, placing this Neotropical species among the 13% most CPF-sensitive fish worldwide. In addition, the 96 h-LC50 values for EN and LC were 0.30 ± 0.012 µg/L and 0.043 ± 0.031 µg/L, respectively. The relative toxicity of the three soybean insecticides for O. bonariensis was therefore LC > EN > CPF. Effects of mixtures with EN and LC were variable, but in general fitted both the independent action (IA) and concentration addition (CA) models. Slight antagonism was found when CPF TU proportions were above 50%. Therefore, from the regulatory point of view, the use of either mixture model, CA or IA, would be precautionary. Differential sensitivity to CPF was found for AChE inhibition in the head (96 h-IC50 = 0.065 ± 0.058 µg/L) and the body (96 h-IC50 = 0.48 ± 0.17 µg/L). In addition, whereas no significant mixture effect was observed on body AChE activity, antagonism of head AChE inhibition was induced in the presence of both EN and LC in the mixture. The lethal-to-sublethal ratio was close to 25.2 and 3.4 when comparing the CPF LC50 with the IC50s for head and body AChE activity, respectively. However, considerable overlap was observed between the concentration-response curves, indicating that the use of AChE as a biomarker for environmental monitoring would be limited.
McParland, D; Phillips, C M; Brennan, L; Roche, H M; Gormley, I C
2017-12-10
The LIPGENE-SU.VI.MAX study, like many others, recorded high-dimensional continuous phenotypic data and categorical genotypic data. LIPGENE-SU.VI.MAX focuses on the need to account for both phenotypic and genetic factors when studying the metabolic syndrome (MetS), a complex disorder that can lead to higher risk of type 2 diabetes and cardiovascular disease. Interest lies in clustering the LIPGENE-SU.VI.MAX participants into homogeneous groups or sub-phenotypes, by jointly considering their phenotypic and genotypic data, and in determining which variables are discriminatory. A novel latent variable model that elegantly accommodates high-dimensional, mixed data is developed to cluster LIPGENE-SU.VI.MAX participants using a Bayesian finite mixture model. A computationally efficient variable selection algorithm is incorporated, estimation is via a Gibbs sampling algorithm, and an approximate BIC-MCMC criterion is developed to select the optimal model. Two clusters or sub-phenotypes ('healthy' and 'at risk') are uncovered. A small subset of variables is deemed discriminatory, which notably includes both phenotypic and genotypic variables, highlighting the need to jointly consider both factors. Further, 7 years after the LIPGENE-SU.VI.MAX data were collected, participants underwent further analysis to diagnose presence or absence of the MetS. The two uncovered sub-phenotypes strongly correspond to the 7-year follow-up disease classification, highlighting the role of phenotypic and genotypic factors in the MetS and emphasising the potential utility of the clustering approach in early screening. Additionally, the ability of the proposed approach to quantify the uncertainty in sub-phenotype membership at the participant level is consonant with the concepts of precision medicine and nutrition. Copyright © 2017 John Wiley & Sons, Ltd.
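The paper's BIC-MCMC criterion is bespoke, but the underlying idea of choosing the number of sub-phenotypes by an information criterion can be sketched with an off-the-shelf Gaussian mixture. This is an assumption-laden stand-in: the actual model handles mixed continuous/categorical data and performs variable selection inside a Gibbs sampler.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical continuous phenotype matrix with two latent sub-phenotypes
X = np.vstack([rng.normal(0, 1, (150, 5)), rng.normal(2, 1, (100, 5))])

# Select the number of sub-phenotypes G by BIC (stand-in for BIC-MCMC)
bics = {g: GaussianMixture(g, n_init=3, random_state=0).fit(X).bic(X)
        for g in range(1, 5)}
best = min(bics, key=bics.get)
print("BIC by G:", {g: round(b) for g, b in bics.items()}, "-> best G =", best)
```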
Two Universality Properties Associated with the Monkey Model of Zipf's Law
NASA Astrophysics Data System (ADS)
Perline, Richard; Perline, Ron
2016-03-01
The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
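The monkey model is easy to simulate. The sketch below types random characters with unequal letter probabilities drawn from a random division of the unit interval (a Dirichlet draw here), then fits the rank-frequency slope on a log-log scale; with a small alphabet the fitted exponent only approximates the -1 limit.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(42)

# Monkey typing: 9 letters with random unequal probabilities plus a space key
n_letters, p_space = 9, 0.2
p = rng.dirichlet(np.ones(n_letters)) * (1 - p_space)
keys = list("abcdefghi") + [" "]
text = "".join(rng.choice(keys, size=2_000_000, p=np.append(p, p_space)))

# Rank-frequency distribution of the generated "words"
freq = np.array(sorted(Counter(text.split()).values(), reverse=True), float)
rank = np.arange(1, len(freq) + 1)

# Zipf exponent from a log-log fit away from the tails
sel = (rank >= 10) & (rank <= 1000)
slope = np.polyfit(np.log(rank[sel]), np.log(freq[sel]), 1)[0]
print(f"fitted Zipf exponent ~ {slope:.2f} (theory: -> -1 as alphabet grows)")
```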
NASA Technical Reports Server (NTRS)
Mustard, John F.
1993-01-01
A linear mixing model is used to model the spectral variability of an AVIRIS scene from the western foothills of the Sierra Nevada and to calibrate these radiance data to reflectance. Five spectral endmembers from the AVIRIS data, plus an ideal 'shade' endmember, were required to model the continuum reflectance of each pixel in the image. Three of the endmembers were interpreted to model green vegetation, dry grass, and illumination. Comparison of the fraction images to bedrock geology maps indicates that substrate composition must be a factor contributing to the spectral properties of these endmembers. Detailed examination of the reflectance spectra of the three soil endmembers reveals that differences in the amount of ferric and ferrous iron and/or organic constituents in the soils are largely responsible for the differences in spectral properties of these endmembers.
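A linear mixing model of this kind reduces, per pixel, to solving for nonnegative endmember fractions that sum to one. A minimal sketch with synthetic spectra, the sum-to-one constraint handled by a heavily weighted extra equation (one common trick, not necessarily the procedure used in this study):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical endmember spectra: columns are endmembers, rows are bands
E = np.abs(rng.normal(0.3, 0.1, (50, 4)))

# A synthetic pixel mixed from known fractions plus noise
f_true = np.array([0.5, 0.3, 0.15, 0.05])
pixel = E @ f_true + rng.normal(0, 0.002, 50)

# Fully constrained unmixing: nonnegative fractions, sum-to-one enforced
# by appending a heavily weighted row of ones to the system
w = 100.0
A = np.vstack([E, w * np.ones(4)])
b = np.append(pixel, w * 1.0)
f_est, _ = nnls(A, b)
print("true:", f_true, "estimated:", np.round(f_est, 3))
```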
Development of Efficient Real-Fluid Model in Simulating Liquid Rocket Injector Flows
NASA Technical Reports Server (NTRS)
Cheng, Gary; Farmer, Richard
2003-01-01
The characteristics of propellant mixing near the injector have a profound effect on liquid rocket engine performance. However, the flow features near the injector of liquid rocket engines are extremely complicated; for example, supercritical-pressure spray, turbulent mixing, and chemical reactions are all present. Previously, a homogeneous spray approach with a real-fluid property model was developed to account for compressibility and evaporation effects, such that the thermodynamic properties of a mixture over a wide range of pressures and temperatures can be properly calculated, including the liquid-phase, gas-phase, two-phase, and dense-fluid regions. The homogeneous spray model demonstrated good success in simulating uni-element shear coaxial injector spray combustion flows. However, the real-fluid model suffered a computational deficiency when applied to a pressure-based computational fluid dynamics (CFD) code. The deficiency is caused by pressure and enthalpy being the independent variables in the solution procedure of a pressure-based code, whereas the real-fluid model uses density and temperature as independent variables. The objective of the present research work is to improve the computational efficiency of the real-fluid property model in computing thermal properties. The proposed approach is called an efficient real-fluid model, and the improvement in computational efficiency is achieved by using a combination of a liquid species and a gaseous species to represent a real-fluid species.
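The deficiency described above amounts to repeatedly inverting a (rho, T) -> (p, h) property routine. A toy sketch of that inner Newton inversion, using a calorically perfect gas in place of the real-fluid model; in the real case the analytic Jacobian below is replaced by far more expensive property evaluations, which is precisely the cost the efficient model seeks to avoid.

```python
import numpy as np

# Toy calorically perfect gas standing in for the real-fluid property model:
# the property routine naturally maps (rho, T) -> (p, h)
R, cp = 287.0, 1004.5
def props(rho, T):
    return rho * R * T, cp * T  # p, h

def invert(p_target, h_target, rho0=1.0, T0=300.0, tol=1e-8):
    """Newton iteration recovering (rho, T) from (p, h): the inner loop a
    pressure-based CFD code would need at every cell (illustrative only)."""
    rho, T = rho0, T0
    for _ in range(50):
        p, h = props(rho, T)
        # Jacobian of (p, h) with respect to (rho, T) for this toy model
        J = np.array([[R * T, rho * R], [0.0, cp]])
        drho, dT = np.linalg.solve(J, [p_target - p, h_target - h])
        rho, T = rho + drho, T + dT
        if abs(drho) + abs(dT) < tol:
            break
    return rho, T

print(invert(p_target=101325.0, h_target=cp * 350.0))  # expect T = 350 K
```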
NGMIX: Gaussian mixture models for 2D images
NASA Astrophysics Data System (ADS)
Sheldon, Erin
2015-08-01
NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
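The analytic convolution NGMIX exploits follows from the fact that convolving Gaussians adds their means and covariances; for mixtures, every (galaxy, PSF) component pair is combined and the weights multiply. A minimal sketch of that identity (not NGMIX's actual API):

```python
import numpy as np

def convolve_gaussians(mix_a, mix_b):
    """Analytic convolution of two 2D Gaussian mixtures.
    Each mixture is a list of (weight, mean(2,), cov(2,2)) tuples:
    means add, covariances add, weights multiply pairwise."""
    return [(wa * wb, ma + mb, Ca + Cb)
            for wa, ma, Ca in mix_a
            for wb, mb, Cb in mix_b]

# Hypothetical two-component galaxy model and single-Gaussian PSF
galaxy = [(0.6, np.zeros(2), np.diag([4.0, 1.0])),
          (0.4, np.zeros(2), np.diag([9.0, 3.0]))]
psf = [(1.0, np.zeros(2), 0.25 * np.eye(2))]

for w, m, C in convolve_gaussians(galaxy, psf):
    print(f"weight {w:.2f}, convolved diag cov {np.diag(C)}")
```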
The Effect of Natural Osmolyte Mixtures on the Temperature-Pressure Stability of the Protein RNase A
NASA Astrophysics Data System (ADS)
Arns, Loana; Schuabb, Vitor; Meichsner, Shari; Berghaus, Melanie; Winter, Roland
2018-05-01
In biological cells, osmolytes appear as complex mixtures with variable compositions, depending on the particular environmental conditions of the organism. Based on various spectroscopic, thermodynamic and small-angle scattering data, we explored the effect of two different natural osmolyte mixtures, which are found in shallow-water and deep-sea shrimps, on the temperature and pressure stability of a typical monomeric protein, RNase A. Both natural osmolyte mixtures stabilize the protein against thermal and pressure denaturation. This effect seems to be mainly caused by the major osmolyte components of the osmolyte mixtures, i.e. by glycine and trimethylamine-N-oxide (TMAO), respectively. A minor compaction of the structure, in particular in the unfolded state, seems to be largely due to TMAO. Differences in thermodynamic properties observed for glycine and TMAO, and hence also for the two osmolyte mixtures, are most likely due to different solvation properties and interactions with the protein. Different from TMAO, glycine seems to interact with the amino acid side chains and/or the backbone of the protein, thus competing with hydration water and leading to a less hydrated protein surface.
Liaw, Horng-Jang; Wang, Tzu-Ai
2007-03-06
Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. A liquid with dissolved salt is encountered in salt-distillation processes for separating close-boiling or azeotropic systems, and the addition of salts to a liquid may reduce its fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. The modified model was verified by comparison with experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in predicting the flash points of these mixtures. The experimental results confirm marked increases in liquid flash point upon addition of inorganic salts, relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application of the model in assessing the fire and explosion hazard of solvent/salt mixtures and, further, that the addition of inorganic salts may prove useful for hazard reduction in flammable liquids.
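Mixture flash-point models of this family are built on a Le Chatelier-type criterion. A hedged sketch for an ideal binary solvent mixture: activity coefficients are set to 1 (the actual model uses an activity-coefficient model, and here salts are ignored), and the Antoine coefficients and pure-component flash points below are placeholders, not vetted data.

```python
from scipy.optimize import brentq

# Antoine vapor pressure, log10(P) = A - B / (T + C), T in deg C
# (coefficients are illustrative placeholders, not vetted data)
def psat(T, A, B, C):
    return 10 ** (A - B / (T + C))

comps = [  # (mole fraction, Antoine A, B, C, pure flash point in deg C)
    (0.6, 7.117, 1210.6, 229.7, 11.7),  # hypothetical solvent 1
    (0.4, 7.237, 1592.9, 226.2, 13.0),  # hypothetical solvent 2
]

def le_chatelier(T):
    # Criterion: sum_i x_i * P_i(T) / P_i(T_fp,i) = 1 at the mixture
    # flash point, assuming an ideal solution (activity coefficients = 1)
    return sum(x * psat(T, A, B, C) / psat(Tfp, A, B, C)
               for x, A, B, C, Tfp in comps) - 1.0

print("mixture flash point ~ %.1f degC" % brentq(le_chatelier, -50, 100))
```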
Facca, Bryan; Frame, Bill; Triesenberg, Steve
1998-01-01
Ceftizoxime is a widely used beta-lactam antimicrobial agent, but pharmacokinetic data for use with clinically ill patients are lacking. We studied the population pharmacokinetics of ceftizoxime in 72 clinically ill patients at a community-based, university-affiliated hospital. A population pharmacokinetic model for ceftizoxime was created by using a prospective observational design. Ceftizoxime was administered by continuous infusion to treat patients with proven or suspected bacterial infections. While the patients were receiving infusions of ceftizoxime, serum samples were collected for pharmacokinetic analysis with the nonlinear mixed-effect modeling program NONMEM. In addition to clearance and volume of distribution, various comorbidities were examined for their influence on the kinetics. All 72 subjects completed the study, and 114 serum samples were collected. Several demographic and comorbidity variables, namely, age, weight, serum creatinine levels, congestive heart failure, and long-term ventilator dependency, had a significant impact on the estimate for ceftizoxime clearance. A mixture model, or two populations for estimation of ceftizoxime clearance, was discovered. One population presented with an additive clearance component of 1.6 liters per h. In addition, a maximizer function for serum creatinine levels was found. In summary, two models for ceftizoxime clearance, mixture and nonmixture, were found and are presented. Clearance for ceftizoxime can be estimated with commonly available clinical information and the models presented. From the clearance estimates, the dose of ceftizoxime to maintain the desired concentration in serum can be determined. Work is needed to validate the model for drug clearance and to evaluate its predictive performance. PMID:9661021
Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teng, S.; Tebby, C.
Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro-in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentrations in target organs vary over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic (BK/TD) models to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, the mixtures showed no interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.
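The two mixture concepts used above can be stated compactly: under independent action, fractional effects combine like independent probabilities; under concentration addition, the mixture effect E solves sum_i c_i / EC_E,i = 1, where EC_E,i is the concentration of compound i alone producing effect E. A sketch with hypothetical Hill dose-response parameters:

```python
import numpy as np
from scipy.optimize import brentq

def hill(c, ec50, n):
    """Fractional effect (0..1) of a single compound, Hill model."""
    return c**n / (c**n + ec50**n)

# Hypothetical single-compound parameters (EC50, Hill slope) and doses
p1, p2 = (10.0, 1.5), (40.0, 2.0)
c1, c2 = 6.0, 15.0

# Independent action: effects combine like independent probabilities
E_ia = 1 - (1 - hill(c1, *p1)) * (1 - hill(c2, *p2))

# Concentration addition: find E with c1/EC_E,1 + c2/EC_E,2 = 1
def ec_at(E, ec50, n):
    """Concentration of a single compound producing fractional effect E."""
    return ec50 * (E / (1 - E)) ** (1 / n)

E_ca = brentq(lambda E: c1 / ec_at(E, *p1) + c2 / ec_at(E, *p2) - 1,
              1e-9, 1 - 1e-9)
print(f"IA prediction: {E_ia:.3f}, CA prediction: {E_ca:.3f}")
```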
Determination of Failure Point of Asphalt-Mixture Fatigue-Test Results Using the Flow Number Method
NASA Astrophysics Data System (ADS)
Wulan, C. E. P.; Setyawan, A.; Pramesti, F. P.
2018-03-01
The failure point in the results of fatigue tests of asphalt mixtures performed in controlled-stress mode is difficult to determine. However, several methods from empirical studies are available to solve this problem. The objectives of this study are to determine the fatigue failure point from the results of indirect tensile fatigue tests using the Flow Number method and to determine the best Flow Number model for the asphalt mixtures tested. To achieve these goals, the best of three asphalt mixtures was first selected based on their Marshall properties. Next, the Indirect Tensile Fatigue Test was performed on the chosen asphalt mixture. The stress-controlled fatigue tests were conducted at a temperature of 20°C and a frequency of 10 Hz, with the application of three loads: 500, 600, and 700 kPa. The last step was the application of the Flow Number methods, namely the Three-Stages Model, FNest Model, Francken Model, and Stepwise Method, to the results of the fatigue tests to determine the failure point of the specimen. The chosen asphalt mixture is an EVA (ethylene vinyl acetate) polymer-modified asphalt mixture with 6.5% OBC (Optimum Bitumen Content). Furthermore, the results of this study show that the failure points of the EVA-modified asphalt mixture under loads of 500, 600, and 700 kPa are 6621, 4841, and 611 cycles for the Three-Stages Model; 4271, 3266, and 537 for the FNest Model; 3401, 2431, and 421 for the Francken Model; and 6901, 6841, and 1291 for the Stepwise Method, respectively. These results show that the larger the load, the smaller the number of cycles to failure. However, the best FN results are given by the Three-Stages Model and the Stepwise Method, which exhibit extreme increases after the constant development of accumulated strain.
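As an illustration of the Flow Number idea, the Francken model fits permanent strain as a power law plus an exponential term, and the failure point can be located where the curvature of the fitted curve turns positive (the onset of tertiary flow). All parameters below are hypothetical, not values fitted in this study.

```python
import numpy as np

def francken(N, A, B, C, D):
    """Francken permanent-strain model: a power law for the primary and
    secondary stages plus an exponential tertiary-flow term."""
    return A * N**B + C * (np.exp(D * N) - 1.0)

# Hypothetical fitted parameters: B < 1 gives a decelerating start,
# while the exponential term eventually bends the curve upward
A, B, C, D = 300.0, 0.35, 1.0, 2e-3
N = np.arange(1.0, 5001.0)

# Flow Number: first cycle where the curvature d2(eps)/dN2 turns positive
d2 = np.gradient(np.gradient(francken(N, A, B, C, D), N), N)
FN = N[np.argmax(d2 > 0)]
print("Flow Number ~", FN, "cycles")
```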
Model Selection Methods for Mixture Dichotomous IRT Models
ERIC Educational Resources Information Center
Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo
2009-01-01
This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information coefficient (AIC), Bayesian information coefficient (BIC), deviance information coefficient (DIC), pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…
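For reference, the two most common of these indices are simple functions of the maximized log-likelihood, the parameter count k, and the sample size n; a minimal sketch comparing a 1-class and a 2-class fit (all numbers hypothetical):

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 * log-likelihood."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k * log(n) - 2 * log-likelihood."""
    return k * np.log(n) - 2 * loglik

# Hypothetical fits of 1-class and 2-class mixture IRT models, n = 500
n = 500
for label, loglik, k in [("1-class", -6120.0, 20), ("2-class", -6050.0, 41)]:
    print(label, "AIC =", aic(loglik, k), "BIC =", round(bic(loglik, k, n), 1))
```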
Dorazio, R.M.; Royle, J. Andrew
2003-01-01
We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
Chemical mixtures in potable water in the U.S.
Ryker, Sarah J.
2014-01-01
In recent years, regulators have devoted increasing attention to health risks from exposure to multiple chemicals. In 1996, the US Congress directed the US Environmental Protection Agency (EPA) to study mixtures of chemicals in drinking water, with a particular focus on potential interactions affecting chemicals' joint toxicity. The task is complicated by the number of possible mixtures in drinking water and lack of toxicological data for combinations of chemicals. As one step toward risk assessment and regulation of mixtures, the EPA and the Agency for Toxic Substances and Disease Registry (ATSDR) have proposed to estimate mixtures' toxicity based on the interactions of individual component chemicals. This approach permits the use of existing toxicological data on individual chemicals, but still requires additional information on interactions between chemicals and environmental data on the public's exposure to combinations of chemicals. Large compilations of water-quality data have recently become available from federal and state agencies. This chapter demonstrates the use of these environmental data, in combination with the available toxicological data, to explore scenarios for mixture toxicity and develop priorities for future research and regulation. Occurrence data on binary and ternary mixtures of arsenic, cadmium, and manganese are used to parameterize the EPA and ATSDR models for each drinking water source in the dataset. The models' outputs are then mapped at county scale to illustrate the implications of the proposed models for risk assessment and rulemaking. For example, according to the EPA's interaction model, the levels of arsenic and cadmium found in US groundwater are unlikely to have synergistic cardiovascular effects in most areas of the country, but the same mixture's potential for synergistic neurological effects merits further study. Similar analysis could, in future, be used to explore the implications of alternative risk models for the toxicity and interaction of complex mixtures, and to identify the communities with the highest and lowest expected value for regulation of chemical mixtures.
Jiang, Yuhui; Shang, Yixuan; Yu, Shuyao; Liu, Jianguo
2018-01-01
Hexachlorobenzene (HCB) contamination of soils remains a significant environmental challenge worldwide. Reductive stabilization is a developing technology that can decompose HCB through a dechlorination process. In this study, a nanometallic Al/CaO (n-Al/CaO) dispersion mixture was developed using ball-milling technology. The dechlorination efficiency of HCB in contaminated soils by the n-Al/CaO grinding treatment was evaluated. Response surface methodology (RSM) was employed to investigate the effects of three variables (soil moisture content, n-Al/CaO dosage and grinding time) and the interactions between these variables under a Box-Behnken Design (BBD). A high regression coefficient (R2 = 0.9807) and low p value (<0.0001) of the quadratic model indicated that the model was accurate in predicting the experimental results. The optimal soil moisture content, n-Al/CaO dosage, and grinding time were found to be 7% (m/m), 17.7% (m/m), and 24 h, respectively, within the experimental ranges and levels. Under optimal conditions, the dechlorination efficiency was 80%. Intermediate product analysis indicated that dechlorination proceeded by stepwise loss of chlorine atoms. The main pathway observed within 24 h was HCB → pentachlorobenzene (PeCB) → 1,2,3,4-tetrachlorobenzene (TeCB) and 1,2,4,5-TeCB. The results indicated that a moderate soil moisture content was crucial for the hydrodechlorination of HCB. A probable mechanism was proposed wherein water acted as a hydrogen donor and promoted the hydrodechlorination process. The potential application of n-Al/CaO is an environmentally friendly and cost-effective option for decontamination of HCB-contaminated soils. PMID:29702570
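The quadratic surface underlying a Box-Behnken RSM analysis like this one is an ordinary least-squares fit of a full second-order polynomial in the coded factors. A self-contained sketch with simulated runs; all coefficients and data are invented for illustration, not the paper's fitted model.

```python
import numpy as np

def design_matrix(X):
    """Full quadratic model in three coded factors, e.g. moisture,
    n-Al/CaO dosage, and grinding time: intercept, linear,
    two-factor interaction, and squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, (15, 3))  # coded runs (Box-Behnken-like size)
beta_true = np.array([80, 5, 8, 3, 1, 0, 2, -6, -9, -4.0])
y = design_matrix(X) @ beta_true + rng.normal(0, 1.5, 15)  # simulated response

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
pred = design_matrix(X) @ beta
r2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)
print("fitted coefficients:", np.round(beta, 2), "\nR^2 =", round(r2, 4))
```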