Sample records for multinomial mixture models

  1. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as Mb and Mh, as well as other classes of models that can only be described within the multinomial N-mixture framework.
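
The removal-sampling protocol mentioned above can be sketched numerically. The following is a minimal illustration of the multinomial N-mixture marginal likelihood for one site, not the book's BUGS or unmarked code; the counts and parameter values are invented for the example.

```python
# A minimal sketch: marginal likelihood of removal-sampling counts under a
# Poisson multinomial N-mixture (illustrative values, not the book's code).
import numpy as np
from scipy.stats import poisson, multinomial

def removal_cell_probs(p, J):
    """Multinomial cell probabilities for J removal passes, plus Pr(never caught)."""
    pi = np.array([p * (1 - p) ** (j - 1) for j in range(1, J + 1)])
    return pi, 1.0 - pi.sum()

def site_likelihood(y, lam, p, n_max=200):
    """Marginal likelihood of one site's removal counts y, integrating N out."""
    J = len(y)
    pi, pi0 = removal_cell_probs(p, J)
    total = int(np.sum(y))
    lik = 0.0
    for N in range(total, n_max + 1):           # sum over feasible abundances
        cells = np.append(y, N - total)         # last cell = never-caught animals
        lik += poisson.pmf(N, lam) * multinomial.pmf(cells, N, np.append(pi, pi0))
    return lik

y = np.array([14, 8, 5])                        # three removal passes at one site
print(site_likelihood(y, lam=40.0, p=0.3))
```

Maximizing this quantity over sites (or placing priors on lam and p) gives the likelihood and Bayesian analyses the chapter describes.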

  2. Multinomial N-mixture models improve the applicability of electrofishing for developing population estimates of stream-dwelling Smallmouth Bass

    USGS Publications Warehouse

    Mollenhauer, Robert; Brewer, Shannon K.

    2017-01-01

    Failure to account for variable detection across survey conditions constrains progressive stream ecology and can lead to erroneous stream fish management and conservation decisions. Variable detection not only confounds long-term stream fish population trends; reliable abundance estimates across a wide range of survey conditions are also fundamental to establishing species–environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and catch per unit effort (CPUE) remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates that were related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys.
Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the hierarchical framework. We demonstrate the application of this contemporary population estimation method to address a longstanding stream fish management issue. We also detail the advantages and trade-offs of hierarchical population estimation methods relative to CPUE and estimation methods that model each site separately.

  3. Identifiability in N-mixture models: a large-scale screening test with bird data.

    PubMed

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models, or the use of external information via informative priors or penalized likelihoods may help. © 2017 by the Ecological Society of America.
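
A Poisson binomial N-mixture of the kind screened here can be fitted by maximizing the marginal likelihood directly. Below is a minimal sketch on simulated counts, not the author's screening code; all parameter values and sizes are illustrative.

```python
# A sketch of the Poisson binomial N-mixture likelihood: repeated counts
# y_1..y_T at a site are Binomial(N, p) with latent N ~ Poisson(lambda).
import numpy as np
from scipy.stats import binom, poisson
from scipy.optimize import minimize

def neg_log_lik(theta, Y, n_max=100):
    lam = np.exp(theta[0])                    # enforce lambda > 0
    p = 1 / (1 + np.exp(-theta[1]))           # enforce 0 < p < 1
    N = np.arange(n_max + 1)
    prior = poisson.pmf(N, lam)
    ll = 0.0
    for y in Y:                               # rows = sites, columns = visits
        cond = np.prod(binom.pmf(y[:, None], N[None, :], p), axis=0)
        ll += np.log(np.dot(cond, prior) + 1e-300)
    return -ll

rng = np.random.default_rng(1)
N_true = rng.poisson(5.0, size=100)                      # 100 sites, lambda = 5
Y = rng.binomial(N_true[:, None], 0.4, size=(100, 3))    # 3 visits, p = 0.4
fit = minimize(neg_log_lik, x0=[np.log(3.0), 0.0], args=(Y,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
print(lam_hat, p_hat)
```

The product lambda × p (expected count) is always well pinned by the data; the screening question is whether lambda and p separate cleanly, which the paper reports they usually do under Poisson and ZIP mixtures.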

  4. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial parameters and correct-classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
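
The source of the bias can be seen with a short Monte Carlo check: when correct-classification probabilities vary on the logit scale across sampling units, their average differs from the probability at the average logit (Jensen's inequality), so a common plug-in value is biased. The mean and spread below are illustrative, not values from the paper.

```python
# Illustrative check: E[logistic(mu + sigma*Z)] != logistic(mu) when sigma > 0,
# which is why ignoring unit-level variation biases classification probabilities.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 1.0                           # assumed logit-scale mean and sd
p_units = 1 / (1 + np.exp(-(mu + sigma * rng.standard_normal(100_000))))
p_at_mean = 1 / (1 + np.exp(-mu))              # naive plug-in at the logit mean
print(p_units.mean(), p_at_mean)               # the two differ
```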

  5. A Dirichlet-Multinomial Bayes Classifier for Disease Diagnosis with Microbial Compositions.

    PubMed

    Gao, Xiang; Lin, Huaiying; Dong, Qunfeng

    2017-01-01

    Dysbiosis of microbial communities is associated with various human diseases, raising the possibility of using microbial compositions as biomarkers for disease diagnosis. We have developed a Bayes classifier by modeling microbial compositions with Dirichlet-multinomial distributions, which are widely used to model multicategorical count data with extra variation. The parameters of the Dirichlet-multinomial distributions are estimated from training microbiome data sets based on maximum likelihood. The posterior probability of a microbiome sample belonging to a disease or healthy category is calculated based on Bayes' theorem, using the likelihood values computed from the estimated Dirichlet-multinomial distribution, as well as a prior probability estimated from the training microbiome data set or previously published information on disease prevalence. When tested on real-world microbiome data sets, our method, called DMBC (for Dirichlet-multinomial Bayes classifier), shows better classification accuracy than the only existing Bayesian microbiome classifier based on a Dirichlet-multinomial mixture model and the popular random forest method. The advantage of DMBC is its built-in automatic feature selection, capable of identifying a subset of microbial taxa with the best classification accuracy between different classes of samples based on cross-validation. This unique ability enables DMBC to maintain and even improve its accuracy at modeling species-level taxa. The R package for DMBC is freely available at https://github.com/qunfengdong/DMBC. IMPORTANCE By incorporating prior information on disease prevalence, Bayes classifiers have the potential to estimate disease probability better than other common machine-learning methods. Thus, it is important to develop Bayes classifiers specifically tailored for microbiome data. 
Our method shows higher classification accuracy than the only existing Bayesian classifier and the popular random forest method, and thus provides an alternative option for using microbial compositions for disease diagnosis.
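
The classifier's core computation can be sketched as follows. This is a minimal illustration of the Dirichlet-multinomial likelihood combined with Bayes' theorem, not the DMBC R package itself; the taxa, alpha vectors, priors, and sample counts are invented for the example.

```python
# A sketch of a Dirichlet-multinomial Bayes classifier: score a count vector
# under per-class DM models, then normalize with class priors.
import numpy as np
from scipy.special import gammaln

def dm_log_pmf(x, alpha):
    """Dirichlet-multinomial log probability of count vector x given alpha."""
    n, A = x.sum(), alpha.sum()
    return (gammaln(n + 1) - gammaln(x + 1).sum()
            + gammaln(A) - gammaln(n + A)
            + (gammaln(x + alpha) - gammaln(alpha)).sum())

def classify(x, alphas, priors):
    """Posterior class probabilities for one microbiome count vector x."""
    log_post = np.log(priors) + np.array([dm_log_pmf(x, a) for a in alphas])
    log_post -= log_post.max()                 # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Hypothetical 4-taxon example: 'healthy' vs 'disease' alphas and a new sample.
alphas = [np.array([8.0, 4.0, 2.0, 1.0]), np.array([1.0, 2.0, 4.0, 8.0])]
post = classify(np.array([50, 25, 15, 10]), alphas, priors=np.array([0.5, 0.5]))
print(post)    # higher posterior for the class the counts resemble
```

In DMBC itself the alphas are estimated from training microbiomes by maximum likelihood and the taxa are selected by cross-validation; the sketch only shows the scoring step.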

  6. Estimating wetland vegetation abundance from Landsat-8 operational land imager imagery: a comparison between linear spectral mixture analysis and multinomial logit modeling methods

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Gong, Zhaoning; Zhao, Wenji; Pu, Ruiliang; Liu, Ke

    2016-01-01

    Mapping vegetation abundance by using remote sensing data is an efficient means for detecting changes of an eco-environment. With Landsat-8 operational land imager (OLI) imagery acquired on July 31, 2013, both linear spectral mixture analysis (LSMA) and multinomial logit model (MNLM) methods were applied to estimate and assess the vegetation abundance in the Wild Duck Lake Wetland in Beijing, China. To improve mapping vegetation abundance and increase the number of endmembers in spectral mixture analysis, normalized difference vegetation index was extracted from OLI imagery along with the seven reflective bands of OLI data for estimating the vegetation abundance. Five endmembers were selected, which include terrestrial plants, aquatic plants, bare soil, high albedo, and low albedo. The vegetation abundance mapping results from Landsat OLI data were finally evaluated by utilizing WorldView-2 multispectral imagery. Similar spatial patterns of vegetation abundance produced by both fully constrained LSMA algorithm and MNLM methods were observed: higher vegetation abundance levels were distributed in agricultural and riparian areas while lower levels occurred in urban/built-up areas. The experimental results also indicate that the MNLM model outperformed the LSMA algorithm with smaller root mean square error (0.0152 versus 0.0252) and higher coefficient of determination (0.7856 versus 0.7214) as the MNLM model could handle the nonlinear reflection phenomenon better than the LSMA with mixed pixels.
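
The fully constrained LSMA step can be sketched with non-negative least squares plus a weighted sum-to-one row, a standard device for fully constrained unmixing. The three-band spectra and the 60/40 mixture below are invented, not the OLI endmembers from the study.

```python
# Fully constrained linear spectral unmixing sketch: endmember fractions must
# be non-negative and sum to one; the sum-to-one constraint is enforced softly
# by appending a heavily weighted row to the NNLS system.
import numpy as np
from scipy.optimize import nnls

def fclsu(pixel, endmembers, weight=1e3):
    """pixel: (bands,), endmembers: (bands, k) -> fraction vector (k,)."""
    E = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    y = np.append(pixel, weight)
    frac, _ = nnls(E, y)
    return frac

# Hypothetical 3-band, 2-endmember example: a 60/40 mixture plus slight noise.
E = np.array([[0.1, 0.8], [0.2, 0.6], [0.7, 0.1]])   # columns = endmember spectra
pixel = 0.6 * E[:, 0] + 0.4 * E[:, 1] + 0.003
frac = fclsu(pixel, E)
print(frac)    # close to [0.6, 0.4]
```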

  7. Using a multinomial tree model for detecting mixtures in perceptual detection

    PubMed Central

    Chechile, Richard A.

    2014-01-01

    In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
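
The James–Stein-style adjustment invoked at the end of the abstract can be sketched for the multiple-group normal case. This is a generic positive-part shrinkage of per-observer estimates toward the pooled mean, not the authors' exact adjustment; the simulated values are illustrative.

```python
# Positive-part James-Stein sketch: pull each observer's noisy estimate toward
# the grand mean; with several groups this reduces total squared-error risk.
import numpy as np

def shrink_toward_pool(means, sigma2):
    """Shrink group means toward their grand mean (assumes > 3 groups)."""
    k = len(means)
    grand = means.mean()
    ss = np.sum((means - grand) ** 2)
    shrink = max(0.0, 1.0 - (k - 3) * sigma2 / ss)
    return grand + shrink * (means - grand)

rng = np.random.default_rng(7)
theta = rng.normal(0.0, 0.5, size=20)               # true per-observer values
obs = theta + rng.normal(0.0, 1.0, size=20)         # noisy individual estimates
js = shrink_toward_pool(obs, sigma2=1.0)
print(np.mean((obs - theta) ** 2), np.mean((js - theta) ** 2))  # JS risk smaller
```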

  8. A Process View on Implementing an Antibullying Curriculum: How Teachers Differ and What Explains the Variation

    ERIC Educational Resources Information Center

    Haataja, Anne; Ahtola, Annarilla; Poskiparta, Elisa; Salmivalli, Christina

    2015-01-01

    The present study provides a person-centered view on teachers' adherence to the KiVa antibullying curriculum over a school year. Factor mixture modeling was used to examine how teachers (N = 282) differed in their implementation profiles and multinomial logistic regression was used to identify factors related to these profiles. On the basis of…

  9. Modeling health survey data with excessive zero and K responses.

    PubMed

    Lin, Ting Hsiang; Tsai, Min-Hsiao

    2013-04-30

    Zero-inflated Poisson regression is a popular tool used to analyze data with excessive zeros. Although much work has already been performed to fit zero-inflated data, most models heavily depend on special features of the individual data. To be specific, this means that there is a sizable group of respondents who endorse the same answers making the data have peaks. In this paper, we propose a new model with the flexibility to model excessive counts other than zero, and the model is a mixture of multinomial logistic and Poisson regression, in which the multinomial logistic component models the occurrence of excessive counts, including zeros, K (where K is a positive integer) and all other values. The Poisson regression component models the counts that are assumed to follow a Poisson distribution. Two examples are provided to illustrate our models when the data have counts containing many ones and sixes. As a result, the zero-inflated and K-inflated models exhibit a better fit than the zero-inflated Poisson and standard Poisson regressions. Copyright © 2012 John Wiley & Sons, Ltd.
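
The proposed mixture's probability mass function can be sketched directly. This is a simplified version without covariates: the multinomial-logistic component is collapsed here to two fixed inflation weights, and the values are invented.

```python
# A sketch of a zero- and K-inflated Poisson: a three-way split over
# {excess zero, excess K, ordinary Poisson count}.
import numpy as np
from scipy.stats import poisson

def zk_inflated_pmf(y, lam, w_zero, w_k, K):
    """P(Y=y) with extra mass w_zero at 0 and w_k at K; the rest is Poisson."""
    w_pois = 1.0 - w_zero - w_k
    base = w_pois * poisson.pmf(y, lam)
    return base + w_zero * (y == 0) + w_k * (y == K)

y = np.arange(0, 15)
pmf = zk_inflated_pmf(y, lam=3.0, w_zero=0.15, w_k=0.10, K=6)
print(pmf[0], pmf[6])    # peaks at 0 and at K = 6 beyond plain Poisson
```

In the paper the three mixture weights are themselves modeled by multinomial logistic regression on covariates rather than fixed as here.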

  10. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
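
The finite-mixture structure can be sketched for a two-state tree with response times. This is a toy illustration of the GPT idea, not the authors' software; the states, means, and the shared standard deviation are invented.

```python
# GPT-style sketch: processing-tree branch probabilities supply mixture
# weights, and each latent state contributes a Gaussian response-time component.
import numpy as np
from scipy.stats import norm

def gpt_density(t, detect_prob, mu_detect, mu_guess, sd):
    """RT density: mixture of 'detect' and 'guess' states weighted by the tree."""
    w = np.array([detect_prob, 1.0 - detect_prob])
    comps = np.array([norm.pdf(t, mu_detect, sd), norm.pdf(t, mu_guess, sd)])
    return w @ comps

t = np.linspace(-5, 10, 20001)
d = gpt_density(t, detect_prob=0.7, mu_detect=1.0, mu_guess=2.0, sd=0.5)
print(d.max())
```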

  11. A general class of multinomial mixture models for anuran calling survey data

    USGS Publications Warehouse

    Royle, J. Andrew; Link, W.A.

    2005-01-01

    We propose a general framework for modeling anuran abundance using data collected from commonly used calling surveys. The data generated from calling surveys are indices of calling intensity (vocalization of males) that do not have a precise link to actual population size and are sensitive to factors that influence anuran behavior. We formulate a model for calling-index data in terms of the maximum potential calling index that could be observed at a site (the 'latent abundance class'), given its underlying breeding population, and we focus attention on estimating the distribution of this latent abundance class. A critical consideration in estimating the latent structure is imperfect detection, which causes the observed abundance index to be less than or equal to the latent abundance class. We specify a multinomial sampling model for the observed abundance index that is conditional on the latent abundance class. Estimation of the latent abundance class distribution is based on the marginal likelihood of the index data, having integrated over the latent class distribution. We apply the proposed modeling framework to data collected as part of the North American Amphibian Monitoring Program (NAAMP).

  12. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    NASA Astrophysics Data System (ADS)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler that involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
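
The building block of Markov chain clustering is the transition matrix of a first-order, time-homogeneous chain. A minimal sketch of its maximum-likelihood estimate from a single categorical series follows; the sequence is invented (e.g. coded wage categories).

```python
# Transition-matrix MLE by transition counting; rows with no observed
# transitions fall back to a uniform row.
import numpy as np

def transition_mle(seq, n_states):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows,
                     out=np.full_like(counts, 1.0 / n_states),
                     where=rows > 0)

seq = [0, 0, 1, 2, 2, 2, 1, 0, 0, 1, 2, 2, 0]   # invented categorical series
P = transition_mle(seq, 3)
print(P)    # rows sum to one
```

Markov chain clustering fits a finite mixture of such matrices across a panel of series; the extension in this paper additionally links the latent group indicators to covariates through a multinomial logit model.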

  13. Mixture Model and MDSDCA for Textual Data

    NASA Astrophysics Data System (ADS)

    Allouti, Faryel; Nadif, Mohamed; Hoai An, Le Thi; Otjacques, Benoît

    E-mailing has become an essential component of cooperation in business. Consequently, the large number of messages manually produced or automatically generated can rapidly cause information overflow for users. Many research projects have examined this issue but surprisingly few have tackled the problem of the files attached to e-mails that, in many cases, contain a substantial part of the semantics of the message. This paper considers this specific topic and focuses on the problem of clustering and visualization of attached files. Relying on the multinomial mixture model, we used the Classification EM algorithm (CEM) to cluster the set of files, and MDSDCA to visualize the obtained classes of documents. Like the Multidimensional Scaling method, the aim of the MDSDCA algorithm based on the Difference of Convex functions is to optimize the stress criterion. As MDSDCA is iterative, we propose an initialization approach to avoid starting with random values. Experiments are investigated using simulations and textual data.

  14. Multiple-Shrinkage Multinomial Probit Models with Applications to Simulating Geographies in Public Use Data.

    PubMed

    Burgette, Lane F; Reiter, Jerome P

    2013-06-01

    Multinomial outcomes with many levels can be challenging to model. Information typically accrues slowly with increasing sample size, yet the parameter space expands rapidly with additional covariates. Shrinking all regression parameters towards zero, as often done in models of continuous or binary response variables, is unsatisfactory, since setting parameters equal to zero in multinomial models does not necessarily imply "no effect." We propose an approach to modeling multinomial outcomes with many levels based on a Bayesian multinomial probit (MNP) model and a multiple shrinkage prior distribution for the regression parameters. The prior distribution encourages the MNP regression parameters to shrink toward a number of learned locations, thereby substantially reducing the dimension of the parameter space. Using simulated data, we compare the predictive performance of this model against two other recently-proposed methods for big multinomial models. The results suggest that the fully Bayesian, multiple shrinkage approach can outperform these other methods. We apply the multiple shrinkage MNP to simulating replacement values for areal identifiers, e.g., census tract indicators, in order to protect data confidentiality in public use datasets.

  15. Markov switching multinomial logit model: An application to accident-injury severities.

    PubMed

    Malyshkina, Nataliya V; Mannering, Fred L

    2009-07-01

    In this study, two-state Markov switching multinomial logit models are proposed for statistical modeling of accident-injury severities. These models assume Markov switching over time between two unobserved states of roadway safety as a means of accounting for potential unobserved heterogeneity. The states are distinct in the sense that in different states accident-severity outcomes are generated by separate multinomial logit processes. To demonstrate the applicability of the approach, two-state Markov switching multinomial logit models are estimated for severity outcomes of accidents occurring on Indiana roads over a four-year time period. Bayesian inference methods and Markov Chain Monte Carlo (MCMC) simulations are used for model estimation. The estimated Markov switching models result in a superior statistical fit relative to the standard (single-state) multinomial logit models for a number of roadway classes and accident types. It is found that the more frequent state of roadway safety is correlated with better weather conditions and that the less frequent state is correlated with adverse weather conditions.

  16. Widen NomoGram for multinomial logistic regression: an application to staging liver fibrosis in chronic hepatitis C patients.

    PubMed

    Ardoino, Ilaria; Lanzoni, Monica; Marano, Giuseppe; Boracchi, Patrizia; Sagrini, Elisabetta; Gianstefani, Alice; Piscaglia, Fabio; Biganzoli, Elia M

    2017-04-01

    The interpretation of regression models results can often benefit from the generation of nomograms, 'user friendly' graphical devices especially useful for assisting the decision-making processes. However, in the case of multinomial regression models, whenever categorical responses with more than two classes are involved, nomograms cannot be drawn in the conventional way. This difficulty in managing and interpreting the outcome often limits the use of multinomial regression in decision-making support. In the present paper, we illustrate the derivation of a non-conventional nomogram for multinomial regression models, intended to overcome this issue. Although it may appear less straightforward at first sight, the proposed methodology allows an easy interpretation of the results of multinomial regression models and makes them more accessible for clinicians and general practitioners too. Development of prediction model based on multinomial logistic regression and of the pertinent graphical tool is illustrated by means of an example involving the prediction of the extent of liver fibrosis in hepatitis C patients by routinely available markers.

  17. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions

    PubMed Central

    Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392

  18. Insights into the latent multinomial model through mark-resight data on female grizzly bears with cubs-of-the-year

    USGS Publications Warehouse

    Higgs, Megan D.; Link, William; White, Gary C.; Haroldson, Mark A.; Bjornlie, Daniel D.

    2013-01-01

    Mark-resight designs for estimation of population abundance are common and attractive to researchers. However, inference from such designs is very limited when faced with sparse data, either from a low number of marked animals, a low probability of detection, or both. In the Greater Yellowstone Ecosystem, yearly mark-resight data are collected for female grizzly bears with cubs-of-the-year (FCOY), and inference suffers from both limitations. To overcome difficulties due to sparseness, we assume homogeneity in sighting probabilities over 16 years of bi-annual aerial surveys. We model counts of marked and unmarked animals as multinomial random variables, using the capture frequencies of marked animals for inference about the latent multinomial frequencies for unmarked animals. We discuss undesirable behavior of the commonly used discrete uniform prior distribution on the population size parameter and provide OpenBUGS code for fitting such models. The application provides valuable insights into subtleties of implementing Bayesian inference for latent multinomial models. We tie the discussion to our application, though the insights are broadly useful for applications of the latent multinomial model.

  19. Modeling Unconscious Gender Bias in Fame Judgments: Finding the Proper Branch of the Correct (Multinomial) Tree

    PubMed

    Draine; Greenwald; Banaji

    1996-03-01

    In the preceding article, Buchner and Wippich used a guessing-corrected, multinomial process-dissociation analysis to test whether a gender bias in fame judgments reported by Banaji and Greenwald (Journal of Personality and Social Psychology, 1995, 68, 181-198) was unconscious. In their two experiments, Buchner and Wippich found no evidence for unconscious mediation of this gender bias. Their conclusion can be questioned by noting that (a) the gender difference in familiarity of previously seen names that Buchner and Wippich modeled was different from the gender difference in criterion for fame judgments reported by Banaji and Greenwald, (b) the assumptions of Buchner and Wippich's multinomial model excluded processes that are plausibly involved in the fame judgment task, and (c) the constructs of Buchner and Wippich's model that corresponded most closely to Banaji and Greenwald's gender-bias interpretation were formulated so as to preclude the possibility of modeling that interpretation. Perhaps a more complex multinomial model can model the Banaji and Greenwald interpretation.

  20. Modeling unconscious gender bias in fame judgments: finding the proper branch of the correct (multinomial) tree.

    PubMed

    Draine, S C; Greenwald, A G; Banaji, M R

    1996-01-01

    In the preceding article, Buchner and Wippich used a guessing-corrected, multinomial process-dissociation analysis to test whether a gender bias in fame judgements reported by Banaji and Greenwald (Journal of Personality and Social Psychology, 1995, 68, 181-198) was unconscious. In their two experiments, Buchner and Wippich found no evidence for unconscious mediation of this gender bias. Their conclusion can be questioned by noting that (a) the gender difference in familiarity of previously seen names that Buchner and Wippich modeled was different from the gender difference in criterion for fame judgements reported by Banaji and Greenwald, (b) the assumptions of Buchner and Wippich's multinomial model excluded processes that are plausibly involved in the fame judgement task, and (c) the constructs of Buchner and Wippich's model that corresponded most closely to Banaji and Greenwald's gender-bias interpretation were formulated so as to preclude the possibility of modeling that interpretation. Perhaps a more complex multinomial model can model the Banaji and Greenwald interpretation.

  21. Analysis of multinomial models with unknown index using data augmentation

    USGS Publications Warehouse

    Royle, J. Andrew; Dorazio, R.M.; Link, W.A.

    2007-01-01

    Multinomial models with unknown index ('sample size') arise in many practical settings. In practice, Bayesian analysis of such models has proved difficult because the dimension of the parameter space is not fixed, being in some cases a function of the unknown index. We describe a data augmentation approach to the analysis of this class of models that provides for a generic and efficient Bayesian implementation. Under this approach, the data are augmented with all-zero detection histories. The resulting augmented dataset is modeled as a zero-inflated version of the complete-data model where an estimable zero-inflation parameter takes the place of the unknown multinomial index. Interestingly, data augmentation can be justified as being equivalent to imposing a discrete uniform prior on the multinomial index. We provide three examples involving estimating the size of an animal population, estimating the number of diabetes cases in a population using the Rasch model, and the motivating example of estimating the number of species in an animal community with latent probabilities of species occurrence and detection.
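
The augmentation trick can be sketched for a simple model-M0 capture-recapture likelihood. This is an illustrative reduction, not the paper's full implementation; the number of occasions J, the augmentation size M, and the detection counts are invented.

```python
# Data-augmentation sketch: augment n observed detection histories with M - n
# all-zero histories and replace the unknown index N by a zero-inflation
# parameter psi (each augmented row is a real individual with probability psi).
import numpy as np
from scipy.stats import binom

def augmented_log_lik(psi, p, n_detections, J, M, n_observed):
    """Model M0 with J occasions; n_detections: per-individual counts (> 0)."""
    ll = np.log(psi * binom.pmf(n_detections, J, p)).sum()   # observed rows
    zero = (1 - psi) + psi * (1 - p) ** J                    # one all-zero row
    ll += (M - n_observed) * np.log(zero)
    return ll

n_det = np.array([1, 2, 1, 3, 1, 2])    # six animals detected at least once
ll = augmented_log_lik(psi=0.3, p=0.4, n_detections=n_det, J=5, M=20, n_observed=6)
print(ll)
```

Maximizing (or sampling) over psi and p, the estimated population size is recovered as N-hat = M × psi-hat, which is the sense in which the zero-inflation parameter takes the place of the unknown multinomial index.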

  22. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices

    PubMed Central

    Ye, Xin; Pendyala, Ram M.; Zou, Yajie

    2017-01-01

    A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences. PMID:29073152
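
For reference, the baseline this paper generalizes is the standard multinomial logit, whose Gumbel error assumption yields softmax choice probabilities. A minimal sketch with invented systematic utilities for the four modes:

```python
# Standard multinomial logit choice probabilities: softmax over systematic
# utilities, which follows from i.i.d. Gumbel error terms.
import numpy as np

def mnl_probs(V):
    """Choice probabilities from systematic utilities V."""
    e = np.exp(V - V.max())       # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical utilities for auto, transit, bicycle, walk.
V = np.array([1.2, 0.8, -0.3, -1.0])
p = mnl_probs(V)
print(p)    # sums to one; auto most likely
```

The paper's contribution is to relax the Gumbel assumption with an orthonormal-polynomial expansion of the error density while keeping a closed-form log-likelihood.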

  23. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices.

    PubMed

    Wang, Ke; Ye, Xin; Pendyala, Ram M; Zou, Yajie

    2017-01-01

    A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences.

  4. Hierarchical Multinomial Processing Tree Models: A Latent-Trait Approach

    ERIC Educational Resources Information Center

    Klauer, Karl Christoph

    2010-01-01

    Multinomial processing tree models are widely used in many areas of psychology. A hierarchical extension of the model class is proposed, using a multivariate normal distribution of person-level parameters with the mean and covariance matrix to be estimated from the data. The hierarchical model allows one to take variability between persons into…

  5. Modeling Polytomous Item Responses Using Simultaneously Estimated Multinomial Logistic Regression Models

    ERIC Educational Resources Information Center

    Anderson, Carolyn J.; Verkuilen, Jay; Peyton, Buddy L.

    2010-01-01

    Survey items with multiple response categories and multiple-choice test questions are ubiquitous in psychological and educational research. We illustrate the use of log-multiplicative association (LMA) models that are extensions of the well-known multinomial logistic regression model for multiple dependent outcome variables to reanalyze a set of…

  6. Multinomial logistic regression analysis for differentiating 3 treatment outcome trajectory groups for headache-associated disability.

    PubMed

    Lewis, Kristin Nicole; Heckman, Bernadette Davantes; Himawan, Lina

    2011-08-01

    Growth mixture modeling (GMM) identified latent groups based on treatment outcome trajectories of headache disability measures in patients in headache subspecialty treatment clinics. Using a longitudinal design, 219 patients in headache subspecialty clinics in 4 large cities throughout Ohio provided data on their headache disability at pretreatment and 3 follow-up assessments. GMM identified 3 treatment outcome trajectory groups: (1) patients who initiated treatment with elevated disability levels and who reported statistically significant reductions in headache disability (high-disability improvers; 11%); (2) patients who initiated treatment with elevated disability but who reported no reductions in disability (high-disability nonimprovers; 34%); and (3) patients who initiated treatment with moderate disability and who reported statistically significant reductions in headache disability (moderate-disability improvers; 55%). Based on the final multinomial logistic regression model, a dichotomized treatment appointment attendance variable was a statistically significant predictor for differentiating high-disability improvers from high-disability nonimprovers. Three-fourths of patients who initiated treatment with elevated disability levels did not report reductions in disability after 5 months of treatment with new preventive pharmacotherapies. Preventive headache agents may be most efficacious for patients with moderate levels of disability and for patients with high disability levels who attend all treatment appointments. Copyright © 2011 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  7. Composite Linear Models | Division of Cancer Prevention

    Cancer.gov

    By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty

  8. Multinomial Logistic Regression Predicted Probability Map To Visualize The Influence Of Socio-Economic Factors On Breast Cancer Occurrence in Southern Karnataka

    NASA Astrophysics Data System (ADS)

    Madhu, B.; Ashok, N. C.; Balasubramanian, S.

    2014-11-01

    Multinomial logistic regression analysis was used to develop a statistical model that can predict the probability of breast cancer in Southern Karnataka using breast cancer occurrence data from 2007-2011. Independent socio-economic variables describing breast cancer occurrence, such as age, education, occupation, parity, type of family, health insurance coverage, residential locality and socio-economic status, were obtained for each case. The models were developed as follows: i) spatial visualization of the urban-rural distribution of breast cancer cases obtained from the Bharat Hospital and Institute of Oncology; ii) socio-economic risk factors describing breast cancer occurrence were compiled for each case; these data were then analysed using multinomial logistic regression in SPSS statistical software, relations between the occurrence of breast cancer across socio-economic status and the influence of other socio-economic variables were evaluated, and multinomial logistic regression models were constructed; iii) the model that best predicted the occurrence of breast cancer was identified. This multivariate logistic regression model was entered into a geographic information system, and maps showing the predicted probability of breast cancer occurrence in Southern Karnataka were created. This study demonstrates that multinomial logistic regression is a valuable tool for developing models that predict the probability of breast cancer occurrence in Southern Karnataka.

  9. A Multinomial Model of Event-Based Prospective Memory

    ERIC Educational Resources Information Center

    Smith, Rebekah E.; Bayen, Ute J.

    2004-01-01

    Prospective memory is remembering to perform an action in the future. The authors introduce the 1st formal model of event-based prospective memory, namely, a multinomial model that includes 2 separate parameters related to prospective memory processes. The 1st measures preparatory attentional processes, and the 2nd measures retrospective memory…
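The multinomial-modeling idea can be sketched with a deliberately simplified, hypothetical two-parameter tree (not Smith and Bayen's published parameterization): it shows how observed category counts identify separate process parameters via maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

# Toy processing tree with three observable response categories:
#   notice the target (prob p) and retrieve the intention (prob q) -> category 1: p*q
#   notice the target but fail retrieval                           -> category 2: p*(1-q)
#   fail to notice the target                                      -> category 3: 1-p
def category_probs(theta):
    p, q = theta
    return np.array([p * q, p * (1 - q), 1 - p])

counts = np.array([45, 15, 40])               # hypothetical response frequencies

def neg_loglik(theta):
    probs = np.clip(category_probs(theta), 1e-12, 1.0)
    return -np.sum(counts * np.log(probs))    # multinomial log-likelihood (up to a constant)

res = minimize(neg_loglik, x0=[0.5, 0.5], bounds=[(1e-6, 1 - 1e-6)] * 2)
p_hat, q_hat = res.x
```

For this toy tree the MLE has a closed form (p = (n1+n2)/n, q = n1/(n1+n2)), so the numerical fit can be checked by hand: p ≈ 0.60 and q ≈ 0.75 for the counts above.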

  10. CLUSTERING SOUTH AFRICAN HOUSEHOLDS BASED ON THEIR ASSET STATUS USING LATENT VARIABLE MODELS

    PubMed Central

    McParland, Damien; Gormley, Isobel Claire; McCormick, Tyler H.; Clark, Samuel J.; Kabudula, Chodziwadziwa Whiteson; Collinson, Mark A.

    2014-01-01

    The Agincourt Health and Demographic Surveillance System has since 2001 conducted a biannual household asset survey in order to quantify household socio-economic status (SES) in a rural population living in northeast South Africa. The survey contains binary, ordinal and nominal items. In the absence of income or expenditure data, the SES landscape in the study population is explored and described by clustering the households into homogeneous groups based on their asset status. A model-based approach to clustering the Agincourt households, based on latent variable models, is proposed. In the case of modeling binary or ordinal items, item response theory models are employed. For nominal survey items, a factor analysis model, similar in nature to a multinomial probit model, is used. Both model types have an underlying latent variable structure—this similarity is exploited and the models are combined to produce a hybrid model capable of handling mixed data types. Further, a mixture of the hybrid models is considered to provide clustering capabilities within the context of mixed binary, ordinal and nominal response data. The proposed model is termed a mixture of factor analyzers for mixed data (MFA-MD). The MFA-MD model is applied to the survey data to cluster the Agincourt households into homogeneous groups. The model is estimated within the Bayesian paradigm, using a Markov chain Monte Carlo algorithm. Intuitive groupings result, providing insight to the different socio-economic strata within the Agincourt region. PMID:25485026
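The latent-class idea underlying the clustering can be sketched with a much simpler relative: an EM algorithm for a two-class mixture of independent Bernoulli items. The asset indicators and group profiles below are synthetic, and the full MFA-MD model (mixed binary, ordinal and nominal items with Bayesian estimation) is considerably richer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary asset indicators for two latent SES groups (hypothetical)
true_theta = np.array([[0.9, 0.8, 0.9, 0.2],   # group 0: owns most assets
                       [0.1, 0.2, 0.1, 0.7]])  # group 1
z_true = rng.integers(0, 2, size=500)
X = rng.binomial(1, true_theta[z_true])

# EM for a two-class mixture of independent Bernoullis
K = 2
pi = np.full(K, 1.0 / K)
theta = rng.uniform(0.3, 0.7, size=(K, X.shape[1]))
for _ in range(100):
    # E-step: responsibilities given the current parameters
    log_p = np.log(pi) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
    r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and item probabilities
    pi = r.mean(axis=0)
    theta = (r.T @ X) / r.sum(axis=0)[:, None]
    theta = np.clip(theta, 1e-6, 1 - 1e-6)

labels = r.argmax(axis=1)                      # hard cluster assignments
```

On well-separated groups like these, the recovered clusters match the generating labels almost perfectly (up to label switching).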

  11. Bayesian multinomial probit modeling of daily windows of susceptibility for maternal PM2.5 exposure and congenital heart defects

    EPA Science Inventory

    Past epidemiologic studies suggest maternal ambient air pollution exposure during critical periods of the pregnancy is associated with fetal development. We introduce a multinomial probit model that allows for the joint identification of susceptible daily periods during the pregn...

  12. Institutional Climate and Student Departure: A Multinomial Multilevel Modeling Approach

    ERIC Educational Resources Information Center

    Yi, Pyong-sik

    2008-01-01

    This study applied a multinomial HOLM technique to examine the extent to which the institutional climate for diversity influences the different types of college student withdrawal, such as stop out, drop out, and transfer. Based on a reformulation of Tinto's model along with the conceptualization of institutional climate for diversity by Hurtado…

  13. An INAR(1) Negative Multinomial Regression Model for Longitudinal Count Data.

    ERIC Educational Resources Information Center

    Bockenholt, Ulf

    1999-01-01

    Discusses a regression model for the analysis of longitudinal count data in a panel study by adapting an integer-valued first-order autoregressive (INAR(1)) Poisson process to represent time-dependent correlation between counts. Derives a new negative multinomial distribution by combining INAR(1) representation with a random effects approach.…
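The INAR(1) construction can be sketched directly via binomial thinning; the parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# INAR(1): y_t = alpha o y_{t-1} + e_t, where "o" is binomial thinning and
# e_t is a Poisson innovation; alpha and lam are illustrative values
alpha, lam, T = 0.6, 2.0, 5000
y = np.empty(T, dtype=int)
y[0] = rng.poisson(lam / (1.0 - alpha))        # start near the stationary mean
for t in range(1, T):
    survivors = rng.binomial(y[t - 1], alpha)  # each prior count survives w.p. alpha
    y[t] = survivors + rng.poisson(lam)

# the stationary mean of a Poisson INAR(1) process is lam / (1 - alpha) = 5,
# and the lag-1 autocorrelation equals alpha
mean_hat = y.mean()
lag1_corr = np.corrcoef(y[:-1], y[1:])[0, 1]
```

The thinning operator keeps every value a nonnegative integer, which is what makes this autoregression suitable for count panels.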

  14. A Bayesian Multinomial Probit MODEL FOR THE ANALYSIS OF PANEL CHOICE DATA.

    PubMed

    Fong, Duncan K H; Kim, Sunghoon; Chen, Zhe; DeSarbo, Wayne S

    2016-03-01

    A new Bayesian multinomial probit model is proposed for the analysis of panel choice data. Using a parameter expansion technique, we are able to devise a Markov Chain Monte Carlo algorithm to compute our Bayesian estimates efficiently. We also show that the proposed procedure enables the estimation of individual level coefficients for the single-period multinomial probit model even when the available prior information is vague. We apply our new procedure to consumer purchase data and reanalyze a well-known scanner panel dataset that reveals new substantive insights. In addition, we delineate a number of advantageous features of our proposed procedure over several benchmark models. Finally, through a simulation analysis employing a fractional factorial design, we demonstrate that the results from our proposed model are quite robust with respect to differing factors across various conditions.
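The choice-probability core of a multinomial probit can be sketched by brute-force simulation (the paper's parameter-expanded MCMC machinery is far more involved); the utilities and error covariance below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Systematic utilities for three alternatives (hypothetical values)
V = np.array([1.0, 0.5, 0.0])
# Correlated normal errors -- the covariance below is purely illustrative
Sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

# Monte Carlo: share of draws in which each alternative has the largest utility
eps = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
choices = (V + eps).argmax(axis=1)
probs = np.bincount(choices, minlength=3) / choices.size
```

Unlike the logit, these probabilities have no closed form because the normal errors may be correlated across alternatives; that flexibility is the probit's main appeal.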

  15. A New Model for Acquiescence at the Interface of Psychometrics and Cognitive Psychology.

    PubMed

    Plieninger, Hansjörg; Heck, Daniel W

    2018-05-29

    When measuring psychological traits, one has to consider that respondents often show content-unrelated response behavior in answering questionnaires. To disentangle the target trait and two such response styles, extreme responding and midpoint responding, Böckenholt (2012a) developed an item response model based on a latent processing tree structure. We propose a theoretically motivated extension of this model to also measure acquiescence, the tendency to agree with both regular and reversed items. Substantively, our approach builds on multinomial processing tree (MPT) models that are used in cognitive psychology to disentangle qualitatively distinct processes. Accordingly, the new model for response styles assumes a mixture distribution of affirmative responses, which are either determined by the underlying target trait or by acquiescence. In order to estimate the model parameters, we rely on Bayesian hierarchical estimation of MPT models. In simulations, we show that the model provides unbiased estimates of response styles and the target trait, and we compare the new model and Böckenholt's model in a recovery study. An empirical example from personality psychology is used for illustrative purposes.

  16. Recommender system based on scarce information mining.

    PubMed

    Lu, Wei; Chung, Fu-Lai; Lai, Kunfeng; Zhang, Liang

    2017-09-01

    Guessing what a user may like is now a typical interface for video recommendation. Nowadays, highly popular user-generated content sites provide various sources of information, such as tags, for recommendation tasks. Motivated by a real-world online video recommendation problem, this work targets the long-tail phenomenon of user behavior and the sparsity of item features. A personalized compound recommendation framework for online video recommendation, called the Dirichlet mixture probit model for information scarcity (DPIS), is hence proposed. Assuming that each clicking sample is generated from a representation of user preferences, DPIS models the sample-level topic proportions as a multinomial item vector, and utilizes topical clustering on the user part for recommendation through a probit classifier. As demonstrated by the real-world application, the proposed DPIS achieves better performance in accuracy, perplexity, as well as diversity in coverage than traditional methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. The Source of Adult Age Differences in Event-Based Prospective Memory: A Multinomial Modeling Approach

    ERIC Educational Resources Information Center

    Smith, Rebekah E.; Bayen, Ute J.

    2006-01-01

    Event-based prospective memory involves remembering to perform an action in response to a particular future event. Normal younger and older adults performed event-based prospective memory tasks in 2 experiments. The authors applied a formal multinomial processing tree model of prospective memory (Smith & Bayen, 2004) to disentangle age differences…

  18. Categorical Data Analysis Using a Skewed Weibull Regression Model

    NASA Astrophysics Data System (ADS)

    Caron, Renault; Sinha, Debajyoti; Dey, Dipak; Polpo, Adriano

    2018-03-01

    In this paper, we present a Weibull link (skewed) model for categorical response data arising from binomial as well as multinomial models. We show that, for such types of categorical data, the most commonly used models (logit, probit and complementary log-log) can be obtained as limiting cases. We further compare the proposed model with some other asymmetrical models. The Bayesian as well as frequentist estimation procedures for binomial and multinomial data responses are presented in detail. The analysis of two data sets is performed to show the efficiency of the proposed model.

  19. Bivariate categorical data analysis using normal linear conditional multinomial probability model.

    PubMed

    Sun, Bingrui; Sutradhar, Brajendra

    2015-02-10

    Bivariate multinomial data such as the left and right eyes retinopathy status data are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities, which are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretations of the correlations between two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.

  20. Lateralization of temporal lobe epilepsy by multimodal multinomial hippocampal response-driven models.

    PubMed

    Nazem-Zadeh, Mohammad-Reza; Elisevich, Kost V; Schwalb, Jason M; Bagher-Ebadian, Hassan; Mahmoudi, Fariborz; Soltanian-Zadeh, Hamid

    2014-12-15

    Multiple modalities are used in determining laterality in mesial temporal lobe epilepsy (mTLE). It is unclear how much different imaging modalities should be weighted in decision-making. The purpose of this study is to develop response-driven multimodal multinomial models for lateralization of epileptogenicity in mTLE patients based upon imaging features in order to maximize the accuracy of noninvasive studies. The volumes, means and standard deviations of FLAIR intensity and means of normalized ictal-interictal SPECT intensity of the left and right hippocampi were extracted from preoperative images of a retrospective cohort of 45 mTLE patients with Engel class I surgical outcomes, as well as images of a cohort of 20 control, nonepileptic subjects. Using multinomial logistic function regression, the parameters of various univariate and multivariate models were estimated. Based on the Bayesian model averaging (BMA) theorem, response models were developed as compositions of independent univariate models. A BMA model composed of posterior probabilities of univariate response models of hippocampal volumes, means and standard deviations of FLAIR intensity, and means of SPECT intensity with the estimated weighting coefficients of 0.28, 0.32, 0.09, and 0.31, respectively, as well as a multivariate response model incorporating all mentioned attributes, demonstrated complete reliability by achieving a probability of detection of one with no false alarms to establish proper laterality in all mTLE patients. The proposed multinomial multivariate response-driven model provides a reliable lateralization of mesial temporal epileptogenicity including those patients who require phase II assessment. Copyright © 2014 Elsevier B.V. All rights reserved.
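The model-averaging step can be sketched directly using the weighting coefficients reported above; the per-model lateralization probabilities below are hypothetical placeholders, not values from the study:

```python
import numpy as np

# Posterior lateralization probabilities (left, right) from four univariate
# response models -- the probabilities themselves are hypothetical
model_probs = np.array([[0.80, 0.20],   # hippocampal volume
                        [0.70, 0.30],   # mean FLAIR intensity
                        [0.55, 0.45],   # std. dev. of FLAIR intensity
                        [0.75, 0.25]])  # mean SPECT intensity

# BMA weighting coefficients reported in the study for the four models
w = np.array([0.28, 0.32, 0.09, 0.31])

bma_probs = w @ model_probs             # weighted model average
laterality = ["left", "right"][bma_probs.argmax()]
```

The weights answer the question posed in the abstract, namely how much each imaging modality should count in the lateralization decision.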

  1. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    NASA Astrophysics Data System (ADS)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, specially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate parameters of the multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus Maximum likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.

  2. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
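Because the semi-nonparametric model nests the MNL, the test reduces to an ordinary likelihood ratio test; the log-likelihood values and degrees of freedom below are hypothetical:

```python
import numpy as np
from scipy.stats import chi2

# Log-likelihoods at convergence for the restricted MNL and the
# semi-nonparametric model that nests it (hypothetical values)
ll_mnl, ll_snp = -1250.4, -1238.9
k_extra = 4                      # extra polynomial coefficients in the SNP model

lr_stat = 2.0 * (ll_snp - ll_mnl)          # standard LR statistic
p_value = chi2.sf(lr_stat, df=k_extra)     # chi-squared tail probability

reject_gumbel = p_value < 0.05   # evidence against the standard Gumbel errors
```

A significant statistic says the extra polynomial terms improve the fit, i.e. the standard Gumbel error assumption is violated in that utility function.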

  3. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  4. A measurement theory of illusory conjunctions.

    PubMed

    Prinzmetal, William; Ivry, Richard B; Beck, Diane; Shimizu, Naomi

    2002-04-01

    Illusory conjunctions refer to the incorrect perceptual combination of correctly perceived features, such as color and shape. Research on the phenomenon has been hampered by the lack of a measurement theory that accounts for guessing features, as well as the incorrect combination of correctly perceived features. Recently, several investigators have suggested using multinomial models as a tool for measuring feature integration. The authors examined the adequacy of these models in 2 experiments by testing whether model parameters reflect changes in stimulus factors. In a third experiment, confidence ratings were used as a tool for testing the model. Multinomial models accurately reflected both variations in stimulus factors and observers' trial-by-trial confidence ratings.

  5. Comparison of multinomial logistic regression and logistic regression: which is more efficient in allocating land use?

    NASA Astrophysics Data System (ADS)

    Lin, Yingzhi; Deng, Xiangzheng; Li, Xing; Ma, Enjun

    2014-12-01

    Spatially explicit simulation of land use change is the basis for estimating the effects of land use and cover change on energy fluxes, ecology and the environment. At the pixel level, logistic regression is one of the most common approaches used in spatially explicit land use allocation models to determine the relationship between land use and its causal factors in driving land use change, and thereby to evaluate land use suitability. However, these models have a drawback in that they do not determine/allocate land use based on the direct relationship between land use change and its driving factors. Consequently, a multinomial logistic regression method was introduced to address this flaw and thereby judge the suitability of a type of land use in any given pixel in a case study area of Jiangxi Province, China. A comparison of the two regression methods indicated that the proportion of correctly allocated pixels using multinomial logistic regression was 92.98%, which was 8.47% higher than that obtained using logistic regression. Paired t-test results also showed that pixels were more clearly distinguished by multinomial logistic regression than by logistic regression. In conclusion, multinomial logistic regression is a more efficient and accurate method for the spatial allocation of land use changes. The application of this method in future land use change studies may improve the accuracy of predicting the effects of land use and cover change on energy fluxes, ecology, and the environment.

  6. Who Is Overeducated and Why? Probit and Dynamic Mixed Multinomial Logit Analyses of Vertical Mismatch in East and West Germany

    ERIC Educational Resources Information Center

    Boll, Christina; Leppin, Julian Sebastian; Schömann, Klaus

    2016-01-01

    Overeducation potentially signals a productivity loss. With Socio-Economic Panel data from 1984 to 2011 we identify drivers of educational mismatch for East and West medium and highly educated Germans. Addressing measurement error, state dependence and unobserved heterogeneity, we run dynamic mixed multinomial logit models for three different…

  7. Quality and provider choice: a multinomial logit-least-squares model with selectivity.

    PubMed Central

    Haas-Wilson, D; Savoca, E

    1990-01-01

    A Federal Trade Commission survey of contact lens wearers is used to estimate a multinomial logit-least-squares model of the joint determination of provider choice and quality of care in the contact lens industry. The effect of personal and industry characteristics on a consumer's choice among three types of providers--opticians, ophthalmologists, and optometrists--is estimated via multinomial logit. The regression model of the quality of care has two features that distinguish it from previous work in the area. First, it uses an outcome rather than a structural or process measure of quality. Quality is measured as an index of the presence of seven potentially pathological eye conditions caused by poorly fitted lenses. Second, the model controls for possible selection bias that may arise from the fact that the sample observations on quality are generated by consumers' nonrandom choices of providers. The multinomial logit estimates of provider choice indicate that professional regulations limiting the commercial practices of optometrists shift demand for contact lens services away from optometrists toward ophthalmologists. Further, consumers are more likely to have their lenses fitted by opticians in states that require the licensing of opticians. The regression analysis of variations in quality across provider types shows a strong positive selection bias in the estimate of the quality of care received by consumers of ophthalmologists' services. Failure to control for this selection bias results in an overestimate of the quality of care provided by ophthalmologists. PMID:2312308

  8. Modeling the dynamics of urban growth using multinomial logistic regression: a case study of Jiayu County, Hubei Province, China

    NASA Astrophysics Data System (ADS)

    Nong, Yu; Du, Qingyun; Wang, Kun; Miao, Lei; Zhang, Weiwei

    2008-10-01

    Urban growth modeling, one of the most important aspects of land use and land cover change study, has attracted substantial attention because it helps researchers comprehend the mechanisms of land use change and thus informs relevant policy making. This study applied multinomial logistic regression to model urban growth in Jiayu County of Hubei Province, China, to discover the relationship between urban growth and its driving forces, with biophysical and social-economic factors selected as independent variables. This type of regression is similar to binary logistic regression, but it is more general because the dependent variable is not restricted to two categories, as in previous studies. The multinomial model can simulate the process of multiple land use competition between urban land, bare land, cultivated land and orchard land. Taking the land use type of urban as the reference category, parameters could be estimated with odds ratios. A probability map generated from the model predicts where urban growth will occur.

  9. Multinomial logistic regression modelling of obesity and overweight among primary school students in a rural area of Negeri Sembilan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghazali, Amirul Syafiq Mohd; Ali, Zalila; Noor, Norlida Mohd

    Multinomial logistic regression is widely used to model the outcomes of a polytomous response variable, a categorical dependent variable with more than two categories. The model assumes that the conditional mean of the dependent categorical variable is the logistic function of an affine combination of predictor variables. Its procedure gives a number of logistic regression models that make specific comparisons of the response categories. When there are q categories of the response variable, the model consists of q-1 logit equations which are fitted simultaneously. The model is validated by variable selection procedures, tests of regression coefficients, a significance test of the overall model, goodness-of-fit measures, and validation of predicted probabilities using odds ratios. This study used the multinomial logistic regression model to investigate obesity and overweight among primary school students in a rural area on the basis of their demographic profiles, lifestyles, and diet and food intake. The results indicated that obesity and overweight of students are related to gender, religion, sleep duration, time spent on electronic games, breakfast intake in a week, with whom meals are taken, protein intake, and also the interaction between breakfast intake in a week and sleep duration, and the interaction between gender and protein intake.

  10. Multinomial logistic regression modelling of obesity and overweight among primary school students in a rural area of Negeri Sembilan

    NASA Astrophysics Data System (ADS)

    Ghazali, Amirul Syafiq Mohd; Ali, Zalila; Noor, Norlida Mohd; Baharum, Adam

    2015-10-01

    Multinomial logistic regression is widely used to model the outcomes of a polytomous response variable, a categorical dependent variable with more than two categories. The model assumes that the conditional mean of the dependent categorical variable is the logistic function of an affine combination of predictor variables. Its procedure gives a number of logistic regression models that make specific comparisons of the response categories. When there are q categories of the response variable, the model consists of q-1 logit equations which are fitted simultaneously. The model is validated by variable selection procedures, tests of regression coefficients, a significance test of the overall model, goodness-of-fit measures, and validation of predicted probabilities using odds ratios. This study used the multinomial logistic regression model to investigate obesity and overweight among primary school students in a rural area on the basis of their demographic profiles, lifestyles, and diet and food intake. The results indicated that obesity and overweight of students are related to gender, religion, sleep duration, time spent on electronic games, breakfast intake in a week, with whom meals are taken, protein intake, and also the interaction between breakfast intake in a week and sleep duration, and the interaction between gender and protein intake.
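The validation of predicted probabilities "using odds ratios" follows directly from the q-1 logit equations: exponentiating a coefficient gives the multiplicative change in the odds of a response category relative to the reference. A one-line illustration with a made-up coefficient (not an estimate from the study):

```python
import math

# Hypothetical fitted coefficient: effect of daily breakfast intake on the
# logit of "obese" vs. "normal weight" (illustrative value, not from the study).
beta_breakfast = -0.45

# exp(beta) is the odds ratio: the factor by which the odds of the category
# (relative to the reference) change per unit increase in the predictor.
odds_ratio = math.exp(beta_breakfast)
print(round(odds_ratio, 3))
```

A ratio below 1 would indicate that the predictor lowers the odds of that category relative to the reference; each of the q-1 equations yields its own set of such ratios.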

  11. Classifying emotion in Twitter using Bayesian network

    NASA Astrophysics Data System (ADS)

    Surya Asriadie, Muhammad; Syahrul Mubarok, Mohamad; Adiwijaya

    2018-03-01

    Language is used to express not only facts but also emotions. Emotions are noticeable in behavior and in the social media statuses written by a person. Emotions in text are analyzed across a variety of media, such as Twitter. This paper studies classification of emotions on Twitter using a Bayesian network because of its ability to model uncertainty and relationships between features. The result is two models based on Bayesian networks: Full Bayesian Network (FBN) and Bayesian Network with Mood Indicator (BNM). FBN is a massive Bayesian network in which each word is treated as a node. The study shows the method used to train FBN is not very effective at creating the best model, and FBN performs worse than Naive Bayes: the F1-score for FBN is 53.71%, while for Naive Bayes it is 54.07%. BNM is proposed as an alternative method based on an improvement of Multinomial Naive Bayes and has much lower computational complexity than FBN. Although it is not better than FBN, the resulting model successfully improves the performance of Multinomial Naive Bayes: the F1-score for the Multinomial Naive Bayes model is 51.49%, while for BNM it is 52.14%.
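Multinomial Naive Bayes, the baseline BNM improves on, scores a text by class priors times smoothed per-class word likelihoods. A self-contained toy version with invented example texts (not the paper's data set, features, or mood indicator):

```python
from collections import Counter, defaultdict
import math

# Toy emotion-labelled texts (invented examples, not the paper's corpus).
train = [
    ("i am so happy today", "joy"),
    ("what a wonderful surprise", "joy"),
    ("this is terrible and sad", "sadness"),
    ("i feel sad and alone", "sadness"),
]

# Pool each class's word tokens for Multinomial Naive Bayes.
class_docs = defaultdict(list)
for text, label in train:
    class_docs[label].extend(text.split())

vocab = {w for words in class_docs.values() for w in words}
priors = {c: 1 / len(class_docs) for c in class_docs}  # uniform: 2 docs per class

def predict(text):
    """Pick the class maximising log prior + sum of Laplace-smoothed word log-likelihoods."""
    scores = {}
    for c, words in class_docs.items():
        counts, total = Counter(words), len(words)
        logp = math.log(priors[c])
        for w in text.split():
            logp += math.log((counts[w] + 1) / (total + len(vocab)))
        scores[c] = logp
    return max(scores, key=scores.get)

print(predict("i feel terrible"))
```

BNM, per the abstract, augments this independence-based scoring with a mood-indicator node; the sketch above only shows the multinomial baseline it starts from.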

  12. Network-constrained group lasso for high-dimensional multinomial classification with application to cancer subtype prediction.

    PubMed

    Tian, Xinyu; Wang, Xuefeng; Chen, Jun

    2014-01-01

    The classic multinomial logit model, commonly used in multiclass regression problems, is restricted to few predictors and does not take into account the relationships among variables. It has limited use for genomic data, where the number of genomic features far exceeds the sample size. Genomic features such as gene expressions are usually related by an underlying biological network. Efficient use of the network information is important to improve classification performance as well as biological interpretability. We proposed a multinomial logit model that is capable of addressing both the high dimensionality of predictors and the underlying network information. Group lasso was used to induce model sparsity, and a network constraint was imposed to induce smoothness of the coefficients with respect to the underlying network structure. To deal with the non-smoothness of the objective function in optimization, we developed a proximal gradient algorithm for efficient computation. The proposed model was compared to models with no prior structure information in both simulations and a problem of cancer subtype prediction with real TCGA (The Cancer Genome Atlas) gene expression data. The network-constrained model outperformed the traditional ones in both cases.
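In a proximal gradient algorithm, the non-smooth group lasso penalty is handled through its proximal operator, which shrinks each coefficient group as a block and zeroes the whole group when its norm falls below the threshold; a smooth network constraint would enter through the gradient step instead. A sketch of that block soft-thresholding operator:

```python
import numpy as np

def group_soft_threshold(v, lam):
    """Proximal operator of the group-lasso penalty lam * ||v||_2:
    shrink the whole group toward zero, zeroing it when its norm <= lam."""
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1 - lam / norm) * v

v = np.array([3.0, 4.0])                    # one coefficient group, norm 5
shrunk = group_soft_threshold(v, 2.0)       # scaled by (1 - 2/5) = 0.6
zeroed = group_soft_threshold(v, 6.0)       # norm <= lam, entire group dropped
print(shrunk, zeroed)
```

Applying this operator after each gradient step on the smooth part of the objective yields the all-in-or-all-out group sparsity the abstract describes, which is why entire genes (coefficient groups across classes) are selected or excluded together.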

  13. Parameter Estimation for the Dirichlet-Multinomial Distribution Using Supplementary Beta-Binomial Data.

    DTIC Science & Technology

    1987-07-01

    multinomial distribution as a magazine exposure model. J. of Marketing Research, 21, 100-106. Lehmann, E.L. (1983). Theory of Point Estimation. John Wiley and ... J. of Marketing Research, 21, 89-99.

  14. Predicting longitudinal trajectories of health probabilities with random-effects multinomial logit regression.

    PubMed

    Liu, Xian; Engel, Charles C

    2012-12-20

    Researchers often encounter longitudinal health data characterized by three or more ordinal or nominal categories. Random-effects multinomial logit models are generally applied to account for the potential lack of independence inherent in such clustered data. When parameter estimates are used to describe longitudinal processes, however, random effects, both between and within individuals, need to be retransformed for correctly predicting outcome probabilities. This study attempts to go beyond existing work by developing a retransformation method that derives longitudinal growth trajectories of unbiased health probabilities. We estimated variances of the predicted probabilities by using the delta method. Additionally, we transformed the covariates' regression coefficients on the multinomial logit function, which are not substantively meaningful, into conditional effects on the predicted probabilities. The empirical illustration uses longitudinal data from the Asset and Health Dynamics among the Oldest Old study. Our analysis compared three sets of predicted probabilities of three health states at six time points, obtained from, respectively, the retransformation method, the best linear unbiased prediction, and the fixed-effects approach. The results demonstrate that neglecting to retransform random errors in the random-effects multinomial logit model results in severely biased longitudinal trajectories of health probabilities as well as overestimated effects of covariates on the probabilities. Copyright © 2012 John Wiley & Sons, Ltd.
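The retransformation problem the authors address can be demonstrated in miniature: plugging zero random effects into the multinomial logit (the naive approach) gives different probabilities than averaging the probabilities over the random-effect distribution, because the logit-to-probability mapping is nonlinear. A sketch with invented parameters (a shared random intercept and two fixed intercepts, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-logit model for 3 health states, random intercept u ~ N(0, s^2).
beta = np.array([0.8, -0.5])      # fixed intercepts for states 2 and 3 vs. state 1
sigma_u = 1.2

def probs(u):
    """Multinomial-logit probabilities for a given random-effect value u."""
    eta = np.concatenate(([0.0], beta + u))
    e = np.exp(eta - eta.max())
    return e / e.sum()

# Naive: plug in u = 0 (the approach that ignores retransformation).
naive = probs(0.0)
# Retransformed: average the probabilities over the random-effect distribution.
u_draws = rng.normal(0.0, sigma_u, size=20_000)
marginal = np.mean([probs(u) for u in u_draws], axis=0)
print(naive.round(3), marginal.round(3))
```

The gap between the two vectors is the bias the abstract warns about; the paper derives the correction analytically (with delta-method variances) rather than by Monte Carlo as sketched here.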

  15. Regression Models For Multivariate Count Data

    PubMed Central

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2016-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data. PMID:28348500

  16. Regression Models For Multivariate Count Data.

    PubMed

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2017-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.

  17. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
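The closed-form ordinal superiority measures quoted above are easy to evaluate once a group effect β is in hand. A sketch using an illustrative β (not a fitted value from the paper's example):

```python
import math

def Phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

beta = 0.6  # hypothetical group-indicator coefficient (illustrative)

probit = Phi(beta / 2)                                        # probit link: Phi(beta/2)
loglog = math.exp(beta) / (1 + math.exp(beta))                # log-log link
logit_approx = math.exp(beta / 2) / (1 + math.exp(beta / 2))  # logit link (approximate)
print(round(probit, 3), round(loglog, 3), round(logit_approx, 3))
```

Each value estimates the probability that an observation from one group falls above an independent observation from the other, adjusted for the model's covariates; 0.5 corresponds to no group difference.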

  18. Direct and interactive effects of parent, friend and schoolmate drinking on alcohol use trajectories.

    PubMed

    Lynch, Alicia Doyle; Coley, Rebekah Levine; Sims, Jacqueline; Lombardi, Caitlin McPherran; Mahalik, James R

    2015-01-01

    This study considered the unique and interactive roles of social norms from parents, friends and schools in predicting developmental trajectories of adolescent drinking and intoxication. Using data from the National Longitudinal Study of Adolescent Health, which followed adolescents (N = 18,921) for 13 years, we used discrete mixture modelling to identify unique developmental trajectories of drinking and of intoxication. Next, multilevel multinomial regression models examined the role of alcohol-related social norms from parents, friends and schoolmates in the prediction of youths' trajectory group membership. Results demonstrated that social norms from parents, friends and schoolmates that were favourable towards alcohol use uniquely predicted drinking and intoxication trajectory group membership. Interactions between social norms revealed that schoolmate drinking played an important moderating role, frequently augmenting social norms from parents and friends. The current findings suggest that social norms from multiple sources (parents, friends and schools) work both independently and interactively to predict longitudinal trajectories of adolescent alcohol use. Results highlight the need to identify and understand social messages from multiple developmental contexts in efforts to reduce adolescent alcohol consumption and alcohol-related risk-taking.

  19. MPTinR: analysis of multinomial processing tree models in R.

    PubMed

    Singmann, Henrik; Kellen, David

    2013-06-01

    We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/.

  20. A simplified conjoint recognition paradigm for the measurement of gist and verbatim memory.

    PubMed

    Stahl, Christoph; Klauer, Karl Christoph

    2008-05-01

    The distinction between verbatim and gist memory traces has furthered the understanding of numerous phenomena in various fields, such as false memory research, research on reasoning and decision making, and cognitive development. To measure verbatim and gist memory empirically, an experimental paradigm and multinomial measurement model has been proposed but rarely applied. In the present article, a simplified conjoint recognition paradigm and multinomial model is introduced and validated as a measurement tool for the separate assessment of verbatim and gist memory processes. A Bayesian metacognitive framework is applied to validate guessing processes. Extensions of the model toward incorporating the processes of phantom recollection and erroneous recollection rejection are discussed.

  1. Multinomial model and zero-inflated gamma model to study time spent on leisure time physical activity: an example of ELSA-Brasil.

    PubMed

    Nobre, Aline Araújo; Carvalho, Marilia Sá; Griep, Rosane Härter; Fonseca, Maria de Jesus Mendes da; Melo, Enirtes Caetano Prates; Santos, Itamar de Souza; Chor, Dora

    2017-08-17

    To compare two methodological approaches: the multinomial model and the zero-inflated gamma model, evaluating the factors associated with the practice and amount of time spent on leisure time physical activity. Data collected from 14,823 baseline participants in the Longitudinal Study of Adult Health (ELSA-Brasil, Estudo Longitudinal de Saúde do Adulto) have been analysed. Regular leisure time physical activity has been measured using the leisure time physical activity module of the International Physical Activity Questionnaire. The explanatory variables considered were gender, age, education level, and annual per capita family income. The main advantage of the zero-inflated gamma model over the multinomial model is that it estimates mean time (minutes per week) spent on leisure time physical activity. For example, on average, men spent 28 minutes/week longer on leisure time physical activity than women did. The most sedentary groups were young women with low education level and income. The zero-inflated gamma model, which is rarely used in epidemiological studies, can give more appropriate answers in several situations. In our case, we have obtained important information on the main determinants of the duration of leisure time physical activity. This information can help guide efforts towards the most vulnerable groups, since physical inactivity is associated with different diseases and even premature death.
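A zero-inflated (two-part) gamma model separates the chance of doing any leisure time physical activity from the expected duration among those who are active; the unconditional mean time is the product of the two parts, which is how the model yields minutes-per-week estimates. A sketch with invented linear predictors (not the ELSA-Brasil estimates):

```python
import math

# Hypothetical two-part summary for one covariate profile (values invented).
# Part 1: logit for being active at all; Part 2: log-link gamma mean for actives.
logit_active = 0.2          # illustrative linear predictor on the logit scale
log_mean_minutes = 4.6      # illustrative linear predictor (log minutes/week)

p_active = 1 / (1 + math.exp(-logit_active))     # probability of nonzero activity
mean_if_active = math.exp(log_mean_minutes)      # gamma mean, minutes/week
overall_mean = p_active * mean_if_active         # unconditional minutes/week
print(round(p_active, 3), round(mean_if_active, 1), round(overall_mean, 1))
```

Comparing `overall_mean` across covariate profiles (e.g. men vs. women) gives differences in minutes per week, the kind of directly interpretable contrast the abstract highlights as the model's advantage.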

  2. Direct Reconstruction of CT-Based Attenuation Correction Images for PET With Cluster-Based Penalties

    NASA Astrophysics Data System (ADS)

    Kim, Soo Mee; Alessio, Adam M.; De Man, Bruno; Kinahan, Paul E.

    2017-03-01

    Extremely low-dose (LD) CT acquisitions used for PET attenuation correction have high levels of noise and potential bias artifacts due to photon starvation. This paper explores the use of a priori knowledge for iterative image reconstruction of the CT-based attenuation map. We investigate a maximum a posteriori framework with a cluster-based multinomial penalty for direct iterative coordinate descent (dICD) reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction used a Poisson log-likelihood data fit term and evaluated two image penalty terms of spatial and mixture distributions. The spatial regularization is based on a quadratic penalty. For the mixture penalty, we assumed that the attenuation map may consist of four material clusters: air + background, lung, soft tissue, and bone. Using simulated noisy sinogram data, dICD reconstruction was performed with different strengths of the spatial and mixture penalties. The combined spatial and mixture penalties reduced the root mean squared error (RMSE) by roughly two times compared with a weighted least square and filtered backprojection reconstruction of CT images. The combined spatial and mixture penalties resulted in only slightly lower RMSE compared with a spatial quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-LD CT acquisitions, the combination of spatial and mixture penalties offers regularization of both variance and bias and is a potential method to reconstruct attenuation maps with negligible patient dose. The presented results, using a best-case histogram, suggest that the mixture penalty does not offer a substantive benefit over conventional quadratic regularization and diminish enthusiasm for exploring future application of the mixture penalty.

  3. Developmental Trajectories and Predictors of Prosocial Behavior Among Adolescents Exposed to the 2008 Wenchuan Earthquake.

    PubMed

    Qin, Yanyun; Zhou, Ya; Fan, Fang; Chen, Shijian; Huang, Rong; Cai, Rouna; Peng, Ting

    2016-02-01

    This longitudinal study examined the developmental trajectories of prosocial behavior and related predictors among adolescents exposed to the 2008 Wenchuan earthquake. At 6-, 18-, and 30-months postearthquake, we followed a sample of 1,573 adolescents. Self-report measures were used to assess earthquake exposure, postearthquake negative life events, prosocial behavior, symptoms of posttraumatic stress disorder, depression, anxiety, social support, and coping style. Data were analyzed using growth mixture modeling and multinomial logistic regressions. Four trajectories of postearthquake prosocial behavior were identified in the sample: (a) high/enhancing (35.0%), (b) high/stable (29.4%), (c) low/declining (33.6%), and (d) low/steeply declining (2.0%). Female gender, more social support, and greater positive coping were significant factors related to a higher probability of developing the high/enhancing trajectory. These findings may be helpful for us to identify adolescents with poor prosocial behavior after exposure to earthquakes so as to provide them with appropriate intervention. Copyright © 2016 International Society for Traumatic Stress Studies.

  4. Landscape effects on diets of two canids in Northwestern Texas: A multinomial modeling approach

    USGS Publications Warehouse

    Lemons, P.R.; Sedinger, J.S.; Herzog, M.P.; Gipson, P.S.; Gilliland, R.L.

    2010-01-01

    Analyses of feces, stomach contents, and regurgitated pellets are common techniques for assessing diets of vertebrates and typically contain more than 1 food item per sampling unit. When analyzed, these individual food items have traditionally been treated as independent, which represents pseudoreplication. When food types are recorded as present or absent, these samples can be treated as multinomial vectors of food items, with each vector representing 1 realization of a possible diet. We suggest such data have a similar structure to capture histories for closed-capture, capture-mark-recapture data. To assess the effects of landscapes and presence of a potential competitor, we used closed-capture models implemented in Program MARK to analyze diet data generated from feces of swift foxes (Vulpes velox) and coyotes (Canis latrans) in northwestern Texas. The best models of diet contained season and location for both swift foxes and coyotes, but year accounted for less variation, suggesting that landscape type is an important predictor of diets of both species. Models containing the effect of coyote reduction were not competitive (ΔQAICc ≥ 3.6685), consistent with the hypothesis that presence of coyotes did not influence diet of swift foxes. Our findings suggest that landscape type may have important influences on diets of both species. We believe that multinomial models represent an effective approach to assess hypotheses when diet studies have a data structure similar to ours. © 2010 American Society of Mammalogists.

  5. Semiparametric Thurstonian Models for Recurrent Choices: A Bayesian Analysis

    ERIC Educational Resources Information Center

    Ansari, Asim; Iyengar, Raghuram

    2006-01-01

    We develop semiparametric Bayesian Thurstonian models for analyzing repeated choice decisions involving multinomial, multivariate binary or multivariate ordinal data. Our modeling framework has multiple components that together yield considerable flexibility in modeling preference utilities, cross-sectional heterogeneity and parameter-driven…

  6. Arsenic exposure and oral cavity lesions in Bangladesh.

    PubMed

    Syed, Emdadul H; Melkonian, Stephanie; Poudel, Krishna C; Yasuoka, Junko; Otsuka, Keiko; Ahmed, Alauddin; Islam, Tariqul; Parvez, Faruque; Slavkovich, Vesna; Graziano, Joseph H; Ahsan, Habibul; Jimba, Masamine

    2013-01-01

    To evaluate the relationship between arsenic exposure and oral cavity lesions among an arsenic-exposed population in Bangladesh. We carried out an analysis utilizing the baseline data of the Health Effects of Arsenic Exposure Longitudinal Study, which is an ongoing population-based cohort study to investigate health outcomes associated with arsenic exposure via drinking water in Araihazar, Bangladesh. We used multinomial regression models to estimate the risk of oral cavity lesions. Participants with high urinary arsenic levels (286.1 to 5000.0 μg/g) were more likely to develop arsenical lesions of the gums (multinomial odds ratio = 2.90; 95% confidence interval, 1.11 to 7.54), and tongue (multinomial odds ratio = 2.79; 95% confidence interval, 1.51 to 5.15), compared with those with urinary arsenic levels of 7.0 to 134.0 μg/g. Higher level of arsenic exposure was positively associated with increased arsenical lesions of the gums and tongue.

  7. An investigation on fatality of drivers in vehicle-fixed object accidents on expressways in China: Using multinomial logistic regression model.

    PubMed

    Peng, Yong; Peng, Shuangling; Wang, Xinghua; Tan, Shiyang

    2018-06-01

    This study aims to identify the effects of characteristics of vehicle, roadway, driver, and environment on the fatality of drivers in vehicle-fixed object accidents on expressways in the Changsha-Zhuzhou-Xiangtan district of Hunan Province, China, by developing multinomial logistic regression models. For this purpose, 121 vehicle-fixed object accidents from 2011 to 2017 were included in the modeling process. First, descriptive statistical analysis was performed to understand the main characteristics of the vehicle-fixed object crashes. Then, 19 explanatory variables were selected, and correlation analysis of each pair of variables was conducted to choose the variables to be included. Finally, five multinomial logistic regression models including different independent variables were compared, and the model with the best fit and prediction capability was chosen as the final model. The results showed that the turning direction in avoiding fixed objects raised the possibility that drivers would die. About 64% of the drivers who died in these accidents were found to have been ejected from the car, and 50% of those had not used a seatbelt. Drivers are likely to die when they encounter bad weather on the expressway. Drivers with less than 10 years of driving experience are more likely to die in these accidents. Fatigued or distracted driving is also a significant factor in driver fatality. Findings from this research provide insight into reducing the fatality of drivers in vehicle-fixed object accidents.

  8. Multinomial logistic regression in workers' health

    NASA Astrophysics Data System (ADS)

    Grilo, Luís M.; Grilo, Helena L.; Gonçalves, Sónia P.; Junça, Ana

    2017-11-01

    In European countries, namely in Portugal, it is common to hear some people mention that they are exposed to excessive and continuous psychosocial stressors at work. This is increasing in diverse activity sectors, such as the services sector. A representative sample was collected from a Portuguese services organization by applying an internationally validated survey whose variables were measured in five ordered categories on a Likert-type scale. A multinomial logistic regression model is used to estimate the probability of each category of the dependent variable, general health perception, where, among other independent variables, burnout appears as statistically significant.

  9. Verbal Ability and Persistent Offending: A Race-Specific Test of Moffitt's Theory

    PubMed Central

    Bellair, Paul E.; McNulty, Thomas L.; Piquero, Alex R.

    2014-01-01

    Theoretical questions linger over the applicability of the verbal ability model to African Americans and the social control theory hypothesis that educational failure mediates the effect of verbal ability on offending patterns. Accordingly, this paper investigates whether verbal ability distinguishes between offending groups within the context of Moffitt's developmental taxonomy. Questions are addressed with longitudinal data spanning childhood through young-adulthood from an ongoing national panel, and multinomial and hierarchical Poisson models (over-dispersed). In multinomial models, low verbal ability predicts membership in a life-course-persistent-oriented group relative to an adolescent-limited-oriented group. Hierarchical models indicate that verbal ability is associated with arrest outcomes among White and African American subjects, with effects consistently operating through educational attainment (high school dropout). The results support Moffitt's hypothesis that verbal deficits distinguish adolescent-limited- and life-course-persistent-oriented groups within race as well as the social control model of verbal ability. PMID:26924885

  10. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf

    2017-09-01

    There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
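The degenerate functional form shared by the three entropy concepts for multinomial processes is the familiar Shannon form, sketched here:

```python
import math

def shannon_entropy(p):
    """H(p) = -sum_i p_i log p_i (natural log): the single functional form that
    the extensive, information-rate, and max-ent entropies collapse to for
    independent multinomial sampling."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

h_fair = shannon_entropy([0.5, 0.5])   # maximal for two states: log 2
h_certain = shannon_entropy([1.0])     # a sure outcome carries no entropy
print(h_fair, h_certain)
```

For the history-dependent, nonergodic, or nonmultinomial processes the paper studies (Pólya urns, SSR processes), this degeneracy breaks and S_EXT, S_IT, and S_MEP must each be computed from its own definition.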

  11. Uncovering a latent multinomial: Analysis of mark-recapture data with misidentification

    USGS Publications Warehouse

    Link, W.A.; Yoshizaki, J.; Bailey, L.L.; Pollock, K.H.

    2010-01-01

    Natural tags based on DNA fingerprints or natural features of animals are now becoming very widely used in wildlife population biology. However, classic capture-recapture models do not allow for misidentification of animals which is a potentially very serious problem with natural tags. Statistical analysis of misidentification processes is extremely difficult using traditional likelihood methods but is easily handled using Bayesian methods. We present a general framework for Bayesian analysis of categorical data arising from a latent multinomial distribution. Although our work is motivated by a specific model for misidentification in closed population capture-recapture analyses, with crucial assumptions which may not always be appropriate, the methods we develop extend naturally to a variety of other models with similar structure. Suppose that observed frequencies f are a known linear transformation f = A'x of a latent multinomial variable x with cell probability vector π = π(θ). Given that full conditional distributions [θ | x] can be sampled, implementation of Gibbs sampling requires only that we can sample from the full conditional distribution [x | f, θ], which is made possible by knowledge of the null space of A'. We illustrate the approach using two data sets with individual misidentification, one simulated, the other summarizing recapture data for salamanders based on natural marks. © 2009, The International Biometric Society.

  12. Uncovering a Latent Multinomial: Analysis of Mark-Recapture Data with Misidentification

    USGS Publications Warehouse

    Link, W.A.; Yoshizaki, J.; Bailey, L.L.; Pollock, K.H.

    2009-01-01

    Natural tags based on DNA fingerprints or natural features of animals are now becoming very widely used in wildlife population biology. However, classic capture-recapture models do not allow for misidentification of animals which is a potentially very serious problem with natural tags. Statistical analysis of misidentification processes is extremely difficult using traditional likelihood methods but is easily handled using Bayesian methods. We present a general framework for Bayesian analysis of categorical data arising from a latent multinomial distribution. Although our work is motivated by a specific model for misidentification in closed population capture-recapture analyses, with crucial assumptions which may not always be appropriate, the methods we develop extend naturally to a variety of other models with similar structure. Suppose that observed frequencies f are a known linear transformation f=A'x of a latent multinomial variable x with cell probability vector pi= pi(theta). Given that full conditional distributions [theta | x] can be sampled, implementation of Gibbs sampling requires only that we can sample from the full conditional distribution [x | f, theta], which is made possible by knowledge of the null space of A'. We illustrate the approach using two data sets with individual misidentification, one simulated, the other summarizing recapture data for salamanders based on natural marks.
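The key computational point, that sampling [x | f, theta] only requires knowledge of the null space of A', can be seen in a toy example: adding any integer null vector of A' to the latent counts x leaves the observed frequencies f unchanged, so Gibbs moves along null-space directions walk over all latent configurations consistent with the data. All values below are invented:

```python
import numpy as np

# Toy latent-multinomial setup: observed frequencies f are a known linear
# transformation f = A'x of latent counts x (matrix and counts invented).
A_t = np.array([[1, 1, 0],
                [0, 1, 1]])        # A'
x = np.array([4, 2, 5])            # latent counts
f = A_t @ x                        # observed frequencies: [6, 7]

# An integer vector in the null space of A' defines a move that leaves f fixed;
# this is what the Gibbs step for [x | f, theta] exploits.
null_move = np.array([1, -1, 1])
assert np.array_equal(A_t @ null_move, np.zeros(2, dtype=int))

x_new = x + null_move              # alternative latent counts, same observed data
print(f, A_t @ x_new)
```

A sampler would propose such moves (subject to nonnegativity of the counts) and accept them according to the multinomial probabilities π(θ), exploring all latent x consistent with the observed f.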

  13. Modeling pedestrian shopping behavior using principles of bounded rationality: model comparison and validation

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Timmermans, Harry

    2011-06-01

    Models of geographical choice behavior have been dominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (conjunctive, disjunctive and lexicographic rule) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of heuristic models are slightly better than those of the multinomial logit models.
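
    The two model families being compared can be illustrated side by side. The utilities, attribute levels, and thresholds below are invented for the sketch; the point is the structural contrast between a compensatory multinomial logit rule and a non-compensatory conjunctive heuristic.

```python
import numpy as np

def mnl_probs(V):
    """Multinomial logit choice probabilities from utilities V."""
    e = np.exp(V - V.max())        # subtract max for numerical stability
    return e / e.sum()

def conjunctive_choice(X, thresholds):
    """Conjunctive heuristic: an alternative is acceptable only if it
    meets the threshold on every attribute (rows = alternatives)."""
    return np.all(X >= thresholds, axis=1)

V = np.array([1.0, 0.5, -0.2])     # hypothetical utilities of 3 stores
p = mnl_probs(V)
assert np.isclose(p.sum(), 1.0) and np.all(p > 0)

X = np.array([[3, 4], [5, 1], [4, 4]])   # attribute levels per store
ok = conjunctive_choice(X, thresholds=np.array([3, 2]))
# stores 1 and 3 pass both thresholds; store 2 fails the second
assert ok.tolist() == [True, False, True]
```

    The paper's extension (threshold heterogeneity) would replace the fixed threshold vector with thresholds that vary across decision makers.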

  14. Classification of Effective Soil Depth by Using Multinomial Logistic Regression Analysis

    NASA Astrophysics Data System (ADS)

    Chang, C. H.; Chan, H. C.; Chen, B. A.

    2016-12-01

    Classification of effective soil depth is a task in determining the slopeland utilizable limitation in Taiwan. The "Slopeland Conservation and Utilization Act" categorizes the slopeland into agriculture and husbandry land, land suitable for forestry, and land for enhanced conservation according to factors including average slope, effective soil depth, soil erosion and parent rock. However, site investigation of effective soil depth requires extensive and costly field work. This research aimed to classify effective soil depth by using multinomial logistic regression with environmental factors. The Wen-Shui Watershed, located in central Taiwan, was selected as the study area. The multinomial logistic regression analysis was performed with the assistance of a geographic information system (GIS). Effective soil depth was categorized into four levels: deeper, deep, shallow and shallower. The environmental factors of slope, aspect, elevation (from a digital elevation model, DEM), curvature and normalized difference vegetation index (NDVI) were selected for classifying soil depth. An error matrix was then used to assess model accuracy. The results showed an overall accuracy of 75%. Finally, a map of effective soil depth was produced to help planners and decision makers determine the slopeland utilizable limitation in the study area.
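
    The error-matrix accuracy assessment mentioned above is simple to compute. The class labels and the eight validation sites below are hypothetical; the functions show the standard construction (rows = observed class, columns = predicted class, overall accuracy = trace / total).

```python
import numpy as np

LEVELS = ["deeper", "deep", "shallow", "shallower"]

def error_matrix(observed, predicted, k):
    """k x k error (confusion) matrix: rows = observed, cols = predicted."""
    M = np.zeros((k, k), dtype=int)
    for o, p in zip(observed, predicted):
        M[o, p] += 1
    return M

def overall_accuracy(M):
    """Fraction of sites on the diagonal (correctly classified)."""
    return M.trace() / M.sum()

# Hypothetical class indices into LEVELS for 8 validation sites:
obs  = [0, 1, 1, 2, 3, 3, 2, 0]
pred = [0, 1, 2, 2, 3, 3, 2, 1]
M = error_matrix(obs, pred, k=4)
print(overall_accuracy(M))        # 6 of 8 sites correct -> 0.75
```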

  15. Disentangling stereotype activation and stereotype application in the stereotype misperception task.

    PubMed

    Krieglmeyer, Regina; Sherman, Jeffrey W

    2012-08-01

    When forming impressions about other people, stereotypes about the individual's social group often influence the resulting impression. At least 2 distinguishable processes underlie stereotypic impression formation: stereotype activation and stereotype application. Most previous research has used implicit measures to assess stereotype activation and explicit measures to assess stereotype application, which has several disadvantages. The authors propose a measure of stereotypic impression formation, the stereotype misperception task (SMT), together with a multinomial model that quantitatively disentangles the contributions of stereotype activation and application to responses in the SMT. The validity of the SMT and of the multinomial model was confirmed in 5 studies. The authors hope to advance research on stereotyping by providing a measurement tool that separates multiple processes underlying impression formation.

  16. Discrete post-processing of total cloud cover ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Haiden, Thomas; Pappenberger, Florian

    2017-04-01

    This contribution presents an approach to post-process ensemble forecasts for the discrete and bounded weather variable of total cloud cover. Two methods for discrete statistical post-processing of ensemble predictions are tested. The first approach is based on multinomial logistic regression, the second involves a proportional odds logistic regression model. Applying them to total cloud cover raw ensemble forecasts from the European Centre for Medium-Range Weather Forecasts improves forecast skill significantly. Based on station-wise post-processing of raw ensemble total cloud cover forecasts for a global set of 3330 stations over the period from 2007 to early 2014, the more parsimonious proportional odds logistic regression model proved to slightly outperform the multinomial logistic regression model. Reference: Hemri, S., Haiden, T., & Pappenberger, F. (2016). Discrete post-processing of total cloud cover ensemble forecasts. Monthly Weather Review, 144, 2565-2577.
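
    The proportional odds model's parsimony comes from sharing one linear predictor across all cumulative logits, with only the cutpoints varying by category. A minimal sketch, assuming cloud cover is reported in octas (nine ordered categories) and using invented cutpoints and predictor value:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def proportional_odds_probs(eta, cutpoints):
    """Category probabilities under a proportional odds model:
    P(Y <= k) = sigmoid(c_k - eta), with a single linear predictor eta
    shared across all cumulative logits (hence 'proportional odds')."""
    cum = np.concatenate([sigmoid(np.asarray(cutpoints) - eta), [1.0]])
    return np.diff(cum, prepend=0.0)

# Hypothetical cutpoints: 8 increasing thresholds separate the 9 octas.
cuts = np.linspace(-2.0, 2.0, 8)
p = proportional_odds_probs(eta=0.3, cutpoints=cuts)
assert len(p) == 9
assert np.isclose(p.sum(), 1.0) and np.all(p >= 0)
```

    A multinomial logistic model would instead estimate a separate coefficient vector per category, which is why it needs many more parameters here.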

  17. Impact of childhood asthma on growth trajectories in early adolescence: Findings from the Childhood Asthma Prevention Study (CAPS).

    PubMed

    Movin, Maria; Garden, Frances L; Protudjer, Jennifer L P; Ullemar, Vilhelmina; Svensdotter, Frida; Andersson, David; Kruse, Andreas; Cowell, Chris T; Toelle, Brett G; Marks, Guy B; Almqvist, Catarina

    2017-04-01

    Understanding the associations between childhood asthma and growth in early adolescence by accounting for the heterogeneity of growth during puberty has been largely unexplored. The objective was to identify sex-specific classes of growth trajectories during early adolescence, using a method which takes the heterogeneity of growth into account and to evaluate the association between childhood asthma and different classes of growth trajectories in adolescence. Our longitudinal study included participants with a family history of asthma born during 1997-1999 in Sydney, Australia. Hence, all participants were at high risk for asthma. Asthma status was ascertained at 8 years of age using data from questionnaires and lung function tests. Growth trajectories between 11 and 14 years of age were classified using a latent basis growth mixture model. Multinomial regression analyses were used to evaluate the association between asthma and the categorized classes of growth trajectories. In total, 316 participants (51.6% boys), representing 51.3% of the entire cohort, were included. Sex-specific classes of growth trajectories were defined. Among boys, asthma was not associated with the classes of growth trajectories. Girls with asthma were more likely than girls without asthma to belong to a class with later growth (OR: 3.79, 95% CI: 1.33, 10.84). Excluding participants using inhaled corticosteroids or adjusting for confounders did not significantly change the results for either sex. We identified sex-specific heterogeneous classes of growth using growth mixture modelling. Associations between childhood asthma and different classes of growth trajectories were found for girls only. © 2016 Asian Pacific Society of Respirology.

  18. Exploring 2.5-Year Trajectories of Functional Decline in Older Adults by Applying a Growth Mixture Model and Frequency of Outings as a Predictor: A 2010-2013 JAGES Longitudinal Study.

    PubMed

    Saito, Junko; Kondo, Naoki; Saito, Masashige; Takagi, Daisuke; Tani, Yukako; Haseda, Maho; Tabuchi, Takahiro; Kondo, Katsunori

    2018-06-23

    We explored the distinct trajectories of functional decline among older adults in Japan, and evaluated whether the frequency of outings, an important indicator of social activity, predicts the identified trajectories. We analyzed data on 2,364 adults aged 65 years or older from the Japan Aichi Gerontological Evaluation Study. Participants were initially independent and later developed functional disability during a 31-month follow-up period. We used the level of long-term care needs certified in the public health insurance system as a proxy of functional ability and linked the fully tracked data of changes in the care levels to the baseline data. A low frequency of outings was defined as leaving one's home less than once per week at baseline. We applied a growth mixture model to identify trajectories in functional decline by sex and then examined the association between the frequency of outings and the identified trajectories using multinomial logistic regression analysis. Three distinct trajectories were identified: "slowly declining" (64.3% of men and 79.7% of women), "persistently disabled" (4.5% and 3.7%, respectively), and "rapidly declining" (31.3% and 16.6%, respectively). Men with fewer outings had 2.14 times greater odds (95% confidence interval, 1.03-4.41) of being persistently disabled. The association between outing frequency and functional decline trajectory was less clear statistically among women. While the majority of older adults showed a slow functional decline, some showed persistent moderate disability. Providing more opportunities to go out or assistance in that regard may be important for preventing persistent disability, and such needs might be greater among men.
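
    The core quantity behind assigning people to trajectory classes is the posterior class-membership probability (the "responsibility"). The sketch below shows that computation for a single observation in a one-dimensional Gaussian mixture with invented parameters; a real growth mixture model does the same thing over whole trajectories rather than single values.

```python
import numpy as np

def responsibilities(y, means, sds, weights):
    """Posterior class-membership probabilities for one observation in a
    simple Gaussian mixture: prior weight times class density, renormalized."""
    dens = weights * np.exp(-0.5 * ((y - means) / sds) ** 2) / sds
    return dens / dens.sum()

r = responsibilities(y=1.8,
                     means=np.array([0.0, 2.0]),
                     sds=np.array([1.0, 1.0]),
                     weights=np.array([0.6, 0.4]))
assert np.isclose(r.sum(), 1.0)
assert r[1] > r[0]                 # y = 1.8 sits closer to the second class
```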

  19. Prospective memory after moderate-to-severe traumatic brain injury: a multinomial modeling approach.

    PubMed

    Pavawalla, Shital P; Schmitter-Edgecombe, Maureen; Smith, Rebekah E

    2012-01-01

    Prospective memory (PM), which can be understood as the processes involved in realizing a delayed intention, is consistently found to be impaired after a traumatic brain injury (TBI). Although PM can be empirically dissociated from retrospective memory, it inherently involves both a prospective component (i.e., remembering that an action needs to be carried out) and retrospective components (i.e., remembering what action needs to be executed and when). This study utilized a multinomial processing tree model to disentangle the prospective (that) and retrospective recognition (when) components underlying PM after moderate-to-severe TBI. Seventeen participants with moderate to severe TBI and 17 age- and education-matched control participants completed an event-based PM task that was embedded within an ongoing computer-based color-matching task. The multinomial processing tree modeling approach revealed a significant group difference in the prospective component, indicating that the control participants allocated greater preparatory attentional resources to the PM task compared to the TBI participants. Participants in the TBI group were also found to be significantly more impaired than controls in the when aspect of the retrospective component. These findings indicated that the TBI participants had greater difficulty allocating the necessary preparatory attentional resources to the PM task and greater difficulty discriminating between PM targets and nontargets during task execution, despite demonstrating intact posttest recall and/or recognition of the PM tasks and targets.
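
    The logic of a multinomial processing tree can be made concrete with a deliberately simplified two-branch tree for event-based PM. The parameterization below (P for the prospective component, M for the retrospective discrimination component, g for guessing) is an illustration in the spirit of the abstract, not the exact model fitted in the study, and the parameter values are invented.

```python
def pm_tree_probs(P, M, g):
    """Illustrative (simplified) MPT for event-based prospective memory.
    P: prospective component (preparatory attention engaged),
    M: retrospective component (target vs. nontarget discrimination),
    g: guessing rate when discrimination fails.
    Returns (hit rate on PM targets, false-alarm rate on nontargets)."""
    hit = P * (M + (1 - M) * g)    # attend, then recognize or guess
    fa  = P * (1 - M) * g          # attend but fail to discriminate
    return hit, fa

hit, fa = pm_tree_probs(P=0.8, M=0.7, g=0.5)
assert 0.0 <= fa <= hit <= 1.0     # more hits than false alarms
```

    Fitting such a tree means choosing P, M, and g so the predicted category probabilities match the observed response frequencies, which is what lets group differences be localized to the prospective versus retrospective components.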

  20. An Empirical Bayes Estimate of Multinomial Probabilities.

    DTIC Science & Technology

    1982-02-01

    multinomial probabilities has been considered from a decision theoretic point of view by Steinhaus (1957), Trybula (1958) and Rutkowska (1977). In a recent...variate Hypergeometric and Multinomial Distributions," Zastosowania Matematyki, 16, 9-21. Steinhaus, H. (1957), "The Problem of Estimation." Annals of

  1. Evaluating risk factors for endemic human Salmonella Enteritidis infections with different phage types in Ontario, Canada using multinomial logistic regression and a case-case study approach

    PubMed Central

    2012-01-01

    Background Identifying risk factors for Salmonella Enteritidis (SE) infections in Ontario will assist public health authorities to design effective control and prevention programs to reduce the burden of SE infections. Our research objective was to identify risk factors for acquiring SE infections with various phage types (PT) in Ontario, Canada. We hypothesized that certain PTs (e.g., PT8 and PT13a) have specific risk factors for infection. Methods Our study included endemic SE cases with various PTs whose isolates were submitted to the Public Health Laboratory-Toronto from January 20th to August 12th, 2011. Cases were interviewed using a standardized questionnaire that included questions pertaining to demographics, travel history, clinical symptoms, contact with animals, and food exposures. A multinomial logistic regression method using the Generalized Linear Latent and Mixed Model procedure and a case-case study design were used to identify risk factors for acquiring SE infections with various PTs in Ontario, Canada. In the multinomial logistic regression model, the outcome variable had three categories representing human infections caused by SE PT8, PT13a, and all other SE PTs (i.e., non-PT8/non-PT13a) as a referent category to which the other two categories were compared. Results In the multivariable model, SE PT8 was positively associated with contact with dogs (OR=2.17, 95% CI 1.01-4.68) and negatively associated with pepper consumption (OR=0.35, 95% CI 0.13-0.94), after adjusting for age categories and gender, and using exposure periods and health regions as random effects to account for clustering. Conclusions Our study findings offer interesting hypotheses about the role of phage type-specific risk factors. Multinomial logistic regression analysis and the case-case study approach are novel methodologies to evaluate associations among SE infections with different PTs and various risk factors. PMID:23057531

  2. Making sense of sparse rating data in collaborative filtering via topographic organization of user preference patterns.

    PubMed

    Polcicová, Gabriela; Tino, Peter

    2004-01-01

    We introduce topographic versions of two latent class models (LCM) for collaborative filtering. Latent classes are topologically organized on a square grid. Topographic organization of latent classes makes orientation in rating/preference patterns captured by the latent classes easier and more systematic. The variation in film rating patterns is modelled by multinomial and binomial distributions with varying independence assumptions. In the first stage of topographic LCM construction, self-organizing maps with a neural field organized according to the LCM topology are employed. We apply our system to a large collection of user ratings for films. The system can provide useful visualization plots unveiling user preference patterns buried in the data, without losing its potential to be a good recommender model. It appears that the multinomial distribution is most adequate if the model is regularized by tight grid topologies. Since we deal with probabilistic models of the data, we can readily use tools from probability and information theories to interpret and visualize information extracted by our system.

  3. Modeling Information Content Via Dirichlet-Multinomial Regression Analysis.

    PubMed

    Ferrari, Alberto

    2017-01-01

    Shannon entropy is being increasingly used in biomedical research as an index of complexity and information content in sequences of symbols, e.g. languages, amino acid sequences, DNA methylation patterns and animal vocalizations. Yet, distributional properties of information entropy as a random variable have seldom been the object of study, leading researchers to mainly use linear models or simulation-based analytical approaches to assess differences in information content when entropy is measured repeatedly in different experimental conditions. Here, a method to perform inference on entropy in such conditions is proposed. Building on results coming from studies in the field of Bayesian entropy estimation, a symmetric Dirichlet-multinomial regression model, able to deal efficiently with the issue of mean entropy estimation, is formulated. Through a simulation study the model is shown to outperform linear modeling in a vast range of scenarios and to have promising statistical properties. As a practical example, the method is applied to a data set coming from a real experiment on animal communication.
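
    The estimation problem the paper addresses starts with the plug-in entropy estimate, which is biased for small samples. A minimal sketch, contrasting the plug-in estimate with a simple Dirichlet-smoothed alternative (this smoothing is a basic Bayesian-flavoured fix, not the paper's full Dirichlet-multinomial regression model):

```python
import math

def plugin_entropy(counts):
    """Maximum-likelihood (plug-in) Shannon entropy, in bits."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def dirichlet_smoothed_entropy(counts, alpha=1.0):
    """Entropy of the posterior-mean probabilities under a symmetric
    Dirichlet(alpha) prior over the k categories."""
    n, k = sum(counts), len(counts)
    probs = [(c + alpha) / (n + k * alpha) for c in counts]
    return -sum(p * math.log2(p) for p in probs)

assert abs(plugin_entropy([5, 5, 5, 5]) - 2.0) < 1e-12   # uniform over 4
assert plugin_entropy([20, 0, 0, 0]) == 0.0              # no uncertainty
```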

  4. Statistical Development and Application of Cultural Consensus Theory

    DTIC Science & Technology

    2012-03-31

    Bulletin & Review, 17, 275-286. Schmittmann, V.D., Dolan, C.V., Raijmakers, M.E.J., and Batchelder, W.H. (2010). Parameter identification in...Wu, H., Myung, J.I., and Batchelder, W.H. (2010). Minimum description length model selection of multinomial processing tree models. Psychonomic

  5. The empathy impulse: A multinomial model of intentional and unintentional empathy for pain.

    PubMed

    Cameron, C Daryl; Spring, Victoria L; Todd, Andrew R

    2017-04-01

    Empathy for pain is often described as automatic. Here, we used implicit measurement and multinomial modeling to formally quantify unintentional empathy for pain: empathy that occurs despite intentions to the contrary. We developed the pain identification task (PIT), a sequential priming task wherein participants judge the painfulness of target experiences while trying to avoid the influence of prime experiences. Using multinomial modeling, we distinguished 3 component processes underlying PIT performance: empathy toward target stimuli (Intentional Empathy), empathy toward prime stimuli (Unintentional Empathy), and bias to judge target stimuli as painful (Response Bias). In Experiment 1, imposing a fast (vs. slow) response deadline uniquely reduced Intentional Empathy. In Experiment 2, inducing imagine-self (vs. imagine-other) perspective-taking uniquely increased Unintentional Empathy. In Experiment 3, Intentional and Unintentional Empathy were stronger toward targets with typical (vs. atypical) pain outcomes, suggesting that outcome information matters and that effects on the PIT are not reducible to affective priming. Typicality of pain outcomes more weakly affected task performance when target stimuli were merely categorized rather than judged for painfulness, suggesting that effects on the latter are not reducible to semantic priming. In Experiment 4, Unintentional Empathy was stronger for participants who engaged in costly donation to cancer charities, but this parameter was also high for those who donated to an objectively worse but socially more popular charity, suggesting that overly high empathy may facilitate maladaptive altruism. Theoretical and practical applications of our modeling approach for understanding variation in empathy are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Random Walks on a Simple Cubic Lattice, the Multinomial Theorem, and Configurational Properties of Polymers

    ERIC Educational Resources Information Center

    Hladky, Paul W.

    2007-01-01

    Random-climb models enable undergraduate chemistry students to visualize polymer molecules, quantify their configurational properties, and relate molecular structure to a variety of physical properties. The model could serve as an introduction to more elaborate models of polymer molecules and could help in learning topics such as lattice models of…

  7. Latent spatial models and sampling design for landscape genetics

    Treesearch

    Ephraim M. Hanks; Melvin B. Hooten; Steven T. Knick; Sara J. Oyler-McCance; Jennifer A. Fike; Todd B. Cross; Michael K. Schwartz

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial...

  8. Application of a Multidimensional Nested Logit Model to Multiple-Choice Test Items

    ERIC Educational Resources Information Center

    Bolt, Daniel M.; Wollack, James A.; Suh, Youngsuk

    2012-01-01

    Nested logit models have been presented as an alternative to multinomial logistic models for multiple-choice test items (Suh and Bolt in "Psychometrika" 75:454-473, 2010) and possess a mathematical structure that naturally lends itself to evaluating the incremental information provided by attending to distractor selection in scoring. One potential…

  9. The Mixed Effects Trend Vector Model

    ERIC Educational Resources Information Center

    de Rooij, Mark; Schouteden, Martijn

    2012-01-01

    Maximum likelihood estimation of mixed effect baseline category logit models for multinomial longitudinal data can be prohibitive due to the integral dimension of the random effects distribution. We propose to use multidimensional unfolding methodology to reduce the dimensionality of the problem. As a by-product, readily interpretable graphical…

  10. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  11. From margarine to butter: predictors of changing bread spread in an 11-year population follow-up.

    PubMed

    Prättälä, Ritva; Levälahti, Esko; Lallukka, Tea; Männistö, Satu; Paalanen, Laura; Raulio, Susanna; Roos, Eva; Suominen, Sakari; Mäki-Opas, Tomi

    2016-06-01

    Finland is known for a sharp decrease in the intake of saturated fat and in cardiovascular mortality. Since 2000, however, the consumption of butter-containing spreads - an important source of saturated fats - has increased. We examined social and health-related predictors of this increase among Finnish men and women in an 11-year population follow-up of a representative random sample of adult Finns invited to a health survey in 2000. Altogether 5414 persons aged 30-64 years at baseline in 2000 were re-invited in 2011. Of these, 1529 men (59%) and 1853 women (66%) answered the questions on bread spreads at both time points. Respondents reported their use of bread spreads by choosing one of the following alternatives: no fat, soft margarine, butter-vegetable oil mixture and butter, which were later categorized into margarine/no spread and butter/butter-vegetable oil mixture (= butter). The predictors included gender, age, marital status, education, employment status, place of residence, health behaviours, BMI and health. Multinomial regression models were fitted. Of the 2582 baseline margarine/no spread users, 24.6% shifted to butter. Only a few of the baseline sociodemographic or health-related determinants predicted the change. Finnish women were more likely to change to butter than men. Living with a spouse predicted the change among men. The change from margarine to butter between 2000 and 2011 did not seem to be a matter of compliance with official nutrition recommendations. Further longitudinal studies on social, behavioural and motivational predictors of dietary changes are needed.

  12. A multilevel model for comorbid outcomes: obesity and diabetes in the US.

    PubMed

    Congdon, Peter

    2010-02-01

    Multilevel models are overwhelmingly applied to single health outcomes, but when two or more health conditions are closely related, it is important that contextual variation in their joint prevalence (e.g., variations over different geographic settings) is considered. A multinomial multilevel logit regression approach for analysing joint prevalence is proposed here that includes subject level risk factors (e.g., age, race, education) while also taking account of geographic context. Data from a US population health survey (the 2007 Behavioral Risk Factor Surveillance System or BRFSS) are used to illustrate the method, with a six category multinomial outcome defined by diabetic status and weight category (obese, overweight, normal). The influence of geographic context is partly represented by known geographic variables (e.g., county poverty), and partly by a model for latent area influences. In particular, a shared latent variable (common factor) approach is proposed to measure the impact of unobserved area influences on joint weight and diabetes status, with the latent variable being spatially structured to reflect geographic clustering in risk.

  13. Using species spectra to evaluate plant community conservation value along a gradient of anthropogenic disturbance.

    PubMed

    Marcelino, José A P; Silva, Luís; Garcia, Patricia V; Weber, Everett; Soares, António O

    2013-08-01

    The aim of this study was to assess the impact of anthropogenic disturbance on the partitioning of plant communities (species spectra) across a landcover gradient of community types, categorizing species on the basis of their biogeographic, ecological, and conservation status. We tested a multinomial model to generate species spectra and monitor changes in plant assemblages as anthropogenic disturbance rises, as well as the usefulness of this method to assess the conservation value of a given community. Herbaceous and arborescent communities were sampled in five Azorean islands. Margins were also sampled to account for edge effects. Different multinomial models were applied to a data set of 348 plant species accounting for differences in parameter estimates among communities and/or islands. Different levels of anthropogenic disturbance produced measurable changes in species spectra. Introduced species proliferated and indigenous species declined as anthropogenic disturbance and management intensity increased. Species assemblages of relevance other than economic (i.e., native, endemic, threatened species) were enclosed not only in natural habitats, but also in human-managed arborescent habitats, which can positively contribute to the preservation of indigenous species outside remnants of natural areas, depending on management strategies. A significant presence of invasive species in margin transects of most community types will contribute to an increase in edge effect that might facilitate invasion. The multinomial model developed in this study was found to be a novel and expedient tool to characterize the species spectra at a given community, and its use could be extrapolated to other assemblages or organisms in order to evaluate and forecast the conservation value of a site.

  14. Bayesian Network Meta-Analysis for Unordered Categorical Outcomes with Incomplete Data

    ERIC Educational Resources Information Center

    Schmid, Christopher H.; Trikalinos, Thomas A.; Olkin, Ingram

    2014-01-01

    We develop a Bayesian multinomial network meta-analysis model for unordered (nominal) categorical outcomes that allows for partially observed data in which exact event counts may not be known for each category. This model properly accounts for correlations of counts in mutually exclusive categories and enables proper comparison and ranking of…

  15. Beyond ROC Curvature: Strength Effects and Response Time Data Support Continuous-Evidence Models of Recognition Memory

    ERIC Educational Resources Information Center

    Dube, Chad; Starns, Jeffrey J.; Rotello, Caren M.; Ratcliff, Roger

    2012-01-01

    A classic question in the recognition memory literature is whether retrieval is best described as a continuous-evidence process consistent with signal detection theory (SDT), or a threshold process consistent with many multinomial processing tree (MPT) models. Because receiver operating characteristics (ROCs) based on confidence ratings are…

  16. Leavers, Movers, and Stayers: The Role of Workplace Conditions in Teacher Mobility Decisions

    ERIC Educational Resources Information Center

    Kukla-Acevedo, Sharon

    2009-01-01

    The author explored whether 3 workplace conditions were related to teacher mobility decisions. The modeling strategy incorporated a series of binomial and multinomial logistic models to estimate the effects of administrative support, classroom control, and behavioral climate on teachers' decisions to quit teaching or switch schools. The results…

  17. Multinomial Bayesian learning for modeling classical and nonclassical receptive field properties.

    PubMed

    Hosoya, Haruo

    2012-08-01

    We study the interplay of Bayesian inference and natural image learning in a hierarchical vision system, in relation to the response properties of early visual cortex. We particularly focus on a Bayesian network with multinomial variables that can represent discrete feature spaces similar to hypercolumns combining minicolumns, enforce sparsity of activation to learn efficient representations, and explain divisive normalization. We demonstrate that maximal-likelihood learning using sampling-based Bayesian inference gives rise to classical receptive field properties similar to V1 simple cells and V2 cells, while inference performed on the trained network yields nonclassical context-dependent response properties such as cross-orientation suppression and filling in. Comparison with known physiological properties reveals some qualitative and quantitative similarities.
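
    Divisive normalization, which the abstract invokes to explain effects like cross-orientation suppression, has a compact canonical form: each unit's response is divided by a constant plus the pooled activity of its neighbours. The values below are arbitrary illustration, not parameters from the paper.

```python
import numpy as np

def divisive_normalization(x, sigma=1.0, n=2.0):
    """Canonical divisive normalization: raise inputs to the power n and
    divide each by a semisaturation constant plus the pooled activity."""
    xn = np.asarray(x, dtype=float) ** n
    return xn / (sigma ** n + xn.sum())

r = divisive_normalization([3.0, 1.0, 0.5])
assert np.all(r >= 0.0) and np.all(r < 1.0)

# A strong masker suppresses the response to a fixed target, the
# signature of cross-orientation suppression:
r_alone = divisive_normalization([3.0, 0.0, 0.0])[0]
r_mask  = divisive_normalization([3.0, 3.0, 0.0])[0]
assert r_mask < r_alone
```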

  18. Dietary Fiber Intake Is Inversely Associated with Periodontal Disease among US Adults.

    PubMed

    Nielsen, Samara Joy; Trak-Fellermeier, Maria Angelica; Joshipura, Kaumudi; Dye, Bruce A

    2016-12-01

    Approximately 47% of adults in the United States have periodontal disease. Dietary guidelines recommend a diet providing adequate fiber. Healthier dietary habits, particularly an increased fiber intake, may contribute to periodontal disease prevention. Our objective was to evaluate the relation of dietary fiber intake and its sources with periodontal disease in the US adult population (≥30 y of age). Data from 6052 adults participating in NHANES 2009-2012 were used. Periodontal disease was defined (according to the CDC/American Academy of Periodontology) as severe, moderate, mild, and none. Intake was assessed by 24-h dietary recalls. The relation between periodontal disease and dietary fiber, whole-grain, and fruit and vegetable intakes was evaluated by using multivariate models, adjusting for sociodemographic characteristics and dentition status. In the multivariate logistic model, the lowest quartile of dietary fiber was associated with moderate-severe periodontitis (compared with mild-none) compared with the highest dietary fiber intake quartile (OR: 1.30; 95% CI: 1.00, 1.69). In the multivariate multinomial logistic model, intake in the lowest quartile of dietary fiber was associated with higher severity of periodontitis than dietary fiber intake in the highest quartile (OR: 1.27; 95% CI: 1.00, 1.62). In the adjusted logistic model, whole-grain intake was not associated with moderate-severe periodontitis. However, in the adjusted multinomial logistic model, adults consuming whole grains in the lowest quartile were more likely to have more severe periodontal disease than were adults consuming whole grains in the highest quartile (OR: 1.32; 95% CI: 1.08, 1.62). In fully adjusted logistic and multinomial logistic models, fruit and vegetable intake was not significantly associated with periodontitis. We found an inverse relation between dietary fiber intake and periodontal disease among US adults ≥30 y old. Periodontal disease was associated with low whole-grain intake but not with low fruit and vegetable intake. © 2016 American Society for Nutrition.

  19. "The empathy impulse: A multinomial model of intentional and unintentional empathy for pain": Correction.

    PubMed

    2018-04-01

    Reports an error in "The empathy impulse: A multinomial model of intentional and unintentional empathy for pain" by C. Daryl Cameron, Victoria L. Spring and Andrew R. Todd (Emotion, 2017[Apr], Vol 17[3], 395-411). In this article, there was an error in the calculation of some of the effect sizes. The w effect size was manually computed incorrectly. The incorrect number of total observations was used, which affected the final effect size estimates. This computing error does not change any of the results or interpretations about model fit based on the G² statistic, or about significant differences across conditions in process parameters. Therefore, it does not change any of the hypothesis tests or conclusions. The w statistics for overall model fit should be .02 instead of .04 in Study 1, .01 instead of .02 in Study 2, .01 instead of .03 for the OIT in Study 3 (model fit for the PIT remains the same: .00), and .02 instead of .03 in Study 4. The corrected tables can be seen here: http://osf.io/qebku at the Open Science Framework site for the article. (The following abstract of the original article appeared in record 2017-01641-001.) Empathy for pain is often described as automatic. Here, we used implicit measurement and multinomial modeling to formally quantify unintentional empathy for pain: empathy that occurs despite intentions to the contrary. We developed the pain identification task (PIT), a sequential priming task wherein participants judge the painfulness of target experiences while trying to avoid the influence of prime experiences. Using multinomial modeling, we distinguished 3 component processes underlying PIT performance: empathy toward target stimuli (Intentional Empathy), empathy toward prime stimuli (Unintentional Empathy), and bias to judge target stimuli as painful (Response Bias). In Experiment 1, imposing a fast (vs. slow) response deadline uniquely reduced Intentional Empathy. In Experiment 2, inducing imagine-self (vs. 
imagine-other) perspective-taking uniquely increased Unintentional Empathy. In Experiment 3, Intentional and Unintentional Empathy were stronger toward targets with typical (vs. atypical) pain outcomes, suggesting that outcome information matters and that effects on the PIT are not reducible to affective priming. Typicality of pain outcomes more weakly affected task performance when target stimuli were merely categorized rather than judged for painfulness, suggesting that effects on the latter are not reducible to semantic priming. In Experiment 4, Unintentional Empathy was stronger for participants who engaged in costly donation to cancer charities, but this parameter was also high for those who donated to an objectively worse but socially more popular charity, suggesting that overly high empathy may facilitate maladaptive altruism. Theoretical and practical applications of our modeling approach for understanding variation in empathy are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
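
    The three PIT parameters can be arranged into a multinomial processing tree. A minimal sketch of one plausible arrangement (our illustrative reconstruction, not necessarily the authors' exact tree), in which the target drives the response with probability I, otherwise the prime drives it with probability U, otherwise a response bias B applies:

```python
def p_painful(target_painful, prime_painful, I, U, B):
    """Probability of a 'painful' judgment in an illustrative processing tree:
    with prob I the target drives the response (Intentional Empathy);
    else with prob U the prime drives it (Unintentional Empathy);
    else a bias B toward responding 'painful' applies."""
    t = 1.0 if target_painful else 0.0
    p = 1.0 if prime_painful else 0.0
    return I * t + (1 - I) * (U * p + (1 - U) * B)

# Congruent trials (prime matches target) should yield more
# target-consistent responses than incongruent trials.
acc_congruent = p_painful(True, True, I=0.7, U=0.4, B=0.5)
acc_incongruent = p_painful(True, False, I=0.7, U=0.4, B=0.5)
```

    Fitting such a tree to observed response frequencies across trial types is what yields the separate parameter estimates the correction above revises effect sizes for.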

  20. An efficient algorithm for accurate computation of the Dirichlet-multinomial log-likelihood function.

    PubMed

    Yu, Peng; Shaw, Chad A

    2014-06-01

    The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics including applications to metagenomics data, transcriptomics and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for the large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood to solve the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method improves both the accuracy of log-likelihood evaluation and the runtime by several orders of magnitude, especially in the high-count situations that are common in deep sequencing data. Using real metagenomic data, our method achieves a manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
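
    The DMN log-pmf is conventionally written in terms of log-gamma functions; as the abstract notes, that form loses precision as the concentration parameters grow large (i.e., as overdispersion approaches 0). A minimal sketch of the conventional lgamma formulation (the baseline the paper improves on, not the authors' stabilized algorithm):

```python
from math import lgamma, exp, isclose

def dmn_loglik(x, alpha):
    """Log-pmf of the Dirichlet-multinomial for count vector x and
    concentration parameters alpha, via log-gamma functions."""
    n, a = sum(x), sum(alpha)
    ll = lgamma(n + 1) - sum(lgamma(xi + 1) for xi in x)  # multinomial coefficient
    ll += lgamma(a) - lgamma(n + a)                       # normalizing constant
    ll += sum(lgamma(xi + ai) - lgamma(ai) for xi, ai in zip(x, alpha))
    return ll

# With alpha = (1, 1) and n = 1, the two outcomes are equally likely.
print(exp(dmn_loglik([1, 0], [1.0, 1.0])))  # ≈ 0.5
```

    For large alpha (small overdispersion), the lgamma differences above involve near-cancelling large terms, which is exactly the instability the paper addresses.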

  1. Factors Associated with Substance Use in Adolescents with Eating Disorders

    PubMed Central

    Mann, Andrea P.; Accurso, Erin C.; Stiles-Shields, Colleen; Capra, Lauren; Labuschagne, Zandre; Karnik, Niranjan S.; Le Grange, Daniel

    2014-01-01

    Purpose To examine the prevalence and potential risk factors associated with substance use in adolescents with eating disorders (EDs). Methods This cross-sectional study included 290 adolescents, ages 12-18 years, who presented for an initial ED evaluation at The Eating Disorders Program at The University of Chicago Medicine (UCM) between 2001 and 2012. Several factors, including DSM-5 diagnosis, diagnostic scores, and demographic characteristics, were examined. Multinomial logistic regression was used to test associations between several factors and patterns of drug use for alcohol, cannabis, tobacco, and any substance. Results Lifetime prevalence of any substance use was 24.6% in those with anorexia nervosa (AN), 48.7% in bulimia nervosa (BN), and 28.6% in eating disorder not otherwise specified (EDNOS). Regular substance use (monthly, daily, and bingeing behaviors) or a substance use disorder (SUD) was found in 27.9% of all patients. Older age was the only factor associated with regular use of any substance in the final multinomial model. Older age and non-White race were associated with greater alcohol and cannabis use. Although binge-purge frequency and BN diagnosis were associated with regular substance use in bivariate analyses, gender, race, and age were more robustly associated with substance use in the final multinomial models. Conclusions Co-morbid substance use in adolescents with EDs is an important issue. Interventions targeting high-risk groups reporting regular substance use or SUDs are needed. PMID:24656448

  2. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    PubMed Central

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Because metro network expansion offers travelers more alternative routes, it is attractive to integrate into route choice modeling both the impact of the route set and the interdependency among alternative routes on route choice probability. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impact of the route set on utility; and the error component, which follows a multivariate normal distribution, has a covariance structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because of the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes formulation, and a Markov chain Monte Carlo approach based on the Metropolis-Hastings sampling algorithm is constructed to estimate all parameters. Reliable estimation results are obtained from Guangzhou Metro data. Furthermore, the proposed CMNP model shows good forecasting performance for calculating route choice probabilities and good application performance for predicting transfer flow volumes. PMID:28591188
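
    The reason simulation-based estimators like MCMC are needed here is that multinomial probit choice probabilities are integrals of a multivariate normal density with no closed form. A minimal Monte Carlo sketch of the choice probabilities themselves (illustrative only, not the paper's hierarchical Bayes estimator), assuming systematic utilities V and a lower-triangular Cholesky factor L of the error covariance:

```python
import random

def mnp_choice_probs(V, L, draws=20000, seed=42):
    """Monte Carlo choice probabilities for a multinomial probit model:
    U_j = V_j + e_j, where e = L @ z and z is i.i.d. standard normal."""
    rng = random.Random(seed)
    J = len(V)
    counts = [0] * J
    for _ in range(draws):
        z = [rng.gauss(0.0, 1.0) for _ in range(J)]
        e = [sum(L[j][k] * z[k] for k in range(j + 1)) for j in range(J)]
        u = [V[j] + e[j] for j in range(J)]
        counts[u.index(max(u))] += 1          # chosen alternative = max utility
    return [c / draws for c in counts]

# Three routes with equal systematic utility and independent errors:
# each should be chosen about one third of the time.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
probs = mnp_choice_probs([0.0, 0.0, 0.0], I3)
```

    Off-diagonal entries in L induce the correlation among routes that the CMNP covariance structure is designed to capture.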

  3. Modeling the Distribution of Fingerprint Characteristics. Revision 1.

    DTIC Science & Technology

    1980-09-19

    Snippet from the report: the ridge-line details of a print are termed Galton characteristics, since Sir Francis Galton was among the first to study them. The report's contents include background information on fingerprints (types, ridge counts, and the Galton details), the data, the multinomial Markov model, the Poisson Markov model, and the infinitely divisible model.

  4. Modeling of orthotropic plate fracture under impact load using various strength criteria

    NASA Astrophysics Data System (ADS)

    Radchenko, Andrey; Krivosheina, Marina; Kobenko, Sergei; Radchenko, Pavel; Grebenyuk, Grigory

    2017-01-01

    The paper presents a comparative analysis of various tensor multinomial strength criteria for modeling the fracture of an orthotropic organic plastic plate under impact load. The Ashkenazi, Hoffman and Wu strength criteria were used; they allow fracture modeling of orthotropic materials with different compressive and tensile strength properties. The fracture of the organic plastic was modeled numerically within the impact velocity range of 700-1500 m/s.

  5. The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples

    ERIC Educational Resources Information Center

    Avetisyan, Marianna; Fox, Jean-Paul

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…

  6. A Multinomial Logit Model of Attrition that Distinguishes between Stopout and Dropout Behavior

    ERIC Educational Resources Information Center

    Stratton, Leslie S.; O'Toole, Dennis M.; Wetzel, James N.

    2004-01-01

    College attrition rates are of substantial concern to policy makers and economists interested in educational attainment and earnings opportunities. This is not surprising since nationwide, almost one-third of all first-time college students fail to return for their sophomore year. There exists a substantial body of literature seeking to model this…

  7. A Multilevel Model for Comorbid Outcomes: Obesity and Diabetes in the US

    PubMed Central

    Congdon, Peter

    2010-01-01

    Multilevel models are overwhelmingly applied to single health outcomes, but when two or more health conditions are closely related, it is important that contextual variation in their joint prevalence (e.g., variations over different geographic settings) is considered. A multinomial multilevel logit regression approach for analysing joint prevalence is proposed here that includes subject level risk factors (e.g., age, race, education) while also taking account of geographic context. Data from a US population health survey (the 2007 Behavioral Risk Factor Surveillance System or BRFSS) are used to illustrate the method, with a six category multinomial outcome defined by diabetic status and weight category (obese, overweight, normal). The influence of geographic context is partly represented by known geographic variables (e.g., county poverty), and partly by a model for latent area influences. In particular, a shared latent variable (common factor) approach is proposed to measure the impact of unobserved area influences on joint weight and diabetes status, with the latent variable being spatially structured to reflect geographic clustering in risk. PMID:20616977

  8. Body mass index and employment status: A new look.

    PubMed

    Kinge, Jonas Minet

    2016-09-01

    Earlier literature has usually modelled the impact of obesity on employment status as a binary choice (employed, yes/no). I provide new evidence on the impact of obesity on employment status by treating the dependent variable as a multinomial choice variable. Using data from a representative English survey, with measured height and weight for parents and children, I define employment status as one of four categories: working; looking for paid work; permanently not working due to disability; and looking after home or family. I use a multinomial logit model controlling for a set of covariates. I also run instrumental variable models, instrumenting for Body Mass Index (BMI) based on genetic variation in weight. I find that BMI and obesity significantly increase the probability of "not working due to disability". The results for the other employment outcomes are less clear. My findings also indicate that BMI affects employment through its effect on health. Factors other than health may be less important in explaining the impact of BMI/obesity on employment. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. A general equation to obtain multiple cut-off scores on a test from multinomial logistic regression.

    PubMed

    Bersabé, Rosa; Rivas, Teresa

    2010-05-01

    The authors derive a general equation to compute multiple cut-offs on a total test score in order to classify individuals into more than two ordinal categories. The equation is derived from the multinomial logistic regression (MLR) model, which is an extension of the binary logistic regression (BLR) model to accommodate polytomous outcome variables. From this analytical procedure, cut-off scores are established at the test score (the predictor variable) at which an individual is as likely to be in category j as in category j+1 of an ordinal outcome variable. The application of the complete procedure is illustrated by an example with data from an actual study on eating disorders. In this example, two cut-off scores on the Eating Attitudes Test (EAT-26) scores are obtained in order to classify individuals into three ordinal categories: asymptomatic, symptomatic and eating disorder. Diagnoses were made from the responses to a self-report (Q-EDD) that operationalises DSM-IV criteria for eating disorders. Alternatives to the MLR model to set multiple cut-off scores are discussed.
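
    For adjacent categories j and j+1 with linear predictors b0 + b1*x in a multinomial logistic model, the cut-off described above is the score at which the two category probabilities are equal, i.e. the solution of b0_j + b1_j*x = b0_(j+1) + b1_(j+1)*x. A minimal sketch with hypothetical coefficients (not the fitted EAT-26 estimates):

```python
def cutoff(b0_j, b1_j, b0_k, b1_k):
    """Score at which categories j and k are equally likely under a
    multinomial logistic model with linear predictors b0 + b1 * x."""
    return (b0_j - b0_k) / (b1_k - b1_j)

# Hypothetical coefficients, with 'asymptomatic' as the baseline
# category (linear predictor fixed at 0).
b0_sym, b1_sym = -2.0, 0.15   # symptomatic vs asymptomatic
b0_ed,  b1_ed  = -6.0, 0.30   # eating disorder vs asymptomatic

c1 = cutoff(0.0, 0.0, b0_sym, b1_sym)       # asymptomatic / symptomatic
c2 = cutoff(b0_sym, b1_sym, b0_ed, b1_ed)   # symptomatic / eating disorder
print(c1, c2)  # approx. 13.3 and 26.7
```

    With ordered categories the fitted cut-offs should come out ordered as well, as they do here.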

  10. Multi-scale biomarker evaluation of the toxicity of a commercial azo dye (Disperse Red 1) in an animal model, the freshwater cnidarian Hydra attenuata.

    PubMed

    de Jong, Laetitia; Pech, Nicolas; de Aragão Umbuzeiro, Gisela; Moreau, Xavier

    2016-06-01

    Acute (24 h, 48 h, 72 h) and chronic (7 days) tests were performed to evaluate the effects of the commercial azo dye Disperse Red 1 (DR1), using various biomarkers, in the freshwater invertebrate Hydra attenuata. Morphological changes were selected to calculate ecotoxicological thresholds for sublethal and lethal DR1 concentrations. A multinomial logistic model showed that the probability of occurrence of each morphological stage was a function of concentration, time and their interaction. Results of oxidative balance parameter measurements (72 h and 7 days) suggest that polyps set up defense mechanisms to limit the lipid peroxidation caused by DR1. DR1 exposure at hormetic concentrations induces an increase in asexual reproduction rates. This result suggests (1) an impact on fitness-related phenotypical traits and (2) trade-offs between reproduction and maintenance that allow the population to survive harsher conditions. Changes in serotonin immuno-labeling in polyps showing alterations in feeding behavior suggest that chronic DR1 exposure impaired neuronal processes related to ingestion behavior in H. attenuata. This ecotoxicity study sheds light on the possible function of serotonin in the Hydra model and reports for the first time that serotonin could play a significant role in feeding behavior. The study used a multi-scale biomarker approach investigating biochemical, morphological, reproductive and behavioral endpoints in Hydra attenuata, which is proposed as a pertinent animal model for assessing the ecotoxicological impact of pollutant mixtures in freshwater environments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Use of negative multinomial linear models to investigate environmental effects on community structure.

    EPA Science Inventory

    A frequent goal in ecology is to understand the relationships between biological communities and their environment. Anderson and McArdle (2001) provided a nonparametric method, known as Permanova, that is often used for this purpose. Permanova represents a significant advance,...

  12. Mapping CHU9D Utility Scores from the PedsQL™ 4.0 SF-15.

    PubMed

    Mpundu-Kaambwa, Christine; Chen, Gang; Russo, Remo; Stevens, Katherine; Petersen, Karin Dam; Ratcliffe, Julie

    2017-04-01

    The Pediatric Quality of Life Inventory™ 4.0 Short Form 15 Generic Core Scales (hereafter the PedsQL) and the Child Health Utility-9 Dimensions (CHU9D) are two generic instruments designed to measure health-related quality of life in children and adolescents in the general population and paediatric patient groups living with specific health conditions. Although the PedsQL is widely used among paediatric patient populations, presently it is not possible to directly use the scores from the instrument to calculate quality-adjusted life-years (QALYs) for application in economic evaluation because it produces summary scores which are not preference-based. This paper examines different econometric mapping techniques for estimating CHU9D utility scores from the PedsQL for the purpose of calculating QALYs for cost-utility analysis. The PedsQL and the CHU9D were completed by a community sample of 755 Australian adolescents aged 15-17 years. Seven regression models were estimated: ordinary least squares estimator, generalised linear model, robust MM estimator, multivariate factorial polynomial estimator, beta-binomial estimator, finite mixture model and multinomial logistic model. The mean absolute error (MAE) and the mean squared error (MSE) were used to assess predictive ability of the models. The MM estimator with stepwise-selected PedsQL dimension scores as explanatory variables had the best predictive accuracy using MAE and the equivalent beta-binomial model had the best predictive accuracy using MSE. Our mapping algorithm facilitates the estimation of health-state utilities for use within economic evaluations where only PedsQL data is available and is suitable for use in community-based adolescents aged 15-17 years. Applicability of the algorithm in younger populations should be assessed in further research.

  13. A Typology of Work-Family Arrangements among Dual-Earner Couples in Norway

    ERIC Educational Resources Information Center

    Kitterod, Ragni Hege; Lappegard, Trude

    2012-01-01

    A symmetrical family model of two workers or caregivers is a political goal in many western European countries. We explore how common this family type is in Norway, a country with high gender-equality ambitions, by using a multinomial latent class model to develop a typology of dual-earner couples with children based on the partners' allocations…

  14. Limited-Information Goodness-of-Fit Testing of Diagnostic Classification Item Response Theory Models. CRESST Report 840

    ERIC Educational Resources Information Center

    Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen

    2014-01-01

    It is a well-known problem in testing the fit of models to multinomial data that the full underlying contingency table will inevitably be sparse for tests of reasonable length and for realistic sample sizes. Under such conditions, full-information test statistics such as Pearson's X² and the likelihood ratio statistic…

  15. The Design and Analysis of Salmonid Tagging Studies in the Columbia Basin; Volume XII; A Multinomial Model for Estimating Ocean Survival from Salmonid Coded Wire-Tag Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryding, Kristen E.; Skalski, John R.

    1999-06-01

    The purpose of this report is to illustrate the development of a stochastic model using coded wire-tag (CWT) release and age-at-return data, in order to regress first year ocean survival probabilities against coastal ocean conditions and climate covariates.

  16. Extreme Sparse Multinomial Logistic Regression: A Fast and Robust Framework for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen

    2017-12-01

    Although sparse multinomial logistic regression (SMLR) provides a useful tool for sparse classification, it handles high-dimensional features inefficiently and depends on manually set initial regressor values, which has significantly constrained its application to hyperspectral image (HSI) classification. To tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weights and biases. Second, an optimization model is established by the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR by minimizing the training error and the regressor value. Furthermore, extended multi-attribute profiles (EMAPs) are utilized to extract both spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, logistic regression via variable splitting and augmented Lagrangian (LORSAL) is adopted in the proposed framework to reduce the computational time. Experiments conducted on two well-known HSI datasets, the Indian Pines dataset and the Pavia University dataset, demonstrate the fast and robust performance of the proposed ESMLR framework.

  17. Evaluating the Locational Attributes of Education Management Organizations (EMOs)

    ERIC Educational Resources Information Center

    Gulosino, Charisse; Miron, Gary

    2017-01-01

    This study uses logistic and multinomial logistic regression models to analyze neighborhood factors affecting EMO (Education Management Organization)-operated schools' locational attributes (using census tracts) in 41 states for the 2014-2015 school year. Our research combines market-based school reform, institutional theory, and resource…

  18. Choice-Based Segmentation as an Enrollment Management Tool

    ERIC Educational Resources Information Center

    Young, Mark R.

    2002-01-01

    This article presents an approach to enrollment management based on target marketing strategies developed from a choice-based segmentation methodology. Students are classified into "switchable" or "non-switchable" segments based on their probability of selecting specific majors. A modified multinomial logit choice model is used to identify…

  19. Trajectories of change in symptom distress in a clinical group of late adolescents: The role of maladaptive personality traits and relations with parents.

    PubMed

    Koster, Nagila; Laceulle, Odilia; van der Heijden, Paul; de Clercq, Barbara; van Aken, Marcel

    2018-03-25

    In this study, it was analysed whether trajectories of change in symptom distress could be identified in a clinical group of late adolescents with personality pathology. Furthermore, it was examined whether maladaptive personality traits and relations with parents were predictive of following one of these trajectories. Three latent classes emerged from growth mixture modelling with a symptom inventory (n = 911): a Stable High, a Strong Decreasing and a Moderate Decreasing trajectory. Subsequently, by using multinomial logistic regression analyses in a subsample of late adolescents (n = 127), it was revealed that high levels of Negative Affectivity and Detachment were predictive of following the Strong Decreasing trajectory, and high levels of Detachment were predictive of following the Stable High trajectory. Support from or Negative Interactions with parents were not predictive of any of the trajectories. The current results contribute to the notion of individual trajectories of change in symptom distress and provide suggestions for screening patients on personality traits to gain insight into the course of this change. © 2018 The Authors. Personality and Mental Health published by John Wiley & Sons Ltd.

  20. Bayesian multinomial probit modeling of daily windows of ...

    EPA Pesticide Factsheets

    Past epidemiologic studies suggest that maternal ambient air pollution exposure during critical periods of pregnancy is associated with fetal development. We introduce a multinomial probit model that allows for the joint identification of susceptible daily periods during pregnancy for 12 individual types of congenital heart defects (CHDs) with respect to maternal PM2.5 exposure. We apply the model to a dataset of mothers from the National Birth Defects Prevention Study where daily PM2.5 exposures from weeks 2-8 of pregnancy are assigned (specific to each location and pregnancy date) using predictions from the downscaler pollution model. Results are compared to an aggregated exposure model which defines exposure as the average value over pregnancy weeks 2-8. Increased PM2.5 exposure during pregnancy days 53 and 50-51, for pulmonary valve stenosis and tetralogy of Fallot respectively, is associated with an increased probability of development of each CHD. The largest estimated effect is seen for atrioventricular septal defects on pregnancy day 14. The aggregated exposure model fails to identify any significant windows of susceptibility during pregnancy weeks 2-8 for the considered CHDs. Considering daily PM2.5 exposures in a new modeling framework revealed positive associations for defects that the standard aggregated exposure model was unable to identify. Disclaimer: The views expressed in this manuscript are those of the authors and do not necessarily represent the views or policies.

  1. Bayesian correlated clustering to integrate multiple datasets

    PubMed Central

    Kirk, Paul; Griffin, Jim E.; Savage, Richard S.; Ghahramani, Zoubin; Wild, David L.

    2012-01-01

    Motivation: The integration of multiple datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct—but often complementary—information. We present a Bayesian method for the unsupervised integrative modelling of multiple datasets, which we refer to as MDI (Multiple Dataset Integration). MDI can integrate information from a wide range of different datasets and data types simultaneously (including the ability to model time series data explicitly using Gaussian processes). Each dataset is modelled using a Dirichlet-multinomial allocation (DMA) mixture model, with dependencies between these models captured through parameters that describe the agreement among the datasets. Results: Using a set of six artificially constructed time series datasets, we show that MDI is able to integrate a significant number of datasets simultaneously, and that it successfully captures the underlying structural similarity between the datasets. We also analyse a variety of real Saccharomyces cerevisiae datasets. In the two-dataset case, we show that MDI’s performance is comparable with the present state-of-the-art. We then move beyond the capabilities of current approaches and integrate gene expression, chromatin immunoprecipitation–chip and protein–protein interaction data, to identify a set of protein complexes for which genes are co-regulated during the cell cycle. Comparisons to other unsupervised data integration techniques—as well as to non-integrative approaches—demonstrate that MDI is competitive, while also providing information that would be difficult or impossible to extract using other methods. Availability: A Matlab implementation of MDI is available from http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/. Contact: D.L.Wild@warwick.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23047558

  2. Patient choice modelling: how do patients choose their hospitals?

    PubMed

    Smith, Honora; Currie, Christine; Chaiwuttisak, Pornpimol; Kyprianou, Andreas

    2018-06-01

    As an aid to predicting future hospital admissions, we compare use of the Multinomial Logit and the Utility Maximising Nested Logit models to describe how patients choose their hospitals. The models are fitted to real data from Derbyshire, United Kingdom, which lists the postcodes of more than 200,000 admissions to six different local hospitals. Both elective and emergency admissions are analysed for this mixed urban/rural area. For characteristics that may affect a patient's choice of hospital, we consider the distance of the patient from the hospital, the number of beds at the hospital and the number of car parking spaces available at the hospital, as well as several statistics publicly available on National Health Service (NHS) websites: an average waiting time, the patient survey score for ward cleanliness, the patient safety score and the inpatient survey score for overall care. The Multinomial Logit model is successfully fitted to the data. Results obtained with the Utility Maximising Nested Logit model show that nesting according to city or town may be invalid for these data; in other words, the choice of hospital does not appear to be preceded by choice of city. In all of the analysis carried out, distance appears to be one of the main influences on a patient's choice of hospital rather than statistics available on the Internet.
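
    Under a multinomial logit, the probability that a patient chooses hospital j is the softmax of linear utilities in attributes such as distance and bed count. A minimal sketch with hypothetical attributes and coefficients (not the fitted Derbyshire estimates):

```python
import math

def mnl_probs(attrs, beta):
    """Multinomial logit choice probabilities: softmax of linear
    utilities V_j = beta . attrs_j over the alternatives."""
    v = [sum(b * a for b, a in zip(beta, row)) for row in attrs]
    m = max(v)
    e = [math.exp(x - m) for x in v]   # shift by max for numerical stability
    s = sum(e)
    return [x / s for x in e]

# Hypothetical attributes per hospital: (distance in km, hundreds of beds).
hospitals = [(5.0, 4.0), (12.0, 8.0), (25.0, 6.0)]
beta = (-0.15, 0.10)   # hypothetical: distance deters, capacity attracts
probs = mnl_probs(hospitals, beta)
```

    With a negative distance coefficient, the nearest hospital receives the highest choice probability, mirroring the dominant role of distance reported above.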

  3. Emotionally enhanced memory for negatively arousing words: storage or retrieval advantage?

    PubMed

    Nadarevic, Lena

    2017-12-01

    People typically remember emotionally negative words better than neutral words. Two experiments are reported that investigate whether emotionally enhanced memory (EEM) for negatively arousing words is based on a storage or retrieval advantage. Participants studied non-word-word pairs that either involved negatively arousing or neutral target words. Memory for these target words was tested by means of a recognition test and a cued-recall test. Data were analysed with a multinomial model that allows the disentanglement of storage and retrieval processes in the present recognition-then-cued-recall paradigm. In both experiments the multinomial analyses revealed no storage differences between negatively arousing and neutral words but a clear retrieval advantage for negatively arousing words in the cued-recall test. These findings suggest that EEM for negatively arousing words is driven by associative processes.

  4. Race and Unemployment: Labor Market Experiences of Black and White Men, 1968-1988.

    ERIC Educational Resources Information Center

    Wilson, Franklin D.; And Others

    1995-01-01

    Estimation of multinomial logistic regression models on a sample of unemployed workers suggested that persistently higher black unemployment is due to differential access to employment opportunities by region, occupational placement, labor market segmentation, and discrimination. The racial gap in unemployment is greatest for college-educated…

  5. Test Design Project: Studies in Test Adequacy. Annual Report.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…

  6. Predicting the Frequency of Senior Center Attendance.

    ERIC Educational Resources Information Center

    Miner, Sonia; And Others

    1993-01-01

    Used data from 1984 Supplement on Aging of the National Health Interview Survey to examine frequency of senior center attendance. Estimated multinomial logistic regression model to distinguish between persons who rarely, sometimes, and frequently attend. Found that more frequent users are older. Greater frequency was associated with lower income…

  7. Multidimensional Computerized Adaptive Testing for Indonesia Junior High School Biology

    ERIC Educational Resources Information Center

    Kuo, Bor-Chen; Daud, Muslem; Yang, Chih-Wei

    2015-01-01

This paper describes a curriculum-based multidimensional computerized adaptive test that was developed for Indonesian junior high school Biology. Following the different Biology dimensions of the Indonesian curriculum, 300 items were constructed and then administered to 2,238 students. A multidimensional random coefficients multinomial logit model was…

  8. Diversity and Educational Benefits: Moving Beyond Self-Reported Questionnaire Data

    ERIC Educational Resources Information Center

    Herzog, Serge

    2007-01-01

    Effects of ethnic/racial diversity among students and faculty on cognitive growth of undergraduate students are estimated via a series of hierarchical linear and multinomial logistic regression models. Using objective measures of compositional, curricular, and interactional diversity based on actuarial course enrollment records of over 6,000…

  9. Poverty and Material Hardship in Grandparent-Headed Households

    ERIC Educational Resources Information Center

    Baker, Lindsey A.; Mutchler, Jan E.

    2010-01-01

    Using the 2001 Survey of Income and Program Participation, the current study examines poverty and material hardship among children living in 3-generation (n = 486), skipped-generation (n = 238), single-parent (n = 2,076), and 2-parent (n = 6,061) households. Multinomial and logistic regression models indicated that children living in…

  10. Lifetime income patterns and alcohol consumption: Investigating the association between long- and short-term income trajectories and drinking

    PubMed Central

    Cerdá, Magdalena; Johnson-Lawrence, Vicki; Galea, Sandro

    2011-01-01

Lifetime patterns of income may be an important driver of alcohol use. In this study, we evaluated the relationship between long-term and short-term measures of income and the relative odds of abstaining, drinking lightly-moderately and drinking heavily. We used data from the US Panel Study of Income Dynamics (PSID), a national population-based cohort that has been followed annually or biannually since 1968. We examined 3111 adult respondents aged 30-44 in 1997. Latent class growth mixture models with a censored normal distribution were used to estimate income trajectories followed by the respondent families from 1968-1997, while repeated measures multinomial generalized logit models estimated the odds of abstinence (no drinks per day) or heavy drinking (at least 3 drinks a day), relative to light/moderate drinking (<1-2 drinks a day), in 1999-2003. Lower income was associated with higher odds of abstinence and of heavy drinking, relative to light/moderate drinking. For example, belonging to a household with stable low income ($11-20,000) over 30 years was associated with 1.57 times the odds of abstinence and 2.14 times the odds of heavy drinking in adulthood. The association between lifetime income patterns and alcohol use decreased in magnitude and became non-significant once we controlled for past-year income, education and occupation. Lifetime income patterns may have an indirect association with alcohol use, mediated through current socioeconomic conditions. PMID:21890256

  11. An econometric analysis of changes in arable land utilization using multinomial logit model in Pinggu district, Beijing, China.

    PubMed

    Xu, Yueqing; McNamara, Paul; Wu, Yanfang; Dong, Yue

    2013-10-15

Arable land in China has been decreasing as a result of rapid population growth and economic development as well as urban expansion, especially in developed regions around cities where quality farmland quickly disappears. This paper analyzed changes in arable land utilization during 1993-2008 in the Pinggu district, Beijing, China, developed a multinomial logit (MNL) model to determine spatial driving factors influencing arable land-use change, and simulated arable land transition probabilities. Land-use maps, as well as socio-economic and geographical data, were used in the study. The results indicated that arable land decreased significantly between 1993 and 2008. Lost arable land shifted into orchard, forestland, settlement, and transportation land. Significant differences existed in arable land transitions among different landform areas. Slope, elevation, population density, urbanization rate, distance to settlements, and distance to roadways were strong drivers influencing arable land transition to other uses. The MNL model proved effective for predicting transition probabilities in land use from arable land to other land-use types, and thus can be used for scenario analysis to develop land-use policies and land-management measures in this metropolitan area. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Masquerade Detection Using a Taxonomy-Based Multinomial Modeling Approach in UNIX Systems

    DTIC Science & Technology

    2008-08-25

primarily the modeling of statistical features, such as the frequency of events, the duration of events, the co-occurrence of multiple events...are identified, we can extract features representing such behavior while auditing the user's behavior. Figure 1: Taxonomy of Linux and Unix...achieved when the features are extracted just from simple commands. [Truncated results table: Method | Hit Rate | False Positive Rate; ocSVM using simple cmds (freq.-based...]

  13. A Simplified Conjoint Recognition Paradigm for the Measurement of Gist and Verbatim Memory

    ERIC Educational Resources Information Center

    Stahl, Christoph; Klauer, Karl Christoph

    2008-01-01

    The distinction between verbatim and gist memory traces has furthered the understanding of numerous phenomena in various fields, such as false memory research, research on reasoning and decision making, and cognitive development. To measure verbatim and gist memory empirically, an experimental paradigm and multinomial measurement model has been…

  14. Early Family Formation among White, Black, and Mexican American Women

    ERIC Educational Resources Information Center

    Landale, Nancy S.; Schoen, Robert; Daniels, Kimberly

    2010-01-01

    Using data from Waves I and III of Add Health, this study examines early family formation among 6,144 White, Black, and Mexican American women. Drawing on cultural and structural perspectives, models of the first and second family transitions (cohabitation, marriage, or childbearing) are estimated using discrete-time multinomial logistic…

  15. Racial Threat and White Opposition to Bilingual Education in Texas

    ERIC Educational Resources Information Center

    Hempel, Lynn M.; Dowling, Julie A.; Boardman, Jason D.; Ellison, Christopher G.

    2013-01-01

    This study examines local contextual conditions that influence opposition to bilingual education among non-Hispanic Whites, net of individual-level characteristics. Data from the Texas Poll (N = 615) are used in conjunction with U.S. Census data to test five competing hypotheses using binomial and multinomial logistic regression models. Our…

  16. Two-Phase Item Selection Procedure for Flexible Content Balancing in CAT

    ERIC Educational Resources Information Center

    Cheng, Ying; Chang, Hua-Hua; Yi, Qing

    2007-01-01

    Content balancing is an important issue in the design and implementation of computerized adaptive testing (CAT). Content-balancing techniques that have been applied in fixed content balancing, where the number of items from each content area is fixed, include constrained CAT (CCAT), the modified multinomial model (MMM), modified constrained CAT…

  17. Profiles of Supportive Alumni: Donors, Volunteers, and Those Who "Do It All"

    ERIC Educational Resources Information Center

    Weerts, David J.; Ronca, Justin M.

    2007-01-01

    In the competitive marketplace of higher education, college and university alumni are increasingly called on to support their institutions in multiple ways: political advocacy, volunteerism, and charitable giving. Drawing on alumni survey data gathered from a large research extensive university, we employ a multinomial logistic regression model to…

  18. Using Discrete Loss Functions and Weighted Kappa for Classification: An Illustration Based on Bayesian Network Analysis

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Lenaburg, Lubella

    2009-01-01

    In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…

  19. Estimation from incomplete multinomial data. Ph.D. Thesis - Harvard Univ.

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1978-01-01

    The vector of multinomial cell probabilities was estimated from incomplete data, incomplete in that it contains partially classified observations. Each such partially classified observation was observed to fall in one of two or more selected categories but was not classified further into a single category. The data were assumed to be incomplete at random. The estimation criterion was minimization of risk for quadratic loss. The estimators were the classical maximum likelihood estimate, the Bayesian posterior mode, and the posterior mean. An approximation was developed for the posterior mean. The Dirichlet, the conjugate prior for the multinomial distribution, was assumed for the prior distribution.
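The maximum likelihood estimate in this setting can be computed with a small EM-style loop (a sketch with invented counts, not Credeur's derivation): each partially classified count is allocated across its candidate cells in proportion to the current probability estimates, and the cell probabilities are then re-estimated:

```python
import numpy as np

def em_multinomial(full_counts, partial, n_iter=200):
    """MLE of multinomial cell probabilities from incomplete data.
    full_counts: length-K counts of fully classified observations.
    partial: list of (cells, count) pairs, where each count was observed to
             fall somewhere in `cells` but was not classified further."""
    K = len(full_counts)
    p = np.full(K, 1.0 / K)                    # uniform starting values
    for _ in range(n_iter):
        counts = np.asarray(full_counts, dtype=float)
        for cells, m in partial:               # E-step: fractional allocation
            idx = list(cells)
            w = p[idx]
            counts[idx] += m * w / w.sum()
        p = counts / counts.sum()              # M-step: closed-form update
    return p

# 3 cells: 30, 50, 20 fully classified, plus 20 known only to be in cell 0 or 1
p = em_multinomial([30, 50, 20], [((0, 1), 20)])
```

For these counts the loop converges to p ≈ (0.3125, 0.5208, 0.1667): the 20 partially classified observations end up split 3:5 between the first two cells, matching the fully classified ratio.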

  20. A panel multinomial logit analysis of elderly living arrangements: evidence from Aging In Manitoba longitudinal data, Canada.

    PubMed

    Sarma, Sisira; Simpson, Wayne

    2007-12-01

    Utilizing a unique longitudinal survey linked with home care use data, this paper analyzes the determinants of elderly living arrangements in Manitoba, Canada using a random effects multinomial logit model that accounts for unobserved individual heterogeneity. Because current home ownership is potentially endogenous in a living arrangements choice model, we use prior home ownership as an instrument. We also use prior home care use as an instrument for home care and use a random coefficient framework to account for unobserved health status. After controlling for relevant socio-demographic factors and accounting for unobserved individual heterogeneity, we find that home care and home ownership reduce the probability of living in a nursing home. Consistent with previous studies, we find that age is a strong predictor of nursing home entry. We also find that married people, those who have lived longer in the same community, and those who are healthy are more likely to live independently and less likely to be institutionalized or to cohabit with individuals other than their spouse.

  1. The perception of the relationship between environment and health according to data from Italian Behavioural Risk Factor Surveillance System (PASSI).

    PubMed

    Sampaolo, Letizia; Tommaso, Giulia; Gherardi, Bianca; Carrozzi, Giuliano; Freni Sterrantino, Anna; Ottone, Marta; Goldoni, Carlo Alberto; Bertozzi, Nicoletta; Scaringi, Meri; Bolognesi, Lara; Masocco, Maria; Salmaso, Stefania; Lauriola, Paolo

    2017-01-01

OBJECTIVES: to identify groups of people according to their perception of environmental risk, and to assess their main characteristics, using data collected in the environmental module of the Italian Behavioural Risk Factor Surveillance System (PASSI) surveillance network. METHODS: perceptive profiles were identified using latent class analysis; these profiles were then included as the outcome in multinomial logistic regression models to assess the association between environmental risk perception and demographic, health, socio-economic, and behavioural variables. RESULTS: the latent class analysis split the sample into "worried", "indifferent", and "positive" people. The multinomial logistic regression model showed that the "worried" profile typically includes people of Italian nationality, living in highly urbanized areas, with a high level of education, and with economic difficulties; they pay special attention to their own health and fitness, but have a negative perception of their own psychophysical state. CONCLUSIONS: the application of advanced statistical methods makes it possible to use PASSI data to characterize the perception of environmental risk, supporting the planning of risk-communication interventions.

  2. Analysis of crash proportion by vehicle type at traffic analysis zone level: A mixed fractional split multinomial logit modeling approach with spatial effects.

    PubMed

    Lee, Jaeyoung; Yasmin, Shamsunnahar; Eluru, Naveen; Abdel-Aty, Mohamed; Cai, Qing

    2018-02-01

In the traffic safety literature, crash frequency variables are analyzed using univariate or multivariate count models. In this study, we propose an alternative approach to modeling multiple crash frequency dependent variables: instead of modeling the frequency of crashes, we analyze the proportion of crashes by vehicle type. A flexible mixed multinomial logit fractional split model is employed for analyzing the proportions of crashes by vehicle type at the macro-level. In this model, the proportion allocated to an alternative is probabilistically determined based on that alternative's propensity as well as the propensity of all other alternatives; thus, exogenous variables directly affect all alternatives. The approach is well suited to accommodating a large number of alternatives without a sizable increase in computational burden. The model was estimated using crash data at the Traffic Analysis Zone (TAZ) level from Florida. The modeling results clearly illustrate the applicability of the proposed framework for crash proportion analysis. Further, the Excess Predicted Proportion (EPP), a screening performance measure analogous to the Highway Safety Manual (HSM) Excess Predicted Average Crash Frequency, is proposed for hot zone identification. Using EPP, a statewide screening exercise for the various vehicle types considered in our analysis was undertaken. The screening results revealed that the spatial pattern of hot zones differs substantially across the vehicle types considered. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Pig Data and Bayesian Inference on Multinomial Probabilities

    ERIC Educational Resources Information Center

    Kern, John C.

    2006-01-01

    Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs[R]. Prior information on these probabilities is readily available from the instruction manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical…
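The conjugacy used here amounts to a one-line update: a Dirichlet(α) prior combined with multinomial counts n yields a Dirichlet(α + n) posterior, whose mean is (α + n)/Σ(α + n). A sketch with invented pseudo-counts (the paper's actual prior comes from the game's instruction manual; these numbers do not):

```python
import numpy as np

def dirichlet_posterior_mean(alpha, counts):
    """Conjugate update: Dirichlet(alpha) prior + multinomial counts
    -> Dirichlet(alpha + counts) posterior; return its mean."""
    a = np.asarray(alpha, dtype=float) + np.asarray(counts, dtype=float)
    return a / a.sum()

# Invented scoring categories and pseudo-counts for illustration
prior = np.array([3.0, 3.0, 2.0, 1.0, 1.0])   # prior pseudo-counts (sum 10)
counts = np.array([35, 30, 20, 10, 5])        # observed rolls (sum 100)
post_mean = dirichlet_posterior_mean(prior, counts)   # first cell: 38/110
```

The posterior mean shrinks the empirical proportions toward the prior, with the amount of shrinkage governed by the prior's total pseudo-count relative to the sample size.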

  4. Using Multidimensional Rasch Analysis to Validate the Chinese Version of the Motivated Strategies for Learning Questionnaire (MSLQ-CV)

    ERIC Educational Resources Information Center

    Lee, John Chi-Kin; Zhang, Zhonghua; Yin, Hongbiao

    2010-01-01

    This article used the multidimensional random coefficients multinomial logit model to examine the construct validity and detect the substantial differential item functioning (DIF) of the Chinese version of motivated strategies for learning questionnaire (MSLQ-CV). A total of 1,354 Hong Kong junior high school students were administered the…

  5. Persistent Nonmedical Use of Prescription Stimulants among College Students: Possible Association with ADHD Symptoms

    ERIC Educational Resources Information Center

    Arria, Amelia M.; Garnier-Dykstra, Laura M.; Caldeira, Kimberly M.; Vincent, Kathryn B.; O'Grady, Kevin E.; Wish, Eric D.

    2011-01-01

    Objective: To investigate the possible association between untreated ADHD symptoms (as measured by the Adult ADHD Self-Report Scale) and persistent nonmedical use of prescription stimulants. Method: Multinomial regression modeling was used to compare ADHD symptoms among three groups of college students enrolled in a longitudinal study over 4…

  6. A Multinomial Model for Identifying Significant Pure-Tone Threshold Shifts

    ERIC Educational Resources Information Center

    Schlauch, Robert S.; Carney, Edward

    2007-01-01

    Purpose: Significant threshold differences on retest for pure-tone audiometry are often evaluated by application of ad hoc rules, such as a shift in a pure-tone average or in 2 adjacent frequencies that exceeds a predefined amount. Rules that are so derived do not consider the probability of observing a particular audiogram. Methods: A general…

  7. How Framing Statistical Statements Affects Subjective Veracity: Validation and Application of a Multinomial Model for Judgments of Truth

    ERIC Educational Resources Information Center

    Hilbig, Benjamin E.

    2012-01-01

    Extending the well-established negativity bias in human cognition to truth judgments, it was recently shown that negatively framed statistical statements are more likely to be considered true than formally equivalent statements framed positively. However, the underlying processes responsible for this effect are insufficiently understood.…

  8. A Multilevel Study of Students' Motivations of Studying Accounting: Implications for Employers

    ERIC Educational Resources Information Center

    Law, Philip; Yuen, Desmond

    2012-01-01

    Purpose: The purpose of this study is to examine the influence of factors affecting students' choice of accounting as a study major in Hong Kong. Design/methodology/approach: Multinomial logistic regression and Hierarchical Generalized Linear Modeling (HGLM) are used to analyze the survey data for the level one and level two data, which is the…

  9. Behavioral and Emotional Strengths among Youth in Systems of Care and the Effect of Race/Ethnicity

    ERIC Educational Resources Information Center

    Barksdale, Crystal L.; Azur, Melissa; Daniels, Amy M.

    2010-01-01

    Behavioral and emotional strengths are important to consider when understanding youth mental health and treatment. This study examined the association between youth strengths and functional impairment and whether this association is modified by race/ethnicity. Multinomial logistic regression models were used to estimate the effects of strengths on…

  10. School Exits in the Milwaukee Parental Choice Program: Evidence of a Marketplace?

    ERIC Educational Resources Information Center

    Ford, Michael

    2011-01-01

This article examines whether the large number of school exits from the Milwaukee school voucher program is evidence of a marketplace. Two logistic regression and multinomial logistic regression models tested the relation between a private school's inability to draw large numbers of voucher students and its ability to remain viable. Data on…

  11. A Multinomial Logit Approach to Estimating Regional Inventories by Product Class

    Treesearch

    Lawrence Teeter; Xiaoping Zhou

    1998-01-01

    Current timber inventory projections generally lack information on inventory by product classes. Most models available for inventory projection and linked to supply analyses are limited to projecting aggregate softwood and hardwood. The objective of this research is to develop a methodology to distribute the volume on each FIA survey plot to product classes and...

  12. Generalized Partial Least Squares Approach for Nominal Multinomial Logit Regression Models with a Functional Covariate

    ERIC Educational Resources Information Center

    Albaqshi, Amani Mohammed H.

    2017-01-01

    Functional Data Analysis (FDA) has attracted substantial attention for the last two decades. Within FDA, classifying curves into two or more categories is consistently of interest to scientists, but multi-class prediction within FDA is challenged in that most classification tools have been limited to binary response applications. The functional…

  13. A General Family of Limited Information Goodness-of-Fit Statistics for Multinomial Data

    ERIC Educational Resources Information Center

    Joe, Harry; Maydeu-Olivares, Alberto

    2010-01-01

    Maydeu-Olivares and Joe (J. Am. Stat. Assoc. 100:1009-1020, "2005"; Psychometrika 71:713-732, "2006") introduced classes of chi-square tests for (sparse) multidimensional multinomial data based on low-order marginal proportions. Our extension provides general conditions under which quadratic forms in linear functions of cell residuals are…

  14. Brief Report: Association of Myositis Autoantibodies, Clinical Features, and Environmental Exposures at Illness Onset With Disease Course in Juvenile Myositis.

    PubMed

    Habers, G Esther A; Huber, Adam M; Mamyrova, Gulnara; Targoff, Ira N; O'Hanlon, Terrance P; Adams, Sharon; Pandey, Janardan P; Boonacker, Chantal; van Brussel, Marco; Miller, Frederick W; van Royen-Kerkhof, Annet; Rider, Lisa G

    2016-03-01

    To identify early factors associated with disease course in patients with juvenile idiopathic inflammatory myopathies (IIMs). Univariable and multivariable multinomial logistic regression analyses were performed in a large juvenile IIM registry (n = 365) and included demographic characteristics, early clinical features, serum muscle enzyme levels, myositis autoantibodies, environmental exposures, and immunogenetic polymorphisms. Multivariable associations with chronic or polycyclic courses compared to a monocyclic course included myositis-specific autoantibodies (multinomial odds ratio [OR] 4.2 and 2.8, respectively), myositis-associated autoantibodies (multinomial OR 4.8 and 3.5), and a documented infection within 6 months of illness onset (multinomial OR 2.5 and 4.7). A higher overall clinical symptom score at diagnosis was associated with chronic or monocyclic courses compared to a polycyclic course. Furthermore, severe illness onset was associated with a chronic course compared to monocyclic or polycyclic courses (multinomial OR 2.1 and 2.6, respectively), while anti-p155/140 autoantibodies were associated with chronic or polycyclic courses compared to a monocyclic course (multinomial OR 3.9 and 2.3, respectively). Additional univariable associations of a chronic course compared to a monocyclic course included photosensitivity, V-sign or shawl sign rashes, and cuticular overgrowth (OR 2.2-3.2). The mean ultraviolet index and highest ultraviolet index in the month before diagnosis were associated with a chronic course compared to a polycyclic course in boys (OR 1.5 and 1.3), while residing in the Northwest was less frequently associated with a chronic course (OR 0.2). Our findings indicate that myositis autoantibodies, in particular anti-p155/140, and a number of early clinical features and environmental exposures are associated with a chronic course in patients with juvenile IIM. 
These findings suggest that early factors, which are associated with poorer outcomes in juvenile IIM, can be identified. © 2016, American College of Rheumatology.

  15. Spatial distribution of the risk of dengue fever in southeast Brazil, 2006-2007

    PubMed Central

    2011-01-01

Background Many factors have been associated with circulation of the dengue fever virus and vector, although the dynamics of transmission are not yet fully understood. The aim of this work is to estimate the spatial distribution of the risk of dengue fever in an area of continuous dengue occurrence. Methods This is a spatial population-based case-control study that analyzed 538 cases and 727 controls in one district of the municipality of Campinas, São Paulo, Brazil, from 2006-2007, considering socio-demographic, ecological, case severity, and household infestation variables. Information was collected by in-home interviews and inspection of living conditions in and around the homes studied. Cases were classified as mild or severe according to clinical data, and they were compared with controls through a multinomial logistic model. A generalized additive model was used in order to include space in a non-parametric fashion with cubic smoothing splines. Results Variables associated with increased incidence of all dengue cases in the multiple binomial regression model were: higher larval density (odds ratio (OR) = 2.3 (95%CI: 2.0-2.7)), reports of mosquito bites during the day (OR = 1.8 (95%CI: 1.4-2.4)), the practice of water storage at home (OR = 2.5 (95%CI: 1.4-4.3)), low frequency of garbage collection (OR = 2.6 (95%CI: 1.6-4.5)) and lack of basic sanitation (OR = 2.9 (95%CI: 1.8-4.9)). Staying at home during the day was protective against the disease (OR = 0.5 (95%CI: 0.3-0.6)). When cases were analyzed by category (mild or severe) in the multinomial model, age and the presence of more than 10 breeding sites were significant only for the occurrence of severe cases (OR = 0.97 (95%CI: 0.96-0.99) and OR = 2.1 (95%CI: 1.2-3.5), respectively). Spatial distribution of risks of mild and severe dengue fever differed from each other in the 2006/2007 epidemic, in the study area. 
Conclusions Age and presence of more than 10 breeding sites were significant only for severe cases. Other predictors of mild and severe cases were similar in the multiple models. The analyses of multinomial models and spatial distribution maps of dengue fever probabilities suggest an area-specific epidemic with varying clinical and demographic characteristics. PMID:21599980

  16. Outcome predictors for problem drinkers treated with combined cognitive behavioral therapy and naltrexone.

    PubMed

    Vuoristo-Myllys, Salla; Lipsanen, Jari; Lahti, Jari; Kalska, Hely; Alho, Hannu

    2014-03-01

The opioid antagonist naltrexone, combined with cognitive behavioural therapy (CBT), has proven efficacious for patients with alcohol dependence, but studies examining how this treatment works in a naturalistic treatment setting are lacking. This study examined predictors of the outcome of targeted naltrexone and CBT in a real-life outpatient setting. Participants were 315 patients who attended a treatment program providing CBT combined with the targeted use of naltrexone. Mixture models for estimating developmental trajectories were used to examine change in patients' alcohol consumption and symptoms of alcohol craving from treatment entry until the end of the treatment (20 weeks) or dropout. Predictors of treatment outcome were examined with multinomial logistic regression analyses. Minimal exclusion criteria were applied to enhance the generalizability of the findings. A regular drinking pattern, no history of previous treatment, and a high-risk alcohol consumption level before treatment were associated with less change in alcohol use during the treatment. Patients with a low-risk alcohol consumption level before treatment had the most rapid reduction in alcohol craving. Patients who drank more alcohol during the treatment had lower adherence to naltrexone. Medication non-adherence is a major barrier to naltrexone's effectiveness in a real-life treatment setting. Patients with more severe alcohol problems may need more intensive treatment to achieve better outcomes in real-world treatment settings.

  17. Identifying patterns of item missing survey data using latent groups: an observational study

    PubMed Central

    McElwee, Paul; Nathan, Andrea; Burton, Nicola W; Turrell, Gavin

    2017-01-01

    Objectives To examine whether respondents to a survey of health and physical activity and potential determinants could be grouped according to the questions they missed, known as ‘item missing’. Design Observational study of longitudinal data. Setting Residents of Brisbane, Australia. Participants 6901 people aged 40–65 years in 2007. Materials and methods We used a latent class model with a mixture of multinomial distributions and chose the number of classes using the Bayesian information criterion. We used logistic regression to examine if participants’ characteristics were associated with their modal latent class. We used logistic regression to examine whether the amount of item missing in a survey predicted wave missing in the following survey. Results Four per cent of participants missed almost one-fifth of the questions, and this group missed more questions in the middle of the survey. Eighty-three per cent of participants completed almost every question, but had a relatively high missing probability for a question on sleep time, a question which had an inconsistent presentation compared with the rest of the survey. Participants who completed almost every question were generally younger and more educated. Participants who completed more questions were less likely to miss the next longitudinal wave. Conclusions Examining patterns in item missing data has improved our understanding of how missing data were generated and has informed future survey design to help reduce missing data. PMID:29084795
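A latent class model of this kind can be sketched as an EM fit of a mixture over binary item-missing indicators, with BIC used to compare numbers of classes (synthetic data and a toy implementation, not the authors' model):

```python
import numpy as np

def fit_latent_classes(X, k, n_iter=200, seed=0):
    """EM for a toy latent-class model of binary item-missing indicators
    (a mixture of independent Bernoulli distributions), returning the class
    weights, per-class miss probabilities, and BIC.
    X: (n, d) 0/1 matrix where 1 means the item was missed."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                      # latent class weights
    theta = rng.uniform(0.25, 0.75, size=(k, d))  # per-class miss probabilities
    for _ in range(n_iter):
        # E-step: log joint of each row under each class, then responsibilities
        logp = (X @ np.log(theta).T
                + (1 - X) @ np.log(1 - theta).T + np.log(pi))
        m = logp.max(axis=1, keepdims=True)
        resp = np.exp(logp - m)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights and miss probabilities
        nk = resp.sum(axis=0)
        pi = nk / n
        theta = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    # Observed-data log-likelihood and BIC for model comparison
    logp = (X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi))
    m = logp.max(axis=1, keepdims=True)
    ll = (m[:, 0] + np.log(np.exp(logp - m).sum(axis=1))).sum()
    n_params = (k - 1) + k * d
    return pi, theta, -2.0 * ll + n_params * np.log(n)

# Synthetic check: two well-separated missingness patterns should beat one class
rng = np.random.default_rng(1)
A = (rng.random((200, 6)) < [0.9, 0.9, 0.9, 0.05, 0.05, 0.05]).astype(int)
B = (rng.random((200, 6)) < [0.05, 0.05, 0.05, 0.9, 0.9, 0.9]).astype(int)
X = np.vstack([A, B])
pi1, theta1, bic1 = fit_latent_classes(X, 1)
pi2, theta2, bic2 = fit_latent_classes(X, 2)   # lower BIC expected here
```

In the same spirit as the study above, the fitted per-class miss probabilities reveal which questions each latent group tends to skip, and the BIC comparison guards against over-fitting the number of groups.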

  18. Implicit moral evaluations: A multinomial modeling approach.

    PubMed

    Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael

    2017-01-01

    Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Exploratory multinomial logit model-based driver injury severity analyses for teenage and adult drivers in intersection-related crashes.

    PubMed

    Wu, Qiong; Zhang, Guohui; Ci, Yusheng; Wu, Lina; Tarefder, Rafiqul A; Alcántara, Adélamar Dely

    2016-05-18

    Teenage drivers are more likely to be involved in severely incapacitating and fatal crashes compared to adult drivers. Moreover, because two-thirds of urban vehicle miles traveled are on signal-controlled roadways, significant research efforts are needed to investigate intersection-related teenage driver injury severities and their contributing factors in terms of driver behavior, vehicle-infrastructure interactions, environmental characteristics, roadway geometric features, and traffic compositions. Therefore, this study aims to explore the characteristic differences between teenage and adult drivers in intersection-related crashes, identify the significant contributing attributes, and analyze their impacts on driver injury severities. Using crash data collected in New Mexico from 2010 to 2011, 2 multinomial logit regression models were developed to analyze injury severities for teenage and adult drivers, respectively. Elasticity analyses and transferability tests were conducted to better understand the quantitative impacts of these factors and the teenage driver injury severity model's generality. The results showed that although many of the same contributing factors were found to be significant in both the teenage and adult driver models, certain different attributes must be distinguished to specifically develop effective safety solutions for the 2 driver groups. The research findings are helpful to better understand teenage crash uniqueness and develop cost-effective solutions to reduce intersection-related teenage injury severities and facilitate driver injury mitigation research.
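The New Mexico crash data are not reproduced here, but the kind of multinomial logit fit the study describes can be sketched with simulated stand-in features (speed limit, driver age, a night-time indicator) and scikit-learn's softmax-based logistic regression:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in for crash records; the outcome is injury severity
# coded 0 = no injury, 1 = injury, 2 = fatal. (Not the paper's data.)
n = 600
X = np.column_stack([
    rng.normal(50, 10, n),    # speed limit (mph)
    rng.normal(35, 12, n),    # driver age (years)
    rng.integers(0, 2, n),    # night-time indicator
])
# Simulate severities whose odds increase with speed, just for illustration.
logits = np.column_stack([np.zeros(n),
                          0.05 * (X[:, 0] - 50),
                          0.10 * (X[:, 0] - 50) - 1.0])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=row) for row in p])

# Multinomial (softmax) logistic regression, as in the paper's MNL models.
model = LogisticRegression(max_iter=1000).fit(X, y)
probs = model.predict_proba(X[:5])
print(probs)
```

Elasticities, as used in the paper, would then be computed from the fitted coefficients and predicted probabilities; that step is omitted here.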

  20. Predictive occurrence models for coastal wetland plant communities: Delineating hydrologic response surfaces with multinomial logistic regression

    NASA Astrophysics Data System (ADS)

    Snedden, Gregg A.; Steyer, Gregory D.

    2013-02-01

    Understanding plant community zonation along estuarine stress gradients is critical for effective conservation and restoration of coastal wetland ecosystems. We related the presence of plant community types to estuarine hydrology at 173 sites across coastal Louisiana. Percent relative cover by species was assessed at each site near the end of the growing season in 2008, and hourly water level and salinity were recorded at each site Oct 2007-Sep 2008. Nine plant community types were delineated with k-means clustering, and indicator species were identified for each of the community types with indicator species analysis. An inverse relation between salinity and species diversity was observed. Canonical correspondence analysis (CCA) effectively segregated the sites across ordination space by community type, and indicated that salinity and tidal amplitude were both important drivers of vegetation composition. Multinomial logistic regression (MLR) and Akaike's Information Criterion (AIC) were used to predict the probability of occurrence of the nine vegetation communities as a function of salinity and tidal amplitude, and probability surfaces obtained from the MLR model corroborated the CCA results. The weighted kappa statistic, calculated from the confusion matrix of predicted versus actual community types, was 0.7 and indicated good agreement between observed community types and model predictions. Our results suggest that models based on a few key hydrologic variables can be valuable tools for predicting vegetation community development when restoring and managing coastal wetlands.

  1. A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.

    PubMed

    Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin

    2017-02-01

    The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Fungi diversity from different depths and times in chicken manure waste static aerobic composting.

    PubMed

    Gu, Wenjie; Lu, Yusheng; Tan, Zhiyuan; Xu, Peizhi; Xie, Kaizhi; Li, Xia; Sun, Lili

    2017-09-01

    The Dirichlet multinomial mixture model was used to analyse Illumina sequencing data to reveal both temporal and spatial variations of the fungal community present during aerobic composting. Results showed that 670 operational taxonomic units (OTUs) were detected, and the dominant phylum was Ascomycota. Four types of sample fungal communities were identified during the composting process. Samples from the early composting stage were mainly grouped into type I, in which Saccharomycetales sp. was dominant. Fungal communities in the middle composting stage fell into types II and III, in which Sordariales sp. and Acremonium alcalophilum, and Saccharomycetales sp. and Scedosporium minutisporum, respectively, were the dominant OTUs. Samples from the late composting stage were mainly grouped into type IV, in which Scedosporium minutisporum was the dominant OTU; Scedosporium minutisporum was significantly affected by depth (P<0.05). Results indicate that time and depth are both factors that influence fungal distribution and variation in chicken manure waste during static aerobic composting. Copyright © 2017. Published by Elsevier Ltd.
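The clustering above rests on the Dirichlet multinomial distribution. A minimal sketch of its log-probability (via SciPy's `gammaln`), used here to assign a hypothetical OTU count vector to whichever of two made-up components explains it better:

```python
import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_logpmf(x, alpha):
    """Log-probability of count vector x under a Dirichlet-multinomial
    with concentration vector alpha (one component of a DMM)."""
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n, a = x.sum(), alpha.sum()
    coef = gammaln(n + 1) - gammaln(x + 1).sum()   # multinomial coefficient
    return (coef + gammaln(a) - gammaln(n + a)
            + (gammaln(x + alpha) - gammaln(alpha)).sum())

# Assign a hypothetical OTU count vector to whichever of two components
# (e.g. "early" vs "late" composting profiles) gives it higher likelihood.
counts = np.array([40, 5, 5])
early = np.array([8.0, 1.0, 1.0])   # made-up concentration parameters
late = np.array([1.0, 4.0, 5.0])
scores = [dirichlet_multinomial_logpmf(counts, a) for a in (early, late)]
print(int(np.argmax(scores)))  # 0 -> "early" profile
```

A full DMM fit would additionally estimate the concentration vectors and mixing weights from the data; the assignment step above is the likelihood computation at its core.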

  3. False Memory for Orthographically versus Semantically Similar Words in Adolescents with Dyslexia: A Fuzzy-Trace Theory Perspective

    ERIC Educational Resources Information Center

    Obidzinski, Michal; Nieznanski, Marek

    2017-01-01

    The presented research was conducted in order to investigate the connections between developmental dyslexia and the functioning of verbatim and gist memory traces--assumed in the fuzzy-trace theory. The participants were 71 high school students (33 with dyslexia and 38 without learning difficulties). The modified procedure and multinomial model of…

  4. The Effects of Secondary Special Education Preparation in Reading: Research to Inform State Policy in a New Era

    ERIC Educational Resources Information Center

    Knackstedt, Kimberly M.; Leko, Melinda M.; Siuty, Molly Baustien

    2018-01-01

    In this study, the authors present findings from a survey of 577 secondary special educators in a large Midwestern state regarding their reading pre-service and in-service teacher preparation and its effect on teachers' sense of preparedness for teaching reading to adolescents with disabilities. Six models were fitted using multinomial logistic…

  5. Multinomial-Regression Modeling of the Environmental Attitudes of Higher Education Students Based on the Revised New Ecological Paradigm Scale

    ERIC Educational Resources Information Center

    Jowett, Tim; Harraway, John; Lovelock, Brent; Skeaff, Sheila; Slooten, Liz; Strack, Mick; Shephard, Kerry

    2014-01-01

    Higher education is increasingly interested in its impact on the sustainability attributes of its students, so we wanted to explore how our students' environmental concern changed during their higher education experiences. We used the Revised New Ecological Paradigm Scale (NEP) with 505 students and developed and tested a multinomial…

  6. How School Choice Is Framed by Parental Preferences and Family Characteristics: A Study of Western Area, Sierra Leone

    ERIC Educational Resources Information Center

    Dixon, Pauline; Humble, Steve

    2017-01-01

    This research set out to investigate how, in a post-conflict area, parental preferences and household characteristics affect school choice for their children. A multinomial logit is used to model the relationship between education preferences and the selection of schools for 954 households in Freetown and neighboring districts, Western Area,…

  7. Redintegration and the benefits of long-term knowledge in verbal short-term memory: an evaluation of Schweickert's (1993) multinomial processing tree model.

    PubMed

    Thorn, Annabel S C; Gathercole, Susan E; Frankish, Clive R

    2005-03-01

    The impact of four long-term knowledge variables on serial recall accuracy was investigated. Serial recall was tested for high and low frequency words and high and low phonotactic frequency nonwords in 2 groups: monolingual English speakers and French-English bilinguals. For both groups the recall advantage for words over nonwords reflected more fully correct recalls and fewer recall attempts that consisted of fragments of the target memory items (one or two of the three target phonemes recalled correctly); completely incorrect recalls were equivalent for the 2 list types. However, word frequency (for both groups), nonword phonotactic frequency (for the monolingual group), and language familiarity all influenced the proportions of completely incorrect recalls that were made. These results are not consistent with the view that long-term knowledge influences on immediate recall accuracy can be exclusively attributed to a redintegration process of the type specified in Schweickert's multinomial processing tree model of immediate recall. The finding of a differential influence on completely incorrect recalls of these four long-term knowledge variables suggests instead that the beneficial effects of long-term knowledge on short-term recall accuracy are mediated by more than one mechanism.

  8. Enrollment Management in Medical School Admissions: A Novel Evidence-Based Approach at One Institution.

    PubMed

    Burkhardt, John C; DesJardins, Stephen L; Teener, Carol A; Gay, Steven E; Santen, Sally A

    2016-11-01

    In higher education, enrollment management has been developed to accurately predict the likelihood of enrollment of admitted students. This allows evidence to dictate numbers of interviews scheduled, offers of admission, and financial aid package distribution. The applicability of enrollment management techniques for use in medical education was tested through creation of a predictive enrollment model at the University of Michigan Medical School (U-M). U-M and American Medical College Application Service data (2006-2014) were combined to create a database including applicant demographics, academic application scores, institutional financial aid offer, and choice of school attended. Binomial logistic regression and multinomial logistic regression models were estimated in order to study factors related to enrollment at the local institution versus elsewhere and to groupings of competing peer institutions. A predictive analytic "dashboard" was created for practical use. Both models were significant at P < .001 and had similar predictive performance. In the binomial model, female gender, underrepresented minority status, grade point average, Medical College Admission Test score, admissions committee desirability score, and most individual financial aid offers were significant (P < .05). The significant covariates were similar in the multinomial model (excluding female gender) and provided separate likelihoods of students enrolling at different institutional types. An enrollment-management-based approach would allow medical schools to better manage the number of students they admit and target recruitment efforts to improve their likelihood of success. It also performs a key institutional research function for understanding failed recruitment of highly desirable candidates.

  9. Predicting The Type Of Pregnancy Using Flexible Discriminant Analysis And Artificial Neural Networks: A Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooman, A.; Mohammadzadeh, M

    Some medical and epidemiological surveys have been designed to predict a nominal response variable with several levels. With regard to the type of pregnancy there are four possible states: wanted, unwanted by wife, unwanted by husband and unwanted by couple. In this paper, we have predicted the type of pregnancy, as well as the factors influencing it, using three different models and comparing them. Because the type of pregnancy has several levels, we developed a multinomial logistic regression, a neural network and a flexible discriminant analysis based on the data and compared their results using two statistical indices: the surface under the ROC curve and the kappa coefficient. Based on these two indices, flexible discriminant analysis proved to be a better fit for prediction on these data in comparison to the other methods. When the relations among variables are complex, one can use flexible discriminant analysis instead of multinomial logistic regression and neural networks to predict nominal response variables with several levels in order to gain more accurate predictions.

  10. optBINS: Optimal Binning for histograms

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2018-03-01

    optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the most simple model that best describes the data.
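The posterior described here has a closed form (Knuth's rule), so a sketch of the method fits in a few lines; the Gaussian test data and the search grid below are illustrative choices, not part of the optBINS package itself:

```python
import numpy as np
from scipy.special import gammaln

def knuth_log_posterior(data, m):
    """Relative log-posterior for m equal-width bins (Knuth's rule):
    N log m + ln G(m/2) - ln G(N + m/2) - m ln G(1/2) + sum ln G(n_k + 1/2)."""
    n = len(data)
    counts, _ = np.histogram(data, bins=m)
    return (n * np.log(m)
            + gammaln(m / 2) - gammaln(n + m / 2)
            - m * gammaln(0.5)
            + gammaln(counts + 0.5).sum())

rng = np.random.default_rng(1)
data = rng.normal(0, 1, 1000)
m_grid = np.arange(1, 101)
log_post = np.array([knuth_log_posterior(data, m) for m in m_grid])
m_opt = m_grid[np.argmax(log_post)]
print(m_opt)
```

The `N log m` term grows with the number of bins while the gamma-function terms penalize empty or sparse bins, which is the Occam's-razor balance the abstract describes.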

  11. Asymptotic Normality Through Factorial Cumulants and Partition Identities

    PubMed Central

    Bobecka, Konstancja; Hitczenko, Paweł; López-Blázquez, Fernando; Rempała, Grzegorz; Wesołowski, Jacek

    2013-01-01

    In the paper we develop an approach to asymptotic normality through factorial cumulants. Factorial cumulants arise in the same manner from factorial moments as do (ordinary) cumulants from (ordinary) moments. Another tool we exploit is a new identity for ‘moments’ of partitions of numbers. The general limiting result is then used to (re-)derive asymptotic normality for several models including classical discrete distributions, occupancy problems in some generalized allocation schemes and two models related to negative multinomial distribution. PMID:24591773

  12. Redintegration and the Benefits of Long-Term Knowledge in Verbal Short-Term Memory: An Evaluation of Schweickert's (1993) Multinomial Processing Tree Model

    ERIC Educational Resources Information Center

    Thorn, Annabel S. C.; Gathercole, Susan E.; Frankish, Clive R.

    2005-01-01

    The impact of four long-term knowledge variables on serial recall accuracy was investigated. Serial recall was tested for high and low frequency words and high and low phonotactic frequency nonwords in 2 groups: monolingual English speakers and French-English bilinguals. For both groups the recall advantage for words over nonwords reflected more…

  13. Predictive occurrence models for coastal wetland plant communities: delineating hydrologic response surfaces with multinomial logistic regression

    USGS Publications Warehouse

    Snedden, Gregg A.; Steyer, Gregory D.

    2013-01-01

    Understanding plant community zonation along estuarine stress gradients is critical for effective conservation and restoration of coastal wetland ecosystems. We related the presence of plant community types to estuarine hydrology at 173 sites across coastal Louisiana. Percent relative cover by species was assessed at each site near the end of the growing season in 2008, and hourly water level and salinity were recorded at each site Oct 2007–Sep 2008. Nine plant community types were delineated with k-means clustering, and indicator species were identified for each of the community types with indicator species analysis. An inverse relation between salinity and species diversity was observed. Canonical correspondence analysis (CCA) effectively segregated the sites across ordination space by community type, and indicated that salinity and tidal amplitude were both important drivers of vegetation composition. Multinomial logistic regression (MLR) and Akaike's Information Criterion (AIC) were used to predict the probability of occurrence of the nine vegetation communities as a function of salinity and tidal amplitude, and probability surfaces obtained from the MLR model corroborated the CCA results. The weighted kappa statistic, calculated from the confusion matrix of predicted versus actual community types, was 0.7 and indicated good agreement between observed community types and model predictions. Our results suggest that models based on a few key hydrologic variables can be valuable tools for predicting vegetation community development when restoring and managing coastal wetlands.

  14. Bayesian multimodel inference for dose-response studies

    USGS Publications Warehouse

    Link, W.A.; Albers, P.H.

    2007-01-01

    Statistical inference in dose-response studies is model-based: The analyst posits a mathematical model of the relation between exposure and response, estimates parameters of the model, and reports conclusions conditional on the model. Such analyses rarely include any accounting for the uncertainties associated with model selection. The Bayesian inferential system provides a convenient framework for model selection and multimodel inference. In this paper we briefly describe the Bayesian paradigm and Bayesian multimodel inference. We then present a family of models for multinomial dose-response data and apply Bayesian multimodel inferential methods to the analysis of data on the reproductive success of American kestrels (Falco sparverius) exposed to various sublethal dietary concentrations of methylmercury.
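The kestrel data are not available here, but the core of Bayesian multimodel inference, weighting models by their marginal likelihoods, can be sketched for multinomial data with Dirichlet priors, where the evidence is analytic. The outcome counts and both priors below are hypothetical:

```python
import numpy as np
from scipy.special import gammaln

def log_marginal_likelihood(counts, alpha):
    """Marginal likelihood of multinomial counts under a Dirichlet(alpha)
    prior (the Dirichlet-multinomial evidence), up to the multinomial
    coefficient, which cancels when comparing models on the same data."""
    counts = np.asarray(counts, float)
    alpha = np.asarray(alpha, float)
    return (gammaln(alpha.sum()) - gammaln(counts.sum() + alpha.sum())
            + (gammaln(counts + alpha) - gammaln(alpha)).sum())

# Hypothetical nest-outcome counts: (fledged, hatched-but-died, failed).
counts = [12, 5, 3]
models = {
    "uniform prior": np.array([1.0, 1.0, 1.0]),
    "success-weighted prior": np.array([4.0, 1.0, 1.0]),
}
log_ml = np.array([log_marginal_likelihood(counts, a) for a in models.values()])
weights = np.exp(log_ml - log_ml.max())
weights /= weights.sum()   # posterior model probabilities (equal model priors)
for name, w in zip(models, weights):
    print(name, round(w, 3))
```

Multimodel inference would then average any quantity of interest over the models using these weights, rather than conditioning on a single selected model.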

  15. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models.
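The paper's concomitant-variable mixture regressions are more elaborate than what follows; this is a minimal EM sketch for a two-component Poisson mixture without covariates, on simulated "low risk" / "high risk" event counts (an illustrative stand-in, not the paper's data or model):

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)

# Synthetic counts from two latent groups with rates 2 and 9.
y = np.concatenate([rng.poisson(2.0, 400), rng.poisson(9.0, 400)])

# EM for a two-component Poisson mixture.
lam = np.array([1.0, 5.0])   # initial rates
pi = np.array([0.5, 0.5])    # initial mixing weights
for _ in range(200):
    # E-step: responsibilities of each component for each observation
    log_r = np.log(pi) + poisson.logpmf(y[:, None], lam)
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights and rates
    pi = r.mean(axis=0)
    lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)

print(np.sort(lam), np.sort(pi))
```

A mixture *regression*, as in the paper, would replace the constant rates with log-linear predictors in covariates; the E-step/M-step structure is the same.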

  16. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models. PMID:27999611

  17. Think twice before you book? Modelling the choice of public vs private dentist in a choice experiment.

    PubMed

    Kiiskinen, Urpo; Suominen-Taipale, Anna Liisa; Cairns, John

    2010-06-01

    This study concerns the choice of primary dental service provider by consumers. If the health service delivery system allows individuals to choose between public-care providers or if complementary private services are available, it is typically assumed that utilisation is a three-stage decision process. The patient first makes a decision to seek care, and then chooses the service provider. The final stage, involving decisions over the amount and form of treatment, is not considered here. The paper reports a discrete choice experiment (DCE) designed to evaluate attributes affecting individuals' choice of dental-care provider. The feasibility of the DCE approach in modelling consumers' choice in the context of non-acute need for dental care is assessed. The aim is to test whether a separate two-stage logit, a multinomial logit, or a nested logit best fits the choice process of consumers. A nested logit model of indirect utility functions is estimated and inclusive value (IV) constraints are tested for modelling implications. The results show that non-trading behaviour has an impact on the choice of appropriate modelling technique, but is to some extent dependent on the choice of scenarios offered. It is concluded that for traders multinomial logit is appropriate, whereas for non-traders and on average the nested logit is the method supported by the analyses. The consistent finding in all subgroup analyses is that the traditional two-stage decision process is found to be implausible in the context of consumer's choice of dental-care provider.

  18. Predictors of Place of Death for Seniors in Ontario: A Population-Based Cohort Analysis

    ERIC Educational Resources Information Center

    Motiwala, Sanober S.; Croxford, Ruth; Guerriere, Denise N.; Coyte, Peter C.

    2006-01-01

    Place of death was determined for all 58,689 seniors (age greater than or equal to 66 years) in Ontario who died during fiscal year 2001/2002. The relationship of place of death to medical and socio-demographic characteristics was examined using a multinomial logit model. Half (49.2 %) of these individuals died in hospital, 30.5 per cent died in a…

  19. Factors influencing spatial pattern in tropical forest clearance and stand age: Implications for carbon storage and species diversity.

    Treesearch

    E. H. Helmer; Thomas J. Brandeis; Ariel E. Lugo; Todd Kennaway

    2008-01-01

    Little is known about the tropical forests that undergo clearing as urban/built-up and other developed lands spread. This study uses remote sensing-based maps of Puerto Rico, multinomial logit models and forest inventory data to explain patterns of forest age and the age of forests cleared for land development and assess their implications for forest carbon storage and...

  20. Source and destination memory in face-to-face interaction: A multinomial modeling approach.

    PubMed

    Fischer, Nele M; Schult, Janette C; Steffens, Melanie C

    2015-06-01

    Arguing that people are often in doubt concerning to whom they have presented what information, Gopie and MacLeod (2009) introduced a new memory component, destination memory: remembering the destination of output information (i.e., "Who did you tell this to?"). They investigated source (i.e., "Who told you that?") versus destination memory in computer-based imagined interactions. The present study investigated destination memory in real interaction situations. In 2 experiments with mixed-gender (N = 53) versus same-gender (N = 89) groups, source and destination memory were manipulated by creating a setup similar to speed dating. In dyads, participants completed phrase fragments with personal information, taking turns. At recognition, participants decided whether fragments were new or old and, if old, whether they were listened to or spoken and which depicted person was the source or the destination of the information. A multinomial model was used for analyses. Source memory significantly exceeded destination memory, whereas information itself was better remembered in the destination than in the source condition. These findings corroborate the trade-off hypothesis: Context is better remembered in input than in output events, but information itself is better remembered in output than in input events. We discuss the implications of these findings for real-world conversation situations. (c) 2015 APA, all rights reserved.

  1. An Optimization-Based Framework for the Transformation of Incomplete Biological Knowledge into a Probabilistic Structure and Its Application to the Utilization of Gene/Protein Signaling Pathways in Discrete Phenotype Classification.

    PubMed

    Esfahani, Mohammad Shahrokh; Dougherty, Edward R

    2015-01-01

    Phenotype classification via genomic data is hampered by small sample sizes that negatively impact classifier design. Utilization of prior biological knowledge in conjunction with training data can improve both classifier design and error estimation via the construction of the optimal Bayesian classifier. In the genomic setting, gene/protein signaling pathways provide a key source of biological knowledge. Although these pathways are neither complete, nor regulatory, with no timing associated with them, they are capable of constraining the set of possible models representing the underlying interaction between molecules. The aim of this paper is to provide a framework and the mathematical tools to transform signaling pathways to prior probabilities governing uncertainty classes of feature-label distributions used in classifier design. Structural motifs extracted from the signaling pathways are mapped to a set of constraints on a prior probability on a Multinomial distribution. Because the Dirichlet distribution is the conjugate prior for the Multinomial distribution, we propose optimization paradigms to estimate the parameters of a Dirichlet distribution in the Bayesian setting. The performance of the proposed methods is tested on two widely studied pathways: mammalian cell cycle and a p53 pathway model.
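The Dirichlet-Multinomial conjugacy the authors exploit makes the posterior update a one-liner: with a Dirichlet(alpha) prior on the multinomial probabilities, observing counts n yields a Dirichlet(alpha + n) posterior. The pathway-derived prior below is a made-up stand-in:

```python
import numpy as np

# Conjugacy sketch: Dirichlet(alpha) prior + multinomial counts n
# -> Dirichlet(alpha + n) posterior. The alpha here is a hypothetical
# stand-in for a prior constrained by pathway structural motifs.
alpha_prior = np.array([2.0, 1.0, 1.0, 1.0])   # hypothetical prior
counts = np.array([10, 3, 0, 7])               # observed category counts

alpha_post = alpha_prior + counts
posterior_mean = alpha_post / alpha_post.sum()
print(posterior_mean)  # -> [0.48 0.16 0.04 0.32]
```

The paper's contribution is in how the prior `alpha` is chosen (via optimization under pathway-derived constraints); once chosen, inference reduces to this update.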

  2. Identifying patterns of item missing survey data using latent groups: an observational study.

    PubMed

    Barnett, Adrian G; McElwee, Paul; Nathan, Andrea; Burton, Nicola W; Turrell, Gavin

    2017-10-30

    To examine whether respondents to a survey of health and physical activity and potential determinants could be grouped according to the questions they missed, known as 'item missing'. Observational study of longitudinal data. Residents of Brisbane, Australia. 6901 people aged 40-65 years in 2007. We used a latent class model with a mixture of multinomial distributions and chose the number of classes using the Bayesian information criterion. We used logistic regression to examine if participants' characteristics were associated with their modal latent class. We used logistic regression to examine whether the amount of item missing in a survey predicted wave missing in the following survey. Four per cent of participants missed almost one-fifth of the questions, and this group missed more questions in the middle of the survey. Eighty-three per cent of participants completed almost every question, but had a relatively high missing probability for a question on sleep time, a question which had an inconsistent presentation compared with the rest of the survey. Participants who completed almost every question were generally younger and more educated. Participants who completed more questions were less likely to miss the next longitudinal wave. Examining patterns in item missing data has improved our understanding of how missing data were generated and has informed future survey design to help reduce missing data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
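The latent class model above used a mixture of multinomial distributions with BIC selection. A simplified sketch using binary miss/complete indicators (so each item reduces to a Bernoulli, a two-category multinomial) with EM and BIC over the number of classes; the data are synthetic, not the Brisbane cohort:

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_bernoulli_mixture(X, k, n_iter=100):
    """EM for a k-class mixture of independent Bernoullis (a latent class
    model for binary missing-item indicators); returns (loglik, BIC)."""
    n, d = X.shape
    theta = rng.uniform(0.25, 0.75, size=(k, d))   # per-class miss probabilities
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step (log-space for stability)
        log_r = np.log(pi) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        m = log_r.max(axis=1, keepdims=True)
        ll = (m + np.log(np.exp(log_r - m).sum(axis=1, keepdims=True))).sum()
        r = np.exp(log_r - m)
        r /= r.sum(axis=1, keepdims=True)
        # M-step
        pi = r.mean(axis=0)
        theta = np.clip((r.T @ X) / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    n_params = k * d + (k - 1)
    bic = -2 * ll + n_params * np.log(n)
    return ll, bic

# Synthetic indicators: class A rarely misses items, class B often does.
A = (rng.uniform(size=(300, 20)) < 0.05).astype(float)
B = (rng.uniform(size=(100, 20)) < 0.40).astype(float)
X = np.vstack([A, B])

bics = {k: fit_bernoulli_mixture(X, k)[1] for k in (1, 2, 3)}
best_k = min(bics, key=bics.get)
print(best_k)
```

As in the paper, the modal class for each participant would come from the E-step responsibilities, and class membership could then feed a downstream logistic regression.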

  3. The effect of binary mixtures of zinc, copper, cadmium, and nickel on the growth of the freshwater diatom Navicula pelliculosa and comparison with mixture toxicity model predictions.

    PubMed

    Nagai, Takashi; De Schamphelaere, Karel A C

    2016-11-01

    The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-d fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.

  4. Quality and price--impact on patient satisfaction.

    PubMed

    Pantouvakis, Angelos; Bouranta, Nancy

    2014-01-01

    The purpose of this paper is to synthesize existing quality-measurement models and apply them to healthcare by combining a Nordic service-quality model with an American service performance model. Results are based on a questionnaire survey of 1,298 respondents. Service quality dimensions were derived and related to satisfaction by employing a multinomial logistic model, which allows prediction and service improvement. Qualitative and empirical evidence indicates that customer satisfaction and service quality are multi-dimensional constructs, whose quality components, together with convenience and cost, influence the customer's overall satisfaction. The proposed model identifies important quality and satisfaction issues. It also enables transitions between different responses in different studies to be compared.

  5. ABrox-A user-friendly Python module for approximate Bayesian computation with a focus on model comparison.

    PubMed

    Mertens, Ulf Kai; Voss, Andreas; Radev, Stefan

    2018-01-01

    We give an overview of the basic principles of approximate Bayesian computation (ABC), a class of stochastic methods that enable flexible and likelihood-free model comparison and parameter estimation. Our new open-source software called ABrox is used to illustrate ABC for model comparison on two prominent statistical tests, the two-sample t-test and the Levene test. We further highlight the flexibility of ABC compared to classical Bayesian hypothesis testing by computing an approximate Bayes factor for two multinomial processing tree models. Finally, throughout the paper, we introduce ABrox using the accompanying graphical user interface.
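
    The likelihood-free model comparison described here can be illustrated with a toy example that is not from the paper (the paper works with t-tests, the Levene test, and multinomial processing trees). In rejection ABC, each model's marginal likelihood is approximated by its prior-predictive acceptance rate, and their ratio approximates the Bayes factor:

```python
import numpy as np

def abc_bayes_factor(obs_heads, n_flips, n_sims=100000, seed=0):
    """Rejection-ABC approximate Bayes factor for two hypothetical models of
    coin-flip data: M1 fixes p = 0.5, M2 puts a Uniform(0, 1) prior on p.
    With an exact-match acceptance rule, the ratio of acceptance rates
    approximates the ratio of marginal likelihoods."""
    rng = np.random.default_rng(seed)
    # Prior-predictive simulation for each model
    sims1 = rng.binomial(n_flips, 0.5, size=n_sims)
    sims2 = rng.binomial(n_flips, rng.uniform(0.0, 1.0, size=n_sims))
    acc1 = np.mean(sims1 == obs_heads)
    acc2 = np.mean(sims2 == obs_heads)
    return acc2 / acc1  # Bayes factor in favour of M2

print(abc_bayes_factor(9, 10))  # analytic value is (1/11) / (10 * 0.5**10), about 9.3
```

    For continuous data the exact-match rule is replaced by a tolerance on summary statistics, which is where the approximation in ABC comes from.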

  6. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  7. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    PubMed Central

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction, and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite highlighting potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach has so far not been developed to the stage of enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and demonstrate the improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
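
    A minimal sketch of the partition-then-decompose idea on a synthetic one-dimensional signal (not the authors' implementation): split the spectrum at near-baseline runs, fit an intensity-weighted Gaussian mixture to each fragment by EM, and aggregate the fragment parameters into a whole-spectrum model:

```python
import numpy as np

def split_fragments(intensity, floor=0.01):
    """Partition the spectrum where intensity drops near baseline;
    returns a list of index arrays, one per fragment."""
    active = intensity > floor * intensity.max()
    idx = np.flatnonzero(active)
    breaks = np.flatnonzero(np.diff(idx) > 1) + 1
    return np.split(idx, breaks)

def fit_weighted_gmm(x, w, k, n_iter=100):
    """EM for a 1-D Gaussian mixture where each grid point x[i] carries
    intensity weight w[i]; deterministic quantile initialisation."""
    w = w / w.sum()
    cum = np.cumsum(w)
    mu = x[np.searchsorted(cum, (np.arange(k) + 0.5) / k)]
    sigma = np.full(k, (x.max() - x.min()) / (4.0 * k) + 1e-9)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each grid point
        dens = pi / sigma * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: intensity-weighted parameter updates
        nk = (w[:, None] * resp).sum(axis=0)
        mu = (w[:, None] * resp * x[:, None]).sum(axis=0) / nk
        var = (w[:, None] * resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        sigma = np.sqrt(np.maximum(var, 1e-12))
        pi = nk
    return pi, mu, sigma

# Synthetic two-peak "spectrum": fit each fragment separately, then aggregate.
x = np.linspace(0.0, 300.0, 3001)
w = np.exp(-0.5 * ((x - 100.0) / 5.0) ** 2) \
    + 0.5 * np.exp(-0.5 * ((x - 200.0) / 8.0) ** 2)
model = []
for frag in split_fragments(w):
    pi, mu, sigma = fit_weighted_gmm(x[frag], w[frag], k=1)
    model.append((float(mu[0]), float(sigma[0])))
print(model)  # peak centres recovered near 100 and 200
```

    Fitting fragments independently is what keeps the per-spectrum cost manageable, since each EM runs on a short sub-signal with a small number of components.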

  8. Faà di Bruno's formula and the distributions of random partitions in population genetics and physics.

    PubMed

    Hoppe, Fred M

    2008-06-01

    We show that the formula of Faà di Bruno for the derivative of a composite function gives, in special cases, the sampling distributions in population genetics that are due to Ewens and to Pitman. The composite function is the same in each case. Other sampling distributions also arise in this way, such as those arising from Dirichlet, multivariate hypergeometric, and multinomial models, special cases of which correspond to Bose-Einstein, Fermi-Dirac, and Maxwell-Boltzmann distributions in physics. Connections are made to compound sampling models.

  9. Early Childhood Precursors and School-Age Correlates of Different Internalising Problem Trajectories Among Young Children.

    PubMed

    Parkes, Alison; Sweeting, Helen; Wight, Daniel

    2016-10-01

    It is unclear why trajectories of internalising problems vary between groups of young children. This is the first attempt in the United Kingdom to identify and explain different trajectories of internalising problems from 46 to 94 months. Using both mother- and child-reported data from the large Growing Up in Scotland (GUS) birth cohort (N = 2901; male N = 1497, female N = 1404), we applied growth mixture modelling and multivariable multinomial regression models. Three trajectories were identified: low-stable, high-decreasing and medium-increasing. There were no gender differences in trajectory shape, membership, or importance of covariates. Children from both elevated trajectories shared several early risk factors (low income, poor maternal mental health, poor partner relationship, pre-school behaviour problems) and school-age covariates (low mother-child warmth and initial school maladjustment) and reported fewer supportive friendships at 94 months. However, there were also differences in covariates between the two elevated trajectories. Minority ethnic status and pre-school conduct problems were more strongly associated with the high-decreasing trajectory; and covariates measured after school entry (behaviour problems, mother-child conflict and school maladjustment) with the medium-increasing trajectory. This suggests a greater burden of early risk for the high-decreasing trajectory, and that children with moderate early problem levels were more vulnerable to influences after school transition. Our findings largely support the sparse existing international evidence and are strengthened by the use of child-reported data. They highlight the need to identify protective factors for children with moderate, as well as high, levels of internalising problems at pre-school age, but suggest different approaches may be required.

  10. Profiles of internalizing and externalizing symptoms associated with bullying victimization.

    PubMed

    Eastman, Meridith; Foshee, Vangie; Ennett, Susan; Sotres-Alvarez, Daniela; Reyes, H Luz McNaughton; Faris, Robert; North, Kari

    2018-06-01

    This study identified profiles of internalizing (anxiety and depression) and externalizing (delinquency and violence against peers) symptoms among bullying victims and examined associations between bullying victimization characteristics and profile membership. The sample consisted of 1196 bullying victims in grades 8-10 (mean age = 14.4, SD = 1.01) who participated in The Context Study in three North Carolina counties in Fall 2003. Five profiles were identified using latent profile analysis: an asymptomatic profile and four profiles capturing combinations of internalizing and externalizing symptoms. Associations between bullying characteristics and membership in symptom profiles were tested using multinomial logistic regression. More frequent victimization increased odds of membership in the two high internalizing profiles compared to the asymptomatic profile. Across all multinomial logistic regression models, when the high internalizing, high externalizing profile was the reference category, adolescents who received any type of bullying (direct, indirect, or dual) were more likely to be in this category than in any other. Copyright © 2018 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  11. Towards dropout training for convolutional neural networks.

    PubMed

    Wu, Haibing; Gu, Xiaodong

    2015-11-01

    Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage. Copyright © 2015 Elsevier Ltd. All rights reserved.
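
    The training/test-time equivalence described here can be reconstructed from the abstract. With retain probability p = 1 - q, the sorted activation a_(i) in an n-unit pooling region is the max-pooling-dropout output with probability p * q**(n-i) (all larger units dropped, a_(i) kept), and the output is 0 with probability q**n; probabilistic weighted pooling takes the expectation of that multinomial draw. A sketch for a 1-D region (a reconstruction, not the authors' code):

```python
import numpy as np

def max_pool_dropout_train(region, q, rng):
    """Training time: drop each unit with probability q, then max-pool
    the survivors (output 0 if everything was dropped)."""
    mask = rng.random(region.shape) >= q
    kept = region[mask]
    return float(kept.max()) if kept.size else 0.0

def prob_weighted_pool_test(region, q):
    """Test time: expected value of the training-time multinomial draw,
    i.e. sum over sorted activations of (1 - q) * q**(n - i) * a_(i)."""
    a = np.sort(region)  # ascending: a_(1) <= ... <= a_(n)
    n = a.size
    probs = (1.0 - q) * q ** (n - 1 - np.arange(n))
    return float((probs * a).sum())

region = np.array([0.1, 0.5, 0.9])
print(prob_weighted_pool_test(region, 0.5))  # 0.5875
```

    Averaging many training-time draws of `max_pool_dropout_train` converges to the single deterministic `prob_weighted_pool_test` value, which is the model-averaging interpretation the record advocates.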

  12. A site specific model and analysis of the neutral somatic mutation rate in whole-genome cancer data.

    PubMed

    Bertl, Johanna; Guo, Qianyun; Juul, Malene; Besenbacher, Søren; Nielsen, Morten Muhlig; Hornshøj, Henrik; Pedersen, Jakob Skou; Hobolth, Asger

    2018-04-19

    Detailed modelling of the neutral mutational process in cancer cells is crucial for identifying driver mutations and understanding the mutational mechanisms that act during cancer development. The neutral mutational process is very complex: whole-genome analyses have revealed that the mutation rate differs between cancer types, between patients and along the genome depending on the genetic and epigenetic context. Therefore, methods that predict the number of different types of mutations in regions or specific genomic elements must consider local genomic explanatory variables. A major drawback of most methods is the need to average the explanatory variables across the entire region or genomic element. This procedure is particularly problematic if the explanatory variable varies dramatically in the element under consideration. To take into account the fine scale of the explanatory variables, we model the probabilities of different types of mutations for each position in the genome by multinomial logistic regression. We analyse 505 cancer genomes from 14 different cancer types and compare the performance in predicting mutation rate for both regional based models and site-specific models. We show that for 1000 randomly selected genomic positions, the site-specific model predicts the mutation rate much better than regional based models. We use a forward selection procedure to identify the most important explanatory variables. The procedure identifies site-specific conservation (phyloP), replication timing, and expression level as the best predictors for the mutation rate. Finally, our model confirms and quantifies certain well-known mutational signatures. We find that our site-specific multinomial regression model outperforms the regional based models. The possibility of including genomic variables on different scales and patient specific variables makes it a versatile framework for studying different mutational mechanisms. 
Our model can serve as the neutral null model for the mutational process; regions that deviate from the null model are candidates for elements that drive cancer development.
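
    A multinomial (softmax) logistic regression of the kind used for this site-specific model can be sketched in a few lines of NumPy. The single covariate and three outcome classes below are hypothetical stand-ins for the paper's genomic features (conservation, replication timing, expression) and mutation types:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial_logit(X, y, n_classes, lr=0.5, n_iter=2000):
    """Plain gradient-descent fit of P(class | x) = softmax(x @ W).
    X should already include an intercept column."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]  # one-hot targets
    for _ in range(n_iter):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(y)  # gradient of the mean NLL
    return W

# Simulate one site-level covariate and three outcome classes, then recover
# the class-specific slopes from the simulated data.
rng = np.random.default_rng(1)
n = 600
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_W = np.array([[0.0, 0.0, 0.0], [0.0, 1.5, -1.5]])
y = np.array([rng.choice(3, p=p) for p in softmax(X @ true_W)])
W_hat = fit_multinomial_logit(X, y, 3)
print(np.round(W_hat[1], 2))  # slope estimates near [0, 1.5, -1.5]
```

    In the paper's setting, each genomic position contributes one row of covariates and the classes are the possible mutation outcomes; the fitting principle is the same.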

  13. Economic analysis of the potential impact of climate change on recreational trout fishing in the Southern Appalachian Mountains: An application of a nested multinomial logit model

    Treesearch

    Soeun Ahn; Joseph E. de Steiguer; Raymond B. Palmquist; Thomas P. Holmes

    2000-01-01

    Global warming due to the enhanced greenhouse effect through human activities has become a major public policy issue in recent years. The present study focuses on the potential economic impact of climate change on recreational trout fishing in the Southern Appalachian Mountains of North Carolina. Significant reductions in trout habitat and/or populations are...

  14. Concentration addition and independent action model: Which is better in predicting the toxicity for metal mixtures on zebrafish larvae.

    PubMed

    Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin

    2018-01-01

    The joint toxicity of chemical mixtures has emerged as a popular topic, particularly regarding the additive and potentially synergistic actions of environmental mixtures. We investigated the 24h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and 96h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models with different assumptions in the toxic action mode in toxicodynamic processes through single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the toxic action mode may depend on the combinations and concentrations of tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions were observed for the Cd-Pb mixtures, and slight antagonistic interactions were observed for the Cu-Zn mixtures. These results illustrated that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Dental health services utilization and associated factors in children 6 to 12 years old in a low-income country.

    PubMed

    Medina-Solis, Carlo Eduardo; Maupomé, Gerardo; del Socorro, Herrera Miriam; Pérez-Núñez, Ricardo; Avila-Burgos, Leticia; Lamadrid-Figueroa, Hector

    2008-01-01

    To determine the factors associated with dental health services utilization among children ages 6 to 12 in León, Nicaragua. A cross-sectional study was carried out in 1,400 schoolchildren. Using a questionnaire, we collected information on utilization and the independent variables for the previous year. Oral health needs were established by means of a dental examination. To identify the independent variables associated with dental health services utilization, two types of multivariate regression models were used, according to the measurement scale of the outcome variable: a) frequency of utilization as (0) none, (1) one, and (2) two or more, analyzed with ordered logistic regression; and b) the type of service utilized as (0) none, (1) preventive services, (2) curative services, and (3) both services, analyzed with multinomial logistic regression. The proportion of children who received at least one dental service in the 12 months prior to the study was 27.7 percent. The variables associated with utilization in the two models were older age, female sex, more frequent toothbrushing, positive attitude of the mother toward the child's oral health, higher socioeconomic level, and higher oral health needs. Various predisposing, enabling, and oral health needs variables were associated with higher dental health services utilization. As in prior reports elsewhere, these results from Nicaragua confirmed that utilization inequalities exist between socioeconomic groups. The multinomial logistic regression model evidenced the association of different variables depending on the type of service used.

  16. Concentration Addition, Independent Action and Generalized Concentration Addition Models for Mixture Effect Prediction of Sex Hormone Synthesis In Vitro

    PubMed Central

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie

    2013-01-01

    Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. 
In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906

  17. Occupational outcomes of adult childhood cancer survivors: A report from the Childhood Cancer Survivor Study

    PubMed Central

    Kirchhoff, Anne C.; Krull, Kevin R.; Ness, Kirsten K.; Park, Elyse R.; Oeffinger, Kevin C.; Hudson, Melissa M.; Stovall, Marilyn; Robison, Leslie L.; Wickizer, Thomas; Leisenring, Wendy

    2010-01-01

    Background We examined whether survivors from the Childhood Cancer Survivor Study were less likely to be in higher skill occupations than a sibling comparison and whether certain survivors were at higher risk. Methods We created three mutually exclusive occupational categories for participants aged ≥25 years: Managerial/Professional, Non-Physical Service/Blue Collar, and Physical Service/Blue Collar. We examined currently employed survivors (N=4845) and siblings (N=1727) in multivariable generalized linear models to evaluate the likelihood of being in the three occupational categories. Among all participants, we used multinomial logistic regression to examine the likelihood of these outcomes in comparison to being unemployed (survivors N=6671; siblings N=2129). Multivariable linear models were used to assess survivor occupational differences by cancer and treatment variables. Personal income was compared by occupation. Results Employed survivors were less often in higher skilled Managerial/Professional occupations (Relative Risk=0.93, 95% Confidence Interval 0.89–0.98) than siblings. Survivors who were Black, were diagnosed at a younger age, or had high-dose cranial radiation were less likely to hold Professional occupations than other survivors. In multinomial models, female survivors’ likelihood of being in full-time Professional occupations (27%) was lower than male survivors (42%) and female (41%) and male (50%) siblings. Survivors’ personal income was lower than siblings within each of the three occupational categories in models adjusted for sociodemographic variables. Conclusions Adult childhood cancer survivors are employed in lower skill jobs than siblings. Survivors with certain treatment histories are at higher risk and may require vocational assistance throughout adulthood. PMID:21246530

  18. Detecting Mixtures from Structural Model Differences Using Latent Variable Mixture Modeling: A Comparison of Relative Model Fit Statistics

    ERIC Educational Resources Information Center

    Henson, James M.; Reise, Steven P.; Kim, Kevin H.

    2007-01-01

    The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) [times] 3 (exogenous latent mean difference) [times] 3 (endogenous latent mean difference) [times] 3 (correlation between factors) [times] 3 (mixture proportions) factorial design. In addition, the efficacy of several…

  19. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, chiefly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market price and rubber price for the sampled countries. Results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
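
    Maximum likelihood estimation of a two-component normal mixture like the one in this record is usually carried out with the EM algorithm. The sketch below uses synthetic data standing in for two price regimes; it illustrates the method only and is not a reanalysis of the paper's data:

```python
import numpy as np

def em_two_normal(x, n_iter=200):
    """EM for a two-component univariate normal mixture,
    initialised by splitting around the sample mean."""
    mu = np.array([x.mean() - x.std(), x.mean() + x.std()])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = pi / sigma * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return pi, mu, sigma

# Synthetic stand-in for two regimes (e.g. "calm" vs "volatile" periods)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 400), rng.normal(2.0, 1.0, 600)])
pi, mu, sigma = em_two_normal(x)
print(np.round(np.sort(mu), 2))  # component means recovered near -2 and 2
```

    Each EM iteration is guaranteed not to decrease the likelihood, which is what makes it the standard workhorse for mixture MLE.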

  20. Alcohol use among university students: Considering a positive deviance approach.

    PubMed

    Tucker, Maryanne; Harris, Gregory E

    2016-09-01

    Harmful alcohol consumption among university students continues to be a significant issue. This study examined whether variables identified in the positive deviance literature would predict responsible alcohol consumption among university students. Surveyed students were categorized into three groups: abstainers, responsible drinkers and binge drinkers. Multinomial logistic regression modelling was significant (χ² = 274.49, degrees of freedom = 24, p < .001), with several variables predicting group membership. While the model classification accuracy rate (i.e. 71.2%) exceeded the proportional by chance accuracy rate (i.e. 38.4%), providing further support for the model, the model itself best predicted binge drinker membership over the other two groups. © The Author(s) 2015.

  1. Effects of ignoring baseline on modeling transitions from intact cognition to dementia.

    PubMed

    Yu, Lei; Tyas, Suzanne L; Snowdon, David A; Kryscio, Richard J

    2009-07-01

    This paper evaluates the effect of ignoring baseline when modeling transitions from intact cognition to dementia with mild cognitive impairment (MCI) and global impairment (GI) as intervening cognitive states. Transitions among states are modeled by a discrete-time Markov chain having three transient (intact cognition, MCI, and GI) and two competing absorbing states (death and dementia). Transition probabilities depend on two covariates, age and the presence/absence of an apolipoprotein E-epsilon4 allele, through a multinomial logistic model with shared random effects. Results are illustrated with an application to the Nun Study, a cohort of 678 participants 75+ years of age at baseline and followed longitudinally with up to ten cognitive assessments per nun.

  2. Latent spatial models and sampling design for landscape genetics

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.

  3. Predicting herbicide mixture effects on multiple algal species using mixture toxicity models.

    PubMed

    Nagai, Takashi

    2017-10-01

    The validity of the application of mixture toxicity models, concentration addition and independent action, to a species sensitivity distribution (SSD) for calculation of a multisubstance potentially affected fraction was examined in laboratory experiments. Toxicity assays of herbicide mixtures using 5 species of periphytic algae were conducted. Two mixture experiments were designed: a mixture of 5 herbicides with similar modes of action and a mixture of 5 herbicides with dissimilar modes of action, corresponding to the assumptions of the concentration addition and independent action models, respectively. Experimentally obtained mixture effects on 5 algal species were converted to the fraction of affected (>50% effect on growth rate) species. The predictive ability of the concentration addition and independent action models with direct application to SSD depended on the mode of action of chemicals. That is, prediction was better for the concentration addition model than the independent action model for the mixture of herbicides with similar modes of action. In contrast, prediction was better for the independent action model than the concentration addition model for the mixture of herbicides with dissimilar modes of action. Thus, the concentration addition and independent action models could be applied to SSD in the same manner as for a single-species effect. The present study to validate the application of the concentration addition and independent action models to SSD supports the usefulness of the multisubstance potentially affected fraction as the index of ecological risk. Environ Toxicol Chem 2017;36:2624-2630. © 2017 SETAC.

  4. Cognitive overload? An exploration of the potential impact of cognitive functioning in discrete choice experiments with older people in health care.

    PubMed

    Milte, Rachel; Ratcliffe, Julie; Chen, Gang; Lancsar, Emily; Miller, Michelle; Crotty, Maria

    2014-07-01

    This exploratory study sought to investigate the effect of cognitive functioning on the consistency of individual responses to a discrete choice experiment (DCE) study conducted exclusively with older people. A DCE to investigate preferences for multidisciplinary rehabilitation was administered to a consenting sample of older patients (aged 65 years and older) after surgery to repair a fractured hip (N = 84). Conditional logit, mixed logit, heteroscedastic conditional logit, and generalized multinomial logit regression models were used to analyze the DCE data and to explore the relationship between the level of cognitive functioning (specifically the absence or presence of mild cognitive impairment as assessed by the Mini-Mental State Examination) and preference and scale heterogeneity. Both the heteroscedastic conditional logit and generalized multinomial logit models indicated that the presence of mild cognitive impairment did not have a significant effect on the consistency of responses to the DCE. This study provides important preliminary evidence relating to the effect of mild cognitive impairment on DCE responses for older people. It is important that further research be conducted in larger samples and more diverse populations to further substantiate the findings from this exploratory study and to assess the practicality and validity of the DCE approach with populations of older people. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  5. A constrained rasch model of trace redintegration in serial recall.

    PubMed

    Roodenrys, Steven; Miller, Leonie M

    2008-04-01

    The notion that verbal short-term memory tasks, such as serial recall, make use of information in long-term as well as in short-term memory is instantiated in many models of these tasks. Such models incorporate a process in which degraded traces retrieved from a short-term store are reconstructed, or redintegrated (Schweickert, 1993), through the use of information in long-term memory. This article presents a conceptual and mathematical model of this process based on a class of item-response theory models. It is demonstrated that this model provides a better fit to three sets of data than does the multinomial processing tree model of redintegration (Schweickert, 1993) and that a number of conceptual accounts of serial recall can be related to the parameters of the model.

  6. Development and assessment of the Quality of Life in Childhood Epilepsy Questionnaire (QOLCE-16).

    PubMed

    Goodwin, Shane W; Ferro, Mark A; Speechley, Kathy N

    2018-03-01

    The aim of this study was to develop and validate a brief version of the Quality of Life in Childhood Epilepsy Questionnaire (QOLCE). A secondary aim was to compare the results described in previously published studies using the QOLCE-55 with those obtained using the new brief version. Data come from 373 children involved in the Health-related Quality of Life in Children with Epilepsy Study, a multicenter prospective cohort study. Item response theory (IRT) methods were used to assess dimensionality and item properties and to guide the selection of items. Replication of results using the brief measure was conducted with multiple regression, multinomial regression, and latent mixture modeling techniques. IRT methods identified a bi-factor graded response model that best fit the data. Thirty-nine items were removed, resulting in a 16-item QOLCE (QOLCE-16) with an equal number of items in all 4 domains of functioning (Cognitive, Emotional, Social, and Physical). Model fit was excellent: Comparative Fit Index = 0.99; Tucker-Lewis Index = 0.99; root mean square error of approximation = 0.052 (90% confidence interval [CI] 0.041-0.064); weighted root mean square = 0.76. Results that were reported previously using the QOLCE-55 and QOLCE-76 were comparable to those generated using the QOLCE-16. The QOLCE-16 is a multidimensional measure of health-related quality of life (HRQoL) with good psychometric properties and a short estimated completion time. It is notable that the items were calibrated using multidimensional IRT methods to create a measure that conforms to conventional definitions of HRQoL. The QOLCE-16 is an appropriate measure for both clinicians and researchers wanting to record HRQoL information in children with epilepsy. Wiley Periodicals, Inc. © 2018 International League Against Epilepsy.

  7. Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC

    ERIC Educational Resources Information Center

    Depaoli, Sarah

    2012-01-01

    Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…

  8. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamic subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess high sensitivity.

  9. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, and the Bayesian method is a statistical approach used to fit such models. Bayesian methods are widely used because their asymptotic properties provide remarkable results; they also exhibit consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is selected using the Bayesian Information Criterion, since misidentifying the number of components may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.
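
    The BIC-based component selection described above can be sketched numerically. Below is a minimal, self-contained illustration (not the paper's Bayesian implementation): a univariate Gaussian mixture is fitted by EM for each candidate number of components, and the candidate minimizing BIC is kept. All data and parameter values are synthetic.

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=200):
    """Fit a k-component univariate Gaussian mixture by EM; return its log-likelihood."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread initial means over the data
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances (with small floors for stability)
        nk = resp.sum(axis=0) + 1e-12
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = np.maximum((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-6)
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.log(dens.sum(axis=1)).sum()

def best_k_by_bic(x, k_max=3):
    """Choose the number of components minimizing BIC = -2*loglik + p*log(n)."""
    bics = []
    for k in range(1, k_max + 1):
        p = 3 * k - 1  # k means + k variances + (k-1) free weights
        bics.append(-2 * em_gmm_1d(x, k) + p * np.log(len(x)))
    return int(np.argmin(bics)) + 1

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(8.0, 1.0, 300)])
print(best_k_by_bic(x))  # two well-separated components, so BIC should select 2
```

    In a Bayesian treatment the same idea carries over with posterior computation in place of EM; BIC remains a common pragmatic device for fixing the number of components beforehand.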

  10. A nonparametric multiple imputation approach for missing categorical data.

    PubMed

    Zhou, Muhan; He, Yulei; Yu, Mandi; Hsu, Chiu-Hsieh

    2017-06-06

    Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. We propose a nearest-neighbour multiple imputation approach to impute a missing at random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring the distance between each missing value and the non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model), and the other fits a logistic regression for predicting the missingness probabilities (the missingness model). A weighting scheme is used to accommodate contributions from the two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme, even under some misspecification of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method.
We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with more than two levels for assessing the distribution of the outcome. In terms of the choices for the working models, we suggest a multinomial logistic regression for predicting the missing outcome and a binary logistic regression for predicting the missingness probability.
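
    The donor-selection step of this approach can be sketched as follows. This is an illustrative fragment, not the authors' implementation: it assumes the predictive scores have already been computed from the two working models, and it imputes each missing value by sampling from its k nearest donors in score space.

```python
import numpy as np

def nn_impute(scores, y, k=5, rng=None):
    """Impute missing categorical y (coded as np.nan) by drawing from the
    k nearest non-missing donors in predictive-score space."""
    if rng is None:
        rng = np.random.default_rng(0)
    y = np.asarray(y, dtype=float).copy()
    obs = ~np.isnan(y)
    donor_scores, donor_y = scores[obs], y[obs]
    for i in np.where(~obs)[0]:
        nearest = np.argsort(np.abs(donor_scores - scores[i]))[:k]
        y[i] = rng.choice(donor_y[nearest])  # random draw keeps the imputation "proper"
    return y.astype(int)

scores = np.linspace(0.0, 1.0, 10)  # hypothetical predictive scores
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, np.nan])
print(nn_impute(scores, y, k=3))  # last value imputed as 1 from its nearest donors
```

    Multiple imputation would repeat this draw M times and pool the results; the weighting of the outcome-model and missingness-model scores is omitted here.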

  11. Accounting for non-independent detection when estimating abundance of organisms with a Bayesian approach

    USGS Publications Warehouse

    Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth

    2011-01-01

    Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, which leads to non-independent detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models.
Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
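
    The core issue, extra-binomial variation in counts caused by correlated detections, can be illustrated with a short simulation. Parameter values are arbitrary, and the Beta parameterization in terms of a correlation rho (alpha + beta = (1 - rho)/rho) follows the usual intra-class correlation convention; this is an illustration, not the authors' model code.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, rho, n_surveys = 50, 0.4, 0.3, 100_000

# Correlated detection: the per-survey detection probability is itself random,
# drawn from a Beta distribution with mean p and intra-class correlation rho.
theta = (1 - rho) / rho
p_t = rng.beta(p * theta, (1 - p) * theta, n_surveys)
counts_bb = rng.binomial(N, p_t)              # beta-binomial counts
counts_b = rng.binomial(N, p, n_surveys)      # independent-detection counts

# Same mean, but far larger variance: a binomial mixture model fitted to the
# beta-binomial counts misreads the extra spread as higher abundance.
print(counts_b.mean(), counts_bb.mean())
print(counts_b.var(), counts_bb.var())
```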

  12. A competitive binding model predicts the response of mammalian olfactory receptors to mixtures

    NASA Astrophysics Data System (ADS)

    Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay

    Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To gain a realistic understanding of the olfactory system, we need methods to predict responses to complex mixtures from single-odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions in which only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts responses of 15 (of 18 tested) receptors to within 10-30% of the observed values, for mixtures with 2, 3, and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions lead to additional nonlinearities observed in mixture responses, such as suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
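
    A competitive-binding response of the kind described can be written in a few lines. The parameterization below (per-odorant affinities K and efficacies, competing through a shared occupancy denominator) is a generic sketch of such a model, not the authors' fitted form.

```python
import numpy as np

def mixture_response(conc, K, efficacy, rmax=1.0):
    """Receptor response when only one odorant can occupy the site at a time:
    each odorant's drive c/K competes through a common denominator."""
    x = np.asarray(conc, float) / np.asarray(K, float)
    return rmax * np.sum(np.asarray(efficacy, float) * x) / (1.0 + np.sum(x))

r_single = mixture_response([1.0], [1.0], [1.0])              # 0.5 at c = K
r_mix = mixture_response([1.0, 1.0], [1.0, 1.0], [1.0, 1.0])  # 2/3 for the pair
print(r_single, r_mix)  # mixture response is below the sum of single responses
```

    Because the denominator is shared, the mixture response is sublinear in the individual responses, which is the suppression the abstract refers to; adding interaction terms would produce cooperativity or overshadowing.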

  13. A computer program for estimation from incomplete multinomial data

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1978-01-01

    Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in FORTRAN 4 language for the Control Data CYBER 170 series digital computer system with network operating system (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 seconds to 92 seconds on CYBER 175 depending on the value of the prior parameter.
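
    The estimation problem the program addresses can be illustrated with a small EM sketch for the maximum likelihood part (the FORTRAN program itself and its Bayesian posterior computations are not reproduced here). "Incomplete" observations are counts known only to fall in a subset of cells; the E-step allocates them across those cells in proportion to the current probability estimates.

```python
import numpy as np

def em_incomplete_multinomial(full_counts, partial, n_iter=200):
    """ML estimate of multinomial cell probabilities p from fully classified
    counts plus counts classified only to subsets of cells.
    partial: list of (cell_indices, count) pairs."""
    k = len(full_counts)
    p = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        expected = np.array(full_counts, dtype=float)
        for cells, m in partial:
            cells = np.asarray(cells)
            expected[cells] += m * p[cells] / p[cells].sum()  # E-step allocation
        p = expected / expected.sum()                         # M-step update
    return p

# 100 fully classified observations plus 40 known only to be in cell 0 or 1
p_hat = em_incomplete_multinomial([30, 50, 20], [([0, 1], 40)])
print(p_hat)  # converges to (45, 75, 20)/140
```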

  14. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    Normal mixture distributions models have been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of returns for the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present the application of the normal mixture distributions model in risk analysis, where we apply it to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the returns distribution.
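
    Once a two-component mixture is fitted, VaR and CVaR follow directly from the mixture distribution. The sketch below uses Monte Carlo sampling with made-up parameter values (weights, means, and standard deviations are illustrative, not the FBMKLCI estimates).

```python
import numpy as np

def mixture_var_cvar(w, mu, sigma, alpha=0.05, n=400_000, seed=0):
    """Monte Carlo VaR and CVaR at level alpha for a normal mixture of returns."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(len(w), size=n, p=w)                     # pick a component
    r = rng.normal(np.asarray(mu)[comp], np.asarray(sigma)[comp])
    var = -np.quantile(r, alpha)       # loss threshold exceeded with probability alpha
    cvar = -r[r <= -var].mean()        # expected loss given the threshold is exceeded
    return var, cvar

# Calm regime 90% of the time, turbulent regime 10% of the time (illustrative)
v, c = mixture_var_cvar([0.9, 0.1], [0.001, -0.02], [0.01, 0.04])
print(v, c)  # CVaR is at least as large as VaR by construction
```

    The heavy-tailed turbulent component is what lets the mixture reproduce the leptokurtosis that a single normal misses, which is why the mixture-based VaR and CVaR are more realistic in the tails.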

  15. Dynamics and associations of microbial community types across the human body.

    PubMed

    Ding, Tao; Schloss, Patrick D

    2014-05-15

    A primary goal of the Human Microbiome Project (HMP) was to provide a reference collection of 16S ribosomal RNA gene sequences collected from sites across the human body that would allow microbiologists to better associate changes in the microbiome with changes in health. The HMP Consortium has reported the structure and function of the human microbiome in 300 healthy adults at 18 body sites from a single time point. Using additional data collected over the course of 12-18 months, we used Dirichlet multinomial mixture models to partition the data into community types for each body site and made three important observations. First, there were strong associations between whether individuals had been breastfed as infants, their gender, and their level of education with their community types at several body sites. Second, although the specific taxonomic compositions of the oral and gut microbiomes were different, the community types observed at these sites were predictive of each other. Finally, over the course of the sampling period, the community types from sites within the oral cavity were the least stable, whereas those in the vagina and gut were the most stable. Our results demonstrate that even with the considerable intra- and interpersonal variation in the human microbiome, this variation can be partitioned into community types that are predictive of each other and are probably the result of life-history characteristics. Understanding the diversity of community types and the mechanisms that result in an individual having a particular type or changing types will allow us to use community types to assess disease risk and to personalize therapies.
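
    The community-typing idea can be sketched for the classification step: given Dirichlet parameters for each community type, a sample's taxon counts are assigned to the type with the highest Dirichlet-multinomial likelihood. The type parameters below are invented for illustration; the full method also estimates these parameters and the mixture weights from the data.

```python
from math import lgamma

def dm_loglik(x, alpha):
    """Dirichlet-multinomial log-likelihood of count vector x given Dirichlet
    parameters alpha (the multinomial coefficient is omitted because it is
    constant across community types)."""
    A, n = sum(alpha), sum(x)
    ll = lgamma(A) - lgamma(n + A)
    for xi, ai in zip(x, alpha):
        ll += lgamma(xi + ai) - lgamma(ai)
    return ll

def assign_type(x, types):
    """Index of the community type maximizing the DM likelihood of x."""
    return max(range(len(types)), key=lambda t: dm_loglik(x, types[t]))

types = [[8.0, 1.0, 1.0], [1.0, 1.0, 8.0]]  # two invented "community types"
print(assign_type([50, 5, 3], types), assign_type([2, 4, 60], types))
```

    The Dirichlet layer gives each type a soft, overdispersed taxon profile, which is why a community type can absorb the person-to-person variation that a plain multinomial cannot.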

  16. Exposure to polycyclic aromatic hydrocarbons (PAHs) and bladder cancer: evaluation from a gene-environment perspective in a hospital-based case-control study in the Canary Islands (Spain)

    PubMed Central

    Boada, Luis D; Henríquez-Hernández, Luis A; Navarro, Patricio; Zumbado, Manuel; Almeida-González, Maira; Camacho, María; Álvarez-León, Eva E; Valencia-Santana, Jorge A; Luzardo, Octavio P

    2015-01-01

    Background: Exposure to polycyclic aromatic hydrocarbons (PAHs) has been linked to bladder cancer. Objective: To evaluate the role of PAHs in bladder cancer, PAHs serum levels were measured in patients and controls from a case-control study. Methods: A total of 140 bladder cancer patients and 206 healthy controls were included in the study. Sixteen PAHs were analyzed from the serum of subjects by gas chromatography–mass spectrometry. Results: Serum PAHs did not appear to be related to bladder cancer risk, although the profile of contamination by PAHs was different between patients and controls: pyrene (Pyr) was solely detected in controls and chrysene (Chry) was exclusively detected in the cases. Phenanthrene (Phe) serum levels were inversely associated with bladder cancer (OR = 0·79, 95%CI = 0·64–0·99, P = 0·030), although this effect disappeared when the allelic distribution of glutathione-S-transferase polymorphisms of the population was introduced into the model (multinomial logistic regression test, P = 0·933). Smoking (OR = 3·62, 95%CI = 1·93–6·79, P<0·0001) and coffee consumption (OR = 1·73, 95%CI = 1·04–2·86, P = 0·033) were relevant risk factors for bladder cancer. Conclusions: Specific PAH mixtures may play a relevant role in bladder cancer, although such effect seems to be highly modulated by polymorphisms in genes encoding xenobiotic-metabolizing enzymes. PMID:25291984

  17. Trajectories of suicidal ideation over 6 months among 482 outpatients with bipolar disorder.

    PubMed

    Köhler-Forsberg, Ole; Madsen, Trine; Behrendt-Møller, Ida; Sylvia, Louisa; Bowden, Charles L; Gao, Keming; Bobo, William V; Trivedi, Madhukar H; Calabrese, Joseph R; Thase, Michael; Shelton, Richard C; McInnis, Melvin; Tohen, Mauricio; Ketter, Terence A; Friedman, Edward S; Deckersbach, Thilo; McElroy, Susan L; Reilly-Harrington, Noreen A; Nierenberg, Andrew A

    2017-12-01

    Suicidal ideation occurs frequently among individuals with bipolar disorder; however, its course and persistence over time remain unclear. We aimed to investigate 6-month trajectories of suicidal ideation among adults with bipolar disorder. The Bipolar CHOICE study randomized 482 outpatients with bipolar disorder to 6 months of lithium- or quetiapine-based treatment, including other psychotropic medications as clinically indicated. Participants were asked at 9 visits about suicidal ideation using the Concise Health Risk Tracking scale. We performed latent growth mixture modeling to empirically identify trajectories of suicidal ideation. Multinomial logistic regression analyses were applied to estimate associations between trajectories and potential predictors. We identified four distinct trajectories. The Moderate-Stable group represented 11.1% and was characterized by constant suicidal ideation. The Moderate-Unstable group included 2.9% with persistent thoughts about suicide and a more fluctuating course. The third (Persistent-low, 20.8%) and fourth groups (Persistent-very-low, 65.1%) were characterized by low levels of suicidal ideation. Higher depression scores and previous suicide attempts (non-significant trend) predicted membership of the Moderate-Stable group, whereas randomized treatment did not. No specific treatments against suicidal ideation were included, and suicidal thoughts may persist for several years. More than one in ten adult outpatients with bipolar disorder had moderately increased suicidal ideation throughout 6 months of pharmacotherapy. The identified predictors may help clinicians to identify those with an additional need for treatment against suicidal thoughts, and future studies need to investigate whether targeted treatment (pharmacological and non-pharmacological) may improve the course of persistent suicidal ideation. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Evaluating the Relationship between Productivity and Quality in Emergency Departments

    PubMed Central

    Bastian, Nathaniel D.; Riordan, John P.

    2017-01-01

    Background In the United States, emergency departments (EDs) are constantly pressured to improve operational efficiency and quality in order to gain financial benefits and maintain a positive reputation. Objectives The first objective is to evaluate how efficiently EDs transform their input resources into quality outputs. The second objective is to investigate the relationship between the efficiency and quality performance of EDs and the factors affecting this relationship. Methods Using two data sources, we develop a data envelopment analysis (DEA) model to evaluate the relative efficiency of EDs. Based on the DEA result, we performed multinomial logistic regression to investigate the relationship between ED efficiency and quality performance. Results The DEA results indicated that the main source of inefficiencies was working hours of technicians. The multinomial logistic regression result indicated that the number of electrocardiograms and X-ray procedures conducted in the ED and the length of stay were significantly associated with the trade-offs between relative efficiency and quality. Structural ED characteristics did not influence the relationship between efficiency and quality. Conclusions Depending on the structural and operational characteristics of EDs, different factors can affect the relationship between efficiency and quality. PMID:29065673

  19. Occupational outcomes of adult childhood cancer survivors: A report from the childhood cancer survivor study.

    PubMed

    Kirchhoff, Anne C; Krull, Kevin R; Ness, Kirsten K; Park, Elyse R; Oeffinger, Kevin C; Hudson, Melissa M; Stovall, Marilyn; Robison, Leslie L; Wickizer, Thomas; Leisenring, Wendy

    2011-07-01

    The authors examined whether survivors from the Childhood Cancer Survivor Study were less likely to be in higher-skill occupations than a sibling comparison and whether certain survivors were at higher risk for lower-skill jobs. The authors created 3 mutually exclusive occupational categories for participants aged ≥ 25 years: Managerial/Professional, Nonphysical Service/Blue Collar, and Physical Service/Blue Collar. The authors examined currently employed survivors (4845) and their siblings (1727) in multivariable generalized linear models to evaluate the likelihood of being in 1 of the 3 occupational categories. Multinomial logistic regression was used among all participants to examine the likelihood of these outcomes compared to being unemployed (survivors, 6671; siblings, 2129). Multivariable linear models were used to assess survivor occupational differences by cancer-  and treatment-related variables. Personal income was compared by occupation. Employed survivors were less often in higher-skilled Managerial/Professional occupations (relative risk, 0.93; 95% confidence interval 0.89-0.98) than their siblings. Survivors who were black, were diagnosed at a younger age, or had high-dose cranial radiation were less likely to hold Managerial/Professional occupations than other survivors. In multinomial models, female survivors' likelihood of being in full-time Managerial/Professional occupations (27%) was lower than male survivors (42%) and female (41%) and male (50%) siblings. Survivors' personal income was lower than siblings within each of the 3 occupational categories in models adjusted for sociodemographic variables. Adult childhood cancer survivors are employed in lower-skill jobs than siblings. Survivors with certain treatment histories are at higher risk for lower-skill jobs and may require vocational assistance throughout adulthood. Copyright © 2011 American Cancer Society.

  20. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.

    2018-03-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) one hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane etc.) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures in which the reaction products contain molecules of carbon. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The models can also be used to calculate the thermodynamic parameters of a mixture in a state of chemical equilibrium.

  1. ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics

    PubMed Central

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.

    2014-01-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamic subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess high sensitivity. PMID:24992156

  2. Leveling up the analysis of the reminiscence bump in autobiographical memory: A new approach based on multilevel multinomial models.

    PubMed

    Zimprich, Daniel; Wolf, Tabea

    2018-06-20

    In many studies of autobiographical memory, participants are asked to generate more than one autobiographical memory. The resulting data then have a hierarchical or multilevel structure, in the sense that the autobiographical memories (Level 1) generated by the same person (Level 2) tend to be more similar. Transferred to an analysis of the reminiscence bump in autobiographical memory, at Level 1 the prediction of whether an autobiographical memory will fall within the reminiscence bump is based on the characteristics of that memory. At Level 2, the prediction of whether an individual will report more autobiographical memories that fall in the reminiscence bump is based on the characteristics of the individual. We suggest a multilevel multinomial model that allows for analyzing whether an autobiographical memory falls in the reminiscence bump at both levels of analysis simultaneously. The data come from 100 older participants who reported up to 33 autobiographical memories. Our results showed that about 12% of the total variance was between persons (Level 2). Moreover, at Level 1, memories of first-time experiences were more likely to fall in the reminiscence bump than were emotionally more positive memories. At Level 2, persons who reported more emotionally positive memories tended to report fewer memories from the life period after the reminiscence bump. In addition, cross-level interactions showed that the effects at Level 1 partly depended on the Level 2 effects. We discuss possible extensions of the model we present and the meaning of our findings for two prominent explanatory approaches to the reminiscence bump, as well as future directions.

  3. Applicability study of classical and contemporary models for effective complex permittivity of metal powders.

    PubMed

    Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien

    2012-01-01

    Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of the complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through the original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder in the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
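
    Two of the classical mixing rules compared in the study are simple enough to state directly. The sketch below implements the Lichtenecker (logarithmic) and Maxwell Garnett rules for a volume fraction f of inclusions in a host matrix; both accept complex permittivities, and the numbers used are illustrative.

```python
import numpy as np

def lichtenecker(eps_m, eps_i, f):
    """Lichtenecker logarithmic mixing rule."""
    return np.exp((1.0 - f) * np.log(eps_m) + f * np.log(eps_i))

def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett rule for spherical inclusions in a host matrix."""
    num = eps_i + 2.0 * eps_m + 2.0 * f * (eps_i - eps_m)
    den = eps_i + 2.0 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

# Lossy inclusions (complex permittivity) in a lossless matrix, f = 20%
print(maxwell_garnett(2.0 + 0.0j, 10.0 - 5.0j, 0.2))
```

    Both rules recover the pure matrix at f = 0 and the pure inclusion at f = 1; their disagreement at intermediate fractions is exactly what the paper's comparison against measured data probes.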

  4. Modeling and analysis of personal exposures to VOC mixtures using copulas

    PubMed Central

    Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart

    2014-01-01

    Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs) with aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often, a single compound dominated the mixture, however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration. 
Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10⁻³ for about 10% of RIOPA participants. Factors affecting the likelihood of high concentration mixtures included city, participant ethnicity, and house air exchange rates. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy, and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that poorly fit lognormal distributions and that represent the greatest risks. PMID:24333991
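
    The copula idea, separating each pollutant's marginal distribution from the dependence between pollutants, can be sketched with a Gaussian copula (the study's best fits were Gumbel and t copulas, which capture tail dependence better; Gaussian is used here only because it is the simplest to sample). The marginals and correlation value are illustrative.

```python
import numpy as np
from math import erf

def gaussian_copula_sample(n, corr, ppfs, seed=0):
    """Correlated normals -> uniforms (normal CDF) -> each marginal's inverse CDF."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, len(ppfs))) @ np.linalg.cholesky(corr).T
    u = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))  # standard normal CDF
    return np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(ppfs)])

# Two hypothetical VOC concentrations with exponential marginals (rates 2 and 0.5)
corr = np.array([[1.0, 0.8], [0.8, 1.0]])
ppfs = [lambda u: -np.log(1.0 - u) / 2.0, lambda u: -np.log(1.0 - u) / 0.5]
x = gaussian_copula_sample(20_000, corr, ppfs)
print(x.mean(axis=0), np.corrcoef(x.T)[0, 1])
```

    The samples keep each variable's own marginal distribution while carrying the specified dependence, which is why copulas can reproduce joint high-exposure events that a multivariate lognormal misses.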

  5. Pulse pileup statistics for energy discriminating photon counting x-ray detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Adam S.; Harrison, Daniel; Lobastov, Vladimir

    Purpose: Energy discriminating photon counting x-ray detectors can be subject to a wide range of flux rates if applied in clinical settings. Even when the incident rate is a small fraction of the detector's maximum periodic rate N₀, pulse pileup leads to count rate losses and spectral distortion. Although the deterministic effects can be corrected, the detrimental effect of pileup on image noise is not well understood and may limit the performance of photon counting systems. Therefore, the authors devise a method to determine the detector count statistics and imaging performance. Methods: The detector count statistics are derived analytically for an idealized pileup model with delta pulses of a nonparalyzable detector. These statistics are then used to compute the performance (e.g., contrast-to-noise ratio) for both single material and material decomposition contrast detection tasks via the Cramer-Rao lower bound (CRLB) as a function of the detector input count rate. With more realistic unipolar and bipolar pulse pileup models of a nonparalyzable detector, the imaging task performance is determined by Monte Carlo simulations and also approximated by a multinomial method based solely on the mean detected output spectrum. Photon counting performance at different count rates is compared with ideal energy integration, which is unaffected by count rate. Results: The authors found that an ideal photon counting detector with perfect energy resolution outperforms energy integration for our contrast detection tasks, but when the input count rate exceeds 20% N₀, many of these benefits disappear. The benefit with iodine contrast falls rapidly with increased count rate while water contrast is not as sensitive to count rates. The performance with a delta pulse model is overoptimistic when compared to the more realistic bipolar pulse model. The multinomial approximation predicts imaging performance very close to the prediction from Monte Carlo simulations.
The monoenergetic image with maximum contrast-to-noise ratio from dual energy imaging with ideal photon counting is only slightly better than with dual kVp energy integration, and with a bipolar pulse model, energy integration outperforms photon counting for this particular metric because of the count rate losses. However, the material resolving capability of photon counting can be superior to energy integration with dual kVp even in the presence of pileup because of the energy information available to photon counting. Conclusions: A computationally efficient multinomial approximation of the count statistics that is based on the mean output spectrum can accurately predict imaging performance. This enables photon counting system designers to directly relate the effect of pileup to its impact on imaging statistics and how to best take advantage of the benefits of energy discriminating photon counting detectors, such as material separation with spectral imaging.
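    As a hedged illustration of the multinomial approximation described above, the sketch below models detected counts in a few energy bins as Multinomial(N, p) and propagates the resulting covariance through an energy-weighted measurement; the spectrum, weights, and count level are invented, not taken from the paper.

```python
import numpy as np

# Multinomial approximation sketch: detected counts in K energy bins are
# modeled as Multinomial(N, p), where p is the (pileup-distorted) mean
# output spectrum normalized to sum to 1.  All numbers are illustrative.
p = np.array([0.35, 0.30, 0.20, 0.15])   # normalized mean detected spectrum
N = 10_000                               # total detected counts per pixel

# Multinomial covariance of the bin counts: N * (diag(p) - p p^T).
cov = N * (np.diag(p) - np.outer(p, p))

# Variance of an energy-weighted measurement  s = w . counts
w = np.array([1.0, 0.8, 0.6, 0.4])       # hypothetical bin weights
var_s = w @ cov @ w

# Monte Carlo check of the analytic variance.
rng = np.random.default_rng(0)
samples = rng.multinomial(N, p, size=20_000) @ w
print(var_s, samples.var())
```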

  6. An agent-based model for queue formation of powered two-wheelers in heterogeneous traffic

    NASA Astrophysics Data System (ADS)

    Lee, Tzu-Chang; Wong, K. I.

    2016-11-01

    This paper presents an agent-based model (ABM) for simulating the queue formation of powered two-wheelers (PTWs) in heterogeneous traffic at a signalized intersection. The main novelty is that the proposed interaction rule describing the position choice behavior of PTWs when queuing in heterogeneous traffic can capture the stochastic nature of the decision making process. The interaction rule is formulated as a multinomial logit model, which is calibrated by using a microscopic traffic trajectory dataset obtained from video footage. The ABM is validated against the survey data for the vehicular trajectory patterns, queuing patterns, queue lengths, and discharge rates. The results demonstrate that the proposed model is capable of replicating the observed queue formation process for heterogeneous traffic.
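    A minimal sketch of the multinomial logit interaction rule mentioned above; the utility coefficients and position attributes are invented for illustration (the calibrated values come from the trajectory dataset in the paper).

```python
import numpy as np

# Multinomial logit choice rule: P_j = exp(V_j) / sum_k exp(V_k), where
# V_j is the systematic utility of candidate queuing position j.
def mnl_probs(V):
    e = np.exp(V - V.max())          # subtract max for numerical stability
    return e / e.sum()

beta = np.array([-0.8, 1.2])         # hypothetical coeffs: distance, gap
X = np.array([[2.0, 1.5],            # attributes of three candidate positions
              [1.0, 0.5],
              [3.5, 2.0]])
P = mnl_probs(X @ beta)
print(P)                             # probabilities sum to 1
```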

  7. Effects of ignoring baseline on modeling transitions from intact cognition to dementia

    PubMed Central

    Yu, Lei; Tyas, Suzanne L.; Snowdon, David A.; Kryscio, Richard J.

    2009-01-01

    This paper evaluates the effect of ignoring baseline when modeling transitions from intact cognition to dementia with mild cognitive impairment (MCI) and global impairment (GI) as intervening cognitive states. Transitions among states are modeled by a discrete-time Markov chain having three transient (intact cognition, MCI, and GI) and two competing absorbing states (death and dementia). Transition probabilities depend on two covariates, age and the presence/absence of an apolipoprotein E-ε4 allele, through a multinomial logistic model with shared random effects. Results are illustrated with an application to the Nun Study, a cohort of 678 participants 75+ years of age at baseline and followed longitudinally with up to ten cognitive assessments per nun. PMID:20161282
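    The transition structure described above can be sketched as a simple simulation; the transition probabilities below are invented placeholders, whereas in the study each row depends on age and APOE-ε4 status through a multinomial logistic model with shared random effects.

```python
import numpy as np

# Five-state discrete-time chain: three transient cognitive states and two
# competing absorbing states.  The matrix entries are illustrative only.
states = ["intact", "MCI", "GI", "dementia", "death"]
P = np.array([
    [0.85, 0.08, 0.04, 0.01, 0.02],   # intact
    [0.10, 0.70, 0.10, 0.05, 0.05],   # MCI
    [0.02, 0.08, 0.70, 0.12, 0.08],   # GI
    [0.00, 0.00, 0.00, 1.00, 0.00],   # dementia (absorbing)
    [0.00, 0.00, 0.00, 0.00, 1.00],   # death (absorbing)
])

rng = np.random.default_rng(1)

def simulate(start=0, max_steps=50):
    s = start
    for _ in range(max_steps):
        if s >= 3:                    # reached an absorbing state
            break
        s = rng.choice(5, p=P[s])
    return s

ends = [simulate() for _ in range(2000)]
print({states[s]: ends.count(s) for s in set(ends)})
```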

  8. Estimation and Model Selection for Finite Mixtures of Latent Interaction Models

    ERIC Educational Resources Information Center

    Hsu, Jui-Chen

    2011-01-01

    Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to proceed when unobserved population heterogeneity exists in the endogenous latent variables of nonlinear structural equation models. The current study estimates a mixture of latent interaction…

  9. Scale Mixture Models with Applications to Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Qin, Zhaohui S.; Damien, Paul; Walker, Stephen

    2003-11-01

    Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixture of uniform distributions.
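    A small sketch of the scale-mixture-of-uniforms idea: by the Khintchine representation, a standard normal arises as X | V ~ Uniform(−√V, √V) with V ~ χ²(3), which is what makes such models convenient for Gibbs sampling; the identity can be checked by Monte Carlo.

```python
import numpy as np

# Scale mixture of uniforms for the standard normal:
#   V ~ chi-squared(3),  X | V ~ Uniform(-sqrt(V), sqrt(V))
# gives X ~ N(0, 1) marginally.
rng = np.random.default_rng(2)
n = 200_000
v = rng.chisquare(df=3, size=n)
x = rng.uniform(-np.sqrt(v), np.sqrt(v))

# Marginally x should be N(0, 1): Var(X) = E[V / 3] = 3 / 3 = 1.
print(x.mean(), x.var())
```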

  10. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    PubMed

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

    In our earlier work, we have demonstrated that it is possible to characterize binary mixtures using single component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we developed a QSPR model of an excess thermodynamic property of binary mixtures, i.e., excess molar volume (V(E)). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V(E)), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    NASA Astrophysics Data System (ADS)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri

    2017-12-01

    Bayesian mixture modeling requires identifying the most appropriate number of mixture components so that the resulting mixture model fits the data, following a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with Markov Chain Monte Carlo (MCMC) and has been used by several researchers to identify the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge for components, and birth/death of empty components. The RJMCMC algorithm must be developed according to the case observed. The purpose of this study is to assess the performance of an RJMCMC algorithm developed to identify the unknown number of mixture components in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in a Bayesian normal mixture model for the Indonesian microarray data, where the number of mixture components is not known with certainty.

  12. QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.

    PubMed

    Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng

    2018-05-01

    Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half-effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of single compounds and mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model (coefficient of determination of 0.9366 and root mean square error of 0.1345) could predict the toxicities of the 45 mixtures, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  14. Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution

    NASA Astrophysics Data System (ADS)

    Baldacchino, Tara; Worden, Keith; Rowson, Jennifer

    2017-02-01

    A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space, allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge, this robust mixture of experts model performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.
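    A quick numerical illustration of why the Student-t experts above resist outliers: the log-likelihood penalty a single outlier incurs grows quadratically under a Gaussian but only logarithmically under a t-distribution. The values used are arbitrary.

```python
import numpy as np
from scipy import stats

# Compare the negative log-density an outlier incurs under a standard
# normal vs. a Student-t with 3 degrees of freedom.
outlier = 8.0
gauss_pen = -stats.norm.logpdf(outlier)      # ~ outlier**2 / 2, huge
t_pen = -stats.t.logpdf(outlier, df=3)       # grows only logarithmically
print(gauss_pen, t_pen)
```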

  15. Development and validation of a metal mixture bioavailability model (MMBM) to predict chronic toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia.

    PubMed

    Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C

    2017-01-01

    Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, in the present study we developed a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7-d) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; dissolved organic carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmental (i.e., realistic) metal concentration ratios. Statistical analysis of mixture effects revealed that observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) in 85% of the treatments with less than 20% error. Moreover, the MMBM predicted chronic toxicity of the ternary Ni-Zn-Pb mixture at least as accurately as the toxicity of the individual metal treatments (RMSE of 16 for the mixture vs. 18, 17, and 23 for the Zn-only, Ni-only, and Pb-only treatments, respectively).
Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on metal mixture toxicity during chronic exposure and could be used in metal risk assessment frameworks. Copyright © 2016 Elsevier Ltd. All rights reserved.
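    The independent action (IA) and concentration addition (CA) reference models compared above can be sketched in a few lines; all effect levels and EC50 values below are hypothetical.

```python
import numpy as np

# IA combines fractional effects; CA is often summarized via toxic units.
def independent_action(effects):
    """IA: E_mix = 1 - prod(1 - E_i), with effects as fractions in [0, 1]."""
    effects = np.asarray(effects, dtype=float)
    return 1.0 - np.prod(1.0 - effects)

def toxic_units(concs, ec50s):
    """Sum of toxic units c_i / EC50_i; TU = 1 predicts a 50% mixture
    effect under concentration addition."""
    return float(np.sum(np.asarray(concs) / np.asarray(ec50s)))

# Hypothetical Ni-Zn-Pb single-metal effects (fraction reproduction lost)
e_mix = independent_action([0.20, 0.30, 0.10])
tu = toxic_units(concs=[5.0, 40.0, 2.0], ec50s=[20.0, 80.0, 10.0])
print(e_mix, tu)
```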

  16. Rasch Mixture Models for DIF Detection

    PubMed Central

    Strobl, Carolin; Zeileis, Achim

    2014-01-01

    Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a new score distribution for the Rasch mixture model is introduced here. It ensures the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study, and its application is illustrated in a study of verbal aggression. PMID:29795819
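    The single Rasch model underlying the mixture above can be sketched directly: the probability of a correct response depends only on the difference between person ability θ and item difficulty b. The values used here are arbitrary.

```python
import numpy as np

# Rasch item response probability: P(correct) = logistic(theta - b).
def rasch_p(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta = 0.5
b = np.array([-1.0, 0.0, 1.0, 2.0])   # easy -> hard items
print(rasch_p(theta, b))              # decreasing with item difficulty
```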

  17. Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data

    ERIC Educational Resources Information Center

    Kim, Su-Young; Kim, Jee-Seon

    2012-01-01

    This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…

  18. Mixture Modeling: Applications in Educational Psychology

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  19. Inferring Markov chains: Bayesian estimation, model comparison, entropy rate, and out-of-class modeling.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W

    2007-07-01

    Markov chains are a natural and well understood tool for describing one-dimensional patterns in time or space. We show how to infer kth order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
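    For k = 1, the Bayesian inference described above reduces to independent Dirichlet posteriors over the rows of the transition matrix; a minimal sketch on a synthetic two-state chain:

```python
import numpy as np

# Each row of the transition matrix gets an independent Dirichlet prior,
# so the posterior is Dirichlet with the observed transition counts added.
rng = np.random.default_rng(3)
P_true = np.array([[0.9, 0.1],
                   [0.3, 0.7]])

# Generate a sequence from the true chain.
seq = [0]
for _ in range(5000):
    seq.append(rng.choice(2, p=P_true[seq[-1]]))

# Count transitions n[i, j] and form posterior means under Dirichlet(1, 1).
counts = np.zeros((2, 2))
for a, b in zip(seq[:-1], seq[1:]):
    counts[a, b] += 1
alpha = 1.0                                    # symmetric prior
post_mean = (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)
print(post_mean)
```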

  20. Effects of public premiums on children's health insurance coverage: evidence from 1999 to 2003.

    PubMed

    Kenney, Genevieve; Hadley, Jack; Blavin, Fredric

    This study uses 2000 to 2004 Current Population Survey data to examine the effects of public premiums on the insurance coverage of children whose family incomes are between 100% and 300% of the federal poverty level. The analysis employs multinomial logistic models that control for factors other than premium costs. While the magnitude of the estimated effects varies across models, the results consistently indicate that raising public premiums reduces enrollment in public programs, with some children who forgo public coverage having private coverage instead and others being uninsured. The results indicate that public premiums have larger effects when applied to lower-income families.

  1. Can integrated health services delivery have an impact on hypertension management? A cross-sectional study in two cities of China.

    PubMed

    Li, Haitao; Sun, Ying; Qian, Dongfu

    2016-11-30

    Policy makers require information regarding the performance of different primary care delivery models in managing hypertension, which can be helpful for better hypertension management. This study aims to compare continuity of care among hypertensive patients between the Direct Management (DM) Model of community health centers (CHCs) in Wuhan and the Loose Collaboration (LC) Model in Nanjing. A cross-sectional questionnaire survey was conducted. Four CHCs in each city were randomly selected as study settings. In total, 386 patients in Nanjing and 396 in Wuhan completed face-to-face interview surveys and were included in the final analysis. Relational continuity and coordination continuity (including both information continuity and management continuity) were measured and analyzed. Binary or multinomial logistic regression models were used for comparison between the two cities. Participants from Nanjing had better relational continuity with primary care providers as compared with those from Wuhan: they were more likely to be familiar with a CHC physician (OR = 2.762; 95%CI: 1.878 to 4.061), to be taken care of by the same CHC physician (OR = 1.846; 95%CI: 1.262 to 2.700), and to be known well by a CHC physician (OR = 1.762; 95%CI: 1.206 to 2.572). Multinomial logistic regression analyses showed there were significant differences between the two cities in the reported frequency of communications between hospital and CHC physicians (P = 0.001), whether hospital and CHC physicians gave the same treatment suggestions (P = 0.016), and how the treatment strategy was formulated (P < 0.001). Participants in Wuhan were less likely than those in Nanjing to perceive continuity in the health services provided by hospital and CHC physicians (OR = 3.932; 95%CI: 2.394 to 6.459). Our study shows that continuity of care is better under the LC Model in Nanjing than under the DM Model in Wuhan, and suggests there is room for improvement regarding relational and information continuity in both cities.

  2. Local Solutions in the Estimation of Growth Mixture Models

    ERIC Educational Resources Information Center

    Hipp, John R.; Bauer, Daniel J.

    2006-01-01

    Finite mixture models are well known to have poorly behaved likelihood functions featuring singularities and multiple optima. Growth mixture models may suffer from fewer of these problems, potentially benefiting from the structure imposed on the estimated class means and covariances by the specified growth model. As demonstrated here, however,…

  3. Association of personality, neighbourhood, and civic participation with the level of perceived social support: the HUNT study, a cross-sectional survey.

    PubMed

    Grav, Siv; Romild, Ulla; Hellzèn, Ove; Stordal, Eystein

    2013-08-01

    The aim of the current study was to examine the association of personality, neighbourhood, and civic participation with the level of perceived social support if needed. The sample consists of a total of 35,797 participants (16,035 men and 19,762 women) drawn from the Nord-Trøndelag Health Study 3 (HUNT3), aged 20-89, with a fully completed short version of the Eysenck Personality Questionnaire (EPQ) including a complete response to questions regarding perceived social support. A multinomial logistic regression model was used to investigate the association with the three-category outcome (high, medium, and low) of perceived social support. The Chi-square test detected significant (p < 0.001) associations between perceived social support and personality, sense of community, civic participation, self-rated health, living arrangement, age group, and gender; no significant association was found between perceived social support and loss of social network. The crude and adjusted multinomial logistic regression models show a relation between medium and low scores on perceived social support, personality, and sources of social support. Interactions were observed between gender and self-rated health. There is an association between the level of perceived social support and personality, sense of community in the neighbourhood, and civic participation. Even though the interaction between male gender and self-reported health decreases the odds for low and medium social support, health professionals should be aware of men with poor health and their lack of social support.

  4. Atopic dermatitis is not associated with actinic keratosis: cross-sectional results from the Rotterdam study.

    PubMed

    Hajdarbegovic, E; Blom, H; Verkouteren, J A C; Hofman, A; Hollestein, L M; Nijsten, T

    2016-07-01

    Epidermal barrier impairment and an altered immune system in atopic dermatitis (AD) may predispose to ultraviolet-induced DNA damage. To study the association between AD and actinic keratosis (AK) in a population-based cross-sectional study. AD was defined by modified criteria of the U.K. working party's diagnostic criteria. AKs were diagnosed by physicians during a full-body skin examination, and keratinocyte cancers were identified via linkage to the national pathology database. The results were analysed in adjusted multivariable and multinomial models. A lower proportion of subjects with AD had AKs than those without AD: 16% vs. 24%, P = 0·002; unadjusted odds ratio (OR) 0·60, 95% confidence interval (CI) 0·42-0·83; adjusted OR 0·74, 95% CI 0·51-1·05; fully adjusted OR 0·69, 95% CI 0·47-1·07. In a multinomial model patients with AD were less likely to have ≥ 10 AKs (adjusted OR 0·28, 95% CI 0·09-0·90). No effect of AD on basal cell carcinoma or squamous cell carcinoma was found: adjusted OR 0·71, 95% CI 0·41-1·24 and adjusted OR 1·54, 95% CI 0·66-3·62, respectively. AD in community-dwelling patients is not associated with AK. © 2016 British Association of Dermatologists.

  5. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.

    PubMed

    Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten

    2017-10-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
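    A sketch of the von Mises-Fisher log-density on the unit hypersphere, the building block of the mixture model above; the mean direction, concentration, and test points are arbitrary.

```python
import numpy as np
from scipy.special import ive

# von Mises-Fisher density on the unit (p-1)-sphere:
#   f(x) = C_p(kappa) * exp(kappa * mu . x)
# Uses the exponentially scaled Bessel function `ive` for stability,
# since iv(nu, k) = ive(nu, k) * exp(k).
def vmf_logpdf(x, mu, kappa):
    p = len(mu)
    nu = p / 2.0 - 1.0
    log_c = (nu * np.log(kappa) - (p / 2.0) * np.log(2 * np.pi)
             - (np.log(ive(nu, kappa)) + kappa))
    return log_c + kappa * np.dot(mu, x)

mu = np.array([0.0, 0.0, 1.0])          # mean direction on the 2-sphere
x_aligned = np.array([0.0, 0.0, 1.0])
x_opposed = np.array([0.0, 0.0, -1.0])
print(vmf_logpdf(x_aligned, mu, kappa=10.0),
      vmf_logpdf(x_opposed, mu, kappa=10.0))
```

As a sanity check, for kappa → 0 the density approaches the uniform density 1/(4π) on the 2-sphere.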

  6. Cluster kinetics model for mixtures of glassformers

    NASA Astrophysics Data System (ADS)

    Brenskelle, Lisa A.; McCoy, Benjamin J.

    2007-10-01

    For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.

  7. Similarity measure and domain adaptation in multiple mixture model clustering: An application to image processing.

    PubMed

    Leong, Siow Hoo; Ong, Seng Huat

    2017-01-01

    This paper considers three crucial issues in processing scaled-down images: the representation of partial images, similarity measures, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF will suggest domain adaptation, which is changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.
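    The Gaussian mixture clustering step underlying the algorithms above can be sketched with a minimal 1-D EM loop on synthetic "intensity" data; the paper's scan-and-select procedure and modified Bayes factor are not reproduced here.

```python
import numpy as np

# Synthetic two-cluster "pixel intensity" data.
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.2, 0.05, 500),    # dark region
                    rng.normal(0.8, 0.05, 500)])   # bright region

def em_gmm(x, k=2, iters=100):
    """Minimal EM for a 1-D Gaussian mixture with k components."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread initial means
    sd = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[n, j] from component log-densities.
        logp = (-0.5 * ((x[:, None] - mu) / sd) ** 2
                - np.log(sd) + np.log(w))
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

w, mu, sd = em_gmm(x)
print(sorted(mu))
```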

  9. Evaluating differential effects using regression interactions and regression mixture models

    PubMed Central

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results with those from using an interaction term in linear regression. The research questions which each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects and that regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903

  10. Nonlinear Structured Growth Mixture Models in M"plus" and OpenMx

    ERIC Educational Resources Information Center

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…

  11. The Potential of Growth Mixture Modelling

    ERIC Educational Resources Information Center

    Muthen, Bengt

    2006-01-01

    The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…

  12. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    PubMed

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
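    The Horvitz-Thompson logic described above can be sketched for the simplest case of a single (unmixed) zero-truncated Poisson; the population size and rate below are synthetic.

```python
import numpy as np
from scipy.optimize import brentq

# Capture counts: individuals with zero captures are never observed.
rng = np.random.default_rng(5)
N_true, lam_true = 1000, 1.5
counts = rng.poisson(lam_true, N_true)
observed = counts[counts > 0]
n = len(observed)

# MLE of lambda under zero truncation solves
#   mean(observed) = lambda / (1 - exp(-lambda))
xbar = observed.mean()
lam_hat = brentq(lambda l: l / (1 - np.exp(-l)) - xbar, 1e-6, 50.0)

# Horvitz-Thompson population size estimate: N = n / (1 - p0).
N_hat = n / (1 - np.exp(-lam_hat))
print(n, round(N_hat))
```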

  13. Development of PBPK Models for Gasoline in Adult and ...

    EPA Pesticide Factsheets

    Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of

  14. Mixture-mixture design for the fingerprint optimization of chromatographic mobile phases and extraction solutions for Camellia sinensis.

    PubMed

    Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S

    2007-07-09

    A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.

  15. Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures

    NASA Astrophysics Data System (ADS)

    Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.

    2017-10-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) a one-fuel mixture (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air) are developed; the models for multi-fuel mixtures are proposed for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier’s principle, and their constants have a clear physical meaning. The advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of the detonation wave in stoichiometric and fuel-rich methane-air mixtures and a stoichiometric methane-oxygen mixture. The dominant size of the detonation cell, determined in the calculations, is in good agreement with all known experimental data.

  16. Fitting a Mixture Item Response Theory Model to Personality Questionnaire Data: Characterizing Latent Classes and Investigating Possibilities for Improving Prediction

    ERIC Educational Resources Information Center

    Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk

    2008-01-01

    Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…

  17. Construction of moment-matching multinomial lattices using Vandermonde matrices and Gröbner bases

    NASA Astrophysics Data System (ADS)

    Lundengârd, Karl; Ogutu, Carolyne; Silvestrov, Sergei; Ni, Ying; Weke, Patrick

    2017-01-01

    In order to describe and analyze the quantitative behavior of stochastic processes, such as the process followed by a financial asset, various discretization methods are used. One such set of methods is lattice models, where a time interval is divided into equal time steps and the rate of change of the process is restricted to a particular set of values in each time step. The well-known binomial and trinomial models are the most commonly used in applications, although several kinds of higher-order models have also been examined. Here we examine various ways of designing higher-order lattice schemes with different node placements in order to guarantee moment matching with the process.
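    The moment-matching idea behind such lattices can be sketched for a trinomial step. The node placements and parameter values below are illustrative assumptions, not the paper's construction: the step probabilities solve a Vandermonde system so that the first moments of the lattice increment match those of a Brownian increment.

```python
import numpy as np

# Hypothetical drift, volatility, and time step (monthly).
mu, sigma, dt = 0.05, 0.2, 1.0 / 12.0
nodes = np.array([-sigma * np.sqrt(3 * dt), 0.0, sigma * np.sqrt(3 * dt)])

# Target raw moments E[X^0], E[X^1], E[X^2] of X ~ N(mu*dt, sigma^2*dt).
m = mu * dt
v = sigma**2 * dt
targets = np.array([1.0, m, v + m**2])

# Vandermonde system: sum_i p_i * nodes[i]^k = targets[k] for k = 0, 1, 2.
V = np.vander(nodes, 3, increasing=True).T
probs = np.linalg.solve(V, targets)
```

By construction the probabilities sum to one (the k = 0 row) and reproduce the target mean and second moment exactly; higher-order lattices extend the same system to more nodes and more moments.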

  18. A Study of Commuters’ Decision-Making When Delaying Departure for Work-Home Trips

    NASA Astrophysics Data System (ADS)

    Que, Fangjie; Wang, Wei

    2017-12-01

    Studies on the travel behaviors and patterns of residents are important to the arrangement of urban layouts and urban traffic planning. However, the characteristics of commuters’ decision-making behavior regarding departure time have not been fully explored. This paper focuses on commuters’ decision-making behavior regarding departure delay. Using 2013 travel survey data from Suzhou City, a nested logit (NL) model was built to represent the probabilities of individual choices. Parameter calibration was conducted to identify the significant factors influencing departure delay. Ultimately, the results indicated that the NL model performed better, with higher precision, than the traditional multinomial logit (MNL) model.
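    For reference, the baseline MNL choice probabilities are a softmax over systematic utilities; the nested logit additionally groups correlated alternatives. A minimal sketch with made-up utilities (the alternative labels are hypothetical, not the study's categories):

```python
import numpy as np

def mnl_probs(utilities):
    # Multinomial logit: softmax over systematic utilities.
    # Subtracting the max keeps the exponentials numerically stable.
    u = np.asarray(utilities, dtype=float)
    e = np.exp(u - u.max())
    return e / e.sum()

# e.g. depart on time / delay briefly / delay long (illustrative utilities)
p = mnl_probs([1.0, 0.5, -0.2])
```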

  19. Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis

    DTIC Science & Technology

    2005-07-25

    Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery. Low spatial resolution permits different materials to be present in the area covered by a single pixel. The linear mixture model expresses a pixel reflectance r as the linear mixture of the endmember signatures m1, m2, …, mP: r = Mα + n (1), where n is included to account for additive noise.
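    A minimal sketch of unmixing under the linear mixture model r = Mα + n, with a nonnegativity constraint on the abundances. The endmember matrix and abundance vector here are synthetic assumptions for illustration, not data from the report:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
M = rng.random((50, 3))             # 50 spectral bands, 3 endmember signatures (columns)
alpha_true = np.array([0.2, 0.5, 0.3])
r = M @ alpha_true + 0.001 * rng.standard_normal(50)  # pixel reflectance + small noise

# Nonnegative least squares recovers the abundances: min ||M a - r|| s.t. a >= 0.
alpha_hat, residual = nnls(M, r)
```

With low noise the constrained estimate recovers the true abundances closely; fully constrained variants would also enforce that the abundances sum to one.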

  20. Microstructure and hydrogen bonding in water-acetonitrile mixtures.

    PubMed

    Mountain, Raymond D

    2010-12-16

    The connection of hydrogen bonding between water and acetonitrile in determining the microheterogeneity of the liquid mixture is examined using NPT molecular dynamics simulations. Mixtures for six, rigid, three-site models for acetonitrile and one water model (SPC/E) were simulated to determine the amount of water-acetonitrile hydrogen bonding. Only one of the six acetonitrile models (TraPPE-UA) was able to reproduce both the liquid density and the experimental estimates of hydrogen bonding derived from Raman scattering of the CN stretch band or from NMR quadrupole relaxation measurements. A simple modification of the acetonitrile model parameters for the models that provided poor estimates produced hydrogen-bonding results consistent with experiments for two of the models. Of these, only one of the modified models also accurately determined the density of the mixtures. The self-diffusion coefficient of liquid acetonitrile provided a final winnowing of the modified model and the successful, unmodified model. The unmodified model is provisionally recommended for simulations of water-acetonitrile mixtures.

  1. General mixture item response models with different item response structures: Exposition with an application to Likert scales.

    PubMed

    Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong

    2018-01-10

    This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.

  2. Utility of an Abbreviated Dizziness Questionnaire to Differentiate between Causes of Vertigo and Guide Appropriate Referral: A Multicenter Prospective Blinded Study

    PubMed Central

    Roland, Lauren T.; Kallogjeri, Dorina; Sinks, Belinda C.; Rauch, Steven D.; Shepard, Neil T.; White, Judith A.; Goebel, Joel A.

    2015-01-01

    Objective: To test the performance of a focused dizziness questionnaire’s ability to discriminate between peripheral and non-peripheral causes of vertigo. Study Design: Prospective multi-center. Setting: Four academic centers with experienced balance specialists. Patients: New dizzy patients. Interventions: A 32-question survey was given to participants. Balance specialists were blinded and a diagnosis was established for all participating patients within 6 months. Main Outcomes: Multinomial logistic regression was used to evaluate questionnaire performance in predicting final diagnosis and differentiating between peripheral and non-peripheral vertigo. Univariate and multivariable stepwise logistic regression were used to identify questions as significant predictors of the ultimate diagnosis. C-index was used to evaluate performance and discriminative power of the multivariable models. Results: 437 patients participated in the study. Eight participants without confirmed diagnoses were excluded and 429 were included in the analysis. Multinomial regression revealed that the model had good overall predictive accuracy of 78.5% for the final diagnosis and 75.5% for differentiating between peripheral and non-peripheral vertigo. Univariate logistic regression identified significant predictors of three main categories of vertigo: peripheral, central and other. Predictors were entered into forward stepwise multivariable logistic regression. The discriminative power of the final models for peripheral, central and other causes were considered good as measured by c-indices of 0.75, 0.7 and 0.78, respectively. Conclusions: This multicenter study demonstrates a focused dizziness questionnaire can accurately predict diagnosis for patients with chronic/relapsing dizziness referred to outpatient clinics. Additionally, this survey has significant capability to differentiate peripheral from non-peripheral causes of vertigo and may, in the future, serve as a screening tool for specialty referral. 
Clinical utility of this questionnaire to guide specialty referral is discussed. PMID:26485598

  3. Social and Demographic Factors Associated with Morbidities in Young Children in Egypt: A Bayesian Geo-Additive Semi-Parametric Multinomial Model.

    PubMed

    Khatab, Khaled; Adegboye, Oyelola; Mohammed, Taofeeq Ibn

    2016-01-01

    Globally, the burden of mortality in children, especially in poor developing countries, is alarming and has precipitated concern and calls for concerted efforts in combating such health problems. Examples of diseases that contribute to this burden of mortality include diarrhoea, cough, fever, and the overlap between these illnesses, causing childhood morbidity and mortality. To gain insight into these health issues, we employed the 2008 Demographic and Health Survey Data of Egypt, which recorded details from 10,872 children under five. This data focused on the demographic and socio-economic characteristics of household members. We applied a Bayesian multinomial model to assess the area-specific spatial effects and risk factors of co-morbidity of fever, diarrhoea and cough for children under the age of five. The results showed that children under 20 months of age were more likely to have the three diseases (OR: 6.8; 95% CI: 4.6-10.2) than children between 20 and 40 months (OR: 2.14; 95% CI: 1.38-3.3). In multivariate Bayesian geo-additive models, the children of mothers who were over 20 years of age were more likely to have only cough (OR: 1.2; 95% CI: 0.9-1.5) and only fever (OR: 1.2; 95% CI: 0.91-1.51) compared with their counterparts. Spatial results showed that the North-eastern region of Egypt has a higher incidence than most of other regions. This study showed geographic patterns of Egyptian governorates in the combined prevalence of morbidity among Egyptian children. It is obvious that the Nile Delta, Upper Egypt, and south-eastern Egypt have high rates of diseases and are more affected. Therefore, more attention is needed in these areas.

  4. The Effect of Task Duration on Event-Based Prospective Memory: A Multinomial Modeling Approach

    PubMed Central

    Zhang, Hongxia; Tang, Weihai; Liu, Xiping

    2017-01-01

    Remembering to perform an action when a specific event occurs is referred to as Event-Based Prospective Memory (EBPM). This study investigated how EBPM performance is affected by task duration by having university students (n = 223) perform an EBPM task that was embedded within an ongoing computer-based color-matching task. For this experiment, we separated the overall task’s duration into the filler task duration and the ongoing task duration. The filler task duration is the length of time between the intention and the beginning of the ongoing task, and the ongoing task duration is the length of time between the beginning of the ongoing task and the appearance of the first Prospective Memory (PM) cue. The filler task duration and ongoing task duration were further divided into three levels: 3, 6, and 9 min. Two factors were then orthogonally manipulated between-subjects using a multinomial processing tree model to separate the effects of different task durations on the two EBPM components. A mediation model was then created to verify whether task duration influences EBPM via self-reminding or discrimination. The results reveal three points. (1) Lengthening the duration of ongoing tasks had a negative effect on EBPM performance while lengthening the duration of the filler task had no significant effect on it. (2) As the filler task was lengthened, both the prospective and retrospective components show a decreasing and then increasing trend. Also, when the ongoing task duration was lengthened, the prospective component decreased while the retrospective component significantly increased. (3) The mediating effect of discrimination between the task duration and EBPM performance was significant. We concluded that different task durations influence EBPM performance through different components with discrimination being the mediator between task duration and EBPM performance. PMID:29163277
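    The general form of a multinomial processing tree is that each response category's probability is a polynomial in the latent process parameters. A minimal two-parameter sketch in the spirit of separating prospective (P) and retrospective (R) components; the parameter values and category structure are illustrative assumptions, not the study's actual model:

```python
def mpt_category_probs(P, R, g=0.5):
    # Hit: the intention is retrieved (P) and the cue is recognized (R);
    # if recognition fails, a guess succeeds with probability g.
    hit = P * R + P * (1 - R) * g
    miss = 1.0 - hit
    return hit, miss

# Illustrative parameter values, not estimates from the study.
hit, miss = mpt_category_probs(P=0.8, R=0.7)
```

Fitting such a model means choosing P and R to maximize the multinomial likelihood of the observed category frequencies, which is how the prospective and retrospective components are separated.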

  5. Utility of an Abbreviated Dizziness Questionnaire to Differentiate Between Causes of Vertigo and Guide Appropriate Referral: A Multicenter Prospective Blinded Study.

    PubMed

    Roland, Lauren T; Kallogjeri, Dorina; Sinks, Belinda C; Rauch, Steven D; Shepard, Neil T; White, Judith A; Goebel, Joel A

    2015-12-01

    Test performance of a focused dizziness questionnaire's ability to discriminate between peripheral and nonperipheral causes of vertigo. Prospective multicenter. Four academic centers with experienced balance specialists. New dizzy patients. A 32-question survey was given to participants. Balance specialists were blinded and a diagnosis was established for all participating patients within 6 months. Multinomial logistic regression was used to evaluate questionnaire performance in predicting final diagnosis and differentiating between peripheral and nonperipheral vertigo. Univariate and multivariable stepwise logistic regression were used to identify questions as significant predictors of the ultimate diagnosis. C-index was used to evaluate performance and discriminative power of the multivariable models. In total, 437 patients participated in the study. Eight participants without confirmed diagnoses were excluded and 429 were included in the analysis. Multinomial regression revealed that the model had good overall predictive accuracy of 78.5% for the final diagnosis and 75.5% for differentiating between peripheral and nonperipheral vertigo. Univariate logistic regression identified significant predictors of three main categories of vertigo: peripheral, central, and other. Predictors were entered into forward stepwise multivariable logistic regression. The discriminative power of the final models for peripheral, central, and other causes was considered good as measured by c-indices of 0.75, 0.7, and 0.78, respectively. This multicenter study demonstrates a focused dizziness questionnaire can accurately predict diagnosis for patients with chronic/relapsing dizziness referred to outpatient clinics. Additionally, this survey has significant capability to differentiate peripheral from nonperipheral causes of vertigo and may, in the future, serve as a screening tool for specialty referral. Clinical utility of this questionnaire to guide specialty referral is discussed.

  6. Applications of the Simple Multi-Fluid Model to Correlations of the Vapor-Liquid Equilibrium of Refrigerant Mixtures Containing Carbon Dioxide

    NASA Astrophysics Data System (ADS)

    Akasaka, Ryo

    This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. Therefore, the model can be applied to mixtures for which experimental data are limited. Vapor-liquid equilibrium (VLE) of the following seven mixtures have been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2- tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations in calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for design and simulations of heat pumps and refrigeration systems using the mixtures as working fluids.

  7. Different Approaches to Covariate Inclusion in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Li, Tongyun; Jiao, Hong; Macready, George B.

    2016-01-01

    The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…

  8. A compressibility based model for predicting the tensile strength of directly compressed pharmaceutical powder mixtures.

    PubMed

    Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J

    2017-10-05

    A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors

    ERIC Educational Resources Information Center

    Guerra-Peña, Kiero; Steinley, Douglas

    2016-01-01

    Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…

  10. The outcome of tuberculosis treatment in subjects with chronic kidney disease in Brazil: a multinomial analysis*

    PubMed Central

    Reis-Santos, Barbara; Gomes, Teresa; Horta, Bernardo Lessa; Maciel, Ethel Leonor Noia

    2013-01-01

    OBJECTIVE: To analyze the association between clinical/epidemiological characteristics and outcomes of tuberculosis treatment in patients with concomitant tuberculosis and chronic kidney disease (CKD) in Brazil. METHODS: We used the Brazilian Ministry of Health National Case Registry Database to identify patients with tuberculosis and CKD, treated between 2007 and 2011. The tuberculosis treatment outcomes were compared with epidemiological and clinical characteristics of the subjects using a hierarchical multinomial logistic regression model, in which cure was the reference outcome. RESULTS: The prevalence of CKD among patients with tuberculosis was 0.4% (95% CI: 0.37-0.42%). The sample comprised 1,077 subjects. The outcomes were cure, in 58%; treatment abandonment, in 7%; death from tuberculosis, in 13%; and death from other causes, in 22%. The characteristics that differentiated the ORs for treatment abandonment or death were age; alcoholism; AIDS; previous noncompliance with treatment; transfer to another facility; suspected tuberculosis on chest X-ray; positive results in the first smear microscopy; and indications for/use of directly observed treatment, short-course strategy. CONCLUSIONS: Our data indicate the importance of sociodemographic characteristics for the diagnosis of tuberculosis in patients with CKD and underscore the need for tuberculosis control strategies targeting patients with chronic noncommunicable diseases, such as CKD. PMID:24310632

  11. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    PubMed

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.

  12. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    PubMed Central

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability. PMID:26941699

  13. Predictors of Latent Trajectory Classes of Dating Violence Victimization

    PubMed Central

    Brooks-Russell, Ashley; Foshee, Vangie; Ennett, Susan

    2014-01-01

    This study identified classes of developmental trajectories of physical dating violence victimization from grades 8 to 12 and examined theoretically-based risk factors that distinguished among trajectory classes. Data were from a multi-wave longitudinal study spanning 8th through 12th grade (n = 2,566; 51.9% female). Growth mixture models were used to identify trajectory classes of physical dating violence victimization separately for girls and boys. Logistic and multinomial logistic regressions were used to identify situational and target vulnerability factors associated with the trajectory classes. For girls, three trajectory classes were identified: a low/non-involved class; a moderate class where victimization increased slightly until the 10th grade and then decreased through the 12th grade; and a high class where victimization started at a higher level in the 8th grade, increased substantially until the 10th grade, and then decreased until the 12th grade. For males, two classes were identified: a low/non-involved class, and a victimized class where victimization increased slightly until the 9th grade, decreased until the 11th grade, and then increased again through the 12th grade. In bivariate analyses, almost all of the situational and target vulnerability risk factors distinguished the victimization classes from the non-involved classes. However, when all risk factors and control variables were in the model, alcohol use (a situational vulnerability) was the only factor that distinguished membership in the moderate trajectory class from the non-involved class for girls; anxiety and being victimized by peers (target vulnerability factors) were the factors that distinguished the high from the non-involved classes for the girls; and victimization by peers was the only factor distinguishing the victimized from the non-involved class for boys. 
These findings contribute to our understanding of the heterogeneity in physical dating violence victimization during adolescence and the malleable risk factors associated with each trajectory class for boys and girls. PMID:23212350

  14. Solubility modeling of refrigerant/lubricant mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michels, H.H.; Sienel, T.H.

    1996-12-31

    A general model for predicting the solubility properties of refrigerant/lubricant mixtures has been developed based on applicable theory for the excess Gibbs energy of non-ideal solutions. In our approach, flexible thermodynamic forms are chosen to describe the properties of both the gas and liquid phases of refrigerant/lubricant mixtures. After an extensive study of models for describing non-ideal liquid effects, the Wohl-suffix equations, which have been extensively utilized in the analysis of hydrocarbon mixtures, have been developed into a general form applicable to mixtures where one component is a POE lubricant. In the present study we have analyzed several POEs where structural and thermophysical property data were available. Data were also collected from several sources on the solubility of refrigerant/lubricant binary pairs. We have developed a computer code (NISC), based on the Wohl model, that predicts dew point or bubble point conditions over a wide range of composition and temperature. Our present analysis covers mixtures containing up to three refrigerant molecules and one lubricant. The present code can be used to analyze the properties of R-410a and R-407c in mixtures with a POE lubricant. Comparisons with other models, such as the Wilson or modified Wilson equations, indicate that the Wohl-suffix equations yield more reliable predictions for HFC/POE mixtures.

  15. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve.

    PubMed

    Li, Yi; Chen, Yuren

    2016-12-30

    To make driving assistance systems more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in the driver's vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements had an important influence on drivers' perception-response time. Compared with passive roadside safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time.
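
A multinomial model of this kind predicts a probability for each response-time category from a linear score per category. As a hedged sketch of that prediction step (the feature set and coefficient values below are hypothetical, not the paper's fitted model):

```python
import math

def softmax(z):
    """Convert a list of linear scores into probabilities."""
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def predict_class_probs(features, coef):
    """Multinomial-logit class probabilities.

    features: one feature vector (e.g. curve geometry, speed last second).
    coef: one row of coefficients per response-time category.
    """
    scores = [sum(w * f for w, f in zip(row, features)) for row in coef]
    return softmax(scores)
```

The predicted category is simply the one with the highest probability, which is what an assistance system would act on.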

  16. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    USGS Publications Warehouse

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
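
The binomial N-mixture likelihood at a single closed site marginalizes the latent abundance N out of the replicated counts. A minimal sketch of that marginal likelihood (truncating the Poisson sum at K is an assumption of the illustration, not part of the paper):

```python
import math

def pois_logpmf(n, lam):
    """Poisson log-pmf, computed on the log scale to avoid overflow."""
    return -lam + n * math.log(lam) - math.lgamma(n + 1)

def nmixture_site_loglik(counts, lam, p, K=200):
    """Marginal log-likelihood of replicated counts at one closed site.

    Latent abundance N ~ Poisson(lam); each replicate count
    y_t ~ Binomial(N, p). The infinite sum over N is truncated at K.
    """
    lik = 0.0
    for N in range(max(counts), K + 1):
        term = math.exp(pois_logpmf(N, lam))
        for y in counts:
            term *= math.comb(N, y) * p ** y * (1 - p) ** (N - y)
        lik += term
    return math.log(lik)
```

A useful analytic check: with a single count of zero, the marginal probability collapses to exp(-lam * p), so the truncated sum can be verified exactly. Pseudo-replication violates the closure assumption behind this likelihood, which is precisely the robustness question the paper studies.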

  17. Process Dissociation and Mixture Signal Detection Theory

    ERIC Educational Resources Information Center

    DeCarlo, Lawrence T.

    2008-01-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely…

  18. Investigating Approaches to Estimating Covariate Effects in Growth Mixture Modeling: A Simulation Study

    ERIC Educational Resources Information Center

    Li, Ming; Harring, Jeffrey R.

    2017-01-01

    Researchers continue to be interested in efficient, accurate methods of estimating coefficients of covariates in mixture modeling. Including covariates related to the latent class analysis not only may improve the ability of the mixture model to clearly differentiate between subjects but also makes interpretation of latent group membership more…

  19. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  20. Area variations in multiple morbidity using a life table methodology.

    PubMed

    Congdon, Peter

    Analysis of healthy life expectancy is typically based on a binary distinction between health and ill-health. By contrast, this paper considers spatial modelling of disease free life expectancy taking account of the number of chronic conditions. Thus the analysis is based on population sub-groups with no disease, those with one disease only, and those with two or more diseases (multiple morbidity). Data on health status is accordingly modelled using a multinomial likelihood. The analysis uses data for 258 small areas in north London, and shows wide differences in the disease burden related to multiple morbidity. Strong associations between area socioeconomic deprivation and multiple morbidity are demonstrated, as well as strong spatial clustering.

  1. Dietary and exercise change following acute cardiac syndrome onset: A latent class growth modelling analysis.

    PubMed

    Bennett, Paul; Gruszczynska, Ewa; Marke, Victoria

    2016-10-01

    The present study aim determine sub-group trajectories of change on measures of diet and exercise following acute coronary syndrome. 150 participants were assessed in hospital, 1 month and 6 months subsequently on measures including physical activity, diet, illness beliefs, coping and mood. Change trajectories were measured using latent class growth modelling. Multinomial logistic regression was used to predict class membership. These analyses revealed changes in exercise were confined to a sub-group of participants already reporting relatively high exercise levels; those eating less healthily evidenced modest dietary improvements. Coping, gender, depression and perceived control predicted group membership to a modest degree. © The Author(s) 2015.

  2. Dynamics and associations of microbial community types across the human body

    PubMed Central

    Ding, Tao; Schloss, Patrick D.

    2014-01-01

    A primary goal of the Human Microbiome Project (HMP) was to provide a reference collection of 16S rRNA gene sequences collected from sites across the human body that would allow microbiologists to better associate changes in the microbiome with changes in health 1. The HMP Consortium has reported the structure and function of the human microbiome in 300 healthy adults at 18 body sites from a single time point 2,3. Using additional data collected over the course of 12–18 months, we used Dirichlet multinomial mixture models 4 to partition the data into community types for each body site and made three important observations. First, there were strong associations between subjects' community types at several body sites and whether they had been breastfed as an infant, their gender, and their level of education. Second, although the specific taxonomic compositions of the oral and gut microbiomes were different, the community types observed at these sites were predictive of each other. Finally, over the course of the sampling period, the community types from sites within the oral cavity were the least stable, while those in the vagina and gut were the most stable. Our results demonstrate that even with the considerable intra- and inter-personal variation in the human microbiome, this variation can be partitioned into community types that are predictive of each other and are likely the result of life history characteristics. Understanding the diversity of community types, and the mechanisms that result in an individual having a particular type or changing types, will allow us to use community types to assess disease risk and to personalize therapies. PMID:24739969
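
Dirichlet multinomial mixtures score each sample's taxon counts against each candidate community type using the Dirichlet-multinomial likelihood, which tolerates the overdispersion typical of microbiome counts. A sketch of that component likelihood (the mixture fitting itself is omitted; the parameter values in the check are illustrative):

```python
import math

def dirichlet_multinomial_logpmf(x, alpha):
    """Log-pmf of count vector x under a Dirichlet-multinomial with parameter alpha.

    Compared to a plain multinomial, the Dirichlet prior on the category
    probabilities lets the variance exceed the multinomial variance.
    """
    n = sum(x)
    A = sum(alpha)
    lp = math.lgamma(n + 1) + math.lgamma(A) - math.lgamma(n + A)
    for xk, ak in zip(x, alpha):
        lp += math.lgamma(xk + ak) - math.lgamma(xk + 1) - math.lgamma(ak)
    return lp
```

In a fitted mixture, a sample is assigned to the community type whose alpha vector gives it the highest weighted likelihood.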

  3. Approximation of the breast height diameter distribution of two-cohort stands by mixture models I Parameter estimation

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2013-01-01

    This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures for estimating the parameters of mixture distributions and analysed a variety of mixture models for approximating empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...

  4. Detection of mastitis in dairy cattle by use of mixture models for repeated somatic cell scores: a Bayesian approach via Gibbs sampling.

    PubMed

    Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B

    2003-11-01

    The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
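
The core of such an analysis is a Gibbs sampler that alternates between sampling each record's udder-health status and the component parameters. The toy version below is far simpler than the paper's random-effects model: it assumes two normal components with known unit variances, a vague normal prior on each mean, and a Beta(1,1) prior on the mixing proportion (all assumptions of this sketch):

```python
import math
import random

def gibbs_two_normal(y, iters=2000, seed=1):
    """Toy Gibbs sampler for a two-component normal mixture.

    Known unit variances; N(0, 100) prior on each component mean;
    Beta(1, 1) prior on the mixing proportion. Returns post-burn-in
    draws of (mu0, mu1, mixing proportion).
    """
    rng = random.Random(seed)
    mu = [min(y), max(y)]  # crude but well-separated initial means
    pm = 0.5
    keep = []
    for it in range(iters):
        # sample component labels given current parameters
        z = []
        for yi in y:
            w1 = pm * math.exp(-0.5 * (yi - mu[0]) ** 2)
            w2 = (1 - pm) * math.exp(-0.5 * (yi - mu[1]) ** 2)
            z.append(0 if rng.random() < w1 / (w1 + w2) else 1)
        # sample each component mean from its conjugate normal posterior
        for k in (0, 1):
            grp = [yi for yi, zi in zip(y, z) if zi == k]
            var = 1.0 / (len(grp) + 0.01)  # posterior variance with N(0,100) prior
            mu[k] = rng.gauss(var * sum(grp), math.sqrt(var))
        # sample the mixing proportion from its conjugate beta posterior
        n1 = z.count(0)
        pm = rng.betavariate(1 + n1, 1 + len(y) - n1)
        if it >= iters // 2:
            keep.append((mu[0], mu[1], pm))
    return keep
```

The posterior probability that an observation belongs to the "diseased" component (the putative-mastitis probability in the paper) is just the average of its sampled labels across iterations.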

  5. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    PubMed

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. 
We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
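
The core of a two-cluster mixture regression can be sketched with a short EM loop: the E-step computes each observation's responsibility under each cluster-specific regression line, and the M-step refits each line by weighted least squares. This toy version fixes the residual standard deviation and the number of clusters, both of which a real analysis would estimate and select (e.g. via AIC/BIC, as evaluated in the paper):

```python
import math

def em_mixture_regression(x, y, n_iter=200):
    """Toy EM for a two-cluster mixture of simple linear regressions.

    Assumes a shared, fixed residual sd of 1 (an illustrative simplification).
    Returns ([(intercept, slope), (intercept, slope)], mixing weights).
    """
    sigma = 1.0
    params = [(min(y), 0.0), (max(y), 0.0)]  # crude initial lines
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each cluster for each point
        resp = []
        for xi, yi in zip(x, y):
            dens = [wk * math.exp(-0.5 * ((yi - a - b * xi) / sigma) ** 2)
                    for wk, (a, b) in zip(w, params)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: weighted least squares per cluster
        new_params = []
        for k in range(2):
            rk = [r[k] for r in resp]
            sw = sum(rk)
            mx = sum(r * xi for r, xi in zip(rk, x)) / sw
            my = sum(r * yi for r, yi in zip(rk, y)) / sw
            sxx = sum(r * (xi - mx) ** 2 for r, xi in zip(rk, x))
            sxy = sum(r * (xi - mx) * (yi - my) for r, xi, yi in zip(rk, x, y))
            b = sxy / sxx if sxx > 0 else 0.0
            new_params.append((my - b * mx, b))
        params = new_params
        w = [sum(r[k] for r in resp) / len(x) for k in range(2)]
    return params, w
```

Each cluster's fitted line corresponds to one life-history tactic; the responsibilities give the probability that an individual follows each tactic.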

  6. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach.

    Treesearch

    Rafal Podlaski; Francis Roesch

    2014-01-01

    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...

  7. A general mixture model and its application to coastal sandbar migration simulation

    NASA Astrophysics Data System (ADS)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for general description of sediment laden flows is developed and then applied to coastal sandbar migration simulation. Firstly the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment laden flows, which is derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on SMAC scheme is utilized for numerical solutions. The model is validated by suspended sediment motion in steady open channel flows, both in equilibrium and non-equilibrium state, and in oscillatory flows as well. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. The VOF method for the description of water-air free surface and topography reaction model is coupled. The bed load transport rate and suspended load entrainment rate are all decided by the sea bed shear stress, which is obtained from the boundary layer resolved mixture model. The simulation results indicated that, under small amplitude regular waves, erosion occurred on the sandbar slope against the wave propagation direction, while deposition dominated on the slope towards wave propagation, indicating an onshore migration tendency. 
The computational results also show that suspended load makes a substantial contribution to topography change in the surf zone, a contribution that has often been neglected in previous research.

  8. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio, E-mail: drpqam@cid.csic.es

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are exposed daily to mixtures of thyroid gland function disruptors (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised serious concern about potential additive or synergistic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals that impair thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five of the six tested mixtures. The exception was the mixture of MMI (a TPO inhibitor) and KClO4 (an NIS inhibitor) dosed at a fixed ratio of EC10, which yielded similar CA and RA predictions, making a conclusive result difficult. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergistic or additive effects of mixtures of chemicals on thyroid function. • Zebrafish as an alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to better predict the effect of mixtures of goitrogens.

  9. Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong

    2018-03-01

    The traffic safety research has developed spatiotemporal models to explore the variations in the spatial pattern of crash risk over time. Many studies observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research for the comparison of different temporal treatments and their interaction with spatial component. This study developed four spatiotemporal models with varying complexity due to the different temporal treatments such as (I) linear time trend; (II) quadratic time trend; (III) Autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction which allows greater flexibility compared to the traditional linear space-time interaction. The mixture component allows the accommodation of global space-time interaction as well as the departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of mixture models based on the diverse criteria pertaining to goodness-of-fit, cross-validation and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification which was evidently more complex due to the addition of information borrowed from neighboring years, but this addition of parameters allowed significant advantage at posterior deviance which subsequently benefited overall fit to crash data. The Base models were also developed to study the comparison between the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models due to the advantages of much lower deviance. 
For cross-validation comparison of predictive accuracy, linear time trend model was adjudged the best as it recorded the highest value of log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data for model development. Under each criterion, observed crash counts were compared with three types of data containing Bayesian estimated, normal predicted, and model replicated ones. The linear model again performed the best in most scenarios except one case of using model replicated data and two cases involving prediction without including random effects. These phenomena indicated the mediocre performance of linear trend when random effects were excluded for evaluation. This might be due to the flexible mixture space-time interaction which can efficiently absorb the residual variability escaping from the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models as the mixture ones generated more precise estimated crash counts across all four models, suggesting that the advantages associated with mixture component at model fit were transferable to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of random effect models which validates the importance of incorporating the correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    PubMed

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  11. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
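
The survival function of an exponential mixture is just a weighted sum of exponential tails, which is what lets the model capture heterogeneity: the mixture is heavier-tailed than a single exponential with the same mean. A minimal sketch with illustrative weights and rates (not the fitted values from the query data set):

```python
import math

def exp_mixture_survival(t, weights, rates):
    """Survival function of a mixture of exponentials:
    S(t) = sum_i w_i * exp(-rate_i * t)."""
    return sum(w * math.exp(-r * t) for w, r in zip(weights, rates))

def exp_mixture_mean(weights, rates):
    """Mean survival time of the mixture: sum_i w_i / rate_i."""
    return sum(w / r for w, r in zip(weights, rates))
```

Intuitively, the fast-decaying component dominates early survival and the slow component dominates the tail, matching populations that mix short-lived and persistent behaviours.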

  12. Structure-reactivity modeling using mixture-based representation of chemical reactions.

    PubMed

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

    We describe a novel approach of representing a reaction as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating product and reactant descriptors or taking the difference between descriptors of products and reactants. This reaction representation does not require explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.

  13. An NCME Instructional Module on Latent DIF Analysis Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol

    2016-01-01

    The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…

  14. A Systematic Investigation of Within-Subject and Between-Subject Covariance Structures in Growth Mixture Models

    ERIC Educational Resources Information Center

    Liu, Junhui

    2012-01-01

    The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…

  15. Effects of three veterinary antibiotics and their binary mixtures on two green alga species.

    PubMed

    Carusso, S; Juárez, A B; Moretton, J; Magdaleno, A

    2018-03-01

    The individual and combined toxicities of chlortetracycline (CTC), oxytetracycline (OTC) and enrofloxacin (ENF) have been examined in two green algae representative of the freshwater environment, the international standard strain Pseudokirchneriella subcapitata and the native strain Ankistrodesmus fusiformis. The toxicities of the three antibiotics and their mixtures were similar in both strains, although low concentrations of ENF and CTC + ENF were more toxic in A. fusiformis than in the standard strain. The toxicological interactions of binary mixtures were predicted using the two classical models of additivity, Concentration Addition (CA) and Independent Action (IA), and compared to the experimentally determined toxicities over a range of concentrations between 0.1 and 10 mg L⁻¹. The CA model predicted the inhibition of algal growth for the three mixtures in P. subcapitata, and for the CTC + OTC and CTC + ENF mixtures in A. fusiformis. However, this model underestimated the experimental results obtained for the OTC + ENF mixture in A. fusiformis. The IA model did not predict the experimental toxicological effects of the three mixtures in either strain. The sum of the toxic units (TU) for the mixtures was calculated. According to these values, the binary mixtures CTC + ENF and OTC + ENF showed an additive effect, and the CTC + OTC mixture showed antagonism in P. subcapitata, whereas all three mixtures showed synergistic effects in A. fusiformis. Although A. fusiformis was isolated from a polluted river, it showed sensitivity similar to that of P. subcapitata when exposed to binary mixtures of antibiotics. Copyright © 2017 Elsevier Ltd. All rights reserved.
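
Both additivity predictions can be computed directly from the single-compound concentration-response curves: IA multiplies the probabilities of non-effect, while CA solves for the effect level at which the toxic units sum to one. A sketch assuming two-parameter log-logistic curves (a common choice for algal growth inhibition; the paper's fitted curve form is not specified here):

```python
def effect(c, ec50, h):
    """Two-parameter log-logistic concentration-response curve (0..1)."""
    return 1.0 / (1.0 + (ec50 / c) ** h)

def ia_effect(concs, ec50s, hs):
    """Independent Action: E_mix = 1 - prod_i (1 - E_i(c_i))."""
    prod = 1.0
    for c, e50, h in zip(concs, ec50s, hs):
        prod *= 1.0 - effect(c, e50, h)
    return 1.0 - prod

def ca_effect(concs, ec50s, hs, tol=1e-10):
    """Concentration Addition: solve sum_i c_i / EC_{x,i} = 1 for the
    effect level x by bisection (EC_x inverts the log-logistic curve)."""
    def toxic_units(x):
        return sum(c / (e50 * (x / (1.0 - x)) ** (1.0 / h))
                   for c, e50, h in zip(concs, ec50s, hs))
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if toxic_units(mid) > 1.0:  # mixture exceeds this effect level
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, two identical chemicals each dosed at half their EC50 give exactly a 50% effect under CA; comparing such predictions with observed inhibition is what classifies a mixture as additive, antagonistic, or synergistic.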

  16. Contraceptive Sterilization: Introducing A Couple Perspective to Examine Sociodemographic Differences in Use.

    PubMed

    Eeckhaut, Mieke C W

    2017-09-01

    Most studies of contraceptive use have relied solely on the woman's perspective, but because men's attitudes and preferences are also important, analytic approaches based on couples should also be explored. Data from the 2006-2010 and 2011-2013 rounds of the National Survey of Family Growth yielded a sample of 4,591 men and women who were married or cohabiting with an opposite-sex partner and who had completed their intended childbearing. Respondents' reports of both their own and their partners' characteristics and behaviors were employed in two sets of analyses examining educational and racial and ethnic differences in contraceptive use: an individualistic approach (using multinomial logistic regression) and a couple approach (using multinomial logistic diagonal reference models). In the full model using the individualistic approach, respondents with less than a high school education were less likely than those with at least a college degree to rely on male sterilization (odds ratios, 0.1-0.2) or a reversible method (0.4-0.5), as opposed to female sterilization. Parallel analyses limited to couples in which partners had the same educational levels (i.e., educationally homogamous couples) showed an even greater difference between those with the least and those with the most schooling (0.03 for male sterilization and 0.2 for a reversible method). When race and ethnicity, which had a much higher level of homogamy, were examined, the approaches yielded more similar results. Research on contraceptive use can benefit from a couple approach, particularly when focusing on partners' characteristics for which homogamy is relatively low. Copyright © 2017 by the Guttmacher Institute.

  17. Numeric score-based conditional and overall change-in-status indices for ordered categorical data.

    PubMed

    Lyles, Robert H; Kupper, Lawrence L; Barnhart, Huiman X; Martin, Sandra L

    2015-11-30

    Planned interventions and/or natural conditions often effect change on an ordinal categorical outcome (e.g., symptom severity). In such scenarios, it is sometimes desirable to assign a priori scores to observed changes in status, typically giving higher weight to changes of greater magnitude. We define change indices for such data based upon a multinomial model for each row of a c × c table, where the rows represent the baseline status categories. We distinguish an index designed to assess conditional changes within each baseline category from two others designed to capture overall change. One of these overall indices measures expected change across a target population. The other is scaled to capture the proportion of total possible change in the direction indicated by the data, so that it ranges from -1 (when all subjects finish in the least favorable category) to +1 (when all finish in the most favorable category). The conditional assessment of change can be informative regardless of how subjects are sampled into the baseline categories. In contrast, the overall indices become relevant when subjects are randomly sampled at baseline from the target population of interest, or when the investigator is able to make certain assumptions about the baseline status distribution in that population. We use a Dirichlet-multinomial model to obtain Bayesian credible intervals for the conditional change index that exhibit favorable small-sample frequentist properties. Simulation studies illustrate the methods, and we apply them to examples involving changes in ordinal responses for studies of sleep deprivation and activities of daily living. Copyright © 2015 John Wiley & Sons, Ltd.
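
For a single baseline row, the Bayesian machinery is conjugate: multinomial counts with a Dirichlet prior give a Dirichlet posterior, from which a credible interval for the row's expected change score follows by simulation. A sketch of that idea (the score values and the uniform Dirichlet(1, ..., 1) prior here are illustrative choices, not necessarily the paper's):

```python
import random

def dirichlet_sample(rng, alphas):
    """One draw from a Dirichlet distribution via normalized gammas."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

def conditional_change_ci(row_counts, scores, prior=1.0, draws=5000, seed=7):
    """95% credible interval for a baseline row's expected change score.

    row_counts: observed follow-up counts for one baseline category.
    scores: a priori change scores per follow-up category.
    The posterior over the row's cell probabilities is Dirichlet(counts + prior).
    """
    rng = random.Random(seed)
    alphas = [c + prior for c in row_counts]
    vals = []
    for _ in range(draws):
        p = dirichlet_sample(rng, alphas)
        vals.append(sum(s * pj for s, pj in zip(scores, p)))
    vals.sort()
    return vals[int(0.025 * draws)], vals[int(0.975 * draws)]
```

With scores scaled to [-1, 1], a row whose subjects all finish in the most favorable category yields an interval near +1, mirroring the overall index's interpretation.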

  18. Women's health in a rural community in Kerala, India: do caste and socioeconomic position matter?

    PubMed Central

    Mohindra, K S; Haddad, Slim; Narayana, D

    2006-01-01

    Objectives To examine the social patterning of women's self‐reported health status in India and the validity of the two hypotheses: (1) low caste and lower socioeconomic position is associated with worse reported health status, and (2) associations between socioeconomic position and reported health status vary across castes. Design Cross‐sectional household survey, age‐adjusted percentages and odds ratios, and multilevel multinomial logistic regression models were used for analysis. Setting A panchayat (territorial decentralised unit) in Kerala, India, in 2003. Participants 4196 non‐elderly women. Outcome measures Self‐perceived health status and reported limitations in activities in daily living. Results Women from lower castes (scheduled castes/scheduled tribes (SC/ST) and other backward castes (OBC) reported a higher prevalence of poor health than women from forward castes. Socioeconomic inequalities were observed in health regardless of the indicators, education, women's employment status or household landholdings. The multilevel multinomial models indicate that the associations between socioeconomic indicators and health vary across caste. Among SC/ST and OBC women, the influence of socioeconomic variables led to a “magnifying” effect, whereas among forward caste women, a “buffering” effect was found. Among lower caste women, the associations between socioeconomic factors and self‐assessed health are graded; the associations are strongest when comparing the lowest and highest ratings of health. Conclusions Even in a relatively egalitarian state in India, there are caste and socioeconomic inequalities in women's health. Implementing interventions that concomitantly deal with caste and socioeconomic disparities will likely produce more equitable results than targeting either type of inequality in isolation. PMID:17108296

  19. Quality of life of patients from rural and urban areas in Poland with head and neck cancer treated with radiotherapy. A study of the influence of selected socio-demographic factors.

    PubMed

    Depta, Adam; Jewczak, Maciej; Skura-Madziała, Anna

    2017-10-01

The quality of life (QoL) experienced by cancer patients depends both on their state of health and on sociodemographic factors. Tumours in the head and neck region have a particularly adverse effect on patients psychologically and on their social functioning. The study involved 121 patients receiving radiotherapy treatment for head and neck cancers: 72 urban and 49 rural residents. QoL was assessed using the EORTC QLQ-C30 and QLQ-H&N35 questionnaires. The data were analysed using two statistical methods: a χ² test for independence and a multinomial logit model. The evaluation of QoL showed a strong, statistically significant, positive dependence on state of health, and a weak dependence on sociodemographic factors and place of residence. Evaluations of financial situation and living conditions were similar for rural and urban residents. Patients from urban areas had the greatest anxiety about deterioration of their state of health, while rural respondents were more often anxious about a worsening of their financial situation and expressed a fear of loneliness. Studying the QoL of patients with head and neck cancer provides information on the areas in which the disease inhibits their lives, and the extent to which it does so. It indicates how treatment and care methods in the healthcare system might be adapted to improve the QoL of such patients. The multinomial logit model identifies the factors determining patients' health assessments and estimates the probable values of those assessments.

  20. Hospital financial position and the adoption of electronic health records.

    PubMed

    Ginn, Gregory O; Shen, Jay J; Moseley, Charles B

    2011-01-01

The objective of this study was to examine the relationship between financial position and adoption of electronic health records (EHRs) in 2442 acute care hospitals. The study was cross-sectional and utilized a general linear mixed model with the multinomial distribution specification for data analysis. We verified the results by also running a multinomial logistic regression model. To measure our variables, we used data from (1) the 2007 American Hospital Association (AHA) electronic health record implementation survey, (2) the 2006 Centers for Medicare and Medicaid Cost Reports, and (3) the 2006 AHA Annual Survey containing organizational and operational data. Our dependent variable was an ordinal variable with three levels used to indicate the extent of EHR adoption by hospitals. Our independent variables were five financial ratios: (1) net days revenue in accounts receivable, (2) total margin, (3) the equity multiplier, (4) total asset turnover, and (5) the ratio of total payroll to total expenses. For control variables, we used (1) bed size, (2) ownership type, (3) teaching affiliation, (4) system membership, (5) network participation, (6) full-time equivalent nurses per adjusted average daily census, (7) average daily census per staffed bed, (8) Medicare patients percentage, (9) Medicaid patients percentage, (10) capitation-based reimbursement, and (11) nonconcentrated market. Only liquidity was significant and positively associated with EHR adoption. Asset turnover ratio was significant but, unexpectedly, was negatively associated with EHR adoption. However, many control variables, most notably bed size, showed significant positive associations with EHR adoption. Thus, it seems that hospitals adopt EHRs as a strategic move to better align themselves with their environment.

  1. Vitamin D status by sociodemographic factors and body mass index in Mexican women at reproductive age.

    PubMed

    Contreras-Manzano, Alejandra; Villalpando, Salvador; Robledo-Pérez, Ricardo

    2017-01-01

To describe the prevalence of Vitamin D deficiency (VDD) and insufficiency (VDI), and the main dietary sources of vitamin D (VD) in a probabilistic sample of Mexican women at reproductive age participating in Ensanut 2012, stratified by sociodemographic factors and body mass index (BMI) categories. Serum concentrations of 25-hydroxyvitamin-D (25-OH-D) were determined using an ELISA technique in 4162 women participants of Ensanut 2012 and classified as VDD, VDI or optimal VD status. Sociodemographic, anthropometric and dietary data were also collected. The association between VDD/VDI and sociodemographic and anthropometric factors was assessed, adjusting for potential confounders, by estimating a multinomial logistic regression model. The prevalence of VDD was 36.8%, and that of VDI was 49.8%. The mean dietary intake of VD was 2.56 μg/d. The relative risk ratio (RRR) of VDD or VDI was calculated from the multinomial logistic regression model fitted to the 4162 women. The RRR of VDD or VDI were significantly higher in women with overweight (RRR: 1.85 and 1.44, p<0.05), obesity (RRR: 2.94 and 1.93, p<0.001), urban dwelling (RRR: 1.68 and 1.31, p<0.06), belonging to the 3rd tertile of income (RRR: 5.32 and 2.22, p<0.001), or of indigenous ethnicity (RRR: 2.86 and 1.70, p<0.05), respectively. The high prevalence of VDD/VDI in Mexican women calls for stronger actions from the health authorities, strengthening the current food supplementation policy and recommending a reasonable amount of sun exposure.
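
    The RRRs reported above have a simple mechanical interpretation in a multinomial logit model: the exponentiated coefficient is the multiplicative change in the probability of an outcome category relative to the baseline category. The intercepts below are invented for illustration; the obesity effects are set to mirror the reported point estimates of 1.93 and 2.94.

```python
import numpy as np

# Invented intercepts and obesity effects for the three VD categories
# (baseline outcome: optimal status); log-RRRs mirror the reported estimates
alpha = np.array([0.0, 0.2, 0.5])                    # optimal, VDI, VDD
beta = np.array([0.0, np.log(1.93), np.log(2.94)])

def probs(obese):
    # Multinomial-logit category probabilities (softmax of linear predictors)
    e = np.exp(alpha + beta * obese)
    return e / e.sum()

p0, p1 = probs(0), probs(1)
# The RRR is the ratio of relative probabilities, and recovers exp(beta)
rrr_vdd = (p1[2] / p1[0]) / (p0[2] / p0[0])
```

    Whatever the intercepts, the ratio of relative probabilities collapses back to exp(beta), which is why RRRs can be read directly off the fitted coefficients.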

  2. General Blending Models for Data From Mixture Experiments

    PubMed Central

    Brown, L.; Donev, A. N.; Bissett, A. C.

    2015-01-01

    We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812

  3. Recognition and source memory as multivariate decision processes.

    PubMed

    Banks, W P

    2000-07-01

Recognition memory, source memory, and exclusion performance are three important domains of study in memory, each with its own findings, its specific theoretical developments, and its separate research literature. It is proposed here that results from all three domains can be treated with a single analytic model. This article shows how to generate a comprehensive memory representation based on multidimensional signal detection theory and how to make predictions for each of these paradigms using decision axes drawn through the space. The detection model is simpler than the comparable multinomial model, it is more easily generalizable, and it does not make threshold assumptions. An experiment using the same memory set for all three tasks demonstrates the analysis and tests the model. The results show that some seemingly complex relations between the paradigms derive from an underlying simplicity of structure.
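
    The article builds a multidimensional detection space; the familiar one-dimensional, equal-variance special case illustrates the basic machinery. The hit and false-alarm rates below are invented for illustration.

```python
from scipy.stats import norm

# Equal-variance signal detection for old/new recognition:
# d' is the separation of the old/new strength distributions along
# the decision axis; c is the placement of the response criterion
hit_rate, fa_rate = 0.82, 0.28
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
```

    In the multidimensional version, source and exclusion judgments correspond to additional decision axes drawn through the same space rather than separate models.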

  4. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
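
    The transform-then-cluster idea can be sketched in a few lines. This is a simplified stand-in, not the authors' algorithm: the data are invented, the Box-Cox parameter is estimated by maximum likelihood via SciPy, and Gaussian components replace the paper's heavier-tailed t components to keep the EM short.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Skewed, positive-valued data from two clusters (illustrative only)
x = np.concatenate([rng.lognormal(0.0, 0.4, 300),
                    rng.lognormal(1.5, 0.3, 300)])

# Box-Cox transformation (lambda estimated by maximum likelihood)
# reduces skewness before the mixture is fitted
y, lam = stats.boxcox(x)

# Two-component, one-dimensional EM on the transformed data
w = np.array([0.5, 0.5])
mu = np.quantile(y, [0.25, 0.75])
sd = np.array([y.std(), y.std()])
for _ in range(200):
    dens = np.stack([w[k] * stats.norm.pdf(y, mu[k], sd[k]) for k in range(2)])
    r = dens / dens.sum(axis=0)          # E-step: responsibilities
    nk = r.sum(axis=1)                   # M-step: weights, means, sds
    w = nk / nk.sum()
    mu = (r * y).sum(axis=1) / nk
    sd = np.sqrt((r * (y - mu[:, None]) ** 2).sum(axis=1) / nk)
```

    The paper's contribution is to estimate the transformation jointly with t components inside the EM, so that skewness and heavy tails are handled in one model rather than in a preprocessing step as sketched here.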

  5. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.

  6. Design and analysis of simple choice surveys for natural resource management

    USGS Publications Warehouse

    Fieberg, John; Cornicelli, Louis; Fulton, David C.; Grund, Marrett D.

    2010-01-01

    We used a simple yet powerful method for judging public support for management actions from randomized surveys. We asked respondents to rank choices (representing management regulations under consideration) according to their preference, and we then used discrete choice models to estimate probability of choosing among options (conditional on the set of options presented to respondents). Because choices may share similar unmodeled characteristics, the multinomial logit model, commonly applied to discrete choice data, may not be appropriate. We introduced the nested logit model, which offers a simple approach for incorporating correlation among choices. This forced choice survey approach provides a useful method of gathering public input; it is relatively easy to apply in practice, and the data are likely to be more informative than asking constituents to rate attractiveness of each option separately.
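
    The contrast between the two models can be shown directly. The utilities and dissimilarity parameters below are invented: two options share a nest (e.g., two similar regulation packages), and the nested logit's inclusive-value term shifts probability toward the dissimilar third option relative to the plain multinomial logit.

```python
import numpy as np

def mnl_probs(v):
    # Standard multinomial logit: softmax of utilities (options independent)
    e = np.exp(v - v.max())
    return e / e.sum()

def nested_logit_probs(v, nests, lam):
    # Nested logit: options within a nest share unmodeled similarity;
    # lam[m] in (0, 1] is the dissimilarity parameter of nest m
    p = np.zeros_like(v)
    iv = np.array([lam[m] * np.log(np.exp(v[idx] / lam[m]).sum())
                   for m, idx in enumerate(nests)])
    pn = np.exp(iv - iv.max())
    pn /= pn.sum()                        # probability of each nest
    for m, idx in enumerate(nests):
        within = np.exp(v[idx] / lam[m])  # probability within the nest
        p[idx] = pn[m] * within / within.sum()
    return p

v = np.array([1.0, 1.0, 0.2])            # two similar options, one distinct
nests = [np.array([0, 1]), np.array([2])]
p_mnl = mnl_probs(v)
p_nest = nested_logit_probs(v, nests, lam=[0.5, 1.0])
```

    With lam = 1 for every nest, the nested model reduces to the multinomial logit; lam < 1 encodes the within-nest correlation that the plain model ignores.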

  7. Mixed-up trees: the structure of phylogenetic mixtures.

    PubMed

    Matsen, Frederick A; Mossel, Elchanan; Steel, Mike

    2008-05-01

    In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how "common" non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning "mixed branch repulsion" on trees larger than quartet trees under the CFN model.

  8. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    PubMed Central

    Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space, enabling prediction of the entire response surface. A new strategy for adding multiple interior points for symmetric mixture models is also proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574

  9. Added sugars and periodontal disease in young adults: an analysis of NHANES III data.

    PubMed

    Lula, Estevam C O; Ribeiro, Cecilia C C; Hugo, Fernando N; Alves, Cláudia M C; Silva, Antônio A M

    2014-10-01

    Added sugar consumption seems to trigger a hyperinflammatory state and may result in visceral adiposity, dyslipidemia, and insulin resistance. These conditions are risk factors for periodontal disease. However, the role of sugar intake in the cause of periodontal disease has not been adequately studied. We evaluated the association between the frequency of added sugar consumption and periodontal disease in young adults by using NHANES III data. Data from 2437 young adults (aged 18-25 y) who participated in NHANES III (1988-1994) were analyzed. We estimated the frequency of added sugar consumption by using food-frequency questionnaire responses. We considered periodontal disease to be present in teeth with bleeding on probing and a probing depth ≥3 mm at one or more sites. We evaluated this outcome as a discrete variable in Poisson regression models and as a categorical variable in multinomial logistic regression models adjusted for sex, age, race-ethnicity, education, poverty-income ratio, tobacco exposure, previous diagnosis of diabetes, and body mass index. A high consumption of added sugars was associated with a greater prevalence of periodontal disease in middle [prevalence ratio (PR): 1.39; 95% CI: 1.02, 1.89] and upper (PR: 1.42; 95% CI: 1.08, 1.85) tertiles of consumption in the adjusted Poisson regression model. The upper tertile of added sugar intake was associated with periodontal disease in ≥2 teeth (PR: 1.73; 95% CI: 1.19, 2.52) but not with periodontal disease in only one tooth (PR: 0.85; 95% CI: 0.54, 1.34) in the adjusted multinomial logistic regression model. A high frequency of consumption of added sugars is associated with periodontal disease, independent of traditional risk factors, suggesting that this consumption pattern may contribute to the systemic inflammation observed in periodontal disease and associated noncommunicable diseases. © 2014 American Society for Nutrition.
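
    Prevalence ratios like those reported above come from a Poisson regression on a log link. The sketch below is illustrative, not the authors' analysis: it simulates counts with a true prevalence ratio near 1.4 and recovers it by iteratively reweighted least squares (IRLS), the standard GLM fitting algorithm, written out in NumPy.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
high_sugar = rng.integers(0, 2, n)
# Simulated counts of affected teeth with a true prevalence ratio of
# about 1.4 for the high added-sugar group (illustrative data only)
y = rng.poisson(np.exp(-1.0 + np.log(1.4) * high_sugar))

# Poisson regression (log link) fitted by IRLS
X = np.column_stack([np.ones(n), high_sugar])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)                       # current fitted means
    z = X @ beta + (y - mu) / mu                # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

pr = np.exp(beta[1])   # estimated prevalence ratio
```

    Exponentiating the slope gives the prevalence ratio directly, which is why Poisson (rather than logistic) models are preferred when the outcome is common and a ratio of prevalences is the target estimand.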

  10. Investigation on occupant injury severity in rear-end crashes involving trucks as the front vehicle in Beijing area, China.

    PubMed

    Yuan, Quan; Lu, Meng; Theofilatos, Athanasios; Li, Yi-Bing

    2017-02-01

Rear-end crashes account for a large portion of total crashes in China, leading to many casualties and much property damage, especially when commercial vehicles are involved. This paper aims to investigate the critical factors for occupant injury severity in the specific rear-end crash type involving trucks as the front vehicle (FV). We investigated crashes that occurred from 2011 to 2013 in the Beijing area, China, and selected 100 qualified cases, i.e., rear-end crashes involving trucks as the FV. The crash data were supplemented with interviews with police officers and vehicle inspections. A binary logistic regression model was used to relate occupant injury severity to the corresponding affecting factors. Moreover, a multinomial logistic model was used to predict the likelihood of fatal injury, severe injury, or no injury in a rear-end crash. The results provided insights into the characteristics of driver, vehicle and environment, and their influence on the likelihood of a rear-end crash. The binary logistic model showed that drivers' age, weight difference between vehicles, visibility condition and lane number of the road significantly increased the likelihood of severe injury in a rear-end crash. The multinomial logistic model and the average direct pseudo-elasticity of the variables showed that night time, weekdays, drivers from other provinces and passenger vehicles as rear vehicles significantly increased the likelihood of rear drivers being fatally injured. All the abovementioned significant factors should be improved, such as the conditions of lighting and the layout of lanes on roads. Two of the most common driver factors are drivers' age and drivers' original residence: young drivers and drivers from outside Beijing have higher injury severity. It is therefore imperative to enhance safety education and management for young drivers who drive heavy-duty trucks from other cities to Beijing on weekdays.
Copyright © 2016 Daping Hospital and the Research Institute of Surgery of the Third Military Medical University. Production and hosting by Elsevier B.V. All rights reserved.

  11. New approach in direct-simulation of gas mixtures

    NASA Technical Reports Server (NTRS)

    Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren

    1991-01-01

    Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.

  12. Investigation of Dalton and Amagat's laws for gas mixtures with shock propagation

    NASA Astrophysics Data System (ADS)

    Wayne, Patrick; Trueba Monje, Ignacio; Yoo, Jason H.; Truman, C. Randall; Vorobieff, Peter

    2016-11-01

Two common models describing gas mixtures are Dalton's Law and Amagat's Law (also known as the laws of partial pressures and partial volumes, respectively). Our work is focused on determining the suitability of these models to prediction of effects of shock propagation through gas mixtures. Experiments are conducted at the Shock Tube Facility at the University of New Mexico (UNM). To validate experimental data, possible sources of uncertainty associated with experimental setup are identified and analyzed. The gaseous mixture of interest consists of a prescribed combination of disparate gases - helium and sulfur hexafluoride (SF6). The equations of state (EOS) considered are the ideal gas EOS for helium, and a virial EOS for SF6. The values for the properties provided by these EOS are then used to model shock propagation through the mixture in accordance with Dalton's and Amagat's laws. Results of the modeling are compared with experiment to determine which law produces better agreement for the mixture. This work is funded by NNSA Grant DE-NA0002913.
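
    The distinction between the two laws only matters once a non-ideal EOS enters (for ideal gases they coincide). The sketch below is illustrative, not the authors' virial-EOS calculation: it uses approximate textbook van der Waals constants for He and SF6 (assumed values), computes the Dalton mixture pressure by letting each component fill the full volume, and solves for the Amagat pressure at which the partial volumes sum to the total.

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314  # J/(mol K)
# Approximate van der Waals constants (assumed): a [Pa m^6/mol^2], b [m^3/mol]
A = {"He": 0.00346, "SF6": 0.786}
B = {"He": 2.38e-5, "SF6": 8.79e-5}

def p_vdw(gas, v, T):
    # van der Waals pressure at molar volume v
    return R * T / (v - B[gas]) - A[gas] / v**2

def v_vdw(gas, p, T):
    # molar volume at (p, T), bracketed between b and a large ideal-gas guess
    return brentq(lambda v: p_vdw(gas, v, T) - p, B[gas] * 1.01, 10 * R * T / p)

T, V = 300.0, 0.01            # K, m^3
n = {"He": 0.5, "SF6": 0.5}   # mol

# Dalton: each component alone fills V; total pressure is the sum
p_dalton = sum(p_vdw(g, V / n[g], T) for g in n)

# Amagat: find p such that the partial volumes at p sum to V
p_amagat = brentq(lambda p: sum(n[g] * v_vdw(g, p, T) for g in n) - V,
                  1e3, 1e7)
```

    Both answers sit near the ideal-gas value nRT/V of about 2.5e5 Pa here, but they are not identical; stronger non-ideality (e.g., higher SF6 fraction or pressure) widens the gap the experiments are designed to probe.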

  13. Bayesian 2-Stage Space-Time Mixture Modeling With Spatial Misalignment of the Exposure in Small Area Health Data.

    PubMed

    Lawson, Andrew B; Choi, Jungsoon; Cai, Bo; Hossain, Monir; Kirby, Russell S; Liu, Jihong

    2012-09-01

We develop a new Bayesian two-stage space-time mixture model to investigate the effects of air pollution on asthma. The two-stage mixture model proposed allows for the identification of temporal latent structure as well as the estimation of the effects of covariates on health outcomes. In the paper, we also consider spatial misalignment of exposure and health data. A simulation study is conducted to assess the performance of the two-stage mixture model. We apply our statistical framework to a county-level ambulatory care asthma data set in the US state of Georgia for the years 1999-2008.

  14. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.

    PubMed

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-03-13

In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and to investigate extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.
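
    A design of this family can be generated in coded units as follows. This is an illustrative central-composite-style construction, not the paper's exact run plan: the generator E = ABCD is an assumption, and a full star set for five factors has ten axial runs where the study used eight.

```python
import numpy as np
from itertools import product

k = 5
# 2^(5-1) fractional factorial in coded units: generator E = ABCD (assumed)
base = np.array(list(product([-1, 1], repeat=k - 1)))
frac = np.column_stack([base, base.prod(axis=1)])        # 16 runs

# Axial (star) points at +/-2 on each factor, plus replicated centre points
axial = np.vstack([2 * np.eye(k, dtype=int), -2 * np.eye(k, dtype=int)])
center = np.zeros((4, k), dtype=int)
design = np.vstack([frac, axial, center])                # 30 coded runs
```

    The axial points at ±2 let curvature be estimated along each factor, and the replicated centre points supply a pure-error estimate for the lack-of-fit comparison.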

  15. Some comments on thermodynamic consistency for equilibrium mixture equations of state

    DOE PAGES

    Grove, John W.

    2018-03-28

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.

  16. Performance of the likelihood ratio difference (G2 Diff) test for detecting unidimensionality in applications of the multidimensional Rasch model.

    PubMed

    Harrell-Williams, Leigh; Wolfe, Edward W

    2014-01-01

Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
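
    The test being evaluated is the standard likelihood ratio difference: twice the gap in fitted log-likelihoods, referred to a chi-square with degrees of freedom equal to the difference in parameter counts. The log-likelihoods and degrees of freedom below are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical fitted log-likelihoods for a unidimensional and a
# multidimensional MRCMLM solution (numbers invented for illustration)
ll_uni, ll_multi = -5230.4, -5221.7
df_diff = 2   # extra parameters in the multidimensional solution (assumed)

g2_diff = 2 * (ll_multi - ll_uni)            # likelihood ratio difference
p_value = stats.chi2.sf(g2_diff, df_diff)    # reference chi-square tail
```

    The study's point is that even though this statistic is easy to compute, its Type I error behavior under the MRCMLM makes the nominal chi-square reference unreliable for dimensionality decisions.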

  17. Robust Bayesian clustering.

    PubMed

    Archambeau, Cédric; Verleysen, Michel

    2007-01-01

    A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.

  18. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed Central

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-01-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
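
    The LOD score construction described above can be sketched on simulated phenotypes. This is the standard interval-mapping statistic, not the paper's corrected model: a 50/50 two-component normal mixture (e.g., a backcross at a fully informative marker) is compared against a single normal fit, with the component means found by a short EM.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Phenotypes from two genotype groups differing in mean (illustrative data)
y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])

# Null model: a single normal distribution (MLE mean and sd)
ll0 = stats.norm.logpdf(y, y.mean(), y.std()).sum()

# Alternative: 50/50 two-component normal mixture with common variance;
# component means fitted by a short EM
mu = np.array([y.min(), y.max()])
sd = y.std()
for _ in range(100):
    d = np.stack([stats.norm.pdf(y, m, sd) for m in mu])
    r = d / d.sum(axis=0)                      # E-step: responsibilities
    nk = r.sum(axis=1)
    mu = (r * y).sum(axis=1) / nk              # M-step: means, common sd
    sd = np.sqrt((r * (y - mu[:, None]) ** 2).sum() / len(y))
ll1 = np.log(0.5 * stats.norm.pdf(y, mu[0], sd)
             + 0.5 * stats.norm.pdf(y, mu[1], sd)).sum()

lod = (ll1 - ll0) / np.log(10)   # LOD score at this putative locus
```

    Because the mixture always fits at least as well as the single normal, the LOD is non-negative even without a QTL; the paper's contribution is a mixture formulation that suppresses the spurious peaks this produces in low-information regions.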

  19. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.

  20. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    PubMed

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. A new strategy for adding multiple interior points for symmetric mixture models is also proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.

  1. Mixture of autoregressive modeling orders and its implication on single trial EEG classification

    PubMed Central

    Atyabi, Adham; Shic, Frederick; Naples, Adam

    2016-01-01

    Autoregressive (AR) models are among the most commonly utilized feature types in electroencephalogram (EEG) studies because they offer better resolution, smoother spectra, and applicability to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders represent the signal poorly, while higher orders increase noise. Conventional methods for estimating modeling order include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond to an operator's thoughts more quickly and accurately. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR-mixtures is assessed against several conventional methods utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order-estimation approaches (e.g., AIC, BIC, and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3, and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
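
    A minimal sketch of conventional order selection by AIC on simulated data (least-squares AR fit, NumPy only). This is the baseline family of methods the article compares against, not its mixture approach.

```python
import numpy as np

def ar_aic(y, order):
    # Least-squares AR(order) fit; AIC = n*log(RSS/n) + 2*(order + 1)
    y = np.asarray(y, float)
    target = y[order:]
    # Lag matrix: row t holds [y[t-1], y[t-2], ..., y[t-order]]
    X = np.column_stack([y[order - 1 - j : len(y) - 1 - j] for j in range(order)])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    rss = float(np.sum((target - X @ coef) ** 2))
    n = len(target)
    return n * np.log(rss / n) + 2 * (order + 1)

rng = np.random.default_rng(0)
e = rng.standard_normal(600)
y = np.zeros(600)
for t in range(2, 600):
    y[t] = 0.6 * y[t - 1] - 0.5 * y[t - 2] + e[t]   # a true AR(2) process

best = min(range(1, 9), key=lambda p: ar_aic(y, p))  # AIC-selected order
```

    The article's hypothesis is that combining spectra from several such candidate orders, rather than committing to the single AIC/BIC/FPE winner, better represents the underlying signal.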

  2. Single- and mixture toxicity of three organic UV-filters, ethylhexyl methoxycinnamate, octocrylene, and avobenzone on Daphnia magna.

    PubMed

    Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun

    2017-03-01

    In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate the combined toxicity of these substances when they occur in a mixture. The effective concentration (ECx) values for each UV-filter were calculated from concentration-response curves; the concentration combinations of the three UV-filters in a mixture were determined by the fraction of components based on EC25 values predicted by the concentration addition (CA) model. The interactions between the UV-filters were also assessed by the model deviation ratio (MDR), using observed and predicted toxicity values obtained from mixture-exposure tests and the CA model. The results indicated that the observed ECxmix (e.g., EC10mix, EC25mix, or EC50mix) values obtained from mixture-exposure tests were higher than the predicted ECxmix values calculated by the CA model. MDR values were also less than a factor of 1.0 in mixtures of the three UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. Our findings will provide important information for hazard or risk assessment of organic UV-filters when they exist together in the aquatic environment. To better understand mixture toxicity and the interaction of components in a mixture, further studies of various combinations of mixture components are also required. Copyright © 2016 Elsevier Inc. All rights reserved.
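
    The CA prediction and the MDR diagnostic described above reduce to two one-line formulas; a sketch with made-up EC25 values (not the study's data):

```python
def ca_predicted_ecx(fractions, ecx_values):
    # Concentration addition (CA): 1/ECx_mix = sum_i p_i / ECx_i,
    # where p_i is component i's fraction of the total mixture concentration
    return 1.0 / sum(p / e for p, e in zip(fractions, ecx_values))

def model_deviation_ratio(predicted, observed):
    # MDR = predicted / observed; values below 1 point to antagonism
    # (observed ECx higher than CA predicts), values above 1 to synergism
    return predicted / observed

# Hypothetical EC25 values for an equal-fraction ternary mixture
predicted = ca_predicted_ecx([1 / 3, 1 / 3, 1 / 3], [10.0, 20.0, 40.0])
```

    An observed ECxmix above `predicted` yields MDR < 1, which is the antagonism pattern the study reports.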

  3. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    NASA Astrophysics Data System (ADS)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  4. Cumulative toxicity of neonicotinoid insecticide mixtures to Chironomus dilutus under acute exposure scenarios.

    PubMed

    Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten

    2017-11-01

    Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because of the similar modes of action of neonicotinoids, concentration-additive cumulative mixture toxicity was the predicted model. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity. 
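
    Using the single-compound LC50 values reported in the abstract, the concentration-addition reference prediction for an (assumed) equal-fraction ternary mixture can be sketched. The log-logistic slopes and the independent-action alternative shown are hypothetical additions for illustration, since the MIXTOX fits themselves are not given here.

```python
def ca_mix_lc50(fractions, lc50s):
    # Concentration addition: 1/LC50_mix = sum_i p_i / LC50_i
    return 1.0 / sum(p / l for p, l in zip(fractions, lc50s))

def ia_mortality(concs, lc50s, slopes):
    # Independent action (response addition): E = 1 - prod_i (1 - E_i),
    # each E_i from a two-parameter log-logistic mortality curve
    survive = 1.0
    for c, l, s in zip(concs, lc50s, slopes):
        effect = 1.0 / (1.0 + (l / c) ** s)
        survive *= 1.0 - effect
    return 1.0 - survive

lc50 = [4.63, 5.93, 55.34]   # imidacloprid, clothianidin, thiamethoxam (abstract, ug/L)
frac = [1 / 3, 1 / 3, 1 / 3] # assumed equal-fraction ternary mixture
slopes = [2.0, 2.0, 2.0]     # hypothetical slopes for illustration
ca = ca_mix_lc50(frac, lc50)
```

    Observed mixture toxicity exceeding both reference predictions is what the study reports as synergistic deviation.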
Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, the development of better toxicity models for neonicotinoid mixture exposures, and the consideration of mixture effects when setting water quality guidelines for this class of pesticides. Environ Toxicol Chem 2017;36:3091-3101. © 2017 SETAC.

  5. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  6. Numerical simulation of asphalt mixtures fracture using continuum models

    NASA Astrophysics Data System (ADS)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of fracture processes in semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled by a quasi-continuum model. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures the random nature of material parameters and aggregate distribution in specimens. Initial results of the analysis are presented here.

  7. Introduction to the special section on mixture modeling in personality assessment.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  8. Predicting the shock compression response of heterogeneous powder mixtures

    NASA Astrophysics Data System (ADS)

    Fredenburg, D. A.; Thadhani, N. N.

    2013-06-01

    A model framework for predicting the dynamic shock-compression response of heterogeneous powder mixtures using readily obtained measurements from quasi-static tests is presented. Low-strain-rate compression data are first analyzed to determine the region of the bulk response over which particle rearrangement does not contribute to compaction. This region is then fit to determine the densification modulus of the mixture, σD, a newly defined parameter describing the resistance of the mixture to yielding. The measured densification modulus, reflective of the diverse yielding phenomena that occur at the meso-scale, is implemented into a rate-independent formulation of the P-α model, which is combined with an isobaric equation of state to predict the low- and high-stress dynamic compression response of heterogeneous powder mixtures. The framework is applied to two metal + metal-oxide (thermite) powder mixtures, and good agreement between model and experiment is obtained for all mixtures at stresses near and above those required to reach full density. At lower stresses, rate dependencies of the constituents, specifically those of the matrix constituent, determine the ability of the model to predict the measured response in the incomplete compaction regime.

  9. D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.

    PubMed

    Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W

    2005-12-01

    Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.

  10. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    NASA Astrophysics Data System (ADS)

    Konishi, C.

    2014-12-01

    A gravel-sand-clay mixture model is proposed particularly for unconsolidated sediments to predict permeability and velocity from the volume fractions of the three components (i.e., gravel, sand, and clay). A well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle, and the rest of the volume is considered as that of the large particle. This simple approach has been commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows an impact of the presence of another large particle; i.e., only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of considering a gravel-sand-clay mixture model. In the proposed model, I consider the three volume fractions of each component instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosity of the two cases, one in which sand is the smaller particle and the other in which sand is the larger particle, can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as a weighted average of the two cases by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component.
Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation. Furthermore, elastic properties are obtainable from the general Hashin-Shtrikman-Walpole bounds. The results predicted by this new mixture model are qualitatively consistent with laboratory measurements and well logs obtained for unconsolidated sediments. Acknowledgement: A part of this study was accomplished with a subsidy of the River Environment Fund of Japan.
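
    A sketch of the final prediction step, assuming the common Kozeny-Carman constant of 180. The harmonic-mean effective grain size is an assumption for illustration, since the abstract does not state its exact formula.

```python
def kozeny_carman_permeability(phi_e, d_e):
    # Kozeny-Carman: k = d_e^2 * phi_e^3 / (180 * (1 - phi_e)^2)
    # phi_e: effective porosity (fraction), d_e: effective grain diameter (m)
    return d_e ** 2 * phi_e ** 3 / (180.0 * (1.0 - phi_e) ** 2)

def effective_grain_size(fractions, grain_sizes):
    # One common choice: volume-fraction-weighted harmonic mean
    # (an assumption here, not the abstract's stated formula)
    return 1.0 / sum(f / d for f, d in zip(fractions, grain_sizes))

# Hypothetical sample: 5% gravel (1 cm), 75% sand (0.2 mm), 20% clay (1 um)
d_e = effective_grain_size([0.05, 0.75, 0.20], [1e-2, 2e-4, 1e-6])
k = kozeny_carman_permeability(0.25, d_e)   # permeability in m^2
```

    The harmonic mean makes the fine clay fraction dominate `d_e`, which is why small clay contents depress the predicted permeability so strongly.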

  11. Willingness to pay for midwife-endorsed product: An Australian best-worst study.

    PubMed

    Lahtinen, Ville; Rundle-Thiele, Sharyn; Adamsen, Jannie Mia

    2016-01-01

    This article examined the impact of midwife endorsement on stated choice preferences in one of the highest volume baby care product categories, diapers. An online survey was conducted testing 12 alternatives of which six were midwife endorsed. A total of 215 responses were analyzed using best-worst and multinomial logit modeling. Results indicate that package size, price, and brand are more sensitive predictors of stated choice preferences than midwife endorsement. Respondents were willing to pay 2.3% more for a diaper that was endorsed by midwives. These findings suggest that midwife endorsement should be pursued by health marketers.
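
    In a multinomial logit model with a linear price term, marginal willingness to pay is the negative ratio of an attribute coefficient to the price coefficient; a sketch with hypothetical coefficients (not the study's estimates):

```python
def willingness_to_pay(beta_attribute, beta_price):
    # Marginal WTP implied by a multinomial logit model with linear utility:
    # the money value at which the attribute and price effects trade off
    return -beta_attribute / beta_price

# Hypothetical coefficients: endorsement utility 0.046, price coefficient -2.0
wtp = willingness_to_pay(0.046, -2.0)  # in the currency units of the price variable
```

    Dividing such a WTP figure by the average package price is how a premium like the reported 2.3% would be expressed.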

  12. Analyzing Data for Systems Biology: Working at the Intersection of Thermodynamics and Data Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, William R.; Baxter, Douglas J.

    2012-08-15

    Many challenges in systems biology have to do with analyzing data within the framework of molecular phenomena and cellular pathways. How does this relate to thermodynamics that we know govern the behavior of molecules? Making progress in relating data analysis to thermodynamics is essential in systems biology if we are to build predictive models that enable the field of synthetic biology. This report discusses work at the crossroads of thermodynamics and data analysis, and demonstrates that statistical mechanical free energy is a multinomial log likelihood. Applications to systems biology are presented.
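
    The identity the report builds on can be checked numerically: the exact multinomial log likelihood, and its Stirling-approximation form as a relative entropy per observation (the dimensionless, free-energy-like quantity). A stdlib-only sketch:

```python
from math import lgamma, log

def multinomial_loglik(counts, probs):
    # Exact log multinomial pmf via log-gamma (stdlib only)
    n = sum(counts)
    ll = lgamma(n + 1)
    for k, p in zip(counts, probs):
        ll += k * log(p) - lgamma(k + 1)
    return ll

def relative_entropy_per_count(counts, probs):
    # Stirling's approximation gives -log L / n ~ sum_i f_i log(f_i / p_i),
    # the relative entropy between observed frequencies and model probabilities
    n = sum(counts)
    return sum((k / n) * log((k / n) / p) for k, p in zip(counts, probs) if k)
```

    When the observed frequencies match the model probabilities the relative entropy vanishes, i.e. the likelihood is maximized, mirroring a free-energy minimum.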

  13. A numerical study of granular dam-break flow

    NASA Astrophysics Data System (ADS)

    Pophet, N.; Rébillout, L.; Ozeren, Y.; Altinakar, M.

    2017-12-01

    Accurate prediction of granular flow behavior is essential to optimize mitigation measures for hazardous natural granular flows such as landslides, debris flows and tailings-dam break flows. So far, most successful models for these types of flows focus on either pure granular flows or flows of saturated grain-fluid mixtures, employing a constant friction model or more complex rheological models. These saturated models often produce non-physical results when they are applied to simulate flows of partially saturated mixtures; therefore, more advanced models are needed. A numerical model was developed for granular flow employing a constant friction and μ(I) rheology (Jop et al., J. Fluid Mech. 2005) coupled with a groundwater flow model for seepage flow. The granular flow is simulated by solving a mixture model using the Finite Volume Method (FVM). The Volume-of-Fluid (VOF) technique is used to capture the free surface motion. The constant friction and μ(I) rheological models are incorporated in the mixture model. The seepage flow is modeled by solving the Richards equation. A framework is developed to couple these two solvers in OpenFOAM. The model was validated and tested by reproducing laboratory experiments of partially and fully channelized dam-break flows of dry and initially saturated granular material. To obtain appropriate parameters for the rheological models, a series of simulations with different sets of rheological parameters was performed. The simulation results obtained from the constant friction and μ(I) rheological models are compared with laboratory experiments for the granular free-surface profile, front position and velocity field during the flows. The numerical predictions indicate that the proposed model is promising in predicting the dynamics of the flow and the deposition process. The proposed model may provide more reliable insight than previous saturated mixture models when saturated and partially saturated portions of a granular mixture co-exist.

  14. Comparing the efficiency of digital and conventional soil mapping to predict soil types in a semi-arid region in Iran

    NASA Astrophysics Data System (ADS)

    Zeraatpisheh, Mojtaba; Ayoubi, Shamsollah; Jafari, Azam; Finke, Peter

    2017-05-01

    The efficiency of different digital and conventional soil mapping approaches to produce categorical maps of soil types is determined by cost, sample size, accuracy and the selected taxonomic level. The efficiency of digital and conventional soil mapping approaches was examined in the semi-arid region of Borujen, central Iran. This research aimed to (i) compare two digital soil mapping approaches, multinomial logistic regression and random forest, with the conventional soil mapping approach at four soil taxonomic levels (order, suborder, great group and subgroup), (ii) validate the predicted soil maps against the same validation data set to determine the best method for producing soil maps, and (iii) select the best soil taxonomic level for the different approaches at three sample sizes (100, 80, and 60 point observations), in two scenarios with and without a geomorphology map as a spatial covariate. In most predicted maps, using both digital soil mapping approaches, the best results were obtained using the combination of terrain attributes and the geomorphology map, although differences between the scenarios with and without the geomorphology map were not significant. Employing the geomorphology map increased map purity and the Kappa index, and led to a decrease in the 'noisiness' of the soil maps. Multinomial logistic regression performed better at higher taxonomic levels (order and suborder), whereas random forest performed better at lower taxonomic levels (great group and subgroup). Multinomial logistic regression was less sensitive than random forest to a decrease in the number of training observations. The conventional soil mapping method produced a map with a larger minimum polygon size because of the traditional cartographic criteria used to make the 1:100,000 geological map (on which the conventional soil map was largely based).
Likewise, the conventional soil map also had a larger average polygon size, which resulted in a lower level of detail. Multinomial logistic regression at the order level (map purity of 0.80), random forest at the suborder (map purity of 0.72) and great group levels (map purity of 0.60), and conventional soil mapping at the subgroup level (map purity of 0.48) produced the most accurate maps in the study area. The multinomial logistic regression method was identified as the most effective approach based on a combined index of map purity, map information content, and map production cost. The combined index also showed that a smaller sample size led to a preference for the order level, while a larger sample size led to a preference for the great group level.
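
    The validation metrics relied on above, map purity (overall accuracy) and the Kappa index, can both be computed from a class confusion matrix; a stdlib-only sketch with a made-up 2x2 matrix:

```python
def purity_and_kappa(cm):
    # cm[i][j]: number of validation points whose true class is i
    # and whose mapped class is j
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(len(cm)))
    purity = diag / n                                   # overall map purity
    row = [sum(r) for r in cm]
    col = [sum(cm[i][j] for i in range(len(cm))) for j in range(len(cm))]
    pe = sum(r * c for r, c in zip(row, col)) / n ** 2  # chance agreement
    kappa = (purity - pe) / (1.0 - pe)                  # Cohen's kappa
    return purity, kappa

# Hypothetical validation counts for a two-class map
p, k = purity_and_kappa([[40, 10], [5, 45]])
```

    Kappa discounts the agreement expected by chance, which is why it is reported alongside the raw purity figures.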

  15. Mixture theory-based poroelasticity as a model of interstitial tissue growth

    PubMed Central

    Cowin, Stephen C.; Cardoso, Luis

    2011-01-01

    This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues. PMID:22184481

  16. Mixture theory-based poroelasticity as a model of interstitial tissue growth.

    PubMed

    Cowin, Stephen C; Cardoso, Luis

    2012-01-01

    This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues.

  17. Bacterial diversity among four healthcare-associated institutes in Taiwan.

    PubMed

    Chen, Chang-Hua; Lin, Yaw-Ling; Chen, Kuan-Hsueh; Chen, Wen-Pei; Chen, Zhao-Feng; Kuo, Han-Yueh; Hung, Hsueh-Fen; Tang, Chuan Yi; Liou, Ming-Li

    2017-08-15

    Indoor microbial communities have important implications for human health, especially in health-care institutes (HCIs). The factors that determine the diversity and composition of microbiomes in a built environment remain unclear. Herein, we used 16S rRNA amplicon sequencing to investigate the relationships between building attributes and surface bacterial communities among four HCIs located in three buildings. We examined the surface bacterial communities and environmental parameters in buildings supplied with different ventilation types and compared the results using a Dirichlet multinomial mixture (DMM)-based approach. A total of 203 samples from the four HCIs were analyzed. Four bacterial community types were identified using the DMM-based approach, and these corresponded closely to the four HCIs. The α-diversity and β-diversity in the naturally ventilated building differed from those in the air-conditioned building. The bacterial source composition varied across each building. Nine genera were found to form the core microbiota shared by all the areas, of which Acinetobacter, Enterobacter, Pseudomonas, and Staphylococcus are regarded as healthcare-associated pathogens (HAPs). The observed relationships between environmental parameters, the core microbiota, and surface bacterial diversity suggest that we might manage indoor environments by creating new sanitation protocols, adjusting the ventilation design, and further understanding the transmission routes of HAPs.
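
    The DMM approach clusters samples by the Dirichlet-multinomial distribution of their taxon counts. A stdlib-only sketch of that distribution's log-pmf, the component density a DMM mixes over (the function name and the alpha values are illustrative):

```python
from math import lgamma, log

def dirichlet_multinomial_logpmf(counts, alpha):
    # Log pmf of the Dirichlet-multinomial distribution: a multinomial whose
    # probability vector is itself Dirichlet(alpha)-distributed, allowing the
    # overdispersion typical of microbiome count data
    n = sum(counts)
    a0 = sum(alpha)
    ll = lgamma(n + 1) + lgamma(a0) - lgamma(n + a0)
    for k, a in zip(counts, alpha):
        ll += lgamma(k + a) - lgamma(a) - lgamma(k + 1)
    return ll
```

    In a fitted DMM, each sample is assigned to the component whose mixture weight times this pmf is largest; here the four inferred components lined up with the four HCIs.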

  18. Religion, contraception, and method choice of married women in Ghana.

    PubMed

    Gyimah, Stephen Obeng; Adjei, Jones K; Takyi, Baffour K

    2012-12-01

    Using pooled data from the 1998 and 2003 Demographic and Health Surveys, this paper investigates the association between religion and contraceptive behavior of married women in Ghana. Guided by the particularized theology and characteristics hypotheses, multinomial logit and complementary log-log models are used to explore denominational differences in contraceptive adoption among currently married women and assess whether the differences could be explained through other characteristics. We found that while there were no differences between women of different Christian faiths, non-Christian women (Muslim and Traditional) were significantly more likely to have never used contraception compared with Christian women. Similar observations were made on current use of contraception, although the differences were greatly reduced in the multivariate models.
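
    The two link functions named above can be sketched directly: multinomial logit choice probabilities are a softmax over category utilities, and the complementary log-log model maps a linear predictor eta to a probability via p = 1 - exp(-exp(eta)). A minimal sketch with hypothetical values:

```python
from math import exp

def inv_cloglog(eta):
    # Inverse complementary log-log link: p = 1 - exp(-exp(eta))
    return 1.0 - exp(-exp(eta))

def mnl_probs(utilities):
    # Multinomial logit category probabilities (numerically stable softmax)
    m = max(utilities)
    w = [exp(u - m) for u in utilities]
    s = sum(w)
    return [x / s for x in w]

# Hypothetical linear predictors for three contraceptive-use categories
probs = mnl_probs([0.2, -0.1, 1.0])
```

    The asymmetric cloglog link suits "never used" type outcomes where one tail of the response behaves differently from the other.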

  19. Prediction of Nursing Workload in Hospital.

    PubMed

    Fiebig, Madlen; Hunstein, Dirk; Bartholomeyczik, Sabine

    2018-01-01

    A dissertation project at Witten/Herdecke University [1] is investigating which (nursing-sensitive) patient characteristics are suitable for predicting a higher or lower nursing workload. Four predictive modelling methods were selected for this research project. In a first step, SUPPORT VECTOR MACHINE, RANDOM FOREST, and GRADIENT BOOSTING were used to identify potential predictors from the nursing-sensitive patient characteristics. The results were compared via FEATURE IMPORTANCE. To predict nursing workload, the predictors identified in step 1 were modelled using MULTINOMIAL LOGISTIC REGRESSION. First results from the data mining process will be presented. A prognostic determination of nursing workload can be used not only as a basis for human resource planning in hospitals, but also to respond to health policy issues.

  20. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed

    Chen, D G; Pounds, J G

    1998-12-01

    The linear logistical isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where the parameters Ymin and Ymax represent the minimal and maximal observed toxic responses. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium.
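
    The paper's exact model formula is elided in this record ("[formula: see text]"), so the sketch below pairs a generic bounded log-logistic curve with floor Ymin and ceiling Ymax with a Box-Cox "transform both sides" residual; the functional form is an assumption for illustration only.

```python
from math import log

def bounded_response(dose, ymin, ymax, ed50, slope):
    # A generic bounded log-logistic curve: response rises from Ymin
    # toward Ymax, passing the midpoint at dose = ED50
    # (a stand-in for the paper's elided isobologram model)
    return ymin + (ymax - ymin) / (1.0 + (ed50 / dose) ** slope)

def boxcox(y, lam):
    # Box-Cox transform; lam = 0 is the log transform
    return log(y) if lam == 0 else (y ** lam - 1.0) / lam

def tbs_residual(y_obs, y_fit, lam):
    # "Transform both sides": compare observed and fitted responses on the
    # same Box-Cox scale to stabilize variance and normalize residuals
    return boxcox(y_obs, lam) - boxcox(y_fit, lam)
```

    Fitting then amounts to minimizing the sum of squared `tbs_residual` values over the model and transformation parameters jointly.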

  2. Clinicians' adherence to clinical practice guidelines for cardiac function monitoring during antipsychotic treatment: a retrospective report on 434 patients with severe mental illness.

    PubMed

    Manchia, Mirko; Firinu, Giorgio; Carpiniello, Bernardo; Pinna, Federica

    2017-03-31

    Severe mental illness (SMI) carries considerable excess morbidity and mortality, a proportion of which is explained by cardiovascular diseases, caused in part by antipsychotic (AP)-induced QT-related arrhythmias and sudden death by Torsade de Pointes (TdP). The implementation of evidence-based recommendations for cardiac function monitoring might reduce the incidence of these AP-related adverse events. To investigate clinicians' adherence to cardiac function monitoring before and after starting AP, we performed a retrospective assessment of 434 AP-treated SMI patients longitudinally followed up for 5 years at an academic community mental health center. We classified antipsychotics according to their risk of inducing QT-related arrhythmias and TdP (Center for Research on Therapeutics, University of Arizona). We used univariate tests and multinomial or binary logistic regression models for data analysis. Univariate and multinomial regression analyses showed that psychiatrists were more likely to perform pre-treatment electrocardiogram (ECG) and electrolyte testing with APs carrying higher cardiovascular risk, but not on the basis of AP pharmacological class. Univariate and binary logistic regression analyses showed that cardiac function parameters (ECG and electrolyte balance) were more frequently monitored during treatment with second-generation APs than with first-generation APs. Our data show weaknesses in the cardiac function monitoring of AP-treated SMI patients and might guide future interventions to tackle them.

  3. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete

    PubMed Central

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-01-01

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and to investigate the extension of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of −1 to +1, eight axial mixtures were prepared at extreme values of −2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A), on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model. PMID:28787990
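    The coded design described above (16 fractional-factorial runs at ±1, eight axial runs at ±2, four center replicates) can be generated mechanically. This sketch assumes the half-fraction aliases the fifth factor with the four-way interaction, and that only the four continuous factors carry axial runs; the abstract reports eight axial mixtures but does not state which factors they vary:

```python
from itertools import product

# Hedged sketch of the coded central-composite-style design; which
# factors carry the axial runs is an assumption, not from the paper.

factors = ["binder", "binder_content", "VMA", "w/cm", "S/A"]

# 2^(5-1) half fraction: fifth factor = A*B*C*D (defining relation
# I = ABCDE, resolution V), giving 16 runs at coded levels -1/+1.
factorial = [(a, b, c, d, a * b * c * d)
             for a, b, c, d in product((-1, 1), repeat=4)]

# Axial points at +/-2 for the four continuous factors (columns 1-4),
# with all other factors held at the center (coded 0).
axial = []
for col in range(1, 5):
    for level in (-2, 2):
        run = [0] * 5
        run[col] = level
        axial.append(tuple(run))

# Four replicate center points, used to estimate pure error.
center = [(0, 0, 0, 0, 0)] * 4

design = factorial + axial + center   # 16 + 8 + 4 = 28 mixtures
```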

  4. NGMIX: Gaussian mixture models for 2D images

    NASA Astrophysics Data System (ADS)

    Sheldon, Erin

    2015-08-01

    NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
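    The analytic-convolution property NGMIX relies on (the convolution of two Gaussians is a Gaussian whose covariance is the sum of the inputs') can be checked numerically in one dimension. This is not NGMIX's API, just the underlying identity, applied to a toy two-Gaussian "galaxy" profile:

```python
import numpy as np

def gaussian(x, sigma):
    """Unit-area 1D Gaussian centered at zero."""
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Odd point count so x = 0 falls exactly on a grid point and the
# discrete convolution stays centered.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# Toy "galaxy" profile as a two-component Gaussian mixture, plus a
# Gaussian PSF (widths are illustrative, not an NGMIX fit).
galaxy = 0.6 * gaussian(x, 0.8) + 0.4 * gaussian(x, 2.0)
psf = gaussian(x, 0.5)

# Numerical convolution vs. the analytic rule: each mixture component
# simply gains the PSF variance.
numeric = np.convolve(galaxy, psf, mode="same") * dx
analytic = (0.6 * gaussian(x, np.sqrt(0.8**2 + 0.5**2))
            + 0.4 * gaussian(x, np.sqrt(2.0**2 + 0.5**2)))
max_err = np.abs(numeric - analytic).max()
```

Because every mixture component convolves in closed form, no Fourier-space step is needed, which is the speed advantage the abstract describes.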

  5. A non-ideal model for predicting the effect of dissolved salt on the flash point of solvent mixtures.

    PubMed

    Liaw, Horng-Jang; Wang, Tzu-Ai

    2007-03-06

    Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Liquids with dissolved salts are used in salt-distillation processes for separating close-boiling or azeotropic systems, and the addition of salts to a liquid may reduce fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. This modified model was verified by comparison with the experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in terms of prediction of the flash points of these mixtures. The experimental results confirm marked increases in the liquid flash point with addition of inorganic salts relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application for the model in assessment of the fire and explosion hazard of solvent/salt mixtures and, further, that addition of inorganic salts may prove useful for hazard reduction in flammable liquids.

  6. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teng, S.; Tebby, C.

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro-in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic (BK/TD) models to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  7. Determination of Failure Point of Asphalt-Mixture Fatigue-Test Results Using the Flow Number Method

    NASA Astrophysics Data System (ADS)

    Wulan, C. E. P.; Setyawan, A.; Pramesti, F. P.

    2018-03-01

    The failure point of the results of fatigue tests of asphalt mixtures performed in controlled stress mode is difficult to determine. However, several methods from empirical studies are available to solve this problem. The objectives of this study are to determine the fatigue failure point of the results of indirect tensile fatigue tests using the Flow Number Method and to determine the best Flow Number model for the asphalt mixtures tested. In order to achieve these goals, first, the best of three asphalt mixtures was selected based on their Marshall properties. Next, the Indirect Tensile Fatigue Test was performed on the chosen asphalt mixture. The stress-controlled fatigue tests were conducted at a temperature of 20°C and frequency of 10 Hz, with the application of three loads: 500, 600, and 700 kPa. The last step was the application of the Flow Number methods, namely the Three-Stages Model, FNest Model, Francken Model, and Stepwise Method, to the results of the fatigue tests to determine the failure point of the specimen. The chosen asphalt mixture is an EVA (ethyl vinyl acetate) polymer-modified asphalt mixture with 6.5% OBC (Optimum Bitumen Content). Furthermore, the result of this study shows that the failure points of the EVA-modified asphalt mixture under loads of 500, 600, and 700 kPa are 6621, 4841, and 611 for the Three-Stages Model; 4271, 3266, and 537 for the FNest Model; 3401, 2431, and 421 for the Francken Model; and 6901, 6841, and 1291 for the Stepwise Method, respectively. These different results show that the bigger the loading, the smaller the number of cycles to failure. However, the best FN results are shown by the Three-Stages Model and the Stepwise Method, which exhibit extreme increases after the constant development of accumulated strain.

  8. Model Selection Methods for Mixture Dichotomous IRT Models

    ERIC Educational Resources Information Center

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

    This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), the pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…
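    The information-criterion part of such a comparison is mechanical once each candidate mixture model has been fitted. A sketch with hypothetical log-likelihoods and parameter counts for one-, two-, and three-class models (AIC and BIC only; DIC, PsBF, and PPMC require posterior samples):

```python
import math

# Hedged sketch: the log-likelihoods and parameter counts below are
# hypothetical, chosen only to show how AIC and BIC can disagree.

def aic(loglik, k):
    """Akaike's information criterion: smaller is better."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian information criterion: penalty grows with sample size."""
    return -2.0 * loglik + k * math.log(n)

n = 1000  # examinees (hypothetical)
candidates = {           # model: (maximized log-likelihood, n parameters)
    "1-class 2PL": (-5210.0, 40),    # 20 items x 2 parameters
    "2-class 2PL": (-5105.0, 81),    # 2 x 40 + 1 mixing proportion
    "3-class 2PL": (-5095.0, 122),   # 3 x 40 + 2 mixing proportions
}

best_aic = min(candidates, key=lambda m: aic(*candidates[m]))
best_bic = min(candidates, key=lambda m: bic(candidates[m][0],
                                             candidates[m][1], n))
```

With these illustrative numbers, AIC favors the two-class model while BIC's heavier penalty favors the one-class model, the kind of disagreement such studies compare across indices.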

  9. Mixture models for estimating the size of a closed population when capture rates vary among individuals

    USGS Publications Warehouse

    Dorazio, R.M.; Royle, J. Andrew

    2003-01-01

    We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
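    The beta-binomial class discussed above has a closed-form capture-frequency distribution: with individual detection probabilities p ~ Beta(a, b), the number of captures y in T occasions is beta-binomial. A sketch with illustrative parameter values (not the paper's snowshoe hare estimates):

```python
from math import comb, lgamma, exp

# Hedged sketch of the beta-binomial mixture for heterogeneous
# detection; T, a, b below are illustrative values.

def log_beta(a, b):
    """log of the Beta function via log-gamma, for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(y, T, a, b):
    """P(y captures in T occasions), marginalizing over p ~ Beta(a, b)."""
    return comb(T, y) * exp(log_beta(a + y, b + T - y) - log_beta(a, b))

T, a, b = 6, 1.2, 3.5
pmf = [beta_binomial_pmf(y, T, a, b) for y in range(T + 1)]

# P(never captured): the quantity that makes population size N
# estimable only through the assumed latent mixing distribution.
p0 = pmf[0]
```

The sensitivity the abstract describes enters through p0: different latent distributions with similar fit can imply very different probabilities of never being detected, hence different N.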

  10. Chemical mixtures in potable water in the U.S.

    USGS Publications Warehouse

    Ryker, Sarah J.

    2014-01-01

    In recent years, regulators have devoted increasing attention to health risks from exposure to multiple chemicals. In 1996, the US Congress directed the US Environmental Protection Agency (EPA) to study mixtures of chemicals in drinking water, with a particular focus on potential interactions affecting chemicals' joint toxicity. The task is complicated by the number of possible mixtures in drinking water and lack of toxicological data for combinations of chemicals. As one step toward risk assessment and regulation of mixtures, the EPA and the Agency for Toxic Substances and Disease Registry (ATSDR) have proposed to estimate mixtures' toxicity based on the interactions of individual component chemicals. This approach permits the use of existing toxicological data on individual chemicals, but still requires additional information on interactions between chemicals and environmental data on the public's exposure to combinations of chemicals. Large compilations of water-quality data have recently become available from federal and state agencies. This chapter demonstrates the use of these environmental data, in combination with the available toxicological data, to explore scenarios for mixture toxicity and develop priorities for future research and regulation. Occurrence data on binary and ternary mixtures of arsenic, cadmium, and manganese are used to parameterize the EPA and ATSDR models for each drinking water source in the dataset. The models' outputs are then mapped at county scale to illustrate the implications of the proposed models for risk assessment and rulemaking. For example, according to the EPA's interaction model, the levels of arsenic and cadmium found in US groundwater are unlikely to have synergistic cardiovascular effects in most areas of the country, but the same mixture's potential for synergistic neurological effects merits further study. 
Similar analysis could, in future, be used to explore the implications of alternative risk models for the toxicity and interaction of complex mixtures, and to identify the communities with the highest and lowest expected value for regulation of chemical mixtures.

  11. Automated Detection of Diabetic Retinopathy using Deep Learning.

    PubMed

    Lam, Carson; Yi, Darvin; Guo, Margaret; Lindsey, Tony

    2018-01-01

    Diabetic retinopathy is a leading cause of blindness among working-age adults. Early detection of this condition is critical for good prognosis. In this paper, we demonstrate the use of convolutional neural networks (CNNs) on color fundus images for the recognition task of diabetic retinopathy staging. Our network models achieved test metric performance comparable to baseline literature results, with validation sensitivity of 95%. We additionally explored multinomial classification models, and demonstrate that errors primarily occur in the misclassification of mild disease as normal due to the CNN's inability to detect subtle disease features. We discovered that preprocessing with contrast limited adaptive histogram equalization and ensuring dataset fidelity by expert verification of class labels improves recognition of subtle features. Transfer learning on pretrained GoogLeNet and AlexNet models from ImageNet improved peak test set accuracies to 74.5%, 68.8%, and 57.2% on 2-ary, 3-ary, and 4-ary classification models, respectively.

  12. Testing and Improving Theories of Radiative Transfer for Determining the Mineralogy of Planetary Surfaces

    NASA Astrophysics Data System (ADS)

    Gudmundsson, E.; Ehlmann, B. L.; Mustard, J. F.; Hiroi, T.; Poulet, F.

    2012-12-01

    Two radiative transfer theories, the Hapke and Shkuratov models, have been used to estimate the mineralogic composition of laboratory mixtures of anhydrous mafic minerals from reflected near-infrared light, accurately modeling abundances to within 10%. For this project, we tested the efficacy of the Hapke model for determining the composition of mixtures (weight fraction, particle diameter) containing hydrous minerals, including phyllosilicates. Modal mineral abundances for some binary mixtures were modeled to +/-10% of actual values, but other mixtures showed higher inaccuracies (up to 25%). Consequently, a sensitivity analysis of selected input and model parameters was performed. We first examined the shape of the model's error function (RMS error between modeled and measured spectra) over a large range of endmember weight fractions and particle diameters and found that there was a single global minimum for each mixture (rather than local minima). The minimum was sensitive to modeled particle diameter but comparatively insensitive to modeled endmember weight fraction. Derivation of the endmembers' k optical constant spectra using the Hapke model showed differences with the Shkuratov-derived optical constants originally used. Model runs with different sets of optical constants suggest that slight differences in the optical constants used significantly affect the accuracy of model predictions. Even for mixtures where abundance was modeled correctly, particle diameter agreed inconsistently with sieved particle sizes and varied greatly for individual mixtures within a suite. Particle diameter was highly sensitive to the optical constants, possibly indicating that changes in modeled path length (proportional to particle diameter) compensate for changes in the k optical constant. Alternatively, it may not be appropriate to model path length and particle diameter with the same proportionality for all materials. 
Across mixtures, RMS error increased in proportion to the fraction of the darker endmember. Analyses are ongoing and further studies will investigate the effect of sample hydration, permitted variability in particle size, assumed photometric functions and use of different wavelength ranges on model results. Such studies will advance understanding of how to best apply radiative transfer modeling to geologically complex planetary surfaces. Corresponding authors: eyjolfur88@gmail.com, ehlmann@caltech.edu

  13. Applying mixture toxicity modelling to predict bacterial bioluminescence inhibition by non-specifically acting pharmaceuticals and specifically acting antibiotics.

    PubMed

    Neale, Peta A; Leusch, Frederic D L; Escher, Beate I

    2017-04-01

    Pharmaceuticals and antibiotics co-occur in the aquatic environment but mixture studies to date have mainly focused on pharmaceuticals alone or antibiotics alone, although differences in mode of action may lead to different effects in mixtures. In this study we used the Bacterial Luminescence Toxicity Screen (BLT-Screen) after acute (0.5 h) and chronic (16 h) exposure to evaluate how non-specifically acting pharmaceuticals and specifically acting antibiotics act together in mixtures. Three models were applied to predict mixture toxicity including concentration addition, independent action and the two-step prediction (TSP) model, which groups similarly acting chemicals together using concentration addition, followed by independent action to combine the two groups. All non-antibiotic pharmaceuticals had similar EC50 values at both 0.5 and 16 h, indicating together with a QSAR (Quantitative Structure-Activity Relationship) analysis that they act as baseline toxicants. In contrast, the antibiotics' EC50 values decreased by up to three orders of magnitude after 16 h, which can be explained by their specific effect on bacteria. Equipotent mixtures of non-antibiotic pharmaceuticals only, antibiotics only and both non-antibiotic pharmaceuticals and antibiotics were prepared based on the single chemical results. The mixture toxicity models were all in close agreement with the experimental results, with predicted EC50 values within a factor of two of the experimental results. This suggests that concentration addition can be applied to bacterial assays to model the mixture effects of environmental samples containing both specifically and non-specifically acting chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.
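    The concentration-addition prediction named above has a simple closed form: the reciprocal of the mixture EC50 is the fraction-weighted sum of the reciprocal component EC50s. A sketch with illustrative values, not the paper's measured EC50s:

```python
# Hedged sketch of the concentration-addition (CA) mixture prediction;
# the EC50 values below are illustrative, not from the study.

def ca_ec50(fractions, ec50s):
    """EC50 of a mixture under concentration addition.

    fractions -- proportion of each component in the mixture (sums to 1)
    ec50s     -- single-compound EC50s in the same concentration unit
    """
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(f / e for f, e in zip(fractions, ec50s))

# Hypothetical equipotent binary mixture of a baseline toxicant and an
# antibiotic whose 16-h EC50 is three orders of magnitude lower.
ec50_mix = ca_ec50([0.5, 0.5], [120.0, 0.12])
```

As the example shows, the most potent component dominates the CA prediction, which is why the antibiotics' large 16-h potency shift matters for the mixture models.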

  14. An empirical comparison of methods for analyzing correlated data from a discrete choice survey to elicit patient preference for colorectal cancer screening

    PubMed Central

    2012-01-01

    Background A discrete choice experiment (DCE) is a preference survey which asks participants to make a choice among product portfolios comparing the key product characteristics by performing several choice tasks. Analyzing DCE data needs to account for within-participant correlation because choices from the same participant are likely to be similar. In this study, we empirically compared some commonly-used statistical methods for analyzing DCE data while accounting for within-participant correlation based on a survey of patient preference for colorectal cancer (CRC) screening tests conducted in Hamilton, Ontario, Canada in 2002. Methods A two-stage DCE design was used to investigate the impact of six attributes on participants' preferences for CRC screening test and willingness to undertake the test. We compared six models for clustered binary outcomes (logistic and probit regressions using cluster-robust standard error (SE), random-effects and generalized estimating equation approaches) and three models for clustered nominal outcomes (multinomial logistic and probit regressions with cluster-robust SE and random-effects multinomial logistic model). We also fitted a bivariate probit model with cluster-robust SE treating the choices from two stages as two correlated binary outcomes. The rank of relative importance between attributes and the estimates of β coefficient within attributes were used to assess the model robustness. Results In total 468 participants with each completing 10 choices were analyzed. Similar results were reported for the rank of relative importance and β coefficients across models for stage-one data on evaluating participants' preferences for the test. The six attributes ranked from high to low as follows: cost, specificity, process, sensitivity, preparation and pain. However, the results differed across models for stage-two data on evaluating participants' willingness to undertake the tests. 
    Little within-patient correlation (ICC ≈ 0) was found in stage-one data, but substantial within-patient correlation existed (ICC = 0.659) in stage-two data. Conclusions When a small clustering effect was present in DCE data, results remained robust across statistical models. However, results varied when a larger clustering effect was present. Therefore, it is important to assess the robustness of the estimates via sensitivity analysis using different models for analyzing clustered data from DCE studies. PMID:22348526

  15. Analysis of brute-force break-ins of a palmprint authentication system.

    PubMed

    Kong, Adams W K; Zhang, David; Kamel, Mohamed

    2006-10-01

    Biometric authentication systems are widely applied because they offer inherent advantages over classical knowledge-based and token-based personal-identification approaches. This has led to the development of products using palmprints as biometric traits and their use in several real applications. However, as biometric systems are vulnerable to replay, database, and brute-force attacks, such potential attacks must be analyzed before biometric systems are massively deployed in security systems. This correspondence proposes a projected multinomial distribution for studying the probability of successfully using brute-force attacks to break into a palmprint system. To validate the proposed model, we have conducted a simulation. Its results demonstrate that the proposed model can accurately estimate the probability. The proposed model indicates that it is computationally infeasible to break into the palmprint system using brute-force attacks.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grove, John W.

    We investigate sufficient conditions for thermodynamic consistency for equilibrium mixtures. Such models assume that the mass fraction average of the material component equations of state, when closed by a suitable equilibrium condition, provide a composite equation of state for the mixture. Here, we show that the two common equilibrium models of component pressure/temperature equilibrium and volume/temperature equilibrium (Dalton, 1808) define thermodynamically consistent mixture equations of state and that other equilibrium conditions can be thermodynamically consistent provided appropriate values are used for the mixture specific entropy and pressure.

  17. Estimating and modeling the cure fraction in population-based cancer survival analysis.

    PubMed

    Lambert, Paul C; Thompson, John R; Weston, Claire L; Dickman, Paul W

    2007-07-01

    In population-based cancer studies, cure is said to occur when the mortality (hazard) rate in the diseased group of individuals returns to the same level as that expected in the general population. The cure fraction (the proportion of patients cured of disease) is of interest to patients and is a useful measure to monitor trends in survival of curable disease. There are two main types of cure fraction model, the mixture cure fraction model and the non-mixture cure fraction model, with most previous work concentrating on the mixture cure fraction model. In this paper, we extend the parametric non-mixture cure fraction model to incorporate background mortality, thus providing estimates of the cure fraction in population-based cancer studies. We compare the estimates of relative survival and the cure fraction between the two types of model and also investigate the importance of modeling the ancillary parameters in the selected parametric distribution for both types of model.
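    The two formulations contrasted above can be written down directly. With a Weibull distribution for the uncured group, both share the cure fraction pi as their long-time asymptote; the parameter values here are illustrative, and background mortality is omitted for brevity:

```python
import math

# Hedged sketch of the two cure-fraction formulations, with a Weibull
# survival for the uncured group; pi, lam, gamma are illustrative.

def weibull_surv(t, lam, gamma):
    """Weibull survival function S_u(t) for the uncured group."""
    return math.exp(-((t / lam) ** gamma))

def mixture_cure_surv(t, pi, lam, gamma):
    """Mixture model: a cured fraction pi never experiences the event."""
    return pi + (1.0 - pi) * weibull_surv(t, lam, gamma)

def non_mixture_cure_surv(t, pi, lam, gamma):
    """Non-mixture model: S(t) = pi ** F_u(t); asymptote is also pi."""
    return pi ** (1.0 - weibull_surv(t, lam, gamma))

pi, lam, gamma = 0.4, 2.0, 1.3
times = (0.0, 1.0, 50.0)
s_mix = [mixture_cure_surv(t, pi, lam, gamma) for t in times]
s_non = [non_mixture_cure_surv(t, pi, lam, gamma) for t in times]
```

Both curves start at 1 and level off at the cure fraction; they differ in shape before the plateau, which is where the choice of type and of ancillary parameters matters.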

  18. Process dissociation and mixture signal detection theory.

    PubMed

    DeCarlo, Lawrence T

    2008-11-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely analyzed study. The results suggest that a process other than recollection may be involved in the process dissociation procedure.

  19. Statistical-thermodynamic model for light scattering from eye lens protein mixtures

    NASA Astrophysics Data System (ADS)

    Bell, Michael M.; Ross, David S.; Bautista, Maurino P.; Shahmohamad, Hossein; Langner, Andreas; Hamilton, John F.; Lahnovych, Carrie N.; Thurston, George M.

    2017-02-01

    We model light-scattering cross sections of concentrated aqueous mixtures of the bovine eye lens proteins γB- and α-crystallin by adapting a statistical-thermodynamic model of mixtures of spheres with short-range attractions. The model reproduces measured static light scattering cross sections, or Rayleigh ratios, of γB-α mixtures from dilute concentrations where light scattering intensity depends on molecular weights and virial coefficients, to realistically high concentration protein mixtures like those of the lens. The model relates γB-γB and γB-α attraction strengths and the γB-α size ratio to the free energy curvatures that set light scattering efficiency in tandem with protein refractive index increments. The model includes (i) hard-sphere α-α interactions, which create short-range order and transparency at high protein concentrations, (ii) short-range attractive plus hard-core γ-γ interactions, which produce intense light scattering and liquid-liquid phase separation in aqueous γ-crystallin solutions, and (iii) short-range attractive plus hard-core γ-α interactions, which strongly influence highly non-additive light scattering and phase separation in concentrated γ-α mixtures. The model reveals a new lens transparency mechanism, that prominent equilibrium composition fluctuations can be perpendicular to the refractive index gradient. The model reproduces the concave-up dependence of the Rayleigh ratio on α/γ composition at high concentrations, its concave-down nature at intermediate concentrations, non-monotonic dependence of light scattering on γ-α attraction strength, and more intricate, temperature-dependent features. We analytically compute the mixed virial series for light scattering efficiency through third order for the sticky-sphere mixture, and find that the full model represents the available light scattering data at concentrations several times those where the second and third mixed virial contributions fail. 
The model indicates that increased γ-γ attraction can raise γ-α mixture light scattering far more than it does for solutions of γ-crystallin alone, and can produce marked turbidity tens of degrees celsius above liquid-liquid separation.

  20. A new strategy to analyze possible association structures between dynamic nocturnal hormone activities and sleep alterations in humans.

    PubMed

    Kalus, Stefanie; Kneib, Thomas; Steiger, Axel; Holsboer, Florian; Yassouridis, Alexander

    2009-04-01

    The human sleep process shows dynamic alterations during the night. Methods are needed to examine whether and to what extent such alterations are affected by internal, possibly time-dependent, factors, such as endocrine activity. In an observational study, we examined simultaneously sleep EEG and nocturnal levels of renin, growth hormone (GH), and cortisol (between 2300 and 0700) in 47 healthy volunteers comprising 24 women (41.67 +/- 2.93 yr of age) and 23 men (37.26 +/- 2.85 yr of age). Hormone concentrations were measured every 20 min. Conventional sleep stage scoring at 30-s intervals was applied. Semiparametric multinomial logit models are used to study and quantify possible time-dependent hormone effects on sleep stage transition courses. Results show that increased cortisol levels decrease the probability of transition from rapid-eye-movement (REM) sleep to wakefulness (WAKE) and increase the probability of transition from REM to non-REM (NREM) sleep, irrespective of the time in the night. Via the model selection criterion Akaike's information criterion, it was found that all considered hormone effects on transition probabilities with the initial state WAKE change with time. Similarly, transition from slow-wave sleep (SWS) to light sleep (LS) is affected by a "hormone-time" interaction for cortisol and renin, but not GH. For example, there is a considerable increase in the probability of SWS-LS transition toward the end of the night, when cortisol concentrations are very high. In summary, alterations in human sleep possess dynamic forms and are partially influenced by the endocrine activity of certain hormones. Statistical methods, such as semiparametric multinomial and time-dependent logit regression, can offer ambitious ways to investigate and estimate the association intensities between the nonstationary sleep changes and the time-dependent endocrine activities.

  1. Expert Elicitation of Multinomial Probabilities for Decision-Analytic Modeling: An Application to Rates of Disease Progression in Undiagnosed and Untreated Melanoma.

    PubMed

    Wilson, Edward C F; Usher-Smith, Juliet A; Emery, Jon; Corrie, Pippa G; Walter, Fiona M

    2018-06-01

Expert elicitation is required to inform decision making when relevant "better quality" data either do not exist or cannot be collected. An example is informing decisions on whether to screen for melanoma. A key input is the counterfactual, in this case the natural history of melanoma in patients who are undiagnosed and hence untreated. Our objective was to elicit expert opinion on the probability of disease progression in patients with melanoma that is undetected and hence untreated. A bespoke webinar-based expert elicitation protocol was administered to 14 participants in the United Kingdom, Australia, and New Zealand, comprising 12 multinomial questions on the probability of progression from one disease stage to another in the absence of treatment. A modified Connor-Mosimann distribution was fitted to individual responses to each question. Individual responses were pooled using a Monte Carlo simulation approach. Participants were asked to provide feedback on the process. A pooled modified Connor-Mosimann distribution was successfully derived from participants' responses. Feedback from participants was generally positive, with 86% willing to take part in such an exercise again. Nevertheless, only 57% of participants felt that this was a valid approach to determining the risk of disease progression. Qualitative feedback reflected some understanding of the need to rely on expert elicitation in the absence of "hard" data. We successfully elicited and pooled the beliefs of experts in melanoma regarding the probability of disease progression in a format suitable for inclusion in a decision-analytic model. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
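The Monte Carlo pooling step can be sketched with ordinary Dirichlet distributions standing in for the modified Connor-Mosimann fits; the expert parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical elicited distributions from three experts for a 3-outcome
# multinomial question (e.g. regress / stable / progress). The paper fits
# modified Connor-Mosimann distributions; a plain Dirichlet is used here
# as a simpler stand-in with assumed parameters.
experts = [np.array([8.0, 3.0, 1.0]),
           np.array([6.0, 4.0, 2.0]),
           np.array([7.0, 2.0, 3.0])]

draws_per_expert = 10_000
# Linear opinion pool: sample each expert's distribution equally often
# and concatenate the draws into one pooled sample.
pooled = np.vstack([rng.dirichlet(a, size=draws_per_expert) for a in experts])

pooled_mean = pooled.mean(axis=0)   # pooled point estimate of the probabilities
```

Each row of `pooled` is a probability vector, so the pooled sample can feed directly into a probabilistic decision-analytic model.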

  2. Can We Determine Sasang Constitutional Body Type Merely by Facial Inspection?

    PubMed

    Rhee, Seung Chul; Bae, Hyo-Sang; Lee, Yung-Seop; Hwang, Rahil

    2017-05-01

This study aimed to assess the inter-observer concordance rate of anthroscopic examination of facial features among experts in Sasang constitutional medicine (SCM), to evaluate whether there are statistical differences in facial structural characteristics among the body types of Sasang constitution (SC), and to develop an objective method of facial analysis for diagnosing SC types that protects SCM experts from misdiagnoses caused by perceptual errors about faces. This was a double-blinded cross-sectional study of the faces of 174 people. Ten SCM experts participated in this study. Frontal and lateral photographs of subjects were standardized and displayed to the 10 SCM experts, who diagnosed the SC type by anthroscopic examination alone (experiment 1). The subjects' faces were analyzed by a photogrammetric method to investigate whether the faces have typical structural characteristics that differentiate SC type (experiment 2). Comparing subjects' SC type with the anthroscopic diagnoses of the 10 SCM experts, the inter-observer concordance rates were measured (experiment 1). Using photogrammetric facial analysis, a multinomial logistic model was constructed to analyze the correlation between SC type and the subjects' facial structural configuration (experiment 2). The inter-observer concordance rate of anthroscopic examination was 2.9% in experiment 1. Using the multinomial logistic fitting model, the predicted probability of determining SC type was 52.8-57.6% in experiment 2 (p < 0.05). Prototype composite faces were also created from photographs of subjects who received the same SC type from the SCM experts. As SC type cannot be precisely diagnosed using anthroscopic examination alone, SCM needs a definitive, objective, and scientific diagnostic method if it is to be a scientifically verified alternative medicine and be adopted globally in the future.

  3. Quality of life of patients from rural and urban areas in Poland with head and neck cancer treated with radiotherapy. A study of the influence of selected socio-demographic factors

    PubMed Central

    Jewczak, Maciej; Skura-Madziała, Anna

    2017-01-01

Introduction: The quality of life (QoL) experienced by cancer patients depends both on their state of health and on sociodemographic factors. Tumours in the head and neck region have a particularly adverse effect on patients psychologically and on their social functioning. Material and methods: The study involved 121 patients receiving radiotherapy treatment for head and neck cancers. They included 72 urban and 49 rural residents. QoL was assessed using the questionnaires EORTC-QLQ-C30 and QLQ-H&N35. The data were analysed using statistical methods: a χ2 test for independence and a multinomial logit model. Results: The evaluation of QoL showed a strong, statistically significant, positive dependence on state of health, and a weak dependence on sociodemographic factors and place of residence. Evaluations of financial situation and living conditions were similar for rural and urban residents. Patients from urban areas had the greatest anxiety about deterioration of their state of health. Rural respondents were more often anxious about a worsening of their financial situation, and expressed a fear of loneliness. Conclusions: Studying the QoL of patients with head and neck cancer provides information concerning the areas in which the disease inhibits their lives, and the extent to which it does so. It indicates conditions for the adaptation of treatment and care methods in the healthcare system which might improve the QoL of such patients. A multinomial logit model identifies the factors determining the patients’ health assessment and defines the probable values of such assessment. PMID:29181080
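The χ2 test of independence used in the study can be reproduced on a hypothetical residence-by-rating table; the cell counts below are illustrative, chosen only to match the 72 urban / 49 rural split, not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table: place of residence (urban/rural)
# versus a three-level quality-of-life rating (low/medium/high).
table = np.array([[20, 30, 22],    # urban (sums to 72)
                  [18, 19, 12]])   # rural (sums to 49)

chi2, p_value, dof, expected = chi2_contingency(table)
# dof = (rows-1)*(cols-1) = 2; a p_value > 0.05 would indicate no evidence
# that the QoL rating depends on place of residence.
```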

  4. Frequencies of apolipoprotein E alleles in depressed patients undergoing hemodialysis--a case-control study.

    PubMed

    Su, Yan-yan; Zhang, Yun-fang; Yang, Shen; Wang, Jie-lin; Hua, Bao-jun; Luo, Jie; Wang, Qi; Zeng, De-wang; Lin, Yan-qun; Li, Hong-yan

    2015-06-01

Our aim was to explore the relation between the frequencies of apolipoprotein E (ApoE) alleles and the occurrence of depression in patients undergoing hemodialysis in a Chinese population. We examined the ApoE alleles in a sample of 288 subjects: 72 patients with depression under hemodialysis, 74 patients without depression under hemodialysis, 75 patients with depression under nondialytic treatment and 67 patients without depression under nondialytic treatment. The depression state was assessed using the Center for Epidemiological Studies Depression (CES-D) scale. Associations between the occurrence of depression and the frequencies of ApoE alleles were examined using multinomial logistic regression models with adjustment for relevant covariates. Information about sociodemographics, clinical data, vascular risk factors and cognitive function was also collected and evaluated. The frequencies of ApoE-ɛ2 were significantly different between depressed and non-depressed patients irrespective of dialysis (p < 0.05), but no significant difference was found in the frequencies of ApoE-ɛ4 (p > 0.05). Serum ApoE levels were significantly different between depressed and non-depressed patients in the whole sample (p < 0.05). Multinomial logistic regression models showed a significant association between the frequency of ApoE-ɛ2 and the occurrence of depression in the Chinese population after controlling for relevant covariates, including age, sex, educational level, history of smoking and drinking, vascular risk factors and cognitive function. No association between the frequency of ApoE-ɛ4 and the occurrence of depression was found in patients undergoing hemodialysis. Further research is needed to find out whether ApoE-ɛ2 acts as a protective factor in the Chinese dialysis population, since it might decrease the prevalence of depression and delay the onset age.

  5. Constipation and Incident CKD

    PubMed Central

    Sumida, Keiichi; Molnar, Miklos Z.; Potukuchi, Praveen K.; Thomas, Fridtjof; Lu, Jun Ling; Matsushita, Kunihiro; Yamagata, Kunihiro; Kalantar-Zadeh, Kamyar

    2017-01-01

    Constipation is one of the most prevalent conditions in primary care settings and increases the risk of cardiovascular disease, potentially through processes mediated by altered gut microbiota. However, little is known about the association of constipation with CKD. In a nationwide cohort of 3,504,732 United States veterans with an eGFR ≥60 ml/min per 1.73 m2, we examined the association of constipation status and severity (absent, mild, or moderate/severe), defined using diagnostic codes and laxative use, with incident CKD, incident ESRD, and change in eGFR in Cox models (for time-to-event analyses) and multinomial logistic regression models (for change in eGFR). Among patients, the mean (SD) age was 60.0 (14.1) years old; 93.2% of patients were men, and 24.7% were diabetic. After multivariable adjustments, compared with patients without constipation, patients with constipation had higher incidence rates of CKD (hazard ratio, 1.13; 95% confidence interval [95% CI], 1.11 to 1.14) and ESRD (hazard ratio, 1.09; 95% CI, 1.01 to 1.18) and faster eGFR decline (multinomial odds ratios for eGFR slope <−10, −10 to <−5, and −5 to <−1 versus −1 to <0 ml/min per 1.73 m2 per year, 1.17; 95% CI, 1.14 to 1.20; 1.07; 95% CI, 1.04 to 1.09; and 1.01; 95% CI, 1.00 to 1.03, respectively). More severe constipation associated with an incrementally higher risk for each renal outcome. In conclusion, constipation status and severity associate with higher risk of incident CKD and ESRD and with progressive eGFR decline, independent of known risk factors. Further studies should elucidate the underlying mechanisms. PMID:28122944

  6. LGALS4, CEACAM6, TSPAN8, and COL1A2: Blood Markers for Colorectal Cancer-Validation in a Cohort of Subjects With Positive Fecal Immunochemical Test Result.

    PubMed

    Rodia, Maria Teresa; Solmi, Rossella; Pasini, Francesco; Nardi, Elena; Mattei, Gabriella; Ugolini, Giampaolo; Ricciardiello, Luigi; Strippoli, Pierluigi; Miglio, Rossella; Lauriola, Mattia

    2018-06-01

    A noninvasive blood test for the early detection of colorectal cancer (CRC) is highly required. We evaluated a panel of 4 mRNAs as putative markers of CRC. We tested LGALS4, CEACAM6, TSPAN8, and COL1A2, referred to as the CELTiC panel, using quantitative reverse transcription polymerase chain reaction, on subjects with positive fecal immunochemical test (FIT) results and undergoing colonoscopy. Using a nonparametric test and multinomial logistic model, FIT-positive subjects were compared with CRC patients and healthy individuals. All the genes of the CELTiC panel displayed statistically significant differences between the healthy subjects (n = 67), both low-risk (n = 36) and high-risk/CRC (n = 92) subjects, and those in the negative-colonoscopy, FIT-positive group (n = 36). The multinomial logistic model revealed LGALS4 was the most powerful marker discriminating the 4 groups. When assessing the diagnostic values by analysis of the areas under the receiver operating characteristic curves (AUCs), the CELTiC panel reached an AUC of 0.91 (sensitivity, 79%; specificity, 94%) comparing normal subjects to low-risk subjects, and 0.88 (sensitivity, 75%; specificity, 87%) comparing normal and high-risk/CRC subjects. The comparison between the normal subjects and the negative-colonoscopy, FIT-positive group revealed an AUC of 0.93 (sensitivity, 82%; specificity, 97%). The CELTiC panel could represent a useful tool for discriminating subjects with positive FIT findings and for the early detection of precancerous adenomatous lesions and CRC. Copyright © 2017 Elsevier Inc. All rights reserved.
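A minimal sketch of the AUC evaluation used for the panel, with synthetic markers and labels standing in for the CELTiC expression data and study groups:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic stand-in for a two-group comparison (e.g. healthy versus
# high-risk/CRC) using four expression markers; the real panel, effect
# sizes, and sample sizes differ.
n = 200
y = rng.integers(0, 2, size=n)                  # group label
X = rng.normal(size=(n, 4)) + 0.8 * y[:, None]  # markers shift with disease

clf = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
# In-sample AUC for illustration only; the study reports AUCs of 0.88-0.93
# for its between-group comparisons.
```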

  7. A Walk (or Cycle) to the Park: Active Transit to Neighborhood Amenities, the CARDIA Study

    PubMed Central

    Boone-Heinonen, Janne; Jacobs, David R.; Sidney, Stephen; Sternfeld, Barbara; Lewis, Cora E.; Gordon-Larsen, Penny

    2009-01-01

    Background Building on known associations between active commuting and reduced cardiovascular disease (CVD) risk, this study examines active transit to neighborhood amenities and differences between walking versus cycling for transportation. Method Year 20 data from the Coronary Artery Risk Development in Young Adults (CARDIA) study (3549 black and white adults aged 38–50 years in 2005–06) were analyzed in 2008–2009. Sociodemographic correlates of transportation mode (car-only, walk-only, any cycling, other) to neighborhood amenities were examined in multivariable multinomial logistic models. Gender-stratified, multivariable linear or multinomial regression models compared CVD risk factors across transit modes. Results Active transit was most common to parks and public transit stops; walking was more common than cycling. Among those who used each amenity, active transit (walk-only and any cycling versus car-only transit) was more common in men and those with no live-in partner and less than full-time employment [significant OR's (95% CI) ranging from 1.56 (1.08, 2.27) to 4.52 (1.70, 12.14)], and less common in those with children. Active transit to any neighborhood amenity was associated with more favorable BMI, waist circumference, and fitness [largest coefficient (95% CI) −1.68 (−2.81, −0.55) for BMI, −3.41 (−5.71, −1.11) for waist circumference (cm), and 36.65 (17.99, 55.31) for treadmill test duration (sec)]. Only cycling was associated with lower lifetime CVD risk classification. Conclusion Active transit to neighborhood amenities was related to sociodemographics and CVD risk factors. Variation in health-related benefits by active transit mode, if validated in prospective studies, may have implications for transportation planning and research. PMID:19765499

  8. A cross-sectional study of the association of age, race and ethnicity, and body mass index with sex steroid hormone marker profiles among men in the National Health and Nutrition Examination Survey (NHANES III)

    PubMed Central

    Ritchey, Jamie; Karmaus, Wilfried; Sabo-Attwood, Tara; Steck, Susan E; Zhang, Hongmei

    2012-01-01

Objectives: Since sex hormone markers are metabolically linked, examining sex steroid hormones singly may account for inconsistent findings by age, race/ethnicity and body mass index (BMI) across studies. First, these markers were statistically combined into profiles to account for the metabolic relationship between markers. Then, the relationships between sex steroid hormone profiles and age, race/ethnicity and BMI were explored in multinomial logistic regression models. Design: Cross-sectional survey. Setting: The US Third National Health and Nutrition Examination Survey (NHANES III). Participants: 1538 men, aged >17 years. Primary outcome measure: Sex hormone profiles. Results: Cluster analysis was used to identify four statistically determined profiles with Blom-transformed T, E, sex hormone binding globulin (SHBG), and 3-α diol G. We used these four profiles in multinomial logistic regression models to examine differences by race/ethnicity, age and BMI. Mexican American men >50 years were associated with the profile that had the lowest T, E and 3-α diol G levels compared with the other profiles (p<0.05). Non-Hispanic Black, overweight (25–29.9 kg/m2) and obese (>30 kg/m2) men were most likely to be associated with the cluster with the lowest SHBG (p<0.05). Conclusion: The associations of sex steroid hormone profiles by race/ethnicity are novel, while the findings by age and BMI groups are largely consistent with observations from single hormone studies. Future studies should validate these hormone profile groups and investigate these profiles in relation to chronic diseases and certain cancers. PMID:23043125
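The Blom transform followed by clustering can be sketched as below; the abstract does not name the clustering algorithm, so k-means is used here as a common stand-in, applied to synthetic hormone values:

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

def blom(x):
    """Blom rank-based inverse-normal transform: z = Phi^-1((r - 3/8)/(n + 1/4))."""
    r = rankdata(x)
    return norm.ppf((r - 0.375) / (len(x) + 0.25))

# Synthetic stand-in for four hormone markers (T, E, SHBG, 3-alpha diol G);
# lognormal draws mimic the skewness of raw hormone concentrations.
raw = rng.lognormal(mean=0.0, sigma=1.0, size=(300, 4))
Z = np.column_stack([blom(raw[:, j]) for j in range(raw.shape[1])])

# Form four statistically determined profiles, mirroring the study's
# four-cluster solution (the exact algorithm there may differ).
profiles = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
```

The Blom transform makes each marker approximately standard normal before clustering, so no single hormone dominates the distance metric.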

  9. Toxicity interactions between manganese (Mn) and lead (Pb) or cadmium (Cd) in a model organism the nematode C. elegans.

    PubMed

    Lu, Cailing; Svoboda, Kurt R; Lenz, Kade A; Pattison, Claire; Ma, Hongbo

    2018-06-01

Manganese (Mn) is considered an emerging metal contaminant in the environment. However, its potential interactions with accompanying toxic metals, and the associated mixture effects, are largely unknown. Here, we investigated the toxicity interactions between Mn and two commonly co-occurring toxic metals, Pb and Cd, in the model organism the nematode Caenorhabditis elegans. The acute lethal toxicity of mixtures of Mn+Pb and Mn+Cd was first assessed using a toxic unit model. Multiple toxicity endpoints, including reproduction, lifespan, stress response, and neurotoxicity, were then examined to evaluate the mixture effects at sublethal concentrations. Stress response was assessed using a daf-16::GFP transgenic strain that expresses GFP under the control of the DAF-16 promoter. Neurotoxicity was assessed using a dat-1::GFP transgenic strain that expresses GFP in dopaminergic neurons. The mixture of Mn+Pb induced a more-than-additive (synergistic) lethal toxicity in the worm, whereas the mixture of Mn+Cd induced a less-than-additive (antagonistic) toxicity. Mixture effects on sublethal toxicity showed more complex patterns and depended on the toxicity endpoints as well as the modes of toxic action of the metals. The mixture of Mn+Pb induced additive effects on both reproduction and lifespan, whereas the mixture of Mn+Cd induced additive effects on lifespan but not reproduction. Both mixtures appeared to induce additive effects on stress response and neurotoxicity, although a quantitative assessment was not possible because single concentrations were used in the mixture tests. Our findings demonstrate the complexity of metal interactions and the associated mixture effects. Assessment of metal mixture toxicity should take into consideration the unique properties of individual metals, their potential toxicity mechanisms, and the toxicity endpoints examined.
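The toxic unit model used for the acute lethal toxicity assessment can be sketched with hypothetical EC50s (all values below are illustrative, not the study's):

```python
import numpy as np

def toxic_units(conc, ec50):
    """Sum of toxic units for a mixture: TU = sum(c_i / EC50_i)."""
    return float(np.sum(np.asarray(conc) / np.asarray(ec50)))

# Hypothetical single-metal EC50s (arbitrary units) and a mixture dosed
# at half of each metal's EC50.
ec50 = [10.0, 2.0]            # e.g. Mn and Pb; values are illustrative
mix = [5.0, 1.0]
tu = toxic_units(mix, ec50)   # 1.0 -> additivity predicts the 50% effect level

# Interpretation rule used with the model: if the observed mixture effect
# exceeds the prediction at TU = 1, the interaction is more-than-additive
# (synergistic); if it falls short, less-than-additive (antagonistic).
```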

  10. Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity

    NASA Astrophysics Data System (ADS)

    Chen, Hsieh; Panagiotopoulos, Athanassios Z.

    2018-01-01

    We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.
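The abstract does not give the combining rule's functional form; a simple mole-fraction-weighted average is sketched below as one assumed possibility, with illustrative permittivity values:

```python
def mixture_permittivity(x, eps_pure):
    """Mole-fraction-weighted combining rule for the dielectric permittivity
    of an electrolyte mixture. This linear form is an assumption for
    illustration; the paper's actual rule is not specified in the abstract."""
    assert abs(sum(x) - 1.0) < 1e-9, "mole fractions must sum to one"
    return sum(xi * ei for xi, ei in zip(x, eps_pure))

# Illustrative numbers: concentration-dependent permittivities of the two
# pure electrolyte solutions (e.g. NaCl and CaCl2 brines) at the mixture's
# total ionic strength, combined at 70/30 mole fractions.
eps = mixture_permittivity([0.7, 0.3], [68.0, 60.0])   # about 65.6
```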

  11. Mixture IRT Model with a Higher-Order Structure for Latent Traits

    ERIC Educational Resources Information Center

    Huang, Hung-Yu

    2017-01-01

    Mixture item response theory (IRT) models have been suggested as an efficient method of detecting the different response patterns derived from latent classes when developing a test. In testing situations, multiple latent traits measured by a battery of tests can exhibit a higher-order structure, and mixtures of latent classes may occur on…

  12. Beta Regression Finite Mixture Models of Polarization and Priming

    ERIC Educational Resources Information Center

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  13. Predicting mixture toxicity of seven phenolic compounds with similar and dissimilar action mechanisms to Vibrio qinghaiensis sp.nov.Q67.

    PubMed

    Huang, Wei Ying; Liu, Fei; Liu, Shu Shen; Ge, Hui Lin; Chen, Hong Han

    2011-09-01

The prediction of mixture toxicity for chemicals is commonly based on two models: concentration addition (CA) and independent action (IA). We studied whether CA and IA can predict the mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms. The mixture toxicity was predicted on the basis of the concentration-response data of the individual compounds. Test mixtures at different concentration ratios and concentration levels were designed using two methods. The results showed that the Weibull function fit the concentration-response data of all the components and their mixtures well, with all correlation coefficients (Rs) greater than 0.99 and root mean squared errors (RMSEs) less than 0.04. The predicted values from the CA and IA models conformed to the observed values of the mixtures. Therefore, it can be concluded that both CA and IA can reliably predict the mixture toxicity of phenolic compounds with similar and dissimilar action mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.
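The CA and IA predictions can be sketched from fitted Weibull concentration-response curves; the parameters, concentration ratios, and ECx values below are illustrative, not the study's:

```python
import numpy as np

def weibull_effect(c, a, b):
    """Weibull concentration-response: E(c) = 1 - exp(-exp(a + b*log10(c)))."""
    return 1.0 - np.exp(-np.exp(a + b * np.log10(c)))

def ia_effect(concs, params):
    """Independent action: E_mix = 1 - prod_i (1 - E_i(c_i))."""
    surv = [1.0 - weibull_effect(c, a, b) for c, (a, b) in zip(concs, params)]
    return 1.0 - float(np.prod(surv))

def ca_ecx(p, ecx):
    """Concentration addition for a fixed-ratio mixture:
    1 / ECx_mix = sum_i (p_i / ECx_i), with p_i the concentration ratios."""
    return 1.0 / sum(pi / ei for pi, ei in zip(p, ecx))

# Two hypothetical phenolic compounds with illustrative Weibull parameters.
params = [(2.0, 1.5), (1.0, 1.2)]
e_mix = ia_effect([0.05, 0.05], params)          # IA-predicted mixture effect
ec50_mix = ca_ecx([0.5, 0.5], [0.04, 0.10])      # CA-predicted mixture EC50
```

CA assumes the components act by a similar mechanism (dilutions of one another), while IA assumes independent mechanisms; comparing both predictions against the observed mixture response is the usual diagnostic.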

  14. Dental caries clusters among adolescents.

    PubMed

    Warren, John J; Van Buren, John M; Levy, Steven M; Marshall, Teresa A; Cavanaugh, Joseph E; Curtis, Alexandra M; Kolker, Justine L; Weber-Gasparoni, Karin

    2017-12-01

There have been very few longitudinal studies of dental caries in adolescents, and little study of the caries risk factors in this age group. The purpose of this study was to describe different caries trajectories and associated risk factors among members of the Iowa Fluoride Study (IFS) cohort. The IFS recruited a birth cohort from 1992 to 1995, and has gathered dietary, fluoride and behavioural data at least twice yearly since recruitment. Examinations for dental caries were completed when participants were ages 5, 9, 13 and 17 years. For this study, only participants with decayed and filled surface (DFS) caries data at ages 9, 13 and 17 were included (N=396). The individual DFS counts at age 13 and the DFS increment from 13 to 17 were used to identify distinct caries trajectories using Ward's hierarchical clustering algorithm. A number of multinomial logistic regression models were developed to predict trajectory membership, using longitudinal dietary, fluoride and demographic/behavioural data from 9 to 17 years. Model selection was based on the Akaike information criterion (AIC). Several different trajectory schemes were considered, and a three-trajectory scheme (no DFS at age 17, n=142; low DFS, n=145; high DFS, n=109) was chosen to balance sample sizes and interpretability. The model selection process resulted in use of an arithmetic average for dietary variables across the period from 9 to 17 years. The multinomial logistic regression model with the best fit included the variables maternal education level, 100% juice consumption, brushing frequency and sex. Other favoured models also included water and milk consumption and home water fluoride concentration. The high caries cluster was most consistently associated with lower maternal education level, lower 100% juice consumption, lower brushing frequency and being female.
The use of a clustering algorithm, together with the AIC to determine the best representation of the data, proved useful for presenting longitudinal caries data. Findings suggest that high caries incidence in adolescence is associated with lower maternal educational level, less frequent tooth brushing, lower 100% juice consumption and being female. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
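The Ward clustering step can be sketched with synthetic DFS data; the real analysis clustered the age-13 count and the 13-to-17 increment for 396 participants, whereas the counts below are simulated:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)

# Synthetic stand-in for the two clustering variables used in the study:
# DFS count at age 13 and the DFS increment from 13 to 17.
dfs13 = rng.poisson(2.0, size=150)
dfs_incr = rng.poisson(3.0, size=150)
X = np.column_stack([dfs13, dfs_incr]).astype(float)

# Ward's hierarchical clustering, cut into three groups to mirror the
# three-trajectory scheme (no / low / high DFS) chosen in the study.
Z = linkage(X, method="ward")
groups = fcluster(Z, t=3, criterion="maxclust")
```

The resulting group labels would then serve as the outcome of the multinomial logistic regression models described above.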

  15. Mixture optimization for mixed gas Joule-Thomson cycle

    NASA Astrophysics Data System (ADS)

    Detlor, J.; Pfotenhauer, J.; Nellis, G.

    2017-12-01

An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters, and expands on prior research by exploring higher heat rejection temperatures and lower pressure ratios. A mixture optimization model has been developed which determines an optimal three-component mixture by maximizing the minimum isothermal enthalpy change, ΔhT, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed gas JT system with load temperatures down to 110 K and supply temperatures above room temperature for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the fraction of the heat exchanger that operates in the two-phase region, in order to begin the process of selecting a mixture for experimental investigation.
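The max-min ΔhT criterion can be sketched with a toy surrogate for the enthalpy-difference curve; a real implementation would compute ΔhT(T) = h(T, P_low) - h(T, P_high) from an equation of state for each candidate composition:

```python
import numpy as np

# Load-to-supply temperature range from the study (110 K to room temperature).
T = np.linspace(110.0, 300.0, 50)

def dh_T(x, T):
    """Stand-in Delta h_T(T) curve for a composition parameter x.
    Purely illustrative: peaks at an interior composition and sags at the
    temperature extremes, as mixture enthalpy-difference curves often do."""
    return 5.0 + 10.0 * x * (1.0 - x) - 0.0001 * (T - 200.0) ** 2

# For each candidate composition, score it by the *minimum* Delta h_T over
# the temperature range, then keep the composition that maximizes that
# minimum (the bottleneck determines the cycle's guaranteed cooling power).
candidates = np.linspace(0.1, 0.9, 9)   # mole fraction of one component
scores = [dh_T(x, T).min() for x in candidates]
best = candidates[int(np.argmax(scores))]
```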

  16. Existence, uniqueness and positivity of solutions for BGK models for mixtures

    NASA Astrophysics Data System (ADS)

    Klingenberg, C.; Pirner, M.

    2018-01-01

We consider kinetic models for a multi-component gas mixture without chemical reactions. In the literature, one finds two types of BGK models for describing gas mixtures. One type has a sum of BGK-type interaction terms in the relaxation operator, for example the model described by Klingenberg, Pirner and Puppo [20], which contains well-known models of physicists and engineers, for example Hamel [16] and Gross and Krook [15], as special cases. The other type contains only one collision term on the right-hand side, for example the well-known model of Andries, Aoki and Perthame [1]. For each of these two models [20] and [1], we prove existence, uniqueness and positivity of solutions in the first part of the paper. In the second part, we use the first model [20] to determine an unknown function in the energy exchange of the macroscopic equations for gas mixtures described by Dellacherie [11].

  17. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models.

    PubMed

    Teng, S; Tebby, C; Barcellini-Couget, S; De Sousa, G; Brochot, C; Rahmani, R; Pery, A R R

    2016-08-15

Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where the concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no other interactions were observed, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    NASA Astrophysics Data System (ADS)

    Sardet, Laure; Patilea, Valentin

When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and are well adapted to capture the skewness, the long tails and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.
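The transform, smooth, and back-transform pipeline can be sketched as follows; to keep the sketch short, a single lognormal (rather than a two- or three-component mixture) plays the role of the parsimonious parametric stage, and the bandwidth is an arbitrary choice rather than the paper's rule:

```python
import numpy as np
from scipy.stats import beta, lognorm

rng = np.random.default_rng(11)

# Stand-in claims sample and a one-component lognormal as the parametric
# stage (the paper uses a parsimonious two- or three-component mixture).
claims = rng.lognormal(mean=8.0, sigma=1.2, size=500)
shape, loc, scale = lognorm.fit(claims, floc=0.0)

# Step 1: transform the data to (0, 1) through the fitted CDF.
u = lognorm.cdf(claims, shape, loc=loc, scale=scale)

def beta_kernel_density(x, data, b):
    """Chen-style beta-kernel density estimate on [0, 1] at point x,
    averaging Beta(x/b + 1, (1-x)/b + 1) densities over the data."""
    return beta.pdf(data, x / b + 1.0, (1.0 - x) / b + 1.0).mean()

# Steps 2-3: smooth on the unit scale, then back-transform by the change
# of variables f(q) = g(F(q)) * F'(q).
q = np.median(claims)
uq = lognorm.cdf(q, shape, loc=loc, scale=scale)
f_q = beta_kernel_density(uq, u, b=0.05) * lognorm.pdf(q, shape, loc=loc, scale=scale)
```

If the parametric stage fit perfectly, the transformed sample `u` would be uniform and the beta-kernel estimate flat; deviations from flatness are exactly the "fine tuning" the nonparametric stage contributes.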

  19. Finite mixture modeling for vehicle crash data with application to hotspot identification.

    PubMed

    Park, Byung-Jung; Lord, Dominique; Lee, Chungwon

    2014-10-01

The application of finite mixture regression models has recently gained interest among highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations of a sample arise from two or more unobserved components with unknown proportions. Both fixed and varying weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study, using observed and simulated data, investigated the relative performance of the finite mixture model and the traditional negative binomial (NB) model in terms of hotspot identification. For the observed data, rural multilane segment crash data for divided highways in California and Texas were used. The results showed that the difference measured by the percentage deviation in ranking orders was relatively small for this dataset. Nevertheless, the ranking results from the finite mixture model were considered more reliable than those from the NB model because of the better model specification. This finding was also supported by the simulation study, which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding an optimal threshold value for identifying hotspots, another simulation analysis indicated a tension between false discovery rates (increasing) and false negative rates (decreasing). Since the costs associated with false positives and false negatives are different, it is suggested that the selected optimal threshold value should be decided by considering the trade-offs between these two costs so that unnecessary expenses are minimized. Copyright © 2014 Elsevier Ltd. All rights reserved.
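A two-component count mixture fitted by EM illustrates the idea behind the finite mixture model; the study fits finite mixture negative binomial regression models, of which the intercept-only Poisson mixture below is a simplified stand-in on synthetic crash counts:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)

# Synthetic crash counts from two latent site populations (low/high risk).
counts = np.concatenate([rng.poisson(1.0, 300), rng.poisson(6.0, 100)])

w, lam = np.array([0.5, 0.5]), np.array([0.5, 5.0])   # initial guesses
for _ in range(200):                                   # EM iterations
    # E-step: responsibility of each component for each site.
    like = w * poisson.pmf(counts[:, None], lam)
    resp = like / like.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and component means.
    w = resp.mean(axis=0)
    lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)

# Sites can then be ranked for hotspot identification by their posterior
# expected mean count, resp @ lam, instead of the raw observed count.
```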

  20. Mathematical Model of Nonstationary Separation Processes Proceeding in the Cascade of Gas Centrifuges in the Process of Separation of Multicomponent Isotope Mixtures

    NASA Astrophysics Data System (ADS)

    Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.

    2017-03-01

    We have developed and realized on software a mathematical model of the nonstationary separation processes proceeding in the cascades of gas centrifuges in the process of separation of multicomponent isotope mixtures. With the use of this model the parameters of the separation process of germanium isotopes have been calculated. It has been shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters in the process of separation of multicomponent isotope mixtures.

  1. Assessing coastal plain wetland composition using advanced spaceborne thermal emission and reflection radiometer imagery

    NASA Astrophysics Data System (ADS)

    Pantaleoni, Eva

    Establishing wetland gains and losses, delineating wetland boundaries, and determining their vegetative composition are major challenges that can be improved through remote sensing studies. We used the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) to separate wetlands from uplands in a study of 870 locations on the Virginia Coastal Plain. We used the first five bands from each of two ASTER scenes (6 March 2005 and 16 October 2005), covering the visible to the short-wave infrared region (0.52–2.185 μm). We included GIS data layers for soil survey, topography, and presence or absence of water in a logistic regression model that predicted the location of over 78% of the wetlands. While this was slightly less accurate (78% vs. 86%) than current National Wetland Inventory (NWI) aerial photo interpretation procedures for locating wetlands, satellite imagery analysis holds great promise for speeding wetland mapping, lowering costs, and improving update frequency. To estimate wetland vegetation composition classes, we generated a classification and regression tree (CART) model and a multinomial logistic regression (logit) model, and compared their accuracy in separating woody wetlands, emergent wetlands, and open water. The overall accuracy of the CART model was 73.3%, while that of the logit model was 76.7%. The CART producer's accuracy for the emergent wetlands was higher than that of the multinomial logit (57.1% vs. 40.7%). However, we obtained the opposite result for the woody wetland category (68.7% vs. 52.6%). A McNemar test between the two models and NWI maps showed that their accuracies were not statistically different. We conducted a subpixel analysis of the ASTER images to estimate canopy cover of forested wetlands. 
We used top-of-atmosphere reflectance from the visible and near infrared bands, Delta Normalized Difference Vegetation Index, and tasseled cap brightness, greenness, and wetness in a linear regression model with canopy cover as the dependent variable. The model achieved an adjusted R² of 0.69 (RMSE = 2.7%) for canopy cover less than 16%, and an adjusted R² of 0.04 (RMSE = 19.8%) for higher canopy cover values. Taken together, these findings suggest that satellite remote sensing, in concert with other spatial data, has strong potential for mapping both wetland presence and type.
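
    The McNemar comparison of paired classifiers can be reproduced from per-site correctness indicators. A minimal sketch with hypothetical correctness vectors (not the study's data), using the continuity-corrected statistic:

```python
import numpy as np

def mcnemar_statistic(correct_a, correct_b):
    """Continuity-corrected McNemar chi-square for two classifiers
    evaluated on the same validation sites."""
    correct_a = np.asarray(correct_a, bool)
    correct_b = np.asarray(correct_b, bool)
    b = np.sum(correct_a & ~correct_b)   # A right, B wrong
    c = np.sum(~correct_a & correct_b)   # A wrong, B right
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical per-site correctness for CART vs. multinomial logit
cart  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
logit = [1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
chi2 = mcnemar_statistic(cart, logit)
# Compare chi2 with the chi-square(1 df) critical value 3.84 at alpha = 0.05;
# small values mean the accuracies are not statistically different
```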

  2. Closed-form solutions in stress-driven two-phase integral elasticity for bending of functionally graded nano-beams

    NASA Astrophysics Data System (ADS)

    Barretta, Raffaele; Fabbrocino, Francesco; Luciano, Raimondo; Sciarra, Francesco Marotti de

    2018-03-01

    Strain-driven and stress-driven integral elasticity models are formulated for the analysis of the structural behaviour of functionally graded nano-beams. An innovative stress-driven two-phase constitutive mixture defined by a convex combination of local and nonlocal phases is presented. The analysis reveals that the Eringen strain-driven fully nonlocal model cannot be used in Structural Mechanics since it is ill-posed, and that local-nonlocal mixtures based on the Eringen integral model only partially resolve the ill-posedness of the model. In fact, a singular behaviour of continuous nano-structures appears if the local fraction tends to vanish, so that the ill-posedness of the Eringen integral model is not eliminated. On the contrary, local-nonlocal mixtures based on the stress-driven theory are mathematically and mechanically appropriate for nanosystems. Exact solutions of inflected functionally graded nanobeams of technical interest are established by adopting the new local-nonlocal mixture stress-driven integral relation. Effectiveness of the new nonlocal approach is tested by comparing the contributed results with those corresponding to the mixture Eringen theory.

  3. A modified procedure for mixture-model clustering of regional geochemical data

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David B.; Horton, John D.

    2014-01-01

    A modified procedure is proposed for mixture-model clustering of regional-scale geochemical data. The key modification is the robust principal component transformation of the isometric log-ratio transforms of the element concentrations. This principal component transformation and the associated dimension reduction are applied before the data are clustered. The principal advantage of this modification is that it significantly improves the stability of the clustering. The principal disadvantage is that it requires subjective selection of the number of clusters and the number of principal components. To evaluate the efficacy of this modified procedure, it is applied to soil geochemical data that comprise 959 samples from the state of Colorado (USA) for which the concentrations of 44 elements are measured. The distributions of element concentrations that are derived from the mixture model and from the field samples are similar, indicating that the mixture model is a suitable representation of the transformed geochemical data. Each cluster and the associated distributions of the element concentrations are related to specific geologic and anthropogenic features. In this way, mixture model clustering facilitates interpretation of the regional geochemical data.
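
    The modified procedure (isometric log-ratio transform, then principal component reduction, then mixture-model clustering) can be sketched as follows. The pivot-coordinate ilr implementation and the two-population Dirichlet data are illustrative assumptions; the paper additionally uses a robust variant of the principal component transformation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def ilr(x):
    """Isometric log-ratio transform (pivot coordinates) of compositional
    rows; each row of x must be positive and sum to a constant."""
    x = np.asarray(x, float)
    n, D = x.shape
    z = np.empty((n, D - 1))
    for i in range(1, D):
        g = np.exp(np.mean(np.log(x[:, :i]), axis=1))  # geometric mean of first i parts
        z[:, i - 1] = np.sqrt(i / (i + 1.0)) * np.log(g / x[:, i])
    return z

# Two synthetic "geochemical populations" as 4-part compositions
rng = np.random.default_rng(1)
comp = np.vstack([rng.dirichlet([8, 2, 1, 1], 300),
                  rng.dirichlet([1, 1, 2, 8], 300)])

z = ilr(comp)                                    # open the simplex
scores = PCA(n_components=2).fit_transform(z)    # reduce dimension first...
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(scores)
```

Clustering the reduced scores rather than the raw transformed concentrations is what stabilizes the mixture fit in the procedure described above.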

  4. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture: Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and pharmaceutical dosage form via handling the UV spectral data. A 3-factor 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.

  5. Flash-point prediction for binary partially miscible mixtures of flammable solvents.

    PubMed

    Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng

    2008-05-30

    Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.

  6. Effects of road network on diversiform forest cover changes in the highest coverage region in China: An analysis of sampling strategies.

    PubMed

    Hu, Xisheng; Wu, Zhilong; Wu, Chengzhen; Ye, Limin; Lan, Chaofeng; Tang, Kun; Xu, Lu; Qiu, Rongzu

    2016-09-15

    Forest cover changes are of global concern due to their roles in global warming and biodiversity. However, many previous studies have ignored the fact that forest loss and forest gain are different processes that may respond to distinct factors, by stressing forest loss more than gain or viewing forest cover change as a whole. It behooves us to carefully examine the patterns and drivers of the change by subdividing it into several categories. Our study includes areas of forest loss (4.8% of the study area), forest gain (1.3% of the study area) and forest loss and gain (2.0% of the study area) from 2000 to 2012 in Fujian Province, China. In the study area, approximately 65% and 90% of these changes occurred within 2000 m of the nearest road and under road densities of 0.6 km/km², respectively. We compared two sampling techniques (systematic sampling and random sampling) and four intensities for each technique to investigate the driving patterns underlying the changes using multinomial logistic regression. The results indicated a lack of pronounced differences in the regressions between the two sampling designs, although the sample size had a great impact on the regression outcome. The application of multi-model inference indicated that low-level road density had a significant negative association with forest loss and with forest loss and gain, that expressway density had a significant positive impact on forest loss, and that the road network was not significantly related to forest gain. The model including socioeconomic and biophysical variables illuminated potentially different predictors of the different forest change categories. Moreover, the multiple comparisons tested by Fisher's least significant difference (LSD) usefully complemented the multinomial logistic model and enriched the interpretation of the regression results. Copyright © 2016 Elsevier B.V. All rights reserved.
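
    The multinomial logistic regression at the core of this analysis can be sketched on simulated data. The generating coefficients (road density raising the odds of loss, gain unrelated to roads) are hypothetical, chosen only to echo the reported pattern:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
road_density = rng.exponential(0.5, n)   # km/km^2 (hypothetical)
elevation = rng.uniform(0.0, 1.0, n)     # stand-in biophysical covariate

# Hypothetical generating process: loss more likely at higher road
# density, gain unrelated to roads
eta = np.column_stack([
    np.zeros(n),                 # 0 = no change (reference category)
    -1.0 + 1.5 * road_density,   # 1 = forest loss
    -1.5 + 0.5 * elevation,      # 2 = forest gain
])
p = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)
outcome = np.array([rng.choice(3, p=pi) for pi in p])

X = np.column_stack([road_density, elevation])
model = LogisticRegression(max_iter=1000).fit(X, outcome)
loss_road_coef = model.coef_[list(model.classes_).index(1), 0]
# loss_road_coef should come out clearly positive, gain's road coefficient near zero
```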

  7. Heterogeneous impact of the "Seguro Popular" program on the utilization of obstetrical services in Mexico, 2001-2006: a multinomial probit model with a discrete endogenous variable.

    PubMed

    Sosa-Rubí, Sandra G; Galárraga, Omar; Harris, Jeffrey E

    2009-01-01

    We evaluated the impact of Seguro Popular (SP), a program introduced in 2001 in Mexico primarily to finance health care for the poor. We focused on the effect of household enrollment in SP on pregnant women's access to obstetrical services, an important outcome measure of both maternal and infant health. We relied upon data from the cross-sectional 2006 National Health and Nutrition Survey (ENSANUT) in Mexico. We analyzed the responses of 3890 women who delivered babies during 2001-2006 and whose households lacked employer-based health care coverage. We formulated a multinomial probit model that distinguished between three mutually exclusive sites for delivering a baby: a health unit specifically accredited by SP; a non-SP-accredited clinic run by the Department of Health (Secretaría de Salud, or SSA); and private obstetrical care. Our model accounted for the endogeneity of the household's binary decision to enroll in the SP program. Women in households that participated in the SP program had a much stronger preference for having a baby in a SP-sponsored unit rather than paying out of pocket for a private delivery. At the same time, participation in SP was associated with a stronger preference for delivering in the private sector rather than at a state-run SSA clinic. On balance, the Seguro Popular program reduced pregnant women's attendance at an SSA clinic much more than it reduced the probability of delivering a baby in the private sector. The quantitative impact of the SP program varied with the woman's education and health, as well as the assets and location (rural vs. urban) of the household. The SP program had a robust, significantly positive impact on access to obstetrical services. Our finding that women enrolled in SP switched from non-SP state-run facilities, rather than from out-of-pocket private services, is important for public policy and requires further exploration.

  8. Fluent, fast, and frugal? A formal model evaluation of the interplay between memory, fluency, and comparative judgments.

    PubMed

    Hilbig, Benjamin E; Erdfelder, Edgar; Pohl, Rüdiger F

    2011-07-01

    A new process model of the interplay between memory and judgment processes was recently suggested, assuming that retrieval fluency-that is, the speed with which objects are recognized-will determine inferences concerning such objects in a single-cue fashion. This aspect of the fluency heuristic, an extension of the recognition heuristic, has remained largely untested due to methodological difficulties. To overcome the latter, we propose a measurement model from the class of multinomial processing tree models that can estimate true single-cue reliance on recognition and retrieval fluency. We applied this model to aggregate and individual data from a probabilistic inference experiment and considered both goodness of fit and model complexity to evaluate different hypotheses. The results were relatively clear-cut, revealing that the fluency heuristic is an unlikely candidate for describing comparative judgments concerning recognized objects. These findings are discussed in light of a broader theoretical view on the interplay of memory and judgment processes.
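
    A multinomial processing tree of the kind used here can be illustrated with a toy two-parameter tree (not the authors' model): with probability r the judgment is driven by retrieval fluency alone, otherwise further knowledge is consulted and yields a correct response with probability g, giving three observable response categories. The maximum-likelihood estimates can be found numerically and checked against the closed form.

```python
import numpy as np
from scipy.optimize import minimize

def category_probs(r, g):
    # Tree branches map parameters to multinomial category probabilities
    return np.array([r, (1 - r) * g, (1 - r) * (1 - g)])

def neg_log_lik(theta, counts):
    # Multinomial log-likelihood up to a constant (the multinomial coefficient)
    return -np.sum(counts * np.log(category_probs(*theta)))

counts = np.array([120, 60, 20])  # hypothetical response-category frequencies

res = minimize(neg_log_lik, x0=[0.5, 0.5], args=(counts,),
               bounds=[(1e-6, 1 - 1e-6)] * 2)
r_hat, g_hat = res.x
# For this tree the MLEs are available in closed form:
# r = n1/N and g = n2/(n2 + n3), i.e. 0.60 and 0.75 here
```

Real MPT applications compare such models by goodness of fit and complexity, which is how the study above adjudicates between fluency-based and knowledge-based accounts.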

  9. Nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates of typical desert vegetation in western China.

    PubMed

    Ji, Cuicui; Jia, Yonghong; Gao, Zhihai; Wei, Huaidong; Li, Xiaosong

    2017-01-01

    Desert vegetation plays significant roles in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates are investigated by comparing the performance of linear and nonlinear spectral mixture models with different endmembers applied to field spectral measurements of two types of typical desert vegetation, namely, Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates, and in particular, shadow endmembers cannot be neglected. (2) For both the Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. In consideration of the computational complexity and accuracy requirements, the Linear Spectral Mixture Model (LSMM) could be adopted for Nitraria shrub plots, but this will result in significant errors for the Haloxylon plots, since the nonlinear spectral mixture effects were more obvious for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Therefore, for both Nitraria shrubs and Haloxylon, nonlinear spectral mixing effects between the photosynthetic/non-photosynthetic vegetation and the bare soil do exist, and their strength depends on the three-dimensional structure of the vegetation canopy. The choice between linear and nonlinear spectral mixture models thus depends on weighing computational complexity against the accuracy requirement.
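
    The Linear Spectral Mixture Model baseline can be sketched as fully constrained unmixing: nonnegative fractional abundances that sum to one, enforced here through a heavily weighted extra equation in a nonnegative least-squares system. The endmember spectra below are random placeholders, not field measurements.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(spectrum, endmembers, weight=1e3):
    """Fully constrained linear unmixing: nonnegative abundances that sum
    to one, via an augmented nonnegative least-squares (NNLS) system."""
    # Append a heavily weighted row of ones to softly enforce sum-to-one
    E = np.vstack([endmembers.T, weight * np.ones(endmembers.shape[0])])
    b = np.append(spectrum, weight)
    abundances, _ = nnls(E, b)
    return abundances

# Hypothetical endmembers: photosynthetic veg., dry veg., bare soil, shade
rng = np.random.default_rng(7)
endmembers = rng.uniform(0.05, 0.9, size=(4, 50))   # 4 endmembers x 50 bands
true_abund = np.array([0.4, 0.2, 0.3, 0.1])
mixed = true_abund @ endmembers                     # noiseless linear mixture

est = unmix(mixed, endmembers)
```

The kernel-based nonlinear model favored in the study generalizes this by replacing the inner products with a kernel, which this linear sketch does not attempt.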

  10. Nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates of typical desert vegetation in western China

    PubMed Central

    Jia, Yonghong; Gao, Zhihai; Wei, Huaidong

    2017-01-01

    Desert vegetation plays significant roles in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates are investigated by comparing the performance of linear and nonlinear spectral mixture models with different endmembers applied to field spectral measurements of two types of typical desert vegetation, namely, Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates, and in particular, shadow endmembers cannot be neglected. (2) For both the Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. In consideration of the computational complexity and accuracy requirements, the Linear Spectral Mixture Model (LSMM) could be adopted for Nitraria shrub plots, but this will result in significant errors for the Haloxylon plots, since the nonlinear spectral mixture effects were more obvious for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Therefore, for both Nitraria shrubs and Haloxylon, nonlinear spectral mixing effects between the photosynthetic/non-photosynthetic vegetation and the bare soil do exist, and their strength depends on the three-dimensional structure of the vegetation canopy. The choice between linear and nonlinear spectral mixture models thus depends on weighing computational complexity against the accuracy requirement. PMID:29240777

  11. A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.

    PubMed

    Carreau, Julie; Bengio, Yoshua

    2009-07-01

    In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y, with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
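
    The hybrid construction can be sketched as a Gaussian body with a generalized Pareto tail spliced on at a threshold. This simplified version enforces only density continuity at the junction and normalizes in closed form; the hybrid Pareto of Carreau and Bengio additionally matches the derivative at the junction, which ties the tail scale to the other parameters.

```python
import numpy as np
from scipy.stats import norm, genpareto

def make_hybrid_density(mu=0.0, sigma=1.0, u=1.5, xi=0.3, beta=1.0):
    """Gaussian body below threshold u, generalized Pareto tail above it,
    scaled for density continuity at u (simplified sketch; beta kept free)."""
    join = norm.pdf(u, mu, sigma)
    # Normalizer: Gaussian mass below u plus the spliced tail's mass,
    # which equals join * beta because the GPD pdf integrates to 1
    Z = norm.cdf(u, mu, sigma) + join * beta

    def pdf(x):
        if x < u:
            return norm.pdf(x, mu, sigma) / Z
        # genpareto.pdf(0, xi, scale=beta) == 1/beta, so this is continuous at u
        return join * beta * genpareto.pdf(x - u, xi, scale=beta) / Z

    return pdf

pdf = make_hybrid_density()
```

The tail index xi plays the role of the third parameter controlling tail heaviness; in the conditional mixture of the paper, all such parameters become neural-network outputs given x.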

  12. Neurotoxicological and statistical analyses of a mixture of five organophosphorus pesticides using a ray design.

    PubMed

    Moser, V C; Casey, M; Hamm, A; Carter, W H; Simmons, J E; Gennings, C

    2005-07-01

    Environmental exposures generally involve chemical mixtures instead of single chemicals. Statistical designs such as the fixed-ratio ray, wherein the mixing ratio (proportions) of the chemicals is fixed across increasing mixture doses, allow for the detection and characterization of interactions among the chemicals. In this study, we tested for interaction(s) in a mixture of five organophosphorus (OP) pesticides (chlorpyrifos, diazinon, dimethoate, acephate, and malathion). The ratio of the five pesticides (full ray) reflected the relative dietary exposure estimates of the general population as projected by the US EPA Dietary Exposure Evaluation Model (DEEM). A second mixture was tested using the same dose levels of all pesticides, but excluding malathion (reduced ray). The experimental approach first required characterization of dose-response curves for the individual OPs to build a dose-additivity model. A series of behavioral measures were evaluated in adult male Long-Evans rats at the time of peak effect following a single oral dose, and then tissues were collected for measurement of cholinesterase (ChE) activity. Neurochemical (blood and brain ChE activity) and behavioral (motor activity, gait score, tail-pinch response score) endpoints were evaluated statistically for evidence of additivity. The additivity model constructed from the single chemical data was used to predict the effects of the pesticide mixture along the full ray (10-450 mg/kg) and the reduced ray (1.75-78.8 mg/kg). The experimental mixture data were also modeled and statistically compared to the additivity models. Analysis of the 5-OP mixture (the full ray) revealed significant deviation from additivity for all endpoints except tail-pinch response. Greater-than-additive responses (synergism) were observed at the lower doses of the 5-OP mixture, which contained non-effective dose levels of each of the components. 
The predicted effective doses (ED20, ED50) were about half that predicted by additivity, and for brain ChE and motor activity, there was a threshold shift in the dose-response curves. For the brain ChE and motor activity, there was no difference between the full (5-OP mixture) and reduced (4-OP mixture) rays, indicating that malathion did not influence the non-additivity. While the reduced ray for blood ChE showed greater deviation from additivity without malathion in the mixture, the non-additivity observed for the gait score was reversed when malathion was removed. Thus, greater-than-additive interactions were detected for both the full and reduced ray mixtures, and the role of malathion in the interactions varied depending on the endpoint. In all cases, the deviations from additivity occurred at the lower end of the dose-response curves.
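
    The dose-additive prediction along a fixed-ratio ray can be sketched with the standard Loewe-additivity formula for components with parallel dose-response curves; the ED50s and mixing proportions below are hypothetical, not the study's values.

```python
import numpy as np

def additive_edx(proportions, component_edx):
    """Loewe dose-additive prediction of the total mixture dose producing
    effect level x along a fixed-ratio ray, assuming parallel dose-response
    curves: a proportion-weighted harmonic-mean-style combination."""
    p = np.asarray(proportions, float)
    ed = np.asarray(component_edx, float)
    return 1.0 / np.sum(p / ed)

# Hypothetical single-chemical ED50s (mg/kg) and fixed-ratio proportions
ed50s = [100.0, 150.0, 300.0, 80.0, 500.0]
props = [0.30, 0.25, 0.20, 0.15, 0.10]
predicted_ed50 = additive_edx(props, ed50s)
# An observed mixture ED50 well below this prediction (e.g. about half,
# as reported above) indicates a greater-than-additive response
```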

  13. Self-organising mixture autoregressive model for non-stationary time series modelling.

    PubMed

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  14. Numerical study of underwater dispersion of dilute and dense sediment-water mixtures

    NASA Astrophysics Data System (ADS)

    Chan, Ziying; Dao, Ho-Minh; Tan, Danielle S.

    2018-05-01

    As part of the nodule-harvesting process, sediment tailings are released underwater. Due to the long period of clouding in the water during the settling process, this presents a significant environmental and ecological concern. One possible solution is to release a mixture of sediment tailings and seawater, with the aim of reducing the settling duration as well as the amount of spreading. In this paper, we present some results of numerical simulations using the smoothed particle hydrodynamics (SPH) method to model the release of a fixed volume of pre-mixed sediment-water mixture into a larger body of quiescent water. Both the sediment-water mixture and the “clean” water are modeled as two different fluids, with concentration-dependent bulk properties of the sediment-water mixture adjusted according to the initial solids concentration. This numerical model was validated in a previous study, which indicated significant differences in the dispersion and settling process between dilute and dense mixtures, and that a dense mixture may be preferable. For this study, we investigate a wider range of volumetric concentration with the aim of determining the optimum volumetric concentration, as well as its overall effectiveness compared to the original process (100% sediment).

  15. Characterizing Twitter Discussions About HPV Vaccines Using Topic Modeling and Community Detection.

    PubMed

    Surian, Didi; Nguyen, Dat Quoc; Kennedy, Georgina; Johnson, Mark; Coiera, Enrico; Dunn, Adam G

    2016-08-29

    In public health surveillance, measuring how information enters and spreads through online communities may help us understand geographical variation in decision making associated with poor health outcomes. Our aim was to evaluate the use of community structure and topic modeling methods as a process for characterizing the clustering of opinions about human papillomavirus (HPV) vaccines on Twitter. The study examined Twitter posts (tweets) collected between October 2013 and October 2015 about HPV vaccines. We tested Latent Dirichlet Allocation and Dirichlet Multinomial Mixture (DMM) models for inferring topics associated with tweets, and community agglomeration (Louvain) and the encoding of random walks (Infomap) methods to detect community structure of the users from their social connections. We examined the alignment between community structure and topics using several common clustering alignment measures and introduced a statistical measure of alignment based on the concentration of specific topics within a small number of communities. Visualizations of the topics and the alignment between topics and communities are presented to support the interpretation of the results in context of public health communication and identification of communities at risk of rejecting the safety and efficacy of HPV vaccines. We analyzed 285,417 Twitter posts (tweets) about HPV vaccines from 101,519 users connected by 4,387,524 social connections. Examining the alignment between the community structure and the topics of tweets, the results indicated that the Louvain community detection algorithm together with DMM produced consistently higher alignment values and that alignments were generally higher when the number of topics was lower. After applying the Louvain method and DMM with 30 topics and grouping semantically similar topics in a hierarchy, we characterized 163,148 (57.16%) tweets as evidence and advocacy, and 6244 (2.19%) tweets describing personal experiences. 
Among the 4548 users who posted experiential tweets, 3449 users (75.84%) were found in communities where the majority of tweets were about evidence and advocacy. The use of community detection in concert with topic modeling appears to be a useful way to characterize Twitter communities for the purpose of opinion surveillance in public health applications. Our approach may help identify online communities at risk of being influenced by negative opinions about public health interventions such as HPV vaccines.
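
    The community-detection half of the pipeline can be sketched with the Louvain implementation in networkx on a toy graph of two tightly knit groups joined by a single bridge edge (standing in for follower connections; the DMM topic-model half is omitted here):

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Toy "social connection" graph: two 5-node cliques plus one bridge edge
G = nx.union(nx.complete_graph(range(5)), nx.complete_graph(range(5, 10)))
G.add_edge(0, 5)

# Louvain greedily merges nodes to maximize modularity
communities = louvain_communities(G, seed=1)
```

In the study above, each detected community is then cross-tabulated against the inferred topic of its members' tweets to measure topic-community alignment.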

  16. Characterizing Twitter Discussions About HPV Vaccines Using Topic Modeling and Community Detection

    PubMed Central

    Nguyen, Dat Quoc; Kennedy, Georgina; Johnson, Mark; Coiera, Enrico; Dunn, Adam G

    2016-01-01

    Background In public health surveillance, measuring how information enters and spreads through online communities may help us understand geographical variation in decision making associated with poor health outcomes. Objective Our aim was to evaluate the use of community structure and topic modeling methods as a process for characterizing the clustering of opinions about human papillomavirus (HPV) vaccines on Twitter. Methods The study examined Twitter posts (tweets) collected between October 2013 and October 2015 about HPV vaccines. We tested Latent Dirichlet Allocation and Dirichlet Multinomial Mixture (DMM) models for inferring topics associated with tweets, and community agglomeration (Louvain) and the encoding of random walks (Infomap) methods to detect community structure of the users from their social connections. We examined the alignment between community structure and topics using several common clustering alignment measures and introduced a statistical measure of alignment based on the concentration of specific topics within a small number of communities. Visualizations of the topics and the alignment between topics and communities are presented to support the interpretation of the results in context of public health communication and identification of communities at risk of rejecting the safety and efficacy of HPV vaccines. Results We analyzed 285,417 Twitter posts (tweets) about HPV vaccines from 101,519 users connected by 4,387,524 social connections. Examining the alignment between the community structure and the topics of tweets, the results indicated that the Louvain community detection algorithm together with DMM produced consistently higher alignment values and that alignments were generally higher when the number of topics was lower. 
After applying the Louvain method and DMM with 30 topics and grouping semantically similar topics in a hierarchy, we characterized 163,148 (57.16%) tweets as evidence and advocacy, and 6244 (2.19%) tweets describing personal experiences. Among the 4548 users who posted experiential tweets, 3449 users (75.84%) were found in communities where the majority of tweets were about evidence and advocacy. Conclusions The use of community detection in concert with topic modeling appears to be a useful way to characterize Twitter communities for the purpose of opinion surveillance in public health applications. Our approach may help identify online communities at risk of being influenced by negative opinions about public health interventions such as HPV vaccines. PMID:27573910

  17. Space-time variation of respiratory cancers in South Carolina: a flexible multivariate mixture modeling approach to risk estimation.

    PubMed

    Carroll, Rachel; Lawson, Andrew B; Kirby, Russell S; Faes, Christel; Aregay, Mehreteab; Watjou, Kevin

    2017-01-01

    Many types of cancer have an underlying spatiotemporal distribution. Spatiotemporal mixture modeling can offer a flexible approach to risk estimation via the inclusion of latent variables. In this article, we examine the application and benefits of using four different spatiotemporal mixture modeling methods in the modeling of cancer of the lung and bronchus as well as "other" respiratory cancer incidences in the state of South Carolina. Of the methods tested, no single method outperforms the other methods; which method is best depends on the cancer under consideration. The lung and bronchus cancer incidence outcome is best described by the univariate modeling formulation, whereas the "other" respiratory cancer incidence outcome is best described by the multivariate modeling formulation. Spatiotemporal multivariate mixture methods can aid in the modeling of cancers with small and sparse incidences when including information from a related, more common type of cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Impact of chemical proportions on the acute neurotoxicity of a mixture of seven carbamates in preweanling and adult rats.

    PubMed

    Moser, Virginia C; Padilla, Stephanie; Simmons, Jane Ellen; Haber, Lynne T; Hertzberg, Richard C

    2012-09-01

    Statistical design and environmental relevance are important aspects of studies of chemical mixtures, such as pesticides. We used a dose-additivity model to test experimentally the default assumptions of dose additivity for two mixtures of seven N-methylcarbamates (carbaryl, carbofuran, formetanate, methomyl, methiocarb, oxamyl, and propoxur). The best-fitting models were selected for the single-chemical dose-response data and used to develop a combined prediction model, which was then compared with the experimental mixture data. We evaluated behavioral (motor activity) and cholinesterase (ChE)-inhibitory (brain, red blood cells) outcomes at the time of peak acute effects following oral gavage in adult and preweanling (17 days old) Long-Evans male rats. The mixtures varied only in their mixing ratios. In the relative potency mixture, proportions of each carbamate were set at equitoxic component doses. A California environmental mixture was based on the 2005 sales of each carbamate in California. In adult rats, the relative potency mixture showed dose additivity for red blood cell ChE and motor activity, and brain ChE inhibition showed a modest greater-than additive (synergistic) response, but only at a middle dose. In rat pups, the relative potency mixture was either dose-additive (brain ChE inhibition, motor activity) or slightly less-than additive (red blood cell ChE inhibition). On the other hand, at both ages, the environmental mixture showed greater-than additive responses on all three endpoints, with significant deviations from predicted at most to all doses tested. Thus, we observed different interactive properties for different mixing ratios of these chemicals. These approaches for studying pesticide mixtures can improve evaluations of potential toxicity under varying experimental conditions that may mimic human exposures.

  19. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    ERIC Educational Resources Information Center

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  20. A crash-prediction model for multilane roads.

    PubMed

    Caliendo, Ciro; Guida, Maurizio; Parisi, Alessandra

    2007-07-01

Considerable research has been carried out in recent years to establish relationships between crashes and traffic flow, geometric infrastructure characteristics and environmental factors for two-lane rural roads. Crash-prediction models focused on multilane rural roads, however, have rarely been investigated. In addition, most research has paid little attention to the safety effects of variables such as stopping sight distance and pavement surface characteristics. Moreover, the statistical approaches have generally included Poisson and Negative Binomial regression models, whilst the Negative Multinomial regression model has been used to a lesser extent. Finally, as far as the authors are aware, prediction models involving all the above-mentioned factors have still not been developed in Italy for multilane roads, such as motorways. Thus, in this paper crash-prediction models for a four-lane median-divided Italian motorway were set up on the basis of accident data observed during a 5-year monitoring period from 1999 to 2003. The Poisson, Negative Binomial and Negative Multinomial regression models, applied separately to tangents and curves, were used to model the frequency of accident occurrence. Model parameters were estimated by the Maximum Likelihood Method, and the Generalized Likelihood Ratio Test was applied to detect the significant variables to be included in the model equation. Goodness-of-fit was measured by means of both the explained fraction of total variation and the explained fraction of systematic variation. The Cumulative Residuals Method was also used to test the adequacy of a regression model throughout the range of each variable. The candidate set of explanatory variables was: length (L), curvature (1/R), annual average daily traffic (AADT), sight distance (SD), side friction coefficient (SFC), longitudinal slope (LS) and the presence of a junction (J). 
Separate prediction models for total crashes and for fatal and injury crashes only were considered. For curves it is shown that significant variables are L, 1/R and AADT, whereas for tangents they are L, AADT and junctions. The effect of rain precipitation was analysed on the basis of hourly rainfall data and assumptions about drying time. It is shown that a wet pavement significantly increases the number of crashes. The models developed in this paper for Italian motorways appear to be useful for many applications such as the detection of critical factors, the estimation of accident reduction due to infrastructure and pavement improvement, and the predictions of accidents counts when comparing different design options. Thus this research may represent a point of reference for engineers in adjusting or designing multilane roads.
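As an illustration of the estimation machinery behind such count models, the following is a minimal Poisson regression fitted by iteratively reweighted least squares (not the authors' code; the covariate and coefficient values are invented for the sketch):

```python
import numpy as np

def poisson_irls(X, y, iters=50):
    """Fit a Poisson regression log E[y] = X @ beta by iteratively
    reweighted least squares (equivalent to Newton's method for this GLM)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)
        W = mu                       # Poisson working weights
        z = eta + (y - mu) / mu      # working response
        XtW = X.T * W                # X' diag(W)
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Toy data mimicking a crash-frequency model: log(crashes) ~ b0 + b1*log(AADT)
rng = np.random.default_rng(0)
log_aadt = rng.uniform(7, 10, 500)
X = np.column_stack([np.ones(500), log_aadt])
true_beta = np.array([-6.0, 0.8])
y = rng.poisson(np.exp(X @ true_beta))
print(poisson_irls(X, y))  # close to [-6.0, 0.8]
```

A Negative Binomial fit adds an overdispersion parameter on top of this scheme; the Poisson case shown is the simplest member of the family the abstract mentions.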

  1. Aging, subjective experience, and cognitive control: dramatic false remembering by older adults.

    PubMed

    Jacoby, Larry L; Bishara, Anthony J; Hessels, Sandra; Toth, Jeffrey P

    2005-05-01

    Recent research suggests that older adults are more susceptible to interference effects than are young adults; however, that research has failed to equate differences in original learning. In 4 experiments, the authors show that older adults are more susceptible to interference effects produced by a misleading prime. Even when original learning was equated, older adults were 10 times as likely to falsely remember misleading information and were much less likely to increase their accuracy by opting not to answer under conditions of free responding. The results are well described by a multinomial model that postulates multiple modes of cognitive control. According to that model, older adults are likely to be captured by misleading information, a form of goal neglect or deficit in inhibitory functions. Copyright 2005 APA, all rights reserved.

  2. Dielectric relaxation and hydrogen bonding interaction in xylitol-water mixtures using time domain reflectometry

    NASA Astrophysics Data System (ADS)

    Rander, D. N.; Joshi, Y. S.; Kanse, K. S.; Kumbharkhane, A. C.

    2016-01-01

The measurements of complex dielectric permittivity of xylitol-water mixtures have been carried out in the frequency range of 10 MHz-30 GHz using a time domain reflectometry technique. Measurements have been done at six temperatures from 0 to 25 °C and at different weight fractions of xylitol (0 < W_X ≤ 0.7) in water. Several models are available to describe the dielectric relaxation behaviour of binary mixtures, such as the Debye, Cole-Cole, and Cole-Davidson models. We have observed that the dielectric relaxation behaviour of xylitol-water binary mixtures is well described by the Cole-Davidson model, which has an asymmetric distribution of relaxation times. The dielectric parameters, such as the static dielectric constant and relaxation time, have been evaluated for the mixtures. The molecular interaction between xylitol and water molecules is discussed using the Kirkwood correlation factor (g_eff) and a thermodynamic parameter.
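For reference, the Cole-Davidson form can be written in the standard notation of the dielectric-relaxation literature (this expression is supplied here as general background, not reproduced from the paper):

```latex
\varepsilon^{*}(\omega) \;=\; \varepsilon_{\infty} \;+\; \frac{\varepsilon_{s}-\varepsilon_{\infty}}{\bigl(1 + j\omega\tau\bigr)^{\beta}}, \qquad 0 < \beta \le 1,
```

where ε_s is the static dielectric constant, ε_∞ the high-frequency limit, τ the relaxation time, and β the asymmetry parameter; β = 1 recovers the symmetric Debye model, which is why a fitted β < 1 indicates the asymmetric distribution of relaxation times mentioned above.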

  3. Factors Associated with Adoption and Adoption Intentions of Nonparental Caregivers

    PubMed Central

    Bramlett, Matthew D.; Radel, Laura F.

    2016-01-01

    Data from the 2011–2012 National Survey of Children’s Health and the 2013 National Survey of Children in Nonparental Care were used to fit a multinomial logistic model comparing three groups to those who never considered adoption: those who ever considered, but are not currently planning adoption; those planning adoption; and those who adopted. Adoption may be more likely when the caregiver is a nonkin foster parent, a foster care agency was involved, and/or financial assistance is available. Those with plans to adopt but who have not adopted may face adoption barriers such as extreme poverty, lower education and being unmarried. PMID:26949328
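A minimal sketch of the multinomial logit form underlying such a four-group comparison, with category 0 as the "never considered adoption" baseline; the covariate and all coefficient values below are purely hypothetical:

```python
import math

def multinomial_probs(x, betas):
    """Multinomial logit: category 0 is the reference (linear predictor 0);
    betas maps each non-reference category to (intercept, slope) for a single
    covariate x.  Returns the softmax probabilities over all categories."""
    scores = [0.0] + [b0 + b1 * x for (b0, b1) in betas.values()]
    exp_s = [math.exp(s) for s in scores]
    total = sum(exp_s)
    return [e / total for e in exp_s]

# Hypothetical coefficients: effect of being a nonkin foster parent (x = 0/1)
# on considered / planning / adopted, each versus the never-considered baseline.
betas = {"considered": (-1.0, 0.5),
         "planning":   (-2.0, 1.2),
         "adopted":    (-2.5, 2.0)}
probs = multinomial_probs(1, betas)
print(probs, sum(probs))  # four probabilities, summing to ~1.0
```

Fitting such a model estimates one (intercept, slope) pair per non-reference outcome, which is how the study can report separate effects for each group relative to those who never considered adoption.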

  4. Social factors, weight perception, and weight control practices among adolescents in Mexico.

    PubMed

    Bojorquez, Ietza; Villatoro, Jorge; Delgadillo, Marlene; Fleiz, Clara; Fregoso, Diana; Unikel, Claudia

    2018-06-01

We evaluated the association of social factors and weight control practices in adolescents, and the mediation of this association by weight perception, in a national survey of students in Mexico (n = 28,266). We employed multinomial and Poisson regression models and Sobel's test to assess mediation. Students whose mothers had a higher level of education were more likely to perceive themselves as overweight and also to engage in weight control practices. After adjusting for body weight perception, the effect of maternal education on weight control practices remained significant. Mediation tests were significant for boys and non-significant for girls.

  5. Choice of contracts in the British National Health Service: an empirical study.

    PubMed

    Chalkley, Martin; McVicar, Duncan

    2008-09-01

    Following major reforms of the British National Health Service (NHS) in 1990, the roles of purchasing and providing health services were separated, with the relationship between purchasers and providers governed by contracts. Using a mixed multinomial logit analysis, we show how this policy shift led to a selection of contracts that is consistent with the predictions of a simple model, based on contract theory, in which the characteristics of the health services being purchased and of the contracting parties influence the choice of contract form. The paper thus provides evidence in support of the practical relevance of theory in understanding health care market reform.

  6. Broad Feshbach resonance in the 6Li-40K mixture.

    PubMed

    Tiecke, T G; Goosen, M R; Ludewig, A; Gensemer, S D; Kraft, S; Kokkelmans, S J J M F; Walraven, J T M

    2010-02-05

We study the widths of interspecies Feshbach resonances in a mixture of the fermionic quantum gases 6Li and 40K. We develop a model to calculate the width and position of all available Feshbach resonances for a system. Using the model, we select the optimal resonance to study the 6Li/40K mixture. Experimentally, we obtain the asymmetric Fano line shape of the interspecies elastic cross section by measuring the distillation rate of 6Li atoms from a potassium-rich 6Li/40K mixture as a function of magnetic field. This provides us with the first experimental determination of the width of a resonance in this mixture, ΔB = 1.5(5) G. Our results offer good perspectives for the observation of universal crossover physics using this mass-imbalanced fermionic mixture.

  7. Nanomechanical characterization of heterogeneous and hierarchical biomaterials and tissues using nanoindentation: the role of finite mixture models.

    PubMed

    Zadpoor, Amir A

    2015-03-01

Mechanical characterization of biological tissues and biomaterials at the nano-scale is often performed using nanoindentation experiments. The different constituents of the characterized materials will then appear in the histogram that shows the probability of measuring a certain range of mechanical properties. An objective technique is needed to separate the probability distributions that are mixed together in such a histogram. In this paper, finite mixture models (FMMs) are proposed as a tool capable of performing such types of analysis. Finite Gaussian mixture models assume that the measured probability distribution is a weighted combination of a finite number of Gaussian distributions with separate mean and standard deviation values. Dedicated optimization algorithms are available for fitting such a weighted mixture model to experimental data. Moreover, certain objective criteria are available to determine the optimum number of Gaussian distributions. In this paper, FMMs are used for interpreting the probability distribution functions representing the distributions of the elastic moduli of osteoarthritic human cartilage and co-polymeric microspheres. As for the cartilage experiments, FMMs indicate that at least three mixture components are needed for describing the measured histogram. While the mechanical properties of the softer mixture components, often assumed to be associated with glycosaminoglycans, were found to be more or less constant regardless of whether two or three mixture components were used, those of the second mixture component (i.e. the collagen network) considerably changed depending on the number of mixture components. Regarding the co-polymeric microspheres, the optimum number of mixture components estimated by the FMM theory, i.e. 3, nicely matches the number of co-polymeric components used in the structure of the polymer. The computer programs used for the presented analyses are made freely available online for other researchers to use. 
Copyright © 2014 Elsevier B.V. All rights reserved.
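A minimal sketch of fitting such a finite Gaussian mixture by expectation-maximization, assuming univariate modulus data and a fixed number of components (this is not the authors' freely available program; the data are synthetic):

```python
import numpy as np

def em_gaussian_mixture(x, k=2, iters=200):
    """Fit a k-component 1-D Gaussian mixture by EM.  Means are initialised
    at spread-out quantiles of the data; the E-step computes posterior
    responsibilities and the M-step re-estimates weights, means and
    variances from them."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted moment updates
        n_j = resp.sum(axis=0)
        w, mu = n_j / len(x), (resp * x[:, None]).sum(axis=0) / n_j
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_j
    return w, mu, var

# Two well-separated synthetic "stiffness" populations (arbitrary units)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(1.0, 0.2, 400), rng.normal(5.0, 0.5, 200)])
w, mu, var = em_gaussian_mixture(x)
print(sorted(mu))  # near [1.0, 5.0]
```

To choose the number of components objectively, as the abstract describes, one would refit with k = 1, 2, 3, ... and compare an information criterion such as BIC across fits.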

  8. Assessment of the Risks of Mixtures of Major Use Veterinary Antibiotics in European Surface Waters.

    PubMed

    Guo, Jiahua; Selby, Katherine; Boxall, Alistair B A

    2016-08-02

    Effects of single veterinary antibiotics on a range of aquatic organisms have been explored in many studies. In reality, surface waters will be exposed to mixtures of these substances. In this study, we present an approach for establishing risks of antibiotic mixtures to surface waters and illustrate this by assessing risks of mixtures of three major use antibiotics (trimethoprim, tylosin, and lincomycin) to algal and cyanobacterial species in European surface waters. Ecotoxicity tests were initially performed to assess the combined effects of the antibiotics to the cyanobacteria Anabaena flos-aquae. The results were used to evaluate two mixture prediction models: concentration addition (CA) and independent action (IA). The CA model performed best at predicting the toxicity of the mixture with the experimental 96 h EC50 for the antibiotic mixture being 0.248 μmol/L compared to the CA predicted EC50 of 0.21 μmol/L. The CA model was therefore used alongside predictions of exposure for different European scenarios and estimations of hazards obtained from species sensitivity distributions to estimate risks of mixtures of the three antibiotics. Risk quotients for the different scenarios ranged from 0.066 to 385 indicating that the combination of three substances could be causing adverse impacts on algal communities in European surface waters. This could have important implications for primary production and nutrient cycling. Tylosin contributed most to the risk followed by lincomycin and trimethoprim. While we have explored only three antibiotics, the combined experimental and modeling approach could readily be applied to the wider range of antibiotics that are in use.
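The concentration addition prediction used above follows the standard CA relation: the mixture EC50 satisfies EC50_mix = 1 / Σ(p_i / EC50_i) for mixture fractions p_i. The component values in this sketch are hypothetical, not those of the three antibiotics studied:

```python
def ca_ec50(fractions, ec50s):
    """Concentration-addition EC50 for a mixture: the total concentration
    at which sum(p_i * C_mix / EC50_i) = 1, i.e.
    EC50_mix = 1 / sum(p_i / EC50_i).  fractions must sum to 1."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

# Hypothetical equimolar three-component mixture, component EC50s in umol/L
print(ca_ec50([1/3, 1/3, 1/3], [0.3, 0.6, 1.2]))  # about 0.514 umol/L
```

Note the CA prediction is always dominated by the most potent components (smallest EC50s), which mirrors the finding that tylosin contributed most to the mixture risk.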

  9. Moving target detection method based on improved Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

The Gaussian mixture model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model. According to the gray-level convergence of each pixel, the number of Gaussian distributions used to learn and update the background model is chosen adaptively. A morphological reconstruction method is adopted to eliminate shadows. Experiments show that the proposed method not only has good robustness and detection performance but also good adaptability, performing well even in special cases such as large grayscale changes.
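A simplified single-pixel sketch of the classic Stauffer-Grimson-style Gaussian mixture background update that such methods build on (the paper's adaptive choice of the number of Gaussians is not reproduced; all thresholds and rates below are illustrative):

```python
def update_pixel_gmm(value, modes, lr=0.05, match_sigma=2.5, max_modes=4):
    """One Stauffer-Grimson-style update for a single pixel's Gaussian
    mixture background model.  modes: list of [weight, mean, var].
    Returns (modes, is_background)."""
    matched = None
    for m in modes:
        w, mean, var = m
        if (value - mean) ** 2 <= (match_sigma ** 2) * var:
            matched = m
            break
    for m in modes:
        m[0] *= (1 - lr)                 # decay all weights
    if matched is not None:
        matched[0] += lr                 # reinforce the matched mode
        rho = lr / max(matched[0], 1e-6)
        matched[1] += rho * (value - matched[1])
        matched[2] += rho * ((value - matched[1]) ** 2 - matched[2])
    else:
        modes.append([lr, float(value), 30.0 ** 2])   # new mode, wide variance
        modes.sort(key=lambda m: -m[0])
        del modes[max_modes:]            # keep at most max_modes Gaussians
    total = sum(m[0] for m in modes)
    for m in modes:
        m[0] /= total                    # renormalise weights
    is_background = matched is not None and matched[0] > 0.25
    return modes, is_background

modes = [[1.0, 100.0, 20.0]]              # pixel historically near gray 100
modes, bg1 = update_pixel_gmm(102, modes)  # close value -> background
modes, bg2 = update_pixel_gmm(200, modes)  # far value -> not background
print(bg1, bg2)
```

Running this update per pixel per frame yields a foreground mask (pixels where `is_background` is false), which is the stage the paper's shadow-elimination step would then post-process.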

  10. Mesoscale Modeling of LX-17 Under Isentropic Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springer, H K; Willey, T M; Friedman, G

Mesoscale simulations of LX-17 incorporating different equilibrium mixture models were used to investigate the unreacted equation-of-state (UEOS) of TATB. Candidate TATB UEOS were calculated using the equilibrium mixture models and benchmarked with mesoscale simulations of isentropic compression experiments (ICE). X-ray computed tomography (XRCT) data provided the basis for initializing the simulations with realistic microstructural details. Three equilibrium mixture models were used in this study. The single constituent with conservation equations (SCCE) model was based on a mass-fraction weighted specific volume and the conservation of mass, momentum, and energy. The single constituent equation-of-state (SCEOS) model was based on a mass-fraction weighted specific volume and the equation-of-state of the constituents. The kinetic energy averaging (KEA) model was based on a mass-fraction weighted particle velocity mixture rule and the conservation equations. The SCEOS model yielded the stiffest TATB EOS (0.121μ + 0.4958μ² + 2.0473μ³) and, when incorporated in mesoscale simulations of the ICE, demonstrated the best agreement with VISAR velocity data for both specimen thicknesses. The SCCE model yielded a relatively more compliant EOS (0.1999μ − 0.6967μ² + 4.9546μ³) and the KEA model yielded the most compliant EOS (0.1999μ − 0.6967μ² + 4.9546μ³) of all the equilibrium mixture models. Mesoscale simulations with the lower density TATB adiabatic EOS data demonstrated the least agreement with VISAR velocity data.

  11. Latent Transition Analysis with a Mixture Item Response Theory Measurement Model

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian

    2010-01-01

    A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…

  12. Activities of mixtures of soil-applied herbicides with different molecular targets.

    PubMed

    Kaushik, Shalini; Streibig, Jens Carl; Cedergreen, Nina

    2006-11-01

    The joint action of soil-applied herbicide mixtures with similar or different modes of action has been assessed by using the additive dose model (ADM). The herbicides chlorsulfuron, metsulfuron-methyl, pendimethalin and pretilachlor, applied either singly or in binary mixtures, were used on rice (Oryza sativa L.). The growth (shoot) response curves were described by a logistic dose-response model. The ED50 values and their corresponding standard errors obtained from the response curves were used to test statistically if the shape of the isoboles differed from the reference model (ADM). Results showed that mixtures of herbicides with similar molecular targets, i.e. chlorsulfuron and metsulfuron (acetolactate synthase (ALS) inhibitors), and with different molecular targets, i.e. pendimethalin (microtubule assembly inhibitor) and pretilachlor (very long chain fatty acids (VLCFAs) inhibitor), followed the ADM. Mixing herbicides with different molecular targets gave different results depending on whether pretilachlor or pendimethalin was involved. In general, mixtures of pretilachlor and sulfonylureas showed synergistic interactions, whereas mixtures of pendimethalin and sulfonylureas exhibited either antagonistic or additive activities. Hence, there is a large potential for both increasing the specificity of herbicides by using mixtures and lowering the total dose for weed control, while at the same time delaying the development of herbicide resistance by using mixtures with different molecular targets. Copyright (c) 2006 Society of Chemical Industry.

  13. Social influence, agent heterogeneity and the emergence of the urban informal sector

    NASA Astrophysics Data System (ADS)

    García-Díaz, César; Moreno-Monroy, Ana I.

    2012-02-01

    We develop an agent-based computational model in which the urban informal sector acts as a buffer where rural migrants can earn some income while queuing for higher paying modern-sector jobs. In the model, the informal sector emerges as a result of rural-urban migration decisions of heterogeneous agents subject to social influence in the form of neighboring effects of varying strengths. Besides using a multinomial logit choice model that allows for agent idiosyncrasy, explicit agent heterogeneity is introduced in the form of socio-demographic characteristics preferred by modern-sector employers. We find that different combinations of the strength of social influence and the socio-economic composition of the workforce lead to very different urbanization and urban informal sector shares. In particular, moderate levels of social influence and a large proportion of rural inhabitants with preferred socio-demographic characteristics are conducive to a higher urbanization rate and a larger informal sector.

  14. Positive and negative generation effects in source monitoring.

    PubMed

    Riefer, David M; Chien, Yuchin; Reimer, Jason F

    2007-10-01

    Research is mixed as to whether self-generation improves memory for the source of information. We propose the hypothesis that positive generation effects (better source memory for self-generated information) occur in reality-monitoring paradigms, while negative generation effects (better source memory for externally presented information) tend to occur in external source-monitoring paradigms. This hypothesis was tested in an experiment in which participants read or generated words, followed by a memory test for the source of each word (read or generated) and the word's colour. Meiser and Bröder's (2002) multinomial model for crossed source dimensions was used to analyse the data, showing that source memory for generation (reality monitoring) was superior for the generated words, while source memory for word colour (external source monitoring) was superior for the read words. The model also revealed the influence of strong response biases in the data, demonstrating the usefulness of formal modelling when examining generation effects in source monitoring.

  15. An EM-based semi-parametric mixture model approach to the regression analysis of competing-risks data.

    PubMed

    Ng, S K; McLachlan, G J

    2003-04-15

    We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.

  16. Modeling Math Growth Trajectory--An Application of Conventional Growth Curve Model and Growth Mixture Model to ECLS K-5 Data

    ERIC Educational Resources Information Center

    Lu, Yi

    2016-01-01

    To model students' math growth trajectory, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of conventional growth curve model show gender differences on math IRT scores. When holding socio-economic…

  17. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…

  18. Numerical modeling and analytical modeling of cryogenic carbon capture in a de-sublimating heat exchanger

    NASA Astrophysics Data System (ADS)

    Yu, Zhitao; Miller, Franklin; Pfotenhauer, John M.

    2017-12-01

Both a numerical and an analytical model of the heat and mass transfer processes in a CO2/N2 gas mixture de-sublimating cross-flow finned duct heat exchanger system is developed to predict the heat transferred from the mixture gas to liquid nitrogen and the de-sublimation rate of CO2 in the mixture gas. The mixture gas outlet temperature, liquid nitrogen outlet temperature, CO2 mole fraction, temperature distribution, and de-sublimation rate of CO2 through the whole heat exchanger were computed using both the numerical and the analytical model. The numerical model is built using EES [1] (Engineering Equation Solver). According to the simulation, a cross-flow finned duct heat exchanger can be designed and fabricated to validate the models. The performance of the heat exchanger is evaluated as a function of dimensionless variables, such as the ratio of the mass flow rate of liquid nitrogen to the mass flow rate of inlet flue gas.

  19. Structure investigations on assembled astaxanthin molecules

    NASA Astrophysics Data System (ADS)

    Köpsel, Christian; Möltgen, Holger; Schuch, Horst; Auweter, Helmut; Kleinermanns, Karl; Martin, Hans-Dieter; Bettermann, Hans

    2005-08-01

The carotenoid r,r-astaxanthin (3R,3′R-dihydroxy-4,4′-diketo-β-carotene) forms different types of aggregates in acetone-water mixtures. H-type aggregates were found in mixtures with a high water fraction (e.g. a 1:9 acetone-water mixture), whereas two different types of J-aggregates were identified in mixtures with a lower water fraction (a 3:7 acetone-water mixture). These aggregates were characterized by recording UV/vis absorption spectra, CD spectra and fluorescence emissions. The sizes of the molecular assemblies were determined by dynamic light scattering experiments. The hydrodynamic diameter of the assemblies amounts to 40 nm in 1:9 acetone-water mixtures and exceeds 1 μm in 3:7 acetone-water mixtures. Scanning tunneling microscopy monitored astaxanthin aggregates on graphite surfaces. The structure of the H-aggregate was obtained by molecular modeling calculations. The structure was confirmed by calculating the electronic absorption spectrum and the CD spectrum, using the molecular modeling structure as input.

  20. Mixture modelling for cluster analysis.

    PubMed

    McLachlan, G J; Chang, S U

    2004-10-01

    Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
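The posterior-probability assignment rule described above can be sketched for a fitted univariate normal mixture; the component parameters are assumed already estimated, and all values are illustrative:

```python
import math

def assign_cluster(x, weights, means, sds):
    """Hard clustering from a fitted univariate normal mixture: assign x
    to the component with the highest estimated posterior probability
    tau_i(x) = w_i * f_i(x) / sum_j w_j * f_j(x)."""
    def normal_pdf(v, m, s):
        return math.exp(-0.5 * ((v - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    joint = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sds)]
    total = sum(joint)
    posteriors = [j / total for j in joint]
    label = max(range(len(joint)), key=lambda i: posteriors[i])
    return label, posteriors

# Two hypothetical fitted components; 2.8 lies nearer the second mean
label, post = assign_cluster(2.8, weights=[0.6, 0.4], means=[0.0, 3.0], sds=[1.0, 1.0])
print(label)  # 1
```

The i-th cluster in the abstract's notation is then simply the set of observations whose `label` equals i.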

  1. Establishment method of a mixture model and its practical application for transmission gears in an engineering vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping

    2012-09-01

Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to the complexity of the load. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, its PDF cannot be well represented by just a single-peak function. For the purpose of representing the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method to establish a mixture model. Based on linear regression models and correlation coefficients, the proposed method can be used to automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are chosen and then used as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated by the field testing data of a wheel loader. Meanwhile, the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for the description of the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces the statistical error, and enhances the fitting accuracy, and the load spectra compiled by this method can better reflect the actual load characteristics of the gear component.

  2. Compact determination of hydrogen isotopes

    DOE PAGES

    Robinson, David

    2017-04-06

Scanning calorimetry of a confined, reversible hydrogen sorbent material has been previously proposed as a method to determine compositions of unknown mixtures of diatomic hydrogen isotopologues and helium. Application of this concept could result in greater process knowledge during the handling of these gases. Previously published studies have focused on mixtures that do not include tritium. This paper focuses on modeling to predict the effect of tritium in mixtures of the isotopologues on a calorimetry scan. The model predicts that tritium can be measured with a sensitivity comparable to that observed for hydrogen-deuterium mixtures, and that under some conditions it may be possible to determine the atomic fractions of all three isotopes in a gas mixture.

  3. Multinomial modeling and an evaluation of common data-mining algorithms for identifying signals of disproportionate reporting in pharmacovigilance databases.

    PubMed

    Johnson, Kjell; Guo, Cen; Gosink, Mark; Wang, Vicky; Hauben, Manfred

    2012-12-01

A principal objective of pharmacovigilance is to detect adverse drug reactions that are unknown or novel in terms of their clinical severity or frequency. One method is through inspection of spontaneous reporting system databases, which consist of millions of reports of patients experiencing adverse effects while taking one or more drugs. For such large databases, there is an increasing need for quantitative and automated screening tools to assist drug safety professionals in identifying drug-event combinations (DECs) worthy of further investigation. Existing algorithms can effectively identify problematic DECs when the frequencies are high. However, these algorithms perform differently for low-frequency DECs. In this work, we provide a method based on the multinomial distribution that identifies signals of disproportionate reporting, especially for low-frequency combinations. In addition, we comprehensively compare the performance of commonly used algorithms with the new approach. Simulation results demonstrate the advantages of the proposed method, and analysis of the Adverse Event Reporting System data shows that the proposed method can help detect interesting signals. Furthermore, we suggest that these methods be used to identify DECs that occur significantly less frequently than expected, thus identifying potential alternative indications for these drugs. We provide an empirical example that demonstrates the importance of exploring underexpected DECs. Code to implement the proposed method is available in R on request from the corresponding authors (kjell@arboranalytics.com or Mark.M.Gosink@Pfizer.com). Supplementary data are available at Bioinformatics online.
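
As a hedged illustration of disproportionality screening (a generic observed-versus-expected comparison, not the paper's exact multinomial method), expected DEC counts under independence can be compared with observed counts; all counts below are hypothetical:

```python
import numpy as np

# Toy drug-by-event contingency table of spontaneous report counts
# (rows = drugs, columns = adverse events); all values hypothetical.
counts = np.array([[20, 5],
                   [30, 45]])
total = counts.sum()

# Under independence the cell probabilities factorize, so the expected
# count for each drug-event combination (DEC) is row_total*col_total/total.
expected = np.outer(counts.sum(axis=1), counts.sum(axis=0)) / total

# Relative reporting ratio: observed / expected. Values far above 1 flag
# potential signals; values far below 1 flag under-expected DECs, the kind
# the abstract suggests may indicate alternative indications.
rrr = counts / expected
```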

  4. Public funding of pharmaceuticals in The Netherlands: investigating the effect of evidence, process and context on CVZ decision-making.

    PubMed

    Cerri, Karin H; Knapp, Martin; Fernandez, Jose-Luis

    2014-09-01

The College Voor Zorgverzekeringen (CVZ) provides guidance to the Dutch healthcare system on funding and use of new pharmaceutical technologies. This study examined the impact of evidence, process and context factors on CVZ decisions in 2004-2009. A data set of CVZ decisions pertaining to pharmaceutical technologies was created, including 29 variables extracted from published information. A three-category outcome variable was used, defined as the decision to 'recommend', 'restrict' or 'not recommend' a technology. Technologies included in list 1A/1B or on the expensive drug list were considered recommended; those included in list 2 or for which patient co-payment is required were considered restricted; technologies not included on any reimbursement list were classified as 'not recommended'. Using multinomial logistic regression, the relative contribution of explanatory variables to CVZ decisions was assessed. In all, 244 technology appraisals (256 technologies) were analysed, with 51% of technologies recommended, 33% restricted and 16% not recommended by CVZ for funding. The multinomial model showed significant associations (p ≤ 0.10) between CVZ outcome and several variables, including: (1) use of an active comparator and demonstration of statistical superiority of the primary endpoint in clinical trials, (2) pharmaceutical budget impact associated with introduction of the technology, (3) therapeutic indication and (4) prevalence of the target population. Results confirm the value of a comprehensive and multivariate approach to understanding CVZ decision-making.
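
A minimal sketch of how a multinomial logit turns linear predictors into probabilities over the three outcome categories; the coefficients and covariate values are hypothetical, with 'recommend' as the reference category:

```python
import numpy as np

# Hypothetical covariate vector (intercept + one covariate) and coefficients
# for the two non-reference categories; 'recommend' is the reference
# (its linear predictor is fixed at 0).
x = np.array([1.0, 0.5])
beta = {"restrict": np.array([-0.2, 0.4]),
        "not recommend": np.array([-1.0, 0.3])}

eta = np.array([0.0] + [b @ x for b in beta.values()])
probs = np.exp(eta) / np.exp(eta).sum()   # softmax over the three categories
```

The fitted coefficients quantify how each explanatory variable shifts the odds of 'restrict' or 'not recommend' relative to 'recommend'.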

  5. The status of diabetes control in Kurdistan province, west of Iran.

    PubMed

    Esmailnasab, Nader; Afkhamzadeh, Abdorrahim; Roshani, Daem; Moradi, Ghobad

    2013-09-17

Based on some estimates, more than two million people in Iran are affected by Type 2 diabetes. The present study was designed to evaluate the status of diabetes control among Type 2 diabetes patients in Kurdistan, west of Iran, and its associated factors. In our cross-sectional study conducted in 2010, 411 Type 2 diabetes patients were randomly recruited from Sanandaj, the capital of Kurdistan. The chi-square test was used in univariate analysis to assess the association between HgA1c and FBS status and other variables. Significant results from the univariate analysis were entered into a multivariate multinomial logistic regression model. In 38% of patients, FBS was in the normal range (70-130), and in 47% HgA1c was <7%, the normal range. In univariate analysis, FBS level was associated with educational level (P=0.001), referral style (P=0.001), referral time (P=0.009), and insulin injection (P=0.016). In addition, HgA1c was associated with sex (P=0.023), age (P=0.035), education (P=0.001), referral style (P=0.001), and insulin injection (P=0.008). After multinomial logistic regression on the significant univariate results, FBS was significantly associated with referral style, and HgA1c was significantly associated with referral style and insulin injection. Although some patients were covered by specialized care, their diabetes was not properly controlled.

  6. Shared clinical decision making

    PubMed Central

    AlHaqwi, Ali I.; AlDrees, Turki M.; AlRumayyan, Ahmad; AlFarhan, Ali I.; Alotaibi, Sultan S.; AlKhashan, Hesham I.; Badri, Motasim

    2015-01-01

Objectives: To determine preferences of patients regarding their involvement in the clinical decision making process and the related factors in Saudi Arabia. Methods: This cross-sectional study was conducted in a major family practice center in King Abdulaziz Medical City, Riyadh, Saudi Arabia, between March and May 2012. Multivariate multinomial regression models were fitted to identify factors associated with patients' preferences. Results: The study included 236 participants. The most preferred decision-making style was shared decision-making (57%), followed by paternalistic (28%), and informed consumerism (14%). The preference for shared clinical decision making was significantly higher among male patients and those with higher level of education, whereas paternalism was significantly higher among older patients and those with chronic health conditions, and consumerism was significantly higher in younger age groups. In multivariate multinomial regression analysis, compared with the shared group, the consumerism group were more likely to be female [adjusted odds ratio (AOR) =2.87, 95% confidence interval [CI] 1.31-6.27, p=0.008] and non-dyslipidemic (AOR=2.90, 95% CI: 1.03-8.09, p=0.04), and the paternalism group were more likely to be older (AOR=1.03, 95% CI: 1.01-1.05, p=0.04), and female (AOR=2.47, 95% CI: 1.32-4.06, p=0.008). Conclusion: Preferences of patients for involvement in clinical decision-making varied considerably. The underlying factors that influence these preferences, identified in this study, should be considered and addressed individually to achieve optimal treatment outcomes. PMID:26620990

  7. Association between employee benefits and frailty in community-dwelling older adults.

    PubMed

    Avila-Funes, José Alberto; Paniagua-Santos, Diana Leticia; Escobar-Rivera, Vicente; Navarrete-Reyes, Ana Patricia; Aguilar-Navarro, Sara; Amieva, Hélène

    2016-05-01

The phenotype of frailty has been associated with an increased vulnerability for the development of adverse health-related outcomes. The origin of frailty is multifactorial, and financial issues could be implicated, as they have been associated with health status, well-being and mortality. However, the association between economic benefits and frailty has been poorly explored. Therefore, the objective was to determine the association between employee benefits and frailty. A cross-sectional study of 927 community-dwelling older adults aged 70 years and older participating in the Mexican Study of Nutritional and Psychosocial Markers of Frailty was carried out. Employee benefits were established according to eight characteristics: bonus, profit sharing, pension, health insurance, food stamps, housing credit, life insurance, and Christmas bonus. Frailty was defined according to a slightly modified version of the phenotype proposed by Fried et al. Multinomial logistic regression models were run to determine the association between employee benefits and frailty, adjusting for sociodemographic and health covariates. The prevalence of frailty was 14.1%, and 4.4% of participants rated their health status as "poor." Multinomial logistic regression analyses showed that employee benefits were statistically and independently associated with the frail subgroup (OR 0.85; 95% CI 0.74-0.98; P = 0.027) even after adjusting for potential confounders. Fewer employee benefits are associated with frailty. Extending employee benefits to older people could have a positive impact on preventing frailty and its consequences. Geriatr Gerontol Int 2016; 16: 606-611. © 2015 Japan Geriatrics Society.

  8. Lattice model for water-solute mixtures.

    PubMed

    Furlan, A P; Almarza, N G; Barbosa, M C

    2016-10-14

A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of the pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low-density liquid, and high-density liquid phases. In the case of the solute, the regions of coexistence are substantially reduced when compared with both the water and the standard ALG models. A numerical procedure has been developed in order to obtain series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as a function of the solute fraction, have been studied for different interaction parameters of the model. Our model is able to reproduce qualitatively well the excess volume and enthalpy for different aqueous solutions. For the hydrophilic case, we show that the model is able to reproduce the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.

  9. Age-specific population frequencies of amyloidosis and neurodegeneration among cognitively normal people age 50-89 years: a cross-sectional study

    PubMed Central

    Jack, Clifford R.; Wiste, Heather J.; Weigand, Stephen D.; Rocca, Walter A.; Knopman, David S.; Mielke, Michelle M.; Lowe, Val J.; Senjem, Matthew L.; Gunter, Jeffrey L.; Preboske, Gregory M.; Pankratz, Vernon S.; Vemuri, Prashanthi; Petersen, Ronald C.

    2015-01-01

    Summary Background As treatment of pre-clinical Alzheimer's disease (AD) becomes a focus of therapeutic intervention, observational research studies should recognize the overlap between imaging abnormalities associated with typical aging versus those associated with AD. Our objective was to characterize how typical aging and pre-clinical AD blend together with advancing age in terms of neurodegeneration and β-amyloidosis. Methods We measured age-specific frequencies of amyloidosis and neurodegeneration in 985 cognitively normal subjects aged 50 to 89 from a population-based study of cognitive aging. Potential participants were randomly selected from the Olmsted County, Minnesota population by age- and sex-stratification and invited to participate in cognitive evaluations and undergo multimodality imaging. To be eligible for inclusion, subjects must have been judged clinically to have no cognitive impairment and have undergone amyloid PET, FDG PET and MRI. Imaging studies were obtained from March 2006 to December 2013. Amyloid positive/negative status (A+/A−) was determined by amyloid PET using Pittsburgh Compound B. Neurodegeneration positive/negative status (N+/N−) was determined by an AD-signature FDG PET measure and/or hippocampal volume on MRI. We labeled subjects positive or negative for neurodegeneration (FDG PET or MRI) or amyloidosis by using cutpoints defined such that 90% of 75 clinically diagnosed AD dementia subjects were categorized as abnormal. APOE genotype was assessed using DNA extracted from blood. Every individual was assigned to one of four groups: A−N−, A+N−, A−N+, or A+N+. Age-specific frequencies of the 4 A/N groups were determined cross-sectionally using multinomial regression models. Associations with APOE ε4 and sex effects were evaluated by including these covariates in the multinomial models. Findings The population frequency of A−N− was 100% (n=985) at age 50 and declined thereafter. 
The frequency of A+N− increased to a maximum of 28% (95% CI, 24%-32%) at age 74 then decreased to 17% (95% CI, 11%-25%) by age 89. A−N+ increased from age 60 onward, reaching a frequency of 24% (95% CI, 16%-34%) by age 89. A+N+ increased from age 65 onward, reaching a frequency of 42% (95% CI, 31%-52%) by age 89. A+N− and A+N+ were more frequent in APOE ε4 carriers. A+N+ was more, and A+N− less, frequent in men. Interpretation Accumulation of A/N imaging abnormalities is nearly inevitable by old age, yet people are able to remain cognitively normal despite these abnormalities. The multinomial models suggest the A/N frequency trends by age are modified by APOE ε4, which increases risk for amyloidosis, and male sex, which increases risk for neurodegeneration. Changing A/N frequencies with age suggest that individuals may follow different pathophysiological sequences. Funding National Institute on Aging; Alexander Family Professorship of Alzheimer's Disease Research. PMID:25201514

  10. Comparing performance of multinomial logistic regression and discriminant analysis for monitoring access to care for acute myocardial infarction.

    PubMed

    Hossain, Monir; Wright, Steven; Petersen, Laura A

    2002-04-01

    One way to monitor patient access to emergent health care services is to use patient characteristics to predict arrival time at the hospital after onset of symptoms. This predicted arrival time can then be compared with actual arrival time to allow monitoring of access to services. Predicted arrival time could also be used to estimate potential effects of changes in health care service availability, such as closure of an emergency department or an acute care hospital. Our goal was to determine the best statistical method for prediction of arrival intervals for patients with acute myocardial infarction (AMI) symptoms. We compared the performance of multinomial logistic regression (MLR) and discriminant analysis (DA) models. Models for MLR and DA were developed using a dataset of 3,566 male veterans hospitalized with AMI in 81 VA Medical Centers in 1994-1995 throughout the United States. The dataset was randomly divided into a training set (n = 1,846) and a test set (n = 1,720). Arrival times were grouped into three intervals on the basis of treatment considerations: <6 hours, 6-12 hours, and >12 hours. One model for MLR and two models for DA were developed using the training dataset. One DA model had equal prior probabilities, and one DA model had proportional prior probabilities. Predictive performance of the models was compared using the test (n = 1,720) dataset. Using the test dataset, the proportions of patients in the three arrival time groups were 60.9% for <6 hours, 10.3% for 6-12 hours, and 28.8% for >12 hours after symptom onset. Whereas the overall predictive performance by MLR and DA with proportional priors was higher, the DA models with equal priors performed much better in the smaller groups. Correct classifications were 62.6% by MLR, 62.4% by DA using proportional prior probabilities, and 48.1% using equal prior probabilities of the groups. 
The misclassification rates by MLR for the three time intervals were 9.5%, 100.0%, and 74.2%, respectively. Misclassification rates by the DA models were 9.8%, 100.0%, and 74.4% for the model with proportional priors and 47.6%, 79.5%, and 51.0% for the model with equal priors. The choice of MLR, DA with proportional priors, or DA with equal priors for monitoring time intervals of predicted hospital arrival for a population should depend on the consequences of misclassification errors.
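
Per-group and overall (mis)classification rates of this kind come directly from a confusion matrix; a sketch with a hypothetical matrix (not the study's actual counts) shows the computation:

```python
import numpy as np

# Hypothetical confusion matrix: rows = true arrival-time group
# (<6 h, 6-12 h, >12 h), columns = predicted group.
cm = np.array([[950,  10,  90],
               [100,   0,  77],
               [350,   0, 143]])

# Misclassification rate per true group: 1 - (correct in group / group total).
per_group_miscls = 1 - np.diag(cm) / cm.sum(axis=1)

# Overall correct-classification rate: diagonal total / grand total.
overall_correct = np.trace(cm) / cm.sum()
```

A model can score well overall while misclassifying every member of a small group (here the middle row has no correct predictions), which is the trade-off the abstract highlights between proportional and equal priors.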

  11. Approximation of the breast height diameter distribution of two-cohort stands by mixture models. III. Kernel density estimators vs mixture models

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...

  12. The nonlinear model for emergence of stable conditions in gas mixture in force field

    NASA Astrophysics Data System (ADS)

    Kalutskov, Oleg; Uvarova, Liudmila

    2016-06-01

The case of M-component liquid evaporation from a straight cylindrical capillary into an N-component gas mixture in the presence of external forces was reviewed. It is assumed that the gas mixture is not ideal. Because of the nonlinearity of the initial mass-transfer equations, stable states in the gas phase can form during the evaporation process for certain model parameter values. The critical concentrations of the resulting gas mixture components (the concentrations at which the stable states occur in the mixture) were determined mathematically for the case of single-component fluid evaporation into a two-component atmosphere. It was concluded that this equilibrium concentration ratio of the mixture components can be achieved by external force influence on the mass transfer processes. This is one way to create sustainable gas clusters that can be used effectively in modern nanotechnology.

  13. A general mixture theory. I. Mixtures of spherical molecules

    NASA Astrophysics Data System (ADS)

    Hamad, Esam Z.

    1996-08-01

We present a new general theory for obtaining mixture properties from the pure-species equations of state. The theory addresses the composition dependence and the unlike-interaction dependence of the mixture equation of state. The density expansion of the mixture equation gives the exact composition dependence of all virial coefficients. The theory introduces multiple-index parameters that can be calculated from binary unlike-interaction parameters. In this first part of the work, details are presented for the first and second levels of approximation for spherical molecules. The second-order model is simple and very accurate. It predicts the compressibility factor of additive hard spheres within simulation uncertainty (equimolar with size ratio of three). For nonadditive hard spheres, comparison with compressibility factor simulation data over a wide range of density, composition, and nonadditivity parameter gave an average error of 2%. For mixtures of Lennard-Jones molecules, the model predictions are better than the Weeks-Chandler-Andersen perturbation theory.

  14. Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀.

    PubMed

    Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan

    2017-09-01

Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null hypothesis H₀ (when their distribution is uniform) or from the alternative hypothesis H₁ (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H₀, but it also estimates the probability that each specific p value originates from H₀. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H₀ than from H₁. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
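
A minimal sketch of the two-component idea: significant p values are uniform on (0, .05) under H₀ and follow some decreasing parametric density under H₁. The p^(-1/2) density and the H₀ proportion below are illustrative assumptions, not the paper's actual parametric choice:

```python
import numpy as np

ALPHA = 0.05   # significance threshold
pi0 = 0.3      # assumed proportion of significant p values from H0 (hypothetical)

def f0(p):
    """Density of significant p values under H0: uniform on (0, ALPHA)."""
    return 1.0 / ALPHA

def f1(p):
    """Assumed H1 density proportional to p**-0.5, normalized on (0, ALPHA)."""
    return 0.5 * p ** -0.5 / np.sqrt(ALPHA)

def prob_h0(p):
    """Posterior probability that a specific significant p value stems from H0."""
    num = pi0 * f0(p)
    return num / (num + (1 - pi0) * f1(p))
```

Because f1 is decreasing, larger p values within the significant range are more likely to come from H₀, mirroring the abstract's finding that p values above about .005 tend to stem from H₀.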

  15. Thermodynamics of concentrated electrolyte mixtures and the prediction of mineral solubilities to high temperatures for mixtures in the system Na-K-Mg-Cl-SO4-OH-H2O

    NASA Astrophysics Data System (ADS)

    Pabalan, Roberto T.; Pitzer, Kenneth S.

    1987-09-01

    Mineral solubilities in binary and ternary electrolyte mixtures in the system Na-K-Mg-Cl-SO4-OH-H2O are calculated to high temperatures using available thermodynamic data for solids and for aqueous electrolyte solutions. Activity and osmotic coefficients are derived from the ion-interaction model of Pitzer (1973, 1979) and co-workers, the parameters of which are evaluated from experimentally determined solution properties or from solubility data in binary and ternary mixtures. Excellent to good agreement with experimental solubilities for binary and ternary mixtures indicates that the model can be successfully used to predict mineral-solution equilibria to high temperatures. Although there are currently no theoretical forms for the temperature dependencies of the various model parameters, the solubility data in ternary mixtures can be adequately represented by constant values of the mixing term θij and values of ψijk that are either constant or have a simple temperature dependence. Since no additional parameters are needed to describe the thermodynamic properties of more complex electrolyte mixtures, the calculations can be extended to equilibrium studies relevant to natural systems. Examples of predicted solubilities are given for the quaternary system NaCl-KCl-MgCl2-H2O.

  16. Lattice Boltzmann scheme for mixture modeling: analysis of the continuum diffusion regimes recovering Maxwell-Stefan model and incompressible Navier-Stokes equations.

    PubMed

    Asinari, Pietro

    2009-11-01

A finite difference lattice Boltzmann scheme for homogeneous mixture modeling, which recovers the Maxwell-Stefan diffusion model in the continuum limit without the restriction of the mixture-averaged diffusion approximation, was recently proposed [P. Asinari, Phys. Rev. E 77, 056706 (2008)]. The theoretical basis is the Bhatnagar-Gross-Krook-type kinetic model for gas mixtures [P. Andries, K. Aoki, and B. Perthame, J. Stat. Phys. 106, 993 (2002)]. In the present paper, the recovered macroscopic equations in the continuum limit are systematically investigated by varying the ratio between the characteristic diffusion speed and the characteristic barycentric speed. It turns out that the diffusion speed must be at least one order of magnitude (in terms of Knudsen number) smaller than the barycentric speed in order to recover the Navier-Stokes equations for mixtures in the incompressible limit. Some further numerical tests are also reported. In particular, (1) the solvent and dilute test cases are considered, because they are limiting cases in which the Maxwell-Stefan model reduces automatically to Fickian cases. Moreover, (2) some tests based on the Stefan diffusion tube are reported, proving the capability of the proposed scheme to solve Maxwell-Stefan diffusion problems. The proposed scheme agrees well with the expected theoretical results.

  17. Support vector regression and artificial neural network models for stability indicating analysis of mebeverine hydrochloride and sulpiride mixtures in pharmaceutical preparation: A comparative study

    NASA Astrophysics Data System (ADS)

    Naguib, Ibrahim A.; Darwish, Hany W.

    2012-02-01

A comparison between support vector regression (SVR) and Artificial Neural Network (ANN) multivariate regression methods is established, showing the underlying algorithm for each and comparing them to indicate the inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (genetic algorithm (GA)). To ground the comparison, the methods are used for the stability-indicating quantitative analysis of binary mixtures of mebeverine hydrochloride and sulpiride, as a case study, in the presence of their reported impurities and degradation products (summing up to 6 components) in raw materials and pharmaceutical dosage form, via handling the UV spectral data. For proper analysis, a 6-factor 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR (without GA) and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results manifest the problem of nonlinearity and how models like SVR and ANN can handle it. The methods indicate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the 6-component mixtures, using a cheap and easy-to-handle instrument, the UV spectrophotometer.

  18. A Mixtures-of-Trees Framework for Multi-Label Classification

    PubMed Central

    Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos

    2015-01-01

    We propose a new probabilistic approach for multi-label classification that aims to represent the class posterior distribution P(Y|X). Our approach uses a mixture of tree-structured Bayesian networks, which can leverage the computational advantages of conditional tree-structured models and the abilities of mixtures to compensate for tree-structured restrictions. We develop algorithms for learning the model from data and for performing multi-label predictions using the learned model. Experiments on multiple datasets demonstrate that our approach outperforms several state-of-the-art multi-label classification methods. PMID:25927011

  19. Liquid class predictor for liquid handling of complex mixtures

    DOEpatents

    Seglke, Brent W [San Ramon, CA]; Lekin, Timothy P [Livermore, CA]

    2008-12-09

    A method of establishing liquid classes of complex mixtures for liquid handling equipment. The mixtures are composed of components and the equipment has equipment parameters. The first step comprises preparing a response curve for the components. The next step comprises using the response curve to prepare a response indicator for the mixtures. The next step comprises deriving a model that relates the components and the mixtures to establish the liquid classes.

  20. Regression mixture models: Does modeling the covariance between independent variables and latent classes improve the results?

    PubMed Central

    Lamont, Andrea E.; Vermunt, Jeroen K.; Van Horn, M. Lee

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we test the effects of violating an implicit assumption often made in these models – i.e., that independent variables in the model are not directly related to latent classes. Results indicated that the major risk of failing to model the relationship between predictor and latent class was an increase in the probability of selecting additional latent classes and biased class proportions. Additionally, this study tests whether regression mixture models can detect a piecewise relationship between a predictor and outcome. Results suggest that these models are able to detect piecewise relations, but only when the relationship between the latent class and the predictor is included in model estimation. We illustrate the implications of making this assumption through a re-analysis of applied data examining heterogeneity in the effects of family resources on academic achievement. We compare previous results (which assumed no relation between independent variables and latent class) to the model where this assumption is lifted. Implications and analytic suggestions for conducting regression mixture models based on these findings are noted. PMID:26881956

  1. A Concentration Addition Model to Assess Activation of the Pregnane X Receptor (PXR) by Pesticide Mixtures Found in the French Diet

    PubMed Central

    de Sousa, Georges; Nawaz, Ahmad; Cravedi, Jean-Pierre; Rahmani, Roger

    2014-01-01

French consumers are exposed to mixtures of pesticide residues in part through food consumption. As a xenosensor, the pregnane X receptor (hPXR) is activated by numerous pesticides, the combined effect of which is currently unknown. We examined the activation of hPXR by seven pesticide mixtures most likely found in the French diet and their individual components. The mixture's effect was estimated using the concentration addition (CA) model. PXR transactivation was measured by monitoring luciferase activity in hPXR/HepG2 cells and CYP3A4 expression in human hepatocytes. The three mixtures with the highest potency were evaluated using the CA model, at equimolar concentrations and at their relative proportions in the diet. The seven mixtures significantly activated hPXR and induced the expression of CYP3A4 in human hepatocytes. Of the 14 pesticides which constitute the three most active mixtures, four were found to be strong hPXR agonists, four medium, and six weak. Depending on the mixture and pesticide proportions, additive, greater-than-additive, or less-than-additive effects between compounds were demonstrated. Predictions of the combined effects were obtained with both real-life and equimolar proportions at low concentrations. Pesticides act mostly additively to activate hPXR when present in a mixture. Modulation of hPXR activation and induction of its target genes may represent a risk factor that exacerbates the physiological response of the hPXR signaling pathways and may explain some adverse effects in humans. PMID:25028461
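
Under the concentration addition model, the effect concentration of a mixture follows from the components' individual effect concentrations and their proportions in the mixture; a sketch with hypothetical EC50 values and fractions:

```python
import numpy as np

# Concentration addition (CA): the predicted mixture effect concentration is
# the reciprocal of the proportion-weighted sum of reciprocal component
# effect concentrations. All values below are hypothetical (same units).
ec50 = np.array([2.0, 8.0, 4.0])   # EC50 of each component alone
p = np.array([0.5, 0.25, 0.25])    # relative proportions in the mixture

ecmix = 1.0 / np.sum(p / ec50)     # predicted mixture EC50 under CA
```

Deviations of the observed mixture potency from this prediction are what the abstract describes as greater-than-additive or less-than-additive effects.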

  2. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
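    The multinomial-weighting idea described above can be sketched in a few lines of NumPy (the paper's implementation is in R; this Python translation, with arbitrary simulated data, is only illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(size=n)

B = 2000
# Multinomial sampling formulation: each bootstrap replication is a row of
# counts w with sum(w) == n, used to weight the observed data rather than
# physically resampling it.
W = rng.multinomial(n, np.ones(n) / n, size=B)   # (B, n) weight matrix

# Weighted sample moments for all B replications via matrix multiplications.
mx  = W @ x / n
my  = W @ y / n
mxx = W @ (x * x) / n
myy = W @ (y * y) / n
mxy = W @ (x * y) / n
r_boot = (mxy - mx * my) / np.sqrt((mxx - mx**2) * (myy - my**2))

ci = np.percentile(r_boot, [2.5, 97.5])          # percentile bootstrap CI for r
```

    Each row of `W` plays the role of one explicit resample; the loop over replications disappears into the five matrix products.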

  3. Transient Catalytic Combustor Model With Detailed Gas and Surface Chemistry

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Dietrich, Daniel L.; Mellish, Benjamin P.; Miller, Fletcher J.; Tien, James S.

    2005-01-01

    In this work, we numerically investigate the transient combustion of a premixed gas mixture in a narrow, perfectly insulated, catalytic channel which can represent an interior channel of a catalytic monolith. The model assumes a quasi-steady gas phase and a transient, thermally thin solid phase. The gas phase is one-dimensional, but it does account for heat and mass transfer in a direction perpendicular to the flow via appropriate heat and mass transfer coefficients. The model neglects axial conduction in both the gas and the solid. The model includes both detailed gas-phase reactions and catalytic surface reactions. The reactants modeled so far include lean mixtures of dry CO and CO/H2 mixtures, with pure oxygen as the oxidizer. The results include transient computations of light-off and system response to inlet condition variations. In some cases, the model predicts two different steady-state solutions depending on whether the channel is initially hot or cold. Additionally, the model suggests that the catalytic ignition of CO/O2 mixtures is extremely sensitive to small variations of inlet equivalence ratios and to parts-per-million levels of H2.

  4. Using dynamic N-mixture models to test cavity limitation on northern flying squirrel demographic parameters using experimental nest box supplementation.

    PubMed

    Priol, Pauline; Mazerolle, Marc J; Imbeau, Louis; Drapeau, Pierre; Trudeau, Caroline; Ramière, Jessica

    2014-06-01

    Dynamic N-mixture models have been recently developed to estimate demographic parameters of unmarked individuals while accounting for imperfect detection. We propose an application of the Dail and Madsen (2011: Biometrics, 67, 577-587) dynamic N-mixture model in a manipulative experiment using a before-after control-impact (BACI) design. Specifically, we tested the hypothesis of cavity limitation of a cavity specialist species, the northern flying squirrel, using nest box supplementation on half of 56 trapping sites. Our main purpose was to evaluate the impact of an increase in cavity availability on flying squirrel population dynamics in deciduous stands in northwestern Québec with the dynamic N-mixture model. We compared abundance estimates from this recent approach with those from classic capture-mark-recapture (CMR) models and generalized linear models. We compared apparent survival estimates with those from Cormack-Jolly-Seber (CJS) models. Average recruitment rate was 6 individuals per site after 4 years. Nevertheless, we found no effect of cavity supplementation on apparent survival and recruitment rates of flying squirrels. Contrary to our expectations, initial abundance was not affected by conifer basal area (food availability) and was negatively affected by snag basal area (cavity availability). Northern flying squirrel population dynamics are not influenced by cavity availability at our deciduous sites. Consequently, we suggest that this species should not be considered an indicator of old forest attributes in our study area, especially in view of apparently wide population fluctuations across years. Abundance estimates from N-mixture models were similar to those from CMR models, although the latter had greater precision. Generalized linear mixed models produced lower abundance estimates, but revealed the same relationship between abundance and snag basal area. Apparent survival estimates from N-mixture models were higher and less precise than those from CJS models. However, N-mixture models can be particularly useful for evaluating management effects on animal populations, especially for species that are difficult to detect and in situations where individuals cannot be uniquely identified. They also allow investigation of covariate effects at the site level when low recapture rates would force classic CMR analyses to be restricted to a subset of sites with the most captures.
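    For readers unfamiliar with the machinery, the static binomial N-mixture likelihood (the building block that the Dail and Madsen dynamic model extends) can be sketched as follows. This is a simplified illustration, not the authors' analysis: the simulated data, the coarse grid search, and all parameter values are stand-ins for a real fit with `unmarked` or BUGS:

```python
import numpy as np
from scipy.stats import poisson, binom

rng = np.random.default_rng(1)
n_sites, n_visits, lam_true, p_true = 200, 3, 5.0, 0.5
N = rng.poisson(lam_true, n_sites)                         # latent abundances
y = rng.binomial(N[:, None], p_true, (n_sites, n_visits))  # repeated counts

def neg_loglik(lam, p, y, K=60):
    """Marginal likelihood per site: sum_N Pois(N|lam) * prod_t Bin(y_t|N, p),
    with the latent N summed out up to a truncation bound K."""
    Ns = np.arange(K + 1)
    prior = poisson.pmf(Ns, lam)                           # (K+1,)
    # (sites, visits, K+1) binomial terms; product over the repeated visits
    lik = binom.pmf(y[:, :, None], Ns[None, None, :], p).prod(axis=1)
    return -np.log(lik @ prior + 1e-300).sum()

# Coarse grid search for the MLE (a real analysis would use a proper optimizer).
grid = [(l, q) for l in np.arange(1.0, 15.0, 0.5)
               for q in np.arange(0.05, 0.96, 0.05)]
lam_hat, p_hat = min(grid, key=lambda t: neg_loglik(*t, y))
```

    With several visits per site, abundance and detection separate cleanly, which is why repeated-count protocols outperform single counts.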

  5. Spurious Latent Classes in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Alexeev, Natalia; Templin, Jonathan; Cohen, Allan S.

    2011-01-01

    Mixture Rasch models have been used to study a number of psychometric issues such as goodness of fit, response strategy differences, strategy shifts, and multidimensionality. Although these models offer the potential for improving understanding of the latent variables being measured, under some conditions overextraction of latent classes may…

  6. Individual and binary toxicity of anatase and rutile nanoparticles towards Ceriodaphnia dubia.

    PubMed

    Iswarya, V; Bhuvaneshwari, M; Chandrasekaran, N; Mukherjee, Amitava

    2016-09-01

    Increasing usage of engineered nanoparticles, especially titanium dioxide (TiO2), in various commercial products has necessitated their toxicity evaluation and risk assessment, especially in the aquatic ecosystem. In the present study, a comprehensive toxicity assessment of anatase and rutile NPs (individual as well as a binary mixture) was carried out in a freshwater matrix on Ceriodaphnia dubia under different irradiation conditions, viz., visible and UV-A. Anatase and rutile NPs produced LC50 values of about 37.04 and 48 mg/L, respectively, under visible irradiation. However, lower LC50 values of about 22.56 (anatase) and 23.76 (rutile) mg/L were noted under UV-A irradiation. A toxic unit (TU) approach was followed to determine the concentrations of binary mixtures of anatase and rutile. The binary mixture resulted in an antagonistic and an additive effect under visible and UV-A irradiation, respectively. Of the two modeling approaches used in the study, the Marking-Dawson model was found to be more appropriate than the Abbott model for the toxicity evaluation of binary mixtures. The agglomeration of NPs played a significant role in the induction of antagonistic and additive effects by the mixture, depending on the irradiation applied. TEM and zeta potential analysis confirmed the surface interactions between anatase and rutile NPs in the mixture. Maximum uptake was noticed at 0.25 total TU of the binary mixture under visible irradiation and at 1 TU of anatase NPs under UV-A irradiation. Individual NPs showed higher uptake under UV-A than under visible irradiation. In contrast, the binary mixture showed a difference in uptake pattern based on the type of irradiation applied. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Rasch Mixture Models for DIF Detection: A Comparison of Old and New Score Specifications

    ERIC Educational Resources Information Center

    Frick, Hannah; Strobl, Carolin; Zeileis, Achim

    2015-01-01

    Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch…

  8. Modeling biofiltration of VOC mixtures under steady-state conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baltzis, B.C.; Wojdyla, S.M.; Zarook, S.M.

    1997-06-01

    Treatment of air streams contaminated with binary volatile organic compound (VOC) mixtures in classical biofilters under steady-state conditions of operation was described with a general mathematical model. The model accounts for potential kinetic interactions among the pollutants, effects of oxygen availability on biodegradation, and biomass diversification in the filter bed. While the effects of oxygen were always taken into account, two distinct cases were considered for the experimental model validation. The first involves kinetic interactions, but no biomass differentiation, used for describing data from biofiltration of benzene/toluene mixtures. The second case assumes that each pollutant is treated by a different type of biomass. Each biomass type is assumed to form separate patches of biofilm on the solid packing material, thus kinetic interference does not occur. This model was used for describing biofiltration of ethanol/butanol mixtures. Experiments were performed with classical biofilters packed with mixtures of peat moss and perlite (2:3, volume:volume). The model equations were solved through the use of computer codes based on the fourth-order Runge-Kutta technique for the gas-phase mass balances and the method of orthogonal collocation for the concentration profiles in the biofilms. Good agreement between model predictions and experimental data was found in almost all cases. Oxygen was found to be extremely important in the case of polar VOCs (ethanol/butanol).

  9. Modeling the soil water retention curves of soil-gravel mixtures with regression method on the Loess Plateau of China.

    PubMed

    Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an

    2013-01-01

    Soil water retention parameters are critical to quantify flow and solute transport in the vadose zone, while the presence of rock fragments remarkably increases their variability. Therefore, a novel method for determining water retention parameters of soil-gravel mixtures is required. The procedure to generate such a model is based firstly on the determination of the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on the integration of this relationship with former analytical equations of water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine WRCs of soil-gravel mixtures obtained with a clay loam soil mixed with shale clasts or pebbles in three size groups with various gravel contents. Data showed that the effective saturation of the soil-gravel mixtures with the same kind of gravels within one size group had a linear relation with gravel content, and a power relation with the bulk density of samples at any pressure head. Revised formulas for water retention properties of the soil-gravel mixtures are proposed to establish water retention curved-surface models of power-linear functions and power functions. The analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable using either the measured data of a separate gravel size group or those of all three gravel size groups spanning a large size range. Furthermore, the regression parameters of the curved surfaces for the soil-gravel mixtures with a large range of gravel content could be determined from the water retention data of soil-gravel mixtures with two representative gravel contents or bulk densities. Such revised water retention models are potentially applicable in regional or large-scale field investigations of significantly heterogeneous media, where various gravel sizes and different gravel contents are present.

  11. Phenomenological Modeling and Laboratory Simulation of Long-Term Aging of Asphalt Mixtures

    NASA Astrophysics Data System (ADS)

    Elwardany, Michael Dawoud

    The accurate characterization of asphalt mixture properties as a function of pavement service life is becoming more important as more powerful pavement design and performance prediction methods are implemented. Oxidative aging is a major distress mechanism of asphalt pavements. Aging increases the stiffness and brittleness of the material, which leads to a high cracking potential. Thus, an improved understanding of the aging phenomenon and its effect on asphalt binder chemical and rheological properties will allow for the prediction of mixture properties as a function of pavement service life. Many researchers have conducted laboratory binder thin-film aging studies; however, this approach does not allow for studying the physicochemical effects of mineral fillers on age-hardening rates in asphalt mixtures. Moreover, the aging phenomenon in the field is governed by the kinetics of binder oxidation, oxygen diffusion through the mastic phase, and oxygen percolation throughout the air-void structure. In this study, laboratory aging trials were conducted on mixtures prepared using component materials of several field projects throughout the USA and Canada. Laboratory-aged materials were compared against field cores sampled at different ages. Results suggested that oven aging of loose mixture at 95°C is the most promising laboratory long-term aging method. Additionally, an empirical model was developed to account for the effect of mineral fillers on age-hardening rates in asphalt mixtures. Kinetics modeling was used to predict field aging levels throughout the pavement thickness and to determine the laboratory aging duration required to match field aging. Kinetics model outputs are calibrated using measured data from the field to account for the effects of oxygen diffusion and percolation. Finally, the calibrated model was validated using an independent set of field sections. This work is expected to provide a basis for improved asphalt mixture and pavement design procedures and to save taxpayers' money.

  12. Kinetics of methane production from the codigestion of switchgrass and Spirulina platensis algae.

    PubMed

    El-Mashad, Hamed M

    2013-03-01

    Anaerobic batch digestion of four feedstocks was conducted at 35 and 50 °C: switchgrass; Spirulina platensis algae; and two mixtures of switchgrass and S. platensis. Mixture 1 was composed of 87% switchgrass (based on volatile solids) and 13% S. platensis. Mixture 2 was composed of 67% switchgrass and 33% S. platensis. The kinetics of methane production from these feedstocks was studied using four first-order models: exponential, Gompertz, Fitzhugh, and Cone. The methane yields after 40 days of digestion at 35 °C were 355, 127, 143 and 198 ml/g VS, respectively, for S. platensis, switchgrass, and Mixtures 1 and 2, while the yields at 50 °C were 358, 167, 198, and 236 ml/g VS, respectively. Based on Akaike's information criterion, the Cone model best described the experimental data. The Cone model was validated with experimental data collected from the digestion of a third mixture composed of 83% switchgrass and 17% S. platensis. Published by Elsevier Ltd.
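    The Cone model referenced above has the standard form Y(t) = Y_max / (1 + (k_h t)^-n), and fitting it to cumulative yield data is a one-call job with SciPy. The data points, initial guesses, and bounds below are hypothetical, loosely shaped like the 35 °C S. platensis curve reported above:

```python
import numpy as np
from scipy.optimize import curve_fit

def cone(t, y_max, kh, n):
    """Cone model for cumulative methane yield: Y(t) = y_max / (1 + (kh*t)**-n)."""
    return y_max / (1.0 + (kh * t) ** (-n))

# Hypothetical digestion data (ml CH4 / g VS over 40 days); illustrative only.
t = np.arange(1, 41, dtype=float)                  # days
y_true = cone(t, 355.0, 0.15, 1.8)
y_obs = y_true + np.random.default_rng(2).normal(0.0, 3.0, t.size)

popt, _ = curve_fit(cone, t, y_obs, p0=(300.0, 0.1, 1.0),
                    bounds=([1.0, 1e-3, 0.1], [1000.0, 5.0, 10.0]))
y_max_hat, kh_hat, n_hat = popt
```

    Competing kinetic forms (exponential, Gompertz, Fitzhugh) can be compared on the same data via AIC, as the authors did.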

  13. Advanced stability indicating chemometric methods for quantitation of amlodipine and atorvastatin in their quinary mixture with acidic degradation products

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2016-02-01

    Two advanced, accurate and precise chemometric methods are developed for the simultaneous determination of amlodipine besylate (AML) and atorvastatin calcium (ATV) in the presence of their acidic degradation products in tablet dosage forms. The first method was Partial Least Squares (PLS-1) and the second was Artificial Neural Networks (ANN). PLS was compared to ANN models with and without a variable selection procedure (genetic algorithm, GA). For proper analysis, a 5-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the interfering species. Fifteen mixtures were used as the calibration set and the other ten as the validation set to assess the prediction ability of the suggested models. The proposed methods were successfully applied to the analysis of pharmaceutical tablets containing AML and ATV. The results indicated the ability of the mentioned models to resolve the highly overlapping spectra of the quinary mixture while using inexpensive and easy-to-handle instruments such as the UV-VIS spectrophotometer.

  14. Thermal conductivity of disperse insulation materials and their mixtures

    NASA Astrophysics Data System (ADS)

    Geža, V.; Jakovičs, A.; Gendelis, S.; Usiļonoks, I.; Timofejevs, J.

    2017-10-01

    Development of new, more efficient thermal insulation materials is key to reducing heat losses and greenhouse gas emissions. Two innovative materials developed at Thermeko LLC are Izoprok and Izopearl. This research is devoted to an experimental study of the thermal insulation properties of both materials as well as of their mixture. Results show that a mixture of 40% Izoprok and 60% Izopearl has lower thermal conductivity than either pure material. In this work, the temperature dependence of the materials' thermal conductivity is also measured. A novel modelling approach is used to model the spatial distribution of the disperse insulation material. A computational fluid dynamics approach is also used to estimate the role of different heat transfer phenomena in such a porous mixture. Modelling results show that thermal convection plays a small role in heat transfer despite the large fraction of air within the material pores.

  15. A comparative study of mixture cure models with covariate

    NASA Astrophysics Data System (ADS)

    Leng, Oh Yit; Khalid, Zarina Mohd

    2017-05-01

    In survival analysis, the survival time is assumed to follow a non-negative distribution, such as the exponential, Weibull, or log-normal distribution. In some cases, the survival time is influenced by observed factors, and omitting these factors may cause inaccurate estimation of the survival function. Therefore, a survival model which incorporates the influence of observed factors is more appropriate in such cases. These observed factors are included in the survival model as covariates. Besides that, there are cases in which a group of individuals is cured, that is, never experiences the event of interest. Ignoring the cure fraction may lead to overestimation of the survival function. Thus, a mixture cure model is more suitable for modelling survival data in the presence of a cure fraction. In this study, three mixture cure survival models are used to analyse survival data with a covariate and a cure fraction. The first model includes the covariate in the parameterization of the susceptible individuals' survival function, the second model allows the cure fraction to depend on the covariate, and the third model incorporates the covariate in both the cure fraction and the survival function of susceptible individuals. This study aims to compare the performance of these models via a simulation approach. Therefore, survival data with varying sample sizes and cure fractions are simulated, with the survival time assumed to follow the Weibull distribution. The simulated data are then modelled using the three mixture cure survival models. The results show that the three mixture cure models are more appropriate for modelling survival data in the presence of a cure fraction and an observed factor.
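    The third model variant (covariate in both the cure fraction and the susceptible survival function) can be written down in a few lines under a Weibull baseline. This is a sketch under stated assumptions: the logistic link for the cure fraction, the log-linear covariate effect on the Weibull scale, and all parameter values are invented for illustration:

```python
import numpy as np

def mixture_cure_survival(t, x, beta=(-0.5, 1.0), k=1.5, lam=2.0, gamma=0.3):
    """Population survival under a mixture cure model with one covariate x:
      cure fraction: pi(x) = logistic(b0 + b1*x)      (cured: never fails)
      susceptibles:  Weibull S_u(t|x) = exp(-(t/lam_x)^k), lam_x = lam*exp(gamma*x)
      mixture:       S(t|x) = pi(x) + (1 - pi(x)) * S_u(t|x)
    All links and parameter values are hypothetical."""
    b0, b1 = beta
    pi = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))      # covariate in cure fraction
    lam_x = lam * np.exp(gamma * x)                # covariate in survival part
    s_u = np.exp(-(t / lam_x) ** k)
    return pi + (1.0 - pi) * s_u
```

    The signature property of cure models is visible directly: S(t) plateaus at pi(x) instead of decaying to zero, which is why ignoring the cure fraction inflates survival estimates for susceptibles.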

  16. A BGK model for reactive mixtures of polyatomic gases with continuous internal energy

    NASA Astrophysics Data System (ADS)

    Bisi, M.; Monaco, R.; Soares, A. J.

    2018-03-01

    In this paper we derive a BGK relaxation model for a mixture of polyatomic gases with a continuous structure of internal energies. The emphasis of the paper is on the case of a quaternary mixture undergoing a reversible chemical reaction of bimolecular type. For such a mixture we prove an H-theorem and characterize the equilibrium solutions with the related mass action law of chemical kinetics. Further, a Chapman-Enskog asymptotic analysis is performed in view of computing the first-order non-equilibrium corrections to the distribution functions and investigating the transport properties of the reactive mixture. The chemical reaction rate is explicitly derived at the first order and the balance equations for the constituent number densities are derived at the Euler level.

  17. Metal-Polycyclic Aromatic Hydrocarbon Mixture Toxicity in Hyalella azteca. 1. Response Surfaces and Isoboles To Measure Non-additive Mixture Toxicity and Ecological Risk.

    PubMed

    Gauthier, Patrick T; Norwood, Warren P; Prepas, Ellie E; Pyle, Greg G

    2015-10-06

    Mixtures of metals and polycyclic aromatic hydrocarbons (PAHs) occur ubiquitously in aquatic environments, yet relatively little is known regarding their potential to produce non-additive toxicity (i.e., antagonism or potentiation). A review of the lethality of metal-PAH mixtures in aquatic biota revealed that more-than-additive lethality is as common as strictly additive effects. Approaches to ecological risk assessment do not consider non-additive toxicity of metal-PAH mixtures. Forty-eight-hour water-only binary mixture toxicity experiments were conducted to determine the additive toxic nature of mixtures of Cu, Cd, V, or Ni with phenanthrene (PHE) or phenanthrenequinone (PHQ) using the aquatic amphipod Hyalella azteca. In cases where more-than-additive toxicity was observed, we calculated the possible mortality rates at Canada's environmental water quality guideline concentrations. We used a three-dimensional response surface isobole model-based approach to compare the observed co-toxicity in juvenile amphipods to predicted outcomes based on concentration addition or effects addition mixtures models. More-than-additive lethality was observed for all Cu-PHE, Cu-PHQ, and several Cd-PHE, Cd-PHQ, and Ni-PHE mixtures. Our analysis predicts Cu-PHE, Cu-PHQ, Cd-PHE, and Cd-PHQ mixtures at the Canadian Water Quality Guideline concentrations would produce 7.5%, 3.7%, 4.4% and 1.4% mortality, respectively.
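    The toxic-unit bookkeeping behind labels such as "more-than-additive" can be illustrated in a few lines. The LC50s, mixture concentrations, and tolerance band below are hypothetical, not values from the study:

```python
# Toxic-unit (TU) classification sketch for a binary mixture; all numbers
# hypothetical. TU_i = c_i / LC50_i; under strict concentration addition the
# mixture kills 50% of organisms exactly when sum(TU) == 1.
def interaction_class(tu_sum_at_mixture_lc50, tol=0.2):
    """Classify joint toxicity from the summed toxic units measured at the
    mixture's observed LC50 (tolerance band is an arbitrary choice here)."""
    if tu_sum_at_mixture_lc50 < 1.0 - tol:
        return "more-than-additive"      # mixture is deadlier than predicted
    if tu_sum_at_mixture_lc50 > 1.0 + tol:
        return "less-than-additive"      # antagonism
    return "additive"

# e.g. observed mixture LC50 at Cu = 12 ug/L and PHE = 150 ug/L, with
# hypothetical single-compound LC50s of 40 ug/L (Cu) and 400 ug/L (PHE):
tu_sum = 12 / 40 + 150 / 400             # = 0.675
print(interaction_class(tu_sum))          # -> more-than-additive
```

    In practice the study fits full response surfaces rather than a single tolerance band, but the TU sum is the quantity being compared against the concentration addition prediction.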

  18. The simultaneous mass and energy evaporation (SM2E) model.

    PubMed

    Choudhary, Rehan; Klauda, Jeffery B

    2016-01-01

    In this article, the Simultaneous Mass and Energy Evaporation (SM2E) model is presented. The SM2E model is based on theoretical models for mass and energy transfer. The theoretical models systematically under- or over-predicted at various flow conditions: laminar, transition, and turbulent. These models were harmonized with experimental measurements to eliminate the systematic under- or over-prediction; a total of 113 measured evaporation rates were used. The SM2E model can be used to estimate evaporation rates for pure liquids as well as liquid mixtures at laminar, transition, and turbulent flow conditions. However, due to the limited availability of evaporation data, the model has so far only been tested against data for pure liquids and binary mixtures. The model can take evaporative cooling into account, and when the temperature of the evaporating liquid or liquid mixture is known (e.g., isothermal evaporation), the SM2E model reduces to a mass-transfer-only model.

  19. PACE: Probabilistic Assessment for Contributor Estimation- A machine learning-based assessment of the number of contributors in DNA mixtures.

    PubMed

    Marciano, Michael A; Adelman, Jonathan D

    2017-03-01

    The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. Of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare the classification performance of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed that 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1 s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state-of-the-art model versus the seconds required by a machine-learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation.
Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    NASA Astrophysics Data System (ADS)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst; isotopic compositions may therefore be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models fit several linear models (regression lines) to subgroups of the data, taking the respective slope as an estimate of the isotope ratio. The finite mixture models are parameterised by:
    • the number of different ratios;
    • the number of data points belonging to each ratio group;
    • the ratios (i.e. slopes) of each group.
    Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups smaller than a control parameter are dropped; this is how the number of different ratios is determined. The analyst only influences a few control parameters of the algorithm, i.e. the maximum number of ratios and the minimum relative size of the group of data points belonging to each ratio have to be defined. Computation of the models can be done with statistical software. In this study, Leisch and Grün's flexmix package [2] for the statistical open-source software R was applied. A code example is available in the electronic supplementary material of Kappel et al. [1]. In order to demonstrate the usefulness of finite mixture models in fields dealing with the computation of multiple isotope ratios in mixed samples, a transparent example based on simulated data is presented and problems regarding small group sizes are illustrated. In addition, the application of finite mixture models to isotope ratio data measured in uranium oxide particles is shown. The results indicate that finite mixture models perform well in computing isotope ratios relative to traditional estimation procedures and can be recommended for a more objective and straightforward calculation of isotope ratios in geochemistry than is current practice. [1] S. Kappel, S. Boulyga, L. Dorta, D. Günther, B. Hattendorf, D. Koffler, G. Laaha, F. Leisch and T. Prohaska: Evaluation Strategies for Isotope Ratio Measurements of Single Particles by LA-MC-ICPMS, Analytical and Bioanalytical Chemistry, 2013, accepted for publication on 2012-12-18 (doi: 10.1007/s00216-012-6674-3) [2] B. Grün and F. Leisch: Fitting finite mixtures of generalized linear regressions in R. Computational Statistics & Data Analysis, 51(11), 5247-5252, 2007. (doi:10.1016/j.csda.2006.08.014)
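    The mixture-of-regressions fit that flexmix performs can be sketched with a hand-rolled EM in Python. This is a simplified stand-in for the R workflow above (two components, regressions through the origin, fixed equal noise variance); the slopes, noise level, and sample size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated two-ratio data: intensity y = slope * x + noise, slope = isotope ratio.
x = rng.uniform(1.0, 10.0, 300)
z = rng.random(300) < 0.5                      # latent group membership
slopes_true = np.array([0.0073, 0.020])        # hypothetical ratio-like slopes
y = slopes_true[z.astype(int)] * x + rng.normal(0.0, 0.002, 300)

# EM for a 2-component mixture of regressions through the origin.
s = np.array([0.005, 0.03])                    # initial slope guesses
sigma, w = 0.01, np.array([0.5, 0.5])
for _ in range(200):
    # E-step: responsibilities from Gaussian residual densities
    resid = y[:, None] - x[:, None] * s[None, :]
    dens = w * np.exp(-0.5 * (resid / sigma) ** 2) + 1e-300
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least-squares slope per component, weights, noise scale
    s = (r * x[:, None] * y[:, None]).sum(axis=0) / (r * x[:, None] ** 2).sum(axis=0)
    resid = y[:, None] - x[:, None] * s[None, :]
    w = r.mean(axis=0)
    sigma = np.sqrt((r * resid ** 2).sum() / len(y))
```

    The fitted slopes play the role of the isotope ratios; flexmix additionally handles model selection (dropping undersized components), which this sketch omits.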

  1. Analyzing gene expression time-courses based on multi-resolution shape mixture model.

    PubMed

    Li, Ying; He, Ye; Zhang, Yu

    2016-11-01

    Biological processes are dynamic molecular processes that unfold over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biological processes and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which captures patterns of change in gene expression over time at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm using yeast time-course gene expression profiles, compared with several popular clustering methods for gene expression profiles. The gene groups identified by the different methods were evaluated by enrichment analysis of biological pathways and of known protein-protein interactions from experimental evidence. The gene groups identified by our proposed algorithm show stronger biological significance. A novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed. Our proposed model provides new horizons and an alternative tool for visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.
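As a concrete illustration of the multi-resolution idea, here is a minimal Haar wavelet decomposition in Python (a stand-in for the paper's wavelet step; names are hypothetical). The concatenated detail coefficients describe a profile's shape at coarse-to-fine time scales and could feed any mixture or clustering model:

```python
import numpy as np

def haar_features(series, levels=3):
    """Multi-resolution Haar decomposition of a time-course profile.
    Returns the detail coefficients of each level followed by the final
    approximation, i.e. the profile's shape at decreasing resolutions."""
    a = np.asarray(series, dtype=float)
    feats = []
    for _ in range(levels):
        if len(a) < 2:
            break
        if len(a) % 2:                      # pad odd-length signals
            a = np.append(a, a[-1])
        approx = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # coarse trend
        detail = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # local shape change
        feats.append(detail)
        a = approx
    feats.append(a)
    return np.concatenate(feats)
```

Because the transform is orthonormal, the features preserve the signal's energy for power-of-two lengths, so distances between feature vectors reflect distances between profiles.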

  2. An analysis of lethal and sublethal interactions among type I and type II pyrethroid pesticide mixtures using standard Hyalella azteca water column toxicity tests.

    PubMed

    Hoffmann, Krista Callinan; Deanovic, Linda; Werner, Inge; Stillway, Marie; Fong, Stephanie; Teh, Swee

    2016-10-01

    A novel 2-tiered analytical approach was used to characterize and quantify interactions between type I and type II pyrethroids in Hyalella azteca using standardized water column toxicity tests. Bifenthrin, permethrin, cyfluthrin, and lambda-cyhalothrin were tested in all possible binary combinations across 6 experiments. All mixtures were analyzed for 4-d lethality, and 2 of the 6 mixtures (permethrin-bifenthrin and permethrin-cyfluthrin) were also tested for subchronic 10-d lethality and sublethal effects on swimming motility and growth. Mixtures were initially analyzed for interactions using regression analyses and subsequently compared with the additive models of concentration addition and independent action to further characterize mixture responses. Negative (antagonistic) interactions were significant in 2 of the 6 mixtures tested, cyfluthrin-bifenthrin and cyfluthrin-permethrin, but only for the acute 4-d lethality endpoint. In both cases, mixture responses fell between the additive models of concentration addition and independent action. All other mixtures were additive across 4-d lethality, and bifenthrin-permethrin and cyfluthrin-permethrin were also additive in terms of subchronic 10-d lethality and sublethal responses. Environ Toxicol Chem 2016;35:2542-2549. © 2016 SETAC.
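The two additive reference models used in this record have simple closed forms. A hedged Python sketch, assuming log-logistic (Hill-type) single-chemical curves and showing concentration addition at the 50% effect level only (names are hypothetical):

```python
import numpy as np

def hill(c, ec50, h):
    """Log-logistic concentration-response: fractional effect at concentration c."""
    return 1.0 / (1.0 + (ec50 / c) ** h)

def ca_ec50(fractions, ec50s):
    """Concentration addition: mixture EC50 from component EC50s and the
    mixture's concentration fractions (toxic-unit / harmonic-mean form)."""
    return 1.0 / np.sum(np.asarray(fractions) / np.asarray(ec50s))

def ia_effect(c, fractions, ec50s, hs):
    """Independent action: combined effect of statistically independent
    single-chemical effects, each acting at its share of total concentration c."""
    e = hill(c * np.asarray(fractions), np.asarray(ec50s), np.asarray(hs))
    return 1.0 - np.prod(1.0 - e)
```

A measured mixture response falling between these two predictions, as reported above for the antagonistic pairs, is then quantified by its deviation from each model.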

  3. Heat transfer during condensation of steam from steam-gas mixtures in the passive safety systems of nuclear power plants

    NASA Astrophysics Data System (ADS)

    Portnova, N. M.; Smirnov, Yu B.

    2017-11-01

    A theoretical model for calculating heat transfer during condensation of multicomponent vapor-gas mixtures on vertical surfaces, based on film theory and the heat and mass transfer analogy, is proposed. Calculations were performed for the conditions realized in experimental studies of heat transfer during condensation of steam-gas mixtures in the passive safety systems of PWR-type reactors of different designs. Calculated values of the heat transfer coefficients were obtained for condensation of steam-air, steam-air-helium and steam-air-hydrogen mixtures at pressures of 0.2 to 0.6 MPa and of a steam-nitrogen mixture at pressures of 0.4 to 2.6 MPa. The composition of the mixtures and the vapor-to-surface temperature difference were varied within wide limits, and tube length ranged from 0.65 to 9.79 m. The condensation of all steam-gas mixtures took place in a laminar-wavy flow regime of the condensate film with turbulent free convection in the diffusion boundary layer. The heat transfer coefficients calculated with the proposed model are in good agreement with the considered experimental data for both the binary and the ternary mixtures.

  4. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    PubMed Central

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-01-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements. PMID:27112127

  5. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate.

    PubMed

    Pradines, Joël R; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-26

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  6. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    NASA Astrophysics Data System (ADS)

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  7. Nature and prevalence of non-additive toxic effects in industrially relevant mixtures of organic chemicals.

    PubMed

    Parvez, Shahid; Venkataraman, Chandra; Mukherji, Suparna

    2009-06-01

    The concentration addition (CA) and independent action (IA) models are widely used for predicting mixture toxicity from a mixture's composition and the dose-response profiles of its individual components. However, predictions based on these models may be inaccurate due to interaction among mixture components. In this work, the nature and prevalence of non-additive effects were explored for binary, ternary and quaternary mixtures composed of hydrophobic organic compounds (HOCs). The toxicity of each individual component and mixture was determined using the Vibrio fischeri bioluminescence inhibition assay. For each combination of chemicals specified by the 2^n factorial design, the percent deviation of the predicted toxic effect from the measured value was used to characterize mixtures as synergistic (positive deviation) or antagonistic (negative deviation). An arbitrary classification scheme was proposed based on the magnitude of deviation (d): additive (d ≤ 10%, class I), moderately (10% < d ≤ 30%, class II), highly (30% < d ≤ 50%, class III) and very highly (d > 50%, class IV) antagonistic/synergistic. Naphthalene, n-butanol, o-xylene, catechol and p-cresol led to synergism in mixtures, while 1,2,4-trimethylbenzene and 1,3-dimethylnaphthalene contributed to antagonism. Most of the mixtures showed additive or antagonistic effects. Synergism was prominent in some of the mixtures, such as pulp and paper, textile dyes, and a mixture composed of polynuclear aromatic hydrocarbons. The organic chemical industry mixture showed the highest abundance of antagonism and the least synergism. Mixture toxicity was found to depend on the partition coefficient, the molecular connectivity index and the relative concentrations of the components.
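The four-class scheme in this record can be written directly as a small function (a sketch of the stated classification rule; the function name is hypothetical):

```python
def classify_interaction(deviation_pct):
    """Classify a mixture by the percent deviation of the predicted from the
    measured toxic effect: positive deviation -> synergistic, negative ->
    antagonistic, with class boundaries at 10%, 30% and 50%."""
    d = abs(deviation_pct)
    if d <= 10:
        return "class-I additive"
    kind = "synergistic" if deviation_pct > 0 else "antagonistic"
    if d <= 30:
        return f"class-II moderately {kind}"
    if d <= 50:
        return f"class-III highly {kind}"
    return f"class-IV very highly {kind}"
```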

  8. Gaseous emissions from the combustion of a waste mixture containing a high concentration of N2O.

    PubMed

    Dong, Changqing; Yang, Yongping; Zhang, Junjiao; Lu, Xuefeng

    2009-01-01

    This paper focuses on reducing the emissions from the combustion of a waste mixture containing a high concentration of N2O. A rate model and an equilibrium model were used to predict gaseous emissions from the combustion of the mixture. The influences of temperature and methane were considered, and experimental research was carried out in a tubular reactor and a pilot combustion furnace. The results showed that, for this waste mixture, the combustion temperature should be in the range of 950-1100 °C and the gas residence time should be 2 s or longer to reduce emissions.

  9. Mixtures of charged colloid and neutral polymer: Influence of electrostatic interactions on demixing and interfacial tension

    NASA Astrophysics Data System (ADS)

    Denton, Alan R.; Schmidt, Matthias

    2005-06-01

    The equilibrium phase behavior of a binary mixture of charged colloids and neutral, nonadsorbing polymers is studied within free-volume theory. A model mixture of charged hard-sphere macroions and ideal, coarse-grained, effective-sphere polymers is mapped first onto a binary hard-sphere mixture with nonadditive diameters and then onto an effective Asakura-Oosawa model [S. Asakura and F. Oosawa, J. Chem. Phys. 22, 1255 (1954)]. The effective model is defined by a single dimensionless parameter—the ratio of the polymer diameter to the effective colloid diameter. For high salt-to-counterion concentration ratios, a free-volume approximation for the free energy is used to compute the fluid phase diagram, which describes demixing into colloid-rich (liquid) and colloid-poor (vapor) phases. Increasing the range of electrostatic interactions shifts the demixing binodal toward higher polymer concentration, stabilizing the mixture. The enhanced stability is attributed to a weakening of polymer depletion-induced attraction between electrostatically repelling macroions. Comparison with predictions of density-functional theory reveals a corresponding increase in the liquid-vapor interfacial tension. The predicted trends in phase stability are consistent with observed behavior of protein-polysaccharide mixtures in food colloids.
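The effective Asakura-Oosawa picture invoked in this record has a standard closed-form depletion pair potential (in the polymer-reservoir representation, for ideal polymers). A hedged Python sketch with hypothetical names:

```python
def ao_depletion(r, sigma, q, eta_p):
    """Asakura-Oosawa depletion attraction (in units of kT) between two hard
    spheres of diameter sigma in an ideal-polymer bath; q is the polymer/colloid
    size ratio and eta_p the polymer reservoir packing fraction. Zero beyond
    the overlap range r = sigma*(1+q)."""
    if r >= sigma * (1.0 + q):
        return 0.0
    x = r / sigma
    pre = eta_p * (1.0 + q) ** 3 / q ** 3
    return -pre * (1.0 - 3.0 * x / (2.0 * (1.0 + q))
                   + x ** 3 / (2.0 * (1.0 + q) ** 3))
```

In the paper's mapping, weakening this attraction between electrostatically repelling macroions is what shifts the demixing binodal to higher polymer concentration.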

  10. Temporal patterns of variable relationships in person-oriented research: longitudinal models of configural frequency analysis.

    PubMed

    von Eye, Alexander; Mun, Eun Young; Bogat, G Anne

    2008-03-01

    This article reviews the premises of configural frequency analysis (CFA), including methods of choosing significance tests and base models, as well as protecting alpha, and discusses why CFA is a useful approach when conducting longitudinal person-oriented research. CFA operates at the manifest variable level. Longitudinal CFA seeks to identify those temporal patterns that stand out as more frequent (CFA types) or less frequent (CFA antitypes) than expected with reference to a base model. A base model that has been used frequently in CFA applications, prediction CFA, and a new base model, auto-association CFA, are discussed for analysis of cross-classifications of longitudinal data. The former base model takes the associations among predictors and among criteria into account. The latter takes the auto-associations among repeatedly observed variables into account. Application examples of each are given using data from a longitudinal study of domestic violence. It is demonstrated that CFA results are not redundant with results from log-linear modeling or multinomial regression and that, of these approaches, CFA shows particular utility when conducting person-oriented research.
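A minimal sketch of the CFA idea, assuming a variable-independence base model and a Bonferroni-protected z-approximation to the binomial test. This is illustrative only, not the prediction-CFA or auto-association base models discussed in the record; names are hypothetical:

```python
import math
from statistics import NormalDist

def cfa_types(table, alpha=0.05):
    """Compare observed cell frequencies of a cross-classification with
    expectations under an independence base model, flagging cells as CFA
    'type' (over-frequent) or 'antitype' (under-frequent)."""
    n = sum(table.values())
    dims = len(next(iter(table)))
    # marginal frequencies of each variable
    margins = [{} for _ in range(dims)]
    for cell, f in table.items():
        for d, level in enumerate(cell):
            margins[d][level] = margins[d].get(level, 0) + f
    # Bonferroni-protected two-sided critical z over all cells
    zcrit = NormalDist().inv_cdf(1.0 - (alpha / len(table)) / 2.0)
    labels = {}
    for cell, f in table.items():
        p = math.prod(margins[d][lvl] / n for d, lvl in enumerate(cell))
        expected = n * p
        z = (f - expected) / math.sqrt(expected * (1.0 - p))
        labels[cell] = "type" if z > zcrit else ("antitype" if z < -zcrit else "-")
    return labels
```

For longitudinal CFA the cells would be temporal patterns, e.g. tuples of the same variable observed at several occasions.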

  11. Assessment of combined antiandrogenic effects of binary parabens mixtures in a yeast-based reporter assay.

    PubMed

    Ma, Dehua; Chen, Lujun; Zhu, Xiaobiao; Li, Feifei; Liu, Cong; Liu, Rui

    2014-05-01

    To date, toxicological studies of endocrine disrupting chemicals (EDCs) have typically focused on single chemical exposures and associated effects. However, exposure to EDCs mixtures in the environment is common. Antiandrogens represent a group of EDCs, which draw increasing attention due to their resultant demasculinization and sexual disruption of aquatic organisms. Although there are a number of in vivo and in vitro studies investigating the combined effects of antiandrogen mixtures, these studies are mainly on selected model compounds such as flutamide, procymidone, and vinclozolin. The aim of the present study is to investigate the combined antiandrogenic effects of parabens, which are widely used antiandrogens in industrial and domestic commodities. A yeast-based human androgen receptor (hAR) assay (YAS) was applied to assess the antiandrogenic activities of n-propylparaben (nPrP), iso-propylparaben (iPrP), methylparaben (MeP), and 4-n-pentylphenol (PeP), as well as the binary mixtures of nPrP with each of the other three antiandrogens. All of the four compounds could exhibit antiandrogenic activity via the hAR. A linear interaction model was applied to quantitatively analyze the interaction between nPrP and each of the other three antiandrogens. The isoboles method was modified to show the variation of combined effects as the concentrations of mixed antiandrogens were changed. Graphs were constructed to show isoeffective curves of three binary mixtures based on the fitted linear interaction model and to evaluate the interaction of the mixed antiandrogens (synergism or antagonism). The combined effect of equimolar combinations of the three mixtures was also considered with the nonlinear isoboles method. The main effect parameters and interaction effect parameters in the linear interaction models of the three mixtures were different from zero. 
The results showed that any two antiandrogens in their binary mixtures tended to exert equal antiandrogenic activity in the linear concentration ranges. The antiandrogenicity of a binary mixture and the concentration of nPrP fitted a sigmoidal model when the concentrations of the other antiandrogens (iPrP, MeP, and PeP) in the mixture were below the AR saturation concentrations. Some concave isoboles above the additivity line appeared in all three mixtures. There were synergistic effects of the binary mixture of nPrP and MeP at low concentrations in the linear concentration ranges. Interestingly, when the antiandrogen concentrations approached saturation, the interaction between the chemicals was antagonistic for all three mixtures tested. When the toxicity of the three mixtures was assessed using nonlinear isoboles, only antagonism was observed for equimolar combinations of nPrP and iPrP as the concentrations were increased from the no-observed-effect concentration (NOEC) to the 80% effective concentration. In addition, the interactions changed from synergistic to antagonistic as effective concentrations increased in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP. The combined effects of the three binary antiandrogen mixtures in the linear ranges were successfully evaluated by curve fitting and isoboles. The combined effects of a given binary mixture varied with the concentrations of the chemicals in the mixture. At low concentrations in the linear concentration ranges, a synergistic interaction existed in the binary mixture of nPrP and MeP; the interaction tended to become antagonistic as the antiandrogens approached saturation concentrations in mixtures of nPrP with each of the other three antiandrogens. The synergistic interaction was also found in the equimolar combinations of nPrP and MeP, as well as nPrP and PeP, at low concentrations with the alternative nonlinear-isoboles method. Overall, the mixture activities of the binary antiandrogens tended towards antagonism at high concentrations and synergism at low concentrations.

  12. Mixture models in diagnostic meta-analyses--clustering summary receiver operating characteristic curves accounted for heterogeneity and correlation.

    PubMed

    Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario

    2015-01-01

    Bivariate linear and generalized linear random-effects models are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used; the latter is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example; both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy; here, sensitivities and specificities differ considerably, such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to modeling between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Quasi-equilibrium theory for the distribution of rare alleles in a subdivided population: justification and implications.

    PubMed

    Burr, T L

    2000-05-01

    This paper examines a quasi-equilibrium theory of rare alleles for subdivided populations that follow an island-model version of the Wright-Fisher model of evolution. All mutations are assumed to create new alleles. We present four results: (1) conditions for the theory to apply are formally established using properties of the moments of the binomial distribution; (2) approximations currently in the literature can be replaced with exact results that are in better agreement with our simulations; (3) a modified maximum likelihood estimator of migration rate exhibits the same good performance on island-model data and on data simulated from the multinomial mixed with the Dirichlet distribution; and (4) a connection between the rare-allele method and the Ewens Sampling Formula for the infinite-allele mutation model is made, which introduces a new and simpler proof for the expected number of alleles implied by the Ewens Sampling Formula. Copyright 2000 Academic Press.
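The expected number of distinct alleles implied by the Ewens Sampling Formula has the well-known closed form E[K] = Σ_{i=0}^{n-1} θ/(θ+i), which is a one-liner to evaluate (function name is hypothetical):

```python
def expected_num_alleles(theta, n):
    """E[K] under the Ewens Sampling Formula: the expected number of distinct
    alleles in a sample of n genes with scaled mutation rate theta."""
    return sum(theta / (theta + i) for i in range(n))
```

The sum grows only logarithmically in n, which is why large samples add few new alleles.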

  14. Neighborhood Structural Inequality, Collective Efficacy, and Sexual Risk Behavior among Urban Youth

    PubMed Central

    BROWNING, CHRISTOPHER R.; BURRINGTON, LORI A.; LEVENTHAL, TAMA; BROOKS-GUNN, JEANNE

    2011-01-01

    We draw on collective efficacy theory to extend a contextual model of early adolescent sexual behavior. Specifically, we hypothesize that neighborhood structural disadvantage—as measured by levels of concentrated poverty, residential instability, and aspects of immigrant concentration—and diminished collective efficacy have consequences for the prevalence of early adolescent multiple sexual partnering. Findings from random effects multinomial logistic regression models of the number of sexual partners among a sample of youth, age 11 to 16, from the Project on Human Development in Chicago Neighborhoods (N = 768) reveal evidence of neighborhood effects on adolescent higher-risk sexual activity. Collective efficacy is negatively associated with having two or more sexual partners versus one (but not zero versus one) sexual partner. The effect of collective efficacy is dependent upon age: The regulatory effect of collective efficacy increases for older adolescents. PMID:18771063

  15. Modeling Grade IV Gas Emboli using a Limited Failure Population Model with Random Effects

    NASA Technical Reports Server (NTRS)

    Thompson, Laura A.; Conkin, Johnny; Chhikara, Raj S.; Powell, Michael R.

    2002-01-01

    Venous gas emboli (VGE) (gas bubbles in venous blood) are associated with an increased risk of decompression sickness (DCS) in hypobaric environments. A high grade of VGE can be a precursor to serious DCS. In this paper, we model time to Grade IV VGE considering a subset of individuals assumed to be immune from experiencing VGE. Our data contain monitoring test results from subjects undergoing up to 13 denitrogenation test procedures prior to exposure to a hypobaric environment. The onset time of Grade IV VGE is recorded as contained within certain time intervals. We fit a parametric (lognormal) mixture survival model to the interval-and right-censored data to account for the possibility of a subset of "cured" individuals who are immune to the event. Our model contains random subject effects to account for correlations between repeated measurements on a single individual. Model assessments and cross-validation indicate that this limited failure population mixture model is an improvement over a model that does not account for the potential of a fraction of cured individuals. We also evaluated some alternative mixture models. Predictions from the best fitted mixture model indicate that the actual process is reasonably approximated by a limited failure population model.
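The limited-failure-population structure described in this record is easy to write down: the survivor function mixes an immune ("cured") fraction with a lognormal event-time law, and interval-censored onsets contribute the probability mass between the interval endpoints. A hedged sketch that omits the paper's random subject effects (names are hypothetical):

```python
import math

def lognormal_sf(t, mu, sigma):
    """Survival function of a lognormal event-time distribution."""
    z = (math.log(t) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def cure_mixture_sf(t, p_immune, mu, sigma):
    """Limited-failure-population survivor function: a fraction p_immune never
    experiences Grade IV VGE; the rest follow a lognormal time-to-event."""
    return p_immune + (1.0 - p_immune) * lognormal_sf(t, mu, sigma)

def interval_loglik(t_lo, t_hi, p_immune, mu, sigma):
    """Log-likelihood of an onset known only to lie in (t_lo, t_hi]."""
    return math.log(cure_mixture_sf(t_lo, p_immune, mu, sigma)
                    - cure_mixture_sf(t_hi, p_immune, mu, sigma))
```

Note that as t grows, the survivor function levels off at p_immune rather than at zero, which is what distinguishes this model from an ordinary lognormal fit.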

  16. A globally accurate theory for a class of binary mixture models

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana G.; Stell, G.

    The self-consistent Ornstein-Zernike approximation results for the 3D Ising model are used to obtain phase diagrams for binary mixtures described by decorated models, yielding the plait point, binodals, and closed-loop coexistence curves for the models proposed by Widom, Clark, Neece, and Wheeler. The results are in good agreement with series expansions and experiments.

  17. Estimating abundance while accounting for rarity, correlated behavior, and other sources of variation in counts

    USGS Publications Warehouse

    Dorazio, Robert M.; Martin, Julien; Edwards, Holly H.

    2013-01-01

    The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.

  18. Estimating abundance while accounting for rarity, correlated behavior, and other sources of variation in counts.

    PubMed

    Dorazio, Robert M; Martin, Julien; Edwards, Holly H

    2013-07-01

    The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
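The two components named in this record (hurdle abundance, beta-binomial detection) have standard probability mass functions. A minimal Python sketch, not the authors' code (names are hypothetical):

```python
import math

def betabin_pmf(y, n, a, b):
    """Beta-binomial pmf: y detections out of n individuals when the per-survey
    detection probability is Beta(a, b) distributed, giving the extra-binomial
    variation among surveys described in the abstract."""
    return math.comb(n, y) * math.exp(
        math.lgamma(y + a) + math.lgamma(n - y + b) - math.lgamma(n + a + b)
        + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def hurdle_poisson_pmf(n, pi0, lam):
    """Hurdle abundance pmf: excess zeros (probability pi0) plus a
    zero-truncated Poisson for occupied locations."""
    if n == 0:
        return pi0
    pois = math.exp(-lam) * lam ** n / math.factorial(n)
    return (1.0 - pi0) * pois / (1.0 - math.exp(-lam))
```

In the full model the observed counts would be integrated over the latent abundance n, with covariates entering pi0, lam and the Beta parameters on link scales.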

  19. Bayesian Finite Mixtures for Nonlinear Modeling of Educational Data.

    ERIC Educational Resources Information Center

    Tirri, Henry; And Others

    A Bayesian approach for finding latent classes in data is discussed. The approach uses finite mixture models to describe the underlying structure in the data and demonstrates that the possibility of using full joint probability models raises interesting new prospects for exploratory data analysis. The concepts and methods discussed are illustrated…

  20. Distinguishing Continuous and Discrete Approaches to Multilevel Mixture IRT Models: A Model Comparison Perspective

    ERIC Educational Resources Information Center

    Zhu, Xiaoshu

    2013-01-01

    The current study introduced a general modeling framework, multilevel mixture IRT (MMIRT) which detects and describes characteristics of population heterogeneity, while accommodating the hierarchical data structure. In addition to introducing both continuous and discrete approaches to MMIRT, the main focus of the current study was to distinguish…

  1. Mixture Distribution Latent State-Trait Analysis: Basic Ideas and Applications

    ERIC Educational Resources Information Center

    Courvoisier, Delphine S.; Eid, Michael; Nussbeck, Fridtjof W.

    2007-01-01

    Extensions of latent state-trait models for continuous observed variables to mixture latent state-trait models with and without covariates of change are presented that can separate individuals differing in their occasion-specific variability. An empirical application to the repeated measurement of mood states (N = 501) revealed that a model with 2…

  2. Kinetic model for the vibrational energy exchange in flowing molecular gas mixtures. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Offenhaeuser, F.

    1987-01-01

    The present study is concerned with the development of a computational model for the description of the vibrational energy exchange in flowing gas mixtures, taking into account a given number of energy levels for each vibrational degree of freedom. An arbitrary number of energy levels can be selected; the presented model uses values in the range from 10 to approximately 40, and the distribution of energy over these levels can differ from the equilibrium distribution. The kinetic model developed can be employed for arbitrary gaseous mixtures with an arbitrary number of vibrational degrees of freedom for each type of gas. The application of the model to CO2-H2O-N2-O2-He mixtures is discussed. The obtained relations can be utilized in a study of the suitability of radiation-related transitional processes involving the CO2 molecule for laser applications. The computational results provided by the model are found to agree very well with experimental data obtained for a CO2 laser. Possibilities for the activation of a 16-micron and a 14-micron laser are considered.

  3. MODEL OF ADDITIVE EFFECTS OF MIXTURES OF NARCOTIC CHEMICALS

    EPA Science Inventory

Biological effects data with single chemicals are far more abundant than with mixtures. Yet, environmental exposures to chemical mixtures, for example near hazardous waste sites or nonpoint sources, are very common and using test data from single chemicals to approximate effects o...

  4. Thermodynamic properties of model CdTe/CdSe mixtures

    DOE PAGES

    van Swol, Frank; Zhou, Xiaowang W.; Challa, Sivakumar R.; ...

    2015-02-20

We report on the thermodynamic properties of binary compound mixtures of model groups II–VI semiconductors. We use the recently introduced Stillinger–Weber Hamiltonian to model binary mixtures of CdTe and CdSe. We use molecular dynamics simulations to calculate the volume and enthalpy of mixing as a function of mole fraction. The lattice parameter of the mixture closely follows Vegard's law: a linear relation. This implies that the excess volume is a cubic function of mole fraction. A connection is made with hard sphere models of mixed fcc and zincblende structures. We found that the potential energy exhibits a positive deviation from ideal solution behaviour; the excess enthalpy is nearly independent of the temperatures studied (300 and 533 K) and is well described by a simple cubic function of the mole fraction. Using a regular solution approach (combining non-ideal behaviour for the enthalpy with ideal solution behaviour for the entropy of mixing), we arrive at the Gibbs free energy of the mixture. The Gibbs free energy results indicate that the CdTe and CdSe mixtures exhibit phase separation. The upper consolute temperature is found to be 335 K. Finally, we provide the surface energy as a function of composition; it roughly follows ideal solution theory, but with a negative deviation (negative excess surface energy). This indicates that alloying increases the stability, even for nano-particles.
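The regular-solution construction described here (a non-ideal enthalpy of mixing combined with the ideal entropy of mixing) can be sketched numerically. The cubic excess-enthalpy parameters below are hypothetical placeholders, not the paper's fitted values; a double well in the resulting free-energy curve is the signature of phase separation below the consolute temperature:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def gibbs_mix(x, T, h_excess):
    """Regular-solution Gibbs free energy of mixing per mole:
    non-ideal enthalpy h_excess(x) minus T times the ideal entropy
    of mixing."""
    if x in (0.0, 1.0):
        s_ideal = 0.0  # pure components: no entropy of mixing
    else:
        s_ideal = -R * (x * math.log(x) + (1 - x) * math.log(1 - x))
    return h_excess(x) - T * s_ideal

def h(x):
    """Hypothetical cubic excess enthalpy in J/mol (shape only)."""
    return x * (1 - x) * (9000.0 + 2000.0 * x)

# sample the curve on a grid of mole fractions
g = [gibbs_mix(x / 20, 300.0, h) for x in range(21)]
```

With these illustrative parameters the curve is negative near the pure ends (entropy dominates) and positive at mid-composition, i.e. a double well, consistent with the demixing behaviour the abstract reports.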

  5. Second law of thermodynamics in volume diffusion hydrodynamics in multicomponent gas mixtures

    NASA Astrophysics Data System (ADS)

    Dadzie, S. Kokou

    2012-10-01

We presented the thermodynamic structure of a new continuum flow model for multicomponent gas mixtures. The continuum model is based on a volume diffusion concept involving specific species. It is independent of the observer's reference frame and enables a straightforward tracking of a selected species within a mixture composed of a large number of constituents. A method to derive the second law and constitutive equations accompanying the model is presented. Using the configuration of a rotating fluid, we illustrated an example of non-classical flow physics predicted by new contributions in the entropy and constitutive equations.

6. Estimation of performance of a J-T refrigerator operating with nitrogen-hydrocarbon mixtures and a coiled tubes-in-tube heat exchanger

    NASA Astrophysics Data System (ADS)

    Satya Meher, R.; Venkatarathnam, G.

    2018-06-01

The exergy efficiency of Joule-Thomson (J-T) refrigerators operating with mixtures (MRC systems) strongly depends on the choice of refrigerant mixture and the performance of the heat exchanger used. Helically coiled, multiple tubes-in-tube heat exchangers with an effectiveness of over 96% are widely used in these types of systems. All the current studies focus only on the different heat transfer correlations and the uncertainty in predicting performance of the heat exchanger alone. The main focus of this work is to estimate the uncertainty in cooling capacity when the homogeneous model is used, by comparing theoretical predictions with experimental results. The comparisons have been extended to some two-phase models present in the literature as well. Experiments have been carried out on a J-T refrigerator at a fixed heat load of 10 W with different nitrogen-hydrocarbon mixtures in the evaporator temperature range of 100-120 K. Different heat transfer models have been used to predict the temperature profiles as well as the cooling capacity of the refrigerator. The results show that the homogeneous two-phase flow model is probably the most suitable model for rating the cooling capacity of a J-T refrigerator operating with nitrogen-hydrocarbon mixtures.

  7. Interactions and Toxicity of Cu-Zn mixtures to Hordeum vulgare in Different Soils Can Be Rationalized with Bioavailability-Based Prediction Models.

    PubMed

    Qiu, Hao; Versieren, Liske; Rangel, Georgina Guzman; Smolders, Erik

    2016-01-19

    Soil contamination with copper (Cu) is often associated with zinc (Zn), and the biological response to such mixed contamination is complex. Here, we investigated Cu and Zn mixture toxicity to Hordeum vulgare in three different soils, the premise being that the observed interactions are mainly due to effects on bioavailability. The toxic effect of Cu and Zn mixtures on seedling root elongation was more than additive (i.e., synergism) in soils with high and medium cation-exchange capacity (CEC) but less than additive (antagonism) in a low-CEC soil. This was found when we expressed the dose as the conventional total soil concentration. In contrast, antagonism was found in all soils when we expressed the dose as free-ion activities in soil solution, indicating that there is metal-ion competition for binding to the plant roots. Neither a concentration addition nor an independent action model explained mixture effects, irrespective of the dose expressions. In contrast, a multimetal BLM model and a WHAM-Ftox model successfully explained the mixture effects across all soils and showed that bioavailability factors mainly explain the interactions in soils. The WHAM-Ftox model is a promising tool for the risk assessment of mixed-metal contamination in soils.
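The concentration-addition and independent-action reference models this record tests have simple closed forms. A minimal sketch with a shared log-logistic dose-response; the Cu/Zn doses and EC50s are illustrative numbers only, not the study's data:

```python
def loglogistic(dose, ec50, slope):
    """Fraction of maximal effect at a given dose (log-logistic curve)."""
    if dose <= 0:
        return 0.0
    return 1.0 / (1.0 + (ec50 / dose) ** slope)

def independent_action(effects):
    """IA: combine single-chemical effects as independent probabilities."""
    prod = 1.0
    for e in effects:
        prod *= (1.0 - e)
    return 1.0 - prod

def concentration_addition(doses, ec50s, slope):
    """CA: sum doses in toxic units (dose/EC50), then apply a shared
    log-logistic curve whose EC50 is 1 toxic unit."""
    tu = sum(d / e for d, e in zip(doses, ec50s))
    return loglogistic(tu, 1.0, slope)

# hypothetical Cu and Zn doses and EC50s (same arbitrary units)
doses, ec50s, slope = [5.0, 20.0], [10.0, 40.0], 2.0
e_single = [loglogistic(d, e, slope) for d, e in zip(doses, ec50s)]
ia = independent_action(e_single)
ca = concentration_addition(doses, ec50s, slope)
print(ia, ca)  # 0.36 and 0.5 for these inputs
```

Observed mixture effects above the CA prediction are read as synergism and below it as antagonism, which is the comparison the abstract makes across soils.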

  8. Estimating Lion Abundance using N-mixture Models for Social Species

    PubMed Central

    Belant, Jerrold L.; Bled, Florent; Wilton, Clay M.; Fyumagwa, Robert; Mwampeta, Stanslaus B.; Beyer, Dean E.

    2016-01-01

Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170–551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species. PMID:27786283

  9. Estimating Lion Abundance using N-mixture Models for Social Species.

    PubMed

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.
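The N-mixture machinery these records build on reduces, in its simplest binomial form, to summing the latent abundance out of a Poisson-Binomial likelihood. A sketch of that basic single-site model (not the authors' hierarchical call-in formulation), fit by a crude grid search over made-up counts from three visits:

```python
import math

def nmix_loglik(counts, lam, p, n_max=200):
    """Log-likelihood of one site's repeated counts under the basic
    binomial N-mixture model: N ~ Poisson(lam), y_t | N ~ Binomial(N, p),
    with the latent abundance N summed out up to n_max."""
    lik = 0.0
    for n in range(max(counts), n_max + 1):
        log_pois = -lam + n * math.log(lam) - math.lgamma(n + 1)
        binom = 1.0
        for y in counts:
            binom *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
        lik += math.exp(log_pois) * binom
    return math.log(lik)

# made-up counts from three survey occasions at one site
counts = [12, 9, 14]
best = max(((nmix_loglik(counts, lam, q / 20), lam, q / 20)
            for lam in range(5, 60)
            for q in range(1, 20)),
           key=lambda t: t[0])
lam_hat, p_hat = best[1], best[2]
print(lam_hat, p_hat)  # grid ML estimates of abundance and detection
```

In practice these models are fit with dedicated software (e.g., the unmarked R package mentioned in the related records) across many sites with covariates on both lam and p; the grid search above only makes the marginal likelihood explicit.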

  10. A comparison of direct and indirect methods for the estimation of health utilities from clinical outcomes.

    PubMed

    Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb

    2014-10-01

    Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke, mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models with expected tariff scores calculated mathematically. Linear regression was reported for comparison purposes. Impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approaches improve fit over the entire range of EQ-5D. Mean average error is 10% and 5% lower compared with the linear model, respectively. Root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These lead to differences in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model. This substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range. © The Author(s) 2013.

  11. Simulation of mixture microstructures via particle packing models and their direct comparison with real mixtures

    NASA Astrophysics Data System (ADS)

    Gulliver, Eric A.

The objective of this thesis was to identify and develop techniques providing direct comparison between simulated and real packed particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process and for comparing interrogation results between mixtures. A drop and roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion free cross-sections of fine scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross sections and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent shaped pores below large particles but realistic looking mixture microstructures otherwise. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations. 
Control chart analysis showed that Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered particle-size distributions or mixture composition. Control charts based on tessellation measurements were used for direct, quantitative comparisons between real and simulated mixtures. Four sets of simulated and real mixtures were examined. Data from real mixtures matched simulated data when the samples were well mixed and the particle size distributions and volume fractions of the components were identical. Analysis of mixture components that occupied less than approximately 10 vol% of the mixture was not practical unless the particle size of the component was extremely small and excellent quality high-resolution compositional micrographs of the real sample were available. These methods of analysis should allow future researchers to systematically evaluate and predict the impact and importance of variables such as component volume fraction and component particle size distribution as they pertain to the uniformity of powder mixture microstructures.

  12. Decline in Kidney Function among Apparently Healthy Young Adults at Risk of Mesoamerican Nephropathy.

    PubMed

    Gonzalez-Quiroz, Marvin; Smpokou, Evangelia-Theano; Silverwood, Richard J; Camacho, Armando; Faber, Dorien; Garcia, Brenda La Rosa; Oomatia, Amin; Hill, Michael; Glaser, Jason; Le Blond, Jennifer; Wesseling, Catharina; Aragon, Aurora; Smeeth, Liam; Pearce, Neil; Nitsch, Dorothea; Caplin, Ben

    2018-06-15

Background Epidemic levels of CKD of undetermined cause, termed Mesoamerican nephropathy in Central America, have been found in low- and middle-income countries. We investigated the natural history of, and factors associated with, loss of kidney function in a population at high risk for this disease. Methods We conducted a 2-year prospective, longitudinal study with follow-up every 6 months in nine rural communities in northwestern Nicaragua and included all men (n = 263) and a random sample of women (n = 87) ages 18-30 years old without self-reported CKD, diabetes, or hypertension. We used growth mixture modeling to identify subgroups of eGFR trajectory and weighted multinomial logistic regression to examine associations with proposed risk factors. Results Among men, we identified three subpopulations of eGFR trajectory (mean baseline eGFR; mean eGFR change over follow-up): 81% remained stable (116 ml/min per 1.73 m2; -0.6 ml/min per 1.73 m2 per year), 9.5% experienced rapid decline despite normal baseline function (112 ml/min per 1.73 m2; -18.2 ml/min per 1.73 m2 per year), and 9.5% had baseline dysfunction (58 ml/min per 1.73 m2; -3.8 ml/min per 1.73 m2 per year). Among women, 96.6% remained stable (121 ml/min per 1.73 m2; -0.6 ml/min per 1.73 m2 per year) and 3.4% experienced rapid decline (132 ml/min per 1.73 m2; -14.6 ml/min per 1.73 m2 per year; n = 3 women). Among men, outdoor and agricultural work and lack of shade availability during work breaks, reported at baseline, were associated with rapid decline. Conclusions Although Mesoamerican nephropathy is associated with agricultural work, other factors may also contribute to this disease. Copyright © 2018 by the American Society of Nephrology.

  13. Patterns and Determinants of Double-Burden of Malnutrition among Rural Children: Evidence from China.

    PubMed

    Zhang, Nan; Bécares, Laia; Chandola, Tarani

    2016-01-01

Chinese children are facing a dual burden of malnutrition-coexistence of under-and over-nutrition. Little systematic evidence exists for explaining the simultaneous presence of under-and over-nutrition. This study aims to explore underlying mechanisms of under-and over-nutrition among children in rural China. This study used a nationwide longitudinal dataset of children (N = 5,017) from 9 provinces across China, with four exclusive categories of nutritional outcomes: under-nutrition (stunting and underweight), over-nutrition (overweight, including obesity), and paradox (stunted overweight), with normal nutrition as the reference. Multinomial logit models (Level-1: occasions; Level-2: children; Level-3: villages) were fitted which corrected for non-independence of observations due to geographic clustering and repeated observations of individuals. A mixture of risk factors at the individual, household and neighbourhood levels predicted under-and over-nutrition among children in rural China. Improved socioeconomic status and living in more urbanised villages reduced the risk of stunted overweight among rural children in China. Young girls appeared to have higher risk of under-nutrition, and the risk decreased with age more markedly than for boys up to age 5. From age 5 onwards, boys tended to have higher risk of under-nutrition than girls. Girls aged around 12 and older were less likely to suffer from under-nutrition, while boys' higher risk of under-nutrition persisted throughout adolescence. Children were less likely to suffer from over-nutrition compared to normal nutrition. Boys tended to have an even lower risk of over-nutrition than girls and the gender difference widened with age until adolescence. Our results have important policy implications: improving household economic status, in particular maternal education and health insurance for children, and the living environment is important to enhance rural children's nutritional status in China. 
Investments in early years of childhood can be effective to reduce gender inequality in nutritional health in rural China.
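A multinomial logit of the kind fitted here assigns each child a probability over the outcome categories via a softmax of category-specific linear predictors, with the reference category's predictor fixed at zero. A minimal sketch; the covariates and coefficients below are hypothetical, not estimates from the study:

```python
import math

def multinomial_logit_probs(x, betas):
    """Category probabilities under a multinomial logit model: one
    coefficient vector per non-reference category; the reference
    category's linear predictor is fixed at 0."""
    scores = [0.0] + [sum(b * xi for b, xi in zip(beta, x))
                      for beta in betas]
    m = max(scores)                            # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# hypothetical covariates: [intercept, child age, household SES score];
# categories: normal (reference), under-nutrition, over-nutrition,
# stunted overweight
x = [1.0, 5.0, 0.3]
betas = [[-0.5, -0.10, -0.8],
         [-2.0, 0.05, 0.6],
         [-2.5, -0.02, -0.4]]
p = multinomial_logit_probs(x, betas)
print(p)  # four probabilities summing to 1
```

The multilevel aspect of the study's models (occasions within children within villages) adds random effects to these linear predictors, which the flat sketch above omits.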

  14. Patterns and Determinants of Double-Burden of Malnutrition among Rural Children: Evidence from China

    PubMed Central

    Zhang, Nan; Bécares, Laia; Chandola, Tarani

    2016-01-01

Chinese children are facing a dual burden of malnutrition—coexistence of under-and over-nutrition. Little systematic evidence exists for explaining the simultaneous presence of under-and over-nutrition. This study aims to explore underlying mechanisms of under-and over-nutrition among children in rural China. This study used a nationwide longitudinal dataset of children (N = 5,017) from 9 provinces across China, with four exclusive categories of nutritional outcomes: under-nutrition (stunting and underweight), over-nutrition (overweight, including obesity), and paradox (stunted overweight), with normal nutrition as the reference. Multinomial logit models (Level-1: occasions; Level-2: children; Level-3: villages) were fitted which corrected for non-independence of observations due to geographic clustering and repeated observations of individuals. A mixture of risk factors at the individual, household and neighbourhood levels predicted under-and over-nutrition among children in rural China. Improved socioeconomic status and living in more urbanised villages reduced the risk of stunted overweight among rural children in China. Young girls appeared to have higher risk of under-nutrition, and the risk decreased with age more markedly than for boys up to age 5. From age 5 onwards, boys tended to have higher risk of under-nutrition than girls. Girls aged around 12 and older were less likely to suffer from under-nutrition, while boys’ higher risk of under-nutrition persisted throughout adolescence. Children were less likely to suffer from over-nutrition compared to normal nutrition. Boys tended to have an even lower risk of over-nutrition than girls and the gender difference widened with age until adolescence. Our results have important policy implications: improving household economic status, in particular maternal education and health insurance for children, and the living environment is important to enhance rural children’s nutritional status in China. 
Investments in early years of childhood can be effective to reduce gender inequality in nutritional health in rural China. PMID:27391448

  15. Land Use, Residential Density, and Walking

    PubMed Central

    Rodríguez, Daniel A.; Evenson, Kelly R.; Diez Roux, Ana V.; Brines, Shannon J.

    2009-01-01

    Background The neighborhood environment may play a role in encouraging sedentary patterns, especially for middle-aged and older adults. Purpose Associations between walking and neighborhood population density, retail availability, and land use distribution were examined using data from a cohort of adults aged 45 to 84 years old. Methods Data from a multi-ethnic sample of 5529 adult residents of Baltimore MD, Chicago IL, Forsyth County NC, Los Angeles CA, New York NY, and St. Paul MN, enrolled in the Multi-Ethnic Study of Atherosclerosis in 2000–2002 were linked to secondary land use and population data. Participant reports of access to destinations and stores and objective measures of the percentage of land area in parcels devoted to retail land uses, the population divided by land area in parcels, and the mixture of uses for areas within 200m of each participant's residence were examined. Multinomial logistic regression was used to investigate associations of self-reported and objective neighborhood characteristics with walking. All analyses were conducted in 2008 and 2009. Results After adjustment for individual-level characteristics and neighborhood connectivity, higher density, greater land area devoted to retail uses, and self-reported measures of proximity of destinations and ease of walking to places were each related to walking. In models including all land use measures, population density was positively associated with walking to places and with walking for exercise for more than 90 min/wk both relative to no walking. Availability of retail was associated with walking to places relative to not walking, having a more proportional mix of land uses was associated with walking for exercise for more than 90 min/wk, while self-reported ease of access to places was related to higher levels of exercise walking both relative to not walking. Conclusions Residential density and the presence of retail uses are related to various walking behaviors. 
Efforts to increase walking may benefit from attention to the intensity and type of land development. PMID:19840694

  16. Land use, residential density, and walking. The multi-ethnic study of atherosclerosis.

    PubMed

    Rodríguez, Daniel A; Evenson, Kelly R; Diez Roux, Ana V; Brines, Shannon J

    2009-11-01

    The neighborhood environment may play a role in encouraging sedentary patterns, especially for middle-aged and older adults. The aim of this study was to examine the associations between walking and neighborhood population density, retail availability, and land-use distribution using data from a cohort of adults aged 45 to 84 years. Data from a multi-ethnic sample of 5529 adult residents of Baltimore MD, Chicago IL, Forsyth County NC, Los Angeles CA, New York NY, and St. Paul MN enrolled in the Multi-Ethnic Study of Atherosclerosis in 2000-2002 were linked to secondary land-use and population data. Participant reports of access to destinations and stores and objective measures of the percentage of land area in parcels devoted to retail land uses, the population divided by land area in parcels, and the mixture of uses for areas within 200 m of each participant's residence were examined. Multinomial logistic regression was used to investigate associations of self-reported and objective neighborhood characteristics with walking. All analyses were conducted in 2008 and 2009. After adjustment for individual-level characteristics and neighborhood connectivity, it was found that higher density, greater land area devoted to retail uses, and self-reported proximity of destinations and ease of walking to places were each related to walking. In models including all land-use measures, population density was positively associated with walking to places and with walking for exercise for more than 90 minutes/week, both relative to no walking. Availability of retail was associated with walking to places relative to not walking, and having a more proportional mix of land uses was associated with walking for exercise for more than 90 minutes/week, while self-reported ease of access to places was related to higher levels of exercise walking, both relative to not walking. Residential density and the presence of retail uses are related to various walking behaviors. 
Efforts to increase walking may benefit from attention to the intensity and type of land development.

  17. A framework for the use of single-chemical transcriptomics data in predicting the hazards associated with complex mixtures of polycyclic aromatic hydrocarbons.

    PubMed

    Labib, Sarah; Williams, Andrew; Kuo, Byron; Yauk, Carole L; White, Paul A; Halappanavar, Sabina

    2017-07-01

The assumption of additivity applied in the risk assessment of environmental mixtures containing carcinogenic polycyclic aromatic hydrocarbons (PAHs) was investigated using transcriptomics. MutaMouse mice were gavaged for 28 days with three doses of eight individual PAHs, two defined mixtures of PAHs, or coal tar, an environmentally ubiquitous complex mixture of PAHs. Microarrays were used to identify differentially expressed genes (DEGs) in lung tissue collected 3 days post-exposure. Cancer-related pathways perturbed by the individual or mixtures of PAHs were identified, and dose-response modeling of the DEGs was conducted to calculate gene/pathway benchmark doses (BMDs). Individual PAH-induced pathway perturbations (the median gene expression changes for all genes in a pathway relative to controls) and pathway BMDs were applied to models of additivity [i.e., concentration addition (CA), generalized concentration addition (GCA), and independent action (IA)] to generate predicted pathway-specific dose-response curves for each PAH mixture. The predicted and observed pathway dose-response curves were compared to assess the sensitivity of different additivity models. Transcriptomics-based additivity calculation showed that IA accurately predicted the pathway perturbations induced by all mixtures of PAHs. CA did not support the additivity assumption for the defined mixtures; however, GCA improved the CA predictions. Moreover, pathway BMDs derived for coal tar were comparable to BMDs derived from previously published coal tar-induced mouse lung tumor incidence data. These results suggest that in the absence of tumor incidence data, individual chemical-induced transcriptomics changes associated with cancer can be used to investigate the assumption of additivity and to predict the carcinogenic potential of a mixture.

  18. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    USGS Publications Warehouse

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when parameter estimates from N-mixture models are likely to be biased remain largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated whether the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored whether assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. 
Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.
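The kind of assumption violation this record studies is easy to reproduce: inject site-level heterogeneity that a Poisson N-mixture model does not describe and watch the count data become overdispersed. A simulation sketch under assumed parameter values (lam = 20, p = 0.5, lognormal heterogeneity), not the paper's 837-combination design:

```python
import math
import random
import statistics

random.seed(42)

def rpois(mu):
    """Poisson draw via Knuth's multiplication method."""
    limit, k, prod = math.exp(-mu), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

def sim_max_counts(n_sites, n_visits, lam, p, het_sd=0.0):
    """Max count per site under an N-mixture data-generating process;
    het_sd > 0 injects mean-preserving lognormal site-level
    heterogeneity in abundance, i.e. unmodeled heterogeneity."""
    maxima = []
    for _ in range(n_sites):
        mu = lam * math.exp(random.gauss(0.0, het_sd) - het_sd ** 2 / 2)
        n = rpois(mu)
        counts = [sum(random.random() < p for _ in range(n))
                  for _ in range(n_visits)]
        maxima.append(max(counts))
    return maxima

homog = sim_max_counts(1000, 3, 20.0, 0.5)
hetero = sim_max_counts(1000, 3, 20.0, 0.5, het_sd=0.7)
print(statistics.variance(homog), statistics.variance(hetero))
```

The heterogeneous data show far more between-site variance than the Poisson assumption implies; a Poisson N-mixture model fit to them soaks that extra variance into its parameters, which is the bias mechanism the authors quantify.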

  19. A numerical model for boiling heat transfer coefficient of zeotropic mixtures

    NASA Astrophysics Data System (ADS)

    Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo

    2017-12-01

Zeotropic mixtures never have the same liquid and vapor composition in the liquid-vapor equilibrium. Also, the bubble and the dew point are separated; this gap is called the glide temperature (Tglide). Those characteristics have made these mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles. Zeotropic mixtures as working fluids in JT cycles improve performance by an order of magnitude. Optimization of JT cycles has gained substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design in those cycles is a critical point; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate the local convective heat transfer coefficients based on the law of the wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model was developed and applied locally to fully developed, two-phase annular flow in a duct with a constant-temperature wall. Numerical results have been obtained using this model taking into account continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with available experimental data published by Barraza et al. (2016), and they show good agreement.

  20. Poverty and Material Hardship in Grandparent-Headed Households.

    PubMed

    Baker, Lindsey A; Mutchler, Jan E

    2010-08-01

Using the 2001 Survey of Income and Program Participation, the current study examines poverty and material hardship among children living in 3-generation (n = 486), skipped-generation (n = 238), single-parent (n = 2,076), and 2-parent (n = 6,061) households. Multinomial and logistic regression models indicated that children living in grandparent-headed households experience elevated risk of health insecurity (as measured by receipt of public insurance and uninsurance), a disproportionate risk given rates of poverty within those households. Children living with single parents did not share this substantial risk. Risk of food and housing insecurity did not differ significantly from 2-parent households once characteristics of the household and caregivers were taken into account.
