Science.gov

Sample records for mixed models lmms

  1. A brief introduction to mixed effects modelling and multi-model inference in ecology

    PubMed Central

    Harrison, Xavier A.; Donaldson, Lynda; Correa-Cano, Maria Eugenia; Evans, Julian; Fisher, David N.; Goodwin, Cecily E.D.; Robinson, Beth S.; Hodgson, David J.; Inger, Richard

    2018-01-01

    The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions. PMID:29844961
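    The information-theoretic model comparison this overview discusses can be sketched in a few lines of Python with statsmodels (the ecological literature mostly uses R/lme4; the data, variable names, and the rough AIC bookkeeping below are invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: 20 hypothetical sites with 10 observations each, a
# site-level random intercept, one informative and one uninformative predictor.
rng = np.random.default_rng(42)
n_sites, n_obs = 20, 10
site = np.repeat(np.arange(n_sites), n_obs)
rainfall = rng.normal(size=n_sites * n_obs)   # informative predictor
temp = rng.normal(size=n_sites * n_obs)       # uninformative predictor
y = (2.0 + 1.5 * rainfall
     + rng.normal(0, 1.0, n_sites)[site]      # random site intercepts
     + rng.normal(0, 1.0, n_sites * n_obs))   # residual noise
df = pd.DataFrame({"y": y, "rainfall": rainfall, "temp": temp, "site": site})

# Candidate models share the random-effects structure; fit with ML
# (reml=False) because the fixed effects differ between candidates.
m1 = smf.mixedlm("y ~ rainfall", df, groups=df["site"]).fit(reml=False)
m2 = smf.mixedlm("y ~ rainfall + temp", df, groups=df["site"]).fit(reml=False)

def aic(res):
    # Rough manual AIC: fixed effects + variance parameters (+1 for scale).
    k = res.params.size + 1
    return 2 * k - 2 * res.llf

print(aic(m1), aic(m2))
```

Note one of the pitfalls the overview warns about: AIC comparisons across models with different fixed effects require maximum-likelihood, not REML, fits.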

  3. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models

    PubMed Central

    Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong

    2016-01-01

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471
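    The assumption violation driving this paper can be shown in three lines: a binary trait's residual variance is p(1 - p), so it necessarily varies with prevalence across strata (a toy illustration of the problem, not the GMMAT method itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Two hypothetical strata with different trait prevalence, as produced
# by population stratification in a case-control GWAS.
for p in (0.05, 0.40):
    y = rng.binomial(1, p, size=n)
    # Bernoulli variance is p * (1 - p): it differs across strata,
    # violating the LMM's constant-residual-variance assumption.
    print(p, round(y.var(), 4), p * (1 - p))
```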

  4. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    PubMed

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
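    For readers without SPSS, the same class of model, a random intercept plus a random slope for time, can be fitted with Python's statsmodels (simulated data; all names are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a hypothetical longitudinal data set: 50 subjects, 6 waves,
# with subject-specific random intercepts and random slopes for time.
rng = np.random.default_rng(7)
n_subj, n_wave = 50, 6
subj = np.repeat(np.arange(n_subj), n_wave)
time = np.tile(np.arange(n_wave), n_subj).astype(float)
u0 = rng.normal(0, 1.0, n_subj)[subj]   # random intercepts
u1 = rng.normal(0, 0.3, n_subj)[subj]   # random slopes
y = 10.0 + 2.0 * time + u0 + u1 * time + rng.normal(0, 1.0, subj.size)
df = pd.DataFrame({"y": y, "time": time, "subject": subj})

# Random intercept and random slope for time, per subject.
model = smf.mixedlm("y ~ time", df, groups=df["subject"], re_formula="~time")
result = model.fit()
print(result.fe_params)
```

In SPSS the analogous specification goes through the MIXED procedure, roughly /RANDOM=INTERCEPT time | SUBJECT(subject) COVTYPE(UN).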

  5. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  7. Inference on the Genetic Basis of Eye and Skin Color in an Admixed Population via Bayesian Linear Mixed Models.

    PubMed

    Lloyd-Jones, Luke R; Robinson, Matthew R; Moser, Gerhard; Zeng, Jian; Beleza, Sandra; Barsh, Gregory S; Tang, Hua; Visscher, Peter M

    2017-06-01

    Genetic association studies in admixed populations are underrepresented in the genomics literature, with a key concern for researchers being the adequate control of spurious associations due to population structure. Linear mixed models (LMMs) are well suited for genome-wide association studies (GWAS) because they account for both population stratification and cryptic relatedness and achieve increased statistical power by jointly modeling all genotyped markers. Additionally, Bayesian LMMs allow for more flexible assumptions about the underlying distribution of genetic effects, and can concurrently estimate the proportion of phenotypic variance explained by genetic markers. Using three recently published Bayesian LMMs, Bayes R, BSLMM, and BOLT-LMM, we investigate an existing data set on eye (n = 625) and skin (n = 684) color from Cape Verde, an island nation off West Africa that is home to individuals with a broad range of phenotypic values for eye and skin color due to the mix of West African and European ancestry. We use simulations to demonstrate the utility of Bayesian LMMs for mapping loci and studying the genetic architecture of quantitative traits in admixed populations. The Bayesian LMMs provide evidence for two new pigmentation loci: one for eye color (AHRR) and one for skin color (DDB1). Copyright © 2017 by the Genetics Society of America.

  8. Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui).

    PubMed

    Magezi, David A

    2015-01-01

    Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).

  9. Comparing a single case to a control group - Applying linear mixed effects models to repeated measures data.

    PubMed

    Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus

    2015-10-01

    In neuropsychological research, single-cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension to this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power were tested based on Monte-Carlo simulations. We found that starting with about 15-20 participants in the control sample Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single-case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Warped linear mixed models for the genetic analysis of transformed phenotypes

    PubMed Central

    Fusi, Nicolo; Lippert, Christoph; Lawrence, Neil D.; Stegle, Oliver

    2014-01-01

    Linear mixed models (LMMs) are a powerful and established tool for studying genotype–phenotype relationships. A limitation of the LMM is that the model assumes Gaussian distributed residuals, a requirement that rarely holds in practice. Violations of this assumption can lead to false conclusions and loss in power. To mitigate this problem, it is common practice to pre-process the phenotypic values to make them as Gaussian as possible, for instance by applying logarithmic or other nonlinear transformations. Unfortunately, different phenotypes require different transformations, and choosing an appropriate transformation is challenging and subjective. Here we present an extension of the LMM that estimates an optimal transformation from the observed data. In simulations and applications to real data from human, mouse and yeast, we show that using transformations inferred by our model increases power in genome-wide association studies and increases the accuracy of heritability estimation and phenotype prediction. PMID:25234577
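    The "common practice" this paper improves on, hand-picking a Gaussianizing transformation, is often automated by maximum-likelihood Box-Cox estimation; a minimal baseline sketch (synthetic log-normal phenotype, not the warped-LMM machinery itself):

```python
import numpy as np
from scipy import stats

# Hypothetical skewed phenotype: log-normal, so the ideal Box-Cox
# parameter is lambda ~ 0 (i.e., the log transformation).
rng = np.random.default_rng(11)
phenotype = np.exp(rng.normal(0.0, 0.5, size=5000))

transformed, lam = stats.boxcox(phenotype)
print(lam)                                             # near 0 here
print(stats.skew(phenotype), stats.skew(transformed))  # skew shrinks
```

The warped LMM goes further by learning a flexible monotonic transformation jointly with the mixed-model parameters, instead of fixing one parametric family up front.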

  12. Nonlinear hyperspectral unmixing based on sparse non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Li, Jing; Li, Xiaorun; Zhao, Liaoying

    2016-01-01

    Hyperspectral unmixing aims at extracting pure material spectra, together with their corresponding proportions, from a mixed pixel. Because they model the distributions of real materials more accurately, nonlinear mixing models (non-LMMs) are usually considered to outperform LMMs in complicated scenarios, and in the past years numerous nonlinear models have been successfully applied to hyperspectral unmixing. However, most non-LMMs consider only the sum-to-one or positivity constraints and ignore the widespread sparsity of real material mixtures: a pixel is usually composed of the spectral signatures of only a few materials from the full set of pure pixels. Thus, in this paper, a smooth sparsity constraint is incorporated into the state-of-the-art Fan nonlinear model to exploit this sparsity and enhance unmixing performance. The sparsity-constrained Fan model is solved with non-negative matrix factorization. The algorithm was applied to synthetic and real hyperspectral data and showed an advantage over competing algorithms in the experiments.
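    Stripped of the Fan-model nonlinearity and the smooth sparsity term, the non-negative matrix factorization at the core of such unmixing methods can be sketched with the classic Lee-Seung multiplicative updates (synthetic data, plain Frobenius objective; a baseline sketch, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic linear-mixing data: 3 endmember spectra (columns of W_true)
# combined into 100 pixels with non-negative abundances summing to one.
bands, pixels, r = 50, 100, 3
W_true = rng.random((bands, r))
H_true = rng.dirichlet(np.ones(r), size=pixels).T
V = W_true @ H_true

# Lee-Seung multiplicative updates minimize ||V - W H||_F^2 while
# keeping both factors non-negative.
W = rng.random((bands, r)) + 0.1
H = rng.random((r, pixels)) + 0.1
eps = 1e-9
err0 = np.linalg.norm(V - W @ H)
for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err = np.linalg.norm(V - W @ H)
print(err0, err)  # reconstruction error drops sharply
```

In the unmixing setting, W holds the endmember spectra and H the per-pixel abundances; the paper's contribution is adding a sparsity penalty to this objective within the nonlinear Fan model.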

  13. Further Improvements to Linear Mixed Models for Genome-Wide Association Studies

    PubMed Central

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-01-01

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science. PMID:25387525
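    The genetic similarity matrix these methods rely on is, in its traditional form, just an averaged cross-product of standardized genotypes. A minimal sketch with random genotypes (this is the baseline GSM; the paper's contribution is choosing and mixing the SNP sets that feed it):

```python
import numpy as np

rng = np.random.default_rng(8)
n_indiv, n_snps = 30, 500

# Random 0/1/2 genotypes at hypothetical allele frequencies.
freqs = rng.uniform(0.1, 0.5, n_snps)
G = rng.binomial(2, freqs, size=(n_indiv, n_snps)).astype(float)
G = G[:, G.std(axis=0) > 0]   # drop SNPs monomorphic in this sample

# Standardize each SNP, then average cross-products over SNPs:
# K[i, j] estimates the genome-wide similarity of individuals i and j.
Z = (G - G.mean(axis=0)) / G.std(axis=0)
K = Z @ Z.T / Z.shape[1]
print(K.shape, K.diagonal().mean())  # symmetric; diagonal averages to 1
```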

  16. A Bayesian Framework for Generalized Linear Mixed Modeling Identifies New Candidate Loci for Late-Onset Alzheimer’s Disease

    PubMed Central

    Wang, Xulong; Philip, Vivek M.; Ananda, Guruprasad; White, Charles C.; Malhotra, Ankit; Michalski, Paul J.; Karuturi, Krishna R. Murthy; Chintalapudi, Sumana R.; Acklin, Casey; Sasner, Michael; Bennett, David A.; De Jager, Philip L.; Howell, Gareth R.; Carter, Gregory W.

    2018-01-01

    Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost, whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized LMM (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary, and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo sampling and maximum likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer’s Disease Sequencing Project. This study contains 570 individuals from 111 families, each with Alzheimer’s disease diagnosed at one of four confidence levels. Using Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer’s disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The coded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with Alzheimer’s disease-related neuropathology. In summary, this work provides implementation of a flexible, generalized mixed-model approach in a Bayesian framework for association studies. PMID:29507048

  17. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    PubMed Central

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292

  18. lme4qtl: linear mixed models with flexible covariance structure for genetic studies of related individuals.

    PubMed

    Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel

    2018-02-27

    Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping in association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices; and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by most genome-wide association study (GWAS) software packages. To address the aforementioned limitations, we developed a new R package, lme4qtl, as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data in the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl.

  19. Application of pattern mixture models to address missing data in longitudinal data analysis using SPSS.

    PubMed

    Son, Heesook; Friedmann, Erika; Thomas, Sue A

    2012-01-01

    Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine informativeness of missing data with pattern mixture models and to use a missing data pattern in analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires the assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of the presence of informative missingness to and control for the effects of missingness on outcomes. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.
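    The central device, coding each participant's missingness pattern and entering it as a fixed effect in the LMM, can be sketched outside SPSS as well (Python statsmodels on simulated dropout data; a sketch of the idea, not the article's SPSS procedure):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated longitudinal data with informative dropout: dropouts decline
# faster and are observed only for the first two waves.
rng = np.random.default_rng(9)
n_subj, n_wave = 60, 4
u = rng.normal(0, 1.0, n_subj)          # subject random intercepts
rows = []
for s in range(n_subj):
    dropout = rng.random() < 0.3
    slope = -2.0 if dropout else -0.5
    for t in range(2 if dropout else n_wave):
        rows.append({"subject": s, "time": float(t), "dropout": int(dropout),
                     "y": 20.0 + u[s] + slope * t + rng.normal(0, 1.0)})
df = pd.DataFrame(rows)

# Pattern mixture model: the missingness pattern enters the LMM as a
# fixed effect interacting with time.
m = smf.mixedlm("y ~ time * dropout", df, groups=df["subject"]).fit()
print(m.fe_params)
```

A significant time-by-pattern interaction is the signal that missingness is informative and that pattern-adjusted estimates are needed.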

  20. MixSIAR: advanced stable isotope mixing models in R

    EPA Science Inventory

    Background/Question/Methods The development of stable isotope mixing models has coincided with modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances are published in parity with software packages. However, while mixing model theory has recently been ex...
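    At its simplest, a stable isotope mixing model is a small linear system; the sketch below solves the two-source/one-tracer case and, with a sum-to-one row, a three-source/two-tracer case (all signature values are invented):

```python
import numpy as np

# Two sources, one tracer (d13C, permil): d_mix = p*d1 + (1 - p)*d2.
d1, d2, d_mix = -28.0, -12.0, -20.0
p = (d_mix - d2) / (d1 - d2)
print(p)  # 0.5: the consumer draws equally on both sources

# Three sources, two tracers (d13C, d15N) plus the sum-to-one constraint
# give a square system A @ props = b.
A = np.array([[-28.0, -12.0, -22.0],   # d13C of the three sources
              [  6.0,  12.0,   9.0],   # d15N of the three sources
              [  1.0,   1.0,   1.0]])  # proportions sum to 1
b = np.array([-20.0, 9.5, 1.0])        # consumer signature
props = np.linalg.solve(A, b)
print(props, props.sum())
```

Tools such as MixSIR, SIAR, and MixSIAR replace this point estimate with a Bayesian treatment that propagates source, discrimination, and residual uncertainty, which is what makes underdetermined source configurations tractable.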

  1. Group-Level EEG-Processing Pipeline for Flexible Single Trial-Based Analyses Including Linear Mixed Models.

    PubMed

    Frömer, Romy; Maier, Martin; Abdel Rahman, Rasha

    2018-01-01

    Here we present an application of an EEG processing pipeline customizing EEGLAB and FieldTrip functions, specifically optimized to flexibly analyze EEG data based on single trial information. The key component of our approach is to create a comprehensive 3-D EEG data structure including all trials and all participants maintaining the original order of recording. This allows straightforward access to subsets of the data based on any information available in a behavioral data structure matched with the EEG data (experimental conditions, but also performance indicators, such as accuracy or RTs of single trials). In the present study we exploit this structure to compute linear mixed models (LMMs, using lmer in R) including random intercepts and slopes for items. This information can easily be read out from the matched behavioral data, whereas it might not be accessible in traditional ERP approaches without substantial effort. We further provide easily adaptable scripts for performing cluster-based permutation tests (as implemented in FieldTrip), as a more robust alternative to traditional omnibus ANOVAs. Our approach is particularly advantageous for data with parametric within-subject covariates (e.g., performance) and/or multiple complex stimuli (such as words, faces or objects) that vary in features affecting cognitive processes and ERPs (such as word frequency, salience or familiarity), which are sometimes hard to control experimentally or might themselves constitute variables of interest. The present dataset was recorded from 40 participants who performed a visual search task on previously unfamiliar objects, presented either visually intact or blurred. MATLAB as well as R scripts are provided that can be adapted to different datasets.
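    The pipeline's key component, a trials x channels x time array kept row-aligned with a behavioral table, can be mimicked with numpy/pandas stand-ins for the EEGLAB/FieldTrip objects (all names and sizes here are invented):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 400, 64, 250

# 3-D single-trial EEG structure for one participant, in recording order.
eeg = rng.normal(size=(n_trials, n_channels, n_times))

# Behavioral table matched row-for-row with the first EEG axis.
behavior = pd.DataFrame({
    "condition": rng.choice(["intact", "blurred"], n_trials),
    "accuracy": rng.binomial(1, 0.9, n_trials),
    "rt": rng.uniform(300.0, 900.0, n_trials),
})

# Any behavioral column becomes a boolean mask into the EEG array,
# e.g. correct, fast trials from the "blurred" condition:
mask = ((behavior["condition"] == "blurred")
        & (behavior["accuracy"] == 1)
        & (behavior["rt"] < 600.0))
subset = eeg[mask.to_numpy()]
print(subset.shape)
```

The matched table is also exactly what would be passed to lmer in R as the trial-level data frame for the LMM step.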

  3. Determining major factors controlling phosphorus removal by promising adsorbents used for lake restoration: A linear mixed model approach.

    PubMed

    Funes, A; Martínez, F J; Álvarez-Manzaneda, I; Conde-Porcuna, J M; de Vicente, J; Guerrero, F; de Vicente, I

    2018-05-17

    Phosphorus (P) removal from lake/drainage waters by novel adsorbents may be affected by competitive substances naturally present in the aqueous media. To date, the effect of interfering substances has been studied mainly in simple matrices (single-factor effects) or with basic statistical approaches when using natural lake water. In this study, we determined the major factors controlling P removal efficiency in 20 aquatic ecosystems in southeast Spain by using linear mixed models (LMMs). Two non-magnetic materials (CFH-12® and Phoslock®) and two magnetic materials (hydrous lanthanum oxide loaded silica-coated magnetite (Fe-Si-La) and commercial zero-valent iron particles (FeHQ)) were tested for P removal at two adsorbent dosages. Results showed that the type of adsorbent, the adsorbent dosage and the color of the water (indicative of humic substances) are the major factors controlling P removal efficiency. Differences in physico-chemical properties (i.e. surface charge or specific surface area), composition and structure explain differences in maximum P adsorption capacity and in the performance of the adsorbents when competitive ions are present. The highest P removal efficiencies, regardless of whether the adsorbent dosage was low or high, were 85-100% for Phoslock® and CFH-12®, 70-100% for Fe-Si-La and 0-15% for FeHQ. The low dosage of FeHQ, compared to previous studies, explained its low P removal efficiency. Although the non-magnetic materials were the most efficient, magnetic adsorbents (especially Fe-Si-La) could be proposed for P removal because they can be recovered along with the P and reused, potentially making them more profitable over the long term. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. A Lagrangian mixing frequency model for transported PDF modeling

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the interaction-by-exchange-with-the-mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver, an LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
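
The IEM mixing model referenced above has a simple particle-level form: each particle's scalar relaxes toward the ensemble mean at a rate set by the mixing frequency. A minimal sketch, with placeholder values for the model constant C_phi and the mixing frequency omega (not the paper's dynamically predicted frequencies):

```python
import numpy as np

# Minimal IEM (interaction by exchange with the mean) sketch: each notional
# particle's scalar phi relaxes toward the ensemble mean. C_phi and omega are
# placeholder constants, illustrating the conventional model the paper improves on.
rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 1.0, 10_000)  # particle scalars (e.g., mixture fraction)
c_phi, omega, dt, nsteps = 2.0, 5.0, 1e-3, 500

mean0, var0 = phi.mean(), phi.var()
for _ in range(nsteps):
    phi += -0.5 * c_phi * omega * (phi - phi.mean()) * dt

# IEM preserves the mean and decays the variance exponentially:
# var(t) = var(0) * exp(-c_phi * omega * t)
expected_var = var0 * np.exp(-c_phi * omega * nsteps * dt)
print(abs(phi.mean() - mean0) < 1e-9, np.isclose(phi.var(), expected_var, rtol=0.05))
```

The difficulty the abstract points to is precisely that the decay rate hinges on the chosen constants; the proposed model instead derives the frequency from particle-level dissipation information.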

  5. Modeling optimal treatment strategies in a heterogeneous mixing model.

    PubMed

    Choe, Seoyun; Lee, Sunmi

    2015-11-25

    Many mathematical models assume random or homogeneous mixing for various infectious diseases. Homogeneous mixing can be generalized to mathematical models with multi-patches or age structure by incorporating contact matrices to capture the dynamics of the heterogeneously mixing populations. Contact or mixing patterns are difficult to measure in many infectious diseases including influenza. Mixing patterns are considered to be one of the critical factors for infectious disease modeling. A two-group influenza model is considered to evaluate the impact of heterogeneous mixing on the influenza transmission dynamics. Heterogeneous mixing between two groups with two different activity levels includes proportionate mixing, preferred mixing and like-with-like mixing. Furthermore, the optimal control problem is formulated in this two-group influenza model to identify the group-specific optimal treatment strategies at a minimal cost. We investigate group-specific optimal treatment strategies under various mixing scenarios. The characteristics of the two-group influenza dynamics have been investigated in terms of the basic reproduction number and the final epidemic size under various mixing scenarios. As the mixing patterns become proportionate mixing, the basic reproduction number becomes smaller; however, the final epidemic size becomes larger. This is due to the fact that the number of infected people increases only slightly in the higher activity level group, while the number of infected people increases more significantly in the lower activity level group. Our results indicate that more intensive treatment of both groups at the early stage is the most effective treatment regardless of the mixing scenario. However, proportionate mixing requires more treated cases for all combinations of different group activity levels and group population sizes. Mixing patterns can play a critical role in the effectiveness of optimal treatments. As the mixing becomes more like
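
The mixing structures named above (proportionate, preferred, like-with-like) can be illustrated with the standard preferred-mixing contact matrix, in which group i reserves a fraction eps_i of its contacts for its own group and distributes the rest proportionately to activity. This is a sketch of that textbook form, not the paper's specific parameterization; all numbers are illustrative:

```python
import numpy as np

# Preferred-mixing contact matrix for two activity groups. eps = 0 recovers
# proportionate mixing; eps -> 1 approaches like-with-like mixing.
a = np.array([10.0, 2.0])       # activity levels (contacts per unit time)
N = np.array([1000.0, 4000.0])  # group sizes (illustrative)
eps = np.array([0.3, 0.3])      # preference for own group

# Fraction of the "shared pool" of contacts belonging to each group
f = (1 - eps) * a * N / ((1 - eps) * a * N).sum()
M = np.diag(eps) + np.outer(1 - eps, f)  # M[i, j]: fraction of i's contacts with j

print(M.sum(axis=1))  # each row sums to 1
```

Feeding such a matrix into a two-group SIR-type model and computing the next-generation matrix then yields the basic reproduction number and final size comparisons the abstract describes.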

  6. Reliability of light microscopy and a computer-assisted replica measurement technique for evaluating the fit of dental copings.

    PubMed

    Rudolph, Heike; Ostertag, Silke; Ostertag, Michael; Walter, Michael H; Luthardt, Ralph Gunnar; Kuhn, Katharina

    2018-02-01

    The aim of this in vitro study was to assess the reliability of two measurement systems for evaluating the marginal and internal fit of dental copings. Sixteen CAD/CAM titanium copings were produced for a prepared maxillary canine. Varying fit was created by modifying the CAD surface model using different parameters (data density; enlargement in different directions). Five light-body silicone replicas representing the gap between the canine and the coping were made for each coping and for each measurement method: (1) light microscopy measurements (LMMs); and (2) computer-assisted measurements (CASMs) using an optical digitizing system. Two investigators independently measured the marginal and internal fit using both methods. The inter-rater reliability [intraclass correlation coefficient (ICC)] and agreement [Bland-Altman analyses: the mean of the differences (bias) between two measurements, where a mean closer to zero indicates higher agreement] were calculated for several measurement points (marginal-distal, marginal-buccal, axial-buccal, incisal). For the LMM technique, one investigator repeated the measurements to determine repeatability (intra-rater reliability and agreement). For inter-rater reliability, the ICC was 0.848-0.998 for LMMs and 0.945-0.999 for CASMs, depending on the measurement point. Bland-Altman bias was -15.7 to 3.5 μm for LMMs and -3.0 to 1.9 μm for CASMs. For LMMs, the marginal-distal and marginal-buccal measurement points showed the lowest ICC (0.848/0.978) and the highest bias (-15.7 μm/-7.6 μm). For the intra-rater reliability and agreement (repeatability) of LMMs, the ICC was 0.970-0.998 and the bias was -1.3 to 2.3 μm. LMMs showed lower inter-rater reliability and agreement at the marginal measurement points than CASMs, which indicates a more subjective influence with LMMs at these measurement points. The values, however, were still clinically acceptable. LMMs showed very high intra
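
The two agreement statistics used in this study, Bland-Altman bias and a two-way random-effects ICC for absolute agreement (ICC(2,1)), can be computed from first principles. The data below are synthetic stand-ins for two investigators' gap readings, not the study's measurements:

```python
import numpy as np

# Synthetic two-rater data: rater 2 reads about 2 um higher (systematic bias).
rng = np.random.default_rng(2)
true_gap = rng.normal(60.0, 10.0, 20)                # 20 measurement sites (um)
rater1 = true_gap + rng.normal(0.0, 1.0, 20)
rater2 = true_gap + 2.0 + rng.normal(0.0, 1.0, 20)

# Bland-Altman: bias (mean difference) and 95% limits of agreement
diff = rater2 - rater1
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# ICC(2,1) from the two-way ANOVA mean squares (subjects x raters)
x = np.column_stack([rater1, rater2])
n, k = x.shape
grand = x.mean()
msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between raters
sse = ((x - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
mse = sse / ((n - 1) * (k - 1))
icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(round(bias, 2), round(icc, 3))
```

Because ICC(2,1) penalizes systematic rater offsets through the MSC term, a constant bias lowers the ICC even when the raters rank sites identically, which mirrors the abstract's finding that the points with the highest bias also showed the lowest ICC.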

  7. Reliability of light microscopy and a computer-assisted replica measurement technique for evaluating the fit of dental copings

    PubMed Central

    Rudolph, Heike; Ostertag, Silke; Ostertag, Michael; Walter, Michael H.; LUTHARDT, Ralph Gunnar; Kuhn, Katharina

    2018-01-01

    Abstract The aim of this in vitro study was to assess the reliability of two measurement systems for evaluating the marginal and internal fit of dental copings. Material and Methods Sixteen CAD/CAM titanium copings were produced for a prepared maxillary canine. Varying fit was created by modifying the CAD surface model using different parameters (data density; enlargement in different directions). Five light-body silicone replicas representing the gap between the canine and the coping were made for each coping and for each measurement method: (1) light microscopy measurements (LMMs); and (2) computer-assisted measurements (CASMs) using an optical digitizing system. Two investigators independently measured the marginal and internal fit using both methods. The inter-rater reliability [intraclass correlation coefficient (ICC)] and agreement [Bland-Altman analyses: the mean of the differences (bias) between two measurements, where a mean closer to zero indicates higher agreement] were calculated for several measurement points (marginal-distal, marginal-buccal, axial-buccal, incisal). For the LMM technique, one investigator repeated the measurements to determine repeatability (intra-rater reliability and agreement). Results For inter-rater reliability, the ICC was 0.848-0.998 for LMMs and 0.945-0.999 for CASMs, depending on the measurement point. Bland-Altman bias was −15.7 to 3.5 μm for LMMs and −3.0 to 1.9 μm for CASMs. For LMMs, the marginal-distal and marginal-buccal measurement points showed the lowest ICC (0.848/0.978) and the highest bias (−15.7 μm/−7.6 μm). For the intra-rater reliability and agreement (repeatability) of LMMs, the ICC was 0.970-0.998 and the bias was −1.3 to 2.3 μm. Conclusion LMMs showed lower inter-rater reliability and agreement at the marginal measurement points than CASMs, which indicates a more subjective influence with LMMs at these measurement points. The values, however, were still

  8. Statistical models of global Langmuir mixing

    NASA Astrophysics Data System (ADS)

    Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean

    2017-05-01

    The effects of Langmuir mixing on the surface ocean may be parameterized by applying an enhancement factor, which depends on the wave, wind, and ocean state, to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but at significant computational and code-development expense. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, at significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.

  9. Modelling rainfall amounts using mixed-gamma model for Kuantan district

    NASA Astrophysics Data System (ADS)

    Zakaria, Roslinazairimah; Moslim, Nor Hafizah

    2017-05-01

    Efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and its characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. The formulae of the mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The formulae of the mean and variance derived for the sum of two and three independent mixed-gamma variables are tested using the monthly rainfall amounts from rainfall stations within Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness of fit test, the results demonstrate that the descriptive statistics of the observed sum of rainfall amounts are not significantly different at the 5% significance level from the generated sum of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
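
The moments of a single mixed-gamma variable follow directly from conditioning on whether the amount is zero, and can be checked by Monte Carlo. The parameter values below are illustrative, not fitted to the Kuantan data:

```python
import numpy as np

# Mixed-gamma model: with probability p0 the amount is 0 (dry); otherwise it is
# Gamma(shape, scale). Conditioning gives
#   E[X]   = (1 - p0) * shape * scale
#   Var[X] = (1 - p0) * shape * scale**2 * (1 + shape) - E[X]**2
rng = np.random.default_rng(3)
p0, shape, scale = 0.3, 2.0, 50.0
n = 200_000

wet = rng.random(n) >= p0
x = np.where(wet, rng.gamma(shape, scale, n), 0.0)

mean_theory = (1 - p0) * shape * scale
var_theory = (1 - p0) * shape * scale**2 * (1 + shape) - mean_theory**2
print(np.isclose(x.mean(), mean_theory, rtol=0.02),
      np.isclose(x.var(), var_theory, rtol=0.02))
```

For sums of independent mixed-gamma variables, as in the paper, the mean and variance simply add, so the same check extends to the two- and three-variable sums.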

  10. System equivalent model mixing

    NASA Astrophysics Data System (ADS)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

    This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM) frequency based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques; namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.

  11. Quantifying uncertainty in stable isotope mixing models

    DOE PAGES

    Davis, Paul; Syme, James; Heikoop, Jeffrey; ...

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition as well as demonstrating the value of additional information in reducing the uncertainty in calculated
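
The "pure Monte Carlo" idea discussed above can be sketched as a simple rejection sampler: draw candidate mixing fractions uniformly from the simplex, predict the mixture's tracer values as the fraction-weighted source means, and keep candidates that reproduce the measured sample within a tolerance. The source and sample values below are invented two-tracer numbers, not the study's data:

```python
import numpy as np

# PMC-style rejection sampler for a three-source, two-tracer mixing problem.
rng = np.random.default_rng(4)
sources = np.array([[2.0, 1.0],    # source A: (d15N, d18O), made-up values
                    [8.0, 3.0],    # source B
                    [5.0, 9.0]])   # source C
sample = np.array([5.0, 4.0])      # measured mixture
tol = 0.3

f = rng.dirichlet(np.ones(3), size=100_000)   # uniform draws from the simplex
pred = f @ sources                            # predicted mixture tracers
keep = np.all(np.abs(pred - sample) < tol, axis=1)
posterior = f[keep]                           # accepted fraction vectors

print(len(posterior), posterior.mean(axis=0))
```

As the abstract notes, this approach degrades with more sources and overlapping compositions: the acceptance region shrinks and the sampler may find no solutions at all, which motivates the SIRS alternative.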

  12. Convex set and linear mixing model

    NASA Technical Reports Server (NTRS)

    Xu, P.; Greeley, R.

    1993-01-01

    A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
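
Under the convex-set model above, a pixel spectrum is a convex combination of endmember spectra, so abundances can be recovered by least squares with a sum-to-one constraint. A minimal sketch with made-up spectra, enforcing the constraint via a heavily weighted extra row:

```python
import numpy as np

# Linear unmixing sketch: recover mixing fractions of known endmembers.
endmembers = np.array([[0.10, 0.40, 0.80, 0.30],   # endmember 1 (4 bands)
                       [0.60, 0.20, 0.10, 0.70],   # endmember 2
                       [0.30, 0.90, 0.50, 0.20]])  # endmember 3
true_f = np.array([0.5, 0.3, 0.2])
pixel = true_f @ endmembers                        # exact convex mixture

# Augmented system: spectral bands plus the constraint sum(f) = 1,
# appended as a heavily weighted row.
A = np.vstack([endmembers.T, 100.0 * np.ones(3)])
b = np.concatenate([pixel, [100.0]])
f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(f_hat, true_f, atol=1e-6))
```

For noisy pixels one would additionally enforce nonnegativity (e.g., with a nonnegative least-squares solver) so that the solution stays inside the convex hull of the endmembers, matching the geometric picture in the abstract.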

  13. Application of the Fokker-Planck molecular mixing model to turbulent scalar mixing using moment methods

    NASA Astrophysics Data System (ADS)

    Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.

    2017-06-01

    An extended quadrature method of moments using the β kernel density function (β -EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β -PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β -EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.

  14. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.

  15. Transition mixing study empirical model report

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; White, C.

    1988-01-01

    The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated. The empirical model shows faster mixing rates than the numerical model. Both models show drift of the jets toward the inner wall of a turning duct. The structure of the jets from the inner wall does not exhibit the familiar kidney-shaped structures observed for the outer-wall jets or for jets injected into rectangular ducts.

  16. Lagrangian mixed layer modeling of the western equatorial Pacific

    NASA Technical Reports Server (NTRS)

    Shinoda, Toshiaki; Lukas, Roger

    1995-01-01

    Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs in the western equatorial Pacific due to episodes of strong wind and light precipitation associated with the El Niño-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.

  17. MixSIAR: A Bayesian stable isotope mixing model for characterizing intrapopulation niche variation

    EPA Science Inventory

    Background/Question/Methods The science of stable isotope mixing models has tended towards the development of modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances or syntheses of the current state of the art are published in parity with software packa...

  18. Use and abuse of mixing models (MixSIAR)

    EPA Science Inventory

    Background/Question/Methods Characterizing trophic links in food webs is a fundamental ecological question. In our efforts to quantify energy flow through food webs, ecologists have increasingly used mixing models to analyze biological tracer data, often from stable isotopes. Whil...

  19. A mixing timescale model for TPDF simulations of turbulent premixed flames

    DOE PAGES

    Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.; ...

    2017-02-06

    Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive

  20. A mixing timescale model for TPDF simulations of turbulent premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.

    Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive

  1. Modeling molecular mixing in a spatially inhomogeneous turbulent flow

    NASA Astrophysics Data System (ADS)

    Meyer, Daniel W.; Deb, Rajdeep

    2012-02-01

    Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.

  2. Modelling of upper ocean mixing by wave-induced turbulence

    NASA Astrophysics Data System (ADS)

    Ghantous, Malek; Babanin, Alexander

    2013-04-01

    Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been a lot of interest in introducing this mechanism into models, and real gains have been made in terms of increased fidelity to observational data. However our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing models need refinement and propose an alternative model. We use two of the models to demonstrate the effect on the mixed layer of wave-induced turbulence by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.
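
The one-dimensional mixing experiment described above can be sketched as diffusion of a stable temperature profile with an eddy diffusivity enhanced near the surface to mimic wave-induced turbulence. All coefficients below are illustrative placeholders, not the authors' parameterization:

```python
import numpy as np

# 1-D mixed layer sketch: conservative finite-volume diffusion of temperature
# with depth-dependent diffusivity (enhanced near the surface), no-flux boundaries.
nz, dz, dt, nsteps = 100, 1.0, 50.0, 2000        # grid spacing (m), time step (s)
z = (np.arange(nz) + 0.5) * dz                   # depth of cell centers
T = 20.0 - 0.05 * z                              # stable initial profile (deg C)
kappa = 1e-4 + 1e-3 * np.exp(-z / 10.0)          # placeholder eddy diffusivity (m^2/s)

T0_mean = T.mean()
for _ in range(nsteps):
    # Downward diffusive flux at cell interfaces: F = -kappa * dT/dz
    flux = -0.5 * (kappa[1:] + kappa[:-1]) * np.diff(T) / dz
    T[1:-1] += dt / dz * (flux[:-1] - flux[1:])
    T[0] -= dt / dz * flux[0]                    # no flux through the surface
    T[-1] += dt / dz * flux[-1]                  # no flux through the bottom

# Heat is conserved and the surface-to-bottom contrast is reduced by mixing.
print(np.isclose(T.mean(), T0_mean), T[0] - T[-1] < 4.95)
```

Adding the wave-induced term in the models the authors compare amounts to changing how `kappa` (or an equivalent turbulence source) depends on the wave field, which is exactly the refinement the abstract argues needs observational constraint.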

  3. Estimation of the linear mixed integrated Ornstein–Uhlenbeck model

    PubMed Central

    Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate

    2017-01-01

    The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536

  4. On the coalescence-dispersion modeling of turbulent molecular mixing

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Kosaly, George

    1987-01-01

    The general coalescence-dispersion (C/D) closure provides phenomenological modeling of turbulent molecular mixing. The models of Curl and Dopazo and O'Brien appear as two limiting C/D models that bracket the range of results one can obtain by various models. This finding is used to investigate the sensitivity of the results to the choice of the model. Inert scalar mixing is found to be less model-sensitive than mixing accompanied by chemical reaction. The infinitely fast chemistry approximation is used to relate the C/D approach to Toor's earlier results. Pure mixing and infinite rate chemistry calculations are compared to study further a recent result of Hsieh and O'Brien, who found that higher concentration moments are not sensitive to chemistry.
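
    A minimal sketch of the Curl-type limit may help fix ideas: randomly chosen particle pairs coalesce and both leave with their mean scalar value, which conserves the mean while decaying the variance. The particle counts and step sizes below are illustrative, not from the paper.

```python
import numpy as np

def curl_mixing_step(phi, n_pairs, rng):
    """One coalescence-dispersion (Curl) mixing step: randomly chosen
    particle pairs coalesce and both leave with their mean scalar
    value. The ensemble mean is conserved; the variance decays."""
    phi = phi.copy()
    for _ in range(n_pairs):
        i, j = rng.choice(len(phi), size=2, replace=False)
        m = 0.5 * (phi[i] + phi[j])
        phi[i] = phi[j] = m
    return phi

rng = np.random.default_rng(0)
phi = np.where(rng.random(10_000) < 0.5, 0.0, 1.0)  # double-delta initial PDF
mixed = phi
for _ in range(5):
    mixed = curl_mixing_step(mixed, n_pairs=5_000, rng=rng)
print(mixed.mean(), mixed.var())  # mean preserved, variance reduced
```

    The opposite (Dopazo-O'Brien) limit would instead relax every particle continuously toward the ensemble mean.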

  5. VISUAL PLUMES MIXING ZONE MODELING SOFTWARE

    EPA Science Inventory

    The U.S. Environmental Protection Agency has a long history of both supporting plume model development and providing mixing zone modeling software. The Visual Plumes model is the most recent addition to the suite of public-domain models available through the EPA-Athens Center f...

  6. Model-Independent Bounds on Kinetic Mixing

    DOE PAGES

    Hook, Anson; Izaguirre, Eder; Wacker, Jay G.

    2011-01-01

    New Abelian vector bosons can kinetically mix with the hypercharge gauge boson of the Standard Model. This letter computes the model-independent limits on vector bosons with masses from 1 GeV to 1 TeV. The limits arise from the numerous e+e− experiments that have been performed in this energy range and bound the kinetic mixing by ϵ ≲ 0.03 for most of the mass range studied, regardless of any additional interactions that the new vector boson may have.

  7. Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.

    PubMed

    Zhang, Yue; Berhane, Kiros

    2016-01-01

    We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family for taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children Health Study (CHS) to jointly model questionnaire based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.
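
    The likelihood machinery underlying such models can be illustrated, in stripped-down form, by a scaled forward algorithm for a single categorical outcome with misclassification (a plain HMM, without the latent-trait, covariate, and cluster structure of the full MHMM; all probabilities below are hypothetical):

```python
import numpy as np

def forward_loglik(obs, pi, P, M):
    """Log-likelihood of a misclassified categorical sequence under a
    hidden Markov model: pi = initial latent-state probabilities,
    P = latent-state transition matrix, M[s, o] = probability of
    *observing* o when the true state is s (misclassification matrix).
    Uses the standard scaled forward recursion."""
    alpha = pi * M[:, obs[0]]
    ll = 0.0
    for o in obs[1:]:
        c = alpha.sum()
        ll += np.log(c)
        alpha = (alpha / c) @ P * M[:, o]
    ll += np.log(alpha.sum())
    return ll

pi = np.array([0.8, 0.2])                # e.g. [no asthma, asthma]
P = np.array([[0.9, 0.1], [0.3, 0.7]])   # latent state transitions
M = np.array([[0.95, 0.05], [0.2, 0.8]]) # reported vs. true state
obs = [0, 0, 1, 1, 0]
print(forward_loglik(obs, pi, P, M))
```

    The full MHMM additionally lets pi, P, and M depend on covariates and cluster-level random effects.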

  8. Prediction of stock markets by the evolutionary mix-game model

    NASA Astrophysics Data System (ADS)

    Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping

    2008-06-01

    This paper presents the efforts of using the evolutionary mix-game model, which is a modified form of the agent-based mix-game model, to predict financial time series. Here, we have carried out three methods to improve the original mix-game model by adding the abilities of strategy evolution to agents, and then applying the new model referred to as the evolutionary mix-game model to forecast the Shanghai Stock Exchange Composite Index. The results show that these modifications can improve the accuracy of prediction greatly when proper parameters are chosen.

  9. Diagnostic tools for mixing models of stream water chemistry

    USGS Publications Warehouse

    Hooper, Richard P.

    2003-01-01

    Mixing models provide a useful null hypothesis against which to evaluate processes controlling stream water chemical data. Because conservative mixing of end‐members with constant concentration is a linear process, a number of simple mathematical and multivariate statistical methods can be applied to this problem. Although mixing models have been most typically used in the context of mixing soil and groundwater end‐members, an extension of the mathematics of mixing models is presented that assesses the “fit” of a multivariate data set to a lower dimensional mixing subspace without the need for explicitly identified end‐members. Diagnostic tools are developed to determine the approximate rank of the data set and to assess lack of fit of the data. This permits identification of processes that violate the assumptions of the mixing model and can suggest the dominant processes controlling stream water chemical variation. These same diagnostic tools can be used to assess the fit of the chemistry of one site into the mixing subspace of a different site, thereby permitting an assessment of the consistency of controlling end‐members across sites. This technique is applied to a number of sites at the Panola Mountain Research Watershed located near Atlanta, Georgia.
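
    A minimal sketch of the rank diagnostic, assuming a PCA/SVD formulation (variable names and end-member chemistries are hypothetical): conservative mixing of m end-members confines the centered data to an (m−1)-dimensional subspace, so residuals from a rank-k projection diagnose the fit.

```python
import numpy as np

def mixing_subspace_residuals(C, k):
    """Project solute concentrations C (samples x solutes) onto the
    rank-k subspace spanned by the leading principal components and
    return the residuals (observed minus projected). Near-zero
    residuals at rank k are consistent with conservative mixing of
    k+1 end-members."""
    X = C - C.mean(axis=0)             # center: the mixing subspace is affine
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:k]                         # leading k principal directions
    return X - X @ P.T @ P             # residual matrix, same shape as C

# Synthetic check: mixtures of two end-members in 4-solute space
rng = np.random.default_rng(0)
e1 = np.array([10.0, 2.0, 5.0, 1.0])   # hypothetical end-member chemistries
e2 = np.array([1.0, 8.0, 2.0, 6.0])
f = rng.uniform(0, 1, size=(200, 1))   # mixing fractions
C = f * e1 + (1 - f) * e2              # conservative two-end-member mixing
res = mixing_subspace_residuals(C, k=1)
print(np.abs(res).max())               # ~0: data lie in a 1-D mixing subspace
```

    Structured (non-random) residuals at the chosen rank would indicate processes that violate the conservative-mixing assumption.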

  10. Quantifying spatial distribution of spurious mixing in ocean models.

    PubMed

    Ilıcak, Mehmet

    2016-12-01

    Numerical mixing is inevitable for ocean models due to tracer advection schemes. Until now, there is no robust way to identify the regions of spurious mixing in ocean models. We propose a new method to compute the spatial distribution of the spurious diapycnic mixing in an ocean model. This new method is an extension of available potential energy density method proposed by Winters and Barkan (2013). We test the new method in lock-exchange and baroclinic eddies test cases. We can quantify the amount and the location of numerical mixing. We find high-shear areas are the main regions which are susceptible to numerical truncation errors. We also test the new method to quantify the numerical mixing in different horizontal momentum closures. We conclude that Smagorinsky viscosity has less numerical mixing than the Leith viscosity using the same non-dimensional constant.

  11. Mixed models and reduced/selective integration displacement models for nonlinear analysis of curved beams

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Peters, J. M.

    1981-01-01

    Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of the nonlinear deep arch theory with the effects of transverse shear deformation included. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using the Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.

  12. Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage

    ERIC Educational Resources Information Center

    Galyardt, April

    2012-01-01

    This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…

  13. Application of mixing-controlled combustion models to gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung Lee

    1990-01-01

    Gas emissions from a staged Rich Burn/Quick-Quench Mix/Lean Burn combustor were studied under test conditions encountered in High Speed Research engines. The combustor was modeled at conditions corresponding to different engine power settings, and the effect of primary dilution airflow split on emissions, flow field, flame size and shape, and combustion intensity, as well as mixing, was investigated. A mathematical model was developed from a two-equation model of turbulence, a quasi-global kinetics mechanism for the oxidation of propane, and the Zeldovich mechanism for nitric oxide formation. A mixing-controlled combustion model was used to account for turbulent mixing effects on the chemical reaction rate. This model assumes that the chemical reaction rate is much faster than the turbulent mixing rate.

  14. Among-tree variability and feedback effects result in different growth responses to climate change at the upper treeline in the Swiss Alps.

    PubMed

    Jochner, Matthias; Bugmann, Harald; Nötzli, Magdalena; Bigler, Christof

    2017-10-01

    Upper treeline ecotones are important life form boundaries and particularly sensitive to a warming climate. Changes in growth conditions at these ecotones have wide-ranging implications for the provision of ecosystem services in densely populated mountain regions like the European Alps. We quantify climate effects on short- and long-term tree growth responses, focusing on among-tree variability and potential feedback effects. Although among-tree variability is thought to be substantial, it has not been considered systematically yet in studies on growth-climate relationships. We compiled tree-ring data including almost 600 trees of major treeline species ( Larix decidua , Picea abies , Pinus cembra , and Pinus mugo ) from three climate regions of the Swiss Alps. We further acquired tree size distribution data using unmanned aerial vehicles. To account for among-tree variability, we employed information-theoretic model selections based on linear mixed-effects models (LMMs) with flexible choice of monthly temperature effects on growth. We isolated long-term trends in ring-width indices (RWI) in interaction with elevation. The LMMs revealed substantial amounts of previously unquantified among-tree variability, indicating different strategies of single trees regarding when and to what extent to invest assimilates into growth. Furthermore, the LMMs indicated strongly positive temperature effects on growth during short summer periods across all species, and significant contributions of fall ( L. decidua ) and current year's spring ( L. decidua , P. abies ). In the longer term, all species showed consistently positive RWI trends at highest elevations, but different patterns with decreasing elevation. L. decidua exhibited even negative RWI trends compared to the highest treeline sites, whereas P. abies , P. cembra , and P. mugo showed steeper or flatter trends with decreasing elevation. This does not only reflect effects of ameliorated climate conditions on tree

  15. On Local Homogeneity and Stochastically Ordered Mixed Rasch Models

    ERIC Educational Resources Information Center

    Kreiner, Svend; Hansen, Mogens; Hansen, Carsten Rosenberg

    2006-01-01

    Mixed Rasch models add latent classes to conventional Rasch models, assuming that the Rasch model applies within each class and that relative difficulties of items are different in two or more latent classes. This article considers a family of stochastically ordered mixed Rasch models, with ordinal latent classes characterized by increasing total…

  16. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
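
    The data "explosion" step can be sketched as follows; the column names and cut points are hypothetical, and the subsequent Poisson GLMM fit (e.g. via the %PCFrailty macro in SAS) is not shown:

```python
import numpy as np
import pandas as pd

def explode_survival(df, cuts):
    """Split each subject's follow-up time into the piecewise intervals
    defined by `cuts`, producing one row per (subject, piece) with the
    time at risk and an event indicator for that piece. This is the
    long-format data needed to fit the frailty model as a Poisson GLMM
    with log(time at risk) as offset."""
    rows = []
    edges = [0.0] + list(cuts)
    for _, r in df.iterrows():
        for j in range(len(edges) - 1):
            start, stop = edges[j], edges[j + 1]
            if r["time"] <= start:
                break                   # subject left risk set earlier
            at_risk = min(r["time"], stop) - start
            event = int(r["event"] == 1 and start < r["time"] <= stop)
            rows.append({"id": r["id"], "piece": j,
                         "at_risk": at_risk, "event": event,
                         "log_offset": np.log(at_risk)})
    return pd.DataFrame(rows)

df = pd.DataFrame({"id": [1, 2], "time": [2.5, 0.8], "event": [1, 0]})
long = explode_survival(df, cuts=[1.0, 2.0, 3.0])
print(long)
```

    A piecewise-constant baseline hazard then enters the Poisson model as one fixed effect per piece, and the log-normal frailty as a normally distributed random intercept per cluster.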

  17. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. A hybrid probabilistic/spectral model of scalar mixing

    NASA Astrophysics Data System (ADS)

    Vaithianathan, T.; Collins, Lance

    2002-11-01

    In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure to the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent "transfer" while scalar exchanges between particles represent "mixing." The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (or EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts the scalar gradient distribution (which is available in this representation) approaches log normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.

  19. Conditional Random Fields for Fast, Large-Scale Genome-Wide Association Studies

    PubMed Central

    Huang, Jim C.; Meek, Christopher; Kadie, Carl; Heckerman, David

    2011-01-01

    Understanding the role of genetic variation in human diseases remains an important problem to be solved in genomics. An important component of such variation consists of variations at single sites in DNA, or single nucleotide polymorphisms (SNPs). Typically, the problem of associating particular SNPs to phenotypes has been confounded by hidden factors such as the presence of population structure, family structure or cryptic relatedness in the sample of individuals being analyzed. Such confounding factors lead to a large number of spurious associations and missed associations. Various statistical methods have been proposed to account for such confounding factors, such as linear mixed-effects models (LMMs) or methods that adjust data based on a principal components analysis (PCA), but these methods either suffer from low power or cease to be tractable for larger numbers of individuals in the sample. Here we present a statistical model for conducting genome-wide association studies (GWAS) that accounts for such confounding factors. Our method's runtime scales quadratically in the number of individuals being studied, with only a modest loss in statistical power as compared to LMM-based and PCA-based methods when testing on synthetic data that was generated from a generalized LMM. Applying our method to both real and synthetic human genotype/phenotype data, we demonstrate the ability of our model to correct for confounding factors while requiring significantly less runtime relative to LMMs. We have implemented methods for fitting these models, which are available at http://www.microsoft.com/science. PMID:21765897
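
    The LMM baseline the authors compare against can be sketched via the standard rotation trick: eigendecomposing the kinship matrix diagonalizes the covariance, reducing the per-SNP GLS fit to weighted least squares. The variance components are assumed known here rather than REML-estimated, covariates are omitted, and all data are synthetic.

```python
import numpy as np

def lmm_assoc_stat(y, x, K, sigma_g2, sigma_e2):
    """GLS association test of one SNP under y = x*beta + g + e, with
    g ~ N(0, sigma_g2*K) and e ~ N(0, sigma_e2*I). Rotating by the
    eigenvectors of K diagonalizes the covariance, so the GLS fit
    reduces to weighted least squares (the device used by fast LMM
    solvers; variance components are taken as given, not REML-fit)."""
    vals, vecs = np.linalg.eigh(K)
    w = 1.0 / (sigma_g2 * vals + sigma_e2)  # inverse rotated variances
    yr, xr = vecs.T @ y, vecs.T @ x
    beta = np.sum(w * xr * yr) / np.sum(w * xr * xr)
    se = np.sqrt(1.0 / np.sum(w * xr * xr))
    return beta, beta / se                  # effect size and z-like statistic

rng = np.random.default_rng(1)
n = 100
A = rng.normal(size=(n, n))
K = A @ A.T / n                             # toy positive-definite "kinship"
x = rng.binomial(2, 0.3, size=n).astype(float)  # genotype dosages 0/1/2
g = rng.multivariate_normal(np.zeros(n), 0.5 * K)
y = 0.4 * x + g + rng.normal(scale=0.7, size=n)
beta, z = lmm_assoc_stat(y, x, K, sigma_g2=0.5, sigma_e2=0.49)
print(beta, z)
```

    The eigendecomposition costs O(n^3) once, after which each SNP test is O(n); the paper's CRF-based method targets the regime where even this becomes burdensome.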

  20. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models previously not

  1. A Parameter Subset Selection Algorithm for Mixed-Effects Models

    DOE PAGES

    Schmidt, Kathleen L.; Smith, Ralph C.

    2016-01-01

    Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.

  2. MRMAide: a mixed resolution modeling aide

    NASA Astrophysics Data System (ADS)

    Treshansky, Allyn; McGraw, Robert M.

    2002-07-01

    The Mixed Resolution Modeling Aide (MRMAide) technology is an effort to semi-automate the implementation of Mixed Resolution Modeling (MRM). MRMAide suggests ways of resolving differences in fidelity and resolution across diverse modeling paradigms. The goal of MRMAide is to provide a technology that will allow developers to incorporate model components into scenarios other than those for which they were designed. Currently, MRM is implemented by hand. This is a tedious, error-prone, and non-portable process. MRMAide, in contrast, will automatically suggest to a developer where and how to connect different components and/or simulations. MRMAide has three phases of operation: pre-processing, data abstraction, and validation. During pre-processing the components to be linked together are evaluated in order to identify appropriate mapping points. During data abstraction those mapping points are linked via data abstraction algorithms. During validation developers receive feedback regarding their newly created models relative to existing baselined models. The current work presents an overview of the various problems encountered during MRM and the various technologies utilized by MRMAide to overcome those problems.

  3. Modeling and Analysis of Mixed Synchronous/Asynchronous Systems

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan

    2012-01-01

    Practical safety-critical distributed systems must integrate safety critical and non-critical data in a common platform. Safety critical systems almost always consist of isochronous components that have synchronous or asynchronous interface with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted to capture mixed low- and high-criticality data, as well as real-time properties in a common Model of Computation (MoC). An abstract, but representative, test specimen system was created as the system to be modeled.

  4. A new unsteady mixing model to predict NO(x) production during rapid mixing in a dual-stage combustor

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1992-01-01

    An advanced gas turbine engine to power supersonic transport aircraft is currently under study. In addition to high combustion efficiency requirements, environmental concerns have placed stringent restrictions on the pollutant emissions from these engines. A combustor design with the potential for minimizing pollutants such as NO(x) emissions is undergoing experimental evaluation. A major technical issue in the design of this combustor is how to rapidly mix the hot, fuel-rich primary zone product with the secondary diluent air to obtain a fuel-lean mixture for combustion in the second stage. Numerical predictions using steady-state methods cannot account for the unsteady phenomena in the mixing region. Therefore, to evaluate the effect of unsteady mixing and combustion processes, a novel unsteady mixing model is demonstrated here. This model has been used to study multispecies mixing as well as propane-air and hydrogen-air jet nonpremixed flames, and has been used to predict NO(x) production in the mixing region. Comparison with available experimental data show good agreement, thereby providing validation of the mixing model. With this demonstration, this mixing model is ready to be implemented in conjunction with steady-state prediction methods and provide an improved engineering design analysis tool.

  5. Bayesian stable isotope mixing models

    EPA Science Inventory

    In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...
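
    In the exactly determined case (number of sources = number of tracers + 1), the deterministic skeleton of a mixing model reduces to a linear system; Bayesian SIMMs generalize this with priors, residual error, and under- or over-determined source sets. The source signatures below are hypothetical:

```python
import numpy as np

# Hypothetical end-member (source) signatures for two tracers
# (rows: d13C, d15N; columns: three food sources)
S = np.array([[-24.0, -18.0, -12.0],   # d13C per source
              [  4.0,   9.0,   6.0]])  # d15N per source
mix = np.array([-18.2, 6.4])           # observed mixture signature

# Two tracer mass balances plus the constraint sum(p) = 1
# give a 3x3 linear system in the source proportions p.
A = np.vstack([S, np.ones(3)])
b = np.append(mix, 1.0)
p = np.linalg.solve(A, b)
print(p)   # proportions, here approximately [0.338, 0.358, 0.304]
```

    A Bayesian SIMM would instead place a prior (e.g. Dirichlet) on p and account for variability in the source signatures, yielding full posterior distributions rather than a point solution.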

  6. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.

  7. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, T.; Nagata, K.

    2016-08-01

    We report on the numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). A priori test of the MVM, based on the direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of the mixing particles should be large for predicting a value of the molecular diffusion term positively correlated to the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important in the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM especially with the small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with the characteristic length of the mixing volume of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance in a rate close to the reference LES. The statistics in the LPS are very robust to the number of the particles used in the simulations and the computational grid size of the LES. Both in the turbulent core region and the intermittent region, the LPS predicts a scalar field well correlated to the LES.

  8. Functional Mixed Effects Model for Small Area Estimation.

    PubMed

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
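
    The B-spline ingredient can be sketched with a self-contained Cox-de Boor recursion (the knot vector and degree below are illustrative); the resulting basis matrix would multiply the varying coefficients in the mixed-model design:

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Evaluate all B-spline basis functions of the given degree at the
    points x via the Cox-de Boor recursion. `knots` must be a
    non-decreasing array including repeated boundary knots."""
    t = np.asarray(knots, dtype=float)
    n = len(t) - degree - 1            # number of basis functions
    # degree-0 basis: indicator of each knot span
    B = np.array([(t[i] <= x) & (x < t[i + 1]) for i in range(len(t) - 1)],
                 dtype=float).T
    for d in range(1, degree + 1):
        nb = len(t) - d - 1
        Bn = np.zeros((len(x), nb))
        for i in range(nb):
            denom1 = t[i + d] - t[i]
            denom2 = t[i + d + 1] - t[i + 1]
            if denom1 > 0:             # skip zero-width spans (repeated knots)
                Bn[:, i] += (x - t[i]) / denom1 * B[:, i]
            if denom2 > 0:
                Bn[:, i] += (t[i + d + 1] - x) / denom2 * B[:, i + 1]
        B = Bn
    return B[:, :n]

# Cubic basis on [0, 1) with repeated boundary knots
knots = np.r_[[0.0] * 4, [0.25, 0.5, 0.75], [1.0] * 4]
x = np.linspace(0.0, 0.99, 50)
B = bspline_basis(x, knots, degree=3)
print(B.shape, B.sum(axis=1))  # partition of unity: rows sum to 1
```

    In a varying-coefficient mixed model, each column of B becomes a fixed-effect regressor whose coefficient traces out the smooth function of the index variable.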

  9. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T., E-mail: watanabe.tomoaki@c.nagoya-u.jp; Nagata, K.

    We report on the numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of mixing particles should be large for predicting a value of the molecular diffusion term positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important for the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with the small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES–LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to that of the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and to the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.

  10. Postural Instability Caused by Extended Bed Rest Is Alleviated by Brief Daily Exposure to Low Magnitude Mechanical Signals

    PubMed Central

    Muir, Jesse; Judex, Stefan; Qin, Yi-Xian; Rubin, Clinton

    2011-01-01

    Loss of postural stability, as exacerbated by chronic bed rest, aging, neuromuscular injury or disease, results in a marked increase in the risk of falls, potentiating severe injury and even death. To investigate the capacity of low magnitude mechanical signals (LMMS) to retain postural stability under conditions conducive to its decline, twenty-nine healthy adult subjects underwent 90 days of 6-degree head-down-tilt bed rest. Treated subjects underwent a daily 10-minute regimen of 30 Hz LMMS at either 0.3 g (n=12) or 0.5 g (n=5). Control subjects (n=13) received no LMMS treatment. Postural stability, quantified by dispersions of the plantar-based center of pressure, deteriorated significantly from baseline in control subjects, with displacement and velocity at 60 days increasing 98.7% and 193% respectively, while the LMMS group increased only 26.7% and 6.4%, reflecting a 73% and 97% relative retention of stability as compared to control. Increasing the LMMS magnitude from 0.3 to 0.5 g had no significant influence on outcomes. LMMS failed to spare loss of muscle extension strength, but helped to retain flexion strength (e.g., 46.2% improved retention of baseline concentric flexion strength vs. untreated controls; p=0.01). These data suggest the potential of extremely small mechanical signals as a non-invasive means of preserving postural control under the challenge of chronic bed rest, and may ultimately represent a non-pharmacologic means of reducing the risk of debilitating falls in the elderly and infirm. PMID:21273076

  11. Mix Model Comparison of Low Feed-Through Implosions

    NASA Astrophysics Data System (ADS)

    Pino, Jesse; MacLaren, S.; Greenough, J.; Casey, D.; Dewald, E.; Dittrich, T.; Khan, S.; Ma, T.; Sacks, R.; Salmonson, J.; Smalyuk, V.; Tipton, R.; Kyrala, G.

    2016-10-01

    The CD Mix campaign previously demonstrated the use of nuclear diagnostics to study the mix of separated reactants in plastic capsule implosions at the NIF. Recently, the separated reactants technique has been applied to the Two Shock (TS) implosion platform, which is designed to minimize this feed-through, isolate local mix at the gas-ablator interface, and produce core yields in good agreement with 1D clean simulations. The effects of both inner surface roughness and convergence ratio have been probed. The TT, DT, and DD neutron signals respectively give information about core gas performance, gas-shell atomic mix, and heating of the shell. In this talk, we describe efforts to model these implosions using high-resolution 2D ARES simulations. Various methods of interfacial mix will be considered, including the Reynolds-Averaged Navier-Stokes (RANS) KL method as well as a multicomponent enhanced diffusivity model with species, thermal, and pressure gradient terms. We also give predictions for an upcoming campaign to investigate mid-Z mixing by adding a Ge dopant to the CD layer. LLNL-ABS-697251 This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  12. Linear Mixed Models: GUM and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

    In Annex H.5, the Guide to the Expression of Uncertainty in Measurement (GUM) [1] recognizes the necessity to analyze certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models, which we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in the data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means of gaining more insight into the measurement process. We also comment on computational issues, and to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in the calibration of accelerometers.
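
For a balanced design, the random effects ANOVA analysis of GUM Annex H.5 reduces to method-of-moments variance components: the within-group mean square estimates the repeatability variance, and the excess of the between-group mean square over it estimates the between-group (e.g., day-to-day) component. A minimal sketch (function and variable names are ours, not from the GUM):

```python
import statistics

def variance_components(groups):
    """Method-of-moments estimates for a balanced one-way random-effects model.

    groups: list of equally sized lists of repeated measurements
            (e.g. k days x n repeated calibrations per day).
    Returns (within-group variance MSW, between-group variance component)."""
    k = len(groups)     # number of groups
    n = len(groups[0])  # replicates per group
    group_means = [statistics.fmean(g) for g in groups]
    grand_mean = statistics.fmean(group_means)
    # mean square within: pooled variance of replicates about their group mean
    msw = sum(sum((x - m) ** 2 for x in g)
              for g, m in zip(groups, group_means)) / (k * (n - 1))
    # mean square between groups
    msb = n * sum((m - grand_mean) ** 2 for m in group_means) / (k - 1)
    # between-group variance component, clipped at zero when MSB < MSW
    sigma_b2 = max((msb - msw) / n, 0.0)
    return msw, sigma_b2
```

Both components would then enter the uncertainty budget; the more general mixed models discussed in the paper extend this to unbalanced designs and additional random effects.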

  13. Photoionized Mixing Layer Models of the Diffuse Ionized Gas

    NASA Astrophysics Data System (ADS)

    Binette, Luc; Flores-Fajardo, Nahiely; Raga, Alejandro C.; Drissen, Laurent; Morisset, Christophe

    2009-04-01

    It is generally believed that O stars, confined near the galactic midplane, are somehow able to photoionize a significant fraction of what is termed the "diffuse ionized gas" (DIG) of spiral galaxies, which can extend up to 1-2 kpc above the galactic midplane. The heating of the DIG remains poorly understood, however, as simple photoionization models reproduce neither the observed line-ratio correlations nor the DIG temperature well. We present turbulent mixing layer (TML) models in which warm photoionized condensations are immersed in a hot supersonic wind. Turbulent dissipation and mixing generate an intermediate region where the gas is accelerated, heated, and mixed. The emission spectrum of such layers is compared with the observations by Rand of the DIG in the edge-on spiral NGC 891. We generate two sequences of models that fit the line-ratio correlations between [S II]/Hα, [O I]/Hα, [N II]/[S II], and [O III]/Hβ reasonably well. In one sequence of models, the hot wind velocity increases, while in the other, the ionization parameter and layer opacity increase. Despite the success of the mixing layer models, the overall efficiency in reprocessing the stellar UV is much too low, much less than 1%, which compels us to reject the TML model in its present form.

  14. Surface wind mixing in the Regional Ocean Modeling System (ROMS)

    NASA Astrophysics Data System (ADS)

    Robertson, Robin; Hartlipp, Paul

    2017-12-01

    Mixing at the ocean surface is key for atmosphere-ocean interactions and the distribution of heat, energy, and gases in the upper ocean. Winds are the primary force for surface mixing. To properly simulate upper ocean dynamics and the flux of these quantities within the upper ocean, models must reproduce mixing in the upper ocean. To evaluate the performance of the Regional Ocean Modeling System (ROMS) in replicating surface mixing, the results of four different vertical mixing parameterizations were compared against observations, using the surface mixed layer depth, the temperature fields, and observed diffusivities for comparisons. The vertical mixing parameterizations investigated were the Mellor-Yamada 2.5-level turbulent closure (MY), Large-McWilliams-Doney KPP (LMD), Nakanishi-Niino (NN), and generic length scale (GLS) schemes. This was done for one temperate site in deep water in the Eastern Pacific and three shallow-water sites in the Baltic Sea. The model reproduced the surface mixed layer depth reasonably well for all sites; however, the temperature fields were reproduced well for the deep site, but not for the shallow Baltic Sea sites. In the Baltic Sea, the models overmixed the water column after a few days. Vertical temperature diffusivities were higher than those observed and did not show the temporal fluctuations present in the observations. The best performance was by NN and MY; however, MY became unstable in two of the shallow simulations with high winds. The performance of GLS was nearly as good as that of NN and MY. LMD had the poorest performance, as it generated temperature diffusivities that were too high and induced too much mixing. Further observational comparisons are needed to evaluate the effects of different stratification and wind conditions and the limitations of the vertical mixing parameterizations.
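
Mixed layer depth comparisons like these rest on a profile criterion. The abstract does not state which criterion was used; a common observational choice, a fixed temperature offset from the shallowest measurement, can be sketched as:

```python
def mixed_layer_depth(depths, temps, threshold=0.2):
    """Depth at which temperature first drops `threshold` degC below the
    shallowest measurement (a common observational MLD criterion).

    depths: increasing list of depths (m); temps: matching temperatures (degC)."""
    surface_t = temps[0]
    for i in range(1, len(depths)):
        if surface_t - temps[i] >= threshold:
            # linearly interpolate between the two bracketing samples
            z0, z1, t0, t1 = depths[i - 1], depths[i], temps[i - 1], temps[i]
            frac = (surface_t - threshold - t0) / (t1 - t0)
            return z0 + frac * (z1 - z0)
    return depths[-1]  # criterion never met: mixed to the deepest sample
```

Applying the same diagnostic to model output and to observed profiles makes the model-observation comparison consistent, whatever vertical mixing scheme produced the profile.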

  15. Decision-case mix model for analyzing variation in cesarean rates.

    PubMed

    Eldenburg, L; Waller, W S

    2001-01-01

    This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.

  16. Modeling Intrajunction Dispersion at a Well-Mixed Tidal River Junction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfram, Phillip J.; Fringer, Oliver B.; Monsen, Nancy E.

    In this paper, the relative importance of small-scale, intrajunction flow features such as shear layers, separation zones, and secondary flows on dispersion in a well-mixed tidal river junction is explored. A fully nonlinear, nonhydrostatic, and unstructured three-dimensional (3D) model is used to resolve supertidal dispersion via scalar transport at a well-mixed tidal river junction. Mass transport simulated in the junction is compared against predictions using a simple node-channel model to quantify the effects of small-scale, 3D intrajunction flow features on mixing and dispersion. The effects of three-dimensionality are demonstrated by quantifying the difference between two-dimensional (2D) and 3D model results. An intermediate 3D model that does not resolve the secondary circulation or the recirculating flow at the junction is also compared to the 3D model to quantify the relative sensitivity of mixing to intrajunction flow features. Resolution of complex flow features simulated by the full 3D model is not always necessary because mixing is primarily governed by bulk flow splitting due to the confluence–diffluence cycle. Finally, results in 3D are comparable to the 2D case for many flow pathways simulated, suggesting that 2D modeling may be reasonable for nonstratified and predominantly hydrostatic flows through relatively straight junctions, but not necessarily for the full junction network.

  17. Modeling Intrajunction Dispersion at a Well-Mixed Tidal River Junction

    DOE PAGES

    Wolfram, Phillip J.; Fringer, Oliver B.; Monsen, Nancy E.; ...

    2016-08-01

    In this paper, the relative importance of small-scale, intrajunction flow features such as shear layers, separation zones, and secondary flows on dispersion in a well-mixed tidal river junction is explored. A fully nonlinear, nonhydrostatic, and unstructured three-dimensional (3D) model is used to resolve supertidal dispersion via scalar transport at a well-mixed tidal river junction. Mass transport simulated in the junction is compared against predictions using a simple node-channel model to quantify the effects of small-scale, 3D intrajunction flow features on mixing and dispersion. The effects of three-dimensionality are demonstrated by quantifying the difference between two-dimensional (2D) and 3D model results. An intermediate 3D model that does not resolve the secondary circulation or the recirculating flow at the junction is also compared to the 3D model to quantify the relative sensitivity of mixing to intrajunction flow features. Resolution of complex flow features simulated by the full 3D model is not always necessary because mixing is primarily governed by bulk flow splitting due to the confluence–diffluence cycle. Finally, results in 3D are comparable to the 2D case for many flow pathways simulated, suggesting that 2D modeling may be reasonable for nonstratified and predominantly hydrostatic flows through relatively straight junctions, but not necessarily for the full junction network.

  18. Analysis and modeling of subgrid scalar mixing using numerical data

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Zhou, YE

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to the interaction between the supergrid and subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.

  19. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

    Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP), or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N=1-3), with the remainder of the solvent being modelled implicitly as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔGobs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to become increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way of modelling phosphate hydrolysis in solution.

  20. A flavor symmetry model for bilarge leptonic mixing and the lepton masses

    NASA Astrophysics Data System (ADS)

    Ohlsson, Tommy; Seidl, Gerhart

    2002-11-01

    We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimental hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data and the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and are consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 − θ13.

  1. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and the treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors, and the zero-inflated Poisson mixed model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
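
The zero-inflated Poisson likelihood underlying such models is easy to state: a zero count arises either from the inflation component (probability π) or from the Poisson component. A minimal sketch of the fixed-effects ZIP log-likelihood, without the random effects the paper adds:

```python
import math

def zip_logpmf(k, lam, pi):
    """Log-probability of count k under a zero-inflated Poisson:
    P(0) = pi + (1-pi)*exp(-lam);  P(k>0) = (1-pi)*Poisson(k; lam).
    Requires 0 <= pi < 1 and lam > 0."""
    if k == 0:
        return math.log(pi + (1.0 - pi) * math.exp(-lam))
    return math.log(1.0 - pi) - lam + k * math.log(lam) - math.lgamma(k + 1)

def zip_loglik(counts, lam, pi):
    """Log-likelihood of an i.i.d. sample of counts."""
    return sum(zip_logpmf(k, lam, pi) for k in counts)
```

In the mixed model version, a subject-specific random effect shifts log(lam) (and possibly the inflation probability), inducing the serial dependence and extra overdispersion the abstract mentions.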

  2. Ill-posedness in modeling mixed sediment river morphodynamics

    NASA Astrophysics Data System (ADS)

    Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid

    2018-04-01

    In this paper we analyze the ill-posedness of the Hirano active layer model used in mixed sediment river morphodynamics. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these we show that the ill-posed domain is larger than was found in previous analyses, comprising not only cases of bed degradation into a substrate finer than the active layer but also aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally, we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment, for which we show that ill-posedness occurs in a wider range of conditions than for the active layer model.

  3. Characteristics of the mixing volume model with the interactions among spatially distributed particles for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2016-11-01

    The mixing volume model (MVM), which is a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of the molecular diffusion under various conditions. However, the predicted value of the molecular diffusion term is positively correlated with the exact value in the DNS only when the number of mixing particles is larger than two. Furthermore, the MVM is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) in the Japan Agency for Marine-Earth Science and Technology.
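
The flavor of such particle mixing models can be illustrated with an IEM-style relaxation step, in which every particle scalar decays toward the local mean. This is a generic sketch, not the MVM itself: the MVM restricts the interaction to particles inside a finite mixing volume and derives the timescale tau from the subgrid scalar variance equation.

```python
import math

def mix_particles(phis, dt, tau):
    """One mixing step: relax each particle scalar toward the ensemble mean
    at rate 1/tau (interaction by exchange with the mean). This is the exact
    integral of d(phi_i)/dt = -(phi_i - <phi>)/tau over a step dt."""
    mean = sum(phis) / len(phis)
    factor = math.exp(-dt / tau)
    return [mean + (p - mean) * factor for p in phis]
```

Note the two properties any such model must have: the mean scalar is conserved exactly, and the scalar variance decays monotonically at a rate set by tau.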

  4. An S4 model inspired from self-complementary neutrino mixing

    NASA Astrophysics Data System (ADS)

    Zhang, Xinyi

    2018-03-01

    We build an S4 model for neutrino masses and mixings based on the self-complementary (SC) neutrino mixing pattern. The SC mixing is constructed from the self-complementarity relation plus δCP = −π/2. We elaborately construct the model at a percent level of accuracy to reproduce the structure given by the SC mixing. After performing a numerical study on the model's parameter space, we find that in the case of normal ordering, the model can give predictions for the observables that are compatible with their 3σ ranges, and gives predictions for the not-yet-observed quantities such as the lightest neutrino mass m1 ∈ [0.003, 0.010] eV and the Dirac CP violating phase δCP ∈ [256.72°, 283.33°].

  5. Mixed Model Association with Family-Biased Case-Control Ascertainment.

    PubMed

    Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L

    2017-01-05

    Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ2 = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and the case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ2 = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples.
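
Of the baselines mentioned, the Armitage trend test is simple enough to sketch: the statistic is χ2 = N·r², with r the Pearson correlation between genotype dosage and case status. This is a sketch of the standard ATT, not the LT-Fam statistic:

```python
def armitage_trend_chi2(genotypes, phenotypes):
    """Armitage trend test statistic: chi2 = N * r^2, where r is the Pearson
    correlation between genotype dosage (0/1/2) and case status (0/1).
    Assumes neither vector is constant."""
    n = len(genotypes)
    mg = sum(genotypes) / n
    mp = sum(phenotypes) / n
    cov = sum((g - mg) * (p - mp) for g, p in zip(genotypes, phenotypes))
    vg = sum((g - mg) ** 2 for g in genotypes)
    vp = sum((p - mp) ** 2 for p in phenotypes)
    return n * cov * cov / (vg * vp)
```

Under random ascertainment this statistic is asymptotically χ2 with one degree of freedom for null SNPs; the abstract's point is that family-biased ascertainment breaks that calibration.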

  6. Twice random, once mixed: applying mixed models to simultaneously analyze random effects of language and participants.

    PubMed

    Janssen, Dirk P

    2012-03-01

    Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F1 and F2) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
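
The double aggregation behind the F1/F2 procedure (by-participant and by-item condition means, each fed into a separate ANOVA) can be sketched as follows; mixed models avoid it by modelling both random effects simultaneously. Column layout and names here are illustrative:

```python
from collections import defaultdict

def condition_means(data, unit_index):
    """Aggregate (participant, item, condition, value) rows to per-unit
    condition means: unit_index=0 yields the by-participant (F1) table,
    unit_index=1 the by-item (F2) table."""
    sums = defaultdict(lambda: [0.0, 0])
    for row in data:
        key = (row[unit_index], row[2])  # (unit, condition)
        sums[key][0] += row[3]
        sums[key][1] += 1
    return {key: total / count for key, (total, count) in sums.items()}
```

A crossed-random-effects mixed model replaces both tables with a single analysis of the raw rows, which is exactly what makes it statistically preferable to the F1/F2 compromise.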

  7. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  8. Modeling of Low Feed-Through CD Mix Implosions

    NASA Astrophysics Data System (ADS)

    Pino, Jesse; MacLaren, Steven; Greenough, Jeff; Casey, Daniel; Dittrich, Tom; Kahn, Shahab; Kyrala, George; Ma, Tammy; Salmonson, Jay; Smalyuk, Vladimir; Tipton, Robert

    2015-11-01

    The CD Mix campaign previously demonstrated the use of nuclear diagnostics to study the mix of separated reactants in plastic capsule implosions at the National Ignition Facility. However, the previous implosions suffered from large instability growth seeded from perturbations on the outside of the capsule. Recently, the separated reactants technique has been applied to two platforms designed to minimize this feed-through and isolate local mix at the gas-ablator interface: the Two Shock (TS) and Adiabat-Shaped (AS) Platforms. Additionally, the background contamination of Deuterium in the gas has been greatly reduced, allowing for simultaneous observation of TT, DT, and DD neutrons, which respectively give information about core gas performance, gas-shell atomic mix, and heating of the shell. In this talk, we describe efforts to model these implosions using high-resolution 2D ARES simulations with both a Reynolds-Averaged Navier Stokes method and an enhanced diffusivity model. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-674867.

  9. Uncertainty in mixing models: a blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Delsman, J. R.; Oude Essink, G. H. P.

    2012-04-01

    Despite the abundance of tracer-based studies in catchment hydrology over the past decades, relatively few have addressed the associated uncertainty in much detail. This uncertainty stems from analytical error, from spatial and temporal variance in end-member composition, and from not incorporating all relevant processes in the necessarily simplistic mixing models. Instead of applying standard EMMA methodology, we used end-member mixing analysis within a Monte Carlo framework to quantify the uncertainty surrounding our analysis. Borrowing from the well-known GLUE methodology, we discarded mixing models that could not satisfactorily explain sample concentrations and analyzed the posterior parameter set. This use of environmental tracers aided in disentangling hydrological pathways in a Dutch polder catchment. This 10 km2 agricultural catchment is situated in the coastal region of the Netherlands. Brackish groundwater seepage, originating from Holocene marine transgressions, adversely affects water quality in the catchment. Current water management practice is aimed at improving water quality by flushing the catchment with fresh water from the river Rhine. Climate change is projected to decrease future fresh water availability, signifying the need for a more sustainable water management practice and a better understanding of the functioning of the catchment. The end-member mixing analysis increased our understanding of the hydrology of the studied catchment. The use of a GLUE-like framework not only quantified the uncertainty associated with the analysis; analysis of the posterior parameter set also identified the existence of catchment processes otherwise overlooked.
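
A GLUE-like end-member mixing analysis of this kind amounts to Monte Carlo sampling of mixing fractions with a behavioural acceptance rule. A toy sketch under assumed tracer data (not the authors' implementation; the relative-tolerance rule and all names are illustrative):

```python
import random

def glue_emma(sample, end_members, n_draws=20000, tol=0.05, seed=1):
    """Monte Carlo end-member mixing: draw random mixing fractions, keep
    ('behavioural') those that reproduce every observed tracer concentration
    of `sample` within a relative tolerance, and return the accepted set.

    sample:      dict tracer -> concentration in the stream sample
    end_members: list of dicts tracer -> concentration (one per end-member)"""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        # random fractions on the simplex (normalized uniform draws)
        w = [rng.random() for _ in end_members]
        s = sum(w)
        w = [x / s for x in w]
        ok = all(
            abs(sum(wi * em[t] for wi, em in zip(w, end_members)) - sample[t])
            <= tol * max(abs(sample[t]), 1.0)
            for t in sample
        )
        if ok:
            accepted.append(w)
    return accepted
```

The spread of the accepted fractions is the uncertainty estimate; an empty or oddly shaped posterior set is the hint, noted in the abstract, that the mixing model is missing a process or an end-member.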

  10. GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.

    PubMed

    Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N

    2018-01-01

    Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
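
A heavily simplified sketch of the model-based EA idea, fitting a factorized model to elite solutions and sampling offspring from it, is given below. GAMBIT itself learns linkage (dependence) models over mixed variables; this toy version uses independent marginals only, and every name in it is ours:

```python
import random
import statistics

def model_based_step(population, fitness, rng, elite_frac=0.5):
    """One generation of a toy model-based EA for mixed-integer solutions.
    Each solution is (d, x) with d an integer category and x a float.
    Fit a simple factorized model to the elites (empirical frequencies for
    the discrete gene, mean/stdev for the continuous gene), then sample."""
    n_elite = max(2, int(elite_frac * len(population)))
    elites = sorted(population, key=fitness)[:n_elite]  # minimization
    cats = [d for d, _ in elites]
    xs = [x for _, x in elites]
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs) or 1e-9  # avoid a degenerate Gaussian
    return [(rng.choice(cats), rng.gauss(mu, sd)) for _ in population]
```

Iterating this step concentrates the model on good regions; the point of GAMBIT's linkage learning is that sampling the discrete and continuous marginals independently, as done here, fails when the variables interact.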

  11. Mixed-order phase transition in a one-dimensional model.

    PubMed

    Bar, Amir; Mukamel, David

    2014-01-10

    We introduce and analyze an exactly soluble one-dimensional Ising model with long-range interactions that exhibits a mixed-order transition, namely a phase transition in which the order parameter is discontinuous, as in first-order transitions, while the correlation length diverges, as in second-order transitions. Such transitions are known to appear in diverse, seemingly unrelated classes of models. The model we present serves as a link between two classes of models that exhibit a mixed-order transition in one dimension, namely, spin models with a coupling constant that decays as the inverse distance squared and models of depinning transitions, thus making a step towards a unifying framework.

  12. How ocean lateral mixing changes Southern Ocean variability in coupled climate models

    NASA Astrophysics Data System (ADS)

    Pradal, M. A. S.; Gnanadesikan, A.; Thomas, J. L.

    2016-02-01

    The lateral mixing of tracers represents a major uncertainty in the formulation of coupled climate models. The mixing of tracers along density surfaces in the interior and horizontally within the mixed layer is often parameterized using a mixing coefficient A_Redi. The models used in the Coupled Model Intercomparison Project 5 exhibit more than an order of magnitude range in the values of this coefficient used within the Southern Ocean. The impacts of such uncertainty on Southern Ocean variability have remained unclear, even as recent work has shown that this variability differs between models. In this poster, we change the lateral mixing coefficient within GFDL ESM2Mc, a coarse-resolution Earth System model that nonetheless has a reasonable circulation within the Southern Ocean. As the coefficient varies from 400 to 2400 m2/s, the amplitude of the variability changes significantly. The low-mixing case shows strong decadal variability, with an annual mean RMS temperature variability exceeding 1 °C in the Circumpolar Current. The highest-mixing case shows a very similar spatial pattern of variability, but with amplitudes only about 60% as large. The suppression of variability is larger in the Atlantic sector of the Southern Ocean relative to the Pacific sector. We examine the salinity budgets of convective regions, paying particular attention to the extent to which high mixing prevents the buildup of low-salinity waters that are capable of shutting off deep convection entirely.

  13. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Treesearch

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  14. The Apollo 16 regolith - A petrographically-constrained chemical mixing model

    NASA Technical Reports Server (NTRS)

    Kempa, M. J.; Papike, J. J.; White, C.

    1980-01-01

    A mixing model for Apollo 16 regolith samples has been developed, which differs from other A-16 mixing models in that it is both petrographically constrained and statistically sound. The model was developed using three components representative of rock types present at the A-16 site, plus a representative mare basalt. A linear least-squares fitting program employing the chi-squared test and sum of components was used to determine goodness of fit. Results for surface soils indicate that either there are no significant differences between Cayley and Descartes material at the A-16 site or, if differences do exist, they have been obscured by meteoritic reworking and mixing of the lithologies.
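A weighted least-squares mixing calculation of the kind described (fit component proportions, then check chi-squared and the sum of components) might look like the following sketch; the oxide compositions, sample values, and analytical errors are invented for illustration.

```python
import numpy as np

# Hypothetical oxide compositions (wt%); columns are three highland
# components plus a mare basalt, rows are oxides. Values are illustrative.
components = np.array([
    [45.0, 44.5, 46.0, 41.0],   # SiO2
    [27.0, 17.0,  6.0, 10.0],   # Al2O3
    [ 5.0, 10.0, 20.0, 19.0],   # FeO
    [ 6.0, 12.0, 11.0,  8.0],   # MgO
])
soil = np.array([44.9, 22.0, 9.0, 8.5])   # regolith sample
sigma = np.array([0.5, 0.4, 0.3, 0.3])    # analytical 1-sigma errors

# Weighted linear least squares for the mixing proportions.
x, *_ = np.linalg.lstsq(components / sigma[:, None], soil / sigma, rcond=None)

# Goodness of fit: chi-squared, plus the sum of components, which should
# be close to 1 for a physically meaningful mixture.
chi2 = float((((components @ x - soil) / sigma) ** 2).sum())
print(x.round(3), round(chi2, 2), round(float(x.sum()), 3))
```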

  15. A continuous mixing model for pdf simulations and its applications to combusting shear flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Chen, J.-Y.

    1991-01-01

    The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to that of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.

  16. An improved NSGA-II algorithm for mixed model assembly line balancing

    NASA Astrophysics Data System (ADS)

    Wu, Yongming; Xu, Yanxia; Luo, Lifei; Zhang, Han; Zhao, Xudong

    2018-05-01

    Aiming at the problems of assembly line balancing and path optimization for material vehicles in a mixed model manufacturing system, a multi-objective mixed model assembly line (MMAL) model, based on optimization objectives, influencing factors, and constraints, is established. According to the specific situation, an improved NSGA-II algorithm based on an ecological evolution strategy is designed. An environment self-detecting operator, which detects whether the environment has changed, is adopted in the algorithm. Finally, the effectiveness of the proposed model and algorithm is verified by examples from a concrete mixing system.

  17. A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.

    2016-10-01

    Probability density function (PDF) methods are a promising alternative to predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.

  18. Model free simulations of a high speed reacting mixing layer

    NASA Technical Reports Server (NTRS)

    Steinberger, Craig J.

    1992-01-01

    The effects of compressibility, chemical reaction exothermicity and non-equilibrium chemical modeling in a combusting plane mixing layer were investigated by means of two-dimensional model free numerical simulations. It was shown that increased compressibility generally had a stabilizing effect, resulting in reduced mixing and chemical reaction conversion rate. The appearance of 'eddy shocklets' in the flow was observed at high convective Mach numbers. Reaction exothermicity was found to enhance mixing at the initial stages of the layer's growth, but had a stabilizing effect at later times. Calculations were performed for a constant-rate chemical kinetics model and an Arrhenius type kinetics prototype. The Arrhenius model was found to cause a greater temperature increase due to reaction than the constant kinetics model. This had the same stabilizing effect as increasing the exothermicity of the reaction. Localized flame quenching was also observed when the Zeldovich number was relatively large.

  19. The Mixed Effects Trend Vector Model

    ERIC Educational Resources Information Center

    de Rooij, Mark; Schouteden, Martijn

    2012-01-01

    Maximum likelihood estimation of mixed effect baseline category logit models for multinomial longitudinal data can be prohibitive due to the integral dimension of the random effects distribution. We propose to use multidimensional unfolding methodology to reduce the dimensionality of the problem. As a by-product, readily interpretable graphical…

  20. Logit-normal mixed model for Indian monsoon precipitation

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-09-01

    Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found that GLMM estimation methods were sensitive to tuning parameters and assumptions, and therefore recommend the use of multiple methods in applications. This work provides a novel use of GLMMs and promotes their addition to the gamut of tools for analysis in studying climate phenomena.
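The data-generating process behind a logit-normal mixed model can be illustrated with a small simulation: normally distributed random intercepts on the logit scale drive binary outcomes. The station count, fixed intercept, and random-effect standard deviation are arbitrary assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Logit-normal mixed model sketch: station-level random intercepts u_j
# on the logit scale, then Bernoulli rainfall-exceedance outcomes.
n_stations, n_days = 20, 400
u = rng.normal(0.0, 0.8, size=n_stations)   # random effects (SD assumed)
beta0 = -1.0                                # fixed intercept (assumed)
eta = beta0 + u[:, None]                    # linear predictor
p = 1.0 / (1.0 + np.exp(-eta))              # inverse logit
y = rng.random((n_stations, n_days)) < p    # exceedance indicators

# Empirical station rates should track the station-level probabilities.
r = np.corrcoef(y.mean(axis=1), p.ravel())[0, 1]
print(r > 0.9)  # -> True
```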

  1. CONVERTING ISOTOPE RATIOS TO DIET COMPOSITION - THE USE OF MIXING MODELS

    EPA Science Inventory

    Investigations of wildlife foraging ecology with stable isotope analysis are increasing. Converting isotope values to proportions of different foods in a consumer's diet requires the use of mixing models. Simple mixing models based on mass balance equations have been used for d...
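The simple mass-balance mixing model mentioned here reduces, in the two-source case, to a one-line calculation; the δ13C values below are hypothetical.

```python
# Two-source, one-isotope mass-balance mixing model: the fraction f of
# source A in the diet follows from delta_mix = f*delta_A + (1-f)*delta_B.
def source_fraction(delta_mix, delta_a, delta_b):
    return (delta_mix - delta_b) / (delta_a - delta_b)

# Illustrative d13C values (per mil); the numbers are hypothetical.
f = source_fraction(delta_mix=-22.0, delta_a=-12.0, delta_b=-27.0)
print(round(f, 2))  # -> 0.33
```

With more sources than isotope systems the problem becomes underdetermined, which is why multi-source applications need more elaborate mixing models.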

  2. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures.

    PubMed

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete) as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (d_IG+ = 0.77), which remained consistent regardless of the structure of the affect model and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.
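A random-effects pooling step of the kind used in such a meta-analysis can be sketched with the DerSimonian-Laird estimator (one common choice, not necessarily the authors' exact procedure); the study effect sizes and variances below are invented.

```python
import numpy as np

def dersimonian_laird(d, var):
    """Random-effects pooled effect size via DerSimonian-Laird."""
    w = 1.0 / var                       # fixed-effect weights
    d_fe = (w * d).sum() / w.sum()      # fixed-effect pooled estimate
    q = (w * (d - d_fe) ** 2).sum()     # Cochran's Q heterogeneity statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(d) - 1)) / c)   # between-study variance
    w_re = 1.0 / (var + tau2)           # random-effects weights
    return (w_re * d).sum() / w_re.sum()

d = np.array([0.9, 0.6, 0.8, 0.5])      # hypothetical study effect sizes
v = np.array([0.04, 0.09, 0.05, 0.08])  # their sampling variances
pooled = dersimonian_laird(d, v)
print(round(pooled, 2))
```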

  3. Modeling of mixing in 96-well microplates observed with fluorescence indicators.

    PubMed

    Weiss, Svenja; John, Gernot T; Klimant, Ingo; Heinzle, Elmar

    2002-01-01

    Mixing in 96-well microplates was studied using soluble pH indicators and a fluorescence pH sensor. Small amounts of alkali were added with the aid of a multichannel pipet, a piston pump, and a piezoelectric actuator. Mixing patterns were observed visually using a video camera. Addition of drops of about 1 nL each with the piezoelectric actuator resulted in umbrella and double-disk-like shapes. Convective mixing was mainly observed in the upper part of the well, whereas the lower part was only mixed quickly when using the multichannel pipet and the piston pump with an addition volume of 5 microL or larger. Estimated mixing times were between a few seconds and several minutes. Mixing by liquid dispensing was much more effective than by shaking. A mixing model consisting of 21 elements could describe the mixing dynamics observed by the dissolved fluorescence dye and by the optical immobilized pH sensor. This model can be applied for designing pH control in microplates or for design of kinetic experiments with liquid addition.

  4. An R2 statistic for fixed effects in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R2 statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R2 statistic for the linear mixed model by using only a single model. The proposed R2 statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R2 statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R2 statistic leads immediately to a natural definition of a partial R2 statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R2, a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
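The 1-1 mapping between the F statistic and an R2 of this kind can be written directly, assuming it takes the familiar form R2 = (q/nu)F / (1 + (q/nu)F) with q numerator and nu denominator degrees of freedom; the F value and degrees of freedom below are arbitrary.

```python
# R2 as a monotone 1-1 function of the F statistic for testing all
# fixed effects (q numerator df, nu denominator df); equivalently
# R2 = q*F / (q*F + nu). The specific form is an assumption here.
def r2_from_f(f_stat, q, nu):
    ratio = q * f_stat / nu
    return ratio / (1.0 + ratio)

print(round(r2_from_f(f_stat=4.0, q=3, nu=60), 3))  # -> 0.167
```

The mapping makes explicit why a tiny p-value (large F with large nu) can coexist with a small R2: the association strength, not just its significance, enters the statistic.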

  5. An a priori DNS study of the shadow-position mixing model

    DOE PAGES

    Zhao, Xin -Yu; Bhagatwala, Ankit; Chen, Jacqueline H.; ...

    2016-01-15

    The modeling of mixing by molecular diffusion is a central aspect of transported probability density function (tPDF) methods. In this paper, the newly-proposed shadow position mixing model (SPMM) is examined, using a DNS database for a temporally evolving di-methyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that the composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates that are derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found for the prediction of the locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model. A higher-Reynolds-number test case is anticipated to be more appropriate to evaluate the mixing models, and stand-alone transported PDF simulations are required to more fully enforce
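The IECM baseline mentioned above is a conditioned variant of the classic IEM model, in which each notional particle's scalar relaxes toward a mean. A stripped-down, unconditional IEM sketch (constants, time step, and particle count all arbitrary) illustrates the mechanics: the mean is conserved while scalar variance decays at the mixing rate.

```python
import numpy as np

rng = np.random.default_rng(3)

# IEM-style relaxation toward the mean: each notional particle's scalar
# obeys d(phi)/dt = -0.5 * c_phi * omega * (phi - <phi>).
phi = rng.normal(0.0, 1.0, size=5000)   # initial scalar samples
c_phi, omega, dt = 2.0, 1.0, 0.01       # assumed model constants

mu0, var0 = phi.mean(), phi.var()
for _ in range(200):                    # explicit Euler integration
    phi -= 0.5 * c_phi * omega * (phi - phi.mean()) * dt

# Mean conserved; variance decays by (1 - 0.5*c_phi*omega*dt)^(2*steps).
print(round(float(phi.var() / var0), 3))  # -> 0.018
```

IECM replaces the global mean here with a mean conditioned on velocity (and SPMM conditions on shadow position), which is what restores localness in composition space.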

  6. Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhien

    Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in overall cloud feedback in GCMs. Thus improving mixed-phase cloud parameterizations in climate models is critical to reducing climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profiles for the liquid phase, and IWC, Dge profiles and ice concentration for the ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides the necessary information to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Owing to the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of liquid mass partitions differ significantly, because of much higher ice concentrations in convective mixed-phase clouds. 6) Systematic

  7. The use of Argo for validation and tuning of mixed layer models

    NASA Astrophysics Data System (ADS)

    Acreman, D. M.; Jeffery, C. D.

    We present results from validation and tuning of 1-D ocean mixed layer models using data from Argo floats and data from Ocean Weather Station Papa (145°W, 50°N). Model tests at Ocean Weather Station Papa showed that a bulk model could perform well provided it was tuned correctly. The Large et al. [Large, W.G., McWilliams, J.C., Doney, S.C., 1994. Oceanic vertical mixing: a review and a model with a nonlocal boundary layer parameterisation. Rev. Geophys. 32 (November), 363-403] K-profile parameterisation (KPP) model also gave a good representation of mixed layer depth provided the vertical resolution was sufficiently high. Model tests using data from a single Argo float indicated a tendency for the KPP model to deepen insufficiently over an annual cycle, whereas the tuned bulk model and general ocean turbulence model (GOTM) gave a better representation of mixed layer depth. The bulk model was then tuned using data from a sample of Argo floats and a set of optimum parameters was found; these optimum parameters were consistent with the tuning at OWS Papa.

  8. Modeling of Mixing Behavior in a Combined Blowing Steelmaking Converter with a Filter-Based Euler-Lagrange Model

    NASA Astrophysics Data System (ADS)

    Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu

    2018-05-01

    A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while the Lagrange-based discrete phase model, which embeds the local volume change of rising bubbles, is used for the bottom blowing. A filter-based turbulence method based on the local meshing resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the inherent reasons for the mixing result are clarified in terms of the characteristics of bottom-blowing plumes, the interaction between plumes and top-blowing jets, and the change of bath flow structure.

  9. Modeling condensation with a noncondensable gas for mixed convection flow

    NASA Astrophysics Data System (ADS)

    Liao, Yehong

    2007-05-01

    This research theoretically developed a novel mixed convection model for condensation with a noncondensable gas. The model developed herein comprises three components: a convection regime map; a mixed convection correlation; and a generalized diffusion layer model. These components were developed in a way to be consistent with the three-level methodology in MELCOR. The overall mixed convection model was implemented into MELCOR and satisfactorily validated with data covering a wide variety of test conditions. In the development of the convection regime map, two analyses with approximations of the local similarity method were performed to solve the multi-component two-phase boundary layer equations. The first analysis studied the effects of the bulk velocity on a basic natural convection condensation process and set up conditions to distinguish natural convection from mixed convection. It was found that the superimposed velocity increases condensation heat transfer by sweeping away the noncondensable gas accumulated at the condensation boundary. The second analysis studied the effects of the buoyancy force on a basic forced convection condensation process and set up conditions to distinguish forced convection from mixed convection. It was found that the superimposed buoyancy force increases condensation heat transfer by thinning the liquid film and creating a steeper noncondensable gas concentration profile near the condensation interface. In the development of the mixed convection correlation accounting for suction effects, numerical data were obtained from boundary layer analysis for the three convection regimes and used to fit a curve for the Nusselt number of the mixed convection regime as a function of the Nusselt numbers of the natural and forced convection regimes.
In the development of the generalized diffusion layer model, the driving potential for mass transfer was expressed as the temperature difference between the bulk and the liquid-gas interface

  10. Using Stochastic Approximation Techniques to Efficiently Construct Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran

    2018-06-22

    Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling and therefore avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude compared with previous approaches as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, for example requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
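The percentile-CI construction from parametric-bootstrap replicates can be caricatured as follows. The point estimate and the Gaussian stand-in for the estimator's sampling distribution are assumptions for illustration only; FIESTA itself resamples from the fitted LMM rather than assuming a distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def percentile_ci(estimates, alpha=0.05):
    """Percentile confidence interval from bootstrap replicates."""
    lo, hi = np.quantile(estimates, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# Stand-in for the bootstrap sampler: draw heritability estimates from an
# assumed sampling distribution, clipped to the bounded space [0, 1].
h2_hat = 0.4
boot = np.clip(rng.normal(h2_hat, 0.08, size=10_000), 0.0, 1.0)
lo, hi = percentile_ci(boot)
print(lo < h2_hat < hi)  # -> True
```

Because the interval comes from quantiles of the replicates, it respects the [0, 1] boundary automatically, unlike a symmetric SE-based interval.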

  11. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.

    1994-01-01

    A methodology for simulation of molecular mixing, and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and results compared to experimental data. A gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layer present in the facility given basic assumptions about turbulence properties.

  12. Real longitudinal data analysis for real people: building a good enough mixed model.

    PubMed

    Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E

    2010-02-20

    Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice to build mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help to conquer the complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models. The discussion also highlights the need for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
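The centering, scaling, and full-rank coding advice can be sketched directly; the variable names and data below are invented for illustration.

```python
import numpy as np

def center_scale(x):
    """Center and scale a predictor to mean 0, SD 1."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def full_rank_dummies(levels):
    """Full-rank (reference-cell) coding: k levels -> k-1 columns."""
    cats = sorted(set(levels))
    return np.array([[1.0 if v == c else 0.0 for c in cats[1:]]
                     for v in levels])

age = center_scale([11, 12, 13, 14, 15])                 # continuous predictor
group = full_rank_dummies(["ctrl", "int", "ctrl", "int", "ctrl"])
print(age.round(3), group.shape)
```

Rescaled predictors keep the fixed-effect design matrix well conditioned, which is what improves convergence and numerical accuracy when the covariance structure is estimated jointly.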

  13. Development of a Medicaid Behavioral Health Case-Mix Model

    ERIC Educational Resources Information Center

    Robst, John

    2009-01-01

    Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…

  14. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    PubMed

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

    Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. 
We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies

  15. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

    By using the branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structure. Then, the correlation structures including compound symmetry structure (CS), first-order autoregressive structure [AR(1)], and first-order autoregressive and moving average structure [ARMA(1,1)] were added to the optimal branch size mixed-effect model. The AR(1) improved the fitting precision of the branch diameter and length mixed-effect models significantly, but none of the three structures improved the precision of the branch angle mixed-effect model. In order to describe heteroscedasticity when building the mixed-effect model, the CF1 and CF2 functions were added to the branch mixed-effect model. The CF1 function improved the fitting effect of the branch angle mixed model significantly, whereas the CF2 function improved the fitting effect of the branch diameter and length mixed models significantly. Model validation confirmed that the mixed-effect model could improve the precision of prediction, as compared to the traditional regression model, for the branch size prediction of Pinus koraiensis plantation.

  16. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  17. Software engineering the mixed model for genome-wide association studies on large samples.

    PubMed

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.

  18. Analyzing Mixed-Dyadic Data Using Structural Equation Models

    ERIC Educational Resources Information Center

    Peugh, James L.; DiLillo, David; Panuzio, Jillian

    2013-01-01

    Mixed-dyadic data, collected from distinguishable (nonexchangeable) or indistinguishable (exchangeable) dyads, require statistical analysis techniques that model the variation within dyads and between dyads appropriately. The purpose of this article is to provide a tutorial for performing structural equation modeling analyses of cross-sectional…

  19. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.

    PubMed

    Lages, Martin; Scheel, Anne

    2016-01-01

We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions differ from choice predictions yet reflect second-order perspective taking.

  20. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking

    PubMed Central

    Lages, Martin; Scheel, Anne

    2016-01-01

We investigated the proposition of a two-systems Theory of Mind in adults’ belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions differ from choice predictions yet reflect second-order perspective taking. PMID:27853440

  1. Using Bayesian Stable Isotope Mixing Models to Enhance Marine Ecosystem Models

    EPA Science Inventory

    The use of stable isotopes in food web studies has proven to be a valuable tool for ecologists. We investigated the use of Bayesian stable isotope mixing models as constraints for an ecosystem model of a temperate seagrass system on the Atlantic coast of France. δ13C and δ15N i...
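
The deterministic algebra underlying such mixing models can be sketched as a mass-balance linear system: with three sources and two tracers, the source proportions are exactly identified. The signatures below are hypothetical, and a Bayesian mixing model would add priors and propagate uncertainty rather than solve the system exactly.

```python
import numpy as np

# Hypothetical source signatures (per mil): rows are [d13C, d15N] for
# three food sources; values are illustrative, not from the EPA study.
sources = np.array([
    [-20.0,  6.0],   # source A
    [-26.0, 10.0],   # source B
    [-14.0,  3.0],   # source C
])
mixture = np.array([-20.0, 6.3])  # consumer signature

# Mass balance: f_A + f_B + f_C = 1, and the mixture signature is the
# proportion-weighted average of the source signatures.
A = np.vstack([sources.T, np.ones(3)])   # 3 equations x 3 unknowns
b = np.append(mixture, 1.0)
fractions = np.linalg.solve(A, b)
print(fractions)
```

With more sources than tracers the system becomes underdetermined, which is the main motivation for the Bayesian treatment mentioned in the abstract.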

  2. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant on chromosome 3 that is associated with blood pressure, using simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that, among the methods, the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
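
The GRAMMAR-style decorrelation idea can be sketched as follows: with known (here, assumed) variance components and kinship, whitening the phenotype and design matrix by the inverse Cholesky factor of the phenotypic covariance turns generalized least squares into ordinary least squares on the transformed data. The kinship structure and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical kinship: 40 families of 5 siblings, kinship 0.5 within
# family and 1.0 on the diagonal; purely illustrative.
K = np.kron(np.eye(40), np.full((5, 5), 0.5)) + 0.5 * np.eye(n)

# Assume the variance components are known: V = sg2*K + se2*I.
sg2, se2 = 0.6, 0.4
V = sg2 * K + se2 * np.eye(n)

# Simulate a phenotype with one causal SNP plus polygenic covariance V.
snp = rng.binomial(2, 0.3, n).astype(float)
L = np.linalg.cholesky(V)
y = 1.5 + 0.4 * snp + L @ rng.normal(size=n)

# GRAMMAR-style idea: whiten y and the design by L^{-1}, then apply
# ordinary least squares to the decorrelated data.
Linv = np.linalg.inv(L)
X = np.column_stack([np.ones(n), snp])
Xw, yw = Linv @ X, Linv @ y
beta = np.linalg.lstsq(Xw, yw, rcond=None)[0]

# The whitened OLS solution equals the direct GLS estimator.
Vinv = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print(beta, beta_gls)
```

In practice the variance components are first estimated under the null model (no SNP); the whitening step is what makes scanning millions of markers cheap.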

  3. Quasi 1D Modeling of Mixed Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Woolwine, Kyle J.

    2012-01-01

    The AeroServoElasticity task under the NASA Supersonics Project is developing dynamic models of the propulsion system and the vehicle in order to conduct research for integrated vehicle dynamic performance. As part of this effort, a nonlinear quasi 1-dimensional model of the 2-dimensional bifurcated mixed compression supersonic inlet is being developed. The model utilizes computational fluid dynamics for both the supersonic and subsonic diffusers. The oblique shocks are modeled utilizing compressible flow equations. This model also implements variable geometry required to control the normal shock position. The model is flexible and can also be utilized to simulate other mixed compression supersonic inlet designs. The model was validated both in time and in the frequency domain against the legacy LArge Perturbation INlet code, which has been previously verified using test data. This legacy code written in FORTRAN is quite extensive and complex in terms of the amount of software and number of subroutines. Further, the legacy code is not suitable for closed loop feedback controls design, and the simulation environment is not amenable to systems integration. Therefore, a solution is to develop an innovative, more simplified, mixed compression inlet model with the same steady state and dynamic performance as the legacy code that also can be used for controls design. The new nonlinear dynamic model is implemented in MATLAB Simulink. This environment allows easier development of linear models for controls design for shock positioning. The new model is also well suited for integration with a propulsion system model to study inlet/propulsion system performance, and integration with an aero-servo-elastic system model to study integrated vehicle ride quality, vehicle stability, and efficiency.

  4. Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model

    NASA Astrophysics Data System (ADS)

    Megann, A.; Nurser, G.

    2014-12-01

Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorses in climate applications; they have reached a mature stage of development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and which is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from an analysis of GO5.0 based on the isopycnal watermass analysis of Lee et al (2002), which indicate that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.

  5. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossi, R; Gallagher, B; Neville, J

Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable and computationally efficient, and the model natively supports attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.

  6. Estimating the numerical diapycnal mixing in the GO5.0 ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex; Nurser, George

    2014-05-01

Constant-depth (or "z-coordinate") ocean models such as MOM and NEMO have become the de facto workhorses in climate applications; they have reached a mature stage of development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes (e.g. Hofmann and Maqueda, 2006), and which is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2013). It uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. Two approaches to quantifying the numerical diapycnal mixing in this model are described: the first is based on the isopycnal watermass analysis of Lee et al (2002), while the second uses a passive tracer to diagnose mixing across density surfaces. Results from these two methods will be compared and contrasted. Hofmann, M. and Maqueda, M. A. M., 2006. Performance of a second-order moments advection scheme in an ocean general circulation model. JGR-Oceans, 111(C5). Lee, M.-M., Coward, A.C., Nurser, A.G., 2002. Spurious diapycnal mixing of deep waters in an eddy-permitting global ocean model. JPO 32, 1522-1535. Megann, A., Storkey, D., Aksenov, Y., Alderson, S., Calvert, D., Graham, T., Hyder, P., Siddorn, J., and Sinha, B., 2013: GO5.0: The joint NERC-Met Office NEMO global ocean model for use in coupled and forced applications, Geosci. Model Dev. Discuss., 6, 5747-5799.

  7. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
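
The cost of ignoring intra-class correlation can be made concrete with Kish's design effect, 1 + (m - 1)ρ, where m is the cluster size and ρ the intraclass correlation. The toy data below are deterministic and purely illustrative: animal effects dominate, so treating neurons as independent drastically understates the variance of the mean.

```python
import numpy as np

# Deterministic toy "Sholl" data: 6 animals, 10 neurons each.  Animal
# effects dominate, so neurons from one animal are highly correlated.
animal_effect = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0])
within = np.linspace(-1.0, 1.0, 10)           # small within-animal spread
y = animal_effect[:, None] + within[None, :]  # shape (6, 10)

n_a, m = y.shape
grand = y.mean()
means = y.mean(axis=1)
ms_between = m * ((means - grand) ** 2).sum() / (n_a - 1)
ms_within = ((y - means[:, None]) ** 2).sum() / (n_a * (m - 1))
var_animal = (ms_between - ms_within) / m
icc = var_animal / (var_animal + ms_within)

# Treating the 60 neurons as independent understates the variance of the
# mean by the design effect 1 + (m - 1)*icc (Kish's formula).
design_effect = 1 + (m - 1) * icc
print(icc, design_effect)
```

A mixed effects model recovers this inflation automatically by estimating the animal-level variance component, which is why its p-values are not biased downwards.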

  8. Functional Additive Mixed Models

    PubMed Central

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2014-01-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592

  9. Functional Additive Mixed Models.

    PubMed

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2015-04-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach.

  10. Miscibility and Thermodynamics of Mixing of Different Models of Formamide and Water in Computer Simulation.

    PubMed

    Kiss, Bálint; Fábián, Balázs; Idrissi, Abdenacer; Szőri, Milán; Jedlovszky, Pál

    2017-07-27

The thermodynamic changes that occur upon mixing five models of formamide and three models of water, including the miscibility of these model combinations itself, are studied by performing Monte Carlo computer simulations using an appropriately chosen thermodynamic cycle and the method of thermodynamic integration. The results show that the mixing of these two components is close to ideal, as both the energy and entropy of mixing turn out to be rather close to the ideal term in the entire composition range. Concerning the energy of mixing, the OPLS/AA_mod model of formamide behaves in a qualitatively different way from the other models considered: it yields negative energy-of-mixing values in combination with all three water models, whereas the other formamide models yield positive values. Experimental data support the latter behavior. Although the Helmholtz free energy of mixing always turns out to be negative in the entire composition range, the majority of the model combinations tested either show limited miscibility or at least approach the miscibility limit very closely at certain compositions. Concerning both the miscibility and the energy of mixing of these model combinations, we recommend the combination of the CHARMM formamide and TIP4P water models in simulations of water-formamide mixtures.

  11. Modelling ice microphysics of mixed-phase clouds

    NASA Astrophysics Data System (ADS)

    Ahola, J.; Raatikainen, T.; Tonttila, J.; Romakkaniemi, S.; Kokkola, H.; Korhonen, H.

    2017-12-01

Low-level Arctic mixed-phase clouds play a significant role in the Arctic climate through their ability to absorb and reflect radiation. Since climate change is amplified in polar areas, it is vital to understand mixed-phase cloud processes. From a modelling point of view, this requires a high spatiotemporal resolution to capture turbulence and the relevant microphysical processes, which has proven difficult. To address this problem, a new ice microphysics description has been developed. The recently published large-eddy simulation cloud model UCLALES-SALSA offers a good base for a feasible solution (Tonttila et al., Geosci. Mod. Dev., 10:169-188, 2017). The model includes aerosol-cloud interactions described with the sectional SALSA module (Kokkola et al., Atmos. Chem. Phys., 8, 2469-2483, 2008), which represents a good compromise between detail and computational expense. The SALSA module has recently been upgraded to include ice microphysics. The dynamical part of the model is based on the well-known UCLA-LES model (Stevens et al., J. Atmos. Sci., 56, 3963-3984, 1999), which can be used to study cloud dynamics on a fine grid. The microphysical description of ice is sectional, and the included processes comprise the formation, growth, and removal of ice and snow particles. Ice cloud particles are formed by parameterized homogeneous or heterogeneous nucleation. The growth mechanisms of ice particles and snow include coagulation and condensation of water vapor. Autoconversion from cloud ice particles to snow is parameterized. Ice particles and snow are removed by sedimentation and melting. The implementation of ice microphysics is tested by initializing the cloud simulation with atmospheric observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC). The results are compared to the model results shown in the paper of Ovchinnikov et al. (J. Adv. Model. Earth Syst., 6, 223-248, 2014) and they show a good

  12. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    NASA Astrophysics Data System (ADS)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6) . Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
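
The difference between the two mixing rules can be sketched with a non-ideal equation of state. Under Dalton's rule each species fills the whole volume and partial pressures add; under Amagat's rule partial volumes at the common pressure add. For ideal gases the two rules coincide, so a van der Waals sketch is used here; the constants and conditions are approximate, illustrative values, not those of the UNM shock-tube experiments.

```python
R = 0.08314  # L·bar/(mol·K)
# Approximate van der Waals constants (a in L^2·bar/mol^2, b in L/mol);
# illustrative textbook-order values for He and SF6, assumed not exact.
SPECIES = {"He": (0.0346, 0.0238), "SF6": (7.86, 0.0879)}

def vdw_pressure(n, V, T, a, b):
    """van der Waals pressure of n moles in volume V at temperature T."""
    return n * R * T / (V - n * b) - a * n * n / (V * V)

def vdw_volume(n, P, T, a, b):
    """Gas-branch volume solving the vdW equation by bisection."""
    lo, hi = n * b * 1.001, 1e4
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if vdw_pressure(n, mid, T, a, b) > P:
            lo = mid   # pressure too high -> need larger volume
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = {"He": 1.0, "SF6": 1.0}   # equimolar mixture
V, T = 20.0, 330.0            # T above SF6's critical temperature

# Dalton: each species fills the whole volume; total P is the sum.
P_dalton = sum(vdw_pressure(n[s], V, T, *SPECIES[s]) for s in SPECIES)

# Amagat: find P such that the species' partial volumes at P sum to V.
lo, hi = 0.01, 100.0
for _ in range(100):
    P = 0.5 * (lo + hi)
    Vsum = sum(vdw_volume(n[s], P, T, *SPECIES[s]) for s in SPECIES)
    if Vsum > V:
        lo = P
    else:
        hi = P
P_amagat = 0.5 * (lo + hi)
print(P_dalton, P_amagat)
```

The two predicted pressures agree to leading (ideal-gas) order but differ once the non-ideal terms matter, which is the effect the abstract reports growing with shock strength.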

  13. Modeling populations of rotationally mixed massive stars

    NASA Astrophysics Data System (ADS)

    Brott, I.

    2011-02-01

Massive stars can be considered cosmic engines. With their high luminosities, strong stellar winds and violent deaths they drive the evolution of galaxies throughout the history of the universe. Despite the importance of massive stars, their evolution is still poorly understood, and two major issues have plagued evolutionary models of massive stars until today: mixing and mass loss. Because the effects of mass loss remain limited on the main sequence in the considered mass and metallicity range, this thesis concentrates on the role of mixing in massive stars, approaching the problem at the crossroads between observations and simulations. The main question is: do evolutionary models of single stars, accounting for the effects of rotation, reproduce the observed properties of real stars? In particular, we are interested in whether the evolutionary models can reproduce the surface abundance changes during the main-sequence phase. To constrain our models we build a population synthesis model for the sample of the VLT-FLAMES Survey of Massive Stars, for which the star-formation history and rotational velocity distribution are well constrained. We consider the four main regions of the Hunter diagram: nitrogen-unenriched slow rotators and nitrogen-enriched fast rotators, which are predicted by theory, and nitrogen-enriched slow rotators and nitrogen-unenriched fast rotators, which are not predicted by our model. We conclude that these comparisons are currently not sufficient to verify the theory of rotational mixing; physical processes in addition to rotational mixing appear necessary to explain the stars in the latter two regions. The chapters of this Thesis have been published in the following journals: Ch. 2: ``Rotating Massive Main-Sequence Stars I: Grids of Evolutionary Models and Isochrones'', I. Brott, S. E. de Mink, M. Cantiello, N. Langer, A. de Koter, C. J. Evans, I. Hunter, C. Trundle, J. S. Vink, submitted to Astronomy & Astrophysics. Ch. 3: ``The VLT-FLAMES Survey of Massive

  14. Using generalized additive (mixed) models to analyze single case designs.

    PubMed

    Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J

    2014-04-01

    This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  15. Scale-up on basis of structured mixing models: A new concept.

    PubMed

    Mayr, B; Moser, A; Nagy, E; Horvat, P

    1994-02-05

A new scale-up concept is presented, based upon mixing models for bioreactors equipped with Rushton turbines and the tanks-in-series concept. The physical mixing model includes four adjustable parameters: radial and axial circulation time, the number of ideally mixed elements in one cascade, and the volume of the ideally mixed turbine region. The values of the model parameters were adjusted with a modified Monte-Carlo optimization method, which fitted the simulated response function to the experimental curve. The number of cascade elements turned out to be constant (N = 4). The radial circulation time parameter is in good agreement with the one obtained from the pumping capacity. For the remaining parameters, a first- or second-order formal equation was developed, including four operational parameters (stirring and aeration intensity, scale, and viscosity). This concept can be extended to several other types of bioreactors as well, and it seems to be a suitable tool for comparing the bioprocess performance of different types of bioreactors. (c) 1994 John Wiley & Sons, Inc.
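
The tanks-in-series idea can be sketched by propagating a tracer pulse through a cascade of ideally mixed elements; the mean residence time of the cascade outflow approaches N·τ. The parameter values below are illustrative only, not those fitted in the study.

```python
import numpy as np

N = 4        # number of ideally mixed elements in the cascade (as in the paper)
tau = 2.0    # assumed mean residence time of each tank (s); illustrative
dt, T = 0.001, 40.0
steps = int(T / dt)

c = np.zeros(N)
c[0] = 1.0   # unit tracer concentration in the first tank at t = 0
out = np.zeros(steps)
for k in range(steps):
    inflow = np.concatenate(([0.0], c[:-1]))  # each tank feeds the next
    c = c + dt * (inflow - c) / tau           # dc_i/dt = (c_{i-1} - c_i)/tau
    out[k] = c[-1]

t = (np.arange(steps) + 1) * dt
mean_rt = (t * out).sum() / out.sum()  # mean residence time, close to N*tau
print(mean_rt)
```

The outflow response is the gamma-shaped curve characteristic of tanks-in-series; fitting such simulated responses to measured tracer curves is the optimization step the abstract describes.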

  16. Conservative mixing, competitive mixing and their applications

    NASA Astrophysics Data System (ADS)

    Klimenko, A. Y.

    2010-12-01

    In many of the models applied to simulations of turbulent transport and turbulent combustion, the mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. Stochastic particles with properties and mixing can be used not only for simulating turbulent combustion, but also for modeling a large spectrum of physical phenomena. Traditional mixing, which is commonly used in the modeling of turbulent reacting flows, is conservative: the total amount of scalar is (or should be) preserved during a mixing event. It is worthwhile, however, to consider a more general mixing that does not possess these conservative properties; hence, our consideration lies beyond traditional mixing. In non-conservative mixing, the particle post-mixing average becomes biased towards one of the particles participating in mixing. The extreme form of non-conservative mixing can be called competitive mixing or competition: after a mixing event, the loser particle simply receives the properties of the winner particle. Particles with non-conservative mixing can be used to emulate various phenomena involving competition. In particular, we investigate cyclic behavior that can be attributed to complex competing systems. We show that the localness and intransitivity of competitive mixing are linked to the cyclic behavior.
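
The distinction can be sketched with a toy particle ensemble: conservative (Curl-like) pair averaging preserves the ensemble mean while shrinking its spread, whereas competitive events bias the ensemble toward the winner's value. This is a minimal illustration, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def mix(values, n_events, conservative=True):
    """Pairwise mixing events on a particle ensemble.

    Conservative events move both particles to their pair average,
    preserving the ensemble total.  Competitive events copy the larger
    value onto the "loser", biasing the ensemble toward winners.
    """
    v = values.copy()
    for _ in range(n_events):
        i, j = rng.choice(len(v), size=2, replace=False)
        if conservative:
            v[i] = v[j] = 0.5 * (v[i] + v[j])
        else:
            v[i] = v[j] = max(v[i], v[j])
    return v

start = np.linspace(0.0, 1.0, 100)
cons = mix(start, 2000, conservative=True)
comp = mix(start, 2000, conservative=False)
print(cons.mean(), comp.mean())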

  17. An epidemic model to evaluate the homogeneous mixing assumption

    NASA Astrophysics Data System (ADS)

    Turnes, P. P.; Monteiro, L. H. A.

    2014-11-01

    Many epidemic models are written in terms of ordinary differential equations (ODE). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is not relevant to predict the spread of a pathogen in this population. Here, we propose an epidemic model based on ODE to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. Then, we employ this model to evaluate the validity of the homogeneous mixing assumption by using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
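
A homogeneous-mixing ODE model of this kind can be sketched with three state variables, susceptible s, infectious i, and empty space e, with s + i + e = 1. The rate terms and constants below are a hedged, generic SIS-with-demography form, not the paper's exact equations.

```python
# Generic SIS-type model with "empty space" as a third state; the
# functional form and rate constants are illustrative assumptions.
beta, gamma, birth, death = 3.0, 1.0, 0.5, 0.2

def step(s, i, e, dt):
    new_inf = beta * s * i          # homogeneous-mixing contact term
    ds = -new_inf + gamma * i + birth * e - death * s
    di = new_inf - gamma * i - death * i
    de = -birth * e + death * (s + i)   # deaths free space, births fill it
    return s + dt * ds, i + dt * di, e + dt * de

s, i, e = 0.94, 0.01, 0.05
for _ in range(20000):                  # forward Euler to t = 20
    s, i, e = step(s, i, e, 1e-3)
print(s, i, e, s + i + e)
```

Note that the three rate expressions sum to zero, so s + i + e = 1 is conserved exactly; comparing such ODE trajectories against network simulations is how the homogeneous-mixing assumption is evaluated.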

  18. Linear mixing model applied to AVHRR LAC data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.93 micron channel was extracted and used with the two reflective channels, 0.58-0.68 micron and 0.725-1.1 micron, to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of unmixing techniques when using coarse resolution data for global studies.
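
The constrained least squares step can be sketched for a three-band, three-endmember pixel: imposing the sum-to-one constraint by substitution reduces the problem to ordinary least squares. The endmember spectra below are hypothetical, and a full implementation would typically also enforce non-negativity (e.g. via NNLS).

```python
import numpy as np

# Hypothetical 3-band endmember spectra (columns: vegetation, soil, shade);
# reflectance values are illustrative, not from AVHRR.
E = np.array([
    [0.05, 0.25, 0.02],   # red band
    [0.45, 0.30, 0.03],   # near-infrared band
    [0.20, 0.35, 0.04],   # mid-infrared (reflective) band
])

def unmix(pixel, E):
    """Least squares endmember fractions constrained to sum to one.

    Substituting f3 = 1 - f1 - f2 turns the constrained problem into an
    unconstrained least squares problem in (f1, f2).
    """
    d = pixel - E[:, 2]
    A = E[:, :2] - E[:, [2]]
    f12, *_ = np.linalg.lstsq(A, d, rcond=None)
    return np.append(f12, 1.0 - f12.sum())

true_f = np.array([0.6, 0.3, 0.1])
pixel = E @ true_f        # noise-free mixed pixel
f = unmix(pixel, E)
print(f)
```

Applied per pixel, this yields the vegetation, soil, and shade fraction images the abstract describes.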

  19. Mixed models approaches for joint modeling of different types of responses.

    PubMed

    Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert

    2016-01-01

    In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received a lot of attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects based model that, in addition, allows for overdispersion. Using two case studies, it is shown that the combination of random effects to capture association with further correction for overdispersion can improve the model's fit considerably, and that the resulting models make it possible to answer research questions that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way, using the SAS procedure NLMIXED.

  20. Stochastic transport models for mixing in variable-density turbulence

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

    In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation, which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit sum of mass fractions, the bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  1. Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.

    PubMed

    Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng

    2014-06-01

    Flow dynamics at a pipe junction control particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly, the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. Published by Elsevier Ltd.

  2. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    ERIC Educational Resources Information Center

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, and compares the data analytic results from three regression…
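The core multilevel idea, that outcome variance splits into a school-level and a student-level component which ordinary regression ignores, can be illustrated with a quick simulation. All numbers below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_students = 50, 30
tau, sigma = 2.0, 4.0  # between-school and within-school std devs (illustrative)

u = rng.normal(0, tau, n_schools)                    # school random intercepts
y = u[:, None] + rng.normal(0, sigma, (n_schools, n_students))

# One-way ANOVA method-of-moments estimate of the variance components
group_means = y.mean(axis=1)
msb = n_students * group_means.var(ddof=1)           # between-school mean square
msw = ((y - group_means[:, None]) ** 2).sum() / (n_schools * (n_students - 1))
tau2_hat = (msb - msw) / n_students                  # school-level variance
icc = tau2_hat / (tau2_hat + msw)                    # intraclass correlation
print(round(icc, 3))
```

A nonzero intraclass correlation (here the true value is 4/20 = 0.2) is exactly what violates the independence assumption of ordinary regression and motivates the HLM/LME machinery the paper discusses.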

  3. ATLAS - A new Lagrangian transport and mixing model with detailed stratospheric chemistry

    NASA Astrophysics Data System (ADS)

    Wohltmann, I.; Rex, M.; Lehmann, R.

    2009-04-01

    We present a new global Chemical Transport Model (CTM) with full stratospheric chemistry and Lagrangian transport and mixing called ATLAS. Lagrangian models have some crucial advantages over Eulerian grid-box based models, like no numerical diffusion, no limitation of the model time step by the CFL criterion, conservation of mixing ratios by design and easy parallelization of code. The transport module is based on a trajectory code developed at the Alfred Wegener Institute. The horizontal and vertical resolution, the vertical coordinate system (pressure, potential temperature, hybrid coordinate) and the time step of the model are flexible, so that the model can be used both for process studies and long-term runs over several decades. Mixing of the Lagrangian air parcels is parameterized based on the local shear and strain of the flow with a method similar to that used in the CLaMS model, but with some modifications like a triangulation that introduces no vertical layers. The stratospheric chemistry module was developed at the Institute and includes 49 species and 170 reactions and a detailed treatment of heterogeneous chemistry on polar stratospheric clouds. We present an overview of the model architecture, the transport and mixing concept and some validation results. Comparison of model results with tracer data from flights of the ER-2 aircraft in the stratospheric polar vortex in 1999/2000, which are able to resolve fine tracer filaments, shows that excellent agreement with observed tracer structures can be achieved with a suitable mixing parameterization.

  4. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous

  6. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  7. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

    Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or hydraulic concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This
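The key trick, that mixing weights on the unit circle preserve the field's variance and covariance structure, is easy to verify numerically. The fields below are plain uncorrelated Gaussian samples rather than conditional random fields:

```python
import numpy as np

rng = np.random.default_rng(2)
f1 = rng.standard_normal(10_000)  # two fields with the same (here: unit,
f2 = rng.standard_normal(10_000)  # uncorrelated) second-order structure

# Weights (cos t, sin t) on the unit circle preserve the variance:
# Var(cos(t)*f1 + sin(t)*f2) = cos^2(t) + sin^2(t) = 1 for independent unit fields.
n = 8
thetas = np.linspace(0, 2 * np.pi, n, endpoint=False)  # equidistant sampling points
mixtures = [np.cos(t) * f1 + np.sin(t) * f2 for t in thetas]
print([round(float(m.var()), 2) for m in mixtures])
```

In the actual algorithm, each such mixture would be fed to the forward model at the conditioning locations, and the responses interpolated around the circle to evaluate additional weights cheaply.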

  8. An explicit mixed numerical method for mesoscale model

    NASA Technical Reports Server (NTRS)

    Hsu, H.-M.

    1981-01-01

    A mixed numerical method has been developed for mesoscale models. The technique consists of a forward difference scheme for time tendency terms, an upstream scheme for advective terms, and a central scheme for the other terms in a physical system. It is shown that the mixed method is conditionally stable and highly accurate for approximating the system of either shallow-water equations in one dimension or primitive equations in three dimensions. Since the technique is explicit and two-time-level, it conserves computer and programming resources.
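For a scalar 1D advection problem the mixed scheme reduces to forward differencing in time and upstream differencing in space. A minimal sketch (the paper applies the idea to shallow-water and primitive equations; the grid and speed here are arbitrary):

```python
import numpy as np

# Forward-in-time, upstream-in-space scheme for u_t + c u_x = 0, c > 0.
c, dx, dt = 1.0, 0.1, 0.05          # CFL = c*dt/dx = 0.5 (conditionally stable)
x = np.arange(0, 10, dx)
u = np.exp(-((x - 2.0) ** 2))       # initial Gaussian pulse centered at x = 2

for _ in range(40):                  # advance to t = 2
    u = u - c * dt / dx * (u - np.roll(u, 1))  # upstream (upwind) difference

print(round(float(x[np.argmax(u)]), 1))        # pulse has advected toward x = 4
```

The pulse travels at the correct speed, and the conditional stability is visible in the CFL number: raising dt so that c*dt/dx exceeds one makes the scheme blow up, while the upwind differencing trades some numerical diffusion for that robustness.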

  9. Computation of turbulent high speed mixing layers using a two-equation turbulence model

    NASA Technical Reports Server (NTRS)

    Narayan, J. R.; Sekar, B.

    1991-01-01

    A two-equation turbulence model was extended to be applicable to compressible flows. A compressibility correction based on modelling the dilatational terms in the Reynolds stress equations was included in the model. The model is used in conjunction with the SPARK code for the computation of high speed mixing layers. The observed trend of decreasing growth rate with increasing convective Mach number in compressible mixing layers is well predicted by the model. The predictions agree well with the experimental data and the results from a compressible Reynolds stress model. The present model appears to be well suited for the study of compressible free shear flows. Preliminary results obtained for the reacting mixing layers are included.

  10. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures

    PubMed Central

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model—dimensional or discrete—as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of opposite valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805
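The "minimum index" moderator mentioned above scores the intensity of mixed feelings as the weaker of two opposite-valence ratings; a trivial sketch with made-up ratings:

```python
# Minimum index: mixed feelings are only as strong as the weaker of the two
# opposite-valence ratings. The example ratings below are invented.
def min_index(positive, negative):
    return min(positive, negative)

# A participant rating a bittersweet film clip on 0-6 scales:
print(min_index(positive=4, negative=3))   # both emotions clearly present
print(min_index(positive=5, negative=0))   # no evidence of mixed feelings
```

Because the index is bounded by the weaker rating, it is a conservative measure, which is consistent with the meta-analytic finding that studies using it produced smaller effect sizes than subjective self-reports of mixedness.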

  11. The salinity effect in a mixed layer ocean model

    NASA Technical Reports Server (NTRS)

    Miller, J. R.

    1976-01-01

    A model of the thermally mixed layer in the upper ocean as developed by Kraus and Turner and extended by Denman is further extended to investigate the effects of salinity. In the tropical and subtropical Atlantic Ocean rapid increases in salinity occur at the bottom of a uniformly mixed surface layer. The most significant effects produced by the inclusion of salinity are the reduction of the deepening rate and the corresponding change in the heating characteristics of the mixed layer. If the net surface heating is positive, but small, salinity effects must be included to determine whether the mixed layer temperature will increase or decrease. Precipitation over tropical oceans leads to the development of a shallow stable layer accompanied by a decrease in the temperature and salinity at the sea surface.

  12. BDA special care case mix model.

    PubMed

    Bateman, P; Arnold, C; Brown, R; Foster, L V; Greening, S; Monaghan, N; Zoitopoulos, L

    2010-04-10

    Routine dental care provided in special care dentistry is complicated by patient-specific factors which increase the time taken and costs of treatment. The BDA have developed and conducted a field trial of a case mix tool to measure this complexity. For each episode of care the case mix tool assesses the following on a four-point scale: 'ability to communicate', 'ability to cooperate', 'medical status', 'oral risk factors', 'access to oral care' and 'legal and ethical barriers to care'. The tool is reported to be easy to use and captures sufficient detail to discriminate between types of service and special care dentistry provided. It offers potential as a simple-to-use and clinically relevant source of performance management and commissioning data. This paper describes the model, demonstrates how it is currently being used, and considers future developments in its use.

  13. Semiparametric mixed-effects analysis of PK/PD models using differential equations.

    PubMed

    Wang, Yi; Eskridge, Kent M; Zhang, Shunpu

    2008-08-01

    Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
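A minimal numerical sketch of the structural form dx/dt = A(t)x + B(t): here A is a constant elimination rate and B(t) a fixed stand-in for the nonparametric input that the authors estimate with penalized splines; all values are illustrative, not fitted to the cefamandole data:

```python
import numpy as np

def simulate(A, B, x0=0.0, t_end=10.0, dt=0.001):
    """Forward-Euler solution of dx/dt = A(t)*x + B(t)."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (A(t) * x + B(t))
        t += dt
    return x

A = lambda t: -0.5          # first-order elimination (illustrative rate)
B = lambda t: np.exp(-t)    # absorption-like input term (spline stand-in)
x_end = simulate(A, B)
print(round(x_end, 4))
```

For these choices the ODE has the closed form x(t) = 2(e^(-t/2) - e^(-t)), so the numerical solution can be checked directly; in the semiparametric setting B(t) is instead left free and penalized, which is what lets structural misspecification show up as a nonzero estimated B(t).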

  14. Comparing Bayesian stable isotope mixing models: Which tools are best for sediments?

    NASA Astrophysics Data System (ADS)

    Morris, David; Macko, Stephen

    2016-04-01

    Bayesian stable isotope mixing models have received much attention as a means of coping with multiple sources and uncertainty in isotope ecology (e.g. Phillips et al., 2014), enabling the probabilistic determination of the contributions made by each food source to the total diet of the organism in question. We have applied these techniques to marine sediments for the first time. The sediments of the Chukchi Sea and Beaufort Sea offer an opportunity to utilize these models for organic geochemistry, as there are three likely sources of organic carbon; pelagic phytoplankton, sea ice algae and terrestrial material from rivers and coastal erosion, as well as considerable variation in the marine δ13C values. Bayesian mixing models using bulk δ13C and δ15N data from Shelf Basin Interaction samples allow for the probabilistic determination of the contributions made by each of the sources to the organic carbon budget, and can be compared with existing source contribution estimates based upon biomarker models (e.g. Belicka & Harvey, 2009, Faux, Belicka, & Rodger Harvey, 2011). The δ13C of this preserved material varied from -22.1 to -16.7‰ (mean -19.4±1.3‰), while δ15N varied from 4.1 to 7.6‰ (mean 5.7±1.1‰). Using the SIAR model, we found that water column productivity was the source of between 50 and 70% of the organic carbon buried in this portion of the western Arctic with the remainder mainly supplied by sea ice algal productivity (25-35%) and terrestrial inputs (15%). With many mixing models now available, this study will compare SIAR with MixSIAR and the new FRUITS model. Monte Carlo modeling of the mixing polygon will be used to validate the models, and hierarchical models will be utilised to glean more information from the data set.
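The logic of these mixing models can be sketched with a rejection-sampling (ABC-style) stand-in for SIAR/MixSIAR: draw source proportions from a flat Dirichlet prior and keep draws whose predicted mixture is close to the observed signature. The source signatures and "true" fractions below are hypothetical, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical (δ13C, δ15N) signatures for phytoplankton, sea-ice algae,
# and terrestrial material; illustrative values only.
sources = np.array([[-22.0, 6.0],
                    [-17.0, 8.0],
                    [-27.0, 2.0]])
true_f = np.array([0.6, 0.25, 0.15])
observed = true_f @ sources          # noise-free synthetic sediment signature

# Flat Dirichlet prior over the three source proportions; accept draws whose
# predicted mixture lies within a small tolerance of the observation.
draws = rng.dirichlet(np.ones(3), size=200_000)
keep = draws[np.linalg.norm(draws @ sources - observed, axis=1) < 0.3]
post_mean = keep.mean(axis=0)
print(len(keep), np.round(post_mean, 2))
```

The posterior mean recovers the known fractions to within the tolerance-induced spread; full models such as SIAR/MixSIAR replace the rejection step with proper likelihoods, trophic enrichment factors, and source-signature uncertainty.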

  15. Mixed Phase Modeling in GlennICE with Application to Engine Icing

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Jorgenson, Philip C. E.; Veres, Joseph P.

    2011-01-01

    A capability for modeling ice crystals and mixed phase icing has been added to GlennICE. Modifications have been made to the particle trajectory algorithm and energy balance to model this behavior. This capability has been added as part of a larger effort to model ice crystal ingestion in aircraft engines. Comparisons have been made to four mixed phase ice accretions performed in the Cox icing tunnel in order to calibrate an ice erosion model. A sample ice ingestion case was performed using the Energy Efficient Engine (E3) model in order to illustrate current capabilities. Engine performance characteristics were supplied using the Numerical Propulsion System Simulation (NPSS) model for this test case.

  16. MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)

    EPA Science Inventory

    We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...

  17. Toward Better Modeling of Supercritical Turbulent Mixing

    NASA Technical Reports Server (NTRS)

    Selle, Laurent; Okongo'o, Nora; Bellan, Josette; Harstad, Kenneth

    2008-01-01

    This study was done as part of an effort to develop computational models representing turbulent mixing under thermodynamic supercritical (here, high pressure) conditions. The question was whether the large-eddy simulation (LES) approach, developed previously for atmospheric-pressure compressible-perfect-gas and incompressible flows, can be extended to real-gas non-ideal (including supercritical) fluid mixtures. [In LES, the governing equations are approximated such that the flow field is spatially filtered and subgrid-scale (SGS) phenomena are represented by models.] The study included analyses of results from direct numerical simulation (DNS) of several such mixing layers based on the Navier-Stokes, total-energy, and conservation- of-chemical-species governing equations. Comparison of LES and DNS results revealed the need to augment the atmospheric- pressure LES equations with additional SGS momentum and energy terms. These new terms are the direct result of high-density-gradient-magnitude regions found in the DNS and observed experimentally under fully turbulent flow conditions. A model has been derived for the new term in the momentum equation and was found to perform well at small filter size but to deteriorate with increasing filter size. Several alternative models were derived for the new SGS term in the energy equation that would need further investigations to determine if they are too computationally intensive in LES.

  18. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results from using the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behaviour was assumed for all elements, even for those that might be disregarded in aquatic systems
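The Kruskal-Wallis screening step mentioned above can be sketched in a few lines; the element concentrations are simulated, with one element built to discriminate the sources and one built not to:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(4)
# Hypothetical concentrations for three source soils (10 samples each):
# element "A" separates the sources, element "B" does not.
sources = {"A": [rng.normal(m, 1.0, 10) for m in (5.0, 9.0, 13.0)],
           "B": [rng.normal(7.0, 1.0, 10) for _ in range(3)]}

# Kruskal-Wallis screening: keep elements whose concentrations differ
# significantly between sources (a common first step in fingerprinting).
selected = [el for el, groups in sources.items()
            if kruskal(*groups).pvalue < 0.05]
print(selected)
```

Only the discriminating element should survive the screen; in practice this filter is followed by a multivariate step (e.g. stepwise DFA) before the surviving tracers are passed to the mixing model.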

  19. Evaluation of vertical coordinate and vertical mixing algorithms in the HYbrid-Coordinate Ocean Model (HYCOM)

    NASA Astrophysics Data System (ADS)

    Halliwell, George R.

    Vertical coordinate and vertical mixing algorithms included in the HYbrid Coordinate Ocean Model (HYCOM) are evaluated in low-resolution climatological simulations of the Atlantic Ocean. The hybrid vertical coordinates are isopycnic in the deep ocean interior, but smoothly transition to level (pressure) coordinates near the ocean surface, to sigma coordinates in shallow water regions, and back again to level coordinates in very shallow water. By comparing simulations to climatology, the best model performance is realized using hybrid coordinates in conjunction with one of the three available differential vertical mixing models: the nonlocal K-Profile Parameterization, the NASA GISS level 2 turbulence closure, and the Mellor-Yamada level 2.5 turbulence closure. Good performance is also achieved using the quasi-slab Price-Weller-Pinkel dynamical instability model. Differences among these simulations are too small relative to other errors and biases to identify the "best" vertical mixing model for low-resolution climate simulations. Model performance deteriorates slightly when the Kraus-Turner slab mixed layer model is used with hybrid coordinates. This deterioration is smallest when solar radiation penetrates beneath the mixed layer and when shear instability mixing is included. A simulation performed using isopycnic coordinates to emulate the Miami Isopycnic Coordinate Ocean Model (MICOM), which uses Kraus-Turner mixing without penetrating shortwave radiation and shear instability mixing, demonstrates that the advantages of switching from isopycnic to hybrid coordinates and including more sophisticated turbulence closures outweigh the negative numerical effects of maintaining hybrid vertical coordinates.

  20. A mixed model framework for teratology studies.

    PubMed

    Braeken, Johan; Tuerlinckx, Francis

    2009-10-01

    A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.

  1. Evaluation of Aerosol Mixing State Classes in the GISS Modele-matrix Climate Model Using Single-particle Mass Spectrometry Measurements

    NASA Technical Reports Server (NTRS)

    Bauer, Susanne E.; Ault, Andrew; Prather, Kimberly A.

    2013-01-01

    Aerosol particles in the atmosphere are composed of multiple chemical species. The aerosol mixing state, which describes how chemical species are mixed at the single-particle level, provides critical information on microphysical characteristics that determine the interaction of aerosols with the climate system. The evaluation of mixing state has become the next challenge. This study uses aerosol time-of-flight mass spectrometry (ATOFMS) data and compares the results to those of the Goddard Institute for Space Studies modelE-MATRIX (Multiconfiguration Aerosol TRacker of mIXing state) model, a global climate model that includes a detailed aerosol microphysical scheme. We use data from field campaigns that examine a variety of air mass regimes (urban, rural, and maritime). At all locations, including polluted areas in California (Riverside, La Jolla, and Long Beach), a remote location in the Sierra Nevada Mountains (Sugar Pine), and observations from Jeju (South Korea), the majority of aerosol species are internally mixed. Coarse aerosol particles, those above 1 micron, are typically aged, such as coated dust or reacted sea-salt particles. Particles below 1 micron contain large fractions of organic material, internally mixed with sulfate and black carbon, and few external mixtures. We conclude that observations taken over multiple weeks characterize typical air mass types at a given location well; however, due to the instrumentation, we could not evaluate mass budgets. These results represent the first detailed comparison of single-particle mixing states in a global climate model with real-time single-particle mass spectrometry data, an important step in improving the representation of mixing state in global climate models.

  2. Best practices for use of stable isotope mixing models in food-web studies

    EPA Science Inventory

    Stable isotope mixing models are increasingly used to quantify contributions of resources to consumers. While potentially powerful tools, these mixing models have the potential to be misused, abused, and misinterpreted. Here we draw on our collective experiences to address the qu...

  3. Supervised nonlinear spectral unmixing using a postnonlinear mixing model for hyperspectral imagery.

    PubMed

    Altmann, Yoann; Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves

    2012-06-01

    This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions, leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
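    The postnonlinear idea described above can be sketched numerically: mix pure spectra linearly, then pass each band through a polynomial nonlinearity. The endmember spectra, abundances and quadratic coefficient below are invented for illustration; this is a sketch of the model form, not the authors' Bayesian estimation procedure.

```python
import random

# Hypothetical endmember spectra: 3 bands (rows), 2 pure materials (columns).
M = [[0.2, 0.8],
     [0.5, 0.4],
     [0.9, 0.1]]
abundances = [0.3, 0.7]  # non-negative, sum to one

def linear_mix(M, a):
    """Standard linear mixing: each band is a weighted sum of endmembers."""
    return [sum(M[i][j] * a[j] for j in range(len(a))) for i in range(len(M))]

def post_nonlinear_mix(M, a, b=0.25, sigma=0.0):
    """Polynomial post-nonlinear model: g(x) = x + b*x^2 applied band-wise,
    plus optional additive white Gaussian noise of standard deviation sigma."""
    x = linear_mix(M, a)
    return [xi + b * xi * xi + random.gauss(0.0, sigma) for xi in x]

pixel = post_nonlinear_mix(M, abundances)  # noise-free sketch
```

    Setting the nonlinear coefficient b to zero recovers the ordinary linear mixing model.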

  4. Mixing parametrizations for ocean climate modelling

    NASA Astrophysics Data System (ADS)

    Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir

    2016-04-01

    An algorithm is presented for splitting the total evolution equations for turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, the following schemes are implemented: an explicit-implicit numerical scheme, an analytical solution, and the asymptotic behaviour of the analytical solution. Experiments with different mixing parameterizations were performed to model the decadal variability of the Arctic and Atlantic climate with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. The proposed model with split equations for the turbulence characteristics is similar in its physical formulation to contemporary differential turbulence models, while its algorithm is computationally efficient. Parameterizations using the split turbulence model yield a more adequate temperature and salinity structure at decadal timescales than the simpler Pacanowski-Philander (PP) parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step represents the ocean climate better than the faster parameterization based on the asymptotic behaviour of the analytical solution, while the computational cost remains almost unchanged relative to the simple PP parameterization. Using the PP parameterization in the circulation model produces a realistic simulation of density and circulation but violates the T,S-relationships; this error is largely avoided with the proposed parameterizations containing the split turbulence model.
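    The generation-dissipation stage of such a split scheme can be illustrated with a one-line semi-implicit update for turbulence kinetic energy. The function below is a generic sketch of the splitting idea with invented values, not the INMOM implementation, which also updates the dissipation frequency and offers analytical and asymptotic variants.

```python
def generation_dissipation_step(k, omega, production, dt):
    """Semi-implicit update of the generation-dissipation stage for
    turbulence kinetic energy, dk/dt = P - omega*k, where omega is the
    turbulence dissipation frequency.  Treating the sink term implicitly,
    k_new = (k + dt*P) / (1 + dt*omega), keeps k positive for any dt."""
    return (k + dt * production) / (1.0 + dt * omega)

# Illustrative values only: decay of TKE with no shear/buoyancy production.
k1 = generation_dissipation_step(k=1.0e-4, omega=0.01, production=0.0, dt=10.0)
```

    The transport-diffusion stage would then be handled separately, which is the point of the splitting: each stage can use the time discretization best suited to it.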

  5. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Because of the diversity of engine failure modes, a single Weibull distribution model carries a large error; by contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model, so that reliability estimates become more accurate and the precision of the mixed-distribution reliability model is greatly improved. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
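    A mixed Weibull density is simply a weighted sum of Weibull component densities. A minimal sketch, with component weights, shapes and scales invented for illustration (the paper's dynamic weights and correlation-coefficient optimization are not reproduced here):

```python
import math

def weibull_pdf(t, shape, scale):
    """Two-parameter Weibull density with shape k and scale lambda."""
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

def mixed_weibull_pdf(t, components):
    """Mixture density: components is a list of (weight, shape, scale),
    with the weights summing to one."""
    return sum(w * weibull_pdf(t, k, lam) for w, k, lam in components)

# Hypothetical two-mode failure population: early wear-in plus later wear-out.
components = [(0.3, 0.8, 500.0), (0.7, 3.5, 2000.0)]
density = mixed_weibull_pdf(1000.0, components)
```

    Fitting such a model would amount to estimating the weights and the per-component shape and scale parameters from failure-time data.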

  6. Metapopulation epidemic models with heterogeneous mixing and travel behaviour

    PubMed Central

    2014-01-01

    Background: Determining the pandemic potential of an emerging infectious disease and how it depends on the various epidemic and population aspects is critical for the preparation of an adequate response aimed at its control. The complex interplay between population movements in space and non-homogeneous mixing patterns has so far hindered the fundamental understanding of the conditions for spatial invasion through a general theoretical framework. To address this issue, we present an analytical modelling approach taking into account such interplay under general conditions of mobility and interactions, in the simplifying assumption of two population classes. Methods: We describe a spatially structured population with non-homogeneous mixing and travel behaviour through a multi-host stochastic epidemic metapopulation model. Different population partitions, mixing patterns and mobility structures are considered, along with a specific application to the study of the role of age partition in the early spread of the 2009 H1N1 pandemic influenza. Results: We provide a complete mathematical formulation of the model and derive a semi-analytical expression of the threshold condition for global invasion of an emerging infectious disease in the metapopulation system. A rich solution space is found that depends on the social partition of the population, the pattern of contacts across groups and their relative social activity, the travel attitude of each class, and the topological and traffic features of the mobility network. Reducing the activity of the less social group and reducing the cross-group mixing are predicted to be the most efficient strategies for controlling the pandemic potential in the case where the less active group constitutes the majority of travellers. If instead travelling is dominated by the more social class, our model predicts the existence of an optimal across-groups mixing that maximises the pandemic potential of the disease, whereas the impact of variations in…

  7. Metapopulation epidemic models with heterogeneous mixing and travel behaviour.

    PubMed

    Apolloni, Andrea; Poletto, Chiara; Ramasco, José J; Jensen, Pablo; Colizza, Vittoria

    2014-01-13

    Determining the pandemic potential of an emerging infectious disease and how it depends on the various epidemic and population aspects is critical for the preparation of an adequate response aimed at its control. The complex interplay between population movements in space and non-homogeneous mixing patterns has so far hindered the fundamental understanding of the conditions for spatial invasion through a general theoretical framework. To address this issue, we present an analytical modelling approach taking into account such interplay under general conditions of mobility and interactions, in the simplifying assumption of two population classes. We describe a spatially structured population with non-homogeneous mixing and travel behaviour through a multi-host stochastic epidemic metapopulation model. Different population partitions, mixing patterns and mobility structures are considered, along with a specific application to the study of the role of age partition in the early spread of the 2009 H1N1 pandemic influenza. We provide a complete mathematical formulation of the model and derive a semi-analytical expression of the threshold condition for global invasion of an emerging infectious disease in the metapopulation system. A rich solution space is found that depends on the social partition of the population, the pattern of contacts across groups and their relative social activity, the travel attitude of each class, and the topological and traffic features of the mobility network. Reducing the activity of the less social group and reducing the cross-group mixing are predicted to be the most efficient strategies for controlling the pandemic potential in the case where the less active group constitutes the majority of travellers. If instead travelling is dominated by the more social class, our model predicts the existence of an optimal across-groups mixing that maximises the pandemic potential of the disease, whereas the impact of variations in the activity of each group…

  8. Simulating the Cyclone Induced Turbulent Mixing in the Bay of Bengal using COAWST Model

    NASA Astrophysics Data System (ADS)

    Prakash, K. R.; Nigam, T.; Pant, V.

    2017-12-01

    Mixing in the upper oceanic layers (up to a few tens of meters from the surface) is an important process for understanding the evolution of sea surface properties. Enhanced mixing due to strong wind forcing at the surface deepens the mixed layer, which affects the air-sea exchange of heat and momentum fluxes and modulates sea surface temperature (SST). In the present study, we used the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model to demonstrate and quantify the enhanced cyclone-induced turbulent mixing in the case of a severe cyclonic storm. The COAWST model was configured over the Bay of Bengal (BoB) and used to simulate the atmospheric and oceanic conditions prevailing during tropical cyclone (TC) Phailin, which occurred over the BoB during 10-15 October 2013. The model-simulated cyclone track was validated against the IMD best track, and model SST was validated against daily AVHRR SST data. The validation shows that the simulated track and intensity, SST, and salinity were in good agreement with observations, and that the cyclone-induced cooling of the sea surface was well captured by the model. Model simulations show a considerable deepening (by 10-15 m) of the mixed layer and shoaling of the thermocline during TC Phailin. A power spectrum analysis performed on the zonal and meridional baroclinic current components shows the strongest energy at 14 m depth. Model results were analyzed to investigate the non-uniform energy distribution in the water column from the surface down to the thermocline depth. Rotary spectra analysis highlights the downward direction of turbulent mixing during the TC Phailin period. Model simulations were used to quantify and interpret the near-inertial mixing generated by the cyclone-induced strong wind stress and the near-inertial energy. These near-inertial oscillations are responsible for enhancing the mixing that operates on the strong post-monsoon (October-November) stratification in the BoB.

  9. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    NASA Astrophysics Data System (ADS)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using CPLEX solver, small-size problems are solved for optimality. Two metaheuristics, the restarted simulated annealing algorithm and co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original editions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.

  10. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  11. One-dimensional modelling of upper ocean mixing by turbulence due to wave orbital motion

    NASA Astrophysics Data System (ADS)

    Ghantous, M.; Babanin, A. V.

    2014-02-01

    Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been a lot of interest in introducing this mechanism into ocean mixing models, and real gains have been made in terms of increased fidelity to observational data. However, our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing parameterisations need refinement and propose an alternative one. We use two of the parameterisations to demonstrate the effect on the mixed layer of wave-induced turbulence by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.

  12. Genetic mixed linear models for twin survival data.

    PubMed

    Ha, Il Do; Lee, Youngjo; Pawitan, Yudi

    2007-07-01

    Twin studies are useful for separating the relative importance of the genetic or heritable component from the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to the analysis of twin survival data. Due to the limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated with survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.

  13. Influence of non-homogeneous mixing on final epidemic size in a meta-population model.

    PubMed

    Cui, Jingan; Zhang, Yanan; Feng, Zhilan

    2018-06-18

    In meta-population models for infectious diseases, the basic reproduction number R0 can be as much as 70% larger under preferential mixing than under homogeneous mixing [J.W. Glasser, Z. Feng, S.B. Omer, P.J. Smith, and L.E. Rodewald, The effect of heterogeneity in uptake of the measles, mumps, and rubella vaccine on the potential for outbreaks of measles: A modelling study, Lancet ID 16 (2016), pp. 599-605. doi: 10.1016/S1473-3099(16)00004-9]. This suggests that realistic mixing can be an important factor to consider for models to provide a reliable assessment of intervention strategies. The influence of mixing is more significant when the population is highly heterogeneous. In this paper, another quantity, the final epidemic size (Z) of an outbreak, is considered to examine the influence of mixing and population heterogeneity. A final size relation is derived for a meta-population model accounting for general mixing. The results show that Z can be influenced by the pattern of mixing in a significant way. Another interesting finding is that heterogeneity in various sub-population characteristics may have opposite effects on R0 and Z.
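    In the homogeneous-mixing special case, the final epidemic size z (as a fraction of the population) solves the classic relation z = 1 - exp(-R0*z), which a fixed-point iteration makes concrete. This is the textbook special case only, not the paper's general meta-population final size relation.

```python
import math

def final_size(r0, tol=1e-12, max_iter=1000):
    """Solve z = 1 - exp(-r0 * z) by fixed-point iteration, starting from
    z = 0.5.  For r0 <= 1 the iteration converges to z = 0 (no outbreak)."""
    z = 0.5
    for _ in range(max_iter):
        z_new = 1.0 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

z2 = final_size(2.0)  # about 0.797 of the population infected for R0 = 2
```

    The sensitivity of z to R0 near threshold illustrates why mixing-induced changes in R0 of tens of percent can matter so much for final size.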

  14. Logit-normal mixed model for Indian Monsoon rainfall extremes

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-03-01

    Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, daily minimum and maximum temperatures with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
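    The data-generating process of the logit-normal mixed model described above can be sketched as a station-level random intercept on the logit scale. All parameter values and covariates below are invented for illustration; fitting such a model is what the GLMM algorithms in the paper address.

```python
import math
import random

random.seed(42)

def inv_logit(x):
    """Inverse logit (logistic) link: maps the linear predictor to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical model: logit(p_ij) = b0 + b1 * x_ij + u_i, with a
# station-level random intercept u_i ~ N(0, sigma_u^2).
b0, b1, sigma_u = -2.0, 0.8, 0.5

data = []
for station in range(10):
    u = random.gauss(0.0, sigma_u)          # shared within a station
    for day in range(100):
        x = random.uniform(-1.0, 1.0)       # e.g. a standardized covariate
        p = inv_logit(b0 + b1 * x + u)
        y = 1 if random.random() < p else 0  # extreme-rainfall indicator
        data.append((station, x, y))
```

    The random intercept induces the within-station correlation that a fixed-effects logistic regression would ignore.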

  15. Mixed effects versus fixed effects modelling of binary data with inter-subject variability.

    PubMed

    Murphy, Valda; Dunne, Adrian

    2005-04-01

    The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within-subject correlation was reported in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator, with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased, and this bias acts as a lower bound for the root mean squared error of these estimates. Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model.
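    The quantity both approximations target is the marginal likelihood: the conditional Bernoulli likelihood integrated over the random intercept. The sketch below evaluates that integral for one subject by brute-force grid quadrature, a crude stand-in for adaptive Gaussian quadrature (the Laplace approximation would instead expand the integrand around its mode). Parameter values are invented.

```python
import math

def bernoulli_loglik(y, eta):
    """Log-likelihood of one binary response given linear predictor eta."""
    p = 1.0 / (1.0 + math.exp(-eta))
    return math.log(p if y else 1.0 - p)

def marginal_lik_quadrature(ys, beta, sigma, n=200, width=6.0):
    """Marginal likelihood of one subject's binary responses, integrating
    the random intercept b ~ N(0, sigma^2) by midpoint-rule quadrature
    over [-width*sigma, width*sigma]."""
    h = 2.0 * width * sigma / n
    total = 0.0
    for i in range(n):
        b = -width * sigma + (i + 0.5) * h
        loglik = sum(bernoulli_loglik(y, beta + b) for y in ys)
        dens = math.exp(-b * b / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))
        total += math.exp(loglik) * dens * h
    return total

lik = marginal_lik_quadrature([1, 0], beta=0.0, sigma=1.0)
```

    With only two observations per subject the integrand can be far from Gaussian, which is exactly the regime where the Laplace approximation studied in this paper misbehaves.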

  16. Improved estimation of sediment source contributions by concentration-dependent Bayesian isotopic mixing model

    NASA Astrophysics Data System (ADS)

    Upadhayay, Hari Ram; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal

    2017-04-01

    The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signature of a biotracer as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on linear mixing assumptions for the CSSI signatures of the sources to the sediment, without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of FA concentrations in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land use sources. Soil samples from the land use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using MixSIAR, a Bayesian isotopic mixing model, with and without concentration dependence. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions, based on aggregated FA concentrations of the sources, biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentration on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable.
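    The concentration-dependent assumption can be written in a few lines: the mixture signature is a concentration-weighted average of source signatures rather than a plain proportion-weighted one. The proportions, concentrations and δ13C values below are invented for illustration, and the function sketches only the weighting, not MixSIAR's Bayesian machinery.

```python
def mixture_signature(proportions, concentrations, deltas):
    """Concentration-weighted mixing: the tracer signature of the mixture is
    sum(p_i * c_i * d_i) / sum(p_i * c_i).  Setting all concentrations equal
    recovers the concentration-independent linear mixing assumption."""
    num = sum(p * c * d for p, c, d in zip(proportions, concentrations, deltas))
    den = sum(p * c for p, c in zip(proportions, concentrations))
    return num / den

# Hypothetical three land-use sources with unequal fatty-acid concentrations.
p = [0.5, 0.3, 0.2]            # soil mass proportions
c = [2.0, 1.0, 4.0]            # FA concentration (e.g. mg/g), illustrative
d13C = [-28.0, -30.0, -25.0]   # per-mil signatures, illustrative

mixed = mixture_signature(p, c, d13C)
naive = mixture_signature(p, [1.0] * 3, d13C)  # ignores concentration
```

    The gap between `mixed` and `naive` is the bias that ignoring source concentrations introduces; the more the concentrations differ among sources, the larger it gets.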

  17. Development of a nonlocal convective mixing scheme with varying upward mixing rates for use in air quality and chemical transport models.

    PubMed

    Mihailović, Dragutin T; Alapaty, Kiran; Sakradzija, Mirjana

    2008-06-01

    An asymmetrical convective non-local scheme (CON) with varying upward mixing rates is developed for simulating vertical turbulent mixing in the convective boundary layer in air quality and chemical transport models. The upward mixing rate from the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in a layer, while the downward mixing rates are derived from mass conservation. This scheme provides a less rapid mass transport out of the surface layer into other layers than other asymmetrical convective mixing schemes. In this paper, we studied the performance of the nonlocal convective mixing scheme with varying upward mixing rates in the atmospheric boundary layer and its impact on pollutant concentrations calculated with chemical and air-quality models. The scheme was additionally compared with a local eddy-diffusivity scheme (KSC). To examine the performance of the scheme, simulated and measured concentrations of a pollutant (NO2) and nitrate wet deposition were compared for the year 2002. The comparison was made for the whole domain used in simulations performed with the chemical European Monitoring and Evaluation Programme Unified model (version UNI-ACID, rv2.0), in which both schemes were incorporated. Concentrations of NO2 calculated with the CON scheme are in general higher and closer to the observations than those obtained with the KSC scheme (of the order of 15-20%). Nitrate wet deposition calculated with the CON scheme is likewise in general higher and closer to the observations than that obtained with the KSC scheme.

  18. Fermion masses and mixing in general warped extra dimensional models

    NASA Astrophysics Data System (ADS)

    Frank, Mariana; Hamzaoui, Cherif; Pourtolami, Nima; Toharia, Manuel

    2015-06-01

    We analyze fermion masses and mixing in a general warped extra-dimensional model, where all the Standard Model (SM) fields, including the Higgs, are allowed to propagate in the bulk. In this context, a slightly broken flavor symmetry imposed universally on all fermion fields, without distinction, can generate the full flavor structure of the SM, including quarks, charged leptons and neutrinos. For quarks and charged leptons, the exponential sensitivity of their wave functions to small flavor-breaking effects yields hierarchical masses and mixing, as is usual in warped models with fermions in the bulk. In the neutrino sector, the exponential wave-function factors can be flavor blind and thus insensitive to the small flavor symmetry breaking effects, directly linking their masses and mixing angles to the flavor-symmetric structure of the five-dimensional neutrino Yukawa couplings. The Higgs must be localized in the bulk, and the model is more successful in generalized warped scenarios where the metric background solution differs from five-dimensional anti-de Sitter space (AdS5). We study these features in two simple frameworks, flavor complementarity and flavor democracy, which provide specific predictions and correlations between quarks and leptons, testable as more precise data in the neutrino sector become available.

  19. A refined and dynamic cellular automaton model for pedestrian-vehicle mixed traffic flow

    NASA Astrophysics Data System (ADS)

    Liu, Mianfang; Xiong, Shengwu

    2016-12-01

    Mixed traffic flow sharing the “same lane” with no lane discipline is a common phenomenon on roads in developing countries. For example, motorized vehicles (m-vehicles) and nonmotorized vehicles (nm-vehicles) may share the m-vehicle lane or nm-vehicle lane, and pedestrians may share the nm-vehicle lane. Simulating pedestrian-vehicle mixed traffic flow consisting of three kinds of traffic objects, m-vehicles, nm-vehicles and pedestrians, can be a challenge because some erratic drivers or pedestrians fail to follow lane discipline. In this paper, we investigate the various moving and interactive behaviours associated with mixed traffic flow, such as lateral drift (including illegal lane changing and transverse crossing of different lanes), overtaking and forward movement, and propose new moving and interactive rules for pedestrian-vehicle mixed traffic flow based on a refined and dynamic cellular automaton (CA) model. Simulation results indicate that the proposed model can be used to investigate the traffic flow characteristics of a mixed traffic system and the corresponding complicated traffic problems, such as the moving characteristics of different traffic objects, interactions between different traffic objects, traffic jams and traffic conflicts, which are consistent with the actual mixed traffic system. The proposed model therefore provides a solid foundation for the management, planning and evacuation of mixed traffic flow.
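    As a flavour of cellular automaton traffic rules (not the paper's refined mixed-flow model), a minimal single-lane update in the Nagel-Schreckenberg style, without the randomization step, looks like this:

```python
def ca_step(positions, speeds, vmax, length):
    """One deterministic update of a single-lane CA on a ring of `length`
    cells: accelerate by one, brake to the gap ahead, then move.  This is
    an illustration of CA update rules only; the paper's model adds
    heterogeneous objects, lateral drift, and finer cells."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_speeds = speeds[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]                     # next object on ring
        gap = (positions[ahead] - positions[i] - 1) % length
        new_speeds[i] = min(speeds[i] + 1, vmax, gap)    # accelerate, then brake
    new_positions = [(positions[i] + new_speeds[i]) % length for i in range(n)]
    return new_positions, new_speeds

pos, vel = ca_step([0, 3, 7], [1, 1, 1], vmax=3, length=20)
```

    A refined CA such as the paper's subdivides cells and lets different object types occupy differently sized footprints, but the update cycle keeps this accelerate-brake-move skeleton.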

  20. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS

    EPA Science Inventory

    Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...

  1. A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design

    ERIC Educational Resources Information Center

    Palladino, John M.

    2009-01-01

    Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…

  2. Multifractal Modeling of Turbulent Mixing

    NASA Astrophysics Data System (ADS)

    Samiee, Mehdi; Zayernouri, Mohsen; Meerschaert, Mark M.

    2017-11-01

    Stochastic processes in random media are emerging as interesting tools for modeling anomalous transport phenomena. Applications include intermittent passive scalar transport with background noise in turbulent flows, which are observed in atmospheric boundary layers, turbulent mixing in reactive flows, and long-range dependent flow fields in disordered/fractal environments. In this work, we propose a nonlocal scalar transport equation involving the fractional Laplacian, where the corresponding fractional index is linked to the multifractal structure of the nonlinear passive scalar power spectrum. This work was supported by the AFOSR Young Investigator Program (YIP) award (FA9550-17-1-0150) and partially by MURI/ARO (W911NF-15-1-0562).

  3. Inflow, Outflow, Yields, and Stellar Population Mixing in Chemical Evolution Models

    NASA Astrophysics Data System (ADS)

    Andrews, Brett H.; Weinberg, David H.; Schönrich, Ralph; Johnson, Jennifer A.

    2017-02-01

    Chemical evolution models are powerful tools for interpreting stellar abundance surveys and understanding galaxy evolution. However, their predictions depend heavily on the treatment of inflow, outflow, star formation efficiency (SFE), the stellar initial mass function, the SN Ia delay time distribution, stellar yields, and stellar population mixing. Using flexCE, a flexible one-zone chemical evolution code, we investigate the effects of and trade-offs between parameters. Two critical parameters are SFE and the outflow mass-loading parameter, which shift the knee in [O/Fe]-[Fe/H] and the equilibrium abundances that the simulations asymptotically approach, respectively. One-zone models with simple star formation histories follow narrow tracks in [O/Fe]-[Fe/H] unlike the observed bimodality (separate high-α and low-α sequences) in this plane. A mix of one-zone models with inflow timescale and outflow mass-loading parameter variations, motivated by the inside-out galaxy formation scenario with radial mixing, reproduces the two sequences better than a one-zone model with two infall epochs. We present [X/Fe]-[Fe/H] tracks for 20 elements assuming three different supernova yield models and find some significant discrepancies with solar neighborhood observations, especially for elements with strongly metallicity-dependent yields. We apply principal component abundance analysis to the simulations and existing data to reveal the main correlations among abundances and quantify their contributions to variation in abundance space. For the stellar population mixing scenario, the abundances of α-elements and elements with metallicity-dependent yields dominate the first and second principal components, respectively, and collectively explain 99% of the variance in the model. flexCE is a python package available at https://github.com/bretthandrews/flexCE.

  4. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    PubMed

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models and accounts for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines with linear piecewise splines, varying the number and position of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95% CI 0.64 to 0.68; p < 0.001), which we modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth
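
    The population-level part of such a model rests on a cubic spline basis. A minimal numpy sketch of a truncated-power cubic basis fitted by ordinary least squares (the random effects and autocorrelation of the full model are omitted; the curve, knots, and noise level are illustrative, not the study's data):

    ```python
    import numpy as np

    def cubic_spline_basis(t, knots):
        """Truncated-power basis for a cubic regression spline:
        columns [1, t, t^2, t^3] plus (t - k)_+^3 for each interior knot k."""
        cols = [np.ones_like(t), t, t**2, t**3]
        cols += [np.clip(t - k, 0.0, None) ** 3 for k in knots]
        return np.column_stack(cols)

    # Toy "height vs age" data from a known smooth curve plus noise
    rng = np.random.default_rng(0)
    age = np.linspace(0.0, 4.0, 200)                              # years
    height = 50 + 6 * age + 1.5 * age**2 - 0.3 * age**3 \
             + rng.normal(0.0, 0.5, age.size)

    X = cubic_spline_basis(age, knots=[1.0, 2.0, 3.0])            # 3 interior knots
    beta, *_ = np.linalg.lstsq(X, height, rcond=None)             # OLS fit
    rmse = np.sqrt(np.mean((X @ beta - height) ** 2))             # ~ noise level
    ```

    In the full model, the same basis would enter the fixed-effects design matrix, with subject-specific random intercepts and slopes layered on top.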

  5. A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.

    PubMed

    Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin

    2017-02-01

    The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed. Copyright © 2016 Elsevier Ltd. All rights reserved.
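
    The mixed MNL idea can be sketched generically: choice probabilities are standard logit probabilities averaged over draws from the random-coefficient distribution. A numpy sketch with simulated draws (the attributes, coefficient means, and standard deviations below are illustrative, not the paper's fitted model):

    ```python
    import numpy as np

    def mixed_mnl_probs(X, mu, sigma, n_draws=5000, seed=0):
        """Simulated choice probabilities for a mixed multinomial logit.

        X     : (J, K) attributes of the J alternatives
        mu    : (K,) means of normally distributed coefficients
        sigma : (K,) std devs (0 => fixed rather than random coefficient)
        """
        rng = np.random.default_rng(seed)
        beta = mu + sigma * rng.standard_normal((n_draws, len(mu)))  # (R, K)
        util = beta @ X.T                                            # (R, J)
        util -= util.max(axis=1, keepdims=True)                      # stabilize exp
        expu = np.exp(util)
        probs = expu / expu.sum(axis=1, keepdims=True)               # logit per draw
        return probs.mean(axis=0)                                    # average over draws

    # Three severity levels described by two illustrative attributes
    X = np.array([[1.0, 0.0],
                  [0.5, 1.0],
                  [0.0, 2.0]])
    p = mixed_mnl_probs(X, mu=np.array([1.0, -0.5]), sigma=np.array([0.5, 0.0]))
    ```

    The paper's generalization replaces the linear utility `beta @ X.T` with nonlinear predictors of the contributing factors; the averaging over random coefficients is unchanged.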

  6. Data on copula modeling of mixed discrete and continuous neural time series.

    PubMed

    Hu, Meng; Li, Mingyao; Li, Wu; Liang, Hualou

    2016-06-01

    Copula is an important tool for modeling neural dependence. Recent work on copula has been expanded to jointly model mixed time series in neuroscience ("Hu et al., 2016, Joint Analysis of Spikes and Local Field Potentials using Copula" [1]). Here we present further data for joint analysis of spike and local field potential (LFP) with copula modeling. In particular, the details of different model orders and the influence of possible spike contamination in LFP data from the same and different electrode recordings are presented. To further facilitate the use of our copula model for the analysis of mixed data, we provide the Matlab codes, together with example data.
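
    The core trick of copula modeling of mixed data is to couple a continuous and a discrete marginal through a shared latent dependence layer. A numpy-only Gaussian-copula sketch generating correlated "LFP" (Gaussian marginal) and "spike count" (Poisson marginal) series; all values are illustrative and this is not the authors' Matlab code:

    ```python
    import numpy as np
    from math import erf, sqrt

    def poisson_ppf(u, lam, kmax=200):
        """Inverse CDF of Poisson(lam), built from its cumulative pmf."""
        k = np.arange(kmax + 1)
        pmf = np.exp(-lam) * np.cumprod(np.concatenate(([1.0], lam / k[1:])))
        cdf = np.cumsum(pmf)
        return np.searchsorted(cdf, u)      # smallest k with F(k) >= u

    rng = np.random.default_rng(7)
    n, rho = 20000, 0.7

    # Correlated standard normals: the Gaussian copula layer
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + sqrt(1 - rho**2) * rng.standard_normal(n)

    # Uniform scores via the normal CDF, then the marginal transforms
    u2 = np.array([0.5 * (1 + erf(z / sqrt(2))) for z in z2])
    lfp = 10.0 + 2.0 * z1                   # continuous Gaussian marginal
    spikes = poisson_ppf(u2, lam=3.0)       # discrete Poisson marginal
    ```

    The dependence (set by `rho`) survives the marginal transforms, which is exactly what lets a copula jointly model spikes and LFPs with very different marginal distributions.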

  7. Lagrangian Mixing in an Axisymmetric Hurricane Model

    DTIC Science & Technology

    2010-07-23

    The MMR r is found by taking the log of the time series δρ(t) − A1, where A1 is 90% of the minimum value of δρ(t), and the slope of the linear func...Advective mixing in a nondivergent barotropic hurricane model, Atmos. Chem. Phys., 10, 475–497, doi:10.5194/acp-10-475-2010, 2010. Salman, H., Ide, K

  8. Mixed-phase cloud physics and Southern Ocean cloud feedback in climate models

    DOE PAGES

    McCoy, Daniel T.; Hartmann, Dennis L.; Zelinka, Mark D.; ...

    2015-08-21

    Increasing optical depth poleward of 45° is a robust response to warming in global climate models. Much of this cloud optical depth increase has been hypothesized to be due to transitions from ice-dominated to liquid-dominated mixed-phase cloud. In this study, the importance of liquid-ice partitioning for the optical depth feedback is quantified for 19 Coupled Model Intercomparison Project Phase 5 models. All models show a monotonic partitioning of ice and liquid as a function of temperature, but the temperature at which ice and liquid are equally mixed (the glaciation temperature) varies by as much as 40 K across models. Models that have a higher glaciation temperature are found to have a smaller climatological liquid water path (LWP) and condensed water path and experience a larger increase in LWP as the climate warms. The ice-liquid partitioning curve of each model may be used to calculate the response of LWP to warming. It is found that the repartitioning between ice and liquid in a warming climate contributes at least 20% to 80% of the increase in LWP as the climate warms, depending on model. Intermodel differences in the climatological partitioning between ice and liquid are estimated to contribute at least 20% to the intermodel spread in the high-latitude LWP response in the mixed-phase region poleward of 45°S. As a result, it is hypothesized that a more thorough evaluation and constraint of global climate model mixed-phase cloud parameterizations and validation of the total condensate and ice-liquid apportionment against observations will yield a substantial reduction in model uncertainty in the high-latitude cloud response to warming.
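
    The monotonic ice-liquid partitioning can be caricatured as a liquid-fraction curve of temperature whose 0.5 crossing defines the glaciation temperature. A toy logistic sketch (the functional form, width, and glaciation temperatures are assumptions for illustration, not a CMIP5 parameterization):

    ```python
    import numpy as np

    def liquid_fraction(T, T_glac, width=10.0):
        """Illustrative logistic partitioning: fraction of condensate that is
        liquid, equal to 0.5 at the glaciation temperature T_glac (kelvin)."""
        return 1.0 / (1.0 + np.exp(-(T - T_glac) / width))

    T = np.linspace(220.0, 280.0, 121)
    high_glac = liquid_fraction(T, T_glac=260.0)  # glaciates at warmer temperatures
    low_glac = liquid_fraction(T, T_glac=245.0)

    # A model with a higher glaciation temperature holds less liquid at every
    # temperature, so it has more ice available to convert to liquid (raising
    # LWP and optical depth) as the climate warms.
    ```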

  9. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Shibata, Kiyotaka; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
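
    Taking the abstract's definitions at face value (aging by mixing is AoA minus RCTT, and mixing efficiency measures the relative increase in AoA by mixing), the diagnostic can be sketched in a few lines. The AoA and RCTT values below are invented for illustration; only the 0.24 and 1.02 endpoints echo the quoted range:

    ```python
    import numpy as np

    def mixing_efficiency(aoa, rctt):
        """Relative increase in age of air due to mixing, reading the
        'additional aging by mixing' as AoA - RCTT (simplified sketch
        of the diagnostic described in the abstract)."""
        aoa, rctt = np.asarray(aoa, float), np.asarray(rctt, float)
        return (aoa - rctt) / rctt

    # Hypothetical mean ages (years) for three models sharing the same
    # residual circulation: the AoA spread comes from mixing alone.
    aoa = np.array([4.2, 3.1, 5.05])
    rctt = np.array([2.5, 2.5, 2.5])
    eff = mixing_efficiency(aoa, rctt)
    ```

    With a fixed RCTT, the spread in `eff` maps one-to-one onto the spread in AoA, which is the paper's point that mixing differences, not circulation strength, dominate the inter-model spread.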

  10. A mixed model for the relationship between climate and human cranial form.

    PubMed

    Katz, David C; Grote, Mark N; Weaver, Timothy D

    2016-08-01

    We expand upon a multivariate mixed model from quantitative genetics in order to estimate the magnitude of climate effects in a global sample of recent human crania. In humans, genetic distances are correlated with distances based on cranial form, suggesting that population structure influences both genetic and quantitative trait variation. Studies controlling for this structure have demonstrated significant underlying associations of cranial distances with ecological distances derived from climate variables. However, to assess the biological importance of an ecological predictor, estimates of effect size and uncertainty in the original units of measurement are clearly preferable to significance claims based on units of distance. Unfortunately, the magnitudes of ecological effects are difficult to obtain with distance-based methods, while models that produce estimates of effect size generally do not scale to high-dimensional data like cranial shape and form. Using recent innovations that extend quantitative genetics mixed models to highly multivariate observations, we estimate morphological effects associated with a climate predictor for a subset of the Howells craniometric dataset. Several measurements, particularly those associated with cranial vault breadth, show a substantial linear association with climate, and the multivariate model incorporating a climate predictor is preferred in model comparison. Previous studies demonstrated the existence of a relationship between climate and cranial form. The mixed model quantifies this relationship concretely. Evolutionary questions that require population structure and phylogeny to be disentangled from potential drivers of selection may be particularly well addressed by mixed models. Am J Phys Anthropol 160:593-603, 2016. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.

  11. Numerical Study of Mixing Thermal Conductivity Models for Nanofluid Heat Transfer Enhancement

    NASA Astrophysics Data System (ADS)

    Pramuanjaroenkij, A.; Tongkratoke, A.; Kakaç, S.

    2018-01-01

    Researchers have paid attention to nanofluid applications, since nanofluids have revealed their potential as working fluids in many thermal systems. Numerical studies of convective heat transfer in nanofluids can be based on considering them as single- and two-phase fluids. This work is focused on improving the single-phase nanofluid model performance, since the employment of this model requires less calculation time and it is less complicated due to utilizing the mixing thermal conductivity model, which combines static and dynamic parts used in the simulation domain alternately. The in-house numerical program has been developed to analyze the effects of the grid nodes, effective viscosity model, boundary-layer thickness, and of the mixing thermal conductivity model on the nanofluid heat transfer enhancement. CuO-water, Al2O3-water, and Cu-water nanofluids are chosen, and their laminar fully developed flows through a rectangular channel are considered. The influence of the effective viscosity model on the nanofluid heat transfer enhancement is estimated through the average differences between the numerical and experimental results for the nanofluids mentioned. The nanofluid heat transfer enhancement results show that the mixing thermal conductivity model consisting of the Maxwell model as the static part and the Yu and Choi model as the dynamic part, being applied to all three nanofluids, brings the numerical results closer to the experimental ones. The average differences between those results for CuO-water, Al2O3-water, and Cu-water nanofluid flows are 3.25, 2.74, and 3.02%, respectively. The mixing thermal conductivity model has been proved to increase the accuracy of the single-phase nanofluid simulation and to reveal its potential in the single-phase nanofluid numerical studies.
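
    The static part of the mixing model is the classical Maxwell effective-conductivity formula for a dilute suspension of spherical particles; the dynamic (Yu and Choi) part is omitted here. A sketch with illustrative property values (the particle conductivity is approximate, not taken from the study):

    ```python
    import numpy as np

    def k_maxwell(k_f, k_p, phi):
        """Maxwell model for the static effective thermal conductivity of a
        dilute suspension of spherical particles.

        k_f : base-fluid conductivity, W/(m K)
        k_p : particle conductivity,   W/(m K)
        phi : particle volume fraction
        """
        num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
        den = k_p + 2 * k_f - phi * (k_p - k_f)
        return k_f * num / den

    # Illustrative values: water base fluid with CuO particles at 1-4 vol%
    k_water, k_cuo = 0.613, 20.0        # W/(m K); particle value approximate
    phi = np.array([0.01, 0.02, 0.04])
    k_static = k_maxwell(k_water, k_cuo, phi)
    ```

    The mixing approach alternates a static formula like this with a dynamic model that accounts for particle motion, applying each in different parts of the simulation domain.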

  12. Modeling Photodetachment from HO2- Using the pd Case of the Generalized Mixed Character Molecular Orbital Model

    NASA Astrophysics Data System (ADS)

    Blackstone, Christopher C.; Sanov, Andrei

    2016-06-01

    Using the generalized model for photodetachment of electrons from mixed-character molecular orbitals, we gain insight into the nature of the HOMO of HO2- by treating it as a coherent superposition of one p- and one d-type atomic orbital. Fitting the pd model function to the ab initio calculated HOMO of HO2- yields a fractional d-character, γp, of 0.979. The modeled curve of the anisotropy parameter, β, as a function of electron kinetic energy for a pd-type mixed character orbital is matched to the experimental data.

  13. A Comparison of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G², Orlando and Thissen's S-X² and S-G², and Stone's χ²* and G²*. To investigate the…

  14. Inflow, Outflow, Yields, and Stellar Population Mixing in Chemical Evolution Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Brett H.; Weinberg, David H.; Schönrich, Ralph

    Chemical evolution models are powerful tools for interpreting stellar abundance surveys and understanding galaxy evolution. However, their predictions depend heavily on the treatment of inflow, outflow, star formation efficiency (SFE), the stellar initial mass function, the SN Ia delay time distribution, stellar yields, and stellar population mixing. Using flexCE, a flexible one-zone chemical evolution code, we investigate the effects of and trade-offs between parameters. Two critical parameters are SFE and the outflow mass-loading parameter, which shift the knee in [O/Fe]–[Fe/H] and the equilibrium abundances that the simulations asymptotically approach, respectively. One-zone models with simple star formation histories follow narrow tracks in [O/Fe]–[Fe/H] unlike the observed bimodality (separate high-α and low-α sequences) in this plane. A mix of one-zone models with inflow timescale and outflow mass-loading parameter variations, motivated by the inside-out galaxy formation scenario with radial mixing, reproduces the two sequences better than a one-zone model with two infall epochs. We present [X/Fe]–[Fe/H] tracks for 20 elements assuming three different supernova yield models and find some significant discrepancies with solar neighborhood observations, especially for elements with strongly metallicity-dependent yields. We apply principal component abundance analysis to the simulations and existing data to reveal the main correlations among abundances and quantify their contributions to variation in abundance space. For the stellar population mixing scenario, the abundances of α-elements and elements with metallicity-dependent yields dominate the first and second principal components, respectively, and collectively explain 99% of the variance in the model. flexCE is a python package available at https://github.com/bretthandrews/flexCE.

  15. Study on system dynamics of evolutionary mix-game models

    NASA Astrophysics Data System (ADS)

    Gou, Chengling; Guo, Xiaoqian; Chen, Fang

    2008-11-01

    The mix-game model is adapted from an agent-based minority game (MG) model and is used to simulate the real financial market. Unlike in MG, there are two groups of agents in the mix-game: Group 1 plays a majority game and Group 2 plays a minority game. The two groups of agents have different bounded abilities to process historical information and to track their own performance. In this paper, we modify the mix-game model by giving agents the ability to evolve: if an agent's winning rate falls below a threshold, it copies the best strategies of another agent, and agents repeat this evolution at set time intervals. Through simulations, this paper finds that: (1) the average winning rates of agents in Group 1 and the mean volatilities increase as the thresholds of Group 1 increase; (2) the average winning rates of both groups decrease but the mean volatilities of the system increase as the thresholds of Group 2 increase; (3) the thresholds of Group 2 have a greater impact on system dynamics than those of Group 1; (4) the characteristics of the system dynamics under different time intervals of strategy change are qualitatively similar but quantitatively different; (5) as the time interval of strategy change increases from 1 to 20, the system becomes increasingly stable and the performance of agents in both groups also improves.

  16. Analysis of mixed model in gear transmission based on ADAMS

    NASA Astrophysics Data System (ADS)

    Li, Xiufeng; Wang, Yabin

    2012-09-01

    Traditional methods of mechanical gear-drive simulation include the gear-pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower accuracy; the latter usually obtains more precise results, but the calculation process is complex and does not converge easily. Most current research focuses on the description of geometric models and the definition of boundary conditions, but neither approach solves these problems fundamentally. To improve simulation efficiency while ensuring accurate results, a mixed-model method is presented that uses gear tooth profiles in place of solid gears to simulate gear movement. The modeling process is as follows: first, build solid models of the mechanism in SolidWorks; then collect the point coordinates of the gear outline curves using the SolidWorks API and create fitted curves in Adams from those coordinates; next, adjust the position of the fitted curves according to the position of the contact area; finally, define the loading conditions, boundary conditions, and simulation parameters. The method provides gear shape information through tooth profile curves, simulates the meshing process through curve-to-curve contact, and supplies mass and inertia data via the solid gear models; the simulation combines the two models to complete the gear-drive analysis. To verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement were conducted. The results show that the computational efficiency of the mixed-model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to theoretical calculations. Consequently, the mixed-model method has high application value for studying the dynamics of gear mechanisms.

  17. Teaching Service Modelling to a Mixed Class: An Integrated Approach

    ERIC Educational Resources Information Center

    Deng, Jeremiah D.; Purvis, Martin K.

    2015-01-01

    Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…

  18. Development and validation of a turbulent-mix model for variable-density and compressible flows.

    PubMed

    Banerjee, Arindam; Gore, Robert A; Andrews, Malcolm J

    2010-10-01

    The modeling of buoyancy driven turbulent flows is considered in conjunction with an advanced statistical turbulence model referred to as the BHR (Besnard-Harlow-Rauenzahn) k-S-a model. The BHR k-S-a model is focused on variable-density and compressible flows such as Rayleigh-Taylor (RT), Richtmyer-Meshkov (RM), and Kelvin-Helmholtz (KH) driven mixing. The BHR k-S-a turbulence mix model has been implemented in the RAGE hydro-code, and model constants are evaluated based on analytical self-similar solutions of the model equations. The results are then compared with a large test database available from experiments and direct numerical simulations (DNS) of RT, RM, and KH driven mixing. Furthermore, we describe research to understand how the BHR k-S-a turbulence model operates over a range of moderate to high Reynolds number buoyancy driven flows, with a goal of placing the modeling of buoyancy driven turbulent flows at the same level of development as that of single phase shear flows.

  19. Attribution of horizontal and vertical contributions to spurious mixing in an Arbitrary Lagrangian-Eulerian ocean model

    NASA Astrophysics Data System (ADS)

    Gibson, Angus H.; Hogg, Andrew McC.; Kiss, Andrew E.; Shakespeare, Callum J.; Adcroft, Alistair

    2017-11-01

    We examine the separate contributions to spurious mixing from horizontal and vertical processes in an ALE ocean model, MOM6, using reference potential energy (RPE). The RPE is a global diagnostic which changes only due to mixing between density classes. We extend this diagnostic to a sub-timestep timescale in order to individually separate contributions to spurious mixing through horizontal (tracer advection) and vertical (regridding/remapping) processes within the model. We both evaluate the overall spurious mixing in MOM6 against previously published output from other models (MOM5, MITGCM and MPAS-O), and investigate impacts on the components of spurious mixing in MOM6 across a suite of test cases: a lock exchange, internal wave propagation, and a baroclinically-unstable eddying channel. The split RPE diagnostic demonstrates that the spurious mixing in a lock exchange test case is dominated by horizontal tracer advection, due to the spatial variability in the velocity field. In contrast, the vertical component of spurious mixing dominates in an internal waves test case. MOM6 performs well in this test case owing to its quasi-Lagrangian implementation of ALE. Finally, the effects of model resolution are examined in a baroclinic eddies test case. In particular, the vertical component of spurious mixing dominates as horizontal resolution increases, an important consideration as global models evolve towards higher horizontal resolutions.
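
    The RPE diagnostic itself is simple: the potential energy of the adiabatically resorted state, in which the densest fluid lies deepest, so it changes only when mixing moves mass between density classes. A 1-D column sketch (uniform parcels and illustrative densities; the sub-timestep splitting of the paper is not reproduced):

    ```python
    import numpy as np

    def reference_pe(rho, z, dz, g=9.81):
        """Reference potential energy of a 1-D column: the potential energy
        after adiabatically resorting parcels so density decreases upward."""
        rho_sorted = np.sort(rho)[::-1]   # densest first ...
        z_sorted = np.sort(z)             # ... assigned to the lowest heights
        return g * np.sum(rho_sorted * z_sorted * dz)

    z = np.linspace(0.5, 9.5, 10)                  # parcel centers, 1 m thick
    dz = 1.0
    stable = np.linspace(1027.0, 1025.0, 10)       # dense at bottom

    swapped = stable.copy()                        # adiabatic rearrangement
    swapped[3], swapped[6] = swapped[6], swapped[3]

    mixed = stable.copy()                          # genuine (here: spurious) mixing
    mixed[3] = mixed[6] = 0.5 * (stable[3] + stable[6])

    rpe0 = reference_pe(stable, z, dz)
    rpe_swapped = reference_pe(swapped, z, dz)     # unchanged: same density classes
    rpe_mixed = reference_pe(mixed, z, dz)         # raised: mass crossed classes
    pe_swapped = 9.81 * np.sum(swapped * z * dz)   # actual PE of the swapped state
    ```

    Rearranging parcels leaves RPE untouched while mixing raises it, which is exactly why RPE isolates (spurious) mixing from reversible advection.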

  20. Progress Report on SAM Reduced-Order Model Development for Thermal Stratification and Mixing during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, R.

    This report documents the initial progress on the reduced-order flow model developments in SAM for thermal stratification and mixing modeling. Two different modeling approaches are pursued. The first one is based on one-dimensional fluid equations with additional terms accounting for the thermal mixing from both flow circulations and turbulent mixing. The second approach is based on three-dimensional coarse-grid CFD approach, in which the full three-dimensional fluid conservation equations are modeled with closure models to account for the effects of turbulence.

  1. Random effects coefficient of determination for mixed and meta-analysis models

    PubMed Central

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2011-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction in the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of Rr2 away from 0 indicates variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects—the model can be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine. PMID:23750070
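
    For the random intercept model, the proportion of conditional variance attributable to random effects can be illustrated with a simple moment estimator, the between-group share of variance. This is an illustrative proxy for the idea, not necessarily the authors' exact formula for Rr2:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_groups, n_per = 200, 25
    sigma_b, sigma_e = 2.0, 1.0            # random-intercept sd, residual sd

    b = rng.normal(0.0, sigma_b, n_groups)                    # random intercepts
    y = b[:, None] + rng.normal(0.0, sigma_e, (n_groups, n_per))

    # Moment estimates of the variance components
    within = y.var(axis=1, ddof=1).mean()                     # ~ sigma_e^2
    between = y.mean(axis=1).var(ddof=1) - within / n_per     # ~ sigma_b^2

    r2_random = between / (between + within)   # share of variance from random effects
    ```

    Here the true share is 4/(4 + 1) = 0.8, so the estimate sits well away from 0: strong support for keeping the random effects rather than collapsing to ordinary linear regression.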

  2. Random effects coefficient of determination for mixed and meta-analysis models.

    PubMed

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction in the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of [Formula: see text] away from 0 indicates variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects-the model can be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine.

  3. Mixing characterisation of full-scale membrane bioreactors: CFD modelling with experimental validation.

    PubMed

    Brannock, M; Wang, Y; Leslie, G

    2010-05-01

    Membrane Bioreactors (MBRs) have been successfully used in aerobic biological wastewater treatment to solve the perennial problem of effective solids-liquid separation. The optimisation of MBRs requires knowledge of the membrane fouling, biokinetics and mixing. However, research has mainly concentrated on the fouling and biokinetics (Ng and Kim, 2007). Current methods of design for a desired flow regime within MBRs are largely based on assumptions (e.g. complete mixing of tanks) and empirical techniques (e.g. specific mixing energy). However, it is difficult to predict how sludge rheology and vessel design in full-scale installations affects hydrodynamics, hence overall performance. Computational Fluid Dynamics (CFD) provides a method for prediction of how vessel features and mixing energy usage affect the hydrodynamics. In this study, a CFD model was developed which accounts for aeration, sludge rheology and geometry (i.e. bioreactor and membrane module). This MBR CFD model was then applied to two full-scale MBRs and was successfully validated against experimental results. The effect of sludge settling and rheology was found to have a minimal impact on the bulk mixing (i.e. the residence time distribution).

  4. Modelling ventricular fibrillation coarseness during cardiopulmonary resuscitation by mixed effects stochastic differential equations.

    PubMed

    Gundersen, Kenneth; Kvaløy, Jan Terje; Eftestøl, Trygve; Kramer-Johansen, Jo

    2015-10-15

    For patients undergoing cardiopulmonary resuscitation (CPR) and being in a shockable rhythm, the coarseness of the electrocardiogram (ECG) signal is an indicator of the state of the patient. In the current work, we show how mixed effects stochastic differential equations (SDE) models, commonly used in pharmacokinetic and pharmacodynamic modelling, can be used to model the relationship between CPR quality measurements and ECG coarseness. This is a novel application of mixed effects SDE models to a setting quite different from previous applications of such models and where using such models nicely solves many of the challenges involved in analysing the available data. Copyright © 2015 John Wiley & Sons, Ltd.
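
    The basic ingredients (an SDE simulated per subject, with a subject-specific random effect in its parameters) can be sketched with Euler-Maruyama. The Ornstein-Uhlenbeck form, the random effect on the mean-reversion level, and all numbers below are assumptions for illustration, not the authors' model of ECG coarseness:

    ```python
    import numpy as np

    def simulate_ou(theta, mu, sigma, x0, dt, n_steps, rng):
        """Euler-Maruyama simulation of dX = theta * (mu - X) dt + sigma dW."""
        x = np.empty(n_steps + 1)
        x[0] = x0
        for k in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(dt))
            x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
        return x

    rng = np.random.default_rng(42)
    n_subjects, dt, n_steps = 50, 0.01, 2000
    mu_pop, sd_re = 5.0, 1.0          # population level and random-effect sd

    # Mixed effects: each subject reverts to its own level mu_i = mu_pop + b_i
    levels = mu_pop + rng.normal(0.0, sd_re, n_subjects)
    finals = np.array([simulate_ou(2.0, m, 0.3, 0.0, dt, n_steps, rng)[-1]
                       for m in levels])
    ```

    After the transient dies out, each trajectory fluctuates around its subject-specific level, so between-subject variability in `finals` is dominated by the random effect rather than the diffusion noise.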

  5. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
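
    Of the candidate curves, the Wood (1967) function is the simplest to state: y(t) = a·t^b·exp(−c·t), with a single analytic extremum at t* = b/c. A sketch with illustrative parameters (not fitted to the buffalo data, and evaluated as a fixed curve rather than the study's non-linear mixed model):

    ```python
    import numpy as np

    def wood(t, a, b, c):
        """Wood lactation-curve function y(t) = a * t**b * exp(-c * t)."""
        return a * t**b * np.exp(-c * t)

    # Illustrative parameters: for b, c > 0 the curve rises to a single
    # extremum at t* = b / c and then declines.
    a, b, c = 1.2, 0.2, 0.05
    t = np.linspace(0.1, 30.0, 600)    # test day (arbitrary units)
    y = wood(t, a, b, c)
    t_star = b / c                     # analytic location of the extremum
    ```

    In the mixed-model setting, a, b, and c become subject-specific (fixed plus random effects), and fit is compared across candidate functions via AIC, BIC, and log-likelihood, as in the abstract.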

  6. Evaluating targeted interventions via meta-population models with multi-level mixing.

    PubMed

    Feng, Zhilan; Hill, Andrew N; Curns, Aaron T; Glasser, John W

    2017-05-01

    Among the several means by which heterogeneity can be modeled, Levins' (1969) meta-population approach preserves the most analytical tractability, a virtue to the extent that generality is desirable. When model populations are stratified, contacts among their respective sub-populations must be described. Using a simple meta-population model, Feng et al. (2015) showed that mixing among sub-populations, as well as heterogeneity in characteristics affecting sub-population reproduction numbers, must be considered when evaluating public health interventions to prevent or control infectious disease outbreaks. They employed the convex combination of preferential within- and proportional among-group contacts first described by Nold (1980) and subsequently generalized by Jacquez et al. (1988). As the utility of meta-population modeling depends on more realistic mixing functions, the authors added preferential contacts between parents and children and among co-workers (Glasser et al., 2012). Here they further generalize this function by including preferential contacts between grandparents and grandchildren, but omit workplace contacts. They also describe a general multi-level mixing scheme, provide three two-level examples, and apply two of them. In their first application, the authors describe age- and gender-specific patterns in face-to-face conversations (Mossong et al., 2008), proxies for contacts by which respiratory pathogens might be transmitted, that are consistent with everyday experience. This suggests that meta-population models with inter-generational mixing could be employed to evaluate prolonged school-closures, a proposed pandemic mitigation measure that could expose grandparents, and other elderly surrogate caregivers for working parents, to infectious children. 
In their second application, the authors use a meta-population SEIR model stratified by 7 age groups and 50 states plus the District of Columbia, to compare actual with optimal vaccination during the
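The mixing function at the heart of this family of models, Nold's (1980) convex combination of preferential within-group and proportional among-group contacts, can be sketched directly. The group sizes, contact rates and preference fractions below are hypothetical three-group values, not those of the paper.

```python
import numpy as np

N = np.array([800.0, 1000.0, 400.0])    # group sizes (children/adults/elderly)
a = np.array([10.0, 8.0, 5.0])          # per-capita contact rates
eps = np.array([0.4, 0.2, 0.3])         # within-group preference fractions

avail = (1.0 - eps) * a * N             # contacts offered to the common pool
f = avail / avail.sum()                 # proportional-mixing shares

# row i: fraction eps_i reserved within group, rest spread proportionally
P = eps[:, None] * np.eye(3) + (1.0 - eps)[:, None] * f[None, :]
print(np.round(P, 3))
print(np.allclose(P.sum(axis=1), 1.0))  # each row is a contact distribution
```

The generalizations discussed above (parent-child, grandparent-grandchild terms) add further preferential components to the same convex combination.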

  7. Mixed layer modeling in the East Pacific warm pool during 2002

    NASA Astrophysics Data System (ADS)

    Van Roekel, Luke P.; Maloney, Eric D.

    2012-06-01

    Two vertical mixing models (the modified dynamic instability model of Price et al. (PWP) and the K-Profile Parameterization (KPP)) are used to analyze intraseasonal sea surface temperature (SST) variability in the northeast tropical Pacific near the Costa Rica Dome during boreal summer of 2002. Anomalies in surface latent heat flux and shortwave radiation are the root cause of the three intraseasonal SST oscillations of order 1°C amplitude that occur during this time, although surface stress variations have a significant impact on the third event. A slab ocean model that uses observed monthly varying mixed layer depths and accounts for penetrating shortwave radiation simulates the first two SST oscillations well, but not the third. The third oscillation is associated with shallow mixed layers (<5 m) forced by, and acting with, weak surface stresses and a stabilizing heat flux, which cause a transient spike in SST of 2°C. Intraseasonal variations in freshwater flux due to precipitation and diurnal flux variability do not significantly impact these intraseasonal oscillations. These results suggest that a slab ocean coupled to an atmospheric general circulation model, as used in previous studies of east Pacific intraseasonal variability, may not be entirely adequate to realistically simulate SST variations. Further, while most of the results from the PWP and KPP models are similar, some important differences that emerge are discussed.
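The slab ocean component discussed above reduces to a one-line heat budget, dT/dt = Q_net / (rho * cp * h). The sketch below integrates it for a hypothetical sinusoidal intraseasonal flux anomaly and two assumed mixed layer depths, illustrating why a very shallow (~5 m) mixed layer produces a much larger SST excursion; all numbers are for illustration only.

```python
import numpy as np

rho, cp = 1025.0, 3990.0                       # seawater density, heat capacity
dt = 86400.0                                   # one-day step (s)
days = np.arange(60)
Q = 60.0 * np.sin(2.0 * np.pi * days / 40.0)   # flux anomaly (W m^-2)

def integrate_sst(h):
    """Integrate dT/dt = Q/(rho*cp*h) for a slab of mixed layer depth h (m)."""
    T, out = 0.0, []
    for q in Q:
        T += q * dt / (rho * cp * h)
        out.append(T)
    return np.array(out)

deep, shallow = integrate_sst(30.0), integrate_sst(5.0)
print(round(deep.max(), 2), round(shallow.max(), 2))
```

Because the budget is linear, the SST response scales inversely with the mixed layer depth, which is why the thin-layer case spikes.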

  8. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    EPA Science Inventory

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
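For the simplest case of two sources and one isotope, the mixing model has a closed-form solution: delta_mix = f*d1 + (1-f)*d2 gives f = (delta_mix - d2)/(d1 - d2). The delta values below are hypothetical; with more sources than isotope systems plus one, the "too many sources" problem mentioned above makes the fractions underdetermined.

```python
# Two-source, one-isotope mixing: delta_mix = f*d1 + (1-f)*d2.
# The delta-15N signatures below are hypothetical (per mil).
d1, d2 = 4.0, 12.0       # source signatures
d_mix = 9.0              # measured mixture

f1 = (d_mix - d2) / (d1 - d2)   # proportional contribution of source 1
f2 = 1.0 - f1
print(f1, f2)            # 0.375 0.625
```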

  9. Development of stable isotope mixing models in ecology - Dublin

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  10. Historical development of stable isotope mixing models in ecology

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  11. Development of stable isotope mixing models in ecology - Perth

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  12. Development of stable isotope mixing models in ecology - Fremantle

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  13. Development of stable isotope mixing models in ecology - Sydney

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  14. Horizontal mixing coefficients for two-dimensional chemical models calculated from National Meteorological Center Data

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Schoeberl, M. R.; Plumb, R. A.

    1986-01-01

    Calculations of the two-dimensional, species-independent mixing coefficients for two-dimensional chemical models for the troposphere and stratosphere are performed using quasi-geostrophic potential vorticity fluxes and gradients from 4 years of National Meteorological Center data for the four seasons in both hemispheres. Results show that the horizontal mixing coefficient values for the winter lower stratosphere are broadly consistent with those currently employed in two-dimensional models, but the horizontal mixing coefficient values in the northern winter upper stratosphere are much larger than those usually used.
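The flux-gradient relation used in such calculations, Kyy = -&lt;v'q'&gt;/(d qbar/dy), can be made concrete with a minimal numpy sketch; the mean potential vorticity profile and the coefficient value below are hypothetical, chosen only to show the diagnostic recovering a known K from a synthetic down-gradient eddy flux.

```python
import numpy as np

y = np.linspace(0.0, 1.0e6, 11)           # meridional coordinate (m)
qbar = 1.0e-4 * (1.0 + y / 1.0e6)         # mean PV, increasing northward
K_true = 5.0e4                            # hypothetical mixing coeff (m^2/s)

grad = np.gradient(qbar, y)               # d(qbar)/dy
flux = -K_true * grad                     # down-gradient eddy PV flux <v'q'>

K_est = -flux / grad                      # flux-gradient diagnostic
print(np.allclose(K_est, K_true))         # True
```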

  15. Individuality in harpsichord performance: disentangling performer- and piece-specific influences on interpretive choices

    PubMed Central

    Gingras, Bruno; Asselin, Pierre-Yves; McAdams, Stephen

    2013-01-01

    Although a growing body of research has examined issues related to individuality in music performance, few studies have attempted to quantify markers of individuality that transcend pieces and musical styles. This study aims to identify such meta-markers by discriminating between influences linked to specific pieces or interpretive goals and performer-specific playing styles, using two complementary statistical approaches: linear mixed models (LMMs) to estimate fixed (piece and interpretation) and random (performer) effects, and similarity analyses to compare expressive profiles on a note-by-note basis across pieces and expressive parameters. Twelve professional harpsichordists recorded three pieces representative of the Baroque harpsichord repertoire, including three interpretations of one of these pieces, each emphasizing a different melodic line, on an instrument equipped with a MIDI console. Four expressive parameters were analyzed: articulation, note onset asynchrony, timing, and velocity. LMMs showed that piece-specific influences were much larger for articulation than for other parameters, for which performer-specific effects were predominant, and that piece-specific influences were generally larger than effects associated with interpretive goals. Some performers consistently deviated from the mean values for articulation and velocity across pieces and interpretations, suggesting that global measures of expressivity may in some cases constitute valid markers of artistic individuality. Similarity analyses detected significant associations among the magnitudes of the correlations between the expressive profiles of different performers. These associations were found both when comparing across parameters and within the same piece or interpretation, or on the same parameter and across pieces or interpretations. 
These findings suggest the existence of expressive meta-strategies that can manifest themselves across pieces, interpretive goals, or expressive devices
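The separation of piece (fixed) and performer (random) contributions described above can be illustrated with a toy simulation. This is a method-of-moments decomposition, not the REML-fitted LMMs of the study, and all effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_perf, n_piece = 12, 3
piece_fx = np.array([0.0, 0.5, -0.3])            # fixed piece effects
perf_fx = rng.normal(0.0, 0.4, n_perf)           # random performer effects

# simulated articulation score for each performer x piece cell
y = (piece_fx[None, :] + perf_fx[:, None]
     + rng.normal(0.0, 0.05, (n_perf, n_piece)))

piece_est = y.mean(axis=0) - y.mean()            # centred piece effects
perf_var = y.mean(axis=1).var(ddof=1)            # between-performer variance
print(np.round(piece_est, 2), round(perf_var, 3))
```

The column means recover the piece effects because the performer effects average out across the twelve performers, while the spread of row means estimates the performer (random-effect) variance.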

  16. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

    A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations is an important problem in sedimentological research. For instance, inverse analyses have been used to estimate hydraulic conditions from topography observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). These inverse analyses need forward models, and most turbidity current models employ uniform grain-size particles. Turbidity currents, however, are best characterized by the variation of their grain-size distributions. Although numerical models with mixed grain-size particles exist, their computational cost makes application to natural examples difficult (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method that optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current) with multi-point starts, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that inverse analysis using the mixed grain-size model recovered the known initial condition of the reference data even when the optimization started far from the true solution, whereas inverse analysis using the uniform grain-size model required starting parameters within a quite narrow range near the solution.
The
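A toy version of the Simplex-based inversion can be sketched as follows. The forward model here is a deliberately simplified, hypothetical deposit-thickness profile (not the shallow-water model of the study), with concentration held fixed so the two remaining parameters are identifiable; Nelder-Mead then recovers the known conditions from synthetic observations.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 10.0, 20)          # downstream distance (arbitrary units)

def forward(h, U):
    # toy deposit profile: thicker flows leave more sediment, faster flows
    # carry it farther; concentration fixed at 0.01 for identifiability
    return 0.01 * h * np.exp(-x / U)

obs = forward(2.0, 5.0)                 # synthetic "observed" deposit

def misfit(p):
    return np.sum((forward(p[0], p[1]) - obs) ** 2)

res = minimize(misfit, x0=[1.0, 3.0], method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-15})
print(np.round(res.x, 3))               # recovers h = 2.0, U = 5.0
```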

  17. Simulation of particle diversity and mixing state over Greater Paris: a model-measurement inter-comparison.

    PubMed

    Zhu, Shupeng; Sartelet, Karine N; Healy, Robert M; Wenger, John C

    2016-07-18

    Air quality models are used to simulate and forecast pollutant concentrations, from continental scales to regional and urban scales. These models usually assume that particles are internally mixed, i.e. particles of the same size have the same chemical composition, which may vary in space and time. Although this assumption may be realistic for continental-scale simulations, where particles originating from different sources have undergone sufficient mixing to achieve a common chemical composition for a given model grid cell and time, it may not be valid for urban-scale simulations, where particles from different sources interact on shorter time scales. To investigate the role of the mixing state assumption on the formation of particles, a size-composition resolved aerosol model (SCRAM) was developed and coupled to the Polyphemus air quality platform. Two simulations, one with the internal mixing hypothesis and another with the external mixing hypothesis, have been carried out for the period 15 January to 11 February 2010, when the MEGAPOLI winter field measurement campaign took place in Paris. The simulated bulk concentrations of chemical species and the concentrations of individual particle classes are compared with the observations of Healy et al. (Atmos. Chem. Phys., 2013, 13, 9479-9496) for the same period. The single particle diversity and the mixing-state index are computed based on the approach developed by Riemer et al. (Atmos. Chem. Phys., 2013, 13, 11423-11439), and they are compared to the measurement-based analyses of Healy et al. (Atmos. Chem. Phys., 2014, 14, 6289-6299). The average value of the single particle diversity, which represents the average number of species within each particle, is consistent between simulation and measurement (2.91 and 2.79 respectively). Furthermore, the average value of the mixing-state index is also well represented in the simulation (69% against 59% from the measurements). The spatial distribution of the mixing
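The diversity and mixing-state metrics of Riemer et al. used above follow a simple recipe: each particle's species diversity is the exponential of the Shannon entropy of its mass fractions, D_alpha is the mass-weighted average of these per-particle diversities, D_gamma is the diversity of the bulk composition, and the mixing-state index is chi = (D_alpha - 1)/(D_gamma - 1). The three-particle population below is hypothetical.

```python
import numpy as np

# rows = particles, columns = per-species masses (hypothetical population)
m = np.array([[1.0, 1.0, 0.0],    # evenly mixed pair of species
              [0.0, 1.0, 1.0],    # evenly mixed, different pair
              [2.0, 0.0, 0.0]])   # pure particle

def shannon(p):
    p = p[p > 0]                  # convention: 0*log(0) = 0
    return -np.sum(p * np.log(p))

p_particle = m / m.sum(axis=1, keepdims=True)      # per-particle fractions
w = m.sum(axis=1) / m.sum()                        # particle mass weights

H_i = np.array([shannon(row) for row in p_particle])
D_alpha = np.exp(np.sum(w * H_i))                  # avg particle diversity
D_gamma = np.exp(shannon(m.sum(axis=0) / m.sum())) # bulk diversity
chi = (D_alpha - 1.0) / (D_gamma - 1.0)            # mixing-state index
print(round(D_alpha, 2), round(D_gamma, 2), round(chi, 2))
```

chi runs from 0 (fully external mixture) to 1 (fully internal mixture), which is why the simulated 69% and measured 59% above are directly comparable.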

  18. Sensitivity of WallDYN material migration modeling to uncertainties in mixed-material surface binding energies

    DOE PAGES

    Nichols, J. H.; Jaworski, M. A.; Schmid, K.

    2017-03-09

    The WallDYN package has recently been applied to a number of tokamaks to self-consistently model the evolution of mixed-material plasma facing surfaces. A key component of the WallDYN model is the concentration-dependent surface sputtering rate, calculated using SDTRIM.SP. This modeled sputtering rate is strongly influenced by the surface binding energies (SBEs) of the constituent materials, which are well known for pure elements but often are poorly constrained for mixed materials. This work examines the sensitivity of WallDYN surface evolution calculations to different models for mixed-material SBEs, focusing on the carbon/lithium/oxygen/deuterium system present in NSTX. A realistic plasma background is reconstructed from a high density, H-mode NSTX discharge, featuring an attached outer strike point with local density and temperature of 4 × 10^20 m^-3 and 4 eV, respectively. It is found that various mixed-material SBE models lead to significant qualitative and quantitative changes in the surface evolution profile at the outer divertor, with the highest leverage parameter being the C-Li binding model. Uncertainties of order 50%, appearing on time scales relevant to tokamak experiments, highlight the importance of choosing an appropriate mixed-material sputtering representation when modeling the surface evolution of plasma facing components. Lastly, these results are generalized to other fusion-relevant materials with different ranges of SBEs.

  19. Sensitivity of WallDYN material migration modeling to uncertainties in mixed-material surface binding energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, J. H.; Jaworski, M. A.; Schmid, K.

    The WallDYN package has recently been applied to a number of tokamaks to self-consistently model the evolution of mixed-material plasma facing surfaces. A key component of the WallDYN model is the concentration-dependent surface sputtering rate, calculated using SDTRIM.SP. This modeled sputtering rate is strongly influenced by the surface binding energies (SBEs) of the constituent materials, which are well known for pure elements but often are poorly constrained for mixed materials. This work examines the sensitivity of WallDYN surface evolution calculations to different models for mixed-material SBEs, focusing on the carbon/lithium/oxygen/deuterium system present in NSTX. A realistic plasma background is reconstructed from a high density, H-mode NSTX discharge, featuring an attached outer strike point with local density and temperature of 4 × 10^20 m^-3 and 4 eV, respectively. It is found that various mixed-material SBE models lead to significant qualitative and quantitative changes in the surface evolution profile at the outer divertor, with the highest leverage parameter being the C-Li binding model. Uncertainties of order 50%, appearing on time scales relevant to tokamak experiments, highlight the importance of choosing an appropriate mixed-material sputtering representation when modeling the surface evolution of plasma facing components. Lastly, these results are generalized to other fusion-relevant materials with different ranges of SBEs.

  20. [Lethal anaphylactic shock model induced by human mixed serum in guinea pigs].

    PubMed

    Ren, Guang-Mu; Bai, Ji-Wei; Gao, Cai-Rong

    2005-08-01

    To establish an anaphylactic shock model induced by human mixed serum in guinea pigs. Eighteen guinea pigs were divided into two groups: sensitized and control. The sensitized group was immunized intracutaneously with human mixed serum, and shock was then induced by endocardiac injection after 3 weeks. Symptoms of anaphylactic shock appeared in the sensitized group, and serum IgE levels were significantly increased in this group. An animal model of anaphylactic shock was established successfully. It provides a tool for both forensic study and anaphylactic shock therapy.

  1. Nonlinear mixed modeling of basal area growth for shortleaf pine

    Treesearch

    Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin

    2008-01-01

    Mixed model estimation methods were used to fit individual-tree basal area growth models to tree and stand-level measurements available from permanent plots established in naturally regenerated shortleaf pine (Pinus echinata Mill.) even-aged stands in western Arkansas and eastern Oklahoma in the USA. As a part of the development of a comprehensive...

  2. Generalized linear mixed models with varying coefficients for longitudinal data.

    PubMed

    Zhang, Daowen

    2004-03-01

    The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.

  3. On the TAP Free Energy in the Mixed p-Spin Models

    NASA Astrophysics Data System (ADS)

    Chen, Wei-Kuo; Panchenko, Dmitry

    2018-05-01

    Thouless et al. (Phys Mag 35(3):593-601, 1977) derived a representation for the free energy of the Sherrington-Kirkpatrick model, called the TAP free energy, written as the difference of the energy and entropy on the extended configuration space of local magnetizations with an Onsager correction term. In the setting of mixed p-spin models with Ising spins, we prove that the free energy can indeed be written as the supremum of the TAP free energy over the space of local magnetizations whose Edwards-Anderson order parameter (self-overlap) is to the right of the support of the Parisi measure. Furthermore, for generic mixed p-spin models, we prove that the free energy is equal to the TAP free energy evaluated on the local magnetization of any pure state.

  4. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    PubMed

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum likelihood, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.

  5. Mixing behavior of a model cellulosic biomass slurry during settling and resuspension

    DOE PAGES

    Crawford, Nathan C.; Sprague, Michael A.; Stickel, Jonathan J.

    2016-01-29

    Thorough mixing during biochemical deconstruction of biomass is crucial for achieving maximum process yields and economic success. However, due to the complex morphology and surface chemistry of biomass particles, biomass mixing is challenging and currently not well understood. This study investigates the bulk rheology of negatively buoyant, non-Brownian α-cellulose particles during settling and resuspension. The torque signal of a vane mixer across two distinct experimental setups (vane-in-cup and vane-in-beaker) was used to understand how mixing conditions affect the distribution of biomass particles. During experimentation, a bifurcated torque response as a function of vane speed was observed, indicating that the slurry transitions from a “settling-dominant” regime to a “suspension-dominant” regime. The torque response of a well-characterized fluid (DI water) was then used to empirically identify when sufficient mixing turbulence was established in each experimental setup. The predicted critical mixing speeds were in agreement with measured values, suggesting that secondary flows are required in order to keep the cellulose particles fully suspended. In addition, a simple scaling relationship was developed to model the entire torque signal of the slurry throughout settling and resuspension. Furthermore, qualitative and semi-quantitative agreement between the model and experimental results was observed.

  6. Testing mixing models of old and young groundwater in a tropical lowland rain forest with environmental tracers

    NASA Astrophysics Data System (ADS)

    Solomon, D. Kip; Genereux, David P.; Plummer, L. Niel; Busenberg, Eurybiades

    2010-04-01

    We tested three models of mixing between old interbasin groundwater flow (IGF) and young, locally derived groundwater in a lowland rain forest in Costa Rica using a large suite of environmental tracers. We focus on the young fraction of water using the transient tracers CFC-11, CFC-12, CFC-113, SF6, 3H, and bomb 14C. We measured 3He, but 3H/3He dating is generally problematic due to the presence of mantle 3He. Because of their unique concentration histories in the atmosphere, combinations of transient tracers are sensitive not only to subsurface travel times but also to mixing between waters having different travel times. Samples fall into three distinct categories: (1) young waters that plot along a piston flow line, (2) old samples that have near-zero concentrations of the transient tracers, and (3) mixtures of 1 and 2. We have modeled the concentrations of the transient tracers using (1) a binary mixing model (BMM) of old and young water with the young fraction transported via piston flow, (2) an exponential mixing model (EMM) with a distribution of groundwater travel times characterized by a mean value, and (3) an exponential mixing model for the young fraction followed by binary mixing with an old fraction (EMM/BMM). In spite of the mathematical differences in the mixing models, they all lead to a similar conceptual model of young (0 to 10 year) groundwater that is locally derived mixing with old (>1000 years) groundwater that is recharged beyond the surface water boundary of the system.
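The binary mixing model (BMM) for a single transient tracer reduces to a lever rule: because old interbasin groundwater predates the tracer's atmospheric history, its concentration is near zero and the sample's young fraction is just a concentration ratio. The CFC-12-like concentrations below are hypothetical.

```python
# Old IGF end-member predates atmospheric CFCs, so its CFC-12 content is ~0;
# the young fraction follows from a two-end-member mass balance.
# Concentrations are hypothetical, in pmol/kg.
c_young = 2.8            # modern recharge (piston-flow young end-member)
c_old = 0.0              # >1000-year-old interbasin groundwater
c_sample = 0.7           # measured sample

f_young = (c_sample - c_old) / (c_young - c_old)
print(f_young)           # fraction of young, locally derived water
```

Combining several tracers with different atmospheric histories, as in the study, is what distinguishes this binary mixture from the exponential travel-time alternatives.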

  7. Testing mixing models of old and young groundwater in a tropical lowland rain forest with environmental tracers

    USGS Publications Warehouse

    Solomon, D. Kip; Genereux, David P.; Plummer, Niel; Busenberg, Eurybiades

    2010-01-01

    We tested three models of mixing between old interbasin groundwater flow (IGF) and young, locally derived groundwater in a lowland rain forest in Costa Rica using a large suite of environmental tracers. We focus on the young fraction of water using the transient tracers CFC‐11, CFC‐12, CFC‐113, SF6, 3H, and bomb 14C. We measured 3He, but 3H/3He dating is generally problematic due to the presence of mantle 3He. Because of their unique concentration histories in the atmosphere, combinations of transient tracers are sensitive not only to subsurface travel times but also to mixing between waters having different travel times. Samples fall into three distinct categories: (1) young waters that plot along a piston flow line, (2) old samples that have near‐zero concentrations of the transient tracers, and (3) mixtures of 1 and 2. We have modeled the concentrations of the transient tracers using (1) a binary mixing model (BMM) of old and young water with the young fraction transported via piston flow, (2) an exponential mixing model (EMM) with a distribution of groundwater travel times characterized by a mean value, and (3) an exponential mixing model for the young fraction followed by binary mixing with an old fraction (EMM/BMM). In spite of the mathematical differences in the mixing models, they all lead to a similar conceptual model of young (0 to 10 year) groundwater that is locally derived mixing with old (>1000 years) groundwater that is recharged beyond the surface water boundary of the system.

  8. Mixed models, linear dependency, and identification in age-period-cohort models.

    PubMed

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why the choice of which effects are treated as fixed and which as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
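The linear dependency that blocks identification is simply cohort = period - age, which can be verified numerically: with all three entered as linear terms, the fixed-effects design matrix loses a rank. The small synthetic design below is illustrative.

```python
import numpy as np

age = np.array([20, 30, 40, 20, 30, 40])
period = np.array([2000, 2000, 2000, 2010, 2010, 2010])
cohort = period - age                       # the exact linear dependency

# fixed-effects design with intercept and all three linear terms
X = np.column_stack([np.ones_like(age), age, period, cohort])
print(np.linalg.matrix_rank(X), "of", X.shape[1], "columns independent")
```

Because the rank is one less than the number of columns, ordinary least squares has no unique solution, which is exactly why a just-identifying constraint (or the mixed-model route described above) is needed.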

  9. Correlations and risk contagion between mixed assets and mixed-asset portfolio VaR measurements in a dynamic view: An application based on time varying copula models

    NASA Astrophysics Data System (ADS)

    Han, Yingying; Gong, Pu; Zhou, Xiang

    2016-02-01

    In this paper, we first apply time varying Gaussian and SJC copula models to study the correlations and risk contagion between mixed assets in China: financial (stock), real estate and commodity (gold) assets. We then study the dynamic mixed-asset portfolio risk through VaR measurement based on the correlations computed by the time varying copulas. This dynamic VaR-copula measurement analysis has not previously been applied to mixed-asset portfolios. The results show that the time varying estimations fit much better than the static models, not only for the correlations and risk contagion based on time varying copulas, but also for the VaR-copula measurement. The time varying VaR-SJC copula models are more accurate than the VaR-Gaussian copula models when measuring riskier portfolios at higher confidence levels. The major findings suggest that real estate and gold contribute to portfolio risk diversification, and that risk contagion and flight to quality occur between mixed assets in extreme cases; adapting the mixed-asset portfolio strategy as time and market conditions vary will therefore reduce portfolio risk.
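A static (non-time-varying) Gaussian-copula Monte Carlo VaR for a three-asset portfolio can be sketched as follows. The correlation matrix, volatilities and weights are hypothetical, the marginals are taken as normal for simplicity, and the time-varying copula parameter updating and SJC copula of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
corr = np.array([[1.0, 0.3, 0.1],            # stock, real estate, gold
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
vol = np.array([0.02, 0.01, 0.012])          # daily volatilities
w = np.array([0.5, 0.3, 0.2])                # portfolio weights

L = np.linalg.cholesky(corr)
z = rng.standard_normal((100_000, 3)) @ L.T  # correlated Gaussian draws
returns = z * vol                            # normal marginals
port = returns @ w                           # portfolio return draws

var_95 = -np.quantile(port, 0.05)            # one-day 95% VaR
print(round(var_95, 4))
```

In the time-varying versions studied above, the correlation matrix (or the SJC tail-dependence parameters) would be re-estimated at each date before this simulation step.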

  10. Retrospective Binary-Trait Association Test Elucidates Genetic Architecture of Crohn Disease

    PubMed Central

    Jiang, Duo; Zhong, Sheng; McPeek, Mary Sara

    2016-01-01

    In genetic association testing, failure to properly control for population structure can lead to severely inflated type 1 error and power loss. Meanwhile, adjustment for relevant covariates is often desirable and sometimes necessary to protect against spurious association and to improve power. Many recent methods to account for population structure and covariates are based on linear mixed models (LMMs), which are primarily designed for quantitative traits. For binary traits, however, LMM is a misspecified model and can lead to deteriorated performance. We propose CARAT, a binary-trait association testing approach based on a mixed-effects quasi-likelihood framework, which exploits the dichotomous nature of the trait and achieves computational efficiency through estimating equations. We show in simulation studies that CARAT consistently outperforms existing methods and maintains high power in a wide range of population structure settings and trait models. Furthermore, CARAT is based on a retrospective approach, which is robust to misspecification of the phenotype model. We apply our approach to a genome-wide analysis of Crohn disease, in which we replicate association with 17 previously identified regions. Moreover, our analysis on 5p13.1, an extensively reported region of association, shows evidence for the presence of multiple independent association signals in the region. This example shows how CARAT can leverage known disease risk factors to shed light on the genetic architecture of complex traits. PMID:26833331

  11. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
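A fixed-knot truncated-power spline basis of the kind described, with a curvature change at a single knot while value and first derivative stay continuous, can be built directly; the knot location and synthetic response below are hypothetical, and the random-effects part of the mixed model is omitted.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 50)
knot = 4.0

# quadratic basis plus one truncated quadratic term: the (t-knot)+^2 column
# lets curvature change at the knot while value and slope stay continuous
X = np.column_stack([
    np.ones_like(t),
    t,
    t**2,
    np.maximum(t - knot, 0.0) ** 2,
])

# synthetic response whose curvature changes at t = 4
y = np.where(t < knot, t**2,
             knot**2 + 8.0 * (t - knot) - 0.5 * (t - knot) ** 2)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 3))                    # recovers 0, 0, 1, -1.5
```

In the paper's formulation, columns like these enter the fixed and/or random design matrices of the linear mixed model, with the side conditions handled by the reparameterization.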

  12. Evaluation of Five Tests for Sensitivity to Functional Deficits following Cervical or Thoracic Dorsal Column Transection in the Rat

    PubMed Central

    Eggers, Ruben; Tuinenbreijer, Lizz; Kouwenhoven, Dorette; Verhaagen, Joost; Mason, Matthew R. J.

    2016-01-01

    The dorsal column lesion model of spinal cord injury targets sensory fibres which originate from the dorsal root ganglia and ascend in the dorsal funiculus. It has the advantages that fibres can be specifically traced from the sciatic nerve, verifiably complete lesions can be performed of the labelled fibres, and it can be used to study sprouting in the central nervous system from the conditioning lesion effect. However, functional deficits from this type of lesion are mild, making assessment of experimental treatment-induced functional recovery difficult. Here, five functional tests were compared for their sensitivity to functional deficits, and hence their suitability to reliably measure recovery of function after dorsal column injury. We assessed the tape removal test, the rope crossing test, CatWalk gait analysis, and the horizontal ladder, and introduce a new test, the inclined rolling ladder. Animals with dorsal column injuries at C4 or T7 level were compared to sham-operated animals for a duration of eight weeks. As well as comparing groups at individual timepoints, we compared the longitudinal data over the whole time course with linear mixed models (LMMs), and, for tests where steps are scored as success/error, using generalized LMMs for binomial data. Although function generally recovered to sham levels within 2–6 weeks, in most tests we were able to detect significant deficits with whole time-course comparisons. On the horizontal ladder, deficits were detected until 5–6 weeks. With the new inclined rolling ladder, functional deficits were somewhat more consistent over the testing period and appeared to last for 6–7 weeks. Of the CatWalk parameters, base of support was sensitive to cervical and thoracic lesions while hind-paw print-width was affected by cervical lesion only. The inclined rolling ladder test in combination with the horizontal ladder and the CatWalk may prove useful to monitor functional recovery after experimental treatment in this lesion model.
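The success/error step scores above were analysed with binomial generalized LMMs; as a deliberately simplified stand-in that ignores random effects and the repeated-measures structure entirely, a pooled two-proportion z-statistic on hypothetical ladder-step counts can be computed as:

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for H0: equal success probabilities, using the
    pooled-variance standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical step counts: sham vs lesioned animals on the horizontal ladder
z = two_proportion_z(success_a=188, n_a=200, success_b=162, n_b=200)
print(round(z, 2))
```

A proper analysis would, as in the paper, model animal identity as a random effect; this sketch only shows the binomial comparison at its core.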

  13. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    NASA Astrophysics Data System (ADS)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends and rigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
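The rejection-sampling core of a Bayesian Monte Carlo mixing model of this kind can be sketched for a two-end-member system; the end-member values, spreads, and acceptance tolerance below are hypothetical, not the study's data:

```python
import random

random.seed(42)

# Two-end-member mixing: observed = f*A + (1-f)*B.  The delta-18O-style
# end-member values below are illustrative assumptions.
A_MEAN, A_SD = -20.0, 0.5   # e.g. a snow-melt end-member
B_MEAN, B_SD = -10.0, 0.5   # e.g. a rain end-member
OBSERVED, TOL = -17.0, 0.3  # mixture measurement and acceptance window

accepted = []
for _ in range(200_000):
    f = random.random()             # uniform prior on the mixing fraction
    a = random.gauss(A_MEAN, A_SD)  # propagate end-member variability
    b = random.gauss(B_MEAN, B_SD)
    if abs(f * a + (1 - f) * b - OBSERVED) < TOL:
        accepted.append(f)

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))  # close to the true fraction 0.7
```

The spread of the accepted `f` values is exactly the error induced by end-member variability that the abstract highlights; a point-estimate mixing model would report a single fraction with no such uncertainty.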

  14. Modeling the Bergeron-Findeisen Process Using PDF Methods With an Explicit Representation of Mixing

    NASA Astrophysics Data System (ADS)

    Jeffery, C.; Reisner, J.

    2005-12-01

    Currently, the accurate prediction of cloud droplet and ice crystal number concentration in cloud resolving, numerical weather prediction and climate models is a formidable challenge. The Bergeron-Findeisen process in which ice crystals grow by vapor deposition at the expense of super-cooled droplets is expected to be inhomogeneous in nature--some droplets will evaporate completely in centimeter-scale filaments of sub-saturated air during turbulent mixing while others remain unchanged [Baker et al., QJRMS, 1980]--and is unresolved at even cloud-resolving scales. Despite the large body of observational evidence in support of the inhomogeneous mixing process affecting cloud droplet number [most recently, Brenguier et al., JAS, 2000], it is poorly understood and has yet to be parameterized and incorporated into a numerical model. In this talk, we investigate the Bergeron-Findeisen process using a new approach based on simulations of the probability density function (PDF) of relative humidity during turbulent mixing. PDF methods offer a key advantage over Eulerian (spatial) models of cloud mixing and evaporation: the low probability (cm-scale) filaments of entrained air are explicitly resolved (in probability space) during the mixing event even though their spatial shape, size and location remain unknown. Our PDF approach reveals the following features of the inhomogeneous mixing process during the isobaric turbulent mixing of two parcels containing super-cooled water and ice, respectively: (1) The scavenging of super-cooled droplets is inhomogeneous in nature; some droplets evaporate completely at early times while others remain unchanged. (2) The degree of total droplet evaporation during the initial mixing period depends linearly on the mixing fractions of the two parcels and logarithmically on Damköhler number (Da)---the ratio of turbulent to evaporative time-scales. 
(3) Our simulations predict that the PDF of Lagrangian (time-integrated) subsaturation (S) goes as

  15. Transient modeling/analysis of hyperbolic heat conduction problems employing mixed implicit-explicit alpha method

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; D'Costa, Joseph F.

    1991-01-01

    This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involve time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.
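For context, the non-Fourier behaviour comes from the relaxation term in the hyperbolic (Maxwell-Cattaneo) heat equation, tau*T_tt + T_t = alpha*T_xx, which gives thermal disturbances the finite speed sqrt(alpha/tau). The sketch below integrates it with a plain explicit central-difference scheme, not the paper's implicit-explicit alpha method:

```python
import numpy as np

# Hyperbolic heat conduction: tau*T_tt + T_t = alpha*T_xx, discretized with
# explicit central differences in space and time (a simple sketch only).
alpha, tau = 1.0, 1.0
dx, dt, nx, nsteps = 0.02, 0.01, 101, 200   # CFL = sqrt(alpha/tau)*dt/dx = 0.5

x = np.linspace(0.0, 2.0, nx)
T_old = np.exp(-200.0 * (x - 1.0) ** 2)  # initial thermal pulse
T_now = T_old.copy()                     # zero initial time derivative

c1 = tau / dt**2 + 1.0 / (2.0 * dt)      # coefficient of T^{n+1}
c2 = tau / dt**2 - 1.0 / (2.0 * dt)      # coefficient of T^{n-1}
for _ in range(nsteps):
    lap = np.zeros(nx)
    lap[1:-1] = (T_now[2:] - 2.0 * T_now[1:-1] + T_now[:-2]) / dx**2
    T_new = (alpha * lap + 2.0 * tau / dt**2 * T_now - c2 * T_old) / c1
    T_new[0] = T_new[-1] = 0.0           # fixed-temperature boundaries
    T_old, T_now = T_now, T_new

print(float(T_now.max()))  # pulse splits into two damped travelling fronts
```

The oscillatory behaviour near the propagating fronts that the abstract mentions is visible with schemes like this one; the alpha method's controllable numerical dissipation is precisely what damps it.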

  16. New theory of stellar convection without the mixing-length parameter: new stellar atmosphere model

    NASA Astrophysics Data System (ADS)

    Pasetto, Stefano; Chiosi, Cesare; Cropper, Mark; Grebel, Eva K.

    2018-01-01

    Stellar convection is usually described by the mixing-length theory, which makes use of the mixing-length scale factor to express the convective flux, velocity, and temperature gradients of the convective elements and stellar medium. The mixing-length scale is proportional to the local pressure scale height of the star, and the proportionality factor (i.e. mixing-length parameter) is determined by comparing the stellar models to some calibrator, i.e. the Sun. No strong arguments exist to suggest that the mixing-length parameter is the same in all stars and all evolutionary phases and because of this, all stellar models in the literature are hampered by this basic uncertainty. In a recent paper [1] we presented a new theory that does not require the mixing length parameter. Our self-consistent analytical formulation of stellar convection determines all the properties of stellar convection as a function of the physical behavior of the convective elements themselves and the surrounding medium. The new theory of stellar convection is formulated starting from a conventional solution of the Navier-Stokes/Euler equations expressed in a non-inertial reference frame co-moving with the convective elements. The motion of stellar convective cells inside convective-unstable layers is fully determined by a new system of equations for convection in a non-local and time-dependent formalism. The predictions of the new theory are compared with those from the standard mixing-length paradigm with positive results for atmosphere models of the Sun and all the stars in the Hertzsprung-Russell diagram.

  17. The Performance of IRT Model Selection Methods with Mixed-Format Tests

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2012-01-01

    When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…

  18. Impact of Lateral Mixing in the Ocean on El Nino in Fully Coupled Climate Models

    NASA Astrophysics Data System (ADS)

    Gnanadesikan, A.; Russell, A.; Pradal, M. A. S.; Abernathey, R. P.

    2016-02-01

    Given the large number of processes that can affect El Nino, it is difficult to understand why different climate models simulate El Nino differently. This paper focuses on the role of lateral mixing by mesoscale eddies. There is significant disagreement about the value of the mixing coefficient ARedi which parameterizes the lateral mixing of tracers. Coupled climate models usually prescribe small values of this coefficient, ranging between a few hundred and a few thousand m2/s. Observations, however, suggest values that are much larger. We present a sensitivity study with a suite of Earth System Models that examines the impact of varying ARedi on the amplitude of El Nino. We examine the effect of varying a spatially constant ARedi over a range of values similar to that seen in the IPCC AR5 models, as well as looking at two spatially varying distributions based on altimetric velocity estimates. While the expectation that higher values of ARedi should damp anomalies is borne out in the model, it is more than compensated by a weaker damping due to vertical mixing and a stronger response of atmospheric winds to SST anomalies. Under higher mixing, a weaker zonal SST gradient causes the center of convection over the warm pool to shift eastward and to become more sensitive to changes in cold tongue SSTs. Changes in the SST gradient also explain interdecadal ENSO variability within individual model runs.

  19. Box-Cox Mixed Logit Model for Travel Behavior Analysis

    NASA Astrophysics Data System (ADS)

    Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.

    2010-09-01

    To represent the behavior of travelers when they are deciding how they are going to get to their destination, discrete choice models, based on the random utility theory, have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s, and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, original to the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficients distribution. The probability of choosing an alternative is an integral that is calculated by simulation. The estimation of the model is carried out by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models that are inconsistent with real behavior have been studied with simulation experiments.
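The simulated-probability calculation described above, averaging logit probabilities over draws of a random coefficient applied to Box-Cox-transformed attributes, can be sketched as follows (the cost values and coefficient distribution are illustrative assumptions):

```python
import math
import random

random.seed(1)

def boxcox(x, lam):
    """Box-Cox transform; reduces to log(x) as lam -> 0."""
    return math.log(x) if abs(lam) < 1e-12 else (x**lam - 1.0) / lam

def simulated_probs(costs, lam, beta_mean, beta_sd, n_draws=5000):
    """Mixed-logit choice probabilities: average the logit formula over
    draws of a normally distributed random cost coefficient."""
    probs = [0.0] * len(costs)
    for _ in range(n_draws):
        beta = random.gauss(beta_mean, beta_sd)
        utils = [beta * boxcox(c, lam) for c in costs]
        m = max(utils)                        # stabilize the softmax
        expu = [math.exp(u - m) for u in utils]
        s = sum(expu)
        for i, e in enumerate(expu):
            probs[i] += e / s / n_draws
    return probs

# Hypothetical travel costs for two alternatives; beta < 0 since cost is a bad
p = simulated_probs(costs=[2.0, 4.0], lam=0.5, beta_mean=-1.0, beta_sd=0.3)
print([round(v, 3) for v in p])  # cheaper alternative gets the larger share
```

In estimation, these simulated probabilities would be plugged into the simulated log-likelihood and maximized over the Box-Cox exponent and the mean and spread of the coefficient distribution.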

  20. Estimation of oceanic subsurface mixing under a severe cyclonic storm using a coupled atmosphere-ocean-wave model

    NASA Astrophysics Data System (ADS)

    Prakash, Kumar Ravi; Nigam, Tanuja; Pant, Vimlesh

    2018-04-01

    A coupled atmosphere-ocean-wave model was used to examine mixing in the upper-oceanic layers under the influence of a very severe cyclonic storm Phailin over the Bay of Bengal (BoB) during 10-14 October 2013. The coupled model was found to improve the sea surface temperature over the uncoupled model. Model simulations highlight the prominent role of cyclone-induced near-inertial oscillations in subsurface mixing up to the thermocline depth. The inertial mixing introduced by the cyclone played a central role in the deepening of the thermocline and mixed layer depth by 40 and 15 m, respectively. For the first time over the BoB, a detailed analysis of inertial oscillation kinetic energy generation, propagation, and dissipation was carried out using an atmosphere-ocean-wave coupled model during a cyclone. A quantitative estimate of kinetic energy in the oceanic water column, its propagation, and its dissipation mechanisms were explained using the coupled atmosphere-ocean-wave model. The large shear generated by the inertial oscillations was found to overcome the stratification and initiate mixing at the base of the mixed layer. Greater mixing was found at the depths where the eddy kinetic diffusivity was large. The baroclinic current, holding a larger fraction of kinetic energy than the barotropic current, weakened rapidly after the passage of the cyclone. The shear induced by inertial oscillations was found to decrease rapidly with increasing depth below the thermocline. The dampening of the mixing process below the thermocline was explained through the enhanced dissipation rate of turbulent kinetic energy upon approaching the thermocline layer. The wave-current interaction and nonlinear wave-wave interaction were found to affect the process of downward mixing and cause the dissipation of inertial oscillations.
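For reference, the period of the near-inertial oscillations central to this mixing follows directly from the Coriolis parameter, f = 2*Omega*sin(latitude); taking ~15 deg N as a representative Bay of Bengal latitude (an assumption here, since the storm track spans a range of latitudes) gives roughly 46 hours:

```python
import math

OMEGA = 7.2921e-5          # Earth's rotation rate, rad/s

def inertial_period_hours(lat_deg):
    """Period 2*pi/f of near-inertial oscillations, with f = 2*Omega*sin(lat)."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return 2.0 * math.pi / f / 3600.0

print(round(inertial_period_hours(15.0), 1))  # about 46 hours
```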

  1. Stand level height-diameter mixed effects models: parameters fitted using loblolly pine but calibrated for sweetgum

    Treesearch

    Curtis L. Vanderschaaf

    2008-01-01

    Mixed effects models can be used to obtain site-specific parameters through the use of model calibration that often produces better predictions of independent data. This study examined whether parameters of a mixed effect height-diameter model estimated using loblolly pine plantation data but calibrated using sweetgum plantation data would produce reasonable...

  2. An Investigation of a Hybrid Mixing Timescale Model for PDF Simulations of Turbulent Premixed Flames

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Kuron, Mike; Ren, Zhuyin; Lu, Tianfeng; Chen, Jacqueline H.

    2016-11-01

    The transported probability density function (TPDF) method is applicable across all combustion regimes, which makes it attractive for turbulent combustion simulations. However, the modeling of micromixing due to molecular diffusion is still considered a primary challenge for the TPDF method, especially in turbulent premixed flames. Recently, a hybrid mixing rate model for TPDF simulations of turbulent premixed flames has been proposed, which recovers the correct mixing rates in the limits of the flamelet regime and the broken reaction zone regime while aiming to properly account for the transition in between. In this work, this model is employed in TPDF simulations of turbulent premixed methane-air slot burner flames. The model performance is assessed by comparing the results with both direct numerical simulation (DNS) and the conventional constant mechanical-to-scalar mixing rate model. This work was supported by NSFC grants 51476087 and 91441202.
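The abstract does not give the hybrid model's blending function, so the sketch below uses a generic convex blend between the two limiting mixing rates, with a placeholder Karlovitz-number weight; every functional form here is an assumption for illustration, not the published model:

```python
def hybrid_mixing_rate(omega_flamelet, omega_turb, karlovitz):
    """Convex blend of two limiting scalar mixing rates.  The weight's
    dependence on the Karlovitz number is a placeholder assumption."""
    w = karlovitz / (1.0 + karlovitz)   # -> 0 in the flamelet limit,
                                        # -> 1 in the broken-reaction-zone limit
    return (1.0 - w) * omega_flamelet + w * omega_turb

# Small Ka: rate stays near the flamelet value; large Ka: near the turbulent value
print(hybrid_mixing_rate(100.0, 10.0, karlovitz=0.1))
print(hybrid_mixing_rate(100.0, 10.0, karlovitz=100.0))
```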

  3. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    PubMed

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data, featuring an excessive proportion of zeros and right-skewed continuous positive values, arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
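The generative structure of such a two-part model is easy to sketch; here a lognormal stands in for the paper's skew-t/skew-normal Part II distribution, and the correlated random effects are omitted for brevity:

```python
import math
import random

random.seed(7)

def simulate_two_part(n, p_positive, mu, sigma):
    """Draw semicontinuous outcomes: zero with probability 1 - p_positive
    (Part I, the occurrence model), otherwise a lognormal intensity
    (Part II; a stand-in for the paper's skew distributions)."""
    out = []
    for _ in range(n):
        if random.random() < p_positive:
            out.append(math.exp(random.gauss(mu, sigma)))  # positive part
        else:
            out.append(0.0)                                # structural zero
    return out

y = simulate_two_part(n=10_000, p_positive=0.4, mu=1.0, sigma=0.8)
zeros = sum(1 for v in y if v == 0.0) / len(y)
print(round(zeros, 2))  # close to 1 - p_positive = 0.6
```

In the full model, a shared random effect would enter both the logistic occurrence part and the intensity part, inducing the within-subject correlation the paper exploits.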

  4. Model of Mixing Layer With Multicomponent Evaporating Drops

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Le Clercq, Patrick

    2004-01-01

    A mathematical model of a three-dimensional mixing layer laden with evaporating fuel drops composed of many chemical species has been derived. The study is motivated by the fact that typical real petroleum fuels contain hundreds of chemical species. Previously, for the sake of computational efficiency, spray studies were performed using either models based on a single representative species or models based on surrogate fuels of at most 15 species. The present multicomponent model makes it possible to perform more realistic simulations by accounting for hundreds of chemical species in a computationally efficient manner. The model is used to perform Direct Numerical Simulations in continuing studies directed toward understanding the behavior of liquid petroleum fuel sprays. The model includes governing equations formulated in an Eulerian and a Lagrangian reference frame for the gas and the drops, respectively. This representation is consistent with the expected volumetrically small loading of the drops in gas (of the order of 10^-3), although the mass loading can be substantial because of the high ratio (of the order of 10^3) between the densities of liquid and gas. The drops are treated as point sources of mass, momentum, and energy; this representation is consistent with the drop size being smaller than the Kolmogorov scale. Unsteady drag, added-mass effects, Basset history forces, and collisions between the drops are neglected, and the gas is assumed calorically perfect. The model incorporates the concept of continuous thermodynamics, according to which the chemical composition of a fuel is described probabilistically, by use of a distribution function. Distribution functions generally depend on many parameters. However, for mixtures of homologous species, the distribution can be approximated with acceptable accuracy as a sole function of the molecular weight.
The mixing layer is initially laden with drops in its lower stream, and the drops are colder than the gas
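Continuous thermodynamics typically represents the fuel's composition with a gamma distribution over molecular weight; the sketch below samples such a distribution with hypothetical shape/scale parameters (not fitted to any real fuel) and checks the mean molecular weight against the analytic value:

```python
import random

random.seed(3)

# Continuous-thermodynamics sketch: fuel composition as a gamma
# distribution over molecular weight W.  Parameters are hypothetical.
ALPHA, BETA, ORIGIN = 18.5, 10.0, 0.0   # shape, scale (g/mol), origin shift

samples = [ORIGIN + random.gammavariate(ALPHA, BETA) for _ in range(50_000)]
mean_mw = sum(samples) / len(samples)
print(round(mean_mw, 1))  # near the analytic mean ORIGIN + ALPHA*BETA = 185
```

Tracking just the low-order moments of this distribution, rather than hundreds of individual species, is what makes the multicomponent model computationally efficient.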

  5. Effects of mixing in threshold models of social behavior

    NASA Astrophysics Data System (ADS)

    Akhmetzhanov, Andrei R.; Worden, Lee; Dushoff, Jonathan

    2013-07-01

    We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and observation of the neighbors’ behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and we either update its state or exchange it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit by using the mean-field approximation and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold by using simulated trajectories started from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects and finite-mixing-rate effects, which have qualitatively similar consequences. Both of these effects increase the tendency of the system to move from a less-desired equilibrium to the “ground state.” Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.
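The fast-mixing (mean-field) limit described above reduces to iterating x <- F(x), where x is the fraction of active individuals and F is the CDF of the threshold distribution; a sketch with a hypothetical normal threshold distribution shows the model's bistability:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def mean_field_equilibrium(mu, sigma, x0, n_iter=500):
    """Fast-mixing limit of the threshold model: iterate x <- F(x), where
    F is the (normal, here) CDF of the threshold distribution."""
    x = x0
    for _ in range(n_iter):
        x = normal_cdf(x, mu, sigma)
    return x

# Hypothetical thresholds ~ Normal(0.5, 0.1): different starting fractions
# converge to different stable equilibria (almost nobody vs almost everybody)
low = mean_field_equilibrium(mu=0.5, sigma=0.1, x0=0.1)
high = mean_field_equilibrium(mu=0.5, sigma=0.1, x0=0.9)
print(round(low, 3), round(high, 3))
```

The interior fixed point at x = 0.5 is unstable here, which is exactly the kind of less-desired equilibrium that finite-neighborhood and finite-mixing effects can help the system escape.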

  6. Formulation and Validation of an Efficient Computational Model for a Dilute, Settling Suspension Undergoing Rotational Mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran

    Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.
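The settling and viscosity closures mentioned above are functions of volume fraction fitted to experimental data, but their exact forms are not given; the sketch below uses two standard choices, Richardson-Zaki hindered settling and a Krieger-Dougherty viscosity, purely as illustrative assumptions:

```python
def hindered_settling(v0, phi, n=4.65):
    """Richardson-Zaki closure: settling speed drops as (1 - phi)^n.
    n = 4.65 is the classic low-Reynolds-number exponent."""
    return v0 * (1.0 - phi) ** n

def krieger_dougherty(mu0, phi, phi_max=0.64, intrinsic=2.5):
    """Krieger-Dougherty closure: suspension viscosity diverging as
    phi approaches the maximum packing fraction phi_max."""
    return mu0 * (1.0 - phi / phi_max) ** (-intrinsic * phi_max)

# Dilute suspension: settling slightly hindered, viscosity slightly raised
print(hindered_settling(1.0e-4, 0.05))   # m/s, somewhat below v0
print(krieger_dougherty(1.0e-3, 0.05))   # Pa*s, somewhat above mu0
```

In the mixture model these two closures would feed, respectively, the settling flux and the shear-stress term of the volume-fraction transport equation.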

  7. COMPUTATIONAL FLUID DYNAMICS MODELING OF SCALED HANFORD DOUBLE SHELL TANK MIXING - CFD MODELING SENSITIVITY STUDY RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JACKSON VL

    2011-08-31

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.

  8. Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.

    1997-01-01

    The following is the final technical report for grant NAGW-3442, 'Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere'. Research efforts in the first year concentrated on transport and mixing processes in the polar vortices. Three papers on mixing in the Antarctic were published. The first was a numerical modeling study of wavebreaking and mixing and their relationship to the period of observed stratospheric waves (Bowman). The second paper presented evidence from TOMS for wavebreaking in the Antarctic (Bowman and Mangus 1993). The third paper used Lagrangian trajectory calculations from analyzed winds to show that there is very little transport into the Antarctic polar vortex prior to the vortex breakdown (Bowman). Mixing is significantly greater at lower levels. This research helped to confirm theoretical arguments for vortex isolation and data from the Antarctic field experiments that were interpreted as indicating isolation. A Ph.D. student, Steve Dahlberg, used the trajectory approach to investigate mixing and transport in the Arctic. While the Arctic vortex is much more disturbed than the Antarctic, there still appears to be relatively little transport across the vortex boundary at 450 K prior to the vortex breakdown. The primary reason for the absence of an ozone hole in the Arctic is the earlier warming and breakdown of the vortex compared to the Antarctic, not replenishment of ozone by greater transport. Two papers describing these results have appeared (Dahlberg and Bowman; Dahlberg and Bowman). Steve Dahlberg completed his Ph.D. thesis (Dahlberg and Bowman) and is now teaching in the Physics Department at Concordia College. We also prepared an analysis of the QBO in SBUV ozone data (Hollandsworth et al.). A numerical study in collaboration with Dr. Ping Chen investigated mixing by barotropic instability, which is the probable origin of the 4-day wave in the upper stratosphere (Bowman and Chen). The important result from

  9. System dynamics of behaviour-evolutionary mix-game models

    NASA Astrophysics Data System (ADS)

    Gou, Cheng-Ling; Gao, Jie-Ping; Chen, Fang

    2010-11-01

    In real financial markets there are two kinds of traders: fundamentalists and trend-followers. The mix-game model is proposed to mimic such phenomena. In a mix-game model there are two groups of agents: Group 1 plays the majority game and Group 2 plays the minority game. In this paper, we investigate the case in which some traders in real financial markets can change their investment behaviours, by assigning evolutionary abilities to agents: if the winning rate of an agent is smaller than a threshold, it will join the other group; and agents repeat such evolution at certain time intervals. Through the simulations, we obtain the following findings: (i) the volatilities of systems increase with the number of agents in Group 1 and with the number of behavioural changes of all agents; (ii) the performances of agents in both groups and the stabilities of systems improve if all agents take more time to observe their new investment behaviours; (iii) there are two-phase zones of market and non-market and two-phase zones of evolution and non-evolution; (iv) parameter configurations located within the cross areas between the zones of markets and the zones of evolution are suited for simulating financial markets.
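The evolution rule described above (agents whose win rate falls below a threshold join the other group at fixed intervals) can be sketched in a minimal simulation; group sizes, thresholds, and intervals below are illustrative, not the paper's parameterization:

```python
import random

random.seed(5)

# Minimal mix-game sketch: Group 1 agents win when their choice is in the
# majority, Group 2 agents when it is in the minority; low-win-rate agents
# switch groups at fixed evaluation intervals.
N, ROUNDS, EVAL_EVERY, THRESHOLD = 100, 1000, 100, 0.45

group = [1] * 40 + [2] * 60          # initial group memberships
wins = [0] * N
played = 0

for t in range(1, ROUNDS + 1):
    choices = [random.randint(0, 1) for _ in range(N)]   # buy/sell
    n_buy = sum(choices)
    majority = 1 if n_buy * 2 > N else 0
    for i in range(N):
        in_majority = choices[i] == majority
        if (group[i] == 1) == in_majority:   # Group 1 wants majority,
            wins[i] += 1                     # Group 2 wants minority
    played += 1
    if t % EVAL_EVERY == 0:                  # evolutionary step
        for i in range(N):
            if wins[i] / played < THRESHOLD:
                group[i] = 3 - group[i]      # join the other group

n_group1 = sum(1 for g in group if g == 1)
print(n_group1, N - n_group1)  # group sizes after evolution
```

Here agents choose at random, so only the payoff and switching mechanics are illustrated; the paper's agents use strategy tables inherited from the minority-game literature.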

  10. Modeling of Transient Flow Mixing of Streams Injected into a Mixing Chamber

    NASA Technical Reports Server (NTRS)

    Voytovych, Dmytro M.; Merkle, Charles L.; Lucht, Robert P.; Hulka, James R.; Jones, Gregg W.

    2006-01-01

    Ignition is recognized as one of the critical drivers in the reliability of multiple-start rocket engines. Residual combustion products from previous engine operation can condense on valves and related structures thereby creating difficulties for subsequent starting procedures. Alternative ignition methods that require fewer valves can mitigate the valve reliability problem, but require improved understanding of the spatial and temporal propellant distribution in the pre-ignition chamber. Current design tools based mainly on one-dimensional analysis and empirical models cannot predict local details of the injection and ignition processes. The goal of this work is to evaluate the capability of modern computational fluid dynamics (CFD) tools in predicting the transient flow mixing in the pre-ignition environment by comparing the results with experimental data. This study is a part of a program to improve analytical methods and methodologies to analyze reliability and durability of combustion devices. In the present paper we describe a series of detailed computational simulations of the unsteady mixing events as the cold propellants are first introduced into the chamber, as a first step in providing this necessary environmental description. The present computational modeling represents a complement to parallel experimental simulations and includes comparisons with experimental results from that effort. A large number of rocket engine ignition studies have been reported previously. Here we limit our discussion to the work discussed in Refs. 2, 3 and 4, which is both similar to and different from the present approach. The similarities arise from the fact that both efforts involve detailed experimental/computational simulations of the ignition problem. The differences arise from the underlying philosophy of the two endeavors. The approach in Refs.
2 to 4 is a classical ignition study in which the focus is on the response of a propellant mixture to an ignition source, with

  11. Two-length-scale turbulence model for self-similar buoyancy-, shock-, and shear-driven mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, Brandon E.; Schilling, Oleg; Hartland, Tucker A.

    The three-equation k-L-a turbulence model [B. Morgan and M. Wickett, "Three-equation model for the self-similar growth of Rayleigh-Taylor and Richtmyer-Meshkov instabilities," Phys. Rev. E 91 (2015)] is extended by the addition of a second length-scale equation. It is shown that the separation of turbulence transport and turbulence destruction length scales is necessary for simultaneous prediction of the growth parameter and turbulence intensity of a Kelvin-Helmholtz shear layer when model coefficients are constrained by similarity analysis. Constraints on model coefficients are derived that satisfy an ansatz of self-similarity in the low-Atwood-number limit and allow the determination of the model coefficients necessary to recover expected experimental behavior. The model is then applied in one-dimensional simulations of Rayleigh-Taylor, reshocked Richtmyer-Meshkov, Kelvin-Helmholtz, and combined Rayleigh-Taylor/Kelvin-Helmholtz instability mixing layers to demonstrate that the expected growth rates are recovered numerically. Finally, it is shown that the model behavior in the case of combined instability is to predict a mixing width that is a linear combination of the Rayleigh-Taylor and Kelvin-Helmholtz mixing processes.

  12. Two-length-scale turbulence model for self-similar buoyancy-, shock-, and shear-driven mixing

    DOE PAGES

    Morgan, Brandon E.; Schilling, Oleg; Hartland, Tucker A.

    2018-01-10

    The three-equation k-L-a turbulence model [B. Morgan and M. Wickett, "Three-equation model for the self-similar growth of Rayleigh-Taylor and Richtmyer-Meshkov instabilities," Phys. Rev. E 91 (2015)] is extended by the addition of a second length-scale equation. It is shown that the separation of turbulence transport and turbulence destruction length scales is necessary for simultaneous prediction of the growth parameter and turbulence intensity of a Kelvin-Helmholtz shear layer when model coefficients are constrained by similarity analysis. Constraints on model coefficients are derived that satisfy an ansatz of self-similarity in the low-Atwood-number limit and allow the determination of the model coefficients necessary to recover expected experimental behavior. The model is then applied in one-dimensional simulations of Rayleigh-Taylor, reshocked Richtmyer-Meshkov, Kelvin-Helmholtz, and combined Rayleigh-Taylor/Kelvin-Helmholtz instability mixing layers to demonstrate that the expected growth rates are recovered numerically. Finally, it is shown that the model behavior in the case of combined instability is to predict a mixing width that is a linear combination of the Rayleigh-Taylor and Kelvin-Helmholtz mixing processes.

  13. Prediction of hemoglobin in blood donors using a latent class mixed-effects transition model.

    PubMed

    Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Rizopoulos, Dimitris; Lesaffre, Emmanuel

    2016-02-20

    Blood donors experience a temporary reduction in their hemoglobin (Hb) value after donation. At each visit, the Hb value is measured, and a too low Hb value leads to a deferral for donation. Because of the recovery process after each donation as well as state dependence and unobserved heterogeneity, longitudinal data of Hb values of blood donors provide unique statistical challenges. To estimate the shape and duration of the recovery process and to predict future Hb values, we employed three models for the Hb value: (i) a mixed-effects model; (ii) a latent-class mixed-effects model; and (iii) a latent-class mixed-effects transition model. In each model, a flexible function was used to model the recovery process after donation. The latent classes identify groups of donors with fast or slow recovery times and donors whose recovery time increases with the number of donations. The transition effect accounts for possible state dependence in the observed data. All models were estimated in a Bayesian way, using data of new entrant donors from the Donor InSight study. Informative priors were used for parameters of the recovery process that were not identified using the observed data, based on results from the clinical literature. The results show that the latent-class mixed-effects transition model fits the data best, which illustrates the importance of modeling state dependence, unobserved heterogeneity, and the recovery process after donation. The estimated recovery time is much longer than the current minimum interval between donations, suggesting that an increase of this interval may be warranted. Copyright © 2015 John Wiley & Sons, Ltd.
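
The random-effects structure underlying models of this kind can be sketched in a few lines. The snippet below fits a simple random-intercept linear mixed-effects model to synthetic repeated Hb measurements with `statsmodels`; the donor count, visit structure, and Hb values are hypothetical (not the Donor InSight data), and the flexible recovery function, latent classes, and transition effect of the paper are omitted.

```python
# Minimal sketch: random-intercept LMM for repeated Hb measurements.
# All data below are synthetic and the units (mmol/L) are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_donors, n_visits = 50, 6
donor = np.repeat(np.arange(n_donors), n_visits)
visit = np.tile(np.arange(n_visits), n_donors)

# Donor-specific baseline (random intercept) plus a slow recovery trend.
baseline = rng.normal(8.8, 0.4, n_donors)
hb = baseline[donor] - 0.3 * np.exp(-visit) + rng.normal(0, 0.15, donor.size)
df = pd.DataFrame({"donor": donor, "visit": visit, "hb": hb})

# Random intercept per donor; fixed effect of visit number.
model = smf.mixedlm("hb ~ visit", df, groups=df["donor"])
result = model.fit()
print(result.params["visit"])  # fixed-effect slope estimate
```

Extending this toward the paper's latent-class transition model would add a mixture over class-specific recovery curves and a lagged-response term, which is beyond a few-line sketch.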

  14. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    ERIC Educational Resources Information Center

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  15. Estimating Preferential Flow in Karstic Aquifers Using Statistical Mixed Models

    PubMed Central

    Anaya, Angel A.; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J.; Meeker, John D.; Alshawabkeh, Akram N.

    2013-01-01

    Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless-steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring hydraulic and tracer response spatially and temporally. Statistical mixed models are applied to hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting a greater volume of the system being flushed by flowing water at higher rates. Spatial and temporal distribution of tracer concentrations indicates the presence of conduit-like and diffuse flow transport in the system, supporting the notion of combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the statistical mixed models used in the study. PMID:23802921

  16. Interpretable inference on the mixed effect model with the Box-Cox transformation.

    PubMed

    Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M

    2017-07-10

    We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of the model misspecifications. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at the specified occasion in the context of mixed effects models for repeated measures analysis for randomized clinical trials, which provided interpretable estimates of the treatment effect. From simulation studies, it was shown that our proposed method controlled type I error of the statistical test for the model median difference in almost all the situations and had moderate or high performance for power compared with the existing methods. We illustrated our method with cluster of differentiation 4 (CD4) data in an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Modeling Ullage Dynamics of Tank Pressure Control Experiment during Jet Mixing in Microgravity

    NASA Technical Reports Server (NTRS)

    Kartuzova, O.; Kassemi, M.

    2016-01-01

    A CFD model for simulating the fluid dynamics of the jet-induced mixing process is utilized in this paper to model the pressure control portion of the Tank Pressure Control Experiment (TPCE) in microgravity. The Volume of Fluid (VOF) method is used for modeling the dynamics of the interface during mixing. The simulations were performed at a range of jet Weber numbers from non-penetrating to fully penetrating. Two different initial ullage positions were considered. The computational results for the jet-ullage interaction are compared with still images from the video of the experiment. A qualitative comparison shows that the CFD model was able to capture the main features of the interfacial dynamics, as well as the jet penetration of the ullage.

  18. Mix Models Applied to the Pushered Single Shell Capsules Fired on NIF

    NASA Astrophysics Data System (ADS)

    Tipton, Robert; Dewald, Eduard; Pino, Jesse; Ralph, Joe; Sacks, Ryan; Salmonson, Jay

    2017-10-01

    The goal of the Pushered Single Shell (PSS) experimental campaign is to study the mix of partially ionized ablator material into the hotspot. To accomplish this goal, we used a uniformly Si-doped plastic capsule based on the successful Two-Shock campaign. The inner few microns of the capsule can be doped with a few percent Ge. To diagnose mix, we used the method of separated reactants: deuterating the inner Ge-doped layer, CD/Ge, while using a gas fill of tritium and hydrogen. Mix is inferred by measuring the neutron yields from DD, DT, and TT reactions. The PSS implosion is fast (~400 km/s), hot (~3 keV), and round (P2 ~ 0). This paper will present the calculations of RANS-type mix models such as K-L along with LES models such as multicomponent Navier-Stokes on several PSS shots. The calculations will be compared to each other and to the measured data. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  19. Mixing Phenomena in a Bottom Blown Copper Smelter: A Water Model Study

    NASA Astrophysics Data System (ADS)

    Shui, Lang; Cui, Zhixiang; Ma, Xiaodong; Akbar Rhamdhani, M.; Nguyen, Anh; Zhao, Baojun

    2015-03-01

    The first commercial bottom blown oxygen copper smelting furnace has been installed and operated at Dongying Fangyuan Nonferrous Metals since 2008. Significant advantages have been demonstrated in this technology, mainly due to its bottom blown oxygen-enriched gas. In this study, a 1:12 scaled-down model was set up to simulate the flow behavior for understanding the mixing phenomena in the furnace. A single lance was used in the present study for gas blowing to establish a reliable research technique and quantitative characterisation of the mixing behavior. Operating parameters such as horizontal distance from the blowing lance, detector depth, bath height, and gas flow rate were adjusted to investigate the mixing time under different conditions. It was found that when the horizontal distance between the lance and detector is within an effective stirring range, the mixing time decreases slightly with increasing horizontal distance. Outside this range, the mixing time was found to increase with increasing horizontal distance, and the effect is more significant at the surface. The mixing time always decreases with increasing gas flow rate and bath height. An empirical relationship of mixing time as a function of gas flow rate and bath height has been established for the first time for the horizontal bottom blowing furnace.

  20. A novel modeling approach to the mixing process in twin-screw extruders

    NASA Astrophysics Data System (ADS)

    Kennedy, Amedu Osaighe; Penlington, Roger; Busawon, Krishna; Morgan, Andy

    2014-05-01

    In this paper, a theoretical model for the mixing process in a self-wiping co-rotating twin screw extruder, combining statistical techniques and mechanistic modelling, is proposed. The approach was to examine the mixing process in the local zones via residence time distribution and the flow dynamics, from which predictive models of the mean residence time and mean time delay were determined. Increasing the feed rate at constant screw speed was found to narrow the residence time distribution curve, reduce the mean residence time and time delay, and increase the degree of fill. Increasing the screw speed at constant feed rate was found to narrow the residence time distribution curve, decrease the degree of fill in the extruder, and thus increase the time delay. Experimental investigation was also done to validate the modeling approach.
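
The mean residence time discussed above is the first moment of the residence time distribution, t̄ = ∫ t E(t) dt with E(t) = C(t)/∫ C(t) dt. The sketch below computes it from a hypothetical exit-tracer curve (a Gaussian pulse is assumed purely for illustration, not taken from the paper's data).

```python
# Sketch: mean residence time from a measured RTD curve.
# The tracer curve below is synthetic (Gaussian pulse centered at 40 s).
import numpy as np

t = np.linspace(0.0, 120.0, 121)           # s, sampling times (hypothetical)
c = np.exp(-((t - 40.0) / 12.0) ** 2)      # exit tracer concentration
dt = t[1] - t[0]

e = c / (c.sum() * dt)                     # normalized RTD, E(t)
t_mean = (t * e).sum() * dt                # mean residence time, ∫ t E(t) dt
print(round(t_mean, 1))
```

A narrower tracer pulse (as reported at higher feed rates) shifts less mass into the tails, which is what reduces the mean residence time in this moment calculation.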

  1. Analysis of the type II robotic mixed-model assembly line balancing problem

    NASA Astrophysics Data System (ADS)

    Çil, Zeynel Abidin; Mete, Süleyman; Ağpak, Kürşad

    2017-06-01

    In recent years, there has been an increasing trend towards using robots in production systems. Robots are used in different areas such as packaging, transportation, loading/unloading and especially assembly lines. One important step in taking advantage of robots on the assembly line is considering them while balancing the line. On the other hand, market conditions have increased the importance of mixed-model assembly lines. Therefore, in this article, the robotic mixed-model assembly line balancing problem is studied. The aim of this study is to develop a new efficient heuristic algorithm based on beam search in order to minimize the sum of cycle times over all models. In addition, mathematical models of the problem are presented for comparison. The proposed heuristic is tested on benchmark problems and compared with the optimal solutions. The results show that the algorithm is very competitive and is a promising tool for further research.
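
The beam-search idea named above can be illustrated on a simplified type II balancing problem: tasks in a fixed, precedence-feasible order are assigned to a given number of stations so as to minimize the cycle time (maximum station workload). This is a generic sketch under that simplifying assumption, not the authors' algorithm; the task times and beam width are hypothetical.

```python
# Sketch: beam search for a simplified type II line-balancing problem.
# A partial solution is a tuple of station loads; the last station is open.
def beam_balance(times, n_stations, beam_width=5):
    states = [(times[0],)]                 # one station open, first task in
    for t in times[1:]:
        nxt = []
        for loads in states:
            # Option 1: add the task to the currently open (last) station.
            nxt.append(loads[:-1] + (loads[-1] + t,))
            # Option 2: open a new station, if any remain.
            if len(loads) < n_stations:
                nxt.append(loads + (t,))
        # Keep the beam_width partial solutions with the smallest cycle time.
        states = sorted(nxt, key=max)[:beam_width]
    return min(max(s) for s in states)     # best cycle time found

print(beam_balance([4, 3, 2, 5, 4, 2], 3))  # -> 7
```

Widening the beam trades computation for solution quality; a beam of width one reduces to a greedy heuristic, while an unbounded beam is exhaustive search.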

  2. Efficient Bayesian mixed model analysis increases association power in large cohorts

    PubMed Central

    Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L

    2014-01-01

    Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN²) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
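
The O(MN) cost per iteration quoted above rests on a standard identity: the genetic relatedness matrix K = XXᵀ/M never has to be formed, because any product Kv can be evaluated as X(Xᵀv)/M using only matrix-vector products with the genotype matrix. The sketch below demonstrates that identity on random data (the sizes are hypothetical and far smaller than a real cohort); it is an illustration of the general trick, not BOLT-LMM itself.

```python
# Sketch: why mixed-model iterations can cost O(MN) instead of O(N^2):
# K @ v is computed as X @ (X.T @ v) / M without ever forming K.
import numpy as np

rng = np.random.default_rng(1)
N, M = 500, 2000                      # samples, SNPs (toy sizes)
X = rng.standard_normal((N, M))       # standardized genotype matrix
v = rng.standard_normal(N)

K = X @ X.T / M                       # O(N^2 M) to form explicitly
kv_direct = K @ v
kv_implicit = X @ (X.T @ v) / M       # O(MN) per product, K never formed
print(np.allclose(kv_direct, kv_implicit))
```

Iterative solvers built from such products (e.g. conjugate gradients) then need only a modest number of O(MN) passes over the genotype data.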

  3. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique, feasible in Jacobi and conjugate gradient based iterative methods using iteration on data, is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations with the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
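
The core iteration referred to above is preconditioned conjugate gradients on a symmetric positive-definite system Ax = b (the mixed model equations). The sketch below implements the textbook Jacobi-preconditioned variant on a small random SPD system; it illustrates the method in general, not this paper's three-step iteration-on-data implementation.

```python
# Sketch: Jacobi-preconditioned conjugate gradient for SPD A x = b.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                     # residual
    m_inv = 1.0 / np.diag(A)          # Jacobi (diagonal) preconditioner
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # conjugate search direction
        rz = rz_new
    return x

rng = np.random.default_rng(0)
G = rng.standard_normal((80, 80))
A = G @ G.T + 80 * np.eye(80)         # SPD test matrix
b = rng.standard_normal(80)
x = pcg(A, b)
print(np.allclose(A @ x, b, atol=1e-6))
```

Note that the solver touches A only through products A @ p, which is exactly what makes iteration-on-data schemes possible: those products can be accumulated by streaming over records rather than storing the equations in memory.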

  4. Intercomparison of garnet barometers and implications for garnet mixing models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anovitz, L.M.; Essene, E.J.

    1985-01-01

    Several well-calibrated barometers are available in the system Ca-Fe-Ti-Al-Si-O, including: Alm+3Ru=3Ilm+Sil+2Qtz (GRAIL); 2Alm+Gr+6Ru=6Ilm+3An+3Qtz (GRIPS); 2Alm+Gr=3Fa+3An (FAGS); 3An=Gr+2Ky+Qtz (GASP); 2Fs=Fa+Qtz (FFQ); and Gr+Qtz=An+2Wo (WAGS). GRIPS, GRAIL and GASP form a linearly dependent set such that any two should yield the third given an a/X model for the grossular/almandine solid solution. Application to barometry of garnet granulite assemblages from the Grenville in Ontario yields average pressures 0.1 kb lower for GRIPS and 0.4 kb higher for FAGS using our mixing model. Results from Parry Island, Ontario, yield 8.7 kb from GRAIL as opposed to 9.1 kb using Ganguly and Saxena's model. For GASP, Parry Island assemblages yield 8.4 kb with the authors' calibration. Ganguly and Saxena's model gives 5.4 kb using Gasparik's reversals and 8.1 kb using the position of GASP calculated from GRIPS and GRAIL. These corrections allow GRIPS, GRAIL, GASP and FAGS to yield consistent pressures to +/- 0.5 kb in regional metamorphic terranes. Application of their mixing model outside of the fitted range 700-1000 K is not encouraged, as extrapolation may yield erroneous results.

  5. Effect of shroud geometry on the effectiveness of a short mixing stack gas eductor model

    NASA Astrophysics Data System (ADS)

    Kavalis, A. E.

    1983-06-01

    An existing apparatus for testing models of gas eductor systems using high temperature primary flow was modified to provide improved control and performance over a wide range of gas temperatures and flow rates. Secondary flow pumping, temperature and pressure data were recorded for two gas eductor system models. The first, previously tested under hot flow conditions, consists of a primary plate with four tilted-angled nozzles and a slotted, shrouded mixing stack with two diffuser rings (overall L/D = 1.5). A portable pyrometer with a surface probe was used for the second model in order to identify any hot spots at the external surface of the mixing stack, shroud and diffuser rings. The second model is shown to have almost the same mixing and pumping performance as the first, but to exhibit much lower shroud and diffuser surface temperatures.

  6. Formation of parametric images using mixed-effects models: a feasibility study.

    PubMed

    Huang, Husan-Ming; Shih, Yi-Yu; Lin, Chieh

    2016-03-01

    Mixed-effects models have been widely used in the analysis of longitudinal data. By presenting the parameters as a combination of fixed effects and random effects, mixed-effects models incorporating both within- and between-subject variations are capable of improving parameter estimation. In this work, we demonstrate the feasibility of using a non-linear mixed-effects (NLME) approach for generating parametric images from medical imaging data of a single study. By assuming that all voxels in the image are independent, we used simulation and animal data to evaluate whether NLME can improve the voxel-wise parameter estimation. For testing purposes, intravoxel incoherent motion (IVIM) diffusion parameters including perfusion fraction, pseudo-diffusion coefficient and true diffusion coefficient were estimated using diffusion-weighted MR images and NLME through fitting the IVIM model. The conventional method of non-linear least squares (NLLS) was used as the standard approach for comparison of the resulting parametric images. In the simulated data, NLME provides more accurate and precise estimates of diffusion parameters compared with NLLS. Similarly, we found that NLME has the ability to improve the signal-to-noise ratio of parametric images obtained from rat brain data. These data have shown that it is feasible to apply NLME in parametric image generation, and the parametric image quality can be accordingly improved with the use of NLME. With the flexibility to be adapted to other models or modalities, NLME may become a useful tool to improve the parametric image quality in the future. Copyright © 2015 John Wiley & Sons, Ltd.
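
The NLLS baseline mentioned above fits each voxel independently to the biexponential IVIM signal model S(b) = S0[f·exp(−b·D*) + (1−f)·exp(−b·D)]. The sketch below does this for one synthetic voxel with `scipy.optimize.curve_fit`; the b-value scheme, parameter values, and noise level are assumptions chosen to be plausible, not the paper's acquisition protocol.

```python
# Sketch: conventional voxel-wise NLLS fit of the IVIM model (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, f, d_star, d):
    # Biexponential IVIM signal: perfusion (fast) + diffusion (slow) pools.
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

b = np.array([0, 10, 20, 50, 100, 200, 400, 600, 800.0])   # s/mm^2 (assumed)
true = (1.0, 0.1, 0.05, 0.001)   # S0, f, D* (mm^2/s), D (mm^2/s), plausible
rng = np.random.default_rng(0)
signal = ivim(b, *true) + rng.normal(0, 0.005, b.size)

p0 = (1.0, 0.2, 0.02, 0.0005)                 # starting guess
popt, _ = curve_fit(ivim, b, signal, p0=p0, bounds=(0, [2, 1, 1, 0.1]))
print(popt)
```

The NLME alternative studied in the paper replaces these independent per-voxel fits with a joint fit in which voxel parameters are treated as random effects around population means, which is what pools information across voxels.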

  7. The Mixing of Regolith on the Moon and Beyond; A Model Refreshed

    NASA Astrophysics Data System (ADS)

    Costello, E.; Ghent, R. R.; Lucey, P. G.

    2017-12-01

    Meteoritic impactors constantly mix the lunar regolith, affecting stratigraphy, the lifetime of rays and other anomalous surface features, and the burial, exposure, and break down of volatiles and rocks. In this work we revisit the pioneering regolith mixing model presented by Gault et al. (1974), with updated assumptions and input parameters. Our updates significantly widen the parameter space and allow us to explore mixing as it is driven by different impactors in different materials (e.g. radar-dark halos and melt ponds). The updated treatment of micrometeorites suggests a very high rate of processing at the immediate lunar surface, with implications for rock breakdown and regolith production on melt ponds. We find that the inclusion of secondary impacts has a very strong effect on the rate and magnitude of mixing at all depths and timescales. Our calculations are in good agreement with the timescale of reworking in the top 2-3 cm of regolith that was predicted by observations of LROC temporal pairs and by the depth profile of 26Al abundance in Apollo drill cores. Further, our calculations with secondaries included are consistent with the depth profile of in situ exposure age calculated from Is/FeO and cosmic track abundance in Apollo deep drill cores down to 50 cm. The mixing we predict is also consistent with the erasure of density anomalies, or 'cold spots', observed in the top decimeters of regolith by LRO Diviner, and the 1 Gyr lifetime of 1-10 m thick Copernican rays. This exploration of the Moon's surface evolution has profound implications for our understanding of other planetary bodies. We take advantage of this computationally inexpensive analytic model and apply it to describe mixing on a variety of bodies across the solar system, including asteroids, Mercury, and Europa. We use the results of ongoing studies that describe porosity calculations and cratering laws in porous asteroid-like material to explore the reworking rate experienced by an asteroid. On

  8. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    PubMed

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by asymmetric distributions to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.

  9. Exploring compositional variations on the surface of Mars applying mixing modeling to a telescopic spectral image

    NASA Technical Reports Server (NTRS)

    Merenyi, E.; Miller, J. S.; Singer, R. B.

    1992-01-01

    The linear mixing model approach was successfully applied to data sets of various natures. In these sets, the measured radiance could be assumed to be a linear combination of radiance contributions. The present work is an attempt to analyze a spectral image of Mars with linear mixing modeling.
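
In the linear mixing model, each measured spectrum is treated as a weighted sum of endmember spectra, with the weights (abundances) recovered by constrained least squares. The sketch below unmixes one synthetic pixel with non-negative least squares; the endmember spectra and abundances are fabricated for illustration and are not the Mars telescopic data.

```python
# Sketch: linear spectral unmixing of one pixel via non-negative least
# squares. Endmembers and abundances below are synthetic.
import numpy as np
from scipy.optimize import nnls

n_bands = 50
rng = np.random.default_rng(2)
endmembers = np.abs(rng.standard_normal((n_bands, 3)))   # 3 components
true_abund = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_abund + rng.normal(0, 0.01, n_bands)

abund, resid = nnls(endmembers, pixel)    # non-negative abundances
abund /= abund.sum()                      # normalize to fractions
print(abund.round(2))
```

Applying the same solve to every pixel of a spectral image yields per-component abundance maps, which is the compositional product such analyses aim for.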

  10. Fully-coupled analysis of jet mixing problems. Three-dimensional PNS model, SCIP3D

    NASA Technical Reports Server (NTRS)

    Wolf, D. E.; Sinha, N.; Dash, S. M.

    1988-01-01

    Numerical procedures formulated for the analysis of 3D jet mixing problems, as incorporated in the computer model, SCIP3D, are described. The overall methodology closely parallels that developed in the earlier 2D axisymmetric jet mixing model, SCIPVIS. SCIP3D integrates the 3D parabolized Navier-Stokes (PNS) jet mixing equations, cast in mapped cartesian or cylindrical coordinates, employing the explicit MacCormack Algorithm. A pressure split variant of this algorithm is employed in subsonic regions with a sublayer approximation utilized for treating the streamwise pressure component. SCIP3D contains both the ks and kW turbulence models, and employs a two component mixture approach to treat jet exhausts of arbitrary composition. Specialized grid procedures are used to adjust the grid growth in accordance with the growth of the jet, including a hybrid cartesian/cylindrical grid procedure for rectangular jets which moves the hybrid coordinate origin towards the flow origin as the jet transitions from a rectangular to circular shape. Numerous calculations are presented for rectangular mixing problems, as well as for a variety of basic unit problems exhibiting overall capabilities of SCIP3D.

  11. A UNIFIED FRAMEWORK FOR VARIANCE COMPONENT ESTIMATION WITH SUMMARY STATISTICS IN GENOME-WIDE ASSOCIATION STUDIES.

    PubMed

    Zhou, Xiang

    2017-12-01

    Linear mixed models (LMMs) are among the most commonly used tools for genetic association studies. However, the standard method for estimating variance components in LMMs, the restricted maximum likelihood estimation method (REML), suffers from several important drawbacks: REML requires individual-level genotypes and phenotypes from all samples in the study, is computationally slow, and produces downward-biased estimates in case-control studies. To remedy these drawbacks, we present an alternative framework for variance component estimation, which we refer to as MQS. MQS is based on the method of moments (MoM) and the minimal norm quadratic unbiased estimation (MINQUE) criterion, and brings two seemingly unrelated methods, the renowned Haseman-Elston (HE) regression and the recent LD score regression (LDSC), into the same unified statistical framework. With this new framework, we provide an alternative but mathematically equivalent form of HE that allows for the use of summary statistics. We provide an exact estimation form of LDSC to yield unbiased and statistically more efficient estimates. A key feature of our method is its ability to pair marginal z-scores computed using all samples with SNP correlation information computed using a small random subset of individuals (or individuals from a proper reference panel), while capable of producing estimates that can be almost as accurate as if both quantities are computed using the full data. As a result, our method produces unbiased and statistically efficient estimates, and makes use of summary statistics, while it is computationally efficient for large data sets. Using simulations and applications to 37 phenotypes from 8 real data sets, we illustrate the benefits of our method for estimating and partitioning SNP heritability in population studies as well as for heritability estimation in family studies. Our method is implemented in the GEMMA software package, freely available at www.xzlab.org/software.html.
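
The Haseman-Elston regression named above has a compact method-of-moments form: regressing off-diagonal products of centered, standardized phenotypes yᵢyⱼ on the corresponding relatedness entries Kᵢⱼ gives a slope that estimates the genetic variance. The sketch below demonstrates this basic form on synthetic data; it is a generic HE illustration, not the MQS estimator of the paper.

```python
# Sketch: basic Haseman-Elston regression estimate of SNP heritability
# on synthetic genotypes and phenotypes.
import numpy as np

rng = np.random.default_rng(3)
N, M = 600, 1200
X = rng.standard_normal((N, M))       # standardized genotypes
K = X @ X.T / M                       # genetic relatedness matrix
h2_true = 0.5
beta = rng.normal(0, np.sqrt(h2_true / M), M)
y = X @ beta + rng.normal(0, np.sqrt(1 - h2_true), N)
y = (y - y.mean()) / y.std()          # center and standardize

iu = np.triu_indices(N, k=1)          # off-diagonal pairs (i < j)
yy = np.outer(y, y)[iu]               # phenotype cross-products
kk = K[iu]
h2_est = (kk @ yy) / (kk @ kk)        # LS slope through the origin
print(round(h2_est, 2))
```

Because the estimator only needs moments of yᵢyⱼ and Kᵢⱼ, it avoids REML's iterative likelihood maximization, which is the property the summary-statistic reformulation in the paper builds on.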

  12. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we call on trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection asymmetric tail dependence, and their computational feasibility despite their three-dimensionality.

  13. Who mixes with whom among men who have sex with men? Implications for modelling the HIV epidemic in southern India

    PubMed Central

    Mitchell, K.M.; Foss, A.M.; Prudden, H.J.; Mukandavire, Z.; Pickles, M.; Williams, J.R.; Johnson, H.C.; Ramesh, B.M.; Washington, R.; Isac, S.; Rajaram, S.; Phillips, A.E.; Bradley, J.; Alary, M.; Moses, S.; Lowndes, C.M.; Watts, C.H.; Boily, M.-C.; Vickerman, P.

    2014-01-01

    In India, the identity of men who have sex with men (MSM) is closely related to the role taken in anal sex (insertive, receptive or both), but little is known about sexual mixing between identity groups. Both role segregation (taking only the insertive or receptive role) and the extent of assortative (within-group) mixing are known to affect HIV epidemic size in other settings and populations. This study explores how different possible mixing scenarios, consistent with behavioural data collected in Bangalore, south India, affect both the HIV epidemic, and the impact of a targeted intervention. Deterministic models describing HIV transmission between three MSM identity groups (mostly insertive Panthis/Bisexuals, mostly receptive Kothis/Hijras and versatile Double Deckers), were parameterised with behavioural data from Bangalore. We extended previous models of MSM role segregation to allow each of the identity groups to have both insertive and receptive acts, in differing ratios, in line with field data. The models were used to explore four different mixing scenarios ranging from assortative (maximising within-group mixing) to disassortative (minimising within-group mixing). A simple model was used to obtain insights into the relationship between the degree of within-group mixing, R0 and equilibrium HIV prevalence under different mixing scenarios. A more complex, extended version of the model was used to compare the predicted HIV prevalence trends and impact of an HIV intervention when fitted to data from Bangalore. With the simple model, mixing scenarios with increased amounts of assortative (within-group) mixing tended to give rise to a higher R0 and increased the likelihood that an epidemic would occur. When the complex model was fit to HIV prevalence data, large differences in the level of assortative mixing were seen between the fits identified using different mixing scenarios, but little difference was projected in future HIV prevalence trends. An oral pre
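
The link between assortativity and R0 described above can be made concrete with a next-generation-matrix calculation: R0 is the spectral radius of a matrix built from group transmissibilities and a mixing matrix that interpolates between proportionate and fully within-group contact. The sketch below is a deliberately simplified three-group toy (all group sizes and transmission parameters are hypothetical), not the deterministic MSM model of the paper.

```python
# Sketch: R0 as the spectral radius of a simplified next-generation matrix,
# under mixing that interpolates between proportionate (assort = 0) and
# fully assortative (assort = 1) contact. All parameters are hypothetical.
import numpy as np

sizes = np.array([0.4, 0.3, 0.3])      # group population fractions
beta = np.array([0.6, 0.9, 0.75])      # per-group transmissibility

def r0(assort):
    # Mixing matrix: within-group share plus proportionate remainder.
    mix = assort * np.eye(3) + (1 - assort) * np.tile(sizes, (3, 1))
    ngm = beta[:, None] * mix          # simplified next-generation matrix
    return max(abs(np.linalg.eigvals(ngm)))

print(r0(0.0), r0(0.9))
```

With these parameters, more assortative mixing concentrates transmission in the higher-risk groups and raises R0, in line with the qualitative finding reported in the abstract.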

  14. A size-composition resolved aerosol model for simulating the dynamics of externally mixed particles: SCRAM (v 1.0)

    NASA Astrophysics Data System (ADS)

    Zhu, S.; Sartelet, K. N.; Seigneur, C.

    2015-06-01

    The Size-Composition Resolved Aerosol Model (SCRAM) for simulating the dynamics of externally mixed atmospheric particles is presented. This new model classifies aerosols by both composition and size, based on a comprehensive combination of all chemical species and their mass-fraction sections. All three main processes involved in aerosol dynamics (coagulation, condensation/evaporation and nucleation) are included. The model is first validated by comparison with a reference solution and with results of simulations using internally mixed particles. The degree of mixing of particles is investigated in a box model simulation using data representative of air pollution in Greater Paris. The relative influence on the mixing state of the different aerosol processes (condensation/evaporation, coagulation) and of the algorithm used to model condensation/evaporation (bulk equilibrium, dynamic) is studied.

  15. Stochastic Mixing Model with Power Law Decay of Variance

    NASA Technical Reports Server (NTRS)

    Fedotov, S.; Ihme, M.; Pitsch, H.

    2003-01-01

    Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason the LLN enters our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean. It converges to the mean value mu, while the variance sigma_c^2(t) decays approximately as t^-1. Since the variance of the scalar decays faster than that of a sample mean (the decay exponent is typically greater than unity), we introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model that is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate gamma_n, which we model in a first step as a deterministic function. In a second step, we generalize gamma_n as a stochastic variable, taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
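
    The t^-1 decay of the sample-mean variance invoked above is easy to check numerically. The sketch below (a generic Monte Carlo illustration, not the authors' model) estimates the variance of the mean of n uniform draws and confirms that quadrupling n cuts the variance by roughly a factor of four.

```python
import random

def sample_mean_variance(n, replicates=2000, rng=None):
    """Variance of the mean of n Uniform(0,1) draws, estimated over many replicates."""
    rng = rng or random.Random(0)
    means = [sum(rng.random() for _ in range(n)) / n for _ in range(replicates)]
    mu = sum(means) / replicates
    return sum((m - mu) ** 2 for m in means) / (replicates - 1)

v10 = sample_mean_variance(10)
v40 = sample_mean_variance(40, rng=random.Random(1))
print(v10 / v40)  # close to 4: the variance of a sample mean decays as 1/n
```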

  16. Fully-coupled analysis of jet mixing problems. Part 1. Shock-capturing model, SCIPVIS

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Wolf, D. E.

    1984-01-01

    A computational model, SCIPVIS, is described which predicts the multiple-cell shock structure in imperfectly expanded, turbulent, axisymmetric jets. The model spatially integrates the parabolized Navier-Stokes jet mixing equations using a shock-capturing approach in supersonic flow regions and a pressure-split approximation in subsonic flow regions. The regions are coupled using a viscous-characteristic procedure. Turbulence processes are represented via the solution of compressibility-corrected two-equation turbulence models. The formation of Mach discs in the jet and the interactive analysis of the wake-like mixing process occurring behind Mach discs are handled in a rigorous manner. Calculations are presented exhibiting the fundamental interactive processes occurring in supersonic jets, and the model is assessed via comparisons with detailed laboratory data for a variety of under- and overexpanded jets.

  17. Multiple component end-member mixing model of dilution: hydrochemical effects of construction water at Yucca Mountain, Nevada, USA

    NASA Astrophysics Data System (ADS)

    Lu, Guoping; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.

    2008-12-01

    The standard dual-component, two-member linear mixing model is often used to quantify water mixing of different sources. However, it is no longer applicable whenever the actual mixture concentrations are not exactly known because of dilution. For example, low-water-content (low-porosity) rock samples are leached for pore-water chemical compositions, which are therefore diluted in the leachates. A multicomponent, two-member mixing model of dilution has been developed to quantify mixing of water sources and multiple chemical components experiencing dilution in leaching. This extended mixing model was used to quantify fracture-matrix interaction in construction-water migration tests along the Exploratory Studies Facility (ESF) tunnel at Yucca Mountain, Nevada, USA. The model effectively recovers the spatial distribution of water and chemical compositions released from the construction water, and provides invaluable data on the fracture-matrix interaction. The methodology and formulations described here are applicable to many sorts of mixing-dilution problems, including dilution in petroleum reservoirs, hydrospheres, chemical constituents in rocks and minerals, monitoring of drilling fluids, and leaching, as well as to environmental science studies.
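
    The algebra behind the extended model can be illustrated with a minimal two-member, two-component sketch (the concentrations below are invented for illustration, not Yucca Mountain data). An unknown dilution factor scales every measured concentration equally, so the ratio of two conservative components eliminates it; the mixing fraction follows from that ratio, and the dilution factor can then be backed out.

```python
def unmix_with_dilution(leachate, end_a, end_b):
    """Recover mixing fraction f of end-member A and dilution factor d from a
    two-component leachate, where leachate[e] = d * (f*A[e] + (1-f)*B[e])."""
    c1, c2 = leachate
    a1, a2 = end_a
    b1, b2 = end_b
    r = c1 / c2                       # the dilution factor d cancels in this ratio
    f = (b1 - r * b2) / ((r * a2 - a1) - (r * b2 - b1))
    d = c1 / (f * a1 + (1 - f) * b1)  # back out the dilution from either component
    return f, d

# Synthetic check: 30% end-member A, diluted 20-fold during leaching
A, B, f_true, d_true = (100.0, 10.0), (20.0, 40.0), 0.3, 0.05
mix = tuple(d_true * (f_true * a + (1 - f_true) * b) for a, b in zip(A, B))
f, d = unmix_with_dilution(mix, A, B)
print(round(f, 6), round(d, 6))  # recovers f = 0.3, d = 0.05
```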

  18. Incorporating concentration dependence in stable isotope mixing models.

    PubMed

    Phillips, Donald L; Koch, Paul L

    2002-01-01

    Stable isotopes are often used as natural labels to quantify the contributions of multiple sources to a mixture. For example, C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual-isotope, three-source linear mixing model assumes that the proportional contribution of a source to a mixture is the same for both elements (e.g., C, N). This may be a reasonable assumption if the concentrations are similar among all sources. However, one source is often particularly rich or poor in one element (e.g., N), which logically leads to a proportionate increase or decrease in the contribution of that source to the mixture for that element relative to the other element (e.g., C). We have developed a concentration-weighted linear mixing model, which assumes that for each element, a source's contribution is proportional to the contributed mass times the elemental concentration in that source. The model is outlined for two elements and three sources, but can be generalized to n elements and n+1 sources. Sensitivity analyses for C and N in three sources indicated that varying the N concentration of just one source had large and differing effects on the estimated source contributions of mass, C, and N. The same was true for a case study of bears feeding on salmon, moose, and N-poor plants. In this example, the estimated biomass contribution of salmon from the concentration-weighted model was markedly less than the standard model estimate. Application of the model to a feeding study of captive mink fed on salmon, lean beef, and C-rich, N-poor beef fat reproduced the known dietary proportions very closely, whereas the standard model failed to yield a set of positive source proportions. Use of this concentration-weighted model is recommended whenever the elemental concentrations vary substantially among the sources, which may occur in a variety of ecological and geochemical applications of stable isotope
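
    The concentration-weighted model reduces to a linear system: rearranging delta_mix = sum_i f_i*C_i*delta_i / sum_i f_i*C_i for each element gives sum_i f_i*C_i*(delta_i - delta_mix) = 0, plus the mass-balance constraint sum_i f_i = 1. The sketch below solves this three-source, two-element system; the concentrations and isotope values are invented for illustration, not the bear or mink data from the study.

```python
def solve3(M, rhs):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    A = [row[:] + [b] for row, b in zip(M, rhs)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            k = A[r][col] / A[col][col]
            A[r] = [a - k * b for a, b in zip(A[r], A[col])]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

def concentration_weighted_fractions(conc, delta, delta_mix):
    """conc[i][e], delta[i][e]: 3 sources x 2 elements; delta_mix[e]: observed mixture.
    Returns mass fractions f with sum_i f_i*C_ie*(d_ie - d_mix_e) = 0 and sum f = 1."""
    rows = [[conc[i][e] * (delta[i][e] - delta_mix[e]) for i in range(3)]
            for e in range(2)]
    rows.append([1.0, 1.0, 1.0])
    return solve3(rows, [0.0, 0.0, 1.0])

# Hypothetical (%C, %N) concentrations and (d13C, d15N) values for three food sources
conc = [(45.0, 12.0), (48.0, 8.0), (44.0, 2.0)]
delta = [(-20.0, 12.0), (-26.0, 3.0), (-27.0, -1.0)]
f_true = (0.5, 0.3, 0.2)
d_mix = [sum(ft * c[e] * d[e] for ft, c, d in zip(f_true, conc, delta)) /
         sum(ft * c[e] for ft, c in zip(f_true, conc)) for e in range(2)]
f = concentration_weighted_fractions(conc, delta, d_mix)
print([round(x, 6) for x in f])  # recovers the true mass fractions
```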

  19. Modelling Kepler red giants in eclipsing binaries: calibrating the mixing-length parameter with asteroseismology

    NASA Astrophysics Data System (ADS)

    Li, Tanda; Bedding, Timothy R.; Huber, Daniel; Ball, Warrick H.; Stello, Dennis; Murphy, Simon J.; Bland-Hawthorn, Joss

    2018-03-01

    Stellar models rely on a number of free parameters. High-quality observations of eclipsing binary stars observed by Kepler offer a great opportunity to calibrate model parameters for evolved stars. Our study focuses on six Kepler red giants with the goal of calibrating the mixing-length parameter of convection as well as the asteroseismic surface term in models. We introduce a new method to improve the identification of oscillation modes that exploits theoretical frequencies to guide the mode identification (`peak-bagging') stage of the data analysis. Our results indicate that the convective mixing-length parameter (α) is ≈14 per cent larger for red giants than for the Sun, in agreement with recent results from modelling the APOGEE stars. We found that the asteroseismic surface term (i.e. the frequency offset between the observed and predicted modes) correlates with stellar parameters (Teff, log g) and the mixing-length parameter. This frequency offset generally decreases as giants evolve. The two coefficients a₋₁ and a₃ for the inverse and cubic terms that have been used to describe the surface term correction are found to correlate linearly. The effect of the surface term is also seen in the p-g mixed modes; however, established methods for correcting the effect are not able to properly correct the g-dominated modes in late evolved stars.

  20. Modeling the interplay between sea ice formation and the oceanic mixed layer: Limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-02-01

    The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.

  1. Modelling the interplay between sea ice formation and the oceanic mixed layer: limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-04-01

    The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.

  2. Incompletely Mixed Surface Transient Storage Zones at River Restoration Structures: Modeling Implications

    NASA Astrophysics Data System (ADS)

    Endreny, T. A.; Robinson, J.

    2012-12-01

    River restoration structures, also known as river steering deflectors, are designed to reduce bank shear stress by generating wake zones between the bank and the constricted conveyance region. There is interest in characterizing the surface transient storage (STS) and associated biogeochemical processing in the STS zones around these structures to quantify the ecosystem benefits of river restoration. This research explored how the hydraulics around river restoration structures prohibit application of transient storage models designed for homogeneous, completely mixed STS zones. We used slug and constant-rate injections of a conservative tracer in a 3rd-order river in Onondaga County, NY over the course of five experiments at varying flow regimes. Recovered breakthrough curves spanned a transect including the main channel and wake zone at a j-hook restoration structure. We noted divergent patterns of peak solute concentration and timing within the wake zone regardless of transect location within the structure. Analysis reveals an inhomogeneous STS zone which is frequently still loading tracer after the main channel has peaked. The breakthrough curve loading patterns at the restoration structure violated the assumptions of simplified "random walk" 2 zone transient storage models, which seek to identify representative STS zones and zone locations. Use of structure-scale Wiener-filter-based multi-rate mass transfer models to characterize STS zone residence times is similarly dependent on a representative zone location. Each 2 zone model assumes 1 zone is a completely mixed STS zone and the other a completely mixed main channel. Our research reveals limits to simple application of the recently developed 2 zone models, and raises important questions about the measurement scale necessary to identify critical STS properties at restoration sites.
An explanation for the incompletely mixed STS zone may be the distinct hydraulics at restoration sites, including a constrained

  3. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Cliff

    2015-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.

  4. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2016-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
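
    The model structure described (linear in non-dimensional surface position, logarithmic in Strouhal number, with a separate coefficient set per observer angle and linear interpolation between angles) can be sketched as follows. The coefficient values here are invented placeholders, not the fitted NASA coefficients.

```python
import bisect
import math

# Hypothetical per-angle coefficients: (intercept, standoff, axial, log-Strouhal)
COEFFS = {
    60.0:  (0.0, -1.2, 0.40, 2.0),
    90.0:  (0.5, -1.8, 0.55, 2.6),
    120.0: (1.0, -2.4, 0.70, 3.1),
}
ANGLES = sorted(COEFFS)

def shielding_db(angle, h, x, strouhal):
    """Shielding attenuation (dB) for a surface at non-dimensional standoff h and
    axial position x: linear in (h, x), logarithmic in Strouhal number, with
    linear interpolation between the tabulated observer angles."""
    def at(a):
        c0, ch, cx, cs = COEFFS[a]
        return c0 + ch * h + cx * x + cs * math.log10(strouhal)
    if angle <= ANGLES[0]:
        return at(ANGLES[0])
    if angle >= ANGLES[-1]:
        return at(ANGLES[-1])
    i = bisect.bisect_right(ANGLES, angle)
    a0, a1 = ANGLES[i - 1], ANGLES[i]
    w = (angle - a0) / (a1 - a0)
    return (1 - w) * at(a0) + w * at(a1)

print(shielding_db(75.0, 1.0, 2.0, 1.0))  # midway between the 60 and 90 degree fits
```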

  5. Model for compressible turbulence in hypersonic wall boundary and high-speed mixing layers

    NASA Astrophysics Data System (ADS)

    Bowersox, Rodney D. W.; Schetz, Joseph A.

    1994-07-01

    The most common approach to Navier-Stokes predictions of turbulent flows is based on either the classical Reynolds- or Favre-averaged Navier-Stokes equations or some combination. The main goal of the current work was to numerically assess the effects of the compressible turbulence terms that were experimentally found to be important. The compressible apparent mass mixing length extension (CAMMLE) model, which was based on measured experimental data, was found to produce accurate predictions of the measured compressible turbulence data for both the wall-bounded and free mixing layers. Hence, that model was incorporated into a finite-volume Navier-Stokes code.

  6. A quantitative approach to combine sources in stable isotope mixing models

    EPA Science Inventory

    Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...

  7. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  8. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase

  9. Euler-Lagrange CFD modelling of unconfined gas mixing in anaerobic digestion.

    PubMed

    Dapelo, Davide; Alberini, Federico; Bridgeman, John

    2015-11-15

    A novel Euler-Lagrangian (EL) computational fluid dynamics (CFD) finite-volume-based model to simulate the gas mixing of sludge for anaerobic digestion is developed and described. Fluid motion is driven by momentum transfer from bubbles to liquid. Model validation is undertaken by assessing the flow field in a lab-scale model with particle image velocimetry (PIV). Conclusions are drawn about the upscaling and applicability of the model to full-scale problems, and recommendations are given for optimum application.

  10. Correcting for population structure and kinship using the linear mixed model: theory and extensions.

    PubMed

    Hoffman, Gabriel E

    2013-01-01

    Population structure and kinship are widespread confounding factors in genome-wide association studies (GWAS). It has been standard practice to include principal components of the genotypes in a regression model in order to account for population structure. More recently, the linear mixed model (LMM) has emerged as a powerful method for simultaneously accounting for population structure and kinship. The statistical theory underlying the differences in empirical performance between modeling principal components as fixed versus random effects has not been thoroughly examined. We undertake an analysis to formalize the relationship between these widely used methods and elucidate the statistical properties of each. Moreover, we introduce a new statistic, the effective degrees of freedom, which serves as a metric of model complexity, and a novel low-rank linear mixed model (LRLMM) that learns the dimensionality of the correction for population structure and kinship, and we assess its performance through simulations. A comparison of the results of the LRLMM and a standard LMM analysis applied to GWAS data from the Multi-Ethnic Study of Atherosclerosis (MESA) illustrates how our theoretical results translate into empirical properties of the mixed model. Finally, the analysis demonstrates the ability of the LRLMM to substantially boost the strength of an association for HDL cholesterol in Europeans.
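
    The mechanics of the LMM correction can be sketched in a few lines: with kinship matrix K and variance components sigma_g^2 and sigma_e^2, the model y = X*beta + g + e with g ~ N(0, sigma_g^2 K) implies Var(y) = V = sigma_g^2 K + sigma_e^2 I, and the fixed effects are estimated by generalized least squares. This is a generic illustration of the estimator, not the LRLMM proposed here; the kinship matrix and variance components below are invented.

```python
import numpy as np

def gls_beta(X, y, V):
    """Generalized least squares: beta = (X' V^-1 X)^-1 X' V^-1 y."""
    Vi_X = np.linalg.solve(V, X)   # V^-1 X without forming the inverse explicitly
    Vi_y = np.linalg.solve(V, y)
    return np.linalg.solve(X.T @ Vi_X, X.T @ Vi_y)

# Toy design: intercept + one SNP coded 0/1/2 for four individuals
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 1.0]])
beta_true = np.array([0.5, 0.2])
y = X @ beta_true                   # noiseless response for the exact-recovery check

# A hypothetical kinship matrix: two pairs of relatives
K = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.5],
              [0.0, 0.0, 0.5, 1.0]])
V = 0.6 * K + 0.4 * np.eye(4)       # sigma_g^2 * K + sigma_e^2 * I

beta_hat = gls_beta(X, y, V)
print(beta_hat)  # recovers beta_true exactly when y lies in the column space of X
```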

  11. Tunable, mixed-resolution modeling using library-based Monte Carlo and graphics processing units

    PubMed Central

    Mamonov, Artem B.; Lettieri, Steven; Ding, Ying; Sarver, Jessica L.; Palli, Rohith; Cunningham, Timothy F.; Saxena, Sunil; Zuckerman, Daniel M.

    2012-01-01

    Building on our recently introduced library-based Monte Carlo (LBMC) approach, we describe a flexible protocol for mixed coarse-grained (CG)/all-atom (AA) simulation of proteins and ligands. In the present implementation of LBMC, protein side chain configurations are pre-calculated and stored in libraries, while bonded interactions along the backbone are treated explicitly. Because the AA side chain coordinates are maintained at minimal run-time cost, arbitrary sites and interaction terms can be turned on to create mixed-resolution models. For example, an AA region of interest such as a binding site can be coupled to a CG model for the rest of the protein. We have additionally developed a hybrid implementation of the generalized Born/surface area (GBSA) implicit solvent model suitable for mixed-resolution models, which in turn was ported to a graphics processing unit (GPU) for faster calculation. The new software was applied to study two systems: (i) the behavior of spin labels on the B1 domain of protein G (GB1) and (ii) docking of randomly initialized estradiol configurations to the ligand binding domain of the estrogen receptor (ERα). The performance of the GPU version of the code was also benchmarked in a number of additional systems. PMID:23162384

  12. Additive mixed effect model for recurrent gap time data.

    PubMed

    Ding, Jieli; Sun, Liuquan

    2017-04-01

    Gap times between recurrent events are often of primary interest in medical and observational studies. The additive hazards model, focusing on risk differences rather than risk ratios, has been widely used in practice. However, the marginal additive hazards model does not take the dependence among gap times into account. In this paper, we propose an additive mixed effect model to analyze gap time data; the proposed model includes a subject-specific random effect to account for the dependence among the gap times. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. In addition, some graphical and numerical procedures are presented for model checking. The finite sample behavior of the proposed methods is evaluated through simulation studies, and an application to a data set from a clinical study on chronic granulomatous disease is provided.

  13. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is the identification and quantification of relationships between parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso lacks the oracle property, whereby an estimator asymptotically performs as though the true underlying model were given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess the oracle property; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which uses the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso, to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance, at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficients. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
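
    The adaptive Lasso idea, penalizing each coefficient in proportion to the inverse of an initial estimate (AALasso further adjusts the weight by the estimate's standard error), can be sketched for ordinary linear regression via feature rescaling. This is a generic illustration with simulated data, not the authors' population-PK implementation.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Lasso via cyclic coordinate descent (no intercept)."""
    n, p = X.shape
    beta = np.zeros(p)
    z = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]  # partial residual excluding j
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z[j]
    return beta

def adaptive_lasso(X, y, lam):
    """Adaptive Lasso: penalty weights 1/|OLS estimate|, applied via rescaling."""
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    w = 1.0 / np.maximum(np.abs(b_ols), 1e-8)  # per-coefficient penalty weights
    Xs = X / w                                 # run plain Lasso on scaled features...
    return lasso_cd(Xs, y, lam) / w            # ...then undo the scaling

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([2.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(100)
beta = adaptive_lasso(X, y, lam=5.0)
print(np.round(beta, 3))  # the informative coefficient survives; the rest shrink to 0
```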

  14. Models to understand the population-level impact of mixed strain M. tuberculosis infections.

    PubMed

    Sergeev, Rinat; Colijn, Caroline; Cohen, Ted

    2011-07-07

    Over the past decade, numerous studies have identified tuberculosis patients in whom more than one distinct strain of Mycobacterium tuberculosis is present. While it has been shown that these mixed strain infections can reduce the probability of treatment success for individuals simultaneously harboring both drug-sensitive and drug-resistant strains, it is not yet known if and how this phenomenon impacts the long-term dynamics of tuberculosis within communities. Strain-specific differences in immunogenicity and associations with drug resistance suggest that a better understanding of how strains compete within hosts will be necessary to project the effects of mixed strain infections on the future burden of drug-sensitive and drug-resistant tuberculosis. In this paper, we develop a modeling framework that allows us to investigate mechanisms of strain competition within hosts and to assess the long-term effects of such competition on the ecology of strains in a population. These models permit us to systematically evaluate the importance of unknown parameters and to suggest priority areas for future experimental research. Despite the current scarcity of data to inform the values of several model parameters, we are able to draw important qualitative conclusions from this work. We find that mixed strain infections may promote the coexistence of drug-sensitive and drug-resistant strains in two ways. First, mixed strain infections allow a strain with a lower basic reproductive number to persist in a population where it would otherwise be outcompeted, if it has competitive advantages within a co-infected host. Second, some individuals progressing to phenotypically drug-sensitive tuberculosis from a state of mixed drug-sensitive and drug-resistant infection may retain small subpopulations of drug-resistant bacteria that can flourish once the host is treated with antibiotics. We propose that these types of mixed infections, by increasing the ability of low fitness drug

  15. Modeling Magma Mixing: Evidence from U-series age dating and Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Philipp, R.; Cooper, K. M.; Bergantz, G. W.

    2007-12-01

    Magma mixing and recharge is a ubiquitous process in the shallow crust, which can trigger eruption and cause magma hybridization. Phenocrysts in mixed magmas are recorders of magma mixing and can be studied by in-situ techniques and analyses of bulk mineral separates. To better understand whether micro-textural and compositional information reflects local or reservoir-scale events, a physical model for the gathering and dispersal of crystals is necessary. We present the results of a combined geochemical and fluid dynamical study of magma mixing processes at Volcan Quizapu, Chile; two large (1846/47 AD and 1932 AD) dacitic eruptions from the same vent area were triggered by andesitic recharge magma and show various degrees of magma mixing. Employing a multiphase numerical fluid dynamic model, we simulated a simple mixing process of vesiculated mafic magma intruded into a crystal-bearing silicic reservoir. This unstable condition leads to overturn and mixing. In a second step we use the velocity field obtained to calculate the flow paths of 5000 crystals randomly distributed over the entire system. Those particles mimic the phenocryst response to the convective motion. There is little local relative motion between silicate liquid and crystals due to the high viscosity of the melts and the rapid overturn rate of the system. Of special interest is the crystal dispersal and gathering, which is quantified by comparing the distances at the beginning and end of the simulation for all particle pairs that are initially closer than a length scale chosen between 1 and 10 m. At the start of the simulation, both the resident and the new intruding (mafic) magmas have a unique particle population. Depending on the Reynolds number (Re) and the chosen characteristic length scale of different phenocryst-pairs, we statistically describe the heterogeneity of crystal populations on the thin-section scale. For large Re (approx. 25) and a short characteristic length scale of particle

  16. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    PubMed

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  17. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    NASA Technical Reports Server (NTRS)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
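As a hedged illustration of the zero-inflated Beta idea described above (a generic sketch, not the authors' implementation, and without the mixed-model or Bayesian machinery), the density splits into a point mass at zero and a Beta component on (0, 1):

```python
import numpy as np
from scipy import stats

def zib_logpdf(y, pi0, a, b):
    """Zero-inflated Beta log-density: a point mass pi0 at exactly zero
    (the 'effective zero' Pc) plus (1 - pi0) times a Beta(a, b) density."""
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    zero = (y == 0.0)
    out[zero] = np.log(pi0)
    out[~zero] = np.log1p(-pi0) + stats.beta.logpdf(y[~zero], a, b)
    return out

# hypothetical scaled log10(Pc) values; the zeros are effective-zero events
y = np.array([0.0, 0.2, 0.45, 0.0, 0.8])
loglik = zib_logpdf(y, pi0=0.3, a=2.0, b=5.0).sum()
```

In a full analysis, pi0 and the Beta parameters would themselves be modeled as functions of fixed and random effects.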

  18. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skew longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions by asymmetric distribution for model errors. To deal with missingness, we employ an informative missing data model. The joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazard model for competing risks process and missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we implement them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  19. Multilevel nonlinear mixed-effects models for the modeling of earlywood and latewood microfibril angle

    Treesearch

    Lewis Jordon; Richard F. Daniels; Alexander Clark; Rechun He

    2005-01-01

    Earlywood and latewood microfibril angle (MFA) was determined at 1-millimeter intervals from disks at 1.4 meters, then at 3-meter intervals to a height of 13.7 meters, from 18 loblolly pine (Pinus taeda L.) trees grown in southeastern Texas. A modified three-parameter logistic function with mixed effects is used for modeling earlywood and latewood...

  20. Ancestral haplotype-based association mapping with generalized linear mixed models accounting for stratification.

    PubMed

    Zhang, Z; Guillaume, F; Sartelet, A; Charlier, C; Georges, M; Farnir, F; Druet, T

    2012-10-01

    In many situations, genome-wide association studies are performed in populations presenting stratification. Mixed models including a kinship matrix accounting for genetic relatedness among individuals have been shown to correct for population and/or family structure. Here we extend this methodology to generalized linear mixed models, which properly model data under various distributions. In addition, we perform association with ancestral haplotypes inferred using a hidden Markov model. The method was shown to properly account for stratification under various simulated scenarios presenting population and/or family structure. Use of ancestral haplotypes resulted in higher power than SNPs on simulated datasets. Application to real data demonstrates the usefulness of the developed model. Full analysis of a dataset with 4600 individuals and 500 000 SNPs was performed in 2 h 36 min and required 2.28 Gb of RAM. The software GLASCOW can be freely downloaded from www.giga.ulg.ac.be/jcms/prod_381171/software. francois.guillaume@jouy.inra.fr Supplementary data are available at Bioinformatics online.

  1. Selection of latent variables for multiple mixed-outcome models

    PubMed Central

    ZHOU, LING; LIN, HUAZHEN; SONG, XINYUAN; LI, YI

    2014-01-01

    Latent variable models have been widely used for modeling the dependence structure of multiple outcomes data. However, the formulation of a latent variable model is often unknown a priori, the misspecification will distort the dependence structure and lead to unreliable model inference. Moreover, multiple outcomes with varying types present enormous analytical challenges. In this paper, we present a class of general latent variable models that can accommodate mixed types of outcomes. We propose a novel selection approach that simultaneously selects latent variables and estimates parameters. We show that the proposed estimator is consistent, asymptotically normal and has the oracle property. The practical utility of the methods is confirmed via simulations as well as an application to the analysis of the World Values Survey, a global research project that explores peoples’ values and beliefs and the social and personal characteristics that might influence them. PMID:27642219

  2. Global Asymptotic Behavior of Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that aside from exhibiting spurious asymptotes, all four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard non-LMM explicit methods. The simple iteration procedure exhibits behavior similar to standard non-LMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data using the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.
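To make the setting concrete, here is a minimal sketch of the simplest implicit LMM, backward Euler, with the per-step nonlinear equation solved by Newton iteration. The scalar ODE y' = y - y^3 is a toy example chosen for illustration, not one of the paper's three 2 x 2 model systems:

```python
def f(y):  return y - y**3          # right-hand side; steady states 0, +1, -1
def fp(y): return 1.0 - 3.0*y**2    # its derivative

def implicit_euler_newton(y0, h, nsteps, tol=1e-12):
    """Backward Euler: each step solves g(u) = u - y - h*f(u) = 0 by Newton."""
    y = y0
    for _ in range(nsteps):
        u = y  # initial Newton guess: the previous value
        for _ in range(50):
            g  = u - y - h * f(u)
            gp = 1.0 - h * fp(u)
            du = g / gp
            u -= du
            if abs(du) < tol:
                break
        y = u
    return y

# starting in the basin of the stable steady state y* = 1
y_end = implicit_euler_newton(0.5, h=0.5, nsteps=100)
```

Replacing the inner Newton loop with simple (fixed-point) iteration, or varying h and y0, is the kind of experiment whose global behavior the paper analyzes.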

  3. Dark matter and electroweak phase transition in the mixed scalar dark matter model

    NASA Astrophysics Data System (ADS)

    Liu, Xuewen; Bian, Ligong

    2018-03-01

    We study the electroweak phase transition in the framework of the scalar singlet-doublet mixed dark matter model, in which the particle dark matter candidate is the lightest neutral Higgs, which comprises the CP-even component of the inert doublet and a singlet scalar. The dark matter can be dominated by the inert doublet or singlet scalar depending on the mixing. We present several benchmark models to investigate the two situations after imposing several theoretical and experimental constraints. An additional singlet scalar and the inert doublet drive the electroweak phase transition to be strongly first order. A strong first-order electroweak phase transition and a viable dark matter candidate can be accomplished in two benchmark models simultaneously, for which a proper mass splitting among the neutral and charged Higgs masses is needed.

  4. Mixing methodology, nursing theory and research design for a practice model of district nursing advocacy.

    PubMed

    Reed, Frances M; Fitzgerald, Les; Rae, Melanie

    2016-01-01

    To highlight philosophical and theoretical considerations for planning a mixed methods research design that can inform a practice model to guide rural district nursing end of life care. Conceptual models of nursing in the community are general and lack guidance for rural district nursing care. A combination of pragmatism and nurse agency theory can provide a framework for ethical considerations in mixed methods research in the private world of rural district end of life care. Reflection on experience gathered in a two-stage qualitative research phase, involving rural district nurses who use advocacy successfully, can inform a quantitative phase for testing and complementing the data. Ongoing data analysis and integration result in generalisable inferences to achieve the research objective. Mixed methods research that creatively combines philosophical and theoretical elements to guide design in the particular ethical situation of community end of life care can be used to explore an emerging field of interest and test the findings for evidence to guide quality nursing practice. Combining philosophy and nursing theory to guide mixed methods research design increases the opportunity for sound research outcomes that can inform a nursing model of care.

  5. Modeling Bimolecular Reactive Transport With Mixing-Limitation: Theory and Application to Column Experiments

    NASA Astrophysics Data System (ADS)

    Ginn, T. R.

    2018-01-01

    The challenge of determining the mixing extent of solutions undergoing advective-dispersive-diffusive transport is well known. In particular, reaction extent between displacing and displaced solutes depends on mixing at the pore scale, which is generally finer than the continuum scale at which quantification relies on dispersive fluxes. Here a novel mobile-mobile mass transfer approach is developed to distinguish diffusive mixing from dispersive spreading in one-dimensional transport involving small-scale velocity variations with some correlation, such as occurs in hydrodynamic dispersion, in which short-range ballistic transports give rise to dispersed but not mixed segregation zones, termed here ballisticules. When considering transport of a single solution, this approach distinguishes self-diffusive mixing from spreading; in the case of displacement of one solution by another, each containing a participant reactant of an irreversible bimolecular reaction, it results in time-delayed diffusive mixing of reactants. The approach generates models for both kinetically controlled and equilibrium irreversible reaction cases, while honoring independently measured reaction rates and dispersivities. The mathematical solution for the equilibrium case is a simple analytical expression. The approach is applied to published experimental data on bimolecular reactions for homogeneous porous media under postasymptotic dispersive conditions with good results.

  6. MILP model for integrated balancing and sequencing mixed-model two-sided assembly line with variable launching interval and assignment restrictions

    NASA Astrophysics Data System (ADS)

    Azmi, N. I. L. Mohd; Ahmad, R.; Zainuddin, Z. M.

    2017-09-01

    This research explores the Mixed-Model Two-Sided Assembly Line (MMTSAL). There are two interrelated problems in MMTSAL: line balancing and model sequencing. In previous studies, many researchers considered these problems separately, and only a few studied them simultaneously, and then only for one-sided lines. In this study, the two problems are solved simultaneously to obtain a more efficient solution. A Mixed Integer Linear Programming (MILP) model with the objectives of minimizing total utility work and idle time is generated by considering a variable launching interval and assignment restriction constraints. The problem is analysed using small-size test cases to validate the integrated model. Throughout this paper, numerical experiments were conducted using the General Algebraic Modelling System (GAMS) with the solver CPLEX. Experimental results indicate that integrating the problems of model sequencing and line balancing helps to minimise the proposed objective functions.

  7. Dynamic Roughness Ratio-Based Framework for Modeling Mixed Mode of Droplet Evaporation.

    PubMed

    Gunjan, Madhu Ranjan; Raj, Rishi

    2017-07-18

    The spatiotemporal evolution of an evaporating sessile droplet and its effect on lifetime is crucial to various disciplines of science and technology. Although experimental investigations suggest three distinct modes through which a droplet evaporates, namely the constant contact radius (CCR), the constant contact angle (CCA), and the mixed mode, only the CCR and the CCA modes have been modeled reasonably. Here we use experiments with water droplets on flat and micropillared silicon substrates to characterize the mixed mode. We visualize that a perfect CCA mode after the initial CCR mode is an idealization on a flat silicon substrate, and the receding contact line undergoes intermittent but recurring pinning (CCR mode) as it encounters fresh contaminants on the surface. The resulting increase in roughness lowers the contact angle of the droplet during these intermittent CCR modes until the next depinning event, followed by the CCA mode of evaporation. The airborne contaminants in our experiments are mostly loosely adhered to the surface and travel along with the receding contact line. The resulting gradual increase in the apparent roughness, and hence the extent of the CCR mode over the CCA mode, forces an appreciable decrease in the contact angle observed during the mixed mode of evaporation. Unlike loosely adhered airborne contaminants on flat samples, micropillars act as fixed roughness features. The apparent roughness fluctuates about the mean value as the contact line recedes between pillars. Evaporation on these surfaces exhibits stick-jump motion with a short-duration mixed mode toward the end when the droplet size becomes comparable to the pillar spacing. We incorporate this dynamic roughness into a classical evaporation model to accurately predict the droplet evolution throughout the three modes, for both flat and micropillared silicon surfaces. We believe that this framework can also be extended to model the evaporation of nanofluids and the coffee-ring effect, among

  8. Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.

    PubMed

    Weaver, Bruce; Black, Ryan A

    2015-06-01

    Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.
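The substance of the disagreement, separate residual variances per condition, can be illustrated outside SPSS. Below is a minimal maximum-likelihood sketch in Python (a hypothetical two-condition example with no random effects, not Hoffman and Rovine's models 2B/3B) showing that allowing each group its own error variance recovers distinct variance estimates:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# two conditions sharing a mean but with different error variances
y1 = rng.normal(5.0, 1.0, size=200)   # condition A: sd = 1
y2 = rng.normal(5.0, 3.0, size=200)   # condition B: sd = 3

def negloglik(theta):
    """Negative log-likelihood with a common mean and group-specific
    residual standard deviations (parameterized on the log scale)."""
    mu, logs1, logs2 = theta
    s1, s2 = np.exp(logs1), np.exp(logs2)
    ll  = np.sum(-0.5*np.log(2*np.pi*s1**2) - 0.5*((y1 - mu)/s1)**2)
    ll += np.sum(-0.5*np.log(2*np.pi*s2**2) - 0.5*((y2 - mu)/s2)**2)
    return -ll

x0 = [np.mean(np.concatenate([y1, y2])), 0.0, 0.0]
fit = minimize(negloglik, x0, method="BFGS")
mu_hat  = fit.x[0]
sd1_hat = np.exp(fit.x[1])
sd2_hat = np.exp(fit.x[2])
```

Constraining logs1 = logs2 recovers the homogeneous-variance model, and the likelihood-ratio between the two fits tests the heterogeneity.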

  9. Diesel engine emissions and combustion predictions using advanced mixing models applicable to fuel sprays

    NASA Astrophysics Data System (ADS)

    Abani, Neerav; Reitz, Rolf D.

    2010-09-01

    An advanced mixing model was applied to study engine emissions and combustion with different injection strategies ranging from multiple injections, early injection and grouped-hole nozzle injection in light- and heavy-duty diesel engines. The model was implemented in the KIVA-CHEMKIN engine combustion code and simulations were conducted at different mesh resolutions. The model was compared with the standard KIVA spray model that uses the Lagrangian-Drop and Eulerian-Fluid (LDEF) approach, and a Gas Jet spray model that improves predictions of liquid sprays. A Vapor Particle Method (VPM) is introduced that accounts for sub-grid scale mixing of fuel vapor and more accurately predicts the mixing of fuel vapor over a range of mesh resolutions. The fuel vapor is transported as particles until a certain distance from the nozzle is reached where the local jet half-width is adequately resolved by the local mesh scale. Within this distance the vapor particle is transported while releasing fuel vapor locally, as determined by a weighting factor. The VPM model more accurately predicts fuel-vapor penetrations for early cycle injections and flame lift-off lengths for late cycle injections. Engine combustion computations show that, compared to the standard KIVA and Gas Jet spray models, the VPM spray model improves predictions of in-cylinder pressure, heat release rate and engine emissions of NOx, CO and soot with coarse mesh resolutions. The VPM spray model is thus a good tool for efficiently investigating diesel engine combustion with practical mesh resolutions, thereby saving computer time.

  10. A Stochastic Mixing Model for Predicting Emissions in a Direct Injection Diesel Engine.

    DTIC Science & Technology

    1986-09-01

    of chemical reactors. The fundamental concept of these models is coalescence/dispersion micromixing. [1] Details of this method are provided in Appen... Togby, A.H., "Monte Carlo Methods of Simulating Micromixing in Chemical Reactors", Chemical Engineering Science, Vol. 27, p. 1497, 1972. 46. Kattan, A... on a molecular level. 2. Micromixing or stream mixing refers to the mixing of particles on a molecular level. Until the coalescence and dispersion

  11. Mixing and solid-liquid mass-transfer rates in a creusot-loire uddeholm vessel: A water model case study

    NASA Astrophysics Data System (ADS)

    Nyoka, M.; Akdogan, G.; Eric, R. H.; Sutcliffe, N.

    2003-12-01

    The process of mixing and solid-liquid mass transfer in a one-fifth scale water model of a 100-ton Creusot-Loire Uddeholm (CLU) converter was investigated. The modified Froude number was used to relate gas flow rates between the model and its prototype. The influences of gas flow rate between 0.010 and 0.018 m3/s and bath height from 0.50 to 0.70 m on mixing time were examined. The results indicated that mixing time decreased with increasing gas flow rate and increased with increasing bath height. The mixing time results were evaluated in terms of specific energy input and the following correlation was proposed for estimating mixing times in the model CLU converter: Tmix = 1.08Q^(-1.05)W^(0.35), where Q (m3/s) is the gas flow rate and W (tons) is the model bath weight. Solid-liquid mass-transfer rates from benzoic acid specimens immersed in the gas-agitated liquid phase were assessed by a weight-loss measurement technique. The calculated mass-transfer coefficients were highest at the bath surface, reaching a value of 6.40 × 10^(-5) m/s in the sprout region. Mass-transfer coefficients and turbulence parameters decreased with depth, reaching minimum values at the bottom of the vessel.
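The proposed correlation is easy to apply directly. In the sketch below, the bath weight W = 0.8 t is a hypothetical value for the one-fifth scale model, chosen only for illustration:

```python
def mixing_time(Q, W):
    """Mixing time in seconds from the proposed water-model correlation
    Tmix = 1.08 * Q**-1.05 * W**0.35, with Q in m^3/s and W in tons."""
    return 1.08 * Q**-1.05 * W**0.35

# gas flow rates at the ends of the studied range (0.010-0.018 m^3/s),
# with an assumed model bath weight of 0.8 t
t_low_flow  = mixing_time(0.010, 0.8)
t_high_flow = mixing_time(0.018, 0.8)
```

Consistent with the reported trend, the higher gas flow rate yields the shorter mixing time.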

  12. An Efficient Alternative Mixed Randomized Response Procedure

    ERIC Educational Resources Information Center

    Singh, Housila P.; Tarray, Tanveer A.

    2015-01-01

    In this article, we have suggested a new modified mixed randomized response (RR) model and studied its properties. It is shown that the proposed mixed RR model is always more efficient than Kim and Warde's mixed RR model. The proposed mixed RR model has also been extended to stratified sampling. Numerical illustrations and graphical…

  13. Influence of an urban canopy model and PBL schemes on vertical mixing for air quality modeling over Greater Paris

    NASA Astrophysics Data System (ADS)

    Kim, Youngseob; Sartelet, Karine; Raut, Jean-Christophe; Chazette, Patrick

    2015-04-01

    Impacts of meteorological modeling in the planetary boundary layer (PBL) and of an urban canopy model (UCM) on the vertical mixing of pollutants are studied. Concentrations of gaseous chemical species, including ozone (O3) and nitrogen dioxide (NO2), and particulate matter over Paris and the near suburbs are simulated using the 3-dimensional chemistry-transport model Polair3D of the Polyphemus platform. Simulated concentrations of O3, NO2 and PM10/PM2.5 (particulate matter of aerodynamic diameter lower than 10 μm/2.5 μm, respectively) are first evaluated using ground measurements. Higher surface concentrations are obtained for PM10, PM2.5 and NO2 with the MYNN PBL scheme than with the YSU PBL scheme because of lower PBL heights in the MYNN scheme. Differences between simulations using different PBL schemes are lower than differences between simulations with and without the UCM and the Corine land-use over urban areas. Regarding the root mean square error, the simulations using the UCM and the Corine land-use tend to perform better than the simulations without them. At urban stations, the PM10 and PM2.5 concentrations are over-estimated, and the over-estimation is reduced using the UCM and the Corine land-use. The ability of the model to reproduce vertical mixing is evaluated using NO2 measurement data at the upper-air observation station of the Eiffel Tower, and measurement data at a ground station near the Eiffel Tower. Although NO2 is under-estimated in all simulations, vertical mixing is greatly improved when using the UCM and the Corine land-use. Comparisons of the modeled PM10 vertical distributions to distributions deduced from surface and mobile lidar measurements are performed. The use of the UCM and the Corine land-use is crucial to accurately model PM10 concentrations during nighttime in the center of Paris. In the nocturnal stable boundary layer, PM10 is relatively well modeled, although it is over-estimated on 24 May and under-estimated on 25 May. However, PM10 is

  14. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). 
Problems with
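As a hedged sketch of the broken-line linear (BLL) ascending fit and BIC comparison described above (with made-up illustrative data, not the study's nursery-pig G:F measurements, and omitting the mixed-model and heteroskedastic machinery), one might write:

```python
import numpy as np
from scipy.optimize import curve_fit

def bll(x, plateau, slope, brk):
    """Broken-line linear ascending: rises with `slope` below the
    breakpoint `brk`, then stays flat at `plateau`."""
    return np.where(x < brk, plateau - slope * (brk - x), plateau)

# hypothetical SID Trp:Lys ratios (%) and G:F responses
x = np.array([14.0, 15.0, 16.0, 16.5, 17.0, 18.0, 19.0])
y = np.array([0.575, 0.625, 0.655, 0.682, 0.679, 0.681, 0.678])

popt, _ = curve_fit(bll, x, y, p0=[0.68, 0.04, 16.5])
rss = np.sum((y - bll(x, *popt)) ** 2)
n, k = len(y), 3
bic = n * np.log(rss / n) + k * np.log(n)   # Gaussian BIC, up to an additive constant
```

Fitting the competing quadratic polynomial the same way and comparing BIC values mirrors the model-selection step in the abstract; the fitted `popt[2]` is the breakpoint estimate.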

  15. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to the regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
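A minimal sketch of the bivariate normal (Gaussian) copula density that links two fitted marginals, assuming a correlation parameter rho (this is the generic copula formula, not the authors' fitted diameter-height model):

```python
import numpy as np
from scipy import stats

def gaussian_copula_logpdf(u, v, rho):
    """Log-density of a bivariate Gaussian copula with correlation rho,
    evaluated at uniform marginals u, v in (0, 1)."""
    x, y = stats.norm.ppf(u), stats.norm.ppf(v)   # map to standard normal scale
    det = 1.0 - rho ** 2
    return (-0.5 * np.log(det)
            - (rho ** 2 * (x ** 2 + y ** 2) - 2.0 * rho * x * y) / (2.0 * det))

# joint log-density = copula log-density at the marginal CDF values
# plus the two marginal log-densities (marginals omitted here)
c = gaussian_copula_logpdf(0.3, 0.7, rho=0.8)
```

With rho = 0 the copula density is identically 1, recovering independence of the two marginals.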

  16. Effects of Precipitation on Ocean Mixed-Layer Temperature and Salinity as Simulated in a 2-D Coupled Ocean-Cloud Resolving Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.

    1999-01-01

    A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective-scale ocean disturbances induced by atmospheric precipitation on ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing in terms of vertical velocity derived from the TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models could be as large as 0.3 PSU and 0.4 C, respectively. Without fresh-water effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates, so that the nocturnal mixed-layer temperatures tend to be horizontally uniform. The fresh-water flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so that the mixed-layer temperatures have large horizontal fluctuations. Furthermore, fresh-water flux exhibits larger spatial fluctuations than surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model area yield high spatial correlation of surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.

  17. Advantages and pitfalls in the application of mixed-model association methods.

    PubMed

    Yang, Jian; Zaitlen, Noah A; Goddard, Michael E; Visscher, Peter M; Price, Alkes L

    2014-02-01

    Mixed linear models are emerging as a method of choice for conducting genetic association studies in humans and other organisms. The advantages of the mixed-linear-model association (MLMA) method include the prevention of false positive associations due to population or relatedness structure and an increase in power obtained through the application of a correction that is specific to this structure. An underappreciated point is that MLMA can also increase power in studies without sample structure by implicitly conditioning on associated loci other than the candidate locus. Numerous variations on the standard MLMA approach have recently been published, with a focus on reducing computational cost. These advances provide researchers applying MLMA methods with many options to choose from, but we caution that MLMA methods are still subject to potential pitfalls. Here we describe and quantify the advantages and pitfalls of MLMA methods as a function of study design and provide recommendations for the application of these methods in practical settings.

  18. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  19. Continuous synthesis of drug-loaded nanoparticles using microchannel emulsification and numerical modeling: effect of passive mixing

    PubMed Central

    Ortiz de Solorzano, Isabel; Uson, Laura; Larrea, Ane; Miana, Mario; Sebastian, Victor; Arruebo, Manuel

    2016-01-01

    By using interdigital microfluidic reactors, monodisperse poly(d,l lactic-co-glycolic acid) nanoparticles (NPs) can be produced in a continuous manner and at a large scale (~10 g/h). An optimized synthesis protocol was obtained by selecting the appropriate passive mixer and fluid flow conditions to produce monodisperse NPs. A reduced NP polydispersity was obtained when using the microfluidic platform compared with that of NPs produced in a conventional discontinuous batch reactor. Cyclosporin, an immunosuppressant drug, was used as a model to validate the efficiency of the microfluidic platform to produce drug-loaded monodisperse poly(d,l lactic-co-glycolic acid) NPs. The influence of mixer geometries and temperatures was analyzed, and the experimental results were corroborated by using computational fluid dynamic three-dimensional simulations. Flow patterns, mixing times, and mixing efficiencies were calculated, and the model was supported by experimental results. The progress of mixing in the interdigital mixer was quantified by using the volume fractions of the organic and aqueous phases used during the emulsification–evaporation process. The developed model and methods were applied to determine the time required to achieve complete mixing in each microreactor at different fluid flow conditions, temperatures, and mixing rates. PMID:27524896

  1. Using Mixed-Effects Structural Equation Models to Study Student Academic Development.

    ERIC Educational Resources Information Center

    Pike, Gary R.

    1992-01-01

    A study at the University of Tennessee Knoxville used mixed-effect structural equation models incorporating latent variables as an alternative to conventional methods of analyzing college students' (n=722) first-year-to-senior academic gains. Results indicate, contrary to previous analysis, that coursework and student characteristics interact to…

  2. Mixed-effects Gaussian process functional regression models with application to dose-response curve prediction.

    PubMed

    Shi, J Q; Wang, B; Will, E J; West, R M

    2012-11-20

    We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime. Copyright © 2012 John Wiley & Sons, Ltd.
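The nonparametric half of the model described above can be illustrated with plain Gaussian process regression: the posterior mean is K(x*, x) [K(x, x) + σ²I]⁻¹ y under a squared-exponential kernel. This sketch shows only that GP component (not the combined mixed-effects model), with assumed hyperparameters and a synthetic dose-response stand-in:

```python
# GP regression posterior mean with a squared-exponential kernel.
import numpy as np

def sq_exp(a, b, ell=1.0, sf=1.0):
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(6)
x = np.linspace(0.0, 5.0, 20)
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)   # noisy curve stand-in
xs = np.linspace(0.0, 5.0, 100)                # prediction grid

K = sq_exp(x, x) + 0.1**2 * np.eye(x.size)     # kernel + noise variance
mu = sq_exp(xs, x) @ np.linalg.solve(K, y)     # posterior mean at xs
print(np.max(np.abs(mu - np.sin(xs))))         # small fit error
```

In the paper's setting, the parametric mixed-effects part supplies the mean function and borrows strength across subjects; the GP adds subject-specific nonlinearity of the kind fitted here.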

  3. Making a mixed-model line more efficient and flexible by introducing a bypass line

    NASA Astrophysics Data System (ADS)

    Matsuura, Sho; Matsuura, Haruki; Asada, Akiko

    2017-04-01

    This paper provides a design procedure for the bypass subline in a mixed-model assembly line. The bypass subline is installed to reduce the effect of the large difference in operation times among products assembled together in a mixed-model line. The importance of the bypass subline has been increasing in association with the rising necessity for efficiency and flexibility in modern manufacturing. The main topics of this paper are as follows: 1) the conditions in which the bypass subline effectively functions, and 2) how the load should be distributed between the main line and the bypass subline, depending on production conditions such as degree of difference in operation times among products and the mixing ratio of products. To address these issues, we analyzed the lower and the upper bounds of the line length. Based on the results, a design procedure and a numerical example are demonstrated.

  4. Computer modeling movement of biomass in the bioreactors with bubbling mixing

    NASA Astrophysics Data System (ADS)

    Kuschev, L. A.; Suslov, D. Yu; Alifanova, A. I.

    2017-01-01

    Biogas technologies, used to convert organic waste from agricultural enterprises and thereby improve the environment, have recently been developing in the Russian Federation. To intensify the process and improve biogas yields, bubbling mixing systems are applied. During bubbling mixing of biomass in the bioreactor, two-phase portions consisting of biomass and gas bubbles are formed. A computer model of a bioreactor was built with a bubble pipeline shaped as a vertical spiral forming an inverted cone. Using the OpenFVM-Flow computing program, a numerical experiment was conducted to determine the key technological parameters of the bubbling mixing process and to obtain a visual picture of the biomass flow distribution in the bioreactor. For the experimental bioreactor (V = 190 l), the biomass circulation velocity, gas flow rate, and duration of a single mixing cycle were determined: u_ax = 0.029 m/s, Q_C = 0.00087 m³/s, Δt_bm = 159 s. In future work, a series of theoretical and experimental studies is planned on the influence of mixing frequency on the effectiveness of biogas production.

  5. BAYESIAN PARAMETER ESTIMATION IN A MIXED-ORDER MODEL OF BOD DECAY. (U915590)

    EPA Science Inventory

    We describe a generalized version of the BOD decay model in which the reaction is allowed to assume an order other than one. This is accomplished by making the exponent on BOD concentration a free parameter to be determined by the data. This "mixed-order" model may be ...
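The "mixed-order" decay described above, dL/dt = -k·Lⁿ with the order n left free, has the closed form L(t) = (L₀^(1-n) - (1-n)·k·t)^(1/(1-n)) for n ≠ 1. The abstract uses Bayesian estimation; this sketch fits the same model by ordinary nonlinear least squares on synthetic data, with all values illustrative:

```python
# Fit a mixed-order BOD decay curve, treating the order n as free.
import numpy as np
from scipy.optimize import curve_fit

def bod_remaining(t, L0, k, n):
    base = L0 ** (1.0 - n) - (1.0 - n) * k * t
    return np.clip(base, 1e-12, None) ** (1.0 / (1.0 - n))

t = np.linspace(0.0, 10.0, 40)
true = (20.0, 0.3, 1.5)                        # L0, k, n (assumed)
rng = np.random.default_rng(1)
obs = bod_remaining(t, *true) + rng.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(bod_remaining, t, obs, p0=(15.0, 0.2, 1.2))
print(popt)   # recovered (L0, k, n) close to (20.0, 0.3, 1.5)
```

Setting n = 1 recovers the classical first-order BOD model as a special case, which is exactly the restriction the generalized model relaxes.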

  6. Chandra Observations and Models of the Mixed Morphology Supernova Remnant W44: Global Trends

    NASA Technical Reports Server (NTRS)

    Shelton, R. L.; Kuntz, K. D.; Petre, R.

    2004-01-01

    We report on the Chandra observations of the archetypical mixed morphology (or thermal composite) supernova remnant, W44. As with other mixed morphology remnants, W44's projected center is bright in thermal X-rays. It has an obvious radio shell, but no discernible X-ray shell. In addition, X-ray bright knots dot W44's image. The spectral analysis of the Chandra data shows that the remnant's hot, bright projected center is metal-rich and that the bright knots are regions of comparatively elevated elemental abundances. Neon is among the affected elements, suggesting that ejecta contribute to the abundance trends. Furthermore, some of the emitting iron atoms appear to be underionized with respect to the other ions, providing the first potential X-ray evidence for dust destruction in a supernova remnant. We use the Chandra data to test the following explanations for W44's X-ray bright center: 1) entropy mixing due to bulk mixing or thermal conduction, 2) evaporation of swept-up clouds, and 3) a metallicity gradient, possibly due to dust destruction and ejecta enrichment. In these tests, we assume that the remnant has evolved beyond the adiabatic evolutionary stage, which explains the X-ray dimness of the shell. The entropy mixed model spectrum was tested against the Chandra spectrum for the remnant's projected center and found to be a good match. The evaporating clouds model was constrained by the finding that the ionization parameters of the bright knots are similar to those of the surrounding regions. While both the entropy mixed and the evaporating clouds models are known to predict centrally bright X-ray morphologies, their predictions fall short of the observed brightness gradient. The resulting brightness gap can be largely filled in by emission from the extra metals in and near the remnant's projected center. The preponderance of evidence (including that drawn from other studies) suggests that W44's remarkable morphology can be attributed to dust destruction.

  7. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  8. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    PubMed

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.

  9. Statistical quality assessment criteria for a linear mixing model with elliptical t-distribution errors

    NASA Astrophysics Data System (ADS)

    Manolakis, Dimitris G.

    2004-10-01

    The linear mixing model is widely used in hyperspectral imaging applications to model the reflectance spectra of mixed pixels in the SWIR atmospheric window or the radiance spectra of plume gases in the LWIR atmospheric window. In both cases it is important to detect the presence of materials or gases and then estimate their amount, if they are present. The detection and estimation algorithms available for these tasks are related but not identical. The objective of this paper is to theoretically investigate how the heavy tails observed in hyperspectral background data affect the quality of abundance estimates, and how robust the F-test used for endmember selection is to the presence of heavy tails when the model fits the data.
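The linear mixing model above writes a mixed-pixel spectrum as x = M·a + e, with endmember spectra in the columns of M and abundances a. A minimal unconstrained least-squares unmixing sketch (real pipelines add non-negativity and sum-to-one constraints, and the paper's point is precisely that heavy-tailed, non-Gaussian e degrades such estimates):

```python
# Unconstrained least-squares abundance estimation, x = M a + e.
import numpy as np

rng = np.random.default_rng(2)
bands, p = 50, 3
M = rng.uniform(0.0, 1.0, (bands, p))          # endmember spectra (assumed)
a_true = np.array([0.6, 0.3, 0.1])             # abundances
x = M @ a_true + rng.normal(0.0, 0.01, bands)  # mixed-pixel spectrum

a_hat, *_ = np.linalg.lstsq(M, x, rcond=None)
print(a_hat)   # close to [0.6, 0.3, 0.1]
```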

  10. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  11. Bias and uncertainty of δ13CO2 isotopic mixing models

    Treesearch

    Zachary E. Kayler; Lisa Ganio; Mark Hauck; Thomas G. Pypker; Elizabeth W. Sulzman; Alan C. Mix; Barbara J. Bond

    2009-01-01

    The goal of this study was to evaluate how factorial combinations of two mixing models and two regression approaches (Keeling-OLS, Miller-Tans-OLS, Keeling-GMR, Miller-Tans-GMR) compare in small [CO2] range versus large [CO2] range regimes, with different combinations of...
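The two regression approaches compared above can be sketched on a Keeling plot: regressing measured δ¹³C against 1/[CO2] gives the source signature as the intercept, estimated either by ordinary least squares (OLS) or by geometric mean regression (GMR). Data and endmember values below are synthetic and illustrative:

```python
# Keeling-plot intercept by OLS vs geometric mean regression (GMR).
import numpy as np

rng = np.random.default_rng(3)
delta_source, delta_bg, c_bg = -26.0, -8.0, 380.0
c = rng.uniform(420.0, 700.0, 25)              # mixed [CO2], ppm
# two-member mixing: delta = delta_source + c_bg*(delta_bg - delta_source)/c
delta = delta_source + c_bg * (delta_bg - delta_source) / c
delta += rng.normal(0.0, 0.1, c.size)          # measurement noise

x, y = 1.0 / c, delta
slope_ols, icpt_ols = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
slope_gmr = np.sign(r) * np.std(y) / np.std(x) # GMR slope
icpt_gmr = y.mean() - slope_gmr * x.mean()
print(icpt_ols, icpt_gmr)   # both near the source value of -26.0
```

The study's question is which estimator is less biased when the sampled [CO2] range (and hence the spread in 1/[CO2]) is small; the two intercepts diverge as that spread shrinks relative to the measurement noise.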

  12. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
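The two-stage idea above can be shown in its simplest linear, single-subject form (the paper studies nonlinear ODEs with mixed effects; FDA/GLLA/GOLD are refinements of the crude derivative step used here): stage 1 estimates derivatives from the observed trajectory, stage 2 regresses x″ on (x, x′) to recover damped-oscillator parameters. All values are illustrative:

```python
# Two-stage fit of x'' = eta*x + zeta*x' from a simulated trajectory.
import numpy as np

eta, zeta = -4.0, -0.5
dt = 0.01
t = np.arange(0.0, 10.0, dt)
# simulate with semi-implicit Euler for stability
x, v = np.empty(t.size), np.empty(t.size)
x[0], v[0] = 1.0, 0.0
for i in range(t.size - 1):
    v[i + 1] = v[i] + dt * (eta * x[i] + zeta * v[i])
    x[i + 1] = x[i] + dt * v[i + 1]

# stage 1: numerical derivative estimates
dx = np.gradient(x, dt)
d2x = np.gradient(dx, dt)
# stage 2: least-squares regression of x'' on (x, x')
A = np.column_stack([x, dx])
(eta_hat, zeta_hat), *_ = np.linalg.lstsq(A, d2x, rcond=None)
print(eta_hat, zeta_hat)   # near (-4.0, -0.5)
```

In the mixed-effects versions compared in the paper, stage 2 fits this regression across many subjects with person-specific random parameters rather than a single least-squares solve.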

  13. A Turbulence model taking into account the longitudinal flow inhomogeneity in mixing layers and jets

    NASA Astrophysics Data System (ADS)

    Troshin, A. I.

    2017-06-01

    The problem of potential core length overestimation of subsonic free jets by Reynolds-averaged Navier-Stokes (RANS) based turbulence models is addressed. It is shown that the issue is due to incorrect velocity profile modeling of the jet mixing layers. An additional source term in the ω equation is proposed which takes into account the effect of longitudinal flow inhomogeneity on turbulence in mixing layers. Computations confirm that the modified Speziale-Sarkar-Gatski/Launder-Reece-Rodi-omega (SSG/LRR-ω) turbulence model correctly predicts the mean velocity profiles in both the initial and far-field regions of a subsonic free plane jet as well as the centerline velocity decay rate.

  14. Modeling of surface temperature effects on mixed material migration in NSTX-U

    NASA Astrophysics Data System (ADS)

    Nichols, J. H.; Jaworski, M. A.; Schmid, K.

    2016-10-01

    NSTX-U will initially operate with graphite walls, periodically coated with thin lithium films to improve plasma performance. However, the spatial and temporal evolution of these films during and after plasma exposure is poorly understood. The WallDYN global mixed-material surface evolution model has recently been applied to the NSTX-U geometry to simulate the evolution of poloidally inhomogeneous mixed C/Li/O plasma-facing surfaces. The WallDYN model couples local erosion and deposition processes with plasma impurity transport in a non-iterative, self-consistent manner that maintains overall material balance. Temperature-dependent sputtering of lithium has been added to WallDYN, utilizing an adatom sputtering model developed from test stand experimental data. Additionally, a simplified temperature-dependent diffusion model has been added to WallDYN so as to capture the intercalation of lithium into a graphite bulk matrix. The sensitivity of global lithium migration patterns to changes in surface temperature magnitude and distribution will be examined. The effect of intra-discharge increases in surface temperature due to plasma heating, such as those observed during NSTX Liquid Lithium Divertor experiments, will also be examined. Work supported by US DOE contract DE-AC02-09CH11466.

  15. Improving the mixing performance of side channel type micromixers using an optimal voltage control model.

    PubMed

    Wu, Chien-Hsien; Yang, Ruey-Jen

    2006-06-01

    Electroosmotic flow in microchannels is restricted to low Reynolds number regimes. Since the inertia forces are extremely weak in such regimes, turbulent conditions do not readily develop, and hence species mixing occurs primarily as a result of diffusion. Consequently, achieving a thorough species mixing generally relies upon the use of extended mixing channels. This paper aims to improve the mixing performance of conventional side channel type micromixers by specifying the optimal driving voltages to be applied to each channel. In the proposed approach, the driving voltages are identified by constructing a simple theoretical scheme based on a 'flow-rate-ratio' model and Kirchhoff's law. The numerical and experimental results confirm that the optimal voltage control approach provides a better mixing performance than the use of a single driving voltage gradient.
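The Kirchhoff-style reasoning mentioned above can be sketched as a resistor-network analogy: each channel's electroosmotic flow rate is proportional to the voltage drop across it divided by its hydraulic/electrical resistance, and flow conservation at the junction fixes the junction potential, hence the flow-rate ratio. The geometry and numbers below are assumptions, not the paper's device:

```python
# Resistor-network analogy for a side-channel micromixer:
# two side inlets and one main inlet converge to a grounded outlet.
R = {"main": 1.0, "side1": 2.0, "side2": 2.0, "out": 1.0}  # relative resistances
V = {"main": 100.0, "side1": 80.0, "side2": 80.0}          # driving voltages

# Kirchhoff at the junction (outlet reservoir at 0 V):
#   sum_i (V_i - Vj)/R_i = Vj/R_out  ->  linear in Vj
num = sum(V[c] / R[c] for c in V)
den = sum(1.0 / R[c] for c in V) + 1.0 / R["out"]
Vj = num / den
flows = {c: (V[c] - Vj) / R[c] for c in V}                 # inlet flow rates
print(Vj, flows)
```

Choosing the inlet voltages to hit a target flow-rate ratio, rather than driving everything with one gradient, is the optimization the abstract describes.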

  16. Geochemical modeling of magma mixing and magma reservoir volumes during early episodes of Kīlauea Volcano's Pu`u `Ō`ō eruption

    NASA Astrophysics Data System (ADS)

    Shamberger, Patrick J.; Garcia, Michael O.

    2007-02-01

    Geochemical modeling of magma mixing allows for evaluation of volumes of magma storage reservoirs and magma plumbing configurations. A new analytical expression is derived for a simple two-component box-mixing model describing the proportions of mixing components in erupted lavas as a function of time. Four versions of this model are applied to a mixing trend spanning episodes 3-31 of Kīlauea Volcano's Pu`u `Ō`ō eruption, each testing different constraints on magma reservoir input and output fluxes. Unknown parameters (e.g., magma reservoir influx rate, initial reservoir volume) are optimized for each model using a non-linear least squares technique to fit model trends to geochemical time-series data. The modeled mixing trend closely reproduces the observed compositional trend. The two models that match measured lava effusion rates have constant magma input and output fluxes and suggest a large pre-mixing magma reservoir (46±2 and 49±1 million m³), with little or no volume change over time. This volume is much larger than a previous estimate for the shallow, dike-shaped magma reservoir under the Pu`u `Ō`ō vent, which grew from ~3 to ~10-12 million m³.
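One limiting case of the two-component box model described above can be written in closed form: a constant-volume, well-mixed reservoir (volume V) with equal input and output flux Q of the new magma obeys df/dt = (Q/V)(1 - f), so the erupted fraction of the new component is f(t) = 1 - exp(-Qt/V). The values below are illustrative assumptions (V is only of the order of the paper's estimate, Q is invented):

```python
# Well-mixed constant-volume box model for a two-component mixing trend.
import numpy as np

V = 48e6       # reservoir volume, m^3 (order of the paper's estimate)
Q = 0.2e6      # input/output flux, m^3/day (assumed)
t = np.linspace(0.0, 400.0, 9)          # days
f = 1.0 - np.exp(-Q * t / V)            # fraction of new component erupted
for ti, fi in zip(t, f):
    print(f"day {ti:5.0f}: new-component fraction {fi:.2f}")
```

Fitting curves of this family to a geochemical time series is what constrains Q and V jointly; a small reservoir mixes through quickly, so a slow compositional transition implies a large V.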

  17. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient

  19. GUT and flavor models for neutrino masses and mixing

    NASA Astrophysics Data System (ADS)

    Meloni, Davide

    2017-10-01

    In recent years, experiments have established the existence of neutrino oscillations, and most of the oscillation parameters have been measured with good accuracy. However, in spite of many interesting ideas, no real light has been shed on the problem of flavor in the lepton sector. In this review, we discuss the state of the art of models for neutrino masses and mixings formulated in the context of flavor symmetries, with particular emphasis on the role played by grand unified gauge groups.
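The lepton mixing such models must reproduce is encoded in the PMNS matrix, conventionally built from three rotations (and a CP phase, set to zero in this sketch). The angles below are round illustrative values in the neighborhood of current global fits, not results from the review:

```python
# Build a real (CP-conserving) PMNS-style mixing matrix U = R23 R13 R12.
import numpy as np

def rot(i, j, theta, n=3):
    """Rotation by theta in the (i, j) plane of an n-dim space."""
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = s, -s
    return R

t12, t23, t13 = np.radians([33.4, 49.0, 8.6])   # illustrative angles
U = rot(1, 2, t23) @ rot(0, 2, t13) @ rot(0, 1, t12)
print(np.round(U, 3))   # rows: e, mu, tau; columns: mass states 1, 2, 3
```

Flavor-symmetry models of the kind reviewed above aim to predict the texture of U (e.g. near-maximal θ23, small θ13) from group theory rather than fit it.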

  20. Groundwater contamination from an inactive uranium mill tailings pile. 2. Application of a dynamic mixing model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narasimhan, T.N.; White, A.F.; Tokunaga, T.

    1986-12-01

    At Riverton, Wyoming, low pH process waters from an abandoned uranium mill tailings pile have been infiltrating into and contaminating the shallow water table aquifer. The contamination process has been governed by transient infiltration rates, saturated-unsaturated flow, as well as transient chemical reactions between the many chemical species present in the mixing waters and the sediments. In the first part of this two-part series the authors presented field data as well as an interpretation based on a static mixing model. As an upper bound, the authors estimated that 1.7% of the tailings water had mixed with the native groundwater. In the present work they present the results of a numerical investigation of the dynamic mixing process. The model, DYNAMIX (DYNamic MIXing), couples a chemical speciation algorithm, PHREEQE, with a modified form of the transport algorithm, TRUMP, specifically designed to handle the simultaneous migration of several chemical constituents. The overall problem of simulating the evolution and migration of the contaminant plume was divided into three subproblems that were solved in sequential stages: the infiltration problem, the reactive mixing problem, and the plume-migration problem. The results of the application agree reasonably with the detailed field data. The methodology developed in the present study demonstrates the feasibility of analyzing the evolution of natural hydrogeochemical systems through a coupled analysis of transient fluid flow as well as chemical reactions. It seems worthwhile to devote further effort toward improving the physicochemical capabilities of the model as well as to enhance its computational efficiency.
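The "static mixing" upper bound mentioned above amounts to a two-endmember mass balance on a conservative tracer: f = (C_obs - C_native) / (C_tailings - C_native). The concentrations below are hypothetical stand-ins, not the Riverton data:

```python
# Two-endmember static mixing fraction from a conservative tracer.
def mixing_fraction(c_obs, c_native, c_tailings):
    """Fraction of tailings water in a sample, by linear mass balance."""
    return (c_obs - c_native) / (c_tailings - c_native)

f = mixing_fraction(c_obs=55.0, c_native=21.0, c_tailings=2021.0)
print(f"{f:.3%}")   # 1.700% for these illustrative concentrations
```

The dynamic DYNAMIX model replaces this single algebraic balance with coupled transient flow, transport, and speciation, which is why it can reproduce the plume's evolution rather than just bound its extent.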

  2. Neutrino mixing in a left-right model

    NASA Astrophysics Data System (ADS)

    Martins Simões, J. A.; Ponciano, J. A.

    We study the mixing among different generations of massive neutrino fields in a model that can accommodate a consistent pattern for neutral fermion masses as well as neutrino oscillations. The left and right sectors can be connected by a new neutral current. PACS: 12.60.-i, 14.60.St, 14.60.Pq

  3. Assessment of RANS and LES Turbulence Modeling for Buoyancy-Aided/Opposed Forced and Mixed Convection

    NASA Astrophysics Data System (ADS)

    Clifford, Corey; Kimber, Mark

    2017-11-01

    Over the last 30 years, an industry-wide shift within the nuclear community has led to increased utilization of computational fluid dynamics (CFD) to supplement nuclear reactor safety analyses. One such area that is of particular interest to the nuclear community, specifically to those performing loss-of-flow accident (LOFA) analyses for next-generation very-high-temperature reactors (VHTRs), is the capacity of current computational models to predict heat transfer across a wide range of buoyancy conditions. In the present investigation, a critical evaluation of Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) turbulence modeling techniques is conducted based on CFD validation data collected from the Rotatable Buoyancy Tunnel (RoBuT) at Utah State University. Four different experimental flow conditions are investigated: (1) buoyancy-aided forced convection; (2) buoyancy-opposed forced convection; (3) buoyancy-aided mixed convection; (4) buoyancy-opposed mixed convection. Overall, good agreement is found for both forced convection-dominated scenarios, but an overly diffusive prediction of the normal Reynolds stress is observed for the RANS-based turbulence models. Low-Reynolds-number RANS models perform adequately for mixed convection, while higher-order RANS approaches underestimate the influence of buoyancy on the production of turbulence.
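The forced versus mixed convection regimes compared above are commonly distinguished by the Richardson number Ri = Gr/Re², the ratio of buoyancy to inertial forces (Ri ≪ 1 forced, Ri of order 1 mixed). The threshold of 0.1 below is a common rule of thumb, not a value from the paper, and the sample (Gr, Re) pairs are invented:

```python
# Classify a convection regime by the Richardson number Ri = Gr / Re^2.
def richardson(gr: float, re: float) -> float:
    return gr / re**2

def regime(ri: float) -> str:
    # 0.1 is a conventional rule-of-thumb cutoff for buoyancy effects
    return "mixed convection" if ri >= 0.1 else "forced convection"

for gr, re in [(1e8, 1e5), (1e9, 5e4)]:
    ri = richardson(gr, re)
    print(f"Gr={gr:.0e}, Re={re:.0e}: Ri={ri:.3g} -> {regime(ri)}")
```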

  4. An Investigation of a Hybrid Mixing Model for PDF Simulations of Turbulent Premixed Flames

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Li, Shan; Wang, Hu; Ren, Zhuyin

    2015-11-01

    Predictive simulations of turbulent premixed flames over a wide range of Damköhler numbers in the framework of Probability Density Function (PDF) method still remain challenging due to the deficiency in current micro-mixing models. In this work, a hybrid micro-mixing model, valid in both the flamelet regime and broken reaction zone regime, is proposed. A priori testing of this model is first performed by examining the conditional scalar dissipation rate and conditional scalar diffusion in a 3-D direct numerical simulation dataset of a temporally evolving turbulent slot jet flame of lean premixed H2-air in the thin reaction zone regime. Then, this new model is applied to PDF simulations of the Piloted Premixed Jet Burner (PPJB) flames, which are a set of highly shear turbulent premixed flames and feature strong turbulence-chemistry interaction at high Reynolds and Karlovitz numbers. Supported by NSFC 51476087 and NSFC 91441202.

  5. Potentials of Mean Force With Ab Initio Mixed Hamiltonian Models of Solvation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupuis, Michel; Schenter, Gregory K.; Garrett, Bruce C.

    2003-08-01

We give an account of a computationally tractable and efficient procedure for the calculation of potentials of mean force using mixed Hamiltonian models of electronic structure where quantum subsystems are described with computationally intensive ab initio wavefunctions. The mixed Hamiltonian is mapped into an all-classical Hamiltonian that is amenable to a thermodynamic perturbation treatment for the calculation of free energies. A small number of statistically uncorrelated (solute-solvent) configurations are selected from the Monte Carlo random walk generated with the all-classical Hamiltonian approximation. Those are used in the averaging of the free energy using the mixed quantum/classical Hamiltonian. The methodology is illustrated for the micro-solvated SN2 substitution reaction of methyl chloride by hydroxide. We also compare the potential of mean force calculated with the above protocol with an approximate formalism, one in which the potential of mean force calculated with the all-classical Hamiltonian is simply added to the energy of the isolated (non-solvated) solute along the reaction path. Interestingly, the latter approach is found to be in semi-quantitative agreement with the full mixed Hamiltonian approximation.

  6. Performance of nonlinear mixed effects models in the presence of informative dropout.

    PubMed

    Björnsson, Marcus A; Friberg, Lena E; Simonsson, Ulrika S H

    2015-01-01

    Informative dropout can lead to bias in statistical analyses if not handled appropriately. The objective of this simulation study was to investigate the performance of nonlinear mixed effects models with regard to bias and precision, with and without handling informative dropout. An efficacy variable and dropout depending on that efficacy variable were simulated and model parameters were reestimated, with or without including a dropout model. The Laplace and FOCE-I estimation methods in NONMEM 7, and the stochastic simulations and estimations (SSE) functionality in PsN, were used in the analysis. For the base scenario, bias was low, less than 5% for all fixed effects parameters, when a dropout model was used in the estimations. When a dropout model was not included, bias increased up to 8% for the Laplace method and up to 21% if the FOCE-I estimation method was applied. The bias increased with decreasing number of observations per subject, increasing placebo effect and increasing dropout rate, but was relatively unaffected by the number of subjects in the study. This study illustrates that ignoring informative dropout can lead to biased parameters in nonlinear mixed effects modeling, but even in cases with few observations or high dropout rate, the bias is relatively low and only translates into small effects on predictions of the underlying effect variable. A dropout model is, however, crucial in the presence of informative dropout in order to make realistic simulations of trial outcomes.
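The mechanism the abstract describes can be illustrated with a minimal, stdlib-only simulation sketch (not the NONMEM/PsN workflow used in the study): subjects with low responses drop out preferentially, so a naive average over completers is biased upward. All numbers here (slope distribution, dropout probabilities) are invented for illustration.

```python
import random

random.seed(1)

def simulate(n_subj=2000, times=(0, 1, 2, 3), informative=True):
    """Simulate subject-level linear responses with random slopes.
    At each visit before the last, a subject with a low observed value
    drops out with high probability (informative dropout)."""
    completer_last, everyone_last = [], []
    for _ in range(n_subj):
        slope = random.gauss(1.0, 0.5)                     # random effect
        vals = [slope * t + random.gauss(0.0, 0.3) for t in times]
        everyone_last.append(vals[-1])
        dropped = False
        for v in vals[:-1]:
            p_drop = 0.5 if (informative and v < 0.5) else 0.05
            if random.random() < p_drop:
                dropped = True
                break
        if not dropped:
            completer_last.append(vals[-1])
    return (sum(completer_last) / len(completer_last),
            sum(everyone_last) / len(everyone_last))

# completers are biased toward high-slope subjects, so the observed
# mean at the last visit overstates the population mean
observed_mean, true_mean = simulate()
```

Fitting a joint dropout model, as the abstract recommends, is what removes this selection bias.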

  7. A mixed-effects model approach for the statistical analysis of vocal fold viscoelastic shear properties.

    PubMed

    Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei

    2017-11-01

A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation had often been overlooked in previous studies over the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e. rabbit, porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of human over a frequency range of 1-250 Hz using the mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of human. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Mixed Effects Modeling Using Stochastic Differential Equations: Illustrated by Pharmacokinetic Data of Nicotinic Acid in Obese Zucker Rats.

    PubMed

    Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats

    2015-05-01

    Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
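The key idea of separating measurement noise from uncertainty in the dynamics can be sketched with a scalar Kalman filter (a simplification of the extended Kalman filtering the authors use inside FOCE-I; all parameter values below are hypothetical): the process-noise variance q represents model uncertainty, while r represents measurement error.

```python
def kalman_filter_scalar(ys, a=0.9, q=0.05, r=0.4, m0=0.0, p0=1.0):
    """Scalar Kalman filter for x[t+1] = a*x[t] + w, w ~ N(0, q),
    observed as y[t] = x[t] + v, v ~ N(0, r).  The q/r split is what
    lets a stochastic differential mixed effects model attribute
    variability to model dynamics versus measurement noise."""
    m, p = m0, p0
    means = []
    for y in ys:
        # time update (prediction through the dynamics)
        m, p = a * m, a * a * p + q
        # measurement update
        k = p / (p + r)          # Kalman gain
        m = m + k * (y - m)
        p = (1.0 - k) * p
        means.append(m)
    return means, p

ys = [1.0, 0.8, 1.1, 0.9]        # toy concentration observations
means, p_final = kalman_filter_scalar(ys)
```

Setting q = 0 recovers the ordinary differential equation setting; a fitted q significantly above zero is the diagnostic for incomplete model dynamics described in the abstract.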

  9. Breast Radiotherapy with Mixed Energy Photons; a Model for Optimal Beam Weighting.

    PubMed

    Birgani, Mohammadjavad Tahmasebi; Fatahiasl, Jafar; Hosseini, Seyed Mohammad; Bagheri, Ali; Behrooz, Mohammad Ali; Zabiehzadeh, Mansour; Meskani, Reza; Gomari, Maryam Talaei

    2015-01-01

Utilization of high energy photons (>10 MV) with an optimal weight using a mixed energy technique is a practical way to generate a homogeneous dose distribution while maintaining adequate target coverage in intact breast radiotherapy. This study represents a model for estimation of this optimal weight for day-to-day clinical usage. For this purpose, treatment planning computed tomography scans of thirty-three consecutive early stage breast cancer patients following breast conservation surgery were analyzed. After delineation of the breast clinical target volume (CTV) and placing opposed wedge-paired isocentric tangential portals, dosimetric calculations were conducted and dose volume histograms (DVHs) were generated, first with pure 6 MV photons; these calculations were then repeated ten times in each individual patient, incorporating 18 MV photons with a ten percent increase in weight per step. For each calculation two indexes, the maximum dose in the breast CTV (Dmax) and the volume of the CTV covered by the 95% isodose line (VCTV,95%IDL), were measured from the DVH data, and the normalized values were plotted in a graph. The optimal weight of 18 MV photons was defined as the intersection point of the Dmax and VCTV,95%IDL graphs. To create a model predicting this optimal weight, multiple linear regression analysis was used based on several breast and tangential field parameters. The best fitting model for prediction of the 18 MV photon optimal weight in breast radiotherapy using the mixed energy technique incorporated chest wall separation plus central lung distance (adjusted R2=0.776). In conclusion, this study represents a model for the estimation of optimal beam weighting in breast radiotherapy using the mixed photon energy technique for routine day-to-day clinical usage.
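The intersection step the abstract defines, finding the weight where the falling normalized Dmax curve crosses the rising normalized coverage curve, can be sketched by linear interpolation between the sampled 10% steps. The index values below are hypothetical, not the study's data.

```python
def intersect_weight(weights, dmax_norm, vctv_norm):
    """Locate the 18 MV weight where the normalized Dmax and
    V(CTV,95%IDL) curves cross, interpolating linearly between the
    sampled weight steps."""
    for i in range(len(weights) - 1):
        f0 = dmax_norm[i] - vctv_norm[i]
        f1 = dmax_norm[i + 1] - vctv_norm[i + 1]
        if f0 == 0:
            return weights[i]
        if f0 * f1 < 0:          # sign change: crossing in this interval
            t = f0 / (f0 - f1)
            return weights[i] + t * (weights[i + 1] - weights[i])
    return None                  # curves never cross on the sampled range

# hypothetical normalized index values at 10% weight steps
w = [0.0, 0.1, 0.2, 0.3, 0.4]
dmax = [1.00, 0.97, 0.94, 0.91, 0.88]   # falls as 18 MV weight rises
vctv = [0.90, 0.92, 0.94, 0.96, 0.98]   # coverage rises with weight
w_opt = intersect_weight(w, dmax, vctv)
```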

  10. Study of a mixed dispersal population dynamics model

    DOE PAGES

    Chugunova, Marina; Jadamba, Baasansuren; Kao, Chiu -Yen; ...

    2016-08-27

In this study, we consider a mixed dispersal model with periodic and Dirichlet boundary conditions and its corresponding linear eigenvalue problem. This model describes the time evolution of a population which disperses both locally and non-locally. We investigate how long time dynamics depend on the parameter values. Furthermore, we study the minimization of the principal eigenvalue under the constraints that the resource function is bounded from above and below, and with a fixed total integral. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for the species to die out more slowly or survive more easily. Our numerical simulations indicate that the optimal favorable region tends to be a simply-connected domain. Numerous results are shown to demonstrate various scenarios of optimal favorable regions for periodic and Dirichlet boundary conditions.

  11. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    PubMed Central

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  12. Benchmark studies of thermal jet mixing in SFRs using a two-jet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omotowa, O. A.; Skifton, R.; Tokuhiro, A.

To guide the modeling, simulations and design of Sodium Fast Reactors (SFRs), we explore and compare the predictive capabilities of two numerical solvers, COMSOL and OpenFOAM, in the thermal jet mixing of two buoyant jets typical of the outlet flow from an SFR tube bundle. This process will help optimize on-going experimental efforts at obtaining high resolution data for verification and validation (V&V) of CFD codes as anticipated in next generation nuclear systems. Using the k-ε turbulence models of both codes as reference, their ability to simulate the turbulence behavior in similar environments was first validated for single jet experimental data reported in the literature. This study investigates the thermal mixing of two parallel jets having a temperature difference (hot-to-cold) ΔT_hc = 5 °C, 10 °C and velocity ratios U_c/U_h = 0.5, 1. Results of the computed turbulent quantities due to convective mixing and the variations in flow field along the axial position are presented. In addition, this study also evaluates the effect of spacing ratio between jets in predicting the flow field and jet behavior in near and far fields. (authors)

  13. Item Response Theory Models for Wording Effects in Mixed-Format Scales

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu

    2015-01-01

    Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…

  14. The Pediatric Home Care/Expenditure Classification Model (P/ECM): A Home Care Case-Mix Model for Children Facing Special Health Care Challenges.

    PubMed

    Phillips, Charles D

    2015-01-01

    Case-mix classification and payment systems help assure that persons with similar needs receive similar amounts of care resources, which is a major equity concern for consumers, providers, and programs. Although health service programs for adults regularly use case-mix payment systems, programs providing health services to children and youth rarely use such models. This research utilized Medicaid home care expenditures and assessment data on 2,578 children receiving home care in one large state in the USA. Using classification and regression tree analyses, a case-mix model for long-term pediatric home care was developed. The Pediatric Home Care/Expenditure Classification Model (P/ECM) grouped children and youth in the study sample into 24 groups, explaining 41% of the variance in annual home care expenditures. The P/ECM creates the possibility of a more equitable, and potentially more effective, allocation of home care resources among children and youth facing serious health care challenges.
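The core move in the classification and regression tree analysis the abstract mentions is a variance-reducing split on an assessment variable. A minimal, stdlib-only sketch of one such split (the data below are invented, not the study's Medicaid records):

```python
def best_split(xs, ys):
    """One CART step: choose the threshold on a single predictor that
    most reduces the sum of squared errors of the outcome."""
    def sse(vals):
        if not vals:
            return 0.0
        mu = sum(vals) / len(vals)
        return sum((v - mu) ** 2 for v in vals)

    best = (None, sse(ys))                 # no split = one group
    for cut in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= cut]
        right = [y for x, y in zip(xs, ys) if x > cut]
        total = sse(left) + sse(right)
        if total < best[1]:
            best = (cut, total)
    return best

# hypothetical data: a needs score vs. annual home care spend
score = [1, 2, 3, 4, 5, 6]
spend = [10.0, 12.0, 11.0, 40.0, 42.0, 41.0]
cut, err = best_split(score, spend)
```

Recursively applying such splits until a stopping rule is met yields the kind of expenditure groups (24 in the P/ECM) described above.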

  15. The Pediatric Home Care/Expenditure Classification Model (P/ECM): A Home Care Case-Mix Model for Children Facing Special Health Care Challenges

    PubMed Central

    Phillips, Charles D.

    2015-01-01

    Case-mix classification and payment systems help assure that persons with similar needs receive similar amounts of care resources, which is a major equity concern for consumers, providers, and programs. Although health service programs for adults regularly use case-mix payment systems, programs providing health services to children and youth rarely use such models. This research utilized Medicaid home care expenditures and assessment data on 2,578 children receiving home care in one large state in the USA. Using classification and regression tree analyses, a case-mix model for long-term pediatric home care was developed. The Pediatric Home Care/Expenditure Classification Model (P/ECM) grouped children and youth in the study sample into 24 groups, explaining 41% of the variance in annual home care expenditures. The P/ECM creates the possibility of a more equitable, and potentially more effective, allocation of home care resources among children and youth facing serious health care challenges. PMID:26740744

  16. A mixed-effects regression model for longitudinal multivariate ordinal data.

    PubMed

    Liu, Li C; Hedeker, Donald

    2006-03-01

    A mixed-effects item response theory model that allows for three-level multivariate ordinal outcomes and accommodates multiple random subject effects is proposed for analysis of multivariate ordinal outcomes in longitudinal studies. This model allows for the estimation of different item factor loadings (item discrimination parameters) for the multiple outcomes. The covariates in the model do not have to follow the proportional odds assumption and can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is proposed utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher scoring solution, which provides standard errors for all model parameters, is used. An analysis of a longitudinal substance use data set, where four items of substance use behavior (cigarette use, alcohol use, marijuana use, and getting drunk or high) are repeatedly measured over time, is used to illustrate application of the proposed model.
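The Gauss-Hermite quadrature used to integrate out the random effects can be illustrated in one dimension (the paper uses the multidimensional version; this stdlib-only sketch hardcodes the closed-form 3-point rule):

```python
import math

# 3-point Gauss-Hermite rule (physicists' convention): the roots of
# H3(x) = 8x^3 - 12x and their closed-form weights.
NODES = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
WEIGHTS = [math.sqrt(math.pi) / 6,
           2 * math.sqrt(math.pi) / 3,
           math.sqrt(math.pi) / 6]

def gauss_hermite_expectation(g):
    """Approximate E[g(Z)] for Z ~ N(0, 1):
    E[g(Z)] ≈ (1/sqrt(pi)) * sum_i w_i * g(sqrt(2) * x_i).
    In a mixed model, g would be the conditional likelihood as a
    function of the random effect."""
    total = sum(w * g(math.sqrt(2.0) * x) for x, w in zip(NODES, WEIGHTS))
    return total / math.sqrt(math.pi)

second_moment = gauss_hermite_expectation(lambda z: z * z)
```

The 3-point rule is exact for polynomial integrands up to degree five, so the second moment of a standard normal comes out exactly 1; likelihood integrands generally need more points.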

  17. MIXING STUDY FOR JT-71/72 TANKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S.

    2013-11-26

All modeling calculations for the mixing operations of miscible fluids contained in HB-Line tanks, JT-71/72, were performed by taking a three-dimensional Computational Fluid Dynamics (CFD) approach. The CFD modeling results were benchmarked against the literature results and the previous SRNL test results to validate the model. Final performance calculations were performed by using the validated model to quantify the mixing time for the HB-Line tanks. The mixing study results for the JT-71/72 tanks show that, for the cases modeled, the mixing time required for blending of the tank contents is no more than 35 minutes, which is well below the 2.5 hours of recirculation pump operation. Therefore, the results demonstrate that 2.5 hours of operation of one recirculation pump is adequate to fully mix the tank contents.

  18. Methods of testing parameterizations: Vertical ocean mixing

    NASA Technical Reports Server (NTRS)

    Tziperman, Eli

    1992-01-01

The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean occurs at scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the

  19. A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.

    PubMed

    Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon

    2007-02-01

Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13 by 13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
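The input-output step underlying such models is the Leontief total-requirements calculation, x = (I - A)^-1 d, where A is the direct-requirements matrix and d the final demand. A stdlib-only two-sector sketch with invented coefficients (not the BEA 13-sector table):

```python
def leontief_output(A, d):
    """Solve (I - A) x = d for a 2-sector economy via Cramer's rule.
    A[i][j]: input from sector i required per unit output of sector j."""
    a, b = 1.0 - A[0][0], -A[0][1]
    c, e = -A[1][0], 1.0 - A[1][1]
    det = a * e - b * c
    x0 = (d[0] * e - b * d[1]) / det
    x1 = (a * d[1] - d[0] * c) / det
    return [x0, x1]

A = [[0.2, 0.3],    # hypothetical direct-requirements coefficients
     [0.1, 0.4]]
d = [100.0, 50.0]   # final demand, in monetary units
x = leontief_output(A, d)   # total output meeting direct + indirect demand
```

In the mixed-unit formulation, some rows of A and entries of d carry physical units (e.g. tonnes of lead) instead of dollars, but the same linear solve applies.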

  20. Evaluating significance in linear mixed-effects models in R.

    PubMed

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
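The likelihood ratio test evaluated above can be sketched with stdlib Python (this shows only the LRT mechanics, not lme4 or the Kenward-Roger/Satterthwaite corrections; note that LRTs on fixed effects require ML rather than REML fits, and the log-likelihoods below are hypothetical):

```python
import math

def lrt_pvalue(loglik_null, loglik_full):
    """Likelihood ratio test for one extra fixed effect: the statistic
    2*(llf - ll0) is referred to a chi-square with 1 df, whose survival
    function has the closed form erfc(sqrt(x/2))."""
    stat = 2.0 * (loglik_full - loglik_null)
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# hypothetical ML log-likelihoods of nested mixed models
stat, p = lrt_pvalue(-104.2, -101.9)
```

As the simulations in the abstract show, this reference distribution is anti-conservative in small samples, which is why the Kenward-Roger and Satterthwaite degrees-of-freedom approximations are preferred.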

  1. Development of a Reduced-Order Three-Dimensional Flow Model for Thermal Mixing and Stratification Simulation during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui

    2017-09-03

Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles for the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel of parallel plates. Based on the comparisons with the analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model can perform very well for a wide range of flow problems.

  2. Effective temperatures of red giants in the APOKASC catalogue and the mixing length calibration in stellar models

    NASA Astrophysics Data System (ADS)

    Salaris, M.; Cassisi, S.; Schiavon, R. P.; Pietrinferni, A.

    2018-04-01

    Red giants in the updated APOGEE-Kepler catalogue, with estimates of mass, chemical composition, surface gravity and effective temperature, have recently challenged stellar models computed under the standard assumption of solar calibrated mixing length. In this work, we critically reanalyse this sample of red giants, adopting our own stellar model calculations. Contrary to previous results, we find that the disagreement between the Teff scale of red giants and models with solar calibrated mixing length disappears when considering our models and the APOGEE-Kepler stars with scaled solar metal distribution. However, a discrepancy shows up when α-enhanced stars are included in the sample. We have found that assuming mass, chemical composition and effective temperature scale of the APOGEE-Kepler catalogue, stellar models generally underpredict the change of temperature of red giants caused by α-element enhancements at fixed [Fe/H]. A second important conclusion is that the choice of the outer boundary conditions employed in model calculations is critical. Effective temperature differences (metallicity dependent) between models with solar calibrated mixing length and observations appear for some choices of the boundary conditions, but this is not a general result.

  3. On testing an unspecified function through a linear mixed effects model with multiple variance components

    PubMed Central

    Wang, Yuanjia; Chen, Huaihou

    2012-01-01

We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801

  4. On testing an unspecified function through a linear mixed effects model with multiple variance components.

    PubMed

    Wang, Yuanjia; Chen, Huaihou

    2012-12-01

We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.

  5. Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buck, Edgar C.; Jerden, James L.; Ebert, William L.

The primary purpose of this report is to describe the strategy for coupling three process level models to produce an integrated Used Fuel Degradation Model (FDM). The FDM, which is based on fundamental chemical and physical principles, provides direct calculation of radionuclide source terms for use in repository performance assessments. The G-value for H2O2 production (Gcond) to be used in the Mixed Potential Model (MPM) (H2O2 is the only radiolytic product presently included but others will be added as appropriate) needs to account for intermediate spur reactions. The effects of these intermediate reactions on [H2O2] are accounted for in the Radiolysis Model (RM). This report details methods for applying RM calculations that encompass the effects of these fast interactions on [H2O2] as the solution composition evolves during successive MPM iterations and then represent the steady-state [H2O2] in terms of an "effective instantaneous or conditional" generation value (Gcond). It is anticipated that the value of Gcond will change slowly as the reaction progresses through several iterations of the MPM as changes in the nature of fuel surface occur. The Gcond values will be calculated with the RM either after several iterations or when concentrations of key reactants reach threshold values determined from previous sensitivity runs. Sensitivity runs with RM indicate significant changes in G-value can occur over narrow composition ranges. The objective of the mixed potential model (MPM) is to calculate the used fuel degradation rates for a wide range of disposal environments to provide the source term radionuclide release rates for generic repository concepts. The fuel degradation rate is calculated for chemical and oxidative dissolution mechanisms using mixed potential theory to account for all relevant redox reactions at the fuel surface, including those involving oxidants produced by solution radiolysis and provided by the radiolysis model (RM). The RM

  6. Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change

    NASA Astrophysics Data System (ADS)

    Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.

    2017-12-01

    Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions is stored in the ocean versus entering the atmosphere to cause warming, and how the extra heat is distributed, depends on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" versus mitigating emissions to avert catastrophe. Simulations of climate change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts the transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal-wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process the output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at

  7. Developing approaches for linear mixed modeling in landscape genetics through landscape-directed dispersal simulations

    USGS Publications Warehouse

    Row, Jeffrey R.; Knick, Steven T.; Oyler-McCance, Sara J.; Lougheed, Stephen C.; Fedy, Bradley C.

    2017-01-01

    Dispersal can impact population dynamics and geographic variation, and thus, genetic approaches that can establish which landscape factors influence population connectivity have ecological and evolutionary importance. Mixed models that account for the error structure of pairwise datasets are increasingly used to compare models relating genetic differentiation to pairwise measures of landscape resistance. A model selection framework based on information criteria metrics or explained variance may help disentangle the ecological and landscape factors influencing genetic structure, yet there is currently no consensus on the best protocols. Here, we develop landscape-directed simulations and test a series of replicates that emulate independent empirical datasets of two species with different life history characteristics (greater sage-grouse; eastern foxsnake). We determined that in our simulated scenarios, AIC and BIC were the best model selection indices and that marginal R2 values were biased toward more complex models. The model coefficients for landscape variables generally reflected the underlying dispersal model, with confidence intervals that did not overlap with zero across the entire model set. When we controlled for geographic distance, variables not in the underlying dispersal models (i.e., nontrue variables) typically overlapped zero. Our study helps establish methods for using linear mixed models to identify the features underlying patterns of dispersal across a variety of landscapes.
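    The AIC/BIC model-selection step described above can be sketched with synthetic data. The variable names and effect sizes below are purely illustrative, and ordinary least squares stands in for the mixed models used in the study:

    ```python
    import numpy as np

    def fit_ols(X, y):
        """Ordinary least squares; returns the Gaussian ML log-likelihood."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        n = len(y)
        sigma2 = resid @ resid / n  # ML estimate of the error variance
        return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

    def aic(loglik, k):       # k = number of estimated parameters (incl. sigma2)
        return 2 * k - 2 * loglik

    def bic(loglik, k, n):
        return k * np.log(n) - 2 * loglik

    # Simulated pairwise data: genetic differentiation driven by distance
    # and a hypothetical landscape-resistance variable.
    rng = np.random.default_rng(0)
    n = 200
    distance = rng.uniform(0, 10, n)
    resistance = rng.uniform(0, 5, n)
    gen_diff = 0.8 * distance + 0.5 * resistance + rng.normal(0, 1, n)

    X_dist = np.column_stack([np.ones(n), distance])
    X_full = np.column_stack([np.ones(n), distance, resistance])

    for name, X, k in [("distance-only", X_dist, 3),
                       ("distance+resistance", X_full, 4)]:
        ll = fit_ols(X, gen_diff)
        print(name, "AIC=%.1f" % aic(ll, k), "BIC=%.1f" % bic(ll, k, n))
    ```

    Because the simulated "true" model includes resistance, both criteria favor the fuller model despite its extra parameter, mirroring how the study scores candidate dispersal models.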

  8. Experimental and mathematical model of the interactions in the mixed culture of links in the "producer-consumer" cycle

    NASA Astrophysics Data System (ADS)

    Pisman, T. I.; Galayda, Ya. V.

    The paper presents an experimental and mathematical model of the interactions between invertebrates (the ciliates Paramecium caudatum and the rotifers Brachionus plicatilis) and algae (Chlorella vulgaris and Scenedesmus quadricauda) in the producer-consumer aquatic biotic cycle with spatially separated components. The model describes the dynamics of the mixed culture of ciliates and rotifers in the consumer component feeding on the mixed algal culture of the producer component. It has been found that metabolites of the alga Scenedesmus produce an adverse effect on the reproduction of the ciliate P. caudatum. Taking this effect into account, the results of the investigation of the mathematical model were in qualitative agreement with the experimental results. In the producer-consumer biotic cycle, it was shown that coexistence is impossible in the mixed algal culture of the producer component and in the mixed culture of invertebrates of the consumer component: the ciliates P. caudatum are driven out by the rotifers Brachionus plicatilis.

  9. Horizontal heat fluxes over complex terrain computed using a simple mixed-layer model and a numerical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Fujio; Kuwagata, Tuneo

    1995-02-01

    The thermally induced local circulation over a periodic valley is simulated by a two-dimensional numerical model that does not include condensational processes. During the daytime of a clear, calm day, heat is transported from the mountainous region to the valley area by the anabatic wind and its return flow. The specific humidity is, however, transported in the inverse manner. The horizontal exchange rate of sensible heat has a horizontal scale similarity, as long as the horizontal scale is less than a critical width of about 100 km. The sensible heat accumulated in an atmospheric column over an arbitrary point can be estimated by a simple model termed the uniform mixed-layer model (UML). The model assumes that the potential temperature is both vertically and horizontally uniform in the mixed layer, even over complex terrain. The UML model is valid only when the horizontal scale of the topography is less than the critical width and the maximum difference in the elevation of the topography is less than about 1500 m. Latent heat is accumulated over the mountainous region while the atmosphere becomes dry over the valley area. When the horizontal scale is close to the critical width, the largest amount of humidity is accumulated during the late afternoon over the mountainous region. 18 refs., 15 figs., 1 tab.

  10. Statistical modelling of growth using a mixed model with orthogonal polynomials.

    PubMed

    Suchocki, T; Szyda, J

    2011-02-01

    In statistical modelling, the effects of single-nucleotide polymorphisms (SNPs) are often regarded as time-independent. However, for traits recorded repeatedly, it is very interesting to investigate the behaviour of gene effects over time. In the analysis, simulated data from the 13th QTL-MAS Workshop (Wageningen, The Netherlands, April 2009) was used and the major goal was the modelling of genetic effects as time-dependent. For this purpose, a mixed model which describes each effect using the third-order Legendre orthogonal polynomials, in order to account for the correlation between consecutive measurements, is fitted. In this model, SNPs are modelled as fixed, while the environment is modelled as random effects. The maximum likelihood estimates of model parameters are obtained by the expectation-maximisation (EM) algorithm and the significance of the additive SNP effects is based on the likelihood ratio test, with p-values corrected for multiple testing. For each significant SNP, the percentage of the total variance contributed by this SNP is calculated. Moreover, by using a model which simultaneously incorporates effects of all of the SNPs, the prediction of future yields is conducted. As a result, 179 from the total of 453 SNPs covering 16 out of 18 true quantitative trait loci (QTL) were selected. The correlation between predicted and true breeding values was 0.73 for the data set with all SNPs and 0.84 for the data set with selected SNPs. In conclusion, we showed that a longitudinal approach allows for estimating changes of the variance contributed by each SNP over time and demonstrated that, for prediction, the pre-selection of SNPs plays an important role.
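    The use of a third-order Legendre basis to describe a time-dependent effect can be sketched as follows. The time grid, coefficients and noise level are hypothetical, and plain least squares stands in for the EM-fitted mixed model of the study:

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    # Rescale measurement times (e.g., weeks 1..10) onto [-1, 1],
    # the natural domain of the Legendre polynomials.
    weeks = np.arange(1, 11)
    t = 2 * (weeks - weeks.min()) / (weeks.max() - weeks.min()) - 1

    # Third-order Legendre basis: columns are P0(t)..P3(t).
    basis = legendre.legvander(t, 3)

    # Hypothetical time-dependent SNP effect observed with noise.
    rng = np.random.default_rng(1)
    true_coefs = np.array([0.5, 0.3, -0.2, 0.1])
    effect = basis @ true_coefs + rng.normal(0, 0.05, len(t))

    # Recover the polynomial coefficients by least squares.
    est, *_ = np.linalg.lstsq(basis, effect, rcond=None)
    print(np.round(est, 2))
    ```

    The fitted coefficients trace how the effect varies over time, which is the quantity the longitudinal approach makes available for each significant SNP.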

  11. Scale model performance test investigation of mixed flow exhaust systems for an energy efficient engine /E3/ propulsion system

    NASA Technical Reports Server (NTRS)

    Kuchar, A. P.; Chamberlin, R.

    1983-01-01

    As part of the NASA Energy Efficient Engine program, scale-model performance tests of a mixed flow exhaust system were conducted. The tests were used to evaluate the performance of exhaust system mixers for high-bypass, mixed-flow turbofan engines. The tests indicated that: (1) mixer penetration has the most significant effect on both mixing effectiveness and mixer pressure loss; (2) mixing/tailpipe length improves mixing effectiveness; (3) reducing the gap between the mixer and centerbody increases mixing effectiveness; (4) mixer cross-sectional shape influences mixing effectiveness; (5) lobe number affects the degree of mixing; and (6) mixer aerodynamic pressure losses are a function of secondary flows inherent to the lobed mixer concept.

  12. Unlearning of Mixed States in the Hopfield Model —Extensive Loading Case—

    NASA Astrophysics Data System (ADS)

    Hayashi, Kao; Hashimoto, Chinami; Kimoto, Tomoyuki; Uezu, Tatsuya

    2018-05-01

    We study the unlearning of mixed states in the Hopfield model for the extensive loading case. Firstly, we focus on case I, where several embedded patterns are correlated with each other, whereas the rest are uncorrelated. Secondly, we study case II, where patterns are divided into clusters in such a way that patterns in any cluster are correlated but those in two different clusters are not correlated. By using the replica method, we derive the saddle point equations for order parameters under the ansatz of replica symmetry. The same equations are also derived by self-consistent signal-to-noise analysis in case I. In both cases I and II, we find that when the correlation between patterns is large, the network loses its ability to retrieve the embedded patterns and, depending on the parameters, a confused memory, which is a mixed state and/or spin glass state, emerges. By unlearning the mixed state, the network acquires the ability to retrieve the embedded patterns again in some parameter regions. We find that to delete the mixed state and to retrieve the embedded patterns, the coefficient of unlearning should be chosen appropriately. We perform Markov chain Monte Carlo simulations and find that the simulation and theoretical results agree reasonably well, except for the spin glass solution in a parameter region due to the replica symmetry breaking. Furthermore, we find that the existence of many correlated clusters reduces the stabilities of both embedded patterns and mixed states.

  13. Simulation Model for Scenario Optimization of the Ready-Mix Concrete Delivery Problem

    NASA Astrophysics Data System (ADS)

    Galić, Mario; Kraus, Ivan

    2016-12-01

    This paper introduces a discrete simulation model for solving routing and network material flow problems in construction projects. A detailed literature review precedes the description of the model. The model is verified using a case study solving the ready-mix concrete network flow and routing problem in a metropolitan area in Croatia. Real-time input parameters were taken into account within this study. The simulation model is structured in the Enterprise Dynamics simulation software and Microsoft Excel linked with Google Maps. The model is dynamic, easily managed and adjustable, and also provides good estimates for minimizing costs and realization time in solving discrete routing and material network flow problems.

  14. Mixed-order phase transition in a minimal, diffusion-based spin model.

    PubMed

    Fronczak, Agata; Fronczak, Piotr

    2016-07-01

    In this paper we exactly solve, within the grand canonical ensemble, a minimal spin model with the hybrid phase transition. We call the model diffusion based because its Hamiltonian can be recovered from a simple dynamic procedure, which can be seen as an equilibrium statistical mechanics representation of a biased random walk. We outline the derivation of the phase diagram of the model, in which the triple point has the hallmarks of the hybrid transition: discontinuity in the average magnetization and algebraically diverging susceptibilities. At this point, two second-order transition curves meet in equilibrium with the first-order curve, resulting in a prototypical mixed-order behavior.

  15. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation (ODE) Models with Mixed Effects

    PubMed Central

    Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam

    2016-01-01

    Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255
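    A generic local linear approximation of derivatives, in the spirit of the GLLA-type estimators discussed above, can be sketched as follows. The window size and test signal are illustrative, not the authors' settings:

    ```python
    import numpy as np

    def local_linear_derivative(t, y, half_window=2):
        """Estimate dy/dt by fitting a straight line over a sliding window,
        a simple stand-in for GLLA-style derivative estimation."""
        dydt = np.full_like(y, np.nan)  # edges left undefined
        for i in range(half_window, len(y) - half_window):
            sl = slice(i - half_window, i + half_window + 1)
            slope, _intercept = np.polyfit(t[sl], y[sl], 1)
            dydt[i] = slope
        return dydt

    # Check against a signal with a known derivative: d/dt sin(t) = cos(t).
    t = np.linspace(0, 2 * np.pi, 200)
    y = np.sin(t)
    d = local_linear_derivative(t, y)
    ```

    In a two-stage workflow, derivative estimates like these would be computed in stage one and then passed to the ODE model fit in stage two.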

  16. A mixing-model approach to quantifying sources of organic matter to salt marsh sediments

    NASA Astrophysics Data System (ADS)

    Bowles, K. M.; Meile, C. D.

    2010-12-01

    Salt marshes are highly productive ecosystems, where autochthonous production controls an intricate exchange of carbon and energy among organisms. The major sources of organic carbon to these systems include 1) autochthonous production of vascular plant matter, 2) import of allochthonous plant material, and 3) phytoplankton biomass. Quantifying the relative contribution of organic matter sources to a salt marsh is important for understanding the fate and transformation of organic carbon in these systems, which also impacts the timing and magnitude of carbon export to the coastal ocean. A common approach to quantifying organic matter source contributions to mixtures is the use of linear mixing models. To estimate the relative contributions of endmember materials to total organic matter in the sediment, the problem is formulated as a constrained linear least-squares problem. However, the type of data utilized in such mixing models, the uncertainties in endmember compositions and the temporal dynamics of non-conservative entities can each affect the results. Making use of a comprehensive data set that encompasses several endmember characteristics - including a yearlong degradation experiment - we study the impact of these factors on estimates of the origin of sedimentary organic carbon in a salt marsh located in the SE United States. We first evaluate the sensitivity of linear mixing models to the type of data employed by analyzing a series of mixing models that utilize various combinations of parameters (i.e. endmember characteristics such as δ13COC, C/N ratios or lignin content). Next, we assess the importance of using more than the minimum number of parameters required to estimate endmember contributions to the total organic matter pool. Then, we quantify the impact of data uncertainty on the outcome of the analysis using Monte Carlo simulations and accounting for the uncertainty in endmember characteristics. Finally, as biogeochemical processes
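    The constrained linear least-squares formulation can be sketched as follows. The endmember signatures and mixture values are hypothetical, and the sum-to-one constraint is enforced with a heavily weighted auxiliary equation:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical endmember characteristics (rows: delta-13C, C/N ratio;
    # columns: vascular plants, allochthonous material, phytoplankton).
    A = np.array([[-13.0, -26.0, -21.0],   # delta-13C (permil)
                  [ 40.0,  20.0,   7.0]])  # C/N ratio

    # Observed sediment mixture (hypothetical).
    b = np.array([-20.0, 22.0])

    # Append a heavily weighted row of ones so the fractions sum to ~1,
    # then solve the non-negative least-squares problem.
    w = 100.0
    A_aug = np.vstack([A, w * np.ones(3)])
    b_aug = np.append(b, w * 1.0)
    fractions, resid = nnls(A_aug, b_aug)
    print(np.round(fractions, 3))  # non-negative source contributions
    ```

    Monte Carlo propagation of endmember uncertainty, as the abstract describes, would repeat this solve with the entries of `A` perturbed within their measured ranges.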

  17. Modeling Macro- and Micro-Scale Turbulent Mixing and Chemistry in Engine Exhaust Plumes

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1998-01-01

    Simulation of turbulent mixing and chemical processes in the near-field plume and plume-vortex regimes has recently been carried out successfully using a reduced gas-phase kinetics mechanism, which substantially decreased the computational cost. A detailed mechanism including gas-phase HOx, NOx, and SOx chemistry between the aircraft exhaust and the ambient air in near-field aircraft plumes is compiled. A reduced mechanism capturing the major chemical pathways is developed. Predictions by the reduced mechanism are found to be in good agreement with those by the detailed mechanism. With the reduced chemistry, computer CPU time is reduced by a factor of more than 3.5 for the near-field plume modeling. Distributions of major chemical species are obtained and analyzed. The computed sensitivities of major species with respect to each reaction step are deduced to identify the dominant gas-phase kinetic reaction pathways in the jet plume. Both the near-field plume and the plume-vortex regimes were investigated using advanced mixing models. In the near field, a stand-alone mixing model was used to investigate the impact of turbulent mixing on the micro- and macro-scale mixing processes using a reduced reaction kinetics model. The plume-vortex regime was simulated using a large-eddy simulation model. The vortex plumes behind Boeing 737 and 747 aircraft were simulated along with the relevant kinetics. Many features of the computed flow field show reasonable agreement with data. The entrainment of the engine plumes into the wing-tip vortices, and also the partial detrainment of the plume, were numerically captured. The impact of fluid mechanics on the chemical processes was also studied. Results show that there are significant differences between spatial and temporal simulations, especially in the predicted SO3 concentrations.
This has important implications for the prediction of sulfuric acid aerosols in the wake and may partly explain the discrepancy between past numerical studies

  18. Linear models for sound from supersonic reacting mixing layers

    NASA Astrophysics Data System (ADS)

    Chary, P. Shivakanth; Samanta, Arnab

    2016-12-01

    We perform linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H)-type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how they radiate to the far field is uncertain, and this is our focus. Keeping the flow compressibility fixed, the outer modes are realized by biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show significantly alters the growth of instability waves by saturating them earlier, similar to nonlinear calculations, achieved here by solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with less spreading of the mixing layer compared to the corresponding fast modes. In contrast, the radiated sound seems relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to have a pronounced effect on the slow-mode radiation by reducing its modal growth.

  19. MANOVA vs nonlinear mixed effects modeling: The comparison of growth patterns of female and male quail

    NASA Astrophysics Data System (ADS)

    Gürcan, Eser Kemal

    2017-04-01

    The most commonly used methods for analyzing time-dependent data are multivariate analysis of variance (MANOVA) and nonlinear regression models. The aim of this study was to compare some MANOVA techniques with a nonlinear mixed modelling approach for investigating growth differentiation in female and male Japanese quail. Weekly individual body weight data of 352 male and 335 female quail from hatch to 8 weeks of age were used to perform the analyses. When all of the analyses are evaluated, nonlinear mixed modelling is superior to the other techniques because it also reveals individual variation. In addition, the profile analysis provides important information.

  20. A mixed-effects height-diameter model for cottonwood in the Mississippi Delta

    Treesearch

    Curtis L. VanderSchaaf; H. Christoph Stuhlinger

    2012-01-01

    Eastern cottonwood (Populus deltoides Bartr. ex Marsh.) has been artificially regenerated throughout the Mississippi Delta region because of its fast growth and is being considered for biofuel production. This paper presents a mixed-effects height-diameter model for cottonwood in the Mississippi Delta region. After obtaining height-diameter...

  1. SOURCE AGGREGATION IN STABLE ISOTOPE MIXING MODELS: LUMP IT OR LEAVE IT?

    EPA Science Inventory

    A common situation when stable isotope mixing models are used to estimate source contributions to a mixture is that there are too many sources to allow a unique solution. To resolve this problem one option is to combine sources with similar signatures such that the number of sou...

  2. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    PubMed

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, shows convergence problems instead. The random effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification, together with convergence robustness, should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach.

  3. The prediction of sea-surface temperature variations by means of an advective mixed-layer ocean model

    NASA Technical Reports Server (NTRS)

    Atlas, R. M.

    1976-01-01

    An advective mixed layer ocean model was developed by eliminating the assumption of horizontal homogeneity in an already existing mixed layer model, and then superimposing a mean and anomalous wind driven current field. This model is based on the principle of conservation of heat and mechanical energy and utilizes a box grid for the advective part of the calculation. Three phases of experiments were conducted: evaluation of the model's ability to account for climatological sea surface temperature (SST) variations in the cooling and heating seasons, sensitivity tests in which the effect of hypothetical anomalous winds was evaluated, and a thirty-day synoptic calculation using the model. For the case studied, the accuracy of the predictions was improved by the inclusion of advection, although nonadvective effects appear to have dominated.
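    The heat-budget core of such a mixed-layer model can be sketched as a simple column update. The values below are illustrative only; the actual model also includes advection and mechanical energy:

    ```python
    # Minimal mixed-layer heat budget: dT/dt = Q / (rho * cp * h),
    # stepped forward with forward Euler over a fixed-depth layer.
    rho = 1025.0    # seawater density, kg/m^3
    cp = 3990.0     # specific heat of seawater, J/(kg K)
    h = 50.0        # mixed-layer depth, m (hypothetical)
    dt = 86400.0    # one-day time step, s

    sst = 15.0      # initial sea-surface temperature, deg C
    Q_net = 100.0   # net surface heat flux into the ocean, W/m^2
    days = 30
    for _ in range(days):
        sst += Q_net * dt / (rho * cp * h)

    print(round(sst, 2))  # SST after a 30-day heating period
    ```

    With these values the layer warms by roughly 0.04 K per day; the advective terms the model adds would redistribute this heat horizontally via the wind-driven current field.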

  4. Estimates of lake trout (Salvelinus namaycush) diet in Lake Ontario using two and three isotope mixing models

    USGS Publications Warehouse

    Colborne, Scott F.; Rush, Scott A.; Paterson, Gordon; Johnson, Timothy B.; Lantry, Brian F.; Fisk, Aaron T.

    2016-01-01

    Recent development of multi-dimensional stable isotope models for estimating both foraging patterns and niches has provided the analytical tools to further assess the food webs of freshwater populations. One approach to refine predictions from these analyses is to add a third isotope to the more common two-isotope carbon and nitrogen mixing models to increase the power to resolve different prey sources. We compared predictions made with two-isotope carbon and nitrogen mixing models and three-isotope models that also included sulphur (δ34S) for the diets of Lake Ontario lake trout (Salvelinus namaycush). We determined the isotopic compositions of lake trout and potential prey fishes sampled from Lake Ontario and then used quantitative estimates of resource use generated by two- and three-isotope Bayesian mixing models (SIAR) to infer feeding patterns of lake trout. Both two- and three-isotope models indicated that alewife (Alosa pseudoharengus) and round goby (Neogobius melanostomus) were the primary prey items, but the three-isotope models were more consistent with recent measures of prey fish abundances and lake trout diets. The lake trout sampled directly from the hatcheries had isotopic compositions derived from the hatchery food, distinctively different from those derived from the natural prey sources. Those hatchery signals were retained for months after release, raising the possibility of distinguishing hatchery-reared yearlings from similarly sized naturally reproduced lake trout based on isotopic compositions. Addition of a third isotope resulted in mixing model results confirming that round goby has become an important component of lake trout diet and may be overtaking alewife as a prey resource.
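    The gain in resolving power from a third isotope can be illustrated with a deterministic sketch, where a least-squares solve stands in for the Bayesian SIAR model and all signature values are hypothetical. With four prey sources, two isotopes plus mass balance leave the system underdetermined; a third isotope makes it uniquely solvable:

    ```python
    import numpy as np

    # Hypothetical prey signatures (columns: alewife, round goby, smelt, sculpin).
    sources = np.array([
        [-22.0, -18.0, -24.0, -20.0],  # delta-13C
        [ 14.0,  12.0,  16.0,  13.0],  # delta-15N
        [  2.0,   6.0,   4.0,   1.0],  # delta-34S
    ])
    # Mixture built from known fractions (0.4, 0.3, 0.2, 0.1) for checking.
    mixture = np.array([-21.0, 13.7, 3.5])

    def solve_mixing(n_isotopes):
        """Diet fractions from the first n isotopes plus the mass-balance row."""
        A = np.vstack([sources[:n_isotopes], np.ones(4)])
        b = np.append(mixture[:n_isotopes], 1.0)
        frac, *_ = np.linalg.lstsq(A, b, rcond=None)
        return frac

    two = solve_mixing(2)    # 3 equations, 4 unknowns: no unique solution
    three = solve_mixing(3)  # 4 equations, 4 unknowns: uniquely determined
    print(np.round(two, 3), np.round(three, 3))
    ```

    Real mixing-model analyses such as SIAR additionally enforce non-negative fractions and propagate signature uncertainty through Bayesian priors rather than a single linear solve.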

  5. Recycle of mixed automotive plastics: A model study

    NASA Astrophysics Data System (ADS)

    Woramongconchai, Somsak

    decreased with increased twin-screw extrusion temperature. The flexural modulus of the recycled mixed automotive plastics expected in 2003 was higher than that of the 1980s and 1990s recycle. Flexural strength effects were not large enough for serious consideration, but were more dominant than those in the 1980s and 1990s. Impact strengths at 20-30 J/m were the lowest values compared to the 1980s and 1990s mixed automotive recycle. Torque rheometry, dynamic mechanical analysis, and optical and electron microscopy agreed with each other on the characterization of the processability and morphology of the blends. LLDPE and HDPE were miscible, while PP was partially miscible with polyethylene. ABS and nylon-6 were immiscible with the polyolefins, but partially miscible with each other. As expected, the polyurethane foam was immiscible with the other components. The minor components of the model recycle of mixed automotive materials were probably partially miscible with ABS/nylon-6, but there were multiple and unresolved phases in the major blends.

  6. Bursting patterns and mixed-mode oscillations in reduced Purkinje model

    NASA Astrophysics Data System (ADS)

    Zhan, Feibiao; Liu, Shenquan; Wang, Jing; Lu, Bo

    2018-02-01

    Bursting discharge is a ubiquitous behavior in neurons, and the abundance of bursting patterns carries much physiological information. There is a close potential link between bifurcation phenomena and the number of spikes per burst, as well as mixed-mode oscillations (MMOs). In this paper, we have mainly explored the dynamical behavior of the reduced Purkinje cell model and the existence of MMOs. First, we adopted codimension-one bifurcation analysis to illustrate the generation mechanism of bursting in the reduced Purkinje cell model via slow-fast dynamics analysis and demonstrate the process of spike-adding. Furthermore, we have computed the first Lyapunov coefficient of the Hopf bifurcation to determine whether it is subcritical or supercritical, and depicted the diagrams of inter-spike intervals (ISIs) to examine the chaos. Moreover, the bifurcation diagram near the cusp point is obtained by carrying out a codimension-two bifurcation analysis for the fast subsystem. Finally, we discuss mixed-mode oscillations, which are further investigated using a characteristic index, the Devil’s staircase.

  7. Line-Mixing Relaxation Matrix model for spectroscopic and radiative transfer studies

    NASA Astrophysics Data System (ADS)

    Mendaza, Teresa; Martin-Torres, Javier

    2016-04-01

    We present a generic model to compute the Relaxation Matrix, easily adaptable to any molecule and type of spectroscopic lines or bands in non-reactive molecular collision regimes. It also provides the dipole moment of every transition and the level populations of the selected molecule. The model is based on the Energy-Corrected Sudden (ECS) approximation/theory introduced by DePristo (1980), and on previous Relaxation Matrix studies of the interaction between molecular ro-vibrational levels (Ben-Reuven, 1966), atoms (Rosenkranz, 1975), linear molecules (Strow and Reuter, 1994; Niro, Boulet and Hartmann, 2004), and symmetric but not linear molecules (Tran et al., 2006). The model is open source and user-friendly, to the point that the user only has to select the desired molecule and vibrational band to perform the calculations. It reads the needed spectroscopic data from the HIgh-resolution TRANsmission molecular absorption database (HITRAN) (Rothman et al., 2013) and ExoMol (Tennyson and Yurchenko, 2012). In this work we present an example of the calculations with our model for the case of the 2ν3 band of methane (CH4), and a comparison with a previous work (Tran et al., 2010). The data produced by our model can be used to characterise the line-mixing effects on ro-vibrational lines of the infrared emitters of any atmosphere and to calculate the accurate absorption spectra needed in the interpretation of atmospheric spectra, radiative transfer modelling and General Circulation Models (GCMs). References: [1] A.E. DePristo, Collisional influence on vibration-rotation spectral line shapes: A scaling theoretical analysis and simplification, J. Chem. Phys. 73(5), 1980. [2] A. Ben-Reuven, Impact broadening of microwave spectra, Phys. Rev. 145(1), 7-22, 1966. [3] P.W. Rosenkranz, Shape of the 5 mm Oxygen Band in the Atmosphere, IEEE Transactions on Antennas and Propagation, vol. AP-23, no. 4, pp. 498-506, 1975. [4] Strow, L.L., D.D. Tobin, and S.E. Hannon, A compilation of

  8. A Model of High-Frequency Self-Mixing in Double-Barrier Rectifier

    NASA Astrophysics Data System (ADS)

    Palma, Fabrizio; Rao, R.

    2018-03-01

    In this paper, a new model of the frequency dependence of the double-barrier THz rectifier is presented. The new structure is of interest because it can be realized with CMOS image sensor technology. Its application in a complex field such as that of THz receivers requires the availability of an analytical model that is reliable and able to highlight the dependence on the parameters of the physical structure. The model is based on the hydrodynamic semiconductor equations, solved in the small-signal approximation. The model depicts the mechanisms of THz modulation of the charge in the depleted regions of the double-barrier device and explains the self-mixing process, the frequency dependence, and the detection capability of the structure. The model thus substantially improves on the analytical models of THz rectification available in the literature, which are mainly based on lumped equivalent circuits.

  9. Convection Enhances Mixing in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Sohail, Taimoor; Gayen, Bishakhdatta; Hogg, Andrew McC.

    2018-05-01

    Mixing efficiency is a measure of the energy lost to mixing compared to that lost to viscous dissipation. In a turbulent stratified fluid the mixing efficiency is often assumed constant at η = 0.2, whereas with convection it takes values closer to 1. The value of mixing efficiency when both stratified shear flow and buoyancy-driven convection are active remains uncertain. We use a series of numerical simulations to determine the mixing efficiency in an idealized Southern Ocean model. The model is energetically closed and fully resolves convection and turbulence such that mixing efficiency can be diagnosed. Mixing efficiency decreases with increasing wind stress but is enhanced by turbulent convection and by large thermal gradients in regions with a strongly stratified thermocline. Using scaling theory and the model results, we predict an overall mixing efficiency for the Southern Ocean that is significantly greater than 0.2 while emphasizing that mixing efficiency is not constant.
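    The energy-flux definition above can be made concrete with a short sketch. The function and values below are a generic buoyancy-flux/dissipation formulation with illustrative numbers, not quantities taken from the study:

```python
def mixing_efficiency(buoyancy_flux, dissipation):
    """eta = B / (B + epsilon): the fraction of turbulent kinetic energy
    going into irreversible mixing (buoyancy flux B) rather than viscous
    dissipation epsilon.  Values are illustrative placeholders."""
    return buoyancy_flux / (buoyancy_flux + dissipation)

# Stratified shear turbulence: the canonical eta ~ 0.2 (B = epsilon / 4)
print(mixing_efficiency(1.0, 4.0))  # 0.2
# Convection-dominated regime: eta approaches 1
print(mixing_efficiency(9.0, 1.0))  # 0.9
```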

  10. Mixing of multiple jets with a confined subsonic crossflow. Summary of NASA-supported experiments and modeling

    NASA Technical Reports Server (NTRS)

    Holdeman, James D.

    1991-01-01

    Experimental and computational results on the mixing of single, double, and opposed rows of jets with an isothermal or variable temperature mainstream in a confined subsonic crossflow are summarized. The studies were performed to investigate flow and geometric variations typical of the complex 3-D flowfield in the dilution zone of combustion chambers in gas turbine engines. The principal observations from the experiments were that the momentum-flux ratio was the most significant flow variable, and that temperature distributions were similar (independent of orifice diameter) when the orifice spacing and the square root of the momentum-flux ratio were inversely proportional. The experiments and empirical model for the mixing of a single row of jets from round holes were extended to include several variations typical of gas turbine combustors. Combinations of flow and geometry that gave optimum mixing were identified from the experimental results. Based on results of calculations made with a 3-D numerical model, the empirical model was further extended to model the effects of curvature and convergence. The principal conclusions from this study were that the orifice spacing and momentum-flux relationships were the same as observed previously in a straight duct, but the jet structure was significantly different for jets injected from the inner wall of a turn than for those injected from the outer wall. Also, curvature in the axial direction caused a drift of the jet trajectories toward the inner wall, but the mixing in a turning and converging channel did not seem to be inhibited by the convergence, independent of whether the convergence was radial or circumferential. The calculated jet penetration and mixing in an annulus were similar to those in a rectangular duct when the orifice spacing was specified at the radius dividing the annulus into equal areas.
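    The two principal experimental observations — the dominance of the momentum-flux ratio J and the inverse proportionality between orifice spacing and the square root of J — can be sketched as follows. The constant C ≈ 2.5 for single-side injection is the commonly quoted design value, assumed here rather than taken from this summary; all numbers are illustrative:

```python
import math

def momentum_flux_ratio(rho_j, u_j, rho_m, u_m):
    """J = (rho_j * u_j**2) / (rho_m * u_m**2), the flow variable the
    experiments found most significant."""
    return (rho_j * u_j ** 2) / (rho_m * u_m ** 2)

def optimum_spacing(j_ratio, duct_height, c=2.5):
    """Orifice spacing S from (S/H) * sqrt(J) = C.  C ~ 2.5 is the
    commonly quoted value for single-side injection (an assumption
    here, not a number stated in this summary)."""
    return c * duct_height / math.sqrt(j_ratio)

j = momentum_flux_ratio(rho_j=1.2, u_j=100.0, rho_m=0.4, u_m=50.0)
print(j)  # 12.0
print(round(optimum_spacing(j, duct_height=0.1), 4))  # 0.0722
```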

  11. Experimental and CFD modeling of fluid mixing in sinusoidal microchannels with different phase shift between side walls

    NASA Astrophysics Data System (ADS)

    Khosravi Parsa, Mohsen; Hormozi, Faramarz

    2014-06-01

    In the present work, a passive micromixer with sinusoidal side walls, a convergent-divergent cross section and a T-shaped entrance was fabricated and modeled. The main aim of this modeling was to study the Dean and separation vortices created inside sinusoidal microchannels with a convergent-divergent cross section. The microchannels were fabricated by CO2 laser micromachining, and the fluid mixing pattern was observed using a digital microscope imaging system. Computational fluid dynamics with the finite element method was also applied to solve the Navier-Stokes equations and the diffusion-convection equation at inlet Reynolds numbers of 0.2-75. Numerically obtained results were in reasonable agreement with experimental data. According to previous studies, the phase shift and wavelength of the side walls are important parameters in designing sinusoidal microchannels; increasing the phase shift between the side walls makes the cross section convergent-divergent. Results also show that at inlet Reynolds numbers below 20 molecular diffusion is the dominant mixing factor and the mixing index is nearly identical in all designs. For higher inlet Reynolds numbers (>20), secondary flow is the main factor in mixing. Notably, the mixing index depends strongly on the phase shift (ϕ) and wavelength of the side walls (λ), such that the best mixing is observed at ϕ = 3π/4 and a wavelength-to-amplitude ratio of 3.3. Likewise, the maximum pressure drop occurs at ϕ = π. Therefore, a sinusoidal microchannel with a phase shift between π/2 and 3π/4 is the best choice for biological and chemical analysis, with a reported mixing index above 90% and a pressure drop below 12 kPa.
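    A common way to quantify the mixing index reported above is one minus the ratio of the concentration standard deviation over a cross-section to its fully segregated value. The sketch below assumes that definition (the paper's exact normalization may differ):

```python
import statistics

def mixing_index(concentrations, sigma_max=0.5):
    """MI = 1 - sigma / sigma_max, where sigma is the standard deviation
    of the normalized concentration over a channel cross-section and
    sigma_max = 0.5 is the fully segregated value for equal inlet flows.
    One common definition; the paper's normalization may differ."""
    sigma = statistics.pstdev(concentrations)
    return 1.0 - sigma / sigma_max

print(mixing_index([0.0, 0.0, 1.0, 1.0]))  # 0.0 -> fully segregated
print(mixing_index([0.5, 0.5, 0.5, 0.5]))  # 1.0 -> perfectly mixed
```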

  12. Cohesive and mixed sediment in the Regional Ocean Modeling System (ROMS v3.6) implemented in the Coupled Ocean-Atmosphere-Wave-Sediment Transport Modeling System (COAWST r1234)

    NASA Astrophysics Data System (ADS)

    Sherwood, Christopher R.; Aretxabaleta, Alfredo L.; Harris, Courtney K.; Rinehimer, J. Paul; Verney, Romaric; Ferré, Bénédicte

    2018-05-01

    We describe and demonstrate algorithms for treating cohesive and mixed sediment that have been added to the Regional Ocean Modeling System (ROMS version 3.6), as implemented in the Coupled Ocean-Atmosphere-Wave-Sediment Transport Modeling System (COAWST Subversion repository revision 1234). These include the following: floc dynamics (aggregation and disaggregation in the water column); changes in floc characteristics in the seabed; erosion and deposition of cohesive and mixed (combination of cohesive and non-cohesive) sediment; and biodiffusive mixing of bed sediment. These routines supplement existing non-cohesive sediment modules, thereby increasing our ability to model fine-grained and mixed-sediment environments. Additionally, we describe changes to the sediment bed layering scheme that improve the fidelity of the modeled stratigraphic record. Finally, we provide examples of these modules implemented in idealized test cases and a realistic application.

  13. Wave–turbulence interaction-induced vertical mixing and its effects in ocean and climate models

    PubMed Central

    Qiao, Fangli; Yuan, Yeli; Deng, Jia; Dai, Dejun; Song, Zhenya

    2016-01-01

    Heated from above, the oceans are stably stratified. Therefore, the performance of general ocean circulation models and of climate studies using coupled atmosphere–ocean models depends critically on the vertical mixing of energy and momentum in the water column. Many traditional general circulation models are based on total kinetic energy (TKE) schemes, in which the roles of waves are averaged out. Although theoretical calculations suggest that waves could greatly enhance coexisting turbulence, no field measurements of turbulence had directly validated this mechanism. To address this problem, a specially designed field experiment was conducted. The experimental results indicate that the wave–turbulence interaction-induced enhancement of the background turbulence is indeed the predominant mechanism for turbulence generation and enhancement. Based on this understanding, we propose a new parametrization of vertical mixing as an additive part to the traditional TKE approach. This new result reconfirms the past theoretical model that had been tested and validated in numerical model experiments and field observations. It firmly establishes the critical role of wave–turbulence interaction effects in both general ocean circulation models and coupled atmosphere–ocean models, which could greatly improve the understanding of the distributions of sea surface temperature and water column properties, and hence model-based climate forecasting capability. PMID:26953182

  14. Testing of a Shrouded, Short Mixing Stack Gas Eductor Model Using High Temperature Primary Flow.

    DTIC Science & Technology

    1982-10-01

    problem but of less significance than the heated surfaces of shipboard structure. Various types of electronic equipments and sensors carried by a combatant…here was to validate current procedures by comparison with previous data it was not considered essential to reinstall these sensors or duplicate…Table XIX: Mixing Stack Temperature Data, Model B (thermocouple axial position, uptake and mixing-stack temperatures)

  15. Population stochastic modelling (PSM)--an R package for mixed-effects models based on stochastic differential equations.

    PubMed

    Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik

    2009-06-01

    The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling although violating the hypothesis for many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
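    The extension from ODEs to SDEs adds a Wiener term to the state equation. A minimal Euler-Maruyama sketch of a one-compartment elimination model with system noise illustrates the kind of state equation involved; this is illustrative only, not PSM's API, and all parameter values are hypothetical:

```python
import math
import random

def simulate_sde(ke=0.5, sigma_w=0.05, a0=10.0, dt=0.01, n_steps=500, seed=1):
    """Euler-Maruyama for dA = -ke * A dt + sigma_w dW: one-compartment
    elimination with a Wiener system-noise term.  Illustrative of the
    state equations handled by SDE mixed-effects models; not PSM code."""
    rng = random.Random(seed)
    a, path = a0, [a0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment ~ N(0, dt)
        a += (-ke * a) * dt + sigma_w * dw
        path.append(a)
    return path

path = simulate_sde()
print(len(path), round(path[-1], 3))  # 501 samples, terminal value near 10*exp(-2.5)
```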

  16. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  17. Modeling vehicle operating speed on urban roads in Montreal: a panel mixed ordered probit fractional split model.

    PubMed

    Eluru, Naveen; Chakour, Vincent; Chamberlain, Morgan; Miranda-Moreno, Luis F

    2013-10-01

    Vehicle operating speed measured on roadways is a critical component for a host of analyses in the transportation field, including transportation safety, traffic flow modeling, roadway geometric design, vehicle emissions modeling, and road user route decisions. The current research effort contributes to the literature on examining vehicle speed on urban roads both methodologically and substantively. In terms of methodology, we formulate a new econometric model framework for examining speed profiles. The proposed model is an ordered response formulation of a fractional split model. The ordered nature of the speed variable allows us to propose an ordered variant of the fractional split model in the literature. The proposed formulation allows us to model the proportion of vehicles traveling in each speed interval for the entire segment of roadway. We extend the model to allow the influence of exogenous variables to vary across the population. Further, we develop a panel mixed version of the fractional split model to account for the influence of site-specific unobserved effects. The paper contributes substantively by estimating the proposed model using a unique dataset from Montreal consisting of weekly speed data (collected in hourly intervals) for about 50 local roads and 70 arterial roads. We estimate separate models for local roads and arterial roads. The model estimation exercise considers a whole host of variables including geometric design attributes, roadway attributes, traffic characteristics and environmental factors. The model results highlight the role of various street characteristics including number of lanes, presence of parking, presence of sidewalks, vertical grade, and bicycle routes on vehicle speed proportions. The results also highlight the presence of site-specific unobserved effects influencing the speed distribution. The parameters from the modeling exercise are validated using a hold-out sample not considered for model estimation. The results indicate
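    The core of the ordered response formulation can be sketched as follows: the proportion of vehicles in each speed interval is the probability mass between consecutive thresholds of a standard normal. The thresholds and linear predictor below are hypothetical, not estimates from the Montreal data:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_shares(xb, thresholds):
    """Proportion of vehicles in each ordered speed interval:
    P_k = Phi(tau_k - x'beta) - Phi(tau_{k-1} - x'beta),
    with tau_0 = -inf and tau_K = +inf.  The thresholds and the
    linear predictor xb are hypothetical illustrations."""
    taus = [float("-inf")] + list(thresholds) + [float("inf")]
    return [norm_cdf(taus[k + 1] - xb) - norm_cdf(taus[k] - xb)
            for k in range(len(taus) - 1)]

# Four speed intervals defined by three thresholds; shares sum to 1
shares = ordered_probit_shares(xb=0.3, thresholds=[-1.0, 0.0, 1.0])
print([round(s, 3) for s in shares])
```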

  18. The PX-EM algorithm for fast stable fitting of Henderson's mixed model

    PubMed Central

    Foulley, Jean-Louis; Van Dyk, David A

    2000-01-01

    This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression models. PMID:14736399
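    The basic EM baseline that PX-EM accelerates can be sketched for the simplest Henderson-type model, a balanced random-intercept model. This is a generic maximum-likelihood EM for variance components, not the authors' REML implementation, and the simulated data are illustrative:

```python
import random
import statistics

def em_random_intercept(groups, n_iter=200):
    """Basic EM for the balanced random-intercept model
    y_ij = mu + u_i + e_ij, with u_i ~ N(0, s2u) and e_ij ~ N(0, s2e).
    A generic variance-component EM (ML, not REML), shown as the kind
    of baseline that PX-EM accelerates."""
    n = len(groups[0])  # observations per group (balanced design assumed)
    mu = statistics.mean(y for g in groups for y in g)
    s2u, s2e = 1.0, 1.0
    for _ in range(n_iter):
        # E-step: posterior variance and mean of each random intercept u_i
        v = s2u * s2e / (n * s2u + s2e)
        b = [s2u / (s2u + s2e / n) * (statistics.mean(g) - mu) for g in groups]
        # M-step: update the fixed mean and the two variance components
        mu = statistics.mean(y - b[i] for i, g in enumerate(groups) for y in g)
        s2u = statistics.mean(bi * bi for bi in b) + v
        s2e = statistics.mean((y - mu - b[i]) ** 2
                              for i, g in enumerate(groups) for y in g) + v
    return mu, s2u, s2e

# Simulated balanced data: 50 groups x 20 observations, true mu=5, s2u=4, s2e=1
rng = random.Random(0)
us = [rng.gauss(0.0, 2.0) for _ in range(50)]
groups = [[5.0 + u + rng.gauss(0.0, 1.0) for _ in range(20)] for u in us]
mu_hat, s2u_hat, s2e_hat = em_random_intercept(groups)
print(round(mu_hat, 2), round(s2u_hat, 2), round(s2e_hat, 2))
```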

  19. Modeling of mixing processes: Fluids, particulates, and powders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ottino, J.M.; Hansen, S.

    Work under this grant involves two main areas: (1) Mixing of Viscous Liquids, this first area comprising aggregation, fragmentation and dispersion, and (2) Mixing of Powders. In order to produce a coherent self-contained picture, we report primarily on results obtained under (1), and within this area, mostly on computational studies of particle aggregation in regular and chaotic flows. Numerical simulations show that the average cluster size of compact clusters grows algebraically, while the average cluster size of fractal clusters grows exponentially; companion mathematical arguments are used to describe the initial growth of average cluster size and polydispersity. It is found that when the system is well mixed and the capture radius independent of mass, the polydispersity is constant for long times and the cluster size distribution is self-similar. Furthermore, our simulations indicate that the fractal nature of the clusters is dependent upon the mixing.

  20. Sensitivity of single column model simulations of Arctic springtime clouds to different cloud cover and mixed phase cloud parameterizations

    NASA Astrophysics Data System (ADS)

    Zhang, Junhua; Lohmann, Ulrike

    2003-08-01

    The single column model of the Canadian Centre for Climate Modelling and Analysis (CCCma) climate model is used to simulate Arctic spring cloud properties observed during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. The model is driven by European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data constrained by rawinsonde observations. Five cloud parameterizations, including three statistical and two explicit schemes, are compared, and the sensitivity to mixed phase cloud parameterizations is studied. Using the original mixed phase cloud parameterization of the model, the statistical cloud schemes produce more cloud cover, cloud water, and precipitation than the explicit schemes and in general agree better with observations. The mixed phase cloud parameterization from ECMWF decreases the initial saturation specific humidity threshold of cloud formation. This improves the simulated cloud cover in the explicit schemes and reduces the difference between the different cloud schemes. On the other hand, because the ECMWF mixed phase cloud scheme does not consider the Bergeron-Findeisen process, fewer ice crystals are formed. This leads to a higher liquid water path and less precipitation than was observed.

  1. Updated Bs-mixing constraints on new physics models for b →s ℓ+ℓ- anomalies

    NASA Astrophysics Data System (ADS)

    Di Luzio, Luca; Kirk, Matthew; Lenz, Alexander

    2018-05-01

    Many new physics models that explain the intriguing anomalies in the b-quark flavor sector are severely constrained by Bs mixing, for which the Standard Model prediction and experiment agreed well until recently. The most recent Flavour Lattice Averaging Group (FLAG) average of lattice results for the nonperturbative matrix elements points, however, in the direction of a small discrepancy in this observable. Using up-to-date inputs from standard sources such as PDG, FLAG and one of the two leading Cabibbo-Kobayashi-Maskawa (CKM) fitting groups to determine Δ MsSM, we find a severe reduction of the allowed parameter space of Z' and leptoquark models explaining the B anomalies. Remarkably, in the former case the upper bound on the Z' mass comes dangerously close to the energy scales already probed by the LHC. We finally identify some model-building directions to alleviate the tension with Bs mixing.

  2. Deduction of initial strategy distributions of agents in mix-game models

    NASA Astrophysics Data System (ADS)

    Gou, Chengling

    2006-11-01

    This paper reports the effort of deducing the initial strategy distributions (ISDs) of agents in mix-game models that are used to predict a real financial time series generated from a target financial market. Using mix-games to predict the Shanghai Index, we find that the time series of prediction accuracy rates is sensitive to the ISDs of agents in group 2, who play a minority game, but less sensitive to the ISDs of agents in group 1, who play a majority game. Agents in group 2 tend to cluster in the full strategy space (FSS) if the real financial time series has an obvious tendency (upward or downward); otherwise they tend to scatter in the FSS. We also find that the ISDs and the number of agents in group 1 influence the level of prediction accuracy rates. Finally, this paper gives suggestions for further research.

  3. Converting isotope ratios to diet composition - the use of mixing models - June 2010

    EPA Science Inventory

    One application of stable isotope analysis is to reconstruct diet composition based on isotopic mass balance. The isotopic value of a consumer’s tissue reflects the isotopic values of its food sources proportional to their dietary contributions. Isotopic mixing models are used ...
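    For two sources and one isotope, the mass balance reduces to a single linear equation. The sketch below solves it directly; the δ values are illustrative, not EPA data:

```python
def two_source_fractions(d_mix, d_a, d_b):
    """Solve d_mix = f * d_a + (1 - f) * d_b for the dietary fraction f
    of source A (two-source, one-isotope mass balance).  The delta
    values below are illustrative, not EPA measurements."""
    f = (d_mix - d_b) / (d_a - d_b)
    return f, 1.0 - f

# Consumer tissue at -22 per mil, food sources at -28 and -12 per mil
f_a, f_b = two_source_fractions(d_mix=-22.0, d_a=-28.0, d_b=-12.0)
print(round(f_a, 3), round(f_b, 3))  # 0.625 0.375
```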

  4. Fuzzy Mixed Assembly Line Sequencing and Scheduling Optimization Model Using Multiobjective Dynamic Fuzzy GA

    PubMed Central

    Tahriri, Farzad; Dawal, Siti Zawiah Md; Taha, Zahari

    2014-01-01

    A new multiobjective dynamic fuzzy genetic algorithm is applied to solve a fuzzy mixed-model assembly line sequencing problem in which the primary goals are to minimize the total make-span and the number of setups simultaneously. Trapezoidal fuzzy numbers are implemented for variables such as operation and travelling time in order to generate results with higher accuracy that are representative of real-case data. An improved genetic algorithm called the fuzzy adaptive genetic algorithm (FAGA) is proposed to solve this optimization model. In establishing the FAGA, five dynamic fuzzy parameter controllers are devised, in which a fuzzy expert experience controller (FEEC) is integrated with an automatic learning dynamic fuzzy controller (ALDFC) technique. The enhanced algorithm dynamically adjusts the population size, number of generations, tournament candidates, crossover rate, and mutation rate, rather than using fixed control parameters. The main idea is to improve the performance and effectiveness of existing GAs by dynamic adjustment and control of these five parameters. Verification and validation of the dynamic fuzzy GA are carried out by developing test-beds and testing on a multiobjective fuzzy mixed-model production assembly line sequencing optimization problem. The simulation results highlight that the proposed optimization algorithm is more efficient than the standard genetic algorithm on the mixed-model assembly line sequencing problem. PMID:24982962

  5. Models of Plumes: Their Flow, Their Geometric Spreading, and Their Mixing with Interplume Flow

    NASA Technical Reports Server (NTRS)

    Suess, Steven T.

    1998-01-01

    There are two types of plume flow models: (1) 1D models using ad hoc spreading functions, f(r); (2) MagnetoHydroDynamic (MHD) models. 1D models can be multifluid and time dependent, and can incorporate very general descriptions of the energetics. They confirm empirical results that plume flow is slow relative to requirements for high-speed wind. But no published 1D model incorporates the rapid local spreading at the base (fl(r)), which has an important effect on mass flux. The one published MHD model is isothermal, but confirms that if β = 8πp/|B|² < … models provide a potent method of calculating fg(r). Unambiguous plume signatures have not yet been found in the solar wind. This is probably due to strong mixing of plume and interplume flows near the Sun. We describe a physical source for strong mixing due to the observed flows being unstable to shear instabilities that lead to rapid disruption.
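    The plasma beta appearing in the MHD condition above compares gas pressure to magnetic pressure. A small sketch in Gaussian (cgs) units, with illustrative coronal-hole numbers that are not taken from the abstract:

```python
import math

def plasma_beta(pressure, b_field):
    """beta = 8 * pi * p / |B|**2 in Gaussian (cgs) units: the ratio of
    gas pressure to magnetic pressure.  Input values below are
    illustrative coronal-hole numbers, not taken from the abstract."""
    return 8.0 * math.pi * pressure / b_field ** 2

# p = n*k*T with n ~ 1e8 cm^-3, T ~ 1e6 K gives p ~ 0.0138 dyn/cm^2; B = 5 G
beta = plasma_beta(0.0138, 5.0)
print(round(beta, 4))  # 0.0139 -> a low-beta, magnetically dominated regime
```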

  6. Hawaii Ocean Mixing Experiment: Program Summary

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Chao, Benjamin F. (Technical Monitor)

    2002-01-01

    It is becoming apparent that insufficient mixing occurs in the pelagic ocean to maintain the large scale thermohaline circulation. Observed mixing rates fall a factor of ten short of classical indices such as Munk's "Abyssal Recipe." The growing suspicion is that most of the mixing in the sea occurs near topography. Exciting recent observations by Polzin et al., among others, fuel this speculation. If topographic mixing is indeed important, it must be acknowledged that its geographic distribution, both laterally and vertically, is presently unknown. The vertical distribution of mixing plays a critical role in the Stommel Arons model of the ocean interior circulation. In recent numerical studies, Samelson demonstrates the extreme sensitivity of flow in the abyssal ocean to the spatial distribution of mixing. We propose to study the topographic mixing problem through an integrated program of modeling and observation. We focus on tidally forced mixing as the global energetics of this process have received (and are receiving) considerable study. Also, the well defined frequency of the forcing and the unique geometry of tidal scattering serve to focus the experiment design. The Hawaiian Ridge is selected as a study site. Strong interaction between the barotropic tide and the Ridge is known to take place. The goals of the Hawaiian Ocean Mixing Experiment (HOME) are to quantify the rate of tidal energy loss to mixing at the Ridge and to identify the mechanisms by which energy is lost and mixing generated. We are challenged to develop a sufficiently comprehensive picture that results can be generalized from Hawaii to the global ocean. To achieve these goals, investigators from five institutions have designed HOME, a program of historic data analysis, modeling and field observation. The Analysis and Modeling efforts support the design of the field experiments. As the program progresses, a global model of the barotropic (depth independent) tide, and two models of the

  7. Physical Interpretation of Mixing Diagrams

    NASA Astrophysics Data System (ADS)

    Khain, Alexander; Pinsky, Mark; Magaritz-Ronen, L.

    2018-01-01

    The type of mixing at cloud edges is often determined by means of mixing diagrams showing the dependence of the normalized cube of the mean volume radius on the dilution level. The mixing diagrams correspond to the final equilibrium state of mixing between two air volumes. When interpreting in situ measurements, scattering diagrams are plotted in which the normalized droplet concentration is used instead of the dilution level. Utilization of such scattering diagrams for the interpretation of in situ observations faces significant difficulties and often leads to misinterpretation of the mixing process and to uncertain conclusions concerning the mixing type. In this study we analyze scattering diagrams obtained by means of a Lagrangian-Eulerian model of a stratocumulus cloud. The model consists of 2,000 interacting Lagrangian parcels which mix with their neighbors during their motion in the atmospheric boundary layer. In the diagram, each parcel is denoted by a point, and changes in the microphysical parameters of the parcel are represented by movements of the point in the scattering diagram. The method of plotting the scattering diagrams using the model is in many aspects similar to that used in in situ measurements. It is shown that a scattering diagram is a snapshot of a transient mixing process. The location of points in the scattering diagrams largely reflects the history and the origin of the air parcels, and characterizes the intensity of entrainment and different parameters of the droplet size distributions (DSDs), such as concentration, mean volume (or effective) radius, and DSD width.

  8. Mixed valent metals

    NASA Astrophysics Data System (ADS)

    Riseborough, P. S.; Lawrence, J. M.

    2016-08-01

    We review the theory of mixed-valent metals and make comparison with experiments. A single-impurity description of the mixed-valent state is discussed alongside the description of the nearly-integer valent or Kondo limit. The degeneracy N of the f-shell plays an important role in the description of the low-temperature Fermi-liquid state. In particular, for large N, there is a rapid cross-over between the mixed-valent and the Kondo limit when the number of f electrons is changed. We discuss the limitations on the application of the single-impurity description to concentrated compounds such as those caused by the saturation of the Kondo effect and those due to the presence of magnetic interactions between the impurities. This discussion is followed by a description of a periodic lattice of mixed-valent ions, including the role of the degeneracy N. The article concludes with a comparison of theory and experiment. Topics covered include the single-impurity Anderson model, Luttinger's theorem, the Friedel sum rule, the Schrieffer-Wolff transformation, the single-impurity Kondo model, Kondo screening, the Wilson ratio, local Fermi-liquids, Fermi-liquid sum rules, the Nozières exhaustion principle, Doniach's diagram, the Anderson lattice model, the Slave-Boson method, etc.

  9. Mixed valent metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riseborough, P. S.; Lawrence, Jon M.

    Here, we review the theory of mixed-valent metals and make comparison with experiments. A single-impurity description of the mixed-valent state is discussed alongside the description of the nearly-integer valent or Kondo limit. The degeneracy N of the f-shell plays an important role in the description of the low-temperature Fermi-liquid state. In particular, for large N, there is a rapid cross-over between the mixed-valent and the Kondo limit when the number of f electrons is changed. We discuss the limitations on the application of the single-impurity description to concentrated compounds such as those caused by the saturation of the Kondo effect and those due to the presence of magnetic interactions between the impurities. This discussion is followed by a description of a periodic lattice of mixed-valent ions, including the role of the degeneracy N. The article concludes with a comparison of theory and experiment. Topics covered include the single-impurity Anderson model, Luttinger's theorem, the Friedel sum rule, the Schrieffer–Wolff transformation, the single-impurity Kondo model, Kondo screening, the Wilson ratio, local Fermi-liquids, Fermi-liquid sum rules, the Nozieres exhaustion principle, Doniach's diagram, the Anderson lattice model, the Slave-Boson method, etc.

  10. Mixed valent metals

    DOE PAGES

    Riseborough, P. S.; Lawrence, Jon M.

    2016-07-04

    Here, we review the theory of mixed-valent metals and make comparison with experiments. A single-impurity description of the mixed-valent state is discussed alongside the description of the nearly-integer valent or Kondo limit. The degeneracy N of the f-shell plays an important role in the description of the low-temperature Fermi-liquid state. In particular, for large N, there is a rapid cross-over between the mixed-valent and the Kondo limit when the number of f electrons is changed. We discuss the limitations on the application of the single-impurity description to concentrated compounds such as those caused by the saturation of the Kondo effect and those due to the presence of magnetic interactions between the impurities. This discussion is followed by a description of a periodic lattice of mixed-valent ions, including the role of the degeneracy N. The article concludes with a comparison of theory and experiment. Topics covered include the single-impurity Anderson model, Luttinger's theorem, the Friedel sum rule, the Schrieffer–Wolff transformation, the single-impurity Kondo model, Kondo screening, the Wilson ratio, local Fermi-liquids, Fermi-liquid sum rules, the Nozieres exhaustion principle, Doniach's diagram, the Anderson lattice model, the Slave-Boson method, etc.

  11. VISUALIZATION-BASED ANALYSIS FOR A MIXED-INHIBITION BINARY PBPK MODEL: DETERMINATION OF INHIBITION MECHANISM

    EPA Science Inventory

    A physiologically-based pharmacokinetic (PBPK) model incorporating mixed enzyme inhibition was used to determine the mechanism of metabolic interactions occurring during simultaneous exposures to the organic solvents chloroform and trichloroethylene (TCE). Visualization-based se...

  12. Estimating Daily Evapotranspiration Based on A Model of Evapotranspiration Fraction (EF) for Mixed Pixels

    NASA Astrophysics Data System (ADS)

    Xin, X.; Li, F.; Peng, Z.; Qinhuo, L.

    2017-12-01

    Land surface heterogeneities significantly affect the reliability and accuracy of remotely sensed evapotranspiration (ET), and the problem worsens at lower resolutions. At the same time, temporal-scale extrapolation of the instantaneous latent heat flux (LE) at satellite overpass time to daily ET is crucial for applications of such remote sensing products. The purpose of this paper is to propose a simple but efficient model for estimating daytime evapotranspiration that accounts for the heterogeneity of mixed pixels. To do so, an equation for calculating the evapotranspiration fraction (EF) of mixed pixels was derived based on two key assumptions. Assumption 1: the available energy (AE) of each sub-pixel is approximately equal to that of every other sub-pixel in the same mixed pixel, within an acceptable margin of bias, and equal to the AE of the mixed pixel. This assumption only simplifies the equation, and its uncertainties and the resulting errors in estimated ET are very small. Assumption 2: the EF of each sub-pixel equals the EF of the nearest pure pixel(s) of the same land cover type. This equation is intended to correct the spatial-scale error of mixed-pixel EF and can be used to calculate daily ET from daily AE data. The model was applied to an artificial oasis in the midstream of the Heihe River. HJ-1B satellite data were used to estimate the lumped fluxes at the scale of 300 m after resampling the 30-m resolution datasets to 300 m resolution, which was used to carry out the key step of the model. The results before and after correction were compared with each other and validated using site data from eddy-correlation systems. Results indicated that the new model improves the accuracy of daily ET estimation relative to the lumped method. Validations at 12 eddy-correlation sites for 9 days of HJ-1B overpasses showed that the R² increased to 0.82 from 0.62; the RMSE decreased to 1.60 MJ/m² from 2.47 MJ/m²; the MBE decreased from 1.92 MJ/m² to 1
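Under Assumption 1 the sub-pixels share the mixed pixel's available energy, so the mixed-pixel EF reduces to an area-weighted average of the sub-pixel EFs borrowed from the nearest pure pixels, and daily ET follows as EF times daily AE. A minimal sketch of that arithmetic (the function names, land-cover fractions, and AE value below are illustrative, not from the paper):

```python
def mixed_pixel_ef(fractions, ef_pure):
    """Area-weighted EF of a mixed pixel: under Assumption 1 every
    sub-pixel shares the mixed pixel's available energy, so the EFs
    of the nearest pure pixels combine linearly by area fraction."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "area fractions must sum to 1"
    return sum(f * e for f, e in zip(fractions, ef_pure))

def daily_et(ef, daily_ae):
    """Daily ET from the corrected EF and daily available energy (MJ/m^2)."""
    return ef * daily_ae

# e.g. a 300 m pixel that is 60% cropland (EF 0.8) and 40% bare soil (EF 0.3)
ef = mixed_pixel_ef([0.6, 0.4], [0.8, 0.3])
et = daily_et(ef, 12.0)  # with an assumed daily AE of 12 MJ/m^2
```

The corrected EF replaces the lumped-pixel EF before multiplying by daily AE, which is the spatial-scale correction the paper evaluates.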

  13. Effects of Model Resolution and Ocean Mixing on Forced Ice-Ocean Physical and Biogeochemical Simulations Using Global and Regional System Models

    NASA Astrophysics Data System (ADS)

    Jin, Meibing; Deal, Clara; Maslowski, Wieslaw; Matrai, Patricia; Roberts, Andrew; Osinski, Robert; Lee, Younjoo J.; Frants, Marina; Elliott, Scott; Jeffery, Nicole; Hunke, Elizabeth; Wang, Shanlin

    2018-01-01

    The current coarse-resolution global Community Earth System Model (CESM) can reproduce major and large-scale patterns but is still missing some key biogeochemical features in the Arctic Ocean, e.g., low surface nutrients in the Canada Basin. We incorporated the CESM Version 1 ocean biogeochemical code into the Regional Arctic System Model (RASM) and coupled it with a sea-ice algal module to investigate model limitations. Four ice-ocean hindcast cases are compared with various observations: two on a global 1° (40-60 km in the Arctic) grid, G1deg and G1deg-OLD, with and without new sea-ice processes incorporated; and two on RASM's 1/12° (~9 km) grid, R9km and R9km-NB, with and without a subgrid-scale brine rejection parameterization that improves ocean vertical mixing under sea ice. Higher resolution and new sea-ice processes contributed to lower model errors in sea-ice extent, ice thickness, and ice algae. On the Bering Sea shelf, only higher resolution contributed to lower model errors in salinity, nitrate (NO3), and chlorophyll-a (Chl-a). In the Arctic Basin, model errors in mixed layer depth (MLD) were reduced by 36% by the brine rejection parameterization, 20% by new sea-ice processes, and 6% by higher resolution. The NO3 concentration biases were caused by both the MLD bias and coarse resolution, because of excessive horizontal mixing of high NO3 from the Chukchi Sea into the Canada Basin in coarse-resolution models. R9km showed improvements over G1deg on NO3, but not on Chl-a, likely due to light limitation under snow and ice cover in the Arctic Basin.

  14. Marketing for a Web-Based Master's Degree Program in Light of Marketing Mix Model

    ERIC Educational Resources Information Center

    Pan, Cheng-Chang

    2012-01-01

    The marketing mix model was applied, with a focus on Web media, to re-strategize a Web-based Master's program at a southern state university in the U.S. The program's existing marketing strategy was examined using the four components of the model: product, price, place, and promotion, in hopes of repackaging the program (product) for prospective students…

  15. Use of a mixing model to investigate groundwater-surface water mixing and nitrogen biogeochemistry in the bed of a groundwater-fed river

    NASA Astrophysics Data System (ADS)

    Lansdown, Katrina; Heppell, Kate; Ullah, Sami; Heathwaite, A. Louise; Trimmer, Mark; Binley, Andrew; Heaton, Tim; Zhang, Hao

    2010-05-01

    The dynamics of groundwater and surface water mixing and associated nitrogen transformations in the hyporheic zone have been investigated within a gaining reach of a groundwater-fed river (River Leith, Cumbria, UK). The regional aquifer consists of Permo-Triassic sandstone, which is overlain by varying depths of glaciofluvial sediments (~15 to 50 cm) that form the river bed. The reach investigated (~250 m long) consists of a series of riffle and pool sequences (Käser et al. 2009), with other geomorphic features such as vegetated islands and marginal bars also present. A network of 17 piezometers, each with six depth-distributed pore water samplers based on the design of Rivett et al. (2008), was installed in the river bed in June 2009. An additional 18 piezometers with a single pore water sampler were installed in the riparian zone along the study reach. Water samples were collected from the pore water samplers on three occasions during summer 2009, a period of low flow. The zone of groundwater-surface water mixing within the river bed sediments was inferred from depth profiles (0 to 100 cm) of conservative chemical species and isotopes of water in the collected samples. Sediment cores collected during piezometer installation also enabled characterisation of grain size within the hyporheic zone. A multi-component mixing model was developed to quantify the relative contributions of different water sources (surface water, groundwater and bank exfiltration) to the hyporheic zone. Depth profiles of 'predicted' nitrate concentration were constructed using the relative contribution of each water source to the hyporheic zone and the nitrate concentration of the end members. This approach assumes that the mixing of different sources of water is the only factor controlling the nitrate concentration of pore water in the river bed sediments. Comparison of predicted nitrate concentrations (which assume only mixing of waters with different nitrate concentrations) with actual
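A multi-component (end-member) mixing model of this kind can be written as a small linear system: two conservative tracers plus the constraint that the fractions sum to one determine three source fractions, from which a mixing-only nitrate prediction follows. A sketch under assumed end-member values (the tracer choices and all numbers below are hypothetical, not the paper's):

```python
import numpy as np

# Hypothetical end-member signatures: two conservative tracers
# (Cl-, delta-18O) and nitrate for each water source.
endmembers = {
    "surface": {"cl": 10.0, "d18o": -7.0, "no3": 4.0},
    "ground":  {"cl": 30.0, "d18o": -9.0, "no3": 1.0},
    "bank":    {"cl": 20.0, "d18o": -8.5, "no3": 2.0},
}

def mixing_fractions(sample_cl, sample_d18o):
    """Solve for the source fractions from the two conservative
    tracers plus the mass-balance constraint sum(f) = 1."""
    names = list(endmembers)
    A = np.array([[endmembers[n]["cl"] for n in names],
                  [endmembers[n]["d18o"] for n in names],
                  [1.0, 1.0, 1.0]])
    b = np.array([sample_cl, sample_d18o, 1.0])
    return dict(zip(names, np.linalg.solve(A, b)))

def predicted_no3(fractions):
    """Mixing-only nitrate prediction; deviation of the measured
    value from this indicates biogeochemical transformation."""
    return sum(f * endmembers[n]["no3"] for n, f in fractions.items())
```

A pore-water sample whose tracers match an end member exactly should be attributed entirely to that source, which gives a quick sanity check on the solver.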

  16. Production of a sterile species via active-sterile mixing: An exactly solvable model

    NASA Astrophysics Data System (ADS)

    Boyanovsky, D.

    2007-11-01

    The production of a sterile species via active-sterile mixing in a thermal medium is studied in an exactly solvable model. The exact time evolution of the sterile distribution function is determined by the dispersion relations and damping rates Γ_1,2 for the quasiparticle modes. These depend on γ̃ = Γ_aa/(2ΔE), with Γ_aa the interaction rate of the active species in the absence of mixing and ΔE the oscillation frequency in the medium without damping. The limits γ̃ ≪ 1 and γ̃ ≫ 1 describe weak and strong damping, respectively. For γ̃ ≪ 1, Γ_1 = Γ_aa cos²θ_m and Γ_2 = Γ_aa sin²θ_m, where θ_m is the mixing angle in the medium, and the sterile distribution function does not obey a simple rate equation. For γ̃ ≫ 1, Γ_1 = Γ_aa, and Γ_2 = Γ_aa sin²(2θ_m)/(4γ̃²) is the sterile production rate. In this regime sterile production is suppressed and the oscillation frequency vanishes at a Mikheyev-Smirnov-Wolfenstein (MSW) resonance, with a breakdown of adiabaticity. These are consequences of quantum Zeno suppression. For active neutrinos with standard model interactions the strong damping limit is only available near an MSW resonance if sin²θ ≪ α_w, with θ the vacuum mixing angle. The full set of quantum kinetic equations for sterile production for arbitrary γ̃ is obtained from the quantum master equation. Cosmological resonant sterile neutrino production is quantum Zeno suppressed, relieving potential uncertainties associated with the QCD phase transition.
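The limiting formulas quoted in the abstract are straightforward to evaluate. The sketch below simply tabulates Γ_1,2 in the weak- and strong-damping regimes (the numerical inputs are arbitrary illustrative values, not from the paper):

```python
import math

def sterile_rates(gamma_aa, delta_E, theta_m):
    """Quasiparticle damping rates in the weak (gamma_tilde << 1)
    and strong (gamma_tilde >> 1) damping limits, using the
    formulas quoted in the abstract.  gamma_tilde = Gamma_aa/(2*Delta_E)
    selects the regime."""
    gt = gamma_aa / (2.0 * delta_E)
    weak = (gamma_aa * math.cos(theta_m) ** 2,        # Gamma_1
            gamma_aa * math.sin(theta_m) ** 2)        # Gamma_2
    strong = (gamma_aa,                               # Gamma_1
              gamma_aa * math.sin(2.0 * theta_m) ** 2 / (4.0 * gt ** 2))
    return gt, weak, strong
```

For a large γ̃ the strong-limit Γ_2 is suppressed by 1/γ̃², which is the quantum Zeno suppression of sterile production described above.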

  17. Software engineering the mixed model for genome-wide association studies on large samples

    USDA-ARS?s Scientific Manuscript database

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample siz...

  18. Evaluation of a hybrid kinetics/mixing-controlled combustion model for turbulent premixed and diffusion combustion using KIVA-II

    NASA Technical Reports Server (NTRS)

    Nguyen, H. Lee; Wey, Ming-Jyh

    1990-01-01

    Two-dimensional calculations were made of spark-ignited premixed-charge combustion and direct-injection stratified-charge combustion in gasoline-fueled piston engines. Results are obtained using a kinetic-controlled combustion submodel governed by a four-step global chemical reaction, or a hybrid laminar-kinetics/mixing-controlled combustion submodel that accounts for laminar kinetics and turbulent mixing effects. The numerical solutions are obtained using the KIVA-2 computer code, which uses a kinetic-controlled combustion submodel governed by a four-step global chemical reaction (i.e., it assumes that the mixing time is shorter than the chemical time). A hybrid laminar/mixing-controlled combustion submodel was implemented into KIVA-2. In this model, chemical species approach their thermodynamic equilibrium at a rate that is a combination of the turbulent-mixing time and the chemical-kinetics time. The combination is formed in such a way that the longer of the two times has more influence on the conversion rate and the energy release. An additional element of the model is that the laminar-flame kinetics strongly influence the early flame development following ignition.

  19. Evaluation of a hybrid kinetics/mixing-controlled combustion model for turbulent premixed and diffusion combustion using KIVA-2

    NASA Technical Reports Server (NTRS)

    Nguyen, H. Lee; Wey, Ming-Jyh

    1990-01-01

    Two-dimensional calculations were made of spark-ignited premixed-charge combustion and direct-injection stratified-charge combustion in gasoline-fueled piston engines. Results are obtained using a kinetic-controlled combustion submodel governed by a four-step global chemical reaction, or a hybrid laminar-kinetics/mixing-controlled combustion submodel that accounts for laminar kinetics and turbulent mixing effects. The numerical solutions are obtained using the KIVA-2 computer code, which uses a kinetic-controlled combustion submodel governed by a four-step global chemical reaction (i.e., it assumes that the mixing time is shorter than the chemical time). A hybrid laminar/mixing-controlled combustion submodel was implemented into KIVA-2. In this model, chemical species approach their thermodynamic equilibrium at a rate that is a combination of the turbulent-mixing time and the chemical-kinetics time. The combination is formed in such a way that the longer of the two times has more influence on the conversion rate and the energy release. An additional element of the model is that the laminar-flame kinetics strongly influence the early flame development following ignition.

  20. Stratified mixing by microorganisms

    NASA Astrophysics Data System (ADS)

    Wagner, Gregory; Young, William; Lauga, Eric

    2013-11-01

    Vertical mixing is of fundamental significance to the general circulation, climate, and life in the ocean. In this work we consider whether organisms swimming at low Reynolds numbers might collectively contribute substantially to vertical mixing. Scaling analysis indicates that the mixing efficiency η, or the ratio between the rate of potential energy conversion and the total work done on the fluid, should scale as η ~ (a/l)³ as a/l → 0, where a is the size of the organism and l = (νκ/N²)^(1/4) is an intrinsic length scale of a stratified fluid with kinematic viscosity ν, tracer diffusivity κ, and squared buoyancy frequency N². A regularized singularity model demonstrates this scaling, indicating that in this same limit η ~ 1.2 (a/l)³ for vertical swimming and η ~ 0.14 (a/l)³ for horizontal swimming. The model further predicts that the absolute maximum mixing efficiency of an ensemble of randomly oriented organisms is around 6% and that the greatest mixing efficiencies in the ocean (in regions of strong salt stratification) are closer to 0.1%, implying that the total contribution of microorganisms to vertical ocean mixing is negligible.
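Plugging in representative upper-ocean numbers (assumed here, not given in the abstract) shows why the predicted efficiencies are so small for micron-to-millimetre organisms:

```python
# Evaluate the vertical-swimming scaling eta ~ 1.2 (a/l)^3 with
# assumed (illustrative) upper-ocean values: kinematic viscosity
# nu ~ 1e-6 m^2/s, thermal diffusivity kappa ~ 1.4e-7 m^2/s, and
# squared buoyancy frequency N^2 ~ 1e-4 s^-2.
nu, kappa, N2 = 1e-6, 1.4e-7, 1e-4

l = (nu * kappa / N2) ** 0.25   # intrinsic stratification length scale (m)
a = 1e-4                        # a 100-micron organism (m)

eta_vertical = 1.2 * (a / l) ** 3
print(f"l = {l * 1e3:.2f} mm, eta = {eta_vertical:.2e}")
```

With these values l is a few millimetres, so a/l is of order 10⁻², and the cubic scaling drives η down to the 10⁻⁶ level, consistent with the abstract's conclusion that the collective contribution is negligible.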

  1. Modelling exhaust plume mixing in the near field of an aircraft

    NASA Astrophysics Data System (ADS)

    Garnier, F.; Brunet, S.; Jacquin, L.

    1997-11-01

    A simplified approach has been applied to analyse the mixing and entrainment of the engine exhaust through its interaction with the vortex wake of an aircraft. Our investigation is focused on the near field, extending from the exit nozzle until about 30 s after the wake is generated, in the vortex phase. This study was performed by using an integral model and a numerical simulation for two large civil aircraft: a two-engine Airbus 330 and a four-engine Boeing 747. The influence of the wing-tip vortices on the dilution ratio (defined as a tracer concentration) is shown. The mixing process is also affected by the buoyancy effect, but only after the jet regime, once trapping in the vortex core has occurred. In the early wake, the engine jet location (i.e. inboard or outboard engine jet) has an important influence on the mixing rate. The plume streamlines inside the vortices are subject to distortion and stretching, and the role of the descent of the vortices on the maximum tracer concentration is discussed. Qualitative comparison with a contrail photograph shows similar features. Finally, tracer concentrations along the inboard engine centreline of the B-747 are compared with other theoretical analyses and measured data.

  2. Physical Modelling of the Effect of Slag and Top-Blowing on Mixing in the AOD Process

    NASA Astrophysics Data System (ADS)

    Haas, Tim; Visuri, Ville-Valtteri; Kärnä, Aki; Isohookana, Erik; Sulasalmi, Petri; Eriç, Rauf Hürman; Pfeifer, Herbert; Fabritius, Timo

    The argon-oxygen decarburization (AOD) process is the most common process for refining stainless steel. High blowing rates and the resulting efficient mixing of the steel bath are characteristic of the AOD process. In this work, a 1:9-scale physical model was used to study mixing in a 150 t AOD vessel. Water, air and rapeseed oil were used to represent steel, argon and slag, respectively, while the dynamic similarity with the actual converter was maintained using the modified Froude number and the momentum number. Employing sulfuric acid as a tracer, the mixing times were determined on the basis of pH measurements according to the 97.5% criterion. The gas blowing rate and slag-steel volume ratio were varied in order to study their effect on the mixing time. The effect of top-blowing was also investigated. The results suggest that mixing time decreases as the modified Froude number of the tuyères increases and that the presence of a slag layer increases the mixing time. Furthermore, top-blowing was found to increase the mixing time both with and without the slag layer.

  3. VISUALIZATION-BASED ANALYSIS FOR A MIXED-INHIBITION BINARY PBPK MODEL: DETERMINATION OF INHIBITION MECHANISM

    EPA Science Inventory

    A physiologically-based pharmacokinetic (PBPK) model incorporating mixed enzyme inhibition was used to determine the mechanism of metabolic interactions occurring during simultaneous inhalation exposures to the organic solvents chloroform and trichloroethylene (TCE).

    V...

  4. Modeling snag dynamics in northern Arizona mixed-conifer and ponderosa pine forests

    Treesearch

    Joseph L. Ganey; Scott C. Vojta

    2007-01-01

    Snags (standing dead trees) are important components of forested habitats that contribute to ecological decay and recycling processes as well as providing habitat for many life forms. As such, snags are of special interest to land managers, but information on dynamics of snag populations is lacking. We modeled trends in snag populations in mixed-conifer and ponderosa...

  5. Mixed raster content (MRC) model for compound image compression

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming

    1998-12-01

    This paper will describe the Mixed Raster Content (MRC) method for compressing compound images containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
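The layered imaging model can be illustrated with a toy decomposition: a binary mask separates text from pictures so that each layer can go to the codec that suits it (e.g. a binary codec for the mask, a continuous-tone codec such as JPEG for the other layers). A sketch with a hypothetical fixed threshold (real MRC segmenters are far more sophisticated):

```python
import numpy as np

def mrc_decompose(img, threshold=128):
    """Toy 3-layer MRC-style decomposition of a grayscale page:
    a binary mask selects text pixels; the foreground layer keeps
    the text values, the background layer keeps the picture values,
    and each layer can then be compressed with a suitable codec."""
    mask = img < threshold            # dark pixels treated as text
    fg = np.where(mask, img, 0)       # text layer
    bg = np.where(mask, 255, img)     # picture/background layer
    return mask, fg, bg
```

Because the mask is binary and the two remaining layers are smooth where they matter, each layer compresses far better under its own codec than the composite page would under any single one, which is the rate-distortion argument the paper develops.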

  6. Investigation of Turbulent Entrainment-Mixing Processes With a New Particle-Resolved Direct Numerical Simulation Model

    DOE PAGES

    Gao, Zheng; Liu, Yangang; Li, Xiaolin; ...

    2018-02-19

    Here, a new particle-resolved three-dimensional direct numerical simulation (DNS) model is developed that combines Lagrangian droplet tracking with the Eulerian field representation of turbulence near the Kolmogorov microscale. Six numerical experiments are performed to investigate the processes of entrainment of clear air and subsequent mixing with cloudy air and their interactions with cloud microphysics. The experiments are designed to represent different combinations of three configurations of initial cloudy area and two turbulence modes (decaying and forced turbulence). Five existing measures of microphysical homogeneous mixing degree are examined, modified, and compared in terms of their ability as a unifying measure to represent the effect of various entrainment-mixing mechanisms on cloud microphysics. Also examined and compared are the conventional Damköhler number and transition scale number as dynamical measures of different mixing mechanisms. Relationships between the various microphysical measures and dynamical measures are investigated in search of a unified parameterization of entrainment-mixing processes. The results show that even with the same cloud water fraction, the thermodynamic and microphysical properties are different, especially for the decaying cases. Further analysis confirms that despite the detailed differences in cloud properties among the six simulation scenarios, the variety of turbulent entrainment-mixing mechanisms can be reasonably represented with power-law relationships between the microphysical homogeneous mixing degrees and the dynamical measures.

  7. Investigation of Turbulent Entrainment-Mixing Processes With a New Particle-Resolved Direct Numerical Simulation Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Zheng; Liu, Yangang; Li, Xiaolin

    Here, a new particle-resolved three-dimensional direct numerical simulation (DNS) model is developed that combines Lagrangian droplet tracking with the Eulerian field representation of turbulence near the Kolmogorov microscale. Six numerical experiments are performed to investigate the processes of entrainment of clear air and subsequent mixing with cloudy air and their interactions with cloud microphysics. The experiments are designed to represent different combinations of three configurations of initial cloudy area and two turbulence modes (decaying and forced turbulence). Five existing measures of microphysical homogeneous mixing degree are examined, modified, and compared in terms of their ability as a unifying measure to represent the effect of various entrainment-mixing mechanisms on cloud microphysics. Also examined and compared are the conventional Damköhler number and transition scale number as dynamical measures of different mixing mechanisms. Relationships between the various microphysical measures and dynamical measures are investigated in search of a unified parameterization of entrainment-mixing processes. The results show that even with the same cloud water fraction, the thermodynamic and microphysical properties are different, especially for the decaying cases. Further analysis confirms that despite the detailed differences in cloud properties among the six simulation scenarios, the variety of turbulent entrainment-mixing mechanisms can be reasonably represented with power-law relationships between the microphysical homogeneous mixing degrees and the dynamical measures.

  8. Structure Elucidation of Mixed-Linker Zeolitic Imidazolate Frameworks by Solid-State (1)H CRAMPS NMR Spectroscopy and Computational Modeling.

    PubMed

    Jayachandrababu, Krishna C; Verploegh, Ross J; Leisen, Johannes; Nieuwendaal, Ryan C; Sholl, David S; Nair, Sankar

    2016-06-15

    Mixed-linker zeolitic imidazolate frameworks (ZIFs) are nanoporous materials that exhibit continuous and controllable tunability of properties like effective pore size, hydrophobicity, and organophilicity. The structure of mixed-linker ZIFs has been studied on macroscopic scales using gravimetric and spectroscopic techniques. However, it has so far not been possible to obtain information on unit-cell-level linker distribution, an understanding of which is key to predicting and controlling their adsorption and diffusion properties. We demonstrate the use of (1)H combined rotation and multiple pulse spectroscopy (CRAMPS) NMR spin exchange measurements in combination with computational modeling to elucidate potential structures of mixed-linker ZIFs, particularly the ZIF 8-90 series. All of the compositions studied have structures that have linkers mixed at a unit-cell-level as opposed to separated or highly clustered phases within the same crystal. Direct experimental observations of linker mixing were accomplished by measuring the proton spin exchange behavior between functional groups on the linkers. The data were then fitted to a kinetic spin exchange model using proton positions from candidate mixed-linker ZIF structures that were generated computationally using the short-range order (SRO) parameter as a measure of the ordering, clustering, or randomization of the linkers. The present method offers the advantages of sensitivity without requiring isotope enrichment, a straightforward NMR pulse sequence, and an analysis framework that allows one to relate spin diffusion behavior to proposed atomic positions. We find that structures close to equimolar composition of the two linkers show a greater tendency for linker clustering than what would be predicted based on random models. Using computational modeling we have also shown how the window-type distribution in experimentally synthesized mixed-linker ZIF-8-90 materials varies as a function of their composition. 

  9. Mixing of Supersonic Streams

    NASA Technical Reports Server (NTRS)

    Hawk, C. W.; Landrum, D. B.; Muller, S.; Turner, M.; Parkinson, D.

    1998-01-01

    The Strutjet approach to Rocket Based Combined Cycle (RBCC) propulsion depends upon fuel-rich flows from the rocket nozzles and turbine exhaust products mixing with the ingested air for successful operation in the ramjet and scramjet modes. It is desirable to delay this mixing process in the air-augmented mode of operation present during low speed flight. A model of the Strutjet device has been built and is undergoing test to investigate the mixing of the streams as a function of distance from the Strutjet exit plane during simulated low speed flight conditions. Cold flow testing of a 1/6-scale Strutjet model is underway and nearing completion. Planar Laser Induced Fluorescence (PLIF) diagnostic methods are being employed to observe the mixing of the turbine exhaust gas with the gases from both the primary rockets and the ingested air simulating low speed, air augmented operation of the RBCC. The ratio of the pressure in the turbine exhaust duct to that in the rocket nozzle wall at the point of their intersection is the independent variable in these experiments. Tests were accomplished at values of 1.0, 1.5 and 2.0 for this parameter. Qualitative results illustrate the development of the mixing zone from the exit plane of the model to a distance of about 19 equivalent rocket nozzle exit diameters downstream. These data show the mixing to be confined to the vertical plane for all cases. The lateral expansion is more pronounced at a pressure ratio of 1.0 and suggests that mixing with the ingested flow would likely begin at a distance of 7 nozzle exit diameters downstream of the nozzle exit plane.

  10. Mixing of Supersonic Streams

    NASA Technical Reports Server (NTRS)

    Hawk, C. W.; Landrum, D. B.; Muller, S.; Turner, M.; Parkinson, D.

    1998-01-01

    The Strutjet approach to Rocket Based Combined Cycle (RBCC) propulsion depends upon fuel-rich flows from the rocket nozzles and turbine exhaust products mixing with the ingested air for successful operation in the ramjet and scramjet modes. It is desirable to delay this mixing process in the air-augmented mode of operation present during low speed flight. A model of the Strutjet device has been built and is undergoing test to investigate the mixing of the streams as a function of distance from the Strutjet exit plane during simulated low speed flight conditions. Cold flow testing of a 1/6-scale Strutjet model is underway and nearing completion. Planar Laser Induced Fluorescence (PLIF) diagnostic methods are being employed to observe the mixing of the turbine exhaust gas with the gases from both the primary rockets and the ingested air simulating low speed, air augmented operation of the RBCC. The ratio of the pressure in the turbine exhaust duct to that in the rocket nozzle wall at the point of their intersection is the independent variable in these experiments. Tests were accomplished at values of 1.0, 1.5 and 2.0 for this parameter. Qualitative results illustrate the development of the mixing zone from the exit plane of the model to a distance of about 10 rocket nozzle exit diameters downstream. These data show the mixing to be confined to the vertical plane for all cases. The lateral expansion is more pronounced at a pressure ratio of 1.0 and suggests that mixing with the ingested flow would likely begin at a distance of 7 nozzle exit diameters downstream of the nozzle exit plane.

  11. Mixed dark matter in left-right symmetric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlin, Asher; Fox, Patrick J.; Hooper, Dan

    Motivated by the recently reported diboson and dijet excesses in Run 1 data at ATLAS and CMS, we explore models of mixed dark matter in left-right symmetric theories. In this study, we calculate the relic abundance and the elastic scattering cross section with nuclei for a number of dark matter candidates that appear within the fermionic multiplets of left-right symmetric models. In contrast to the case of pure multiplets, WIMP-nucleon scattering proceeds at tree-level, and hence the projected reach of future direct detection experiments such as LUX-ZEPLIN and XENON1T will cover large regions of parameter space for TeV-scale thermal dark matter. Decays of the heavy charged W' boson to particles in the dark sector can potentially shift the right-handed gauge coupling to larger values when fixed to the rate of the Run 1 excesses, moving towards the theoretically attractive scenario g_R = g_L. Furthermore, this region of parameter space may be probed by future collider searches for new Higgs bosons or electroweak fermions.

  12. Mixed dark matter in left-right symmetric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlin, Asher; Fox, Patrick J.; Hooper, Dan

    Motivated by the recently reported diboson and dijet excesses in Run 1 data at ATLAS and CMS, we explore models of mixed dark matter in left-right symmetric theories. In this study, we calculate the relic abundance and the elastic scattering cross section with nuclei for a number of dark matter candidates that appear within the fermionic multiplets of left-right symmetric models. In contrast to the case of pure multiplets, WIMP-nucleon scattering proceeds at tree-level, and hence the projected reach of future direct detection experiments such as LUX-ZEPLIN and XENON1T will cover large regions of parameter space for TeV-scale thermal dark matter. Decays of the heavy charged W′ boson to particles in the dark sector can potentially shift the right-handed gauge coupling to larger values when fixed to the rate of the Run 1 excesses, moving towards the theoretically attractive scenario g_R = g_L. This region of parameter space may be probed by future collider searches for new Higgs bosons or electroweak fermions.

  13. Mixed dark matter in left-right symmetric models

    DOE PAGES

    Berlin, Asher; Fox, Patrick J.; Hooper, Dan; ...

    2016-06-08

    Motivated by the recently reported diboson and dijet excesses in Run 1 data at ATLAS and CMS, we explore models of mixed dark matter in left-right symmetric theories. In this study, we calculate the relic abundance and the elastic scattering cross section with nuclei for a number of dark matter candidates that appear within the fermionic multiplets of left-right symmetric models. In contrast to the case of pure multiplets, WIMP-nucleon scattering proceeds at tree-level, and hence the projected reach of future direct detection experiments such as LUX-ZEPLIN and XENON1T will cover large regions of parameter space for TeV-scale thermal dark matter. Decays of the heavy charged W' boson to particles in the dark sector can potentially shift the right-handed gauge coupling to larger values when fixed to the rate of the Run 1 excesses, moving towards the theoretically attractive scenario g_R = g_L. Furthermore, this region of parameter space may be probed by future collider searches for new Higgs bosons or electroweak fermions.

  14. A Mixed Kijima Model Using the Weibull-Based Generalized Renewal Processes

    PubMed Central

    2015-01-01

    Generalized Renewal Processes are useful for approaching the rejuvenation of dynamical systems resulting from planned or unplanned interventions. We present new perspectives on Generalized Renewal Processes in general and on the Weibull-based Generalized Renewal Processes in particular. Departing from the existing literature, we present a mixed Generalized Renewal Processes approach involving the Kijima Type I and II models, allowing one to infer the impact of distinct interventions on the performance of the system under study. The first and second theoretical moments of this model are introduced, as well as its maximum likelihood estimation and random sampling approaches. In order to illustrate the usefulness of the proposed Weibull-based Generalized Renewal Processes model, some real data sets involving improving, stable, and deteriorating systems are used. PMID:26197222
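The Kijima Type I and II virtual-age recursions that the mixed approach combines can be sketched in a few lines. The rejuvenation parameter `q` and the inter-failure times below are invented for illustration; this is not the authors' estimation code.

```python
# Kijima virtual-age recursions (illustrative sketch, made-up data).
# Type I discounts only the latest inter-failure time: V_n = V_{n-1} + q * X_n
# Type II discounts the whole accumulated age:        V_n = q * (V_{n-1} + X_n)
# q = 0 corresponds to perfect repair ("as good as new"), q = 1 to minimal repair.
def virtual_ages(times, q, kind):
    v = 0.0
    out = []
    for x in times:
        v = v + q * x if kind == 1 else q * (v + x)
        out.append(v)
    return out

gaps = [100.0, 80.0, 60.0]          # inter-failure times (made up)
print(virtual_ages(gaps, 0.5, 1))   # → [50.0, 90.0, 120.0]
print(virtual_ages(gaps, 0.5, 2))   # → [50.0, 65.0, 62.5]
```

Under Type II the virtual age can decrease between failures, which is how strongly rejuvenating interventions are captured.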

  15. A Priori Analysis of Subgrid-Scale Models for Large Eddy Simulations of Supercritical Binary-Species Mixing Layers

    NASA Technical Reports Server (NTRS)

    Okong'o, Nora; Bellan, Josette

    2005-01-01

    Models for large eddy simulation (LES) are assessed on a database obtained from direct numerical simulations (DNS) of supercritical binary-species temporal mixing layers. The analysis is performed at the DNS transitional states for heptane/nitrogen, oxygen/hydrogen and oxygen/helium mixing layers. The incorporation of simplifying assumptions that are validated on the DNS database leads to a set of LES equations that requires only models for the subgrid scale (SGS) fluxes, which arise from filtering the convective terms in the DNS equations. Constant-coefficient versions of three different models for the SGS fluxes are assessed and calibrated. The Smagorinsky SGS-flux model shows poor correlations with the SGS fluxes, while the Gradient and Similarity models have high correlations, as well as good quantitative agreement with the SGS fluxes when the calibrated coefficients are used.

  16. Financial modeling/case-mix analysis.

    PubMed

    Heck, S; Esmond, T

    1983-06-01

    The authors describe a case-mix system, developed by users, which goes beyond DRG requirements to respond to management's clinical/financial data needs for marketing, planning, budgeting, and financial analysis as well as reimbursement. Lessons learned in the development of the system and the clinical/financial base will be helpful to those currently contemplating the implementation of such a system or evaluating available software.

  17. Applications of Analytical Self-Similar Solutions of Reynolds-Averaged Models for Instability-Induced Turbulent Mixing

    NASA Astrophysics Data System (ADS)

    Hartland, Tucker; Schilling, Oleg

    2017-11-01

    Analytical self-similar solutions to several families of single- and two-scale, eddy viscosity and Reynolds stress turbulence models are presented for Rayleigh-Taylor, Richtmyer-Meshkov, and Kelvin-Helmholtz instability-induced turbulent mixing. The use of algebraic relationships between model coefficients and physical observables (e.g., experimental growth rates) following from the self-similar solutions to calibrate a member of a given family of turbulence models is shown. It is demonstrated numerically that the algebraic relations accurately predict the value and variation of physical outputs of a Reynolds-averaged simulation in flow regimes that are consistent with the simplifying assumptions used to derive the solutions. The use of experimental and numerical simulation data on Reynolds stress anisotropy ratios to calibrate a Reynolds stress model is briefly illustrated. The implications of the analytical solutions for future Reynolds-averaged modeling of hydrodynamic instability-induced mixing are briefly discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  18. Enhanced index tracking modeling in portfolio optimization with mixed-integer programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of portfolio management in stock market investment. It aims to construct an optimal portfolio that generates excess return over the stock market index without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using a mixed-integer programming model that adopts a regression approach in order to generate a higher portfolio mean return than the stock market index return. In this study, the data consist of 24 component stocks of the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, from January 2010 until December 2012. The results show that the optimal portfolio of the mixed-integer programming model is able to generate a higher mean return than the FTSE Bursa Malaysia Kuala Lumpur Composite Index while selecting only 30% of the total stock market index components.
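As a rough, hypothetical sketch of the cardinality-constrained idea (not the authors' model, which uses a true mixed-integer formulation on real market data), one can enumerate small stock subsets and fit least-squares tracking weights against a synthetic index. Brute-force enumeration only works for tiny universes; a real MIP would delegate the subset choice to a solver.

```python
# Cardinality-constrained index tracking by brute force (illustrative only).
# All returns below are synthetic; a production model would use a MIP solver.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
T, n, k = 60, 6, 2                        # periods, stocks in index, portfolio size cap
R = rng.normal(0.01, 0.05, size=(T, n))   # synthetic stock returns
index = R.mean(axis=1)                    # equal-weight "market index"

best = None
for subset in combinations(range(n), k):  # enumerate all k-stock portfolios
    X = R[:, subset]
    w, *_ = np.linalg.lstsq(X, index, rcond=None)  # regression tracking weights
    err = float(np.sum((X @ w - index) ** 2))      # squared tracking error
    if best is None or err < best[0]:
        best = (err, subset, w)

err, subset, w = best
print(subset, round(err, 6))
```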

  19. Scaling laws and reduced-order models for mixing and reactive-transport in heterogeneous anisotropic porous media

    NASA Astrophysics Data System (ADS)

    Mudunuru, M. K.; Karra, S.; Nakshatrala, K. B.

    2016-12-01

    Fundamental to the enhancement and control of the macroscopic spreading, mixing, and dilution of solute plumes in porous media are the topology of the flow field and the heterogeneity and anisotropy contrast of the underlying porous media. Traditionally, the literature has focused mainly on the shearing effects of the flow field (i.e., flow with zero helical density, meaning that flow is always perpendicular to the vorticity vector) on scalar mixing [2]. However, the combined effect of the anisotropy of the porous media and the helical structure (or chaotic nature) of the flow field on species reactive-transport and mixing has rarely been studied. Recently, it has been shown experimentally that there is irrefutable evidence that chaotic advection and helical flows are inherent in porous media flows [1,2]. In this poster presentation, we present a non-intrusive, physics-based model-order reduction framework to quantify the effects of species mixing in terms of reduced-order models (ROMs) and scaling laws. The ROM framework is constructed based on recent advancements in non-negative formulations for reactive-transport in heterogeneous anisotropic porous media [3] and non-intrusive ROM methods [4]. The objective is to generate computationally efficient and accurate ROMs for species mixing across different values of the input data and reactive-transport model parameters. This is achieved by using multiple ROMs, which is a way to determine the robustness of the proposed framework. Sensitivity analysis is performed to identify the important parameters. Representative numerical examples from reactive-transport are presented to illustrate the ability of the proposed ROMs to accurately describe the mixing process in porous media. [1] Lester, Metcalfe, and Trefry, "Is chaotic advection inherent to porous media flow?," PRL, 2013. [2] Ye, Chiogna, Cirpka, Grathwohl, and Rolle, "Experimental evidence of helical flow in porous media," PRL, 2015. [3] Mudunuru, and Nakshatrala, "On

  20. Research on mixed network architecture collaborative application model

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, rapid development of geographical business, and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (such as Client/Server or Browser/Server models) does not support this well. Collaborative application is one good resolution. Collaborative applications have four main problems to resolve: consistency and co-edit conflict, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward, based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, initiative, and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation, and they bring new methods for cooperation and for access to spatial data. The multi-level cache holds a part of the full data set; it reduces the network load and improves the access and handling of spatial data, especially when editing the spatial data. With agent technology, we make full use of agents' intelligent characteristics for managing the cache and cooperative editing, which brings a new method for distributed cooperation and improves efficiency.

  1. Mapping nighttime PM2.5 from VIIRS DNB using a linear mixed-effect model

    NASA Astrophysics Data System (ADS)

    Fu, D.; Xia, X.; Duan, M.; Zhang, X.; Li, X.; Wang, J.; Liu, J.

    2018-04-01

    Estimation of particulate matter with aerodynamic diameter less than 2.5 μm (PM2.5) from daytime satellite aerosol products is widely reported in the literature; however, remote sensing of nighttime surface PM2.5 from space is very limited. PM2.5 shows a distinct diurnal cycle, and PM2.5 concentration at 1:00 local standard time (LST) has a linear correlation coefficient (R) of 0.80 with daily-mean PM2.5. Therefore, estimation of nighttime PM2.5 is required toward an improved understanding of the temporal variation of PM2.5 and its effects on air quality. Using data from the Day/Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) and hourly PM2.5 data at 35 stations in Beijing, a mixed-effect model is developed here to estimate nighttime PM2.5 from nighttime light radiance measurements, based on the assumption that the DNB-PM2.5 relationship is constant spatially but varies temporally. Cross-validation showed that the model developed using all stations predicts daily PM2.5 with mean determination coefficients (R2) of 0.87 ± 0.12, 0.83 ± 0.10, 0.87 ± 0.09, and 0.83 ± 0.10 in spring, summer, autumn, and winter, respectively. Further analysis showed that the best model performance was achieved at urban stations, with an average cross-validation R2 of 0.92. At rural stations, the DNB light signal is weak and was likely smeared by lunar illuminance, which resulted in relatively poor estimation of PM2.5. The fixed and random parameters of the mixed-effect model at urban stations differed from those at suburban stations, which indicates that the assumption of the mixed-effect model should be carefully evaluated when used at a regional scale.
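The core assumption (a spatially constant but temporally varying DNB-PM2.5 slope) can be illustrated with a toy day-varying regression: fit a slope per day, then split it into a shared (fixed) part and day-level deviations. This is my own simplification with synthetic numbers, not the authors' mixed-effect fit.

```python
# Toy day-varying DNB-PM2.5 regression (synthetic data, illustrative only).
import numpy as np

rng = np.random.default_rng(1)
days, stations = 30, 35
day_slope = 2.0 + rng.normal(0, 0.3, size=days)   # true slope varies by day
dnb = rng.uniform(1, 10, size=(days, stations))   # nighttime light radiance
pm25 = day_slope[:, None] * dnb + rng.normal(0, 1, size=(days, stations))

# Per-day OLS slope estimates, then a fixed/random decomposition of the slope
est = np.array([np.polyfit(dnb[d], pm25[d], 1)[0] for d in range(days)])
fixed_effect = est.mean()             # shared (fixed) part of the slope
random_effects = est - fixed_effect   # day-level deviations around it
print(round(fixed_effect, 2))
```

A genuine linear mixed model would shrink the day-level deviations toward zero rather than take raw per-day estimates, but the decomposition above captures the structure of the assumption.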

  2. A Note on Recurring Misconceptions When Fitting Nonlinear Mixed Models.

    PubMed

    Harring, Jeffrey R; Blozis, Shelley A

    2016-01-01

    Nonlinear mixed-effects (NLME) models are used when analyzing continuous repeated measures data taken on each of a number of individuals where the focus is on characteristics of complex, nonlinear individual change. Challenges with fitting NLME models and interpreting analytic results have been well documented in the statistical literature. However, parameter estimates as well as fitted functions from NLME analyses in recent articles have been misinterpreted, suggesting the need for clarification of these issues before these misconceptions become fact. These misconceptions arise from the choice of popular estimation algorithms, namely, the first-order linearization method (FO) and Gaussian-Hermite quadrature (GHQ) methods, and how these choices necessarily lead to population-average (PA) or subject-specific (SS) interpretations of model parameters, respectively. These estimation approaches also affect the fitted function for the typical individual, the lack-of-fit of individuals' predicted trajectories, and vice versa.
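The population-average versus subject-specific distinction at the heart of these misconceptions comes down to a simple fact: for a nonlinear link, the mean of the function is not the function of the mean. A small numeric illustration (mine, not from the article), using a logistic curve over Gaussian random effects:

```python
# PA vs SS for a nonlinear model: E[g(b)] != g(E[b]) when g is nonlinear.
import math
import random

random.seed(0)

def logistic(x):
    return 1 / (1 + math.exp(-x))

# Subject-level random effects, mean 1.0, SD 2.0
b = [random.gauss(1.0, 2.0) for _ in range(100_000)]

pa = sum(logistic(bi) for bi in b) / len(b)   # population-average response
ss = logistic(sum(b) / len(b))                # curve of the "typical" subject
print(round(pa, 3), round(ss, 3))
```

The population-average curve is attenuated toward 0.5 relative to the subject-specific curve, which is exactly why FO-based (PA) and quadrature-based (SS) NLME estimates should not be interpreted interchangeably.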

  3. A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.

    2015-01-01

    Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…

  4. Functional Nonlinear Mixed Effects Models For Longitudinal Image Data

    PubMed Central

    Luo, Xinchao; Zhu, Lixing; Kong, Linglong; Zhu, Hongtu

    2015-01-01

    Motivated by studying large-scale longitudinal image data, we propose a novel functional nonlinear mixed effects modeling (FNMEM) framework to model the nonlinear spatial-temporal growth patterns of brain structure and function and their association with covariates of interest (e.g., time or diagnostic status). Our FNMEM explicitly quantifies a random nonlinear association map of individual trajectories. We develop an efficient estimation method to estimate the nonlinear growth function and the covariance operator of the spatial-temporal process. We propose a global test and a simultaneous confidence band for some specific growth patterns. We conduct Monte Carlo simulation to examine the finite-sample performance of the proposed procedures. We apply FNMEM to investigate the spatial-temporal dynamics of white-matter fiber skeletons in a national database for autism research. Our FNMEM may provide a valuable tool for charting the developmental trajectories of various neuropsychiatric and neurodegenerative disorders. PMID:26213453

  5. Linear mixing model applied to coarse resolution satellite data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of applying unmixing techniques to coarse resolution data for global studies.
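A minimal sketch of the constrained least squares step: solve for endmember fractions that minimize the spectral residual subject to a sum-to-one constraint, via the KKT linear system. The endmember spectra and the pixel below are invented; in practice this would run per pixel over the three AVHRR channels named above.

```python
# Sum-to-one constrained least squares spectral unmixing (illustrative sketch).
import numpy as np

def unmix(E, pixel):
    """Fractions f minimizing ||E f - pixel||^2 subject to sum(f) = 1.

    Solves the KKT system of the equality-constrained least squares problem.
    E has one column per endmember spectrum, one row per band.
    """
    n = E.shape[1]
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = 2 * E.T @ E
    K[:n, n] = 1.0          # constraint gradient (Lagrange multiplier column)
    K[n, :n] = 1.0          # sum-to-one constraint row
    rhs = np.concatenate([2 * E.T @ pixel, [1.0]])
    return np.linalg.solve(K, rhs)[:n]

# Three made-up endmember spectra (columns) over three bands (rows)
E = np.array([[0.05, 0.30, 0.02],
              [0.35, 0.35, 0.03],
              [0.30, 0.25, 0.02]])
pixel = E @ np.array([0.5, 0.3, 0.2])   # noiseless mixed pixel
f = unmix(E, pixel)
print(np.round(f, 3))                    # → [0.5 0.3 0.2]
```

With noiseless data the true fractions are recovered exactly; real AVHRR pixels would also warrant non-negativity constraints on `f`.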

  6. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
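To make the shrinkage idea concrete, here is a bare-bones illustration of my own (not the authors' Bayesian implementation): one Haar decomposition level, soft-thresholding of the detail coefficients, and inversion, applied to a synthetic piecewise-constant lag function.

```python
# One-level Haar wavelet shrinkage (illustrative sketch, synthetic signal).
import random

def haar_step(x):
    """One Haar level: pairwise averages (approximation) and differences (detail)."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return a, d

def inv_haar_step(a, d):
    """Exact inverse of haar_step."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def soft(v, t):
    """Soft-threshold: shrink each coefficient toward zero by t."""
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1) for x in v]

random.seed(3)
signal = [1.0] * 8 + [4.0] * 8                 # piecewise-constant lag function
noisy = [s + random.gauss(0, 0.5) for s in signal]
a, d = haar_step(noisy)
denoised = inv_haar_step(a, soft(d, 0.5))      # shrink fine-scale detail

sse = lambda u: sum((ui - si) ** 2 for ui, si in zip(u, signal))
print(round(sse(noisy), 2), round(sse(denoised), 2))
```

Shrinking the fine-scale detail coefficients typically reduces the error against the true signal while preserving its jumps, which is the regularization role wavelet shrinkage plays for the functional coefficients in the model above.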

  7. Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets

    NASA Technical Reports Server (NTRS)

    Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.

    1978-01-01

    A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to adjust automatically the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.

  8. Multivariate-$t$ nonlinear mixed models with application to censored multi-outcome AIDS studies.

    PubMed

    Lin, Tsung-I; Wang, Wan-Lun

    2017-10-01

    In multivariate longitudinal HIV/AIDS studies, multi-outcome repeated measures on each patient over time may contain outliers, and the viral loads are often subject to an upper or lower limit of detection depending on the quantification assays. In this article, we consider an extension of the multivariate nonlinear mixed-effects model by adopting a joint multivariate-$t$ distribution for random effects and within-subject errors and taking the censoring information of multiple responses into account. The proposed model is called the multivariate-$t$ nonlinear mixed-effects model with censored responses (MtNLMMC), allowing for analyzing multi-outcome longitudinal data exhibiting nonlinear growth patterns with censorship and fat-tailed behavior. Utilizing the Taylor-series linearization method, a pseudo-data version of the expectation conditional maximization either (ECME) algorithm is developed for iteratively carrying out maximum likelihood estimation. We illustrate our techniques with two data examples from HIV/AIDS studies. Experimental results signify that the MtNLMMC performs favorably compared to its Gaussian analogue and some existing approaches. © The Author 2017. Published by Oxford University Press. All rights reserved.

  9. Effect of Low Magnitude Mechanical Stimuli on Bone Density and Structure in Pediatric Crohn's Disease: A Randomized Placebo Controlled Trial

    PubMed Central

    Leonard, Mary B.; Shults, Justine; Long, Jin; Baldassano, Robert N.; Brown, J. Keenan; Hommel, Kevin; Zemel, Babette S.; Mahboubi, Soroosh; Whitehead, Krista Howard; Herskovitz, Rita; Lee, Dale; Rausch, Joseph; Rubin, Clinton T.

    2016-01-01

    Pediatric Crohn's Disease (CD) is associated with low trabecular bone mineral density (BMD), cortical area, and muscle mass. Low magnitude mechanical stimulation (LMMS) may be anabolic. We conducted a 12 month randomized double-blind placebo-controlled trial of 10 minutes daily exposure to LMMS (30 Hz frequency, 0.3 g peak to peak acceleration). The primary outcomes were tibia trabecular BMD and cortical area by peripheral quantitative CT (pQCT) and vertebral trabecular BMD by QCT; additional outcomes included DXA whole body, hip and spine BMD, and leg lean mass. Results were expressed as sex-specific Z-scores relative to age. CD participants, ages 8-21 years with tibia trabecular BMD < 25th percentile for age were eligible and received daily cholecalciferol (800 IU) and calcium (1,000 mg). In total, 138 enrolled (48% male) and 121 (61 active, 60 placebo) completed the 12-month trial. Median adherence measured with an electronic monitor was 79% and did not differ between arms. By intention-to-treat analysis, LMMS had no significant effect on pQCT or DXA outcomes. The mean change in spine QCT trabecular BMD Z-score was +0.22 in the active arm and −0.02 in the placebo arm [difference in change 0.24 (95% CI 0.04, 0.44); p=0.02]. Among those with > 50% adherence, the effect was 0.38 (0.17, 0.58, p<0.0005). Within the active arm, each 10% greater adherence was associated with a 0.06 (0.01, 1.17, p=0.03) greater increase in spine QCT BMD Z-score. Treatment response did not vary according to baseline BMI Z-score, pubertal status, CD severity, or concurrent glucocorticoid or biologic medications. In all participants combined, height, pQCT trabecular BMD and cortical area and DXA outcomes improved significantly. In conclusion, LMMS was associated with increases in vertebral trabecular BMD by QCT; however, no effects were observed at DXA or pQCT sites. PMID:26821779

  10. A mixed integer program to model spatial wildfire behavior and suppression placement decisions

    Treesearch

    Erin J. Belval; Yu Wei; Michael Bevers

    2015-01-01

    Wildfire suppression combines multiple objectives and dynamic fire behavior to form a complex problem for decision makers. This paper presents a mixed integer program designed to explore integrating spatial fire behavior and suppression placement decisions into a mathematical programming framework. Fire behavior and suppression placement decisions are modeled using...

  11. Prediction of forest fires occurrences with area-level Poisson mixed models.

    PubMed

    Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo

    2015-05-01

    The number of fires in forest areas of Galicia (north-west of Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest areas. This work employs area-level Poisson mixed models for treating real data about fires in forest areas. A parametric bootstrap method is applied for estimating the mean squared errors of fires predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia. Copyright © 2015 Elsevier Ltd. All rights reserved.
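A heavily simplified sketch of the parametric bootstrap idea: draw bootstrap counts from the fitted model and average the squared prediction errors. The real method refits an area-level Poisson mixed model on each bootstrap sample; here a plain Poisson mean stands in for the fitted model, purely for illustration.

```python
# Parametric bootstrap MSE of a Poisson predictor (illustrative sketch).
import math
import random

random.seed(42)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam_hat = 4.0                      # fitted mean fire count for one forest area
B = 2000                           # bootstrap replicates
errs = [(lam_hat - poisson(lam_hat)) ** 2 for _ in range(B)]
mse_boot = sum(errs) / B           # bootstrap MSE estimate (≈ Var = lam_hat)
print(round(mse_boot, 2))
```

For this toy plug-in predictor the bootstrap MSE should approach the Poisson variance (here 4); the value of the method in the paper is that the same recipe works when the predictor comes from a mixed model with no closed-form MSE.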

  12. Validation of an ocean shelf model for the prediction of mixed-layer properties in the Mediterranean Sea west of Sardinia

    NASA Astrophysics Data System (ADS)

    Onken, Reiner

    2017-04-01

    The Regional Ocean Modeling System (ROMS) has been employed to explore the sensitivity of the forecast skill of mixed-layer properties to initial conditions, boundary conditions, and vertical mixing parameterisations. The initial and lateral boundary conditions were provided by the Mediterranean Forecasting System (MFS) or by the MERCATOR global ocean circulation model via one-way nesting; the initial conditions were additionally updated through the assimilation of observations. Nowcasts and forecasts from the weather forecast models COSMO-ME and COSMO-IT, partly melded with observations, served as surface boundary conditions. The vertical mixing was parameterised by the GLS (generic length scale) scheme of Umlauf and Burchard (2003) in four different set-ups. All ROMS forecasts were validated against the observations taken during the REP14-MED survey to the west of Sardinia. Nesting ROMS in MERCATOR and updating the initial conditions through data assimilation provided the best agreement of the predicted mixed-layer properties with the time series from a moored thermistor chain. Further improvement was obtained by using COSMO-ME atmospheric forcing melded with real observations, and by applying the k-ω vertical mixing scheme with increased vertical eddy diffusivity. The predicted temporal variability of the mixed-layer temperature was reasonably well correlated with the observed variability, while the modelled variability of the mixed-layer depth agreed with the observations only near the diurnal frequency peak. For the forecast horizontal variability, reasonable agreement with observations from a ScanFish section was found, but only for the mesoscale wave-number band; the observed sub-mesoscale variability was not reproduced by ROMS.

  13. Using the Mixed Rasch Model to analyze data from the beliefs and attitudes about memory survey.

    PubMed

    Smith, Everett V; Ying, Yuping; Brown, Scott W

    2012-01-01

    In this study, we used the Mixed Rasch Model (MRM) to analyze data from the Beliefs and Attitudes About Memory Survey (BAMS; Brown, Garry, Silver, and Loftus, 1997). We used the original 5-point BAMS data to investigate the functioning of the "Neutral" category via threshold analysis under a 2-class MRM solution. The "Neutral" category was identified as not eliciting the model expected responses and observations in the "Neutral" category were subsequently treated as missing data. For the BAMS data without the "Neutral" category, exploratory MRM analyses specifying up to 5 latent classes were conducted to evaluate data-model fit using the consistent Akaike information criterion (CAIC). For each of three BAMS subscales, a two latent class solution was identified as fitting the mixed Rasch rating scale model the best. Results regarding threshold analysis, person parameters, and item fit based on the final models are presented and discussed as well as the implications of this study.

  14. Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J

    2017-10-15

    Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common to all clusters or varied between clusters. Data were analysed with the standard model or with additional random effects for the period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
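The simulated design can be sketched as a small data generator (my own simplification with invented parameter values, not the authors' simulation code): three cluster groups, two periods, a cluster random intercept, and a period effect that varies between clusters, which is the misspecification the paper studies.

```python
# Stepped wedge data generator with cluster-varying period effects (sketch).
import random

random.seed(7)

# Period in which each cluster group starts the intervention
groups = {"early": 1, "late_a": 2, "late_b": 2}
effect, n_per_cell = 0.5, 200

data = []
for g, start in groups.items():
    u = random.gauss(0, 0.3)               # cluster random intercept
    period_slope = random.gauss(0.2, 0.2)  # cluster-VARYING period effect
    for period in (1, 2):
        treated = int(period >= start)
        for _ in range(n_per_cell):
            y = u + period_slope * (period - 1) + effect * treated + random.gauss(0, 1)
            data.append((g, period, treated, y))

# Naive treated-minus-control difference, ignoring period/cluster structure
t = [y for *_, tr, y in data if tr]
c = [y for *_, tr, y in data if not tr]
naive = sum(t) / len(t) - sum(c) / len(c)
print(round(naive, 2))
```

The naive contrast confounds the intervention effect with the period and cluster effects; an analysis model would be fit to `data` with explicit terms for those, which is where the choice of fixed versus random period effects matters.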

  15. A computer model of long-term salinity in San Francisco Bay: Sensitivity to mixing and inflows

    USGS Publications Warehouse

    Uncles, R.J.; Peterson, D.H.

    1995-01-01

    A two-level model of the residual circulation and tidally-averaged salinity in San Francisco Bay has been developed in order to interpret long-term (days to decades) salinity variability in the Bay. Applications of the model to biogeochemical studies are also envisaged. The model has been used to simulate daily-averaged salinity in the upper and lower levels of a 51-segment discretization of the Bay over the 22-y period 1967–1988. Observed, monthly-averaged surface salinity data and monthly averages of the daily-simulated salinity are in reasonable agreement, both near the Golden Gate and in the upper reaches, close to the delta. Agreement is less satisfactory in the central reaches of North Bay, in the vicinity of Carquinez Strait. Comparison of daily-averaged data at Station 5 (Pittsburg, in the upper North Bay) with modeled data indicates close agreement with a correlation coefficient of 0.97 for the 4110 daily values. The model successfully simulates the marked seasonal variability in salinity as well as the effects of rapidly changing freshwater inflows. Salinity variability is driven primarily by freshwater inflow. The sensitivity of the modeled salinity to variations in the longitudinal mixing coefficients is investigated. The modeled salinity is relatively insensitive to the calibration factor for vertical mixing and relatively sensitive to the calibration factor for longitudinal mixing. The optimum value of the longitudinal calibration factor is 1.1, compared with the physically-based value of 1.0. Linear time-series analysis indicates that the observed and dynamically-modeled salinity-inflow responses are in good agreement in the lower reaches of the Bay.
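As a toy illustration of the salinity balance being modelled (my own single-segment simplification, not the two-level 51-segment model), river inflow and tidal exchange with the ocean set a steady-state salinity that a simple Euler integration recovers:

```python
# Single-segment tidally-averaged salinity balance (toy sketch, invented values).
S_ocean, V = 33.0, 1e9           # ocean salinity (psu), segment volume (m^3)
E, Q = 400.0, 100.0              # tidal exchange and river inflow rates (m^3/s)

S, dt = 0.0, 3600.0              # start fresh, 1-hour Euler steps
for _ in range(24 * 3650):       # ~10 years, well past the ~23-day time constant
    S += dt * (E * (S_ocean - S) - Q * S) / V

steady = E * S_ocean / (E + Q)   # analytic steady state: 26.4 psu
print(round(S, 2), round(steady, 2))
```

Raising `Q` (a wet year) lowers the steady-state salinity, which is the inflow-driven variability the 22-year simulation in the study tracks; the actual model additionally resolves longitudinal and vertical mixing between 51 segments.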

  16. CFD of mixing of multi-phase flow in a bioreactor using population balance model.

    PubMed

    Sarkar, Jayati; Shekhawat, Lalita Kanwar; Loomba, Varun; Rathore, Anurag S

    2016-05-01

    Mixing in bioreactors is known to be crucial for achieving efficient mass and heat transfer, both of which thereby impact not only growth of cells but also product quality. In a typical bioreactor, the rate of transport of oxygen from air is the limiting factor. While higher impeller speeds can enhance mixing, they can also cause severe cell damage. Hence, it is crucial to understand the hydrodynamics in a bioreactor to achieve optimal performance. This article presents a novel approach involving use of computational fluid dynamics (CFD) to model the hydrodynamics of an aerated stirred bioreactor for production of a monoclonal antibody therapeutic via mammalian cell culture. This is achieved by estimating the volume averaged mass transfer coefficient (kL a) under varying conditions of the process parameters. The process parameters that have been examined include the impeller rotational speed and the flow rate of the incoming gas through the sparger inlet. To model the two-phase flow and turbulence, an Eulerian-Eulerian multiphase model and a k-ε turbulence model have been used, respectively. These have further been coupled with a population balance model to incorporate the various interphase interactions that lead to coalescence and breakage of bubbles. We have successfully demonstrated the utility of CFD as a tool to predict the size distribution of bubbles as a function of process parameters and as an efficient approach for obtaining optimized mixing conditions in the reactor. The proposed approach is significantly more time and resource efficient than the trial-and-error, fully experimental approach that is presently used. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:613-628, 2016.

  17. Oxygen diffusion model of the mixed (U,Pu)O2 ± x: Assessment and application

    NASA Astrophysics Data System (ADS)

    Moore, Emily; Guéneau, Christine; Crocombette, Jean-Paul

    2017-03-01

    The uranium-plutonium (U,Pu)O2 ± x mixed oxide (MOX) is used as a nuclear fuel in some light water reactors and considered for future reactor generations. To gain insight into fuel restructuring, which occurs during the fuel lifetime, as well as into possible accident scenarios, an understanding of the thermodynamic and kinetic behavior is crucial. A comprehensive evaluation of thermo-kinetic properties is incorporated in a computational CALPHAD-type model. The present DICTRA-based model describes oxygen diffusion across the whole range of plutonium, uranium and oxygen compositions and temperatures by incorporating vacancy and interstitial migration pathways for oxygen. The self- and chemical diffusion coefficients are assessed for the binary UO2 ± x and PuO2 - x systems and the description is extended to the ternary mixed oxide (U,Pu)O2 ± x by extrapolation. A simulation to validate the applicability of this model is considered.

  18. The Effect of a Dissipative Ladle Shroud on Mixing in Tundish: Mathematical and Experimental Modelling

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangshan; Yang, Shufeng; Li, Jingshe; Tang, Haiyan; Jiang, Zhengyi

    2018-01-01

    The effect of a dissipative ladle shroud (DLS) on mixing in a tundish was investigated and compared with that of a conventional ladle shroud (CLS) using mathematical and physical modelling. The tracer profiles of the mathematical results, obtained using large eddy simulation, were validated by physical observations employing high-speed cinephotography. The design of the DLS dramatically changed the flow patterns and contributed to the intermixing of fluid elements inside the ladle shroud. The vortex flow promoted turbulent mixing, as verified by tracking physical tracer dispersion inside the DLS. Residence Time Distribution (RTD) curves were obtained in two different sized tundishes to examine the mixing behaviours. The findings indicated that the DLS improved tundish mixing by increasing the active volume, and the effect was more pronounced in the smaller tundish. The DLS gave rise to a more plug-like flow pattern inside the tundish, showing potential to shorten the transition length during grade change.
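The active-volume comparison drawn from RTD curves typically reduces to the ratio of the measured mean residence time to the nominal residence time V/Q. A minimal sketch with a hypothetical tracer curve (illustrative values, not the study's measurements):

```python
def rtd_metrics(times, conc, vol, flow):
    """Mean residence time from an outlet tracer curve and the active-volume fraction.
    times, conc: uniformly spaced samples of outlet tracer concentration.
    vol: tundish volume (m^3); flow: throughput (m^3/s)."""
    dt = times[1] - times[0]
    area = sum(conc) * dt                                  # zeroth moment
    t_mean = sum(t * c for t, c in zip(times, conc)) * dt / area  # first moment
    t_theory = vol / flow            # nominal residence time V/Q
    active_frac = t_mean / t_theory  # values below 1 indicate dead volume
    return t_mean, active_frac

# Illustrative triangular RTD curve sampled every 10 s:
times = [i * 10.0 for i in range(11)]
conc = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0]
t_mean, frac = rtd_metrics(times, conc, vol=0.5, flow=0.01)
print(t_mean, frac)  # → 50.0 1.0
```

A DLS that increases the active volume would shift t_mean toward V/Q, raising this fraction relative to the CLS case.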

  19. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    PubMed

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Many of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain, and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method
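In the wavelet-based special case, each image is projected into wavelet space before the mixed model is fitted coefficient by coefficient. A minimal one-level orthonormal Haar decomposition, purely illustrative (not the authors' implementation):

```python
import math

def haar_step(signal):
    """One level of the orthonormal discrete Haar wavelet transform.
    Returns (approximation, detail) coefficient lists; input length must be even."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

# Hypothetical row of image intensities:
row = [4.0, 4.0, 2.0, 0.0]
a, d = haar_step(row)
print(a, d)
```

Mixed-model fitting then proceeds independently on each wavelet coefficient across images, and the fitted effects are transformed back to the image domain, which is what yields the adaptive smoothing described above.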

  20. On accommodating spatial interactions in a Generalized Heterogeneous Data Model (GHDM) of mixed types of dependent variables.

    DOT National Transportation Integrated Search

    2015-12-01

    We develop an econometric framework for incorporating spatial dependence in integrated model systems of latent variables and multidimensional mixed data outcomes. The framework combines Bhat's Generalized Heterogeneous Data Model (GHDM) with a spat...

  1. Ab Initio Modeling of Structure and Properties of Single and Mixed Alkali Silicate Glasses.

    PubMed

    Baral, Khagendra; Li, Aize; Ching, Wai-Yim

    2017-10-12

    A density functional theory (DFT)-based ab initio molecular dynamics (AIMD) approach has been applied to simulate models of single and mixed alkali silicate glasses with two different molar concentrations of alkali oxides. The structural environments and spatial distributions of alkali ions in the 10 simulated models with 20% and 30% of Li, Na, K and equal proportions of Li-Na and Na-K are studied in detail for subtle variations among the models. Quantum mechanical calculations of electronic structures, interatomic bonding, and mechanical and optical properties are carried out for each of the models, and the results are compared with available experi