Sample records for additive mixed models

  1. Functional Additive Mixed Models

    PubMed Central

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2014-01-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592
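
    A minimal sketch of how such a model might be specified with pffr() from the refund package; the data objects (Y, X, s_grid, t_grid, z, subject, dat) are hypothetical placeholders, not the authors' data.

    ```r
    # Hypothetical sketch: functional response Y (n x T, observed on t_grid),
    # functional covariate X (n x S, on s_grid), scalar covariate z, and a
    # grouping factor subject for a functional random intercept.
    library(refund)

    fit <- pffr(
      Y ~ ff(X, xind = s_grid) +   # linear function-on-function effect
          s(z) +                   # smooth, index-varying effect of a scalar covariate
          s(subject, bs = "re"),   # functional random intercept
      yind = t_grid,
      data = dat
    )
    summary(fit)
    plot(fit, pages = 1)
    ```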

  2. Functional Additive Mixed Models.

    PubMed

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2015-04-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach.

  3. Using generalized additive (mixed) models to analyze single case designs.

    PubMed

    Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J

    2014-04-01

    This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
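
    A minimal sketch (not the authors' annotated syntax) of the kind of generalized additive mixed model described above, fitted with mgcv::gamm; the data frame scd and its columns are hypothetical.

    ```r
    # Hypothetical single-case data: successes/trials per session, a phase factor
    # (baseline vs. treatment) and a case identifier.
    library(mgcv)

    fit <- gamm(
      cbind(successes, trials - successes) ~ phase + s(session, by = phase),
      random      = list(case = ~1),                  # random intercept per case
      correlation = corAR1(form = ~ session | case),  # lag-1 autocorrelation within case
      family      = quasibinomial,                    # allows for overdispersion
      data        = scd
    )
    summary(fit$gam)   # smooth trend and parametric (phase) terms
    summary(fit$lme)   # variance components and AR(1) estimate
    ```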

  4. Additive mixed effect model for recurrent gap time data.

    PubMed

    Ding, Jieli; Sun, Liuquan

    2017-04-01

    Gap times between recurrent events are often of primary interest in medical and observational studies. The additive hazards model, focusing on risk differences rather than risk ratios, has been widely used in practice. However, the marginal additive hazards model does not take the dependence among gap times into account. In this paper, we propose an additive mixed effect model to analyze gap time data, and the proposed model includes a subject-specific random effect to account for the dependence among the gap times. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. In addition, some graphical and numerical procedures are presented for model checking. The finite sample behavior of the proposed methods is evaluated through simulation studies, and an application to a data set from a clinic study on chronic granulomatous disease is provided.

  5. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets don't take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). According to the Daubert standard and the need for improvements in forensic science, new statistical tools like smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking it carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.

  6. Regression Analysis of Mixed Recurrent-Event and Panel-Count Data with Additive Rate Models

    PubMed Central

    Zhu, Liang; Zhao, Hui; Sun, Jianguo; Leisenring, Wendy; Robison, Leslie L.

    2015-01-01

    Event-history studies of recurrent events are often conducted in fields such as demography, epidemiology, medicine, and social sciences (Cook and Lawless, 2007; Zhao et al., 2011). For such analysis, two types of data have been extensively investigated: recurrent-event data and panel-count data. However, in practice, one may face a third type of data, mixed recurrent-event and panel-count data or mixed event-history data. Such data occur if some study subjects are monitored or observed continuously and thus provide recurrent-event data, while the others are observed only at discrete times and hence give only panel-count data. A more general situation is that each subject is observed continuously over certain time periods but only at discrete times over other time periods. There exists little literature on the analysis of such mixed data except that published by Zhu et al. (2013). In this paper, we consider the regression analysis of mixed data using the additive rate model and develop some estimating equation-based approaches to estimate the regression parameters of interest. Both finite sample and asymptotic properties of the resulting estimators are established, and the numerical studies suggest that the proposed methodology works well for practical situations. The approach is applied to a Childhood Cancer Survivor Study that motivated this study. PMID:25345405

  7. Regression analysis of mixed recurrent-event and panel-count data with additive rate models.

    PubMed

    Zhu, Liang; Zhao, Hui; Sun, Jianguo; Leisenring, Wendy; Robison, Leslie L

    2015-03-01

    Event-history studies of recurrent events are often conducted in fields such as demography, epidemiology, medicine, and social sciences (Cook and Lawless, 2007, The Statistical Analysis of Recurrent Events. New York: Springer-Verlag; Zhao et al., 2011, Test 20, 1-42). For such analysis, two types of data have been extensively investigated: recurrent-event data and panel-count data. However, in practice, one may face a third type of data, mixed recurrent-event and panel-count data or mixed event-history data. Such data occur if some study subjects are monitored or observed continuously and thus provide recurrent-event data, while the others are observed only at discrete times and hence give only panel-count data. A more general situation is that each subject is observed continuously over certain time periods but only at discrete times over other time periods. There exists little literature on the analysis of such mixed data except that published by Zhu et al. (2013, Statistics in Medicine 32, 1954-1963). In this article, we consider the regression analysis of mixed data using the additive rate model and develop some estimating equation-based approaches to estimate the regression parameters of interest. Both finite sample and asymptotic properties of the resulting estimators are established, and the numerical studies suggest that the proposed methodology works well for practical situations. The approach is applied to a Childhood Cancer Survivor Study that motivated this study. © 2014, The International Biometric Society.

  8. A Lagrangian mixing frequency model for transported PDF modeling

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.

  9. VISUAL PLUMES MIXING ZONE MODELING SOFTWARE

    EPA Science Inventory

    The U.S. Environmental Protection Agency has a long history of both supporting plume model development and providing mixing zone modeling software. The Visual Plumes model is the most recent addition to the suite of public-domain models available through the EPA-Athens Center f...

  10. Mixed Model Methods for Genomic Prediction and Variance Component Estimation of Additive and Dominance Effects Using SNP Markers

    PubMed Central

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005–0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level. PMID:24498162

  11. Mixed model methods for genomic prediction and variance component estimation of additive and dominance effects using SNP markers.

    PubMed

    Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo

    2014-01-01

    We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005-0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level.
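
    A minimal sketch (not the authors' CE/QM implementations) of building genomic additive and dominance relationship matrices from a 0/1/2 SNP matrix M and fitting an additive GBLUP with the rrBLUP package; M and y are hypothetical inputs.

    ```r
    library(rrBLUP)

    # M: n x m matrix of SNP genotypes coded 0/1/2; y: phenotype vector of length n
    p <- colMeans(M) / 2                           # allele frequencies
    W <- sweep(M, 2, 2 * p)                        # centered additive codes
    G <- tcrossprod(W) / (2 * sum(p * (1 - p)))    # additive (VanRaden-type) relationships

    H <- (M == 1) * 1                              # heterozygote indicators
    V <- sweep(H, 2, 2 * p * (1 - p))              # centered dominance codes
    D <- tcrossprod(V) / sum((2 * p * (1 - p))^2)  # dominance relationships

    fit_add <- mixed.solve(y, K = G)               # additive GBLUP / REML variance component
    fit_add$Vu                                     # estimated additive genetic variance
    # Fitting additive and dominance effects jointly requires software that accepts
    # two relationship matrices (e.g., a multi-kernel GBLUP).
    ```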

  12. Model-Independent Bounds on Kinetic Mixing

    DOE PAGES

    Hook, Anson; Izaguirre, Eder; Wacker, Jay G.

    2011-01-01

    New Abelian vector bosons can kinetically mix with the hypercharge gauge boson of the Standard Model. This letter computes the model-independent limits on vector bosons with masses from 1 GeV to 1 TeV. The limits arise from the numerous e+e− experiments that have been performed in this energy range and bound the kinetic mixing by ϵ ≲ 0.03 for most of the mass range studied, regardless of any additional interactions that the new vector boson may have.

  13. Modeling of mixing in 96-well microplates observed with fluorescence indicators.

    PubMed

    Weiss, Svenja; John, Gernot T; Klimant, Ingo; Heinzle, Elmar

    2002-01-01

    Mixing in 96-well microplates was studied using soluble pH indicators and a fluorescence pH sensor. Small amounts of alkali were added with the aid of a multichannel pipet, a piston pump, and a piezoelectric actuator. Mixing patterns were observed visually using a video camera. Addition of drops each of about 1 nL with the piezoelectric actuator resulted in umbrella and double-disklike shapes. Convective mixing was mainly observed in the upper part of the well, whereas the lower part was only mixed quickly when using the multichannel pipet and the piston pump with an addition volume of 5 microL or larger. Estimated mixing times were between a few seconds and several minutes. Mixing by liquid dispensing was much more effective than by shaking. A mixing model consisting of 21 elements could describe mixing dynamics observed by the dissolved fluorescence dye and by the optical immobilized pH sensor. This model can be applied for designing pH control in microplates or for design of kinetic experiments with liquid addition.

  14. Quantifying uncertainty in stable isotope mixing models

    DOE PAGES

    Davis, Paul; Syme, James; Heikoop, Jeffrey; ...

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
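
    For intuition, a deterministic two-tracer, three-source version of the underlying mass balance can be solved directly (illustrative values below); the probabilistic approaches compared in the study (SIAR, PMC, SIRS) add source variability and mixture uncertainty on top of this linear structure.

    ```r
    # Hypothetical source and mixture signatures (per mil)
    src_d15N <- c(A = 2.0, B = 8.0, C = 15.0)
    src_d18O <- c(A = -5.0, B = 3.0, C = 10.0)
    mix      <- c(d15N = 7.5, d18O = 2.0)

    # delta_mix = sum_i f_i * delta_i for each tracer, with sum_i f_i = 1
    A <- rbind(src_d15N, src_d18O, rep(1, 3))
    b <- c(mix, 1)
    fractions <- solve(A, b)
    fractions   # mixing fractions f_A, f_B, f_C
    ```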

  15. MixSIAR: advanced stable isotope mixing models in R

    EPA Science Inventory

    The development of stable isotope mixing models has coincided with modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances are published in parity with software packages. However, while mixing model theory has recently been ex...

  16. Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.

    PubMed

    Zhang, Yue; Berhane, Kiros

    2016-01-01

    We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family for taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children Health Study (CHS) to jointly model questionnaire based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.

  17. Production of high-quality polydisperse construction mixes for additive 3D technologies.

    NASA Astrophysics Data System (ADS)

    Gerasimov, M. D.; Brazhnik, Yu V.; Gorshkov, P. S.; Latyshev, S. S.

    2018-03-01

    The paper describes a new design of a mixer allowing production of high-quality polydisperse powders used in additive 3D technologies. A new principle of dry powder particle mixing is considered, making possible a close-to-ideal distribution of such particles in a common space. A mathematical model of the mixer is presented, allowing evaluation of quality indicators of the produced mixture. Experimental results are shown and rational values of process parameters of the mixer are obtained.

  18. A new unsteady mixing model to predict NO(x) production during rapid mixing in a dual-stage combustor

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1992-01-01

    An advanced gas turbine engine to power supersonic transport aircraft is currently under study. In addition to high combustion efficiency requirements, environmental concerns have placed stringent restrictions on the pollutant emissions from these engines. A combustor design with the potential for minimizing pollutants such as NO(x) emissions is undergoing experimental evaluation. A major technical issue in the design of this combustor is how to rapidly mix the hot, fuel-rich primary zone product with the secondary diluent air to obtain a fuel-lean mixture for combustion in the second stage. Numerical predictions using steady-state methods cannot account for the unsteady phenomena in the mixing region. Therefore, to evaluate the effect of unsteady mixing and combustion processes, a novel unsteady mixing model is demonstrated here. This model has been used to study multispecies mixing as well as propane-air and hydrogen-air jet nonpremixed flames, and has been used to predict NO(x) production in the mixing region. Comparison with available experimental data show good agreement, thereby providing validation of the mixing model. With this demonstration, this mixing model is ready to be implemented in conjunction with steady-state prediction methods and provide an improved engineering design analysis tool.

  19. A flavor symmetry model for bilarge leptonic mixing and the lepton masses

    NASA Astrophysics Data System (ADS)

    Ohlsson, Tommy; Seidl, Gerhart

    2002-11-01

    We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimental hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data, the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 - θ13.

  20. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, standard errors of estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the minimum standard errors. The results showed that the mixed zero-inflated Poisson model was almost the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in the longitudinal count data.
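
    A minimal sketch of a zero-inflated Poisson mixed model of the kind described, using glmmTMB rather than the authors' own fitting approach; the data frame hcv and its variables are hypothetical stand-ins for the covariates listed above.

    ```r
    library(glmmTMB)

    fit <- glmmTMB(
      count ~ age + sex + genotype + protocol + risk_factor + (1 | patient),
      ziformula = ~ 1,          # shared zero-inflation probability
      family    = poisson,
      data      = hcv
    )
    summary(fit)                # fixed effects, random-effect variance, zero-inflation part
    ```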

  21. Decision-case mix model for analyzing variation in cesarean rates.

    PubMed

    Eldenburg, L; Waller, W S

    2001-01-01

    This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.

  22. Modeling of Low Feed-Through CD Mix Implosions

    NASA Astrophysics Data System (ADS)

    Pino, Jesse; MacLaren, Steven; Greenough, Jeff; Casey, Daniel; Dittrich, Tom; Kahn, Shahab; Kyrala, George; Ma, Tammy; Salmonson, Jay; Smalyuk, Vladimir; Tipton, Robert

    2015-11-01

    The CD Mix campaign previously demonstrated the use of nuclear diagnostics to study the mix of separated reactants in plastic capsule implosions at the National Ignition Facility. However, the previous implosions suffered from large instability growth seeded from perturbations on the outside of the capsule. Recently, the separated reactants technique has been applied to two platforms designed to minimize this feed-through and isolate local mix at the gas-ablator interface: the Two Shock (TS) and Adiabat-Shaped (AS) Platforms. Additionally, the background contamination of Deuterium in the gas has been greatly reduced, allowing for simultaneous observation of TT, DT, and DD neutrons, which respectively give information about core gas performance, gas-shell atomic mix, and heating of the shell. In this talk, we describe efforts to model these implosions using high-resolution 2D ARES simulations with both a Reynolds-Averaged Navier Stokes method and an enhanced diffusivity model. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-674867.

  23. Logit-normal mixed model for Indian monsoon precipitation

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-09-01

    Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found GLMM estimation methods were sensitive to tuning parameters and assumptions and therefore, recommend use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
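
    A logit-normal mixed model is a binomial GLMM whose random effects are normal on the logit scale; a minimal sketch with lme4 follows (the data frame monsoon and its columns are hypothetical, and the paper's own estimation algorithms may differ).

    ```r
    library(lme4)

    fit <- glmer(
      rain_occurred ~ month + elevation + (1 | station),   # occurrence of a rainfall category
      family  = binomial(link = "logit"),
      data    = monsoon,
      control = glmerControl(optimizer = "bobyqa")
    )
    summary(fit)   # fixed effects and station-level random-intercept variance
    ```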

  24. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

    Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP) or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N = 1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔGobs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to be increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way for modelling phosphate hydrolysis in solution.

  25. Stability of a general mixed additive-cubic functional equation in non-Archimedean fuzzy normed spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu Tianzhou; Rassias, John Michael; Xu Wanxin

    2010-09-15

    We establish some stability results concerning the general mixed additive-cubic functional equation in non-Archimedean fuzzy normed spaces. In addition, we establish some results of approximately general mixed additive-cubic mappings in non-Archimedean fuzzy normed spaces. The results improve and extend some recent results.

  26. Genomic Model with Correlation Between Additive and Dominance Effects.

    PubMed

    Xiang, Tao; Christensen, Ole Fredslund; Vitezica, Zulma Gladis; Legarra, Andres

    2018-05-09

    Dominance genetic effects are rarely included in pedigree-based genetic evaluation. With the availability of single nucleotide polymorphism markers and the development of genomic evaluation, estimates of dominance genetic effects have become feasible using genomic best linear unbiased prediction (GBLUP). Usually, studies involving additive and dominance genetic effects ignore possible relationships between them. It has been often suggested that the magnitude of functional additive and dominance effects at the quantitative trait loci are related, but there is no existing GBLUP-like approach accounting for such correlation. Wellmann and Bennewitz showed two ways of considering directional relationships between additive and dominance effects, which they estimated in a Bayesian framework. However, these relationships cannot be fitted at the level of individuals instead of loci in a mixed model and are not compatible with standard animal or plant breeding software. This comes from a fundamental ambiguity in assigning the reference allele at a given locus. We show that, if there has been selection, assigning the most frequent as the reference allele orients the correlation between functional additive and dominance effects. As a consequence, the most frequent reference allele is expected to have a positive value. We also demonstrate that selection creates negative covariance between genotypic additive and dominance genetic values. For parameter estimation, it is possible to use a combined additive and dominance relationship matrix computed from marker genotypes, and to use standard restricted maximum likelihood (REML) algorithms based on an equivalent model. Through a simulation study, we show that such correlations can easily be estimated by mixed model software and accuracy of prediction for genetic values is slightly improved if such correlations are used in GBLUP. However, a model assuming uncorrelated effects and fitting orthogonal breeding values and dominant

  27. Supervised nonlinear spectral unmixing using a postnonlinear mixing model for hyperspectral imagery.

    PubMed

    Altmann, Yoann; Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves

    2012-06-01

    This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
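
    A sketch of the model class in symbols (as commonly written for polynomial post-nonlinear mixing; the paper's exact parameterization may differ), with M the matrix of R endmember spectra, a the abundance vector and n the additive white Gaussian noise:

    ```latex
    y = g_b(Ma) + n, \qquad g_b(s) = s + b\,(s \odot s),
    \qquad a_r \ge 0, \quad \sum_{r=1}^{R} a_r = 1, \quad n \sim \mathcal{N}(0, \sigma^2 I)
    ```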

  28. Modeling optimal treatment strategies in a heterogeneous mixing model.

    PubMed

    Choe, Seoyun; Lee, Sunmi

    2015-11-25

    Many mathematical models assume random or homogeneous mixing for various infectious diseases. Homogeneous mixing can be generalized to mathematical models with multi-patches or age structure by incorporating contact matrices to capture the dynamics of the heterogeneously mixing populations. Contact or mixing patterns are difficult to measure in many infectious diseases including influenza. Mixing patterns are considered to be one of the critical factors for infectious disease modeling. A two-group influenza model is considered to evaluate the impact of heterogeneous mixing on the influenza transmission dynamics. Heterogeneous mixing between two groups with two different activity levels includes proportionate mixing, preferred mixing and like-with-like mixing. Furthermore, the optimal control problem is formulated in this two-group influenza model to identify the group-specific optimal treatment strategies at a minimal cost. We investigate group-specific optimal treatment strategies under various mixing scenarios. The characteristics of the two-group influenza dynamics have been investigated in terms of the basic reproduction number and the final epidemic size under various mixing scenarios. As the mixing patterns become proportionate mixing, the basic reproduction number becomes smaller; however, the final epidemic size becomes larger. This is due to the fact that the number of infected people increases only slightly in the higher activity level group, while the number of infected people increases more significantly in the lower activity level group. Our results indicate that more intensive treatment of both groups at the early stage is the most effective treatment regardless of the mixing scenario. However, proportionate mixing requires more treated cases for all combinations of different group activity levels and group population sizes. Mixing patterns can play a critical role in the effectiveness of optimal treatments. As the mixing becomes more like
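
    A minimal sketch of a two-group epidemic model with proportionate mixing between a high-activity and a low-activity group (illustrative parameter values, not the paper's two-group influenza model with treatment controls):

    ```r
    library(deSolve)

    two_group_sir <- function(t, y, p) {
      with(as.list(c(y, p)), {
        N1 <- S1 + I1 + R1; N2 <- S2 + I2 + R2
        # proportionate mixing: contacts allocated in proportion to activity * group size
        foi <- beta * (a1 * I1 + a2 * I2) / (a1 * N1 + a2 * N2)
        dS1 <- -a1 * foi * S1; dI1 <- a1 * foi * S1 - gamma * I1; dR1 <- gamma * I1
        dS2 <- -a2 * foi * S2; dI2 <- a2 * foi * S2 - gamma * I2; dR2 <- gamma * I2
        list(c(dS1, dI1, dR1, dS2, dI2, dR2))
      })
    }

    y0   <- c(S1 = 990, I1 = 10, R1 = 0, S2 = 4990, I2 = 10, R2 = 0)
    pars <- c(beta = 0.5, gamma = 1 / 4, a1 = 3, a2 = 1)   # a_i: group activity levels
    out  <- ode(y = y0, times = seq(0, 120, by = 1), func = two_group_sir, parms = pars)
    tail(out)   # final epidemic sizes differ between activity-level groups
    ```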

  29. Use of lime as antistrip additive for mitigating moisture susceptibility of asphalt mixes containing baghouse fines.

    DOT National Transportation Integrated Search

    2005-08-31

    This study investigated the effectiveness of hydrated lime as an antistrip additive for mixes containing excess baghouse fines. Wet process of lime addition was used without marination. One percent lime was added to asphalt mixes containing 5.5% ...

  30. Progress Report on SAM Reduced-Order Model Development for Thermal Stratification and Mixing during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, R.

    This report documents the initial progress on the reduced-order flow model developments in SAM for thermal stratification and mixing modeling. Two different modeling approaches are pursued. The first one is based on one-dimensional fluid equations with additional terms accounting for the thermal mixing from both flow circulations and turbulent mixing. The second approach is based on three-dimensional coarse-grid CFD approach, in which the full three-dimensional fluid conservation equations are modeled with closure models to account for the effects of turbulence.

  31. Ridge, Lasso and Bayesian additive-dominance genomic models.

    PubMed

    Azevedo, Camila Ferreira; de Resende, Marcos Deon Vilela; E Silva, Fabyano Fonseca; Viana, José Marcelo Soriano; Valente, Magno Sávio Ferreira; Resende, Márcio Fernando Ribeiro; Muñoz, Patricio

    2015-08-25

    A complete approach for genome-wide selection (GWS) involves reliable statistical genetics models and methods. Reports on this topic are common for additive genetic models but not for additive-dominance models. The objective of this paper was (i) to compare the performance of 10 additive-dominance predictive models (including current models and proposed modifications), fitted using Bayesian, Lasso and Ridge regression approaches; and (ii) to decompose genomic heritability and accuracy in terms of three quantitative genetic information sources, namely, linkage disequilibrium (LD), co-segregation (CS) and pedigree relationships or family structure (PR). The simulation study considered two broad sense heritability levels (0.30 and 0.50, associated with narrow sense heritabilities of 0.20 and 0.35, respectively) and two genetic architectures for traits (the first consisting of small gene effects and the second consisting of a mixed inheritance model with five major genes). G-REML/G-BLUP and a modified Bayesian/Lasso (called BayesA*B* or t-BLASSO) method performed best in the prediction of genomic breeding as well as the total genotypic values of individuals in all four scenarios (two heritabilities x two genetic architectures). The BayesA*B*-type method showed a better ability to recover the dominance variance/additive variance ratio. Decomposition of genomic heritability and accuracy revealed the following descending importance order of information: LD, CS and PR not captured by markers, the last two being very close. Amongst the 10 models/methods evaluated, the G-BLUP, BAYESA*B* (-2,8) and BAYESA*B* (4,6) methods presented the best results and were found to be adequate for accurately predicting genomic breeding and total genotypic values as well as for estimating additive and dominance in additive-dominance genomic models.

  32. Statistical models of global Langmuir mixing

    NASA Astrophysics Data System (ADS)

    Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean

    2017-05-01

    The effects of Langmuir mixing on the surface ocean mixing may be parameterized by applying an enhancement factor which depends on wave, wind, and ocean state to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but with significant computational and code development expenses. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on the empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.

  33. Modelling rainfall amounts using mixed-gamma model for Kuantan district

    NASA Astrophysics Data System (ADS)

    Zakaria, Roslinazairimah; Moslim, Nor Hafizah

    2017-05-01

    An efficient design of flood mitigation and construction of crop growth models depend upon good understanding of the rainfall process and characteristics. Gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. The formulae of mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The formulae of mean and variance for the sum of two and three independent mixed-gamma variables derived are tested using the monthly rainfall amounts from rainfall stations within Kuantan district in Pahang Malaysia. Based on the Kolmogorov-Smirnov goodness of fit test, the results demonstrate that the descriptive statistics of the observed sum of rainfall amounts is not significantly different at 5% significance level from the generated sum of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
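
    For reference, a sketch of the standard moments under a shape-scale parameterization (the paper's notation may differ): if X_i = 0 with probability 1 - p_i and X_i | X_i > 0 ~ Gamma(shape α_i, scale θ_i), then for independent mixed-gamma variables X_1, ..., X_k,

    ```latex
    E[X_i] = p_i \alpha_i \theta_i, \qquad
    \operatorname{Var}(X_i) = p_i \alpha_i \theta_i^{2}\bigl(1 + \alpha_i (1 - p_i)\bigr),
    \qquad
    E\Bigl[\textstyle\sum_i X_i\Bigr] = \sum_i p_i \alpha_i \theta_i, \qquad
    \operatorname{Var}\Bigl(\textstyle\sum_i X_i\Bigr) = \sum_i p_i \alpha_i \theta_i^{2}\bigl(1 + \alpha_i (1 - p_i)\bigr)
    ```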

  34. Real longitudinal data analysis for real people: building a good enough mixed model.

    PubMed

    Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E

    2010-02-20

    Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice to build mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help to conquer the complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models, as well as the need for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
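
    A minimal sketch of some of the workflow pieces described above (centering/scaling, an explicit mean structure, and an explicit covariance model), using nlme with a hypothetical longitudinal data frame alc:

    ```r
    library(nlme)

    # Centering and scaling predictors improves convergence and numerical accuracy
    alc$time_c  <- as.numeric(scale(alc$time))
    alc$treat_c <- as.numeric(scale(as.numeric(alc$treat)))

    fit <- lme(
      y ~ time_c * treat_c,                    # mean structure (fixed effects)
      random      = ~ time_c | id,             # random intercept and slope per subject
      correlation = corAR1(form = ~ 1 | id),   # serial correlation of residual errors
      data        = alc,
      method      = "REML"
    )
    summary(fit)
    intervals(fit)   # are the variance components credibly estimated?
    ```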

  35. Study of abrasive resistance of foundries models obtained with use of additive technology

    NASA Astrophysics Data System (ADS)

    Ol'khovik, Evgeniy

    2017-10-01

    A problem considered in the present study is the resistance of foundry models and patterns made from ABS (PLA) plastic, obtained by 3D printing using FDM additive technology, to abrasive wear in the environment of a foundry sand mould. The article describes a technique and equipment for wear testing of casting models and patterns. The manufacturing techniques of models with the use of the 3D printer (additive technology) are described. A scheme with vibration load was applied to the sample tests. For a more thorough study of the influence of sandy mix on the plastic, models were tested in real conditions of abrasive wear. The study also examined the application of acrylic paintwork and a two-component coating to the plastic model. Practical suggestions and recommendations on the production of master models with the use of FDM technology, allowing one to reach durability indicators exceeding 2000 cycles of moulding in foundry sand mix, are described.

  36. Mixed models approaches for joint modeling of different types of responses.

    PubMed

    Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert

    2016-01-01

    In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received a lot of attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects based model that, in addition, allows for overdispersion. Using two case studies, it is shown that the combination of random effects to capture association with further correction for overdispersion can improve the model's fit considerably and that the resulting models allow to answer research questions that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way, using the SAS procedure NLMIXED.
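
    For the ordinal piece alone, a proportional odds mixed model can be fitted with the ordinal package (a sketch with hypothetical variables; the full joint model with continuous and survival outcomes, as in the paper, requires more general software such as SAS NLMIXED):

    ```r
    library(ordinal)

    # rating: ordered factor; treat, time: covariates; id: subject identifier
    fit <- clmm(rating ~ treat + time + (1 | id), data = dat, link = "logit")
    summary(fit)   # cumulative-logit thresholds, fixed effects, random-intercept variance
    ```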

  37. System equivalent model mixing

    NASA Astrophysics Data System (ADS)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

    This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM) frequency based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques; namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.

  38. Effects of two warm-mix additives on aging, rheological and failure properties of asphalt cements

    NASA Astrophysics Data System (ADS)

    Omari, Isaac Obeng

    Sustainable road construction and maintenance can be supported when excellent warm-mix additives are employed in the modification of asphalt. These warm-mix additives help meet today's requirements such as fatigue cracking resistance, durability, thermal cracking resistance, rutting resistance and resistance to moisture damage. Warm-mix additives are based on waxes and surfactants, which reduce energy consumption and carbon dioxide emissions significantly during the construction phase of the pavement. In this study, the effects of two warm-mix additives, siloxane and oxidised polyethylene wax, on roofing asphalt flux (RAF) and asphalt modified with waste engine oil (655-7) were investigated to evaluate the rheological, aging and failure properties of the asphalt binders. In terms of the properties of these two different asphalts, RAF has proved to be a superior-quality asphalt whereas 655-7 is a poor-quality asphalt. The properties of the modified asphalt samples were measured by Superpave(TM) tests such as the Dynamic Shear Rheometer (DSR) test and the Bending Beam Rheometer (BBR) test, as well as modified protocols such as the extended BBR (eBBR) test (LS-308) and the Double-Edge-Notched Tension (DENT) test (LS-299), after laboratory aging. In addition, the Avrami theory was used to gain insight into the crystallization of asphalt or the waxes within the asphalt binder. This study has, however, shown that the eBBR and DENT tests are better tools for providing accurate specification tests to curb thermal and fatigue cracking in contemporary asphalt pavements.

  39. Experience with The Use of Warm Mix Asphalt Additives in Bitumen Binders

    NASA Astrophysics Data System (ADS)

    Cápayová, Silvia; Unčík, Stanislav; Cihlářová, Denisa

    2018-03-01

    In most European countries, Hot Mix Asphalt (HMA) technology is still being used as the standard for the production and processing of bituminous mixtures. However, from the perspective of environmental acceptability, global warming and greenhouse gas production, Slovakia is making an effort to put into practice modern technology, which is characterized by lower energy consumption and reducing negative impacts on the environment. Warm mix asphalt technologies (WMA), which have been verified at the Department of Transportation Engineering laboratory, Faculty of Civil Engineering, Slovak University of Technology (FCE, SUT) can provide the required mixture properties and can be used not only for the construction of new roads, but also for their renovation and reconstruction. The paper was created in cooperation with the Technical University of Ostrava, Czech Republic, which also deals with the addition of additives to asphalt mixtures and binders. It describes a comparison of the impact of some organic and chemical additives on the properties of commonly used bitumen binders in accordance with valid standards and technical regulations.

  40. Convex set and linear mixing model

    NASA Technical Reports Server (NTRS)

    Xu, P.; Greeley, R.

    1993-01-01

    A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
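
    In symbols, a mixed pixel spectrum x lies in the convex hull of the endmember spectra e_1, ..., e_m:

    ```latex
    x = \sum_{i=1}^{m} a_i e_i, \qquad a_i \ge 0, \qquad \sum_{i=1}^{m} a_i = 1
    ```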

  41. Toward Better Modeling of Supercritical Turbulent Mixing

    NASA Technical Reports Server (NTRS)

    Selle, Laurent; Okongo'o, Nora; Bellan, Josette; Harstad, Kenneth

    2008-01-01

    This study was done as part of an effort to develop computational models representing turbulent mixing under thermodynamic supercritical (here, high pressure) conditions. The question was whether the large-eddy simulation (LES) approach, developed previously for atmospheric-pressure compressible-perfect-gas and incompressible flows, can be extended to real-gas non-ideal (including supercritical) fluid mixtures. [In LES, the governing equations are approximated such that the flow field is spatially filtered and subgrid-scale (SGS) phenomena are represented by models.] The study included analyses of results from direct numerical simulation (DNS) of several such mixing layers based on the Navier-Stokes, total-energy, and conservation-of-chemical-species governing equations. Comparison of LES and DNS results revealed the need to augment the atmospheric-pressure LES equations with additional SGS momentum and energy terms. These new terms are the direct result of high-density-gradient-magnitude regions found in the DNS and observed experimentally under fully turbulent flow conditions. A model has been derived for the new term in the momentum equation and was found to perform well at small filter size but to deteriorate with increasing filter size. Several alternative models were derived for the new SGS term in the energy equation that would need further investigations to determine if they are too computationally intensive in LES.

  2. Application of the Fokker-Planck molecular mixing model to turbulent scalar mixing using moment methods

    NASA Astrophysics Data System (ADS)

    Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.

    2017-06-01

    An extended quadrature method of moments using the β kernel density function (β-EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β-PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β-EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.
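
    For comparison, the assumed β-PDF closure mentioned at the end of the abstract reduces to a two-line moment match: given the mean and variance of a scalar bounded on [0, 1], the beta shape parameters follow directly. The sketch below is a generic illustration with arbitrary moment values, not the β-EQMOM algorithm itself.

        import numpy as np
        from scipy.stats import beta

        def assumed_beta_params(mean, var):
            """Shape parameters of a beta distribution matching a given mean and
            variance of a scalar bounded on [0, 1]; requires var < mean*(1-mean)."""
            k = mean * (1.0 - mean) / var - 1.0
            return mean * k, (1.0 - mean) * k

        a, b = assumed_beta_params(mean=0.3, var=0.02)
        print("alpha, beta:", round(a, 3), round(b, 3))

        # Check the match and evaluate the assumed PDF on the mixture-fraction axis.
        print("recovered mean, var:", beta.mean(a, b), round(beta.var(a, b), 4))
        z = np.linspace(0.0, 1.0, 11)
        print("assumed beta-PDF values:", np.round(beta.pdf(z, a, b), 3))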

  3. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
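
    A toy simulation can convey the role of the consumption-rate idea: consumers that integrate over more prey items scatter less around the mixture mean, which is what distinguishes "process" error from a fixed residual error. The source signatures, proportions, and sample sizes below are invented for illustration and do not reproduce the authors' parameterization.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical source (prey) isotope signatures: mean and sd of delta-15N.
        src_mu = np.array([2.0, 6.0, 10.0])
        src_sd = np.array([0.8, 1.0, 0.9])
        p = np.array([0.5, 0.3, 0.2])          # true diet proportions

        def consumer_signatures(n_prey, n_consumers=2000):
            """Each consumer averages the signatures of n_prey items sampled
            from the sources with probabilities p ('process' error only)."""
            which = rng.choice(3, size=(n_consumers, n_prey), p=p)
            items = rng.normal(src_mu[which], src_sd[which])
            return items.mean(axis=1)

        for n in (1, 5, 25):
            sig = consumer_signatures(n)
            print(f"n_prey={n:2d}  mean={sig.mean():5.2f}  sd={sig.std():4.2f}")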

  4. Transition mixing study empirical model report

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; White, C.

    1988-01-01

    The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated. The empirical model shows faster mixing rates compared to the numerical model. Both models show drift of jets toward the inner wall of a turning duct. The jets from the inner wall do not exhibit the familiar kidney-shaped structures observed for the outer wall jets or for jets injected in rectangular ducts.

  5. Lagrangian mixed layer modeling of the western equatorial Pacific

    NASA Technical Reports Server (NTRS)

    Shinoda, Toshiaki; Lukas, Roger

    1995-01-01

    Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs there due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.

  6. Tunable, mixed-resolution modeling using library-based Monte Carlo and graphics processing units

    PubMed Central

    Mamonov, Artem B.; Lettieri, Steven; Ding, Ying; Sarver, Jessica L.; Palli, Rohith; Cunningham, Timothy F.; Saxena, Sunil; Zuckerman, Daniel M.

    2012-01-01

    Building on our recently introduced library-based Monte Carlo (LBMC) approach, we describe a flexible protocol for mixed coarse-grained (CG)/all-atom (AA) simulation of proteins and ligands. In the present implementation of LBMC, protein side chain configurations are pre-calculated and stored in libraries, while bonded interactions along the backbone are treated explicitly. Because the AA side chain coordinates are maintained at minimal run-time cost, arbitrary sites and interaction terms can be turned on to create mixed-resolution models. For example, an AA region of interest such as a binding site can be coupled to a CG model for the rest of the protein. We have additionally developed a hybrid implementation of the generalized Born/surface area (GBSA) implicit solvent model suitable for mixed-resolution models, which in turn was ported to a graphics processing unit (GPU) for faster calculation. The new software was applied to study two systems: (i) the behavior of spin labels on the B1 domain of protein G (GB1) and (ii) docking of randomly initialized estradiol configurations to the ligand binding domain of the estrogen receptor (ERα). The performance of the GPU version of the code was also benchmarked in a number of additional systems. PMID:23162384

  7. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    PubMed

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by asymmetric distributions to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.

  8. MixSIAR: A Bayesian stable isotope mixing model for characterizing intrapopulation niche variation

    EPA Science Inventory

    Background/Question/Methods The science of stable isotope mixing models has tended towards the development of modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances or syntheses of the current state of the art are published in parity with software packa...

  9. Performance testing of asphalt concrete containing crumb rubber modifier and warm mix additives

    NASA Astrophysics Data System (ADS)

    Ikpugha, Omo John

    Utilisation of scrap tires has been achieved through the production of crumb rubber modified binders and rubberised asphalt concrete. Terminal and field blended asphalt rubbers have been developed through the wet process to incorporate crumb rubber into the asphalt binder. Warm mix asphalt technologies have been developed to curb the problems associated with the processing and production of such crumb rubber modified binders. Also, the lowered production and compaction temperatures associated with warm mix additives suggest the possibility of moisture retention in the mix, which can lead to moisture damage. Conventional moisture sensitivity tests have not effectively discriminated between good and poor mixes, due to the difficulty of simulating field moisture damage mechanisms. This study was carried out to investigate performance properties of crumb rubber modified asphalt concrete, using commercial warm mix asphalt technology. Asphalt mixtures commonly utilised in North America, such as dense graded and stone mastic asphalt, were used in this study. Uniaxial Cyclic Compression Testing (UCCT) was used to measure permanent deformation at high temperatures. Indirect Tensile Testing (IDT) was used to investigate low temperature performance. Moisture Induced Sensitivity Testing (MiST) was proposed to be an effective method for detecting the susceptibility of asphalt mixtures to moisture damage, as it incorporates major field stripping mechanisms. Sonnewarm(TM), Sasobit(TM) and Evotherm(TM) additives improved the resistance to permanent deformation of dense graded mixes at a loading rate of 0.5 percent by weight of the binder. Polymer modified mixtures showed superior resistance to permanent deformation compared to asphalt rubber in all mix types. Rediset(TM) WMX improved low temperature properties of dense graded mixes at 0.5 percent loading on the asphalt cement. Rediset LQ and Rediset WMX showed good anti-stripping properties at 0.5 percent loading on the asphalt cement. The

  10. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

    Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models leads to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This
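
    The central property exploited by Random Mixing, that weights on the unit circle preserve the covariance structure, can be checked in a few lines: for two independent zero-mean fields drawn from the same covariance model, cos(θ)·Z1 + sin(θ)·Z2 has the same (co)variance for every θ. The sketch below uses trivial white-noise "fields" purely for illustration; it is not the authors' algorithm.

        import numpy as np

        rng = np.random.default_rng(2)

        # Two independent standard-normal 'fields' with the same (here trivial)
        # covariance structure; in practice these would be conditional simulations.
        n = 100_000
        z1 = rng.standard_normal(n)
        z2 = rng.standard_normal(n)

        # Any weight pair on the unit circle preserves the variance (and, for fields
        # generated from the same covariance model, the full covariance structure).
        for theta in np.linspace(0.0, np.pi / 2, 5):
            z_mix = np.cos(theta) * z1 + np.sin(theta) * z2
            print(f"theta={theta:4.2f}  var={z_mix.var():5.3f}")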

  11. Use and abuse of mixing models (MixSIAR)

    EPA Science Inventory

    Background/Question/MethodsCharacterizing trophic links in food webs is a fundamental ecological question. In our efforts to quantify energy flow through food webs, ecologists have increasingly used mixing models to analyze biological tracer data, often from stable isotopes. Whil...

  12. A mixing timescale model for TPDF simulations of turbulent premixed flames

    DOE PAGES

    Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.; ...

    2017-02-06

    Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive

  13. A mixing timescale model for TPDF simulations of turbulent premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.

    Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive

  14. Performance of warm mix asphalt with Buton natural asphalt-rubber and zeolite as an additives

    NASA Astrophysics Data System (ADS)

    Wahjuningsih, N.; Hadiwardoyo, S. P.; Sumabrata, R. J.; Anis, M.

    2018-01-01

    The aim of this research is to help the asphalt industry decrease fuel consumption by lowering the mixing and compaction temperatures of asphalt mixtures. This technology is known as Warm Mix Asphalt (WMA). Buton Natural Asphalt Rubber (BNA-R), used as an additive, has been able to improve the performance of HMA, while zeolite functions as an additive to lower the mixing temperature. The aggregate composition follows the aggregate grading specifications for airport pavement, with BNA-R contents of 5% and 10% and a zeolite content of 2%. The resilient modulus of the mixtures was measured with the Universal Material Testing Apparatus (UMATTA) at the optimum bitumen content obtained for each mixture from the Marshall test. Furthermore, the permanent deformation of the asphalt mixtures was tested using a Wheel Tracking Machine (WTM). The results show that the BNA-R modified binder for WMA can decrease the rutting potential. These local-material additives have improved the performance of the WMA for airport pavement within certain restrictions. The research shows that the addition of BNA-R changes the resilient modulus and permanent deformation characteristics for this type of aggregate composition.

  15. Modeling molecular mixing in a spatially inhomogeneous turbulent flow

    NASA Astrophysics Data System (ADS)

    Meyer, Daniel W.; Deb, Rajdeep

    2012-02-01

    Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.
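
    The IEM model used as a baseline above has a compact particle form: each notional particle's composition relaxes toward the mean at a rate set by the scalar mixing frequency. The sketch below uses arbitrary values for the model constant, mixing frequency, and time step; it is not the velocity-conditioned (IECM) or Meyer variant.

        import numpy as np

        rng = np.random.default_rng(3)

        n_particles = 10_000
        phi = rng.choice([0.0, 1.0], size=n_particles)   # initial double-delta PDF
        c_phi, omega, dt = 2.0, 1.0, 0.01                # constant, mixing freq., step

        for step in range(200):
            # IEM: d(phi)/dt = -0.5 * c_phi * omega * (phi - <phi>)
            phi += -0.5 * c_phi * omega * (phi - phi.mean()) * dt

        # The scalar variance decays exponentially, but the PDF shape stays bimodal --
        # the shortcoming that velocity conditioning (IECM) and the model of Meyer
        # are designed to address.
        print("mean:", round(float(phi.mean()), 3), "variance:", round(float(phi.var()), 4))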

  16. Mixed models, linear dependency, and identification in age-period-cohort models.

    PubMed

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts, or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just-identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified, without introducing an additional constraint. I label this identification as statistical model identification, show how it comes about in mixed models, and show why the choice of which effects are treated as fixed and which as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
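
    The linear dependency at the heart of the identification problem (cohort = period − age) is easy to demonstrate numerically: a fixed-effects design matrix containing linear age, period, and cohort terms is rank-deficient. The grid of ages and periods in the sketch below is hypothetical.

        import numpy as np

        # Hypothetical panel of ages and periods; cohort is determined exactly.
        age = np.repeat(np.arange(20, 60, 5), 10)
        period = np.tile(np.arange(1980, 2030, 5), 8)
        cohort = period - age

        # Fixed-effects design with intercept and the three linear terms.
        X = np.column_stack([np.ones_like(age), age, period, cohort])

        print("columns:", X.shape[1], " rank:", np.linalg.matrix_rank(X))
        # rank is 3 < 4: no unique least-squares solution without a constraint,
        # which is why a single just-identifying constraint (or treating some
        # effects as random) is needed.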

  17. Development of a nonlocal convective mixing scheme with varying upward mixing rates for use in air quality and chemical transport models.

    PubMed

    Mihailović, Dragutin T; Alapaty, Kiran; Sakradzija, Mirjana

    2008-06-01

    An asymmetrical convective non-local scheme (CON) with varying upward mixing rates is developed for simulating vertical turbulent mixing in the convective boundary layer in air quality and chemical transport models. The upward mixing rate from the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in the layer, while the downward mixing rates are derived from mass conservation. This scheme provides a less rapid mass transport out of the surface layer into other layers than other asymmetrical convective mixing schemes. In this paper, we studied the performance of this nonlocal convective mixing scheme with varying upward mixing rates in the atmospheric boundary layer and its impact on the concentrations of pollutants calculated with chemical and air-quality models. The scheme was additionally compared with a local eddy-diffusivity scheme (KSC). Simulated concentrations of NO(2) and nitrate wet deposition obtained with the CON scheme are in general higher and closer to the observations than those obtained with the KSC scheme (of the order of 15-20%). To examine the performance of the scheme, simulated and measured concentrations of a pollutant (NO(2)) and nitrate wet deposition were compared for the year 2002. The comparison was made over the whole domain used in simulations performed with the chemical European Monitoring and Evaluation Programme Unified model (version UNI-ACID, rv2.0), into which both schemes were incorporated.

  18. Modelling of upper ocean mixing by wave-induced turbulence

    NASA Astrophysics Data System (ADS)

    Ghantous, Malek; Babanin, Alexander

    2013-04-01

    Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been a lot of interest in introducing this mechanism into models, and real gains have been made in terms of increased fidelity to observational data. However our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing models need refinement and propose an alternative model. We use two of the models to demonstrate the effect on the mixed layer of wave-induced turbulence by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.

  19. Estimation of the linear mixed integrated Ornstein–Uhlenbeck model

    PubMed Central

    Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate

    2017-01-01

    The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
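
    As a rough illustration of the process being added to the random effects, the sketch below simulates an Ornstein–Uhlenbeck path by Euler discretization and integrates it to obtain an IOU path; the parameter values are arbitrary and this is not the Stata implementation described in the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        # Euler simulation of an Ornstein-Uhlenbeck process w(t) and its time
        # integral, the IOU process; alpha and sigma values are arbitrary.
        alpha, sigma, dt, n = 0.5, 1.0, 0.01, 20_000
        w = np.zeros(n)
        for t in range(1, n):
            w[t] = (w[t - 1] - alpha * w[t - 1] * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
        iou = np.cumsum(w) * dt      # integrated OU: the derivative-tracking component

        # The stationary OU variance should be close to sigma**2 / (2 * alpha) = 1.0.
        print("empirical OU variance:", round(float(w[n // 2:].var()), 2))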

  20. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
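
    The basic building block of such fixed-knot splines, the truncated power basis, is short to write down. The sketch below constructs a cubic version with arbitrary knot locations; it does not implement the paper's varying-polynomial-order reparameterization, only the familiar basis that the constrained model builds on.

        import numpy as np

        def truncated_power_basis(x, knots, degree=3):
            """Fixed-knot regression spline basis: global polynomial terms plus
            truncated power terms (x - k)_+^degree at each knot."""
            cols = [x**d for d in range(degree + 1)]
            cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
            return np.column_stack(cols)

        x = np.linspace(0.0, 24.0, 200)            # e.g., hours in a 24-hour profile
        X = truncated_power_basis(x, knots=[6.0, 12.0, 18.0])
        print("design matrix shape:", X.shape)     # (200, 4 polynomial + 3 knot columns)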

  1. On the coalescence-dispersion modeling of turbulent molecular mixing

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Kosaly, George

    1987-01-01

    The general coalescence-dispersion (C/D) closure provides phenomenological modeling of turbulent molecular mixing. The models of Curl and of Dopazo and O'Brien appear as two limiting C/D models that bracket the range of results one can obtain with various models. This finding is used to investigate the sensitivity of the results to the choice of the model. Inert scalar mixing is found to be less model-sensitive than mixing accompanied by chemical reaction. The infinitely fast chemistry approximation is used to relate the C/D approach to Toor's earlier results. Pure mixing and infinite rate chemistry calculations are compared to further study a recent result of Hsieh and O'Brien, who found that higher concentration moments are not sensitive to chemistry.
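
    Curl's model, one of the two limiting C/D closures discussed above, has a simple particle interpretation: repeatedly pick a random pair of notional particles and replace both compositions by their average. The sketch below applies this to an initially unmixed (double-delta) scalar; the particle count and number of mixing events are arbitrary.

        import numpy as np

        rng = np.random.default_rng(5)

        n = 5_000
        phi = rng.choice([0.0, 1.0], size=n)     # unmixed double-delta initial PDF

        # Curl-type coalescence-dispersion: random pairs coalesce and redisperse
        # with equal (averaged) compositions, a few events per particle.
        for event in range(2 * n):
            i, j = rng.integers(0, n, size=2)
            m = 0.5 * (phi[i] + phi[j])
            phi[i] = phi[j] = m

        # The mean is conserved while the scalar variance decays.
        print("mean:", round(float(phi.mean()), 3), "variance:", round(float(phi.var()), 4))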

  2. Analysis of the type II robotic mixed-model assembly line balancing problem

    NASA Astrophysics Data System (ADS)

    Çil, Zeynel Abidin; Mete, Süleyman; Ağpak, Kürşad

    2017-06-01

    In recent years, there has been an increasing trend towards using robots in production systems. Robots are used in different areas such as packaging, transportation, loading/unloading and especially assembly lines. One important step in taking advantage of robots on the assembly line is considering them while balancing the line. On the other hand, market conditions have increased the importance of mixed-model assembly lines. Therefore, in this article, the robotic mixed-model assembly line balancing problem is studied. The aim of this study is to develop a new efficient heuristic algorithm based on beam search in order to minimize the sum of cycle times over all models. In addition, mathematical models of the problem are presented for comparison. The proposed heuristic is tested on benchmark problems and compared with the optimal solutions. The results show that the algorithm is very competitive and is a promising tool for further research.

  3. Mixing behavior of a model cellulosic biomass slurry during settling and resuspension

    DOE PAGES

    Crawford, Nathan C.; Sprague, Michael A.; Stickel, Jonathan J.

    2016-01-29

    Thorough mixing during biochemical deconstruction of biomass is crucial for achieving maximum process yields and economic success. However, due to the complex morphology and surface chemistry of biomass particles, biomass mixing is challenging and currently not well understood. This study investigates the bulk rheology of negatively buoyant, non-Brownian α-cellulose particles during settling and resuspension. The torque signal of a vane mixer across two distinct experimental setups (vane-in-cup and vane-in-beaker) was used to understand how mixing conditions affect the distribution of biomass particles. During experimentation, a bifurcated torque response as a function of vane speed was observed, indicating that the slurry transitions from a “settling-dominant” regime to a “suspension-dominant” regime. The torque response of well-characterized fluids (i.e., DI water) was then used to empirically identify when sufficient mixing turbulence was established in each experimental setup. The predicted critical mixing speeds were in agreement with measured values, suggesting that secondary flows are required in order to keep the cellulose particles fully suspended. In addition, a simple scaling relationship was developed to model the entire torque signal of the slurry throughout settling and resuspension. Furthermore, qualitative and semi-quantitative agreement between the model and experimental results was observed.

  4. Modeling populations of rotationally mixed massive stars

    NASA Astrophysics Data System (ADS)

    Brott, I.

    2011-02-01

    Massive stars can be considered as cosmic engines. With their high luminosities, strong stellar winds and violent deaths they drive the evolution of galaxies throughout the history of the universe. Despite the importance of massive stars, their evolution is still poorly understood. Two major issues have plagued evolutionary models of massive stars until today: mixing and mass loss. Because the effects of mass loss remain limited on the main sequence in the considered mass and metallicity range, this thesis concentrates on the role of mixing in massive stars. It approaches this problem at the crossroads between observations and simulations. The main question is whether evolutionary models of single stars, accounting for the effects of rotation, reproduce the observed properties of real stars; in particular, whether the evolutionary models can reproduce the surface abundance changes during the main-sequence phase. To constrain our models we build a population synthesis model for the sample of the VLT-FLAMES Survey of Massive Stars, for which the star-formation history and rotational velocity distribution are well constrained. We consider the four main regions of the Hunter diagram: nitrogen-unenriched slow rotators and nitrogen-enriched fast rotators, which are predicted by theory, and nitrogen-enriched slow rotators and nitrogen-unenriched fast rotators, which are not predicted by our model. We conclude that currently these comparisons are not sufficient to verify the theory of rotational mixing. Physical processes in addition to rotational mixing appear necessary to explain the stars in the latter two regions. The chapters of this thesis have been published in the following journals: Ch. 2: ``Rotating Massive Main-Sequence Stars I: Grids of Evolutionary Models and Isochrones'', I. Brott, S. E. de Mink, M. Cantiello, N. Langer, A. de Koter, C. J. Evans, I. Hunter, C. Trundle, J.S. Vink, submitted to Astronomy & Astrophysics. Ch. 3: ``The VLT-FLAMES Survey of Massive

  5. Prediction of stock markets by the evolutionary mix-game model

    NASA Astrophysics Data System (ADS)

    Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping

    2008-06-01

    This paper presents the effort of using the evolutionary mix-game model, which is a modified form of the agent-based mix-game model, to predict financial time series. We apply three methods to improve the original mix-game model by adding strategy-evolution abilities to the agents, and then apply the new model, referred to as the evolutionary mix-game model, to forecast the Shanghai Stock Exchange Composite Index. The results show that these modifications can greatly improve the accuracy of prediction when proper parameters are chosen.

  6. Diagnostic tools for mixing models of stream water chemistry

    USGS Publications Warehouse

    Hooper, Richard P.

    2003-01-01

    Mixing models provide a useful null hypothesis against which to evaluate processes controlling stream water chemical data. Because conservative mixing of end‐members with constant concentration is a linear process, a number of simple mathematical and multivariate statistical methods can be applied to this problem. Although mixing models have been most typically used in the context of mixing soil and groundwater end‐members, an extension of the mathematics of mixing models is presented that assesses the “fit” of a multivariate data set to a lower dimensional mixing subspace without the need for explicitly identified end‐members. Diagnostic tools are developed to determine the approximate rank of the data set and to assess lack of fit of the data. This permits identification of processes that violate the assumptions of the mixing model and can suggest the dominant processes controlling stream water chemical variation. These same diagnostic tools can be used to assess the fit of the chemistry of one site into the mixing subspace of a different site, thereby permitting an assessment of the consistency of controlling end‐members across sites. This technique is applied to a number of sites at the Panola Mountain Research Watershed located near Atlanta, Georgia.
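
    The rank-diagnostic idea can be sketched with synthetic data in place of the Panola Mountain records: generate stream chemistry by conservatively mixing a few end-members, then check how much variance successive low-rank projections explain. Everything below (number of end-members, solutes, noise level) is an assumption made for illustration.

        import numpy as np

        rng = np.random.default_rng(6)

        # Synthetic stream chemistry: 300 samples of 6 solutes generated from
        # conservative mixing of 3 end-members, plus analytical noise.
        endmembers = rng.uniform(0.0, 100.0, size=(3, 6))
        fractions = rng.dirichlet(np.ones(3), size=300)
        C = fractions @ endmembers + rng.normal(0.0, 1.0, size=(300, 6))

        # Standardize, then examine how much variance low-rank projections explain.
        Z = (C - C.mean(0)) / C.std(0)
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        explained = np.cumsum(s**2) / np.sum(s**2)
        print("cumulative variance explained by rank 1..4:", np.round(explained[:4], 3))
        # A sharp flattening after rank 2 (= 3 end-members - 1) is consistent with
        # conservative mixing; structure in the residuals would indicate processes
        # violating the mixing-model assumptions.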

  7. Quantifying spatial distribution of spurious mixing in ocean models.

    PubMed

    Ilıcak, Mehmet

    2016-12-01

    Numerical mixing is inevitable for ocean models due to tracer advection schemes. Until now, there has been no robust way to identify the regions of spurious mixing in ocean models. We propose a new method to compute the spatial distribution of the spurious diapycnal mixing in an ocean model. This new method is an extension of the available potential energy density method proposed by Winters and Barkan (2013). We test the new method in lock-exchange and baroclinic eddies test cases. We can quantify the amount and the location of numerical mixing. We find that high-shear areas are the main regions susceptible to numerical truncation errors. We also test the new method to quantify the numerical mixing under different horizontal momentum closures. We conclude that the Smagorinsky viscosity produces less numerical mixing than the Leith viscosity using the same non-dimensional constant.

  8. Modeling additive and non-additive effects in a hybrid population using genome-wide genotyping: prediction accuracy implications

    PubMed Central

    Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph

    2016-01-01

    Hybrids are broadly used in plant breeding and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and test their prediction accuracy for the genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760
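
    The marker-based additive relationship matrix used by the "A" models above is commonly built along VanRaden's lines: center the genotype matrix by expected allele dosages and scale by the sum of locus variances. The sketch below simulates genotypes for a hypothetical population; the dominance and epistatic relationship matrices also used in the study are not shown.

        import numpy as np

        rng = np.random.default_rng(7)

        # Simulated SNP genotypes (0/1/2 minor-allele counts): 200 individuals
        # x 1000 markers, with arbitrary allele frequencies.
        p = rng.uniform(0.05, 0.5, size=1000)
        M = rng.binomial(2, p, size=(200, 1000)).astype(float)

        # Additive (VanRaden-type) marker-based relationship matrix, the kind of
        # matrix a genomic mixed model would use in place of the pedigree matrix.
        Z = M - 2.0 * p                      # center by expected allele dosage
        G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

        print("G shape:", G.shape, " mean diagonal:", round(float(np.diag(G).mean()), 3))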

  9. Mixed models and reduced/selective integration displacement models for nonlinear analysis of curved beams

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Peters, J. M.

    1981-01-01

    Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of the nonlinear deep arch theory with the effects of transverse shear deformation included. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.

  10. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    NASA Technical Reports Server (NTRS)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.

  11. The Toxic Effects of Cigarette Additives. Philip Morris' Project Mix Reconsidered: An Analysis of Documents Released through Litigation

    PubMed Central

    Wertz, Marcia S.; Kyriss, Thomas; Paranjape, Suman; Glantz, Stanton A.

    2011-01-01

    Background: In 2009, the promulgation of US Food and Drug Administration (FDA) tobacco regulation focused attention on cigarette flavor additives. The tobacco industry had prepared for this eventuality by initiating a research program focusing on additive toxicity. The objective of this study was to analyze Philip Morris' Project MIX as a case study of tobacco industry scientific research being positioned strategically to prevent anticipated tobacco control regulations. Methods and Findings: We analyzed previously secret tobacco industry documents to identify internal strategies for research on cigarette additives and reanalyzed tobacco industry peer-reviewed published results of this research. We focused on the key group of studies conducted by Philip Morris in a coordinated effort known as “Project MIX.” Documents showed that Project MIX subsumed the study of various combinations of 333 cigarette additives. In addition to multiple internal reports, this work also led to four peer-reviewed publications (published in 2001). These papers concluded that there was no evidence of substantial toxicity attributable to the cigarette additives studied. Internal documents revealed post hoc changes in analytical protocols after initial statistical findings indicated an additive-associated increase in cigarette toxicity as well as increased total particulate matter (TPM) concentrations in additive-modified cigarette smoke. By expressing the data adjusted by TPM concentration, the published papers obscured this underlying toxicity and particulate increase. The animal toxicology results were based on a small number of rats in each experiment, raising the possibility that the failure to detect statistically significant changes in the end points was due to underpowering the experiments rather than lack of a real effect. Conclusion: The case study of Project MIX shows tobacco industry scientific research on the use of cigarette additives cannot be taken at face value. The

  12. Reduction of spalling in mixed metal oxide desulfurization sorbents by addition of a large promoter metal oxide

    DOEpatents

    Poston, J.A.

    1997-12-02

    Mixed metal oxide pellets for removing hydrogen sulfide from fuel gas mixes derived from coal are stabilized for operation over repeated cycles of desulfurization and regeneration reactions by addition of a large promoter metal oxide such as lanthanum trioxide. The pellets, which may be principally made up of a mixed metal oxide such as zinc titanate, exhibit physical stability and lack of spalling or decrepitation over repeated cycles without loss of reactivity. The lanthanum oxide is mixed with pellet-forming components in an amount of 1 to 10 weight percent.

  13. Reduction of spalling in mixed metal oxide desulfurization sorbents by addition of a large promoter metal oxide

    DOEpatents

    Poston, James A.

    1997-01-01

    Mixed metal oxide pellets for removing hydrogen sulfide from fuel gas mixes derived from coal are stabilized for operation over repeated cycles of desulfurization and regeneration reactions by addition of a large promoter metal oxide such as lanthanum trioxide. The pellets, which may be principally made up of a mixed metal oxide such as zinc titanate, exhibit physical stability and lack of spalling or decrepitation over repeated cycles without loss of reactivity. The lanthanum oxide is mixed with pellet-forming components in an amount of 1 to 10 weight percent.

  14. Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage

    ERIC Educational Resources Information Center

    Galyardt, April

    2012-01-01

    This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…

  15. Two-length-scale turbulence model for self-similar buoyancy-, shock-, and shear-driven mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, Brandon E.; Schilling, Oleg; Hartland, Tucker A.

    The three-equation k-L-a turbulence model [B. Morgan and M. Wickett, "Three-equation model for the self-similar growth of Rayleigh-Taylor and Richtmyer-Meshkov instabilities," Phys. Rev. E 91 (2015)] is extended by the addition of a second length scale equation. It is shown that the separation of turbulence transport and turbulence destruction length scales is necessary for simultaneous prediction of the growth parameter and turbulence intensity of a Kelvin-Helmholtz shear layer when model coefficients are constrained by similarity analysis. Constraints on model coefficients are derived that satisfy an ansatz of self-similarity in the low-Atwood-number limit and allow the determination of the model coefficients necessary to recover expected experimental behavior. The model is then applied in one-dimensional simulations of Rayleigh-Taylor, reshocked Richtmyer-Meshkov, Kelvin-Helmholtz, and combined Rayleigh-Taylor/Kelvin-Helmholtz instability mixing layers to demonstrate that the expected growth rates are recovered numerically. Finally, it is shown that the model behavior in the case of combined instability is to predict a mixing width that is a linear combination of Rayleigh-Taylor and Kelvin-Helmholtz mixing processes.

  16. Two-length-scale turbulence model for self-similar buoyancy-, shock-, and shear-driven mixing

    DOE PAGES

    Morgan, Brandon E.; Schilling, Oleg; Hartland, Tucker A.

    2018-01-10

    The three-equation k-L-a turbulence model [B. Morgan and M. Wickett, "Three-equation model for the self-similar growth of Rayleigh-Taylor and Richtmyer-Meshkov instabilities," Phys. Rev. E 91 (2015)] is extended by the addition of a second length scale equation. It is shown that the separation of turbulence transport and turbulence destruction length scales is necessary for simultaneous prediction of the growth parameter and turbulence intensity of a Kelvin-Helmholtz shear layer when model coefficients are constrained by similarity analysis. Constraints on model coefficients are derived that satisfy an ansatz of self-similarity in the low-Atwood-number limit and allow the determination of the model coefficients necessary to recover expected experimental behavior. The model is then applied in one-dimensional simulations of Rayleigh-Taylor, reshocked Richtmyer-Meshkov, Kelvin-Helmholtz, and combined Rayleigh-Taylor/Kelvin-Helmholtz instability mixing layers to demonstrate that the expected growth rates are recovered numerically. Finally, it is shown that the model behavior in the case of combined instability is to predict a mixing width that is a linear combination of Rayleigh-Taylor and Kelvin-Helmholtz mixing processes.

  17. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Kiyotaka, Shibata; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.

  18. Application of mixing-controlled combustion models to gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung Lee

    1990-01-01

    Gas emissions from a staged Rich Burn/Quick-Quench Mix/Lean Burn combustor were studied under test conditions encountered in High Speed Research engines. The combustor was modeled at conditions corresponding to different engine power settings, and the effect of the primary dilution airflow split on emissions, flow field, flame size and shape, and combustion intensity, as well as mixing, was investigated. A mathematical model was developed from a two-equation model of turbulence, a quasi-global kinetics mechanism for the oxidation of propane, and the Zeldovich mechanism for nitric oxide formation. A mixing-controlled combustion model was used to account for turbulent mixing effects on the chemical reaction rate. This model assumes that the chemical reaction rate is much faster than the turbulent mixing rate.
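
    A generic mixing-limited (eddy-breakup type) closure of the kind referred to above can be written in one line: the reaction rate is proportional to the turbulent mixing frequency ε/k and the deficient reactant. The constant and the flow state below are arbitrary, and this is not necessarily the exact formulation used in the report.

        import numpy as np

        def eddy_breakup_rate(rho, k, eps, y_fuel, y_ox, s, a=4.0):
            """Generic mixing-limited (eddy-breakup type) reaction rate:
            proportional to the turbulent mixing frequency eps/k and the
            deficient reactant mass fraction (s = stoichiometric oxidizer/fuel ratio)."""
            return a * rho * (eps / k) * np.minimum(y_fuel, y_ox / s)

        # Arbitrary illustrative state: fuel-lean pocket, so fuel is the limiting reactant.
        print(eddy_breakup_rate(rho=1.1, k=20.0, eps=400.0, y_fuel=0.01, y_ox=0.2, s=15.6))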

  19. Dark matter and electroweak phase transition in the mixed scalar dark matter model

    NASA Astrophysics Data System (ADS)

    Liu, Xuewen; Bian, Ligong

    2018-03-01

    We study the electroweak phase transition in the framework of the scalar singlet-doublet mixed dark matter model, in which the particle dark matter candidate is the lightest neutral Higgs that comprises the CP-even component of the inert doublet and a singlet scalar. The dark matter can be dominated by the inert doublet or singlet scalar depending on the mixing. We present several benchmark models to investigate the two situations after imposing several theoretical and experimental constraints. An additional singlet scalar and the inert doublet drive the electroweak phase transition to be strongly first order. A strong first-order electroweak phase transition and a viable dark matter candidate can be accomplished in two benchmark models simultaneously, for which a proper mass splitting among the neutral and charged Higgs masses is needed.

  20. A Turbulence model taking into account the longitudinal flow inhomogeneity in mixing layers and jets

    NASA Astrophysics Data System (ADS)

    Troshin, A. I.

    2017-06-01

    The problem of potential core length overestimation of subsonic free jets by Reynolds-averaged Navier-Stokes (RANS) based turbulence models is addressed. It is shown that the issue is due to incorrect velocity profile modeling of the jet mixing layers. An additional source term in the ω equation is proposed which takes into account the effect of longitudinal flow inhomogeneity on turbulence in mixing layers. Computations confirm that the modified Speziale-Sarkar-Gatski/Launder-Reece-Rodi-omega (SSG/LRR-ω) turbulence model correctly predicts the mean velocity profiles in both the initial and far-field regions of a subsonic free plane jet as well as the centerline velocity decay rate.

  1. On Local Homogeneity and Stochastically Ordered Mixed Rasch Models

    ERIC Educational Resources Information Center

    Kreiner, Svend; Hansen, Mogens; Hansen, Carsten Rosenberg

    2006-01-01

    Mixed Rasch models add latent classes to conventional Rasch models, assuming that the Rasch model applies within each class and that relative difficulties of items are different in two or more latent classes. This article considers a family of stochastically ordered mixed Rasch models, with ordinal latent classes characterized by increasing total…

  2. Thermal Stability of Nanocrystalline Alloys by Solute Additions and A Thermodynamic Modeling

    NASA Astrophysics Data System (ADS)

    Saber, Mostafa

    and the α → γ phase transformation in Fe-Ni-Zr alloys. In addition to the experimental study of thermal stabilization of nanocrystalline Fe-Cr-Zr or Fe-Ni-Zr alloys, the thesis presented here developed a new predictive model, applicable to strongly segregating solutes, for thermodynamic stabilization of binary alloys. This model can serve as a benchmark for selecting solutes and evaluating their possible contribution to stabilization. Following a regular solution model, both the chemical and elastic strain energy contributions are combined to obtain the mixing enthalpy. The total Gibbs free energy of mixing is then minimized with respect to simultaneous variations in the grain boundary volume fraction and the solute concentration in the grain boundary and the grain interior. The Lagrange multiplier method was used to obtain numerical solutions. Applications are given for the temperature dependence of the grain size and the grain boundary solute excess for selected binary systems where experimental results imply that thermodynamic stabilization could be operative. This thesis also extends the binary model to a new model for thermodynamic stabilization of ternary nanocrystalline alloys. It is applicable to strongly segregating size-misfit solutes and uses input data available in the literature. In the same manner as the binary model, this model is based on a regular solution approach such that the chemical and elastic strain energy contributions are incorporated into the mixing enthalpy ΔHmix, and the mixing entropy ΔSmix is obtained using the ideal solution approximation. The Gibbs mixing free energy ΔGmix is then minimized with respect to simultaneous variations in grain growth and solute segregation parameters. The Lagrange multiplier method is similarly used to obtain numerical solutions for the minimum ΔGmix. The temperature dependence of the nanocrystalline grain size and interfacial solute excess can be obtained for selected ternary systems. As

  3. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
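
    As a rough illustration of the piecewise-exponential idea described above, the sketch below "explodes" a toy survival data set into time pieces and fits the resulting event indicators with a Poisson GLM and a log-exposure offset in Python. It is not the paper's %PCFrailty SAS macro, all variable names and data are made up, and the log-normal frailty itself would enter as a normally distributed random intercept per cluster, which requires a mixed-model Poisson fitter.

      # Piecewise-exponential survival fit as a Poisson GLM with an offset,
      # after "exploding" each subject's follow-up time into pieces.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      def explode(df, cuts):
          """Split each subject's follow-up into the intervals defined by `cuts`."""
          rows = []
          for _, r in df.iterrows():
              for j, (a, b) in enumerate(zip(cuts[:-1], cuts[1:])):
                  if r["time"] <= a:
                      break
                  exposure = min(r["time"], b) - a
                  event = int(r["event"] == 1 and a < r["time"] <= b)
                  rows.append({"id": r["id"], "piece": j, "x": r["x"],
                               "exposure": exposure, "event": event})
          return pd.DataFrame(rows)

      # toy data: exponential survival times, random censoring, one covariate
      rng = np.random.default_rng(1)
      n = 200
      x = rng.binomial(1, 0.5, n)
      t = rng.exponential(1.0 / np.exp(0.5 * x))
      c = rng.exponential(1.5, n)
      df = pd.DataFrame({"id": np.arange(n), "x": x,
                         "time": np.minimum(t, c), "event": (t <= c).astype(int)})

      cuts = np.concatenate(([0.0], np.quantile(df["time"], [0.25, 0.5, 0.75, 1.0])))
      long = explode(df, cuts)
      X = pd.get_dummies(long["piece"], prefix="piece", dtype=float)
      X["x"] = long["x"]
      fit = sm.GLM(long["event"], X, family=sm.families.Poisson(),
                   offset=np.log(long["exposure"])).fit()
      print(fit.params)  # piecewise log-baseline-hazard levels and covariate effect
      # A log-normal frailty would be a per-cluster normal random intercept here.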

  4. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Ancestral haplotype-based association mapping with generalized linear mixed models accounting for stratification.

    PubMed

    Zhang, Z; Guillaume, F; Sartelet, A; Charlier, C; Georges, M; Farnir, F; Druet, T

    2012-10-01

    In many situations, genome-wide association studies are performed in populations presenting stratification. Mixed models including a kinship matrix accounting for genetic relatedness among individuals have been shown to correct for population and/or family structure. Here we extend this methodology to generalized linear mixed models which properly model data under various distributions. In addition we perform association with ancestral haplotypes inferred using a hidden Markov model. The method was shown to properly account for stratification under various simulated scenarios presenting population and/or family structure. Use of ancestral haplotypes resulted in higher power than SNPs on simulated datasets. Application to real data demonstrates the usefulness of the developed model. Full analysis of a dataset with 4600 individuals and 500 000 SNPs was performed in 2 h 36 min and required 2.28 Gb of RAM. The software GLASCOW can be freely downloaded from www.giga.ulg.ac.be/jcms/prod_381171/software. francois.guillaume@jouy.inra.fr Supplementary data are available at Bioinformatics online.

  6. A hybrid probabilistic/spectral model of scalar mixing

    NASA Astrophysics Data System (ADS)

    Vaithianathan, T.; Collins, Lance

    2002-11-01

    In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure to the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent "transfer" while scalar exchanges between particles represent "mixing." The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (or EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian, as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts that the scalar gradient distribution (which is available in this representation) approaches a log-normal form at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.

  7. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    NASA Astrophysics Data System (ADS)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends and rigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
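
    The core of such a BMC mixing model can be sketched in a few lines: draw candidate end-member compositions and mixing fractions, keep the draws that reproduce the observed mixture within its uncertainty, and summarize the retained fractions. The Python sketch below uses made-up end-member values for a three-source, two-tracer problem and illustrates the general approach only; it is not the authors' published code.

      # Illustrative Bayesian Monte Carlo mixing sketch with invented numbers.
      import numpy as np

      rng = np.random.default_rng(0)
      src_mean = np.array([[-20.0, -150.0],   # source A: [d18O, dD]
                           [-12.0,  -90.0],   # source B
                           [-16.0, -120.0]])  # source C
      src_sd = np.array([[0.5, 4.0],
                         [0.5, 4.0],
                         [0.5, 4.0]])
      obs = np.array([-15.5, -116.0])         # observed mixture
      obs_sd = np.array([0.3, 2.0])

      n_draws = 200_000
      frac = rng.dirichlet(np.ones(3), size=n_draws)            # candidate fractions
      ends = rng.normal(src_mean, src_sd, size=(n_draws, 3, 2))  # uncertain end-members
      pred = np.einsum("ns,nst->nt", frac, ends)                 # predicted mixtures
      keep = np.all(np.abs(pred - obs) < 2.0 * obs_sd, axis=1)   # crude acceptance rule

      post = frac[keep]
      print("accepted draws:", post.shape[0])
      print("posterior mean fractions:", post.mean(axis=0))
      print("2.5/97.5 percentiles:\n", np.percentile(post, [2.5, 97.5], axis=0))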

  8. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models previously not

  9. A Parameter Subset Selection Algorithm for Mixed-Effects Models

    DOE PAGES

    Schmidt, Kathleen L.; Smith, Ralph C.

    2016-01-01

    Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.
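
    One common way to order parameters by their influence, which conveys the flavor of such a subset-selection step, is to build a finite-difference sensitivity matrix of the model output and rank parameters with a column-pivoted QR factorization. The sketch below does this for a toy exponential-decay model; it illustrates the general idea only and may differ from the published PSS algorithm.

      # Hedged sketch: rank parameters of a toy model by identifiability using
      # a finite-difference sensitivity matrix and column-pivoted QR.
      import numpy as np
      from scipy.linalg import qr

      def model(theta, t):
          # toy exponential-decay model: y = a * exp(-b t) + c
          a, b, c = theta
          return a * np.exp(-b * t) + c

      def sensitivity_matrix(theta, t, h=1e-6):
          y0 = model(theta, t)
          S = np.empty((t.size, len(theta)))
          for j in range(len(theta)):
              tp = np.array(theta, dtype=float)
              tp[j] += h * max(1.0, abs(tp[j]))
              S[:, j] = (model(tp, t) - y0) / (tp[j] - theta[j])
          return S

      t = np.linspace(0.0, 5.0, 50)
      theta = np.array([2.0, 1.3, 0.5])
      S = sensitivity_matrix(theta, t)

      # Column-pivoted QR orders columns (parameters) from most to least
      # independently informative given this data design.
      _, _, piv = qr(S, pivoting=True)
      print("parameter ranking (most to least identifiable):", piv)
      print("singular values of S:", np.linalg.svd(S, compute_uv=False))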

  10. MRMAide: a mixed resolution modeling aide

    NASA Astrophysics Data System (ADS)

    Treshansky, Allyn; McGraw, Robert M.

    2002-07-01

    The Mixed Resolution Modeling Aide (MRMAide) technology is an effort to semi-automate the implementation of Mixed Resolution Modeling (MRM). MRMAide suggests ways of resolving differences in fidelity and resolution across diverse modeling paradigms. The goal of MRMAide is to provide a technology that will allow developers to incorporate model components into scenarios other than those for which they were designed. Currently, MRM is implemented by hand. This is a tedious, error-prone, and non-portable process. MRMAide, in contrast, will automatically suggest to a developer where and how to connect different components and/or simulations. MRMAide has three phases of operation: pre-processing, data abstraction, and validation. During pre-processing the components to be linked together are evaluated in order to identify appropriate mapping points. During data abstraction those mapping points are linked via data abstraction algorithms. During validation developers receive feedback regarding their newly created models relative to existing baselined models. The current work presents an overview of the various problems encountered during MRM and the various technologies utilized by MRMAide to overcome those problems.

  11. Modeling and Analysis of Mixed Synchronous/Asynchronous Systems

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan

    2012-01-01

    Practical safety-critical distributed systems must integrate safety-critical and non-critical data in a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted at capturing mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract, but representative, test specimen system was created as the system to be modeled.

  12. MANOVA vs nonlinear mixed effects modeling: The comparison of growth patterns of female and male quail

    NASA Astrophysics Data System (ADS)

    Gürcan, Eser Kemal

    2017-04-01

    The most commonly used methods for analyzing time-dependent data are multivariate analysis of variance (MANOVA) and nonlinear regression models. The aim of this study was to compare some MANOVA techniques and a nonlinear mixed modeling approach for investigating growth differentiation in female and male Japanese quail. Weekly individual body weight data of 352 male and 335 female quail from hatch to 8 weeks of age were used for the analyses. When all the analyses are evaluated, nonlinear mixed modeling proves superior to the other techniques because it also reveals the individual variation. In addition, the profile analysis also provides important information.
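
    To make the modeling contrast concrete, the sketch below takes a simple two-stage shortcut to the nonlinear mixed-effects idea: fit a Gompertz growth curve to each (simulated) bird with scipy, then summarize the individual parameters by sex. A genuine nonlinear mixed-effects fit would estimate the fixed effects and random individual effects jointly; the data and parameter values here are invented for illustration.

      # Two-stage sketch of individual growth-curve variation by sex.
      import numpy as np
      from scipy.optimize import curve_fit

      def gompertz(t, A, b, k):
          return A * np.exp(-b * np.exp(-k * t))

      rng = np.random.default_rng(42)
      weeks = np.arange(0, 9)
      birds = []
      for sex, (A_mu, n_birds) in {"F": (210.0, 30), "M": (185.0, 30)}.items():
          for _ in range(n_birds):
              A_i = A_mu + rng.normal(0.0, 12.0)        # individual asymptote
              y = gompertz(weeks, A_i, 4.0, 0.55) + rng.normal(0.0, 4.0, weeks.size)
              birds.append((sex, y))

      params = {"F": [], "M": []}
      for sex, y in birds:
          est, _ = curve_fit(gompertz, weeks, y, p0=[200.0, 4.0, 0.5], maxfev=5000)
          params[sex].append(est)

      for sex in ("F", "M"):
          p = np.array(params[sex])
          print(sex, "mean (A, b, k):", p.mean(axis=0),
                " SD of asymptote A:", p[:, 0].std())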

  13. Additive or non-additive effect of mixing oak in pine stands on soil properties depends on the tree species in Mediterranean forests.

    PubMed

    Brunel, Caroline; Gros, Raphael; Ziarelli, Fabio; Farnet Da Silva, Anne Marie

    2017-07-15

    This study investigated how oak abundance in pine stands (using relative Oak Basal Area %, OBA%) may modulate soil microbial functioning. Forests were composed of sclerophyllous species, i.e., Quercus ilex mixed with Pinus halepensis Miller or Q. pubescens mixed with P. sylvestris. We used a series of plots with OBA% ranging from 0 to 100% in the two types of stand (n=60), and both OLF and A-horizon compartments were analysed. Relations between OBA% and either soil chemical (C and N contents, quality of organic matter via solid-state NMR, pH, CaCO3) or microbial (enzyme activities, basal respiration, biomass and catabolic diversity via BIOLOG) characteristics were described. An OBA% increase led to a decrease in the recalcitrant fraction of organic matter (OM) in OLF and promoted microbial growth. Catabolic profiles of microbial communities from the A-horizon were significantly modulated in Q. ilex and P. halepensis stands by OBA% and the alkyl C to carboxyl C ratio (characteristic of cutin from Q. ilex tissues) and, in Q. pubescens and P. sylvestris stands, by OBA% and pH. In the A-horizon under Q. ilex and P. halepensis stands, linear regressions were found between catabolic diversity, microbial biomass, and OBA%, suggesting an additive effect. Conversely, in the A-horizon of Q. pubescens and P. sylvestris stands, the relationship between OBA% and either cellulase activities, polysaccharide or ammonium contents suggested a non-additive effect of Q. pubescens and P. sylvestris, enhancing mineralization of the OM labile fraction for plots characterized by an OBA% ranging from 40% to 60%. Mixing oak with pine thus favored microbial dynamics in both types of stands, though the OBA% imprint varied with tree species, and consequently sustainable soil functioning depends strongly on the composition of mixed stands. Our study indeed revealed that, when evaluating the benefits of mixed forest stands on soil microbial functioning and OM turnover, the identity of tree species has to be considered

  14. Bayesian stable isotope mixing models

    EPA Science Inventory

    In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...

  15. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution

  16. Modeling the interplay between sea ice formation and the oceanic mixed layer: Limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-02-01

    The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.

  17. Modelling the interplay between sea ice formation and the oceanic mixed layer: limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-04-01

    The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.

  18. Box-Cox Mixed Logit Model for Travel Behavior Analysis

    NASA Astrophysics Data System (ADS)

    Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.

    2010-09-01

    To represent the behavior of travelers when they are deciding how to get to their destination, discrete choice models, based on random utility theory, have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s, and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model developed by the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficient distribution. The probability of choosing an alternative is an integral that is calculated by simulation. The estimation of the model is carried out by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models that are inconsistent with real behavior have been studied with simulation experiments.
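
    The estimation idea can be illustrated with a minimal binary mixed logit in which a cost attribute enters through a Box-Cox transform and a travel-time coefficient is random. The Python sketch below simulates data, approximates the choice-probability integral by averaging over random draws, and maximizes the simulated log-likelihood; the specification, the parameterization (e.g. forcing a negative cost coefficient via -exp(·)), and all data are invented and are not the authors' model.

      # Simulated maximum likelihood for a toy binary Box-Cox mixed logit.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(7)
      N, R = 1500, 200                       # individuals, simulation draws
      cost = rng.uniform(1.0, 10.0, (N, 2))  # cost of alternatives 0 and 1
      time = rng.uniform(0.2, 2.0, (N, 2))

      def boxcox(x, lam):
          return (x**lam - 1.0) / lam if abs(lam) > 1e-8 else np.log(x)

      # "true" model: random (normal) time coefficient, fixed Box-Cox cost term
      b_time = rng.normal(-1.0, 0.5, N)
      u = -0.8 * boxcox(cost, 0.5) + b_time[:, None] * time
      y = (u[:, 1] - u[:, 0] + rng.logistic(size=N) > 0).astype(float)

      draws = rng.standard_normal((R, N))    # fixed draws for the random coefficient

      def neg_simulated_loglik(theta):
          b_cost, mu_t, log_sd_t, lam = theta
          bt = mu_t + np.exp(log_sd_t) * draws                     # (R, N) coefficients
          du = (-np.exp(b_cost)) * (boxcox(cost[:, 1], lam) - boxcox(cost[:, 0], lam))
          du = du + bt * (time[:, 1] - time[:, 0])                 # utility difference
          p1 = 1.0 / (1.0 + np.exp(-du))                           # P(alt 1) per draw
          p = np.clip(np.mean(np.where(y == 1, p1, 1 - p1), axis=0), 1e-12, 1.0)
          return -np.sum(np.log(p))

      res = minimize(neg_simulated_loglik, x0=[0.0, -0.5, -1.0, 1.0],
                     method="Nelder-Mead", options={"maxiter": 4000})
      print("estimates (log cost coeff, time coeff mean, log sd, Box-Cox lambda):", res.x)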

  19. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, T.; Nagata, K.

    2016-08-01

    We report on the numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of mixing particles should be large for the predicted molecular diffusion term to be positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important for the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with the small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with the characteristic length of the mixing volume of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to that of the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and the computational grid size of the LES. Both in the turbulent core region and the intermittent region, the LPS predicts a scalar field well correlated to the LES.

  20. Functional Mixed Effects Model for Small Area Estimation.

    PubMed

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  1. Influence of the vertical mixing parameterization on the modeling results of the Arctic Ocean hydrology

    NASA Astrophysics Data System (ADS)

    Iakshina, D. F.; Golubeva, E. N.

    2017-11-01

    The vertical distribution of the hydrological characteristics in the upper ocean layer is mostly formed under the influence of turbulent and convective mixing, which are not resolved in the system of equations for the large-scale ocean. Therefore it is necessary to include additional parameterizations of these processes in the numerical models. In this paper we carry out a comparative analysis of different vertical mixing parameterizations in simulations of the climatic variability of the Arctic water and sea ice circulation. The 3D regional numerical model for the Arctic and North Atlantic developed at the ICMMG SB RAS (Institute of Computational Mathematics and Mathematical Geophysics of the Siberian Branch of the Russian Academy of Science) and the GOTM package (General Ocean Turbulence Model, http://www.gotm.net/) were used as the numerical instruments. NCEP/NCAR reanalysis data were used to determine the surface fluxes related to ice and ocean. The following turbulence closure schemes were used for the vertical mixing parameterizations: 1) an integration scheme based on the Richardson criterion (RI); 2) a second-order TKE scheme with Canuto-A coefficients (CANUTO); 3) a first-order TKE scheme with Schumann and Gerz coefficients (TKE-1); 4) the KPP scheme (KPP). In addition we investigated some important characteristics of the Arctic Ocean state, including the intensity of Atlantic water inflow, the ice cover state, and the freshwater content in the Beaufort Sea.

  2. Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T., E-mail: watanabe.tomoaki@c.nagoya-u.jp; Nagata, K.

    We report on the numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on the multi-particle interaction in a finite volume (mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of the molecular diffusion under various numerical and flow parameters. The number of mixing particles should be large for the predicted molecular diffusion term to be positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important for the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with the small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES–LPS) of the planar jet with the characteristic length of the mixing volume of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to that of the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and the computational grid size of the LES. Both in the turbulent core region and the intermittent region, the LPS predicts a scalar field well correlated to the LES.

  3. Mix Model Comparison of Low Feed-Through Implosions

    NASA Astrophysics Data System (ADS)

    Pino, Jesse; MacLaren, S.; Greenough, J.; Casey, D.; Dewald, E.; Dittrich, T.; Khan, S.; Ma, T.; Sacks, R.; Salmonson, J.; Smalyuk, V.; Tipton, R.; Kyrala, G.

    2016-10-01

    The CD Mix campaign previously demonstrated the use of nuclear diagnostics to study the mix of separated reactants in plastic capsule implosions at the NIF. Recently, the separated reactants technique has been applied to the Two Shock (TS) implosion platform, which is designed to minimize this feed-through and isolate local mix at the gas-ablator interface and produce core yields in good agreement with 1D clean simulations. The effects of both inner surface roughness and convergence ratio have been probed. The TT, DT, and DD neutron signals respectively give information about core gas performance, gas-shell atomic mix, and heating of the shell. In this talk, we describe efforts to model these implosions using high-resolution 2D ARES simulations. Various methods of interfacial mix will be considered, including the Reynolds-Averaged Navier-Stokes (RANS) KL method as well as a multicomponent enhanced diffusivity model with species, thermal, and pressure gradient terms. We also give predictions of an upcoming campaign to investigate Mid-Z mixing by adding a Ge dopant to the CD layer. LLNL-ABS-697251. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  4. Linear Mixed Models: GUM and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

    In Annex H.5, the Guide to the Expression of Uncertainty in Measurement (GUM) [1] recognizes the necessity to analyze certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in the data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means of gaining more insight into the measurement process. We also comment on computational issues and, to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in the calibration of accelerometers.
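
    A small, self-contained example of the kind of random effects model discussed here is sketched below in Python: repeated (simulated) calibration measurements grouped by day are fitted as a linear mixed model, and the between-day and repeatability variance components are read off for an uncertainty budget. The data, the "day" grouping, and all numbers are hypothetical.

      # One-way random-effects (day-to-day) model fitted as a linear mixed model.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      days, per_day = 10, 5
      day_effect = rng.normal(0.0, 0.02, days)          # between-day variation
      y = 9.81 + np.repeat(day_effect, per_day) + rng.normal(0.0, 0.05, days * per_day)
      df = pd.DataFrame({"y": y, "day": np.repeat(np.arange(days), per_day)})

      fit = smf.mixedlm("y ~ 1", df, groups=df["day"]).fit()
      print(fit.summary())

      # Variance components feed directly into an uncertainty budget:
      between = float(fit.cov_re.iloc[0, 0])            # between-day variance
      within = fit.scale                                # repeatability variance
      print("u(between-day) =", np.sqrt(between), " u(repeatability) =", np.sqrt(within))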

  5. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  6. Photoionized Mixing Layer Models of the Diffuse Ionized Gas

    NASA Astrophysics Data System (ADS)

    Binette, Luc; Flores-Fajardo, Nahiely; Raga, Alejandro C.; Drissen, Laurent; Morisset, Christophe

    2009-04-01

    It is generally believed that O stars, confined near the galactic midplane, are somehow able to photoionize a significant fraction of what is termed the "diffuse ionized gas" (DIG) of spiral galaxies, which can extend up to 1-2 kpc above the galactic midplane. The heating of the DIG remains poorly understood, however, as simple photoionization models do not reproduce well either the observed line ratio correlations or the DIG temperature. We present turbulent mixing layer (TML) models in which warm photoionized condensations are immersed in a hot supersonic wind. Turbulent dissipation and mixing generate an intermediate region where the gas is accelerated, heated, and mixed. The emission spectrum of such layers is compared with the observations by Rand of the DIG in the edge-on spiral NGC 891. We generate two sequences of models that fit the line ratio correlations between [S II]/Hα, [O I]/Hα, [N II]/[S II], and [O III]/Hβ reasonably well. In one sequence of models, the hot wind velocity increases, while in the other, the ionization parameter and layer opacity increase. Despite the success of the mixing layer models, the overall efficiency in reprocessing the stellar UV is much too low, much less than 1%, which compels us to reject the TML model in its present form.

  7. Cohesive and mixed sediment in the Regional Ocean Modeling System (ROMS v3.6) implemented in the Coupled Ocean-Atmosphere-Wave-Sediment Transport Modeling System (COAWST r1234)

    NASA Astrophysics Data System (ADS)

    Sherwood, Christopher R.; Aretxabaleta, Alfredo L.; Harris, Courtney K.; Rinehimer, J. Paul; Verney, Romaric; Ferré, Bénédicte

    2018-05-01

    We describe and demonstrate algorithms for treating cohesive and mixed sediment that have been added to the Regional Ocean Modeling System (ROMS version 3.6), as implemented in the Coupled Ocean-Atmosphere-Wave-Sediment Transport Modeling System (COAWST Subversion repository revision 1234). These include the following: floc dynamics (aggregation and disaggregation in the water column); changes in floc characteristics in the seabed; erosion and deposition of cohesive and mixed (combination of cohesive and non-cohesive) sediment; and biodiffusive mixing of bed sediment. These routines supplement existing non-cohesive sediment modules, thereby increasing our ability to model fine-grained and mixed-sediment environments. Additionally, we describe changes to the sediment bed layering scheme that improve the fidelity of the modeled stratigraphic record. Finally, we provide examples of these modules implemented in idealized test cases and a realistic application.

  8. Modeling of surface temperature effects on mixed material migration in NSTX-U

    NASA Astrophysics Data System (ADS)

    Nichols, J. H.; Jaworski, M. A.; Schmid, K.

    2016-10-01

    NSTX-U will initially operate with graphite walls, periodically coated with thin lithium films to improve plasma performance. However, the spatial and temporal evolution of these films during and after plasma exposure is poorly understood. The WallDYN global mixed-material surface evolution model has recently been applied to the NSTX-U geometry to simulate the evolution of poloidally inhomogeneous mixed C/Li/O plasma-facing surfaces. The WallDYN model couples local erosion and deposition processes with plasma impurity transport in a non-iterative, self-consistent manner that maintains overall material balance. Temperature-dependent sputtering of lithium has been added to WallDYN, utilizing an adatom sputtering model developed from test stand experimental data. Additionally, a simplified temperature-dependent diffusion model has been added to WallDYN so as to capture the intercalation of lithium into a graphite bulk matrix. The sensitivity of global lithium migration patterns to changes in surface temperature magnitude and distribution will be examined. The effect of intra-discharge increases in surface temperature due to plasma heating, such as those observed during NSTX Liquid Lithium Divertor experiments, will also be examined. Work supported by US DOE contract DE-AC02-09CH11466.

  9. Surface wind mixing in the Regional Ocean Modeling System (ROMS)

    NASA Astrophysics Data System (ADS)

    Robertson, Robin; Hartlipp, Paul

    2017-12-01

    Mixing at the ocean surface is key for atmosphere-ocean interactions and the distribution of heat, energy, and gases in the upper ocean. Winds are the primary force for surface mixing. To properly simulate upper ocean dynamics and the flux of these quantities within the upper ocean, models must reproduce mixing in the upper ocean. To evaluate the performance of the Regional Ocean Modeling System (ROMS) in replicating the surface mixing, the results of four different vertical mixing parameterizations were compared against observations, using the surface mixed layer depth, the temperature fields, and observed diffusivities for the comparisons. The vertical mixing parameterizations investigated were the Mellor-Yamada 2.5-level turbulence closure (MY), Large-McWilliams-Doney KPP (LMD), Nakanishi-Niino (NN), and the generic length scale (GLS) schemes. This was done for one temperate site in deep water in the Eastern Pacific and three shallow water sites in the Baltic Sea. The model reproduced the surface mixed layer depth reasonably well for all sites; however, the temperature fields were reproduced well for the deep site, but not for the shallow Baltic Sea sites. In the Baltic Sea, the models overmixed the water column after a few days. Vertical temperature diffusivities were higher than those observed and did not show the temporal fluctuations present in the observations. The best performance was by NN and MY; however, MY became unstable in two of the shallow simulations with high winds. The performance of GLS was nearly as good as that of NN and MY. LMD had the poorest performance, as it generated temperature diffusivities that were too high and induced too much mixing. Further observational comparisons are needed to evaluate the effects of different stratification and wind conditions and the limitations of the vertical mixing parameterizations.

  10. A new mixed subgrid-scale model for large eddy simulation of turbulent drag-reducing flows of viscoelastic fluids

    NASA Astrophysics Data System (ADS)

    Li, Feng-Chen; Wang, Lu; Cai, Wei-Hua

    2015-07-01

    A mixed subgrid-scale (SGS) model based on coherent structures and temporal approximate deconvolution (MCT) is proposed for turbulent drag-reducing flows of viscoelastic fluids. The main idea of the MCT SGS model is to perform spatial filtering for the momentum equation and temporal filtering for the conformation tensor transport equation of the turbulent flow of a viscoelastic fluid, respectively. The MCT model is suitable for large eddy simulation (LES) of turbulent drag-reducing flows of viscoelastic fluids in engineering applications since the model parameters can be easily obtained. The LES of forced homogeneous isotropic turbulence (FHIT) with polymer additives and turbulent channel flow with surfactant additives based on the MCT SGS model shows excellent agreement with direct numerical simulation (DNS) results. Compared with the LES results using the temporal approximate deconvolution model (TADM) for FHIT with polymer additives, the MCT mixed SGS model behaves better, particularly as calculation parameters such as the Reynolds number are increased. For scientific and engineering research, turbulent flows at high Reynolds numbers are expected, so the MCT model can be a more suitable model for the LES of turbulent drag-reducing flows of viscoelastic fluids with polymer or surfactant additives. Project supported by the China Postdoctoral Science Foundation (Grant No. 2011M500652), the National Natural Science Foundation of China (Grant Nos. 51276046 and 51206033), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20112302110020).

  11. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
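
    A stripped-down stand-in for the distributed lag idea is shown below: a scalar response is regressed on a matrix of hourly exposure lags with a ridge penalty that regularizes the lag-coefficient curve. This illustrates why regularization of the functional coefficient is needed when the lags outnumber the observations' information content; the paper's wavelet-based Bayesian mixed model is considerably more elaborate, and all data here are simulated.

      # Ridge-penalized distributed-lag regression on simulated exposure data.
      import numpy as np

      rng = np.random.default_rng(11)
      n, L = 300, 72                                   # subjects, hourly lags (3 days)
      X = rng.normal(10.0, 3.0, (n, L))                # lagged exposure matrix
      true_beta = 0.05 * np.exp(-np.arange(L) / 12.0)  # effect decaying with lag
      y = X @ true_beta + rng.normal(0.0, 1.0, n)

      lam = 50.0                                       # ridge penalty strength
      beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(L), X.T @ y)

      print("first 6 estimated lag coefficients:", np.round(beta_hat[:6], 3))
      print("cumulative effect over 72 h: est %.2f vs true %.2f"
            % (beta_hat.sum(), true_beta.sum()))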

  12. Modeling Intrajunction Dispersion at a Well-Mixed Tidal River Junction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfram, Phillip J.; Fringer, Oliver B.; Monsen, Nancy E.

    In this paper, the relative importance of small-scale, intrajunction flow features such as shear layers, separation zones, and secondary flows on dispersion in a well-mixed tidal river junction is explored. A fully nonlinear, nonhydrostatic, and unstructured three-dimensional (3D) model is used to resolve supertidal dispersion via scalar transport at a well-mixed tidal river junction. Mass transport simulated in the junction is compared against predictions using a simple node-channel model to quantify the effects of small-scale, 3D intrajunction flow features on mixing and dispersion. The effects of three-dimensionality are demonstrated by quantifying the difference between two-dimensional (2D) and 3D model results. An intermediate 3D model that does not resolve the secondary circulation or the recirculating flow at the junction is also compared to the 3D model to quantify the relative sensitivity of mixing on intrajunction flow features. Resolution of complex flow features simulated by the full 3D model is not always necessary because mixing is primarily governed by bulk flow splitting due to the confluence–diffluence cycle. Finally, results in 3D are comparable to the 2D case for many flow pathways simulated, suggesting that 2D modeling may be reasonable for nonstratified and predominantly hydrostatic flows through relatively straight junctions, but not necessarily for the full junction network.

  13. Modeling Intrajunction Dispersion at a Well-Mixed Tidal River Junction

    DOE PAGES

    Wolfram, Phillip J.; Fringer, Oliver B.; Monsen, Nancy E.; ...

    2016-08-01

    In this paper, the relative importance of small-scale, intrajunction flow features such as shear layers, separation zones, and secondary flows on dispersion in a well-mixed tidal river junction is explored. A fully nonlinear, nonhydrostatic, and unstructured three-dimensional (3D) model is used to resolve supertidal dispersion via scalar transport at a well-mixed tidal river junction. Mass transport simulated in the junction is compared against predictions using a simple node-channel model to quantify the effects of small-scale, 3D intrajunction flow features on mixing and dispersion. The effects of three-dimensionality are demonstrated by quantifying the difference between two-dimensional (2D) and 3D model results. An intermediate 3D model that does not resolve the secondary circulation or the recirculating flow at the junction is also compared to the 3D model to quantify the relative sensitivity of mixing on intrajunction flow features. Resolution of complex flow features simulated by the full 3D model is not always necessary because mixing is primarily governed by bulk flow splitting due to the confluence–diffluence cycle. Finally, results in 3D are comparable to the 2D case for many flow pathways simulated, suggesting that 2D modeling may be reasonable for nonstratified and predominantly hydrostatic flows through relatively straight junctions, but not necessarily for the full junction network.

  14. Analysis and modeling of subgrid scalar mixing using numerical data

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Zhou, YE

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in the large eddy simulations of scalar mixing and reaction.

  15. Effect of Crumb Rubber and Warm Mix Additives on Asphalt Aging, Rheological, and Failure Properties

    NASA Astrophysics Data System (ADS)

    Agrawal, Prashant

    Asphalt-rubber mixtures have been shown to have useful properties with respect to distresses observed in asphalt concrete pavements. The most notable change in properties is a large increase in viscosity and improved low-temperature cracking resistance. Warm mix additives can lower production and compaction temperatures. Lower temperatures reduce harmful emissions and lower energy consumption, and thus provide environmental benefits and cut costs. In this study, the effects of crumb rubber modification on various asphalts, such as California Valley, Boscan, Alaska North Slope, Laguna, and Cold Lake, were studied. The materials used for warm mix modification were obtained from various commercial sources. The RAF binder was produced by Imperial Oil in their Nanticoke, Ontario, refinery on Lake Erie. A second commercial PG 52-34 (hereafter denoted as NER) was obtained/sampled during the construction of a northern Ontario MTO contract. Some regular tests, such as the Dynamic Shear Rheometer (DSR), Bending Beam Rheometer (BBR), and Multiple Stress Creep Recovery (MSCR) tests, and some modified new protocols, such as the extended BBR test (LS-308) and the Double-Edge Notched Tension (DENT) test (LS-299), are used to study the effect of warm mix and a host of other additives on rheological, aging, and failure properties. A comparison of the properties of the RAF and NER asphalts has also been made, as RAF is a good-quality asphalt and NER is a poor-quality asphalt. The effect of additives on chemical and physical hardening tendencies was found to be significant. The asphalt samples tested in this study showed a range of tendencies for chemical and physical hardening.

  16. Comparison of GWAS models to identify non-additive genetic control of flowering time in sunflower hybrids.

    PubMed

    Bonnafous, Fanny; Fievet, Ghislain; Blanchet, Nicolas; Boniface, Marie-Claude; Carrère, Sébastien; Gouzy, Jérôme; Legrand, Ludovic; Marage, Gwenola; Bret-Mestries, Emmanuelle; Munos, Stéphane; Pouilly, Nicolas; Vincourt, Patrick; Langlade, Nicolas; Mangin, Brigitte

    2018-02-01

    This study compares five GWAS models to show the added value of non-additive modeling of allelic effects in identifying genomic regions controlling flowering time of sunflower hybrids. Genome-wide association studies are a powerful and widely used tool to decipher the genetic control of complex traits. One of the main challenges for hybrid crops, such as maize or sunflower, is to model the hybrid vigor in linear mixed models while considering the relatedness between individuals. Here, we compared two additive and three non-additive association models for their ability to identify genomic regions associated with flowering time in sunflower hybrids. A panel of 452 sunflower hybrids, corresponding to incomplete crossing between 36 male lines and 36 female lines, was phenotyped in five environments and genotyped for 2,204,423 SNPs. Intra-locus effects were estimated in multi-locus models to detect genomic regions associated with flowering time using the different models. Thirteen quantitative trait loci were identified in total, two with both model categories and one with only the non-additive models. A quantitative trait locus on LG09, detected by both the additive and non-additive models, is located near a GAI homolog and is presented in detail. Overall, this study shows the added value of non-additive modeling of allelic effects for identifying genomic regions that control traits of interest and that could participate in the heterosis observed in hybrids.

  17. Mixed-effects models for estimating stand volume by means of small footprint airborne laser scanner data.

    Treesearch

    J. Breidenbach; E. Kublin; R. McGaughey; H.-E. Andersen; S. Reutebuch

    2008-01-01

    For this study, hierarchical data sets--in that several sample plots are located within a stand--were analyzed for study sites in the USA and Germany. The German data had an additional hierarchy as the stands are located within four distinct public forests. Fixed-effects models and mixed-effects models with a random intercept on the stand level were fit to each data...

  18. Ill-posedness in modeling mixed sediment river morphodynamics

    NASA Astrophysics Data System (ADS)

    Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid

    2018-04-01

    In this paper we analyze the Hirano active layer model used in mixed sediment river morphodynamics with respect to its ill-posedness. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these, we show that the ill-posed domain is larger than what was found in previous analyses, comprising not only cases of bed degradation into a substrate finer than the active layer but also aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment, for which we show that ill-posedness occurs in a wider range of conditions than for the active layer model.

  19. Characteristics of the mixing volume model with the interactions among spatially distributed particles for Lagrangian simulations of turbulent mixing

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2016-11-01

    The mixing volume model (MVM), which is a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of the molecular diffusion under various conditions. However, a predicted value of the molecular diffusion term is positively correlated with the exact value in the DNS only when the number of mixing particles is larger than two. Furthermore, the MVM is tested in the hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) in the Japan Agency for Marine-Earth Science and Technology.

  20. An S4 model inspired from self-complementary neutrino mixing

    NASA Astrophysics Data System (ADS)

    Zhang, Xinyi

    2018-03-01

    We build an S4 model for neutrino masses and mixings based on the self-complementary (SC) neutrino mixing pattern. The SC mixing is constructed from the self-complementarity relation plus δ_CP = −π/2. We elaborately construct the model at a percent level of accuracy to reproduce the structure given by the SC mixing. After performing a numerical study on the model's parameter space, we find that in the case of normal ordering, the model can give predictions on the observables that are compatible with their 3σ ranges, and give predictions for the not-yet observed quantities like the lightest neutrino mass m_1 ∈ [0.003, 0.010] eV and the Dirac CP violating phase δ_CP ∈ [256.72°, 283.33°].

  1. Benchmark studies of thermal jet mixing in SFRs using a two-jet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omotowa, O. A.; Skifton, R.; Tokuhiro, A.

    To guide the modeling, simulations and design of Sodium Fast Reactors (SFRs), we explore and compare the predictive capabilities of two numerical solvers, COMSOL and OpenFOAM, in the thermal jet mixing of two buoyant jets typical of the outlet flow from a SFR tube bundle. This process will help optimize on-going experimental efforts at obtaining high resolution data for verification and validation (V&V) of CFD codes as anticipated in next generation nuclear systems. Using the k-ε turbulence models of both codes as reference, their ability to simulate the turbulence behavior in similar environments was first validated for single jet experimental data reported in the literature. This study investigates the thermal mixing of two parallel jets having a temperature difference (hot-to-cold) ΔT_hc = 5 °C and 10 °C and velocity ratios U_c/U_h = 0.5 and 1. Results of the computed turbulent quantities due to convective mixing and the variations in the flow field along the axial position are presented. In addition, this study also evaluates the effect of the spacing ratio between jets in predicting the flow field and jet behavior in the near and far fields.

  2. Mixed Model Association with Family-Biased Case-Control Ascertainment.

    PubMed

    Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L

    2017-01-05

    Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  3. Twice random, once mixed: applying mixed models to simultaneously analyze random effects of language and participants.

    PubMed

    Janssen, Dirk P

    2012-03-01

    Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F1 and F2) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
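
    A minimal sketch of the crossed-random-effects analysis described above, using the R package lme4 rather than the SPSS add-on mentioned in the abstract; the data are simulated and the variable names (rt, condition, subject, item) are placeholders.

        library(lme4)

        # Simulated data with two crossed random effects: 30 participants x 40 items.
        set.seed(2)
        d <- expand.grid(subject = factor(1:30), item = factor(1:40))
        d$condition <- factor(rep(c("a", "b"), length.out = nrow(d)))
        d$rt <- 600 + 25 * (d$condition == "b") +
          rnorm(30, sd = 40)[d$subject] +      # by-participant variation
          rnorm(40, sd = 30)[d$item] +         # by-item variation
          rnorm(nrow(d), sd = 80)              # residual noise

        # One model replaces the separate by-participant (F1) and by-item (F2) analyses.
        m <- lmer(rt ~ condition + (1 | subject) + (1 | item), data = d)
        summary(m)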

  4. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t}, where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position t along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.
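
    A hedged sketch of fitting an FGAM-type model in R, assuming the af() term constructor available through refund::pfr(); argument names and defaults may differ between refund versions, and the simulated data are purely illustrative.

        library(refund)

        # Simulated scalar-on-function example: y depends nonlinearly on X(t) on a grid tg.
        set.seed(3)
        n <- 200; p <- 50
        tg <- seq(0, 1, length.out = p)
        X  <- matrix(rnorm(n * p), n, p)
        y  <- rowMeans(sin(2 * X)) + rnorm(n, sd = 0.2)   # true surface F(x, t) = sin(2x) / |T|

        # af() sets up the tensor-product spline surface F(X(t), t); pfr() integrates it over t.
        fit <- pfr(y ~ af(X))
        summary(fit)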

  5. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  6. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase
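
    Purely as a toy illustration of the kind of dichotomous Markov chain with logistic transition probabilities and a dose effect described above (not the authors' sleep model, and with invented parameter values), one could simulate sleep stages as follows.

        # Two-state chain (0 = awake, 1 = asleep) over one night of one-minute epochs, with
        # logit-scale transition probabilities shifted by a hypothetical linear dose effect.
        set.seed(4)
        simulate_night <- function(n_epochs = 480, dose = 0, beta_dose = 0.05) {
          state <- integer(n_epochs)               # starts awake (all zeros)
          for (t in 2:n_epochs) {
            eta <- ifelse(state[t - 1] == 0,
                          -2 + beta_dose * dose,   # logit P(fall asleep | awake)
                           2 + beta_dose * dose)   # logit P(stay asleep | asleep)
            state[t] <- rbinom(1, 1, plogis(eta))
          }
          state
        }

        mean(simulate_night(dose = 0))    # fraction of the night asleep, placebo
        mean(simulate_night(dose = 20))   # fraction of the night asleep, 20 mg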

  7. Uncertainty in mixing models: a blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Delsman, J. R.; Oude Essink, G. H. P.

    2012-04-01

    Despite the abundance of tracer-based studies in catchment hydrology over the past decades, relatively few studies have addressed the uncertainty associated with these studies in much detail. This uncertainty stems from analytical error, spatial and temporal variance in end-member composition, and from not incorporating all relevant processes in the necessarily simplistic mixing models. Instead of applying standard EMMA methodology, we used end-member mixing model analysis within a Monte Carlo framework to quantify the uncertainty surrounding our analysis. Borrowing from the well-known GLUE methodology, we discarded mixing models that could not satisfactorily explain sample concentrations and analyzed the posterior parameter set. This use of environmental tracers aided in disentangling hydrological pathways in a Dutch polder catchment. This 10 km² agricultural catchment is situated in the coastal region of the Netherlands. Brackish groundwater seepage, originating from Holocene marine transgressions, adversely affects water quality in this catchment. Current water management practice is aimed at improving water quality by flushing the catchment with fresh water from the river Rhine. Climate change is projected to decrease future fresh water availability, signifying the need for a more sustainable water management practice and a better understanding of the functioning of the catchment. The end-member mixing analysis increased our understanding of the hydrology of the studied catchment. The use of a GLUE-like framework for applying the end-member mixing analysis not only quantified the uncertainty associated with the analysis, but the analysis of the posterior parameter set also identified the existence of catchment processes otherwise overlooked.
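
    A minimal GLUE-style Monte Carlo end-member mixing sketch in R, under invented end-member compositions, tracers and tolerances rather than the study's data: candidate mixing-fraction sets are kept only if the perturbed end members reproduce the observed sample within tolerance, and the retained ("behavioural") sets form the posterior parameter set.

        # Hypothetical set-up: three end members, two conservative tracers, one observed sample.
        set.seed(5)
        obs  <- c(Cl = 120, EC = 680)                      # observed mixture concentrations
        tol  <- c(Cl = 15,  EC = 80)                       # acceptance ("behavioural") tolerance
        em_mean <- rbind(rain    = c(Cl = 10,  EC = 80),   # end-member mean compositions
                         seepage = c(Cl = 900, EC = 4000),
                         ditch   = c(Cl = 60,  EC = 600))

        keep <- list()
        for (i in 1:5000) {
          em   <- em_mean * matrix(rnorm(6, mean = 1, sd = 0.15), 3, 2)  # end-member uncertainty
          f    <- setNames(rexp(3), rownames(em_mean)); f <- f / sum(f)  # random fractions on the simplex
          pred <- as.numeric(t(f) %*% em)                                # predicted mixture
          if (all(abs(pred - obs) < tol)) keep[[length(keep) + 1]] <- f
        }
        post <- do.call(rbind, keep)                       # posterior (behavioural) fraction sets
        colMeans(post)                                     # mean mixing fractions
        apply(post, 2, quantile, probs = c(0.05, 0.95))    # uncertainty bounds per end member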

  8. GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.

    PubMed

    Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N

    2018-01-01

    Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.

  9. Mixed-order phase transition in a one-dimensional model.

    PubMed

    Bar, Amir; Mukamel, David

    2014-01-10

    We introduce and analyze an exactly soluble one-dimensional Ising model with long range interactions that exhibits a mixed-order transition, namely a phase transition in which the order parameter is discontinuous as in first order transitions while the correlation length diverges as in second order transitions. Such transitions are known to appear in diverse classes of models that are seemingly unrelated. The model we present serves as a link between two classes of models that exhibit a mixed-order transition in one dimension, namely, spin models with a coupling constant that decays as the inverse distance squared and models of depinning transitions, thus making a step towards a unifying framework.

  10. Linear mixing model applied to coarse resolution satellite data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System data is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and with the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of applying unmixing techniques to coarse resolution data for global studies.
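
    A simplified per-pixel unmixing sketch in R with a sum-to-one constraint imposed by substitution (nonnegativity is not enforced here, unlike a full constrained least squares solution); the endmember spectra and pixel reflectances are invented three-band values, not AVHRR data.

        # Invented endmember reflectances in three bands (rows) for three components (columns).
        M <- cbind(veg   = c(0.04, 0.45, 0.03),
                   soil  = c(0.20, 0.30, 0.25),
                   shade = c(0.02, 0.02, 0.01))

        unmix <- function(r, M) {
          k   <- ncol(M)
          A   <- M[, -k, drop = FALSE] - M[, k]   # substitute f_k = 1 - sum(other fractions)
          b   <- r - M[, k]
          f   <- qr.solve(A, b)                   # least-squares estimate of the free fractions
          out <- c(f, 1 - sum(f))
          names(out) <- colnames(M)
          out
        }

        r_pixel <- c(0.10, 0.33, 0.11)            # hypothetical pixel reflectance (3 bands)
        round(unmix(r_pixel, M), 3)               # per-pixel fractions -> fraction images when mapped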

  11. How ocean lateral mixing changes Southern Ocean variability in coupled climate models

    NASA Astrophysics Data System (ADS)

    Pradal, M. A. S.; Gnanadesikan, A.; Thomas, J. L.

    2016-02-01

    The lateral mixing of tracers represents a major uncertainty in the formulation of coupled climate models. The mixing of tracers along density surfaces in the interior and horizontally within the mixed layer is often parameterized using a mixing coefficient ARedi. The models used in the Coupled Model Intercomparison Project 5 exhibit more than an order of magnitude range in the values of this coefficient used within the Southern Ocean. The impacts of such uncertainty on Southern Ocean variability have remained unclear, even as recent work has shown that this variability differs between models. In this poster, we change the lateral mixing coefficient within GFDL ESM2Mc, a coarse-resolution Earth System model that nonetheless has a reasonable circulation within the Southern Ocean. As the coefficient varies from 400 to 2400 m²/s, the amplitude of the variability varies significantly. The low-mixing case shows strong decadal variability, with an annual mean RMS temperature variability exceeding 1 °C in the Circumpolar Current. The highest-mixing case shows a very similar spatial pattern of variability, but with amplitudes only about 60% as large. The suppression of variability by mixing is larger in the Atlantic sector of the Southern Ocean than in the Pacific sector. We examine the salinity budgets of convective regions, paying particular attention to the extent to which high mixing prevents the buildup of low-salinity waters that are capable of shutting off deep convection entirely.

  12. Red cell storage in E-Sol 5 and Adsol additive solutions: paired comparison using mixed and non-mixed study designs.

    PubMed

    Radwanski, K; Thill, M; Min, K

    2014-05-01

    If transfusion of older stored red cells is found to negatively affect clinical outcome, one possible alternative to shortened outdate is the use of new additive solutions (AS) that ameliorate the storage lesion. Erythro-Sol (E-Sol), a previously developed next-generation AS, has been reformulated into E-Sol 5, which is compatible with current anticoagulants and AS volumes. The effect of E-Sol 5 on red cells during storage compared to current AS has not been reported. Paired, ABO-matched whole-blood units were collected into CPD anticoagulant, pooled, split and processed into plasma and red cell units with either 110 ml of Adsol or 105 ml of E-Sol 5 within 8 h of collection. In Study 1, paired units in E-Sol 5 and Adsol were sampled on Day 0 and every 7 days up to Day 42 (n = 10). In Study 2, paired units in E-Sol 5 and Adsol were sampled only on Day 0 and Day 42 (n = 10). In Study 1, 2,3 DPG levels were maintained until Day 28 in E-Sol 5 units and Day 14 in Adsol units. ATP levels were higher in E-Sol 5 units until Day 21, after which they were comparable between the two groups. In both studies, metabolic activity was greater in E-Sol 5 units with respect to glucose consumption and lactate production. Morphology scores were higher, and haemolysis and microparticles generated were lower in E-Sol 5 vs. Adsol units. Weekly mixing of units lowered haemolysis and microparticle levels and increased potassium content on Day 42 in both additive solutions. Regardless of whether units are mixed weekly or are stored non-mixed, E-Sol 5 slows the progression of the red cell storage lesion and improves the overall in vitro quality of RBC throughout storage. © 2013 International Society of Blood Transfusion.

  13. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Treesearch

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
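
    A minimal sketch of a containment-probability GLMM in R (lme4), with simulated data and invented covariates standing in for the fire-level predictors; the repeated daily containment outcomes within each fire are handled by a fire-level random intercept.

        library(lme4)

        # Simulated fire-level data: repeated daily containment outcomes within each fire.
        set.seed(7)
        fires <- data.frame(fire     = factor(rep(1:80, each = 6)),
                            log_area = rep(rnorm(80, 8, 1), each = 6),
                            weather  = rnorm(480))
        eta <- -1 - 0.4 * (fires$log_area - 8) + 0.8 * fires$weather +
               rep(rnorm(80, sd = 0.7), each = 6)                  # fire-level random effect
        fires$contained <- rbinom(480, 1, plogis(eta))

        # Logistic GLMM: fixed covariates plus a random intercept per fire (repeated measures).
        m <- glmer(contained ~ log_area + weather + (1 | fire), family = binomial, data = fires)
        summary(m)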

  14. Evaluation of a hybrid kinetics/mixing-controlled combustion model for turbulent premixed and diffusion combustion using KIVA-II

    NASA Technical Reports Server (NTRS)

    Nguyen, H. Lee; Wey, Ming-Jyh

    1990-01-01

    Two-dimensional calculations were made of spark ignited premixed-charge combustion and direct injection stratified-charge combustion in gasoline fueled piston engines. Results are obtained using a kinetics-controlled combustion submodel governed by a four-step global chemical reaction or a hybrid laminar kinetics/mixing-controlled combustion submodel that accounts for laminar kinetics and turbulent mixing effects. The numerical solutions are obtained by using the KIVA-2 computer code, which uses a kinetics-controlled combustion submodel governed by a four-step global chemical reaction (i.e., it assumes that the mixing time is smaller than the chemical time). A hybrid laminar/mixing-controlled combustion submodel was implemented into KIVA-2. In this model, chemical species approach their thermodynamic equilibrium at a rate that is a combination of the turbulent-mixing time and the chemical-kinetics time. The combination is formed in such a way that the longer of the two times has more influence on the conversion rate and the energy release. An additional element of the model is that the laminar-flame kinetics strongly influence the early flame development following ignition.
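
    As a toy illustration of the hybrid closure described above (the parameter values and the hard max are simplifications of the blended combination, not the KIVA-2 implementation), the conversion rate toward equilibrium can be limited by the longer of the mixing and kinetics time scales:

        # Relaxation of a species mass fraction y toward its equilibrium value y_eq at a rate
        # set by the longer (slower) of the turbulent-mixing and chemical-kinetics time scales.
        hybrid_rate <- function(y, y_eq, tau_mix, tau_chem) {
          tau_eff <- pmax(tau_mix, tau_chem)   # longer time scale dominates (simplified as a hard max)
          (y_eq - y) / tau_eff
        }

        hybrid_rate(y = 0.05, y_eq = 0.12, tau_mix = 2e-3, tau_chem = 5e-4)  # mixing-limited
        hybrid_rate(y = 0.05, y_eq = 0.12, tau_mix = 2e-4, tau_chem = 5e-3)  # kinetics-limited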

  15. Evaluation of a hybrid kinetics/mixing-controlled combustion model for turbulent premixed and diffusion combustion using KIVA-2

    NASA Technical Reports Server (NTRS)

    Nguyen, H. Lee; Wey, Ming-Jyh

    1990-01-01

    Two-dimensional calculations were made of spark ignited premixed-charge combustion and direct injection stratified-charge combustion in gasoline fueled piston engines. Results are obtained using a kinetics-controlled combustion submodel governed by a four-step global chemical reaction or a hybrid laminar kinetics/mixing-controlled combustion submodel that accounts for laminar kinetics and turbulent mixing effects. The numerical solutions are obtained by using the KIVA-2 computer code, which uses a kinetics-controlled combustion submodel governed by a four-step global chemical reaction (i.e., it assumes that the mixing time is smaller than the chemical time). A hybrid laminar/mixing-controlled combustion submodel was implemented into KIVA-2. In this model, chemical species approach their thermodynamic equilibrium at a rate that is a combination of the turbulent-mixing time and the chemical-kinetics time. The combination is formed in such a way that the longer of the two times has more influence on the conversion rate and the energy release. An additional element of the model is that the laminar-flame kinetics strongly influence the early flame development following ignition.

  16. The Apollo 16 regolith - A petrographically-constrained chemical mixing model

    NASA Technical Reports Server (NTRS)

    Kempa, M. J.; Papike, J. J.; White, C.

    1980-01-01

    A mixing model for Apollo 16 regolith samples has been developed, which differs from other A-16 mixing models in that it is both petrographically constrained and statistically sound. The model was developed using three components representative of rock types present at the A-16 site, plus a representative mare basalt. A linear least-squares fitting program employing the chi-squared test and sum of components was used to determine goodness of fit. Results for surface soils indicate that either there are no significant differences between Cayley and Descartes material at the A-16 site or, if differences do exist, they have been obscured by meteoritic reworking and mixing of the lithologies.

  17. A continuous mixing model for pdf simulations and its applications to combusting shear flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Chen, J.-Y.

    1991-01-01

    The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.

  18. An improved NSGA-II algorithm for mixed model assembly line balancing

    NASA Astrophysics Data System (ADS)

    Wu, Yongming; Xu, Yanxia; Luo, Lifei; Zhang, Han; Zhao, Xudong

    2018-05-01

    To address the problems of assembly line balancing and of path optimization for material vehicles in a mixed model manufacturing system, a multi-objective mixed model assembly line (MMAL) model, which is based on the optimization objectives, influencing factors and constraints, is established. According to the specific situation, an improved NSGA-II algorithm based on an ecological evolution strategy is designed. An environment self-detecting operator, which is used to detect whether the environment changes, is adopted in the algorithm. Finally, the effectiveness of the proposed model and algorithm is verified by examples from a concrete mixing system.

  19. A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.

    2016-10-01

    Probability density function (PDF) methods are a promising alternative for predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and, due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier-to-handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
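
    For reference, the classical IEM (interaction by exchange with the mean) mixing model, in which each notional particle's concentration relaxes toward the ensemble mean, can be sketched in a few lines of R; this is a baseline illustration, not the time-dependent mixing model proposed in the paper.

        # IEM baseline: d c_i/dt = -(c_i - <c>) / (2 * tau_mix) for each notional particle.
        set.seed(9)
        iem_step <- function(conc, tau_mix, dt) conc - dt * (conc - mean(conc)) / (2 * tau_mix)

        conc  <- rbinom(1000, 1, 0.5)          # initially unmixed particle concentrations (0 or 1)
        var_t <- numeric(200)
        for (k in 1:200) {
          conc     <- iem_step(conc, tau_mix = 1, dt = 0.05)
          var_t[k] <- var(conc)                # concentration variance decays as mixing proceeds
        }
        round(var_t[c(1, 50, 100, 200)], 4)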

  20. Wave–turbulence interaction-induced vertical mixing and its effects in ocean and climate models

    PubMed Central

    Qiao, Fangli; Yuan, Yeli; Deng, Jia; Dai, Dejun; Song, Zhenya

    2016-01-01

    Heated from above, the oceans are stably stratified. Therefore, the performance of general ocean circulation models and climate studies through coupled atmosphere–ocean models depends critically on vertical mixing of energy and momentum in the water column. Many of the traditional general circulation models are based on total kinetic energy (TKE), in which the roles of waves are averaged out. Although theoretical calculations suggest that waves could greatly enhance coexisting turbulence, no field measurements on turbulence have ever validated this mechanism directly. To address this problem, a specially designed field experiment has been conducted. The experimental results indicate that the wave–turbulence interaction-induced enhancement of the background turbulence is indeed the predominant mechanism for turbulence generation and enhancement. Based on this understanding, we propose a new parametrization for vertical mixing as an additive part to the traditional TKE approach. This new result reconfirmed the past theoretical model that had been tested and validated in numerical model experiments and field observations. It firmly establishes the critical role of wave–turbulence interaction effects in both general ocean circulation models and atmosphere–ocean coupled models, which could greatly improve the understanding of the sea surface temperature and water column properties distributions, and hence model-based climate forecasting capability. PMID:26953182

  1. Model free simulations of a high speed reacting mixing layer

    NASA Technical Reports Server (NTRS)

    Steinberger, Craig J.

    1992-01-01

    The effects of compressibility, chemical reaction exothermicity and non-equilibrium chemical modeling in a combusting plane mixing layer were investigated by means of two-dimensional model free numerical simulations. It was shown that increased compressibility generally had a stabilizing effect, resulting in reduced mixing and chemical reaction conversion rate. The appearance of 'eddy shocklets' in the flow was observed at high convective Mach numbers. Reaction exothermicity was found to enhance mixing at the initial stages of the layer's growth, but had a stabilizing effect at later times. Calculations were performed for a constant rate chemical rate kinetics model and an Arrhenius type kinetics prototype. The Arrhenius model was found to cause a greater temperature increase due to reaction than the constant kinetics model. This had the same stabilizing effect as increasing the exothermicity of the reaction. Localized flame quenching was also observed when the Zeldovich number was relatively large.

  2. The Mixed Effects Trend Vector Model

    ERIC Educational Resources Information Center

    de Rooij, Mark; Schouteden, Martijn

    2012-01-01

    Maximum likelihood estimation of mixed effect baseline category logit models for multinomial longitudinal data can be prohibitive due to the integral dimension of the random effects distribution. We propose to use multidimensional unfolding methodology to reduce the dimensionality of the problem. As a by-product, readily interpretable graphical…

  3. Development of a Reduced-Order Three-Dimensional Flow Model for Thermal Mixing and Stratification Simulation during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui

    2017-09-03

    Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles in the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel of parallel plates. Based on the comparisons with the analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model can perform very well for a wide range of flow problems.

  4. CONVERTING ISOTOPE RATIOS TO DIET COMPOSITION - THE USE OF MIXING MODELS

    EPA Science Inventory

    Investigations of wildlife foraging ecology with stable isotope analysis are increasing. Converting isotope values to proportions of different foods in a consumer's diet requires the use of mixing models. Simple mixing models based on mass balance equations have been used for d...
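
    The simplest such mass-balance model, a two-source single-isotope mixing equation δ_mix = f·δ_A + (1 − f)·δ_B solved for the diet proportion f, can be written directly (the values below are hypothetical):

        # delta_mix = f * delta_A + (1 - f) * delta_B, solved for the proportion f of source A.
        diet_fraction <- function(d_mix, d_A, d_B) (d_mix - d_B) / (d_A - d_B)

        diet_fraction(d_mix = -22.5, d_A = -26, d_B = -12)   # about 0.75 of the diet from source A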

  5. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures.

    PubMed

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete) as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (d_IG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value of a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.
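
    A minimal sketch of a random-effects meta-analysis with a moderator in R, using the metafor package; the effect sizes, variances and moderator coding below are invented, not the 63-study data set.

        library(metafor)

        # Invented effect sizes (standardized mean differences) and sampling variances.
        dat <- data.frame(yi = c(0.9, 0.6, 1.1, 0.4, 0.8),
                          vi = c(0.05, 0.08, 0.10, 0.06, 0.07),
                          measure = c("subjective", "min_index", "subjective", "min_index", "subjective"))

        rma(yi, vi, data = dat)                    # random-effects pooled effect (REML)
        rma(yi, vi, mods = ~ measure, data = dat)  # moderator analysis: type of mixed-emotion measure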

  6. Mixed Effects Modeling Using Stochastic Differential Equations: Illustrated by Pharmacokinetic Data of Nicotinic Acid in Obese Zucker Rats.

    PubMed

    Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats

    2015-05-01

    Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
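
    A toy Euler-Maruyama simulation of the kind of stochastic one-compartment model with first-order elimination and separate measurement error described above; the parameter values are invented, and no mixed-effects estimation (FOCE/EKF) is attempted here.

        # dC = -k * C dt + sigma_w dW (system noise), y = C(t_obs) + e (measurement error).
        set.seed(11)
        simulate_sde_pk <- function(C0 = 10, k = 0.3, sigma_w = 0.15, sigma_e = 0.2,
                                    dt = 0.01, t_obs = 1:12) {
          times <- seq(0, max(t_obs), by = dt)
          C <- numeric(length(times)); C[1] <- C0
          for (i in 2:length(times)) {
            dW   <- rnorm(1, mean = 0, sd = sqrt(dt))              # Wiener increment
            C[i] <- C[i - 1] - k * C[i - 1] * dt + sigma_w * dW    # Euler-Maruyama step
          }
          data.frame(time = t_obs,
                     conc = approx(times, C, xout = t_obs)$y + rnorm(length(t_obs), 0, sigma_e))
        }

        head(simulate_sde_pk())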

  7. Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain

    PubMed Central

    Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises

    2015-01-01

    Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156

  8. An R2 statistic for fixed effects in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
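
    One way to compute such a statistic from the F-test of all fixed effects is sketched below; the formula follows the general form R²β = (ν1 F / ν2) / (1 + ν1 F / ν2) with numerator and denominator degrees of freedom ν1 and ν2, but it should be checked against the paper before use.

        # df1 = numerator df, df2 = denominator df (e.g. Kenward-Roger), Fstat = test of all fixed effects.
        r2_beta <- function(Fstat, df1, df2) (df1 * Fstat / df2) / (1 + df1 * Fstat / df2)

        r2_beta(Fstat = 12.4, df1 = 3, df2 = 118)   # hypothetical test of three fixed effects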

  9. An a priori DNS study of the shadow-position mixing model

    DOE PAGES

    Zhao, Xin -Yu; Bhagatwala, Ankit; Chen, Jacqueline H.; ...

    2016-01-15

    The modeling of mixing by molecular diffusion is a central aspect of transported probability density function (tPDF) methods. In this paper, the newly-proposed shadow position mixing model (SPMM) is examined, using a DNS database for a temporally evolving di-methyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database. An approach for a priori analysis of the mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient of the shadow displacement and mixture fraction is higher than that of the shadow displacement and velocity. This suggests that the composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates that are derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found for the prediction of the locations of zero and of maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model. A higher-Reynolds-number test case is anticipated to be more appropriate to evaluate the mixing models, and stand-alone transported PDF simulations are required to more fully enforce

  10. Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhien

    Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in overall cloud feedback in GCMs. Thus improving mixed-phase cloud parameterizations in climate models is critical to reducing the climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profile for liquid phase, and IWC, Dge profiles and ice concentration for ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides necessary information to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Due to the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of liquid mass partitions are significantly different due to much higher ice concentrations in convective mixed phase clouds. 6) Systematic

  11. The use of Argo for validation and tuning of mixed layer models

    NASA Astrophysics Data System (ADS)

    Acreman, D. M.; Jeffery, C. D.

    We present results from validation and tuning of 1-D ocean mixed layer models using data from Argo floats and data from Ocean Weather Station Papa (145°W, 50°N). Model tests at Ocean Weather Station Papa showed that a bulk model could perform well provided it was tuned correctly. The Large et al. [Large, W.G., McWilliams, J.C., Doney, S.C., 1994. Oceanic vertical mixing: a review and a model with a nonlocal boundary layer parameterisation. Rev. Geophys. 32 (November), 363-403] K-profile parameterisation (KPP) model also gave a good representation of mixed layer depth provided the vertical resolution was sufficiently high. Model tests using data from a single Argo float indicated a tendency for the KPP model to deepen insufficiently over an annual cycle, whereas the tuned bulk model and general ocean turbulence model (GOTM) gave a better representation of mixed layer depth. The bulk model was then tuned using data from a sample of Argo floats and a set of optimum parameters was found; these optimum parameters were consistent with the tuning at OWS Papa.
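
    For context, a common density-threshold mixed layer depth diagnostic (e.g. the shallowest depth where potential density exceeds its 10 m value by 0.03 kg m⁻³) is sketched below for a single Argo-like profile; this is an illustrative criterion, not necessarily the definition used in the study.

        # Shallowest depth where density exceeds its 10 m value by 0.03 kg m^-3.
        mld_threshold <- function(depth, sigma, dref = 10, dsigma = 0.03) {
          s_ref <- approx(depth, sigma, xout = dref)$y
          below <- which(depth > dref & sigma > s_ref + dsigma)
          if (length(below) == 0) max(depth) else depth[min(below)]
        }

        depth <- seq(5, 200, by = 5)                                  # synthetic Argo-like profile
        sigma <- 25 + 0.002 * depth + 0.5 * plogis((depth - 60) / 8)  # pycnocline near 60 m
        mld_threshold(depth, sigma)                                   # mixed layer depth in metres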

  12. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  13. Modeling of Mixing Behavior in a Combined Blowing Steelmaking Converter with a Filter-Based Euler-Lagrange Model

    NASA Astrophysics Data System (ADS)

    Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu

    2018-05-01

    A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while a Lagrange-based discrete phase model that embeds the local volume change of rising bubbles is used for the bottom blowing. A filter-based turbulence method based on the local meshing resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated and the inherent reasons for the mixing result are clarified in terms of the characteristics of bottom-blowing plumes, the interaction between plumes and top-blowing jets, and the change of bath flow structure.

  14. Modeling condensation with a noncondensable gas for mixed convection flow

    NASA Astrophysics Data System (ADS)

    Liao, Yehong

    2007-05-01

    This research theoretically developed a novel mixed convection model for condensation with a noncondensable gas. The model developed herein comprises three components: a convection regime map; a mixed convection correlation; and a generalized diffusion layer model. These components were developed in a way to be consistent with the three-level methodology in MELCOR. The overall mixed convection model was implemented into MELCOR and satisfactorily validated with data covering a wide variety of test conditions. In the development of the convection regime map, two analyses with approximations of the local similarity method were performed to solve the multi-component two-phase boundary layer equations. The first analysis studied effects of the bulk velocity on a basic natural convection condensation process and set up conditions to distinguish natural convection from mixed convection. It was found that the superimposed velocity increases condensation heat transfer by sweeping away the noncondensable gas accumulated at the condensation boundary. The second analysis studied effects of the buoyancy force on a basic forced convection condensation process and set up conditions to distinguish forced convection from mixed convection. It was found that the superimposed buoyancy force increases condensation heat transfer by thinning the liquid film thickness and creating a steeper noncondensable gas concentration profile near the condensation interface. In the development of the mixed convection correlation accounting for suction effects, numerical data were obtained from boundary layer analysis for the three convection regimes and used to fit a curve for the Nusselt number of the mixed convection regime as a function of the Nusselt numbers of the natural and forced convection regimes. In the development of the generalized diffusion layer model, the driving potential for mass transfer was expressed as the temperature difference between the bulk and the liquid-gas interface

  15. On testing an unspecified function through a linear mixed effects model with multiple variance components

    PubMed Central

    Wang, Yuanjia; Chen, Huaihou

    2012-01-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10⁸ simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801

  16. On testing an unspecified function through a linear mixed effects model with multiple variance components.

    PubMed

    Wang, Yuanjia; Chen, Huaihou

    2012-12-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10⁸ simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.
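
    A brief R sketch of the penalized-spline/mixed-model connection exploited above: the wiggly part of the smooth corresponds to a single variance component, so testing whether the function is present reduces to testing fixed effects plus that variance component. The data are simulated, and the approximate mgcv test shown is a stand-in, not the paper's generalized F-test.

        library(mgcv)

        # Penalized-spline fit of an unspecified function f(x); in the mixed-model form the
        # wiggly part of s(x) is a random effect with one variance component.
        set.seed(15)
        x <- runif(300); y <- sin(2 * pi * x) + rnorm(300, sd = 0.4)
        fit <- gam(y ~ s(x, bs = "tp"), method = "REML")

        summary(fit)$s.table   # approximate test that f is needed (stand-in for the paper's F-test)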

  17. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.

    1994-01-01

    A methodology for the simulation of molecular mixing and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and the results are compared to experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layer present in the facility given basic assumptions about turbulence properties.

  18. Modelling of subgrid-scale phenomena in supercritical transitional mixing layers: an a priori study

    NASA Astrophysics Data System (ADS)

    Selle, Laurent C.; Okong'o, Nora A.; Bellan, Josette; Harstad, Kenneth G.

    A database of transitional direct numerical simulation (DNS) realizations of a supercritical mixing layer is analysed for understanding small-scale behaviour and examining subgrid-scale (SGS) models duplicating that behaviour. Initially, the mixing layer contains a single chemical species in each of the two streams, and a perturbation promotes roll-up and a double pairing of the four spanwise vortices initially present. The database encompasses three combinations of chemical species, several perturbation wavelengths and amplitudes, and several initial Reynolds numbers specifically chosen for the sole purpose of achieving transition. The DNS equations are the Navier-Stokes, total energy and species equations coupled to a real-gas equation of state; the fluxes of species and heat include the Soret and Dufour effects. The large-eddy simulation (LES) equations are derived from the DNS ones through filtering. Compared to the DNS equations, two types of additional terms are identified in the LES equations: SGS fluxes and other terms for which either assumptions or models are necessary. The magnitude of all terms in the LES conservation equations is analysed on the DNS database, with special attention to terms that could possibly be neglected. It is shown that in contrast to atmospheric-pressure gaseous flows, there are two new terms that must be modelled: one in each of the momentum and the energy equations. These new terms can be thought to result from the filtering of the nonlinear equation of state, and are associated with regions of high density-gradient magnitude both found in DNS and observed experimentally in fully turbulent high-pressure flows. A model is derived for the momentum-equation additional term that performs well at small filter size but deteriorates as the filter size increases, highlighting the necessity of ensuring appropriate grid resolution in LES. Modelling approaches for the energy-equation additional term are proposed, all of which may be too

  19. Development of a Medicaid Behavioral Health Case-Mix Model

    ERIC Educational Resources Information Center

    Robst, John

    2009-01-01

    Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…

  20. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    PubMed

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

    Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies
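
    A toy contrast of fixed versus mixed RSA on simulated data (all dimensions, features and responses invented): fixed RSA correlates the model RDM with the brain RDM as-is, while mixed RSA first fits one ridge-regression weight per model feature and voxel on a training image set and then compares RDMs on held-out images.

        set.seed(20)
        n_img <- 96; n_feat <- 40; n_vox <- 120
        Fm <- matrix(rnorm(n_img * n_feat), n_img, n_feat)                 # model feature space
        W  <- matrix(rnorm(n_feat * n_vox, sd = 0.3), n_feat, n_vox)       # "true" mixing weights
        B  <- Fm %*% W + matrix(rnorm(n_img * n_vox), n_img, n_vox)        # simulated brain responses

        train <- 1:48; test <- 49:96
        rdm <- function(M) as.vector(dist(M))                              # pairwise dissimilarities

        # Fixed RSA: unweighted model features vs. brain responses on the test images.
        fixed_r <- cor(rdm(Fm[test, ]), rdm(B[test, ]), method = "spearman")

        # Mixed RSA: ridge-fit one weight per feature and voxel on the training images.
        lambda <- 1
        What <- solve(crossprod(Fm[train, ]) + lambda * diag(n_feat),
                      crossprod(Fm[train, ], B[train, ]))
        mixed_r <- cor(rdm(Fm[test, ] %*% What), rdm(B[test, ]), method = "spearman")

        c(fixed = fixed_r, mixed = mixed_r)                                # mixing should help here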

  1. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

    By using the branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on the linear mixed-effect model theory and methods, the models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structure. Then, the correlation structures including complex symmetry structure (CS), first-order autoregressive structure [AR(1)], and first-order autoregressive and moving average structure [ARMA(1,1)] were added to the optimal branch size mixed-effect model. The AR(1) improved the fitting precision of branch diameter and length mixed-effect model significantly, but all the three structures didn't improve the precision of branch angle mixed-effect model. In order to describe the heteroscedasticity during building mixed-effect model, the CF1 and CF2 functions were added to the branch mixed-effect model. CF1 function improved the fitting effect of branch angle mixed model significantly, whereas CF2 function improved the fitting effect of branch diameter and length mixed model significantly. Model validation confirmed that the mixed-effect model could improve the precision of prediction, as compare to the traditional regression model for the branch size prediction of Pinus koraiensis plantation.

  2. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  3. Software engineering the mixed model for genome-wide association studies on large samples.

    PubMed

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.

  4. Digestible, metabolizable, and net energy of camelina cake fed to growing pigs and additivity of energy in mixed diets.

    PubMed

    Kim, J W; Koo, B; Nyachoti, C M

    2017-09-01

    This experiment was conducted to determine the DE, ME, and NE contents of camelina cake (CC) and to test the hypothesis that dietary glucosinolates originating from CC will affect the additivity of energy in mixed diets containing different inclusion levels of corn, soybean meal (SBM), and CC. A total of 30 growing barrows ([Yorkshire × Landrace] × Duroc) with a mean BW of 16.8 kg (SD 1.4) were randomly allotted to 1 of 5 treatments with 6 replicates per treatment. Pigs were fed experimental diets for 16 d, including 10 d for adaptation and 6 d for total collection of feces and urine. The 5 experimental diets consisted of 3 corn-based diets to determine the DE, ME, and NE of the 3 ingredients (corn, SBM, and CC) and 2 mixed diets to test the additivity of DE, ME, and NE. The corn diet contained 97.52% corn; the SBM diet contained 67.52% corn and 30.0% SBM; the CC diet contained 67.52% corn and 30.0% CC; the Mixed diet 1 contained 67.52% corn, 20.0% SBM, and 10.0% CC; and the Mixed diet 2 contained 67.25% corn, 10.0% SBM, and 20.0% CC. Vitamins and minerals were included in the diets to meet or exceed the requirements for growing pigs (). Pigs were fed their assigned diets at 550 kcal ME/kg BW per day on the basis of BW on d 1, 5, and 10, which was close to ad libitum intake. Pigs had free access to water. Determined DE, ME, and NE contents of corn were 3,348, 3,254, and 2,579 kcal/kg, respectively; those of SBM were 3,626, 3,405, and 2,129 kcal/kg, respectively; and those of CC were 3,755, 3,465, and 2,383 kcal/kg, respectively. No differences between the predicted and determined DE, ME, and NE were observed in the 2 mixed diets. In conclusion, DE, ME, and calculated NE content of CC fed to growing pigs were 3,755, 3,465, and 2,383 kcal/kg (as-fed basis), respectively. In addition, additivity of DE, ME, and calculated NE was observed in the mixed diets containing corn, SBM, and CC, which indicates that dietary glucosinolates originating from up to 30% of CC

  5. Analyzing Mixed-Dyadic Data Using Structural Equation Models

    ERIC Educational Resources Information Center

    Peugh, James L.; DiLillo, David; Panuzio, Jillian

    2013-01-01

    Mixed-dyadic data, collected from distinguishable (nonexchangeable) or indistinguishable (exchangeable) dyads, require statistical analysis techniques that model the variation within dyads and between dyads appropriately. The purpose of this article is to provide a tutorial for performing structural equation modeling analyses of cross-sectional…

  6. Trends in stratospheric ozone profiles using functional mixed models

    NASA Astrophysics Data System (ADS)

    Park, A. Y.; Guillas, S.; Petropavlovskikh, I.

    2013-05-01

    This paper is devoted to the modeling of altitude-dependent patterns of ozone variations over time. Umkher ozone profiles (quarter of Umkehr layer) from 1978 to 2011 are investigated at two locations: Boulder (USA) and Arosa (Switzerland). The study consists of two statistical stages. First we approximate ozone profiles employing an appropriate basis. To capture primary modes of ozone variations without losing essential information, a functional principal component analysis is performed as it penalizes roughness of the function and smooths excessive variations in the shape of the ozone profiles. As a result, data driven basis functions are obtained. Secondly we estimate the effects of covariates - month, year (trend), quasi biennial oscillation, the Solar cycle, arctic oscillation and the El Niño/Southern Oscillation cycle - on the principal component scores of ozone profiles over time using generalized additive models. The effects are smooth functions of the covariates, and are represented by knot-based regression cubic splines. Finally we employ generalized additive mixed effects models incorporating a more complex error structure that reflects the observed seasonality in the data. The analysis provides more accurate estimates of influences and trends, together with enhanced uncertainty quantification. We are able to capture fine variations in the time evolution of the profiles such as the semi-annual oscillation. We conclude by showing the trends by altitude over Boulder. The strongly declining trends over 2003-2011 for altitudes of 32-64 hPa show that stratospheric ozone is not yet fully recovering.

  7. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.

    PubMed

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking.

  8. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking

    PubMed Central

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults’ belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking. PMID:27853440

  9. Using Bayesian Stable Isotope Mixing Models to Enhance Marine Ecosystem Models

    EPA Science Inventory

    The use of stable isotopes in food web studies has proven to be a valuable tool for ecologists. We investigated the use of Bayesian stable isotope mixing models as constraints for an ecosystem model of a temperate seagrass system on the Atlantic coast of France. δ13C and δ15N i...

  10. Statistical modelling of growth using a mixed model with orthogonal polynomials.

    PubMed

    Suchocki, T; Szyda, J

    2011-02-01

    In statistical modelling, the effects of single-nucleotide polymorphisms (SNPs) are often regarded as time-independent. However, for traits recorded repeatedly, it is very interesting to investigate the behaviour of gene effects over time. In the analysis, simulated data from the 13th QTL-MAS Workshop (Wageningen, The Netherlands, April 2009) was used and the major goal was the modelling of genetic effects as time-dependent. For this purpose, a mixed model which describes each effect using the third-order Legendre orthogonal polynomials, in order to account for the correlation between consecutive measurements, is fitted. In this model, SNPs are modelled as fixed, while the environment is modelled as random effects. The maximum likelihood estimates of model parameters are obtained by the expectation-maximisation (EM) algorithm and the significance of the additive SNP effects is based on the likelihood ratio test, with p-values corrected for multiple testing. For each significant SNP, the percentage of the total variance contributed by this SNP is calculated. Moreover, by using a model which simultaneously incorporates effects of all of the SNPs, the prediction of future yields is conducted. As a result, 179 from the total of 453 SNPs covering 16 out of 18 true quantitative trait loci (QTL) were selected. The correlation between predicted and true breeding values was 0.73 for the data set with all SNPs and 0.84 for the data set with selected SNPs. In conclusion, we showed that a longitudinal approach allows for estimating changes of the variance contributed by each SNP over time and demonstrated that, for prediction, the pre-selection of SNPs plays an important role.

  11. Mixed ice accretion on aircraft wings

    NASA Astrophysics Data System (ADS)

    Janjua, Zaid A.; Turnbull, Barbara; Hibberd, Stephen; Choi, Kwing-So

    2018-02-01

    Ice accretion is a problematic natural phenomenon that affects a wide range of engineering applications including power cables, radio masts, and wind turbines. Accretion on aircraft wings occurs when supercooled water droplets freeze instantaneously on impact to form rime ice or runback as water along the wing to form glaze ice. Most models to date have ignored the accretion of mixed ice, which is a combination of rime and glaze. A parameter we term the "freezing fraction" is defined as the fraction of a supercooled droplet that freezes on impact with the top surface of the accretion ice to explore the concept of mixed ice accretion. Additionally we consider different "packing densities" of rime ice, mimicking the different bulk rime densities observed in nature. Ice accretion is considered in four stages: rime, primary mixed, secondary mixed, and glaze ice. Predictions match with existing models and experimental data in the limiting rime and glaze cases. The mixed ice formulation however provides additional insight into the composition of the overall ice structure, which ultimately influences adhesion and ice thickness, and shows that for similar atmospheric parameter ranges, this simple mixed ice description leads to very different accretion rates. A simple one-dimensional energy balance was solved to show how this freezing fraction parameter increases with decrease in atmospheric temperature, with lower freezing fraction promoting glaze ice accretion.

  12. Chandra Observations and Models of the Mixed Morphology Supernova Remnant W44: Global Trends

    NASA Technical Reports Server (NTRS)

    Shelton, R. L.; Kuntz, K. D.; Petre, R.

    2004-01-01

    We report on the Chandra observations of the archetypical mixed morphology (or thermal composite) supernova remnant, W44. As with other mixed morphology remnants, W44's projected center is bright in thermal X-rays. It has an obvious radio shell, but no discernable X-ray shell. In addition, X-ray bright knots dot W44's image. The spectral analysis of the Chandra data show that the remnant s hot, bright projected center is metal-rich and that the bright knots are regions of comparatively elevated elemental abundances. Neon is among the affected elements, suggesting that ejecta contributes to the abundance trends. Furthermore, some of the emitting iron atoms appear to be underionized with respect to the other ions, providing the first potential X-ray evidence for dust destruction in a supernova remnant. We use the Chandra data to test the following explanations for W44's X-ray bright center: 1.) entropy mixing due to bulk mixing or thermal conduction, 2.) evaporation of swept up clouds, and 3.) a metallicity gradient, possibly due to dust destruction and ejecta enrichment. In these tests, we assume that the remnant has evolved beyond the adiabatic evolutionary stage, which explains the X-ray dimness of the shell. The entropy mixed model spectrum was tested against the Chandra spectrum for the remnant's projected center and found to be a good match. The evaporating clouds model was constrained by the finding that the ionization parameters of the bright knots are similar to those of the surrounding regions. While both the entropy mixed and the evaporating clouds models are known to predict centrally bright X-ray morphologies, their predictions fall short of the observed brightness gradient. The resulting brightness gap can be largely filled in by emission from the extra metals in and near the remnant's projected center. The preponderance of evidence (including that drawn from other studies) suggests that W44's remarkable morphology can be attributed to dust destruction

  13. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.

  14. Quasi 1D Modeling of Mixed Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Woolwine, Kyle J.

    2012-01-01

    The AeroServoElasticity task under the NASA Supersonics Project is developing dynamic models of the propulsion system and the vehicle in order to conduct research for integrated vehicle dynamic performance. As part of this effort, a nonlinear quasi 1-dimensional model of the 2-dimensional bifurcated mixed compression supersonic inlet is being developed. The model utilizes computational fluid dynamics for both the supersonic and subsonic diffusers. The oblique shocks are modeled utilizing compressible flow equations. This model also implements variable geometry required to control the normal shock position. The model is flexible and can also be utilized to simulate other mixed compression supersonic inlet designs. The model was validated both in time and in the frequency domain against the legacy LArge Perturbation INlet code, which has been previously verified using test data. This legacy code written in FORTRAN is quite extensive and complex in terms of the amount of software and number of subroutines. Further, the legacy code is not suitable for closed loop feedback controls design, and the simulation environment is not amenable to systems integration. Therefore, a solution is to develop an innovative, more simplified, mixed compression inlet model with the same steady state and dynamic performance as the legacy code that also can be used for controls design. The new nonlinear dynamic model is implemented in MATLAB Simulink. This environment allows easier development of linear models for controls design for shock positioning. The new model is also well suited for integration with a propulsion system model to study inlet/propulsion system performance, and integration with an aero-servo-elastic system model to study integrated vehicle ride quality, vehicle stability, and efficiency.

  15. Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model

    NASA Astrophysics Data System (ADS)

    Megann, A.; Nurser, G.

    2014-12-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, and have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimations have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from analysis of the GO5.0 model based on the isopycnal watermass analysis of Lee et al (2002) that indicate that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.

  16. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossi, R; Gallagher, B; Neville, J

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively supports attributes. We applied ourmore » model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.« less

  17. Estimates of lake trout (Salvelinus namaycush) diet in Lake Ontario using two and three isotope mixing models

    USGS Publications Warehouse

    Colborne, Scott F.; Rush, Scott A.; Paterson, Gordon; Johnson, Timothy B.; Lantry, Brian F.; Fisk, Aaron T.

    2016-01-01

    Recent development of multi-dimensional stable isotope models for estimating both foraging patterns and niches have presented the analytical tools to further assess the food webs of freshwater populations. One approach to refine predictions from these analyses is to include a third isotope to the more common two-isotope carbon and nitrogen mixing models to increase the power to resolve different prey sources. We compared predictions made with two-isotope carbon and nitrogen mixing models and three-isotope models that also included sulphur (δ34S) for the diets of Lake Ontario lake trout (Salvelinus namaycush). We determined the isotopic compositions of lake trout and potential prey fishes sampled from Lake Ontario and then used quantitative estimates of resource use generated by two- and three-isotope Bayesian mixing models (SIAR) to infer feeding patterns of lake trout. Both two- and three-isotope models indicated that alewife (Alosa pseudoharengus) and round goby (Neogobius melanostomus) were the primary prey items, but the three-isotope models were more consistent with recent measures of prey fish abundances and lake trout diets. The lake trout sampled directly from the hatcheries had isotopic compositions derived from the hatchery food which were distinctively different from those derived from the natural prey sources. Those hatchery signals were retained for months after release, raising the possibility to distinguish hatchery-reared yearlings and similarly sized naturally reproduced lake trout based on isotopic compositions. Addition of a third-isotope resulted in mixing model results that confirmed round goby have become an important component of lake trout diet and may be overtaking alewife as a prey resource.

  18. Estimating the numerical diapycnal mixing in the GO5.0 ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex; Nurser, George

    2014-05-01

    Constant-depth (or "z-coordinate") ocean models such as MOM and NEMO have become the de facto workhorse in climate applications, and have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes (e.g. Hofmann and Maqueda, 2006), and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimations have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2013). It uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. Two approaches to quantifying the numerical diapycnal mixing in this model are described: the first is based on the isopycnal watermass analysis of Lee et al (2002), while the second uses a passive tracer to diagnose mixing across density surfaces. Results from these two methods will be compared and contrasted. Hofmann, M. and Maqueda, M. A. M., 2006. Performance of a second-order moments advection scheme in an ocean general circulation model. JGR-Oceans, 111(C5). Lee, M.-M., Coward, A.C., Nurser, A.G., 2002. Spurious diapycnal mixing of deep waters in an eddy-permitting global ocean model. JPO 32, 1522-1535 Megann, A., Storkey, D., Aksenov, Y., Alderson, S., Calvert, D., Graham, T., Hyder, P., Siddorn, J., and Sinha, B., 2013: GO5.0: The joint NERC-Met Office NEMO global ocean model for use in coupled and forced applications, Geosci. Model Dev. Discuss., 6, 5747-5799,.

  19. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Miscibility and Thermodynamics of Mixing of Different Models of Formamide and Water in Computer Simulation.

    PubMed

    Kiss, Bálint; Fábián, Balázs; Idrissi, Abdenacer; Szőri, Milán; Jedlovszky, Pál

    2017-07-27

    The thermodynamic changes that occur upon mixing five models of formamide and three models of water, including the miscibility of these model combinations itself, is studied by performing Monte Carlo computer simulations using an appropriately chosen thermodynamic cycle and the method of thermodynamic integration. The results show that the mixing of these two components is close to the ideal mixing, as both the energy and entropy of mixing turn out to be rather close to the ideal term in the entire composition range. Concerning the energy of mixing, the OPLS/AA_mod model of formamide behaves in a qualitatively different way than the other models considered. Thus, this model results in negative, while the other ones in positive energy of mixing values in combination with all three water models considered. Experimental data supports this latter behavior. Although the Helmholtz free energy of mixing always turns out to be negative in the entire composition range, the majority of the model combinations tested either show limited miscibility, or, at least, approach the miscibility limit very closely in certain compositions. Concerning both the miscibility and the energy of mixing of these model combinations, we recommend the use of the combination of the CHARMM formamide and TIP4P water models in simulations of water-formamide mixtures.

  1. Modelling ice microphysics of mixed-phase clouds

    NASA Astrophysics Data System (ADS)

    Ahola, J.; Raatikainen, T.; Tonttila, J.; Romakkaniemi, S.; Kokkola, H.; Korhonen, H.

    2017-12-01

    The low-level Arctic mixed-phase clouds have a significant role for the Arctic climate due to their ability to absorb and reflect radiation. Since the climate change is amplified in polar areas, it is vital to apprehend the mixed-phase cloud processes. From a modelling point of view, this requires a high spatiotemporal resolution to capture turbulence and the relevant microphysical processes, which has shown to be difficult.In order to solve this problem about modelling mixed-phase clouds, a new ice microphysics description has been developed. The recently published large-eddy simulation cloud model UCLALES-SALSA offers a good base for a feasible solution (Tonttila et al., Geosci. Mod. Dev., 10:169-188, 2017). The model includes aerosol-cloud interactions described with a sectional SALSA module (Kokkola et al., Atmos. Chem. Phys., 8, 2469-2483, 2008), which represents a good compromise between detail and computational expense.Newly, the SALSA module has been upgraded to include also ice microphysics. The dynamical part of the model is based on well-known UCLA-LES model (Stevens et al., J. Atmos. Sci., 56, 3963-3984, 1999) which can be used to study cloud dynamics on a fine grid.The microphysical description of ice is sectional and the included processes consist of formation, growth and removal of ice and snow particles. Ice cloud particles are formed by parameterized homo- or heterogeneous nucleation. The growth mechanisms of ice particles and snow include coagulation and condensation of water vapor. Autoconversion from cloud ice particles to snow is parameterized. The removal of ice particles and snow happens by sedimentation and melting.The implementation of ice microphysics is tested by initializing the cloud simulation with atmospheric observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC). The results are compared to the model results shown in the paper of Ovchinnikov et al. (J. Adv. Model. Earth Syst., 6, 223-248, 2014) and they show a good

  2. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    NASA Astrophysics Data System (ADS)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6) . Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.

  3. Scale-up on basis of structured mixing models: A new concept.

    PubMed

    Mayr, B; Moser, A; Nagy, E; Horvat, P

    1994-02-05

    A new scale-up concept based upon mixing models for bioreactors equipped with Rushton turbines using the tanks-in-series concept is presented. The physical mixing model includes four adjustable parameters, i.e., radial and axial circulation time, number of ideally mixed elements in one cascade, and the volume of the ideally mixed turbine region. The values of the model parameters were adjusted with the application of a modified Monte-Carlo optimization method, which fitted the simulated response function to the experimental curve. The number of cascade elements turned out to be constant (N = 4). The model parameter radial circulation time is in good agreement with the one obtained by the pumping capacity. In case of remaining parameters a first or second order formal equation was developed, including four operational parameters (stirring and aeration intensity, scale, viscosity). This concept can be extended to several other types of bioreactors as well, and it seems to be a suitable tool to compare the bioprocess performance of different types of bioreactors. (c) 1994 John Wiley & Sons, Inc.

  4. Conservative mixing, competitive mixing and their applications

    NASA Astrophysics Data System (ADS)

    Klimenko, A. Y.

    2010-12-01

    In many of the models applied to simulations of turbulent transport and turbulent combustion, the mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. Stochastic particles with properties and mixing can be used not only for simulating turbulent combustion, but also for modeling a large spectrum of physical phenomena. Traditional mixing, which is commonly used in the modeling of turbulent reacting flows, is conservative: the total amount of scalar is (or should be) preserved during a mixing event. It is worthwhile, however, to consider a more general mixing that does not possess these conservative properties; hence, our consideration lies beyond traditional mixing. In non-conservative mixing, the particle post-mixing average becomes biased towards one of the particles participating in mixing. The extreme form of non-conservative mixing can be called competitive mixing or competition: after a mixing event, the loser particle simply receives the properties of the winner particle. Particles with non-conservative mixing can be used to emulate various phenomena involving competition. In particular, we investigate cyclic behavior that can be attributed to complex competing systems. We show that the localness and intransitivity of competitive mixing are linked to the cyclic behavior.

  5. An epidemic model to evaluate the homogeneous mixing assumption

    NASA Astrophysics Data System (ADS)

    Turnes, P. P.; Monteiro, L. H. A.

    2014-11-01

    Many epidemic models are written in terms of ordinary differential equations (ODE). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is not relevant to predict the spread of a pathogen in this population. Here, we propose an epidemic model based on ODE to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. Then, we employ this model to evaluate the validity of the homogeneous mixing assumption by using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.

  6. Dielectric properties and microstructure of sintered BaTiO3 fabricated by using mixed 150-nm and 80-nm powders with various additives

    NASA Astrophysics Data System (ADS)

    Oh, Min Wook; Kang, Jae Won; Yeo, Dong Hun; Shin, Hyo Soon; Jeong, Dae Yong

    2015-04-01

    Recently, the use of small-sized BaTiO3 particles for ultra-thin MLCC research has increased as a method for minimizing the dielectric layer's thickness in thick film process. However, when particles smaller than 100 nm are used, the reduced particle size leads to a reduced dielectric constant. The use of nanoparticles, therefore, requires an increase in the amount of additive used due to the increase in the specific surface area, thus increasing the production cost. In this study, a novel method of coating 150-nm and 80-nm BaTiO3 powders with additives and mixing them together was employed, taking advantage of the effect obtained through the use of BaTiO3 particles smaller than 100 nm, to conveniently obtain the desired dielectric constant and thermal characteristics. Also, the microstructure and the dielectric properties were evaluated. The additives Dy, Mn, Mg, Si, and Cr were coated on a 150-nm powder, and the additives Dy, Mn, Mg, and Si were coated on 80-nm powder, followed by mixing at a ratio of 1:1. As a result, the microstructure revealed grain formation according to the liquid-phase additive Si; additionally, densification was well realized. However, non-reducibility was not obtained, and the material became a semiconductor. When the amount of added Mn in the 150-nm powder was increased to 0.2 and 0.3 mol%, insignificant changes in the microstructure were observed, and the bulk density after mixing was found to have increased drastically in comparison to that before mixing. Also, non-reducibility was obtained for certain conditions. The dielectric property was found to be consistent with the densification and the grain size. The mixed composition #1-0.3 had a dielectric constant over 2000, and the result somewhat satisfied the dielectric constant temperature dependency for X6S.

  7. Additional asphalt to increase the durability of Virginia's superpave surface mixes.

    DOT National Transportation Integrated Search

    2003-01-01

    Although Superpave has been successful in preventing rutting, many believe that the design asphalt content needs fine-tuning to produce durable mixes. This investigation used various laboratory tests to test samples of field surface mixes (12.5 mm an...

  8. Radiotracer Technology in Mixing Processes for Industrial Applications

    PubMed Central

    Othman, N.; Kamarudin, S. K.

    2014-01-01

    Many problems associated with the mixing process remain unsolved and result in poor mixing performance. The residence time distribution (RTD) and the mixing time are the most important parameters that determine the homogenisation that is achieved in the mixing vessel and are discussed in detail in this paper. In addition, this paper reviews the current problems associated with conventional tracers, mathematical models, and computational fluid dynamics simulations involved in radiotracer experiments and hybrid of radiotracer. PMID:24616642

  9. Linear mixing model applied to AVHRR LAC data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constraine Least Squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National park region was used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images show the potential of the unmixing techniques when using coarse resolution data for global studies.

  10. Estimation water vapor content using the mixing ratio method and validated with the ANFIS PWV model

    NASA Astrophysics Data System (ADS)

    Suparta, W.; Alhasa, K. M.; Singh, M. S. J.

    2017-05-01

    This study reported the comparison between water vapor content, the surface meteorological data (pressure, temperature, and relative humidity), and precipitable water vapor (PWV) produced by PWV from adaptive neuro fuzzy inference system (ANFIS) for areas in the Universiti Kebangsaan Malaysia Bangi (UKMB) station. The water vapor content value was estimated with mixing ratio method and the surface meteorological data as the parameter inputs. The accuracy of water vapor content was validated with PWV from ANFIS PWV model for the period of 20-23 December 2016. The result showed that the water vapor content has a similar trend with the PWV which produced by ANFIS PWV model (r = 0.975 at the 99% confidence level). This indicates that the water vapor content that obtained with mixing ratio agreed very well with the ANFIS PWV model. In addition, this study also found, the pattern of water vapor content and PWV have more influenced by the relative humidity.

  11. Stochastic transport models for mixing in variable-density turbulence

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

    In variable-density (VD) turbulent mixing, where very-different- density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  12. Population stochastic modelling (PSM)--an R package for mixed-effects models based on stochastic differential equations.

    PubMed

    Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik

    2009-06-01

    The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling although violating the hypothesis for many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.

  13. Development of an unresolved CFD-DEM model for the flow of viscous suspensions and its application to solid-liquid mixing

    NASA Astrophysics Data System (ADS)

    Blais, Bruno; Lassaigne, Manon; Goniva, Christoph; Fradette, Louis; Bertrand, François

    2016-08-01

    Although viscous solid-liquid mixing plays a key role in the industry, the vast majority of the literature on the mixing of suspensions is centered around the turbulent regime of operation. However, the laminar and transitional regimes face considerable challenges. In particular, it is important to know the minimum impeller speed (Njs) that guarantees the suspension of all particles. In addition, local information on the flow patterns is necessary to evaluate the quality of mixing and identify the presence of dead zones. Multiphase computational fluid dynamics (CFD) is a powerful tool that can be used to gain insight into local and macroscopic properties of mixing processes. Among the variety of numerical models available in the literature, which are reviewed in this work, unresolved CFD-DEM, which combines CFD for the fluid phase with the discrete element method (DEM) for the solid particles, is an interesting approach due to its accurate prediction of the granular dynamics and its capability to simulate large amounts of particles. In this work, the unresolved CFD-DEM method is extended to viscous solid-liquid flows. Different solid-liquid momentum coupling strategies, along with their stability criteria, are investigated and their accuracies are compared. Furthermore, it is shown that an additional sub-grid viscosity model is necessary to ensure the correct rheology of the suspensions. The proposed model is used to study solid-liquid mixing in a stirred tank equipped with a pitched blade turbine. It is validated qualitatively by comparing the particle distribution against experimental observations, and quantitatively by compairing the fraction of suspended solids with results obtained via the pressure gauge technique.

  14. Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.

    PubMed

    Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng

    2014-06-01

    Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. Published by Elsevier Ltd.

  15. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    ERIC Educational Resources Information Center

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compare the data analytic results from three regression…

  16. ATLAS - A new Lagrangian transport and mixing model with detailed stratospheric chemistry

    NASA Astrophysics Data System (ADS)

    Wohltmann, I.; Rex, M.; Lehmann, R.

    2009-04-01

    We present a new global Chemical Transport Model (CTM) with full stratospheric chemistry and Lagrangian transport and mixing called ATLAS. Lagrangian models have some crucial advantages over Eulerian grid-box based models, like no numerical diffusion, no limitation of the time step of the model by the CFL criterion, conservation of mixing ratios by design and easy parallelization of code. The transport module is based on a trajectory code developed at the Alfred Wegener Institute. The horizontal and vertical resolution, the vertical coordinate system (pressure, potential temperature, hybrid coordinate) and the time step of the model are flexible, so that the model can be used both for process studies and long-time runs over several decades. Mixing of the Lagrangian air parcels is parameterized based on the local shear and strain of the flow with a method similar to that used in the CLaMS model, but with some modifications like a triangulation that introduces no vertical layers. The stratospheric chemistry module was developed at the Institute and includes 49 species and 170 reactions and a detailed treatment of heterogenous chemistry on polar stratospheric clouds. We present an overview over the model architecture, the transport and mixing concept and some validation results. Comparison of model results with tracer data from flights of the ER2 aircraft in the stratospheric polar vortex in 1999/2000 which are able to resolve fine tracer filaments show that excellent agreement with observed tracer structures can be achieved with a suitable mixing parameterization.

  17. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. Mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number degree of freedoms for the mixed GMsFEM and substantially impacts on the computation efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degree of freedoms. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous

  18. Model's sparse representation based on reduced mixed GMsFE basis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. Mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a largemore » number degree of freedoms for the mixed GMsFEM and substantially impacts on the computation efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degree of freedoms. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random

  19. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  20. An explicit mixed numerical method for mesoscale model

    NASA Technical Reports Server (NTRS)

    Hsu, H.-M.

    1981-01-01

    A mixed numerical method has been developed for mesoscale models. The technique consists of a forward difference scheme for time tendency terms, an upstream scheme for advective terms, and a central scheme for the other terms in a physical system. It is shown that the mixed method is conditionally stable and highly accurate for approximating the system of either shallow-water equations in one dimension or primitive equations in three dimensions. Since the technique is explicit and two time level, it conserves computer and programming resources.
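
    The mixed scheme described above can be sketched for a one-dimensional advection-diffusion equation; the grid, time step, and coefficients below are assumptions chosen only to satisfy the usual stability limits, not values from the paper.

    import numpy as np

    # u_t + c u_x = K u_xx on a periodic domain: forward difference in time,
    # upstream (upwind) scheme for advection, central scheme for diffusion.
    nx, dx, dt, nsteps = 200, 1.0, 0.2, 500
    c, K = 2.0, 1.0
    assert c * dt / dx <= 1.0 and 2.0 * K * dt / dx**2 <= 1.0  # CFL-type stability checks

    x = np.arange(nx) * dx
    u = np.exp(-((x - 50.0) / 10.0) ** 2)                      # initial Gaussian pulse

    for _ in range(nsteps):
        up = np.roll(u, 1)                                     # u[i-1]
        dn = np.roll(u, -1)                                    # u[i+1]
        adv = -c * (u - up) / dx                               # upstream scheme (c > 0)
        dif = K * (dn - 2.0 * u + up) / dx**2                  # central scheme
        u = u + dt * (adv + dif)                               # forward step in time

    print("total 'mass' after integration:", u.sum() * dx)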

  1. Computation of turbulent high speed mixing layers using a two-equation turbulence model

    NASA Technical Reports Server (NTRS)

    Narayan, J. R.; Sekar, B.

    1991-01-01

    A two-equation turbulence model was extended to be applicable to compressible flows. A compressibility correction, based on modelling the dilational terms in the Reynolds stress equations, was included in the model. The model is used in conjunction with the SPARK code for the computation of high speed mixing layers. The observed trend of decreasing growth rate with increasing convective Mach number in compressible mixing layers is well predicted by the model. The predictions agree well with the experimental data and the results from a compressible Reynolds stress model. The present model appears to be well suited for the study of compressible free shear flows. Preliminary results obtained for the reacting mixing layers are included.

  2. Color Addition and Subtraction Apps

    NASA Astrophysics Data System (ADS)

    Ruiz, Frances; Ruiz, Michael J.

    2015-10-01

    Color addition and subtraction apps in HTML5 have been developed for students as an online hands-on experience so that they can more easily master principles introduced through traditional classroom demonstrations. The evolution of the additive RGB color model is traced through the early IBM color adapters so that students can proceed step by step in understanding mathematical representations of RGB color. Finally, color addition and subtraction are presented for the X11 colors from web design to illustrate yet another real-life application of color mixing.
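
    A minimal sketch of additive RGB mixing of the kind such apps demonstrate (channel-wise addition clipped to the 0-255 range); the colour values are generic examples, not taken from the apps themselves.

    def add_rgb(c1, c2):
        # Additive mixing: add each channel and clip at the maximum intensity.
        return tuple(min(a + b, 255) for a, b in zip(c1, c2))

    red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
    print(add_rgb(red, green))                  # (255, 255, 0) -> yellow
    print(add_rgb(add_rgb(red, green), blue))   # (255, 255, 255) -> white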

  3. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures

    PubMed Central

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model—dimensional or discrete—as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of opposite valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805

  4. The salinity effect in a mixed layer ocean model

    NASA Technical Reports Server (NTRS)

    Miller, J. R.

    1976-01-01

    A model of the thermally mixed layer in the upper ocean as developed by Kraus and Turner and extended by Denman is further extended to investigate the effects of salinity. In the tropical and subtropical Atlantic Ocean rapid increases in salinity occur at the bottom of a uniformly mixed surface layer. The most significant effects produced by the inclusion of salinity are the reduction of the deepening rate and the corresponding change in the heating characteristics of the mixed layer. If the net surface heating is positive, but small, salinity effects must be included to determine whether the mixed layer temperature will increase or decrease. Precipitation over tropical oceans leads to the development of a shallow stable layer accompanied by a decrease in the temperature and salinity at the sea surface.

  5. BDA special care case mix model.

    PubMed

    Bateman, P; Arnold, C; Brown, R; Foster, L V; Greening, S; Monaghan, N; Zoitopoulos, L

    2010-04-10

    Routine dental care provided in special care dentistry is complicated by patient specific factors which increase the time taken and costs of treatment. The BDA have developed and conducted a field trial of a case mix tool to measure this complexity. For each episode of care the case mix tool assesses the following on a four point scale: 'ability to communicate', 'ability to cooperate', 'medical status', 'oral risk factors', 'access to oral care' and 'legal and ethical barriers to care'. The tool is reported to be easy to use and captures sufficient detail to discriminate between types of service and special care dentistry provided. It offers potential as a simple to use and clinically relevant source of performance management and commissioning data. This paper describes the model, demonstrates how it is currently being used, and considers future developments in its use.

  6. Influence assessment in censored mixed-effects models using the multivariate Student’s-t distribution

    PubMed Central

    Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.

    2015-01-01

    In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially in the presence of outliers and thick tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student’s-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student’s-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871

  7. Aggregation of gluten proteins in model dough after fibre polysaccharide addition.

    PubMed

    Nawrocka, Agnieszka; Szymańska-Chargot, Monika; Miś, Antoni; Wilczewska, Agnieszka Z; Markiewicz, Karolina H

    2017-09-15

    FT-Raman spectroscopy, thermogravimetry and differential scanning calorimetry were used to study changes in the structure of gluten proteins and in their thermal properties as influenced by four dietary fibre polysaccharides (microcrystalline cellulose, inulin, apple pectin and citrus pectin) during development of a model dough. The flour reconstituted from wheat starch and wheat gluten was mixed with the polysaccharides at five concentrations: 3%, 6%, 9%, 12% and 18%. The results showed that all polysaccharides induced similar changes in the secondary structure of gluten proteins, concerning the formation of aggregates (1604 cm⁻¹), H-bonded parallel- and antiparallel-β-sheets (1690 cm⁻¹) and H-bonded β-turns (1664 cm⁻¹). These changes concerned mainly glutenins, since β-structures are characteristic of them. The observed structural changes confirmed the hypothesis of partial dehydration of the gluten network after polysaccharide addition. The gluten aggregation and dehydration processes were also reflected in the DSC results, while the TGA results showed that the gluten network remained thermally stable after polysaccharide addition. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Validation of an ocean shelf model for the prediction of mixed-layer properties in the Mediterranean Sea west of Sardinia

    NASA Astrophysics Data System (ADS)

    Onken, Reiner

    2017-04-01

    The Regional Ocean Modeling System (ROMS) has been employed to explore the sensitivity of the forecast skill of mixed-layer properties to initial conditions, boundary conditions, and vertical mixing parameterisations. The initial and lateral boundary conditions were provided by the Mediterranean Forecasting System (MFS) or by the MERCATOR global ocean circulation model via one-way nesting; the initial conditions were additionally updated through the assimilation of observations. Nowcasts and forecasts from the weather forecast models COSMO-ME and COSMO-IT, partly melded with observations, served as surface boundary conditions. The vertical mixing was parameterised by the GLS (generic length scale) scheme of Umlauf and Burchard (2003) in four different set-ups. All ROMS forecasts were validated against the observations taken during the REP14-MED survey to the west of Sardinia. Nesting ROMS in MERCATOR and updating the initial conditions through data assimilation provided the best agreement of the predicted mixed-layer properties with the time series from a moored thermistor chain. Further improvement was obtained by using COSMO-ME atmospheric forcing, melded with real observations, and by applying the k-ω vertical mixing scheme with increased vertical eddy diffusivity. The predicted temporal variability of the mixed-layer temperature was reasonably well correlated with the observed variability, while the modelled variability of the mixed-layer depth agreed with the observations only near the diurnal frequency peak. For the forecast horizontal variability, reasonable agreement was found with observations from a ScanFish section, but only for the mesoscale wave number band; the observed sub-mesoscale variability was not reproduced by ROMS.

  9. Mixed layers of sodium caseinate + dextran sulfate: influence of order of addition to oil-water interface.

    PubMed

    Jourdain, Laureline S; Schmitt, Christophe; Leser, Martin E; Murray, Brent S; Dickinson, Eric

    2009-09-01

    We report on the interfacial properties of electrostatic complexes of protein (sodium caseinate) with a highly sulfated polysaccharide (dextran sulfate). Two routes were investigated for preparation of adsorbed layers at the n-tetradecane-water interface at pH = 6. Bilayers were made by the layer-by-layer deposition technique whereby polysaccharide was added to a previously established protein-stabilized interface. Mixed layers were made by the conventional one-step method in which soluble protein-polysaccharide complexes were adsorbed directly at the interface. Protein + polysaccharide systems gave a slower decay of interfacial tension and stronger dilatational viscoelastic properties than the protein alone, but there was no significant difference in dilatational properties between mixed layers and bilayers. Conversely, shear rheology experiments exhibited significant differences between the two kinds of interfacial layers, with the mixed system giving much stronger interfacial films than the bilayer system, i.e., shear viscosities and moduli at least an order of magnitude higher. The film shear viscoelasticity was further enhanced by acidification of the biopolymer mixture to pH = 2 prior to interface formation. Taken together, these measurements provide insight into the origin of previously reported differences in stability properties of oil-in-water emulsions made by the bilayer and mixed layer approaches. Addition of a proteolytic enzyme (trypsin) to both types of interfaces led to a significant increase in the elastic modulus of the film, suggesting that the enzyme was adsorbed at the interface via complexation with dextran sulfate. Overall, this study has confirmed the potential of shear rheology as a highly sensitive probe of associative electrostatic interactions and interfacial structure in mixed biopolymer layers.

  10. Semiparametric mixed-effects analysis of PK/PD models using differential equations.

    PubMed

    Wang, Yi; Eskridge, Kent M; Zhang, Shunpu

    2008-08-01

    Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
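
    The structure dx/dt = A(t)x + B(t) with a spline-represented B(t) can be sketched as follows; A(t), the knots and the spline coefficients are assumed for illustration only (in the actual method B(t) is estimated by penalized splines within a nonlinear mixed-effects fit, which is not reproduced here).

    import numpy as np
    from scipy.interpolate import BSpline
    from scipy.integrate import solve_ivp

    # Cubic B-spline representation of the nonparametric input B(t) on [0, 10].
    knots = np.concatenate(([0.0] * 4, [2.0, 4.0, 6.0, 8.0], [10.0] * 4))
    coef = np.array([0.0, 0.5, 1.0, 0.3, -0.2, 0.1, 0.0, 0.4])   # assumed spline coefficients
    B = BSpline(knots, coef, k=3)

    def rhs(t, x, a=-0.4):                     # assumed constant A(t) = -0.4
        return a * x + B(t)

    sol = solve_ivp(rhs, (0.0, 10.0), y0=[1.0], dense_output=True)
    print("x(10) =", sol.y[0, -1])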

  11. Comparing Bayesian stable isotope mixing models: Which tools are best for sediments?

    NASA Astrophysics Data System (ADS)

    Morris, David; Macko, Stephen

    2016-04-01

    Bayesian stable isotope mixing models have received much attention as a means of coping with multiple sources and uncertainty in isotope ecology (e.g. Phillips et al., 2014), enabling the probabilistic determination of the contributions made by each food source to the total diet of the organism in question. We have applied these techniques to marine sediments for the first time. The sediments of the Chukchi Sea and Beaufort Sea offer an opportunity to utilize these models for organic geochemistry, as there are three likely sources of organic carbon; pelagic phytoplankton, sea ice algae and terrestrial material from rivers and coastal erosion, as well as considerable variation in the marine δ13C values. Bayesian mixing models using bulk δ13C and δ15N data from Shelf Basin Interaction samples allow for the probabilistic determination of the contributions made by each of the sources to the organic carbon budget, and can be compared with existing source contribution estimates based upon biomarker models (e.g. Belicka & Harvey, 2009, Faux, Belicka, & Rodger Harvey, 2011). The δ13C of this preserved material varied from -22.1 to -16.7‰ (mean -19.4±1.3‰), while δ15N varied from 4.1 to 7.6‰ (mean 5.7±1.1‰). Using the SIAR model, we found that water column productivity was the source of between 50 and 70% of the organic carbon buried in this portion of the western Arctic with the remainder mainly supplied by sea ice algal productivity (25-35%) and terrestrial inputs (15%). With many mixing models now available, this study will compare SIAR with MixSIAR and the new FRUITS model. Monte Carlo modeling of the mixing polygon will be used to validate the models, and hierarchical models will be utilised to glean more information from the data set.
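
    For orientation, the simplest deterministic relative of these Bayesian mixing models is an exact two-tracer, three-source mass balance. In the sketch below the sediment values are the means quoted above, but the source end-member values are assumptions for illustration only, so the resulting proportions are not the study's estimates.

    import numpy as np

    # Assumed end members (d13C, d15N) for the three sources; sediment means from above.
    sources = np.array([
        [-20.0, 7.0],   # pelagic phytoplankton (assumed)
        [-15.0, 6.0],   # sea ice algae (assumed)
        [-27.0, 1.0],   # terrestrial material (assumed)
    ])
    mixture = np.array([-19.4, 5.7])

    # Solve: sum_i p_i * source_i = mixture and sum_i p_i = 1.
    A = np.vstack([sources.T, np.ones(3)])
    b = np.concatenate([mixture, [1.0]])
    p = np.linalg.solve(A, b)
    print(dict(zip(["phytoplankton", "ice_algae", "terrestrial"], np.round(p, 3))))

    Bayesian models such as SIAR or MixSIAR additionally place priors on the proportions and propagate uncertainty in the source and mixture signatures, which is what allows probabilistic statements such as the 50-70% range reported above.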

  12. Mixed Phase Modeling in GlennICE with Application to Engine Icing

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Jorgenson, Philip C. E.; Veres, Joseph P.

    2011-01-01

    A capability for modeling ice crystals and mixed phase icing has been added to GlennICE. Modifications have been made to the particle trajectory algorithm and energy balance to model this behavior. This capability has been added as part of a larger effort to model ice crystal ingestion in aircraft engines. Comparisons have been made to four mixed phase ice accretions performed in the Cox icing tunnel in order to calibrate an ice erosion model. A sample ice ingestion case was performed using the Energy Efficient Engine (E3) model in order to illustrate current capabilities. Engine performance characteristics were supplied using the Numerical Propulsion System Simulation (NPSS) model for this test case.

  13. MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)

    EPA Science Inventory

    We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...

  14. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems
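
    One of the fingerprint-selection steps mentioned above, the Kruskal-Wallis screen, can be sketched as follows; the element names, group sizes, simulated concentrations and the 0.05 threshold are assumptions for illustration, not the study's data.

    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(1)
    elements = ["K", "Ca", "Ti", "Fe", "Zr"]
    n_sources, n_per_source = 5, 8

    selected = []
    for j, el in enumerate(elements):
        # Simulated concentrations: even-indexed elements get source-dependent means,
        # odd-indexed elements do not discriminate between sources.
        shift = (j + 1) * 5.0 if j % 2 == 0 else 0.0
        groups = [rng.normal(100.0 + shift * s, 10.0, n_per_source) for s in range(n_sources)]
        stat, p = kruskal(*groups)
        if p < 0.05:
            selected.append(el)

    print("tracers passing the Kruskal-Wallis screen:", selected)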

  15. EuroForMix: An open source software based on a continuous model to evaluate STR DNA profiles from a mixture of contributors with artefacts.

    PubMed

    Bleka, Øyvind; Storvik, Geir; Gill, Peter

    2016-03-01

    We have released software named EuroForMix to analyze STR DNA profiles in a user-friendly graphical user interface. The software implements a model to explain the allelic peak height on a continuous scale in order to carry out weight-of-evidence calculations for profiles which could be from a mixture of contributors. Through a properly parameterized model we are able to do inference on mixture proportions, the peak height properties, stutter proportion and degradation. In addition, EuroForMix includes models for allele drop-out, allele drop-in and sub-population structure. EuroForMix supports two inference approaches for likelihood ratio calculations. The first approach uses maximum likelihood estimation of the unknown parameters. The second approach is Bayesian, which requires prior distributions to be specified for the parameters involved. The user may specify any number of known and unknown contributors in the model; however, we find that there is a practical computing time limit which restricts the model to a maximum of four unknown contributors. EuroForMix is the first freely open source, continuous model (accommodating peak height, stutter, drop-in, drop-out, population substructure and degradation) to be reported in the literature. It therefore serves an important purpose as an unrestricted platform to compare different solutions that are available. The implementation of the continuous model used in the software showed close to identical results to the R-package DNAmixtures, which requires a HUGIN Expert license to be used. An additional feature in EuroForMix is the ability for the user to adapt the Bayesian inference framework by incorporating their own prior information. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Evaluation of vertical coordinate and vertical mixing algorithms in the HYbrid-Coordinate Ocean Model (HYCOM)

    NASA Astrophysics Data System (ADS)

    Halliwell, George R.

    Vertical coordinate and vertical mixing algorithms included in the HYbrid Coordinate Ocean Model (HYCOM) are evaluated in low-resolution climatological simulations of the Atlantic Ocean. The hybrid vertical coordinates are isopycnic in the deep ocean interior, but smoothly transition to level (pressure) coordinates near the ocean surface, to sigma coordinates in shallow water regions, and back again to level coordinates in very shallow water. By comparing simulations to climatology, the best model performance is realized using hybrid coordinates in conjunction with one of the three available differential vertical mixing models: the nonlocal K-Profile Parameterization, the NASA GISS level 2 turbulence closure, and the Mellor-Yamada level 2.5 turbulence closure. Good performance is also achieved using the quasi-slab Price-Weller-Pinkel dynamical instability model. Differences among these simulations are too small relative to other errors and biases to identify the "best" vertical mixing model for low-resolution climate simulations. Model performance deteriorates slightly when the Kraus-Turner slab mixed layer model is used with hybrid coordinates. This deterioration is smallest when solar radiation penetrates beneath the mixed layer and when shear instability mixing is included. A simulation performed using isopycnic coordinates to emulate the Miami Isopycnic Coordinate Ocean Model (MICOM), which uses Kraus-Turner mixing without penetrating shortwave radiation and shear instability mixing, demonstrates that the advantages of switching from isopycnic to hybrid coordinates and including more sophisticated turbulence closures outweigh the negative numerical effects of maintaining hybrid vertical coordinates.

  17. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
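
    With the j-th additive component approximated by a B-spline expansion with coefficient group \beta_j and design block Z_{ij}, the two-stage procedure described above minimises an (adaptive) group Lasso criterion of the familiar form

    \[ \min_{\beta_1,\dots,\beta_p} \sum_{i=1}^{n}\Bigl(Y_i-\sum_{j=1}^{p} Z_{ij}^{\top}\beta_j\Bigr)^{2} + \lambda \sum_{j=1}^{p} w_j\,\lVert \beta_j \rVert_2, \qquad w_j = \lVert \tilde{\beta}_j \rVert_2^{-1} \ \text{if } \tilde{\beta}_j \neq 0, \quad w_j = \infty \ \text{otherwise}, \]

    where the \tilde{\beta}_j are the initial (ordinary) group Lasso estimates, so components set to zero in the first stage are excluded from the adaptive stage. The notation here is generic rather than copied from the paper.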

  18. A mixed model framework for teratology studies.

    PubMed

    Braeken, Johan; Tuerlinckx, Francis

    2009-10-01

    A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.

  19. Evaluation of Aerosol Mixing State Classes in the GISS Modele-matrix Climate Model Using Single-particle Mass Spectrometry Measurements

    NASA Technical Reports Server (NTRS)

    Bauer, Susanne E.; Ault, Andrew; Prather, Kimberly A.

    2013-01-01

    Aerosol particles in the atmosphere are composed of multiple chemical species. The aerosol mixing state, which describes how chemical species are mixed at the single-particle level, provides critical information on microphysical characteristics that determine the interaction of aerosols with the climate system. The evaluation of mixing state has become the next challenge. This study uses aerosol time-of-flight mass spectrometry (ATOFMS) data and compares the results to those of the Goddard Institute for Space Studies modelE-MATRIX (Multiconfiguration Aerosol TRacker of mIXing state) model, a global climate model that includes a detailed aerosol microphysical scheme. We use data from field campaigns that examine a variety of air mass regimes (urban, rural, and maritime). At all locations, polluted areas in California (Riverside, La Jolla, and Long Beach), a remote location in the Sierra Nevada Mountains (Sugar Pine) and observations from Jeju (South Korea), the majority of aerosol species are internally mixed. Coarse aerosol particles, those above 1 micron, are typically aged, such as coated dust or reacted sea-salt particles. Particles below 1 micron contain large fractions of organic material, internally-mixed with sulfate and black carbon, and few external mixtures. We conclude that observations taken over multiple weeks characterize typical air mass types at a given location well; however, due to the instrumentation, we could not evaluate mass budgets. These results represent the first detailed comparison of single-particle mixing states in a global climate model with real-time single-particle mass spectrometry data, an important step in improving the representation of mixing state in global climate models.

  20. Best practices for use of stable isotope mixing models in food-web studies

    EPA Science Inventory

    Stable isotope mixing models are increasingly used to quantify contributions of resources to consumers. While potentially powerful tools, these mixing models have the potential to be misused, abused, and misinterpreted. Here we draw on our collective experiences to address the qu...

  1. Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J

    2017-10-15

    Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
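
    A minimal sketch of the "standard model" described above (random cluster intercept, fixed period and intervention effects) fitted to one simulated two-period stepped wedge data set; the cluster counts, effect sizes and variance components are assumptions chosen for illustration, not the simulation settings of the paper.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    rows = []
    # Three groups of 10 clusters: group 0 is treated in both periods,
    # groups 1 and 2 only in the second period.
    for g, first_treated_period in enumerate([0, 1, 1]):
        for c in range(10):
            cluster = g * 10 + c
            u = rng.normal(0.0, 0.5)                       # random cluster intercept
            for period in (0, 1):
                treat = int(period >= first_treated_period)
                for _ in range(20):                        # 20 subjects per cluster-period
                    y = 1.0 + 0.3 * period + 0.5 * treat + u + rng.normal(0.0, 1.0)
                    rows.append((cluster, period, treat, y))

    data = pd.DataFrame(rows, columns=["cluster", "period", "treat", "y"])
    fit = smf.mixedlm("y ~ treat + C(period)", data, groups=data["cluster"]).fit()
    print(fit.params["treat"])                             # true intervention effect is 0.5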

  2. Mixing parametrizations for ocean climate modelling

    NASA Astrophysics Data System (ADS)

    Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir

    2016-04-01

    An algorithm is presented for splitting the evolutionary equations for turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, three schemes are implemented: an explicit-implicit numerical scheme, the analytical solution, and the asymptotic behavior of the analytical solution. Experiments were performed with different mixing parameterizations for modelling decadal climate variability of the Arctic and the Atlantic with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. In its physical formulation, the proposed model with split equations for the turbulence characteristics is similar to contemporary differential turbulence models, while its algorithm retains high computational efficiency. Parameterizations using the split turbulence model make it possible to obtain a more adequate temperature and salinity structure at decadal timescales than the simpler Pacanowski-Philander (PP) turbulence parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step leads to a better representation of ocean climate than the faster parameterization based on the asymptotic behavior of the analytical solution, while the computational cost remains almost unchanged relative to the simple PP parameterization. Using the PP parameterization in the circulation model yields a realistic simulation of density and circulation but violates the T,S-relationships; this error is largely avoided with the proposed parameterizations containing the split turbulence model.

  3. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Because of the diversity of engine failure modes, a single Weibull distribution model can produce large errors; by contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a better statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model, so that the precision of the mixed-distribution reliability model, and hence the accuracy of reliability estimation, is greatly improved. All of this helps to popularize the Weibull distribution model in engineering applications.
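
    A two-component mixed Weibull model of this kind can be sketched by maximising the mixture log-likelihood directly; the simulated failure times, starting values and optimiser settings below are assumptions for illustration, not the article's data or estimation procedure.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    rng = np.random.default_rng(3)
    t = np.concatenate([
        weibull_min.rvs(1.5, scale=100.0, size=300, random_state=rng),  # failure mode 1
        weibull_min.rvs(4.0, scale=400.0, size=200, random_state=rng),  # failure mode 2
    ])

    def neg_log_lik(theta):
        # Negative log-likelihood of the two-component Weibull mixture.
        w, k1, s1, k2, s2 = theta
        if not 0.0 < w < 1.0 or min(k1, s1, k2, s2) <= 0.0:
            return np.inf
        pdf = w * weibull_min.pdf(t, k1, scale=s1) + (1.0 - w) * weibull_min.pdf(t, k2, scale=s2)
        return -np.sum(np.log(pdf))

    start = np.array([0.5, 1.0, 150.0, 3.0, 300.0])
    res = minimize(neg_log_lik, start, method="Nelder-Mead", options={"maxiter": 20000})
    w, k1, s1, k2, s2 = res.x
    print(f"weight={w:.2f}, mode 1: shape={k1:.2f} scale={s1:.0f}, mode 2: shape={k2:.2f} scale={s2:.0f}")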

  4. Metapopulation epidemic models with heterogeneous mixing and travel behaviour

    PubMed Central

    2014-01-01

    Background: Determining the pandemic potential of an emerging infectious disease and how it depends on the various epidemic and population aspects is critical for the preparation of an adequate response aimed at its control. The complex interplay between population movements in space and non-homogeneous mixing patterns have so far hindered the fundamental understanding of the conditions for spatial invasion through a general theoretical framework. To address this issue, we present an analytical modelling approach taking into account such interplay under general conditions of mobility and interactions, in the simplifying assumption of two population classes. Methods: We describe a spatially structured population with non-homogeneous mixing and travel behaviour through a multi-host stochastic epidemic metapopulation model. Different population partitions, mixing patterns and mobility structures are considered, along with a specific application for the study of the role of age partition in the early spread of the 2009 H1N1 pandemic influenza. Results: We provide a complete mathematical formulation of the model and derive a semi-analytical expression of the threshold condition for global invasion of an emerging infectious disease in the metapopulation system. A rich solution space is found that depends on the social partition of the population, the pattern of contacts across groups and their relative social activity, the travel attitude of each class, and the topological and traffic features of the mobility network. Reducing the activity of the less social group and reducing the cross-group mixing are predicted to be the most efficient strategies for controlling the pandemic potential in the case the less active group constitutes the majority of travellers. If instead traveling is dominated by the more social class, our model predicts the existence of an optimal across-groups mixing that maximises the pandemic potential of the disease, whereas the impact of variations in

  5. Metapopulation epidemic models with heterogeneous mixing and travel behaviour.

    PubMed

    Apolloni, Andrea; Poletto, Chiara; Ramasco, José J; Jensen, Pablo; Colizza, Vittoria

    2014-01-13

    Determining the pandemic potential of an emerging infectious disease and how it depends on the various epidemic and population aspects is critical for the preparation of an adequate response aimed at its control. The complex interplay between population movements in space and non-homogeneous mixing patterns have so far hindered the fundamental understanding of the conditions for spatial invasion through a general theoretical framework. To address this issue, we present an analytical modelling approach taking into account such interplay under general conditions of mobility and interactions, in the simplifying assumption of two population classes. We describe a spatially structured population with non-homogeneous mixing and travel behaviour through a multi-host stochastic epidemic metapopulation model. Different population partitions, mixing patterns and mobility structures are considered, along with a specific application for the study of the role of age partition in the early spread of the 2009 H1N1 pandemic influenza. We provide a complete mathematical formulation of the model and derive a semi-analytical expression of the threshold condition for global invasion of an emerging infectious disease in the metapopulation system. A rich solution space is found that depends on the social partition of the population, the pattern of contacts across groups and their relative social activity, the travel attitude of each class, and the topological and traffic features of the mobility network. Reducing the activity of the less social group and reducing the cross-group mixing are predicted to be the most efficient strategies for controlling the pandemic potential in the case the less active group constitutes the majority of travellers. If instead traveling is dominated by the more social class, our model predicts the existence of an optimal across-groups mixing that maximises the pandemic potential of the disease, whereas the impact of variations in the activity of each group

  6. Efficacy of fibre additions to flatbread flour mixes for reducing post-meal glucose and insulin responses in healthy Indian subjects.

    PubMed

    Boers, Hanny M; MacAulay, Katrina; Murray, Peter; Dobriyal, Rajendra; Mela, David J; Spreeuwenberg, Maria A M

    2017-02-01

    The incidence of type 2 diabetes mellitus (T2DM) is increasing worldwide, including in developing countries, particularly in South Asia. Intakes of foods generating a high postprandial glucose (PPG) response have been positively associated with T2DM. As part of efforts to identify effective and feasible strategies to reduce the glycaemic impact of carbohydrate-rich staples, we previously found that addition of guar gum (GG) and chickpea flour (CPF) to wheat flour could significantly reduce the PPG response to flatbread products. On the basis of the results of an exploratory study with Caucasian subjects, we have now tested the effect of additions of specific combinations of CPF with low doses of GG to a flatbread flour mix for their impacts on PPG and postprandial insulin (PPI) responses in a South-Asian population. In a randomised, placebo-controlled full-cross-over design, fifty-six healthy Indian adults consumed flatbreads made with a commercial flatbread mix (100 % wheat flour) with no further additions (control) or incorporating 15 % CPF in combination with 2, 3 or 4 % GG. The flatbreads with CPF and 3 or 4 % GG significantly reduced PPG (both ≥15 % reduction in positive incremental AUC, P<0·01) and PPI (both ≥28 % reduction in total AUC, P<0·0001) compared with flatbreads made from control flour. These results confirm the efficacy and feasibility of the addition of CPF with GG to flatbread flour mixes to achieve significant reductions in both PPG and PPI in Indian subjects.

  7. Immersion freezing of internally and externally mixed mineral dust species analyzed by stochastic and deterministic models

    NASA Astrophysics Data System (ADS)

    Wong, B.; Kilthau, W.; Knopf, D. A.

    2017-12-01

    Immersion freezing is recognized as the most important ice crystal formation process in mixed-phase cloud environments. It is well established that mineral dust species can act as efficient ice nucleating particles. Previous research has focused on determination of the ice nucleation propensity of individual mineral dust species. In this study, the focus is placed on how different mineral dust species, such as illite, kaolinite and feldspar, initiate freezing of water droplets when present in internal and external mixtures. The frozen fraction data for single and multicomponent mineral dust droplet mixtures are recorded under identical cooling rates. Additionally, the time dependence of freezing is explored. Externally and internally mixed mineral dust droplet samples are exposed to constant temperatures (isothermal freezing experiments) and frozen fraction data are recorded at time intervals. Analyses of single and multicomponent mineral dust droplet samples include different stochastic and deterministic models such as the derivation of the heterogeneous ice nucleation rate coefficient (Jhet), the single contact angle (α) description, the α-PDF model, the active sites representation, and the deterministic model. Parameter sets derived from freezing data of single component mineral dust samples are evaluated for prediction of cooling rate dependent and isothermal freezing of multicomponent externally or internally mixed mineral dust samples. The atmospheric implications of our findings are discussed.
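
    For reference, in the simplest single-component stochastic description the isothermal frozen-fraction curves discussed above take the textbook form

    \[ f_{\mathrm{frozen}}(t) \;=\; 1 - \exp\bigl(-J_{\mathrm{het}}(T)\,A\,t\bigr), \]

    where A is the ice-nucleation-active surface area per droplet. The α-PDF, active-site and deterministic descriptions generalise or replace this expression, and the exact parameterisations used in the study may differ from this illustrative form.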

  8. Simulating the Cyclone Induced Turbulent Mixing in the Bay of Bengal using COAWST Model

    NASA Astrophysics Data System (ADS)

    Prakash, K. R.; Nigam, T.; Pant, V.

    2017-12-01

    Mixing in the upper oceanic layers (up to a few tens of meters from surface) is an important process to understand the evolution of sea surface properties. Enhanced mixing due to strong wind forcing at surface leads to deepening of mixed layer that affects the air-sea exchange of heat and momentum fluxes and modulates sea surface temperature (SST). In the present study, we used Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model to demonstrate and quantify the enhanced cyclone induced turbulent mixing in case of a severe cyclonic storm. The COAWST model was configured over the Bay of Bengal (BoB) and used to simulate the atmospheric and oceanic conditions prevailing during the tropical cyclone (TC) Phailin that occurred over the BoB during 10-15 October 2013. The model simulated cyclone track was validated with IMD best-track and model SST validated with daily AVHRR SST data. Validation shows that model simulated track & intensity, SST and salinity were in good agreement with observations and the cyclone induced cooling of the sea surface was well captured by the model. Model simulations show a considerable deepening (by 10-15 m) of the mixed layer and shoaling of thermocline during TC Phailin. The power spectrum analysis was performed on the zonal and meridional baroclinic current components, which shows strongest energy at 14 m depth. Model results were analyzed to investigate the non-uniform energy distribution in the water column from surface up to the thermocline depth. The rotary spectra analysis highlights the downward direction of turbulent mixing during the TC Phailin period. Model simulations were used to quantify and interpret the near-inertial mixing, which were generated by cyclone induced strong wind stress and the near-inertial energy. These near-inertial oscillations are responsible for the enhancement of the mixing operative in the strong post-monsoon (October-November) stratification in the BoB.

  9. Asphalt additives in thick hot mixed asphalt-concrete pavements. Research report (Interim), Sep 86-Oct 90

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Button, J.W.; Prapnnachari, S.

    Asphalt concrete field test pavements were placed in District 19 north of Texarkana on US-59/71 in 1987 and 1988 to evaluate the ability of certain asphalt additives to enhance resistance to cracking and rutting. Two 10-inch thick and 0.9 mile (approx.) long test pavements and a similar untreated control section were constructed in the northbound and southbound lanes for a total of 6 field trials. Asphalt additives were incorporated in both the 8-inch base and the overlying 2-inch surface layers. The additives evaluated included Goodyear LPF 5812, Chemkrete-CTI 102, Exxon Polybilt 102, and Styrelf 13. Samples of paving materials including aggregates, asphalts, compacted mixes, and pavement cores were collected, conveyed to the laboratory, and tested to provide detailed documentation of their properties. Tests included rheological properties of the binders before and after artificial aging, characterization of aggregate, Hveem and Marshall stability, stiffness as a function of temperature, tensile properties before and after moisture conditioning and artificial aging, air void content, creep, and permanent deformation. Field tests and visual evaluations have been conducted to objectively evaluate field performance. Results of these tests are reported herein. Within 6 months after construction of the base layers and prior to placement of the surface course, the Chemkrete modified base became severely cracked. As a result, the surface mix placed on this base section was treated with Goodyear latex rather than Chemkrete. All other modified pavements and the control section have performed well and exhibited essentially equivalent performance after 2 1/2 years in service.

  10. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    NASA Astrophysics Data System (ADS)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using CPLEX solver, small-size problems are solved for optimality. Two metaheuristics, the restarted simulated annealing algorithm and co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original editions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
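
    The restart idea can be sketched with a generic restarted simulated annealing loop; the toy sequencing objective (minimising adjacent changeover cost for assumed processing times) and all tuning constants are illustrative assumptions and do not reproduce the article's RMALB/S formulation.

    import math
    import random

    random.seed(4)
    proc_time = [4, 7, 2, 9, 5, 6, 3, 8, 1, 10]        # assumed per-model processing times

    def cost(seq):
        # Toy objective: total adjacent "changeover" cost along the sequence.
        return sum(abs(proc_time[a] - proc_time[b]) for a, b in zip(seq, seq[1:]))

    def neighbour(seq):
        i, j = random.sample(range(len(seq)), 2)       # swap two positions
        new = list(seq)
        new[i], new[j] = new[j], new[i]
        return new

    current = list(range(len(proc_time)))
    random.shuffle(current)
    best, T, T_min, alpha = list(current), 10.0, 0.01, 0.95

    for _ in range(5000):
        cand = neighbour(current)
        delta = cost(cand) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / T):
            current = cand
        if cost(current) < cost(best):
            best = list(current)
        T *= alpha
        if T < T_min:                                  # restart: reset the temperature
            T, current = 10.0, list(best)

    print("best sequence:", best, "cost:", cost(best))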

  11. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  12. Mapping eQTL Networks with Mixed Graphical Markov Models

    PubMed Central

    Tur, Inma; Roverato, Alberto; Castelo, Robert

    2014-01-01

    Expression quantitative trait loci (eQTL) mapping constitutes a challenging problem due to, among other reasons, the high-dimensional multivariate nature of gene-expression traits. Next to the expression heterogeneity produced by confounding factors and other sources of unwanted variation, indirect effects spread throughout genes as a result of genetic, molecular, and environmental perturbations. From a multivariate perspective one would like to adjust for the effect of all of these factors to end up with a network of direct associations connecting the path from genotype to phenotype. In this article we approach this challenge with mixed graphical Markov models, higher-order conditional independences, and q-order correlation graphs. These models show that additive genetic effects propagate through the network as a function of gene–gene correlations. Our estimation of the eQTL network underlying a well-studied yeast data set leads to a sparse structure with more direct genetic and regulatory associations that enable a straightforward comparison of the genetic control of gene expression across chromosomes. Interestingly, it also reveals that eQTLs explain most of the expression variability of network hub genes. PMID:25271303

  13. Public private mix model in enhancing tuberculosis case detection in District Thatta, Sindh, Pakistan.

    PubMed

    Ahmed, Jameel; Ahmed, Mubashir; Laghari, A; Lohana, Wasdev; Ali, Sajid; Fatmi, Zafar

    2009-02-01

    To enhance TB case detection through a Public Private Mix (PPM) model by involving private practitioners in collaboration with the National TB Control Program (NTP) in district Thatta. Private practitioners (PPs) of district Thatta involved in the treatment of TB cases were requested to participate in the study. All consenting physicians were provided with training on the Directly Observed Treatment Short course (DOTS) module. In addition to routine cases, TB cases diagnosed by private practitioners through sputum microscopy were also registered with the district TB control program, and medicines were provided by the NTP. After the PPM-DOTS intervention, the change in Case Detection Rate (CDR) was estimated. The number of sputum smear positive cases increased during the intervention period (the third quarter of 2007) from 188 to 211, and the CDR rose from 69% to 77%. The improvement in case detection rate was significant, as it added moderately to the total number of cases detected across district Thatta during the study period. The public private mix (PPM) model was effective in increasing the CDR of TB cases in district Thatta. It is recommended that the public private partnership model for tuberculosis case detection be taken to a larger scale so as to reduce the heavy TB burden in the country.

  14. One-dimensional modelling of upper ocean mixing by turbulence due to wave orbital motion

    NASA Astrophysics Data System (ADS)

    Ghantous, M.; Babanin, A. V.

    2014-02-01

    Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been a lot of interest in introducing this mechanism into ocean mixing models, and real gains have been made in terms of increased fidelity to observational data. However, our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing parameterisations need refinement and propose an alternative one. We use two of the parameterisations to demonstrate the effect on the mixed layer of wave-induced turbulence by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.

  15. Effect of electrode positions on the mixing characteristics of an electroosmotic micromixer.

    PubMed

    Seo, H S; Kim, Y J

    2014-08-01

    In this study, an electrokinetic microchannel with a ring-type mixing chamber is introduced for fast mixing. The modeled micromixer that is used for the study of the electroosmotic effect takes two fluids from different inlets and combines them in a ring-type mixing chamber and, then, they are mixed by the electric fields at the electrodes. In order to compare the mixing performance in the modeled micromixer, we numerically investigated the flow characteristics with different positions of the electrodes in the mixing chamber using the commercial code, COMSOL. In addition, we discussed the concentration distributions of the dissolved substances in the flow fields and compared the mixing efficiency in the modeled micromixer with different electrode positions and operating conditions, such as the frequencies and electric potentials at the electrodes.

  16. Genetic mixed linear models for twin survival data.

    PubMed

    Ha, Il Do; Lee, Youngjo; Pawitan, Yudi

    2007-07-01

    Twin studies are useful for assessing the relative importance of genetic or heritable component from the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to analysis of twin survival data. Due to limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical-likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated by the survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.

  17. Influence of non-homogeneous mixing on final epidemic size in a meta-population model.

    PubMed

    Cui, Jingan; Zhang, Yanan; Feng, Zhilan

    2018-06-18

    In meta-population models for infectious diseases, the basic reproduction number R0 can be as much as 70% larger under preferential mixing than under homogeneous mixing [J.W. Glasser, Z. Feng, S.B. Omer, P.J. Smith, and L.E. Rodewald, The effect of heterogeneity in uptake of the measles, mumps, and rubella vaccine on the potential for outbreaks of measles: A modelling study, Lancet ID 16 (2016), pp. 599-605. doi: 10.1016/S1473-3099(16)00004-9]. This suggests that realistic mixing can be an important factor to consider in order for the models to provide a reliable assessment of intervention strategies. The influence of mixing is more significant when the population is highly heterogeneous. In this paper, another quantity, the final epidemic size of an outbreak, is considered to examine the influence of mixing and population heterogeneity. A final size relation is derived for a meta-population model accounting for general mixing. The results show that the final epidemic size can be influenced by the pattern of mixing in a significant way. Another interesting finding is that heterogeneity in various sub-population characteristics may have opposite effects on R0 and the final epidemic size.
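
    For orientation, in the homogeneous-mixing, single-population special case the final epidemic size z (the eventual attack rate) satisfies the classical relation

    \[ 1 - z \;=\; \exp(-R_0\, z), \]

    whereas the paper derives the corresponding relation for a meta-population with general mixing, which is not reproduced here.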

  18. Logit-normal mixed model for Indian Monsoon rainfall extremes

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-03-01

    Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, daily minimum and maximum temperatures with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
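
    The logit-normal mixed model referred to above has the generic form

    \[ Y_{ij}\mid b_i \sim \mathrm{Bernoulli}(p_{ij}), \qquad \operatorname{logit}(p_{ij}) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + b_i, \qquad b_i \sim N(0,\sigma_b^{2}), \]

    where Y_ij is a binary extreme-rainfall indicator for day j at station i, x_ij collects the fixed covariates listed above (latitude, longitude, elevation, daily minimum and maximum temperature), and b_i is the station-level random intercept. The notation is generic rather than copied from the paper.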

  19. Mixed effects versus fixed effects modelling of binary data with inter-subject variability.

    PubMed

    Murphy, Valda; Dunne, Adrian

    2005-04-01

    The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within-subject correlation was examined in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased and this bias acts as a lower bound for the root mean squared error of these estimates. Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model.
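
    The crux of the comparison is how accurately the per-subject integral over the random effect is approximated. The toy calculation below contrasts ordinary (non-adaptive) Gauss-Hermite quadrature with a Laplace approximation for one subject with two binary observations; the linear predictor and random-effect standard deviation are made-up values, not the paper's simulation settings.

      # Marginal likelihood of two binary responses from one subject under a
      # logistic random-intercept model, by quadrature vs a Laplace approximation.
      import numpy as np
      from scipy.optimize import minimize_scalar

      def loglik_given_b(y, eta, b):
          p = 1.0 / (1.0 + np.exp(-(eta + b)))
          return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

      def marginal_gh(y, eta, sigma, n=40):
          # integrate exp(loglik) against N(0, sigma^2) with Gauss-Hermite nodes
          x, w = np.polynomial.hermite.hermgauss(n)
          vals = [np.exp(loglik_given_b(y, eta, np.sqrt(2) * sigma * xi)) for xi in x]
          return np.dot(w, vals) / np.sqrt(np.pi)

      def marginal_laplace(y, eta, sigma):
          g = lambda b: -(loglik_given_b(y, eta, b) - 0.5 * b**2 / sigma**2)
          b_hat = minimize_scalar(g).x               # mode of the integrand
          h = 1e-4                                   # numerical curvature at the mode
          g2 = (g(b_hat + h) - 2 * g(b_hat) + g(b_hat - h)) / h**2
          return np.exp(-g(b_hat)) * np.sqrt(2 * np.pi / g2) / (np.sqrt(2 * np.pi) * sigma)

      y = np.array([1, 0])            # two binary responses for one subject
      eta, sigma = 0.5, 2.0           # made-up fixed effect and random-effect SD
      print(marginal_gh(y, eta, sigma), marginal_laplace(y, eta, sigma))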

  20. Improved estimation of sediment source contributions by concentration-dependent Bayesian isotopic mixing model

    NASA Astrophysics Data System (ADS)

    Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal

    2017-04-01

    The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signatures of biotracers as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on linear mixing assumptions for the CSSI signatures of sources in the sediment, without accounting for potential effects of source biotracer concentrations. Here we evaluated the effect of source FA concentrations on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land-use sources. Soil samples from the land-use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using MixSIAR, a Bayesian isotopic mixing model, both with and without concentration dependence. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions, based on the aggregated FA concentrations of the sources, biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentrations on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of the contributions of sources to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable.
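
    The forward direction of the concentration effect is easy to illustrate: with unequal biotracer concentrations, the mixture signature is a concentration-weighted rather than a simple proportion-weighted average. The sketch below uses made-up proportions, δ13C values and FA concentrations; the study itself inverts this relationship with MixSIAR.

      # Forward calculation only (not the Bayesian inversion done by MixSIAR).
      import numpy as np

      f = np.array([0.5, 0.3, 0.2])              # true source proportions (hypothetical)
      delta = np.array([-28.0, -30.5, -33.0])    # d13C of a fatty acid per source (per mil)
      conc = np.array([2.0, 0.5, 1.0])           # FA concentration per source (mg/g, hypothetical)

      mix_independent = np.sum(f * delta)                          # ignores concentration
      mix_dependent = np.sum(f * conc * delta) / np.sum(f * conc)  # concentration-weighted
      print(mix_independent, mix_dependent)      # the two assumptions can differ markedly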

  1. Fermion masses and mixing in general warped extra dimensional models

    NASA Astrophysics Data System (ADS)

    Frank, Mariana; Hamzaoui, Cherif; Pourtolami, Nima; Toharia, Manuel

    2015-06-01

    We analyze fermion masses and mixing in a general warped extra dimensional model, where all the Standard Model (SM) fields, including the Higgs, are allowed to propagate in the bulk. In this context, a slightly broken flavor symmetry imposed universally on all fermion fields, without distinction, can generate the full flavor structure of the SM, including quarks, charged leptons and neutrinos. For quarks and charged leptons, the exponential sensitivity of their wave functions to small flavor breaking effects yields hierarchical masses and mixing, as is usual in warped models with fermions in the bulk. In the neutrino sector, the exponential wave-function factors can be flavor blind and thus insensitive to the small flavor symmetry breaking effects, directly linking their masses and mixing angles to the flavor symmetric structure of the five-dimensional neutrino Yukawa couplings. The Higgs must be localized in the bulk and the model is more successful in generalized warped scenarios where the metric background solution is different from five-dimensional anti-de Sitter (AdS5). We study these features in two simple frameworks, flavor complementarity and flavor democracy, which provide specific predictions and correlations between quarks and leptons, testable as more precise data in the neutrino sector becomes available.

  2. Parameterization of large-scale turbulent diffusion in the presence of both well-mixed and weakly mixed patchy layers

    NASA Astrophysics Data System (ADS)

    Osman, M. K.; Hocking, W. K.; Tarasick, D. W.

    2016-06-01

    Vertical diffusion and mixing of tracers in the upper troposphere and lower stratosphere (UTLS) are not uniform, but primarily occur due to patches of turbulence that are intermittent in time and space. The effective diffusivity of regions of patchy turbulence is related to statistical parameters describing the morphology of turbulent events, such as lifetime, number, width, depth and local diffusivity (i.e., diffusivity within the turbulent patch) of the patches. While this has been recognized in the literature, the primary focus has been on well-mixed layers, with few exceptions. In such cases the local diffusivity is irrelevant, but this is not true for weakly and partially mixed layers. Here, we use both theory and numerical simulations to consider the impact of intermediate and weakly mixed layers, in addition to well-mixed layers. Previous approaches have considered only one dimension (vertical), and only a small number of layers (often one at each time step), and have examined mixing of constituents. We consider a two-dimensional case, with multiple layers (10 and more, up to hundreds and even thousands), having well-defined, finite lengths and depths. We then provide new formulas to describe cases involving well-mixed layers that supersede earlier expressions. In addition, we look in detail at layers that are not well mixed, and, as an interesting variation on previous models, our procedure is based on tracking the dispersion of individual particles, which is quite different from the earlier approaches, which looked at mixing of constituents. We develop an expression which allows determination of the degree of mixing, and show that layers used in some previous models were in fact not well mixed and so produced erroneous results. We then develop a generalized model based on two-dimensional random-walk theory employing Rayleigh distributions, which allows us to develop a universal formula for diffusion rates for multiple two-dimensional layers with

  3. Effects of polymer additives on Rayleigh-Taylor turbulence.

    PubMed

    Boffetta, G; Mazzino, A; Musacchio, S

    2011-05-01

    The role of polymer additives on the turbulent convective flow of a Rayleigh-Taylor system is investigated by means of direct numerical simulations of the Oldroyd-B viscoelastic model. The dynamics of polymer elongations follows adiabatically the self-similar evolution of the turbulent mixing layer and shows the appearance of a strong feedback on the flow, which gives rise to a cutoff for polymer elongations. The viscoelastic effects on the mixing properties of the flow are twofold. Mixing is appreciably enhanced at large scales (the mixing layer growth rate is larger than that of the purely Newtonian case) and depleted at small scales (thermal plumes are more coherent with respect to the Newtonian case). The observed speed-up of the thermal plumes, together with an increase of the correlations between the temperature field and vertical velocity, contributes to a significant enhancement of heat transport. Our findings are consistent with a scenario of drag reduction induced by polymers. A weakly nonlinear model proposed by Fermi for the growth of the mixing layer is reported in the Appendix. © 2011 American Physical Society

  4. Mixing with applications to inertial-confinement-fusion implosions

    NASA Astrophysics Data System (ADS)

    Rana, V.; Lim, H.; Melvin, J.; Glimm, J.; Cheng, B.; Sharp, D. H.

    2017-01-01

    Approximate one-dimensional (1D) as well as 2D and 3D simulations are playing an important supporting role in the design and analysis of future experiments at National Ignition Facility. This paper is mainly concerned with 1D simulations, used extensively in design and optimization. We couple a 1D buoyancy-drag mix model for the mixing zone edges with a 1D inertial confinement fusion simulation code. This analysis predicts that National Ignition Campaign (NIC) designs are located close to a performance cliff, so modeling errors, design features (fill tube and tent) and additional, unmodeled instabilities could lead to significant levels of mix. The performance cliff we identify is associated with multimode plastic ablator (CH) mix into the hot-spot deuterium and tritium (DT). The buoyancy-drag mix model is mode number independent and selects implicitly a range of maximum growth modes. Our main conclusion is that single effect instabilities are predicted not to lead to hot-spot mix, while combined mode mixing effects are predicted to affect hot-spot thermodynamics and possibly hot-spot mix. Combined with the stagnation Rayleigh-Taylor instability, we find the potential for mix effects in combination with the ice-to-gas DT boundary, numerical effects of Eulerian species CH concentration diffusion, and ablation-driven instabilities. With the help of a convenient package of plasma transport parameters developed here, we give an approximate determination of these quantities in the regime relevant to the NIC experiments, while ruling out a variety of mix possibilities. Plasma transport parameters affect the 1D buoyancy-drag mix model primarily through its phenomenological drag coefficient as well as the 1D hydro model to which the buoyancy-drag equation is coupled.
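
    A generic buoyancy-drag edge model of the kind referred to above can be written as a pair of ODEs for the edge amplitude and velocity. The sketch below uses illustrative constants and is not the authors' coupled 1D ICF code.

      # Generic sketch: dh/dt = v, dv/dt = A*g - Cd*v*|v|/h for a mixing-zone edge,
      # with constant Atwood number A, acceleration g, and drag coefficient Cd
      # (all illustrative values).
      from scipy.integrate import solve_ivp

      A, g, Cd = 0.5, 1.0, 3.0

      def rhs(t, y):
          h, v = y                                   # edge amplitude and velocity
          return [v, A * g - Cd * v * abs(v) / max(h, 1e-9)]

      sol = solve_ivp(rhs, (0.0, 10.0), [1e-3, 0.0], dense_output=True)
      for t in (2.0, 5.0, 10.0):
          print(t, sol.sol(t)[0])                    # growth of the edge amplitude h(t)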

  5. Mixing with applications to inertial-confinement-fusion implosions.

    PubMed

    Rana, V; Lim, H; Melvin, J; Glimm, J; Cheng, B; Sharp, D H

    2017-01-01

    Approximate one-dimensional (1D) as well as 2D and 3D simulations are playing an important supporting role in the design and analysis of future experiments at National Ignition Facility. This paper is mainly concerned with 1D simulations, used extensively in design and optimization. We couple a 1D buoyancy-drag mix model for the mixing zone edges with a 1D inertial confinement fusion simulation code. This analysis predicts that National Ignition Campaign (NIC) designs are located close to a performance cliff, so modeling errors, design features (fill tube and tent) and additional, unmodeled instabilities could lead to significant levels of mix. The performance cliff we identify is associated with multimode plastic ablator (CH) mix into the hot-spot deuterium and tritium (DT). The buoyancy-drag mix model is mode number independent and selects implicitly a range of maximum growth modes. Our main conclusion is that single effect instabilities are predicted not to lead to hot-spot mix, while combined mode mixing effects are predicted to affect hot-spot thermodynamics and possibly hot-spot mix. Combined with the stagnation Rayleigh-Taylor instability, we find the potential for mix effects in combination with the ice-to-gas DT boundary, numerical effects of Eulerian species CH concentration diffusion, and ablation-driven instabilities. With the help of a convenient package of plasma transport parameters developed here, we give an approximate determination of these quantities in the regime relevant to the NIC experiments, while ruling out a variety of mix possibilities. Plasma transport parameters affect the 1D buoyancy-drag mix model primarily through its phenomenological drag coefficient as well as the 1D hydro model to which the buoyancy-drag equation is coupled.

  6. A refined and dynamic cellular automaton model for pedestrian-vehicle mixed traffic flow

    NASA Astrophysics Data System (ADS)

    Liu, Mianfang; Xiong, Shengwu

    2016-12-01

    Mixed traffic flow sharing the “same lane” with no lane discipline on the road is a common phenomenon in developing countries. For example, motorized vehicles (m-vehicles) and nonmotorized vehicles (nm-vehicles) may share the m-vehicle lane or nm-vehicle lane, and pedestrians may share the nm-vehicle lane. Simulating pedestrian-vehicle mixed traffic flow consisting of three kinds of traffic objects (m-vehicles, nm-vehicles and pedestrians) can be a challenge because some erratic drivers or pedestrians fail to follow lane discipline. In this paper, we investigate various moving and interactive behaviors associated with mixed traffic flow, such as lateral drift (including illegal lane changing and transverse crossing between lanes), overtaking and forward movement, and propose new moving and interaction rules for pedestrian-vehicle mixed traffic flow based on a refined and dynamic cellular automaton (CA) model. Simulation results indicate that the proposed model can be used to investigate the traffic flow characteristics of a mixed traffic system and the corresponding complicated traffic problems, such as the moving characteristics of different traffic objects, interactions between different traffic objects, traffic jams and traffic conflicts, which are consistent with the actual mixed traffic system. Therefore, the proposed model provides a solid foundation for the management, planning and evacuation of mixed traffic flow.

  7. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS

    EPA Science Inventory

    Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...

  8. A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design

    ERIC Educational Resources Information Center

    Palladino, John M.

    2009-01-01

    Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…

  9. Modeling the purging of dense fluid from a street canyon driven by an interfacial mixing flow and skimming flow

    NASA Astrophysics Data System (ADS)

    Baratian-Ghorghi, Z.; Kaye, N. B.

    2013-07-01

    An experimental study is presented to investigate the mechanism of flushing a trapped dense contaminant from a canyon by turbulent boundary layer flow. The results of a series of steady-state experiments are used to parameterize the flushing mechanisms. The steady-state experimental results for a canyon with aspect ratio one indicate that dense fluid is removed from the canyon by two different processes: skimming of dense fluid from the top of the dense layer, and an interfacial mixing flow that mixes fresh fluid down into the dense lower layer (entrainment) while mixing dense fluid into the flow above the canyon (detrainment). A model is developed for the time-varying buoyancy profile within the canyon as a function of the Richardson number, which parameterizes both the interfacial mixing and skimming processes observed. The continuous release steady-state experiments allowed for the direct measurement of the skimming and interfacial mixing flow rates for any layer depth and Richardson number. Both the skimming rate and the interfacial mixing rate were found to be power-law functions of the Richardson number of the layer. The model results were compared to the results of previously published finite release experiments [Z. Baratian-Ghorghi and N. B. Kaye, Atmos. Environ. 60, 392-402 (2012)], 10.1016/j.atmosenv.2012.06.077. A high degree of consistency was found between the finite release data and the continuous release data. This agreement acts as an excellent check on the measurement techniques used, as the finite release data was based on curve fitting through buoyancy versus time data, while the continuous release data was calculated directly by measuring the rate of addition of volume and buoyancy once a steady-state was established. Finally, a system of ordinary differential equations is presented to model the removal of dense fluid from the canyon based on empirical correlations of the skimming and interfacial mixing taken from the steady-state experiments.
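
    The structure of such an ODE system can be sketched as a single decay equation for the canyon-layer buoyancy, with removal fluxes that are power laws in the layer Richardson number; the coefficients and exponents below are hypothetical placeholders, not the paper's fitted correlations.

      from scipy.integrate import solve_ivp

      U, h = 0.2, 0.1                          # above-canyon velocity scale (m/s), layer depth (m)
      alpha_s, m = 0.01, 1.0                   # hypothetical skimming coefficient and exponent
      alpha_e, n = 0.005, 1.5                  # hypothetical interfacial-mixing coefficient and exponent

      def rhs(t, y):
          db = max(y[0], 1e-9)                 # buoyancy deficit of the canyon layer (m/s^2)
          Ri = max(db * h / U**2, 1e-3)        # bulk Richardson number (floored for the sketch)
          q = alpha_s * U * Ri**(-m) + alpha_e * U * Ri**(-n)   # removal velocity scale
          return [-q * db / h]                 # dilution of the layer buoyancy

      sol = solve_ivp(rhs, (0.0, 300.0), [0.05], method="LSODA")
      print(float(sol.y[0, -1]))               # buoyancy remaining after five minutes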

  10. Crowding-Induced Mixing Behavior of Lipid Bilayers: Examination of Mixing Energy, Phase, Packing Geometry, and Reversibility.

    PubMed

    Zeno, Wade F; Rystov, Alice; Sasaki, Darryl Y; Risbud, Subhash H; Longo, Marjorie L

    2016-05-10

    In an effort to develop a general thermodynamic model from first principles to describe the mixing behavior of lipid membranes, we examined lipid mixing induced by targeted binding of small (Green Fluorescent Protein (GFP)) and large (nanolipoprotein particles (NLPs)) structures to specific phases of phase-separated lipid bilayers. Phases were targeted by incorporation of phase-partitioning iminodiacetic acid (IDA)-functionalized lipids into ternary lipid mixtures consisting of DPPC, DOPC, and cholesterol. GFP and NLPs, containing histidine tags, bound the IDA portion of these lipids via a metal, Cu(2+), chelating mechanism. In giant unilamellar vesicles (GUVs), GFP and NLPs bound to the Lo domains of bilayers containing DPIDA, and bound to the Ld region of bilayers containing DOIDA. At sufficiently large concentrations of DPIDA or DOIDA, lipid mixing was induced by bound GFP and NLPs. The validity of the thermodynamic model was confirmed when it was found that the statistical mixing distribution as a function of crowding energy for smaller GFP and larger NLPs collapsed to the same trend line for each GUV composition. Moreover, results of this analysis show that the free energy of mixing for a ternary lipid bilayer consisting of DOPC, DPPC, and cholesterol varied from 7.9 × 10(-22) to 1.5 × 10(-20) J/lipid at the compositions observed, decreasing as the relative cholesterol concentration was increased. It was discovered that there appears to be a maximum packing density, and associated maximum crowding pressure, of the NLPs, suggestive of circular packing. A similarity in mixing induced by NLP1 and NLP3 despite a large difference in projected areas was analytically consistent with monovalent (one histidine tag) versus divalent (two histidine tags) surface interactions, respectively. In addition to GUVs, binding and induced mixing behavior of NLPs was also observed on planar, supported lipid multibilayers. The mixing process was reversible, with Lo domains

  11. Crowding-induced mixing behavior of lipid bilayers: Examination of mixing energy, phase, packing geometry, and reversibility

    DOE PAGES

    Zeno, Wade F.; Rystov, Alice; Sasaki, Darryl Y.; ...

    2016-04-20

    In an effort to develop a general thermodynamic model from first principles to describe the mixing behavior of lipid membranes, we examined lipid mixing induced by targeted binding of small (Green Fluorescent Protein (GFP)) and large (nanolipoprotein particles (NLPs)) structures to specific phases of phase-separated lipid bilayers. Phases were targeted by incorporation of phase-partitioning iminodiacetic acid (IDA)-functionalized lipids into ternary lipid mixtures consisting of DPPC, DOPC, and cholesterol. GFP and NLPs, containing histidine tags, bound the IDA portion of these lipids via a metal, Cu(2+), chelating mechanism. In giant unilamellar vesicles (GUVs), GFP and NLPs bound to the Lo domains of bilayers containing DPIDA, and bound to the Ld region of bilayers containing DOIDA. At sufficiently large concentrations of DPIDA or DOIDA, lipid mixing was induced by bound GFP and NLPs. The validity of the thermodynamic model was confirmed when it was found that the statistical mixing distribution as a function of crowding energy for smaller GFP and larger NLPs collapsed to the same trend line for each GUV composition. Moreover, results of this analysis show that the free energy of mixing for a ternary lipid bilayer consisting of DOPC, DPPC, and cholesterol varied from 7.9 × 10(-22) to 1.5 × 10(-20) J/lipid at the compositions observed, decreasing as the relative cholesterol concentration was increased. It was discovered that there appears to be a maximum packing density, and associated maximum crowding pressure, of the NLPs, suggestive of circular packing. A similarity in mixing induced by NLP1 and NLP3 despite a large difference in projected areas was analytically consistent with monovalent (one histidine tag) versus divalent (two histidine tags) surface interactions, respectively. In addition to GUVs, binding and induced mixing behavior of NLPs was also observed on planar, supported lipid multibilayers. Furthermore, the mixing process was reversible.

  12. Crowding-induced mixing behavior of lipid bilayers: Examination of mixing energy, phase, packing geometry, and reversibility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeno, Wade F.; Rystov, Alice; Sasaki, Darryl Y.

    In an effort to develop a general thermodynamic model from first principles to describe the mixing behavior of lipid membranes, we examined lipid mixing induced by targeted binding of small (Green Fluorescent Protein (GFP)) and large (nanolipoprotein particles (NLPs)) structures to specific phases of phase-separated lipid bilayers. Phases were targeted by incorporation of phase-partitioning iminodiacetic acid (IDA)-functionalized lipids into ternary lipid mixtures consisting of DPPC, DOPC, and cholesterol. GFP and NLPs, containing histidine tags, bound the IDA portion of these lipids via a metal, Cu(2+), chelating mechanism. In giant unilamellar vesicles (GUVs), GFP and NLPs bound to the Lo domains of bilayers containing DPIDA, and bound to the Ld region of bilayers containing DOIDA. At sufficiently large concentrations of DPIDA or DOIDA, lipid mixing was induced by bound GFP and NLPs. The validity of the thermodynamic model was confirmed when it was found that the statistical mixing distribution as a function of crowding energy for smaller GFP and larger NLPs collapsed to the same trend line for each GUV composition. Moreover, results of this analysis show that the free energy of mixing for a ternary lipid bilayer consisting of DOPC, DPPC, and cholesterol varied from 7.9 × 10(-22) to 1.5 × 10(-20) J/lipid at the compositions observed, decreasing as the relative cholesterol concentration was increased. It was discovered that there appears to be a maximum packing density, and associated maximum crowding pressure, of the NLPs, suggestive of circular packing. A similarity in mixing induced by NLP1 and NLP3 despite a large difference in projected areas was analytically consistent with monovalent (one histidine tag) versus divalent (two histidine tags) surface interactions, respectively. In addition to GUVs, binding and induced mixing behavior of NLPs was also observed on planar, supported lipid multibilayers. Furthermore, the mixing process was reversible.

  13. Multifractal Modeling of Turbulent Mixing

    NASA Astrophysics Data System (ADS)

    Samiee, Mehdi; Zayernouri, Mohsen; Meerschaert, Mark M.

    2017-11-01

    Stochastic processes in random media are emerging as interesting tools for modeling anomalous transport phenomena. Applications include intermittent passive scalar transport with background noise in turbulent flows, which are observed in atmospheric boundary layers, turbulent mixing in reactive flows, and long-range dependent flow fields in disordered/fractal environments. In this work, we propose a nonlocal scalar transport equation involving the fractional Laplacian, where the corresponding fractional index is linked to the multifractal structure of the nonlinear passive scalar power spectrum. This work was supported by the AFOSR Young Investigator Program (YIP) award (FA9550-17-1-0150) and partially by MURI/ARO (W911NF-15-1-0562).

  14. Inflow, Outflow, Yields, and Stellar Population Mixing in Chemical Evolution Models

    NASA Astrophysics Data System (ADS)

    Andrews, Brett H.; Weinberg, David H.; Schönrich, Ralph; Johnson, Jennifer A.

    2017-02-01

    Chemical evolution models are powerful tools for interpreting stellar abundance surveys and understanding galaxy evolution. However, their predictions depend heavily on the treatment of inflow, outflow, star formation efficiency (SFE), the stellar initial mass function, the SN Ia delay time distribution, stellar yields, and stellar population mixing. Using flexCE, a flexible one-zone chemical evolution code, we investigate the effects of and trade-offs between parameters. Two critical parameters are SFE and the outflow mass-loading parameter, which shift the knee in [O/Fe]-[Fe/H] and the equilibrium abundances that the simulations asymptotically approach, respectively. One-zone models with simple star formation histories follow narrow tracks in [O/Fe]-[Fe/H] unlike the observed bimodality (separate high-α and low-α sequences) in this plane. A mix of one-zone models with inflow timescale and outflow mass-loading parameter variations, motivated by the inside-out galaxy formation scenario with radial mixing, reproduces the two sequences better than a one-zone model with two infall epochs. We present [X/Fe]-[Fe/H] tracks for 20 elements assuming three different supernova yield models and find some significant discrepancies with solar neighborhood observations, especially for elements with strongly metallicity-dependent yields. We apply principal component abundance analysis to the simulations and existing data to reveal the main correlations among abundances and quantify their contributions to variation in abundance space. For the stellar population mixing scenario, the abundances of α-elements and elements with metallicity-dependent yields dominate the first and second principal components, respectively, and collectively explain 99% of the variance in the model. flexCE is a python package available at https://github.com/bretthandrews/flexCE.
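
    The principal component abundance analysis step amounts to an SVD of the centered abundance matrix. The sketch below applies it to synthetic data (not flexCE output) to show how the variance explained by each component is obtained.

      import numpy as np

      rng = np.random.default_rng(0)
      n_stars, n_elem = 500, 20
      # synthetic [X/Fe] abundances: one dominant "alpha" direction plus noise
      alpha = rng.normal(size=(n_stars, 1))
      loadings = rng.normal(size=(1, n_elem))
      X = alpha @ loadings + 0.1 * rng.normal(size=(n_stars, n_elem))

      Xc = X - X.mean(axis=0)                       # center each abundance column
      U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
      explained = s**2 / np.sum(s**2)
      print(explained[:3])                          # first components dominate the variance
      # rows of Vt are the principal components (element mixing patterns)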

  15. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    PubMed

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95% CI 0.64 to 0.68; p < 0.001), which we modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth.
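
    A reduced version of this modelling strategy can be written with statsmodels and a patsy cubic regression spline basis; the continuous first-order autoregressive residual term used in the paper is not available in statsmodels' MixedLM and is usually fit with nlme::lme(..., correlation = corCAR1()) in R. The file and column names below are assumptions.

      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("growth.csv")                     # hypothetical file
      model = smf.mixedlm(
          "height ~ cr(age, df=4)",                      # population spline (patsy cr basis)
          data=df,
          groups="child",                                # subject identifier
          re_formula="~age",                             # random intercept and slope
      )
      fit = model.fit(reml=True)
      print(fit.summary())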

  16. A corrected formulation for marginal inference derived from two-part mixed models for longitudinal semi-continuous data.

    PubMed

    Tom, Brian Dm; Su, Li; Farewell, Vernon T

    2016-10-01

    For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. © The Author(s) 2013.

  17. A corrected formulation for marginal inference derived from two-part mixed models for longitudinal semi-continuous data

    PubMed Central

    Su, Li; Farewell, Vernon T

    2013-01-01

    For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. PMID:24201470

  18. A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.

    PubMed

    Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin

    2017-02-01

    The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Mixed convective peristaltic flow of carbon nanotubes submerged in water using different thermal conductivity models.

    PubMed

    Hayat, T; Ahmed, Bilal; Abbasi, F M; Ahmad, B

    2016-10-01

    Single Walled Carbon Nanotubes (SWCNTs) are an advanced product of nanotechnology with notable mechanical and physical properties. Peristalsis of SWCNTs suspended in water through an asymmetric channel is examined. This mechanism is studied in the presence of viscous dissipation, velocity slip, mixed convection, temperature jump and heat generation/absorption. Mathematical modeling is carried out under the low Reynolds number and long wavelength approximations. The resulting nonlinear system is solved using a perturbation technique for small Brinkman number. A physical analysis and comparison of the results in light of three different thermal conductivity models are also provided. It is reported that the heat transfer rate at the boundary increases with an increase in the nanotube volume fraction. The addition of nanotubes affects the pressure gradient during the peristaltic flow. Moreover, the maximum velocity of the fluid decreases due to the addition of the nanotubes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Using generalized additive mixed models to assess spatial, temporal, and hydrologic controls on bacteria and nitrate in a vulnerable agricultural aquifer.

    PubMed

    Mellor, Andrea F P; Cey, Edwin E

    2015-11-01

    The Abbotsford-Sumas aquifer (ASA) has a history of nitrate contamination from agricultural land use and manure application to soils, yet little is known about its microbial groundwater quality. The goal of this study was to investigate the spatiotemporal distribution of pathogen indicators (Escherichia coli [E. coli] and total coliform [TC]) and nitrate in groundwater, and their potential relation to hydrologic drivers. Sampling of 46 wells over an 11-month period confirmed elevated nitrate concentrations, with more than 50% of samples exceeding 10 mg-N/L. E. coli detections in groundwater were infrequent (4 of 385 total samples) and attributed mainly to surface water-groundwater connections along Fishtrap Creek, which tested positive for E. coli in every sampling event. TC was detected frequently in groundwater (70% of samples) across the ASA. Generalized additive mixed models (GAMMs) yielded valuable insights into relationships between TC or nitrate and a range of spatial, temporal, and hydrologic explanatory variables. Increased TC values over the wetter fall and winter period were most strongly related to groundwater temperatures and levels, while precipitation and well location were weaker (but still significant) predictors. In contrast, the moderate temporal variability in nitrate concentrations was not significantly related to hydrologic forcings. TC was relatively widespread across the ASA and spatial patterns could not be attributed solely to surface water connectivity. Varying nitrate concentrations across the ASA were significantly related to both well location and depth, likely due to spatially variable nitrogen loading and localized geochemical attenuation (i.e., denitrification). Vulnerability of the ASA to bacteria was clearly linked to hydrologic conditions, and was distinct from nitrate, such that a groundwater management strategy specifically for bacterial contaminants is warranted. Copyright © 2015 Elsevier B.V. All rights reserved.
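
    A rough Python analogue of the modelling idea is sketched below with pygam, using smooth terms for the hydrologic drivers plus a well factor; note this is a plain GAM rather than a full GAMM with random effects (the latter is typically fit with mgcv::gamm in R), and all file and column names are assumptions.

      import pandas as pd
      from pygam import LogisticGAM, s, f

      df = pd.read_csv("asa_monitoring.csv")                    # hypothetical file
      df["well_code"] = df["well_id"].astype("category").cat.codes
      X = df[["gw_temp", "gw_level", "precip_7d", "well_code"]].to_numpy()
      y = df["tc_detect"].to_numpy()                            # 1 = total coliform detected

      gam = LogisticGAM(s(0) + s(1) + s(2) + f(3))              # smooth drivers + well factor
      gam.gridsearch(X, y)                                      # tune smoothing penalties
      gam.summary()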

  1. Data on copula modeling of mixed discrete and continuous neural time series.

    PubMed

    Hu, Meng; Li, Mingyao; Li, Wu; Liang, Hualou

    2016-06-01

    Copula is an important tool for modeling neural dependence. Recent work on copula has been expanded to jointly model mixed time series in neuroscience ("Hu et al., 2016, Joint Analysis of Spikes and Local Field Potentials using Copula" [1]). Here we present further data for joint analysis of spike and local field potential (LFP) with copula modeling. In particular, the details of different model orders and the influence of possible spike contamination in LFP data from the same and different electrode recordings are presented. To further facilitate the use of our copula model for the analysis of mixed data, we provide the Matlab codes, together with example data.
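
    The basic trick of a Gaussian copula for mixed margins is to push correlated normals through their CDF and then through the inverse CDFs of the chosen margins. The sketch below simulates a dependent spike count and LFP amplitude with illustrative parameters; it is not the authors' model or their Matlab code.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      rho = 0.6                                          # copula correlation (illustrative)
      cov = np.array([[1.0, rho], [rho, 1.0]])
      z = rng.multivariate_normal([0.0, 0.0], cov, size=2000)

      u = stats.norm.cdf(z)                              # uniform marginals via the copula
      spikes = stats.poisson.ppf(u[:, 0], mu=3.0)        # discrete margin: spike counts
      lfp = stats.norm.ppf(u[:, 1], loc=0.0, scale=20.0) # continuous margin: LFP amplitude

      print(np.corrcoef(spikes, lfp)[0, 1])              # induced dependence between margins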

  2. Lagrangian Mixing in an Axisymmetric Hurricane Model

    DTIC Science & Technology

    2010-07-23

    The MMR r is found by taking the log of the time-series δρ(t)−A1, where A1 is 90% of the minimum value of δρ(t), and the slope of the linear func... Advective mixing in a nondivergent barotropic hurricane model, Atmos. Chem. Phys., 10, 475-497, doi:10.5194/acp-10-475-2010, 2010. Salman, H., Ide, K

  3. Guarana Provides Additional Stimulation over Caffeine Alone in the Planarian Model

    PubMed Central

    Moustakas, Dimitrios; Mezzio, Michael; Rodriguez, Branden R.; Constable, Mic Andre; Mulligan, Margaret E.; Voura, Evelyn B.

    2015-01-01

    The stimulant effect of energy drinks is primarily attributed to the caffeine they contain. Many energy drinks also contain other ingredients that might enhance the tonic effects of these caffeinated beverages. One of these additives is guarana. Guarana is a climbing plant native to the Amazon whose seeds contain approximately four times the amount of caffeine found in coffee beans. The mix of other natural chemicals contained in guarana seeds is thought to heighten the stimulant effects of guarana over caffeine alone. Yet, despite the growing use of guarana as an additive in energy drinks, and a burgeoning market for it as a nutritional supplement, the science examining guarana and how it affects other dietary ingredients is lacking. To appreciate the stimulant effects of guarana and other natural products, a straightforward model to investigate their physiological properties is needed. The planarian provides such a system. The locomotor activity and convulsive responses of planarians exposed to such substances have been shown to provide an excellent system to measure the effects of drug stimulation, addiction and withdrawal. To gauge the stimulant effects of guarana we studied how it altered the locomotor activity of the planarian species Dugesia tigrina. We report evidence that guarana seeds provide additional stimulation over caffeine alone, and document the changes to this stimulation in the context of both caffeine and glucose. PMID:25880065

  4. Mixed-phase cloud physics and Southern Ocean cloud feedback in climate models

    DOE PAGES

    McCoy, Daniel T.; Hartmann, Dennis L.; Zelinka, Mark D.; ...

    2015-08-21

    Increasing optical depth poleward of 45° is a robust response to warming in global climate models. Much of this cloud optical depth increase has been hypothesized to be due to transitions from ice-dominated to liquid-dominated mixed-phase cloud. In this study, the importance of liquid-ice partitioning for the optical depth feedback is quantified for 19 Coupled Model Intercomparison Project Phase 5 models. All models show a monotonic partitioning of ice and liquid as a function of temperature, but the temperature at which ice and liquid are equally mixed (the glaciation temperature) varies by as much as 40 K across models. Models that have a higher glaciation temperature are found to have a smaller climatological liquid water path (LWP) and condensed water path and experience a larger increase in LWP as the climate warms. The ice-liquid partitioning curve of each model may be used to calculate the response of LWP to warming. It is found that the repartitioning between ice and liquid in a warming climate contributes at least 20% to 80% of the increase in LWP as the climate warms, depending on model. Intermodel differences in the climatological partitioning between ice and liquid are estimated to contribute at least 20% to the intermodel spread in the high-latitude LWP response in the mixed-phase region poleward of 45°S. As a result, it is hypothesized that a more thorough evaluation and constraint of global climate model mixed-phase cloud parameterizations and validation of the total condensate and ice-liquid apportionment against observations will yield a substantial reduction in model uncertainty in the high-latitude cloud response to warming.
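
    The repartitioning argument can be illustrated with any monotonic liquid-fraction curve. The sketch below uses a hypothetical sigmoid in temperature, recovers the glaciation temperature as its 50% crossing, and shows how a uniform warming shifts condensate toward liquid; it is not a parameterization taken from any of the CMIP5 models.

      import numpy as np
      from scipy.optimize import brentq

      def liquid_fraction(T, T_glac=250.0, width=8.0):
          """Fraction of condensate that is liquid; T in kelvin (hypothetical curve)."""
          return 1.0 / (1.0 + np.exp(-(T - T_glac) / width))

      # recover the glaciation temperature as the 50% crossing point
      T50 = brentq(lambda T: liquid_fraction(T) - 0.5, 200.0, 280.0)
      print(T50)                                             # 250 K by construction

      # a uniform 2 K warming shifts condensate toward liquid in every temperature bin
      T = np.linspace(230.0, 270.0, 9)
      print(liquid_fraction(T + 2.0) - liquid_fraction(T))   # repartitioning effect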

  5. Rayleigh-Taylor and Richtmyer-Meshkov instability induced flow, turbulence, and mixing. II

    NASA Astrophysics Data System (ADS)

    Zhou, Ye

    2017-12-01

    Rayleigh-Taylor (RT) and Richtmyer-Meshkov (RM) instabilities are well-known pathways towards turbulent mixing layers, in many cases characterized by significant mass and species exchange across the mixing layers (Zhou, 2017, Physics Reports, 720-722, 1-136). Mathematically, the pathway to turbulent mixing requires that the initial interface be multimodal, to permit cross-mode coupling leading to turbulence. Practically speaking, it is difficult to experimentally produce a non-multi-mode initial interface. Numerous methods and approaches have been developed to describe the late, multimodal, turbulent stages of RT and RM mixing layers. This paper first presents the initial condition dependence of RT mixing layers, and introduces parameters that are used to evaluate the level of "mixedness" and "mixed mass" within the layers, the dependence on density differences, and the characteristic anisotropy of this acceleration-driven flow, emphasizing some of the key differences between two-dimensional and three-dimensional RT mixing layers. Next, the RM mixing layers are discussed, and differences with the RT mixing layer are elucidated, including the RM mixing layer's dependence on the Mach number of the initiating shock. Another key feature of the RM induced flows is its response to a reshock event, as frequently seen in shock-tube experiments as well as inertial confinement events. A number of approaches to modeling the evolution of these mixing layers are then described, in order of increasing complexity. These include simple buoyancy-drag models, Reynolds-averaged Navier-Stokes models of increased complexity, including K-ε, K-L, and K-L-a models, up to full Reynolds-stress models with more than one length scale. Multifield models and multiphase models have also been implemented. Additional complexities to these flows are examined as well as modifications to the models to understand the effects of these complexities. These complexities include the

  6. Dynamic Infinite Mixed-Membership Stochastic Blockmodel.

    PubMed

    Fan, Xuhui; Cao, Longbing; Xu, Richard Yi Da

    2015-09-01

    Directional and pairwise measurements are often used to model interactions in a social network setting. The mixed-membership stochastic blockmodel (MMSB) was a seminal work in this area, and its capabilities have been extended. However, models such as MMSB face particular challenges in modeling dynamic networks, for example, when the number of communities is unknown. Accordingly, this paper proposes a dynamic infinite mixed-membership stochastic blockmodel, a generalized framework that extends the existing work to potentially infinite communities inside a network in dynamic settings (i.e., networks are observed over time). Additional model parameters are introduced to reflect the degree of persistence among one's memberships at consecutive time stamps. Under this framework, two specific models, namely mixture time-variant and mixture time-invariant models, are proposed to depict two different time correlation structures. Two effective posterior sampling strategies and their results are presented, respectively, using synthetic and real-world data.

  7. Mixing in the Extratropical Stratosphere: Model-measurements Comparisons using MLM Diagnostics

    NASA Technical Reports Server (NTRS)

    Ma, Jun; Waugh, Darryn W.; Douglass, Anne R.; Kawa, Stephan R.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    We evaluate transport processes in the extratropical lower stratosphere for both models and measurements with the help of the equivalent length diagnostic from the modified Lagrangian-mean (MLM) analysis. This diagnostic is used to compare measurements of long-lived tracers made by the Cryogenic Limb Array Etalon Spectrometer (CLAES) on the Upper Atmosphere Research Satellite (UARS) with simulated tracers. Simulations are produced with Chemical and Transport Models (CTMs), in which meteorological fields are taken from the Goddard Earth Observing System Data Assimilation System (GEOS DAS), the Middle Atmosphere Community Climate Model (MACCM2), and the Geophysical Fluid Dynamics Laboratory (GFDL) "SKYHI" model, respectively. Time series of isentropic equivalent length show that these models are able to capture the major mixing and transport properties observed by CLAES, such as the formation and destruction of polar barriers and the presence of surf zones in both hemispheres. Differences between each model simulation and the observations are examined in light of model performance. Among these differences, only the simulation driven by GEOS DAS shows one case of the "top-down" destruction of the Antarctic polar vortex, as observed in the CLAES data. Additional experiments of isentropic advection of an artificial tracer by GEOS DAS winds suggest that diabatic motion might contribute considerably to the equivalent length field in the 3D CTM diagnostics.

  8. A mixed model for the relationship between climate and human cranial form.

    PubMed

    Katz, David C; Grote, Mark N; Weaver, Timothy D

    2016-08-01

    We expand upon a multivariate mixed model from quantitative genetics in order to estimate the magnitude of climate effects in a global sample of recent human crania. In humans, genetic distances are correlated with distances based on cranial form, suggesting that population structure influences both genetic and quantitative trait variation. Studies controlling for this structure have demonstrated significant underlying associations of cranial distances with ecological distances derived from climate variables. However, to assess the biological importance of an ecological predictor, estimates of effect size and uncertainty in the original units of measurement are clearly preferable to significance claims based on units of distance. Unfortunately, the magnitudes of ecological effects are difficult to obtain with distance-based methods, while models that produce estimates of effect size generally do not scale to high-dimensional data like cranial shape and form. Using recent innovations that extend quantitative genetics mixed models to highly multivariate observations, we estimate morphological effects associated with a climate predictor for a subset of the Howells craniometric dataset. Several measurements, particularly those associated with cranial vault breadth, show a substantial linear association with climate, and the multivariate model incorporating a climate predictor is preferred in model comparison. Previous studies demonstrated the existence of a relationship between climate and cranial form. The mixed model quantifies this relationship concretely. Evolutionary questions that require population structure and phylogeny to be disentangled from potential drivers of selection may be particularly well addressed by mixed models. Am J Phys Anthropol 160:593-603, 2016. © 2015 Wiley Periodicals, Inc.

  9. A novel iterative mixed model to remap three complex orthopedic traits in dogs

    PubMed Central

    Huang, Meng; Hayward, Jessica J.; Corey, Elizabeth; Garrison, Susan J.; Wagner, Gabriela R.; Krotscheck, Ursula; Hayashi, Kei; Schweitzer, Peter A.; Lust, George; Boyko, Adam R.; Todhunter, Rory J.

    2017-01-01

    Hip dysplasia (HD), elbow dysplasia (ED), and rupture of the cranial (anterior) cruciate ligament (RCCL) are the most common complex orthopedic traits of dogs and all result in debilitating osteoarthritis. We reanalyzed previously reported data: the Norberg angle (a quantitative measure of HD) in 921 dogs, ED in 113 cases and 633 controls, and RCCL in 271 cases and 399 controls and their genotypes at ~185,000 single nucleotide polymorphisms. A novel fixed and random model with a circulating probability unification (FarmCPU) function, with marker-based principal components and a kinship matrix to correct for population stratification, was used. A Bonferroni correction at p < 0.01 resulted in a threshold of P < 6.96 × 10^-8. Six loci were identified; three for HD and three for RCCL. An associated locus at CFA28:34,369,342 for HD was described previously in the same dogs using a conventional mixed model. No loci were identified for RCCL in the previous report, and the two loci for ED in the previous report did not reach genome-wide significance using the FarmCPU model. These results were supported by simulation, which demonstrated that the FarmCPU held no power advantage over the linear mixed model for the ED sample but provided additional power for the HD and RCCL samples. Candidate genes for HD and RCCL are discussed. When using FarmCPU software, we recommend a resampling test, that a positive control be used to determine the optimum pseudo quantitative trait nucleotide-based covariate structure of the model, and a negative control be used consisting of permutation testing and the identical resampling test as for the non-permuted phenotypes. PMID:28614352

  10. Numerical Study of Mixing Thermal Conductivity Models for Nanofluid Heat Transfer Enhancement

    NASA Astrophysics Data System (ADS)

    Pramuanjaroenkij, A.; Tongkratoke, A.; Kakaç, S.

    2018-01-01

    Researchers have paid attention to nanofluid applications, since nanofluids have shown their potential as working fluids in many thermal systems. Numerical studies of convective heat transfer in nanofluids can treat them as either single- or two-phase fluids. This work focuses on improving the performance of the single-phase nanofluid model, which requires less calculation time and is less complicated, by utilizing a mixing thermal conductivity model that combines static and dynamic parts applied alternately in the simulation domain. An in-house numerical program has been developed to analyze the effects of the grid nodes, the effective viscosity model, the boundary-layer thickness, and the mixing thermal conductivity model on the nanofluid heat transfer enhancement. CuO-water, Al2O3-water, and Cu-water nanofluids are chosen, and their laminar fully developed flows through a rectangular channel are considered. The influence of the effective viscosity model on the nanofluid heat transfer enhancement is estimated through the average differences between the numerical and experimental results for the nanofluids mentioned. The heat transfer enhancement results show that the mixing thermal conductivity model consisting of the Maxwell model as the static part and the Yu and Choi model as the dynamic part, when applied to all three nanofluids, brings the numerical results closer to the experimental ones. The average differences between those results for the CuO-water, Al2O3-water, and Cu-water nanofluid flows are 3.25, 2.74, and 3.02%, respectively. The mixing thermal conductivity model is thus shown to increase the accuracy of single-phase nanofluid simulations and to reveal its potential in single-phase nanofluid numerical studies.
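
    The static part of such a mixing model is commonly the Maxwell effective-conductivity expression, sketched below with nominal particle and base-fluid conductivities; the dynamic (Yu and Choi) contribution used in the paper is not reproduced here.

      def k_maxwell(k_f, k_p, phi):
          """Maxwell effective thermal conductivity of a dilute particle suspension."""
          num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
          den = k_p + 2 * k_f - phi * (k_p - k_f)
          return k_f * num / den

      k_water = 0.613                      # W/(m K)
      # nominal particle conductivities, 2% volume fraction
      for name, k_p in [("CuO", 20.0), ("Al2O3", 40.0), ("Cu", 400.0)]:
          print(name, round(k_maxwell(k_water, k_p, 0.02), 4))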

  11. Modeling Photodetachment from HO2- Using the pd Case of the Generalized Mixed Character Molecular Orbital Model

    NASA Astrophysics Data System (ADS)

    Blackstone, Christopher C.; Sanov, Andrei

    2016-06-01

    Using the generalized model for photodetachment of electrons from mixed-character molecular orbitals, we gain insight into the nature of the HOMO of HO2- by treating it as a coherent superposition of one p- and one d-type atomic orbital. Fitting the pd model function to the ab initio calculated HOMO of HO2- yields a fractional d-character, γp, of 0.979. The modeled curve of the anisotropy parameter, β, as a function of electron kinetic energy for a pd-type mixed character orbital is matched to the experimental data.

  12. A Comparison of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…

  13. Inflow, Outflow, Yields, and Stellar Population Mixing in Chemical Evolution Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Brett H.; Weinberg, David H.; Schönrich, Ralph

    Chemical evolution models are powerful tools for interpreting stellar abundance surveys and understanding galaxy evolution. However, their predictions depend heavily on the treatment of inflow, outflow, star formation efficiency (SFE), the stellar initial mass function, the SN Ia delay time distribution, stellar yields, and stellar population mixing. Using flexCE, a flexible one-zone chemical evolution code, we investigate the effects of and trade-offs between parameters. Two critical parameters are SFE and the outflow mass-loading parameter, which shift the knee in [O/Fe]–[Fe/H] and the equilibrium abundances that the simulations asymptotically approach, respectively. One-zone models with simple star formation histories follow narrow tracks in [O/Fe]–[Fe/H] unlike the observed bimodality (separate high-α and low-α sequences) in this plane. A mix of one-zone models with inflow timescale and outflow mass-loading parameter variations, motivated by the inside-out galaxy formation scenario with radial mixing, reproduces the two sequences better than a one-zone model with two infall epochs. We present [X/Fe]–[Fe/H] tracks for 20 elements assuming three different supernova yield models and find some significant discrepancies with solar neighborhood observations, especially for elements with strongly metallicity-dependent yields. We apply principal component abundance analysis to the simulations and existing data to reveal the main correlations among abundances and quantify their contributions to variation in abundance space. For the stellar population mixing scenario, the abundances of α-elements and elements with metallicity-dependent yields dominate the first and second principal components, respectively, and collectively explain 99% of the variance in the model. flexCE is a python package available at https://github.com/bretthandrews/flexCE.

  14. Langmuir cells and mixing in the upper ocean

    NASA Astrophysics Data System (ADS)

    Carniel, S.; Sclavo, M.; Kantha, L. H.; Clayson, C. A.

    2005-01-01

    The presence of surface gravity waves at the ocean surface has two important effects on turbulence in the oceanic mixed layer (ML): wave breaking and Langmuir cells (LC). Both effects act as additional sources of turbulent kinetic energy (TKE) in the oceanic ML, and hence are important to mixing in the upper ocean. The breaking of high wave-number components of the wind wave spectrum provides an intense but sporadic source of turbulence near the surface; the turbulence thus injected diffuses downward while decaying rapidly, modifying oceanic near-surface properties, which in turn could affect the air-sea transfer of heat and dissolved gases. LC provide another source of additional turbulence in the water column; they are counter-rotating cells inside the ML, with their axes roughly aligned in the direction of the wind (Langmuir, I., Science, 87, 119, 1938). These structures are usually made evident by the presence of debris and foam in the convergence zones of the cells, and are generated by the interaction of the wave-field-induced Stokes drift with the wind-induced shear stress. LC have long been thought to have a substantial influence on mixing in the upper ocean, but the difficulty of parameterizing them has led ML modelers to consistently ignore them in the past. However, recent Large Eddy Simulation (LES) studies suggest that it is possible to include their effect on mixing by simply adding production terms to the turbulence equations, thus enabling even 1D models to incorporate LC-driven turbulence. Since LC also modify the Coriolis terms in the mean momentum equations by the addition of a term involving the Stokes drift, their effect on the velocity structure in the ML is also quite significant and could have a major impact on the drift of objects and spilled oil in the upper ocean. In this paper we examine the effect of surface gravity waves on mixing in the upper ocean, focusing on Langmuir circulations, which is by far the dominant

  15. Study on system dynamics of evolutionary mix-game models

    NASA Astrophysics Data System (ADS)

    Gou, Chengling; Guo, Xiaoqian; Chen, Fang

    2008-11-01

    The mix-game model is adapted from the agent-based minority game (MG) model and is used to simulate a real financial market. Unlike MG, there are two groups of agents in the mix-game: Group 1 plays a majority game and Group 2 plays a minority game. The two groups have different bounded abilities to process historical information and to track their own performance. In this paper, we modify the mix-game model by giving agents the ability to evolve: if an agent's winning rate is smaller than a threshold, it copies the best strategies held by another agent, and agents repeat this evolution at certain time intervals. Through simulations this paper finds: (1) the average winning rates of agents in Group 1 and the mean volatilities increase as the thresholds of Group 1 increase; (2) the average winning rates of both groups decrease but the mean volatilities of the system increase as the thresholds of Group 2 increase; (3) the thresholds of Group 2 have a greater impact on system dynamics than the thresholds of Group 1; (4) the characteristics of system dynamics under different time intervals of strategy change are qualitatively similar but differ quantitatively; (5) as the time interval of strategy change increases from 1 to 20, the system becomes more and more stable and the performance of agents in both groups also improves.

  16. Analysis of mixed model in gear transmission based on ADAMS

    NASA Astrophysics Data System (ADS)

    Li, Xiufeng; Wang, Yabin

    2012-09-01

    Traditional methods of simulating mechanical gear drives include the gear-pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower accuracy; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Most current research focuses on the description of geometric models and the definition of boundary conditions, but neither approach solves these problems fundamentally. To improve simulation efficiency while keeping the results accurate, a mixed model method is presented in which gear tooth profiles take the place of the solid gear in simulating gear movement. In the modeling process, the solid models of the mechanism are first built in SolidWorks; the point coordinates of the gear outline curves are then collected using the SolidWorks API, and fit curves are created in Adams from those coordinates; next, the positions of the fitted curves are adjusted according to the position of the contact area; finally, the loading conditions, boundary conditions, and simulation parameters are defined. The method provides gear shape information through tooth profile curves, simulates the meshing process through curve-to-curve contact between tooth profiles, and supplies mass and inertia data via the solid gear models. The simulation combines the two models to complete the gear-driving analysis. To verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement are conducted. The results show that the computational efficiency of the mixed model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to the theoretical calculations. Consequently, the mixed model method has high application value for studying the dynamics of gear mechanisms.

  17. A big data approach to the development of mixed-effects models for seizure count data.

    PubMed

    Tharayil, Joseph J; Chiang, Sharon; Moss, Robert; Stern, John M; Theodore, William H; Goldenholz, Daniel M

    2017-05-01

    Our objective was to develop a generalized linear mixed model for predicting seizure count that is useful in the design and analysis of clinical trials. This model also may benefit the design and interpretation of seizure-recording paradigms. Most existing seizure count models do not include children, and there is currently no consensus regarding the most suitable model that can be applied to children and adults. Therefore, an additional objective was to develop a model that accounts for both adult and pediatric epilepsy. Using data from SeizureTracker.com, a patient-reported seizure diary tool with >1.2 million recorded seizures across 8 years, we evaluated the appropriateness of Poisson, negative binomial, zero-inflated negative binomial, and modified negative binomial models for seizure count data based on minimization of the Bayesian information criterion. Generalized linear mixed-effects models were used to account for demographic and etiologic covariates and for autocorrelation structure. Holdout cross-validation was used to evaluate predictive accuracy in simulating seizure frequencies. For both adults and children, we found that a negative binomial model with autocorrelation over 1 day was optimal. Using holdout cross-validation, the proposed model was found to provide accurate simulation of seizure counts for patients with up to four seizures per day. The optimal model can be used to generate more realistic simulated patient data with very few input parameters. The availability of a parsimonious, realistic virtual patient model can be of great utility in simulations of phase II/III clinical trials, epilepsy monitoring units, outpatient biosensors, and mobile Health (mHealth) applications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
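
    As a minimal sketch of the count-model comparison described above (Poisson versus negative binomial selected by BIC), the following uses statsmodels on synthetic counts; the real analysis also includes demographic and etiologic covariates, autocorrelation over one day, and random effects, none of which are reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical daily seizure counts with one covariate.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = rng.negative_binomial(2, 0.4, size=n)        # toy overdispersed counts
X = sm.add_constant(x)

def bic(res, n_obs):
    # BIC = -2 log L + k log n, with k = number of estimated parameters
    return -2.0 * res.llf + len(res.params) * np.log(n_obs)

pois = sm.Poisson(y, X).fit(disp=False)
negb = sm.NegativeBinomial(y, X).fit(disp=False)  # also estimates the dispersion
print("Poisson BIC:", bic(pois, n), " NegBin BIC:", bic(negb, n))
# The model with the smaller BIC (here typically the negative binomial) is preferred.
```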

  18. Teaching Service Modelling to a Mixed Class: An Integrated Approach

    ERIC Educational Resources Information Center

    Deng, Jeremiah D.; Purvis, Martin K.

    2015-01-01

    Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…

  19. Effects of imperfect mixing on low-density polyethylene reactor dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, C.M.; Dihora, J.O.; Ray, W.H.

    1998-07-01

    Earlier work considered the effect of feed conditions and controller configuration on the runaway behavior of LDPE autoclave reactors assuming a perfectly mixed reactor. This study provides additional insight on the dynamics of such reactors by using an imperfectly mixed reactor model and bifurcation analysis to show the changes in the stability region when there is imperfect macroscale mixing. The presence of imperfect mixing substantially increases the range of stable operation of the reactor and makes the process much easier to control than for a perfectly mixed reactor. The results of model analysis and simulations are used to identify some of the conditions that lead to unstable reactor behavior and to suggest ways to avoid reactor runaway or reactor extinction during grade transitions and other process operation disturbances.

  20. Development and validation of a turbulent-mix model for variable-density and compressible flows.

    PubMed

    Banerjee, Arindam; Gore, Robert A; Andrews, Malcolm J

    2010-10-01

    The modeling of buoyancy driven turbulent flows is considered in conjunction with an advanced statistical turbulence model referred to as the BHR (Besnard-Harlow-Rauenzahn) k-S-a model. The BHR k-S-a model is focused on variable-density and compressible flows such as Rayleigh-Taylor (RT), Richtmyer-Meshkov (RM), and Kelvin-Helmholtz (KH) driven mixing. The BHR k-S-a turbulence mix model has been implemented in the RAGE hydro-code, and model constants are evaluated based on analytical self-similar solutions of the model equations. The results are then compared with a large test database available from experiments and direct numerical simulations (DNS) of RT, RM, and KH driven mixing. Furthermore, we describe research to understand how the BHR k-S-a turbulence model operates over a range of moderate to high Reynolds number buoyancy driven flows, with a goal of placing the modeling of buoyancy driven turbulent flows at the same level of development as that of single phase shear flows.

  1. Attribution of horizontal and vertical contributions to spurious mixing in an Arbitrary Lagrangian-Eulerian ocean model

    NASA Astrophysics Data System (ADS)

    Gibson, Angus H.; Hogg, Andrew McC.; Kiss, Andrew E.; Shakespeare, Callum J.; Adcroft, Alistair

    2017-11-01

    We examine the separate contributions to spurious mixing from horizontal and vertical processes in an ALE ocean model, MOM6, using reference potential energy (RPE). The RPE is a global diagnostic which changes only due to mixing between density classes. We extend this diagnostic to a sub-timestep timescale in order to individually separate contributions to spurious mixing through horizontal (tracer advection) and vertical (regridding/remapping) processes within the model. We both evaluate the overall spurious mixing in MOM6 against previously published output from other models (MOM5, MITGCM and MPAS-O), and investigate impacts on the components of spurious mixing in MOM6 across a suite of test cases: a lock exchange, internal wave propagation, and a baroclinically-unstable eddying channel. The split RPE diagnostic demonstrates that the spurious mixing in a lock exchange test case is dominated by horizontal tracer advection, due to the spatial variability in the velocity field. In contrast, the vertical component of spurious mixing dominates in an internal waves test case. MOM6 performs well in this test case owing to its quasi-Lagrangian implementation of ALE. Finally, the effects of model resolution are examined in a baroclinic eddies test case. In particular, the vertical component of spurious mixing dominates as horizontal resolution increases, an important consideration as global models evolve towards higher horizontal resolutions.
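
    A minimal sketch of the reference-potential-energy idea for a single column is given below: densities are adiabatically re-sorted so the densest water lies deepest, and the potential energy of that sorted state is integrated. The uniform column area and cell layout are simplifying assumptions; the diagnostic in the study is computed on the full 3-D grid and at sub-timestep frequency.

```python
import numpy as np

def reference_pe(rho, dz, area=1.0, g=9.81):
    """Reference potential energy of a column of cells: sort densities so the densest
    water sits at the bottom, then integrate g * rho * z over that minimum-PE state.
    Spurious mixing between density classes shows up as drift in this diagnostic."""
    rho_sorted = np.sort(rho)[::-1]          # densest first -> placed deepest
    z_top = np.cumsum(dz)                    # cell-top heights above the bottom
    z_mid = z_top - 0.5 * dz                 # cell-centre heights above the bottom
    return np.sum(g * rho_sorted * z_mid * dz * area)

# Hypothetical column: 20 cells of 5 m thickness with a noisy stratification.
rng = np.random.default_rng(7)
rho = 1025.0 + np.sort(rng.normal(0.0, 0.5, 20))[::-1] + rng.normal(0.0, 0.05, 20)
print(reference_pe(rho, np.full(20, 5.0)))
```

    Because reversible adiabatic rearrangements leave this quantity unchanged, differencing it between successive model states isolates mixing between density classes.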

  2. Probabilistic performance-assessment modeling of the mixed waste landfill at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne

    2007-01-01

    A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.

  3. Turbulent transport and mixing in transitional Rayleigh-Taylor unstable flow: A priori assessment of gradient-diffusion and similarity modeling

    NASA Astrophysics Data System (ADS)

    Schilling, Oleg; Mueschke, Nicholas J.

    2017-12-01

    largely adopted in two-equation Reynolds-averaged Navier-Stokes (RANS) models of Rayleigh-Taylor turbulent mixing. In addition, it is shown that the predictions of the Boussinesq model for the Reynolds stress agree better with the data when additional buoyancy-related terms are included. It is shown that an unsteady RANS paradigm is needed to predict the transitional flow dynamics from early evolution times, analogous to the small Reynolds number modifications in RANS models of wall-bounded flows in which the production-to-dissipation ratio is far from equilibrium. Although the present study is specific to one particular flow and one set of initial conditions, the methodology could be applied to calibrations of other Rayleigh-Taylor flows with different initial conditions (which may give different results during the early-time, transitional flow stages, and perhaps asymptotic stage). The implications of these findings for developing high-fidelity eddy viscosity-based turbulent transport and mixing models of Rayleigh-Taylor turbulence are discussed.

  4. Random effects coefficient of determination for mixed and meta-analysis models

    PubMed Central

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2011-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. The value of Rr2 apart from 0 indicates the evidence of the variance reduction in support of the mixed model. If random effects coefficient of determination is close to 1 the variance of random effects is very large and random effects turn into free fixed effects—the model can be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine. PMID:23750070
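
    A rough numerical sketch is given below, under the assumption that for a simple random-intercept model the definition reduces to the share of the conditional variance contributed by the random intercept; the paper derives exact formulas for the random intercept, growth curve, and meta-analysis cases, which may differ from this simple ratio. The data are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical clustered data for a random-intercept model y ~ x, grouped by "cluster".
rng = np.random.default_rng(2)
groups = np.repeat(np.arange(40), 10)
u = rng.normal(scale=1.0, size=40)[groups]            # true random intercepts
x = rng.normal(size=groups.size)
y = 1.0 + 0.5 * x + u + rng.normal(scale=1.5, size=groups.size)
df = pd.DataFrame({"y": y, "x": x, "cluster": groups})

fit = smf.mixedlm("y ~ x", df, groups=df["cluster"]).fit()
var_re = float(fit.cov_re.iloc[0, 0])    # estimated random-intercept variance
var_eps = fit.scale                      # estimated residual variance
r2_random = var_re / (var_re + var_eps)  # share of conditional variance from random effects
print(round(r2_random, 3))               # near 0 -> random effects add little
```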

  5. Random effects coefficient of determination for mixed and meta-analysis models.

    PubMed

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. The value of Rr2 apart from 0 indicates the evidence of the variance reduction in support of the mixed model. If random effects coefficient of determination is close to 1 the variance of random effects is very large and random effects turn into free fixed effects-the model can be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.

  6. Mixing characterisation of full-scale membrane bioreactors: CFD modelling with experimental validation.

    PubMed

    Brannock, M; Wang, Y; Leslie, G

    2010-05-01

    Membrane Bioreactors (MBRs) have been successfully used in aerobic biological wastewater treatment to solve the perennial problem of effective solids-liquid separation. The optimisation of MBRs requires knowledge of the membrane fouling, biokinetics and mixing. However, research has mainly concentrated on the fouling and biokinetics (Ng and Kim, 2007). Current methods of design for a desired flow regime within MBRs are largely based on assumptions (e.g. complete mixing of tanks) and empirical techniques (e.g. specific mixing energy). However, it is difficult to predict how sludge rheology and vessel design in full-scale installations affects hydrodynamics, hence overall performance. Computational Fluid Dynamics (CFD) provides a method for prediction of how vessel features and mixing energy usage affect the hydrodynamics. In this study, a CFD model was developed which accounts for aeration, sludge rheology and geometry (i.e. bioreactor and membrane module). This MBR CFD model was then applied to two full-scale MBRs and was successfully validated against experimental results. The effect of sludge settling and rheology was found to have a minimal impact on the bulk mixing (i.e. the residence time distribution).

  7. Modelling ventricular fibrillation coarseness during cardiopulmonary resuscitation by mixed effects stochastic differential equations.

    PubMed

    Gundersen, Kenneth; Kvaløy, Jan Terje; Eftestøl, Trygve; Kramer-Johansen, Jo

    2015-10-15

    For patients undergoing cardiopulmonary resuscitation (CPR) and being in a shockable rhythm, the coarseness of the electrocardiogram (ECG) signal is an indicator of the state of the patient. In the current work, we show how mixed effects stochastic differential equations (SDE) models, commonly used in pharmacokinetic and pharmacodynamic modelling, can be used to model the relationship between CPR quality measurements and ECG coarseness. This is a novel application of mixed effects SDE models to a setting quite different from previous applications of such models and where using such models nicely solves many of the challenges involved in analysing the available data. Copyright © 2015 John Wiley & Sons, Ltd.
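
    The abstract does not give the model equations, so the following is only a generic illustration of what a mixed-effects SDE looks like, not the authors' ECG-coarseness model: an Ornstein-Uhlenbeck process whose mean-reversion rate is a subject-level random effect, integrated with the Euler-Maruyama scheme. All parameter values are hypothetical.

```python
import numpy as np

def simulate_ou_mixed(n_subjects=5, n_steps=500, dt=0.01, mu=1.0,
                      theta_mean=1.5, theta_sd=0.4, sigma=0.3, seed=6):
    """Euler-Maruyama simulation of a toy mixed-effects SDE
    dX_i = theta_i * (mu - X_i) dt + sigma dW_i, theta_i ~ N(theta_mean, theta_sd^2),
    i.e. each subject gets its own random mean-reversion rate."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(theta_mean, theta_sd, n_subjects)   # subject-level random effects
    X = np.zeros((n_subjects, n_steps + 1))
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_subjects)       # Brownian increments
        X[:, k + 1] = X[:, k] + theta * (mu - X[:, k]) * dt + sigma * dW
    return X

paths = simulate_ou_mixed()
print(paths[:, -1])   # end-of-window value for each simulated subject
```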

  8. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
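
    Wood's function is one of the seven curves compared above. As a minimal sketch, the following fits Wood's curve to a single animal's test-day values by ordinary least squares and reports the turning point of the curve; the study itself fits such curves as non-linear mixed models (PROC NLMIXED) with random animal effects, and the data here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve: y(t) = a * t**b * exp(-c * t)."""
    return a * t**b * np.exp(-c * t)

# Synthetic monthly fat-to-protein ratios for one animal (t in months since calving).
t = np.arange(1, 11, dtype=float)
y = wood(t, 1.3, -0.12, -0.02) + 0.005 * np.random.default_rng(3).normal(size=t.size)

params, _ = curve_fit(wood, t, y, p0=[1.3, -0.1, -0.02], maxfev=10000)
a, b, c = params
t_turn = b / c          # dy/dt = 0 at t = b/c: a minimum here because b < 0 and c < 0
print(params, t_turn)   # fitted parameters and predicted test time of minimum FPR
```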

  9. VISUAL PLUMES MIXING ZONE MODELING SOFTWARE

    EPA Science Inventory

    The US Environmental Protection Agency has a history of developing plume models and providing technical assistance. The Visual Plumes model (VP) is a recent addition to the public-domain models available on the EPA Center for Exposure Assessment Modeling (CEAM) web page. The Wind...

  10. Evaluating targeted interventions via meta-population models with multi-level mixing.

    PubMed

    Feng, Zhilan; Hill, Andrew N; Curns, Aaron T; Glasser, John W

    2017-05-01

    Among the several means by which heterogeneity can be modeled, Levins' (1969) meta-population approach preserves the most analytical tractability, a virtue to the extent that generality is desirable. When model populations are stratified, contacts among their respective sub-populations must be described. Using a simple meta-population model, Feng et al. (2015) showed that mixing among sub-populations, as well as heterogeneity in characteristics affecting sub-population reproduction numbers, must be considered when evaluating public health interventions to prevent or control infectious disease outbreaks. They employed the convex combination of preferential within- and proportional among-group contacts first described by Nold (1980) and subsequently generalized by Jacquez et al. (1988). As the utility of meta-population modeling depends on more realistic mixing functions, the authors added preferential contacts between parents and children and among co-workers (Glasser et al., 2012). Here they further generalize this function by including preferential contacts between grandparents and grandchildren, but omit workplace contacts. They also describe a general multi-level mixing scheme, provide three two-level examples, and apply two of them. In their first application, the authors describe age- and gender-specific patterns in face-to-face conversations (Mossong et al., 2008), proxies for contacts by which respiratory pathogens might be transmitted, that are consistent with everyday experience. This suggests that meta-population models with inter-generational mixing could be employed to evaluate prolonged school-closures, a proposed pandemic mitigation measure that could expose grandparents, and other elderly surrogate caregivers for working parents, to infectious children. In their second application, the authors use a meta-population SEIR model stratified by 7 age groups and 50 states plus the District of Columbia, to compare actual with optimal vaccination during the
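
    A minimal sketch of the basic Nold/Jacquez-style convex combination of preferential within-group and proportional among-group contacts is given below; the generalized function described above adds further preferential terms (parent-child, grandparent-grandchild) that are not reproduced here, and all group sizes, activities, and preference fractions are hypothetical.

```python
import numpy as np

def preferred_mixing(eps, activity, N):
    """Preferred mixing: a fraction eps_i of group i's contacts is reserved for its
    own group; the remainder is distributed among all groups in proportion to their
    share of the non-reserved contacts (activity * size). Rows sum to 1."""
    eps = np.asarray(eps, float)
    w = (1.0 - eps) * np.asarray(activity, float) * np.asarray(N, float)
    f = w / w.sum()                              # proportional-mixing shares
    P = (1.0 - eps)[:, None] * f[None, :]        # among-group contact fractions
    P[np.diag_indices_from(P)] += eps            # add the within-group preference
    return P

# Hypothetical three-group example (e.g. children, parents, grandparents).
print(preferred_mixing(eps=[0.4, 0.2, 0.3], activity=[12, 10, 6], N=[900, 1000, 400]))
```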

  11. Mixed layer modeling in the East Pacific warm pool during 2002

    NASA Astrophysics Data System (ADS)

    Van Roekel, Luke P.; Maloney, Eric D.

    2012-06-01

    Two vertical mixing models (the modified dynamic instability model of Price et al., PWP, and the K-Profile Parameterization, KPP) are used to analyze intraseasonal sea surface temperature (SST) variability in the northeast tropical Pacific near the Costa Rica Dome during boreal summer of 2002. Anomalies in surface latent heat flux and shortwave radiation are the root cause of the three intraseasonal SST oscillations of order 1°C amplitude that occur during this time, although surface stress variations have a significant impact on the third event. A slab ocean model that uses observed monthly varying mixed layer depths and accounts for penetrating shortwave radiation appears to simulate the first two SST oscillations well, but not the third. The third oscillation is associated with small mixed layer depths (<5 m) forced by, and acting with, weak surface stresses and a stabilizing heat flux that cause a transient spike in SST of 2°C. Intraseasonal variations in freshwater flux due to precipitation and diurnal flux variability do not significantly impact these intraseasonal oscillations. These results suggest that a slab ocean coupled to an atmospheric general circulation model, as used in previous studies of east Pacific intraseasonal variability, may not be entirely adequate to realistically simulate SST variations. Further, while most of the results from the PWP and KPP models are similar, some important differences that emerge are discussed.
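
    A minimal slab-ocean sketch of the kind discussed above: SST is integrated as dT/dt = (Q_net - Q_pen) / (rho * c_p * h) with a prescribed mixed-layer depth. The single-exponential shortwave penetration and all forcing values are illustrative assumptions, not the observed forcing used in the study.

```python
import numpy as np

def slab_sst(sst0, q_net, q_sw, h, dt=3600.0, rho=1025.0, cp=3990.0, h_pen=10.0):
    """March a slab mixed-layer SST forward in time. Q_pen is the shortwave that
    penetrates below the mixed-layer base, parameterized here as a single exponential
    exp(-h / h_pen); real models use multi-band absorption schemes."""
    sst = [sst0]
    for qn, qs, hh in zip(q_net, q_sw, h):
        q_pen = qs * np.exp(-hh / h_pen)                 # radiation lost below the layer
        sst.append(sst[-1] + dt * (qn - q_pen) / (rho * cp * hh))
    return np.array(sst)

# Hypothetical hourly forcing over 30 days: net flux, shortwave, mixed-layer depth.
hours = 24 * 30
q_net = 150.0 * np.sin(np.linspace(0.0, 30 * 2 * np.pi, hours))   # W m-2
q_sw = np.clip(q_net, 0.0, None)
h = np.full(hours, 20.0)                                          # m
print(slab_sst(28.0, q_net, q_sw, h)[-1])
```

    Rerunning the same forcing with a very shallow prescribed depth (a few metres) makes the simulated SST swings much larger, consistent with the sensitivity to small mixed layer depths noted above.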

  12. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    EPA Science Inventory

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
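
    For the simplest determined case (two sources, one isotope) the mixing model reduces to a single linear equation, sketched below with hypothetical δ13C values; with more sources than tracers plus one, the system becomes underdetermined, which is the "too many sources" problem alluded to above.

```python
def two_source_mixing(d_mix, d_a, d_b):
    """Standard two-source, single-isotope mixing model:
    f_a = (delta_mix - delta_b) / (delta_a - delta_b), f_b = 1 - f_a."""
    f_a = (d_mix - d_b) / (d_a - d_b)
    return f_a, 1.0 - f_a

# Hypothetical d13C values: mixture -24 permil, sources -28 (C3) and -13 (C4).
print(two_source_mixing(-24.0, -28.0, -13.0))   # roughly 0.73 C3, 0.27 C4
```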

  13. Development of stable isotope mixing models in ecology - Dublin

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  14. Historical development of stable isotope mixing models in ecology

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  15. Development of stable isotope mixing models in ecology - Perth

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  16. Development of stable isotope mixing models in ecology - Fremantle

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  17. Development of stable isotope mixing models in ecology - Sydney

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  18. Photonic states mixing beyond the plasmon hybridization model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryadharma, Radius N. S.; Iskandar, Alexander A., E-mail: iskandar@fi.itb.ac.id; Tjia, May-On

    2016-07-28

    A study is performed on a photonic-state mixing-pattern in an insulator-metal-insulator cylindrical silver nanoshell and its rich variations induced by changes in the geometry and dielectric media of the system, representing the combined influences of plasmon coupling strength and cavity effects. This study is performed in terms of the photonic local density of states (LDOS) calculated using the Green tensor method, in order to elucidate those combined effects. The energy profiles of LDOS inside the dielectric core are shown to exhibit consistently growing number of redshifted photonic states due to an enhanced plasmon coupling induced state mixing arising from decreased shell thickness, increased cavity size effect, and larger symmetry breaking effect induced by increased permittivity difference between the core and the background media. Further, an increase in cavity size leads to increased additional peaks that spread out toward the lower energy regime. A systematic analysis of those variations for a silver nanoshell with a fixed inner radius in vacuum background reveals a certain pattern of those growing number of redshifted states with an analytic expression for the corresponding energy downshifts, signifying a photonic state mixing scheme beyond the commonly adopted plasmon hybridization scheme. Finally, a remarkable correlation is demonstrated between the LDOS energy profiles outside the shell and the corresponding scattering efficiencies.

  19. Validation of hydrogen gas stratification and mixing models

    DOE PAGES

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling based one-dimensional method to achieve large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreements are observed. The entrainment coefficients of 0.09 and 0.08 are found to fit the experimental data best for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.

  20. Horizontal mixing coefficients for two-dimensional chemical models calculated from National Meteorological Center Data

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Schoeberl, M. R.; Plumb, R. A.

    1986-01-01

    Calculations of the two-dimensional, species-independent mixing coefficients for two-dimensional chemical models for the troposphere and stratosphere are performed using quasi-geostrophic potential vorticity fluxes and gradients from 4 years of National Meteorological Center data for the four seasons in both hemispheres. Results show that the horizontal mixing coefficient values for the winter lower stratosphere are broadly consistent with those currently employed in two-dimensional models, but the horizontal mixing coefficient values in the northern winter upper stratosphere are much larger than those usually used.
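
    A minimal sketch of the underlying flux-gradient calculation: the species-independent coefficient is obtained by dividing the (negative) meridional eddy potential-vorticity flux by the mean potential-vorticity gradient. The eddy statistics and gradient below are hypothetical stand-ins for the National Meteorological Center fields, and sign conventions vary between models.

```python
import numpy as np

def kyy_flux_gradient(v_prime, q_prime, dqbar_dy):
    """Down-gradient (flux-gradient) closure: <v'q'> = -K_yy * d(qbar)/dy,
    so K_yy = -<v'q'> / (d(qbar)/dy)."""
    eddy_flux = np.mean(v_prime * q_prime)   # meridional eddy PV flux
    return -eddy_flux / dqbar_dy

# Hypothetical eddy statistics at one latitude/height grid point.
rng = np.random.default_rng(4)
v_p = rng.normal(scale=8.0, size=1000)                          # m/s
q_p = -2.0e-7 * v_p + rng.normal(scale=1.0e-6, size=1000)       # correlated PV anomalies
print(kyy_flux_gradient(v_p, q_p, dqbar_dy=1.0e-11))            # order 1e6 m^2/s for this toy input
```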

  1. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

    A method for the inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has long been important in sedimentological research. For instance, various inverse analyses have been used to estimate hydraulic conditions from topography observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007), and ancient turbidites (Falcini et al., 2009). These inverse analyses need forward models, and most turbidity current models employ particles of uniform grain size. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Although numerical models with mixed grain-size particles exist, they are difficult to apply to natural examples because of their computational cost (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the simplex method, which optimizes the initial conditions (thickness, depth-averaged velocity, and depth-averaged volumetric concentration of a turbidity current) with multi-point starts, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that the inverse analysis using the mixed grain-size model recovers the known initial condition of the reference data even when the starting point of the optimization deviates from the true solution, whereas the inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution. The
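
    A minimal sketch of the multi-start simplex (Nelder-Mead) inversion described above, using scipy; the toy forward model merely stands in for the 1D shallow-water turbidity-current model, and the misfit definition, starting points, and positivity handling are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def misfit(params, forward_model, observed):
    """Sum-of-squares misfit between a forward model and observed deposit data."""
    h, U, C = params
    if h <= 0 or U <= 0 or C <= 0:       # keep the search in physically meaningful ranges
        return 1e12
    return float(np.sum((forward_model(h, U, C) - observed) ** 2))

def invert(forward_model, observed, starts):
    """Multi-start downhill-simplex (Nelder-Mead) optimisation of (h, U, C)."""
    best = None
    for x0 in starts:
        res = minimize(misfit, x0, args=(forward_model, observed), method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best

# Toy stand-in for the forward model (purely illustrative, chosen to be identifiable).
toy_forward = lambda h, U, C: np.array([h * U, U ** 2, 1.0e4 * C * h])
obs = toy_forward(2.0, 5.0, 1e-4)
starts = [(1.0, 2.0, 5e-4), (4.0, 8.0, 5e-5), (0.5, 10.0, 1e-3)]
print(invert(toy_forward, obs, starts).x)   # should recover roughly (2, 5, 1e-4)
```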

  2. Model studies of hydrogen atom addition and abstraction processes involving ortho-, meta-, and para-benzynes.

    PubMed

    Clark, A E; Davidson, E R

    2001-10-31

    H-atom addition and abstraction processes involving ortho-, meta-, and para-benzyne have been investigated by multiconfigurational self-consistent field methods. The H(A) + H(B)...H(C) reaction (where r(BC) is adjusted to mimic the appropriate singlet-triplet energy gap) is shown to effectively model H-atom addition to benzyne. The doublet multiconfiguration wave functions are shown to mix the "singlet" and "triplet" valence bond structures of H(B)...H(C) along the reaction coordinate; however, the extent of mixing is dependent on the singlet-triplet energy gap (DeltaE(ST)) of the H(B)...H(C) diradical. Early in the reaction, the ground-state wave function is essentially the "singlet" VB function, yet it gains significant "triplet" VB character along the reaction coordinate that allows H(A)-H(B) bond formation. Conversely, the wave function of the first excited state is predominantly the "triplet" VB configuration early in the reaction coordinate, but gains "singlet" VB character when the H-atom is close to a radical center. As a result, the potential energy surface (PES) for H-atom addition to triplet H(B)...H(C) diradical is repulsive! The H3 model predicts, in agreement with the actual calculations on benzyne, that the singlet diradical electrons are not coupled strongly enough to give rise to an activation barrier associated with C-H bond formation. Moreover, this model predicts that the PES for H-atom addition to triplet benzyne will be characterized by a repulsive curve early in the reaction coordinate, followed by a potential avoided crossing with the (pi)1(sigma*)1 state of the phenyl radical. In contrast to H-atom addition, large activation barriers characterize the abstraction process in both the singlet ground state and first triplet state. In the ground state, this barrier results from the weakly avoided crossing of the dominant VB configurations in the ground-state singlet (S0) and first excited singlet (S1) because of the large energy gap between S0

  3. Simulation of particle diversity and mixing state over Greater Paris: a model-measurement inter-comparison.

    PubMed

    Zhu, Shupeng; Sartelet, Karine N; Healy, Robert M; Wenger, John C

    2016-07-18

    Air quality models are used to simulate and forecast pollutant concentrations, from continental scales to regional and urban scales. These models usually assume that particles are internally mixed, i.e. particles of the same size have the same chemical composition, which may vary in space and time. Although this assumption may be realistic for continental-scale simulations, where particles originating from different sources have undergone sufficient mixing to achieve a common chemical composition for a given model grid cell and time, it may not be valid for urban-scale simulations, where particles from different sources interact on shorter time scales. To investigate the role of the mixing state assumption on the formation of particles, a size-composition resolved aerosol model (SCRAM) was developed and coupled to the Polyphemus air quality platform. Two simulations, one with the internal mixing hypothesis and another with the external mixing hypothesis, have been carried out for the period 15 January to 11 February 2010, when the MEGAPOLI winter field measurement campaign took place in Paris. The simulated bulk concentrations of chemical species and the concentrations of individual particle classes are compared with the observations of Healy et al. (Atmos. Chem. Phys., 2013, 13, 9479-9496) for the same period. The single particle diversity and the mixing-state index are computed based on the approach developed by Riemer et al. (Atmos. Chem. Phys., 2013, 13, 11423-11439), and they are compared to the measurement-based analyses of Healy et al. (Atmos. Chem. Phys., 2014, 14, 6289-6299). The average value of the single particle diversity, which represents the average number of species within each particle, is consistent between simulation and measurement (2.91 and 2.79 respectively). Furthermore, the average value of the mixing-state index is also well represented in the simulation (69% against 59% from the measurements). The spatial distribution of the mixing
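
    The diversity and mixing-state metrics referred to above can be computed from a particle-by-species mass matrix. The sketch below follows Riemer-style definitions as we read them (average single-particle diversity D_alpha, bulk diversity D_gamma, mixing-state index chi = (D_alpha - 1)/(D_gamma - 1)); the particle masses are hypothetical.

```python
import numpy as np

def mixing_state_index(mass):
    """Diagnostics from an (n_particles, n_species) mass matrix: average particle
    diversity D_alpha, bulk diversity D_gamma, and mixing-state index chi."""
    m_i = mass.sum(axis=1)                         # mass of each particle
    p_i = m_i / m_i.sum()                          # particle mass fractions
    frac = mass / m_i[:, None]                     # per-particle species mass fractions
    with np.errstate(divide="ignore", invalid="ignore"):
        H_i = -np.sum(np.where(frac > 0, frac * np.log(frac), 0.0), axis=1)
    D_alpha = np.exp(np.sum(p_i * H_i))            # average single-particle diversity
    bulk = mass.sum(axis=0) / mass.sum()           # bulk species mass fractions
    D_gamma = np.exp(-np.sum(np.where(bulk > 0, bulk * np.log(bulk), 0.0)))
    chi = (D_alpha - 1.0) / (D_gamma - 1.0)        # 0 = fully external, 1 = fully internal
    return D_alpha, D_gamma, chi

# Hypothetical population: 3 particles, 3 species (masses in arbitrary units).
print(mixing_state_index(np.array([[1.0, 0.2, 0.0],
                                   [0.5, 0.5, 0.1],
                                   [0.0, 0.3, 1.0]])))
```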

  4. Sensitivity of WallDYN material migration modeling to uncertainties in mixed-material surface binding energies

    DOE PAGES

    Nichols, J. H.; Jaworski, M. A.; Schmid, K.

    2017-03-09

    The WallDYN package has recently been applied to a number of tokamaks to self-consistently model the evolution of mixed-material plasma facing surfaces. A key component of the WallDYN model is the concentration-dependent surface sputtering rate, calculated using SDTRIM.SP. This modeled sputtering rate is strongly influenced by the surface binding energies (SBEs) of the constituent materials, which are well known for pure elements but often are poorly constrained for mixed-materials. This work examines the sensitivity of WallDYN surface evolution calculations to different models for mixed-material SBEs, focusing on the carbon/lithium/oxygen/deuterium system present in NSTX. A realistic plasma background is reconstructed from a high density, H-mode NSTX discharge, featuring an attached outer strike point with local density and temperature of 4 × 10²⁰ m⁻³ and 4 eV, respectively. It is found that various mixed-material SBE models lead to significant qualitative and quantitative changes in the surface evolution profile at the outer divertor, with the highest leverage parameter being the C-Li binding model. Uncertainties of order 50%, appearing on time scales relevant to tokamak experiments, highlight the importance of choosing an appropriate mixed-material sputtering representation when modeling the surface evolution of plasma facing components. Lastly, these results are generalized to other fusion-relevant materials with different ranges of SBEs.

  5. Sensitivity of WallDYN material migration modeling to uncertainties in mixed-material surface binding energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, J. H.; Jaworski, M. A.; Schmid, K.

    The WallDYN package has recently been applied to a number of tokamaks to self-consistently model the evolution of mixed-material plasma facing surfaces. A key component of the WallDYN model is the concentration-dependent surface sputtering rate, calculated using SDTRIM.SP. This modeled sputtering rate is strongly influenced by the surface binding energies (SBEs) of the constituent materials, which are well known for pure elements but often are poorly constrained for mixed-materials. This work examines the sensitivity of WallDYN surface evolution calculations to different models for mixed-material SBEs, focusing on the carbon/lithium/oxygen/deuterium system present in NSTX. A realistic plasma background is reconstructed from a high density, H-mode NSTX discharge, featuring an attached outer strike point with local density and temperature of 4 × 10²⁰ m⁻³ and 4 eV, respectively. It is found that various mixed-material SBE models lead to significant qualitative and quantitative changes in the surface evolution profile at the outer divertor, with the highest leverage parameter being the C-Li binding model. Uncertainties of order 50%, appearing on time scales relevant to tokamak experiments, highlight the importance of choosing an appropriate mixed-material sputtering representation when modeling the surface evolution of plasma facing components. Lastly, these results are generalized to other fusion-relevant materials with different ranges of SBEs.

  6. [Lethal anaphylactic shock model induced by human mixed serum in guinea pigs].

    PubMed

    Ren, Guang-Mu; Bai, Ji-Wei; Gao, Cai-Rong

    2005-08-01

    The aim was to establish an anaphylactic shock model induced by human mixed serum in guinea pigs. Eighteen guinea pigs were divided into two groups, sensitized and control. The sensitized group was immunized intracutaneously with human mixed serum and then challenged by endocardiac injection after 3 weeks. Symptoms of anaphylactic shock appeared in the sensitized group, and the level of serum IgE was significantly increased in this group. An animal model of anaphylactic shock was established successfully, providing a tool for both forensic study and anaphylactic shock therapy.

  7. Nonlinear mixed modeling of basal area growth for shortleaf pine

    Treesearch

    Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin

    2008-01-01

    Mixed model estimation methods were used to fit individual-tree basal area growth models to tree and stand-level measurements available from permanent plots established in naturally regenerated shortleaf pine (Pinus echinata Mill.) even-aged stands in western Arkansas and eastern Oklahoma in the USA. As a part of the development of a comprehensive...

  8. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and, hence, plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes

  9. Generalized linear mixed models with varying coefficients for longitudinal data.

    PubMed

    Zhang, Daowen

    2004-03-01

    The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.

  10. Assessing non-additive effects in GBLUP model.

    PubMed

    Vieira, I C; Dos Santos, J P R; Pires, L P M; Lima, B M; Gonçalves, F M A; Balestre, M

    2017-05-10

    Understanding non-additive effects in the expression of quantitative traits is very important in genotype selection, especially in species where the commercial products are clones or hybrids. The use of molecular markers has allowed the study of non-additive genetic effects at the genomic level, in addition to a better understanding of their importance for quantitative traits. Thus, the purpose of this study was to evaluate the behavior of the GBLUP model under different genetic models and relationship matrices and their influence on the estimates of genetic parameters. We used real data on circumference at breast height in Eucalyptus spp and simulated data from an F2 population. Three kinship structures commonly reported in the literature were adopted. The simulation results showed that the inclusion of epistatic kinship improved the prediction of genomic breeding values. However, the non-additive effects were not accurately recovered. The Fisher information matrix for the real dataset showed high collinearity among the estimates of additive, dominance, and epistatic variance, causing no gain in the prediction of unobserved data and convergence problems. Estimates of genetic parameters and correlations differed across the kinship structures. Our results show that including non-additive effects can improve predictive ability or even the prediction of additive effects. However, the large distortions observed in the variance estimates when the Hardy-Weinberg equilibrium assumption is violated, due to the presence of selection or inbreeding, can lead to zero gains in models that consider epistasis in the genomic kinship.
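
    As a minimal sketch of one common way to build the additive genomic kinship used in GBLUP (VanRaden's method; the study evaluates several kinship structures, which may differ in detail), the following constructs G from a 0/1/2 marker matrix. Dominance and epistatic kinships (e.g. Hadamard products of G) are not shown, and the genotypes are random placeholders.

```python
import numpy as np

def vanraden_G(M):
    """Additive genomic relationship matrix from a marker matrix M
    (n_individuals x n_markers) coded 0/1/2 copies of the reference allele:
    G = (M - 2P)(M - 2P)' / (2 * sum_j p_j (1 - p_j))."""
    p = M.mean(axis=0) / 2.0            # allele frequency of each marker
    Z = M - 2.0 * p                     # centered marker matrix
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

# Hypothetical toy genotypes: 5 individuals x 8 markers.
rng = np.random.default_rng(5)
M = rng.integers(0, 3, size=(5, 8)).astype(float)
print(np.round(vanraden_G(M), 2))
```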

  11. Intercomparison of cloud model simulations of Arctic mixed-phase boundary layer clouds observed during SHEBA/FIRE-ACE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrison, H.; Zuidema, Paquita; Ackerman, Andrew

    2011-06-16

    An intercomparison of six cloud-resolving and large-eddy simulation models is presented. This case study is based on observations of a persistent mixed-phase boundary layer cloud gathered on 7 May, 1998 from the Surface Heat Budget of Arctic Ocean (SHEBA) and First ISCCP Regional Experiment - Arctic Cloud Experiment (FIRE-ACE). Ice nucleation is constrained in the simulations in a way that holds the ice crystal concentration approximately fixed, with two sets of sensitivity runs in addition to the baseline simulations utilizing different specified ice nucleus (IN) concentrations. All of the baseline and sensitivity simulations group into two distinct quasi-steady states associated with either persistent mixed-phase clouds or all-ice clouds after the first few hours of integration, implying the existence of multiple equilibria. These two states are associated with distinctly different microphysical, thermodynamic, and radiative characteristics. Most but not all of the models produce a persistent mixed-phase cloud qualitatively similar to observations using the baseline IN/crystal concentration, while small increases in the IN/crystal concentration generally lead to rapid glaciation and conversion to the all-ice state. Budget analysis indicates that larger ice deposition rates associated with increased IN/crystal concentrations have a limited direct impact on dissipation of liquid in these simulations. However, the impact of increased ice deposition is greatly enhanced by several interaction pathways that lead to an increased surface precipitation flux, weaker cloud top radiative cooling and cloud dynamics, and reduced vertical mixing, promoting rapid glaciation of the mixed-phase cloud for deposition rates in the cloud layer greater than about 1-2×10⁻⁵ g kg⁻¹ s⁻¹. These results indicate the critical importance of precipitation-radiative-dynamical interactions in simulating cloud phase, which have been neglected in previous fixed-dynamical parcel studies of

  12. The additional effects of a probiotic mix on abdominal adiposity and antioxidant Status: A double-blind, randomized trial.

    PubMed

    Gomes, Aline Corado; de Sousa, Rávila Graziany Machado; Botelho, Patrícia Borges; Gomes, Tatyanne Letícia Nogueira; Prada, Patrícia Oliveira; Mota, João Felipe

    2017-01-01

    To investigate whether a probiotic mix has additional effects when compared with an isolated dietary intervention on the body composition, lipid profile, endotoxemia, inflammation, and antioxidant profile. Women who had excess weight or obesity were recruited to a randomized, double-blind trial and received a probiotic mix (Lactobacillus acidophilus and casei; Lactococcus lactis; Bifidobacterium bifidum and lactis; 2 × 10¹⁰ colony-forming units/day) (n = 21) or placebo (n = 22) for 8 weeks. Both groups received a dietary prescription. Body composition was assessed by anthropometry and dual-energy X-ray absorptiometry. The lipid profile, lipid accumulation product, plasma fatty acids, lipopolysaccharide, interleukin-6, interleukin-10, tumor necrosis factor-α, adiponectin, and the antioxidant enzymes activities were analyzed. In comparison with the dietary intervention group, the dietary intervention + probiotic mix group showed a greater reduction in the waist circumference (-3.40% vs. -5.48%, P = 0.03), waist-height ratio (-3.27% vs. -5.00%, P = 0.02), conicity index (-2.43% vs. -4.09% P = 0.03), and plasma polyunsaturated fatty acids (5.65% vs. -18.63%, P = 0.04) and an increase in the activity of glutathione peroxidase (-16.67% vs. 15.62%, P < 0.01). Supplementation of a probiotic mix reduced abdominal adiposity and increased antioxidant enzyme activity in a more effective way than an isolated dietary intervention. © 2016 The Obesity Society.

  13. Taxonomy of Magma Mixing II: Thermochemistry of Mixed Crystal-Bearing Magmas Using the Magma Chamber Simulator

    NASA Astrophysics Data System (ADS)

    Bohrson, W. A.; Spera, F. J.; Neilson, R.; Ghiorso, M. S.

    2013-12-01

    Magma recharge and magma mixing contribute to the diversity of melt and crystal populations, the abundance and phase state of volatiles, and thermal and mass characteristics of crustal magma systems. The literature is replete with studies documenting mixing end-members and associated products, from mingled to hybridized, and a catalytic link between recharge/mixing and eruption is likely. Given its importance and the investment represented by thousands of detailed magma mixing studies, a multicomponent, multiphase magma mixing taxonomy is necessary to systematize the array of governing parameters (e.g., pressure (P), temperature (T), composition (X)) and attendant outcomes. While documenting the blending of two melts to form a third melt is straightforward, quantification of the mixing of two magmas and the subsequent evolution of hybrid magma requires application of an open-system thermodynamic model. The Magma Chamber Simulator (MCS) is a thermodynamic, energy, and mass constrained code that defines thermal, mass and compositional (major, trace element and isotope) characteristics of melt×minerals×fluid phase in a composite magma body-recharge magma-crustal wallrock system undergoing recharge (magma mixing), assimilation, and crystallization. In order to explore fully hybridized products, in MCS, energy and mass of recharge magma (R) are instantaneously delivered to resident magma (M), and M and R are chemically homogenized and thermally equilibrated. The hybrid product achieves a new equilibrium state, which may include crystal resorption or precipitation and/or evolution of a fluid phase. Hundreds of simulations systematize the roles that PTX (and hence mineral identity and abundance) and the mixing ratio (mass of M/mass of R) have in producing mixed products. Combinations of these parameters define regime diagrams that illustrate possible outcomes, including: (1) Mixed melt composition is not necessarily a mass weighted mixture of M and R magmas because

  14. Combined proportional and additive residual error models in population pharmacokinetic modelling.

    PubMed

    Proost, Johannes H

    2017-11-15

    In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from literature and simulations based on these datasets, the methods are compared using NONMEM. The different coding of methods VAR yield identical results. Using method SD, the values of the parameters describing residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of the method. Both methods are valid approaches in combined proportional and additive residual error modelling, and selection may be based on OFV. When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as used during analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
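
    The two approaches can be written down directly from the description above; the sketch below evaluates the residual standard deviation under method VAR and method SD for hypothetical parameter values (an actual NONMEM analysis would estimate sigma_prop and sigma_add rather than fix them).

```python
import numpy as np

def sd_var_method(pred, sigma_prop, sigma_add):
    """Method VAR: the proportional and additive components are statistically
    independent, so their variances add: sd = sqrt((sigma_prop*pred)^2 + sigma_add^2)."""
    return np.sqrt((sigma_prop * pred) ** 2 + sigma_add ** 2)

def sd_sum_method(pred, sigma_prop, sigma_add):
    """Method SD: the standard deviations themselves add: sd = sigma_prop*pred + sigma_add."""
    return sigma_prop * pred + sigma_add

pred = np.array([0.5, 5.0, 50.0])        # hypothetical model-predicted concentrations
print(sd_var_method(pred, 0.15, 0.1))
print(sd_sum_method(pred, 0.15, 0.1))    # same parameters give a larger sd than method VAR
```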

  15. On the TAP Free Energy in the Mixed p-Spin Models

    NASA Astrophysics Data System (ADS)

    Chen, Wei-Kuo; Panchenko, Dmitry

    2018-05-01

    Thouless et al. (Philos Mag 35(3):593-601, 1977) derived a representation for the free energy of the Sherrington-Kirkpatrick model, called the TAP free energy, written as the difference of the energy and entropy on the extended configuration space of local magnetizations with an Onsager correction term. In the setting of mixed p-spin models with Ising spins, we prove that the free energy can indeed be written as the supremum of the TAP free energy over the space of local magnetizations whose Edwards-Anderson order parameter (self-overlap) is to the right of the support of the Parisi measure. Furthermore, for generic mixed p-spin models, we prove that the free energy is equal to the TAP free energy evaluated on the local magnetization of any pure state.

  16. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    PubMed

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum likelihood, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we can easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.

  17. Use of a mixing model to investigate groundwater-surface water mixing and nitrogen biogeochemistry in the bed of a groundwater-fed river

    NASA Astrophysics Data System (ADS)

    Lansdown, Katrina; Heppell, Kate; Ullah, Sami; Heathwaite, A. Louise; Trimmer, Mark; Binley, Andrew; Heaton, Tim; Zhang, Hao

    2010-05-01

    The dynamics of groundwater and surface water mixing and associated nitrogen transformations in the hyporheic zone have been investigated within a gaining reach of a groundwater-fed river (River Leith, Cumbria, UK). The regional aquifer consists of Permo-Triassic sandstone, which is overlain by varying depths of glaciofluvial sediments (~15 to 50 cm) to form the river bed. The reach investigated (~250 m long) consists of a series of riffle and pool sequences (Käser et al. 2009), with other geomorphic features such as vegetated islands and marginal bars also present. A network of 17 piezometers, each with six depth-distributed pore water samplers based on the design of Rivett et al. (2008), was installed in the river bed in June 2009. An additional 18 piezometers with a single pore water sampler were installed in the riparian zone along the study reach. Water samples were collected from the pore water samplers on three occasions during summer 2009, a period of low flow. The zone of groundwater-surface water mixing within the river bed sediments was inferred from depth profiles (0 to 100 cm) of conservative chemical species and isotopes of water in the collected samples. Sediment cores collected during piezometer installation also enabled characterisation of grain size within the hyporheic zone. A multi-component mixing model was developed to quantify the relative contributions of different water sources (surface water, groundwater and bank exfiltration) to the hyporheic zone. Depth profiles of 'predicted' nitrate concentration were constructed using the relative contribution of each water source to the hyporheic zone and the nitrate concentration of the end members. This approach assumes that the mixing of different sources of water is the only factor controlling the nitrate concentration of pore water in the river bed sediments. Comparison of predicted nitrate concentrations (which assume only mixing of waters with different nitrate concentrations) with actual
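
    As a concrete illustration of this kind of end-member calculation, the R sketch below solves for the fractions of three hypothetical water sources from a mass-balance constraint and two conservative tracers, and then forms the mixing-only nitrate prediction. All concentrations are invented for illustration and are not values from the River Leith study.

    ```r
    ## Illustrative three-end-member mixing calculation (hypothetical values).
    ## Columns: surface water, groundwater, bank exfiltration.
    A <- rbind(c(1,    1,    1),     # fractions sum to 1 (mass balance)
               c(250,  480,  390),   # conservative tracer 1, e.g. chloride (ueq/L)
               c(-7.2, -8.6, -7.9))  # conservative tracer 2, e.g. delta-18O (per mil)

    b <- c(1, 400, -8.1)             # observed pore-water values at one depth
    f <- solve(A, b)                 # mixing fractions of the three sources

    no3_end  <- c(2.5, 0.3, 1.0)     # end-member nitrate concentrations (mg N/L)
    no3_pred <- sum(f * no3_end)     # mixing-only prediction for the pore water

    round(f, 3)
    round(no3_pred, 2)
    ```

    A measured nitrate value below such a prediction would then point to removal (e.g., denitrification) rather than mixing alone, which is the logic behind the comparison described in the abstract.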

  18. Molybdenum-based additives to mixed-metal oxides for use in hot gas cleanup sorbents for the catalytic decomposition of ammonia in coal gases

    DOEpatents

    Ayala, Raul E.

    1993-01-01

    This invention relates to additives to mixed-metal oxides that act simultaneously as sorbents and catalysts in cleanup systems for hot coal gases. Such additives generally act as a sorbent to remove sulfur from the coal gases while, substantially simultaneously, catalytically decomposing appreciable amounts of ammonia from the coal gases.

  19. Cobimaximal lepton mixing from soft symmetry breaking

    NASA Astrophysics Data System (ADS)

    Grimus, W.; Lavoura, L.

    2017-11-01

    Cobimaximal lepton mixing, i.e. θ23 = 45° and δ = ±90° in the lepton mixing matrix V, arises as a consequence of SV = V*P, where S is the permutation matrix that interchanges the second and third rows of V and P is a diagonal matrix of phase factors. We prove that any such V may be written in the form V = U R P̂, where U is any predefined unitary matrix satisfying SU = U*, R is an orthogonal, i.e. real, matrix, and P̂ is a diagonal matrix satisfying P̂^2 = P. Using this theorem, we demonstrate the equivalence of two ways of constructing models for cobimaximal mixing: one that uses a standard CP symmetry and another that uses a CP symmetry including μ-τ interchange. We also present two simple seesaw models to illustrate this equivalence; those models have, in addition to the CP symmetry, flavour symmetries broken softly by the Majorana mass terms of the right-handed neutrino singlets. Since each of the two models needs four scalar doublets, we investigate how to accommodate the Standard Model Higgs particle in them.

  20. Development of a QTL-environment-based predictive model for node addition rate in common bean.

    PubMed

    Zhang, Li; Gezan, Salvador A; Eduardo Vallejos, C; Jones, James W; Boote, Kenneth J; Clavijo-Michelangeli, Jose A; Bhakta, Mehul; Osorno, Juan M; Rao, Idupulapati; Beebe, Stephen; Roman-Paoli, Elvin; Gonzalez, Abiezer; Beaver, James; Ricaurte, Jaumer; Colbert, Raphael; Correll, Melanie J

    2017-05-01

    This work reports the effects of the genetic makeup, the environment and the genotype by environment interactions for node addition rate in an RIL population of common bean. This information was used to build a predictive model for node addition rate. To select a plant genotype that will thrive in targeted environments it is critical to understand the genotype by environment interaction (GEI). In this study, multi-environment QTL analysis was used to characterize node addition rate (NAR, node day^-1) on the main stem of the common bean (Phaseolus vulgaris L.). This analysis was carried out with field data from 171 recombinant inbred lines that were grown at five sites (Florida, Puerto Rico, two sites in Colombia, and North Dakota). Four QTLs (Nar1, Nar2, Nar3 and Nar4) were identified, one of which had significant QTL by environment interactions (QEI), that is, Nar2 with temperature. Temperature was identified as the main environmental factor affecting NAR while day length and solar radiation played a minor role. Integration of sites as covariates into a QTL mixed site-effect model, and further replacing the site component with explanatory environmental covariates (i.e., temperature, day length and solar radiation), yielded a model that explained 73% of the phenotypic variation for NAR with a root mean square error of 16.25% of the mean. QTL consistency and stability were examined through a tenfold cross validation with different sets of genotypes, and these four QTLs were always detected with 50-90% probability. The final model was evaluated using the leave-one-site-out method to assess the influence of site on node addition rate. These analyses provided a quantitative measure of the effects on NAR of common beans exerted by the genetic makeup, the environment and their interactions.
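
    The leave-one-site-out evaluation mentioned above can be written as a short generic loop. The R sketch below is an illustrative helper only; the fitted QTL model itself is not reproduced, and the column names `site` and `NAR` as well as the stand-in predictors are placeholders.

    ```r
    ## Generic leave-one-site-out RMSE; fit_fun and predict_fun stand in for the
    ## actual model-fitting and prediction steps, which are not reproduced here.
    loso_rmse <- function(data, fit_fun, predict_fun, site_col = "site") {
      sq_err <- lapply(unique(data[[site_col]]), function(s) {
        train <- data[data[[site_col]] != s, ]
        test  <- data[data[[site_col]] == s, ]
        (test$NAR - predict_fun(fit_fun(train), test))^2
      })
      sqrt(mean(unlist(sq_err)))
    }

    ## Example use with an ordinary linear model as a stand-in:
    ## loso_rmse(dat, function(d) lm(NAR ~ temperature + day_length, data = d), predict)
    ```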

  1. Testing mixing models of old and young groundwater in a tropical lowland rain forest with environmental tracers

    NASA Astrophysics Data System (ADS)

    Solomon, D. Kip; Genereux, David P.; Plummer, L. Niel; Busenberg, Eurybiades

    2010-04-01

    We tested three models of mixing between old interbasin groundwater flow (IGF) and young, locally derived groundwater in a lowland rain forest in Costa Rica using a large suite of environmental tracers. We focus on the young fraction of water using the transient tracers CFC-11, CFC-12, CFC-113, SF6, 3H, and bomb 14C. We measured 3He, but 3H/3He dating is generally problematic due to the presence of mantle 3He. Because of their unique concentration histories in the atmosphere, combinations of transient tracers are sensitive not only to subsurface travel times but also to mixing between waters having different travel times. Samples fall into three distinct categories: (1) young waters that plot along a piston flow line, (2) old samples that have near-zero concentrations of the transient tracers, and (3) mixtures of 1 and 2. We have modeled the concentrations of the transient tracers using (1) a binary mixing model (BMM) of old and young water with the young fraction transported via piston flow, (2) an exponential mixing model (EMM) with a distribution of groundwater travel times characterized by a mean value, and (3) an exponential mixing model for the young fraction followed by binary mixing with an old fraction (EMM/BMM). In spite of the mathematical differences in the mixing models, they all lead to a similar conceptual model of young (0 to 10 year) groundwater that is locally derived mixing with old (>1000 years) groundwater that is recharged beyond the surface water boundary of the system.

  2. Testing mixing models of old and young groundwater in a tropical lowland rain forest with environmental tracers

    USGS Publications Warehouse

    Solomon, D. Kip; Genereux, David P.; Plummer, Niel; Busenberg, Eurybiades

    2010-01-01

    We tested three models of mixing between old interbasin groundwater flow (IGF) and young, locally derived groundwater in a lowland rain forest in Costa Rica using a large suite of environmental tracers. We focus on the young fraction of water using the transient tracers CFC‐11, CFC‐12, CFC‐113, SF6, 3H, and bomb 14C. We measured 3He, but 3H/3He dating is generally problematic due to the presence of mantle 3He. Because of their unique concentration histories in the atmosphere, combinations of transient tracers are sensitive not only to subsurface travel times but also to mixing between waters having different travel times. Samples fall into three distinct categories: (1) young waters that plot along a piston flow line, (2) old samples that have near‐zero concentrations of the transient tracers, and (3) mixtures of 1 and 2. We have modeled the concentrations of the transient tracers using (1) a binary mixing model (BMM) of old and young water with the young fraction transported via piston flow, (2) an exponential mixing model (EMM) with a distribution of groundwater travel times characterized by a mean value, and (3) an exponential mixing model for the young fraction followed by binary mixing with an old fraction (EMM/BMM). In spite of the mathematical differences in the mixing models, they all lead to a similar conceptual model of young (0 to 10 year) groundwater that is locally derived mixing with old (>1000 years) groundwater that is recharged beyond the surface water boundary of the system.
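
    To make the difference between the binary and exponential mixing models concrete, the R sketch below computes the tracer concentration a sampled water would show under piston-flow binary mixing (BMM) and under an exponential travel-time distribution (EMM). The atmospheric input history is an idealized ramp, not real CFC or SF6 data, and the ages and fractions are illustrative.

    ```r
    ## Idealized transient-tracer input and two mixing models (illustrative only).
    years       <- 1940:2009
    c_atm       <- pmax(0, years - 1950)       # idealized rising atmospheric input
    sample_year <- 2009

    ## BMM: a young fraction travels by piston flow (age tau, in years);
    ## the old fraction (>1000 years) is tracer-free.
    bmm <- function(f_young, tau) f_young * c_atm[years == sample_year - tau]

    ## EMM: travel times are exponentially distributed with mean T_mean (years).
    emm <- function(T_mean) {
      tau <- sample_year - years               # travel time associated with each input year
      w   <- exp(-tau / T_mean) / T_mean       # exponential travel-time density
      sum(c_atm * w) / sum(w)                  # normalized discrete convolution
    }

    bmm(f_young = 0.4, tau = 5)   # e.g. 40% young water with a 5-year piston-flow age
    emm(T_mean = 10)              # e.g. a 10-year mean travel time
    ```

    Because the two models weight the input history differently, combinations of tracers with different atmospheric histories can distinguish them even when a single tracer cannot, which is the point made in the abstract.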

  3. Correlations and risk contagion between mixed assets and mixed-asset portfolio VaR measurements in a dynamic view: An application based on time varying copula models

    NASA Astrophysics Data System (ADS)

    Han, Yingying; Gong, Pu; Zhou, Xiang

    2016-02-01

    In this paper, we first apply time-varying Gaussian and SJC copula models to study the correlations and risk contagion between mixed assets in China: financial (stock), real estate and commodity (gold) assets. We then study dynamic mixed-asset portfolio risk through VaR measurement based on the correlations computed by the time-varying copulas. This dynamic VaR-copula measurement analysis has not previously been applied to mixed-asset portfolios. The results show that the time-varying estimations fit much better than the static models, not only for the correlations and risk contagion based on time-varying copulas, but also for the VaR-copula measurement. The time-varying VaR-SJC copula models are more accurate than VaR-Gaussian copula models when measuring riskier portfolios at higher confidence levels. The major findings suggest that real estate and gold play a role in portfolio risk diversification and that risk contagion and flight to quality occur between mixed assets in extreme cases, but that portfolio risk can be reduced by adapting mixed-asset portfolio strategies as time and market conditions vary.

  4. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    PubMed Central

    Coupé, Christophe

    2018-01-01

    As statistical approaches are increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS

  5. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape.

    PubMed

    Coupé, Christophe

    2018-01-01

    As statistical approaches are increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we

  6. The Quantitative-MFG Test: A Linear Mixed Effect Model to Detect Maternal-Offspring Gene Interactions.

    PubMed

    Clark, Michelle M; Blangero, John; Dyer, Thomas D; Sobel, Eric M; Sinsheimer, Janet S

    2016-01-01

    Maternal-offspring gene interactions, also known as maternal-fetal genotype (MFG) incompatibilities, are often neglected in studies of complex diseases and quantitative traits. They are implicated in diseases with onset ranging from birth to adulthood, but there are limited ways to investigate their influence on quantitative traits. We present the quantitative-MFG (QMFG) test, a linear mixed model where maternal and offspring genotypes are fixed effects and residual correlations between family members are random effects. The QMFG handles families of any size, common or general scenarios of MFG incompatibility, and additional covariates. We develop likelihood ratio tests (LRTs) and rapid score tests and show they provide correct inference. In addition, the LRT's alternative model provides unbiased parameter estimates. We show that testing the association of SNPs by fitting a standard model, which only considers the offspring genotypes, has very low power or can lead to incorrect conclusions. We also show that offspring genetic effects are missed if the MFG modeling assumptions are too restrictive. With genome-wide association study data from the San Antonio Family Heart Study, we demonstrate that the QMFG score test is an effective and rapid screening tool. The QMFG test therefore has important potential to identify pathways of complex diseases for which the genetic etiology remains to be discovered. © 2015 John Wiley & Sons Ltd/University College London.

  7. Modeling the Bergeron-Findeisen Process Using PDF Methods With an Explicit Representation of Mixing

    NASA Astrophysics Data System (ADS)

    Jeffery, C.; Reisner, J.

    2005-12-01

    Currently, the accurate prediction of cloud droplet and ice crystal number concentration in cloud resolving, numerical weather prediction and climate models is a formidable challenge. The Bergeron-Findeisen process in which ice crystals grow by vapor deposition at the expense of super-cooled droplets is expected to be inhomogeneous in nature--some droplets will evaporate completely in centimeter-scale filaments of sub-saturated air during turbulent mixing while others remain unchanged [Baker et al., QJRMS, 1980]--and is unresolved at even cloud-resolving scales. Despite the large body of observational evidence in support of the inhomogeneous mixing process affecting cloud droplet number [most recently, Brenguier et al., JAS, 2000], it is poorly understood and has yet to be parameterized and incorporated into a numerical model. In this talk, we investigate the Bergeron-Findeisen process using a new approach based on simulations of the probability density function (PDF) of relative humidity during turbulent mixing. PDF methods offer a key advantage over Eulerian (spatial) models of cloud mixing and evaporation: the low probability (cm-scale) filaments of entrained air are explicitly resolved (in probability space) during the mixing event even though their spatial shape, size and location remain unknown. Our PDF approach reveals the following features of the inhomogeneous mixing process during the isobaric turbulent mixing of two parcels containing super-cooled water and ice, respectively: (1) The scavenging of super-cooled droplets is inhomogeneous in nature; some droplets evaporate completely at early times while others remain unchanged. (2) The degree of total droplet evaporation during the initial mixing period depends linearly on the mixing fractions of the two parcels and logarithmically on Damköhler number (Da)---the ratio of turbulent to evaporative time-scales. (3) Our simulations predict that the PDF of Lagrangian (time-integrated) subsaturation (S) goes as

  8. Transient modeling/analysis of hyperbolic heat conduction problems employing mixed implicit-explicit alpha method

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; D'Costa, Joseph F.

    1991-01-01

    This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involves time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.

  9. New theory of stellar convection without the mixing-length parameter: new stellar atmosphere model

    NASA Astrophysics Data System (ADS)

    Pasetto, Stefano; Chiosi, Cesare; Cropper, Mark; Grebel, Eva K.

    2018-01-01

    Stellar convection is usually described by the mixing-length theory, which makes use of the mixing-length scale factor to express the convective flux, velocity, and temperature gradients of the convective elements and stellar medium. The mixing-length scale is proportional to the local pressure scale height of the star, and the proportionality factor (i.e. mixing-length parameter) is determined by comparing the stellar models to some calibrator, i.e. the Sun. No strong arguments exist to suggest that the mixing-length parameter is the same in all stars and all evolutionary phases and because of this, all stellar models in the literature are hampered by this basic uncertainty. In a recent paper [1] we presented a new theory that does not require the mixing length parameter. Our self-consistent analytical formulation of stellar convection determines all the properties of stellar convection as a function of the physical behavior of the convective elements themselves and the surrounding medium. The new theory of stellar convection is formulated starting from a conventional solution of the Navier-Stokes/Euler equations expressed in a non-inertial reference frame co-moving with the convective elements. The motion of stellar convective cells inside convective-unstable layers is fully determined by a new system of equations for convection in a non-local and time-dependent formalism. The predictions of the new theory are compared with those from the standard mixing-length paradigm with positive results for atmosphere models of the Sun and all the stars in the Hertzsprung-Russell diagram.

  10. Cross-reactions between xanthates and rubber additives.

    PubMed

    Sasseville, Denis; Al-Sowaidi, Mowza; Moreau, Linda

    2007-09-01

    We previously described allergic contact dermatitis from xanthates used in the recovery of metals from mining ores. We observed cross-reactions with carbamates, believed to be due to the common "dithio" nucleus shared by both groups. The present study was undertaken to establish the rate of cross-reactions between xanthates and rubber additives. Between November 2002 and December 2005, 1,220 consecutive patients were patch-tested with sodium isopropyl xanthate 10% in petrolatum (pet) and with potassium amyl xanthate 10% pet and later 5% pet, in addition to the North American Contact Dermatitis Group standard series and other series as required by their conditions. Fifty-one patients reacted to xanthates, carbamates, or thiurams; 26 reacted to xanthates only, and these reactions were felt to be irritant. Twenty-five patients reacted to xanthates and/or to one or more of the rubber additives, 12 had positive reactions to xanthates and to either carba mix or thiuram mix, 10 reacted to xanthates and carba mix, 9 reacted to xanthates and thiuram mix, and 8 showed positive reactions to xanthates and both mixes. However, 13 patients had positive reactions to carba mix and thiuram mix but did not react to xanthates. Six patients reacted to other rubber additives such as mercaptobenzothiazole, black rubber mix, and mixed dialkyl thioureas. Five of these patients also reacted to xanthates, 4 reacted to xanthates and carba mix, and 3 reacted to xanthates, carba mix, and thiuram mix. Of patients sensitized to carbamates, thiurams, or mercaptobenzothiazole, 50% exhibit cross-reactions with xanthates. Xanthates are irritants, and their patch-test concentrations should be lowered to 5% or less.

  11. The influence of environmental variables on the presence of white sharks, Carcharodon carcharias at two popular Cape Town bathing beaches: a generalized additive mixed model.

    PubMed

    Weltz, Kay; Kock, Alison A; Winker, Henning; Attwood, Colin; Sikweyiya, Monwabisi

    2013-01-01

    Shark attacks on humans are high profile events which can significantly influence policies related to the coastal zone. A shark warning system in South Africa, Shark Spotters, recorded 378 white shark (Carcharodon carcharias) sightings at two popular beaches, Fish Hoek and Muizenberg, during 3690 six-hour long spotting shifts, during the months September to May 2006 to 2011. The probabilities of shark sightings were related to environmental variables using Binomial Generalized Additive Mixed Models (GAMMs). Sea surface temperature was significant, with the probability of shark sightings increasing rapidly as SST exceeded 14 °C and approached a maximum at 18 °C, whereafter it remains high. An 8 times (Muizenberg) and 5 times (Fish Hoek) greater likelihood of sighting a shark was predicted at 18 °C than at 14 °C. Lunar phase was also significant with a prediction of 1.5 times (Muizenberg) and 4 times (Fish Hoek) greater likelihood of a shark sighting at new moon than at full moon. At Fish Hoek, the probability of sighting a shark was 1.6 times higher during the afternoon shift compared to the morning shift, but no diel effect was found at Muizenberg. A significant increase in the number of shark sightings was identified over the last three years, highlighting the need for ongoing research into shark attack mitigation. These patterns will be incorporated into shark awareness and bather safety campaigns in Cape Town.

  12. The Influence of Environmental Variables on the Presence of White Sharks, Carcharodon carcharias at Two Popular Cape Town Bathing Beaches: A Generalized Additive Mixed Model

    PubMed Central

    Weltz, Kay; Kock, Alison A.; Winker, Henning; Attwood, Colin; Sikweyiya, Monwabisi

    2013-01-01

    Shark attacks on humans are high profile events which can significantly influence policies related to the coastal zone. A shark warning system in South Africa, Shark Spotters, recorded 378 white shark (Carcharodon carcharias) sightings at two popular beaches, Fish Hoek and Muizenberg, during 3690 six-hour long spotting shifts, during the months September to May 2006 to 2011. The probabilities of shark sightings were related to environmental variables using Binomial Generalized Additive Mixed Models (GAMMs). Sea surface temperature was significant, with the probability of shark sightings increasing rapidly as SST exceeded 14°C and approached a maximum at 18°C, whereafter it remains high. An 8 times (Muizenberg) and 5 times (Fish Hoek) greater likelihood of sighting a shark was predicted at 18°C than at 14°C. Lunar phase was also significant with a prediction of 1.5 times (Muizenberg) and 4 times (Fish Hoek) greater likelihood of a shark sighting at new moon than at full moon. At Fish Hoek, the probability of sighting a shark was 1.6 times higher during the afternoon shift compared to the morning shift, but no diel effect was found at Muizenberg. A significant increase in the number of shark sightings was identified over the last three years, highlighting the need for ongoing research into shark attack mitigation. These patterns will be incorporated into shark awareness and bather safety campaigns in Cape Town. PMID:23874668
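
    For readers wanting to see this model class in code form, the sketch below fits a binomial GAMM of the general kind described above using the R package mgcv, with a smooth of SST, a cyclic smooth of lunar phase, a shift term, and a year random effect. The data are simulated stand-ins with arbitrary relationships; this is not the authors' analysis or their data.

    ```r
    ## Hedged mgcv sketch of a binomial GAMM for sighting probability (simulated data).
    library(mgcv)

    set.seed(42)
    n   <- 2000
    dat <- data.frame(
      sst   = runif(n, 12, 22),                        # sea surface temperature (deg C)
      lunar = runif(n, 0, 1),                          # lunar phase: 0 = new, 0.5 = full
      shift = factor(sample(c("am", "pm"), n, replace = TRUE)),
      year  = factor(sample(2006:2011, n, replace = TRUE))
    )
    eta <- -8 + 0.4 * dat$sst - 0.8 * cos(2 * pi * dat$lunar)  # arbitrary "true" effects
    dat$sighting <- rbinom(n, 1, plogis(eta))

    m <- gam(sighting ~ s(sst) + s(lunar, bs = "cc") + shift + s(year, bs = "re"),
             family = binomial, data = dat, method = "REML")
    summary(m)
    ```

    The `s(year, bs = "re")` term plays the role of the random effect that makes this a GAMM rather than a plain GAM, while the cyclic basis keeps the lunar-phase effect continuous across new moon.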

  13. Symmetric Fold/Super-Hopf Bursting, Chaos and Mixed-Mode Oscillations in Pernarowski Model of Pancreatic Beta-Cells

    NASA Astrophysics Data System (ADS)

    Fallah, Haniyeh

    Pancreatic beta-cells produce insulin to regulate the blood glucose level. Bursting is important in beta-cells because of its relation to the release of insulin. The Pernarowski model is a simple polynomial model of beta-cell activity that exhibits bursting oscillations in these cells. This paper presents bursting behaviors of symmetric type in this model. In addition, it is shown that the system exhibits period-doubling cascades of canards, which constitute a route to chaos. Canards are also observed symmetrically near the folds of the slow manifold, which results in a chaotic transition between symmetric bursting with n and n + 1 spikes. Furthermore, mixed-mode oscillations (MMOs), and combinations of symmetric bursting with MMOs, are illustrated during the transition between symmetric bursting and continuous spiking.

  14. Faithful Transfer Arbitrary Pure States with Mixed Resources

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing; Li, Lin; Ma, Song-Ya; Chen, Xiu-Bo; Yang, Yi-Xian

    2013-09-01

    In this paper, we show that certain special mixed quantum resources exhibit the same properties as pure entanglement, such as the Bell state, for quantum teleportation. It is shown that one mixed state and three bits of classical communication can be used to teleport one unknown qubit, compared with two bits when pure resources are used. The schemes can be implemented easily with current physical techniques. Moreover, these resources are also optimal and typical for faithfully remotely preparing arbitrary one-qubit, two-qubit and three-qubit states with mixed quantum resources. Our schemes accomplish the same tasks as those based on pure entanglement resources, at the cost of only one additional bit of classical communication. The success probability is independent of the form of the mixed resources.

  15. The Performance of IRT Model Selection Methods with Mixed-Format Tests

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2012-01-01

    When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…

  16. Impact of Lateral Mixing in the Ocean on El Nino in Fully Coupled Climate Models

    NASA Astrophysics Data System (ADS)

    Gnanadesikan, A.; Russell, A.; Pradal, M. A. S.; Abernathey, R. P.

    2016-02-01

    Given the large number of processes that can affect El Nino, it is difficult to understand why different climate models simulate El Nino differently. This paper focusses on the role of lateral mixing by mesoscale eddies. There is significant disagreement about the value of the mixing coefficient A_Redi, which parameterizes the lateral mixing of tracers. Coupled climate models usually prescribe small values of this coefficient, ranging between a few hundred and a few thousand m^2/s. Observations, however, suggest values that are much larger. We present a sensitivity study with a suite of Earth System Models that examines the impact of varying A_Redi on the amplitude of El Nino. We examine the effect of varying a spatially constant A_Redi over a range of values similar to that seen in the IPCC AR5 models, as well as looking at two spatially varying distributions based on altimetric velocity estimates. While the expectation that higher values of A_Redi should damp anomalies is borne out in the model, it is more than compensated by a weaker damping due to vertical mixing and a stronger response of atmospheric winds to SST anomalies. Under higher mixing, a weaker zonal SST gradient causes the center of convection over the Warm Pool to shift eastward and to become more sensitive to changes in cold tongue SSTs. Changes in the SST gradient also explain interdecadal ENSO variability within individual model runs.

  17. Estimation of oceanic subsurface mixing under a severe cyclonic storm using a coupled atmosphere-ocean-wave model

    NASA Astrophysics Data System (ADS)

    Prakash, Kumar Ravi; Nigam, Tanuja; Pant, Vimlesh

    2018-04-01

    A coupled atmosphere-ocean-wave model was used to examine mixing in the upper-oceanic layers under the influence of a very severe cyclonic storm Phailin over the Bay of Bengal (BoB) during 10-14 October 2013. The coupled model was found to improve the sea surface temperature over the uncoupled model. Model simulations highlight the prominent role of cyclone-induced near-inertial oscillations in subsurface mixing up to the thermocline depth. The inertial mixing introduced by the cyclone played a central role in the deepening of the thermocline and mixed layer depth by 40 and 15 m, respectively. For the first time over the BoB, a detailed analysis of inertial oscillation kinetic energy generation, propagation, and dissipation was carried out using an atmosphere-ocean-wave coupled model during a cyclone. A quantitative estimate of kinetic energy in the oceanic water column, its propagation, and its dissipation mechanisms were explained using the coupled atmosphere-ocean-wave model. The large shear generated by the inertial oscillations was found to overcome the stratification and initiate mixing at the base of the mixed layer. Greater mixing was found at the depths where the eddy kinetic diffusivity was large. The baroclinic current, holding a larger fraction of kinetic energy than the barotropic current, weakened rapidly after the passage of the cyclone. The shear induced by inertial oscillations was found to decrease rapidly with increasing depth below the thermocline. The dampening of the mixing process below the thermocline was explained through the enhanced dissipation rate of turbulent kinetic energy upon approaching the thermocline layer. The wave-current interaction and nonlinear wave-wave interaction were found to affect the process of downward mixing and cause the dissipation of inertial oscillations.

  18. Linear mixed-effects modeling approach to FMRI group analysis

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.

    2013-01-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the

  19. Linear mixed-effects modeling approach to FMRI group analysis.

    PubMed

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity
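
    One of the points above, obtaining an ICC from an LME with crossed random effects alongside a fixed effect, can be sketched briefly in R with the lme4 package (assumed installed). The data below are simulated stand-ins (subjects crossed with sessions), not FMRI effect estimates, and the variance components are arbitrary.

    ```r
    ## Hedged sketch: ICC from crossed random effects with a confounding fixed effect.
    library(lme4)

    set.seed(11)
    dat <- expand.grid(subject = factor(1:30), session = factor(1:4),
                       condition = factor(c("A", "B")))
    subj_eff <- rnorm(30, sd = 1.0)
    sess_eff <- rnorm(4,  sd = 0.3)
    dat$y <- 0.5 * (dat$condition == "B") +          # confounding fixed effect
      subj_eff[as.integer(dat$subject)] +            # subject random effect
      sess_eff[as.integer(dat$session)] +            # session random effect, crossed with subject
      rnorm(nrow(dat), sd = 0.7)

    m  <- lmer(y ~ condition + (1 | subject) + (1 | session), data = dat)
    vc <- as.data.frame(VarCorr(m))
    icc_subject <- vc$vcov[vc$grp == "subject"] / sum(vc$vcov)
    icc_subject
    ```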

  20. Stand level height-diameter mixed effects models: parameters fitted using loblolly pine but calibrated for sweetgum

    Treesearch

    Curtis L. Vanderschaaf

    2008-01-01

    Mixed effects models can be used to obtain site-specific parameters through the use of model calibration that often produces better predictions of independent data. This study examined whether parameters of a mixed effect height-diameter model estimated using loblolly pine plantation data but calibrated using sweetgum plantation data would produce reasonable...

  1. An Investigation of a Hybrid Mixing Timescale Model for PDF Simulations of Turbulent Premixed Flames

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Kuron, Mike; Ren, Zhuyin; Lu, Tianfeng; Chen, Jacqueline H.

    2016-11-01

    The transported probability density function (TPDF) method is general across all combustion regimes, which makes it attractive for turbulent combustion simulations. However, the modeling of micromixing due to molecular diffusion is still considered a primary challenge for the TPDF method, especially in turbulent premixed flames. Recently, a hybrid mixing rate model for TPDF simulations of turbulent premixed flames has been proposed, which recovers the correct mixing rates in the limits of the flamelet regime and the broken reaction zone regime while aiming to properly account for the transition in between. In this work, this model is employed in TPDF simulations of turbulent premixed methane-air slot burner flames. The model performance is assessed by comparing the results with both direct numerical simulation (DNS) and the conventional constant mechanical-to-scalar mixing rate model. This work is supported by NSFC 51476087 and 91441202.

  2. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    PubMed

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data featuring an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by the correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
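
    The two linked parts can be made concrete with a small simulation. The R sketch below generates semicontinuous data from correlated subject-level random effects, a logistic occurrence part, and a log-scale intensity part with skew-normal errors. It is a hedged illustration of the model structure described above; all parameter values are invented and the authors' Bayesian estimation procedure is not reproduced.

    ```r
    ## Simulate two-part semicontinuous data with correlated random effects
    ## and skew-normal intensity errors (all parameter values hypothetical).
    set.seed(7)
    n_subj <- 200; n_obs <- 6

    ## Correlated subject-level random effects for Part I (occurrence) and Part II (intensity)
    Sigma <- matrix(c(1.0, 0.5, 0.5, 0.8), 2, 2)
    b     <- matrix(rnorm(n_subj * 2), n_subj) %*% chol(Sigma)

    ## Skew-normal draws via the standard convolution representation
    rskewnorm <- function(n, alpha, omega = 1) {
      delta <- alpha / sqrt(1 + alpha^2)
      omega * (delta * abs(rnorm(n)) + sqrt(1 - delta^2) * rnorm(n))
    }

    id   <- rep(seq_len(n_subj), each = n_obs)
    time <- rep(seq_len(n_obs) - 1, n_subj)

    ## Part I: probability of a positive (non-zero) response
    p_pos <- plogis(-0.5 + 0.2 * time + b[id, 1])
    pos   <- rbinom(length(id), 1, p_pos)

    ## Part II: intensity of positive responses, with right-skewed errors
    y_pos <- exp(1.0 + 0.1 * time + b[id, 2] + rskewnorm(length(id), alpha = 4, omega = 0.6))
    y     <- ifelse(pos == 1, y_pos, 0)

    mean(y == 0)   # proportion of exact zeros in the simulated data
    ```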

  3. Model of Mixing Layer With Multicomponent Evaporating Drops

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Le Clercq, Patrick

    2004-01-01

    A mathematical model of a three-dimensional mixing layer laden with evaporating fuel drops composed of many chemical species has been derived. The study is motivated by the fact that typical real petroleum fuels contain hundreds of chemical species. Previously, for the sake of computational efficiency, spray studies were performed using either models based on a single representative species or models based on surrogate fuels of at most 15 species. The present multicomponent model makes it possible to perform more realistic simulations by accounting for hundreds of chemical species in a computationally efficient manner. The model is used to perform Direct Numerical Simulations in continuing studies directed toward understanding the behavior of liquid petroleum fuel sprays. The model includes governing equations formulated in an Eulerian and a Lagrangian reference frame for the gas and the drops, respectively. This representation is consistent with the expected volumetrically small loading of the drops in gas (of the order of 10^-3), although the mass loading can be substantial because of the high ratio (of the order of 10^3) between the densities of liquid and gas. The drops are treated as point sources of mass, momentum, and energy; this representation is consistent with the drop size being smaller than the Kolmogorov scale. Unsteady drag, added-mass effects, Basset history forces, and collisions between the drops are neglected, and the gas is assumed calorically perfect. The model incorporates the concept of continuous thermodynamics, according to which the chemical composition of a fuel is described probabilistically, by use of a distribution function. Distribution functions generally depend on many parameters. However, for mixtures of homologous species, the distribution can be approximated with acceptable accuracy as a sole function of the molecular weight. The mixing layer is initially laden with drops in its lower stream, and the drops are colder than the gas

  4. Effects of mixing in threshold models of social behavior

    NASA Astrophysics Data System (ADS)

    Akhmetzhanov, Andrei R.; Worden, Lee; Dushoff, Jonathan

    2013-07-01

    We consider the dynamics of an extension of the influential Granovetter model of social behavior, where individuals are affected by their personal preferences and observation of the neighbors’ behavior. Individuals are arranged in a network (usually the square lattice), and each has a state and a fixed threshold for behavior changes. We simulate the system asynchronously by picking a random individual and we either update its state or exchange it with another randomly chosen individual (mixing). We describe the dynamics analytically in the fast-mixing limit by using the mean-field approximation and investigate it mainly numerically in the case of finite mixing. We show that the dynamics converge to a manifold in state space, which determines the possible equilibria, and show how to estimate the projection of this manifold by using simulated trajectories, emitted from different initial points. We show that the effects of considering the network can be decomposed into finite-neighborhood effects, and finite-mixing-rate effects, which have qualitatively similar effects. Both of these effects increase the tendency of the system to move from a less-desired equilibrium to the “ground state.” Our findings can be used to probe shifts in behavioral norms and have implications for the role of information flow in determining when social norms that have become unpopular in particular communities (such as foot binding or female genital cutting) persist or vanish.
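
    A toy version of this kind of threshold-with-mixing dynamics is easy to write down. The R sketch below runs asynchronous updates on a square lattice, where each step either applies the threshold rule to a random individual or swaps two randomly chosen individuals as a crude stand-in for the mixing move. Lattice size, thresholds, and the mixing probability are arbitrary illustrative choices, not parameters from the study.

    ```r
    ## Toy threshold model on a square lattice with random mixing (swap) moves.
    set.seed(3)
    L         <- 40
    state     <- matrix(rbinom(L * L, 1, 0.1), L, L)   # initial adopters
    threshold <- matrix(runif(L * L), L, L)            # fixed individual thresholds

    neighbor_frac <- function(s, i, j) {
      ii <- c(i - 1, i + 1, i, i); jj <- c(j, j, j - 1, j + 1)
      ok <- ii >= 1 & ii <= L & jj >= 1 & jj <= L
      mean(s[cbind(ii[ok], jj[ok])])                   # fraction of active lattice neighbors
    }

    p_mix <- 0.2                                       # probability of a mixing step
    for (step in 1:50000) {
      i <- sample(L, 1); j <- sample(L, 1)
      if (runif(1) < p_mix) {
        k <- sample(L, 1); l <- sample(L, 1)           # swap two individuals (state and threshold)
        tmp <- state[i, j]; state[i, j] <- state[k, l]; state[k, l] <- tmp
        tmp <- threshold[i, j]; threshold[i, j] <- threshold[k, l]; threshold[k, l] <- tmp
      } else {
        state[i, j] <- as.integer(neighbor_frac(state, i, j) >= threshold[i, j])
      }
    }
    mean(state)                                        # final fraction of adopters
    ```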

  5. Formulation and Validation of an Efficient Computational Model for a Dilute, Settling Suspension Undergoing Rotational Mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran

    Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.

  6. COMPUTATIONAL FLUID DYNAMICS MODELING OF SCALED HANFORD DOUBLE SHELL TANK MIXING - CFD MODELING SENSITIVITY STUDY RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JACKSON VL

    2011-08-31

    The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scaleup issues, study operational parameters, and predict mixing performance at full scale.

  7. Modulation of Additive and Interactive Effects in Lexical Decision by Trial History

    ERIC Educational Resources Information Center

    Masson, Michael E. J.; Kliegl, Reinhold

    2013-01-01

    Additive and interactive effects of word frequency, stimulus quality, and semantic priming have been used to test theoretical claims about the cognitive architecture of word-reading processes. Additive effects among these factors have been taken as evidence for discrete-stage models of word reading. We present evidence from linear mixed-model…

  8. Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.

    1997-01-01

    The following is the final technical report for grant NAGW-3442, 'Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere'. Research efforts in the first year concentrated on transport and mixing processes in the polar vortices. Three papers on mixing in the Antarctic were published. The first was a numerical modeling study of wavebreaking and mixing and their relationship to the period of observed stratospheric waves (Bowman). The second paper presented evidence from TOMS for wavebreaking in the Antarctic (Bowman and Mangus 1993). The third paper used Lagrangian trajectory calculations from analyzed winds to show that there is very little transport into the Antarctic polar vortex prior to the vortex breakdown (Bowman). Mixing is significantly greater at lower levels. This research helped to confirm theoretical arguments for vortex isolation and data from the Antarctic field experiments that were interpreted as indicating isolation. A Ph.D. student, Steve Dahlberg, used the trajectory approach to investigate mixing and transport in the Arctic. While the Arctic vortex is much more disturbed than the Antarctic, there still appears to be relatively little transport across the vortex boundary at 450 K prior to the vortex breakdown. The primary reason for the absence of an ozone hole in the Arctic is the earlier warming and breakdown of the vortex compared to the Antarctic, not replenishment of ozone by greater transport. Two papers describing these results have appeared (Dahlberg and Bowman; Dahlberg and Bowman). Steve Dahlberg completed his Ph.D. thesis (Dahlberg and Bowman) and is now teaching in the Physics Department at Concordia College. We also prepared an analysis of the QBO in SBUV ozone data (Hollandsworth et al.). A numerical study in collaboration with Dr. Ping Chen investigated mixing by barotropic instability, which is the probable origin of the 4-day wave in the upper stratosphere (Bowman and Chen). The important result from

  9. System dynamics of behaviour-evolutionary mix-game models

    NASA Astrophysics Data System (ADS)

    Gou, Cheng-Ling; Gao, Jie-Ping; Chen, Fang

    2010-11-01

    In real financial markets there are two kinds of traders: fundamentalists and trend-followers. The mix-game model was proposed to mimic such phenomena. In a mix-game model there are two groups of agents: Group 1 plays the majority game and Group 2 plays the minority game. In this paper, we investigate the case in which some traders in real financial markets can change their investment behaviours, by assigning evolutionary abilities to agents: if an agent's winning rate is smaller than a threshold, it will join the other group, and agents repeat such an evolution at certain time intervals. Through the simulations, we obtain the following findings: (i) the volatilities of the systems increase with the number of agents in Group 1 and with the number of behavioural changes of all agents; (ii) the performances of agents in both groups and the stability of the systems improve if all agents take more time to observe their new investment behaviours; (iii) there are two-phase zones of market and non-market and two-phase zones of evolution and non-evolution; (iv) parameter configurations located within the overlap between the market zones and the evolution zones are suited to simulating financial markets.

  10. Modeling of Transient Flow Mixing of Streams Injected into a Mixing Chamber

    NASA Technical Reports Server (NTRS)

    Voytovych, Dmytro M.; Merkle, Charles L.; Lucht, Robert P.; Hulka, James R.; Jones, Gregg W.

    2006-01-01

    Ignition is recognized as one of the critical drivers in the reliability of multiple-start rocket engines. Residual combustion products from previous engine operation can condense on valves and related structures, thereby creating difficulties for subsequent starting procedures. Alternative ignition methods that require fewer valves can mitigate the valve reliability problem, but require improved understanding of the spatial and temporal propellant distribution in the pre-ignition chamber. Current design tools based mainly on one-dimensional analysis and empirical models cannot predict local details of the injection and ignition processes. The goal of this work is to evaluate the capability of modern computational fluid dynamics (CFD) tools in predicting the transient flow mixing in the pre-ignition environment by comparing the results with the experimental data. This study is part of a program to improve analytical methods and methodologies to analyze the reliability and durability of combustion devices. In the present paper we describe a series of detailed computational simulations of the unsteady mixing events as the cold propellants are first introduced into the chamber, as a first step in providing this necessary environmental description. The present computational modeling represents a complement to parallel experimental simulations and includes comparisons with experimental results from that effort. A large number of rocket engine ignition studies have been previously reported. Here we limit our discussion to the work discussed in Refs. 2, 3 and 4, which is both similar to and different from the present approach. The similarities arise from the fact that both efforts involve detailed experimental/computational simulations of the ignition problem. The differences arise from the underlying philosophy of the two endeavors. The approach in Refs. 2 to 4 is a classical ignition study in which the focus is on the response of a propellant mixture to an ignition source, with

  11. HYDRAULICS AND MIXING EVALUATIONS FOR NT-21/41 TANKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S.; Barnes, O.

    2014-11-17

    The hydraulic results demonstrate that a pump head pressure of 20 psi recirculates about 5.6 liters/min through the existing 0.131-inch orifice when the valve connected to NT-41 is closed. When the valve to NT-41 is open, the solution flowrates to the HB-Line tanks NT-21 and NT-41 are found to be about 0.5 lpm and 5.2 lpm, respectively. The modeling calculations for the mixing operations of miscible fluids contained in the HB-Line tank NT-21 were performed by taking a three-dimensional Computational Fluid Dynamics (CFD) approach. The CFD modeling results were benchmarked against literature results and previous SRNL test results to validate the model. Final performance calculations were performed for the nominal case by using the validated model to quantify the mixing time for the HB-Line tank. The results demonstrate that when a pump recirculates a solution volume of 5.7 liters every minute out of the 72-liter tank contents containing two acid solutions of 2.7 M and 0 M concentrations (i.e., water), a minimum mixing time of 1.5 hours is adequate to get the tank contents adequately mixed. In addition, the sensitivity results for tank contents of 8 M existing solution and 1.5 M incoming species show that a mixing time of about 2 hours is needed to get the solutions mixed.

  12. Prediction of hemoglobin in blood donors using a latent class mixed-effects transition model.

    PubMed

    Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Rizopoulos, Dimitris; Lesaffre, Emmanuel

    2016-02-20

    Blood donors experience a temporary reduction in their hemoglobin (Hb) value after donation. At each visit, the Hb value is measured, and a too low Hb value leads to a deferral for donation. Because of the recovery process after each donation as well as state dependence and unobserved heterogeneity, longitudinal data of Hb values of blood donors provide unique statistical challenges. To estimate the shape and duration of the recovery process and to predict future Hb values, we employed three models for the Hb value: (i) a mixed-effects model; (ii) a latent-class mixed-effects model; and (iii) a latent-class mixed-effects transition model. In each model, a flexible function was used to model the recovery process after donation. The latent classes identify groups of donors with fast or slow recovery times and donors whose recovery time increases with the number of donations. The transition effect accounts for possible state dependence in the observed data. All models were estimated in a Bayesian way, using data of new entrant donors from the Donor InSight study. Informative priors were used for parameters of the recovery process that were not identified using the observed data, based on results from the clinical literature. The results show that the latent-class mixed-effects transition model fits the data best, which illustrates the importance of modeling state dependence, unobserved heterogeneity, and the recovery process after donation. The estimated recovery time is much longer than the current minimum interval between donations, suggesting that an increase of this interval may be warranted. Copyright © 2015 John Wiley & Sons, Ltd.

  13. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    ERIC Educational Resources Information Center

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  14. Estimating Preferential Flow in Karstic Aquifers Using Statistical Mixed Models

    PubMed Central

    Anaya, Angel A.; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J.; Meeker, John D.; Alshawabkeh, Akram N.

    2013-01-01

    Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless-steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring hydraulic and tracer response spatially and temporally. Statistical mixed models are applied to hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting a greater volume of the system being flushed by flowing water at higher rates. Spatial and temporal distribution of tracer concentrations indicates the presence of conduit-like and diffuse flow transport in the system, supporting the notion of combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the statistical mixed models used in the study. PMID:23802921

  15. Interpretable inference on the mixed effect model with the Box-Cox transformation.

    PubMed

    Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M

    2017-07-10

    We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of the model misspecifications. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at the specified occasion in the context of mixed effects models for repeated measures analysis for randomized clinical trials, which provided interpretable estimates of the treatment effect. From simulation studies, it was shown that our proposed method controlled type I error of the statistical test for the model median difference in almost all situations and had moderate or high performance for power compared with the existing methods. We illustrated our method with cluster of differentiation 4 (CD4) data in an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
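
    As a rough illustration of the workflow described above (not the authors' exact estimator), the following R sketch fits an MMRM-style model to Box-Cox-transformed responses and back-transforms fitted values to model medians; the data frame and column names (dat, y, trt, visit, id) are hypothetical.

      library(MASS)   # boxcox()
      library(nlme)   # gls() with unstructured covariance (MMRM-style)

      # estimate the Box-Cox parameter lambda by profile likelihood
      bc  <- boxcox(y ~ trt * visit, data = dat, plotit = FALSE)
      lam <- bc$x[which.max(bc$y)]
      dat$z <- if (abs(lam) < 1e-8) log(dat$y) else (dat$y^lam - 1) / lam

      # repeated-measures model on the transformed scale
      fit <- gls(z ~ trt * visit, data = dat,
                 correlation = corSymm(form = ~ 1 | id),     # unstructured within-subject correlation
                 weights     = varIdent(form = ~ 1 | visit)) # visit-specific variances

      # back-transforming a fitted mean on the transformed scale gives a model median,
      # which is the interpretable treatment-effect scale discussed in the abstract
      inv_bc <- function(z, lam) if (abs(lam) < 1e-8) exp(z) else (lam * z + 1)^(1 / lam)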

  16. Cost benefit analysis of anti-strip additives in hot mix asphalt with various aggregates.

    DOT National Transportation Integrated Search

    2015-05-01

    This report documents research on moisture sensitivity testing of hot-mix asphalt (HMA) mixes in Pennsylvania and the associated use of antistrip. The primary objective of the research was to evaluate and compare benefit/cost ratios of mandatory us...

  17. Modeling Ullage Dynamics of Tank Pressure Control Experiment during Jet Mixing in Microgravity

    NASA Technical Reports Server (NTRS)

    Kartuzova, O.; Kassemi, M.

    2016-01-01

    A CFD model for simulating the fluid dynamics of the jet induced mixing process is utilized in this paper to model the pressure control portion of the Tank Pressure Control Experiment (TPCE) in microgravity. The Volume of Fluid (VOF) method is used for modeling the dynamics of the interface during mixing. The simulations were performed at a range of jet Weber numbers from non-penetrating to fully penetrating. Two different initial ullage positions were considered. The computational results for the jet-ullage interaction are compared with still images from the video of the experiment. A qualitative comparison shows that the CFD model was able to capture the main features of the interfacial dynamics, as well as the jet penetration of the ullage.

  18. Mix Models Applied to the Pushered Single Shell Capsules Fired on NIF

    NASA Astrophysics Data System (ADS)

    Tipton, Robert; Dewald, Eduard; Pino, Jesse; Ralph, Joe; Sacks, Ryan; Salmonson, Jay

    2017-10-01

    The goal of the Pushered Single Shell (PSS) experimental campaign is to study the mix of partially ionized ablator material into the hotspot. To accomplish this goal, we used a uniformly Si-doped plastic capsule based on the successful Two-Shock campaign. The inner few microns of the capsule can be doped with a few percent Ge. To diagnose mix, we used the method of separated reactants: deuterating the inner Ge-doped layer (CD/Ge) while using a gas fill of tritium and hydrogen. Mix is inferred by measuring the neutron yields from DD, DT, and TT reactions. The PSS implosion is fast (~400 km/s), hot (~3 keV), and round (P2 ≈ 0). This paper will present the calculations of RANS-type mix models such as K-L, along with LES models such as multicomponent Navier-Stokes, on several PSS shots. The calculations will be compared to each other and to the measured data. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  19. Mixing Phenomena in a Bottom Blown Copper Smelter: A Water Model Study

    NASA Astrophysics Data System (ADS)

    Shui, Lang; Cui, Zhixiang; Ma, Xiaodong; Akbar Rhamdhani, M.; Nguyen, Anh; Zhao, Baojun

    2015-03-01

    The first commercial bottom blown oxygen copper smelting furnace has been installed and operated at Dongying Fangyuan Nonferrous Metals since 2008. Significant advantages have been demonstrated in this technology, mainly due to its bottom blown oxygen-enriched gas. In this study, a scaled-down 1:12 model was set up to simulate the flow behavior and understand the mixing phenomena in the furnace. A single lance was used in the present study for gas blowing to establish a reliable research technique and a quantitative characterisation of the mixing behavior. Operating parameters such as the horizontal distance from the blowing lance, detector depth, bath height, and gas flow rate were adjusted to investigate the mixing time under different conditions. It was found that when the horizontal distance between the lance and detector is within an effective stirring range, the mixing time decreases slightly with increasing horizontal distance. Outside this range, the mixing time increases with increasing horizontal distance, and this effect is more significant at the surface. The mixing time always decreases with increasing gas flow rate and bath height. An empirical relationship for the mixing time as a function of gas flow rate and bath height has been established for the first time for the horizontal bottom blowing furnace.

  20. A novel modeling approach to the mixing process in twin-screw extruders

    NASA Astrophysics Data System (ADS)

    Kennedy, Amedu Osaighe; Penlington, Roger; Busawon, Krishna; Morgan, Andy

    2014-05-01

    In this paper, a theoretical model for the mixing process in a self-wiping co-rotating twin screw extruder, based on a combination of statistical techniques and mechanistic modelling, is proposed. The approach was to examine the mixing process in the local zones via the residence time distribution and the flow dynamics, from which predictive models of the mean residence time and mean time delay were determined. Increasing the feed rate at constant screw speed was found to narrow the residence time distribution curve, reduce the mean residence time and time delay, and increase the degree of fill. Increasing the screw speed at constant feed rate was found to narrow the residence time distribution curve, decrease the degree of fill in the extruder, and thus increase the time delay. An experimental investigation was also carried out to validate the modeling approach.

  1. SYNTHESIS OF MIXED FULL AND SEMIESTERS OF PHOSPHOROUS ACID AS ORGANIC MOTOR OIL ADDITIVES,

    DTIC Science & Technology

    The synthesis of mixed full and semiesters of phosphonic acid was effected using alkylphenols produced by the chemical industry. By condensation of...industrial alkylphenol or the condensation of acid chloride of di-(alkylphenyl)-phosphorous acid with diethylamine, the corresponding mixed full and semiesters

  2. Efficient Bayesian mixed model analysis increases association power in large cohorts

    PubMed Central

    Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L

    2014-01-01

    Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN²) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633

  3. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. The second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
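
    For readers unfamiliar with the solver at the core of this work, the sketch below is a generic Jacobi-preconditioned conjugate gradient routine in R for a symmetric positive-definite system A x = b (such as mixed model equations). It forms A explicitly, which is exactly what the paper's iteration-on-data technique avoids, so it illustrates only the PCG iteration itself.

      # generic preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner
      pcg <- function(A, b, tol = 1e-8, maxit = 1000) {
        x    <- numeric(length(b))
        r    <- b - A %*% x
        Minv <- 1 / diag(A)          # Jacobi preconditioner
        z    <- Minv * r
        p    <- z
        rz   <- sum(r * z)
        for (it in seq_len(maxit)) {
          Ap    <- A %*% p
          alpha <- rz / sum(p * Ap)
          x     <- x + alpha * p
          r     <- r - alpha * Ap
          if (sqrt(sum(r^2)) < tol * sqrt(sum(b^2))) break
          z      <- Minv * r
          rz_new <- sum(r * z)
          p      <- z + (rz_new / rz) * p
          rz     <- rz_new
        }
        drop(x)
      }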

  4. Optimization of the time series NDVI-rainfall relationship using linear mixed-effects modeling for the anti-desertification area in the Beijing and Tianjin sandstorm source region

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie

    2018-05-01

    Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered random effects of sample points nested within soil units (nested two-level modeling), as well as single-level models using soil units or sample points alone. Additionally, three functions, including the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heterogeneity, and a further three correlation structures, including the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. It was concluded that the nested two-level model considering both heteroscedasticity with (CPP) and spatiotemporal correlation with [ARMA(1,1)] showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R² = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
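
    The variance functions and correlation structures named in this abstract map directly onto nlme in R; the sketch below shows the best-performing combination (nested random effects, constant-plus-power variance, ARMA(1,1) correlation) with hypothetical column names (ndvi, rain, year, soil, point).

      library(nlme)

      fit <- lme(ndvi ~ rain,
                 random      = ~ 1 | soil/point,                      # sample points nested in soil units
                 weights     = varConstPower(form = ~ fitted(.)),     # "constant plus power" (CPP) variance
                 correlation = corARMA(p = 1, q = 1, form = ~ year),  # ARMA(1,1) residual correlation over years
                 data = dat, method = "REML")
      summary(fit)
      # alternatives examined in the paper: varExp()/varPower() for heteroscedasticity
      # and corAR1()/corCompSymm() for the spatiotemporal correlation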

  5. Intercomparison of garnet barometers and implications for garnet mixing models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anovitz, L.M.; Essene, E.J.

    1985-01-01

    Several well-calibrated barometers are available in the system Ca-Fe-Ti-Al-Si-O, including: Alm + 3Ru = 3Ilm + Sil + 2Qtz (GRAIL); 2Alm + Gr + 6Ru = 6Ilm + 3An + 3Qtz (GRIPS); 2Alm + Gr = 3Fa + 3An (FAGS); 3An = Gr + 2Ky + Qtz (GASP); 2Fs = Fa + Qtz (FFQ); and Gr + Qtz = An + 2Wo (WAGS). GRIPS, GRAIL and GASP form a linearly dependent set such that any two should yield the third, given an a/X model for the grossular/almandine solid solution. Application to barometry of garnet granulite assemblages from the Grenville in Ontario yields average pressures 0.1 kb lower for GRIPS and 0.4 kb higher for FAGS using our mixing model. Results from Parry Island, Ontario, yield 8.7 kb from GRAIL as opposed to 9.1 kb using Ganguly and Saxena's model. For GASP, Parry Island assemblages yield 8.4 kb with the authors' calibration. Ganguly and Saxena's model gives 5.4 kb using Gasparik's reversals and 8.1 kb using the position of GASP calculated from GRIPS and GRAIL. These corrections allow GRIPS, GRAIL, GASP and FAGS to yield consistent pressures to +/- 0.5 kb in regional metamorphic terranes. Application of their mixing model outside of the fitted range 700-1000 K is not encouraged, as extrapolation may yield erroneous results.

  6. Validation analysis of probabilistic models of dietary exposure to food additives.

    PubMed

    Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J

    2003-10-01

    The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent brands or the per cent eating occasions within a food group that contained an additive. Since the three model components assumed two possible modes of input, the validity of eight (2³) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of full conceptual models. While the distribution of intake estimates from models fell below conservative intakes, which assume that the additive is present at maximum permitted levels (MPLs) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
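
    A minimal Monte Carlo sketch of one such conceptual model (one food group, all numbers hypothetical) in R: intake is the product of a lognormal food-intake draw, a Bernoulli presence indicator, and a lognormal concentration draw, and the resulting distribution can be compared against the MPL-based point estimate.

      set.seed(1)
      n <- 1e5

      intake_g  <- rlnorm(n, meanlog = log(150), sdlog = 0.5)  # daily food intake (g), lognormal input
      present   <- rbinom(n, 1, prob = 0.35)                   # additive present in 35% of brands/occasions
      conc_mgkg <- rlnorm(n, meanlog = log(40), sdlog = 0.7)   # additive concentration (mg/kg), lognormal input

      additive_mg <- intake_g / 1000 * present * conc_mgkg     # simulated additive intake (mg/day)
      quantile(additive_mg, c(0.50, 0.95, 0.999))              # percentiles to compare with the MPL point estimate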

  7. Effect of shroud geometry on the effectiveness of a short mixing stack gas eductor model

    NASA Astrophysics Data System (ADS)

    Kavalis, A. E.

    1983-06-01

    An existing apparatus for testing models of gas eductor systems using high temperature primary flow was modified to provide improved control and performance over a wide range of gas temperatures and flow rates. Secondary flow pumping, temperature and pressure data were recorded for two gas eductor system models. The first, previously tested under hot flow conditions, consists of a primary plate with four tilted-angled nozzles and a slotted, shrouded mixing stack with two diffuser rings (overall L/D = 1.5). A portable pyrometer with a surface probe was used for the second model in order to identify any hot spots at the external surface of the mixing stack, shroud and diffuser rings. The second model is shown to have almost the same mixing and pumping performance as the first, but to exhibit much lower shroud and diffuser surface temperatures.

  8. Formation of parametric images using mixed-effects models: a feasibility study.

    PubMed

    Huang, Husan-Ming; Shih, Yi-Yu; Lin, Chieh

    2016-03-01

    Mixed-effects models have been widely used in the analysis of longitudinal data. By presenting the parameters as a combination of fixed effects and random effects, mixed-effects models incorporating both within- and between-subject variations are capable of improving parameter estimation. In this work, we demonstrate the feasibility of using a non-linear mixed-effects (NLME) approach for generating parametric images from medical imaging data of a single study. By assuming that all voxels in the image are independent, we used simulation and animal data to evaluate whether NLME can improve the voxel-wise parameter estimation. For testing purposes, intravoxel incoherent motion (IVIM) diffusion parameters including perfusion fraction, pseudo-diffusion coefficient and true diffusion coefficient were estimated using diffusion-weighted MR images and NLME through fitting the IVIM model. The conventional method of non-linear least squares (NLLS) was used as the standard approach for comparison of the resulting parametric images. In the simulated data, NLME provides more accurate and precise estimates of diffusion parameters compared with NLLS. Similarly, we found that NLME has the ability to improve the signal-to-noise ratio of parametric images obtained from rat brain data. These data have shown that it is feasible to apply NLME in parametric image generation, and the parametric image quality can be accordingly improved with the use of NLME. With the flexibility to be adapted to other models or modalities, NLME may become a useful tool to improve the parametric image quality in the future. Copyright © 2015 John Wiley & Sons, Ltd.
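
    As a point of reference for the NLLS baseline used in the comparison, here is a single-voxel IVIM fit in R with made-up signal values; the NLME alternative would pool voxels and treat f, D* and D as fixed plus random effects (e.g., via nlme::nlme) rather than fitting each voxel independently.

      # bi-exponential IVIM signal model: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)
      ivim <- function(b, f, Dstar, D) f * exp(-b * Dstar) + (1 - f) * exp(-b * D)

      b <- c(0, 10, 20, 50, 100, 200, 400, 800)               # b-values (s/mm^2)
      S <- c(1.00, 0.93, 0.88, 0.81, 0.75, 0.67, 0.53, 0.33)  # illustrative normalised signal

      fit <- nls(S ~ ivim(b, f, Dstar, D),
                 start = list(f = 0.10, Dstar = 0.020, D = 0.0010),
                 lower = c(0, 0.003, 1e-4), upper = c(0.5, 0.5, 0.003),
                 algorithm = "port")                          # bounded voxel-wise NLLS
      coef(fit)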

  9. Extrusion-mixing compared with hand-mixing of polyether impression materials?

    PubMed

    McMahon, Caroline; Kinsella, Daniel; Fleming, Garry J P

    2010-12-01

    The hypotheses tested were two-fold: (a) whether altering the base:catalyst ratio influences the working time, elastic recovery and strain in compression properties of a hand-mixed polyether impression material, and (b) whether an extrusion-mixed polyether impression material has a significant advantage over a hand-mixed polyether impression material mixed at the optimum base:catalyst ratio. The polyether was hand-mixed at the optimum (manufacturer's recommended) base:catalyst ratio (7:1), and further groups were made by increasing or decreasing the catalyst length by 25%. Additionally, specimens were also made from an extrusion-mixed polyether impression material and compared with the optimum hand-mixed base:catalyst ratio. A penetrometer assembly was used to measure the working time (n=5). Five cylindrical specimens for each hand-mixed and extrusion-mixed group investigated were employed for elastic recovery and strain in compression testing. Hand-mixing polyether impression materials with 25% more catalyst than recommended significantly decreased the working time, while hand-mixing with 25% less catalyst than recommended significantly increased the strain in compression. The extrusion-mixed polyether impression material provided working time, elastic recovery and strain in compression similar to the polyether hand-mixed at the optimum base:catalyst ratio.

  10. The Mixing of Regolith on the Moon and Beyond; A Model Refreshed

    NASA Astrophysics Data System (ADS)

    Costello, E.; Ghent, R. R.; Lucey, P. G.

    2017-12-01

    Meteoritic impactors constantly mix the lunar regolith, affecting stratigraphy, the lifetime of rays and other anomalous surface features, and the burial, exposure, and breakdown of volatiles and rocks. In this work we revisit the pioneering regolith mixing model presented by Gault et al. (1974), with updated assumptions and input parameters. Our updates significantly widen the parameter space and allow us to explore mixing as it is driven by different impactors in different materials (e.g. radar-dark halos and melt ponds). The updated treatment of micrometeorites suggests a very high rate of processing at the immediate lunar surface, with implications for rock breakdown and regolith production on melt ponds. We find that the inclusion of secondary impacts has a very strong effect on the rate and magnitude of mixing at all depths and timescales. Our calculations are in good agreement with the timescale of reworking in the top 2-3 cm of regolith predicted by observations of LROC temporal pairs and by the depth profile of 26Al abundance in Apollo drill cores. Further, our calculations with secondaries included are consistent with the depth profile of in situ exposure age calculated from Is/FeO and cosmic ray track abundance in Apollo deep drill cores down to 50 cm. The mixing we predict is also consistent with the erasure of density anomalies, or 'cold spots', observed in the top decimeters of regolith by LRO Diviner, and the 1 Gyr lifetime of 1-10 m thick Copernican rays. This exploration of the Moon's surface evolution has profound implications for our understanding of other planetary bodies. We take advantage of this computationally inexpensive analytic model and apply it to describe mixing on a variety of bodies across the solar system, including asteroids, Mercury, and Europa. We use the results of ongoing studies that describe porosity calculations and cratering laws in porous asteroid-like material to explore the reworking rate experienced by an asteroid. On

  11. Exploring compositional variations on the surface of Mars applying mixing modeling to a telescopic spectral image

    NASA Technical Reports Server (NTRS)

    Merenyi, E.; Miller, J. S.; Singer, R. B.

    1992-01-01

    The linear mixing model approach was successfully applied to data sets of various natures. In these sets, the measured radiance could be assumed to be a linear combination of radiance contributions. The present work is an attempt to analyze a spectral image of Mars with linear mixing modeling.
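
    A minimal sketch of the linear mixing idea in R (all spectra and numbers invented): the observed spectrum is modelled as a linear combination of endmember spectra, and the fractions are recovered by least squares with a sum-to-one constraint appended as an extra weighted row.

      unmix <- function(E, y, w = 100) {
        A <- rbind(E, w * rep(1, ncol(E)))   # append the (weighted) sum-to-one constraint
        b <- c(y, w)
        qr.solve(A, b)                       # least-squares endmember fractions
      }

      # three hypothetical endmembers observed in five bands
      E <- cbind(bright = c(0.42, 0.45, 0.48, 0.50, 0.52),
                 dark   = c(0.08, 0.09, 0.10, 0.10, 0.11),
                 dust   = c(0.30, 0.34, 0.36, 0.40, 0.44))
      y <- 0.5 * E[, "bright"] + 0.3 * E[, "dark"] + 0.2 * E[, "dust"]
      round(unmix(E, y), 3)                  # recovers ~ (0.5, 0.3, 0.2)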

  12. Manpower Mix for Health Services

    PubMed Central

    Shuman, Larry J.; Young, John P.; Naddor, Eliezer

    1971-01-01

    A model is formulated to determine the mix of manpower and technology needed to provide health services of acceptable quality at a minimum total cost to the community. Total costs include both the direct costs associated with providing the services and with developing additional manpower and the indirect costs (shortage costs) resulting from not providing needed services. The model is applied to a hypothetical neighborhood health center, and its sensitivity to alternative policies is investigated by cost-benefit analyses. Possible extensions of the model to include dynamic elements in health delivery systems are discussed, as is its adaptation for use in hospital planning, with a changed objective function. PMID:5095652

  13. Fully-coupled analysis of jet mixing problems. Three-dimensional PNS model, SCIP3D

    NASA Technical Reports Server (NTRS)

    Wolf, D. E.; Sinha, N.; Dash, S. M.

    1988-01-01

    Numerical procedures formulated for the analysis of 3D jet mixing problems, as incorporated in the computer model, SCIP3D, are described. The overall methodology closely parallels that developed in the earlier 2D axisymmetric jet mixing model, SCIPVIS. SCIP3D integrates the 3D parabolized Navier-Stokes (PNS) jet mixing equations, cast in mapped cartesian or cylindrical coordinates, employing the explicit MacCormack Algorithm. A pressure split variant of this algorithm is employed in subsonic regions with a sublayer approximation utilized for treating the streamwise pressure component. SCIP3D contains both the ks and kW turbulence models, and employs a two component mixture approach to treat jet exhausts of arbitrary composition. Specialized grid procedures are used to adjust the grid growth in accordance with the growth of the jet, including a hybrid cartesian/cylindrical grid procedure for rectangular jets which moves the hybrid coordinate origin towards the flow origin as the jet transitions from a rectangular to circular shape. Numerous calculations are presented for rectangular mixing problems, as well as for a variety of basic unit problems exhibiting overall capabilities of SCIP3D.

  14. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we call on trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection asymmetric tail dependence, and their computational feasibility despite their three dimensionality.

  15. A new long-term care facilities model in nova scotia, Canada: protocol for a mixed methods study of care by design.

    PubMed

    Marshall, Emily Gard; Boudreau, Michelle Anne; Jensen, Jan L; Edgecombe, Nancy; Clarke, Barry; Burge, Frederick; Archibald, Greg; Taylor, Anthony; Andrew, Melissa K

    2013-11-29

    Prior to the implementation of a new model of care in long-term care facilities in the Capital District Health Authority, Halifax, Nova Scotia, residents entering long-term care were responsible for finding their own family physician. As a result, care was provided by many family physicians responsible for a few residents leading to care coordination and continuity challenges. In 2009, Capital District Health Authority (CDHA) implemented a new model of long-term care called "Care by Design" which includes: a dedicated family physician per floor, 24/7 on-call physician coverage, implementation of a standardized geriatric assessment tool, and an interdisciplinary team approach to care. In addition, a new Emergency Health Services program was implemented shortly after, in which specially trained paramedics dedicated to long-term care responses are able to address urgent care needs. These changes were implemented to improve primary and emergency care for vulnerable residents. Here we describe a comprehensive mixed methods research study designed to assess the impact of these programs on care delivery and resident outcomes. The results of this research will be important to guide primary care policy for long-term care. We aim to evaluate the impact of introducing a new model of a dedicated primary care physician and team approach to long-term care facilities in the CDHA using a mixed methods approach. As a mixed methods study, the quantitative and qualitative data findings will inform each other. Quantitatively we will measure a number of indicators of care in CDHA long-term care facilities pre and post-implementation of the new model. In the qualitative phase of the study we will explore the experience under the new model from the perspectives of stakeholders including family doctors, nurses, administration and staff as well as residents and family members. The proposed mixed method study seeks to evaluate and make policy recommendations related to primary care in long

  16. Who mixes with whom among men who have sex with men? Implications for modelling the HIV epidemic in southern India

    PubMed Central

    Mitchell, K.M.; Foss, A.M.; Prudden, H.J.; Mukandavire, Z.; Pickles, M.; Williams, J.R.; Johnson, H.C.; Ramesh, B.M.; Washington, R.; Isac, S.; Rajaram, S.; Phillips, A.E.; Bradley, J.; Alary, M.; Moses, S.; Lowndes, C.M.; Watts, C.H.; Boily, M.-C.; Vickerman, P.

    2014-01-01

    In India, the identity of men who have sex with men (MSM) is closely related to the role taken in anal sex (insertive, receptive or both), but little is known about sexual mixing between identity groups. Both role segregation (taking only the insertive or receptive role) and the extent of assortative (within-group) mixing are known to affect HIV epidemic size in other settings and populations. This study explores how different possible mixing scenarios, consistent with behavioural data collected in Bangalore, south India, affect both the HIV epidemic, and the impact of a targeted intervention. Deterministic models describing HIV transmission between three MSM identity groups (mostly insertive Panthis/Bisexuals, mostly receptive Kothis/Hijras and versatile Double Deckers), were parameterised with behavioural data from Bangalore. We extended previous models of MSM role segregation to allow each of the identity groups to have both insertive and receptive acts, in differing ratios, in line with field data. The models were used to explore four different mixing scenarios ranging from assortative (maximising within-group mixing) to disassortative (minimising within-group mixing). A simple model was used to obtain insights into the relationship between the degree of within-group mixing, R0 and equilibrium HIV prevalence under different mixing scenarios. A more complex, extended version of the model was used to compare the predicted HIV prevalence trends and impact of an HIV intervention when fitted to data from Bangalore. With the simple model, mixing scenarios with increased amounts of assortative (within-group) mixing tended to give rise to a higher R0 and increased the likelihood that an epidemic would occur. When the complex model was fit to HIV prevalence data, large differences in the level of assortative mixing were seen between the fits identified using different mixing scenarios, but little difference was projected in future HIV prevalence trends. An oral pre

  17. Mixed polymer brushes by sequential polymer addition: anchoring layer effect.

    PubMed

    Draper, John; Luzinov, Igor; Minko, Sergiy; Tokarev, Igor; Stamm, Manfred

    2004-05-11

    Smart surfaces can be described as surfaces that have the ability to respond in a controllable fashion to specific environmental stimuli. A heterogeneous (mixed) polymer brush (HPB) can provide a synthetic route to designing smart polymer surfaces. In this research we study an HPB comprised of end-grafted polystyrene (PS) and poly(2-vinyl pyridine) (P2VP). The synthesis of the HPB involves the use of an "intermolecular glue" acting as a binding/anchoring interlayer between the polymer brush and the substrate, a silicon wafer. We compare anchoring layers of epoxysilane (GPS), which forms a self-assembled monolayer with epoxy functionality, to poly(glycidyl methacrylate) (PGMA), which forms a macromolecular monolayer with epoxy functionality. The PS and P2VP were deposited onto the wafers in a sequential fashion to chemically graft PS in a first step and subsequently graft P2VP. The switching nature of the HPB was studied by rinsing the HPB in selective solvents and observing the change in water contact angle as a function of the HPB composition. Scanning probe microscopy was used to probe the topography and phase imagery of the HPB. The nature of the anchoring layer significantly affected the wettability and morphology of the mixed brushes.

  18. A size-composition resolved aerosol model for simulating the dynamics of externally mixed particles: SCRAM (v 1.0)

    NASA Astrophysics Data System (ADS)

    Zhu, S.; Sartelet, K. N.; Seigneur, C.

    2015-06-01

    The Size-Composition Resolved Aerosol Model (SCRAM) for simulating the dynamics of externally mixed atmospheric particles is presented. This new model classifies aerosols by both composition and size, based on a comprehensive combination of all chemical species and their mass-fraction sections. All three main processes involved in aerosol dynamics (coagulation, condensation/evaporation and nucleation) are included. The model is first validated by comparison with a reference solution and with results of simulations using internally mixed particles. The degree of mixing of particles is investigated in a box model simulation using data representative of air pollution in Greater Paris. The relative influence on the mixing state of the different aerosol processes (condensation/evaporation, coagulation) and of the algorithm used to model condensation/evaporation (bulk equilibrium, dynamic) is studied.

  19. Stochastic Mixing Model with Power Law Decay of Variance

    NASA Technical Reports Server (NTRS)

    Fedotov, S.; Ihme, M.; Pitsch, H.

    2003-01-01

    Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean. It converges to the mean value mu, while the variance sigma(sup 2)(sub c) (t) decays approximately as t(exp -1). Since the variance of the scalar decays faster than a sample mean (typically is greater than unity), we will introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model which is independent from restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate gamma(sub n) which we model in a first step as a deterministic function. In a second step, we generalize gamma(sub n) as a stochastic variable taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.

  20. Fully-coupled analysis of jet mixing problems. Part 1. Shock-capturing model, SCIPVIS

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Wolf, D. E.

    1984-01-01

    A computational model, SCIPVIS, is described which predicts the multiple cell shock structure in imperfectly expanded, turbulent, axisymmetric jets. The model spatially integrates the parabolized Navier-Stokes jet mixing equations using a shock-capturing approach in supersonic flow regions and a pressure-split approximation in subsonic flow regions. The regions are coupled using a viscous-characteristic procedure. Turbulence processes are represented via the solution of compressibility-corrected two-equation turbulence models. The formation of Mach discs in the jet and the interactive analysis of the wake-like mixing process occurring behind Mach discs is handled in a rigorous manner. Calculations are presented exhibiting the fundamental interactive processes occurring in supersonic jets and the model is assessed via comparisons with detailed laboratory data for a variety of under- and overexpanded jets.

  1. Multiple component end-member mixing model of dilution: hydrochemical effects of construction water at Yucca Mountain, Nevada, USA

    NASA Astrophysics Data System (ADS)

    Lu, Guoping; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.

    2008-12-01

    The standard dual-component and two-member linear mixing model is often used to quantify water mixing of different sources. However, it is no longer applicable whenever actual mixture concentrations are not exactly known because of dilution. For example, low-water-content (low-porosity) rock samples are leached for pore-water chemical compositions, which therefore are diluted in the leachates. A multicomponent, two-member mixing model of dilution has been developed to quantify mixing of water sources and multiple chemical components experiencing dilution in leaching. This extended mixing model was used to quantify fracture-matrix interaction in construction-water migration tests along the Exploratory Studies Facility (ESF) tunnel at Yucca Mountain, Nevada, USA. The model effectively recovers the spatial distribution of water and chemical compositions released from the construction water, and provides invaluable data on the matrix fracture interaction. The methodology and formulations described here are applicable to many sorts of mixing-dilution problems, including dilution in petroleum reservoirs, hydrospheres, chemical constituents in rocks and minerals, monitoring of drilling fluids, and leaching, as well as to environmental science studies.

  2. Incorporating concentration dependence in stable isotope mixing models.

    PubMed

    Phillips, Donald L; Koch, Paul L

    2002-01-01

    Stable isotopes are often used as natural labels to quantify the contributions of multiple sources to a mixture. For example, C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model assumes that the proportional contribution of a source to a mixture is the same for both elements (e.g., C, N). This may be a reasonable assumption if the concentrations are similar among all sources. However, one source is often particularly rich or poor in one element (e.g., N), which logically leads to a proportionate increase or decrease in the contribution of that source to the mixture for that element relative to the other element (e.g., C). We have developed a concentration-weighted linear mixing model, which assumes that for each element, a source's contribution is proportional to the contributed mass times the elemental concentration in that source. The model is outlined for two elements and three sources, but can be generalized to n elements and n+1 sources. Sensitivity analyses for C and N in three sources indicated that varying the N concentration of just one source had large and differing effects on the estimated source contributions of mass, C, and N. The same was true for a case study of bears feeding on salmon, moose, and N-poor plants. In this example, the estimated biomass contribution of salmon from the concentration-weighted model was markedly less than the standard model estimate. Application of the model to a captive feeding study of captive mink fed on salmon, lean beef, and C-rich, N-poor beef fat reproduced very closely the known dietary proportions, whereas the standard model failed to yield a set of positive source proportions. Use of this concentration-weighted model is recommended whenever the elemental concentrations vary substantially among the sources, which may occur in a variety of ecological and geochemical applications of stable isotope
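
    Because each element's mixture signature is a concentration-weighted average of the source signatures, the model reduces to a small linear system in the biomass fractions. The R sketch below uses invented concentrations and signatures loosely patterned on the bear example; it illustrates the idea and is not the authors' published numbers.

      # elemental concentrations (% by mass) and isotope signatures of three sources
      conc  <- rbind(C = c(salmon = 45, moose = 47, plants = 44),
                     N = c(salmon = 12, moose = 3,  plants = 1))
      delta <- rbind(C = c(-18, -26, -27),      # d13C of the sources
                     N = c( 14,   3,   1))      # d15N of the sources
      delta_mix <- c(C = -22.2, N = 12.2)       # observed mixture signatures

      # for each element e: sum_i f_i * conc[e,i] * (delta[e,i] - delta_mix[e]) = 0, plus sum(f) = 1
      A <- rbind(conc["C", ] * (delta["C", ] - delta_mix["C"]),
                 conc["N", ] * (delta["N", ] - delta_mix["N"]),
                 rep(1, 3))
      f <- solve(A, c(0, 0, 1))                 # biomass fractions of the three sources
      round(f, 3)                               # about 0.50, 0.32, 0.18 with these numbers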

  3. Intuitive Logic Revisited: New Data and a Bayesian Mixed Model Meta-Analysis

    PubMed Central

    Singmann, Henrik; Klauer, Karl Christoph; Kellen, David

    2014-01-01

    Recent research on syllogistic reasoning suggests that the logical status (valid vs. invalid) of even difficult syllogisms can be intuitively detected via differences in conceptual fluency between logically valid and invalid syllogisms when participants are asked to rate how much they like a conclusion following from a syllogism (Morsanyi & Handley, 2012). These claims of an intuitive logic are at odds with most theories on syllogistic reasoning which posit that detecting the logical status of difficult syllogisms requires effortful and deliberate cognitive processes. We present new data replicating the effects reported by Morsanyi and Handley, but show that this effect is eliminated when controlling for a possible confound in terms of conclusion content. Additionally, we reanalyze three studies without this confound with a Bayesian mixed model meta-analysis (i.e., controlling for participant and item effects) which provides evidence for the null-hypothesis and against Morsanyi and Handley's claim. PMID:24755777

  4. Modelling Kepler red giants in eclipsing binaries: calibrating the mixing-length parameter with asteroseismology

    NASA Astrophysics Data System (ADS)

    Li, Tanda; Bedding, Timothy R.; Huber, Daniel; Ball, Warrick H.; Stello, Dennis; Murphy, Simon J.; Bland-Hawthorn, Joss

    2018-03-01

    Stellar models rely on a number of free parameters. High-quality observations of eclipsing binary stars observed by Kepler offer a great opportunity to calibrate model parameters for evolved stars. Our study focuses on six Kepler red giants with the goal of calibrating the mixing-length parameter of convection as well as the asteroseismic surface term in models. We introduce a new method to improve the identification of oscillation modes that exploits theoretical frequencies to guide the mode identification (`peak-bagging') stage of the data analysis. Our results indicate that the convective mixing-length parameter (α) is ≈14 per cent larger for red giants than for the Sun, in agreement with recent results from modelling the APOGEE stars. We found that the asteroseismic surface term (i.e. the frequency offset between the observed and predicted modes) correlates with stellar parameters (Teff, log g) and the mixing-length parameter. This frequency offset generally decreases as giants evolve. The two coefficients a-1 and a3 for the inverse and cubic terms that have been used to describe the surface term correction are found to correlate linearly. The effect of the surface term is also seen in the p-g mixed modes; however, established methods for correcting the effect are not able to properly correct the g-dominated modes in late evolved stars.

  5. Effect of mixing method on the mixing degree during the preparation of triturations.

    PubMed

    Nakamura, Hitoshi; Yanagihara, Yoshitsugu; Sekiguchi, Hiroko; Komada, Fusao; Kawabata, Haruno; Ohtani, Michiteru; Saitoh, Yukiya; Kariya, Satoru; Suzuki, Hiroshi; Uchino, Katsuyoshi; Iga, Tatsuji

    2004-03-01

    By using lactose colored with erythrocin, we investigated the effects of mixing methods on the degree of mixing during the preparation of triturations with a mortar and pestle. The extent of powder dilution was set to 4- to 64-fold in the experiments. We compared the results obtained by using two methods: (1) one-step mixing of powders after addition of diluents and (2) gradual mixing of powders after addition of diluents. As diluents, we used crystallized lactose and powdered lactose for the preparation of triturations. In the preparation of 64-fold triturations, an excellent degree of mixing was obtained, with CV values of less than 6.08%, for both preparation methods and for the two kinds of diluents. The mixing of two kinds of powders whose particle size distributions were similar resulted in a much better degree of mixing, with CV values of less than 3.0%. However, the concentration of principal agents in the 64-fold trituration was reduced by 20% due to the adsorption of dye to the apparatus. Under conditions in which a much higher dilution rate and/or a much better degree of dilution is required, it is necessary to dilute powders while considering their physical properties and to determine the concentrations of the principal agents after mixing.

  6. Incompletely Mixed Surface Transient Storage Zones at River Restoration Structures: Modeling Implications

    NASA Astrophysics Data System (ADS)

    Endreny, T. A.; Robinson, J.

    2012-12-01

    River restoration structures, also known as river steering deflectors, are designed to reduce bank shear stress by generating wake zones between the bank and the constricted conveyance region. There is interest in characterizing the surface transient storage (STS) and associated biogeochemical processing in the STS zones around these structures to quantify the ecosystem benefits of river restoration. This research explored how the hydraulics around river restoration structures prohibit the application of transient storage models designed for homogeneous, completely mixed STS zones. We used slug and constant rate injections of a conservative tracer in a 3rd order river in Onondaga County, NY over the course of five experiments at varying flow regimes. Recovered breakthrough curves spanned a transect including the main channel and wake zone at a j-hook restoration structure. We noted divergent patterns of peak solute concentration and timing within the wake zone regardless of transect location within the structure. Analysis reveals an inhomogeneous STS zone which is frequently still loading tracer after the main channel has peaked. The breakthrough curve loading patterns at the restoration structure violated the assumptions of simplified "random walk" 2-zone transient storage models which seek to identify representative STS zones and zone locations. Use of structure-scale Wiener filter based multi-rate mass transfer models to characterize STS zone residence times is similarly dependent on a representative zone location. Each 2-zone model assumes one zone is a completely mixed STS zone and the other a completely mixed main channel. Our research reveals limits to simple application of the recently developed 2-zone models, and raises important questions about the measurement scale necessary to identify critical STS properties at restoration sites. An explanation for the incompletely mixed STS zone may be the distinct hydraulics at restoration sites, including a constrained

  7. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Cliff

    2015-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.

  8. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2016-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.

  9. Numerical simulation of the non-Newtonian mixing layer

    NASA Technical Reports Server (NTRS)

    Azaiez, Jalel; Homsy, G. M.

    1993-01-01

    This work is a continuing effort to advance our understanding of the effects of polymer additives on the structures of the mixing layer. In anticipation of full nonlinear simulations of the non-Newtonian mixing layer, we examined in a first stage the linear stability of the non-Newtonian mixing layer. The results of this study show that, for a fluid described by the Oldroyd-B model, viscoelasticity reduces the instability of the inviscid mixing layer in a special limit where the ratio (We/Re) is of order 1 where We is the Weissenberg number, a measure of the elasticity of the flow, and Re is the Reynolds number. In the present study, we pursue this project with numerical simulations of the non-Newtonian mixing layer. Our primary objective is to determine the effects of viscoelasticity on the roll-up structure. We also examine the origin of the numerical instabilities usually encountered in the simulations of non-Newtonian fluids.

  10. Model for compressible turbulence in hypersonic wall boundary and high-speed mixing layers

    NASA Astrophysics Data System (ADS)

    Bowersox, Rodney D. W.; Schetz, Joseph A.

    1994-07-01

    The most common approach to Navier-Stokes predictions of turbulent flows is based on either the classical Reynolds- or Favre-averaged Navier-Stokes equations or some combination. The main goal of the current work was to numerically assess the effects of the compressible turbulence terms that were experimentally found to be important. The compressible apparent mass mixing length extension (CAMMLE) model, which was based on measured experimental data, was found to produce accurate predictions of the measured compressible turbulence data for both the wall-bounded and free mixing layer. Hence, that model was incorporated into a finite volume Navier-Stokes code.

  11. Longitudinal mathematics development of students with learning disabilities and students without disabilities: a comparison of linear, quadratic, and piecewise linear mixed effects models.

    PubMed

    Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz

    2015-04-01

    Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model captured best the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
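
    A piecewise linear mixed-effects model of this kind can be written compactly with a spline-style basis; the R sketch below (hypothetical column names: math, grade, ld for the learning-disability indicator, id) places a single knot and lets the disability indicator shift both the intercept and the pre- and post-knot slopes.

      library(lme4)

      knot   <- 5                                  # hypothetical knot (e.g., end of elementary school)
      dat$t1 <- pmin(dat$grade, knot)              # slope segment before the knot
      dat$t2 <- pmax(dat$grade - knot, 0)          # additional slope after the knot

      fit <- lmer(math ~ (t1 + t2) * ld + (t1 + t2 | id), data = dat, REML = TRUE)
      summary(fit)
      # ld            : achievement gap at school entry
      # ld:t1, ld:t2  : differences in growth rate before/after the knot for students with LD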

  12. A quantitative approach to combine sources in stable isotope mixing models

    EPA Science Inventory

    Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...

  13. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
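
    As a generic illustration of the model class discussed here (not the intervals proposed in the article), a logistic generalized linear mixed model can be fitted in R with lme4 and a profile confidence interval obtained for the cluster standard deviation, which quantifies the heterogeneity due to clustering; the data and names are invented.

      library(lme4)
      set.seed(2)

      # Simulated clustered binary outcomes: 50 clusters of 20 observations each.
      k <- 50; m <- 20
      dat <- data.frame(cluster = factor(rep(1:k, each = m)), x = rnorm(k * m))
      u <- rnorm(k, 0, 0.8)                       # cluster random effects
      dat$y <- rbinom(k * m, 1, plogis(-0.5 + 0.7 * dat$x + u[dat$cluster]))

      # Logistic GLMM; the random-intercept SD measures between-cluster heterogeneity.
      fit <- glmer(y ~ x + (1 | cluster), data = dat, family = binomial)

      # Profile confidence interval for the random-effect parameters only.
      confint(fit, parm = "theta_", method = "profile")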

  14. Euler-Lagrange CFD modelling of unconfined gas mixing in anaerobic digestion.

    PubMed

    Dapelo, Davide; Alberini, Federico; Bridgeman, John

    2015-11-15

    A novel Euler-Lagrangian (EL) computational fluid dynamics (CFD) finite volume-based model to simulate the gas mixing of sludge for anaerobic digestion is developed and described. Fluid motion is driven by momentum transfer from bubbles to liquid. Model validation is undertaken by assessing the flow field in a labscale model with particle image velocimetry (PIV). Conclusions are drawn about the upscaling and applicability of the model to full-scale problems, and recommendations are given for optimum application. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Correcting for population structure and kinship using the linear mixed model: theory and extensions.

    PubMed

    Hoffman, Gabriel E

    2013-01-01

    Population structure and kinship are widespread confounding factors in genome-wide association studies (GWAS). It has been standard practice to include principal components of the genotypes in a regression model in order to account for population structure. More recently, the linear mixed model (LMM) has emerged as a powerful method for simultaneously accounting for population structure and kinship. The statistical theory underlying the differences in empirical performance between modeling principal components as fixed versus random effects has not been thoroughly examined. We undertake an analysis to formalize the relationship between these widely used methods and elucidate the statistical properties of each. Moreover, we introduce a new statistic, effective degrees of freedom, that serves as a metric of model complexity and a novel low rank linear mixed model (LRLMM) to learn the dimensionality of the correction for population structure and kinship, and we assess its performance through simulations. A comparison of the results of LRLMM and a standard LMM analysis applied to GWAS data from the Multi-Ethnic Study of Atherosclerosis (MESA) illustrates how our theoretical results translate into empirical properties of the mixed model. Finally, the analysis demonstrates the ability of the LRLMM to substantially boost the strength of an association for HDL cholesterol in Europeans.

  16. Sensitivity of Above-Ground Biomass Estimates to Height-Diameter Modelling in Mixed-Species West African Woodlands

    PubMed Central

    Aynekulu, Ermias; Pitkänen, Sari; Packalen, Petteri

    2016-01-01

    It has been suggested that above-ground biomass (AGB) inventories should include tree height (H), in addition to diameter (D). As H is a difficult variable to measure, H-D models are commonly used to predict H. We tested a number of approaches for H-D modelling, including additive terms which increased the complexity of the model, and observed how differences in tree-level predictions of H propagated to plot-level AGB estimations. We were especially interested in detecting whether the choice of method can lead to bias. The compared approaches listed in the order of increasing complexity were: (B0) AGB estimations from D-only; (B1) involving also H obtained from a fixed-effects H-D model; (B2) involving also species; (B3) including also between-plot variability as random effects; and (B4) involving multilevel nested random effects for grouping plots in clusters. In light of the results, the modelling approach affected the AGB estimation significantly in some cases, although differences were negligible for some of the alternatives. The most important differences were found between including H or not in the AGB estimation. We observed that AGB predictions without H information were very sensitive to the environmental stress parameter (E), which can induce a critical bias. Regarding the H-D modelling, the most relevant effect was found when species was included as an additive term. We presented a two-step methodology, which succeeded in identifying the species for which the general H-D relation needed to be modified. Based on the results, our final choice was the single-level mixed-effects model (B3), which accounts for the species but also for the plot random effects reflecting site-specific factors such as soil properties and degree of disturbance. PMID:27367857
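
    A single-level mixed-effects H-D model in the spirit of alternative B3 can be sketched in R with lme4; the log-log allometric form, the species term, and all variable names and values are illustrative assumptions rather than the authors' exact specification.

      library(lme4)
      set.seed(3)

      # Simulated tree list: diameter D (cm), species, and sample plot.
      n <- 600
      trees <- data.frame(plot = factor(sample(1:30, n, replace = TRUE)),
                          species = factor(sample(c("A", "B", "C"), n, replace = TRUE)),
                          D = runif(n, 5, 60))
      plot_eff <- rnorm(30, 0, 0.08)
      sp_eff <- c(A = 0, B = 0.10, C = -0.12)
      trees$H <- exp(0.6 + 0.55 * log(trees$D) + sp_eff[trees$species] +
                     plot_eff[trees$plot] + rnorm(n, 0, 0.1))

      # Species as an additive fixed term (cf. B2) plus a plot-level random
      # intercept for site-specific factors (cf. B3).
      hd_b3 <- lmer(log(H) ~ log(D) + species + (1 | plot), data = trees)
      summary(hd_b3)

      # Predicted heights (back-transformed, ignoring bias correction) would then
      # feed a tree-level AGB equation of the form AGB = f(D, H).
      trees$H_hat <- exp(predict(hd_b3))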

  17. Septic tank additive impacts on microbial populations.

    PubMed

    Pradhan, S; Hoover, M T; Clark, G H; Gumpertz, M; Wollum, A G; Cobb, C; Strock, J

    2008-01-01

    Environmental health specialists, other onsite wastewater professionals, scientists, and homeowners have questioned the effectiveness of septic tank additives. This paper describes an independent, third-party, field-scale research study of the effects of three liquid bacterial septic tank additives and a control (no additive) on septic tank microbial populations. Microbial populations were measured quarterly in a field study for 12 months in 48 full-size, functioning septic tanks. Bacterial populations in the 48 septic tanks were statistically analyzed with a mixed linear model. Additive effects were assessed for three septic tank maintenance levels (low, intermediate, and high). Dunnett's t-test for tank bacteria (alpha = .05) indicated that none of the treatments were significantly different, overall, from the control at the statistical level tested. In addition, the additives had no significant effects on septic tank bacterial populations at any of the septic tank maintenance levels. Additional controlled, field-based research is warranted, however, to address additional additives and experimental conditions.
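
    The mixed-linear-model-plus-Dunnett analysis described can be sketched in R with lme4 and multcomp; the layout, counts, and variable names below are assumptions for illustration, not the study's data.

      library(lme4)
      library(multcomp)
      set.seed(4)

      # Simulated quarterly counts (log10 CFU/mL) for 48 tanks: a control and three
      # additives crossed with three maintenance levels, four quarterly visits each.
      tanks <- expand.grid(treatment = factor(c("control", "add1", "add2", "add3"),
                                              levels = c("control", "add1", "add2", "add3")),
                           maintenance = factor(c("low", "intermediate", "high")),
                           rep = 1:4, quarter = 1:4)
      tanks$tank <- factor(with(tanks, paste(treatment, maintenance, rep)))
      tank_eff <- rnorm(nlevels(tanks$tank), 0, 0.3)
      tanks$logcfu <- 7 + tank_eff[tanks$tank] + rnorm(nrow(tanks), 0, 0.4)

      # Mixed linear model: fixed treatment and maintenance effects, random intercept
      # for repeated measurements on the same tank.
      fit <- lmer(logcfu ~ treatment + maintenance + (1 | tank), data = tanks)

      # Dunnett-type comparisons of each additive against the no-additive control.
      summary(glht(fit, linfct = mcp(treatment = "Dunnett")))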

  18. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that the Lasso does not possess the oracle property, that is, it does not asymptotically perform as though the true underlying model were given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess oracle properties; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which takes the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data was simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
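
    Outside the mixed-effects setting, the weighting idea can be illustrated with glmnet in ordinary linear regression: ALasso uses weights 1/|beta_ML|, while an AALasso-style weight additionally brings in the standard error. The sketch below is an R illustration under these assumptions, not the authors' NONMEM implementation.

      library(glmnet)
      set.seed(5)

      # Simulated correlated covariates; only the first two affect the response.
      n <- 200; p <- 8; rho <- 0.5
      Sigma <- rho ^ abs(outer(1:p, 1:p, "-"))
      X <- matrix(rnorm(n * p), n, p) %*% chol(Sigma)
      y <- 1 + 0.8 * X[, 1] - 0.6 * X[, 2] + rnorm(n)

      # Initial ML (here OLS) fit supplies coefficients and standard errors.
      ols  <- lm(y ~ X)
      beta <- coef(ols)[-1]
      se   <- coef(summary(ols))[-1, "Std. Error"]

      # ALasso weights penalise small coefficients; AALasso-style weights use the
      # SE-to-coefficient ratio so unstable (collinear) coefficients are penalised more.
      w_alasso  <- 1 / abs(beta)
      w_aalasso <- se / abs(beta)

      fit_alasso  <- cv.glmnet(X, y, alpha = 1, penalty.factor = w_alasso)
      fit_aalasso <- cv.glmnet(X, y, alpha = 1, penalty.factor = w_aalasso)
      coef(fit_aalasso, s = "lambda.min")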

  19. Versatility of Cooperative Transcriptional Activation: A Thermodynamical Modeling Analysis for Greater-Than-Additive and Less-Than-Additive Effects

    PubMed Central

    Frank, Till D.; Carmody, Aimée M.; Kholodenko, Boris N.

    2012-01-01

    We derive a statistical model of transcriptional activation using equilibrium thermodynamics of chemical reactions. We examine to what extent this statistical model predicts synergy effects of cooperative activation of gene expression. We determine parameter domains in which greater-than-additive and less-than-additive effects are predicted for cooperative regulation by two activators. We show that the statistical approach can be used to identify different causes of synergistic greater-than-additive effects: nonlinearities of the thermostatistical transcriptional machinery and three-body interactions between RNA polymerase and two activators. In particular, our model-based analysis suggests that at low transcription factor concentrations cooperative activation cannot yield synergistic greater-than-additive effects, i.e., DNA transcription can only exhibit less-than-additive effects. Accordingly, transcriptional activity turns from synergistic greater-than-additive responses at relatively high transcription factor concentrations into less-than-additive responses at relatively low concentrations. In addition, two types of re-entrant phenomena are predicted. First, our analysis predicts that under particular circumstances transcriptional activity will feature a sequence of less-than-additive, greater-than-additive, and eventually less-than-additive effects when for fixed activator concentrations the regulatory impact of activators on the binding of RNA polymerase to the promoter increases from weak, to moderate, to strong. Second, for appropriate promoter conditions when activator concentrations are increased then the aforementioned re-entrant sequence of less-than-additive, greater-than-additive, and less-than-additive effects is predicted as well. Finally, our model-based analysis suggests that even for weak activators that individually induce only negligible increases in promoter activity, promoter activity can exhibit greater-than-additive responses when

  20. Models to understand the population-level impact of mixed strain M. tuberculosis infections.

    PubMed

    Sergeev, Rinat; Colijn, Caroline; Cohen, Ted

    2011-07-07

    Over the past decade, numerous studies have identified tuberculosis patients in whom more than one distinct strain of Mycobacterium tuberculosis is present. While it has been shown that these mixed strain infections can reduce the probability of treatment success for individuals simultaneously harboring both drug-sensitive and drug-resistant strains, it is not yet known if and how this phenomenon impacts the long-term dynamics for tuberculosis within communities. Strain-specific differences in immunogenicity and associations with drug resistance suggest that a better understanding of how strains compete within hosts will be necessary to project the effects of mixed strain infections on the future burden of drug-sensitive and drug-resistant tuberculosis. In this paper, we develop a modeling framework that allows us to investigate mechanisms of strain competition within hosts and to assess the long-term effects of such competition on the ecology of strains in a population. These models permit us to systematically evaluate the importance of unknown parameters and to suggest priority areas for future experimental research. Despite the current scarcity of data to inform the values of several model parameters, we are able to draw important qualitative conclusions from this work. We find that mixed strain infections may promote the coexistence of drug-sensitive and drug-resistant strains in two ways. First, mixed strain infections allow a strain with a lower basic reproductive number to persist in a population where it would otherwise be outcompeted if it has competitive advantages within a co-infected host. Second, some individuals progressing to phenotypically drug-sensitive tuberculosis from a state of mixed drug-sensitive and drug-resistant infection may retain small subpopulations of drug-resistant bacteria that can flourish once the host is treated with antibiotics. We propose that these types of mixed infections, by increasing the ability of low fitness drug
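
    A minimal two-strain transmission sketch with superinfection can convey how such competition might be encoded; the R code below (deSolve) is a generic toy model with arbitrary parameter values, not the authors' model.

      library(deSolve)

      # S: susceptible; Is/Ir: infected with the sensitive/resistant strain;
      # Im: mixed infection. 'sigma' scales susceptibility to superinfection and
      # 'phi' is the share of transmission from mixed infections carrying the
      # sensitive strain (a crude within-host competitive advantage).
      two_strain <- function(t, state, p) {
        with(as.list(c(state, p)), {
          N <- S + Is + Ir + Im
          lam_s <- beta_s * (Is + phi * Im) / N
          lam_r <- beta_r * (Ir + (1 - phi) * Im) / N
          dS  <- mu * N - (lam_s + lam_r) * S - mu * S
          dIs <- lam_s * S - sigma * lam_r * Is - (mu + gamma) * Is
          dIr <- lam_r * S - sigma * lam_s * Ir - (mu + gamma) * Ir
          dIm <- sigma * (lam_r * Is + lam_s * Ir) - (mu + gamma) * Im
          list(c(dS, dIs, dIr, dIm))
        })
      }

      pars  <- c(mu = 0.02, gamma = 0.1, beta_s = 0.35, beta_r = 0.25,
                 sigma = 0.6, phi = 0.4)
      state <- c(S = 990, Is = 8, Ir = 2, Im = 0)
      out <- ode(y = state, times = seq(0, 200, by = 1), func = two_strain, parms = pars)
      tail(out)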

  1. Modeling Magma Mixing: Evidence from U-series age dating and Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Philipp, R.; Cooper, K. M.; Bergantz, G. W.

    2007-12-01

    Magma mixing and recharge is a ubiquitous process in the shallow crust, which can trigger eruption and cause magma hybridization. Phenocrysts in mixed magmas are recorders of magma mixing and can be studied by in-situ techniques and analyses of bulk mineral separates. To better understand if micro-textural and compositional information reflects local or reservoir-scale events, a physical model for gathering and dispersal of crystals is necessary. We present the results of a combined geochemical and fluid dynamical study of magma mixing processes at Volcan Quizapu, Chile; two large (1846/47 AD and 1932 AD) dacitic eruptions from the same vent area were triggered by andesitic recharge magma and show various degrees of magma mixing. Employing a multiphase numerical fluid dynamic model, we simulated a simple mixing process of vesiculated mafic magma intruded into a crystal-bearing silicic reservoir. This unstable condition leads to overturn and mixing. In a second step, we use the velocity field obtained to calculate the flow path of 5000 crystals randomly distributed over the entire system. Those particles mimic the phenocryst response to the convective motion. There is little local relative motion between silicate liquid and crystals due to the high viscosity of the melts and the rapid overturn rate of the system. Of special interest is the crystal dispersal and gathering, which is quantified by comparing the distance at the beginning and end of the simulation for all particle pairs that are initially closer than a length scale chosen between 1 and 10 m. At the start of the simulation, both the resident and new intruding (mafic) magmas have a unique particle population. Depending on the Reynolds number (Re) and the chosen characteristic length scale of different phenocryst-pairs, we statistically describe the heterogeneity of crystal populations on the thin section scale. For large Re (approx. 25) and a short characteristic length scale of particle

  2. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    PubMed

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  3. Trends in stratospheric ozone profiles using functional mixed models

    NASA Astrophysics Data System (ADS)

    Park, A.; Guillas, S.; Petropavlovskikh, I.

    2013-11-01

    This paper is devoted to the modeling of altitude-dependent patterns of ozone variations over time. Umkehr ozone profiles (quarter of Umkehr layer) from 1978 to 2011 are investigated at two locations: Boulder (USA) and Arosa (Switzerland). The study consists of two statistical stages. First we approximate ozone profiles employing an appropriate basis. To capture primary modes of ozone variations without losing essential information, a functional principal component analysis is performed. It penalizes roughness of the function and smooths excessive variations in the shape of the ozone profiles. As a result, data-driven basis functions (empirical basis functions) are obtained. The coefficients (principal component scores) corresponding to the empirical basis functions represent dominant temporal evolution in the shape of ozone profiles. We use those time series coefficients in the second statistical step to reveal the important sources of the patterns and variations in the profiles. We estimate the effects of covariates - month, year (trend), quasi-biennial oscillation, the solar cycle, the Arctic oscillation, the El Niño/Southern Oscillation cycle and the Eliassen-Palm flux - on the principal component scores of ozone profiles using additive mixed effects models. The effects are represented as smooth functions and the smooth functions are estimated by penalized regression splines. We also impose a heteroscedastic error structure that reflects the observed seasonality in the errors. The more complex error structure enables us to provide more accurate estimates of influences and trends, together with enhanced uncertainty quantification. Also, we are able to capture fine variations in the time evolution of the profiles, such as the semi-annual oscillation. We conclude by showing the trends by altitude over Boulder and Arosa, as well as for total column ozone. There are great variations in the trends across altitudes, which highlights the benefits of modeling ozone
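
    The second-stage additive mixed model for a single principal-component score series can be sketched in R with mgcv; the simulated data, the subset of covariates, the AR(1) error structure, and the month-wise variances below are illustrative assumptions, not the paper's fitted model.

      library(mgcv)   # loads nlme, which provides corAR1() and varIdent()
      set.seed(7)

      # Simulated monthly PC-score series (34 years) with a smooth trend,
      # a seasonal cycle, a QBO-like covariate, and winter-inflated noise.
      n   <- 34 * 12
      dat <- data.frame(t = 1:n, month = rep(1:12, 34), qbo = sin(2 * pi * (1:n) / 28))
      dat$score <- 0.002 * dat$t + 0.5 * sin(2 * pi * dat$month / 12) + 0.3 * dat$qbo +
                   rnorm(n, 0, 0.4 + 0.2 * (dat$month %in% c(12, 1, 2)))

      # Additive mixed model: smooth trend, cyclic seasonal smooth, linear QBO term,
      # AR(1) errors, and month-dependent (heteroscedastic) residual variances.
      fit <- gamm(score ~ s(t) + s(month, bs = "cc") + qbo,
                  data = dat,
                  correlation = corAR1(form = ~ t),
                  weights = varIdent(form = ~ 1 | month))
      summary(fit$gam)
      plot(fit$gam, pages = 1)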

  4. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions by an asymmetric distribution for model errors. To deal with missingness, we employ an informative missing data model. The joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazard model for the competing risks process and the missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  5. Multilevel nonlinear mixed-effects models for the modeling of earlywood and latewood microfibril angle

    Treesearch

    Lewis Jordon; Richard F. Daniels; Alexander Clark; Rechun He

    2005-01-01

    Earlywood and latewood microfibril angle (MFA) was determined at 1-millimeter intervals from disks at 1.4 meters, then at 3-meter intervals to a height of 13.7 meters, from 18 loblolly pine (Pinus taeda L.) trees grown in southeastern Texas. A modified three-parameter logistic function with mixed effects is used for modeling earlywood and latewood...
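
    A three-parameter logistic curve with a tree-level random effect of this general kind can be sketched in R with nlme; the simulated data, the parameterisation, and the starting values are assumptions for illustration, not the study's fitted model.

      library(nlme)
      set.seed(8)

      # Simulated MFA profiles: MFA declines from pith to bark along ring number,
      # with the near-pith level (asym) varying randomly among 18 trees.
      n_tree <- 18
      d <- expand.grid(tree = factor(1:n_tree), ring = 1:25)
      asym_i <- 35 + rnorm(n_tree, 0, 3)
      d$mfa <- asym_i[d$tree] / (1 + exp((d$ring - 8) / 3)) + rnorm(nrow(d), 0, 1.5)

      # Three-parameter logistic (asymptote, inflection point, scale) with a
      # random asymptote per tree.
      fit <- nlme(mfa ~ asym / (1 + exp((ring - xmid) / scal)),
                  data = d,
                  fixed = asym + xmid + scal ~ 1,
                  random = asym ~ 1 | tree,
                  start = c(asym = 30, xmid = 7, scal = 2.5))
      summary(fit)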

  6. Effects of additional food in a delayed predator-prey model.

    PubMed

    Sahoo, Banshidhar; Poria, Swarup

    2015-03-01

    We examine the effects of supplying additional food to the predator in a gestation-delay-induced predator-prey system with habitat complexity. Additional food works in favor of predator growth in our model. The presence of additional food reduces the predatory attack rate on prey in the model. By supplying additional food, we can control the predator population. Taking the time delay as the bifurcation parameter, the stability of the coexisting equilibrium point is analyzed. Hopf bifurcation analysis is done with respect to time delay in the presence of additional food. The direction of Hopf bifurcations and the stability of bifurcated periodic solutions are determined by applying the normal form theory and the center manifold theorem. The qualitative dynamical behavior of the model is simulated using experimental parameter values. It is observed that fluctuations of the population size can be controlled either by supplying additional food suitably or by increasing the degree of habitat complexity. It is pointed out that Hopf bifurcation occurs in the system when the delay crosses some critical value. This critical value of delay strongly depends on the quality and quantity of supplied additional food. Therefore, the variation of predator population significantly affects the dynamics of the model. Model results are compared with experimental results and biological implications of the analytical findings are discussed in the conclusion section. Copyright © 2015 Elsevier Inc. All rights reserved.
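
    A gestation-delay predator-prey toy model with a constant additional-food term can be integrated in R with deSolve's dede(); the functional forms and all parameter values below are arbitrary illustrations, not the system analysed in the article.

      library(deSolve)

      # Prey N and predator P. Predation follows a Holling type II response; the
      # predator's growth uses prey density a gestation delay 'tau' ago, plus a
      # constant additional-food term 'A' that supplements the predator's diet.
      pp_delay <- function(t, state, p) {
        with(as.list(c(state, p)), {
          Nlag <- if (t <= tau) N_init else lagvalue(t - tau, 1)
          dN <- r * N * (1 - N / K) - a * N * P / (1 + a * h * N)
          dP <- e * a * (Nlag + A) * P / (1 + a * h * (Nlag + A)) - m * P
          list(c(dN, dP))
        })
      }

      pars  <- c(r = 1, K = 50, a = 0.1, h = 0.2, e = 0.5, m = 0.3,
                 A = 5, tau = 4, N_init = 20)
      state <- c(N = 20, P = 5)
      out <- dede(y = state, times = seq(0, 300, by = 0.5), func = pp_delay, parms = pars)
      matplot(out[, 1], out[, -1], type = "l", lty = 1,
              xlab = "time", ylab = "density")   # larger tau tends to induce cycles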

  7. Addition of Sulfometuron Methyl to Fall Site Preparation Tank Mixes Improves Herbaceous Weed Control

    Treesearch

    A.W. Ezell

    2002-01-01

    A total of 12 herbicide treatments were applied to a recently harvested forest site in Winston County, MS. All treatments were representative of forest site preparation tank mixtures and were applied in early September 1999. Three ounces of Oust® were included in two of the tank mixes, and 19 ounces of Oustar® were included in two of the mixes. All treatments were...

  8. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models

    PubMed Central

    Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong

    2016-01-01

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471

  9. Consequences of non-random species loss for decomposition dynamics: Experimental evidence for additive and non-additive effects

    Treesearch

    Becky A. Ball; Mark D. Hunter; John S. Kominoski; Christopher M. Swan; Mark A. Bradford

    2008-01-01

    Although litter decomposition is a fundamental ecological process, most of our understanding comes from studies of single-species decay. Recently, litter-mixing studies have tested whether monoculture data can be applied to mixed-litter systems. These studies have mainly attempted to detect non-additive effects of litter mixing, which address potential consequences of...

  10. QCD sum-rules analysis of vector (1⁻⁻) heavy quarkonium meson-hybrid mixing

    NASA Astrophysics Data System (ADS)

    Palameta, A.; Ho, J.; Harnett, D.; Steele, T. G.

    2018-02-01

    We use QCD Laplace sum rules to study meson-hybrid mixing in vector (1⁻⁻) heavy quarkonium. We compute the QCD cross-correlator between a heavy meson current and a heavy hybrid current within the operator product expansion. In addition to leading-order perturbation theory, we include four- and six-dimensional gluon condensate contributions as well as a six-dimensional quark condensate contribution. We construct several single and multiresonance models that take known hadron masses as inputs. We investigate which resonances couple to both currents and so exhibit meson-hybrid mixing. Compared to single resonance models that include only the ground state, we find that models that also include excited states lead to significantly improved agreement between QCD and experiment. In the charmonium sector, we find that meson-hybrid mixing is consistent with a two-resonance model consisting of the J/ψ and a 4.3 GeV resonance. In the bottomonium sector, we find evidence for meson-hybrid mixing in the ϒ(1S), ϒ(2S), ϒ(3S), and ϒ(4S).

  11. The effect of B2O3 addition on the crystallization of amorphous TiO2-ZrO2 mixed oxide

    NASA Astrophysics Data System (ADS)

    Mao, Dongsen; Lu, Guanzhong

    2007-02-01

    The effect of B2O3 addition on the crystallization of amorphous TiO2-ZrO2 mixed oxide was investigated by X-ray diffraction (XRD) and thermogravimetric and differential thermal analysis (TG/DTA). The TiO2-ZrO2 mixed oxide was prepared by a co-precipitation method with aqueous ammonia as the precipitation reagent. Boric acid was used as a source of boria, and boria contents varied from 2 to 20 wt%. The results indicate that the addition of a small amount of boria (<8 wt%) hinders the crystallization of amorphous TiO2-ZrO2 into a crystalline ZrTiO4 compound, while a larger amount of boria (⩾8 wt%) promotes the crystallization process. FT-IR spectroscopy and ¹¹B MAS NMR results show that tetrahedral borate species predominate at low boria loading, and trigonal borate species increase with increasing boria loading. Thus it is concluded that highly dispersed tetrahedral BO4 units delay, while a build-up of trigonal BO3 promotes, the crystallization of amorphous TiO2-ZrO2 to form ZrTiO4 crystals.

  12. Comprehensive European dietary exposure model (CEDEM) for food additives.

    PubMed

    Tennant, David R

    2016-05-01

    European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.

  13. Quantifying Diapycnal Mixing in an Energetic Ocean

    NASA Astrophysics Data System (ADS)

    Ivey, Gregory N.; Bluteau, Cynthia E.; Jones, Nicole L.

    2018-01-01

    Turbulent diapycnal mixing controls global circulation and the distribution of tracers in the ocean. For turbulence in stratified shear flows, we introduce a new turbulent length scale Lρ dependent on χ. We show the flux Richardson number Rif is determined by the dimensionless ratio of three length scales: the Ozmidov scale LO, the Corrsin shear scale LS, and Lρ. This new model predicts that Rif varies from 0 to 0.5, which we test primarily against energetic field observations collected in 100 m of water on the Australian North West Shelf (NWS), in addition to laboratory observations. The field observations consisted of turbulence microstructure vertical profiles taken near moored temperature and velocity turbulence time series. Irrespective of the value of the gradient Richardson number Ri, both instruments yielded a median Rif = 0.17, while the observed Rif ranged from 0.01 to 0.50, in agreement with the predicted range of Rif. Using a Prandtl mixing length model, we show that diapycnal mixing Kρ can be predicted from Lρ and the background vertical shear S. Using field and laboratory observations, we show that Lρ = 0.3 LE, where LE is the Ellison length scale. The diapycnal diffusivity can thus be calculated from Kρ = 0.09 LE² S. This prediction agrees very well with the diapycnal mixing estimates obtained from our moored turbulence instruments for observed diffusivities as large as 10⁻¹ m² s⁻¹. Moorings with relatively low sampling rates can thus provide long time series estimates of diapycnal mixing rates, significantly increasing the number of diapycnal mixing estimates in the ocean.
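
    The closing relation can be applied directly to moored estimates of the Ellison scale and background shear; the short R sketch below simply evaluates Kρ = 0.09 LE² S for invented values of LE and S.

      # Diapycnal diffusivity from the Ellison length scale L_E (m) and shear S (1/s),
      # using L_rho = 0.3 * L_E and K_rho = L_rho^2 * S = 0.09 * L_E^2 * S.
      K_rho <- function(L_E, S) 0.09 * L_E^2 * S

      K_rho(L_E = 1.5, S = 0.01)   # ~2.0e-3 m^2/s for these illustrative values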

  14. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    NASA Astrophysics Data System (ADS)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    parameters of the algorithm, i.e., the maximum count of ratios and the minimum relative group-size of data points belonging to each ratio, have to be defined. Computation of the models can be done with statistical software. In this study, Leisch and Grün's flexmix package [2] for the statistical open-source software R was applied. A code example is available in the electronic supplementary material of Kappel et al. [1]. In order to demonstrate the usefulness of finite mixture models in fields dealing with the computation of multiple isotope ratios in mixed samples, a transparent example based on simulated data is presented and problems regarding small group-sizes are illustrated. In addition, the application of finite mixture models to isotope ratio data measured in uranium oxide particles is shown. The results indicate that finite mixture models perform well in computing isotope ratios relative to traditional estimation procedures and can be recommended for more objective and straightforward calculation of isotope ratios in geochemistry than is current practice. [1] S. Kappel, S. Boulyga, L. Dorta, D. Günther, B. Hattendorf, D. Koffler, G. Laaha, F. Leisch and T. Prohaska: Evaluation Strategies for Isotope Ratio Measurements of Single Particles by LA-MC-ICPMS, Analytical and Bioanalytical Chemistry, 2013, accepted for publication on 2012-12-18 (doi: 10.1007/s00216-012-6674-3) [2] B. Grün and F. Leisch: Fitting finite mixtures of generalized linear regressions in R. Computational Statistics & Data Analysis, 51(11), 5247-5252, 2007. (doi:10.1016/j.csda.2006.08.014)
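
    A toy version of the mixture-of-ratios idea can be run with the flexmix package mentioned above, fitting two zero-intercept regressions to simulated ion-intensity pairs; the data and the two ratio values are invented and unrelated to the uranium oxide measurements.

      library(flexmix)
      set.seed(11)

      # Simulated particle measurements: two sub-populations whose isotope ratio
      # (the slope of intensity B against intensity A through the origin) differs.
      n  <- 300
      iA <- runif(n, 50, 500)
      true_ratio <- sample(c(0.0072, 0.20), n, replace = TRUE, prob = c(0.7, 0.3))
      iB <- true_ratio * iA + rnorm(n, 0, 2)
      d  <- data.frame(iA, iB)

      # Finite mixture of two zero-intercept linear regressions: each component's
      # slope estimates one isotope ratio; posteriors assign points to ratios.
      fit <- flexmix(iB ~ 0 + iA, data = d, k = 2)
      parameters(fit)       # component-wise slope (ratio) and residual sd
      table(clusters(fit))  # relative group sizes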

  15. Selection of latent variables for multiple mixed-outcome models

    PubMed Central

    ZHOU, LING; LIN, HUAZHEN; SONG, XINYUAN; LI, YI

    2014-01-01

    Latent variable models have been widely used for modeling the dependence structure of multiple outcomes data. However, the formulation of a latent variable model is often unknown a priori, and misspecification will distort the dependence structure and lead to unreliable model inference. Moreover, multiple outcomes with varying types present enormous analytical challenges. In this paper, we present a class of general latent variable models that can accommodate mixed types of outcomes. We propose a novel selection approach that simultaneously selects latent variables and estimates parameters. We show that the proposed estimator is consistent, asymptotically normal and has the oracle property. The practical utility of the methods is confirmed via simulations as well as an application to the analysis of the World Values Survey, a global research project that explores people's values and beliefs and the social and personal characteristics that might influence them. PMID:27642219

  16. Markov Mixed Effects Modeling Using Electronic Adherence Monitoring Records Identifies Influential Covariates to HIV Preexposure Prophylaxis.

    PubMed

    Madrasi, Kumpal; Chaturvedula, Ayyappa; Haberer, Jessica E; Sale, Mark; Fossler, Michael J; Bangsberg, David; Baeten, Jared M; Celum, Connie; Hendrix, Craig W

    2017-05-01

    Adherence is a major factor in the effectiveness of preexposure prophylaxis (PrEP) for HIV prevention. Modeling patterns of adherence helps to identify influential covariates of different types of adherence as well as to enable clinical trial simulation so that appropriate interventions can be developed. We developed a Markov mixed-effects model to understand the covariates influencing adherence patterns to daily oral PrEP. Electronic adherence records (date and time of medication bottle cap opening) from the Partners PrEP ancillary adherence study with a total of 1147 subjects were used. This study included once-daily dosing regimens of placebo, oral tenofovir disoproxil fumarate (TDF), and TDF in combination with emtricitabine (FTC), administered to HIV-uninfected members of serodiscordant couples. One-coin and first- to third-order Markov models were fit to the data using NONMEM ® 7.2. Model selection criteria included objective function value (OFV), Akaike information criterion (AIC), visual predictive checks, and posterior predictive checks. Covariates were included based on forward addition (α = 0.05) and backward elimination (α = 0.001). Markov models better described the data than 1-coin models. A third-order Markov model gave the lowest OFV and AIC, but the simpler first-order model was used for covariate model building because no additional benefit on prediction of target measures was observed for higher-order models. Female sex and older age had a positive impact on adherence, whereas Sundays, sexual abstinence, and sex with a partner other than the study partner had a negative impact on adherence. Our findings suggest adherence interventions should consider the role of these factors. © 2016, The American College of Clinical Pharmacology.

  17. Biofilm development and enhanced stress resistance of a model, mixed-species community biofilm.

    PubMed

    Lee, Kai Wei Kelvin; Periasamy, Saravanan; Mukherjee, Manisha; Xie, Chao; Kjelleberg, Staffan; Rice, Scott A

    2014-04-01

    Most studies of biofilm biology have taken a reductionist approach, where single-species biofilms have been extensively investigated. However, biofilms in nature mostly comprise multiple species, where interspecies interactions can shape the development, structure and function of these communities differently from biofilm populations. Hence, a reproducible mixed-species biofilm comprising Pseudomonas aeruginosa, Pseudomonas protegens and Klebsiella pneumoniae was adapted to study how interspecies interactions affect biofilm development, structure and stress responses. Each species was fluorescently tagged to determine its abundance and spatial localization within the biofilm. The mixed-species biofilm exhibited distinct structures that were not observed in comparable single-species biofilms. In addition, development of the mixed-species biofilm was delayed 1-2 days compared with the single-species biofilms. Composition and spatial organization of the mixed-species biofilm also changed along the flow cell channel, where nutrient conditions and growth rate of each species could have a part in community assembly. Intriguingly, the mixed-species biofilm was more resistant to the antimicrobials sodium dodecyl sulfate and tobramycin than the single-species biofilms. Crucially, such community level resilience was found to be a protection offered by the resistant species to the whole community rather than selection for the resistant species. In contrast, community-level resilience was not observed for mixed-species planktonic cultures. These findings suggest that community-level interactions, such as sharing of public goods, are unique to the structured biofilm community, where the members are closely associated with each other.

  18. A New Long-Term Care Facilities Model in Nova Scotia, Canada: Protocol for a Mixed Methods Study of Care by Design

    PubMed Central

    Boudreau, Michelle Anne; Jensen, Jan L; Edgecombe, Nancy; Clarke, Barry; Burge, Frederick; Archibald, Greg; Taylor, Anthony; Andrew, Melissa K

    2013-01-01

    Background: Prior to the implementation of a new model of care in long-term care facilities in the Capital District Health Authority, Halifax, Nova Scotia, residents entering long-term care were responsible for finding their own family physician. As a result, care was provided by many family physicians responsible for a few residents, leading to care coordination and continuity challenges. In 2009, Capital District Health Authority (CDHA) implemented a new model of long-term care called “Care by Design” which includes: a dedicated family physician per floor, 24/7 on-call physician coverage, implementation of a standardized geriatric assessment tool, and an interdisciplinary team approach to care. In addition, a new Emergency Health Services program was implemented shortly after, in which specially trained paramedics dedicated to long-term care responses are able to address urgent care needs. These changes were implemented to improve primary and emergency care for vulnerable residents. Here we describe a comprehensive mixed methods research study designed to assess the impact of these programs on care delivery and resident outcomes. The results of this research will be important to guide primary care policy for long-term care. Objective: We aim to evaluate the impact of introducing a new model of a dedicated primary care physician and team approach to long-term care facilities in the CDHA using a mixed methods approach. As a mixed methods study, the quantitative and qualitative data findings will inform each other. Quantitatively we will measure a number of indicators of care in CDHA long-term care facilities pre and post-implementation of the new model. In the qualitative phase of the study we will explore the experience under the new model from the perspectives of stakeholders including family doctors, nurses, administration and staff as well as residents and family members. The proposed mixed method study seeks to evaluate and make policy recommendations related

  19. Mixing methodology, nursing theory and research design for a practice model of district nursing advocacy.

    PubMed

    Reed, Frances M; Fitzgerald, Les; Rae, Melanie

    2016-01-01

    To highlight philosophical and theoretical considerations for planning a mixed methods research design that can inform a practice model to guide rural district nursing end of life care. Conceptual models of nursing in the community are general and lack guidance for rural district nursing care. A combination of pragmatism and nurse agency theory can provide a framework for ethical considerations in mixed methods research in the private world of rural district end of life care. Reflection on experience gathered in a two-stage qualitative research phase, involving rural district nurses who use advocacy successfully, can inform a quantitative phase for testing and complementing the data. Ongoing data analysis and integration result in generalisable inferences to achieve the research objective. Mixed methods research that creatively combines philosophical and theoretical elements to guide design in the particular ethical situation of community end of life care can be used to explore an emerging field of interest and test the findings for evidence to guide quality nursing practice. Combining philosophy and nursing theory to guide mixed methods research design increases the opportunity for sound research outcomes that can inform a nursing model of care.

  20. Re-resection rates after breast-conserving surgery as a performance indicator: introduction of a case-mix model to allow comparison between Dutch hospitals.

    PubMed

    Talsma, A K; Reedijk, A M J; Damhuis, R A M; Westenend, P J; Vles, W J

    2011-04-01

    Re-resection rate after breast-conserving surgery (BCS) has been introduced as an indicator of quality of surgical treatment in the international literature. The present study aims to develop a case-mix model for re-resection rates and to evaluate its performance in comparing results between hospitals. Electronic records of eligible patients diagnosed with in-situ and invasive breast cancer in 2006 and 2007 were derived from 16 hospitals in the Rotterdam Cancer Registry (RCR) (n = 961). A model was built in which prognostic factors for re-resections after BCS were identified and the expected re-resection rate could be assessed for hospitals based on their case mix. To illustrate the opportunities of monitoring re-resections over time, after risk adjustment for patient profile, a VLAD chart was drawn for patients in one hospital. In general three out of every ten women had re-surgery; in about 50% this meant an additive mastectomy. Independent prognostic factors of re-resection after multivariate analysis were histological type, sublocalisation, tumour size, lymph node involvement and multifocal disease. After correction for case mix, one hospital was performing significantly fewer re-resections compared to the reference hospital. On the other hand, two were performing significantly more re-resections than was expected based on their patient mix. Our population-based study confirms earlier reports that re-resection is frequently required after an initial breast-conserving operation. Case-mix models such as the one we constructed can be used to correct for variation between hospitals' performances. VLAD charts are valuable tools to monitor quality of care within individual hospitals. Copyright © 2011 Elsevier Ltd. All rights reserved.
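
    The case-mix adjustment and VLAD idea can be sketched with a logistic regression in base R; the patient variables, coefficients, and hospital labels below are invented for illustration and do not reproduce the RCR model.

      set.seed(12)

      # Simulated BCS patients: tumour size and multifocality drive re-resection risk.
      n <- 961
      d <- data.frame(hospital   = factor(sample(1:16, n, replace = TRUE)),
                      size_cm    = pmax(rnorm(n, 1.8, 0.8), 0.2),
                      multifocal = rbinom(n, 1, 0.15))
      d$reresect <- rbinom(n, 1, plogis(-2 + 0.6 * d$size_cm + 1.1 * d$multifocal))

      # Case-mix model: expected re-resection probability from prognostic factors only.
      cm <- glm(reresect ~ size_cm + multifocal, family = binomial, data = d)
      d$expected <- fitted(cm)

      # Observed vs expected per hospital (ratio > 1: more re-resections than the
      # hospital's case mix predicts).
      oe <- aggregate(cbind(observed = reresect, expected = expected) ~ hospital,
                      data = d, FUN = sum)
      oe$ratio <- oe$observed / oe$expected

      # VLAD for one hospital: cumulative expected-minus-observed over consecutive patients.
      h1 <- d[d$hospital == "1", ]
      plot(cumsum(h1$expected - h1$reresect), type = "l",
           xlab = "consecutive patients", ylab = "net re-resections avoided (VLAD)")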

  1. Modeling Bimolecular Reactive Transport With Mixing-Limitation: Theory and Application to Column Experiments

    NASA Astrophysics Data System (ADS)

    Ginn, T. R.

    2018-01-01

    The challenge of determining the mixing extent of solutions undergoing advective-dispersive-diffusive transport is well known. In particular, reaction extent between displacing and displaced solutes depends on mixing at the pore scale, which is generally smaller than the continuum scale at which quantification relies on dispersive fluxes. Here a novel mobile-mobile mass transfer approach is developed to distinguish diffusive mixing from dispersive spreading in one-dimensional transport involving small-scale velocity variations with some correlation, such as occurs in hydrodynamic dispersion, in which short-range ballistic transports give rise to dispersed but not mixed segregation zones, termed here ballisticules. When considering transport of a single solution, this approach distinguishes self-diffusive mixing from spreading, and in the case of displacement of one solution by another, each containing a participant reactant of an irreversible bimolecular reaction, this results in time-delayed diffusive mixing of reactants. The approach generates models for both kinetically controlled and equilibrium irreversible reaction cases, while honoring independently measured reaction rates and dispersivities. The mathematical solution for the equilibrium case is a simple analytical expression. The approach is applied to published experimental data on bimolecular reactions for homogeneous porous media under postasymptotic dispersive conditions with good results.

  2. MILP model for integrated balancing and sequencing mixed-model two-sided assembly line with variable launching interval and assignment restrictions

    NASA Astrophysics Data System (ADS)

    Azmi, N. I. L. Mohd; Ahmad, R.; Zainuddin, Z. M.

    2017-09-01

    This research explores the Mixed-Model Two-Sided Assembly Line (MMTSAL). There are two interrelated problems in MMTSAL, namely line balancing and model sequencing. In previous studies, many researchers considered these problems separately and only a few studied them simultaneously for one-sided lines. In this study, however, these two problems are solved simultaneously to obtain a more efficient solution. A Mixed Integer Linear Programming (MILP) model with the objectives of minimizing total utility work and idle time is formulated by considering a variable launching interval and an assignment restriction constraint. The problem is analysed using small-size test cases to validate the integrated model. Throughout this paper, numerical experiments were conducted using the General Algebraic Modelling System (GAMS) with the solver CPLEX. Experimental results indicate that integrating the problems of model sequencing and line balancing helps to minimise the proposed objective functions.

  3. Dynamic Roughness Ratio-Based Framework for Modeling Mixed Mode of Droplet Evaporation.

    PubMed

    Gunjan, Madhu Ranjan; Raj, Rishi

    2017-07-18

    The spatiotemporal evolution of an evaporating sessile droplet and its effect on lifetime is crucial to various disciplines of science and technology. Although experimental investigations suggest three distinct modes through which a droplet evaporates, namely, the constant contact radius (CCR), the constant contact angle (CCA), and the mixed, only the CCR and the CCA modes have been modeled reasonably. Here we use experiments with water droplets on flat and micropillared silicon substrates to characterize the mixed mode. We visualize that a perfect CCA mode after the initial CCR mode is an idealization on a flat silicon substrate, and the receding contact line undergoes intermittent but recurring pinning (CCR mode) as it encounters fresh contaminants on the surface. The resulting increase in roughness lowers the contact angle of the droplet during these intermittent CCR modes until the next depinning event, followed by the CCA mode of evaporation. The airborne contaminants in our experiments are mostly loosely adhered to the surface and travel along with the receding contact line. The resulting gradual increase in the apparent roughness and hence the extent of CCR mode over CCA mode forces appreciable decrease in the contact angle observed during the mixed mode of evaporation. Unlike loosely adhered airborne contaminants on flat samples, micropillars act as fixed roughness features. The apparent roughness fluctuates about the mean value as the contact line recedes between pillars. Evaporation on these surfaces exhibits stick-jump motion with a short-duration mixed mode toward the end when the droplet size becomes comparable to the pillar spacing. We incorporate this dynamic roughness into a classical evaporation model to accurately predict the droplet evolution throughout the three modes, for both flat and micropillared silicon surfaces. We believe that this framework can also be extended to model the evaporation of nanofluids and the coffee-ring effect, among

  4. Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.

    PubMed

    Weaver, Bruce; Black, Ryan A

    2015-06-01

    Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.
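
    For readers working in R rather than SPSS, an analogous heterogeneous-variance mixed model can be fitted with nlme's varIdent() structure; the simulated data and model below are a generic sketch, not the models from Hoffman and Rovine's Table 3.

      library(nlme)
      set.seed(13)

      # Simulated repeated measures: reaction times for 40 subjects in two conditions,
      # with a larger residual variance in condition "B".
      n_sub <- 40
      d <- expand.grid(subject = factor(1:n_sub), condition = factor(c("A", "B")), trial = 1:10)
      subj_eff <- rnorm(n_sub, 0, 40)
      d$rt <- 500 + subj_eff[d$subject] + 30 * (d$condition == "B") +
              rnorm(nrow(d), 0, ifelse(d$condition == "B", 80, 40))

      # Random-intercept model with condition-specific residual variances.
      fit_het <- lme(rt ~ condition, random = ~ 1 | subject, data = d,
                     weights = varIdent(form = ~ 1 | condition))
      summary(fit_het)

      # Likelihood-ratio comparison against the homogeneous-variance model.
      fit_hom <- lme(rt ~ condition, random = ~ 1 | subject, data = d)
      anova(fit_hom, fit_het)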

  5. Diesel engine emissions and combustion predictions using advanced mixing models applicable to fuel sprays

    NASA Astrophysics Data System (ADS)

    Abani, Neerav; Reitz, Rolf D.

    2010-09-01

    An advanced mixing model was applied to study engine emissions and combustion with different injection strategies ranging from multiple injections, early injection and grouped-hole nozzle injection in light and heavy duty diesel engines. The model was implemented in the KIVA-CHEMKIN engine combustion code and simulations were conducted at different mesh resolutions. The model was compared with the standard KIVA spray model, which uses the Lagrangian-Drop and Eulerian-Fluid (LDEF) approach, and a Gas Jet spray model that improves predictions of liquid sprays. A Vapor Particle Method (VPM) is introduced that accounts for sub-grid scale mixing of fuel vapor and more accurately predicts the mixing of fuel vapor over a range of mesh resolutions. The fuel vapor is transported as particles until a certain distance from the nozzle is reached where the local jet half-width is adequately resolved by the local mesh scale. Within this distance the vapor particle is transported while releasing fuel vapor locally, as determined by a weighting factor. The VPM model more accurately predicts fuel-vapor penetrations for early cycle injections and flame lift-off lengths for late cycle injections. Engine combustion computations show that, as compared to the standard KIVA and Gas Jet spray models, the VPM spray model improves predictions of in-cylinder pressure, heat release rate and engine emissions of NOx, CO and soot with coarse mesh resolutions. The VPM spray model is thus a good tool for efficiently investigating diesel engine combustion with practical mesh resolutions, thereby saving computer time.

  6. A Stochastic Mixing Model for Predicting Emissions in a Direct Injection Diesel Engine.

    DTIC Science & Technology

    1986-09-01

    of chemical reactors. The fundamental concept of these models is coalescence/dispersion micromixing [1]. Details of this method are provided in Appen... Togby, A.H., "Monte Carlo Methods of Simulating Micromixing in Chemical Reactors", Chemical Engineering Science, Vol. 27, p. 1497, 1972. 46. Kattan, A... on a molecular level. 2. Micromixing or stream mixing refers to the mixing of particles on a molecular level. Until the coalescence and dispersion

  7. Mixing and solid-liquid mass-transfer rates in a creusot-loire uddeholm vessel: A water model case study

    NASA Astrophysics Data System (ADS)

    Nyoka, M.; Akdogan, G.; Eric, R. H.; Sutcliffe, N.

    2003-12-01

    The process of mixing and solid-liquid mass transfer in a one-fifth scale water model of a 100-ton Creusot-Loire Uddeholm (CLU) converter was investigated. The modified Froude number was used to relate gas flow rates between the model and its prototype. The influences of gas flow rate between 0.010 and 0.018 m³/s and bath height from 0.50 to 0.70 m on mixing time were examined. The results indicated that mixing time decreased with increasing gas flow rate and increased with increasing bath height. The mixing time results were evaluated in terms of specific energy input and the following correlation was proposed for estimating mixing times in the model CLU converter: T_mix = 1.08 Q^(-1.05) W^(0.35), where Q (m³/s) is the gas flow rate and W (tons) is the model bath weight. Solid-liquid mass-transfer rates from benzoic acid specimens immersed in the gas-agitated liquid phase were assessed by a weight-loss measurement technique. The calculated mass-transfer coefficients were highest at the bath surface, reaching a value of 6.40 × 10⁻⁵ m/s in the sprout region. Mass-transfer coefficients and turbulence parameters decreased with depth, reaching minimum values at the bottom of the vessel.

  8. An Efficient Alternative Mixed Randomized Response Procedure

    ERIC Educational Resources Information Center

    Singh, Housila P.; Tarray, Tanveer A.

    2015-01-01

    In this article, we have suggested a new modified mixed randomized response (RR) model and studied its properties. It is shown that the proposed mixed RR model is always more efficient than Kim and Warde's mixed RR model. The proposed mixed RR model has also been extended to stratified sampling. Numerical illustrations and graphical…

  9. Influence of an urban canopy model and PBL schemes on vertical mixing for air quality modeling over Greater Paris

    NASA Astrophysics Data System (ADS)

    Kim, Youngseob; Sartelet, Karine; Raut, Jean-Christophe; Chazette, Patrick

    2015-04-01

    Impacts of meteorological modeling in the planetary boundary layer (PBL) and urban canopy model (UCM) on the vertical mixing of pollutants are studied. Concentrations of gaseous chemical species, including ozone (O3) and nitrogen dioxide (NO2), and particulate matter over Paris and the near suburbs are simulated using the 3-dimensional chemistry-transport model Polair3D of the Polyphemus platform. Simulated concentrations of O3, NO2 and PM10/PM2.5 (particulate matter of aerodynamic diameter lower than 10 μm/2.5 μm, respectively) are first evaluated using ground measurements. Higher surface concentrations are obtained for PM10, PM2.5 and NO2 with the MYNN PBL scheme than the YSU PBL scheme because of lower PBL heights in the MYNN scheme. Differences between simulations using different PBL schemes are lower than differences between simulations with and without the UCM and the Corine land-use over urban areas. Regarding the root mean square error, the simulations using the UCM and the Corine land-use tend to perform better than the simulations without it. At urban stations, the PM10 and PM2.5 concentrations are over-estimated and the over-estimation is reduced using the UCM and the Corine land-use. The ability of the model to reproduce vertical mixing is evaluated using NO2 measurement data at the upper air observation station of the Eiffel Tower, and measurement data at a ground station near the Eiffel Tower. Although NO2 is under-estimated in all simulations, vertical mixing is greatly improved when using the UCM and the Corine land-use. Comparisons of the modeled PM10 vertical distributions to distributions deduced from surface and mobile lidar measurements are performed. The use of the UCM and the Corine land-use is crucial to accurately model PM10 concentrations during nighttime in the center of Paris. In the nocturnal stable boundary layer, PM10 is relatively well modeled, although it is over-estimated on 24 May and under-estimated on 25 May. However, PM10 is

  10. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
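
    A broken-line linear (ascending-then-plateau) dose-response of the kind compared above can be fitted with nls in base R; the simulated G:F data, the breakpoint, and the starting values are invented for illustration and ignore the blocking and heteroskedasticity handled by the mixed models in the article.

      set.seed(14)

      # Simulated pen-level G:F responses to SID Trp:Lys ratios with a plateau at 16.5%.
      trp <- rep(seq(14, 24, by = 1), each = 6)
      gf  <- 0.60 + 0.012 * pmin(trp - 16.5, 0) + rnorm(length(trp), 0, 0.01)
      d   <- data.frame(trp, gf)

      # Broken-line linear ascending model: G:F rises with slope 'b' up to the
      # breakpoint 'bp' and stays at the plateau 'a' beyond it.
      bll <- nls(gf ~ a + b * pmin(trp - bp, 0), data = d,
                 start = list(a = 0.6, b = 0.01, bp = 17))
      summary(bll)   # 'bp' is the estimated break point (requirement estimate)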

  11. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects SDE tree height model calculated during this research were compared to the regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
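
    As an illustration of the copula ingredient mentioned above, the sketch below evaluates a bivariate Gaussian (normal) copula density as the ratio of a bivariate normal density to the product of its standard normal marginals; the correlation value and evaluation point are hypothetical, and the tree-level marginals and SDE machinery of the study are not reproduced here.

    ```python
    # Minimal bivariate Gaussian-copula density: c(u, v) = phi2(x, y; rho) / (phi(x) phi(y)),
    # with x = Phi^{-1}(u), y = Phi^{-1}(v). Values are hypothetical.
    import numpy as np
    from scipy.stats import norm, multivariate_normal

    def gaussian_copula_density(u, v, rho):
        x, y = norm.ppf(u), norm.ppf(v)
        cov = np.array([[1.0, rho], [rho, 1.0]])
        joint = multivariate_normal(mean=[0.0, 0.0], cov=cov).pdf([x, y])
        return joint / (norm.pdf(x) * norm.pdf(y))

    # u and v would be the marginal CDF values of diameter and height for one tree
    print(gaussian_copula_density(u=0.7, v=0.8, rho=0.6))
    ```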

  12. Effects of Precipitation on Ocean Mixed-Layer Temperature and Salinity as Simulated in a 2-D Coupled Ocean-Cloud Resolving Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.

    1999-01-01

    A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective scale ocean disturbances induced by atmospheric precipitation on ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing in terms of vertical velocity derived from the TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models could be as large as 0.3 PSU and 0.4 °C, respectively. Without fresh water effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates so that the nocturnal mixed-layer temperatures tend to be horizontally uniform. The fresh water flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so that the mixed-layer temperatures have large horizontal fluctuations. Furthermore, fresh water flux exhibits larger spatial fluctuations than surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model area yield high spatial correlation of surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.

  13. Advantages and pitfalls in the application of mixed-model association methods.

    PubMed

    Yang, Jian; Zaitlen, Noah A; Goddard, Michael E; Visscher, Peter M; Price, Alkes L

    2014-02-01

    Mixed linear models are emerging as a method of choice for conducting genetic association studies in humans and other organisms. The advantages of the mixed-linear-model association (MLMA) method include the prevention of false positive associations due to population or relatedness structure and an increase in power obtained through the application of a correction that is specific to this structure. An underappreciated point is that MLMA can also increase power in studies without sample structure by implicitly conditioning on associated loci other than the candidate locus. Numerous variations on the standard MLMA approach have recently been published, with a focus on reducing computational cost. These advances provide researchers applying MLMA methods with many options to choose from, but we caution that MLMA methods are still subject to potential pitfalls. Here we describe and quantify the advantages and pitfalls of MLMA methods as a function of study design and provide recommendations for the application of these methods in practical settings.

  14. Adaptive mixed finite element methods for Darcy flow in fractured porous media

    NASA Astrophysics Data System (ADS)

    Chen, Huangxin; Salama, Amgad; Sun, Shuyu

    2016-10-01

    In this paper, we propose adaptive mixed finite element methods for simulating the single-phase Darcy flow in two-dimensional fractured porous media. The reduced model that we use for the simulation is a discrete fracture model coupling Darcy flows in the matrix and the fractures, and the fractures are modeled by one-dimensional entities. The Raviart-Thomas mixed finite element methods are utilized for the solution of the coupled Darcy flows in the matrix and the fractures. In order to improve the efficiency of the simulation, we use adaptive mixed finite element methods based on novel residual-based a posteriori error estimators. In addition, we develop an efficient upscaling algorithm to compute the effective permeability of the fractured porous media. Several interesting examples of Darcy flow in the fractured porous media are presented to demonstrate the robustness of the algorithm.

  15. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  16. Continuous synthesis of drug-loaded nanoparticles using microchannel emulsification and numerical modeling: effect of passive mixing

    PubMed Central

    Ortiz de Solorzano, Isabel; Uson, Laura; Larrea, Ane; Miana, Mario; Sebastian, Victor; Arruebo, Manuel

    2016-01-01

    By using interdigital microfluidic reactors, monodisperse poly(d,l lactic-co-glycolic acid) nanoparticles (NPs) can be produced in a continuous manner and at a large scale (~10 g/h). An optimized synthesis protocol was obtained by selecting the appropriate passive mixer and fluid flow conditions to produce monodisperse NPs. A reduced NP polydispersity was obtained when using the microfluidic platform compared with the one obtained with NPs produced in a conventional discontinuous batch reactor. Cyclosporin, an immunosuppressant drug, was used as a model to validate the efficiency of the microfluidic platform to produce drug-loaded monodisperse poly(d,l lactic-co-glycolic acid) NPs. The influence of the mixer geometries and temperatures was analyzed, and the experimental results were corroborated by using computational fluid dynamic three-dimensional simulations. Flow patterns, mixing times, and mixing efficiencies were calculated, and the model was supported by the experimental results. The progress of mixing in the interdigital mixer was quantified by using the volume fractions of the organic and aqueous phases used during the emulsification–evaporation process. The developed model and methods were applied to determine the required time for achieving a complete mixing in each microreactor at different fluid flow conditions, temperatures, and mixing rates. PMID:27524896

  17. Continuous synthesis of drug-loaded nanoparticles using microchannel emulsification and numerical modeling: effect of passive mixing.

    PubMed

    Ortiz de Solorzano, Isabel; Uson, Laura; Larrea, Ane; Miana, Mario; Sebastian, Victor; Arruebo, Manuel

    2016-01-01

    By using interdigital microfluidic reactors, monodisperse poly(d,l lactic-co-glycolic acid) nanoparticles (NPs) can be produced in a continuous manner and at a large scale (~10 g/h). An optimized synthesis protocol was obtained by selecting the appropriate passive mixer and fluid flow conditions to produce monodisperse NPs. A reduced NP polydispersity was obtained when using the microfluidic platform compared with the one obtained with NPs produced in a conventional discontinuous batch reactor. Cyclosporin, an immunosuppressant drug, was used as a model to validate the efficiency of the microfluidic platform to produce drug-loaded monodisperse poly(d,l lactic-co-glycolic acid) NPs. The influence of the mixer geometries and temperatures was analyzed, and the experimental results were corroborated by using computational fluid dynamic three-dimensional simulations. Flow patterns, mixing times, and mixing efficiencies were calculated, and the model was supported by the experimental results. The progress of mixing in the interdigital mixer was quantified by using the volume fractions of the organic and aqueous phases used during the emulsification-evaporation process. The developed model and methods were applied to determine the required time for achieving a complete mixing in each microreactor at different fluid flow conditions, temperatures, and mixing rates.

  18. Gel properties and interactions of Mesona blumes polysaccharide-soy protein isolates mixed gel: The effect of salt addition.

    PubMed

    Wang, Wenjie; Shen, Mingyue; Liu, Suchen; Jiang, Lian; Song, Qianqian; Xie, Jianhua

    2018-07-15

    The effects of different salt ions on the gel properties and microstructure of Mesona blumes polysaccharide (MBP)-soy protein isolates (SPI) mixed gels were investigated. Sodium and calcium ions were chosen to explore their effects on the rheological behavior and gel properties of MBP-SPI mixed gels, which were evaluated using rheological measurements, X-ray diffraction, protein solubility determination, and microstructure analysis. Results showed that the addition of salt ions changed the crystalline state of the gel system: the crystallinity of the gel was enhanced at low ion concentrations (0.005-0.01 M). The two characteristic peaks of the gel at 8.9° and 19.9° almost disappeared at high salt ion concentrations (0.015-0.02 M), and new crystallization peaks appeared at around 30° and 45°. The elasticity, viscosity, gel strength, water holding capacity, and thermal stability of the gel increased at low ion concentrations. The main interactions that promoted gel formation and maintained the three-dimensional structure of the gel were electrostatic interactions, hydrophobic interactions, and disulfide interactions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Large Eddy Simulation Study for Fluid Disintegration and Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2011-01-01

    A new modeling approach is based on the concept of large eddy simulation (LES) within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce in the simulation the physics lost because the computation only resolves the large scales. These models are called subgrid (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equations, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that if these terms were not modeled, the high density-gradient magnitude regions, experimentally identified as a characteristic feature of these flows, would not be accurately predicted without the additional term in the momentum equation; these high density-gradient magnitude regions were experimentally shown to redistribute turbulence in the flow. And it was also inferred that without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines necessary wall material properties. The present work involves situations where only the term in the momentum equation is important. Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor although not

  20. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models.

    PubMed

    Chen, Han; Wang, Chaolong; Conomos, Matthew P; Stilp, Adrienne M; Li, Zilin; Sofer, Tamar; Szpiro, Adam A; Chen, Wei; Brehm, John M; Celedón, Juan C; Redline, Susan; Papanicolaou, George J; Thornton, Timothy A; Laurie, Cathy C; Rice, Kenneth; Lin, Xihong

    2016-04-07

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM's constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  1. Using Mixed-Effects Structural Equation Models to Study Student Academic Development.

    ERIC Educational Resources Information Center

    Pike, Gary R.

    1992-01-01

    A study at the University of Tennessee Knoxville used mixed-effect structural equation models incorporating latent variables as an alternative to conventional methods of analyzing college students' (n=722) first-year-to-senior academic gains. Results indicate, contrary to previous analysis, that coursework and student characteristics interact to…

  2. Modelling the behaviour of additives in gun barrels

    NASA Astrophysics Data System (ADS)

    Rhodes, N.; Ludwig, J. C.

    1986-01-01

    A mathematical model which predicts the flow and heat transfer in a gun barrel is described. The model is transient and two-dimensional; equations are solved for the velocities and enthalpies of the gas phase, which arises from the combustion of the propellant and cartridge case, for the particle additives released from the case, and for the volume fractions of the gas and particles. Closure of the equations is obtained using a two-equation turbulence model. Preliminary calculations are described in which the proportion of particle additives in the cartridge case was altered. The model gives a good prediction of the ballistic performance and the gas-to-wall heat transfer. However, the expected magnitude of the reduction in heat transfer when particles are present is not predicted. The predictions of gas flow invalidate some of the assumptions made regarding case and propellant behavior during combustion, and further work is required to investigate these effects and other possible interactions, both chemical and physical, between gas and particles.

  3. Mixed-effects Gaussian process functional regression models with application to dose-response curve prediction.

    PubMed

    Shi, J Q; Wang, B; Will, E J; West, R M

    2012-11-20

    We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime. Copyright © 2012 John Wiley & Sons, Ltd.
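
    A small Python sketch of the general idea (not the authors' estimator): a parametric linear mean component is fitted first and a Gaussian process is then fitted to its residuals, so the nonparametric part adds the nonlinearity. The dose grid, response values, and kernel settings are hypothetical.

    ```python
    # Parametric (linear) mean plus a GP on the residuals, as a toy version of a
    # semiparametric dose-response fit. Data and kernel choices are hypothetical.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    dose = np.linspace(0, 10, 40).reshape(-1, 1)
    response = 2.0 + 0.8 * dose.ravel() + 1.5 * np.sin(dose.ravel()) \
               + rng.normal(0, 0.3, 40)

    lin = LinearRegression().fit(dose, response)      # parametric component
    resid = response - lin.predict(dose)

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(0.1),
                                  normalize_y=True)   # nonparametric component
    gp.fit(dose, resid)

    fitted_curve = lin.predict(dose) + gp.predict(dose)
    ```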

  4. Making a mixed-model line more efficient and flexible by introducing a bypass line

    NASA Astrophysics Data System (ADS)

    Matsuura, Sho; Matsuura, Haruki; Asada, Akiko

    2017-04-01

    This paper provides a design procedure for the bypass subline in a mixed-model assembly line. The bypass subline is installed to reduce the effect of the large difference in operation times among products assembled together in a mixed-model line. The importance of the bypass subline has been increasing in association with the rising necessity for efficiency and flexibility in modern manufacturing. The main topics of this paper are as follows: 1) the conditions in which the bypass subline effectively functions, and 2) how the load should be distributed between the main line and the bypass subline, depending on production conditions such as degree of difference in operation times among products and the mixing ratio of products. To address these issues, we analyzed the lower and the upper bounds of the line length. Based on the results, a design procedure and a numerical example are demonstrated.

  5. Additive Mixing and Conformal Coating of Noniridescent Structural Colors with Robust Mechanical Properties Fabricated by Atomization Deposition.

    PubMed

    Li, Qingsong; Zhang, Yafeng; Shi, Lei; Qiu, Huihui; Zhang, Suming; Qi, Ning; Hu, Jianchen; Yuan, Wei; Zhang, Xiaohua; Zhang, Ke-Qin

    2018-04-24

    Artificial structural colors based on short-range-ordered amorphous photonic structures (APSs) have attracted great scientific and industrial interest in recent years. However, the previously reported methods of self-assembling colloidal nanoparticles lack fine control of the APS coating and fixation on substrates and poorly realize three-dimensional (3D) conformal coatings for objects with irregular or highly curved surfaces. In this paper, atomization deposition of silica colloidal nanoparticles with poly(vinyl alcohol) as the additive is proposed to solve the above problems. By finely controlling the thicknesses of APS coatings, additive mixing of noniridescent structural colors is easily realized. Based on the intrinsic omnidirectional feature of atomization, a one-step 3D homogeneous conformal coating is also readily realized on various irregular or highly curved surfaces, including papers, resins, metal plates, ceramics, and flexible silk fabrics. The vivid coatings on silk fabrics by atomization deposition possess robust mechanical properties, which are confirmed by rubbing and laundering tests, showing great potential in developing an environmentally friendly coloring technique in the textile industry.

  6. Computer modeling movement of biomass in the bioreactors with bubbling mixing

    NASA Astrophysics Data System (ADS)

    Kuschev, L. A.; Suslov, D. Yu; Alifanova, A. I.

    2017-01-01

    Biogas technologies have recently been developing in the Russian Federation; they are used to convert the organic waste of agricultural enterprises and thereby improve the environment. To intensify the process and increase the biogas yield, bubbling mixing systems are applied. During bubbling mixing of biomass in the bioreactor, two-phase parcels consisting of biomass and gas bubbles are formed. A computer model of a bioreactor with a bubble pipeline arranged as a vertical spiral in the shape of an inverted cone has been developed. Using the OpenFVM-Flow computing program, a numerical experiment was conducted to determine the key technological parameters of the bubbling mixing process and to obtain a visual picture of the distribution of biomass flows in the bioreactor. For the experimental bioreactor (V = 190 l), the biomass circulation velocity, the circulation flow rate, and the duration of a single circulation cycle were estimated as uax = 0.029 m/s, QC = 0.00087 m3/s, and Δtbm = 159 s. In the future, we plan to conduct a series of theoretical and experimental studies of the influence of mixing frequency on the effectiveness of the biogas production process.

  7. BAYESIAN PARAMETER ESTIMATION IN A MIXED-ORDER MODEL OF BOD DECAY. (U915590)

    EPA Science Inventory

    We describe a generalized version of the BOD decay model in which the reaction is allowed to assume an order other than one. This is accomplished by making the exponent on BOD concentration a free parameter to be determined by the data. This "mixed-order" model may be ...
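
    For reference, a standard way to write such an n-th order ("mixed-order") decay of the remaining BOD is sketched below; the EPA study's exact parameterization may differ.

    ```latex
    % n-th order decay of remaining BOD L(t); the exponent n is a free parameter.
    \begin{align}
      \frac{dL}{dt} &= -k\,L^{n}, \\
      L(t) &= \left[L_0^{\,1-n} - (1-n)\,k\,t\right]^{\frac{1}{1-n}}, \qquad n \neq 1,
    \end{align}
    % which reduces to the first-order form L(t) = L_0 e^{-kt} as n -> 1.
    ```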

  8. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  9. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    PubMed

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.

  10. Mixed reality temporal bone surgical dissector: mechanical design

    PubMed Central

    2014-01-01

    Objective: The Development of a Novel Mixed Reality (MR) Simulation. An evolving training environment emphasizes the importance of simulation. Current haptic temporal bone simulators have difficulty representing realistic contact forces and while 3D printed models convincingly represent vibrational properties of bone, they cannot reproduce soft tissue. This paper introduces a mixed reality model, where the effective elements of both simulations are combined; haptic rendering of soft tissue directly interacts with a printed bone model. This paper addresses one aspect in a series of challenges, specifically the mechanical merger of a haptic device with an otic drill. This further necessitates gravity cancelation of the work assembly gripper mechanism. In this system, the haptic end-effector is replaced by a high-speed drill and the virtual contact forces need to be repositioned to the drill tip from the mid wand. Previous publications detail generation of both the requisite printed and haptic simulations. Method: Custom software was developed to reposition the haptic interaction point to the drill tip. A custom fitting, to hold the otic drill, was developed and its weight was offset using the haptic device. The robustness of the system to disturbances and its stable performance during drilling were tested. The experiments were performed on a mixed reality model consisting of two drillable rapid-prototyped layers separated by a free-space. Within the free-space, a linear virtual force model is applied to simulate drill contact with soft tissue. Results: Testing illustrated the effectiveness of gravity cancellation. Additionally, the system exhibited excellent performance given random inputs and during the drill’s passage between real and virtual components of the model. No issues with registration at model boundaries were encountered. Conclusion: These tests provide a proof of concept for the initial stages in the development of a novel mixed-reality temporal bone

  11. Mixed reality temporal bone surgical dissector: mechanical design.

    PubMed

    Hochman, Jordan Brent; Sepehri, Nariman; Rampersad, Vivek; Kraut, Jay; Khazraee, Milad; Pisa, Justyn; Unger, Bertram

    2014-08-08

    The Development of a Novel Mixed Reality (MR) Simulation. An evolving training environment emphasizes the importance of simulation. Current haptic temporal bone simulators have difficulty representing realistic contact forces and while 3D printed models convincingly represent vibrational properties of bone, they cannot reproduce soft tissue. This paper introduces a mixed reality model, where the effective elements of both simulations are combined; haptic rendering of soft tissue directly interacts with a printed bone model. This paper addresses one aspect in a series of challenges, specifically the mechanical merger of a haptic device with an otic drill. This further necessitates gravity cancelation of the work assembly gripper mechanism. In this system, the haptic end-effector is replaced by a high-speed drill and the virtual contact forces need to be repositioned to the drill tip from the mid wand. Previous publications detail generation of both the requisite printed and haptic simulations. Custom software was developed to reposition the haptic interaction point to the drill tip. A custom fitting, to hold the otic drill, was developed and its weight was offset using the haptic device. The robustness of the system to disturbances and its stable performance during drilling were tested. The experiments were performed on a mixed reality model consisting of two drillable rapid-prototyped layers separated by a free-space. Within the free-space, a linear virtual force model is applied to simulate drill contact with soft tissue. Testing illustrated the effectiveness of gravity cancellation. Additionally, the system exhibited excellent performance given random inputs and during the drill's passage between real and virtual components of the model. No issues with registration at model boundaries were encountered. These tests provide a proof of concept for the initial stages in the development of a novel mixed-reality temporal bone simulator.

  12. Statistical quality assessment criteria for a linear mixing model with elliptical t-distribution errors

    NASA Astrophysics Data System (ADS)

    Manolakis, Dimitris G.

    2004-10-01

    The linear mixing model is widely used in hyperspectral imaging applications to model the reflectance spectra of mixed pixels in the SWIR atmospheric window or the radiance spectra of plume gases in the LWIR atmospheric window. In both cases it is important to detect the presence of materials or gases and then estimate their amount, if they are present. The detection and estimation algorithms available for these tasks are related but they are not identical. The objective of this paper is to theoretically investigate how the heavy tails observed in hyperspectral background data affect the quality of abundance estimates and how the F-test, used for endmember selection, is robust to the presence of heavy tails when the model fits the data.
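
    In generic notation (not necessarily the paper's), the linear mixing model with an elliptically contoured error term and the corresponding generalized least-squares abundance estimate can be written as follows.

    ```latex
    % Observed pixel spectrum x (p bands), endmember matrix S (p x m),
    % abundance vector a, and a heavy-tailed elliptical (e.g., multivariate-t) error e.
    \begin{align}
      \mathbf{x} &= \mathbf{S}\,\mathbf{a} + \mathbf{e},
          \qquad \mathbf{e} \sim t_{\nu}(\mathbf{0}, \boldsymbol{\Sigma}), \\
      \hat{\mathbf{a}} &= \bigl(\mathbf{S}^{\mathsf T}\boldsymbol{\Sigma}^{-1}\mathbf{S}\bigr)^{-1}
          \mathbf{S}^{\mathsf T}\boldsymbol{\Sigma}^{-1}\mathbf{x},
    \end{align}
    % heavier tails (smaller nu) inflate the variability of the abundance estimate
    % relative to the Gaussian case.
    ```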

  13. Additive-dominance genetic model analyses for late-maturity alpha-amylase activity in a bread wheat factorial crossing population.

    PubMed

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Ibrahim, Amir M H

    2015-12-01

    Elevated levels of late-maturity α-amylase activity (LMAA) can result in low falling number scores, reduced grain quality, and downgrading of the wheat (Triticum aestivum L.) class. A mating population was developed by crossing parents with different levels of LMAA. The F2 and F3 hybrids and their parents were evaluated for LMAA, and data were analyzed using the R software package 'qgtools' integrated with an additive-dominance genetic model and a mixed linear model approach. Simulated results showed high testing powers for additive and additive × environment variances, and comparatively low powers for dominance and dominance × environment variances. All variance components and their proportions to the phenotypic variance for the parents and hybrids were significant except for the dominance × environment variance. The estimated narrow-sense heritability and broad-sense heritability for LMAA were 14 and 54%, respectively. Highly significant negative additive effects for parents suggest that spring wheat cultivars 'Lancer' and 'Chester' can serve as good general combiners, and that 'Kinsman' and 'Seri-82' had negative specific combining ability in some hybrids despite their own significant positive additive effects, suggesting they can be used as parents to reduce LMAA levels. Seri-82 showed a very good general combining ability effect when used as a male parent, indicating the importance of reciprocal effects. Highly significant negative dominance effects and high-parent heterosis for hybrids demonstrated that the specific hybrid combinations Chester × Kinsman, 'Lerma52' × Lancer, Lerma52 × 'LoSprout' and 'Janz' × Seri-82 could be generated to produce cultivars with significantly reduced LMAA levels.

  14. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  15. Bias and uncertainty of δ13CO2 isotopic mixing models

    Treesearch

    Zachary E. Kayler; Lisa Ganio; Mark Hauck; Thomas G. Pypker; Elizabeth W. Sulzman; Alan C. Mix; Barbara J. Bond

    2009-01-01

    The goal of this study was to evaluate how factorial combinations of two mixing models and two regression approaches (Keeling-OLS, Miller-Tans-OLS, Keeling-GMR, Miller-Tans-GMR) compare in small [CO2] range versus large [CO2] range regimes, with different combinations of...
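
    Both mixing models follow from the same two-member mass balance; in one standard notation (b = background, s = source, obs = observed) they can be written as below, with either relation fitted by ordinary least squares (OLS) or geometric mean regression (GMR) as compared in the study.

    ```latex
    \begin{align}
      C_{obs} &= C_b + C_s, \qquad
      \delta_{obs}\,C_{obs} = \delta_b C_b + \delta_s C_s, \\
      \text{Keeling:}\quad \delta_{obs} &= C_b\,(\delta_b - \delta_s)\,\frac{1}{C_{obs}} + \delta_s
          \qquad (\text{intercept} = \delta_s), \\
      \text{Miller-Tans:}\quad \delta_{obs}\,C_{obs} &= \delta_s\,C_{obs} + C_b\,(\delta_b - \delta_s)
          \qquad (\text{slope} = \delta_s).
    \end{align}
    ```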

  16. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.

  17. Improving the mixing performance of side channel type micromixers using an optimal voltage control model.

    PubMed

    Wu, Chien-Hsien; Yang, Ruey-Jen

    2006-06-01

    Electroosmotic flow in microchannels is restricted to low Reynolds number regimes. Since the inertia forces are extremely weak in such regimes, turbulent conditions do not readily develop, and hence species mixing occurs primarily as a result of diffusion. Consequently, achieving a thorough species mixing generally relies upon the use of extended mixing channels. This paper aims to improve the mixing performance of conventional side channel type micromixers by specifying the optimal driving voltages to be applied to each channel. In the proposed approach, the driving voltages are identified by constructing a simple theoretical scheme based on a 'flow-rate-ratio' model and Kirchhoff's law. The numerical and experimental results confirm that the optimal voltage control approach provides a better mixing performance than the use of a single driving voltage gradient.

  18. Geochemical modeling of magma mixing and magma reservoir volumes during early episodes of Kīlauea Volcano's Pu`u `Ō`ō eruption

    NASA Astrophysics Data System (ADS)

    Shamberger, Patrick J.; Garcia, Michael O.

    2007-02-01

    Geochemical modeling of magma mixing allows for evaluation of volumes of magma storage reservoirs and magma plumbing configurations. A new analytical expression is derived for a simple two-component box-mixing model describing the proportions of mixing components in erupted lavas as a function of time. Four versions of this model are applied to a mixing trend spanning episodes 3-31 of Kīlauea Volcano's Pu`u `Ō`ō eruption, each testing different constraints on magma reservoir input and output fluxes. Unknown parameters (e.g., magma reservoir influx rate, initial reservoir volume) are optimized for each model using a non-linear least squares technique to fit model trends to geochemical time-series data. The modeled mixing trend closely reproduces the observed compositional trend. The two models that match measured lava effusion rates have constant magma input and output fluxes and suggest a large pre-mixing magma reservoir (46±2 and 49±1 million m3), with little or no volume change over time. This volume is much larger than a previous estimate for the shallow, dike-shaped magma reservoir under the Pu`u `Ō`ō vent, which grew from ~3 to ~10-12 million m3. These volumetric differences are interpreted as indicating that mixing occurred first in a larger, deeper reservoir before the magma was injected into the overlying smaller reservoir.
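
    A minimal constant-volume, constant-flux version of such a box-mixing relation is sketched below; the paper's expression is more general, so this is only meant to show how a time series of mixing proportions constrains reservoir volume.

    ```latex
    % A well-mixed reservoir of volume V_r is recharged with the new magma
    % component at constant flux Q and erupts the hybrid at the same rate;
    % f(t) is the erupted fraction of the new component.
    \begin{equation}
      \frac{df}{dt} = \frac{Q}{V_r}\,(1 - f)
      \quad\Longrightarrow\quad
      f(t) = 1 - \exp\!\left(-\frac{Q\,t}{V_r}\right),
    \end{equation}
    % so fitting f(t) to the geochemical time series constrains Q/V_r and hence,
    % for a measured supply rate Q, the pre-mixing reservoir volume V_r.
    ```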

  19. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient

  20. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient

  1. Rationalizing the light-induced phase separation of mixed halide organic-inorganic perovskites.

    PubMed

    Draguta, Sergiu; Sharia, Onise; Yoon, Seog Joon; Brennan, Michael C; Morozov, Yurii V; Manser, Joseph S; Kamat, Prashant V; Schneider, William F; Kuno, Masaru

    2017-08-04

    Mixed halide hybrid perovskites, CH3NH3Pb(I1-xBrx)3, represent good candidates for low-cost, high-efficiency photovoltaic and light-emitting devices. Their band gaps can be tuned from 1.6 to 2.3 eV by changing the halide anion identity. Unfortunately, mixed halide perovskites undergo phase separation under illumination. This leads to iodide- and bromide-rich domains along with corresponding changes to the material's optical/electrical response. Here, using combined spectroscopic measurements and theoretical modeling, we quantitatively rationalize all microscopic processes that occur during phase separation. Our model suggests that the driving force behind phase separation is the bandgap reduction of iodide-rich phases. It additionally explains observed non-linear intensity dependencies, as well as self-limited growth of iodide-rich domains. Most importantly, our model reveals that mixed halide perovskites can be stabilized against phase separation by deliberately engineering carrier diffusion lengths and injected carrier densities.

  2. GUT and flavor models for neutrino masses and mixing

    NASA Astrophysics Data System (ADS)

    Meloni, Davide

    2017-10-01

    In recent years, experiments have established the existence of neutrino oscillations, and most of the oscillation parameters have been measured with good accuracy. However, in spite of many interesting ideas, little real light has been shed on the problem of flavor in the lepton sector. In this review, we discuss the state of the art of models for neutrino masses and mixings formulated in the context of flavor symmetries, with particular emphasis on the role played by grand unified gauge groups.

  3. Groundwater contamination from an inactive uranium mill tailings pile. 2. Application of a dynamic mixing model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narashimhan, T.N.; White, A.F.; Tokunaga, T.

    1986-12-01

    At Riverton, Wyoming, low pH process waters from an abandoned uranium mill tailings pile have been infiltrating into and contaminating the shallow water table aquifer. The contamination process has been governed by transient infiltration rates, saturated-unsaturated flow, as well as transient chemical reactions between the many chemical species present in the mixing waters and the sediments. In the first part of this two-part series the authors presented field data as well as an interpretation based on a static mixing model. As an upper bound, the authors estimated that 1.7% of the tailings water had mixed with the native groundwater. In the present work they present the results of numerical investigation of the dynamic mixing process. The model, DYNAMIX (DYNamic MIXing), couples a chemical speciation algorithm, PHREEQE, with a modified form of the transport algorithm, TRUMP, specifically designed to handle the simultaneous migration of several chemical constituents. The overall problem of simulating the evolution and migration of the contaminant plume was divided into three sub problems that were solved in sequential stages. These were the infiltration problem, the reactive mixing problem, and the plume-migration problem. The results of the application agree reasonably with the detailed field data. The methodology developed in the present study demonstrates the feasibility of analyzing the evolution of natural hydrogeochemical systems through a coupled analysis of transient fluid flow as well as chemical reactions. It seems worthwhile to devote further effort toward improving the physicochemical capabilities of the model as well as to enhance its computational efficiency.

  4. Groundwater contamination from an inactive uranium mill tailings pile: 2. Application of a dynamic mixing model

    NASA Astrophysics Data System (ADS)

    Narasimhan, T. N.; White, A. F.; Tokunaga, T.

    1986-12-01

    At Riverton, Wyoming, low pH process waters from an abandoned uranium mill tailings pile have been infiltrating into and contaminating the shallow water table aquifer. The contamination process has been governed by transient infiltration rates, saturated-unsaturated flow, as well as transient chemical reactions between the many chemical species present in the mixing waters and the sediments. In the first part of this two-part series [White et al., 1984] we presented field data as well as an interpretation based on a static mixing model. As an upper bound, we estimated that 1.7% of the tailings water had mixed with the native groundwater. In the present work we present the results of numerical investigation of the dynamic mixing process. The model, DYNAMIX (DYNAmic MIXing), couples a chemical speciation algorithm, PHREEQE, with a modified form of the transport algorithm, TRUMP, specifically designed to handle the simultaneous migration of several chemical constituents. The overall problem of simulating the evolution and migration of the contaminant plume was divided into three sub problems that were solved in sequential stages. These were the infiltration problem, the reactive mixing problem, and the plume-migration problem. The results of the application agree reasonably with the detailed field data. The methodology developed in the present study demonstrates the feasibility of analyzing the evolution of natural hydrogeochemical systems through a coupled analysis of transient fluid flow as well as chemical reactions. It seems worthwhile to devote further effort toward improving the physicochemical capabilities of the model as well as to enhance its computational efficiency.

  5. Neutrino mixing in a left-right model

    NASA Astrophysics Data System (ADS)

    Martins Simões, J. A.; Ponciano, J. A.

    We study the mixing among different generations of massive neutrino fields in a model that can accommodate a consistent pattern for neutral fermion masses as well as neutrino oscillations. The left and right sectors can be connected by a new neutral current. PACS: 12.60.-i, 14.60.St, 14.60.Pq

  6. Mechanisms and modeling of the effects of additives on the nitrogen oxides emission

    NASA Technical Reports Server (NTRS)

    Kundu, Krishna P.; Nguyen, Hung Lee; Kang, M. Paul

    1991-01-01

    A theoretical study on the emission of the oxides of nitrogen in the combustion of hydrocarbons is presented. The current understanding of the mechanisms and the rate parameters for gas-phase reactions was used to calculate the NO(x) emission. The possible effects of different chemical species on thermal NO(x) over long time scales were discussed. The mixing of these additives at various stages of combustion was considered and NO(x) concentrations were calculated; the effects of temperature were also considered. Chemicals such as hydrocarbons, H2, CH3OH, NH3, and other nitrogen species were chosen as additives in this discussion. Results of these calculations can be used to evaluate the effects of these additives on the NO(x) emission in industrial combustion systems.

  7. A brief introduction to mixed effects modelling and multi-model inference in ecology

    PubMed Central

    Donaldson, Lynda; Correa-Cano, Maria Eugenia; Goodwin, Cecily E.D.

    2018-01-01

    The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions. PMID:29844961

  8. A brief introduction to mixed effects modelling and multi-model inference in ecology.

    PubMed

    Harrison, Xavier A; Donaldson, Lynda; Correa-Cano, Maria Eugenia; Evans, Julian; Fisher, David N; Goodwin, Cecily E D; Robinson, Beth S; Hodgson, David J; Inger, Richard

    2018-01-01

    The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions.
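
    As a minimal illustration of the kind of model discussed in this overview, the sketch below fits a random-intercept LMM in Python with statsmodels; the data frame and column names (response, covariate, site) are hypothetical.

    ```python
    # Random-intercept linear mixed model on simulated data; all names and
    # values are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n_sites, n_per_site = 12, 20
    site = np.repeat(np.arange(n_sites), n_per_site)
    x = rng.normal(size=site.size)
    site_effect = rng.normal(0, 0.8, n_sites)[site]          # random intercepts
    y = 1.0 + 0.5 * x + site_effect + rng.normal(0, 1.0, site.size)

    df = pd.DataFrame({"response": y, "covariate": x, "site": site})

    # Fixed effect of the covariate, random intercept for each site
    fit = smf.mixedlm("response ~ covariate", data=df, groups=df["site"]).fit()
    print(fit.summary())
    ```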

  9. Assessment of RANS and LES Turbulence Modeling for Buoyancy-Aided/Opposed Forced and Mixed Convection

    NASA Astrophysics Data System (ADS)

    Clifford, Corey; Kimber, Mark

    2017-11-01

    Over the last 30 years, an industry-wide shift within the nuclear community has led to increased utilization of computational fluid dynamics (CFD) to supplement nuclear reactor safety analyses. One such area that is of particular interest to the nuclear community, specifically to those performing loss-of-flow accident (LOFA) analyses for next-generation very-high temperature reactors (VHTR), is the capacity of current computational models to predict heat transfer across a wide range of buoyancy conditions. In the present investigation, a critical evaluation of Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) turbulence modeling techniques is conducted based on CFD validation data collected from the Rotatable Buoyancy Tunnel (RoBuT) at Utah State University. Four different experimental flow conditions are investigated: (1) buoyancy-aided forced convection; (2) buoyancy-opposed forced convection; (3) buoyancy-aided mixed convection; (4) buoyancy-opposed mixed convection. Overall, good agreement is found for both forced convection-dominated scenarios, but an overly-diffusive prediction of the normal Reynolds stress is observed for the RANS-based turbulence models. Low-Reynolds number RANS models perform adequately for mixed convection, while higher-order RANS approaches underestimate the influence of buoyancy on the production of turbulence.

  10. An electrical circuit model for additive-modified SnO2 ceramics

    NASA Astrophysics Data System (ADS)

    Karami Horastani, Zahra; Alaei, Reza; Karami, Amirhossein

    2018-05-01

    In this paper, an electrical circuit model for additive-modified metal oxide ceramics based on their physical structures and electrical resistivities is presented. The model predicts the resistance of the sample at different additive concentrations and different temperatures. To evaluate the model, two types of composite ceramics, SWCNT/SnO2 with SWCNT concentrations of 0.3, 0.6, 1.2, 2.4 and 3.8%wt, and Ag/SnO2 with Ag concentrations of 0.3, 0.5, 0.8 and 1.5%wt, were prepared and their electrical resistances versus temperature were experimentally measured. It is shown that the experimental data are in good agreement with the results obtained from the model. The proposed model can be used in the design process of ceramic-based gas sensors, and it also clarifies the role of the additive in the gas-sensing process of additive-modified metal oxide gas sensors. Furthermore, the model can be used in the system-level modeling of designs in which these sensors are also present.
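
    Purely as a hypothetical illustration of what a circuit-style model of this kind can look like (this is not the authors' model), the sketch below treats the ceramic as an intrinsic grain-boundary path in parallel with an additive-assisted path, each with an Arrhenius-type, thermally activated resistance; all functional forms and parameter values are invented.

    ```python
    # Hypothetical two-path resistor model: resistance versus additive fraction
    # and temperature. Parameter values are invented for illustration only.
    import numpy as np

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def arrhenius(r0, e_a, temp_k):
        """Thermally activated resistance R = R0 * exp(Ea / (k_B T))."""
        return r0 * np.exp(e_a / (K_B * temp_k))

    def composite_resistance(additive_frac, temp_k,
                             r0_oxide=1e3, ea_oxide=0.55,
                             r0_additive=1e2, ea_additive=0.20):
        r_oxide = arrhenius(r0_oxide, ea_oxide, temp_k)
        r_added = arrhenius(r0_additive, ea_additive, temp_k)
        # Parallel combination, with the additive path weighted by its fraction
        conductance = (1.0 - additive_frac) / r_oxide + additive_frac / r_added
        return 1.0 / conductance

    for frac in (0.003, 0.012, 0.038):   # additive mass fractions
        print(frac, composite_resistance(frac, temp_k=500.0))
    ```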

  11. An Investigation of a Hybrid Mixing Model for PDF Simulations of Turbulent Premixed Flames

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Li, Shan; Wang, Hu; Ren, Zhuyin

    2015-11-01

    Predictive simulations of turbulent premixed flames over a wide range of Damköhler numbers in the framework of Probability Density Function (PDF) method still remain challenging due to the deficiency in current micro-mixing models. In this work, a hybrid micro-mixing model, valid in both the flamelet regime and broken reaction zone regime, is proposed. A priori testing of this model is first performed by examining the conditional scalar dissipation rate and conditional scalar diffusion in a 3-D direct numerical simulation dataset of a temporally evolving turbulent slot jet flame of lean premixed H2-air in the thin reaction zone regime. Then, this new model is applied to PDF simulations of the Piloted Premixed Jet Burner (PPJB) flames, which are a set of highly shear turbulent premixed flames and feature strong turbulence-chemistry interaction at high Reynolds and Karlovitz numbers. Supported by NSFC 51476087 and NSFC 91441202.

  12. Biodegradation of diesel by mixed bacteria immobilized onto a hybrid support of peat moss and additives: a batch experiment.

    PubMed

    Lee, Young-Chul; Shin, Hyun-Jae; Ahn, Yeonghee; Shin, Min-Chul; Lee, Myungjin; Yang, Ji-Won

    2010-11-15

    We report microbial cell immobilization onto a hybrid support of peat moss for diesel biodegradation. Three strains isolated from a site contaminated with diesel oil were used in this study: Acinetobacter sp., Gordonia sp., and Rhodococcus sp. To increase not only diesel adsorption but also diesel biodegradation, additives such as zeolite, bentonite, chitosan, and alginate were tested. In this study, a peat moss, bentonite, and alginate (2/2.9/0.1 g, w/w/w) hybrid support (PBA) was the best support matrix, considering both diesel physical adsorption capacity and mixed microbial immobilization. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. Potentials of Mean Force With Ab Initio Mixed Hamiltonian Models of Solvation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupuis, Michel; Schenter, Gregory K.; Garrett, Bruce C.

    2003-08-01

    We give an account of a computationally tractable and efficient procedure for the calculation of potentials of mean force using mixed Hamiltonian models of electronic structure where quantum subsystems are described with computationally intensive ab initio wavefunctions. The mixed Hamiltonian is mapped into an all-classical Hamiltonian that is amenable to a thermodynamic perturbation treatment for the calculation of free energies. A small number of statistically uncorrelated (solute-solvent) configurations are selected from the Monte Carlo random walk generated with the all-classical Hamiltonian approximation. Those are used in the averaging of the free energy using the mixed quantum/classical Hamiltonian. The methodology is illustrated for the micro-solvated SN2 substitution reaction of methyl chloride by hydroxide. We also compare the potential of mean force calculated with the above protocol with an approximate formalism, one in which the potential of mean force calculated with the all-classical Hamiltonian is simply added to the energy of the isolated (non-solvated) solute along the reaction path. Interestingly, the latter approach is found to be in semi-quantitative agreement with the full mixed Hamiltonian approximation.
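
    The working equation is not quoted in the record; the thermodynamic perturbation step described above is conventionally the Zwanzig free-energy perturbation identity, written here in generic notation (not taken from the paper), with the average running over the statistically uncorrelated configurations sampled with the all-classical Hamiltonian:

      \Delta A_{\mathrm{cl}\to\mathrm{mix}} = -k_B T \, \ln \left\langle \exp\!\left[ -\frac{E_{\mathrm{mix}}(\mathbf{r}) - E_{\mathrm{cl}}(\mathbf{r})}{k_B T} \right] \right\rangle_{\mathrm{cl}}

    Here E_cl is the all-classical energy that generates the Monte Carlo walk and E_mix is the mixed quantum/classical energy re-evaluated on the selected configurations.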

  14. Performance of nonlinear mixed effects models in the presence of informative dropout.

    PubMed

    Björnsson, Marcus A; Friberg, Lena E; Simonsson, Ulrika S H

    2015-01-01

    Informative dropout can lead to bias in statistical analyses if not handled appropriately. The objective of this simulation study was to investigate the performance of nonlinear mixed effects models with regard to bias and precision, with and without handling informative dropout. An efficacy variable and dropout depending on that efficacy variable were simulated and model parameters were reestimated, with or without including a dropout model. The Laplace and FOCE-I estimation methods in NONMEM 7, and the stochastic simulations and estimations (SSE) functionality in PsN, were used in the analysis. For the base scenario, bias was low, less than 5% for all fixed effects parameters, when a dropout model was used in the estimations. When a dropout model was not included, bias increased up to 8% for the Laplace method and up to 21% if the FOCE-I estimation method was applied. The bias increased with decreasing number of observations per subject, increasing placebo effect and increasing dropout rate, but was relatively unaffected by the number of subjects in the study. This study illustrates that ignoring informative dropout can lead to biased parameters in nonlinear mixed effects modeling, but even in cases with few observations or high dropout rate, the bias is relatively low and only translates into small effects on predictions of the underlying effect variable. A dropout model is, however, crucial in the presence of informative dropout in order to make realistic simulations of trial outcomes.
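
    As a minimal sketch of the kind of simulation the study describes (the Emax-type efficacy model, the logistic dropout link and all numbers below are assumptions, not the published NONMEM/PsN setup), one can simulate an efficacy variable with between-subject variability, let the dropout probability depend on the current efficacy value, and compare a naive summary computed with and without the dropout-censored observations:

      import numpy as np

      rng = np.random.default_rng(1)
      n_subj, n_obs = 200, 6
      t = np.arange(n_obs, dtype=float)

      emax_pop, et50 = 10.0, 2.0                                   # hypothetical population parameters
      emax_i = emax_pop * np.exp(rng.normal(0.0, 0.3, n_subj))     # between-subject variability

      eff = emax_i[:, None] * t / (et50 + t) + rng.normal(0.0, 1.0, (n_subj, n_obs))

      # Informative dropout: lower efficacy -> higher per-visit dropout probability (assumed link).
      p_drop = 1.0 / (1.0 + np.exp(1.5 + 0.3 * eff))
      dropped = np.cumsum(rng.random((n_subj, n_obs)) < p_drop, axis=1) > 0
      observed = np.where(dropped, np.nan, eff)

      # Naive comparison at the last visit: all simulated subjects vs. completers only.
      print("true mean response    :", round(float(eff[:, -1].mean()), 2))
      print("mean ignoring dropout :", round(float(np.nanmean(observed[:, -1])), 2))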

  15. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skill. The multiplicative error model is found to be a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of predictive capability. Therefore, the multiplicative error model is the better choice.
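
    In generic notation (not taken from the letter), the two error models are commonly written as y = x + ε (additive) and y = x·exp(ε), i.e. log y = log x + ε (multiplicative). A small illustration with made-up numbers of why the multiplicative form copes better with the large dynamic range of daily precipitation, since its error spread scales with rain rate:

      import numpy as np

      rng = np.random.default_rng(0)
      truth = rng.gamma(shape=0.5, scale=10.0, size=5000)   # skewed "daily precipitation" in mm

      additive = truth + rng.normal(0.0, 2.0, truth.size)                # y = x + eps
      multiplicative = truth * np.exp(rng.normal(0.0, 0.4, truth.size))  # log y = log x + eps

      light, heavy = truth < 5.0, truth > 20.0
      for name, est in (("additive", additive), ("multiplicative", multiplicative)):
          print(name,
                "| residual sd, light rain:", round(float(np.std(est[light] - truth[light])), 2),
                "| residual sd, heavy rain:", round(float(np.std(est[heavy] - truth[heavy])), 2))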

  16. A mixed-effects model approach for the statistical analysis of vocal fold viscoelastic shear properties.

    PubMed

    Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei

    2017-11-01

    A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data from vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation has often been overlooked in studies over the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e., rabbit and porcine) were measured with a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of humans over a frequency range of 1-250 Hz using mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of humans. Mixed-effects models were shown to analyze rheological data generated from repeated measurements more accurately. Copyright © 2017 Elsevier Ltd. All rights reserved.
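
    The rheometric analysis itself is not reproduced here; the sketch below only illustrates the general idea of treating repeated measurements across the frequency sweep as correlated within each tissue sample by giving every sample a random intercept. The simulated data, the log-log linear form and all variable names are assumptions, and statsmodels is used simply as an off-the-shelf mixed-model fitter.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      freqs = np.array([1.0, 10.0, 50.0, 100.0, 250.0])    # test frequencies in Hz
      rows = []
      for species, base in (("human", 1.8), ("rabbit", 2.4), ("porcine", 2.6)):
          for sample in range(8):
              sample_offset = rng.normal(0.0, 0.15)        # sample-level deviation
              for f in freqs:
                  rows.append({
                      "species": species,
                      "sample": f"{species}_{sample}",
                      "log_freq": np.log10(f),
                      "log_modulus": base + 0.4 * np.log10(f) + sample_offset
                                     + rng.normal(0.0, 0.05),
                  })
      df = pd.DataFrame(rows)

      # A random intercept per tissue sample models the correlation among the repeated
      # measurements taken on the same sample across the frequency range.
      fit = smf.mixedlm("log_modulus ~ log_freq + C(species, Treatment('human'))",
                        data=df, groups=df["sample"]).fit()
      print(fit.summary())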

  17. Breast Radiotherapy with Mixed Energy Photons; a Model for Optimal Beam Weighting.

    PubMed

    Birgani, Mohammadjavad Tahmasebi; Fatahiasl, Jafar; Hosseini, Seyed Mohammad; Bagheri, Ali; Behrooz, Mohammad Ali; Zabiehzadeh, Mansour; Meskani, Reza; Gomari, Maryam Talaei

    2015-01-01

    Utilization of high-energy photons (>10 MV) with an optimal weight using a mixed-energy technique is a practical way to generate a homogeneous dose distribution while maintaining adequate target coverage in intact breast radiotherapy. This study presents a model for estimating this optimal weight for day-to-day clinical usage. For this purpose, treatment planning computed tomography scans of thirty-three consecutive early-stage breast cancer patients following breast conservation surgery were analyzed. After delineation of the breast clinical target volume (CTV) and placement of opposed, wedged, isocentric tangential portals, dosimetric calculations were conducted and dose volume histograms (DVHs) were generated, first with pure 6 MV photons; these calculations were then repeated ten times with incorporation of 18 MV photons (ten percent increase in weight per step) in each individual patient. For each calculation, two indices were measured from the DVH data: the maximum dose in the breast CTV (Dmax) and the volume of the CTV covered by the 95% isodose line (VCTV,95%IDL); the normalized values were then plotted in a graph. The optimal weight of 18 MV photons was defined as the intersection point of the Dmax and VCTV,95%IDL graphs. To create a model predicting this optimal weight, multiple linear regression analysis was used based on several breast and tangential field parameters. The best-fitting model for prediction of the optimal 18 MV photon weight in breast radiotherapy using the mixed-energy technique incorporated chest wall separation plus central lung distance (adjusted R2=0.776). In conclusion, this study presents a model for the estimation of optimal beam weighting in breast radiotherapy using the mixed-photon-energy technique for routine day-to-day clinical usage.
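
    The regression coefficients and patient data are not reproduced in this record. The sketch below illustrates only the intersection step, i.e. locating the 18 MV weight at which two normalized plan-quality curves cross; the curve shapes and values are invented placeholders, not dosimetric data.

      import numpy as np

      # Fraction of the prescribed dose delivered with 18 MV photons (0 = pure 6 MV).
      weights = np.linspace(0.0, 1.0, 11)

      # Placeholder curves, invented purely to demonstrate the crossing calculation:
      # one normalized index falls and the other rises as the 18 MV weight increases.
      index_a = 1.00 - 0.12 * weights        # stands in for the normalized Dmax curve
      index_b = 0.92 + 0.05 * weights        # stands in for the normalized V(CTV, 95% IDL) curve

      # Locate the sign change of the difference and interpolate the crossing weight.
      diff = index_a - index_b
      k = int(np.where(np.diff(np.sign(diff)) != 0)[0][0])
      w_opt = np.interp(0.0, [diff[k + 1], diff[k]], [weights[k + 1], weights[k]])
      print(f"optimal 18 MV weight (illustrative): {w_opt:.2f}")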

  18. Personalized prediction of chronic wound healing: an exponential mixed effects model using stereophotogrammetric measurement.

    PubMed

    Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M

    2014-05-01

    Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model that provides personalized predictive healing information for chronic wounds. 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers in 10 individuals with spinal cord injury. Statistical comparison of several models indicated that the best fit to the clinical data was a personalized mixed-effects exponential model (pMEE), with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are valid only when wound size constantly decreases, which is often not achieved for clinical wounds; our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: the r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of the initial size; t(δ) is defined as the time when the rate of wound healing/size change reduces to a predetermined threshold δ < 0. Healing rate differs from patient to patient. Model development and validation indicate that accurate monitoring of wound geometry can adaptively predict healing progression and that larger wounds heal more rapidly. The accuracy of the prediction curve in the current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team determine wound management care pathways. Published by Elsevier Ltd.
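
    Leaving the random effects aside, the fixed-effects skeleton of such a model is an exponential decay of wound size from the initial area, and the r-fold reduction time follows directly from the fitted rate constant. The sketch below fits that curve to simulated measurements with scipy; everything beyond "exponential in time, with initial size as a parameter" is an assumption rather than the published pMEE.

      import numpy as np
      from scipy.optimize import curve_fit

      def wound_size(t, k, y0):
          # Exponential healing curve: size(t) = y0 * exp(-k * t).
          return y0 * np.exp(-k * t)

      rng = np.random.default_rng(7)
      t_obs = np.array([0.0, 7.0, 14.0, 28.0, 42.0, 56.0])          # days since baseline
      y_obs = wound_size(t_obs, k=0.035, y0=12.0) \
              * (1.0 + rng.normal(0.0, 0.05, t_obs.size))           # noisy areas in cm^2

      (k_hat, y0_hat), _ = curve_fit(wound_size, t_obs, y_obs, p0=(0.01, y_obs[0]))

      r = 2.0
      t_rfold = np.log(r) / k_hat    # time for the wound to shrink to 1/r of its initial size
      print(f"fitted rate k = {k_hat:.4f} per day, 2-fold reduction time = {t_rfold:.1f} days")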

  19. Study of a mixed dispersal population dynamics model

    DOE PAGES

    Chugunova, Marina; Jadamba, Baasansuren; Kao, Chiu -Yen; ...

    2016-08-27

    In this study, we consider a mixed dispersal model with periodic and Dirichlet boundary conditions and its corresponding linear eigenvalue problem. This model describes the time evolution of a population which disperses both locally and non-locally. We investigate how long-time dynamics depend on the parameter values. Furthermore, we study the minimization of the principal eigenvalue under the constraints that the resource function is bounded from above and below, and with a fixed total integral. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for the species to die out more slowly or survive more easily. Our numerical simulations indicate that the optimal favorable region tends to be a simply-connected domain. Numerous results are shown to demonstrate various scenarios of optimal favorable regions for periodic and Dirichlet boundary conditions.

  20. Effect of fibre additions to flatbread flour mixes on glucose kinetics: a randomised controlled trial.

    PubMed

    Boers, Hanny M; van Dijk, Theo H; Hiemstra, Harry; Hoogenraad, Anne-Roos; Mela, David J; Peters, Harry P F; Vonk, Roel J; Priebe, Marion G

    2017-11-01

    We previously found that guar gum (GG) and chickpea flour (CPF) added to flatbread wheat flour lowered postprandial blood glucose (PPG) and insulin responses dose dependently. However, rates of glucose influx cannot be determined from PPG, which integrates rates of influx, tissue disposal and hepatic glucose production. The objective was to quantify rates of glucose influx and related fluxes as contributors to changes in PPG with GG and CPF additions to wheat-based flatbreads. In a randomised cross-over design, twelve healthy males consumed each of three different 13C-enriched meals: control flatbreads (C), or C incorporating 15 % CPF with either 2 % (GG2) or 4 % (GG4) GG. A dual isotope technique was used to determine the time to reach 50 % absorption of exogenous glucose (T 50 %abs, primary objective), rate of appearance of exogenous glucose (RaE), rate of appearance of total glucose (RaT), endogenous glucose production (EGP) and rate of disappearance of total glucose (RdT). Additional exploratory outcomes included PPG, insulin, glucose-dependent insulinotropic peptide and glucagon-like peptide 1, which were additionally measured over 4 h. Compared with C, GG2 and GG4 had no significant effect on T 50 %abs. However, GG4 significantly reduced 4-h AUC values for RaE, RaT, RdT and EGP, by 11, 14, 14 and 64 %, respectively, whereas GG2 showed minor effects. Effect sizes over 2 and 4 h were similar except for significantly greater reduction in EGP for GG4 at 2 h. In conclusion, a soluble fibre mix added to flatbreads only slightly reduced rates of glucose influx, but more substantially affected rates of postprandial disposal and hepatic glucose production.

  1. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    PubMed Central

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of surgical tools occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659
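
    The calibration pipeline itself is not given in this record. As a rough sketch of the iterative-closest-point step it mentions, aligning an RGB-D surface reconstruction to a CBCT-derived surface could be set up as below; the use of Open3D, the file names, the correspondence threshold and the availability of a coarse initial alignment are all assumptions.

      import numpy as np
      import open3d as o3d

      # Hypothetical inputs: points reconstructed from the RGB-D camera and points
      # sampled from the CBCT surface (file names are placeholders).
      rgbd_surface = o3d.io.read_point_cloud("rgbd_surface.ply")
      cbct_surface = o3d.io.read_point_cloud("cbct_surface.ply")

      threshold = 5.0        # maximum correspondence distance, assumed to be in mm
      init = np.eye(4)       # coarse initial alignment, assumed known

      result = o3d.pipelines.registration.registration_icp(
          rgbd_surface, cbct_surface, threshold, init,
          o3d.pipelines.registration.TransformationEstimationPointToPoint())

      print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
      print("camera-to-CBCT transform:")
      print(result.transformation)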

  2. Item Response Theory Models for Wording Effects in Mixed-Format Scales

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu

    2015-01-01

    Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…

  3. How Well Can Saliency Models Predict Fixation Selection in Scenes Beyond Central Bias? A New Approach to Model Evaluation Using Generalized Linear Mixed Models.

    PubMed

    Nuthmann, Antje; Einhäuser, Wolfgang; Schütz, Immo

    2017-01-01

    Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead ("central bias"). This problem is further exacerbated in the context of model comparisons, because some, but not all, models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox "GridFix" available.
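
    The GridFix toolbox and the eye-tracking corpus are not included in this record. As a rough Python stand-in for the GLMM structure described above (logistic link, a saliency predictor, a central-bias predictor, and crossed by-subject and by-scene random intercepts, with every column name assumed), one could write:

      import pandas as pd
      from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

      # Hypothetical grid-cell data: one row per (subject, scene, grid cell) with a 0/1
      # flag for whether the cell was fixated, its mean saliency, and its distance
      # from the image centre.
      df = pd.read_csv("gridcells.csv")

      vc_formulas = {
          "subject": "0 + C(subject)",   # by-subject random intercepts
          "scene": "0 + C(scene)",       # by-scene (item) random intercepts
      }

      model = BinomialBayesMixedGLM.from_formula(
          "fixated ~ saliency + center_distance", vc_formulas, df)
      fit = model.fit_vb()               # variational Bayes fit
      print(fit.summary())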

  4. 12 CFR 268.302 - Mixed case complaints.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 4 2014-01-01 2014-01-01 false Mixed case complaints. 268.302 Section 268.302... (CONTINUED) RULES REGARDING EQUAL OPPORTUNITY Related Processes § 268.302 Mixed case complaints. A mixed case... discrimination or it may contain additional allegations that the MSPB has jurisdiction to address. A mixed case...

  5. 12 CFR 268.302 - Mixed case complaints.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 4 2013-01-01 2013-01-01 false Mixed case complaints. 268.302 Section 268.302... (CONTINUED) RULES REGARDING EQUAL OPPORTUNITY Related Processes § 268.302 Mixed case complaints. A mixed case... discrimination or it may contain additional allegations that the MSPB has jurisdiction to address. A mixed case...

  6. 12 CFR 268.302 - Mixed case complaints.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 4 2012-01-01 2012-01-01 false Mixed case complaints. 268.302 Section 268.302... (CONTINUED) RULES REGARDING EQUAL OPPORTUNITY Related Processes § 268.302 Mixed case complaints. A mixed case... discrimination or it may contain additional allegations that the MSPB has jurisdiction to address. A mixed case...

  7. The Pediatric Home Care/Expenditure Classification Model (P/ECM): A Home Care Case-Mix Model for Children Facing Special Health Care Challenges.

    PubMed

    Phillips, Charles D

    2015-01-01

    Case-mix classification and payment systems help assure that persons with similar needs receive similar amounts of care resources, which is a major equity concern for consumers, providers, and programs. Although health service programs for adults regularly use case-mix payment systems, programs providing health services to children and youth rarely use such models. This research utilized Medicaid home care expenditures and assessment data on 2,578 children receiving home care in one large state in the USA. Using classification and regression tree analyses, a case-mix model for long-term pediatric home care was developed. The Pediatric Home Care/Expenditure Classification Model (P/ECM) grouped children and youth in the study sample into 24 groups, explaining 41% of the variance in annual home care expenditures. The P/ECM creates the possibility of a more equitable, and potentially more effective, allocation of home care resources among children and youth facing serious health care challenges.
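
    The assessment items used by the P/ECM are not listed in this record; the sketch below only illustrates the classification-and-regression-tree step described above, with hypothetical column names, a leaf-count cap near the 24 reported groups, and sklearn standing in for the original software.

      import pandas as pd
      from sklearn.tree import DecisionTreeRegressor, export_text

      # Hypothetical data: one row per child, assessment items as predictors and
      # annual home care expenditures as the outcome (column names are assumed).
      df = pd.read_csv("pediatric_home_care.csv")
      X = df.drop(columns=["annual_expenditure"])
      y = df["annual_expenditure"]

      # Grow a regression tree whose terminal nodes act as case-mix groups; the
      # number of leaves is capped near the 24 groups reported for the P/ECM.
      tree = DecisionTreeRegressor(max_leaf_nodes=24, min_samples_leaf=50, random_state=0)
      tree.fit(X, y)

      print("variance explained on training data:", round(tree.score(X, y), 2))
      print(export_text(tree, feature_names=list(X.columns)))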

  8. The Pediatric Home Care/Expenditure Classification Model (P/ECM): A Home Care Case-Mix Model for Children Facing Special Health Care Challenges

    PubMed Central

    Phillips, Charles D.

    2015-01-01

    Case-mix classification and payment systems help assure that persons with similar needs receive similar amounts of care resources, which is a major equity concern for consumers, providers, and programs. Although health service programs for adults regularly use case-mix payment systems, programs providing health services to children and youth rarely use such models. This research utilized Medicaid home care expenditures and assessment data on 2,578 children receiving home care in one large state in the USA. Using classification and regression tree analyses, a case-mix model for long-term pediatric home care was developed. The Pediatric Home Care/Expenditure Classification Model (P/ECM) grouped children and youth in the study sample into 24 groups, explaining 41% of the variance in annual home care expenditures. The P/ECM creates the possibility of a more equitable, and potentially more effective, allocation of home care resources among children and youth facing serious health care challenges. PMID:26740744

  9. Investigation of warm-mix asphalt for Iowa roadways.

    DOT National Transportation Integrated Search

    2013-09-01

    Phase II of this study further evaluated the performance of plant-produced warm-mix asphalt (WMA) mixes by conducting additional mixture performance tests at a broader range of temperatures, adding additional pavements to the study, comparing vir...

  10. ACADEMY OF SCIENCES AZERBAYDZHAN. INSTITUTE OF ADDITIVE CHEMISTRY. ADDITIVES AND LUBRICANTS, QUESTIONS OF SYNTHESIS, RESEARCH ON THE APPLICATION OF ADDITIVES AND LUBRICANTS, FUELS, AND POLYMER MATERIALS (SELECTED ARTICLES),

    DTIC Science & Technology

    an alkylphenol); Synthesis and investigation of the new antioxidative INKhP-40 additive; Synthesis and investigation of an N-butylurethane-based antioxidative additive; and Synthesis of mixed esters of dithiophosphoric acid.

  11. A mixed-effects regression model for longitudinal multivariate ordinal data.

    PubMed

    Liu, Li C; Hedeker, Donald

    2006-03-01

    A mixed-effects item response theory model that allows for three-level multivariate ordinal outcomes and accommodates multiple random subject effects is proposed for analysis of multivariate ordinal outcomes in longitudinal studies. This model allows for the estimation of different item factor loadings (item discrimination parameters) for the multiple outcomes. The covariates in the model do not have to follow the proportional odds assumption and can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is proposed utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher scoring solution, which provides standard errors for all model parameters, is used. An analysis of a longitudinal substance use data set, where four items of substance use behavior (cigarette use, alcohol use, marijuana use, and getting drunk or high) are repeatedly measured over time, is used to illustrate application of the proposed model.

  12. MIXING STUDY FOR JT-71/72 TANKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S.

    2013-11-26

    All modeling calculations for the mixing operations of miscible fluids contained in HB-Line tanks JT-71/72 were performed using a three-dimensional computational fluid dynamics (CFD) approach. The CFD modeling results were benchmarked against literature results and previous SRNL test results to validate the model. Final performance calculations were performed with the validated model to quantify the mixing time for the HB-Line tanks. The mixing study results for the JT-71/72 tanks show that, for the cases modeled, the mixing time required for blending of the tank contents is no more than 35 minutes, which is well below the 2.5 hours of recirculation pump operation. Therefore, the results demonstrate that 2.5 hours of mixing by one recirculation pump is adequate for the tank contents to become well mixed.

  13. Methods of testing parameterizations: Vertical ocean mixing

    NASA Technical Reports Server (NTRS)

    Tziperman, Eli

    1992-01-01

    The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for the vertical mixing in the ocean is of scales a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the

  14. A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.

    PubMed

    Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon

    2007-02-01

    Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13-by-13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey to formulate illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
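
    The 13-sector matrices are not reproduced in this record. The sketch below only illustrates the underlying arithmetic: a monetary direct-requirements matrix is augmented with a row of material coefficients, a final-demand change is propagated through the Leontief inverse, and the implied material flow is read off. The 3-sector numbers and lead coefficients are invented.

      import numpy as np

      # Invented 3-sector monetary direct-requirements matrix A (dollars of input
      # per dollar of each sector's output).
      A = np.array([
          [0.10, 0.04, 0.02],
          [0.05, 0.15, 0.10],
          [0.02, 0.08, 0.12],
      ])

      # Invented material coefficients: kg of lead used directly per dollar of output.
      lead_per_dollar = np.array([0.001, 0.020, 0.003])

      # Change in final demand (dollars) whose total requirements are wanted.
      delta_demand = np.array([0.0, 1_000_000.0, 0.0])

      # Leontief total requirements: x = (I - A)^-1 d
      total_output = np.linalg.solve(np.eye(3) - A, delta_demand)

      # Material flow implied across the whole supply chain.
      total_lead = lead_per_dollar @ total_output
      print("sector outputs ($):", np.round(total_output, 0))
      print("total lead mobilized (kg):", round(float(total_lead), 1))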

  15. Evaluating significance in linear mixed-effects models in R.

    PubMed

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
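
    The simulations in the paper were run with lme4 in R; as a reminder of what the likelihood-ratio-test option amounts to, the sketch below fits nested mixed models by maximum likelihood and refers twice the log-likelihood difference to a chi-squared distribution, with statsmodels standing in for lme4 and the data layout, formula and column names assumed.

      import pandas as pd
      import statsmodels.formula.api as smf
      from scipy.stats import chi2

      # Hypothetical long-format data: one row per trial with a response time, a
      # condition label, and a subject identifier (column names are assumed).
      df = pd.read_csv("experiment.csv")

      # Fit by ML (reml=False) so likelihoods of models with different fixed effects
      # are comparable; both models share by-subject random intercepts.
      full = smf.mixedlm("rt ~ condition", df, groups=df["subject"]).fit(reml=False)
      null = smf.mixedlm("rt ~ 1", df, groups=df["subject"]).fit(reml=False)

      lr_stat = 2.0 * (full.llf - null.llf)
      df_diff = full.fe_params.size - null.fe_params.size
      p_value = chi2.sf(lr_stat, df_diff)
      print(f"LR statistic = {lr_stat:.2f}, df = {df_diff}, p = {p_value:.4f}")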

  16. Effective temperatures of red giants in the APOKASC catalogue and the mixing length calibration in stellar models

    NASA Astrophysics Data System (ADS)

    Salaris, M.; Cassisi, S.; Schiavon, R. P.; Pietrinferni, A.

    2018-04-01

    Red giants in the updated APOGEE-Kepler catalogue, with estimates of mass, chemical composition, surface gravity and effective temperature, have recently challenged stellar models computed under the standard assumption of solar calibrated mixing length. In this work, we critically reanalyse this sample of red giants, adopting our own stellar model calculations. Contrary to previous results, we find that the disagreement between the Teff scale of red giants and models with solar calibrated mixing length disappears when considering our models and the APOGEE-Kepler stars with scaled solar metal distribution. However, a discrepancy shows up when α-enhanced stars are included in the sample. We have found that assuming mass, chemical composition and effective temperature scale of the APOGEE-Kepler catalogue, stellar models generally underpredict the change of temperature of red giants caused by α-element enhancements at fixed [Fe/H]. A second important conclusion is that the choice of the outer boundary conditions employed in model calculations is critical. Effective temperature differences (metallicity dependent) between models with solar calibrated mixing length and observations appear for some choices of the boundary conditions, but this is not a general result.

  17. Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buck, Edgar C.; Jerden, James L.; Ebert, William L.

    The primary purpose of this report is to describe the strategy for coupling three process-level models to produce an integrated Used Fuel Degradation Model (FDM). The FDM, which is based on fundamental chemical and physical principles, provides direct calculation of radionuclide source terms for use in repository performance assessments. The G-value for H2O2 production (Gcond) to be used in the Mixed Potential Model (MPM) (H2O2 is the only radiolytic product presently included, but others will be added as appropriate) needs to account for intermediate spur reactions. The effects of these intermediate reactions on [H2O2] are accounted for in the Radiolysis Model (RM). This report details methods for applying RM calculations that encompass the effects of these fast interactions on [H2O2] as the solution composition evolves during successive MPM iterations and then represent the steady-state [H2O2] in terms of an “effective instantaneous or conditional” generation value (Gcond). It is anticipated that the value of Gcond will change slowly as the reaction progresses through several iterations of the MPM as changes in the nature of the fuel surface occur. The Gcond values will be calculated with the RM either after several iterations or when concentrations of key reactants reach threshold values determined from previous sensitivity runs. Sensitivity runs with the RM indicate significant changes in G-value can occur over narrow composition ranges. The objective of the mixed potential model (MPM) is to calculate the used fuel degradation rates for a wide range of disposal environments to provide the source term radionuclide release rates for generic repository concepts. The fuel degradation rate is calculated for chemical and oxidative dissolution mechanisms using mixed potential theory to account for all relevant redox reactions at the fuel surface, including those involving oxidants produced by solution radiolysis and provided by the radiolysis model (RM). The RM

  18. Understanding and Improving Ocean Mixing Parameterizations for modeling Climate Change

    NASA Astrophysics Data System (ADS)

    Howard, A. M.; Fells, J.; Clarke, J.; Cheng, Y.; Canuto, V.; Dubovikov, M. S.

    2017-12-01

    Climate is vital. Earth is only habitable due to the atmosphere and oceans' distribution of energy. Our greenhouse gas emissions shift the overall balance between absorbed and emitted radiation, causing global warming. How much of these emissions are stored in the ocean versus entering the atmosphere to cause warming, and how the extra heat is distributed, depend on atmosphere and ocean dynamics, which we must understand to know the risks of both progressive climate change and climate variability, which affect us all in many ways including extreme weather, floods, droughts, sea-level rise and ecosystem disruption. Citizens must be informed to make decisions such as "business as usual" versus mitigating emissions to avert catastrophe. Simulations of climate change provide needed knowledge but in turn need reliable parameterizations of key physical processes, including ocean mixing, which greatly impacts transport and storage of heat and dissolved CO2. The turbulence group at NASA-GISS seeks to use physical theory to improve parameterizations of ocean mixing, including small-scale convective, shear-driven, double-diffusive, internal wave and tidally driven vertical mixing, as well as mixing by submesoscale eddies, and lateral mixing along isopycnals by mesoscale eddies. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. We write our own programs in MATLAB and FORTRAN to visualize and process output of ocean simulations, including producing statistics to help judge the impacts of different parameterizations on fidelity in reproducing realistic temperatures and salinities, diffusivities and turbulent power. The results can help upgrade the parameterizations. Students are introduced to complex system modeling and gain a deeper appreciation of climate science and programming skills, while furthering climate science. We are incorporating climate projects into the Medgar Evers College curriculum. The PI is both a member of the turbulence group at

  19. Developing approaches for linear mixed modeling in landscape genetics through landscape-directed dispersal simulations

    USGS Publications Warehouse

    Row, Jeffrey R.; Knick, Steven T.; Oyler-McCance, Sara J.; Lougheed, Stephen C.; Fedy, Bradley C.

    2017-01-01

    Dispersal can impact population dynamics and geographic variation, and thus genetic approaches that can establish which landscape factors influence population connectivity have ecological and evolutionary importance. Mixed models that account for the error structure of pairwise datasets are increasingly used to compare models relating genetic differentiation to pairwise measures of landscape resistance. A model selection framework based on information criteria or explained variance may help disentangle the ecological and landscape factors influencing genetic structure, yet there is currently no consensus on the best protocols. Here, we develop landscape-directed simulations and test a series of replicates that emulate independent empirical datasets of two species with different life history characteristics (greater sage-grouse; eastern foxsnake). We determined that, in our simulated scenarios, AIC and BIC were the best model selection indices and that marginal R2 values were biased toward more complex models. The model coefficients for landscape variables generally reflected the underlying dispersal model, with confidence intervals that did not overlap zero across the entire model set. When we controlled for geographic distance, variables not in the underlying dispersal models (i.e., non-true variables) typically had confidence intervals overlapping zero. Our study helps establish methods for using linear mixed models to identify the features underlying patterns of dispersal across a variety of landscapes.

  20. Experimental and mathematical model of the interactions in the mixed culture of links in the "producer-consumer" cycle

    NASA Astrophysics Data System (ADS)

    Pisman, T. I.; Galayda, Ya. V.

    The paper presents an experimental and mathematical model of interactions between invertebrates (the ciliates Paramecium caudatum and the rotifers Brachionus plicatilis) and algae (Chlorella vulgaris and Scenedesmus quadricauda) in the producer-consumer aquatic biotic cycle with spatially separated components. The model describes the dynamics of the mixed culture of ciliates and rotifers in the consumer component feeding on the mixed algal culture of the producer component. It has been found that metabolites of the alga Scenedesmus have an adverse effect on the reproduction of the ciliates P. caudatum. Taking this effect into account, the results of the mathematical model were in qualitative agreement with the experimental results. In the producer-consumer biotic cycle, it was shown that coexistence is impossible in the mixed algal culture of the producer component and in the mixed culture of invertebrates of the consumer component: the ciliates P. caudatum are driven out by the rotifers Brachionus plicatilis.